Fix move_blocks loop exit conditions

The move_blocks ioctl intends to only move extents whose bytes fall
inside i_size.  This is easy except for a final extent that straddles an
i_size that isn't aligned to 4K data blocks.

The code that checked whether an extent lay entirely past i_size, and
the code that limited the number of blocks to move by i_size, both
clumsily compared i_size offsets in bytes with extent positions counted
in 4KB blocks.  In just the right circumstances, probably with the help
of a byte length to move that is much larger than i_size, the length
calculation could come out to 0 blocks.  Once that happened the loop
would keep finding the same extent, calculating 0 blocks to move, and
spinning without making progress.
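
As a rough, simplified illustration of how that unit mismatch can
compute a zero-length move (hypothetical numbers and helper names, not
the actual scoutfs code path): if i_size rounded up to 4KB, a byte
value, happens to equal the block index of the first block to move, a
min3()-style clamp picks the byte value and the subtraction comes out
to 0.

#include <stdint.h>
#include <stdio.h>

#define BLK_SIZE 4096ULL	/* 4KB data blocks, like SCOUTFS_BLOCK_SM_SIZE */

/* round a byte count up to the next 4KB boundary */
static uint64_t round_up_4k(uint64_t bytes)
{
	return (bytes + BLK_SIZE - 1) & ~(BLK_SIZE - 1);
}

static uint64_t min3_u64(uint64_t a, uint64_t b, uint64_t c)
{
	uint64_t m = a < b ? a : b;

	return c < m ? c : m;
}

int main(void)
{
	uint64_t i_size = 2 * BLK_SIZE;	/* 8192 byte file: 2 data blocks */
	uint64_t from_start = 8192;	/* block index of first block to move */
	uint64_t count = 64;		/* block count from a large byte length */
	uint64_t ext_end = 9000;	/* the extent runs past from_start */

	/* a byte offset (rounded-up i_size) compared against block indices */
	uint64_t len = min3_u64(from_start + count, round_up_4k(i_size),
				ext_end) - from_start;

	printf("blocks to move: %llu\n", (unsigned long long)len); /* prints 0 */
	return 0;
}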

We fix this by clamping the count of extent blocks to move, in terms of
byte offsets, before the loop starts.  That gets rid of the extra size
checks and byte offset use inside the loop.  We also add a sanity check
so that we can't get stuck if, say, corruption produced an otherwise
impossible zero-length extent.
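
A minimal userspace sketch of the clamping the fix performs before the
loop, assuming 4KB blocks and hypothetical names (the kernel code uses
i_size_read() and the SCOUTFS_BLOCK_SM_* constants):

#include <stdint.h>
#include <stdio.h>

#define BLK_SHIFT 12				/* 4KB blocks */
#define BLK_MASK  ((1ULL << BLK_SHIFT) - 1)

/*
 * Clamp the requested byte region to i_size up front and convert it to
 * a count of 4KB blocks, rounding a final partial block up so it still
 * gets moved.  A return of 0 means there is nothing inside i_size to
 * move.
 */
static uint64_t clamp_move_count(uint64_t from_off, uint64_t byte_len,
				 uint64_t i_size)
{
	if (from_off >= i_size)
		return 0;

	if (from_off + byte_len > i_size)
		byte_len = i_size - from_off;

	return (byte_len + BLK_MASK) >> BLK_SHIFT;
}

int main(void)
{
	/* a 6000 byte file asked to move 1MB starting at byte 4096: 1 block */
	printf("%llu\n",
	       (unsigned long long)clamp_move_count(4096, 1 << 20, 6000));
	return 0;
}

With the count clamped this way the loop only ever compares block
indices against block indices, and the added WARN_ON_ONCE(len == 0)
check turns the otherwise-impossible zero-length case into an error
instead of a hang.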

Signed-off-by: Zach Brown <zab@versity.com>
Zach Brown
2023-01-06 15:00:26 -08:00
parent 78279ffb4a
commit a23e7478a0


@@ -1192,9 +1192,9 @@ static void truncate_inode_pages_extent(struct inode *inode, u64 start, u64 len)
  * explained above the move_blocks ioctl argument structure definition.
  *
  * The caller has processed the ioctl args and performed the most basic
- * inode checks, but we perform more detailed inode checks once we have
- * the inode lock and refreshed inodes.  Our job is to safely lock the
- * two files and move the extents.
+ * argument sanity and inode checks, but we perform more detailed inode
+ * checks once we have the inode lock and refreshed inodes.  Our job is
+ * to safely lock the two files and move the extents.
  */
 #define MOVE_DATA_EXTENTS_PER_HOLD 16
 int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
@@ -1254,6 +1254,15 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
 	count = (byte_len + SCOUTFS_BLOCK_SM_MASK) >> SCOUTFS_BLOCK_SM_SHIFT;
 	to_iblock = to_off >> SCOUTFS_BLOCK_SM_SHIFT;
 
+	/* only move extent blocks inside i_size, careful not to wrap */
+	from_size = i_size_read(from);
+	if (from_off >= from_size) {
+		ret = 0;
+		goto out;
+	}
+	if (from_off + byte_len > from_size)
+		count = ((from_size - from_off) + SCOUTFS_BLOCK_SM_MASK) >> SCOUTFS_BLOCK_SM_SHIFT;
+
 	if (S_ISDIR(from->i_mode) || S_ISDIR(to->i_mode)) {
 		ret = -EISDIR;
 		goto out;
@@ -1329,9 +1338,8 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
 			break;
 		}
 
-		/* only move extents within count and i_size */
-		if (ext.start >= from_iblock + count ||
-		    ext.start >= i_size_read(from)) {
+		/* done if next extent starts after moving region */
+		if (ext.start >= from_iblock + count) {
 			done = true;
 			ret = 0;
 			break;
@@ -1339,13 +1347,15 @@ int scoutfs_data_move_blocks(struct inode *from, u64 from_off,
 		from_start = max(ext.start, from_iblock);
 		map = ext.map + (from_start - ext.start);
-		len = min3(from_iblock + count,
-			   round_up((u64)i_size_read(from),
-				    SCOUTFS_BLOCK_SM_SIZE),
-			   ext.start + ext.len) - from_start;
+		len = min(from_iblock + count, ext.start + ext.len) - from_start;
 		to_start = to_iblock + (from_start - from_iblock);
 
+		/* we'd get stuck, shouldn't happen */
+		if (WARN_ON_ONCE(len == 0)) {
+			ret = -EIO;
+			goto out;
+		}
+
 		if (is_stage) {
 			ret = scoutfs_ext_next(sb, &data_ext_ops, &to_args,
 					       to_start, 1, &off_ext);