Commit Graph

282 Commits

Zach Brown
a2ef5ecb33 scoutfs: remove item_forget
It's pretty dangerous to forcefully remove items without writing
deletion items to lsm segments.  This was only used for magical
ephemeral items when we were having them store file data.

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-23 12:59:24 -07:00
Zach Brown
1f933016f0 scoutfs: remove ephemeral items
Ephemeral items were only used by the page cache which tracked page
contents in items whose values pointed to the pages.  Remove their
special case.

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-23 12:58:09 -07:00
Zach Brown
b7bbad1fba scoutfs: add precise transaction item reservations
We had a simple mechanism for ensuring that transactions didn't create
more items than would fit in a single written segment.  We calculated
the most dirty items that a holder could generate and assumed that all
holders dirtied that much.

This had two big problems.

The first was that it wasn't accounting for nested holds.
write_begin/end calls the generic inode dirtying path while holding a
transaction.  This ended up deadlocking as the dirty inode waited to be
able to write while the transaction it acquired back in write_begin
prevented writeout.

The second was that the worst case (full size xattr) item dirtying is
enormous and meaningfully restricts concurrent transaction holders.
With no currently dirty items you can have fewer than 16 full size xattr
writes.  This concurrency limit only gets worse as the transaction fills
up with dirty items.

This fixes those problems.  It adds precise accounting of the dirty
items that can be created while a transaction is held.  These
reservations are tracked in journal_info so that they can be used by
nested holds.  The precision allows much greater concurrency as
something like a create will try to reserve a few hundred bytes instead
of 64k.  Normal sized xattr operations won't try to reserve the largest
possible space.
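
As a rough illustration of the idea, here is a minimal userspace-style
sketch; the struct fields, hold_trans(), and the per-task pointer standing
in for journal_info are all hypothetical names, not the actual scoutfs
interfaces.

#include <stdbool.h>
#include <stddef.h>

/* hypothetical reservation tracked per task, like journal_info in the kernel */
struct item_reservation {
    unsigned int holders;     /* nesting depth for this task */
    unsigned int items;       /* worst case items this hold can dirty */
    unsigned int key_bytes;
    unsigned int val_bytes;
};

static __thread struct item_reservation *task_res;

/* reserve() stands in for waiting until the dirty segment has room */
static bool hold_trans(struct item_reservation *res,
                       bool (*reserve)(struct item_reservation *res))
{
    if (task_res) {
        /* a nested hold (e.g. write_begin) reuses the reservation */
        task_res->holders++;
        return true;
    }

    if (!reserve(res))
        return false;

    res->holders = 1;
    task_res = res;
    return true;
}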

We add some feedback from the item cache to the transaction to issue
warnings if a holder dirties more items than it reserved.

Now that we have precise item/key/value counts (segment space
consumption is a function of all three :/) we can't track transaction
holders with a single atomic.  We add a long-overdue trans_info and put a
proper lock and fields there and much more clearly track transaction
serialization amongst the holders and writer.

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-23 12:15:13 -07:00
Zach Brown
297b859577 scoutfs: deletion items maintain counts
When we turned existing items into deletion items we'd remove their
values.  But we didn't update the count of dirty values to reflect that
removal so the dirty value count would slowly grow without bound.
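
A minimal sketch of the fix's accounting, with hypothetical names (the
real item cache structures differ):

#include <stddef.h>

struct cached_item {
    unsigned int deletion:1;
    size_t val_len;
};

/* turning a dirty item into a deletion item drops its value, so the
 * cached dirty value byte count has to drop by the same amount */
static void make_deletion_item(struct cached_item *item,
                               size_t *dirty_val_bytes)
{
    *dirty_val_bytes -= item->val_len;
    item->val_len = 0;
    item->deletion = 1;
}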

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-23 12:12:30 -07:00
Zach Brown
5f11cdbfe5 scoutfs: add and index inode meta and data seqs
For each transaction we send a message to the server asking for a
unique sequence number to associate with the transaction.  When we
change metadata or data of an inode we store the current transaction seq
in the inode and we index it with index items like the other inode
fields.

The server remembers the sequences it gives out.  When we go to walk the
inode sequence indexes we ask the server for the largest stable seq and
limit results to that seq.  This ensures that we never return seqs that
are past dirty items so we never have inodes and seqs appear in the past.

Nodes use the sync timer to regularly cycle through seqs and ensure that
inode seq index walks don't get stuck on their otherwise idle seq.
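
A sketch of what such an index walk looks like; the key layout and the
names here are hypothetical stand-ins, not the on-disk format:

#include <stdint.h>

/* hypothetical index item key: walking keys in (seq, ino) order lists
 * inodes in the order of the transactions that changed them */
struct seq_index_key {
    uint8_t  type;   /* meta seq index or data seq index */
    uint64_t seq;    /* transaction seq stamped in the inode */
    uint64_t ino;
};

/* results are limited to the server's largest stable seq so walks never
 * see seqs that could still have dirty items */
static int seq_walk_done(const struct seq_index_key *key, uint64_t stable_seq)
{
    return key->seq > stable_seq;
}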

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-23 12:12:24 -07:00
Zach Brown
b291818448 scoutfs: add sync deadline timer
Make sure that data is regularly synced.  We switch to a delayed work
struct that is always queued with the sync deadline.  If we need an
immediate sync we mod it to now.
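
Roughly, the pattern looks like the following kernel-style sketch; the
names and the deadline value are made up, only the delayed work calls are
real APIs:

#include <linux/workqueue.h>
#include <linux/jiffies.h>

#define SYNC_DEADLINE_MS 10000    /* hypothetical deadline */

static void sync_worker(struct work_struct *work);
static DECLARE_DELAYED_WORK(sync_dwork, sync_worker);

static void sync_worker(struct work_struct *work)
{
    /* ... commit the dirty transaction ... */

    /* always re-arm so data is written within the deadline */
    queue_delayed_work(system_wq, &sync_dwork,
                       msecs_to_jiffies(SYNC_DEADLINE_MS));
}

/* an immediate sync just pulls the already queued work forward to now */
static void kick_sync(void)
{
    mod_delayed_work(system_wq, &sync_dwork, 0);
}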

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-19 11:19:56 -07:00
Zach Brown
373def02f0 scoutfs: remove trade_time message
This was mostly just a demonstration for how to add messages.  We're
about to add a message that we always send on mount so this becomes
completely redundant.

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-18 10:52:04 -07:00
Zach Brown
8ea414ac68 scoutfs: clear seg rb node after replacing
When inserting a newly allocated segment we might find an existing
cached stale segment.  We replace it in the cache so that its user can
keep using its stale contents while we work on the new segment.

Replacing doesn't clear the rb_node, though, so we trip over a warning
when we finally free the segment and it looks like it's still present in
the rb tree.

Clear the node after we replace it so that freeing sees a clear node and
doesn't issue a warning.
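
In rbtree terms the fix is just a matter of clearing the replaced node; a
sketch with hypothetical segment cache names:

#include <linux/rbtree.h>

/* replace the stale segment's node with the new segment's node, then
 * clear the stale node so a later free sees RB_EMPTY_NODE() and doesn't
 * warn that it's still linked into the tree */
static void replace_stale_seg(struct rb_root *root, struct rb_node *stale,
                              struct rb_node *new)
{
    rb_replace_node(stale, new, root);
    RB_CLEAR_NODE(stale);
}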

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-16 14:51:36 -07:00
Zach Brown
5307c56954 scoutfs: add a stat_more ioctl
We have inode fields that we want to return to userspace with very low
overhead.
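
From userspace that ends up looking something like this sketch; the struct
fields and the ioctl definition are hypothetical stand-ins, not the real
scoutfs ABI:

#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>

/* hypothetical layout: extra inode fields that plain stat() can't return */
struct stat_more {
    uint64_t data_version;
    uint64_t meta_seq;
    uint64_t data_seq;
};

#define EXAMPLE_IOC_STAT_MORE _IOR('x', 1, struct stat_more)

static int print_stat_more(int fd)
{
    struct stat_more stm;

    if (ioctl(fd, EXAMPLE_IOC_STAT_MORE, &stm) < 0)
        return -1;

    printf("data_version %llu\n", (unsigned long long)stm.data_version);
    return 0;
}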

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-16 14:28:10 -07:00
Zach Brown
b97587b8fa scoutfs: add indexing of inodes by fields
Add items for indexing inodes by their fields.  When we update the inode
item we also delete the old index items and create the new items.  We
rename and refactor the old inode-since ioctl to now walk the inode
index items.

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-16 10:48:12 -07:00
Zach Brown
4084d3d9dc scoutfs: add offline flag, releasing, and fiemap
Now that we have basic file extents we can add a flag to extents to
track offline extents.  We have to initialize and test the flags as we
work with extents.  Truncation can be told to leave removed extents
around with no block mapping and the offline bit set.  Only staging with
the correct data version can write to the offline regions.  Demand
staging isn't implemented yet.  Reads from offline extents are treated
like sparse regions.

Truncation is a straightforward iteration over the portions of existing
extents which overlap with the truncated blocks.

Writing to offline extents has to first remove the existing offline
extent before then adding the new allocated extents.  The 'changes'
mechanism relied on being able to search the current items to find the
changes that should be made before making any changes.  This doesn't
work for finding merge candidates for the newly allocated insertion
because the old offline extent change won't have been applied yet.  We
replace the change mechanism with straightforward item modification and
unwinding.

The generic block fiemap can't communicate offline extents and iterates
over blocks instead of extents.  We add our own fiemap that iterates
over extents and sets the 'UNKNOWN' flag on offline extents.
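
A sketch of the per-extent fill step, assuming a hypothetical in-memory
extent struct with an offline flag; fiemap_fill_next_extent() and the
FIEMAP_EXTENT_* flags are the real kernel interfaces:

#include <linux/types.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

struct example_extent {    /* hypothetical */
    u64 logical;
    u64 physical;
    u64 bytes;
    bool offline;
    bool last;
};

static int fill_one_extent(struct fiemap_extent_info *fieinfo,
                           struct example_extent *ext)
{
    u32 flags = 0;

    if (ext->offline)
        flags |= FIEMAP_EXTENT_UNKNOWN;    /* no usable block mapping */
    if (ext->last)
        flags |= FIEMAP_EXTENT_LAST;

    /* returns 1 when the caller's buffer is full, < 0 on error */
    return fiemap_fill_next_extent(fieinfo, ext->logical, ext->physical,
                                   ext->bytes, flags);
}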

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-16 10:48:12 -07:00
Zach Brown
e34f8db4a9 scoutfs: add release argument and result tracing
Signed-off-by: Zach Brown <zab@versity.com>
2017-05-16 10:48:12 -07:00
Zach Brown
a262a158ce scoutfs: fix single block release
The offset comparison in release that was meant to catch wrapping was
inverted and accidentally prevented releasing a single block.
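
The corrected check looks roughly like this userspace sketch with
hypothetical names: only ranges that actually wrap are rejected, so a
count of one block passes.

#include <stdint.h>
#include <errno.h>

static int check_release_range(uint64_t block, uint64_t count)
{
    /* reject empty ranges and ranges that wrap past the end of the
     * block space; the inverted form also rejected count == 1 */
    if (count == 0 || block + count - 1 < block)
        return -EINVAL;

    return 0;
}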

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-16 10:48:12 -07:00
Zach Brown
81866620a9 scoutfs: allow xattrs with 0 length values
xattrs can have 0 length values so fix the item iterator to emit a single
item in the case where the size is 0.
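
The fix amounts to rounding up to at least one item; a tiny sketch with a
hypothetical per-item value size:

#include <stddef.h>

#define MAX_VAL_PER_ITEM 255    /* hypothetical per-item value capacity */

/* a zero length value still needs one item so the xattr itself exists */
static size_t xattr_nr_items(size_t size)
{
    if (size == 0)
        return 1;

    return (size + MAX_VAL_PER_ITEM - 1) / MAX_VAL_PER_ITEM;
}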

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-16 10:48:12 -07:00
Zach Brown
c678923401 scoutfs: don't try to sync on mount errors
kill_sb tries to sync before calling kill_block_super.  It shouldn't do
this on mount errors that wouldn't have initialized the higher level
systems needed for syncing.

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-16 10:48:12 -07:00
Zach Brown
66dd35b9a5 scoutfs: fix ring next/prev
The ring node rb walker was returning an exact match for the search key
instead of the last node that was traversed.  This stopped callers from
then iterating from the traversed node to find the next or previous
node.
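
The walker now returns the last node it traversed when there's no exact
match, so callers can step with rb_next()/rb_prev(); a sketch with a
hypothetical node struct:

#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/rbtree.h>

struct ring_node {    /* hypothetical */
    struct rb_node node;
    u64 key;
};

static struct ring_node *ring_walk(struct rb_root *root, u64 key)
{
    struct rb_node *node = root->rb_node;
    struct ring_node *rnode = NULL;

    while (node) {
        /* remember the last node traversed, exact match or not */
        rnode = container_of(node, struct ring_node, node);

        if (key < rnode->key)
            node = node->rb_left;
        else if (key > rnode->key)
            node = node->rb_right;
        else
            break;
    }

    return rnode;
}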

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-16 10:48:11 -07:00
Zach Brown
723e0368f8 scoutfs: add a trace point for item insertion
Signed-off-by: Zach Brown <zab@versity.com>
2017-05-16 10:48:11 -07:00
Zach Brown
6afeb97802 scoutfs: reference file data with extent items
Our first attempt at storing file data put them in items.  This was easy
to implement but won't be acceptable in the long term.  The cost of the
power of LSM indexing is compaction overhead.  That's acceptable for
fine grained metadata but is totally unacceptable for bulk file data.

This switches to storing file data in separate block allocations which
are referenced by extent items.

The bulk of the change is the mechanics of working with extents.  We
have high level callers which add or remove logical extents and then
underlying mechanisms that insert, merge, or split the items that
the extents are stored in.

We have three types of extent items.  The primary type maps logical file
regions to physical block extents.  The next two store free extents
per-node so that clients don't create lock and LSM contention as they
try to allocate extents.

To fill those per-node free extents we add messages that communicate free
extents in the form of lists of segment allocations from the server.

We don't do any fancy multi-block allocation yet.  We only allocate
blocks in get_blocks as writes find unmapped blocks.  We do use some
per-task cursors to cache block allocation positions so that these
single block allocations are very likely to merge into larger extents as
tasks stream writes.
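
A userspace-flavored sketch of the cursor idea, with hypothetical names:
each task remembers the block after its last allocation and uses it as
the hint for the next one, so streaming writes get consecutive blocks
that merge into one extent.

#include <stdint.h>

struct task_cursor {
    uint64_t next_blkno;    /* hint for this task's next allocation */
};

/* alloc_near() stands in for the free extent search starting at a hint */
static uint64_t alloc_one_block(struct task_cursor *curs,
                                uint64_t (*alloc_near)(uint64_t hint))
{
    uint64_t blkno = alloc_near(curs->next_blkno);

    curs->next_blkno = blkno + 1;
    return blkno;
}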

This is just the first chunk of the extent work that's coming.  A later
patch adds offline flags and fixes up the change nonsense that seemed
like a good idea here.

The final moving part is that we initiate writeback on all newly
allocated extents before we commit the metadata that references the new
blocks.  We do this with our own dirty inode tracking because the high
level vfs methods are unusably slow in some upstream kernels (they walk
all inodes, not just dirty inodes.)

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-16 10:48:11 -07:00
Zach Brown
6719733ddc scoutfs: output full dirent name when tracing
The dirent name formatting code accidentally copied the calculation for
the length of the name from the xattrs, which are null terminated.  The
dirents are not; their length is just the value length minus the dirent
header.
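
The corrected calculation, sketched with a hypothetical dirent layout
(the real struct has different fields):

#include <stddef.h>
#include <stdint.h>

struct example_dirent {
    uint64_t ino;
    uint8_t  type;
    uint8_t  name[];    /* not null terminated, unlike xattr values */
};

/* the name length is the item value length minus the dirent header */
static size_t dirent_name_len(size_t val_len)
{
    return val_len - offsetof(struct example_dirent, name);
}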

Signed-off-by: Zach Brown <zab@versity.com>
2017-05-16 10:29:59 -07:00
Zach Brown
d5a2b0a6db Move towards compaction messages
The compaction code is still directly referencing the super block
and calling sync methods as though it was still standalone.  This is
mostly OK because only the server runs it.  But it isn't quite right
because the sync methods no longer make the rings persistent as they
write the item transaction.  The server is in control of that now.

Eventually we'll have compaction messages being sent between the mount
clients and the server.  Let's take a step in that direction by having
the compaction work call net methods to get its compaction parameters
and finish the compaction.  Eventually these would be marshalled through
request/process/reply code.

But in this first step we know that the compaction code is running on
the server so we can forgo all the messaging and just call in to and out
of compaction.  The net calls just hold the ring consistency locks in
the server and call into the manifest to do the work, committing the
changes when it's done.

This is more careful about segno allocation and freeing.  Compaction
doesn't call the allocator directly.  It gets allocations from the
messages and returns them if it doesn't use them.  We actually now
free segnos as they're removed from the manifest.

With the server controlling compaction we can tear all the fiddly level
count watching code out of the manifest.  Item transactions no longer care
about the level counts and the server always tries compaction after the
manifest is updated instead of having the manifest watch the level counts
and call compaction.

Now that the server owns the rings they should not be torn down as the
super is torn down; net does that now.  And we need to be more careful
to be sure that writes from dirtying and compaction are stable before
killing the super.

With all this in place moving to shared compaction involves adding the
messages and negotiating concurrent compactions in the manifest.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-24 14:02:18 -07:00
Zach Brown
8b82aa7f18 Consistently initialize inode fields
Inode info struct initialization was spread out over three places:
 - once for the memory of a slab object
 - when reading an existing inode from items
 - when initializing a newly allocated inode

Over time field initialization got out of sync with these rules.  This
makes it more clear which fields get initialized where.  In the inode
info struct we group fields by where they're initialized.  We order the
fields by size and location in the inode struct.

Then we make sure that all the initialization sites have everything
covered.  Doing everything in consistent struct order makes it easier
to audit that we haven't missed anything.

What led to this was realizing that we missed initializing the seqcount
when reading existing inodes.  It should have been initialized in the
slab object constructor.  The 'staging' boolean has the same problem.

Signed-off-by: Zach Brown <zab@versity.com>
Reviewed-by: Mark Fasheh <mfasheh@versity.com>
2017-04-18 14:17:55 -07:00
Nic Henke
5c54bdbf85 Change type for DATA_VERSION ioctl to __u64
For consistency and to keep upstream users (scout-utils, etc) from
needing to include different type headers, we'll change the type to
match the rest of the header.

Signed-off-by: Nic Henke <nic.henke@versity.com>
2017-04-18 14:07:23 -07:00
Zach Brown
37ba46213c Add support for more xattr namespaces
Add support for more of the known xattr namespaces.  This helps
generic/062 in xfstests pass.

Signed-off-by: Zach Brown <zab@versity.com>
Reviewed-by: Mark Fasheh <mfasheh@versity.com>
2017-04-18 14:06:29 -07:00
Zach Brown
2aa274b38b Add xattr iops for special files
xfstests generic/062 was failing because it was getting an unexpected
error code when trying to work with xattrs on special files.  Adding our
ops gives it the errnos it expects.

Signed-off-by: Zach Brown <zab@versity.com>
Reviewed-by: Mark Fasheh <mfasheh@versity.com>
2017-04-18 14:06:29 -07:00
Zach Brown
78d15a019c Print inode nr and err on inode update error
We're currently excessively freaking out if inode updates fail.  Let's
add a little more context to help us track down what goes wrong.

Signed-off-by: Zach Brown <zab@versity.com>
Reviewed-by: Mark Fasheh <mfasheh@versity.com>
2017-04-18 14:06:26 -07:00
Zach Brown
2591e54fdc Make it easier to build scoutfs.ko
We were duplicating the make args a few times so make a little ARGS
variable.

Default to the /lib/modules/$(uname -r) installed kernel source if
SK_KSRC isn't set.

And only try a sparse build that can fail if we can execute the sparse
command.

Signed-off-by: Zach Brown <zab@versity.com>
Reviewed-by: Mark Fasheh <mfasheh@versity.com>
2017-04-18 14:03:24 -07:00
Nic Henke
9fc47dedf8 Add unlocked ioctls for directories.
The use of the Scout ioctls for inode-since and data-since on the root
directory is a rather helpful boost. This allows user code to start on
blank filesystems and monitor activity without needing to create files.

The existing ioctl code was already present, so wiring into the
directory file operations was all that needed to happen.

Signed-off-by: Nic Henke <nic.henke@versity.com>
Reviewed-by: Zach Brown <zab@versity.com>
Reviewed-by: Mark Fasheh <mfasheh@versity.com>
2017-04-18 14:03:24 -07:00
Zach Brown
e61697a54e Add generic file and dir seek methods
Two more xfstests pass when we can seek in files and dirs.

Signed-off-by: Zach Brown <zab@versity.com>
Reviewed-by: Mark Fasheh <mfasheh@versity.com>
2017-04-18 14:03:22 -07:00
Zach Brown
efd95688d3 Add printf format checking to scoutfs msg funcs
scoutfs_msg() was missing the attribute to check printf formats and
arguments.
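
The missing attribute is the usual gcc format annotation; a sketch with a
hypothetical prototype (the real scoutfs_msg() arguments may differ):

struct super_block;

/* tell the compiler that argument 2 is a printf format checked against
 * the variadic arguments starting at argument 3 */
__attribute__((format(printf, 2, 3)))
void scoutfs_msg(struct super_block *sb, const char *fmt, ...);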

Signed-off-by: Zach Brown <zab@versity.com>
Reviewed-by: Mark Fasheh <mfasheh@versity.com>
2017-04-18 13:59:54 -07:00
Zach Brown
cec3f9468a Further isolate rings and compaction
Each mount was still loading the manifest and allocator rings and
starting compaction, even if they were coordinating segment reads
and writes with the server.

This moves ring and compaction setup and teardown from mount and
unmount to server startup and shutdown.  Now only the server
has the rings resident and is running compaction.

We had to null some of the super info fields so that we can repeatedly
load and destroy the ring indices over the lifetime of a mount.

We also have to be careful not to call between item transactions and
compaction.  We'll restore this functionality with the server in the
future.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:51:10 -07:00
Zach Brown
5eefaf34f8 Server updates ring for level0 segment writes
Transaction commits currently directly modify the ring and super block
as segments are written.  As we introduce shared mounts only the server
can modify the ring and super blocks.

This adds network messages to let mounts write items in a level 0
segment while the server modifies the allocator and manifest.

The item transaction commit now sends a message to the server to get an
allocated segno for its new level0 segment and sends a manifest entry to
the server once the segment is written.  The request and reply handlers
for the functions are straight forward.  The processing paths are simple
wrappers around the allocation and update functions that transaction
writing used to call directly.

Now that the item transactions aren't updating the super, sync can't
work with the super sequence numbers.

The server needs to make both allocations and manifest updates
persistent before it sends replies to the client.  We add the ability
for the server processing paths to queue and wait for commits of the
rings and super block.  We can hopefully get reasonable batching by using
a work struct for the commit.  We update the other processing path
callers that modify the rings to use the new commit mechanism.

We add a few segment and manifest functions to work with manifest
entries that describe segments.  This creates a bit of similar looking
code throughout the segment and manifest code but we'll come back and
clean this up once we see what the final shared support looks like.

scoutfs_seg_alloc() now takes the segno from the caller for the segment
it's allocating and inserting into the cache.  Transaction commit uses
the segno it got from the server while compaction still allocates
locally.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:51:10 -07:00
Zach Brown
5487aee6a7 Read items with manifest entries from server
Item reading tries to directly walk the manifest to find segments to
read.  That doesn't work when only the server has read the ring and
loaded the manifest.

This adds a network message to ask the server for the manifest entries
that describe the segments that will be needed to read items.

Previously item reading would walk the manifest and build up native
manifest references in a list that it'd use to read.  To implement the
network message we add request sending, processing, and reply parsing
around those original functions.  Item reading now packs its key range
and sends it to the server.  The server walks the manifest and sends the
entries that intersect with the key range.  Then the reply function
builds up the native manifest references that item reading will use.

The net reply functions needed an argument so that the manifest reading
request could pass in the caller's list that the native manifest
references should be added to.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:51:10 -07:00
Zach Brown
b50de90196 Alloc inodes from pool from server
Inode allocation was always modifying the in-memory super block.  This
doesn't work when the server is solely responsible for modifying the
super blocks.  We add network messages to have mounts send a message to
the server to request inodes that they can use to satisfy allocation.
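
A minimal sketch of the client side, with hypothetical names: the mount
keeps a small pool of inode numbers handed out by the server and only
goes back over the network when it runs dry.

#include <stdint.h>

struct ino_pool {
    uint64_t next_ino;
    uint64_t nr;    /* inodes remaining in the batch from the server */
};

/* request_batch() stands in for the net round trip to the server */
static int alloc_ino(struct ino_pool *pool, uint64_t *ino,
                     int (*request_batch)(struct ino_pool *pool))
{
    if (pool->nr == 0) {
        int ret = request_batch(pool);
        if (ret)
            return ret;
    }

    *ino = pool->next_ino++;
    pool->nr--;
    return 0;
}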

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:51:10 -07:00
Zach Brown
453715a78d Only shutdown locks that were setup
Lock shutdown was crashing trying to deref a null linf on cleanup from
mount errors that happened before locks were set up.  Make sure lock
shutdown only tries to do work if the locks have been setup.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:51:10 -07:00
Zach Brown
45882f5a77 Add some ring tracing
Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:51:10 -07:00
Zach Brown
5e0e9ac12e Move to much simpler manifest/alloc storage
Using the treap to be able to incrementally read and write the manifest
and allocation storage from all nodes wasn't quite ready for prime time.
The biggest problem is that invalidating cached nodes which are the
target of native pointers, either for consistency or memory pressure, is
problematic.  This was getting in the way of adding shared support as
readers and writers try to use as much of their treap caches as they
can.  There were other serious problems that we'd run into eventually:
memory pressure from duplicate caching in native nodes and the page
cache, small IOs from reading a page at a time, the risk of
pathologically imbalanced treaps, and the ring being corrupted if the
migration balancing doesn't work (the model assumed you could always
dirty an individual node in a transaction; you have to dirty all the
parents in each new transaction).

Let's back off to a much simpler mechanism while we build the rest of
the system around it.  We can revisit aggressively optimizing this when
it's our worst problem.

We'll store the indexes that the manifest server needs in simple
preallocated rings with log entries.  The server has to read the index
in its entirety into a native rbtree before it can work on it.  We won't
access the physical ring from mounts anymore, they'll send messages to
the server.

The ring callers are now working with a pinned tree in memory so the
interface can be a bit simpler.  By storing the indexes in their own
rings the code and write path become a lot simper: we have an IO
submission path for each index instead of "dirtying" calls per index and
then a writing call.

All this is much more robust and much less likely to get in our way as
we stand up the rest of the system around it.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:51:10 -07:00
Zach Brown
86d3090982 Tighten lock range error handling
If lock_range returns an error then the caller won't unlock the range.
Make sure to unlock the range if we have it locked when we get errors
that we're going to return to the caller.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:51:10 -07:00
Zach Brown
104bbb06a9 Remove cached range when invalidating items
When invalidating items we need to remove the cached
range that covers the range of keys that we're removing so that
the removed items aren't then considered negative cached items.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:51:10 -07:00
Zach Brown
2ea5f1d734 invalidate_others could return uninit ret
Make sure to initialize ret in case there aren't other mounts.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:51:10 -07:00
Zach Brown
8c59902b70 scoutfs: cleanup socket callbacks
The first attempt at wiring up the socket callbacks was a bit too
precious.  We can simplify and do what other modern socket callback
users do: don't bother with the callback locks and call shutdown before
release.

We also protect against spurious callbacks by only doing work in the
callbacks when the sk user_data points to a sock_info which points back
to the socket.
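
The spurious-callback guard boils down to a pointer round trip; a
kernel-style sketch with hypothetical names (note the sk_data_ready
signature differs across kernel versions):

#include <net/sock.h>
#include <linux/workqueue.h>

struct sock_info {    /* hypothetical */
    struct socket *sock;
    struct work_struct recv_work;
};

static void example_data_ready(struct sock *sk)
{
    struct sock_info *sinf = sk->sk_user_data;

    /* only do work if sk_user_data points at our sock_info and it
     * points back at this socket */
    if (sinf && sinf->sock && sinf->sock->sk == sk)
        schedule_work(&sinf->recv_work);
}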

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:51:10 -07:00
Zach Brown
27e55eb43c Flesh out some pieces of the scoutfs.md doc
Trying to keep adding coverage across the design.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:51:10 -07:00
Zach Brown
39ae89d85f Add network messaging between mounts
We're going to need communication between mounts to update and
distribute the manifest and allocators in the treap ring.

This adds a networking core where one mount becomes the server and other
mounts send requests to it.  The messaging semantics are pretty simple
in that clients reliably send requests and the server passively replies to
requests.  Complexity beyond that is up to the callers implementing the
requests.

It relies on locking to establish the server role and to broadcast the
address of the server socket.  We add a trivial lvb back to our local
test locking implementation to store the address.  We also add the
ability to shut down locking so that the locking networking work stops
blocking.

A little demonstration request is included which just gives visibility
into client and server clocks in the trace logs.  Next up we'll add the
requests that do real work.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:51:10 -07:00
Zach Brown
392ed81c43 Add some simple lock/invalidation tracing
Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:51:10 -07:00
Zach Brown
955d940c64 Restore key tracing
Now that the keys are a contiguous buffer we can format them for the
trace buffers with a much more straightforward type check around
per-key snprintfs.  We can get rid of all the weird kvec code that tried
to deal with keys that straddled vectors.

With that fixed we can uncomment the tracing statements that were
waiting on the key formatting.

I was testing with xattr keys so they're added as the code is updated.
The rest of the key types will be added separately as they're used.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:51:10 -07:00
Zach Brown
607eff9b7c Add range locking to xattr ops
We can use easy xattrs to test range locking and item consistency
between mounts.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:49:16 -07:00
Zach Brown
b3b2693939 Add simple debugging range locking layer
We can work on shared mechanics without requiring a full locking server.
We can stand up a simple layer which uses shared data structures in a
kernel image to lock between mounts in the same kernel.

On mount we add supers to a list.  Held locks are tracked in an rbtree.
A lock attempt blocks until it doesn't conflict with anything in the
rbtree.

As locks are acquired we walk all the other supers and write/invalidate
any items they have which intersect with the acquired range.  This is
easier to implement and less efficient than caching locks after they're
unlocked and implementing downconvert/blocking/revoke.
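
The conflict test at the heart of it is simple interval overlap; a
userspace sketch with hypothetical names:

#include <stdint.h>

struct held_range {
    uint64_t start;
    uint64_t end;    /* inclusive */
};

/* a lock attempt blocks until its range doesn't overlap any held range */
static int ranges_conflict(const struct held_range *a,
                           const struct held_range *b)
{
    return a->start <= b->end && b->start <= a->end;
}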

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:44:55 -07:00
Zach Brown
f373f05fb7 Add engineering markdown document
Let's put the engineering doc in the source tree so that eventually
it'll be easily found upstream.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:44:55 -07:00
Zach Brown
97cb75bd88 Remove dead btree, block, and buddy code
Remove all the unused dead code from the previous btree block design.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:44:55 -07:00
Zach Brown
6bcdca3cf9 Update dirent last pos and update first comment
The last valid pos for us is now a full u64 because we're storing
entries at an increasing counter instead of at a hash of the entry name.

And might as well add a clarifying comment to the first pos while we're
here.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:44:55 -07:00
Zach Brown
00fed84c68 Build statfs f_blocks from total_segs
Use the current total_segs field to calculate the total number of blocks
in the system instead of the old and redundant total_segs field which is
going away.

Signed-off-by: Zach Brown <zab@versity.com>
2017-04-18 13:44:54 -07:00