core: Improve access tracking performance for read/write operations
Currently, read and write operations on the VirtualFileSystem are
"stateless" in the sense that no corresponding "open" and "close" are
exposed. At the NFSv4 protocol level this state exists, and nfs4j
already tracks it internally (as `stateid4.opaque` byte sequences), but
it is not exposed through the `VirtualFileSystem` interface.
This is a problem not only for scenarios that require an explicit
open/close; it is also a performance problem for already-implemented
scenarios: PseudoFs calls `checkAccess(inode,
ACE4_READ_DATA/ACE4_WRITE_DATA)` for every `read`/`write`, which in
turn triggers a Stat for each read/write operation.
This incurs an unnecessary performance penalty of more than 20%.
The per-operation access check is unnecessary because a successful
check upon "open" remains valid for the entire lifetime of the open
state, just as it does for an open file descriptor in POSIX.
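To illustrate the cost difference (a toy sketch, not nfs4j code; the
names `stat`, `checkAccess`, and the counter are stand-ins): the naive
path performs one Stat per read, while the POSIX-style path checks
access once at open time.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy illustration of why per-operation access checks are costly:
// every access check needs a stat, so checking on each read multiplies
// the stat count by the number of reads.
final class AccessCheckCost {
    static final AtomicInteger statCalls = new AtomicInteger();

    // Stand-in for a real (expensive) Stat of the inode.
    static void stat() { statCalls.incrementAndGet(); }

    // Each access check triggers a stat.
    static void checkAccess() { stat(); }

    // Naive: re-validate access on every single read.
    static void naiveReads(int n) {
        for (int i = 0; i < n; i++) {
            checkAccess();
            /* ... perform the read ... */
        }
    }

    // Open-time: validate once at "open"; reads reuse the granted access.
    static void openThenReads(int n) {
        checkAccess(); // once, at open
        for (int i = 0; i < n; i++) {
            /* ... perform the read ... */
        }
    }
}
```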
In order to properly track granted access, we can leverage data stored
in stateid4 and the existing work in FileTracker.
Since stateid4 exists in NFSv4 only, we make sure there is no
performance degradation in NFSv3 and no exposure of NFSv4-only internals
in the VirtualFileSystem API:
1. Expose "Opaque" stateids to VirtualFileSystem read/write/commit; add
open/close for custom FS interop.
2. Rework stateid4, which currently conflates stateid and seqid;
   where possible, reference the stateid4.other Opaque directly.
3. In PseudoFS, check SharedAccess state from FileTracker and use this
to determine access during read/write when available.
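The idea behind step 3 can be sketched roughly as follows. This is a
hypothetical, simplified model, not the actual nfs4j API: `OpaqueKey`,
`AccessTracker`, and the `SHARE_ACCESS_*` constants are illustrative
names. Share access granted at OPEN is remembered under the stateid's
opaque bytes and consulted on READ/WRITE, so no per-operation Stat is
needed.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: track share access granted at OPEN, keyed by the stateid's
// opaque byte sequence, and consult it on READ/WRITE instead of
// issuing a fresh checkAccess/Stat per operation.
final class AccessTracker {
    static final int SHARE_ACCESS_READ = 1;
    static final int SHARE_ACCESS_WRITE = 2;

    // Wrap the opaque byte[] so it can serve as a map key.
    record OpaqueKey(byte[] bytes) {
        @Override public boolean equals(Object o) {
            return o instanceof OpaqueKey k && Arrays.equals(bytes, k.bytes);
        }
        @Override public int hashCode() { return Arrays.hashCode(bytes); }
    }

    private final Map<OpaqueKey, Integer> granted = new ConcurrentHashMap<>();

    // Called once on OPEN, after the (expensive) access check succeeded.
    void recordOpen(byte[] stateidOpaque, int shareAccess) {
        granted.put(new OpaqueKey(stateidOpaque), shareAccess);
    }

    // Called on READ/WRITE: true if the requested access was already
    // granted at open time, so no per-operation Stat is required.
    boolean isGranted(byte[] stateidOpaque, int shareAccess) {
        Integer g = granted.get(new OpaqueKey(stateidOpaque));
        return g != null && (g & shareAccess) == shareAccess;
    }

    // Called on CLOSE to drop the tracking entry.
    void recordClose(byte[] stateidOpaque) {
        granted.remove(new OpaqueKey(stateidOpaque));
    }
}
```

In NFSv3 there is no stateid, so a lookup would simply miss and the
existing checkAccess path would be taken unchanged.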
With the change, for sustained reads I'm seeing 854 MB/s instead of 712
MB/s on LocalFileSystem, and 2000 MB/s instead of 1666 MB/s on a custom
implementation.
Signed-off-by: Christian Kohlschütter <[email protected]>