Podman --volume and modifying the lower dir of an overlay mount #26339
-
Podman's `podman/docs/source/markdown/options/volume.md`, lines 136 to 137 in 6b8bc6f, contains a notice about modifying the lower dir of an overlay mount. I did some Git blaming, and this notice has existed from the start. What "unexpected failures" does modifying the lower dir of an overlay mount cause? I use (better: used; I'm currently reviving a few things) the "original" …
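For context, this is the feature the question is about: the `:O` suffix on `--volume` mounts the host directory as the lower dir of an overlay, so the container gets a writable view while writes land in a temporary upper dir. A minimal sketch (image and paths are made up for the example; the `podman` step is skipped if Podman is not installed):

```shell
# Create a host directory that will serve as the overlay lower dir.
src=$(mktemp -d)
echo original > "$src/config.txt"

# Mount it with :O so container writes go to a temporary upper dir,
# not back to the host directory.
if command -v podman >/dev/null; then
  podman run --rm -v "$src":/data:O docker.io/library/alpine \
    sh -c 'echo changed > /data/config.txt'
fi

# The host copy is unchanged: the overwrite only hit the upper dir.
cat "$src/config.txt"
```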
Replies: 3 comments 3 replies
-
I think this mostly refers to the kernel warning; I don't know what bad things could happen. Maybe @giuseppe knows more about it: https://docs.kernel.org/filesystems/overlayfs.html#changes-to-underlying-filesystems
-
With "unexpected failures" we only refer to the kernel warning that @Luap99 pointed out. If you make sure the "metacopy", "index", "xino" and "redirect_dir" features are not used for the overlay mount, then offline modifications are OK. With a running container, however, you are basically changing the data overlay works on under its feet. Some operations are not atomic (which is fine as long as the lower dir is not modified, as it should not be), so depending on the timing you might get some weird results.
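A minimal sketch of such a mount with those four features explicitly disabled, so that offline edits to the lower dir are tolerated (directory layout is made up for the example; the mount itself needs root and is skipped otherwise):

```shell
# Overlay layout: lower (read-only source), upper + work (writes), merged (view).
dir=$(mktemp -d)
mkdir -p "$dir/lower" "$dir/upper" "$dir/work" "$dir/merged"
echo hello > "$dir/lower/file"

# Mount with metacopy, index, xino and redirect_dir switched off,
# per the reply above. Only attempted when running as root with overlay support.
if [ "$(id -u)" -eq 0 ] && grep -q overlay /proc/filesystems 2>/dev/null; then
  mount -t overlay overlay \
    -o "lowerdir=$dir/lower,upperdir=$dir/upper,workdir=$dir/work,metacopy=off,index=off,xino=off,redirect_dir=off" \
    "$dir/merged" &&
  { cat "$dir/merged/file"; umount "$dir/merged"; }
fi
```

These options correspond to the kernel's overlayfs mount parameters; with them off, the filesystem keeps no extra state (inode indexes, redirect xattrs) that an offline edit of the lower dir could invalidate.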
-
Thank you very much! This helps a lot. I didn't even think about this being a kernel limitation at first, because there's no such limitation with `unionfs-fuse`. Does this limitation apply to `fuse-overlayfs` as well?

Unfortunately, unmounting (i.e. stopping the container) is no option. In theory I could set things up similar to how deploying to a remote server works, but this also requires rather complex path rewriting and, even more critically, path merging (unfortunately; that's a technical debt of an old framework we chose 14 years ago). Unfortunately, my IDE doesn't support that. As a last resort I could write a sync script with inotify hooks…

It looks like the reason this is no issue with `unionfs-fuse` is … So, I tested it, and it indeed seems like bind-mounting a …
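A hypothetical sketch of the "sync script with inotify hooks" mentioned above: watch the IDE's source tree and copy changed files into the container's writable merged view, never into the lower dir of a live overlay. All paths are illustrative, and the script assumes the inotify-tools package:

```shell
# Mirror close_write events from $1 (IDE source tree) into $2 (merged view).
sync_loop() {
  local src=$1 dest=$2
  inotifywait -m -r -e close_write --format '%w%f' "$src" |
  while IFS= read -r f; do
    rel=${f#"$src"/}
    # install -D creates missing parent dirs in the destination.
    install -D -- "$f" "$dest/$rel"
  done
}

# Usage (runs until interrupted):
#   sync_loop /home/me/project /path/to/merged/project
```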
Of course, race conditions are still possible, just as with any other concurrent I/O. For example, `unionfs-fuse` uses the FUSE page cache. My question rather was whether `fuse-overlayfs` implements additional caching that might cause issues not related to race conditions. I did some testing, and the answer is: it does. `fuse-overlayfs` doesn't just end up doing weird things like native `overlayfs`, it just completely breaks.

Let's create a simple test setup for `unionfs-fuse` vs. native `overlayfs` vs. `fuse-overlayfs`: