S3 support via a new adapter#677
Conversation
stevearc
left a comment
Did a short initial review. Overall looks pretty good! Left some inline comments, and a few high level thoughts:
- Any ideas on how to best handle regions? It feels like it might make the most sense as part of the URL.
- Adding the new `bucket` type makes me a bit nervous. There are a lot of locations in the codebase that switch and do things with the type. I'd have to do a lot of reading to convince myself that this is safe. If it's possible and not too janky to implement this without adding a new entry type, that would be preferred.
- If you're looking for a shorter url scheme while waiting for the upstream bug to be fixed, you could use `oil-sss://`
- I expect that there will be desire in the future to support other forms of blob storage (azure, google cloud, etc). No action items here; I think that with the current layering approach to the API it's already in a decent place for extension, but just something to keep in the back of your mind.
lua/oil/adapters/s3.lua
Outdated
```lua
-- paths ending with "/-" are usually used for an "empty folder" in s3
if path and vim.endswith(path, "/-") then
```
Can you explain a bit more about how this manifests in the CLI tool? Is it like

```
foo/   # <-- this may have children
bar/-  # <-- this one will be empty
```

or like inside of `bar` you see a single entry:

```
-
```

or something else?
Here is an example with some commands:

```
➜ aws s3 ls s3://danielkonge-new-test-bucket --profile=danielkonge
➜ echo "" | aws s3 cp - s3://danielkonge-new-test-bucket/folder/ --profile=danielkonge
➜ aws s3 ls s3://danielkonge-new-test-bucket --profile=danielkonge
                           PRE folder/
➜ aws s3 ls s3://danielkonge-new-test-bucket/folder/ --profile=danielkonge
2025-11-03 16:45:50          1 -
➜ echo "" | aws s3 cp - s3://danielkonge-new-test-bucket/folder/file.txt --profile=danielkonge
➜ aws s3 ls s3://danielkonge-new-test-bucket/folder/ --profile=danielkonge
2025-11-03 16:45:50          1 -
2025-11-03 16:46:21          1 file.txt
```
Since there is no real concept of folders in s3, what it actually does when you create a "folder" is to create a marker file named `-` to note that there is a folder there. If you then put something into the folder, everything works as expected, but you keep having the marker file.
If instead you create the file in the folder straightaway, you don't get these marker files:
```
➜ aws s3 ls s3://danielkonge-new-test-bucket2 --profile=danielkonge
➜ echo "" | aws s3 cp - s3://danielkonge-new-test-bucket2/folder/file.txt --profile=danielkonge
➜ aws s3 ls s3://danielkonge-new-test-bucket2 --profile=danielkonge
                           PRE folder/
➜ aws s3 ls s3://danielkonge-new-test-bucket2/folder/ --profile=danielkonge
2025-11-03 16:51:25          1 file.txt
```
But I think the workflow of creating the folder first and then individual files in it afterwards is pretty common in oil (at least to me).
Actually, we should never have the `/-` at the end of the oil URL, so I have removed this code. The important part for handling `-` files is in `M.list_dir` in `s3fs.lua`, where I write:

```lua
if name ~= "-" then
  local cache_entry = cache.create_entry(url, name, type)
  table.insert(cache_entries, cache_entry)
  cache_entry[FIELD_META] = meta
end
```

So we just don't show the `-` marker files in Oil.
lua/oil/adapters/s3.lua
Outdated
```lua
M.supported_cross_adapter_actions = { files = "all" }
```
What's the reason for adding this? I would expect "move" to work fine here, and I don't see where you're using the new value
I think "move" should be fine, yes. I thought I needed this originally because it supported both copy and move, but it seems that the logic just checks whether a move should be done as copy + delete, so simply saying that we have move is enough, I guess. (I think I had some more logic too at some point, but it was probably removed during some cleanup.)
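If so, the declaration could presumably shrink to something like this (a sketch based on the discussion above, assuming oil falls back to copy + delete when a cross-adapter move is requested):

```lua
-- sketch: only advertise "move" across adapters; oil can handle it
-- as copy + delete under the hood when a true move is not possible
M.supported_cross_adapter_actions = { files = "move" }
```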
It will probably be hard to find a really good solution for regions. The problem with having the region in the URL is that it seems hard to get the region for each bucket; at least I don't know a better solution than doing a separate lookup per bucket (which can be very slow). We could add a way to specify the region when creating buckets, we just wouldn't display it then. I think that would be fine for writing when creating buckets, but I would have liked to have the region displayed too.
It is probably possible, but I think the new type is the easiest solution I can see. At least I would like to keep something like the changes I have made there.
I think
I think the current layout, with the "filesystem" layer split out, already leaves things in a decent place for extension to other blob storage backends.
If the region is only really needed for creating new buckets, then maybe we should just not allow the creation of buckets using oil. It seems like a pretty uncommon use case; I would imagine that most of the time this would be used for exploring and manipulating buckets that already exist. If we also give up the special icon/highlight for buckets, then I don't think we would need the new entry type at all. I know it's a slight downgrade in usability, but it keeps all the functionality and significantly decreases the complexity of this change. We might come up with some other way to customize the icons/highlights in the future, since this is something that people keep asking for.
I think I might have caused a bit of confusion with what I wrote earlier. The region is generally not needed, and you can create buckets without the region. The only time you might run into problems is when you manually switch aws accounts with the `--profile` flag.
I have removed the entry type and icon/highlight for buckets now, and everything should still work fine. I also updated the README and docs.
stevearc
left a comment
Looking good! I pulled it down to test and ran into one friction point: if the `aws` CLI tool is not installed, you just get an ugly error when attempting to open an S3 buffer. It would be great to detect that and display a friendlier error message instead.
Other than that it works great! It's super cool to see it working.
I fixed the wrong version check and added a quick check of whether the `aws` CLI is available.
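For context, the check is along these lines (a sketch rather than the exact code in the PR; `vim.fn.executable` and `vim.notify` are standard Neovim APIs):

```lua
-- bail out early with a readable message instead of a raw spawn error
if vim.fn.executable("aws") == 0 then
  vim.notify(
    "oil: the aws CLI is required for s3 support but was not found in PATH",
    vim.log.levels.ERROR
  )
  return
end
```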
Looks good! Thanks for the addition to oil!
This is still a draft and not completely ready yet, but I might need a bit of help with some of the non-S3 related things (see below). [Update: It is mostly ready now and I think it makes sense to get some feedback before I do much more on it.]
s3 is a bit different from ssh since we don't keep a constant connection and instead make new `aws s3 ls` calls when moving around, but I have still tried to build similar behavior to what is currently in `sshfs.lua`; see `s3fs.lua`.

Having the "filesystem" set up similarly, we then also build the adapter similar to what is done for ssh; see `ssh.lua` and `s3.lua`. (Again only with minor differences from the fact that we don't keep an active connection in the same way.)

I am still running into a few problems, among which the primary ones off the top of my head are:

- The scheme is currently `oil-sthree://`, since any scheme with a number in its name made it so that neovim didn't open the buffer correctly. I am not sure how to best work around this, but `oil-s3://` would be the clearly better scheme name here.
- `Oil --s3` or similar, but I haven't added that yet.
- There are some `vim.notify` calls to track behavior that should of course be removed before this is ready.

For testing this out I made a new aws account and used `extra_s3_args = { "--profile=<my-profile>", "--region=<my-region>" }` when running stuff.

If you want to edit some of this yourself if that is easier, feel free to edit directly on this branch. (Also let me know if you want cleaner commits, then I can just rebase.)
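For anyone else testing, that corresponds to a config roughly like the following (a sketch; `<my-profile>` and `<my-region>` are placeholders, and the exact location of the option depends on how it ends up wired into `setup`):

```lua
require("oil").setup({
  extra_s3_args = { "--profile=<my-profile>", "--region=<my-region>" },
})
```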
This PR will close #633.