Unsaved file synchronisation #50
-
We played with a version of the protocol that did try to stub out the entire file system; the idea was that you could then run the agent on a different machine than the codebase. The problem is twofold:
So, although it introduces some race conditions, the pragmatic solution is to ignore them. Proxying reads and edits through the client is important because the agent is typically editing files you're also editing, and conflicts are hard to resolve if you're going via file-system writes. Searching is much less sensitive to the stale-context problem, and if your search results are missing a file you've just edited, you can always tell the model where to look. We also used to notify the Zed native agent whenever files changed, but this had the unexpected downside of potentially sending your model a lot of confusing and irrelevant data.

In summary, a general-purpose filesystem-sharing API would look very different from something that is focused on making agents work well. There are probably areas we can make it better, but we'd love to see actual user experience reports of the problems caused by the current approach in real use.
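To make "proxying reads and edits through" concrete, here's a rough TypeScript sketch of an agent routing file access through the client instead of touching disk. The `Connection` wrapper and the exact parameter/response shapes are illustrative; the method names follow ACP's `fs/read_text_file` and `fs/write_text_file` client methods, but check the schema for the real types.

```typescript
// Illustrative only: `Connection` is a stand-in for a JSON-RPC transport, not
// part of any published ACP library, and the payload shapes are approximate.
interface Connection {
  request(method: string, params: unknown): Promise<any>;
}

// Read through the client so the agent sees unsaved editor buffers rather
// than the possibly-stale contents on disk.
async function readTextFile(
  conn: Connection,
  sessionId: string,
  path: string,
): Promise<string> {
  const result = await conn.request("fs/read_text_file", { sessionId, path });
  return result.content;
}

// Write back through the client as well, so the edit lands in the open buffer
// and conflicts surface in the editor instead of via raw disk writes.
async function writeTextFile(
  conn: Connection,
  sessionId: string,
  path: string,
  content: string,
): Promise<void> {
  await conn.request("fs/write_text_file", { sessionId, path, content });
}
```

Search (rg and friends) still runs against disk, which is exactly the race the paragraph above chooses to accept.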
-
Hi, I found ACP on Hacker News and I have a question / problem statement.
ACP has a concept of reading from and writing to unsaved files. Meanwhile, AI agents typically use file-search tools like ripgrep to search for relevant information in the repo.
Here's the problem: the search results the agent gets from rg might be wrong, because there are unsaved files. LSP solves this with the concept of open files: the LSP client notifies every LSP server about the files it owns and modifies, so the servers can both access the file system and use the in-memory state of editor-opened files.
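For reference, this is roughly the LSP mechanism I mean (a TypeScript sketch; `send` is a stand-in transport and the payloads are trimmed to the relevant fields):

```typescript
// Sketch of the LSP "open files" flow, trimmed to the relevant fields.
// `send` stands in for whatever JSON-RPC transport the editor actually uses.
const send = (method: string, params: unknown): void => {
  console.log(JSON.stringify({ jsonrpc: "2.0", method, params }));
};

// The client announces that it now owns this document; from here on the
// server must use the pushed text, not whatever happens to be on disk.
send("textDocument/didOpen", {
  textDocument: {
    uri: "file:///repo/src/main.rs",
    languageId: "rust",
    version: 1,
    text: "fn main() {} // unsaved buffer contents",
  },
});

// Unsaved edits are streamed as didChange notifications with a bumped version
// (full-document sync shown here for simplicity).
send("textDocument/didChange", {
  textDocument: { uri: "file:///repo/src/main.rs", version: 2 },
  contentChanges: [{ text: 'fn main() { println!("hello"); }' }],
});
```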
Can ACP solve this?