Add section on using LLMs to CONTRIBUTING.md (#5211)
This adds a section on using LLMs to the CONTRIBUTING.md guide,
including how to connect to an MCP server that knows about AMReX's
documentation.
The proposed changes:
- [ ] fix a bug or incorrect behavior in AMReX
- [ ] add new capabilities to AMReX
- [ ] changes answers in the test suite to more than roundoff level
- [ ] are likely to significantly affect the results of downstream AMReX
users
- [ ] include documentation in the code and/or rst files, if appropriate
Additional information regarding Doxygen comment formatting can be found
in the [Doxygen Manual](https://www.doxygen.nl/manual/).

## LLM-assisted workflows

Large Language Models (LLMs) can assist with AMReX development by helping to understand the existing
codebase, writing new code, reviewing pull requests, finding bugs, and adding tests and documentation.
This section of the developer's guide documents how the AMReX repository is configured for LLM-based coding assistants
and how to get the most out of them.

> **_NOTE:_** LLMs can hallucinate and sometimes produce code that compiles and runs but is incorrect. LLM-written code should
> be manually reviewed with care before opening a pull request. Please respect the AMReX maintainers' time by making sure you understand
> any bot-generated code before requesting a review.

### Best Practices

When working with LLM coding assistants, keep in mind that *"most best practices are based on one constraint: [the] context window fills up fast, and performance degrades as it fills"* ([Claude Code Best Practices](https://code.claude.com/docs/en/best-practices)).
Starting from examples and iterating incrementally — as described below — helps keep sessions focused and productive.

1. **Start small and iterate incrementally.**
   Run your coding assistant inside the AMReX source directory.
   Point the assistant to an existing function, test, or example code and ask it to modify it, gradually adding complexity and verifying along the way.

2. **Write a test.**
   Give the agent a way to verify its work. Often, the agent itself can write tests first, then implement functionality.

3. **Be specific in your prompts.**
   Reference specific files, tell the agent where to look for code patterns, and describe specific cases to test.

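To illustrate the practices above, a prompt might look like the following sketch. `MultiFab::Copy` and its source file are real AMReX code used here as a reference point; the function being developed is hypothetical:

```text
In Src/Base/AMReX_MultiFab.cpp, look at how MultiFab::Copy handles the
component range and the number of ghost cells. Following that pattern,
modify the function I just added so it accepts an nghost argument, and
add a test covering both nghost = 0 and nghost = 1.
```
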
### Connecting to a Documentation Context through an MCP Server

A [Model Context Protocol (MCP)](https://modelcontextprotocol.io) server is a standardized way to provide external context, such as library documentation, to LLM-based coding assistants.
When an MCP server is configured, the assistant can query up-to-date AMReX documentation on demand, rather than relying solely on its training data. This is helpful because agents might otherwise rely on out-of-date information from their training data and/or have to read too many files into their context windows to perform development tasks.

### Setting Up Context7 as an MCP Server

[Context7](https://context7.com) is a service that indexes open-source project documentation and serves it through the MCP protocol.
Once connected, a coding assistant (Claude Code, Cursor, VS Code Copilot, Windsurf, etc.) can retrieve relevant sections of the AMReX documentation in real time when helping you develop applications and input files.
See the [Context7 documentation](https://context7.com/docs/resources/all-clients) for instructions on configuring [AMReX](https://context7.com/amrex-codes/amrex) with popular coding assistants.
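As a concrete sketch, many clients accept a JSON configuration entry along the following lines (for Claude Code this goes in a project-level `.mcp.json`). The URL below is Context7's public MCP endpoint at the time of writing; consult the Context7 client documentation linked above for the current details for your assistant:

```json
{
  "mcpServers": {
    "context7": {
      "type": "http",
      "url": "https://mcp.context7.com/mcp"
    }
  }
}
```

With Claude Code, the equivalent can typically be done from the command line with `claude mcp add --transport http context7 https://mcp.context7.com/mcp`.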