Add various prompts to prompts.md to assist with project maintenance #8668
Conversation
docs/prompts.md (Outdated)
4. **Implement Fix:**
   * Resolve the specific issue (e.g., remove the override-only call, migrate the internal API).
   * Ensure the fix is minimal and targeted.
5. **Verify Fix:** Run `./gradlew verifyPlugin` again to confirm the specific warning is gone and no new issues were introduced.
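For context, the kind of change step 4 asks for might look like the Kotlin sketch below; the class and call site are hypothetical examples, not actual findings from the verifier.

```kotlin
// A minimal, hypothetical sketch of the "remove the override-only call" fix
// from step 4. Which methods are flagged depends on the platform version, so
// treat the names below as illustrative only.
import com.intellij.openapi.project.Project
import com.intellij.openapi.wm.ToolWindow
import com.intellij.openapi.wm.ToolWindowFactory

class SampleToolWindowFactory : ToolWindowFactory {
    override fun createToolWindowContent(project: Project, toolWindow: ToolWindow) {
        // The platform calls this when the tool window is first shown; plugin
        // code should only override it, never invoke it directly.
    }
}

// Before the fix, some call site invoked the method explicitly, e.g.:
//   SampleToolWindowFactory().createToolWindowContent(project, toolWindow)
// The fix deletes that direct call and lets the platform drive the lifecycle,
// then re-runs `./gradlew verifyPlugin` to confirm the warning is gone (step 5).
```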
I love this but worry that the hardest part is missing -- verifyPlugin only verifies that the warning is gone but it doesn't verify that the fix is behavior-preserving.
Maybe add a section that includes ideas from your experience on how to do that? Or maybe start a doc and we can all collect ideas?
Before a PR is created (before LLMs and now), manual testing is the answer here. Additional unit tests and integration tests would help give confidence here too.
The intention of this doc is to be a place to store prompts to paste into an agent, not commentary on what should be done before a PR is generated or landed in the repo(s). I will follow up with another PR.
I guess I'd still feel more comfortable if this automation made it explicit to the runner that this alone is not sufficient to call a fix "verified", but I will defer to @helin24.
Could we add a note to the Objective above? "Note: the branches will still require manual testing"
Or maybe better, add a "6. Suggest manual test steps: Check the code changes made and write test steps for a user to execute that will trigger the code paths that have changed. If needed, add logging statements to verify that the code paths have successfully run"
+1. This is not automation: PRs can't be landed without review, and shouldn't be created without appropriate testing & possibly manual testing. prompts.md is a collection of shared prompts for future project owners, instead of individuals keeping their own individual copies.
Hmm.. I may be confused about the purpose of this PR. This is meant to be a repository of prompts for us to copy and paste to gemini if we are working on something related, so that we don't have to spend time thinking of our own prompts? And if we are writing prompts for some other category of work, we'd want to update this so that future maintainers of the plugin could have an idea of what we have asked for?
> Or maybe better, add a "6. Suggest manual test steps: Check the code changes made and write test steps for a user to execute that will trigger the code paths that have changed. If needed, add logging statements to verify that the code paths have successfully run"

I like this-- adding it now.
> Hmm.. I may be confused about the purpose of this PR. This is meant to be a repository of prompts for us to copy and paste to gemini if we are working on something related, so that we don't have to spend time thinking of our own prompts? And if we are writing prompts for some other category of work, we'd want to update this so that future maintainers of the plugin could have an idea of what we have asked for?

100%, that is how I understand the idea as well.

I went through and added the "Suggest manual test steps: Check the code changes made and write test steps for a user to execute that will trigger the code paths that have changed. If needed, add logging statements to verify that the code paths have successfully run." step to a bunch of the prompts.
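For illustration, the "add logging statements" part of that step could be as small as the Kotlin sketch below; the logger category and message are made up for the example.

```kotlin
// Hypothetical breadcrumb for manual verification: if the changed code path
// runs, this line shows up in idea.log for the tester to confirm.
import com.intellij.openapi.diagnostic.Logger

private val LOG: Logger = Logger.getInstance("#io.flutter.sample.verification")

fun changedCodePath() {
    LOG.info("changedCodePath executed; manual test step can be checked off")
    // ...existing logic under test...
}
```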
helin24 left a comment
I think these look helpful, but I'm confused about where some of these are coming from, like the accessibility audit. How did you decide on these categories?
It seems like these would be reasonable things to ask gemini to check on for us, but it also sounds like we aren't intending for this to be instructions to gemini. I'm not sure I would think to come check here if I'm working on something, though. Can you explain a little more in the original comment what you're envisioning here?
PR updated with comments and conversation-- thank you @pq!
This document serves as the authoritative "Runbook for Robots," codifying standard operating procedures into executable prompts for AI agents. It operationalizes "Prompt Engineering as Infrastructure," treating these instructions not as casual suggestions but as critical configuration that ensures deterministic and hermetic development environments.
By strictly adhering to these workflows, agents are grounded in the project's "Sensory Input"—reliable metrics like exit codes, code coverage, and static analysis—rather than operating on assumptions. This approach enforces a "Verify and Harden" loop where every task is validated against rigorous testing suites and lints.