/init: A collection of prompts for having AI scan the current project and generate appropriate rules, feedback/thoughts appreciated #2602
pixlmint started this conversation in Show and tell
Replies: 0 comments
Hi all, I created a few prompts that have an LLM scan the current project directory and generate appropriate rules files, following Cursor's best practices for rules.
This is mostly a proof of concept. It generates two files: `.rules/project.md` and `.rules/testing.md`. The idea is that `project.md` can be included with any interaction, while `testing.md` is specifically for when you're working on tests.

The code below includes both a one-shot prompt and a workflow spanning three steps: first the LLM is tasked with some preliminary analysis, then it creates `project.md`, and finally `testing.md`. The prompts for each of these steps are also available separately, in case something goes wrong with the workflow.

I would love to get y'all's feedback on this. I've tried it with a few projects and it works pretty well, even with smaller local LLMs. But do you think this is usable? Or do you already have a much better way of generating your rules files?
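To make the shape of that three-step chain concrete, here is a minimal sketch. It is not the actual code from the gist; the `llm_complete` helper and the prompt wording are placeholders assumed for illustration.

```lua
-- Sketch of the 3-step chain: analysis, then project.md, then testing.md.
-- llm_complete is a hypothetical helper; wire it up to whatever model you use.
local function llm_complete(prompt)
  error("replace this stub with a call to your LLM")
end

local function generate_rules(tree)
  -- Step 1: preliminary analysis of the project
  local analysis = llm_complete(
    "Analyse this project and summarise its language, framework and key files:\n" .. tree)

  -- Step 2: general rules that can be included with any interaction
  local project_rules = llm_complete(
    "Based on this analysis, write .rules/project.md following Cursor's rules best practices:\n"
      .. analysis)

  -- Step 3: rules that only apply when working on tests
  local testing_rules = llm_complete(
    "Based on the same analysis, write .rules/testing.md describing how tests are structured and run:\n"
      .. analysis)

  return project_rules, testing_rules
end
```

Keeping the analysis as a separate step is what lets the two rules files stay consistent with each other, and it gives smaller local models an easier, narrower task at each stage.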
The full prompts (one-shot prompt, three-step workflow, and the individual step prompts) are in this gist: https://gist.github.com/pixlmint/7b7d1656bcbe9399d2eb245001d33c38
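The prompts in the gist splice a listing of the project tree into a template that asks for "Language/Framework" and "Key files". A rough sketch of that assembly step is below; the `tree` CLI call and the exact wording are assumptions, not the gist's actual template.

```lua
-- Rough sketch of how the one-shot prompt can be assembled.
-- Assumes the `tree` CLI is installed; the gist may gather the file listing differently.
local function project_tree(root)
  local handle = io.popen("tree -L 2 --noreport " .. root)
  local out = handle:read("*a")
  handle:close()
  return out
end

local function build_init_prompt(root)
  return [[
Scan the following project and generate rules files.

Language/Framework: [your analysis]
Key files:

]] .. project_tree(root) .. [[

Write .rules/project.md with general project conventions and
.rules/testing.md with testing conventions.
]]
end

print(build_init_prompt("."))
```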