faire is a Claude plugin marketplace of tested and evaluated software-development plugins that help Claude create high-quality software.
> [!IMPORTANT]
> Anything versioned 1.0 or above is tested and evaluated. Anything 0.x is in active development.
See this section for more details on the testing and evaluation strategy.
Add the faire marketplace to your Claude Code configuration:

```
/plugin marketplace add jack-michaud/faire
```

Then install the faire plugin:

```
/plugin install faire@faire
```

Or browse and install interactively:

```
/plugin
```

Configure your `.claude/settings.json` to automatically add the marketplace:

```json
{
  "extraKnownMarketplaces": {
    "faire": {
      "source": {
        "source": "github",
        "repo": "jack-michaud/faire"
      }
    }
  }
}
```

> [!NOTE]
> "this is a gamechanger, trust me bro"

Creating evaluations to systematically measure the performance of AI systems is how we stay objective about the real impact of AI tools. I'm inspired by people and teams like:
- Cognition (which created Devin) and their eval for Devin
- ARC Prize Foundation
- spences10 (who created svelte-claude-skills with an eval for hooks)
These people and teams are data-driven and transparent about the quality of AI systems.
I will fill this out as I become more opinionated about this.
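To make "evaluation" concrete: at its simplest, an eval is a fixed set of tasks, a pass/fail check per task, and an aggregate score. A minimal sketch in Python — the cases and toy system below are hypothetical illustrations, not faire's actual harness:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    """One task given to the system, with a pass/fail check on its output."""
    prompt: str
    check: Callable[[str], bool]  # True if the output is acceptable

def run_eval(cases: list[EvalCase], system: Callable[[str], str]) -> float:
    """Run every case through the system and return the pass rate."""
    passed = sum(1 for case in cases if case.check(system(case.prompt)))
    return passed / len(cases)

# Hypothetical cases with crude string checks, and a toy stand-in "system".
cases = [
    EvalCase("Write a function that adds two numbers", lambda out: "def" in out),
    EvalCase("Name a Python web framework", lambda out: "flask" in out.lower()),
]
toy_system = lambda p: "def add(a, b): return a + b" if "function" in p else "Flask"

print(run_eval(cases, toy_system))  # → 1.0 for this toy system
```

Real harnesses differ mainly in the richness of the checks (running generated code, scoring against references) and in reporting per-case results rather than a single number, but the shape — tasks in, pass rate out — stays the same.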
I'm currently working on a Python service-writing skill with an eval here.
MIT License - see LICENSE file for details