This plugin is designed to work out-of-the-box with sensible defaults. This document covers tuning for specific environments and use cases.
Hooks live in hooks/hooks.json. Both hooks use prompt-based logic — customization means editing the prompt text.
The default threshold is 7/10. Prompts scoring ≥7 pass through silently.
Lower the threshold (more permissive, fewer interruptions):

Find in `hooks/hooks.json` (UserPromptSubmit prompt):

```
Score ≥7 → approve.
```

Change to:

```
Score ≥6 → approve.
```

Use this if you find the hook interrupting too many prompts you consider clear enough.
Raise the threshold (stricter, more clarifications):

Change the same line to:

```
Score ≥8 → approve.
```

Use this if you frequently find yourself re-running tasks because context was missing from the prompt.
To disable the Prompt Architect but keep the MLOps Guard:
{
"UserPromptSubmit": [],
"PreToolUse": [ ... ]
}To disable both hooks, remove or empty both arrays.
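As a sketch, a hooks.json with both hooks disabled would then look like this (assuming the file's top-level keys are the two event names used in this plugin):

```json
{
  "UserPromptSubmit": [],
  "PreToolUse": []
}
```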
In automated pipelines where no human is present to answer clarifying questions, hooks should not fire. Add this to the top of each hook prompt to enable a bypass:

```
If the environment variable CLAUDE_SKIP_HOOKS is set to "1", output {"decision":"approve"} immediately.
```

Then in your CI environment:

```shell
export CLAUDE_SKIP_HOOKS=1
```

To add more keywords for domain classification, edit the DOMAIN CLASSIFY section of the UserPromptSubmit prompt. Example: adding reinforcement learning and RL to ML_RESEARCH:
• ML_RESEARCH — model, train, fine-tune, dataset, eval, paper, embedding, loss, architecture, rl, reinforcement
Keep keywords short (1-2 words) and lowercase. Longer phrases slow down classification.
By default, the MLOps Guard uses `ask_user`, which pauses execution. To make it log-only (never interrupt), change:

```json
"decision": "ask_user"
```

to:

```json
"decision": "approve"
```

and remove the quality-gate response, keeping only the approve path. This turns the guard into a silent observer, useful for teams that want visibility without interruption.
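As a sketch, the guard's single remaining response path might then be a bare approve object (assuming the hook emits a JSON decision as in the snippets above; the `reason` field here is illustrative):

```json
{
  "decision": "approve",
  "reason": "quality gate logged, not enforced"
}
```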
To add a 5th check (e.g., type hints required), add to the AUDIT section:

```
5. TYPE_HINTS: function definitions include ': ' type annotations or 'def.*->.*:' return type
```

And update the pass-threshold logic accordingly.
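The return-type pattern in check 5 is a plain regular expression, so you can sanity-check it before editing the prompt (a sketch using `grep -E`; it assumes the hook treats the pattern as an ordinary extended regex):

```shell
# An annotated def matches the pattern; an unannotated one does not
echo 'def add(a: int, b: int) -> int:' | grep -Eq 'def.*->.*:' && echo "annotated"
echo 'def add(a, b):' | grep -Eq 'def.*->.*:' || echo "unannotated"
# prints: annotated, then unannotated
```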
Skills are Markdown files. Edit them directly to tune for your stack.
In skills/mlops-standards/SKILL.md and skills/mlops-standards/references/tracking-patterns.md, references default to PyTorch + W&B + MLflow. To switch defaults to JAX + Neptune:
- Find `torch.manual_seed` references → add `jax.random.PRNGKey` equivalents
- Find `wandb.init` references → add `neptune.init` equivalents
- Update the W&B vs. MLflow decision table to include Neptune
In skills/prompt-architect/references/domain-patterns.md, add new patterns following the existing format:
```
### ML7: My Custom Pattern
Signal: [how to recognize it]
Fix: [specific fix]
Example: [before → after]
```

Commands are Markdown files with YAML frontmatter. Common adjustments:
In commands/experiment.md, find:
```
- Tracking backend: W&B (default) / MLflow / both
```
Change the default to match your infrastructure.
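For example, to make MLflow the default instead (an illustrative edit):

```
- Tracking backend: MLflow (default) / W&B / both
```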
Add to the beginning of commands/mlops.md:
```
Default cloud provider: AWS. Use AWS-native services (ECS, ECR, CloudWatch, SageMaker)
unless the user specifies otherwise.
```
To enforce copyright headers on all generated Python files, add to commands/experiment.md Quality Standards section:
```
Every generated .py file must begin with:
# Copyright (c) [year] TechKnowmad AI. All rights reserved.
```
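A repo-side check can verify the header was actually emitted (a sketch; the scratch directory, sample files, and year are illustrative, and the path should point at wherever your generated files land):

```shell
# Create two sample "generated" files in a scratch directory
mkdir -p /tmp/header_check
printf '# Copyright (c) 2025 TechKnowmad AI. All rights reserved.\nimport os\n' > /tmp/header_check/ok.py
printf 'import os\n' > /tmp/header_check/missing.py

# Flag any .py file whose first line is not the copyright header
for f in /tmp/header_check/*.py; do
  head -n 1 "$f" | grep -q '^# Copyright' || echo "missing header: $f"
done
# prints: missing header: /tmp/header_check/missing.py
```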
When you customize this plugin, note the base version in your fork's CHANGELOG so you can track upstream changes that may affect your customizations.