This repository was archived by the owner on Mar 23, 2026. It is now read-only.


# Customization Guide

This plugin is designed to work out-of-the-box with sensible defaults. This document covers tuning for specific environments and use cases.


## Hook Customization

Hooks live in `hooks/hooks.json`. Both hooks use prompt-based logic — customization means editing the prompt text.

### Adjusting Prompt Architect Clarity Threshold

The default threshold is 7/10: prompts scoring ≥7 pass through silently.

**Lower the threshold** (more permissive — fewer interruptions). In `hooks/hooks.json`, find this line in the `UserPromptSubmit` prompt:

```
Score ≥7 → approve.
```

Change it to:

```
Score ≥6 → approve.
```

Use this if you find the hook interrupting too many prompts you consider clear enough.

**Raise the threshold** (stricter — more clarifications):

```
Score ≥8 → approve.
```

Use this if you frequently find yourself needing to re-run tasks after missed context.
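The threshold comparison the prompt describes is just a numeric cutoff; a minimal sketch of the logic (function name and signature are illustrative, not part of the plugin):

```python
def should_approve(score: int, threshold: int = 7) -> bool:
    """Return True when a clarity score meets the approval threshold."""
    return score >= threshold

# Default threshold of 7: a 7/10 prompt passes silently, a 6/10 does not.
print(should_approve(7))               # True
print(should_approve(6))               # False
# Lowering the threshold to 6 makes the same prompt pass:
print(should_approve(6, threshold=6))  # True
```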

### Disabling a Hook Entirely

To disable the Prompt Architect but keep the MLOps Guard, empty the `UserPromptSubmit` array:

```json
{
  "UserPromptSubmit": [],
  "PreToolUse": [ ... ]
}
```

To disable both hooks, remove or empty both arrays.
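For example, a `hooks.json` with both hooks disabled would contain two empty arrays (assuming no other top-level keys in your file):

```json
{
  "UserPromptSubmit": [],
  "PreToolUse": []
}
```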

### CI/CD and Automated Sessions

In automated pipelines, where no human is present to answer clarifying questions, hooks should not fire. To enable a bypass, add this to the top of each hook prompt:

```
If the environment variable CLAUDE_SKIP_HOOKS is set to "1", output {"decision":"approve"} immediately.
```

Then, in your CI environment:

```sh
export CLAUDE_SKIP_HOOKS=1
```
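If your CI system is GitHub Actions, the equivalent is an `env` entry rather than an `export` line (illustrative fragment; the job name is arbitrary — adapt to your pipeline):

```yaml
jobs:
  agent-task:
    env:
      CLAUDE_SKIP_HOOKS: "1"
```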

### Extending Domain Keywords

To add more keywords for domain classification, edit the DOMAIN CLASSIFY section of the `UserPromptSubmit` prompt. Example — adding reinforcement learning and RL to ML_RESEARCH:

```
• ML_RESEARCH — model, train, fine-tune, dataset, eval, paper, embedding, loss, architecture, rl, reinforcement
```

Keep keywords short (1-2 words) and lowercase. Longer phrases slow down classification.

### MLOps Guard — Warn-Only Mode

By default, the MLOps Guard uses `ask_user`, which pauses execution. To make it log-only (never interrupt), change:

```
"decision": "ask_user"
```

to:

```
"decision": "approve"
```

and remove the quality-gate response, keeping only the approve path. This turns the guard into a silent observer — useful for teams that want visibility without interruption.

### Adding Custom MLOps Checks

To add a fifth check (e.g., requiring type hints), append it to the AUDIT section:

```
5. TYPE_HINTS: function definitions include ': ' type annotations or 'def.*->.*:' return type
```

Then update the pass-threshold logic accordingly.
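The two patterns named in the check can be expressed as regular expressions; a rough sketch of what such a check might do (the regexes are illustrative, not the guard's actual logic):

```python
import re

# Parameter annotations: looks for ": " inside a def, e.g. "def f(x: int".
ANNOTATED_PARAM = re.compile(r"def \w+\(.*: ")
# Return-type annotation: "def f(...) -> int:".
RETURN_TYPE = re.compile(r"def .*->.*:")

def has_type_hints(source: str) -> bool:
    """True if any function definition carries a parameter or return annotation."""
    return bool(ANNOTATED_PARAM.search(source) or RETURN_TYPE.search(source))

print(has_type_hints("def f(x: int) -> int:\n    return x"))  # True
print(has_type_hints("def f(x):\n    return x"))              # False
```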


## Skill Customization

Skills are Markdown files. Edit them directly to tune for your stack.

### Changing Default Framework References

In `skills/mlops-standards/SKILL.md` and `skills/mlops-standards/references/tracking-patterns.md`, references default to PyTorch + W&B + MLflow. To switch the defaults to JAX + Neptune:

1. Find `torch.manual_seed` references → add `jax.random.PRNGKey` equivalents
2. Find `wandb.init` references → add `neptune.init` equivalents
3. Update the W&B vs. MLflow decision table to include Neptune

### Adding Domain-Specific Patterns to Prompt Architect

In `skills/prompt-architect/references/domain-patterns.md`, add new patterns following the existing format:

```markdown
### ML7: My Custom Pattern
Signal: [how to recognize it]
Fix: [specific fix]
Example: [before → after]
```
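A filled-in entry following that template might read (the pattern itself is hypothetical, for illustration only):

```markdown
### ML7: Unstated Success Metric
Signal: the prompt asks to "improve the model" without naming a metric
Fix: ask which metric and target value define improvement
Example: "make the model better" → "reduce validation loss below 0.25"
```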

## Command Customization

Commands are Markdown files with YAML frontmatter. Common adjustments:

### Change Default Tracking Backend

In `commands/experiment.md`, find:

```
- Tracking backend: W&B (default) / MLflow / both
```

Change the default to match your infrastructure.

### Restrict `/mlops` Output to Your Cloud Provider

Add to the beginning of `commands/mlops.md`:

```
Default cloud provider: AWS. Use AWS-native services (ECS, ECR, CloudWatch, SageMaker)
unless the user specifies otherwise.
```

### Add Mandatory Headers to Generated Code

To enforce copyright headers on all generated Python files, add this to the Quality Standards section of `commands/experiment.md`:

```
Every generated .py file must begin with:
# Copyright (c) [year] TechKnowmad AI. All rights reserved.
```
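A simple pre-commit-style check for that header could look like this (hypothetical helper, not part of the plugin; the year in the example is arbitrary):

```python
def has_copyright_header(source: str, owner: str = "TechKnowmad AI") -> bool:
    """Check that a Python file's first line is the required copyright notice."""
    first_line = source.splitlines()[0] if source else ""
    return first_line.startswith("# Copyright (c)") and owner in first_line

print(has_copyright_header(
    "# Copyright (c) 2025 TechKnowmad AI. All rights reserved.\nx = 1"))  # True
print(has_copyright_header("x = 1"))  # False
```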

## Version Pinning

When you customize this plugin, note the base version in your fork's CHANGELOG so you can track upstream changes that may affect your customizations.