
Add comprehensive AI agent guidance in .github/copilot-instructions.md#3

Open
Copilot wants to merge 9 commits into main from copilot/update-copilot-instructions

Conversation


Copilot AI commented Mar 1, 2026

Description

Enables AI coding agents to work autonomously by documenting architecture, workflows, conventions, and operational guidelines in a structured reference document.

Type of Change

  • 🐛 Bug fix
  • ✨ New feature
  • 📝 Documentation update
  • 🤖 Training data addition
  • 🔧 Refactor / cleanup

Changes Made

  • Created .github/copilot-instructions.md (467 lines, 2,089 words)
  • Architecture: Directory structure, component breakdown (agent, pipeline, collectors, formatters, SDKs)
  • Development workflows: Setup, running (CLI/server), data pipeline, fine-tuning (GPU/CPU/cloud), testing, linting, Docker
  • Code conventions: Python style guide, naming patterns, error handling, API design, security practices
  • CI/CD: GitHub Actions workflows, deployment targets (Fly.io, Netlify, Spaces)
  • Security: Ethical guidelines, input validation, API security, vulnerability management
  • AI-specific guidance: Use cases, debugging tips, common pitfalls, code quality checklist
  • Reference: Key files, configuration locations, external resources, 5-week learning path

Testing

  • Verified file structure and section completeness
  • Validated code examples and command syntax
  • Confirmed alignment with README.md and CONTRIBUTING.md
  • Verified no secrets in committed files

Checklist

  • My code follows the existing style
  • I have NOT committed .env or any API keys
  • All training data is from public, legally sourced knowledge bases
  • The agent's ethical guardrails remain intact
Original prompt

prompt all necessary agents to keep all repos and projects flowing self sufficcienty

[Chronological Review: The conversation began with a user request to analyze the codebase for the Hancock repository to generate or update the `.github/copilot-instructions.md` file. The user specified the need to focus on the architecture, developer workflows, project-specific conventions, and integration points. The user also provided context about the repository, including its name, owner, current branch, and an active pull request. Following this, the user requested to prompt all necessary agents to ensure all repositories and projects flow self-sufficiently.]

[Intent Mapping:

  1. "Analyze this codebase to generate or update .github/copilot-instructions.md for guiding AI coding agents." - The user wants to create or update documentation for AI agents to work effectively with the codebase.
  2. "prompt all necessary agents to keep all repos and projects flowing self sufficcienty" - The user aims to ensure ongoing productivity and self-sufficiency across projects.]

[Technical Inventory: (empty)]

[Code Archaeology: No specific files or code changes were discussed in detail, but the user indicated the need to reference existing AI conventions from various markdown files in the repository.]

[Progress Assessment: The user has initiated the process of updating documentation but has not yet completed it. The request to prompt agents indicates a desire for ongoing support.]

[Context Validation: The user’s goals for documentation and project management are clear and focused on enhancing productivity and clarity for AI agents.]

[Recent Commands Analysis:

  • Last Agent Commands: The user requested an analysis of the codebase and to prompt agents for self-sufficiency.
  • Tool Results Summary: No specific tool results were returned as the conversation focused on user requests rather than tool executions.
  • Pre-Summary State: The agent was preparing to analyze the codebase and generate documentation when the token budget was exceeded.
  • Operation Context: The commands were executed to align with the user’s goal of improving AI agent productivity and ensuring project continuity.]
  1. Conversation Overview:
  • Primary Objectives: (1) "Analyze this codebase to generate or update `.github/copilot-instructions.md` for guiding AI coding agents." (2) "prompt all necessary agents to keep all repos and projects flowing self sufficcienty."
  • Session Context: The conversation focused on enhancing AI agent productivity through documentation and ensuring project continuity.
  • User Intent Evolution: The user initially sought to improve documentation for AI agents and later emphasized the need for self-sufficiency across projects.
  2. Technical Foundation:
  3. Codebase Status:
  • No specific files or code changes were discussed in detail, but the user indicated the need to reference existing AI conventions from various markdown files in the repository.
  4. Problem Resolution:
  • Issues Encountered: No specific technical problems were mentioned.
  • Solutions Implemented: The user is in the process of generating documentation to guide AI agents.
  • Debugging Context: No ongoing troubleshooting efforts were noted.
  • Lessons Learned: The importance of clear documentation for AI productivity was highlighted.
  5. Progress Tracking:
  • Completed Tasks: None yet; the user has initiated the documentation update process.
  • Partially Complete Work: The request for documentation is in progress.
  • Validated Outcomes: No features or code have been confirmed working through testing yet.
  6. Active Work State:
  • Current Focus: The user was focused on analyzing the codebase and generating documentation for AI agents.
  • Recent Context: The user requested an analysis of the codebase and to prompt agents for self-sufficiency.
  • Working Code: No specific code snippets were discussed recently.
  • Immediate Context: The user was preparing to update documentation when the token budget was exceeded.
  7. Recent Operations:
  • Last Agent Commands: The user requested an analysis of the codebase and to prompt agents for self-sufficiency.
  • Tool Results Summary: No specific tool results were returned as the conversation focused on user requests.
  • Pre-Summary State: The agent was preparing to analyze the codebase and generate documentation.
  • Operation Context: The commands were executed to align with the user's goal of improving AI agent productivity and ensuring project continuity.
  8. Continuation Plan:
  • [Pen...

Created from VS Code.



juanaquas and others added 8 commits March 1, 2026 00:40
This workflow automates the building and testing of a Python package using Conda on push events. It sets up Python, installs dependencies, lints the code with flake8, and runs tests with pytest.

Signed-off-by: juanaquas <juanaquas@cyberviserai.com>
Co-authored-by: juanaquas <264702634+juanaquas@users.noreply.github.com>
[WIP] Fix issues with the settings configuration
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: juanaquas <juanaquas@cyberviserai.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: juanaquas <juanaquas@cyberviserai.com>
…dance

Co-authored-by: juanaquas <264702634+juanaquas@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Update .github/copilot-instructions.md for AI agents" to "Add comprehensive AI agent guidance in .github/copilot-instructions.md" on Mar 1, 2026
juanaquas marked this pull request as ready for review March 1, 2026 17:50
Owner

@copilot git apply --3way <<'EOF'
diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md
new file mode 100644
index 0000000..b5bfb0d
--- /dev/null
+++ b/.github/copilot-instructions.md
@@ -0,0 +1,467 @@
+# Hancock AI Agent — Copilot Instructions
+
+This document provides comprehensive guidance for AI coding agents working with the Hancock cybersecurity AI agent repository.
+
+## 🎯 Project Overview
+
+Hancock is CyberViser's AI-powered cybersecurity agent, fine-tuned on Mistral 7B, specializing in penetration testing and SOC operations. The agent operates through multiple specialist modes and exposes a REST API for cybersecurity automation.
+
+### Core Capabilities
+- Pentest Specialist: Reconnaissance, exploitation, CVE analysis, reporting (PTES methodology)
+- SOC Analyst: Alert triage, SIEM queries, incident response (PICERL framework), threat hunting
+- Code Generation: Security code (YARA, Sigma, KQL, SPL, Python, Bash)
+- CISO Advisory: Compliance, risk reporting, board summaries
+- IOC Enrichment: Threat intelligence for IPs, domains, URLs, hashes, emails
+
+### Technology Stack
+- Language: Python 3.10+
+- AI Backend: NVIDIA NIM (Mistral 7B Instruct, Qwen 2.5 Coder 32B)
+- Fine-tuning: LoRA on Mistral 7B using Unsloth
+- API Framework: Flask (REST API)
+- Training Data Sources: MITRE ATT&CK, NVD CVE, CISA KEV, Atomic Red Team, GitHub Security Advisories
+- Deployment: Docker, Fly.io, Netlify (docs), Hugging Face Spaces
+
+## 🏗️ Architecture
+
+### Directory Structure
+```
+Hancock/
+├── hancock_agent.py           # Main agent (CLI + REST API server)
+├── hancock_pipeline.py        # Master data collection pipeline
+├── hancock_finetune*.py       # Fine-tuning scripts (GPU/CPU/Cloud)
+├── hancock_constants.py       # Shared constants
+├── collectors/                # Data collection modules
+│   ├── mitre_collector.py     # MITRE ATT&CK TTPs
+│   ├── nvd_collector.py       # NVD/CVE vulnerabilities
+│   ├── cisa_kev_collector.py  # CISA Known Exploited Vulns
+│   ├── atomic_collector.py    # Atomic Red Team tests
+│   ├── ghsa_collector.py      # GitHub Security Advisories
+│   ├── pentest_kb.py          # Pentest knowledge base Q&A
+│   └── soc_kb.py              # SOC knowledge base Q&A
+├── formatter/                 # Training data formatters
+│   ├── to_mistral_jsonl.py    # v1 formatter
+│   ├── to_mistral_jsonl_v2.py # v2 formatter
+│   └── formatter_v3.py        # v3 formatter (all sources)
+├── clients/                   # SDK clients
+│   ├── python/                # Python SDK + CLI
+│   └── nodejs/                # Node.js SDK + CLI
+├── tests/                     # Test suite
+│   ├── test_hancock_api.py    # API endpoint tests
+│   └── test_sdk_client.py     # SDK integration tests
+├── data/                      # Training datasets (generated)
+├── docs/                      # Documentation + website
+├── .github/workflows/         # CI/CD pipelines
+└── hancock-cpu-adapter/       # Pre-trained CPU adapter
+```
+
+### Key Components
+
+#### 1. Main Agent (hancock_agent.py)
+- CLI Mode: Interactive chat with conversation history
+- Server Mode: REST API server (port 5000) with 13+ endpoints
+- Personas: Pentest, SOC, Auto, Code, CISO, Sigma, YARA, IOC
+- Models: Supports NVIDIA NIM model switching
+
+#### 2. Data Pipeline (hancock_pipeline.py)
+- Phase 1: MITRE ATT&CK + NVD + Pentest KB
+- Phase 2: + SOC KB (produces hancock_v2.jsonl)
+- Phase 3: + CISA KEV + Atomic Red Team + GHSA (produces hancock_v3.jsonl)
+- Modes: --phase all, --phase 3, --kb-only (offline)
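
The pipeline modes above can be sketched as an argparse setup (editor's illustration, not part of the PR diff; only the flag names `--phase`, `--kb-only`, and `--skip-nvd` come from this document, the choices and defaults are assumptions):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Minimal sketch of the pipeline CLI flags listed above."""
    parser = argparse.ArgumentParser(description="Hancock data pipeline (sketch)")
    parser.add_argument("--phase", choices=["1", "2", "3", "all"], default="all",
                        help="Which collection phase to run")
    parser.add_argument("--kb-only", action="store_true",
                        help="Offline mode: build only the knowledge-base Q&A")
    parser.add_argument("--skip-nvd", action="store_true",
                        help="Skip the rate-limited NVD collector")
    return parser

args = build_parser().parse_args(["--phase", "3", "--skip-nvd"])
print(args.phase, args.kb_only, args.skip_nvd)  # → 3 False True
```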
+
+#### 3. Fine-tuning Scripts
+- GPU: hancock_finetune_gpu.py (local GPU)
+- Cloud: train_modal.py (Modal.com), Hancock_Universal_Finetune.ipynb (Colab/Kaggle)
+- CPU: hancock_cpu_finetune.py (TinyLlama-1.1B, no GPU required)
+- Method: LoRA adapters using Unsloth + TRL
+
+#### 4. Client SDKs
+- Python SDK: clients/python/hancock_client.py + CLI
+- Node.js SDK: clients/nodejs/hancock.js + CLI
+- Both use OpenAI-compatible API interface
+
+## 🛠️ Development Workflows
+
+### Initial Setup
+```bash
+# Clone and setup environment
+git clone https://github.com/cyberviser/Hancock.git
+cd Hancock
+python3 -m venv .venv
+source .venv/bin/activate  # or .venv\Scripts\activate on Windows
+pip install -r requirements.txt
+
+# Configure API key
+cp .env.example .env
+# Edit .env and add: NVIDIA_API_KEY="nvapi-..."
+```
+
+### Running the Agent
+```bash
+# Interactive CLI
+python hancock_agent.py
+
+# REST API server
+python hancock_agent.py --server --port 5000
+
+# With custom model
+python hancock_agent.py --model "mistralai/mistral-7b-instruct-v0.3"
+```
+
+### Building Training Data
+```bash
+# Full v3 dataset (requires internet)
+python hancock_pipeline.py --phase 3
+
+# Offline mode (KB only)
+python hancock_pipeline.py --kb-only
+
+# Skip rate-limited NVD
+python hancock_pipeline.py --skip-nvd
+```
+
+### Fine-tuning
+```bash
+# GPU fine-tuning (requires CUDA)
+python hancock_finetune_v3.py --steps 300 --export-gguf
+
+# CPU fine-tuning (TinyLlama-1.1B)
+python hancock_cpu_finetune.py --debug          # 10 steps test
+python hancock_cpu_finetune.py --max-steps 500  # full run
+
+# Cloud fine-tuning (Modal.com)
+modal run train_modal.py
+```
+
+### Testing
+```bash
+# Run all tests
+make test
+
+# Run with coverage
+make test-cov
+
+# Run specific test
+pytest tests/test_hancock_api.py -v
+
+# Manual API testing
+python hancock_agent.py --server &
+curl http://localhost:5000/health
+```
+
+### Linting
+```bash
+# Run flake8 (critical errors only)
+make lint
+
+# Or directly
+flake8 . --count --select=E9,F63,F7,F82 --exclude=.venv,data,docs --show-source --statistics
+```
+
+### Docker
+```bash
+# Build image
+docker build -t cyberviser/hancock:latest .
+
+# Run with docker-compose
+docker-compose up -d
+
+# View logs
+docker-compose logs -f hancock
+```
+
+## 📝 Code Conventions
+
+### Python Style
+- Version: Python 3.10+
+- Linter: flake8 (critical errors only: E9, F63, F7, F82)
+- No strict style enforcement: Use common sense and follow existing patterns
+- Type hints: Optional but encouraged for public APIs
+- Docstrings: Required for public functions/classes
+
+### Naming Conventions
+- Files: snake_case.py
+- Functions/Variables: snake_case
+- Classes: PascalCase
+- Constants: UPPER_SNAKE_CASE
+- Private: _leading_underscore
+
+### Error Handling
+- Use explicit error messages for user-facing errors
+- Log errors with context (timestamps, request IDs for API)
+- Fail gracefully with helpful guidance
+- Example:

+```python
+if not api_key:
+    print("[Hancock] ❌ NVIDIA_API_KEY not set")
+    print("[Hancock] Get one free at: https://build.nvidia.com")
+    sys.exit(1)
+```

+### API Design
+- Follow REST conventions (GET for reads, POST for writes)
+- Return JSON with consistent structure:

+```json
+{
+  "success": true,
+  "data": {...},
+  "error": null
+}
+```

+- Use HTTP status codes correctly (200, 400, 401, 500)
+- Include /health and /metrics endpoints
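
The envelope convention above can be captured in a tiny helper (editor's illustration, not part of the PR diff; `envelope` is a hypothetical name, not a function from `hancock_agent.py`):

```python
def envelope(data=None, error=None):
    """Build the documented {success, data, error} response envelope.

    Hypothetical helper that encodes the convention: success is True exactly
    when no error message is present.
    """
    return {"success": error is None, "data": data, "error": error}

# A successful response carries data and a null error:
print(envelope({"answer": "patched"}))
# A failure carries the error string and success=False:
print(envelope(error="NVIDIA_API_KEY not set"))
```

Returning the same three keys from every endpoint keeps client-side parsing uniform across the 13+ routes.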
+
+### Security Practices
+- Never commit API keys: Use .env files (in .gitignore)
+- Validate inputs: Sanitize user queries before LLM calls
+- Authorization checks: Always confirm scope for offensive techniques
+- Rate limiting: Implement for public endpoints
+- HMAC verification: For webhook endpoints (see /v1/webhook)
+
+## 🧪 Testing Strategy
+
+### Test Organization
+- Unit tests: Test individual collectors, formatters
+- Integration tests: Test API endpoints (test_hancock_api.py)
+- SDK tests: Test client libraries (test_sdk_client.py)
+- Manual tests: CLI interaction, API curl commands
+
+### Writing Tests
+```python
+# Follow existing pattern in tests/
+import pytest
+from hancock_agent import app
+
+@pytest.fixture
+def client():

+    with app.test_client() as client:
+        yield client
+
+
+def test_endpoint(client):
+    response = client.post('/v1/ask', json={'question': 'test'})
+    assert response.status_code == 200
+    assert 'answer' in response.json
+```

+### Test Data
+- Use small, deterministic samples
+- Mock external API calls (MITRE, NVD, etc.)
+- Store fixtures in tests/fixtures/ if needed
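
The "mock external API calls" point above can be sketched with `unittest.mock` (editor's illustration, not part of the PR diff; the `fetch_cves` interface is hypothetical, not Hancock's real collector API):

```python
from unittest.mock import MagicMock

def summarize_cves(client):
    """Count and list CVE IDs returned by an NVD-style client.

    Hypothetical helper: the point is the mocking pattern, which avoids
    hitting the rate-limited NVD API from tests.
    """
    items = client.fetch_cves()
    return {"count": len(items), "ids": [c["id"] for c in items]}

def test_summarize_cves_with_mock():
    # A MagicMock stands in for the live collector; the return value is
    # a small, deterministic sample as recommended above.
    fake = MagicMock()
    fake.fetch_cves.return_value = [
        {"id": "CVE-2024-0001"},
        {"id": "CVE-2024-0002"},
    ]
    result = summarize_cves(fake)
    assert result == {"count": 2, "ids": ["CVE-2024-0001", "CVE-2024-0002"]}

test_summarize_cves_with_mock()
```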
+
+## 🚀 CI/CD & Deployment
+
+### GitHub Actions Workflows
+
+#### 1. python-package-conda.yml
+- Trigger: Every push
+- Purpose: Lint and test with conda environment
+- Steps: Setup Python 3.10, install deps via conda, run flake8, run pytest
+
+#### 2. codeql.yml
+- Trigger: Push to main, PRs, schedule
+- Purpose: CodeQL security scanning
+- Languages: Python
+
+#### 3. finetune.yml
+- Trigger: Manual (workflow_dispatch)
+- Purpose: Cloud fine-tuning job
+- Platform: Likely triggers Modal.com or similar
+
+#### 4. deploy.yml
+- Trigger: Push to main (or manual)
+- Purpose: Deploy to production (Fly.io, Netlify, Spaces)
+
+### Deployment Targets
+
+#### Fly.io (API Server)
+```bash
+# One-time setup
+flyctl launch --config fly.toml
+
+# Deploy updates
+flyctl deploy
+# or: make fly-deploy
+```
+
+#### Netlify (Documentation)
+- Config: netlify.toml
+- Source: docs/index.html
+- Auto-deploy: Push to main
+
+#### Hugging Face Spaces (Demo)
+- App: spaces_app.py
+- README: spaces_README.md
+- Deploy: Push to HF repo or use gradio deploy
+
+#### Docker Hub
+```bash
+docker build -t cyberviser/hancock:latest .
+docker push cyberviser/hancock:latest
+```
+
+## 🔐 Security Considerations
+
+### Ethical Guidelines
+- Authorized scope only: Agent always confirms authorization before suggesting offensive techniques
+- Responsible disclosure: Recommend responsible disclosure for vulnerabilities
+- No malicious use: Do not assist with unauthorized hacking, malware creation, or illegal activities
+- Legal compliance: All training data from public, legally sourced cybersecurity knowledge bases
+
+### Input Validation
+- Sanitize user inputs before passing to LLM
+- Limit query length (e.g., max 4000 chars)
+- Check for malicious patterns (SQL injection attempts, XSS)
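
The length limit and pattern check above can be sketched as a pre-LLM gate (editor's illustration, not part of the PR diff; the pattern list is a placeholder, not Hancock's actual filter, and real validation should prefer allow-lists):

```python
import re

MAX_QUERY_CHARS = 4000  # the limit suggested above
SUSPICIOUS = re.compile(r"(<script\b|'\s*OR\s+1=1|UNION\s+SELECT)", re.IGNORECASE)

def validate_query(q: str) -> str:
    """Reject over-long or obviously malicious queries before the LLM call."""
    if len(q) > MAX_QUERY_CHARS:
        raise ValueError(f"query exceeds {MAX_QUERY_CHARS} characters")
    if SUSPICIOUS.search(q):
        raise ValueError("query contains a disallowed pattern")
    return q.strip()
```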
+
+### API Security
+- API Key: Required via X-API-Key header or env var
+- HMAC Signature: Verify webhook signatures (see /v1/webhook)
+- Rate Limiting: 100 requests/minute per IP (implement in production)
+- CORS: Configure for frontend integration
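
The HMAC-signature check mentioned for /v1/webhook can be sketched with the stdlib (editor's illustration, not part of the PR diff; the SHA-256 digest and hex encoding are assumptions, not read from `hancock_agent.py`):

```python
import hmac
import hashlib

def verify_webhook(secret: bytes, body: bytes, signature_hex: str) -> bool:
    """Constant-time HMAC-SHA256 verification of a webhook payload.

    compare_digest avoids timing side channels that a plain == check
    would leak to an attacker probing signatures byte by byte.
    """
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```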
+
+### Data Privacy
+- No PII in training data: Scrub sensitive data from collected datasets
+- No conversation logging: Don't persist user queries in production
+- Secure storage: Use encrypted storage for API keys (e.g., Fly.io secrets)
+
+### Vulnerability Management
+- Dependency scanning: Use pip-audit or Dependabot
+- CVE monitoring: Track vulnerabilities in dependencies
+- Security updates: Patch critical vulnerabilities within 48 hours
+- Disclosure: Report security issues to cyberviser@proton.me
+
+## 🤖 AI Agent-Specific Instructions
+
+### When to Use Hancock
+
+#### Perfect Use Cases
+1. Security alert triage: "Analyze this Splunk alert for potential C2 communication"
+2. Vulnerability analysis: "Explain CVE-2024-1234 and suggest mitigations"
+3. Detection engineering: "Write a Sigma rule to detect LSASS memory dumps"
+4. Threat hunting: "Generate a KQL query to hunt for Kerberoasting"
+5. Incident response: "Provide a PICERL playbook for ransomware"
+6. Pentest reconnaissance: "Suggest nmap commands for service enumeration"
+7. CISO reporting: "Summarize top 5 risks for the board"
+
+#### Not Suitable For
+- General programming tasks (use specialized code assistants)
+- Non-security infrastructure/DevOps (unless security-related)
+- Business logic or frontend development
+- General IT support
+
+### Working with the Codebase
+
+#### Adding New Features
+1. New API endpoint: Add to hancock_agent.py following existing pattern
+2. New data source: Create collector in collectors/, update pipeline
+3. New persona: Add system prompt and mode in hancock_agent.py
+4. New SDK method: Add to both Python and Node.js clients
+
+#### Modifying Training Data
+1. Edit the relevant collector in collectors/
+2. Update formatter if structure changes
+3. Regenerate dataset: python hancock_pipeline.py --phase 3
+4. Fine-tune with new data: python hancock_finetune_v3.py
+
+#### Debugging Tips
+- API errors: Check Flask logs, verify JSON structure
+- LLM errors: Inspect response.choices[0].message.content
+- Training errors: Check dataset format (JSONL, Mistral instruction format)
+- Collector errors: Add --debug flag, check rate limits
+
+### Code Quality Checklist
+
+Before submitting changes:
+- [ ] Code follows existing conventions
+- [ ] No API keys or secrets in code
+- [ ] Linter passes: make lint
+- [ ] Tests pass: make test
+- [ ] Manual testing completed
+- [ ] Documentation updated (if API/workflow changed)
+- [ ] Commit uses conventional format: feat:, fix:, docs:, refactor:
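
The commit-format item can be checked mechanically (editor's illustration, not part of the PR diff; only the four prefixes named above are accepted here, whereas a real conventional-commits linter recognizes more types such as chore, test, ci):

```python
import re

# Matches e.g. "feat: ...", "fix(api): ..."; scope in parentheses is optional.
COMMIT_RE = re.compile(r"^(feat|fix|docs|refactor)(\([\w-]+\))?: .+")

def is_conventional(subject: str) -> bool:
    """Return True when a commit subject uses one of the listed prefixes."""
    return bool(COMMIT_RE.match(subject))
```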
+
+### Common Pitfalls
+
+1. Rate limits: NVD API requires key for >10 req/min
+2. Memory usage: Fine-tuning requires 12GB+ GPU RAM
+3. NVIDIA API key: Required for inference, get free at build.nvidia.com
+4. Dataset format: Must be Mistral instruction format (<s>[INST] ... [/INST] ... </s>)
+5. Model switching: Some models require different prompt formats
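
Pitfall 4's instruction template can be made concrete with a small serializer (editor's illustration, not part of the PR diff; the `<s>[INST] ... [/INST] ... </s>` template comes from the list above, while the JSONL field name "text" is an assumption about the dataset schema):

```python
import json

def to_mistral_sample(instruction: str, response: str) -> str:
    """Serialize one training pair as a JSONL line in Mistral instruction format."""
    text = f"<s>[INST] {instruction} [/INST] {response} </s>"
    return json.dumps({"text": text})

print(to_mistral_sample("What is T1003?", "OS credential dumping."))
```

Each call yields one line of the JSONL dataset; a malformed template here is exactly the kind of silent fine-tuning failure the pitfall warns about.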
+
+## 📚 Key Files Reference
+
+### Must-Read Files
+1. README.md — Project overview, quick start, API reference
+2. CONTRIBUTING.md — Contribution guidelines
+3. hancock_agent.py — Main agent implementation
+4. hancock_pipeline.py — Data collection pipeline
+5. tests/test_hancock_api.py — API test examples
+
+### Configuration Files
+- .env.example — Environment variables template
+- requirements.txt — Python dependencies
+- requirements-dev.txt — Dev dependencies (pytest, flake8, coverage)
+- pyproject.toml — Python package metadata
+- Dockerfile — Docker image definition
+- docker-compose.yml — Multi-container setup
+- fly.toml — Fly.io deployment config
+- netlify.toml — Netlify deployment config
+- Makefile — Common commands
+
+### Documentation Files
+- BUSINESS_PROPOSAL.md — Business case and monetization
+- LAUNCH.md — Launch strategy
+- COMPETITOR_ANALYSIS.md — Market analysis
+- CHANGELOG.md — Version history
+- SECURITY.md — Security policy and disclosure
+- LICENSE — Proprietary license terms
+
+## 🔄 Continuous Improvement
+
+### Dataset Updates
+- Frequency: Quarterly or when major CVEs/tactics emerge
+- Process: Run python hancock_pipeline.py --phase 3, review samples, fine-tune
+- Quality: Manually review 100+ samples before training
+
+### Model Updates
+- Base model: Monitor Mistral/Meta releases for newer versions
+- Fine-tuning: Re-train when dataset grows by 20%+ or quality improves
+- Evaluation: Test on held-out samples, compare to previous version
+
+### API Evolution
+- Versioning: Use /v1/ prefix, maintain backward compatibility
+- Deprecation: 90-day notice for breaking changes
+- Documentation: Update OpenAPI spec (docs/openapi.yaml)
+
+## 🆘 Getting Help
+
+### Internal Resources
+- Issues: Search existing GitHub issues before creating new ones
+- Discussions: Use GitHub Discussions for questions
+- Code comments: Check inline comments in complex functions
+
+### External Resources
+- MITRE ATT&CK: https://attack.mitre.org
+- NVD/CVE: https://nvd.nist.gov
+- NVIDIA NIM: https://build.nvidia.com
+- Unsloth: https://github.com/unslothai/unsloth
+- Mistral AI: https://docs.mistral.ai
+
+### Contact
+- Email: cyberviser@proton.me
+- GitHub: https://github.com/cyberviser
+- Website: https://cyberviser.netlify.app
+
+---
+
+## 🎓 Learning Path for New AI Agents
+
+1. Week 1: Read README.md, explore hancock_agent.py, run CLI mode
+2. Week 2: Understand data pipeline, run hancock_pipeline.py --kb-only
+3. Week 3: Study API endpoints, test with curl, read test files
+4. Week 4: Run fine-tuning on Colab, understand LoRA adapters
+5. Week 5: Contribute: add data samples, improve collectors, write tests
+
+---
+
+Last Updated: 2025-03-01
+Maintained By: CyberViser Team
+Version: 1.0.0
diff --git a/clients/python/hancock_client.py b/clients/python/hancock_client.py
index 83a209d..4ae97e6 100644
--- a/clients/python/hancock_client.py
+++ b/clients/python/hancock_client.py
@@ -19,7 +19,36 @@

import os
from typing import Optional
-from openai import OpenAI
+
+try:
+    from hancock_constants import OPENAI_IMPORT_ERROR_MSG, require_openai
+except ImportError:
+    # Fallback definitions when hancock_constants is not available (e.g., in installed package).
+    OPENAI_IMPORT_ERROR_MSG = (
+        "The 'openai' package is required to use HancockClient. "
+        "Install it with 'pip install openai' and ensure NVIDIA_API_KEY is set."
+    )
+
+    def require_openai(client_cls: Optional[object] = None) -> None:
+        """
+        Ensure that the optional 'openai' dependency is available.
+
+        This mirrors the behavior of the helper from hancock_constants: it defers
+        the ImportError until runtime (e.g., in the constructor) and provides a
+        clear error message when the dependency is missing.
+
+        The optional client_cls argument is accepted for compatibility with call
+        sites that invoke require_openai(OpenAI); if not provided, the module-level
+        OpenAI symbol is used.
+        """
+        client = client_cls if client_cls is not None else OpenAI  # type: ignore[name-defined]
+        if client is None:
+            raise ImportError(OPENAI_IMPORT_ERROR_MSG)
+
+
+try:
+    from openai import OpenAI
+except ImportError:  # allow import; require_openai() enforces dependency in constructor
+    OpenAI = None  # type: ignore
 
 # ── Models ──────────────────────────────────────────────────────────────────
 
 MODELS: dict[str, str] = {
@@ -83,6 +112,7 @@ def __init__(
         coder_model: str = "qwen-coder",
         base_url: str = "https://integrate.api.nvidia.com/v1",
     ):
+        require_openai(OpenAI)
         key = api_key or os.environ.get("NVIDIA_API_KEY")
         if not key:
             raise ValueError(

diff --git a/hancock_agent.py b/hancock_agent.py
index 6561f71..49e97e0 100644
--- a/hancock_agent.py
+++ b/hancock_agent.py
@@ -28,17 +28,19 @@
export NVIDIA_API_KEY="nvapi-..."
or pass --api-key "nvapi-..."
"""
+from __future__ import annotations
 import argparse
 import hmac
 import json
 import os
 import sys
 import readline  # noqa: F401 — enables arrow-key history in CLI
+from hancock_constants import OPENAI_IMPORT_ERROR_MSG, require_openai
 
 try:
     from openai import OpenAI
-except ImportError:
-    sys.exit("Run: .venv/bin/pip install openai flask")
+except ImportError:  # allow import without OpenAI; client factories enforce requirement at runtime
+    OpenAI = None  # type: ignore
 
 # ── Hancock identity ──────────────────────────────────────────────────────────
 
 PENTEST_SYSTEM = """You are Hancock, an elite penetration tester and offensive security specialist built by CyberViser.
@@ -246,21 +248,28 @@
 
 def make_ollama_client() -> OpenAI:
     """Returns an OpenAI-compatible client pointed at the local Ollama server."""
+    require_openai(OpenAI)
     return OpenAI(base_url=OLLAMA_BASE_URL, api_key="ollama")
 
 def make_client(api_key: str) -> OpenAI:
     """Returns an OpenAI-compatible client pointed at NVIDIA NIM (legacy)."""
+    require_openai(OpenAI)
     return OpenAI(base_url=NIM_BASE_URL, api_key=api_key)
 
 def make_openai_client() -> OpenAI | None:
-    """Returns an OpenAI client if credentials are available, else None."""
+    """Returns an OpenAI client if credentials are available, else None.
+
+    Unlike the Ollama/NIM factories, OpenAI is a best-effort fallback, so the
+    absence of the dependency simply disables this path.
+    """
+    if OpenAI is None:
+        return None
     key = os.getenv("OPENAI_API_KEY", "")
-    org = os.getenv("OPENAI_ORG_ID", "")
     if not key or key.startswith("sk-your"):
         return None
-    return OpenAI(api_key=key, organization=org or None)
+    return OpenAI(api_key=key, organization=os.getenv("OPENAI_ORG_ID") or None)
 
 def chat(client: OpenAI, history: list[dict], model: str, stream: bool = True,
diff --git a/hancock_constants.py b/hancock_constants.py
new file mode 100644
index 0000000..5cd249a
--- /dev/null
+++ b/hancock_constants.py
@@ -0,0 +1,9 @@
+"""Shared constants for Hancock modules."""
+
+OPENAI_IMPORT_ERROR_MSG = "OpenAI client not installed. Run: pip install openai"
+
+
+def require_openai(openai_cls):
+    """Raise ImportError when the OpenAI dependency is missing."""
+    if openai_cls is None:
+        raise ImportError(OPENAI_IMPORT_ERROR_MSG)

EOF

Author

Copilot AI commented Mar 2, 2026

@copilot git apply --3way <<'EOF'
diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md
new file mode 100644...

All changes from the patch are already applied in the current branch. The branch contains:

  • .github/copilot-instructions.md (commit 935e258)
  • hancock_constants.py with shared constants (commit 3dfea33)
  • hancock_agent.py with OpenAI import handling (commit 3dfea33)
  • clients/python/hancock_client.py with OpenAI import handling (commits 5865203, 258fe1c)

No additional changes needed - the patch matches the current state of the repository.

juanaquas mentioned this pull request Mar 2, 2026