An AI-driven, headless engineering team that monitors Jira, analyzes requirements, writes code, runs security audits, creates branches, opens PRs, and manages the full SDLC autonomously.
The project is split into four independently deployable services connected via HTTP and Redis Cloud.
```
Jira ──POST──▶ listener :8000 ──POST──▶ agents-api :9000 ──▶ Ollama (LLM)
                    │                        │
                    └─ reads Redis Cloud     ├─ writes Redis Cloud
                                             ├─ calls Jira API
                                             └─ pushes to GitHub
```
| Service | Directory | Purpose |
|---|---|---|
| Jira Listener | listener/ | Receives Jira webhooks, delegates to agents-api |
| Agents API | root + ai-agents-core/ | Runs all AI agents via CrewAI |
| Ollama | ollama/ | Serves the LLM locally (no token limits) |
| Phoenix | phoenix/ | Observability: traces every agent step and LLM call |
| Component | Technology |
|---|---|
| Agent Orchestration | CrewAI + LiteLLM |
| LLM (local) | Ollama (DeepSeek-Coder-V2:Lite) |
| LLM (cloud) | Groq / OpenAI (via LLM_PROVIDER env var) |
| State Store | Redis Cloud (free tier) |
| Observability | Arize Phoenix |
| Webhook Receiver | FastAPI |
| Infrastructure | Docker Compose |
| Version Control | GitHub CLI (gh) |
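The `LLM_PROVIDER` switch can be pictured with a small sketch. The provider-to-model mapping and the `resolve_model` helper below are illustrative assumptions, not the project's actual configuration; LiteLLM addresses models as `provider/model` strings:

```python
import os

# Hypothetical provider-to-model defaults; the real service may differ.
PROVIDER_MODELS = {
    "ollama": "ollama/deepseek-coder-v2:lite",
    "groq": "groq/llama-3.3-70b-versatile",
    "openai": "openai/gpt-4o",
}

def resolve_model() -> str:
    """Pick the LiteLLM model string from LLM_PROVIDER / LLM_MODEL env vars."""
    provider = os.environ.get("LLM_PROVIDER", "ollama")
    model = os.environ.get("LLM_MODEL")  # explicit override wins
    if model:
        return f"{provider}/{model}"
    return PROVIDER_MODELS[provider]

os.environ["LLM_PROVIDER"] = "groq"
os.environ["LLM_MODEL"] = "llama-3.3-70b-versatile"
print(resolve_model())  # groq/llama-3.3-70b-versatile
```

Defaulting to `ollama` mirrors the local-first setup; setting two env vars is all a cloud switch requires.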
```
ai-sdlc-factory/
│
├── listener/                  ← Standalone Jira webhook receiver
│   ├── jira_listener.py
│   ├── Dockerfile             (python:3.11-slim, minimal deps)
│   ├── docker-compose.yml
│   ├── requirements.txt
│   └── .env.sample            (REDIS_URL, AGENTS_API_URL)
│
├── ollama/                    ← Standalone LLM server
│   ├── Dockerfile
│   ├── init.sh                (auto-pulls model on first start)
│   ├── docker-compose.yml
│   └── .env.sample            (OLLAMA_MODEL)
│
├── phoenix/                   ← Standalone observability server
│   ├── docker-compose.yml     (uses official arizephoenix/phoenix image)
│   └── .env.sample
│
├── ai-agents-core/            ← Agent logic
│   ├── agents_api.py          (FastAPI: POST /agents/analyze, /agents/produce)
│   ├── main.py                (AIFactory + all CrewAI agents)
│   └── tools/
│       ├── jira_tools.py
│       └── shell_tool.py
│
├── Dockerfile                 (agents-api image)
├── docker-compose.yml         (agents-api + db only; all infra services are external)
├── entrypoint.sh              (gh auth + repo clone on startup)
└── .env.sample
```
```
Jira → "In Progress"
   └──▶ analyzing
   └──▶ awaiting_approval            ← plan posted to Jira, waits for human
Human comments "proceed"
   └──▶ branching_{context}          ← git checkout -b feature/ISSUE-KEY
   └──▶ coding_{context}
   └──▶ integrating_{context}
   └──▶ security_scanning_{context}
   └──▶ reviewing_{context}          ← commit + push + PR
   └──▶ completed_{context}
```
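The progression above is a linear state machine. As a minimal sketch, a plain dict stands in for the Redis hash; the `task:{key}` layout and the `advance` helper are illustrative assumptions (the real states also carry a `_{context}` suffix):

```python
# Ordered pipeline states, following the list above (context suffix omitted).
PIPELINE = [
    "analyzing",
    "awaiting_approval",
    "branching",
    "coding",
    "integrating",
    "security_scanning",
    "reviewing",
    "completed",
]

def advance(store: dict, issue_key: str) -> str:
    """Move a task to its next state; `store` stands in for Redis."""
    task = store.setdefault(f"task:{issue_key}", {"state": None})
    current = task["state"]
    nxt = PIPELINE[0] if current is None else PIPELINE[PIPELINE.index(current) + 1]
    task["state"] = nxt
    return nxt

store = {}
advance(store, "PROJ-42")   # -> analyzing
advance(store, "PROJ-42")   # -> awaiting_approval
print(store["task:PROJ-42"]["state"])  # awaiting_approval
```

The human approval gate is simply the pause at `awaiting_approval`: nothing advances past it until the listener sees a "proceed" comment.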
| Agent | Role |
|---|---|
| Analyst | Reads AGENTS.md, produces technical plan + confidence score |
| Backend Developer | Implements FastAPI + SQLAlchemy changes |
| Frontend Developer | Implements Angular standalone components |
| Integration Specialist | Verifies API contract between backend and frontend |
| SecOps | Runs Bandit (Python) and npm audit (Node) |
| Git Manager | Creates branches, commits, pushes, opens PRs |
| Doc Architect | Updates AGENTS.md history |
| Reviewer | Final quality gate, posts PR link to Jira |
- Docker Desktop
- Redis Cloud free tier account (redis.io/try-free)
- Groq API key if not using local Ollama (console.groq.com)
```bash
cp .env.sample .env
# Fill in: REDIS_URL, GITHUB_TOKEN, JIRA_*, GIT_USER_*, OLLAMA_HOST or GROQ_API_KEY
```

```bash
cd ollama && cp .env.sample .env
docker-compose up -d --build
# First run downloads deepseek-coder-v2:lite (~9GB); subsequent starts are instant
```

```bash
cd phoenix && docker-compose up -d
# UI available at http://localhost:6006
# Set PHOENIX_ENDPOINT=http://localhost:4317 in root .env
# Leave PHOENIX_ENDPOINT blank to skip tracing (e.g. on Hugging Face)
```

```bash
# From project root
docker-compose --profile ai up -d --build
```

```bash
cd listener && cp .env.sample .env
# Set REDIS_URL and AGENTS_API_URL in listener/.env
docker-compose up -d --build
```

In Jira → Settings → System → Webhooks, create a webhook:
- URL: `http://<your-host>:8000/webhook/jira`
- Events: Issue updated, Comment created
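These two webhook events are all the listener needs to route work. As a rough sketch (field paths follow Jira's webhook payload shape; the routing logic itself is an assumption, not the actual `jira_listener.py`):

```python
import json
from typing import Optional

def route(raw: bytes) -> Optional[str]:
    """Decide which agents-api endpoint a Jira webhook should trigger."""
    event = json.loads(raw)
    key = event["issue"]["key"]
    if event.get("webhookEvent") == "jira:issue_updated":
        status = event["issue"]["fields"]["status"]["name"]
        if status == "In Progress":
            return f"POST /agents/analyze for {key}"
    elif event.get("webhookEvent") == "comment_created":
        if "proceed" in event["comment"]["body"].lower():
            return f"POST /agents/produce for {key}"
    return None  # ignore all other events

payload = {
    "webhookEvent": "jira:issue_updated",
    "issue": {"key": "PROJ-42", "fields": {"status": {"name": "In Progress"}}},
}
print(route(json.dumps(payload).encode()))  # POST /agents/analyze for PROJ-42
```

Anything that is neither a move to "In Progress" nor a "proceed" comment is dropped, which keeps the listener cheap to run always-on.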
Each service deploys as a separate HF Space:

| Space | Directory | Port | Notes |
|---|---|---|---|
| your-org/ai-sdlc-listener | listener/ | 8000 | Always on |
| your-org/ai-sdlc-agents | root | 9000 | Always on |
| your-org/ai-sdlc-ollama | ollama/ | 11434 | Optional; use Groq instead |
| your-org/ai-sdlc-phoenix | phoenix/ | 6006 | Optional; use Arize Cloud instead |
Phoenix on HF: leave `PHOENIX_ENDPOINT` blank to disable tracing, or use Arize Phoenix Cloud (free) and set `PHOENIX_ENDPOINT=https://app.phoenix.arize.com/v1/traces` + `PHOENIX_API_KEY`.
Required HF secrets for the agents Space:

```
REDIS_URL          rediss://default:<pwd>@<host>:<port>
GITHUB_TOKEN       PAT with repo scope
JIRA_DOMAIN        yourorg.atlassian.net
JIRA_USERNAME      your@email.com
JIRA_API_TOKEN     your-jira-token
GIT_USER_NAME      Your Name
GIT_USER_EMAIL     your@email.com
BACKEND_REPO_URL   https://github.com/your-org/backend.git
FRONTEND_REPO_URL  https://github.com/your-org/frontend.git
OLLAMA_HOST        https://your-org-ai-sdlc-ollama.hf.space
LLM_PROVIDER       ollama
```
No Ollama Space? Use Groq instead: set `LLM_PROVIDER=groq`, `LLM_MODEL=llama-3.3-70b-versatile`, `GROQ_API_KEY=...`.
| Resource | URL |
|---|---|
| Phoenix traces | http://localhost:6006 |
| Ollama API | http://localhost:11434 |
| Agents API | http://localhost:9000/health |
| Jira Listener | http://localhost:8000/webhook/jira |
Container logs:

```bash
docker logs -f agents-api
docker logs -f jira-listener
docker logs -f ai-brain
```

Redis state inspection:

```bash
# Requires redis-cli pointed at your Redis Cloud instance
redis-cli -u $REDIS_URL hgetall task:ISSUE-KEY
```

The Analyst agent posts a score with every plan:
| Score | Meaning | Action |
|---|---|---|
| > 85% | Low risk | Safe to comment `proceed` |
| 75–85% | Moderate risk | Review the plan carefully first |
| < 75% | AI is uncertain | Add more detail to the Jira ticket, do NOT proceed |
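The gate implied by the table is easy to encode; the function name and return labels below are hypothetical, with thresholds taken from the table:

```python
def confidence_gate(confidence: float) -> str:
    """Map the Analyst's confidence score to the action from the table above."""
    if confidence > 0.85:
        return "proceed"   # low risk: safe to approve
    if confidence >= 0.75:
        return "review"    # moderate risk: read the plan first
    return "block"         # AI uncertain: enrich the Jira ticket instead

print(confidence_gate(0.92))  # proceed
print(confidence_gate(0.80))  # review
print(confidence_gate(0.60))  # block
```

Treating the lowest band as a hard block matches the table's advice: the fix for low confidence is a better-specified ticket, not a riskier run.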
Each product repo contains an AGENTS.md. This is the source of truth for the AI agents: it defines libraries, patterns, constraints, and history.
- Want the AI to use `ngx-charts`? Add it to `frontend/AGENTS.md`.
- Changed the database schema? Update `backend/AGENTS.md`.
- Every completed task is logged in the AGENTS.md history by the Doc Agent.
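As a rough illustration only (the section names are assumptions, not a prescribed schema), a product repo's `AGENTS.md` might look like:

```markdown
# AGENTS.md

## Libraries
- FastAPI + SQLAlchemy (backend), Angular standalone components (frontend)

## Constraints
- Every new endpoint needs request/response models and tests

## History
- PROJ-41: added reporting endpoint
```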
```bash
# Rebuild agents after a code change
docker-compose --profile ai up -d --build agents-api

# Rebuild listener after a code change
cd listener && docker-compose up -d --build

# Rebuild Ollama (e.g. to change model)
cd ollama && docker-compose up -d --build

# Start / stop Phoenix
cd phoenix && docker-compose up -d
cd phoenix && docker-compose down

# Stop agents
docker-compose --profile ai down

# Validate environment before starting
bash docker-compose.check.sh
```

```mermaid
sequenceDiagram
    participant H as Human (Jira)
    participant L as Jira Listener :8000
    participant R as Redis Cloud
    participant AG as Agents API :9000
    participant LM as Ollama / Groq
    participant GH as GitHub
    H->>L: Move ticket to "In Progress"
    L->>AG: POST /agents/analyze
    AG->>R: state = analyzing
    AG->>LM: Analyst reads AGENTS.md
    AG->>H: Post plan + confidence score to Jira
    AG->>R: state = awaiting_approval
    Note over H,R: Human reviews plan
    H->>L: Comment "proceed"
    L->>R: Read state (awaiting_approval ✓)
    L->>AG: POST /agents/produce
    AG->>GH: git checkout -b feature/ISSUE-KEY
    AG->>LM: Backend / Frontend agents write code
    AG->>LM: Security scan + Integration check
    AG->>GH: git push + gh pr create
    AG->>H: Post PR link to Jira
    AG->>R: state = completed
```