# 🏭 Agentic SDLC Factory

An AI-driven, headless engineering team that monitors Jira, analyzes requirements, writes code, runs security audits, creates branches, opens PRs, and manages the full SDLC autonomously.


πŸ—οΈ Architecture

The project is split into four independently deployable services connected via HTTP and Redis Cloud.

```
Jira ──POST──▶  listener :8000  ──POST──▶  agents-api :9000  ──▶  Ollama (LLM)
                    │                           │
                    └── reads Redis Cloud       └── writes Redis Cloud
                                                └── calls Jira API
                                                └── pushes to GitHub
```
| Service | Directory | Purpose |
|---|---|---|
| Jira Listener | `listener/` | Receives Jira webhooks, delegates to agents-api |
| Agents API | root + `ai-agents-core/` | Runs all AI agents via CrewAI |
| Ollama | `ollama/` | Serves the LLM locally (no token limits) |
| Phoenix | `phoenix/` | Observability: traces every agent step and LLM call |

## 🛠️ Tech Stack

| Component | Technology |
|---|---|
| Agent Orchestration | CrewAI + LiteLLM |
| LLM (local) | Ollama (DeepSeek-Coder-V2:Lite) |
| LLM (cloud) | Groq / OpenAI (via `LLM_PROVIDER` env var) |
| State Store | Redis Cloud (free tier) |
| Observability | Arize Phoenix |
| Webhook Receiver | FastAPI |
| Infrastructure | Docker Compose |
| Version Control | GitHub CLI (`gh`) |

## 📂 Project Structure

```
ai-sdlc-factory/
│
├── listener/               ← Standalone Jira webhook receiver
│   ├── jira_listener.py
│   ├── Dockerfile          (python:3.11-slim, minimal deps)
│   ├── docker-compose.yml
│   ├── requirements.txt
│   └── .env.sample         (REDIS_URL, AGENTS_API_URL)
│
├── ollama/                 ← Standalone LLM server
│   ├── Dockerfile
│   ├── init.sh             (auto-pulls model on first start)
│   ├── docker-compose.yml
│   └── .env.sample         (OLLAMA_MODEL)
│
├── phoenix/                ← Standalone observability server
│   ├── docker-compose.yml  (uses official arizephoenix/phoenix image)
│   └── .env.sample
│
├── ai-agents-core/         ← Agent logic
│   ├── agents_api.py       (FastAPI: POST /agents/analyze, /agents/produce)
│   ├── main.py             (AIFactory + all CrewAI agents)
│   └── tools/
│       ├── jira_tools.py
│       └── shell_tool.py
│
├── Dockerfile              (agents-api image)
├── docker-compose.yml      (agents-api + db only; all infra services are external)
├── entrypoint.sh           (gh auth + repo clone on startup)
└── .env.sample
```

## 🔄 State Machine

```
Jira → "In Progress"
    └──▶ analyzing
             └──▶ awaiting_approval   ← plan posted to Jira, waits for human

Human comments "proceed"
    └──▶ branching_{context}          ← git checkout -b feature/ISSUE-KEY
             └──▶ coding_{context}
                      └──▶ integrating_{context}
                               └──▶ security_scanning_{context}
                                        └──▶ reviewing_{context}   ← commit + push + PR
                                                 └──▶ completed_{context}
```
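
The allowed transitions can be sketched as a small guard function. This is an illustration only, assuming the state names above; the real transition logic lives in `ai-agents-core/main.py`, and the helper names here are hypothetical:

```python
# Hypothetical sketch of the pipeline's state ordering (the real logic
# lives in ai-agents-core/; names and structure here are illustrative).
PIPELINE = [
    "analyzing", "awaiting_approval", "branching", "coding",
    "integrating", "security_scanning", "reviewing", "completed",
]

def base_state(state: str) -> str:
    """Strip the _{context} suffix, e.g. coding_backend -> coding."""
    for name in PIPELINE:
        if state == name or state.startswith(name + "_"):
            return name
    raise ValueError(f"unknown state: {state}")

def can_transition(current: str, target: str) -> bool:
    """A task may only advance to the immediate next pipeline stage."""
    return PIPELINE.index(base_state(target)) == PIPELINE.index(base_state(current)) + 1
```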

## 🤖 AI Agents

| Agent | Role |
|---|---|
| Analyst | Reads AGENTS.md, produces a technical plan + confidence score |
| Backend Developer | Implements FastAPI + SQLAlchemy changes |
| Frontend Developer | Implements Angular standalone components |
| Integration Specialist | Verifies the API contract between backend and frontend |
| SecOps | Runs Bandit (Python) and `npm audit` (Node) |
| Git Manager | Creates branches, commits, pushes, opens PRs |
| Doc Architect | Updates the AGENTS.md history |
| Reviewer | Final quality gate; posts the PR link to Jira |

## 🚀 Setup

### Prerequisites

### Step 1: Configure environment

```bash
cp .env.sample .env
# Fill in: REDIS_URL, GITHUB_TOKEN, JIRA_*, GIT_USER_*, OLLAMA_HOST or GROQ_API_KEY
```
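
As a sanity check before starting the stack, the required variables can be validated in a few lines. A minimal sketch, assuming the variable names listed above (the repo ships its own check script, `docker-compose.check.sh`):

```python
# Illustrative subset of the variables from .env.sample; your deployment
# may additionally need OLLAMA_HOST or GROQ_API_KEY, GIT_USER_*, etc.
REQUIRED = ["REDIS_URL", "GITHUB_TOKEN", "JIRA_DOMAIN",
            "JIRA_USERNAME", "JIRA_API_TOKEN"]

def missing_vars(env: dict) -> list:
    """Return the required variables that are unset or blank."""
    return [name for name in REQUIRED if not env.get(name)]
```

Running `missing_vars(dict(os.environ))` before `docker-compose up` catches a half-filled `.env` early instead of letting the agents fail mid-pipeline.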

### Step 2: Start Ollama (local LLM, optional)

```bash
cd ollama && cp .env.sample .env
docker-compose up -d --build
# First run downloads deepseek-coder-v2:lite (~9 GB); subsequent starts are instant
```

### Step 3: Start Phoenix (observability, optional)

```bash
cd phoenix && docker-compose up -d
# UI available at http://localhost:6006
# Set PHOENIX_ENDPOINT=http://localhost:4317 in root .env
# Leave PHOENIX_ENDPOINT blank to skip tracing (e.g. on Hugging Face)
```

### Step 4: Start the AI agents

```bash
# From project root
docker-compose --profile ai up -d --build
```

### Step 5: Start the Jira listener

```bash
cd listener && cp .env.sample .env
# Set REDIS_URL and AGENTS_API_URL in listener/.env
docker-compose up -d --build
```

### Step 6: Configure the Jira webhook

In Jira → Settings → System → Webhooks, create a webhook:

- URL: `http://<your-host>:8000/webhook/jira`
- Events: Issue updated, Comment created
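
Conceptually, the listener only has to distinguish these two events. A simplified sketch of that routing decision (the real handler is `listener/jira_listener.py`; the payload fields follow Jira's webhook format, and the returned labels are hypothetical):

```python
def route_event(payload: dict) -> "str | None":
    """Decide which agents-api endpoint a Jira webhook should trigger."""
    event = payload.get("webhookEvent", "")
    if event == "jira:issue_updated":
        status = (payload.get("issue", {}).get("fields", {})
                  .get("status", {}).get("name"))
        if status == "In Progress":
            return "analyze"        # -> POST /agents/analyze
    elif event == "comment_created":
        body = payload.get("comment", {}).get("body", "")
        if "proceed" in body.lower():
            return "produce"        # -> POST /agents/produce
    return None                     # ignore everything else
```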

## ☁️ Hugging Face Deployment

Each service deploys as a separate HF Space:

| Space | Directory | Port | Notes |
|---|---|---|---|
| your-org/ai-sdlc-listener | `listener/` | 8000 | Always on |
| your-org/ai-sdlc-agents | root | 9000 | Always on |
| your-org/ai-sdlc-ollama | `ollama/` | 11434 | Optional; use Groq instead |
| your-org/ai-sdlc-phoenix | `phoenix/` | 6006 | Optional; use Arize Cloud instead |

Phoenix on HF: leave `PHOENIX_ENDPOINT` blank to disable tracing, or use Arize Phoenix Cloud (free) and set `PHOENIX_ENDPOINT=https://app.phoenix.arize.com/v1/traces` plus `PHOENIX_API_KEY`.

Required HF secrets for agents Space:

```
REDIS_URL          rediss://default:<pwd>@<host>:<port>
GITHUB_TOKEN       PAT with repo scope
JIRA_DOMAIN        yourorg.atlassian.net
JIRA_USERNAME      your@email.com
JIRA_API_TOKEN     your-jira-token
GIT_USER_NAME      Your Name
GIT_USER_EMAIL     your@email.com
BACKEND_REPO_URL   https://github.com/your-org/backend.git
FRONTEND_REPO_URL  https://github.com/your-org/frontend.git
OLLAMA_HOST        https://your-org-ai-sdlc-ollama.hf.space
LLM_PROVIDER       ollama
```

No Ollama Space? Use Groq instead: set `LLM_PROVIDER=groq`, `LLM_MODEL=llama-3.3-70b-versatile`, and `GROQ_API_KEY=...`
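
The provider switch boils down to building a model string from the environment. A sketch under the assumption that the project follows LiteLLM's `provider/model` naming (the actual selection code is in `ai-agents-core/main.py`; the defaults here mirror the values in this README):

```python
def resolve_model(env: dict) -> str:
    """Build a LiteLLM-style model identifier from LLM_* env vars."""
    provider = env.get("LLM_PROVIDER", "ollama")
    if provider == "groq":
        return "groq/" + env.get("LLM_MODEL", "llama-3.3-70b-versatile")
    # default: local Ollama, the model this README ships with
    return "ollama/" + env.get("LLM_MODEL", "deepseek-coder-v2:lite")
```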


## 📈 Monitoring

| Resource | URL |
|---|---|
| Phoenix traces | http://localhost:6006 |
| Ollama API | http://localhost:11434 |
| Agents API | http://localhost:9000/health |
| Jira Listener | http://localhost:8000/webhook/jira |

Container logs:

```bash
docker logs -f agents-api
docker logs -f jira-listener
docker logs -f ai-brain
```

Redis state inspection:

```bash
# Requires redis-cli pointed at your Redis Cloud instance
redis-cli -u $REDIS_URL hgetall task:ISSUE-KEY
```

## 🧠 The "Confidence Score" Protocol

The Analyst agent posts a score with every plan:

| Score | Meaning | Action |
|---|---|---|
| > 85% | Low risk | Safe to comment `proceed` |
| 75–85% | Moderate risk | Review the plan carefully first |
| < 75% | AI is uncertain | Add more detail to the Jira ticket; do NOT proceed |
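
The same gate, expressed as a trivial sketch of the protocol above (the function name is hypothetical; the thresholds are the ones in the table):

```python
def confidence_action(score: float) -> str:
    """Map the Analyst's confidence score (in percent) to an action."""
    if score > 85:
        return "proceed"         # low risk: safe to comment "proceed"
    if score >= 75:
        return "review"          # moderate risk: read the plan first
    return "refine-ticket"       # uncertain: add detail, do NOT proceed
```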

## 📜 The AGENTS.md Law

Each product repo contains an `AGENTS.md`. This is the source of truth for the AI agents: it defines libraries, patterns, constraints, and history.

- Want the AI to use ngx-charts? Add it to `frontend/AGENTS.md`.
- Changed the database schema? Update `backend/AGENTS.md`.
- Every completed task is logged in the `AGENTS.md` history by the Doc Agent.

## 🔧 Useful Commands

```bash
# Rebuild agents after a code change
docker-compose --profile ai up -d --build agents-api

# Rebuild listener after a code change
cd listener && docker-compose up -d --build

# Rebuild Ollama (e.g. to change model)
cd ollama && docker-compose up -d --build

# Start / stop Phoenix
cd phoenix && docker-compose up -d
cd phoenix && docker-compose down

# Stop agents
docker-compose --profile ai down

# Validate environment before starting
bash docker-compose.check.sh
```

## 📊 Flow Diagram

```mermaid
sequenceDiagram
    participant H as Human (Jira)
    participant L as Jira Listener :8000
    participant R as Redis Cloud
    participant AG as Agents API :9000
    participant LM as Ollama / Groq
    participant GH as GitHub

    H->>L: Move ticket to "In Progress"
    L->>AG: POST /agents/analyze
    AG->>R: state = analyzing
    AG->>LM: Analyst reads AGENTS.md
    AG->>H: Post plan + confidence score to Jira
    AG->>R: state = awaiting_approval

    Note over H,R: Human reviews plan

    H->>L: Comment "proceed"
    L->>R: Read state (awaiting_approval ✅)
    L->>AG: POST /agents/produce
    AG->>GH: git checkout -b feature/ISSUE-KEY
    AG->>LM: Backend / Frontend agents write code
    AG->>LM: Security scan + Integration check
    AG->>GH: git push + gh pr create
    AG->>H: Post PR link to Jira
    AG->>R: state = completed
```
