A comprehensive system for case workers to conduct child abuse screening and reporting with AI assistance.
- Linear Screening Protocol: Step-by-step questionnaire with risk scoring
- Branching Decision Tree: Dynamic reporting workflow based on screening results
- AI Assistance: Ollama-powered guidance based on protocol documents
- Protocol Search (Phase 1): Client-side keyword/BM25 search with citations (Vercel-friendly)
- Document RAG (Future): Retrieval + generation architecture documented in docs/rag-architecture-roadmap.md
- Admin Rule Editor: No-code interface for updating screening rules and decision trees
- Audit Logging: Complete trail of all decisions and actions
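The linear screening protocol above is a weighted sum over yes/no answers. A minimal sketch of that scoring (the question IDs, wording, and weights here are illustrative, not the production rule set; the shape mirrors the `rule_data` payload used by the admin rule API):

```python
# Minimal sketch of linear screening risk scoring.
# Questions and weights are hypothetical; real rules come from the rule store.
QUESTIONS = {
    "q1": {"text": "Visible unexplained injuries?", "risk_score_yes": 5, "risk_score_no": 0},
    "q2": {"text": "Prior substantiated reports?", "risk_score_yes": 3, "risk_score_no": 0},
}

def score_screening(answers: dict[str, bool]) -> int:
    """Sum the risk contribution of each answered yes/no question."""
    total = 0
    for qid, yes in answers.items():
        q = QUESTIONS[qid]
        total += q["risk_score_yes"] if yes else q["risk_score_no"]
    return total

print(score_screening({"q1": True, "q2": False}))  # → 5
```

The resulting total is what the branching decision tree consumes to pick a reporting path.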
Frontend (React + TypeScript) → Port 3001
Backend (FastAPI) → Port 8000
Ollama (Local LLM) → Port 11434
Firebase (Hosting) → Cloud
Firestore (Database) → Cloud
Notes:
- Phase 1 protocol search is client-side and does not require the backend to be running.
- The backend is still used for other features and for future RAG generation.
- Python 3.10+
- Node.js 18+
- Ollama (installed and running locally)
- Firebase CLI (optional for hosting/deploy)
cd backend
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Create .env file
cp .env.example .env
# Edit .env with your configuration
# Required: OLLAMA_ENDPOINT, OLLAMA_MODEL

# Pull the model (first time only)
ollama pull llama3.1:latest
# Ollama should be running on http://127.0.0.1:11434
# Verify with: curl http://127.0.0.1:11434/api/tags

Place your PDF documents in the appropriate folders:
- docs/screening/ - Screening protocol PDFs
- docs/reporting/ - Reporting guideline PDFs
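Ollama's `/api/tags` endpoint (used above to verify the server) returns a JSON body listing installed models. A small helper that reads model names from that body, leaving the HTTP call itself to curl:

```python
import json

def installed_models(tags_json: str) -> list[str]:
    """Extract model names from an Ollama /api/tags response body."""
    payload = json.loads(tags_json)
    return [m["name"] for m in payload.get("models", [])]

# Example body of the kind returned by `curl http://127.0.0.1:11434/api/tags`
sample = '{"models": [{"name": "llama3.1:latest"}]}'
print(installed_models(sample))  # → ['llama3.1:latest']
```

If `llama3.1:latest` is missing from the list, re-run `ollama pull llama3.1:latest`.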
Generate the client-side search corpus JSON (Phase 1):
python3 backend/scripts/generate_corpus_simple.py
# Output:
# frontend/public/protocol_corpus.json

Optional (backend-managed documents for future RAG / non-client search):
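The generated corpus is a JSON array of text chunks with enough metadata to cite. As a rough sketch, assuming each entry carries a source document, page, and text (field names here are assumptions; check `generate_corpus_simple.py` for the authoritative schema):

```python
# Hypothetical corpus entry shape -- verify against generate_corpus_simple.py.
sample = [
    {"id": "screen-001", "source": "screening_protocol.pdf", "page": 4,
     "text": "Interview the child separately from the caregiver."},
    {"id": "report-002", "source": "reporting_guidelines.pdf", "page": 11,
     "text": "Mandated reporters must file within 24 hours."},
]

def cite_matches(corpus: list[dict], keyword: str) -> list[str]:
    """Return citation strings for entries whose text contains the keyword."""
    kw = keyword.lower()
    return [f'{e["source"]} p.{e["page"]}' for e in corpus if kw in e["text"].lower()]

print(cite_matches(sample, "24 hours"))  # → ['reporting_guidelines.pdf p.11']
```

Per-chunk source and page fields are what let Phase 1 search return citations rather than bare text.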
curl -X POST "http://localhost:8000/api/documents/upload" \
-F "file=@docs/screening/your_protocol.pdf" \
-F "document_type=screening_guide" \
-F "title=Child Abuse Screening Protocol"

cd backend
source venv/bin/activate
python -m api.main
# Backend will start on http://localhost:8000
# API docs available at http://localhost:8000/docs

Note: The backend is not required for Phase 1 protocol search, but is required for AI assistance and other API features.
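The AI-assist endpoints ground the model in protocol text before calling Ollama. A sketch of how such a grounded prompt might be assembled (the prompt wording and chunk field names are illustrative, not the backend's actual template in `ollama_service.py`):

```python
def build_assist_prompt(question: str, chunks: list[dict]) -> str:
    """Compose a prompt that cites protocol excerpts for the LLM to ground on."""
    context = "\n\n".join(
        f'[{c["source"]} p.{c["page"]}] {c["text"]}' for c in chunks
    )
    return (
        "You are assisting a child-welfare case worker. Answer ONLY from the "
        "protocol excerpts below and cite sources.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )

chunks = [{"source": "screening_protocol.pdf", "page": 4,
           "text": "Interview the child separately from the caregiver."}]
print(build_assist_prompt("How should I begin the interview?", chunks))
```

Keeping citations inline in the context is what allows the model's answers to point back to specific protocol pages.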
cd frontend
# Install dependencies
npm install
# Start development server
npm run dev
# Frontend will start on http://localhost:3001

Protocol Search (Phase 1) will load its index from:
frontend/public/protocol_corpus.json
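Phase 1 search ranks corpus chunks with BM25-style keyword scoring. The scoring itself fits in a few lines (Python here for illustration; the shipped implementation lives in the frontend):

```python
import math
from collections import Counter

def bm25_scores(query: list[str], docs: list[list[str]], k1=1.5, b=0.75) -> list[float]:
    """Score tokenized docs against a tokenized query with classic BM25."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n          # average document length
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query:
            if tf[t] == 0:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            score += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores

docs = [["report", "within", "24", "hours"], ["interview", "the", "child"]]
print(bm25_scores(["report"], docs))
```

Because scoring is pure arithmetic over a pre-built index, it runs entirely client-side, which is why this phase needs no backend and deploys cleanly to Vercel.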
- Frontend: Port 3001 (configured in vite.config.ts)
- Backend: Port 8000 (configured in api/main.py)
- Ollama: Port 11434 (default)
To change ports:
- Frontend: Edit frontend/vite.config.ts
- Backend: Edit backend/api/main.py (the uvicorn.run port parameter)
- Update CORS in backend/.env (ALLOWED_ORIGINS)
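ALLOWED_ORIGINS in `.env` is typically a comma-separated list that the backend splits before handing to FastAPI's CORS middleware. A sketch of that parsing (the helper name and the default value are assumptions; check `api/main.py` for the actual wiring):

```python
import os

def parse_origins(raw: str) -> list[str]:
    """Split a comma-separated ALLOWED_ORIGINS value, dropping blanks."""
    return [o.strip() for o in raw.split(",") if o.strip()]

# In api/main.py a list like this would feed CORSMiddleware's allow_origins.
origins = parse_origins(os.getenv("ALLOWED_ORIGINS", "http://localhost:3001"))
print(origins)
```

If you change the frontend port, the new origin must appear in this list or browser requests to the backend will be blocked by CORS.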
Persistent storage is planned to use Firestore (Firebase) for production deployments.
Note: Firebase/Firestore integration is not documented here yet. This README will be updated once the data model and auth rules are finalized.
social-service-screening-assistant/
├── backend/
│ ├── api/
│ │ ├── main.py # FastAPI application
│ │ └── ollama_service.py # Ollama integration
│ ├── document_processor/
│ │ └── pdf_processor.py # PDF text extraction
│ ├── vector_store/
│ │ └── faiss_store.py # FAISS vector database
│ ├── models/
│ │ └── schemas.py # Pydantic models
│ └── requirements.txt
├── frontend/
│ ├── src/
│ │ ├── components/
│ │ │ ├── screening/ # Screening interface
│ │ │ ├── reporting/ # Decision tree UI
│ │ │ ├── admin/ # Rule editor
│ │ │ └── knowledge-base/ # Document viewer
│ │ └── services/
│ │ │ └── api.ts # API client
│ └── package.json
├── docs/
│ ├── screening/ # Your screening PDFs
│ └── reporting/ # Your reporting PDFs
└── data/
├── vector_store/ # FAISS index
└── synthesized_cases/ # Test data
POST /api/screening/questions - Get screening questions
POST /api/screening/submit - Submit response
POST /api/screening/ai-assist - Get AI assistance
POST /api/screening/analyze - Analyze case risk
GET /api/decision-tree/start - Get start node
POST /api/decision-tree/traverse - Navigate tree
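Behind these endpoints the reporting workflow is a graph of nodes whose edges fire on conditions over the screening result. A minimal traversal sketch (node names, conditions, and the score threshold are hypothetical):

```python
# Hypothetical decision tree: each edge is (condition, next_node).
# Real trees are data-driven and edited via the admin UI.
TREE = {
    "start": [(lambda s: s >= 10, "file_report"), (lambda s: s < 10, "monitor")],
    "file_report": [],
    "monitor": [],
}

def traverse(node: str, risk_score: int) -> str:
    """Follow the first edge whose condition holds; stop at a leaf node."""
    for cond, nxt in TREE[node]:
        if cond(risk_score):
            return traverse(nxt, risk_score)
    return node

print(traverse("start", 12))  # → file_report
```

The `/api/decision-tree/traverse` endpoint performs one such step per call, which lets the UI walk the case worker through the tree interactively.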
POST /api/reporting/submit - Submit report
POST /api/documents/upload - Upload PDF
POST /api/documents/search - Search protocols
GET /api/documents/stats - Vector store stats
Note: /api/documents/search is currently commented out in the UI in favor of Phase 1 client-side search.
GET /api/admin/rules - List rules
POST /api/admin/rules - Create rule
PUT /api/admin/rules/{id} - Update rule
DELETE /api/admin/rules/{id} - Deactivate rule
- Use the admin UI or API to create a rule:
{
"rule_type": "screening_question",
"rule_data": {
"question_id": "q8",
"order": 8,
"text": "Your question here?",
"type": "yes_no",
"risk_score_yes": 5,
"risk_score_no": 0
}
}

- Access the admin UI at /admin/decision-tree
- Visual editor allows drag-and-drop node creation
- Set conditions and actions for each node
- Changes are versioned automatically
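Validating a rule payload before POSTing it to `/api/admin/rules` avoids a round trip. A minimal check against the `screening_question` shape shown above (a pydantic-free sketch; the backend's real validation lives in `models/schemas.py`):

```python
# Required fields and types for a screening_question rule_data payload,
# mirroring the JSON example above.
REQUIRED = {"question_id": str, "order": int, "text": str,
            "type": str, "risk_score_yes": int, "risk_score_no": int}

def validate_rule_data(rule_data: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload looks well-formed."""
    problems = []
    for field, typ in REQUIRED.items():
        if field not in rule_data:
            problems.append(f"missing {field}")
        elif not isinstance(rule_data[field], typ):
            problems.append(f"{field} should be {typ.__name__}")
    return problems

ok = {"question_id": "q8", "order": 8, "text": "Your question here?",
      "type": "yes_no", "risk_score_yes": 5, "risk_score_no": 0}
print(validate_rule_data(ok))  # → []
```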
# Upload via API
curl -X POST "http://localhost:8000/api/documents/upload" \
-F "file=@path/to/document.pdf" \
-F "document_type=screening_guide" \
-F "title=Document Title"
# Or use the admin UI document manager

cd backend
pytest tests/

cd frontend
npm test

The database seed file includes realistic test cases. Use these for testing workflows.
- Authentication: Implement proper user authentication (planned via Firebase Auth)
- Authorization: Implement role-based access control and least-privilege rules (planned via Firestore Security Rules)
- Audit Logging: All actions are logged with user ID and timestamp
- Data Encryption: Use HTTPS in production
- HIPAA Compliance: Consult legal counsel before production deployment
- Access Control: Limit admin access to authorized personnel only
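An audit entry as described above needs at minimum the actor, the action, and a timestamp. A sketch of the record shape (field names are assumptions, not the production schema):

```python
from datetime import datetime, timezone

def audit_record(user_id: str, action: str, detail: dict) -> dict:
    """Build an append-only audit entry stamped with UTC time."""
    return {
        "user_id": user_id,
        "action": action,
        "detail": detail,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

rec = audit_record("cw-042", "screening_submitted", {"case_id": "C-1001"})
print(rec["action"])  # → screening_submitted
```

Using timezone-aware UTC timestamps keeps the trail unambiguous across case workers in different locales.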
- Use Docker for containerization
- Deploy to AWS/GCP/Azure
- Configure environment variables
- Set up SSL/TLS certificates
- Build: npm run build
- Deploy to Vercel/Netlify/Cloudflare Pages
- Configure environment variables
# Check if Ollama is running
curl http://127.0.0.1:11434/api/tags
# Restart Ollama
ollama serve

# Kill process on port 3001
lsof -ti :3001 | xargs kill -9
# Kill process on port 8000
lsof -ti :8000 | xargs kill -9

# Clear and rebuild vector store
curl -X DELETE http://localhost:8000/api/documents/clear
# Then re-upload documents

This is a prototype for internal use. For production deployment:
- Implement Firebase/Firestore persistence (planned)
- Implement comprehensive testing
- Add user authentication
- Conduct security audit
- Obtain legal/compliance review
Internal use only. Not for public distribution.
For questions or issues, contact the development team.
Version: 1.0.0
Last Updated: January 2026