## Table of Contents

- Project Overview
- System Architecture
- Core Components
- Tools and Dependencies
- Workflow
- Error Handling
- Extending the Project

## Project Overview

This project provides a structured approach to diabetes risk assessment using OpenAI's language models. It processes patient data, generates risk assessments, and validates the output to ensure reliability and consistency.
## System Architecture

```
.
├── diabetes_diagnosis.py   # Main module with core functionality
├── tests/                  # Test suite
│   ├── __init__.py
│   ├── conftest.py         # Test configuration
│   ├── test_edge_cases.py  # Edge case tests
│   └── test_validation.py  # Validation tests
├── scripts/
│   └── run_tests.sh        # Test runner script
├── requirements.txt        # Python dependencies
└── .env.example            # Environment template
```
## Core Components

Located in `diabetes_diagnosis.py`, this module contains:

1. **Assessment Model** (`Assessment` class):
   - Defines the structure for risk assessments
   - Uses Pydantic for data validation
   - Enforces required fields and data types

2. **Risk Assessment Function** (`get_risk`):
   - Takes patient data as input
   - Communicates with OpenAI's API
   - Returns a structured risk assessment

3. **Response Validation** (`validate_response`):
   - Validates the JSON structure
   - Ensures all required fields are present
   - Validates data types and constraints

4. **Error Handling**:
   - Catches and reports JSON parsing errors
   - Validates against the `Assessment` model
   - Provides clear error messages
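As a rough sketch of how the model and validator fit together (the field names below are illustrative stand-ins, not the project's actual schema, which lives in `diabetes_diagnosis.py`):

```python
import json
from typing import Optional

from pydantic import BaseModel, Field, ValidationError


class Assessment(BaseModel):
    # Illustrative fields -- the real model defines the project's schema
    risk_level: str
    score: float = Field(..., ge=0.0, le=1.0)  # constrained to [0, 1]
    explanation: str


def validate_response(raw: str) -> Optional[Assessment]:
    """Parse the model's JSON reply and validate it against Assessment."""
    try:
        return Assessment(**json.loads(raw))
    except json.JSONDecodeError as exc:
        print(f"JSON parsing error: {exc}")
    except ValidationError as exc:
        print(f"Schema validation error: {exc}")
    return None
```

Malformed JSON and schema violations both return `None` with a clear message, so callers only ever see a fully validated `Assessment` or an explicit failure.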
5. **Test Types**:
   - Unit tests for individual components
   - Integration tests for API interactions
   - Edge case testing

6. **Mocking**:
   - Uses `pytest-mock` for API call mocking
   - Prevents unnecessary API calls during testing
   - Ensures consistent test results
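`pytest-mock` is a thin wrapper around the standard library's `unittest.mock`, so the core idea — patching the API call so tests never hit the network — can be shown self-contained. Here `call_openai` and `get_risk` are hypothetical stand-ins for the project's real functions:

```python
from unittest import mock


# Hypothetical stand-ins for the project's real functions:
def call_openai(prompt: str) -> str:
    raise RuntimeError("real network call -- must be mocked in tests")


def get_risk(patient_data: dict) -> str:
    return call_openai(f"Assess diabetes risk for: {patient_data}")


# In a pytest test this patch would come from the `mocker` fixture,
# e.g. mocker.patch("diabetes_diagnosis.call_openai", return_value=...).
with mock.patch(f"{__name__}.call_openai",
                return_value='{"risk_level": "low"}') as fake:
    result = get_risk({"age": 45, "bmi": 31.2})

print(result)  # the canned response; no API call was made
```

Because the patched function returns a fixed string, every test run sees the same "API response", which is what makes the results consistent.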
## Tools and Dependencies

- Python 3.8+: Core programming language
- Pydantic: Data validation and settings management
- OpenAI Python Client: Interface with OpenAI's API
- python-dotenv: Environment variable management
- pytest: Testing framework
- pytest-mock: Mocking for tests
- pytest-html: HTML test reporting
- black: Code formatting
- flake8: Linting
- mypy: Static type checking
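A `requirements.txt` matching this list might look like the following (version pins are omitted here; pin them to match your environment):

```text
# requirements.txt -- illustrative, unpinned
pydantic
openai
python-dotenv
pytest
pytest-mock
pytest-html
black
flake8
mypy
```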
## Workflow

1. **Input Processing**:
   - Patient data is collected and formatted
   - Data is validated for required fields

2. **API Interaction**:
   - A request is sent to OpenAI's API
   - The response is captured and parsed

3. **Response Validation**:
   - The JSON structure is validated
   - Data types and constraints are checked
   - Results are formatted for output

4. **Result Handling**:
   - Valid results are processed
   - Errors are caught and reported
   - Logs are generated for debugging
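The four steps above can be sketched end to end. Everything here is illustrative: `send_to_api` stands in for the real OpenAI call, and the input/output field names are assumptions, not the project's actual schema:

```python
import json
from typing import Callable, Optional

REQUIRED_OUTPUT = {"risk_level", "score", "explanation"}  # illustrative schema


def format_input(patient: dict) -> str:
    """Step 1: format patient data and check required input fields."""
    missing = {"age", "bmi", "glucose"} - patient.keys()  # illustrative inputs
    if missing:
        raise ValueError(f"missing patient fields: {sorted(missing)}")
    return json.dumps(patient)


def validate_output(raw: str) -> Optional[dict]:
    """Step 3: validate the JSON structure and required fields."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    return data if REQUIRED_OUTPUT <= data.keys() else None


def assess(patient: dict, send_to_api: Callable[[str], str]) -> dict:
    payload = format_input(patient)   # 1. input processing
    raw = send_to_api(payload)        # 2. API interaction
    result = validate_output(raw)     # 3. response validation
    if result is None:                # 4. result handling
        raise ValueError("model returned an invalid assessment")
    return result
```

Injecting `send_to_api` as a parameter also makes step 2 trivial to mock in tests.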
## Error Handling

The system implements robust error handling:

1. **Input Validation**:
   - Checks for required fields
   - Validates data types and formats
   - Provides clear error messages

2. **API Error Handling**:
   - Handles network issues
   - Manages API rate limits
   - Processes API-specific errors

3. **Response Validation**:
   - Validates the JSON structure
   - Checks for required fields
   - Ensures data consistency
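Rate limits and transient network failures are commonly handled with retries and exponential backoff. A generic sketch of that pattern follows; `RateLimitError` here is a local stand-in for the OpenAI client's actual rate-limit exception:

```python
import time


class RateLimitError(Exception):
    """Stand-in for the OpenAI client's rate-limit exception."""


def with_retries(call, max_attempts=3, base_delay=1.0):
    """Retry `call` on rate-limit errors, doubling the delay each attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts:
                raise  # give up after the last attempt
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Non-retryable errors (bad requests, auth failures) should be allowed to propagate rather than retried.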
## Extending the Project

### Adding New Fields

To add new fields to the assessment model:

1. Update the `Assessment` class in `diabetes_diagnosis.py`
2. Add validation rules as needed
3. Update the test suite to cover the new fields
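For instance, adding a hypothetical `recommendation` field with its own constraint might look like this (both field names are illustrative, not the project's actual schema):

```python
from typing import Optional

from pydantic import BaseModel, Field


class Assessment(BaseModel):
    risk_level: str  # existing field (illustrative)
    # New optional field; max_length acts as its validation rule:
    recommendation: Optional[str] = Field(None, max_length=500)
```

Pydantic then enforces the constraint automatically wherever the model is constructed, including inside `validate_response`.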
### Modifying Validation Rules

To modify validation rules:

1. Edit the `validate_response` function
2. Add new validation rules to the `Assessment` class
3. Update tests to verify the new rules
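Rules that span multiple fields, rather than constraining a single one, are a natural fit for `validate_response` itself. An illustrative sketch (the rule and field names are assumptions):

```python
import json
from typing import Optional


def validate_response(raw: str) -> Optional[dict]:
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    # Illustrative cross-field rule: a high risk level must be explained
    if data.get("risk_level") == "high" and not data.get("explanation"):
        return None
    return data
```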
### Adding New Tests

To add new tests:

1. Create test cases in the `tests/` directory
2. Use the existing test fixtures
3. Follow the pattern of existing tests
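New tests typically follow this shape: small, focused functions that feed one malformed input to the validator and assert on the failure mode. To keep this sketch self-contained, `validate_response` below is a stand-in; in the real suite you would import it from `diabetes_diagnosis` and use the fixtures in `conftest.py`:

```python
import json


def validate_response(raw):
    """Stand-in for diabetes_diagnosis.validate_response."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return None


def test_empty_string_is_rejected():
    assert validate_response("") is None


def test_valid_json_passes():
    assert validate_response('{"risk_level": "low"}') == {"risk_level": "low"}
```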
## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Run the tests: `bash scripts/run_tests.sh`
5. Submit a pull request
## License

[Specify your license here]
For setup and installation instructions, please refer to README.md.