@@ -63,37 +63,39 @@ The main pipeline handles the complete test generation workflow:
 └─────────────────────────────────────────────────────────────────┘
 ```
 
-## 📦 Quick Start
+## 🚀 Quick Start
 
-### Prerequisites
-
-- Python 3.9+
-- ZenML installed (`pip install zenml`)
-- Git
-- OpenAI API key (optional, can use fake provider)
-
-### Setup
+Get QualityFlow running in 3 simple steps:
 
+### 1. Install Dependencies
 ```bash
 pip install -r requirements.txt
 ```
 
-2. **Set up OpenAI (optional)**:
+### 2. Optional: Set up OpenAI API Key
 ```bash
 export OPENAI_API_KEY="your-api-key-here"
 ```
+*Skip this step to use the fake provider for testing*
 
-3. **Run the pipeline**:
+### 3. Run the Pipeline
 ```bash
 python run.py
 ```
 
-That's it! The pipeline will:
-- Clone the configured repository (default: requests library)
-- Analyze Python files and select candidates
-- Generate tests using OpenAI (or fake provider if no API key)
+**That's it!** The pipeline will automatically:
+- Clone a sample repository (requests library by default)
+- Analyze Python files and select test candidates
+- Generate tests using an LLM or the fake provider
 - Run tests and measure coverage
-- Generate a comprehensive report comparing approaches
+- Create a detailed comparison report
+
+### What Happens Next?
+
+- Check the ZenML dashboard to see pipeline results
+- View generated test files and coverage reports
+- Compare LLM vs baseline test approaches
+- Experiment with different configurations (see the sketch below)
 
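+For a sense of how `run.py` wires a config file to the pipeline, here is a
+minimal sketch (an illustrative assumption, not the repo's actual code; the
+pipeline import path follows the project structure shown below):
+
+```python
+# Hypothetical entry-point sketch: apply a YAML config, then trigger the pipeline.
+from pipelines.generate_and_evaluate import generate_and_evaluate
+
+if __name__ == "__main__":
+    # with_options(config_path=...) is ZenML's standard way to apply a YAML
+    # run configuration before invoking a pipeline.
+    generate_and_evaluate.with_options(
+        config_path="configs/experiment.default.yaml"
+    )()
+```
+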
 ## ⚙️ Configuration
 
@@ -171,18 +173,17 @@ Requirements:
 
 ### A/B Testing Experiments
 
-Use run templates for systematic comparisons:
+Compare different configurations by running the pipeline with different config files:
 
 ```bash
 # Compare prompt versions
-python scripts/run_experiment.py --config configs/experiment.default.yaml
-python scripts/run_experiment.py --config configs/experiment.strict.yaml
+python run.py --config configs/experiment.default.yaml
+python run.py --config configs/experiment.strict.yaml
 
-# Compare in ZenML dashboard:
+# Compare results in the ZenML dashboard:
 # - Coverage metrics
 # - Test quality scores
 # - Token usage and cost
-# - Promotion decisions
 ```
 
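+Beyond the dashboard, runs can also be diffed programmatically. A minimal
+sketch, assuming a recent ZenML version and that the pipeline logs a
+`coverage_percent` metadata key (both are assumptions for illustration):
+
+```python
+# Hypothetical comparison sketch: print coverage metadata for the latest runs.
+from zenml.client import Client
+
+runs = Client().list_pipeline_runs(size=2)  # the two most recent runs
+for run in runs.items:
+    coverage = run.run_metadata.get("coverage_percent")  # assumed key name
+    print(run.name, run.status, coverage)
+```
+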
 ### Production Deployment
@@ -199,36 +200,23 @@ zenml stack register production_stack \
   -a s3_store -c ecr_registry -o k8s_orchestrator --set
 ```
 
-### Scheduled Regression
-
-Register batch regression for daily execution:
+### Scheduled Execution
 
-```bash
-python scripts/run_batch.py --config configs/schedule.batch.yaml --schedule
-```
+For automated runs, set up scheduled execution using your preferred orchestration tool or ZenML's scheduling features.
 
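+One way to do this with ZenML itself is a cron `Schedule`. A minimal sketch
+(the pipeline name and cron expression are assumptions, and the active
+orchestrator must support schedules):
+
+```python
+# Hypothetical scheduling sketch: run the pipeline every day at 02:00.
+from zenml.config.schedule import Schedule
+from pipelines.generate_and_evaluate import generate_and_evaluate
+
+scheduled = generate_and_evaluate.with_options(
+    schedule=Schedule(cron_expression="0 2 * * *")  # assumed cadence
+)
+scheduled()  # registers the schedule with the active orchestrator
+```
+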
 ## 🏗️ Project Structure
 
 ```
 qualityflow/
 ├── README.md
-├── pyproject.toml
 ├── requirements.txt
-├── .env.example
-├── zenml.yaml
 │
 ├── configs/                       # Pipeline configurations
 │   ├── experiment.default.yaml    # Standard experiment settings
-│   ├── experiment.strict.yaml     # High-quality gates
-│   └── schedule.batch.yaml        # Batch regression schedule
-│
-├── domain/                        # Core data models
-│   ├── schema.py                  # Pydantic models
-│   └── stages.py                  # Deployment stages
+│   └── experiment.strict.yaml     # High-quality gates
 │
 ├── pipelines/                     # Pipeline definitions
-│   ├── generate_and_evaluate.py   # Experiment pipeline
-│   └── batch_regression.py        # Scheduled regression
+│   └── generate_and_evaluate.py   # Main pipeline
 │
 ├── steps/                         # Pipeline steps
 │   ├── select_input.py            # Source specification
@@ -237,43 +225,27 @@ qualityflow/
 │   ├── gen_tests_agent.py         # LLM test generation
 │   ├── gen_tests_baseline.py      # Heuristic test generation
 │   ├── run_tests.py               # Test execution & coverage
-│   ├── evaluate_coverage.py       # Metrics & gate evaluation
-│   ├── compare_and_promote.py     # Model registry promotion
-│   ├── resolve_test_pack.py       # Test pack resolution
+│   ├── evaluate_coverage.py       # Metrics evaluation
 │   └── report.py                  # Report generation
 │
 ├── prompts/                       # Jinja2 prompt templates
 │   ├── unit_test_v1.jinja         # Standard test generation
 │   └── unit_test_strict_v2.jinja  # Comprehensive test generation
 │
-├── materializers/                 # Custom artifact handling
-├── utils/                         # Utility functions
-│
-├── registry/                      # Test Pack registry docs
-│   └── README.md
-│
-├── run_templates/                 # Experiment templates
-│   ├── ab_agent_vs_strict.json    # A/B testing configuration
-│   └── baseline_only.json         # Baseline establishment
-│
-├── scripts/                       # CLI scripts
-│   ├── run_experiment.py          # Experiment runner
-│   └── run_batch.py               # Batch regression runner
+├── examples/                      # Demo code for testing
+│   └── toy_lib/                   # Sample library
+│       ├── calculator.py
+│       └── string_utils.py
 │
-└── examples/                      # Demo code for testing
-    └── toy_lib/                   # Sample library
-        ├── calculator.py
-        └── string_utils.py
+└── run.py                         # Main entry point
 ```
 
 ### Key Components
 
-- **Domain Models**: Pydantic schemas for type safety and validation
 - **Pipeline Steps**: Modular, reusable components with clear interfaces
 - **Prompt Templates**: Jinja2 templates for LLM test generation
-- **Configuration**: YAML-driven experiment and deployment settings
-- **Quality Gates**: Configurable thresholds for coverage and promotion
-- **Model Registry**: ZenML Model Registry integration for test pack versioning
+- **Configuration**: YAML-driven experiment settings
+- **Test Generation**: Both LLM-based and heuristic approaches for comparison
 
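+To make "modular steps with clear interfaces" concrete, here is a minimal
+sketch of what such a step can look like (the function name and signature are
+illustrative assumptions, not the repo's actual code):
+
+```python
+# Hypothetical step sketch: typed inputs and outputs give each step a clear interface.
+from zenml import step
+
+@step
+def evaluate_coverage(coverage_percent: float, min_coverage: float = 80.0) -> bool:
+    """Return True when measured coverage clears the configured threshold."""
+    return coverage_percent >= min_coverage
+```
+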
 ## 🚀 Production Deployment
 
@@ -295,17 +267,7 @@ zenml stack register production \
 
 ### Scheduled Execution
 
-Set up automated regression testing:
-
-```bash
-# Register schedule (example with ZenML Cloud)
-python scripts/run_batch.py --config configs/schedule.batch.yaml --schedule
-
-# Monitor via dashboard:
-# - Daily regression results
-# - Coverage trend analysis
-# - Test pack performance
-```
+Set up automated regression testing using ZenML's scheduling capabilities (see the `Schedule` sketch above) or your preferred orchestration platform.
 
 ## 🤝 Contributing
 
@@ -344,7 +306,7 @@ Run with debug logging:
 
 ```bash
 export ZENML_LOGGING_VERBOSITY=DEBUG
-python scripts/run_experiment.py --config configs/experiment.default.yaml
+python run.py --config configs/experiment.default.yaml
 ```
 
 ## 📚 Resources