# Azure CLI Help Documentation Evaluator

This tool evaluates the quality of Azure CLI help documentation using Azure OpenAI. It extracts the help content from Python files and evaluates it with two evaluation frameworks.

## Overview

The evaluator performs three main steps:
1. **Extract**: Extracts help documentation from `*_help.py` files
2. **Evaluate**: Runs two evaluators:
   - **Simple Evaluator**: Quick quality assessment
   - **Document Quality Scoring Framework (DQSF)**: Comprehensive Microsoft Learn quality standards assessment
3. **Report**: Generates detailed markdown reports with both evaluations

## Prerequisites

- Python 3.10 or higher
- Azure OpenAI access with API credentials

## Installation

1. Install required dependencies:
```bash
pip install -r requirements.txt
```

2. Create a `.env` file in the `scripts/ai/` directory with your Azure OpenAI credentials:

```bash
# Azure OpenAI Configuration
AZURE_OPENAI_API_KEY=your_api_key_here
AZURE_OPENAI_API_VERSION=2025-04-01-preview
AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4o-mini
```

### Getting Azure OpenAI Credentials

- `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key from the Azure Portal
- `AZURE_OPENAI_API_VERSION`: API version (e.g., `2025-04-01-preview`)
- `AZURE_OPENAI_ENDPOINT`: Your Azure OpenAI resource endpoint URL
- `AZURE_OPENAI_DEPLOYMENT_NAME`: The name of your deployed model (e.g., `gpt-4o-mini`, `gpt-4`)
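
At startup the tool reads these values from the environment. A minimal sketch of how they are typically consumed, assuming `python-dotenv` and the official `openai` package (1.x) are among the installed dependencies:

```python
# Sketch only: load .env, then build the client the evaluator would use.
import os

from dotenv import load_dotenv
from openai import AzureOpenAI

load_dotenv()  # pulls the .env file into the process environment

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)
deployment = os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"]
```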

## Usage

### Evaluate a Single Help File

```bash
python evaluate-help.py -i ../../src/azure-cli/azure/cli/command_modules/search/_help.py
```

### Evaluate All Help Files in a Directory

```bash
python evaluate-help.py -i ../../src/azure-cli/azure/cli/command_modules/
```

### Specify a Custom Output Directory

```bash
python evaluate-help.py -i ../../src/azure-cli/azure/cli/command_modules/ -o ./custom-analysis
```

## Command Line Options

The script accepts two options (an `argparse` sketch follows the list):

- `-i, --input` (required): Path to a help file, or to a directory containing help files
- `-o, --output` (optional): Output directory for analysis results (default: `./analysis`)
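
The following sketch mirrors the documented options; it illustrates the CLI surface and is not necessarily the exact parser code in `evaluate-help.py`:

```python
# Illustrative argparse setup matching the documented options.
import argparse

parser = argparse.ArgumentParser(
    description="Evaluate Azure CLI help documentation quality."
)
parser.add_argument("-i", "--input", required=True,
                    help="help file or directory containing *_help.py files")
parser.add_argument("-o", "--output", default="./analysis",
                    help="output directory for analysis results")
args = parser.parse_args()
```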

## Output

The tool generates markdown files in the output directory with the following structure:

```
analysis/
  ├── modulename_YYYYMMDD-HHMMSS.md
  ├── ...
```

Each analysis file contains:
- **Metadata**: Date, source file, model used, token usage
- **Original Source Code**: Collapsible section with the raw Python code
- **Extracted Help Content**: The extracted documentation
- **Simple Quality Evaluation**: Quick assessment results
- **DQSF Evaluation**: Comprehensive quality scores (out of 100 points)

## Evaluation Frameworks

### Simple Evaluator
Provides a quick quality assessment across:
- Clarity and Readability
- Completeness
- Accuracy
- Structure and Organization
- Examples and Practical Usage
- Accessibility

### Document Quality Scoring Framework (DQSF)
A comprehensive framework based on Microsoft Learn standards, evaluating five categories (20 points each):
1. **Module Description**: Overview and context
2. **Command Description**: Behavior and prerequisites
3. **Examples**: Runnable, up-to-date examples
4. **Parameter Descriptions**: Clear, detailed parameter documentation
5. **Parameter Properties/Sets**: Complete parameter specifications

Each category is scored across six dimensions (a worked scoring example follows the list):
- Practical & example-rich
- Consistent with style guide
- Detailed & technically complete
- Current and up-to-date
- Easy to navigate
- Clear parameter descriptions
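
To make the arithmetic concrete, here is a small sketch of how a total could be computed. The even split of each category's 20 points across the six dimensions is an assumption for illustration; the actual rubric lives in `prompts/document-quality-scoring-framework.md`.

```python
# Illustrative DQSF arithmetic: 5 categories x 20 points = 100 points.
# The even dimension weighting below is an assumption, not the rubric.
CATEGORIES = [
    "module_description",
    "command_description",
    "examples",
    "parameter_descriptions",
    "parameter_properties",
]
DIMENSIONS = 6


def category_score(dimension_scores: list[float]) -> float:
    """Average six 0-20 dimension scores into one 0-20 category score."""
    assert len(dimension_scores) == DIMENSIONS
    return sum(dimension_scores) / DIMENSIONS


# A module scoring 15/20 on every dimension of every category
# totals 5 * 15 = 75 out of 100.
total = sum(category_score([15.0] * DIMENSIONS) for _ in CATEGORIES)
print(f"{total:.0f}/100")  # 75/100
```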

## Architecture

The tool consists of two main components:

### `help_evaluator.py`
The `HelpEvaluator` class (a skeleton is sketched below) handles:
- Azure OpenAI client initialization
- Prompt template management
- Help content extraction
- Running both evaluators
- Generating analysis reports
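
A hedged skeleton of how this class might be shaped, assuming the `openai` package's `AzureOpenAI` client; the method names here are illustrative, not the file's actual API:

```python
# Hypothetical skeleton; the real names in help_evaluator.py may differ.
import os
from pathlib import Path

from openai import AzureOpenAI


class HelpEvaluator:
    def __init__(self, prompts: dict[str, str], output_dir: Path):
        self.client = AzureOpenAI(
            api_key=os.environ["AZURE_OPENAI_API_KEY"],
            api_version=os.environ["AZURE_OPENAI_API_VERSION"],
            azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        )
        self.deployment = os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"]
        self.prompts = prompts      # name -> prompt template text
        self.output_dir = output_dir

    def _run_prompt(self, name: str, content: str) -> str:
        """Send one prompt template plus the given content to the model."""
        response = self.client.chat.completions.create(
            model=self.deployment,
            messages=[
                {"role": "system", "content": self.prompts[name]},
                {"role": "user", "content": content},
            ],
        )
        return response.choices[0].message.content

    def evaluate_file(self, help_file: Path) -> dict[str, str]:
        """Extract the help content, then run both evaluators on it."""
        source = help_file.read_text()
        extracted = self._run_prompt("extractor", source)
        return {
            "extracted": extracted,
            "simple": self._run_prompt("simple-evaluator", extracted),
            "dqsf": self._run_prompt(
                "document-quality-scoring-framework", extracted
            ),
        }
```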

### `evaluate-help.py`
The main script that:
- Parses command-line arguments
- Finds help files (supports a single file or a directory; see the sketch below)
- Manages the evaluation workflow
- Provides progress feedback with a spinner
- Generates summary statistics
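
File discovery plausibly reduces to a few lines; this sketch assumes directories are searched recursively for `*_help.py`, as the Usage examples suggest, and the function name is illustrative:

```python
# Sketch of the discovery step.
from pathlib import Path


def find_help_files(input_path: str) -> list[Path]:
    """Return a single help file, or all *_help.py files under a directory."""
    path = Path(input_path)
    if path.is_file():
        return [path]
    return sorted(path.rglob("*_help.py"))
```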

## Prompts

Evaluation prompts are stored in the `prompts/` directory:
- `extractor.md`: Extracts help content and module name
- `simple-evaluator.md`: Simple quality evaluation criteria
- `document-quality-scoring-framework.md`: DQSF evaluation criteria

You can customize these prompts to adjust the evaluation criteria.
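
The startup log's `Loaded prompts: [...]` line suggests each `prompts/*.md` file is loaded under its filename stem. A likely loading pattern (an assumption, not the confirmed implementation):

```python
# Probable prompt-loading pattern: key each prompts/*.md file by its stem.
from pathlib import Path

prompts = {p.stem: p.read_text() for p in Path("prompts").glob("*.md")}
print(f"Loaded prompts: {list(prompts)}")  # order depends on the filesystem
```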

## Token Usage

The tool tracks token usage for each operation:
- Extraction tokens
- Simple evaluation tokens
- DQSF evaluation tokens
- Total tokens per file

This helps estimate costs and optimize evaluations.
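
With the `openai` client, each response carries a `usage` block, so per-step accounting can be as simple as the following sketch (the step names are illustrative):

```python
# Sketch of per-file token accounting: every chat completion response
# exposes usage.total_tokens, which can be accumulated per step.
totals = {"extract": 0, "simple": 0, "dqsf": 0}


def record(step: str, response) -> None:
    totals[step] += response.usage.total_tokens


# After a file is processed:
file_total = sum(totals.values())
```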

## Example Output

```
Searching for help files in: ../../src/azure-cli/azure/cli/command_modules/search/
Found 1 help file(s)

Initializing HelpEvaluator...
Output directory: ./analysis
Loaded prompts: ['extractor', 'simple-evaluator', 'document-quality-scoring-framework']

================================================================================
Starting evaluation
================================================================================

[1/1] Processing: ../../src/azure-cli/azure/cli/command_modules/search/_help.py
  ⠋ Working...
  ✓ Analysis saved to: search_20251127-123045.md
  Total tokens used: 4821

================================================================================
Evaluation Complete
================================================================================

Processed: 1/1 files
Total tokens used: 4,821

Analysis files saved to: ./analysis/

Results summary:
  - search: 4821 tokens → search_20251127-123045.md
```

## Troubleshooting

### Missing Environment Variables
If you get an error about missing API credentials, ensure your `.env` file is in the `scripts/ai/` directory and contains all required variables.

### API Rate Limits
If you encounter rate limit errors, consider:
- Adding delays between evaluations (see the backoff sketch below)
- Using a model with higher rate limits
- Processing files in smaller batches
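
A simple backoff wrapper is one way to add those delays. This is a sketch, not part of the tool, and it assumes the `openai` package's `RateLimitError`:

```python
# Retry with exponential backoff on HTTP 429 responses (sketch only).
import time

from openai import RateLimitError


def with_backoff(call, max_retries: int = 5):
    """Invoke call(); on RateLimitError wait 1s, 2s, 4s, ... and retry."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            time.sleep(2 ** attempt)
    raise RuntimeError("Still rate-limited after retries")
```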

### Token Limits
If evaluations fail due to token limits:
- Use a model with a larger context window
- Increase the `max_tokens` parameter in the code (see the sketch below)
- Split large help files into smaller sections
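
For the second option, `max_tokens` is a standard parameter of the chat completions call. Reusing the `client` and `deployment` from the Installation sketch, the change might look like this:

```python
# max_tokens caps the completion length; raise it if evaluations come
# back truncated. client/deployment are built as in the earlier sketch.
response = client.chat.completions.create(
    model=deployment,
    messages=[{"role": "user", "content": "..."}],
    max_tokens=4096,
)
```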

## Contributing

To modify evaluation criteria:
1. Edit the appropriate prompt file in `prompts/`
2. Test with a sample help file
3. Adjust scoring weights or add new dimensions as needed

## License

This tool is part of the Azure CLI project and follows the same license terms.