Commit ffe2361
Add report verification tool (#7)
* Add verification tool for analysis reports
- Created verify_analysis_report.py to regenerate all statistics from trace data
- Supports optional --expected-values parameter for PASS/FAIL verification
- Full test coverage (16 tests) with mocked analysis functions
- Type-safe implementation with mypy strict mode
- All CI checks pass (black, ruff, mypy, bandit)
- Updated README with usage documentation
Verification tool regenerates latency distribution, bottleneck analysis,
and parallel execution metrics deterministically for audit purposes.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* feat: Add Phase 3B cost analysis - initial implementation (TDD)
Implements configurable cost analysis for LangSmith traces following
strict test-driven development methodology.
- PricingConfig: Configurable dataclass for any LLM provider pricing
- TokenUsage: Extract token data from trace outputs/inputs
- CostBreakdown: Calculate costs with input/output/cache breakdown
- Full test coverage: 12 tests passing
Test-first approach (RED-GREEN-REFACTOR):
- Tests written before implementation
- Minimal implementation to pass tests
- All validations and edge cases covered
Phase 3B implementation plan documented in plans/ directory.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
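As a rough sketch, the shapes described in the commit above might look like the following. Only the class names come from the commit; the field names, per-million-token pricing convention, and `calculate_cost()` helper are assumptions:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PricingConfig:
    """Per-million-token prices; works for any LLM provider. Field names are illustrative."""
    input_price_per_mtok: float
    output_price_per_mtok: float


@dataclass(frozen=True)
class TokenUsage:
    """Token counts extracted from a trace's outputs/inputs."""
    prompt_tokens: int
    completion_tokens: int


@dataclass(frozen=True)
class CostBreakdown:
    input_cost: float
    output_cost: float

    @property
    def total_cost(self) -> float:
        return self.input_cost + self.output_cost


def calculate_cost(usage: TokenUsage, pricing: PricingConfig) -> CostBreakdown:
    # Convert token counts to dollars at the configured per-million-token rates.
    return CostBreakdown(
        input_cost=usage.prompt_tokens / 1_000_000 * pricing.input_price_per_mtok,
        output_cost=usage.completion_tokens / 1_000_000 * pricing.output_price_per_mtok,
    )
```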
* Add workflow cost aggregation (TDD GREEN)
Implemented WorkflowCostAnalysis and calculate_workflow_cost()
following test-first approach. Tests passing: 14/14.
Features:
- Aggregate costs across all traces in workflow
- Track node-level cost breakdowns
- Sum total tokens across workflow
- Handle workflows with no token data gracefully
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
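A minimal sketch of the aggregation described above, operating on plain dicts for illustration (the real function takes Trace objects; field names and the result fields beyond those listed are assumptions):

```python
from dataclasses import dataclass


@dataclass
class WorkflowCostAnalysis:
    total_cost: float
    total_tokens: int
    node_costs: dict[str, float]
    traces_with_data: int


def calculate_workflow_cost(traces: list[dict]) -> WorkflowCostAnalysis:
    """Sum per-trace costs and tokens; traces without token data are skipped gracefully."""
    total_cost = 0.0
    total_tokens = 0
    node_costs: dict[str, float] = {}
    with_data = 0
    for t in traces:
        if t.get("total_tokens") is None:
            continue  # no token data for this trace; contributes nothing
        with_data += 1
        total_tokens += t["total_tokens"]
        cost = t.get("cost", 0.0)
        total_cost += cost
        # Track node-level cost breakdowns keyed by trace/node name.
        node_costs[t["name"]] = node_costs.get(t["name"], 0.0) + cost
    return WorkflowCostAnalysis(total_cost, total_tokens, node_costs, with_data)
```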
* Add scaling cost projections (TDD GREEN)
Implemented ScalingProjection and project_scaling_costs()
following test-first approach. Tests passing: 17/17.
Features:
- Project costs at 1x, 10x, 100x, 1000x scale factors
- Calculate monthly cost estimates when a monthly volume is provided
- Handle zero-cost scenarios gracefully
- Configurable scaling factors
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Add node aggregation and main analyze_costs() function (TDD GREEN)
Completed Phase 3B cost analysis implementation following TDD.
Tests passing: 20/20.
Features:
- NodeCostSummary and CostAnalysisResults dataclasses
- aggregate_node_costs() - aggregate by node type with percentages
- analyze_costs() - main orchestration function
- Complete end-to-end cost analysis workflow
- Configurable pricing model (PricingConfig)
- Scaling projections at 1x, 10x, 100x, 1000x
- Node-level cost breakdowns
- Data quality tracking
Phase 3B COMPLETE - Ready for real data analysis!
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
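The per-node aggregation with percentages might be sketched like this (the real code returns `NodeCostSummary` dataclasses; dicts are used here for brevity):

```python
def aggregate_node_costs(node_costs: dict[str, float]) -> list[dict]:
    """Aggregate per-node costs with percentage of total, sorted by cost descending."""
    total = sum(node_costs.values())
    summaries = []
    for name, cost in sorted(node_costs.items(), key=lambda kv: kv[1], reverse=True):
        # Guard against a zero total so empty/no-cost workflows don't divide by zero.
        pct = (cost / total * 100) if total > 0 else 0.0
        summaries.append({"node": name, "cost": cost, "pct_of_total": pct})
    return summaries
```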
* Apply code quality fixes (Ruff + Black formatting)
Fixed unused imports and applied Black auto-formatting.
All quality checks passing:
- ✅ Ruff: No linting issues
- ✅ Black: Formatted to standard
- ✅ Mypy: No type errors
- ✅ Bandit: Only expected test assertions (low severity)
- ✅ Tests: 20/20 passing
Ready for CI/CD pipeline.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Add Phase 3C failure detection and retry analysis (TDD GREEN)
Implemented core failure analysis functions following TDD.
Tests passing: 13/13 for failure detection and retry sequences.
Features:
- FailureInstance, RetrySequence data structures
- detect_failures() - identify failures from trace status
- classify_error() - regex-based error classification
- detect_retry_sequences() - heuristic retry detection
- calculate_retry_success_rate() - retry effectiveness metric
Phase 3C foundation complete - ready for node analysis.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
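The regex classification and retry success rate metric could look roughly like this; the patterns and category labels are illustrative, not the project's actual taxonomy:

```python
import re

# Illustrative (pattern, label) pairs; the real classifier's categories are not shown
# in the commit message.
ERROR_PATTERNS = [
    (r"rate.?limit|429", "rate_limit"),
    (r"timeout|timed out", "timeout"),
    (r"validation|schema", "validation"),
]


def classify_error(message: str) -> str:
    """Return the first matching error category, or 'unknown'."""
    for pattern, label in ERROR_PATTERNS:
        if re.search(pattern, message, re.IGNORECASE):
            return label
    return "unknown"


def calculate_retry_success_rate(sequences: list[dict]) -> float:
    """Fraction of retry sequences whose final attempt succeeded."""
    if not sequences:
        return 0.0
    return sum(1 for s in sequences if s["final_success"]) / len(sequences)
```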
* Complete Phase 3C failure analysis with node stats and main function (TDD GREEN)
Implemented node failure analysis and main orchestration function.
Tests passing: 15/15 for complete Phase 3C.
Features:
- analyze_node_failures() - aggregate failures by node type
- Node-level stats: execution count, failure rate, retry sequences
- Error type tracking per node
- analyze_failures() - main orchestration function
- Overall success rate calculation
- Error distribution aggregation
- Retry success rate analysis
Phase 3C COMPLETE with code quality checks passing:
- ✅ Ruff: No linting issues
- ✅ Black: Formatted
- ✅ Mypy: No type errors
- ✅ Tests: 15/15 passing
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
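A sketch of the per-node failure aggregation described above, using plain dicts for the failure records and execution counts (the real inputs are richer dataclasses):

```python
def analyze_node_failures(
    failures: list[dict], executions: dict[str, int]
) -> dict[str, dict]:
    """Per-node stats: execution count, failure count, failure rate, error types."""
    stats: dict[str, dict] = {}
    for node, count in executions.items():
        node_failures = [f for f in failures if f["node"] == node]
        stats[node] = {
            "executions": count,
            "failures": len(node_failures),
            "failure_rate": len(node_failures) / count if count else 0.0,
            # Track which error types this node produced.
            "error_types": sorted({f["error_type"] for f in node_failures}),
        }
    return stats
```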
* Extend verification tool with Phase 3B/3C support
Added verify_cost_analysis() and verify_failure_analysis() functions
to verify_analysis_report.py with command-line control.
Features:
- verify_cost_analysis() - Verify Phase 3B cost calculations
* Workflow cost statistics (avg, median, range)
* Top 3 cost drivers by node
* Scaling projections (1x, 10x, 100x, 1000x)
* Cache effectiveness if available
- verify_failure_analysis() - Verify Phase 3C failure calculations
* Overall success/failure rates
* Top 5 nodes by failure rate
* Error distribution analysis
* Retry sequence analysis
* Validator effectiveness
- New CLI arguments:
* --phases: Select 3a, 3b, 3c, or all (default: 3a)
* --pricing-model: Choose pricing model for cost analysis
Usage examples:
python verify_analysis_report.py traces.json --phases all
python verify_analysis_report.py traces.json --phases 3b
python verify_analysis_report.py traces.json --phases 3c
All quality checks passing (Ruff, Black, Mypy).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
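The CLI surface described above might be wired up roughly as follows; the help strings are assumptions, but the argument names and defaults come from the commit:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="verify_analysis_report.py")
    parser.add_argument("traces", help="Path to exported traces JSON")
    parser.add_argument(
        "--phases",
        choices=["3a", "3b", "3c", "all"],
        default="3a",
        help="Which analysis phases to verify",
    )
    parser.add_argument(
        "--pricing-model",
        default=None,
        help="Pricing model name for Phase 3B cost analysis",
    )
    return parser
```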
* Add Phase 3B/3C documentation and fix type errors
- Update README with comprehensive Phase 3B cost analysis docs
- Update README with comprehensive Phase 3C failure analysis docs
- Update verification tool CLI examples with --phases argument
- Update test counts: 99 total tests (33 + 31 + 20 + 15)
- Update project structure with new modules
- Fix mypy type errors:
- Add return type annotation to PricingConfig.__post_init__()
- Filter None start_time traces in retry detection
- Fix max() key function for error distribution
- All 35 tests passing (20 cost + 15 failure)
- All quality checks passing (Ruff, Black, Mypy, Bandit)
* Add Phase 3B/3C analysis script and fix None outputs handling
- Create run_phase3bc_analysis.py for automated Phase 3B/3C analysis
- Fix analyze_cost.py to handle None outputs/inputs gracefully
- Generates intermediate JSON data files for Assessment
- Reports limitations when token usage data unavailable
- All tests still passing (20 cost + 15 failure)
* Remove client-specific analysis script
- Removed run_phase3bc_analysis.py (client-specific naming and paths)
- Analysis tools remain generic and reusable
- Client-specific analysis scripts should live in client repos
* Add token usage export to LangSmith traces
Implemented using TDD (RED-GREEN-REFACTOR):
- RED: Added 2 failing tests for token export
- GREEN: Added total_tokens, prompt_tokens, completion_tokens to trace export
- REFACTOR: Fixed integration tests to include token fields in mocks
Changes:
- export_langsmith_traces.py: Extract token fields from Run objects
- test_export_langsmith_traces.py: Add token usage tests + update mocks
Token fields exported:
- total_tokens: Total tokens used (LLM runs only)
- prompt_tokens: Input/prompt tokens (LLM runs only)
- completion_tokens: Generated/output tokens (LLM runs only)
- All fields gracefully handle None for non-LLM runs
All 133 tests passing.
Enables Phase 3B cost analysis with real token usage data.
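The extraction from Run objects might be as simple as the following; the attribute names mirror the exported field names listed above, but treat them as assumptions about the LangSmith Run schema:

```python
def token_fields_from_run(run: object) -> dict:
    """Pull token counts off a LangSmith Run-like object.

    Non-LLM runs lack these attributes, so each field gracefully
    falls back to None.
    """
    return {
        "total_tokens": getattr(run, "total_tokens", None),
        "prompt_tokens": getattr(run, "prompt_tokens", None),
        "completion_tokens": getattr(run, "completion_tokens", None),
    }
```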
* Update cost analysis to extract token fields from trace level
Modified extract_token_usage() to check top-level trace fields first:
- total_tokens
- prompt_tokens
- completion_tokens
This supports the updated export format where token data is exported
at the trace level (not nested in outputs/usage_metadata).
Maintains backwards compatibility with legacy format in outputs/inputs.
All 20 cost analysis tests still passing.
* Add token fields to Trace dataclass and JSON loading
Extended Trace dataclass with token fields:
- total_tokens: Total tokens used (None for non-LLM traces)
- prompt_tokens: Input/prompt tokens (None for non-LLM traces)
- completion_tokens: Output/completion tokens (None for non-LLM traces)
Updated _build_trace_from_dict() to load token fields from JSON.
This completes the end-to-end token tracking chain:
1. Export: Token data exported at trace level
2. Loading: Token fields loaded into Trace objects
3. Analysis: Cost analysis extracts and calculates costs
Verified with test showing ~$0.14 avg cost per workflow.
All 133 tests passing.
* feat: Add cache token extraction for cost analysis
Add cache_read_tokens and cache_creation_tokens fields to trace export
to enable cache effectiveness measurement in Phase 3B cost analysis.
Changes:
- Extract cache tokens from nested outputs/inputs["usage_metadata"]["input_token_details"]
- Support both cache_creation and cache_creation_input_tokens field names
- Multi-level fallback: top-level -> outputs -> inputs
- Preserve 0 values correctly (explicit None checks)
- Add 3 comprehensive tests for nested extraction and backward compatibility
- All 46 tests passing
Implements a test-first TDD approach, following the PDCA framework.
Generated with Claude Code (https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: Extract cache tokens from LangChain message structure
Fixed cache token extraction to look in the correct location for LangSmith
exports that use LangChain-serialized AIMessage format.
Changes:
- Added Fallback 1: Extract from outputs.generations[0][0].message.kwargs.usage_metadata
- Updated test to use correct LangChain message structure
- Verified with 1000-trace export: 684 runs now have cache_read_tokens
The LangSmith Python SDK exports token metadata in LangChain AIMessage
format under generations[0][0].message.kwargs.usage_metadata, not
directly under outputs.usage_metadata.
Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* style: Apply black formatting
Applied black code formatter to maintain consistent code style.
Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* chore: Configure bandit to exclude test files
Added test file exclusion pattern to .bandit config to avoid
B101 (assert_used) warnings in pytest test files where asserts
are expected and appropriate.
For CI/CD, run: bandit -r export_langsmith_traces.py
Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
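The exclusion might look like the following in the `.bandit` INI file; the file list is illustrative and the exact key behavior depends on the bandit version in use:

```ini
[bandit]
# Exclude pytest files, where assert statements (B101) are expected and appropriate.
exclude = ./test_export_langsmith_traces.py,./test_analyze_cost.py
```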
* Remove client-specific references and fix type issues
- Remove client-specific references from codebase:
- Replace specific node names with generic examples in docs
- Update test data to use generic node names (process_data, transform_output)
- Delete temporary debug scripts with client file paths
- Fix mypy type errors in cache effectiveness functions:
- Add explicit None checks for cached_tokens in analyze_cost.py
- Ensure type safety in cache calculations
- Code quality improvements:
- Apply black formatting
- All 146 tests passing
- Mypy strict mode passing on all source files
- No security issues (bandit)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
File tree: 12 files changed (+4308, −57 lines), including the plans/ directory.