
Commit 745d6d5

yossiovadia and claude committed
fix: resolve pre-commit hook failures
- Fix markdown linting issues (MD032, MD031, MD047) in README files
- Remove binary distribution files from git tracking
- Add Python build artifacts to .gitignore
- Auto-format Python files with black and isort
- Add CLAUDE.md exclusion to prevent future commits

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
Signed-off-by: Yossi Ovadia <[email protected]>
1 parent 45bae5f commit 745d6d5

17 files changed, +60 -26 lines changed
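The changes below are the output of the repository's pre-commit tooling rather than hand edits. A minimal sketch of reproducing them locally, assuming `pre-commit`, `black`, and `isort` are installed (the actual hook set lives in `.pre-commit-config.yaml`; the paths below are illustrative):

```bash
# Install the git hooks defined in .pre-commit-config.yaml
pre-commit install

# Run every configured hook against the whole repository, not just staged files
pre-commit run --all-files

# Or invoke the Python formatters directly on a path of interest
black e2e-tests/
isort e2e-tests/
```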

.gitignore

Lines changed: 5 additions & 1 deletion
@@ -18,6 +18,7 @@ dist/
 build/
 *.egg-info/
 *.whl
+*.tar.gz
 
 # Go
 *.exe
@@ -123,4 +124,7 @@ results/
 .cursorrules.*
 
 # augment editor rules
-.augment
+.augment
+
+# Claude Code configuration (should not be committed)
+CLAUDE.md
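Adding `*.tar.gz` and `CLAUDE.md` to `.gitignore` only stops git from picking up new files; anything already committed stays tracked until it is removed from the index. A hedged sketch of the companion step, with an illustrative glob since the actual wheel locations are not shown in this diff:

```bash
# Remove already-tracked build artifacts from the index but keep them on disk.
# The dist/*.whl glob is illustrative; the real file paths are not visible here.
git rm --cached dist/*.whl
git commit -m "Remove binary distribution files from git tracking"
```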

.pre-commit-config.yaml

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ repos:
         entry: bash -c "make markdown-lint"
         language: system
         files: \.md$
-        exclude: ^(\node_modules/)
+        exclude: ^(\node_modules/|CLAUDE\.md)
 
   # Yaml specific hooks
   - repo: local
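pre-commit treats `exclude` as a Python regular expression matched against each candidate path, so the new alternation makes the markdown hook skip `CLAUDE.md` while other `.md` files are still linted. One way to sanity-check the behaviour locally, assuming the hooks are installed:

```bash
# Run the configured hooks against CLAUDE.md and a file that should still be
# linted; with the new exclude pattern, the markdown hook no longer receives
# CLAUDE.md as an input.
pre-commit run --files CLAUDE.md README.md
```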

e2e-tests/00-client-request-test.py

Lines changed: 3 additions & 1 deletion
@@ -22,7 +22,9 @@
 # Constants
 ENVOY_URL = "http://localhost:8801"
 OPENAI_ENDPOINT = "/v1/chat/completions"
-DEFAULT_MODEL = "Qwen/Qwen2-0.5B-Instruct"  # Use configured model that matches router config
+DEFAULT_MODEL = (
+    "Qwen/Qwen2-0.5B-Instruct"  # Use configured model that matches router config
+)
 MAX_RETRIES = 3
 RETRY_DELAY = 2
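The `DEFAULT_MODEL` rewrite looks like black's handling of a line pushed past its default 88-character limit by the trailing comment: the value is wrapped in parentheses so the comment can stay attached. To preview such rewrites without modifying files:

```bash
# --check exits non-zero if reformatting is needed; --diff prints what black
# would change without writing it back to the file.
black --check --diff e2e-tests/00-client-request-test.py
```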

e2e-tests/README.md

Lines changed: 3 additions & 0 deletions
@@ -71,14 +71,17 @@ Will be added in future PRs for testing with actual model inference.
 ## Available Tests
 
 Currently implemented:
+
 - **00-client-request-test.py** ✅ - Complete client request validation and smart routing
 
 Individual tests can be run with:
+
 ```bash
 python e2e-tests/00-client-request-test.py
 ```
 
 Or run all available tests with:
+
 ```bash
 python e2e-tests/run_all_tests.py
 ```
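The added blank lines address two markdownlint rules named in the commit message: MD032 (lists should be surrounded by blank lines) and MD031 (fenced code blocks should be surrounded by blank lines). Since the markdown pre-commit hook shown above just wraps a Makefile target, the same check can be run directly:

```bash
# Same command the markdown pre-commit hook wraps (see .pre-commit-config.yaml)
make markdown-lint
```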

e2e-tests/llm-katan/README.md

Lines changed: 5 additions & 1 deletion
@@ -25,21 +25,25 @@ pip install llm-katan
 #### HuggingFace Token (Required)
 
 LLM Katan uses HuggingFace transformers to download models. You'll need a HuggingFace token for:
+
 - Private models
 - Avoiding rate limits
 - Reliable model downloads
 
 **Option 1: Environment Variable**
+
 ```bash
 export HUGGINGFACE_HUB_TOKEN="your_token_here"
 ```
 
 **Option 2: Login via CLI**
+
 ```bash
 huggingface-cli login
 ```
 
 **Option 3: Token file in home directory**
+
 ```bash
 # Create ~/.cache/huggingface/token file with your token
 echo "your_token_here" > ~/.cache/huggingface/token
@@ -186,4 +190,4 @@ Contributions welcome! Please see the main repository for guidelines.
 
 ---
 
-*Part of the [semantic-router project ecosystem](https://vllm-semantic-router.com/)*
+*Part of the [semantic-router project ecosystem](https://vllm-semantic-router.com/)*
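The final hunk is the MD047 fix (files should end with a single trailing newline): the removed and added lines are textually identical and differ only in the end-of-file newline. A quick way to confirm the newline is present, shown as a sketch:

```bash
# Print the last byte of the file; 0a (LF) indicates a trailing newline.
tail -c 1 e2e-tests/llm-katan/README.md | od -An -tx1
```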
Five binary distribution files removed from git tracking (sizes only; file names and contents not shown):

-10.9 KB: Binary file not shown.
-11 KB: Binary file not shown.
-11.1 KB: Binary file not shown.
-11.4 KB: Binary file not shown.
-11.9 KB: Binary file not shown.
