
Commit 5d3f578

README updated;
1 parent bbb3e10 commit 5d3f578

File tree

1 file changed: +19 -12 lines changed


README.md

Lines changed: 19 additions & 12 deletions
@@ -28,7 +28,7 @@ The package doesn't have the dataset, it is stored on our [HuggingFace page](htt
 
 ### This package contains
 - Support for modern LLMs.
-- Tools for **evaluation**, **inference**, and **finetuning**.
+- Tools for **inference** and **evaluation**.
 - Support for Hugging Face models out-of-the-box.
 - Structured for reproducibility and benchmarking.

@@ -46,10 +46,10 @@ We therefore recommend that most users:
    - Evaluate results against the benchmark with the [`llmsql.LLMSQLEvaluator`](./llmsql/evaluation/evaluator.py) evaluator class.
 
 2. **Optional finetuning**:
-   - For research or domain adaptation, we provide finetuning script for HF models. Use `llmsql finetune --help` or read [Finetune Readme](./llmsql/finetune/README.md) to find more about finetuning.
+   - For research or domain adaptation, we provide a finetune-ready version of the dataset for HF models. Use the [Finetune Ready](https://huggingface.co/datasets/llmsql-bench/llmsql-benchmark-finetune-ready) dataset from HuggingFace.
 
 > [!Tip]
-> You can find additional manuals in the README files of each folder([Inferece Readme](./llmsql/inference/README.md), [Evaluation Readme](./llmsql/evaluation/README.md), [Finetune Readme](./llmsql/finetune/README.md))
+> You can find additional manuals in the README files of each folder ([Inference Readme](./llmsql/inference/README.md), [Evaluation Readme](./llmsql/evaluation/README.md))
 
 > [!Tip]
 > vllm based inference require vllm optional dependency group installed: `pip install llmsql[vllm]`
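The hunk above points users at the `llmsql.LLMSQLEvaluator` class for benchmark scoring. As a standalone illustration of what exact-match SQL evaluation involves (a sketch only, not the package's actual implementation; real evaluators typically execute queries against the database rather than compare strings):

```python
import re

def normalize_sql(sql: str) -> str:
    # Lowercase and collapse whitespace so formatting differences
    # (newlines, double spaces, trailing semicolons) don't count as errors.
    return re.sub(r"\s+", " ", sql.strip()).lower().rstrip(";")

def exact_match_accuracy(predictions, references):
    # Fraction of predictions whose normalized form equals the gold query.
    if not references:
        return 0.0
    hits = sum(
        normalize_sql(p) == normalize_sql(r)
        for p, r in zip(predictions, references)
    )
    return hits / len(references)

preds = ["SELECT name FROM users WHERE age > 30;"]
golds = ["select name\nfrom users\nwhere age > 30"]
print(exact_match_accuracy(preds, golds))  # 1.0
```

Normalization keeps trivial formatting differences from being counted as errors; semantically equivalent but textually different queries would still need execution-based checking.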
@@ -61,9 +61,7 @@ We therefore recommend that most users:
 
 llmsql/
 ├── evaluation/  # Scripts for downloading DB + evaluating predictions
-├── inference/   # Generate SQL queries with your LLM
-└── finetune/    # Fine-tuning with TRL's SFTTrainer
-
+└── inference/   # Generate SQL queries with your LLM
 ```
 
 

@@ -111,21 +109,30 @@ print(report)
 
 
 
-## Finetuning (Optional)
-
-If you want to adapt a base model on LLMSQL:
+## vLLM inference (Recommended)
 
+To speed up inference, we recommend the vLLM backend, available through the optional `llmsql[vllm]` dependency group:
 ```bash
-llmsql finetune --config_file examples/example_finetune_args.yaml
+pip install llmsql[vllm]
 ```
 
-This will train a model on the train/val splits with the parameters provided in the config file. You can find example config file [here](./examples/example_finetune_args.yaml).
+After that, run
+```python
+from llmsql import inference_vllm
+results = inference_vllm(
+    "Qwen/Qwen2.5-1.5B-Instruct",
+    "test_results.jsonl",
+    do_sample=False,
+    batch_size=20000
+)
+```
+for fast inference.
 
 
 
 ## Suggested Workflow
 
-* **Primary**: Run inference on `dataset/questions.jsonl` → Evaluate with `evaluation/`.
+* **Primary**: Run inference on `dataset/questions.jsonl` with vLLM → Evaluate with `evaluation/`.
 * **Secondary (optional)**: Fine-tune on `train/val` → Test on `test_questions.jsonl`.
 
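The `inference_vllm` call in the hunk above writes predictions to a JSONL file (`test_results.jsonl`). A minimal sketch of working with that format using only the standard library (the `question`/`sql` field names are assumptions for illustration, not the package's documented schema):

```python
import json
import os
import tempfile

# Hypothetical inference results: one JSON object per line (JSONL).
rows = [
    {"question": "How many users are there?", "sql": "SELECT COUNT(*) FROM users"},
    {"question": "List all names.", "sql": "SELECT name FROM users"},
]

# Write a small results file the way an inference run might.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
    path = f.name

# Read it back, one record per line.
with open(path) as f:
    records = [json.loads(line) for line in f]
os.remove(path)

print(len(records))       # 2
print(records[0]["sql"])  # SELECT COUNT(*) FROM users
```

JSONL keeps each record on its own line, so large result files can be streamed record by record instead of loaded as one JSON document.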
