README.md: 2 additions & 2 deletions
@@ -287,7 +287,7 @@ and then use the same in your OpenAI client. You can pass any HuggingFace model
with your HuggingFace key. We also support adding any number of LoRAs on top of the model by using the `+` separator.
E.g. The following code loads the base model `meta-llama/Llama-3.2-1B-Instruct` and then adds two LoRAs on top - `patched-codes/Llama-3.2-1B-FixVulns` and `patched-codes/Llama-3.2-1B-FastApply`.
- You can specify which LoRA to use via the `active_adapter` param in the `extra_args` field of the OpenAI SDK client. By default we will load the last specified adapter.
+ You can specify which LoRA to use via the `active_adapter` param in the `extra_body` field of the OpenAI SDK client. By default we will load the last specified adapter.
```python
OPENAI_BASE_URL="http://localhost:8000/v1"
```
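The LoRA mechanics described in this hunk can be sketched as follows. This is a minimal illustration rather than optillm's internals: the base model and adapter names are the ones from the README example, and the commented-out request assumes a proxy running at the `OPENAI_BASE_URL` shown above.

```python
# Base model plus two LoRA adapters, joined with the `+` separator
# (names taken from the README example).
base_model = "meta-llama/Llama-3.2-1B-Instruct"
adapters = [
    "patched-codes/Llama-3.2-1B-FixVulns",
    "patched-codes/Llama-3.2-1B-FastApply",
]
model = "+".join([base_model] + adapters)

# `extra_body` is how the OpenAI Python SDK passes fields it does not model
# natively; `active_adapter` picks which loaded LoRA serves the request.
# Without it, the last adapter listed would be used by default.
extra_body = {"active_adapter": adapters[0]}

# With a running optillm proxy, the request would look like:
#   from openai import OpenAI
#   client = OpenAI(base_url="http://localhost:8000/v1", api_key="...")
#   client.chat.completions.create(
#       model=model,
#       messages=[{"role": "user", "content": "Hello"}],
#       extra_body=extra_body,
#   )
```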
@@ -748,4 +748,4 @@ If you use this library in your research, please cite:
<p align="center">
⭐ <a href="https://github.com/codelion/optillm">Star us on GitHub</a> if you find OptiLLM useful!