docs/inference-providers/guides/function-calling.md
2 additions & 2 deletions
@@ -37,7 +37,7 @@ client = OpenAI(
 </hfoption>
 <hfoption id="huggingface_hub">
 
-In the Hugging Face Hub client, we'll use the `provider` parameter to specify the provider we want to use for the request.
+In the Hugging Face Hub client, we'll use the `provider` parameter to specify the provider we want to use for the request. By default, it is `"auto"`.
 
 ```python
 import json
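The guide this hunk patches covers function calling with Inference Providers, where the model returns a tool call that your code must dispatch locally. As a hedged sketch of that pattern (the tool name `get_weather`, its schema, and the stub implementation are illustrative, not taken from the guide), a tool definition and a local dispatch step might look like this:

```python
import json

# Illustrative tool definition in the OpenAI-compatible schema
# used for function calling (names here are hypothetical).
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"},
                },
                "required": ["location"],
            },
        },
    }
]

def get_weather(location: str) -> str:
    # Stub implementation standing in for a real weather lookup.
    return f"Sunny in {location}"

# Simulate dispatching a tool call as the model would return it:
# the arguments arrive as a JSON string that we decode and unpack.
tool_call = {"name": "get_weather", "arguments": json.dumps({"location": "Paris"})}
registry = {"get_weather": get_weather}
result = registry[tool_call["name"]](**json.loads(tool_call["arguments"]))
print(result)  # Sunny in Paris
```

In a real request, `tools` would be passed to the client's `chat.completions.create(...)` call and `tool_call` would come from the model's response rather than being constructed by hand.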
@@ -477,7 +477,7 @@ Streaming is not supported by all providers. You can check the provider's docume
 
 ## Next Steps
 
-Now that you've seen how to use function calling with Inference Providers, you can start building your own assistants! Why not try out some of these ideas:
+Now that you've seen how to use function calling with Inference Providers, you can start building your own agents and assistants! Why not try out some of these ideas:
 
 - Try smaller models for faster responses and lower costs