Commit 1c35c31

minor tweaks
1 parent 3da7ac5 commit 1c35c31

File tree

1 file changed: +2 −2 lines changed


docs/inference-providers/guides/function-calling.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -37,7 +37,7 @@ client = OpenAI(
 </hfoption>
 <hfoption id="huggingface_hub">
 
-In the Hugging Face Hub client, we'll use the `provider` parameter to specify the provider we want to use for the request.
+In the Hugging Face Hub client, we'll use the `provider` parameter to specify the provider we want to use for the request. By default, it is `"auto"`.
 
 ```python
 import json
```
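
The hunk cuts off at the start of the guide's Python snippet, so for reference, here is a minimal sketch of what a tool-calling request through `huggingface_hub`'s `InferenceClient` looks like with `provider` left at its `"auto"` default. The model ID, the `get_weather` tool schema, and the `HF_TOKEN` environment variable are illustrative assumptions, not taken from the diff, and the exact snippet in the docs may differ.

```python
import os

from huggingface_hub import InferenceClient

# provider defaults to "auto", which lets the client pick an available provider
# for the model; pass an explicit provider name to pin the request to one.
client = InferenceClient(
    provider="auto",
    api_key=os.environ["HF_TOKEN"],  # illustrative: any valid HF token works
)

# Illustrative OpenAI-style function schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat_completion(
    model="meta-llama/Llama-3.3-70B-Instruct",  # illustrative model ID
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
)

# If the model decided to call the tool, the call arrives as structured data
# (tool_calls may be None when the model answers with plain text instead).
message = response.choices[0].message
if message.tool_calls:
    tool_call = message.tool_calls[0]
    print(tool_call.function.name, tool_call.function.arguments)
```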
```diff
@@ -477,7 +477,7 @@ Streaming is not supported by all providers. You can check the provider's docume
 
 ## Next Steps
 
-Now that you've seen how to use function calling with Inference Providers, you can start building your own assistants! Why not try out some of these ideas:
+Now that you've seen how to use function calling with Inference Providers, you can start building your own agents and assistants! Why not try out some of these ideas:
 
 - Try smaller models for faster responses and lower costs
 - Build an agent that can fetch real-time data
```
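
The second hunk's context notes that streaming is not supported by all providers and that the provider's documentation should be checked. As a rough sketch only, reusing the `client` and `tools` from the example above (both assumptions, as is the model ID), a streamed request looks like this:

```python
# Minimal streaming sketch. Whether streaming (and streamed tool calls) works
# depends on the provider handling the request, so check that provider's docs.
stream = client.chat_completion(
    model="meta-llama/Llama-3.3-70B-Instruct",  # illustrative model ID
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    stream=True,
)

for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    # Text arrives token by token; tool-call fragments also arrive on the delta
    # when the selected provider supports streamed function calling.
    if delta.content:
        print(delta.content, end="")
```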

0 commit comments

Comments
 (0)