This example demonstrates how to use the Inference Gateway with tools functionality, allowing models to call functions and process the results.
- Inference Gateway: The main service that proxies requests to various LLM providers
- Agent: A shell-based agent that demonstrates tool use by making curl requests to the Inference Gateway
The agent in this example:
- Makes an initial request to the inference-gateway with a query that likely requires tools
- Processes any tool calls requested by the model
- Simulates tool execution (weather data and web search)
- Returns the tool execution results to the model for completion
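The round trip above can be sketched with curl. This is a simplified illustration, not the exact code in agent.sh: the endpoint path, port, and payload shape assume a typical OpenAI-compatible chat completions API, so check agent.sh for the real calls.

```shell
#!/bin/sh
# Sketch of the agent's first round trip (assumed OpenAI-compatible
# /v1/chat/completions endpoint and default port; see agent.sh for the
# exact requests the example makes).
GATEWAY_URL="${GATEWAY_URL:-http://localhost:8080/v1/chat/completions}"
MODEL="${MODEL:-openai/gpt-3.5-turbo}"

# Initial request advertising one tool the model may call.
response=$(curl -s "$GATEWAY_URL" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "'"$MODEL"'",
        "messages": [{"role": "user", "content": "What is the weather in Berlin?"}],
        "tools": [{
          "type": "function",
          "function": {
            "name": "get_weather",
            "description": "Get weather for a location",
            "parameters": {
              "type": "object",
              "properties": {"location": {"type": "string"}},
              "required": ["location"]
            }
          }
        }]
      }')

# Pull the first requested tool name out of the response. A real agent
# would use jq; a sed one-liner keeps this sketch dependency-free.
first_tool_name() {
  printf '%s' "$1" | sed -n 's/.*"tool_calls".*"name"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

tool=$(first_tool_name "$response")
if [ -n "$tool" ]; then
  echo "Model requested tool: $tool"
  # Next step: execute the tool, append a {"role": "tool", ...} message
  # with its result, and POST the conversation back for the final answer.
fi
```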
- Configure your API keys in the `.env` file:

  ```
  MODEL=openai/gpt-3.5-turbo # Or another model that supports function calling
  OPENAI_API_KEY=your_openai_api_key_here
  ```

- Start the services:

  ```
  docker compose up
  ```

- Watch the agent logs to see the tool calls in action:

  ```
  docker compose logs -f agent
  ```
The agent is implemented in the `agent.sh` file using plain curl commands for clarity.
The agent currently implements two example tools:
- `get_weather`: Simulates retrieving weather data for a specified location
- `search_web`: Simulates searching the web for information
These are simple examples that return mock data. In a real implementation, you would connect these to actual APIs.
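A minimal sketch of what such mock tools might look like in shell. The function names match the example's tools, but the JSON payloads and the `execute_tool` dispatcher are illustrative assumptions, not the exact code in agent.sh:

```shell
#!/bin/sh
# Illustrative mock tools; a real implementation would call actual APIs.

get_weather() {
  # Return canned weather data for the requested location.
  printf '{"location":"%s","temperature":"22C","conditions":"sunny"}' "$1"
}

search_web() {
  # Return canned search results for the query.
  printf '{"query":"%s","results":["Example result 1","Example result 2"]}' "$1"
}

# Dispatch a tool call by name, as the agent does after parsing the
# model's requested tool_calls.
execute_tool() {
  case "$1" in
    get_weather) get_weather "$2" ;;
    search_web)  search_web "$2" ;;
    *)           printf '{"error":"unknown tool: %s"}' "$1" ;;
  esac
}

execute_tool get_weather "Berlin"
```

Returning the mock result as a JSON string keeps the tool output ready to embed in the `{"role": "tool", ...}` message sent back to the model.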
You can expand this example by:
- Adding more sophisticated tools
- Connecting to real APIs for weather data, search results, etc.
- Implementing a more interactive agent that can take user input
- Adding a web UI for interacting with the agent