A minimal agent that demonstrates how to give LLMs the ability to call external tools. This is the simplest pattern for extending what an LLM can do beyond its training data.
This template shows how to build a web search agent, but the real focus is on tool calling - the fundamental pattern that lets LLMs interact with external systems. Once you understand this pattern, you can connect agents to any API, database, or service.
Template Highlights:
- How to define and bind tools to an LLM
- How the LLM decides when to use a tool vs. answer directly
- Using LangChain's `create_tool_calling_agent` helper for quick setup
- Automatic tracing of tool calls on the Gradient AI Platform
Tool calling is how LLMs interact with the outside world. Instead of just generating text, the LLM can decide to invoke a function, receive the result, and then continue reasoning. This template uses DuckDuckGo search as the tool, but the same pattern applies to any external capability you want to give your agent.
The agent works by binding a tool definition to the LLM. When a user asks a question, the LLM examines the available tools and decides whether to call one. If it calls a tool, the result is fed back to the LLM, which then formulates the final response.
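That decide, call, feed-back loop can be sketched without any framework. Everything below is a toy illustration of the control flow only, not the template's actual code: `fake_llm` stands in for the model's structured tool-call decision, and `web_search` for the real search tool.

```python
# Toy sketch of the tool-calling loop. A real LLM returns either a final
# answer or a structured tool call; this stub mimics that decision.

def fake_llm(prompt, tool_results=None):
    if tool_results is not None:
        # Second pass: synthesize the tool output into an answer.
        return {"type": "answer", "text": f"Based on search: {tool_results}"}
    if "current" in prompt or "latest" in prompt:
        # The model judges that it needs fresh information.
        return {"type": "tool_call", "tool": "web_search", "args": {"query": prompt}}
    return {"type": "answer", "text": "I can answer this directly."}

def web_search(query):
    # Stand-in for the DuckDuckGo tool; a real tool queries the web.
    return f"results for '{query}'"

def run_agent(prompt):
    decision = fake_llm(prompt)
    if decision["type"] == "tool_call":
        result = web_search(**decision["args"])
        decision = fake_llm(prompt, tool_results=result)  # feed the result back
    return decision["text"]
```

The framework version below packages exactly this loop: the tool schema is bound to the model, and the executor handles the call-and-feed-back plumbing.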
```
┌─────────────────────────────────────────────────────┐
│                  WebSearch Agent                    │
├─────────────────────────────────────────────────────┤
│                                                     │
│  Input: { prompt }                                  │
│      │                                              │
│      ▼                                              │
│  ┌─────────────────────────────────────┐            │
│  │           LLM (GPT-4.1)             │            │
│  │                                     │            │
│  │  Decides whether to:                │            │
│  │  1. Answer directly                 │            │
│  │  2. Search the web first            │            │
│  └──────────────┬──────────────────────┘            │
│                 │ (needs search)                    │
│                 ▼                                   │
│  ┌─────────────────────────────────────┐            │
│  │      DuckDuckGo Search Tool         │            │
│  │                                     │            │
│  │  - Queries the web                  │            │
│  │  - Returns search results           │            │
│  │  - No API key needed                │            │
│  └──────────────┬──────────────────────┘            │
│                 │                                   │
│                 ▼                                   │
│  ┌─────────────────────────────────────┐            │
│  │           LLM (GPT-4.1)             │            │
│  │                                     │            │
│  │  Synthesizes search results         │            │
│  │  into a coherent answer             │            │
│  └──────────────┬──────────────────────┘            │
│                 │                                   │
│                 ▼                                   │
│  Output: Answer with web search results             │
│                                                     │
└─────────────────────────────────────────────────────┘
```
- Python 3.10+
- DigitalOcean account
- DigitalOcean API Token:
  - Go to API Settings
  - Generate a new token with read/write access
- DigitalOcean Inference Key:
  - Go to GenAI Settings
  - Create or copy your inference key

No additional API keys are required - DuckDuckGo search is free.
```bash
cd WebSearch
python -m venv venv
source venv/bin/activate   # On Windows: venv\Scripts\activate
pip install -r requirements.txt
cp .env.example .env
```

Edit `.env`:

```
DIGITALOCEAN_INFERENCE_KEY=your_inference_key
```

Export your DigitalOcean API token:

```bash
export DIGITALOCEAN_API_TOKEN=your_token
```
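Before starting the agent, a quick convenience check can confirm the credentials are visible to the shell. The variable names come from the steps above; note that `DIGITALOCEAN_INFERENCE_KEY` may live only in `.env` (loaded at runtime) rather than in your shell environment, so a "NOT set" result for it is not necessarily an error.

```shell
# Report whether each credential is set in the current shell.
for var in DIGITALOCEAN_API_TOKEN DIGITALOCEAN_INFERENCE_KEY; do
  if [ -z "$(printenv "$var")" ]; then
    echo "$var is NOT set"
  else
    echo "$var is set"
  fi
done
```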
Run the agent locally:

```bash
gradient agent run
```

```bash
curl --location 'http://localhost:8080/run' \
--header 'Content-Type: application/json' \
--data '{
    "prompt": "Who won the 2024 Super Bowl?"
}'
```

Edit `.gradient/agent.yml`:

```yaml
agent_name: my-web-search-agent
```

```bash
gradient agent deploy
```

```bash
curl --location 'https://agents.do-ai.run/<DEPLOYED_AGENT_ID>/main/run' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <DIGITALOCEAN_API_TOKEN>' \
--data '{
    "prompt": "Who won the 2024 Super Bowl?"
}'
```

Example: question that triggers a web search

Request:

```json
{
  "prompt": "What is the current price of Bitcoin?"
}
```

Response:

```json
{
  "response": "Based on my web search, Bitcoin is currently trading at approximately $67,500 USD. However, cryptocurrency prices are highly volatile and change constantly. For the most accurate real-time price, I recommend checking a cryptocurrency exchange like Coinbase or Binance directly."
}
```

Example: question answered directly

Request:

```json
{
  "prompt": "What is the capital of France?"
}
```

Response:

```json
{
  "response": "The capital of France is Paris."
}
```

Project structure:

```
WebSearch/
├── .gradient/
│   └── agent.yml        # Deployment configuration
├── main.py              # Agent with DuckDuckGo tool
├── prompts.py           # System prompt (edit this to customize!)
├── requirements.txt     # Dependencies
├── .env.example         # Environment template
└── README.md
```
```python
from langchain_gradient import ChatGradient
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_core.prompts import ChatPromptTemplate

# Initialize the LLM
llm = ChatGradient(model="openai-gpt-4.1")

# Create the search tool
search_tool = DuckDuckGoSearchRun()

# Define the prompt (the agent scratchpad placeholder is required)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

# Create the agent
agent = create_tool_calling_agent(llm, [search_tool], prompt)
executor = AgentExecutor(agent=agent, tools=[search_tool])

# Run
result = executor.invoke({"input": "Your question here"})
```

The easiest way to adapt this template is by editing `prompts.py`. This file contains the system prompt that defines how the agent behaves.
Example: Research Assistant
```python
# In prompts.py, change SYSTEM_PROMPT to:
SYSTEM_PROMPT = """You are a research assistant that helps users find accurate,
up-to-date information. When searching the web:
- Always cite your sources with URLs
- Distinguish between facts and opinions
- Note when information might be outdated
- Provide balanced perspectives on controversial topics"""
```

Example: Technical Support Agent
```python
SYSTEM_PROMPT = """You are a technical support specialist. When helping users:
- Search for the most recent documentation and solutions
- Provide step-by-step instructions when applicable
- Warn about common pitfalls or mistakes
- Suggest alternative approaches when the primary solution is complex"""
```

Example: News Summarizer
```python
SYSTEM_PROMPT = """You are a news analyst that helps users stay informed.
When searching for news:
- Summarize key points concisely
- Include publication dates to show recency
- Present multiple perspectives on news stories
- Focus on factual reporting over opinion pieces"""
```

Add additional tools to the agent:
```python
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper

# Create tools
search_tool = DuckDuckGoSearchRun()
wiki_tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())

# Add both tools to the agent
agent = create_tool_calling_agent(llm, [search_tool, wiki_tool], prompt)
```

Replace DuckDuckGo with Tavily (requires an API key):
```python
from langchain_community.tools.tavily_search import TavilySearchResults

search_tool = TavilySearchResults(max_results=5)
```

Modify agent behavior:
```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful research assistant. Always cite your sources."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])
```

Enable conversation history:
```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

executor = AgentExecutor(
    agent=agent,
    tools=[search_tool],
    memory=memory
)
```

| Issue | Solution |
|---|---|
| "Tool not found" error | Ensure the `duckduckgo-search` package is installed |
| Rate limiting | DuckDuckGo may rate limit; add delays between requests |
| Empty search results | Try rephrasing your query or check internet connectivity |
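One way to add the delays suggested above is a small retry-with-backoff wrapper around the tool call. This is a generic sketch, not part of the template: `search_tool.run(query)` follows the LangChain tool interface, and the retry count and backoff numbers are arbitrary defaults.

```python
import time

def search_with_retry(search_tool, query, retries=3, delay=2.0):
    """Call a search tool, backing off between attempts on failure."""
    for attempt in range(retries):
        try:
            return search_tool.run(query)
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries; surface the original error
            time.sleep(delay * (attempt + 1))  # linear backoff between attempts
```

Swapping `search_tool.run(query)` for `search_with_retry(search_tool, query)` in your agent's tool wrapper keeps transient rate-limit errors from failing the whole request.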
- DuckDuckGo search is free but may have rate limits for high-volume usage
- For production use, consider Tavily or Serper for more reliable search
- All tool calls are automatically traced to the Gradient AI Platform after deployment