A Research Agent powered by Resonate and OpenAI, using Kafka as the message transport. The Research Agent is a distributed, recursive agent that breaks a research topic into subtopics, researches each subtopic recursively, and synthesizes the results.
This example demonstrates how complex, distributed agentic applications can be implemented with simple code using Resonate's Distributed Async Await. The research agent is a recursive generator function that breaks a topic into subtopics and invokes itself for each subtopic:
```typescript
function* research(ctx, topic, depth) {
  const messages = [
    { role: "system", content: "Break topics into subtopics..." },
    { role: "user", content: `Research ${topic}` }
  ];
  while (true) {
    // Ask the LLM about the topic
    const response = yield* ctx.run(prompt, messages, ...);
    messages.push(response);
    // If LLM wants to research subtopics...
    if (response.tool_calls) {
      const handles = [];
      // Spawn parallel research for each subtopic
      for (const tool_call of response.tool_calls) {
        const subtopic = ...;
        const handle = yield* ctx.beginRpc(research, subtopic, depth - 1);
        handles.push([tool_call, handle]);
      }
      // Wait for all subtopic results
      for (const [tool_call, handle] of handles) {
        const result = yield* handle;
        messages.push({ role: "tool", ..., content: result });
      }
    } else {
      // LLM provided final summary
      return response.content;
    }
  }
}
```

The following video visualizes how this recursive pattern creates a dynamic call graph, spawning parallel research branches that fan out as topics are decomposed, then fan back in as results are synthesized:
call-graph.mov
Key concepts:
- Concurrent Execution: Multiple subtopics are researched concurrently via `ctx.beginRpc`
- Coordination: Handles are collected first, then awaited together (fork/join, fan-out/fan-in)
- Depth control: Recursion stops when `depth` reaches 0
Install the Resonate Server:

```shell
brew install resonatehq/tap/resonate
```

Clone the repository:

```shell
git clone https://github.com/resonatehq-examples/example-openai-deep-research-agent-kafka-ts.git
cd example-openai-deep-research-agent-kafka-ts
```

Initialize submodules:

```shell
git submodule update --init --recursive
```

Install dependencies:

```shell
npm install
```

To run this project you need an OpenAI API key. Export the key as an environment variable:

```shell
export OPENAI_API_KEY="sk-..."
```

In Terminal #1, navigate to the Confluent directory and start the services:
```shell
cd cp-all-in-one/cp-all-in-one
docker compose up -d
```

Wait for the broker to be ready (~15 seconds), then create the required topics:

```shell
docker exec broker kafka-topics --create --topic default --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
docker exec broker kafka-topics --create --topic resonate --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
```

In Terminal #2, start the Resonate server in development mode with Kafka support enabled:
```shell
resonate dev --api-kafka-enable --aio-kafka-enable
```

In Terminal #3, start the Resonate worker:

```shell
npm run dev
```

In Terminal #4, start the research agent:

```shell
resonate invoke research.1 --func research --arg "What are distributed systems" --arg 1 --target kafka://default
```

Parameters:
- `research.1`: Unique ID for this research task
- `--arg "What are distributed systems"`: The topic to research
- `--arg 1`: Recursion depth (how many levels of subtopics to explore)
Get details of a specific research task:

```shell
resonate promises get research.1
```

View all promises:

```shell
resonate promises search "*"
```

Read messages from the default topic (task dispatch):

```shell
docker exec broker kafka-console-consumer --bootstrap-server localhost:9092 --topic default --from-beginning --timeout-ms 5000
```

Read messages from the resonate topic (worker ↔ server communication):

```shell
docker exec broker kafka-console-consumer --bootstrap-server localhost:9092 --topic resonate --from-beginning --timeout-ms 5000
```

The Deep Research Agent depends on OpenAI and the OpenAI TypeScript and JavaScript SDK. If you are having trouble, verify that your OpenAI credentials are configured correctly and the model is accessible by running the following command in the project's directory:
```shell
node --input-type=module -e "import OpenAI from 'openai'; const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY }); (async () => { const res = await client.chat.completions.create({ model: 'gpt-5', messages: [{ role: 'user', content: 'knock knock' }] }); console.log(res.choices[0].message); })();"
```

Note that `--input-type=module` is required: without it, `node -e` evaluates the string as CommonJS and the `import` statement is a syntax error.

If everything is configured correctly, you will see a response from OpenAI such as:

```
{ role: 'assistant', content: "Who's there?", refusal: null, annotations: [] }
```
If you are still having trouble, please open an issue on the GitHub repository.
