A recursive Deep Research Assistant powered by OpenAI and Resonate's Distributed Async Await.

resonatehq-examples/example-openai-deep-research-agent-kafka-ts

Deep Research Agent (Kafka)

A Research Agent powered by Resonate and OpenAI, using Kafka as the message transport. The Research Agent is a distributed, recursive agent that breaks a research topic into subtopics, researches each subtopic recursively, and synthesizes the results.

Deep Research Agent Demo

How It Works

This example demonstrates how complex, distributed agentic applications can be implemented with simple code using Resonate's Distributed Async Await. The research agent is a recursive generator function that breaks a topic into subtopics and invokes itself for each subtopic:

function* research(ctx, topic, depth) {
  const messages = [
    { role: "system", content: "Break topics into subtopics..." },
    { role: "user", content: `Research ${topic}` }
  ];

  while (true) {
    // Ask the LLM about the topic
    const response = yield* ctx.run(prompt, messages, ...);
    messages.push(response);

    // If LLM wants to research subtopics...
    if (response.tool_calls) {
      const handles = [];

      // Spawn parallel research for each subtopic
      for (const tool_call of response.tool_calls) {
        const subtopic = ...;
        const handle = yield* ctx.beginRpc(research, subtopic, depth - 1);
        handles.push([tool_call, handle]);
      }

      // Wait for all subtopic results
      for (const [tool_call, handle] of handles) {
        const result = yield* handle;
        messages.push({ role: "tool", ..., content: result });
      }
    } else {
      // LLM provided final summary
      return response.content;
    }
  }
}

The following video visualizes how this recursive pattern creates a dynamic call graph, spawning parallel research branches that fan out as topics are decomposed, then fan back in as results are synthesized:

call-graph.mov

Key concepts:

  • Concurrent Execution: Multiple subtopics are researched concurrently via ctx.beginRpc
  • Coordination: Handles are collected first, then awaited together (fork/join, fan-out/fan-in)
  • Depth control: Recursion stops when depth reaches 0
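The fork/join coordination above can be sketched with plain promises, outside of Resonate. The sketch below is illustrative only (fixed subtopics, no LLM, not Resonate's API): branches are started first (fan-out), then awaited together (fan-in), with recursion gated by depth.

```typescript
// Illustrative fork/join: start all branches before awaiting any of them.
async function research(topic: string, depth: number): Promise<string> {
  // Depth control: at depth 0, return a leaf summary instead of recursing.
  if (depth === 0) return `summary of ${topic}`;

  // Hypothetical fixed subtopics; the real agent asks the LLM for these.
  const subtopics = [`${topic} / history`, `${topic} / applications`];

  // Fork: kick off every branch concurrently, collecting handles.
  const handles = subtopics.map((s) => research(s, depth - 1));

  // Join: wait for all branches, then synthesize the results.
  const results = await Promise.all(handles);
  return `synthesis of ${topic}: [${results.join("; ")}]`;
}

research("distributed systems", 1).then((r) => console.log(r));
```

Collecting handles before awaiting is what makes the branches run concurrently; awaiting each recursive call inline would serialize them.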

Set Up Dependencies

1. Set Up the Resonate Server

Install the Resonate Server

brew install resonatehq/tap/resonate

2. Set Up the Deep Research Agent

Clone the repository

git clone https://github.com/resonatehq-examples/example-openai-deep-research-agent-kafka-ts.git
cd example-openai-deep-research-agent-kafka-ts

Initialize submodules

git submodule update --init --recursive

Install dependencies

npm install

3. Set the OpenAI API Key

To run this project you need an OpenAI API key. Export the key as an environment variable:

export OPENAI_API_KEY="sk-..."

Running the Project

1. Start Confluent Platform

In Terminal #1, navigate to the Confluent directory and start the services:

cd cp-all-in-one/cp-all-in-one
docker compose up -d

Wait for the broker to be ready (~15 seconds), then create the required topics:

docker exec broker kafka-topics --create --topic default --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
docker exec broker kafka-topics --create --topic resonate --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1

2. Start the Resonate Server

In Terminal #2, start the Resonate server in development mode with Kafka support enabled:

resonate dev --api-kafka-enable --aio-kafka-enable

3. Start the Worker

In Terminal #3, start the Resonate worker:

npm run dev

4. Invoke the Research Agent

In Terminal #4, start the research agent:

resonate invoke research.1 --func research --arg "What are distributed systems" --arg 1 --target kafka://default

Parameters:

  • research.1: Unique ID for this research task
  • --func research: The registered function to invoke
  • --arg "What are distributed systems": The topic to research
  • --arg 1: Recursion depth (how many levels of subtopics to explore)
  • --target kafka://default: Route the invocation over the default Kafka topic
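The depth argument bounds both recursion and cost. As a rough illustration, if each level produced a fixed number of subtopics b (in reality the LLM chooses the branching dynamically), the total number of research tasks at depth d would follow a geometric series:

```typescript
// Illustrative only: total tasks for a fixed branching factor b and depth d
// is 1 + b + b^2 + ... + b^d (root plus each level of subtopics).
function totalTasks(b: number, d: number): number {
  let total = 0;
  for (let level = 0; level <= d; level++) {
    total += b ** level;
  }
  return total;
}

console.log(totalTasks(3, 1)); // 4 tasks: 1 root + 3 subtopics
console.log(totalTasks(3, 2)); // 13 tasks: 1 + 3 + 9
```

This is why even small increases in depth can multiply the number of LLM calls; start with depth 1 when experimenting.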

Checking Progress

Get details of a specific research task:

resonate promises get research.1

View all promises:

resonate promises search "*"

Inspecting Kafka Messages

Read messages from the default topic (task dispatch):

docker exec broker kafka-console-consumer --bootstrap-server localhost:9092 --topic default --from-beginning --timeout-ms 5000

Read messages from the resonate topic (worker ↔ server communication):

docker exec broker kafka-console-consumer --bootstrap-server localhost:9092 --topic resonate --from-beginning --timeout-ms 5000

Troubleshooting

The Deep Research Agent depends on OpenAI and the OpenAI TypeScript and JavaScript SDK. If you are having trouble, verify that your OpenAI credentials are configured correctly and the model is accessible by running the following command in the project's directory:

node --input-type=module -e "import OpenAI from 'openai'; const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY }); const res = await client.chat.completions.create({ model: 'gpt-5', messages: [{ role: 'user', content: 'knock knock' }] }); console.log(res.choices[0].message);"

If everything is configured correctly, you will see a response from OpenAI such as:

{ role: 'assistant', content: "Who's there?", refusal: null, annotations: []}

If you are still having trouble, please open an issue on the GitHub repository.
