docs/guides/deno/byo-deep-research.mdx (51 additions, 51 deletions)
@@ -32,9 +32,9 @@ Before diving into the implementation, let's understand why local testing is val

Before we start implementing our research system, let's set up the project and install the necessary dependencies:

### 1. **Create a new Nitric project**:

If you haven't already, install the Nitric CLI by following the [official installation guide](https://nitric-docs-git-docs-byo-deep-research-nitrictech.vercel.app/docs/get-started/installation).

Then create a new Nitric project with:
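The creation commands themselves fall just outside this diff hunk; assuming the standard Nitric CLI flow, they look roughly like this (the starter template choice is not shown here and may differ):

```bash
# Hypothetical reconstruction: scaffold the project and enter its directory
# (the guide's actual command may name a specific Deno/TypeScript starter template)
nitric new deep-research
cd deep-research
```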
@@ -45,29 +45,29 @@ cd deep-research

2. **Configure dependencies in deno.json**:

Create or update the `deno.json` file in your project root:

```json title:deno.json
{
  "imports": {
    "@nitric/sdk": "npm:@nitric/sdk",
    "openai": "npm:openai",
    "duck-duck-scrape": "npm:duck-duck-scrape",
    "cheerio": "npm:cheerio",
    "turndown": "npm:turndown"
  },
  "tasks": {
    "start": "deno run --allow-net --allow-env --allow-read main.ts"
  }
}
```
The LLM integration handles the "Summarization", "Reflection", and "Iteration" steps:

- Summarization: Condense findings
- Reflection: Identify knowledge gaps
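As a rough sketch of what the summarization and reflection calls can look like with the `openai` client pointed at any OpenAI-compatible endpoint, consider the following; the prompts, helper names, and defaults are placeholders, not the guide's exact implementation:

```typescript
import OpenAI from 'openai'

// Any OpenAI-compatible endpoint works here, including a local Ollama server
const llm = new OpenAI({
  baseURL: Deno.env.get('LLM_BASE_URL'),
  apiKey: Deno.env.get('LLM_API_KEY') ?? 'ollama',
})

const MODEL = Deno.env.get('LLM_MODEL') ?? 'llama2:3b'

// Summarization: condense raw findings into a running summary
async function summarize(findings: string): Promise<string> {
  const res = await llm.chat.completions.create({
    model: MODEL,
    messages: [
      { role: 'system', content: 'Summarize these research findings concisely.' },
      { role: 'user', content: findings },
    ],
  })
  return res.choices[0].message.content ?? ''
}

// Reflection: identify gaps and propose a single follow-up query (empty string = done)
async function reflect(summary: string): Promise<string> {
  const res = await llm.chat.completions.create({
    model: MODEL,
    messages: [
      { role: 'system', content: 'Identify knowledge gaps and reply with one follow-up query, or an empty string if none.' },
      { role: 'user', content: summary },
    ],
  })
  return res.choices[0].message.content ?? ''
}
```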
@@ -268,8 +268,8 @@ Provide only the follow-up query in your response, if there are no follow-up que

These prompts work together to create a research system that:

- Generates search queries from topics
- Finds relevant content using the DuckDuckGo search API
- Cleans and converts content to simple markdown
- Summarizes findings
- Attempts to identify knowledge gaps
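For the search and cleanup steps, here is a condensed, hypothetical sketch of how `duck-duck-scrape`, `cheerio`, and `turndown` can fit together; the function name and options are illustrative rather than the guide's exact code:

```typescript
import { search } from 'duck-duck-scrape'
import * as cheerio from 'cheerio'
import TurndownService from 'turndown'

const turndown = new TurndownService()

// Find result URLs for a generated query, then fetch each page and convert it to markdown
async function gatherSources(query: string, limit = 3): Promise<string[]> {
  const { results } = await search(query)
  const pages: string[] = []

  for (const result of results.slice(0, limit)) {
    const html = await (await fetch(result.url)).text()

    // Strip noisy elements with cheerio, then convert the remaining HTML to markdown
    const $ = cheerio.load(html)
    $('script, style, nav, footer').remove()
    pages.push(turndown.turndown($('body').html() ?? ''))
  }

  return pages
}
```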
@@ -681,48 +681,48 @@ To test the system locally:

1. **Install and Start Ollama (optional)**:

   First, [install Ollama](https://ollama.ai/) for your operating system.

   Then pull and start the model:

   ```bash
   ollama pull llama2:3b
   ollama serve
   ```

   > You can skip this step if you want to use OpenAI or another hosted solution as your LLM provider.

2. **Configure Environment**:

   Create a `.env` file with local testing configuration (a sketch of how these values can drive the research loop follows this list):

   ```bash
   LLM_BASE_URL=http://localhost:11434/v1
   LLM_API_KEY=ollama
   LLM_MODEL=llama2:3b
   MAX_ITERATIONS=3
   SEARCH_RESULTS=3
   ```

3. **Start the Local Development Server**:

   ```bash
   nitric start
   ```

4. **Test the API**:

   Send a POST request to start research:

   ```bash
   curl -X POST http://localhost:4001/query \
     -H "Content-Type: text/plain" \
     -d "quantum computing basics"
   ```

   The system will begin its research process, and you can monitor the progress in the Nitric development server logs.

   <Note>Local testing with smaller models may produce different results compared to production models, but the workflow and functionality will remain the same. This allows you to iterate quickly on your implementation without incurring API costs.</Note>
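As a rough illustration of how `MAX_ITERATIONS` and `SEARCH_RESULTS` from the `.env` file might bound the research loop, here is a sketch that reuses the placeholder helpers from the earlier snippets; it is not the guide's actual code:

```typescript
// Sketch only: how the .env limits might bound the research loop.
// These helpers come from the earlier sketches; declared here so the snippet stands alone.
declare function gatherSources(query: string, limit?: number): Promise<string[]>
declare function summarize(findings: string): Promise<string>
declare function reflect(summary: string): Promise<string>

const MAX_ITERATIONS = Number(Deno.env.get('MAX_ITERATIONS') ?? '3')
const SEARCH_RESULTS = Number(Deno.env.get('SEARCH_RESULTS') ?? '3')

async function runResearch(topic: string): Promise<string> {
  let summary = ''
  let query = topic

  // Iterate until the model reports no follow-up query or the iteration limit is reached
  for (let i = 0; i < MAX_ITERATIONS && query; i++) {
    const sources = await gatherSources(query, SEARCH_RESULTS)
    summary = await summarize([summary, ...sources].join('\n\n'))
    query = (await reflect(summary)).trim()
  }

  return summary
}
```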