
Commit d464db2

Apply suggestions from code review
Co-authored-by: Ryan Cartwright <[email protected]>
1 parent 36927e2 commit d464db2


docs/guides/deno/byo-deep-research.mdx

Lines changed: 51 additions & 51 deletions
@@ -32,9 +32,9 @@ Before diving into the implementation, let's understand why local testing is val

 Before we start implementing our research system, let's set up the project and install the necessary dependencies:

-1. **Create a new Nitric project**:
+### 1. **Create a new Nitric project**:

-If you haven't already install Nitric CLI by following the [official installation guide](https://nitric-docs-git-docs-byo-deep-research-nitrictech.vercel.app/docs/get-started/installation)
+If you haven't already, install Nitric CLI by following the [official installation guide](https://nitric-docs-git-docs-byo-deep-research-nitrictech.vercel.app/docs/get-started/installation)

 Then create a new Nitric project with:
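The creation command itself falls outside this hunk; a minimal sketch of that step, assuming the CLI's interactive template prompt (the `cd deep-research` context in the next hunk confirms the project name):

```bash
# Sketch of the elided project-creation step; `nitric new` prompts for a
# starter template when one isn't given as a second argument
nitric new deep-research
cd deep-research
```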

@@ -45,29 +45,29 @@ cd deep-research

 2. **Configure dependencies in deno.json**:

-Create or update the `deno.json` file in your project root:
-
-```json title:deno.json
-{
-  "imports": {
-    "@nitric/sdk": "npm:@nitric/sdk",
-    "openai": "npm:openai",
-    "duck-duck-scrape": "npm:duck-duck-scrape",
-    "cheerio": "npm:cheerio",
-    "turndown": "npm:turndown"
-  },
-  "tasks": {
-    "start": "deno run --allow-net --allow-env --allow-read main.ts"
-  }
-}
-```
+Create or update the `deno.json` file in your project root:
+
+```json title:deno.json
+{
+  "imports": {
+    "@nitric/sdk": "npm:@nitric/sdk",
+    "openai": "npm:openai",
+    "duck-duck-scrape": "npm:duck-duck-scrape",
+    "cheerio": "npm:cheerio",
+    "turndown": "npm:turndown"
+  },
+  "tasks": {
+    "start": "deno run --allow-net --allow-env --allow-read main.ts"
+  }
+}
+```

 3. **Install dependencies**:

-```bash
-# Install dependencies using Deno
-deno install
-```
+```bash
+# Install dependencies using Deno
+deno install
+```

 4. **Project structure**:
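For reference, the `start` task above runs `main.ts`, which this hunk does not show. A minimal sketch of such an entry point, assuming the `api` builder from `@nitric/sdk` and the `POST /query` route exercised by the curl example later in this guide:

```typescript
// main.ts — hypothetical sketch, not part of this commit
import { api } from '@nitric/sdk'

// Expose an HTTP API; Nitric serves it locally via `nitric start`
const mainApi = api('main')

// Accept a plain-text research topic and kick off the workflow
mainApi.post('/query', async (ctx) => {
  const topic = ctx.req.text()
  // The research pipeline (search, summarize, reflect) would be invoked here
  ctx.res.body = `Research started for: ${topic}`
  return ctx
})
```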

@@ -143,7 +143,7 @@ export default async (query: string) => {

 ### 2. Configurable LLM Integration

-The LLM integration handles the "Summarization" and "Reflection" steps:
+The LLM integration handles the "Summarization", "Reflection", and "Iteration" steps:

 - Summarization: Condense findings
 - Reflection: Identify knowledge gaps
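A sketch of what "configurable" means in practice: the `openai` package accepts a custom `baseURL`, so the same client can target Ollama locally or a hosted provider, driven by the environment variables shown in the local-testing section below. The helper name and prompt text here are illustrative, not from the guide:

```typescript
// llm.ts — hypothetical sketch; variable names match the guide's .env example
import OpenAI from 'openai'

const llm = new OpenAI({
  baseURL: Deno.env.get('LLM_BASE_URL'), // e.g. http://localhost:11434/v1 for Ollama
  apiKey: Deno.env.get('LLM_API_KEY') ?? 'ollama',
})

// Summarization step: condense raw findings into a short summary
export const summarize = async (content: string): Promise<string> => {
  const response = await llm.chat.completions.create({
    model: Deno.env.get('LLM_MODEL') ?? 'llama2:3b',
    messages: [
      { role: 'system', content: 'Condense the following research findings.' },
      { role: 'user', content },
    ],
  })
  return response.choices[0].message.content ?? ''
}
```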
@@ -268,8 +268,8 @@ Provide only the follow-up query in your response, if there are no follow-up que

 These prompts work together to create a research system that:

-- Generate search queries from topics
-- Find relevant content using the duckduckgo search API
+- Generates search queries from topics
+- Finds relevant content using the duckduckgo search API
 - Cleans and converts content to simple markdown
 - Summarizes findings
 - Attempts to identify knowledge gaps
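The cleaning step maps directly onto two of the dependencies installed earlier; a minimal sketch using `cheerio` to strip noise and `turndown` to emit markdown (the function name is illustrative):

```typescript
// clean.ts — hypothetical sketch, not part of this commit
import * as cheerio from 'cheerio'
import TurndownService from 'turndown'

// Convert scraped HTML into the "simple markdown" the prompts consume
export const htmlToMarkdown = (html: string): string => {
  const $ = cheerio.load(html)
  // Remove elements that rarely carry research content
  $('script, style, nav, footer').remove()
  return new TurndownService().turndown($('body').html() ?? '')
}
```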
@@ -681,48 +681,48 @@ To test the system locally:

 1. **Install and Start Ollama (optional)**:

-First, [install Ollama](https://ollama.ai/) for your operating system.
+First, [install Ollama](https://ollama.ai/) for your operating system.

-Then pull and start the model:
+Then pull and start the model:

-```bash
-ollama pull llama2:3b
-ollama serve
-```
+```bash
+ollama pull llama2:3b
+ollama serve
+```

-> You can skip this step if you want to use OpenAI or other hosted solution as your LLM provider.
+> You can skip this step if you want to use OpenAI or other hosted solution as your LLM provider.

 2. **Configure Environment**:

-Create a `.env` file with local testing configuration:
+Create a `.env` file with local testing configuration:

-```bash
-LLM_BASE_URL=http://localhost:11434/v1
-LLM_API_KEY=ollama
-LLM_MODEL=llama2:3b
-MAX_ITERATIONS=3
-SEARCH_RESULTS=3
-```
+```bash
+LLM_BASE_URL=http://localhost:11434/v1
+LLM_API_KEY=ollama
+LLM_MODEL=llama2:3b
+MAX_ITERATIONS=3
+SEARCH_RESULTS=3
+```

 3. **Start the Local Development Server**:

-```bash
-nitric start
-```
+```bash
+nitric start
+```

 4. **Test the API**:

-Send a POST request to start research:
+Send a POST request to start research:

-```bash
-curl -X POST http://localhost:4001/query \
-  -H "Content-Type: text/plain" \
-  -d "quantum computing basics"
-```
+```bash
+curl -X POST http://localhost:4001/query \
+  -H "Content-Type: text/plain" \
+  -d "quantum computing basics"
+```

-The system will begin its research process, and you can monitor the progress in the Nitric development server logs.
+The system will begin its research process, and you can monitor the progress in the Nitric development server logs.

-Note: Local testing with smaller models may produce different results compared to production models, but the workflow and functionality will remain the same. This allows you to iterate quickly on your implementation without incurring API costs.
+<Note>Local testing with smaller models may produce different results compared to production models, but the workflow and functionality will remain the same. This allows you to iterate quickly on your implementation without incurring API costs.</Note>

 ## Conclusion
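For completeness: the same `.env` variables switch the system to a hosted provider without code changes — a hypothetical configuration (key and model are placeholders):

```bash
# Hypothetical hosted-provider configuration; values are placeholders
LLM_BASE_URL=https://api.openai.com/v1
LLM_API_KEY=sk-your-key-here
LLM_MODEL=gpt-4o-mini
MAX_ITERATIONS=3
SEARCH_RESULTS=3
```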
