
Commit f535292

fix: improve embedding documentation to use encoders

Author: eugenio.segala
Parent: 28c7984

2 files changed: +6 −7 lines

docs/guide/embedding.md (5 additions & 1 deletion)

```diff
@@ -122,7 +122,11 @@ const __dirname = path.dirname(
 
 const llama = await getLlama();
 const model = await llama.loadModel({
-    modelPath: path.join(__dirname, "my-model.gguf")
+    /*
+    You can also load quantized models such as "Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf", which generate embeddings
+    using their intermediate layers. However, specialized encoder models are generally more accurate for search.
+    */
+    modelPath: path.join(__dirname, "nomic-embed-text-v1.5.f16.gguf")
 });
 const context = await model.createEmbeddingContext();
 
```
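The comment added by this commit says specialized encoder models are generally more accurate for search. Embedding-based search typically works by ranking candidate texts by the cosine similarity between their embedding vectors and the query's vector. A minimal self-contained sketch of that ranking step, using tiny made-up vectors rather than real model output (`rankBySimilarity` is a hypothetical helper, not part of the node-llama-cpp API):

```javascript
// Cosine similarity between two equal-length vectors:
// dot(a, b) / (|a| * |b|), ranging from -1 to 1.
function cosineSimilarity(a, b) {
    let dot = 0, normA = 0, normB = 0;
    for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank candidate documents by similarity to the query vector, descending.
function rankBySimilarity(queryVector, candidates) {
    return candidates
        .map((c) => ({...c, score: cosineSimilarity(queryVector, c.vector)}))
        .sort((x, y) => y.score - x.score);
}

// Toy example with 3-dimensional vectors; real embeddings have
// hundreds of dimensions and would come from createEmbeddingContext().
const query = [0.9, 0.1, 0.0];
const docs = [
    {text: "about cats", vector: [0.8, 0.2, 0.1]},
    {text: "about cars", vector: [0.1, 0.1, 0.9]}
];
console.log(rankBySimilarity(query, docs)[0].text); // → "about cats"
```

The quality of this ranking depends entirely on the vectors, which is why the doc change steers readers toward a dedicated embedding model instead of a general chat model.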

docs/index.md (1 addition & 6 deletions)

```diff
@@ -117,7 +117,6 @@ const session = new LlamaChatSession({
     contextSequence: context.getSequence()
 });
 
-
 const q1 = "Hi there, how are you?";
 console.log("User: " + q1);
 
@@ -139,14 +138,10 @@ const __dirname = path.dirname(
 
 const llama = await getLlama();
 const model = await llama.loadModel({
-    modelPath: path.join(__dirname, "my-model.gguf")
+    modelPath: path.join(__dirname, "my-embedding-model.gguf")
 });
 const context = await model.createEmbeddingContext();
 
-
-
-
-
 const text = "Hello world";
 console.log("Text:", text);
 
```
