
Commit 0ef443c

llama 3.1 docs (#1812)
* llama 3.1 docs
* update links
1 parent 2eafdff commit 0ef443c

20 files changed: +142, -133 lines

core/autocomplete/completionProvider.ts

Lines changed: 1 addition & 1 deletion
@@ -446,7 +446,7 @@ export class CompletionProvider {
     ) {
       shownGptClaudeWarning = true;
       throw new Error(
-        `Warning: ${llm.model} is not trained for tab-autocomplete, and will result in low-quality suggestions. See the docs to learn more about why: https://docs.continue.dev/walkthroughs/tab-autocomplete#i-want-better-completions-should-i-use-gpt-4`,
+        `Warning: ${llm.model} is not trained for tab-autocomplete, and will result in low-quality suggestions. See the docs to learn more about why: https://docs.continue.dev/features/tab-autocomplete#i-want-better-completions-should-i-use-gpt-4`,
       );
     }
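The warning's updated link points to the docs on picking a model suited for tab-autocomplete. For context only (not part of this diff), a minimal sketch of the corresponding `config.json` entry, assuming a locally pulled Ollama StarCoder2 model:

```json
{
  "tabAutocompleteModel": {
    "title": "StarCoder2 3B",
    "provider": "ollama",
    "model": "starcoder2:3b"
  }
}
```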

core/config/promptFile.ts

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ export async function getPromptFiles(
 const DEFAULT_PROMPT_FILE = `# This is an example ".prompt" file
 # It is used to define and reuse prompts within Continue
 # Continue will automatically create a slash command for each prompt in the .prompts folder
-# To learn more, see the full .prompt file reference: https://docs.continue.dev/walkthroughs/prompt-files
+# To learn more, see the full .prompt file reference: https://docs.continue.dev/features/prompt-files
 temperature: 0.0
 ---
 {{{ diff }}}

core/context/retrieval/retrieval.ts

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ export async function retrieveContextItemsFromEmbeddings(
     (await extras.ide.getIdeInfo()).ideType === "jetbrains"
   ) {
     throw new Error(
-      "The transformers.js context provider is not currently supported in JetBrains. For now, you can use Ollama to set up local embeddings, or use our 'free-trial' embeddings provider. See here to learn more: https://docs.continue.dev/walkthroughs/codebase-embeddings#embeddings-providers",
+      "The transformers.js context provider is not currently supported in JetBrains. For now, you can use Ollama to set up local embeddings, or use our 'free-trial' embeddings provider. See here to learn more: https://docs.continue.dev/features/codebase-embeddings#embeddings-providers",
     );
   }
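The error message's suggested workaround on JetBrains is local embeddings via Ollama. As a hedged sketch (not part of this diff), the corresponding `config.json` setting might look roughly like this, assuming the `nomic-embed-text` model has been pulled in Ollama:

```json
{
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}
```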

docs/docs/customization/context-providers.md

Lines changed: 5 additions & 9 deletions
@@ -60,7 +60,7 @@ Type '@open' to reference the contents of all of your open files. Set `onlyPinne
 
 ### Codebase Retrieval
 
-Type '@codebase' to automatically retrieve the most relevant snippets from your codebase. Read more about indexing and retrieval [here](../walkthroughs/codebase-embeddings.md).
+Type '@codebase' to automatically retrieve the most relevant snippets from your codebase. Read more about indexing and retrieval [here](../features/codebase-embeddings.md).
 
 ```json
 { "name": "codebase" }
@@ -498,9 +498,7 @@ Continue exposes an API for registering context providers from a 3rd party VSCod
 
 ```json
 {
-  "extensionDependencies": [
-    "continue.continue"
-  ],
+  "extensionDependencies": ["continue.continue"]
 }
 ```
 
@@ -513,7 +511,6 @@ Here is an example:
 import * as vscode from "vscode";
 
 class MyCustomProvider implements IContextProvider {
-
   get description(): ContextProviderDescription {
     return {
       title: "custom",
@@ -525,7 +522,7 @@ class MyCustomProvider implements IContextProvider {
 
   async getContextItems(
     query: string,
-    extras: ContextProviderExtras
+    extras: ContextProviderExtras,
   ): Promise<ContextItem[]> {
     return [
       {
@@ -537,7 +534,7 @@ class MyCustomProvider implements IContextProvider {
   }
 
   async loadSubmenuItems(
-    args: LoadSubmenuItemsArgs
+    args: LoadSubmenuItemsArgs,
   ): Promise<ContextSubmenuItem[]> {
     return [];
   }
@@ -554,5 +551,4 @@ const continueApi = continueExt?.exports;
 
 // register your custom provider
 continueApi?.registerCustomContextProvider(customProvider);
-
-```
+```
4 files renamed without changes.

docs/docs/setup/select-model.md

Lines changed: 1 addition & 1 deletion
@@ -75,7 +75,7 @@ _You can also use other autocomplete models by adding them to your `config.json`
 
 ## Embeddings
 
-We recommend the following embeddings models, which are used for codebase retrieval as described [here](../walkthroughs/codebase-embeddings.md#embeddings-providers)
+We recommend the following embeddings models, which are used for codebase retrieval as described [here](../features/codebase-embeddings.md#embeddings-providers)
 
 ### Open-source models
docs/docs/setup/select-provider.md

Lines changed: 1 addition & 1 deletion
@@ -66,7 +66,7 @@ You can use commercial LLMs via APIs using:
 - [Azure OpenAI Service](../reference/Model%20Providers/openai.md)
 - [Google Gemini API](../reference/Model%20Providers/geminiapi.md)
 - [Mistral API](../reference/Model%20Providers/mistral.md)
-- [Voyage AI API](../walkthroughs/codebase-embeddings.md#openai)
+- [Voyage AI API](../features/codebase-embeddings.md#openai)
 - [Cohere API](../reference/Model%20Providers/cohere.md)
 
 **In addition to selecting providers, you will need to figure out [what models to use](./select-model.md).**
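Provider and model are paired in the `models` array of `config.json`. A rough sketch for one of the providers listed above (the title and model name are illustrative, and the API key is a placeholder):

```json
{
  "models": [
    {
      "title": "Mistral Large",
      "provider": "mistral",
      "model": "mistral-large-latest",
      "apiKey": "YOUR_MISTRAL_API_KEY"
    }
  ]
}
```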
