## 👋 Continue Contribution Ideas

[This GitHub project board](https://github.com/orgs/continuedev/projects/2) is a list of ideas for how you can contribute to Continue. These aren't the only ways, but they are a great starting point if you are new to the project. You can also browse the list of [good first issues](https://github.com/continuedev/continue/issues?q=is:issue%20state:open%20label:good-first-issue).

## 🐛 Report Bugs

### Our Git Workflow

We keep a single permanent branch: `main`. When we are ready to create a "pre-release" version, we create a tag on the `main` branch titled `v1.1.x-vscode`. This automatically triggers the workflow in [preview.yaml](./.github/workflows/preview.yaml), which builds and releases a pre-release version of the VS Code extension. When a release has been sufficiently tested, we create a new release titled `v1.0.x-vscode`, triggering a similar workflow in [main.yaml](./.github/workflows/main.yaml) that builds and releases a main release of the VS Code extension. Any hotfixes can be made by creating a feature branch from the tag for the release in question. This workflow is explained well at <http://releaseflow.org>.

### What makes a good PR?

To keep the Continue codebase clean and maintainable, we expect the following from our own team and all contributors:

- Open a new issue or comment on an existing one before writing code. This ensures your proposed changes are aligned with the project direction
- Keep changes focused. Multiple unrelated fixes should be opened as separate PRs
- Write or update tests for new functionality
- Update relevant documentation in the `docs` folder

Continue has support for more than a dozen different LLM "providers", making it easy to use models running on OpenAI, Ollama, Together, LM Studio, Msty, and more. You can find all of the existing providers [here](https://github.com/continuedev/continue/tree/main/core/llm/llms), and if you see one missing, you can add it with the following steps:

1. Create a new file in the `core/llm/llms` directory. The name of the file should be the name of the provider, and it should export a class that extends `BaseLLM`. A minimal implementation needs a `providerName` (the identifier for your provider) and at least one of `_streamComplete` or `_streamChat`, the function that makes the request to the API and returns the streamed response; you only need to implement one because Continue can automatically convert between "chat" and "raw completion". We recommend viewing pre-existing providers for more details; the [LlamaCpp Provider](./core/llm/llms/LlamaCpp.ts) is a good simple example, and there is a sketch after this list.
2. Add your provider to the `LLMs` array in [core/llm/llms/index.ts](./core/llm/llms/index.ts).
3. If your provider supports images, add it to the `PROVIDER_SUPPORTS_IMAGES` array in [core/llm/autodetect.ts](./core/llm/autodetect.ts).
4. Add a documentation page for your provider in [`docs/docs/customize/model-providers/more`](./docs/docs/customize/model-providers/more). This should show an example of configuring your provider in `config.yaml` and explain what options are available.
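
To make step 1 concrete, here is a minimal sketch of a hypothetical provider. The class name, endpoint, and request/response shapes are all invented for illustration, and the exact `BaseLLM` method signatures change between versions, so mirror a real provider such as [LlamaCpp.ts](./core/llm/llms/LlamaCpp.ts) rather than copying this verbatim:

```typescript
// core/llm/llms/ExampleProvider.ts (sketch): a hypothetical provider.
// The endpoint and request body are invented; check BaseLLM and an existing
// provider such as LlamaCpp.ts for the exact imports, types, and signatures.
import { CompletionOptions, LLMOptions } from "../..";
import { BaseLLM } from "..";

class ExampleProvider extends BaseLLM {
  // The identifier users put in their config to select this provider
  static providerName = "example";

  // Optional defaults merged into the user's settings
  static defaultOptions: Partial<LLMOptions> = {
    apiBase: "http://localhost:8080/", // assumed default endpoint
  };

  // Implement at least one of _streamComplete / _streamChat; Continue
  // converts between "chat" and "raw completion" automatically.
  protected async *_streamComplete(
    prompt: string,
    options: CompletionOptions,
  ): AsyncGenerator<string> {
    // Assumes BaseLLM exposes a fetch helper, as existing providers use
    const resp = await this.fetch(new URL("completion", this.apiBase), {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt, stream: true }),
    });

    // Yield decoded text chunks as they arrive. A real provider would parse
    // the API's actual streaming format (often SSE or JSON lines) here.
    const reader = resp.body!.getReader();
    const decoder = new TextDecoder();
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      yield decoder.decode(value, { stream: true });
    }
  }
}

export default ExampleProvider;
```

With the class in place, steps 2 and 3 are one-line additions: `ExampleProvider` goes into the `LLMs` array, and into `PROVIDER_SUPPORTS_IMAGES` if it can handle images.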

### Adding Models

While any model that works with a supported provider can be used with Continue, we keep a list of recommended models that can be automatically configured from the UI or `config.json`. The following files should be updated when adding a model:

- [AddNewModel page](./gui/src/pages/AddNewModel/configs/) - This directory defines which model options are shown in the sidebar model selection UI. To add a new model:
  1. Add a `ModelPackage` entry for the model into [configs/models.ts](./gui/src/pages/AddNewModel/configs/models.ts), following the lead of the many examples near the top of the file (there is a sketch after this list)
  2. Add the model within its provider's array to [configs/providers.ts](./gui/src/pages/AddNewModel/configs/providers.ts) (add provider if needed)
- LLM Providers: Since many providers use their own custom strings to identify models, you'll have to add the translation from Continue's model name (the one you added in `models.ts`) to the model string for each of these providers: [Ollama](./core/llm/llms/Ollama.ts), [Together](./core/llm/llms/Together.ts), and [Replicate](./core/llm/llms/Replicate.ts). You can find their full model lists here: [Ollama](https://ollama.ai/library), [Together](https://docs.together.ai/docs/inference-models), [Replicate](https://replicate.com/collections/streaming-language-models).
- [Prompt Templates](./core/llm/autodetect.ts) - In this file you'll find the `autodetectTemplateType` function. Make sure that for the model name you just added, this function returns the correct template type. This assumes that the chat template for that model is already built into Continue; if not, you will have to add the template type and the corresponding edit and chat templates.
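
As a rough illustration of step 1, here is a hypothetical `ModelPackage` entry for a made-up "SuperCoder 7B" model. The field names are inferred from the existing examples in [configs/models.ts](./gui/src/pages/AddNewModel/configs/models.ts) and may not match the current interface exactly, so follow the real entries at the top of that file:

```typescript
// gui/src/pages/AddNewModel/configs/models.ts (sketch): a hypothetical entry.
// "supercoder-7b" is a made-up model; verify the field names against the
// actual ModelPackage interface before use.
supercoder7b: {
  title: "SuperCoder 7B",
  description: "A fictional 7B-parameter model fine-tuned for coding",
  params: {
    title: "SuperCoder 7B",
    model: "supercoder-7b", // Continue's name for the model
    contextLength: 8192,
  },
  providerOptions: ["ollama", "together"], // providers that can serve it
  isOpenSource: true,
},
```

If the model's chat template is one Continue already knows, the prompt-template step is a one-line mapping. A sketch, assuming `autodetectTemplateType` lowercases the model name as its existing cases do, and assuming the fictional model uses a Llama-style template:

```typescript
// core/llm/autodetect.ts (sketch): inside autodetectTemplateType
if (lower.includes("supercoder")) {
  return "llama2"; // assumption: SuperCoder uses a Llama-style chat template
}
```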