diff --git a/.github/dependabot.yml b/.github/dependabot.yml
new file mode 100644
index 0000000000..6ef87cee37
--- /dev/null
+++ b/.github/dependabot.yml
@@ -0,0 +1,11 @@
+# To get started with Dependabot version updates, you'll need to specify which
+# package ecosystems to update and where the package manifests are located.
+# Please see the documentation for all configuration options:
+# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
+
+version: 2
+updates:
+  - package-ecosystem: "pnpm" # See documentation for possible values
+    directory: "/" # Location of package manifests
+    schedule:
+      interval: "weekly"
diff --git a/README.md b/README.md
index 0127dd0fae..0d204b29d0
--- a/README.md
+++ b/README.md
@@ -124,6 +124,15 @@ Optionally, you can set the debug level:
 VITE_LOG_LEVEL=debug
 ```
 
+And if using Ollama, also set DEFAULT_NUM_CTX; the example below uses an 8K context window with Ollama running on localhost port 11434:
+
+```
+OLLAMA_API_BASE_URL=http://localhost:11434
+DEFAULT_NUM_CTX=8192
+```
+
+First off, thank you for considering contributing to Bolt.new! This fork aims to expand the capabilities of the original project by integrating multiple LLM providers and enhancing functionality. Every contribution helps make Bolt.new a better tool for developers worldwide.
+
 **Important**: Never commit your `.env.local` file to version control. It's already included in .gitignore.
 
 ## Run with Docker
@@ -192,31 +201,6 @@ sudo npm install -g pnpm
 pnpm run dev
 ```
 
-## Super Important Note on Running Ollama Models
-
-Ollama models by default only have 2048 tokens for their context window. Even for large models that can easily handle way more.
-This is not a large enough window to handle the Bolt.new/oTToDev prompt! You have to create a version of any model you want
-to use where you specify a larger context window. Luckily it's super easy to do that.
-
-All you have to do is:
-
-- Create a file called "Modelfile" (no file extension) anywhere on your computer
-- Put in the two lines:
-
-```
-FROM [Ollama model ID such as qwen2.5-coder:7b]
-PARAMETER num_ctx 32768
-```
-
-- Run the command:
-
-```
-ollama create -f Modelfile [your new model ID, can be whatever you want (example: qwen2.5-coder-extra-ctx:7b)]
-```
-
-Now you have a new Ollama model that isn't heavily limited in the context length like Ollama models are by default for some reason.
-You'll see this new model in the list of Ollama models along with all the others you pulled!
-
 ## Adding New LLMs:
 
 To make new LLMs available to use in this version of Bolt.new, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider.
diff --git a/docs/docs/index.md b/docs/docs/index.md
index 5c12a005a8..fe689250a1
--- a/docs/docs/index.md
+++ b/docs/docs/index.md
@@ -5,7 +5,7 @@ Join the community for oTToDev!
 
 https://thinktank.ottomator.ai
 
-## Whats Bolt.new
+## What's oTToDev
 
 Bolt.new is an AI-powered web development agent that allows you to prompt, run, edit, and deploy full-stack applications directly from your browser—no local setup required. If you're here to build your own AI-powered web dev agent using the Bolt open source codebase, [click here to get started!](./CONTRIBUTING.md)
 
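For reference, the `.env.local` additions described in the README hunk above can be sketched as a short shell session. The URL and context size are the example values from the patch (Ollama's default port 11434 and an 8K window); adjust them for your own setup:

```shell
# Append the Ollama settings from the README example to .env.local.
# 11434 is Ollama's default port; 8192 gives the 8K context window.
cat >> .env.local <<'EOF'
OLLAMA_API_BASE_URL=http://localhost:11434
DEFAULT_NUM_CTX=8192
EOF

# Sanity-check that both keys are present before starting the dev server.
grep -E '^(OLLAMA_API_BASE_URL|DEFAULT_NUM_CTX)=' .env.local
```

As the README hunk notes, `.env.local` must stay out of version control; it is already listed in `.gitignore`.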