[![Bolt.new: AI-Powered Full-Stack Web Development in the Browser](./public/social_preview_index.jpg)](https://bolt.new)

# oTToDev - The Multi-LLM Fork of Bolt.new

This fork of Bolt.new (oTToDev) allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
oTToDev is a powerful fork of Bolt.new that lets you choose among multiple LLM providers for each prompt. Currently supported providers include:

- OpenAI
- Anthropic
- Ollama
- OpenRouter
- Gemini
- LMStudio
- Mistral
- xAI
- HuggingFace
- DeepSeek
- Groq

The platform is easily extendable to support any model compatible with the Vercel AI SDK!

## 🤖 Join the community for oTToDev!

https://thinktank.ottomator.ai

## 📚 Requested Additions to this Fork - Feel Free to Contribute!!

- ✅ OpenRouter Integration (@coleam00)
- ✅ Gemini Integration (@jonathands)
- ⬜ Upload documents for knowledge - UI design templates, a code base to reference coding style, etc.
- ⬜ Voice prompting

## 📝 Bolt.new: AI-Powered Full-Stack Web Development in the Browser

Bolt.new is an AI-powered web development agent that allows you to prompt, run, edit, and deploy full-stack applications directly from your browser—no local setup required. If you're here to build your own AI-powered web dev agent using the Bolt open source codebase, [click here to get started!](./CONTRIBUTING.md)

## What Makes Bolt.new Different

Claude, v0, etc. are incredible, but you can't install packages, run backends, or edit code. That's where Bolt.new stands out.

Whether you're an experienced developer, a PM, or a designer, Bolt.new allows you to build full-stack applications with ease.

For developers interested in building their own AI-powered development tools with WebContainers, check out the open-source Bolt codebase in this repo!

## 🛠️ Setup

Many of you are new to installing software from GitHub. If you run into any installation trouble, submit an "issue" using the links above, or feel free to improve this documentation by forking the repo, editing the instructions, and opening a pull request.

On all operating systems, the path to Node.js should automatically be added to your system path. To check, open a Terminal and run:
```
echo $PATH
```
If you see /usr/local/bin in the output then you're good to go.

## 💻 Installation

1. Clone the repository (if you haven't already) by opening a Terminal window (or CMD with admin permissions) and then typing in this:

```
git clone https://github.com/coleam00/bolt.new-any-llm.git
```
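
Then move into the project folder (named after the repository):

```
cd bolt.new-any-llm
```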

2. Rename .env.example to .env.local and add your LLM API keys. On a Mac you will find this file at "[your name]/bolt.new-any-llm/.env.example". For Windows and Linux the path will be similar.

![image](https://github.com/user-attachments/assets/7e6a532c-2268-401f-8310-e8d20c731328)
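
As a sketch, a minimal `.env.local` might look like this (the values below are placeholders, and the exact variable names should be copied from `.env.example`):

```
OPENAI_API_KEY=sk-your-openai-key-here
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key-here

# Optional: more verbose logging while developing
VITE_LOG_LEVEL=debug
```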


**Important**: Never commit your `.env.local` file to version control. It's already included in .gitignore.
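
You can double-check that Git ignores the file with `git check-ignore`, which prints the matching `.gitignore` rule:

```
git check-ignore -v .env.local
```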

## 🐳 Run with Docker

Prerequisites:

- Git and Node.js (see Setup above)
- Docker: https://www.docker.com/

When you run the Docker Compose command with the development profile, any changes you make on your machine to the code will automatically be reflected in the site running on the container (i.e. hot reloading still applies!).
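
For reference, a sketch of that command, assuming the Compose file defines the `development` profile referenced above:

```
docker-compose --profile development up
```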

## 📥 Run Without Docker

1. Install dependencies using Terminal (or CMD in Windows with admin permissions):

```
sudo npm install -g pnpm
```

The command above installs pnpm globally if you don't have it yet; then use pnpm to install the project dependencies.

2. Start the application with:
```bash
pnpm run dev
```

## Super Important Note on Running Ollama Models

Ollama models default to a context window of only 2048 tokens, even for large models that can easily handle far more. That is not a large enough window for the Bolt.new/oTToDev prompt! You have to create a variant of any model you want to use with a larger context window specified. Luckily, that's easy to do.

All you have to do is:

- Create a file called "Modelfile" (no file extension) anywhere on your computer
- Put in the two lines:

```
FROM [Ollama model ID such as qwen2.5-coder:7b]
PARAMETER num_ctx 32768
```

- Run the command:

```
ollama create -f Modelfile [your new model ID, can be whatever you want (example: qwen2.5-coder-extra-ctx:7b)]
```

Now you have a new Ollama model that isn't limited to Ollama's unusually small default context length. You'll see this new model in the list of Ollama models along with all the others you've pulled!
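
To confirm the new model really picked up the larger window, you can print its Modelfile back (the `--modelfile` flag is available in recent Ollama versions):

```
ollama show --modelfile qwen2.5-coder-extra-ctx:7b
```

Look for the `PARAMETER num_ctx 32768` line in the output.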

## 🔧 Adding New LLMs:

To make new LLMs available to use in this version of Bolt.new, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider.
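
For example, a hypothetical entry might look like the sketch below; the field names follow the description above, but check the existing entries in `constants.ts` for the authoritative shape:

```
// In app/utils/constants.ts, append an entry to the MODEL_LIST array:
{
  name: 'gpt-4o-mini',   // model ID, taken from the provider's API documentation
  label: 'GPT-4o Mini',  // text shown in the frontend model dropdown
  provider: 'OpenAI',    // provider name, matching other entries in MODEL_LIST
},
```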

When you add a new model to the MODEL_LIST array, it will immediately be available when you run the app locally or reload it.

## Available Scripts
- `pnpm run typegen`: Generates TypeScript types using Wrangler.
- `pnpm run deploy`: Builds the project and deploys it to Cloudflare Pages.

## 🚀 Development

To start the development server:

```bash
pnpm run dev

This will start the Remix Vite development server. You will need Google Chrome Canary to run this locally if you use Chrome! It's an easy install and a good browser for web development anyway.

## 🤔 FAQ

### How do I get the best results with oTToDev?


- **Batch simple instructions**: Combine simple instructions into one message; for example, ask oTToDev to change the color scheme, add mobile responsiveness, and restart the dev server all in one go, saving time and reducing API credit consumption significantly.

## 🤝 Contributing

We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details on how to get started.

### Do you plan on merging oTToDev back into the official Bolt.new repo?

### Everything works but the results are bad

This ties into the point above: local LLMs are getting very powerful, but you will still see better (sometimes much better) results with the largest LLMs like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b. If you are using smaller LLMs like Qwen-2.5-Coder, consider them experimental and educational for now. They can build smaller applications really well, which is super impressive for a local LLM, but for larger-scale applications you still want to use the bigger LLMs!

---

Built with ❤️ by the oTToDev community