Commit e1a75ea

Update README.md
Signed-off-by: Gerome <[email protected]>
1 parent 4ce54ba commit e1a75ea

File tree

1 file changed: 5 additions, 118 deletions


README.md

Lines changed: 5 additions & 118 deletions
@@ -1,7 +1,7 @@
 ![CodinIT.dev: AI-Powered Full-Stack Web Development in the Browser](https://github.com/user-attachments/assets/de684e88-a65c-42ea-b067-d1a3bc85a420)
 
 <p align="center">
-  <strong>AI-Powered Code Execution and Development Platform</strong>
+  <strong>CodinIT.dev: Build With AI in a Local Environment or With Our Web App</strong>
 </p>
 
 <p align="center">
@@ -44,7 +44,7 @@
 - 📦 **Package Installation** - Install any npm or pip package on the fly
 
 ### Supported LLM Providers
-- 🔸 **OpenAI** (GPT-4, GPT-3.5)
+- 🔸 **OpenAI** (GPT-5, GPT-4)
 - 🔸 **Anthropic** (Claude models)
 - 🔸 **Google AI** (Gemini)
 - 🔸 **Groq** (Fast inference)
@@ -86,15 +86,15 @@
 In your terminal:
 
 ```
-git clone https://github.com/e2b-dev/fragments.git
+git clone https://github.com/Gerome-Elassaad/CodingIT.git
 ```
 
 ### 2. Install the dependencies
 
 Enter the repository:
 
 ```
-cd fragments
+cd CodingIT
 ```
 
 Run the following to install the required dependencies for both workspaces:
@@ -190,123 +190,10 @@ pnpm desktop:build:win # Windows only
 pnpm desktop:build:linux # Linux only
 ```
 
-## Customize
-
-### Adding custom personas
-
-1. Make sure [E2B CLI](https://e2b.dev/docs/cli) is installed and you're logged in.
-
-2. Add a new folder under [sandbox-templates/](sandbox-templates/)
-
-3. Initialize a new template using E2B CLI:
-
-```
-e2b template init
-```
-
-This will create a new file called `e2b.Dockerfile`.
-
-4. Adjust the `e2b.Dockerfile`
-
-Here's an example streamlit template:
-
-```Dockerfile
-# You can use most Debian-based base images
-FROM python:3.19-slim
-
-RUN pip3 install --no-cache-dir streamlit pandas numpy matplotlib requests seaborn plotly
-
-# Copy the code to the container
-WORKDIR /home/user
-COPY . /home/user
-```
-
-5. Specify a custom start command in `e2b.toml`:
-
-```toml
-start_cmd = "cd /home/user && streamlit run app.py"
-```
-
-6. Deploy the template with the E2B CLI
-
-```
-e2b template build --name <template-name>
-```
-
-After the build has finished, you should get the following message:
-
-```
-✅ Building sandbox template <template-id> <template-name> finished.
-```
-
-7. Open [lib/templates.json](lib/templates.json) in your code editor.
-
-Add your new template to the list. Here's an example for Streamlit:
-
-```json
-"streamlit-developer": {
-  "name": "Streamlit developer",
-  "lib": [
-    "streamlit",
-    "pandas",
-    "numpy",
-    "matplotlib",
-    "request",
-    "seaborn",
-    "plotly"
-  ],
-  "file": "app.py",
-  "instructions": "A streamlit app that reloads automatically.",
-  "port": 8501 // can be null
-},
-```
-
-Provide a template id (as key), name, list of dependencies, entrypoint, and a port (optional). You can also add additional instructions that will be given to the LLM.
-
-8. Optionally, add a new logo under [public/thirdparty/templates](public/thirdparty/templates)
-
-### Adding custom LLM models
-
-1. Open [lib/models.json](lib/models.json) in your code editor.
-
-2. Add a new entry to the models list:
-
-```json
-{
-  "id": "mistral-large",
-  "name": "Mistral Large",
-  "provider": "Ollama",
-  "providerId": "ollama"
-}
-```
-
-Where `id` is the model id, `name` is the model name (visible in the UI), `provider` is the provider name, and `providerId` is the provider tag (see [adding providers](#adding-custom-llm-providers) below).
-
-### Adding custom LLM providers
-
-1. Open [lib/models.ts](lib/models.ts) in your code editor.
-
-2. Add a new entry to the `providerConfigs` list:
-
-Example for fireworks:
-
-```ts
-fireworks: () => createOpenAI({ apiKey: apiKey || process.env.FIREWORKS_API_KEY, baseURL: baseURL || 'https://api.fireworks.ai/inference/v1' })(modelNameString),
-```
-
-3. Optionally, adjust the default structured output mode in the `getDefaultMode` function:

-```ts
-if (providerId === 'fireworks') {
-  return 'json'
-}
-```
-
-4. Optionally, add a new logo under [public/thirdparty/logos](public/thirdparty/logos)
-
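Editor's note: the provider walkthrough removed above can be condensed into a small standalone sketch of the registry pattern it describes. The type names and factory bodies here are hypothetical stand-ins, not the repo's actual `lib/models.ts` exports; only `providerConfigs`, `getDefaultMode`, and the fireworks-defaults-to-`'json'` rule come from the removed text.

```typescript
// Hypothetical sketch of the pattern described above: providerConfigs maps a
// providerId to a factory that builds a model client, and getDefaultMode picks
// a structured-output mode per provider. ModelClient/ProviderFactory are
// illustrative types, not the repo's real API.
type ModelClient = { providerId: string; model: string }
type ProviderFactory = (model: string) => ModelClient

const providerConfigs: Record<string, ProviderFactory> = {
  fireworks: (model) => ({ providerId: 'fireworks', model }),
  ollama: (model) => ({ providerId: 'ollama', model }),
}

function getDefaultMode(providerId: string): 'json' | 'tool' {
  // Mirrors the removed getDefaultMode snippet: fireworks defaults to 'json'.
  return providerId === 'fireworks' ? 'json' : 'tool'
}

const client = providerConfigs['fireworks']('mistral-large')
console.log(client.providerId, getDefaultMode(client.providerId)) // fireworks json
```

Adding a provider under this pattern is then one new key in `providerConfigs` plus, optionally, a branch in `getDefaultMode`.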
 ## Contributing
 
 As an open-source project, we welcome contributions from the community. If you are experiencing any bugs or want to add some improvements, please feel free to open an issue or pull request.
+
 ## 🔧 Customize
 
 ### Adding Custom Development Templates
