@@ -1,7 +1,7 @@
 
 
 <p align="center">
-  <strong>AI-Powered Code Execution and Development Platform</strong>
+  <strong>CodinIT.dev: Build With AI in a Local Environment or With Our Web App</strong>
 </p>
 
 <p align="center">
@@ -44,7 +44,7 @@
 - 📦 **Package Installation** - Install any npm or pip package on the fly
 
 ### Supported LLM Providers
-- 🔸 **OpenAI** (GPT-4, GPT-3.5)
+- 🔸 **OpenAI** (GPT-5, GPT-4)
 - 🔸 **Anthropic** (Claude models)
 - 🔸 **Google AI** (Gemini)
 - 🔸 **Groq** (Fast inference)
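Each provider in the list above is enabled by supplying its API key through environment variables. The sketch below is illustrative only: every variable name in it is an assumption based on common naming conventions for these SDKs, not something stated in this diff; verify the exact names against the repository's env template.

```shell
# Hypothetical .env.local sketch -- all variable names are assumptions,
# check the repository's env template for the authoritative list.
E2B_API_KEY=your-e2b-key            # sandboxed code execution
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
GOOGLE_AI_API_KEY=your-google-key
GROQ_API_KEY=your-groq-key
```

Providers whose keys are absent would simply be unavailable in the model picker.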
@@ -86,15 +86,15 @@
 In your terminal:
 
 ```
-git clone https://github.com/e2b-dev/fragments.git
+git clone https://github.com/Gerome-Elassaad/CodingIT.git
 ```
 
 ### 2. Install the dependencies
 
 Enter the repository:
 
 ```
-cd fragments
+cd CodingIT
 ```
 
 Run the following to install the required dependencies for both workspaces:
@@ -190,123 +190,10 @@ pnpm desktop:build:win # Windows only
 pnpm desktop:build:linux # Linux only
 ```
 
-## Customize
-
-### Adding custom personas
-
-1. Make sure [E2B CLI](https://e2b.dev/docs/cli) is installed and you're logged in.
-
-2. Add a new folder under [sandbox-templates/](sandbox-templates/)
-
-3. Initialize a new template using E2B CLI:
-
-   ```
-   e2b template init
-   ```
-
-   This will create a new file called `e2b.Dockerfile`.
-
-4. Adjust the `e2b.Dockerfile`.
-
-   Here's an example Streamlit template:
-
-   ```Dockerfile
-   # You can use most Debian-based base images
-   FROM python:3.12-slim
-
-   RUN pip3 install --no-cache-dir streamlit pandas numpy matplotlib requests seaborn plotly
-
-   # Copy the code to the container
-   WORKDIR /home/user
-   COPY . /home/user
-   ```
-
-5. Specify a custom start command in `e2b.toml`:
-
-   ```toml
-   start_cmd = "cd /home/user && streamlit run app.py"
-   ```
-
-6. Deploy the template with the E2B CLI:
-
-   ```
-   e2b template build --name <template-name>
-   ```
-
-   After the build has finished, you should get the following message:
-
-   ```
-   ✅ Building sandbox template <template-id> <template-name> finished.
-   ```
-
-7. Open [lib/templates.json](lib/templates.json) in your code editor.
-
-   Add your new template to the list. Here's an example for Streamlit:
-
-   ```json
-   "streamlit-developer": {
-     "name": "Streamlit developer",
-     "lib": [
-       "streamlit",
-       "pandas",
-       "numpy",
-       "matplotlib",
-       "requests",
-       "seaborn",
-       "plotly"
-     ],
-     "file": "app.py",
-     "instructions": "A streamlit app that reloads automatically.",
-     "port": 8501 // can be null
-   },
-   ```
-
-   Provide a template id (as the key), a name, a list of dependencies, an entrypoint file, and optionally a port. You can also add additional instructions that will be given to the LLM.
-
-8. Optionally, add a new logo under [public/thirdparty/templates](public/thirdparty/templates)
-
-### Adding custom LLM models
-
-1. Open [lib/models.json](lib/models.json) in your code editor.
-
-2. Add a new entry to the models list:
-
-   ```json
-   {
-     "id": "mistral-large",
-     "name": "Mistral Large",
-     "provider": "Ollama",
-     "providerId": "ollama"
-   }
-   ```
-
-   Where `id` is the model id, `name` is the model name (visible in the UI), `provider` is the provider name, and `providerId` is the provider tag (see [adding providers](#adding-custom-llm-providers) below).
-
-### Adding custom LLM providers
-
-1. Open [lib/models.ts](lib/models.ts) in your code editor.
-
-2. Add a new entry to the `providerConfigs` list:
-
-   Example for Fireworks:
-
-   ```ts
-   fireworks: () => createOpenAI({ apiKey: apiKey || process.env.FIREWORKS_API_KEY, baseURL: baseURL || 'https://api.fireworks.ai/inference/v1' })(modelNameString),
-   ```
-
-3. Optionally, adjust the default structured output mode in the `getDefaultMode` function:
-
-   ```ts
-   if (providerId === 'fireworks') {
-     return 'json'
-   }
-   ```
-
-4. Optionally, add a new logo under [public/thirdparty/logos](public/thirdparty/logos)
-
 ## Contributing
 
 As an open-source project, we welcome contributions from the community. If you are experiencing any bugs or want to add some improvements, please feel free to open an issue or pull request.
+
 ## 🔧 Customize
 
 ### Adding Custom Development Templates
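For reference, the shape of a template entry (as shown in the `lib/templates.json` example removed by this diff) can be modeled in TypeScript. This is an illustrative sketch only: the `TemplateConfig` interface is hypothetical (the repository stores these entries as plain JSON), and marking `port` as nullable follows the `// can be null` note in the removed example.

```typescript
// Hypothetical model of one entry in lib/templates.json.
// Field names mirror the "streamlit-developer" example; the interface
// itself is an assumption, not code from the repository.
interface TemplateConfig {
  name: string          // label shown in the template picker
  lib: string[]         // packages preinstalled in the sandbox
  file: string          // entrypoint file the LLM writes to
  instructions?: string // optional extra guidance passed to the LLM
  port: number | null   // preview port; null when there is no server
}

const streamlitDeveloper: TemplateConfig = {
  name: 'Streamlit developer',
  lib: ['streamlit', 'pandas', 'numpy', 'matplotlib', 'requests', 'seaborn', 'plotly'],
  file: 'app.py',
  instructions: 'A streamlit app that reloads automatically.',
  port: 8501,
}
```

A new template is then just another key/value pair of this shape added alongside the existing ones.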