Commit 099bc68

Updated Project.MD
1 parent b806f69 commit 099bc68

1 file changed: +103 −7 lines changed

PROJECT.md

Lines changed: 103 additions & 7 deletions
@@ -12,6 +12,7 @@ This is a simple Ganga job that executes on a Local backend and prints "Hello, G
 ## **LHC.pdf "it" Counting Job**
 
 ### **Description**
+
 This task is handled by splitting it into three parts:
 
 - **Splitting the LHC.pdf using a script**
@@ -28,6 +29,7 @@ This task is handled by splitting it into three parts:
 
 
 ### **Workflow**
+
 1. **Job 1 (J1) - Splitting LHC.pdf**
    - Uses **split_pdf.py** to generate individual pages.
 2. **Job 2 (J2) - Counting "it" in each page**
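The per-page counting step in J2 could look like the following stdlib-only sketch. This is an editorial illustration, not the repository's actual script: the function name `count_it` and the matching rule (whole-word, case-insensitive) are assumptions.

```python
import re

def count_it(text: str) -> int:
    """Count whole-word, case-insensitive occurrences of "it" in a page's text."""
    # \b word boundaries exclude words that merely contain "it" ("with", "item").
    return len(re.findall(r"\bit\b", text, flags=re.IGNORECASE))

if __name__ == "__main__":
    sample = "It is what it is; the item list omits it."
    print(count_it(sample))  # matches "It", "it", and the final "it" -> 3
```

A merge step would then only need to sum the per-page integers each subjob prints.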
@@ -37,18 +39,22 @@ This task is handled by splitting it into three parts:
 
 
 ### **Execution**
+
 To run the job, execute:
+
 ```sh
 ganga my_code/submit_jobs.py
 ```
+
 This submits the jobs and waits for completion.
 
 ### The output should look something like this
 
 ![Output](images/output.png)
 
 
-### **File Structure**
+### **File Structure**
+
 ```
 My_Ganga/
 │── my_code/
@@ -64,6 +70,7 @@ My_Ganga/
 ## **Interfacing Ganga Job**
 
 ### **Description**
+
 This task demonstrates programmatic interaction with a Large Language Model (LLM) to generate and execute a Ganga job. The generated job calculates an approximation of π using the accept-reject simulation method. The total number of simulations is 1 million, split into 1000 subjobs, each performing 1000 simulations.
 
 ### **Workflow Execution**
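The accept-reject method described above can be sketched in a few lines: sample points uniformly in the unit square and accept those inside the quarter circle, so the acceptance rate estimates π/4. This is an editorial sketch, not the generated `pi_simulation.py`; the function name and seed handling are assumptions.

```python
import random

def estimate_pi(n: int, seed: int = 42) -> float:
    """Accept-reject estimate of pi: sample n points in the unit square and
    count those falling inside the quarter circle x^2 + y^2 <= 1."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

if __name__ == "__main__":
    # In the scheme above each subjob would run n=1000; a larger n is
    # used here only so that a single run gives a stable estimate.
    print(estimate_pi(100_000))
```

With 1000 subjobs of 1000 samples each, the merger simply pools all hits before multiplying by 4, which is statistically equivalent to one run with n = 1,000,000.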
@@ -250,13 +257,14 @@ This task is to demonstrate programmatic interaction with a Large Language Model
 management.
 
 #### **Step 4: Parsing the output code**
+
 Create the following three Python files with the code provided above:
 
 1. create_pi_job.py
 2. pi_merger.py
 3. pi_simulation.py
 
-**Make sure the job_id is set correctly**
+**MAKE SURE THE JOB_ID IS SET CORRECTLY**
 
 **MAKE SURE THE ADDRESSES IN THE CODE ARE CORRECT**
 
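The merge step in `pi_merger.py` could reduce the subjob outputs along these lines. The input format is an assumption for illustration: each subjob is taken to report one "&lt;hits&gt; &lt;samples&gt;" line, which the merger pools into a single estimate.

```python
def merge_counts(outputs: list[str]) -> float:
    """Combine per-subjob "<hits> <samples>" lines into one pi estimate."""
    total_hits = total_samples = 0
    for line in outputs:
        hits, samples = map(int, line.split())
        total_hits += hits
        total_samples += samples
    # Pooling hits first is equivalent to one big run of total_samples points.
    return 4.0 * total_hits / total_samples

if __name__ == "__main__":
    # Three fake subjob outputs, 1000 samples each: 4 * 2356 / 3000 ~ 3.1413
    print(merge_counts(["785 1000", "790 1000", "781 1000"]))
```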
@@ -278,10 +286,98 @@ Make 3 python files as follows with the code provided above.
 
 ![Running pi_merger.py in python ](images/Pi_merge.png)
 
-## **Current Status**
-- [x] LLM-based code generation tested with Ollama.
-- [x] Job submission, execution, and result aggregation working.
-- [x] Debugging and improvements made to dynamically set job IDs.
-- [ ] Automated test for execution success in progress.
+***
+
+## **LLM Django interfacing**
+
+### **Description**
+
+This project involves creating a Django web application that interacts with a **local LLM (Large Language Model)** to answer user questions. The app:
+
+1. Accepts questions via **HTTP** (GET/POST)
+2. Uses **_deepseek-coder-v2:latest_** via the Ollama CLI
+3. Returns **LLM-generated responses** without maintaining chat history
+
+### **Usage**
+
+1. **Start the Django App**
+
+   Run the Django development server:
+
+   ```bash
+   python manage.py runserver
+   ```
+
+2. **Interact with the LLM API**
+
+   - Using a **_Web Browser_**:
+     - Open http://127.0.0.1:8000/chat/ in your browser.
+     - Enter a question and get a response.
+
+   - Using **curl (Command Line)**:
+     - Send a question via **_curl_**:
+
+       ```bash
+       curl -X POST http://127.0.0.1:8000/chat/query/ \
+         -H "Content-Type: application/json" \
+         -d '{"question":"Insert your question here"}'
+       ```
+
+     - Expected response:
+
+       ```json
+       {
+         "response": "LLM output response"
+       }
+       ```
+
+### **Demo**
+
+*Start the Django App*
+
+1. **Using Web Browser**
+
+   [Watch the full Demo](https://github.com/Sigma-Verma/My_Ganga/blob/main/images/LLM_chat_DEMO.webm)
+
+   - ![Demo_1](images/1.png)
+   - ![Demo_2](images/2.png)
+   - ![Demo_3](images/3.png)
+   - ![Demo_4](images/4.png)
+   - ![Demo_5](images/5.png)
+
+2. **Using curl (Command Line)**
+
+   [Watch the full Demo](https://github.com/Sigma-Verma/My_Ganga/blob/main/images/LLM_curl_DEMO.webm)
+
+   - ![Demo_6](images/6.png)
+   - ![Demo_7](images/7.png)
+
+### **Project Structure**
+
+```
+ollama_django/
+│── chat/
+│   ├── templates/chat/index.html   # Frontend UI
+│   ├── views.py                    # Backend logic
+│   ├── urls.py                     # URL routing
+│── ollama_django/
+│   ├── settings.py                 # Django settings
+│── manage.py                       # Django entry point
+```
+
+### **Possible Updates**
+
+- Add **conversation memory** to enable context-aware interactions.
+- Enhance security by **implementing authentication and HTTPS**.
+- **Improve UI** with a frontend framework.
+- Support **multiple LLMs**, allowing users to choose different models.
+- **Integrate a database** to store chat logs for future analysis.
 
 
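The core of the `/chat/query/` view presumably shells out to the Ollama CLI and wraps the reply in the JSON shape shown in the diff. A framework-free sketch of that logic follows; `ask_llm` and the injectable `runner` parameter are editorial assumptions (the real app does this inside a Django view), and only the model name and response shape come from the text above.

```python
import json
import subprocess

MODEL = "deepseek-coder-v2:latest"  # model named in the description

def ask_llm(question: str, runner=subprocess.run) -> dict:
    """Send one stateless question to the Ollama CLI and wrap the reply
    in the {"response": ...} shape shown above. No history is kept."""
    result = runner(
        ["ollama", "run", MODEL, question],
        capture_output=True, text=True, check=True,
    )
    return {"response": result.stdout.strip()}

if __name__ == "__main__":
    # Stub runner so the sketch runs without Ollama installed.
    class _Fake:
        stdout = "LLM output response\n"
    print(json.dumps(ask_llm("What is Ganga?", runner=lambda *a, **k: _Fake())))
```

Injecting the runner keeps the subprocess call testable; the real view would call `subprocess.run` directly and return the dict via Django's `JsonResponse`.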
287383
