## **LHC.pdf "it" Counting Job**

### **Description**

This task is handled by splitting it into three parts:

- **Splitting the LHC.pdf using a script**
### **Workflow**

1. **Job 1 (J1) - Splitting LHC.pdf**
   - Uses **split_pdf.py** to generate individual pages.
2. **Job 2 (J2) - Counting "it" in each page**
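A minimal sketch of the per-page counting step, assuming each J2 subjob receives one plain-text page; the function and file names here are illustrative, not the repository's actual scripts:

```python
import re
import sys

def count_it(text: str) -> int:
    # Count whole-word, case-insensitive occurrences of "it"
    # (so "item" and "bit" do not match).
    return len(re.findall(r"\bit\b", text, flags=re.IGNORECASE))

if __name__ == "__main__" and len(sys.argv) > 1:
    # Illustrative usage: python count_it.py page_001.txt
    with open(sys.argv[1]) as f:
        print(count_it(f.read()))
```

The final merge step would then simply sum the per-page counts.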
### **Execution**

To run the job, execute:

```sh
ganga my_code/submit_jobs.py
```

This submits the jobs and waits for completion.
### The output should look something like this

*(screenshot of the job output)*

### **File Structure**

```
My_Ganga/
│── my_code/
```
## **Interfacing Ganga Job**

### **Description**

This task is to demonstrate programmatic interaction with a Large Language Model (LLM) to generate and execute a Ganga job. The generated job will calculate an approximation of π using the accept-reject simulation method. The total number of simulations will be 1 million, split into 1000 subjobs, each performing 1000 simulations.
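The accept-reject method itself is straightforward: sample points uniformly in the unit square and accept those inside the quarter circle, so the accepted fraction approximates π/4. A standalone sketch of what each subjob and the merger compute (illustrative, not the actual LLM-generated pi_simulation.py or pi_merger.py):

```python
import random

def simulate(n: int, seed: int = 0) -> int:
    # One subjob: draw n uniform points in [0,1)^2 and count
    # how many satisfy x^2 + y^2 <= 1 (the "accept" condition).
    rng = random.Random(seed)
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def estimate_pi(accepted: int, total: int) -> float:
    # Merger: the accepted fraction estimates pi/4.
    return 4.0 * accepted / total

# 1000 subjobs x 1000 simulations each = 1,000,000 total points
```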
### **Workflow Execution**

#### **Step 4: Parsing the output code**

Create the following three Python files from the code provided above:

1. create_pi_job.py
2. pi_merger.py
3. pi_simulation.py
**MAKE SURE THE JOB_ID IS SET CORRECTLY**

**MAKE SURE THE ADDRESSES IN THE CODE ARE CORRECT**
***

## **LLM Django interfacing**

### **Description**

This project involves creating a Django web application that interacts with a **local LLM (Large Language Model)** to answer user questions. The app:

1. Accepts questions via **HTTP** (GET/POST)
2. Uses **_deepseek-coder-v2:latest_** via the Ollama CLI
3. Returns **LLM-generated responses** without maintaining chat history
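Because the app calls the model through the Ollama CLI rather than a persistent session, each request can be a fresh subprocess. A sketch of the backend call, assuming `ollama run` with the prompt on stdin (the helper names are illustrative, not the project's actual views.py):

```python
import subprocess

MODEL = "deepseek-coder-v2:latest"

def build_ollama_cmd(model: str) -> list[str]:
    # `ollama run <model>` reads a prompt from stdin and prints the reply.
    return ["ollama", "run", model]

def ask_llm(question: str, model: str = MODEL) -> str:
    # A fresh process per request, so no chat history is carried over.
    result = subprocess.run(build_ollama_cmd(model), input=question,
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()
```

A Django view would then wrap `ask_llm` and return the result as a JSON response.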
### **Usage**

1. **Start the Django App**

Run the Django development server:

```bash
python manage.py runserver
```
2. **Interact with the LLM API**

- Using a **_Web Browser_**:
  - Open http://127.0.0.1:8000/chat/ in your browser.
  - Enter a question and get a response.

- Using **curl (Command Line)**:
  - Send a question via **_curl_**:

```bash
curl -X POST http://127.0.0.1:8000/chat/query/ \
     -H "Content-Type: application/json" \
     -d '{"question":"Insert your question here"}'
```

- Expected response:

```json
{
  "response": "LLM output response"
}
```
### **Demo**

1. **Using Web Browser**

[Watch the full Demo](https://github.com/Sigma-Verma/My_Ganga/blob/main/images/LLM_chat_DEMO.webm)

2. **Using curl (Command Line)**

[Watch the full Demo](https://github.com/Sigma-Verma/My_Ganga/blob/main/images/LLM_curl_DEMO.webm)

### **Project Structure**

```
ollama_django/
│── chat/
│   ├── templates/chat/index.html   # Frontend UI
│   ├── views.py                    # Backend logic
│   ├── urls.py                     # URL routing
│── ollama_django/
│   ├── settings.py                 # Django settings
│── manage.py                       # Django entry point
```
### **Possible Updates**

- Add **conversation memory** to enable context-aware interactions.
- Enhance security by **implementing authentication and HTTPS**.
- **Improve UI** with a frontend framework.
- Support **multiple LLMs**, allowing users to choose different models.
- **Integrate a database** to store chat logs for future analysis.