
Commit f809b10

Add Deepseek model configuration and update README with usage instructions
1 parent 7a3f9e1 commit f809b10

File tree

4 files changed: +105 -2 lines

k8s-gpt/README.md

Lines changed: 52 additions & 1 deletion
@@ -15,7 +15,10 @@ Run the model
 ```bash
 ollama run llama3
 ```
-
+# Create a ChatBox UI using open-web-ui
+```bash
+docker run -d -p 9783:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main
+```
 
 # INSTALLATION
 
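The `-p 9783:8080` flag in the `docker run` command above publishes the container's port 8080 on host port 9783, so the ChatBox UI ends up at `http://localhost:9783`. A minimal sketch of how Docker interprets that `host:container` mapping string (the variable names here are illustrative, not Docker internals):

```shell
# The -p value is HOST_PORT:CONTAINER_PORT; split it the way Docker does.
mapping="9783:8080"
host_port=${mapping%%:*}        # everything before the first ':'
container_port=${mapping##*:}   # everything after the last ':'
echo "UI reachable at http://localhost:${host_port} (container listens on ${container_port})"
```

Change the left-hand number if 9783 is already taken on your host; the right-hand 8080 must stay as-is, since that is the port Open WebUI listens on inside the container.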
@@ -37,3 +40,51 @@ This will install the Operator into the cluster, which will await a K8sGPT resource
 kubectl apply -f backend-ollama-local.yml
 ```
 
+## Looking at the results of the analysis made by k8sgpt
+```bash
+kubectl get result -n k8sgpt-operator-system -o json | jq .
+```
+
+# Security scanning using Trivy
+
+```bash
+k8sgpt integration list
+```
+
+## Activate Trivy
+```bash
+k8sgpt integration activate trivy
+```
+
+## New filters are now added
+ConfigAuditReport (integration) and VulnerabilityReport (integration)
+```bash
+k8sgpt filters list
+```
+
+
+
+## Option 2 - Run the Deepseek model (NOT WORKING)
+
+Install ollama on macOS
+```bash
+brew install ollama
+```
+
+Start the ollama server
+```bash
+OLLAMA_HOST=0.0.0.0 ollama serve
+```
+
+Run the Deepseek model
+```bash
+ollama run deepseek-r1:8b
+```
+
+To run the custom model
+```bash
+ollama create praj-deepseek-r1 -f deepseek-model/Modelfile
+ollama run praj-deepseek-r1
+```
+
+
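The README inspects analysis output with `kubectl get result ... -o json | jq .`. On a host without `jq`, the same names can be pulled out with plain POSIX tools. A rough sketch against a hand-made sample payload (the field layout of a real k8sgpt `Result` item may differ, so treat the JSON shape here as an assumption):

```shell
# Hand-made sample standing in for `kubectl get result -o json` output;
# the real Result schema may differ (field names are assumptions).
cat > /tmp/sample-result.json <<'EOF'
{"items":[{"spec":{"kind":"Pod","name":"default/broken-pod","error":[{"text":"Back-off pulling image"}]}}]}
EOF
# Extract just the object names without jq, using grep -o and cut.
grep -o '"name":"[^"]*"' /tmp/sample-result.json | cut -d'"' -f4
# → default/broken-pod
```

With `jq` available, the equivalent filter would be along the lines of `jq '.items[].spec.name'`, adjusted to whatever path the real schema uses.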

k8s-gpt/backend-deepseek.yml

Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
+# apiVersion: core.k8sgpt.ai/v1alpha1
+# kind: K8sGPT
+# metadata:
+#   name: k8sgpt-deepseek
+#   namespace: k8sgpt-operator-system
+# spec:
+#   ai:
+#     enabled: true
+#     model: praj-deepseek-r1
+#     backend: localai
+#     baseUrl: "http://10.0.0.38:11434/v1"
+#     # secret:
+#     #   name: k8sgpt-localai-secret
+#     #   key: localai-api-key
+#     # anonymized: false
+#     # language: english
+#   # integrations:
+#   #   trivy:
+#   #     enabled: true
+#   #     namespace: default
+#   noCache: false
+#   version: v0.3.41
+#   # filters: ["Pod"]
+#   repository: ghcr.io/k8sgpt-ai/k8sgpt
+
+
+apiVersion: core.k8sgpt.ai/v1alpha1
+kind: K8sGPT
+metadata:
+  name: k8sgpt-deepseek
+  namespace: k8sgpt-operator-system
+spec:
+  ai:
+    enabled: true
+    model: deepseek-r1:8b
+    backend: localai
+    baseUrl: "http://10.0.0.38:11434/v1"
+  noCache: false
+  version: v0.3.41
+  repository: ghcr.io/k8sgpt-ai/k8sgpt
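The commented-out `secret:` block in these manifests points at a Secret named `k8sgpt-localai-secret` with key `localai-api-key`. If that block were re-enabled, a Secret along these lines would have to exist in the operator's namespace first. This is a sketch only: the name and key come from the commented lines above, while the value is a placeholder, since a local Ollama endpoint does not check the API key.

```yaml
# Hypothetical Secret matching the commented-out reference in the manifests.
apiVersion: v1
kind: Secret
metadata:
  name: k8sgpt-localai-secret
  namespace: k8sgpt-operator-system
type: Opaque
stringData:
  localai-api-key: "placeholder"  # Ollama ignores this; any non-empty value works
```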

k8s-gpt/backend-ollama-local.yml

Lines changed: 5 additions & 1 deletion
@@ -14,9 +14,13 @@ spec:
     #   key: localai-api-key
     # anonymized: false
     # language: english
+    # integrations:
+    #   trivy:
+    #     enabled: true
+    #     namespace: default
   noCache: false
   version: v0.3.41
-  filters: ["Pod"]
+  # filters: ["Pod"]
   repository: ghcr.io/k8sgpt-ai/k8sgpt
 
 

k8s-gpt/deepseek-model/Modelfile

Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
+FROM deepseek-r1:8b
+# sets the temperature to 1 [higher is more creative, lower is more coherent]
+#PARAMETER temperature 1
+# sets the context window size to 4096; this controls how many tokens the LLM can use as context to generate the next token
+#PARAMETER num_ctx 4096
+
+# sets a custom system message to specify the behavior of the chat assistant
+SYSTEM You are using Prajwal's Deepseek model, beware!!!!!!
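Both `PARAMETER` lines in the Modelfile above are commented out, so only the `FROM` base and the `SYSTEM` prompt take effect. A quick sketch of generating a variant with those parameters enabled, using the values the comments suggest (the `/tmp` path is arbitrary):

```shell
# Write a variant Modelfile with the commented-out parameters enabled.
cat > /tmp/Modelfile.custom <<'EOF'
FROM deepseek-r1:8b
PARAMETER temperature 1
PARAMETER num_ctx 4096
SYSTEM You are using Prajwal's Deepseek model, beware!!!!!!
EOF
# Two PARAMETER directives should now be active.
grep -c '^PARAMETER' /tmp/Modelfile.custom
# → 2
```

It would then be built and run the same way the README shows: `ollama create praj-deepseek-r1 -f /tmp/Modelfile.custom && ollama run praj-deepseek-r1`.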
