
Commit 390705a

k8sgpt - for troubleshooting k8s issues and vulns scanning
1 parent 0539e46 commit 390705a

File tree: 9 files changed, +191 additions, −7 deletions


.github/workflows/ci.yml

Lines changed: 8 additions & 7 deletions
```diff
@@ -20,14 +20,15 @@ jobs:
         with:
           username: ${{ vars.DOCKERHUB_USERNAME }} # Use secrets for sensitive info
           password: ${{ secrets.DOCKERHUB_TOKEN }} # Use secrets for sensitive info
+
-      - name: Build and push
-        uses: docker/build-push-action@v6
-        with:
-          push: true
-          context: "{{defaultContext}}:microservices/python-microservice" # Set the build context to the directory with the Dockerfile
-          platforms: linux/amd64, linux/arm64
-          tags: ${{ vars.DOCKERHUB_USERNAME }}/python-ms-crisp-devops:${{ github.run_id }}-${{ github.sha }}
+      # - name: Build and push
+      #   uses: docker/build-push-action@v6
+      #   with:
+      #     push: true
+      #     context: "{{defaultContext}}:microservices/python-microservice" # Set the build context to the directory with the Dockerfile
+      #     platforms: linux/amd64, linux/arm64
+      #     tags: ${{ vars.DOCKERHUB_USERNAME }}/python-ms-crisp-devops:${{ github.run_id }}-${{ github.sha }}
 
   build_push_golang_ms:
     if: false
```
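The commit disables the Docker build-and-push step by commenting it out. For reference, a sketch of the step as it would look re-enabled (image name and context taken from the diff above; a `docker/setup-qemu-action` / `docker/setup-buildx-action` pair is assumed to run earlier in the job for multi-arch builds):

```yaml
# Sketch only: multi-arch build-and-push step, re-enabled.
# Assumes QEMU and Buildx setup actions ran earlier in this job.
- name: Build and push
  uses: docker/build-push-action@v6
  with:
    push: true
    context: "{{defaultContext}}:microservices/python-microservice"
    platforms: linux/amd64,linux/arm64
    tags: ${{ vars.DOCKERHUB_USERNAME }}/python-ms-crisp-devops:${{ github.run_id }}-${{ github.sha }}
```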

.gitignore

Lines changed: 1 addition & 0 deletions
```diff
@@ -18,6 +18,7 @@ pip-wheel-metadata/
 *.egg-info/
 dist/
 build/
+*venv*
 
 # Go
 *.exe
```

istio/mTLS.yml

Lines changed: 7 additions & 0 deletions
```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
spec:
  mtls:
    mode: STRICT
```
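With no `namespace` in its metadata, this PeerAuthentication applies to whatever namespace it is applied in (applied in Istio's root namespace, typically `istio-system`, it becomes the mesh-wide default). `STRICT` rejects plaintext traffic, so workloads without sidecars lose connectivity. A common migration pattern is a per-namespace `PERMISSIVE` override; a sketch, with the namespace name `legacy` invented for illustration:

```yaml
# Hypothetical override: accept both plaintext and mTLS traffic
# in the "legacy" namespace while sidecars are rolled out there.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: legacy
spec:
  mtls:
    mode: PERMISSIVE
```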

k8s-gpt/README.md

Lines changed: 39 additions & 0 deletions
# PRE-REQUISITES
An Ollama server and model must be running and accessible from the cluster.

Install Ollama on macOS:
```bash
brew install ollama
```

Start the Ollama server:
```bash
OLLAMA_HOST=0.0.0.0 ollama serve
```

Run the model:
```bash
ollama run llama3
```

# INSTALLATION

## Operator installation
To install the operator, run:

```bash
helm repo add k8sgpt https://charts.k8sgpt.ai/
helm repo update
helm install release k8sgpt/k8sgpt-operator -n k8sgpt-operator-system --create-namespace
```

This installs the operator into the cluster; it waits for a K8sGPT resource before doing anything.

## Create a secret for the backend LLM if you use a paid hosted API (e.g. OpenAI); skip this step for a locally running Ollama model

## Update `baseUrl` in `backend-ollama-local.yml` to point to the Ollama server, then apply it
```bash
kubectl apply -f backend-ollama-local.yml
```

k8s-gpt/backend-ollama-local.yml

Lines changed: 29 additions & 0 deletions
```yaml
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-ollama
  namespace: k8sgpt-operator-system
spec:
  ai:
    enabled: true
    model: llama3
    backend: localai
    baseUrl: "http://192.168.105.1:11434/v1"
    # secret:
    #   name: k8sgpt-localai-secret
    #   key: localai-api-key
    # anonymized: false
    # language: english
  noCache: false
  version: v0.3.41
  # filters:
  #   - Ingress
  # sink:
  #   type: slack
  #   webhook: <webhook-url>
  # extraOptions:
  #   backstage:
  #     enabled: true
```

k8s-gpt/backend-secret.yml

Lines changed: 8 additions & 0 deletions
```yaml
apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: null
  name: k8sgpt-sample-secret
  namespace: k8sgpt-operator-system
data:
  openai-api-key: <YOUR-OPENAI-API-KEY>  # value must be base64-encoded
```
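Values under a Secret's `data:` field must be base64-encoded (plain text goes under `stringData:` instead, and `kubectl create secret generic ... --from-literal=openai-api-key=...` encodes for you). A quick sketch of the encoding step, with an obviously fake key standing in for the placeholder; note the Secret's `name` must match whatever the K8sGPT resource's `spec.ai.secret.name` references:

```python
import base64

# Hypothetical key value; substitute your real API key.
api_key = "sk-example-not-a-real-key"

# Encode for the Secret's `data:` field.
encoded = base64.b64encode(api_key.encode()).decode()
print(encoded)

# Decoding recovers the original, which is what the cluster does at mount time.
assert base64.b64decode(encoded).decode() == api_key
```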

k8s-gpt/backend.yml

Lines changed: 28 additions & 0 deletions
```yaml
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
  namespace: k8sgpt-operator-system
spec:
  ai:
    enabled: true
    model: gpt-4o-mini
    backend: openai
    secret:
      name: k8sgpt-openai-secret
      key: openai-api-key
    # anonymized: false
    # language: english
  noCache: false
  version: v0.3.41
  # filters:
  #   - Ingress
  # sink:
  #   type: slack
  #   webhook: <webhook-url>
  # extraOptions:
  #   backstage:
  #     enabled: true
```

k8s-gpt/test-llama.py

Lines changed: 56 additions & 0 deletions
```python
# import json
# from llamaapi import LlamaAPI

# # Initialize the SDK
# llama = LlamaAPI("")

# # Build the API request
# api_request_json = {
#     "model": "llama3.1-70b",
#     "messages": [
#         {"role": "user", "content": "What is the weather like in Boston?"},
#     ],
#     "functions": [
#         {
#             "name": "get_current_weather",
#             "description": "Get the current weather in a given location",
#             "parameters": {
#                 "type": "object",
#                 "properties": {
#                     "location": {
#                         "type": "string",
#                         "description": "The city and state, e.g. San Francisco, CA",
#                     },
#                     "days": {
#                         "type": "number",
#                         "description": "for how many days ahead you want the forecast",
#                     },
#                     "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
#                 },
#             },
#             "required": ["location", "days"],
#         }
#     ],
#     "stream": False,
#     "function_call": "get_current_weather",
# }

# # Execute the Request
# response = llama.run(api_request_json)
# print(json.dumps(response.json(), indent=2))


import requests

llm_url = "http://127.0.0.1:11434/api/generate"

api_request_json = {
    "model": "llama3:latest",
    "prompt": "What is the weather like in Calgary?",
    "stream": False,  # without this, /api/generate streams NDJSON and response.json() fails
}

response = requests.post(llm_url, json=api_request_json)

print(response.json())
```
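If streaming is left on, Ollama's `/api/generate` returns one JSON object per line rather than a single document. A sketch of assembling the full text from such a stream, using a canned two-chunk response in the shape Ollama emits (the chunk values here are invented):

```python
import json

def assemble_stream(ndjson_text: str) -> str:
    """Concatenate the 'response' fields of an Ollama NDJSON stream."""
    parts = []
    for line in ndjson_text.strip().splitlines():
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):  # final chunk is flagged done: true
            break
    return "".join(parts)

# Canned stream standing in for a real response body:
sample = '{"response": "Hello, ", "done": false}\n{"response": "world.", "done": true}'
print(assemble_stream(sample))  # Hello, world.
```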

k8s-gpt/test-openai.py

Lines changed: 15 additions & 0 deletions
```python
from openai import OpenAI

client = OpenAI(
    api_key=""
)

completion = client.chat.completions.create(
    model="gpt-4o-mini",
    store=True,
    messages=[
        {"role": "user", "content": "write a haiku about ai"}
    ],
)

print(completion.choices[0].message)
```
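The script prints the whole message object; usually only the text is wanted, which with the v1 OpenAI SDK is `completion.choices[0].message.content`. A sketch of the same extraction against a plain dict shaped like the API's JSON response (values invented), so it runs without an API key:

```python
# Dict mimicking the chat-completions response shape (contents invented).
response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Circuits hum softly"}}
    ]
}

def first_message_content(resp: dict) -> str:
    """Pull the assistant text out of a chat-completions-shaped response."""
    return resp["choices"][0]["message"]["content"]

print(first_message_content(response))  # Circuits hum softly
```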
