Commit fe57b88

Update README.md

Signed-off-by: Keshava Munegowda <keshava.gowda@gmail.com>

1 parent: 90f917f

1 file changed: README.md (26 additions, 49 deletions)
````diff
@@ -15,43 +15,6 @@ The sbk-charts application can be used to visualize these results in a more user
 
 **sbk-charts uses AI to generate descriptive summaries about throughput and latency analysis**
 
-## AI Backends
-
-SBK Charts supports multiple AI backends for analysis:
-
-1. **LM Studio** - For local AI inference with LM Studio
-2. **Ollama** - For running local LLMs through the Ollama API
-3. **Hugging Face** - For cloud-based AI analysis (default)
-
-### LM Studio Setup
-
-1. Install [LM Studio](https://lmstudio.ai/)
-2. Download and host a suitable model (e.g., Mistral 7B, Llama 2)
-3. Start the LM Studio server
-
-Example usage:
-```bash
-sbk-charts -i input.csv -o output.xlsx lmstudio --lm-model mistral
-```
-
-### Ollama Setup
-
-1. Install [Ollama](https://ollama.com/)
-2. Pull required models:
-```bash
-ollama pull llama3
-ollama pull mistral
-```
-
-Example usage:
-```bash
-sbk-charts -i input.csv -o output.xlsx ollama --model llama3
-```
-
-For more details, see the documentation in [custom AI models](src/custom_ai/README.md)
-
----
-
 ## Running SBK Charts:
 
 ```
@@ -177,24 +140,38 @@ As of today, The analysis is performed using the Hugging Face model and includes
 - Identifies performance bottlenecks
 - Compares percentile distributions across storage systems
 
-### Usage
+## AI Backends
 
-To use AI analysis, run the tool with one of the available AI subcommands:
+SBK Charts supports multiple AI backends for analysis:
 
-```bash
-# Using Hugging Face model (default)
-sbk-charts -i input.csv -o output.xlsx huggingface
+1. **LM Studio** - For local AI inference with LM Studio
+2. **Ollama** - For running local LLMs through the Ollama API
+3. **Hugging Face** - For cloud-based AI analysis (default)
 
-# Example
-sbk-charts -i ./samples/charts/sbk-file-read.csv,./samples/charts/sbk-rocksdb-read.csv huggingface
+### LM Studio Setup
 
+1. Install [LM Studio](https://lmstudio.ai/)
+2. Download and host a suitable model (e.g., Mistral 7B, Llama 3.1)
+3. Start the LM Studio server
 
-# Using NoAI (fallback with error messages)
-sbk-charts -i input.csv -o output.xlsx noai
-# Example
-sbk-charts -i ./samples/charts/sbk-file-read.csv,./samples/charts/sbk-rocksdb-read.csv noai
+Example usage:
+```bash
+sbk-charts -i input.csv -o output.xlsx lmstudio
+```
+
+### Ollama Setup
 
+1. Install [Ollama](https://ollama.com/)
+2. Pull required models:
+```bash
+ollama pull llama3.1
+```
+
+Example usage:
+```bash
+sbk-charts -i input.csv -o output.xlsx ollama
 ```
 
-for further details on custom AI implementations, please refer to the [custom AI](./src/custom_ai/README.md) directory.
+For more details, see the documentation in [custom AI models](src/custom_ai/README.md)
+---
 
````