GuideLLM UI is a companion frontend for visualizing the results of a GuideLLM benchmark run.

### 🛠 Generating an HTML report with a benchmark run

Set the output path to `benchmarks.html` for your run:

```bash
--output-path=benchmarks.html
```
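
For example, a complete invocation could look like the sketch below. The target, data, and duration flags are assumptions for a typical run; adjust them to match how you already invoke `guidellm benchmark`, keeping `--output-path=benchmarks.html` to produce the HTML report.

```bash
# Illustrative sketch: replace the target and data values with your own server and workload.
guidellm benchmark \
  --target "http://localhost:8000" \
  --data "prompt_tokens=256,output_tokens=128" \
  --max-seconds 30 \
  --output-path=benchmarks.html
```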

1. Use the Hosted Build (Recommended for Most Users)

This option is preconfigured. The latest stable version of the hosted UI (https://neuralmagic.github.io/guidellm/ui/latest) will be used to build the local HTML file.

Open `benchmarks.html` in your browser and you're done; no setup is required.

2. Build and Serve the UI Locally (For Development)

This option is useful if:

- You are actively developing the UI
- You want to test changes to the UI before publishing
- You want full control over how the report is displayed

```bash
npm install
npm run build
npm run serve
```

This will start a local server (e.g., at http://localhost:3000). Then set the Environment to LOCAL before running your benchmarks:

```bash
export GUIDELLM__ENV=local
```

Alternatively, in config.py, update the ENV_REPORT_MAPPING entry used as the asset base for report generation to point at the LOCAL option.
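
Putting the local-development flow together, a typical session might look like the following sketch. The serve port and the benchmark flags are assumptions; substitute the options you normally use.

```bash
# Terminal 1: build and serve the UI assets locally (assumed to serve at http://localhost:3000).
npm install
npm run build
npm run serve

# Terminal 2: point report generation at the local UI build, then run the benchmark.
export GUIDELLM__ENV=local
guidellm benchmark \
  --target "http://localhost:8000" \
  --output-path=benchmarks.html
```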