>ollama-benchmark speed --question 81 --model llama3 --max-workers 1 --max_turns 1
version: 0.2.4
model: llama3
question_ids: ["81"]
max_workers: 1
max_turns: 1
tokenizer_model: None
mirostat: 0
mirostat_eta: 0.1
mirostat_tau: 5.0
num_ctx: 2048
repeat_last_n: 64
repeat_penalty: 1.1
temperature: 0.8
seed: 0
stop: None
tfs_z: 1.0
num_predict: -1
top_k: 40
top_p: 0.9
min_p: 0.0
0;81;total_durations: [9778.9804]
0;81;total_duration_mean: 9778.9804
0;81;total_duration_stdev: 0.0
0;81;total_duration_min: 9778.9804
0;81;total_duration_max: 9778.9804
0;81;load_duration_mean: 18.8168
0;81;load_duration_stdev: 0.0
0;81;load_duration_min: 18.8168
0;81;load_duration_max: 18.8168
0;81;prompt_eval_duration_mean: 3.2342
0;81;prompt_eval_duration_stdev: 0.0
0;81;prompt_eval_duration_min: 3.2342
0;81;prompt_eval_duration_max: 3.2342
0;81;prompt_eval_rate_mean: 9894.255148104632
0;81;prompt_eval_rate_stdev: 0.0
0;81;prompt_eval_rate_min: 9894.255148104632
0;81;prompt_eval_rate_max: 9894.255148104632
0;81;eval_count_mean: 785
0;81;eval_count_stdev: 0.0
0;81;eval_count_min: 785
0;81;eval_count_max: 785
0;81;prompt_eval_count_mean: 32
0;81;prompt_eval_count_stdev: 0.0
0;81;prompt_eval_count_min: 32
0;81;prompt_eval_count_max: 32
0;81;eval_duration_mean: 9756.7965
0;81;eval_duration_stdev: 0.0
0;81;eval_duration_min: 9756.7965
0;81;eval_duration_max: 9756.7965
0;81;eval_rate_mean: 80.45673597886355
0;81;eval_rate_stdev: 0.0
0;81;eval_rate_min: 80.45673597886355
0;81;eval_rate_max: 80.45673597886355
prompt_eval_duration_mean: 3.2342
prompt_eval_duration_stdev: 0.0
prompt_eval_rate_mean: 9894.255148104632
prompt_eval_rate_stdev: 0.0
eval_count_mean: 785
eval_count_stdev: 0.0
prompt_eval_count_mean: 32
prompt_eval_count_stdev: 0.0
eval_duration_mean: 9756.7965
eval_duration_stdev: 0.0
eval_rate_mean: 80.45673597886355
eval_rate_stdev: 0.0
total_duration: 9778.9804
real_duration: 10.032639503479004
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\REDACTED\AppData\Local\Programs\Python\Python312\Scripts\ollama-benchmark.exe\__main__.py", line 7, in <module>
File "C:\Users\REDACTED\AppData\Local\Programs\Python\Python312\Lib\site-packages\ollama_benchmark\main.py", line 75, in main
action(args)
File "C:\Users\REDACTED\AppData\Local\Programs\Python\Python312\Lib\site-packages\ollama_benchmark\speed\main.py", line 112, in main
with open(args.monitoring_output, 'w') as fd:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/dev/stderr'
Hi, I'm encountering the issue above when running your benchmark on Windows. The run itself completes (all the speed stats print fine), but it then crashes trying to open '/dev/stderr' for the monitoring output, which doesn't exist on Windows.
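For what it's worth, a portable fallback could special-case that default path and write to sys.stderr directly instead of open()-ing it. This is only a sketch of a possible fix, not the library's actual code; the hypothetical open_monitoring_output helper stands in for the open(args.monitoring_output, 'w') call at speed/main.py line 112:

```python
import contextlib
import sys

def open_monitoring_output(path):
    # '/dev/stderr' is a Unix device file and does not exist on Windows,
    # so open('/dev/stderr', 'w') raises FileNotFoundError there.
    # Fall back to the already-open sys.stderr stream for that path;
    # nullcontext() lets it be used in a `with` block without closing it.
    if path == '/dev/stderr':
        return contextlib.nullcontext(sys.stderr)
    return open(path, 'w')

# Usage mirroring the failing call site:
with open_monitoring_output('/dev/stderr') as fd:
    fd.write('monitoring data\n')
```

Alternatively, pointing the monitoring output at a regular file path on Windows would sidestep the crash entirely, assuming the option is exposed on the command line (the traceback shows it arrives as args.monitoring_output).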