AIPerf


AIPerf is a comprehensive benchmarking tool that measures the performance of generative AI models served by your preferred inference solution. It reports detailed metrics in a command-line display and exports extensive benchmark performance reports.

[Screenshot: AIPerf UI Dashboard]

Quick Start

This quick start guide uses Ollama running via Docker Desktop.

Setting up a Local Server

Start an Ollama server and pull the granite4:350m model with the following commands:

docker run -d \
  --name ollama \
  -p 11434:11434 \
  -v ollama-data:/root/.ollama \
  ollama/ollama:latest
docker exec -it ollama ollama pull granite4:350m
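Before benchmarking, you can confirm the server is reachable and the model was pulled by querying Ollama's model list endpoint (a quick sanity check, not part of AIPerf itself):

```shell
# List the models the local Ollama server has available;
# granite4:350m should appear in the JSON response.
curl -s http://localhost:11434/api/tags
```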

Basic Usage

Create a virtual environment and install AIPerf:

python3 -m venv venv
source venv/bin/activate
pip install aiperf

To run a simple benchmark against your Ollama server:

aiperf profile \
  --model "granite4:350m" \
  --streaming \
  --endpoint-type chat \
  --tokenizer ibm-granite/granite-4.0-micro \
  --url http://localhost:11434

Example with Custom Configuration

aiperf profile \
  --model "granite4:350m" \
  --streaming \
  --endpoint-type chat \
  --tokenizer ibm-granite/granite-4.0-micro \
  --url http://localhost:11434 \
  --concurrency 5 \
  --request-count 10

Example output:

NOTE: The example performance is reflective of a CPU-only run and does not represent an official benchmark.

                                               NVIDIA AIPerf | LLM Metrics
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━┓
┃                               Metric ┃       avg ┃      min ┃       max ┃       p99 ┃       p90 ┃       p50 ┃      std ┃
┑━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━┩
β”‚             Time to First Token (ms) β”‚  7,463.28 β”‚ 7,125.81 β”‚  9,484.24 β”‚  9,295.48 β”‚  7,596.62 β”‚  7,240.23 β”‚   677.23 β”‚
β”‚            Time to Second Token (ms) β”‚     68.73 β”‚    32.01 β”‚    102.86 β”‚    102.55 β”‚     99.80 β”‚     67.37 β”‚    24.95 β”‚
β”‚      Time to First Output Token (ms) β”‚  7,463.28 β”‚ 7,125.81 β”‚  9,484.24 β”‚  9,295.48 β”‚  7,596.62 β”‚  7,240.23 β”‚   677.23 β”‚
β”‚                 Request Latency (ms) β”‚ 13,829.40 β”‚ 9,029.36 β”‚ 27,905.46 β”‚ 27,237.77 β”‚ 21,228.48 β”‚ 11,338.31 β”‚ 5,614.32 β”‚
β”‚             Inter Token Latency (ms) β”‚     65.31 β”‚    53.06 β”‚     81.31 β”‚     81.24 β”‚     80.64 β”‚     63.79 β”‚     9.09 β”‚
β”‚     Output Token Throughput Per User β”‚     15.60 β”‚    12.30 β”‚     18.85 β”‚     18.77 β”‚     18.08 β”‚     15.68 β”‚     2.05 β”‚
β”‚                    (tokens/sec/user) β”‚           β”‚          β”‚           β”‚           β”‚           β”‚           β”‚          β”‚
β”‚      Output Sequence Length (tokens) β”‚     95.20 β”‚    29.00 β”‚    295.00 β”‚    283.12 β”‚    176.20 β”‚     63.00 β”‚    77.08 β”‚
β”‚       Input Sequence Length (tokens) β”‚    550.00 β”‚   550.00 β”‚    550.00 β”‚    550.00 β”‚    550.00 β”‚    550.00 β”‚     0.00 β”‚
β”‚ Output Token Throughput (tokens/sec) β”‚      6.85 β”‚      N/A β”‚       N/A β”‚       N/A β”‚       N/A β”‚       N/A β”‚      N/A β”‚
β”‚    Request Throughput (requests/sec) β”‚      0.07 β”‚      N/A β”‚       N/A β”‚       N/A β”‚       N/A β”‚       N/A β”‚      N/A β”‚
β”‚             Request Count (requests) β”‚     10.00 β”‚      N/A β”‚       N/A β”‚       N/A β”‚       N/A β”‚       N/A β”‚      N/A β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

CLI Command: aiperf profile --model 'granite4:350m' --streaming --endpoint-type 'chat' --tokenizer 'ibm-granite/granite-4.0-micro' --url 'http://localhost:11434'
Benchmark Duration: 138.89 sec
CSV Export: /home/user/Code/aiperf/artifacts/granite4:350m-openai-chat-concurrency1/profile_export_aiperf.csv
JSON Export: /home/user/Code/aiperf/artifacts/granite4:350m-openai-chat-concurrency1/profile_export_aiperf.json
Log File: /home/user/Code/aiperf/artifacts/granite4:350m-openai-chat-concurrency1/logs/aiperf.log
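The JSON export is convenient for post-processing results in scripts. A minimal sketch of pulling one aggregate value out of the file; note that the field names used here ("records", "avg") are illustrative assumptions, not AIPerf's documented schema:

```python
# Sketch: reading an aggregate metric back out of a profile export.
# The "records"/"avg" layout is a hypothetical example of the JSON structure.
import json

def metric_avg(path, metric_name):
    """Return the average value recorded for metric_name in a JSON export."""
    with open(path) as f:
        data = json.load(f)
    return data["records"][metric_name]["avg"]
```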

Features

  • Scalable multiprocess architecture with 9 services communicating via ZMQ
  • 3 UI modes: dashboard (real-time TUI), simple (progress bars), none (headless)
  • Multiple benchmarking modes: concurrency, request-rate, request-rate with max concurrency, trace replay
  • Extensible plugin system for endpoints, datasets, transports, and metrics
  • Public dataset support including ShareGPT and custom formats

Supported APIs

  • OpenAI chat completions, completions, embeddings, audio, images
  • NIM embeddings, rankings

Tutorials and Feature Guides

  • Getting Started
  • Load Control and Timing
  • Workloads and Data
  • Endpoint Types
  • Analysis and Monitoring

Documentation

  • Architecture: three-plane architecture, core components, credit system, data flow
  • CLI Options: complete command and option reference
  • Metrics Reference: all metric definitions, formulas, and requirements
  • Environment Variables: all AIPERF_* configuration variables
  • Plugin System: plugin architecture, 25+ categories, creation guide
  • Creating Plugins: step-by-step plugin tutorial
  • Accuracy Benchmarks: accuracy evaluation stubs and datasets
  • Benchmark Modes: trace replay and timing modes
  • Server Metrics: Prometheus-compatible server metrics collection
  • Tokenizer Auto-Detection: pre-flight tokenizer detection
  • Conversation Context Mode: how conversation history accumulates in multi-turn benchmarks
  • Dataset Synthesis API: synthesis module API reference
  • Code Patterns: code examples for services, models, messages, plugins
  • Migrating from Genai-Perf: migration guide and feature comparison
  • Design Proposals: enhancement proposals and discussions

Contributing

See CONTRIBUTING.md for development setup, coding conventions, and contribution guidelines.

Known Issues

  • Output sequence length constraints (--output-tokens-mean) cannot be guaranteed unless you pass ignore_eos and/or min_tokens via --extra-inputs to an inference server that supports them.
  • Very high concurrency settings (typically >15,000) may lead to port exhaustion on some systems. Adjust system limits or reduce concurrency if connection failures occur.
  • Startup errors caused by invalid configuration settings can cause AIPerf to hang indefinitely. Terminate the process and check configuration settings.
  • Copying selected text may not work reliably in the dashboard UI. Use the c key to copy all logs.
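For the first issue above, a sketch of how ignore_eos and min_tokens might be passed through --extra-inputs to pin the output length. Whether these fields are honored depends on the inference server, and the exact key:value syntax shown is an assumption; confirm with `aiperf profile --help`:

```shell
# Sketch: requesting a fixed output length via pass-through request fields.
# ignore_eos and min_tokens only take effect if the server supports them.
aiperf profile \
  --model "granite4:350m" \
  --streaming \
  --endpoint-type chat \
  --tokenizer ibm-granite/granite-4.0-micro \
  --url http://localhost:11434 \
  --output-tokens-mean 128 \
  --extra-inputs ignore_eos:true \
  --extra-inputs min_tokens:128
```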
