Real-world (batch × input_length) tokenizer benchmark + cross-library leaderboard #2030
Open
ArthurZucker wants to merge 1 commit into main from
Conversation
Real-world (batch × input_length) tokenizer benchmark + cross-library leaderboard

Rewrites the tiktoken comparison bench into a standardized (batch_size × input_length) sweep mirroring the knobs used by fastokens' `examples/ablation.sh` and wordchipper's fineweb batch bench. Samples are pulled from `zai-org/LongBench-v2` and truncated/repeated per prompt to hit exact token lengths (same helper as fastokens' `_adjust_tokens`).

**Python side — `bindings/python/benches/test_tiktoken.py`**

- Five backends on a uniform encode/decode API, skipped gracefully if unavailable: `tokenizers`, `tiktoken`, `wordchipper` (https://github.com/zspacelabs/wordchipper), `iree.tokenizer` (https://github.com/iree-org/iree-tokenizer-py), `bpe` via `bpe-openai` (https://github.com/github/rust-gems).
- Accepts OpenAI encoding names (`cl100k_base`, `o200k_base`, `gpt2`, `llama3`) or any HF repo id. `--hf-models` iterates a list and prints a cross-model leaderboard.
- Both encode and decode are timed (best of warmup + iters); `rich` renders live colored tables with per-row winner and a geo-mean summary.
- Cross-backend correctness probe before timing.
- Fairness preflight (CPU model, load avg, pinned CPUs, governor) with an optional `--strict-fairness` abort above 50% nproc.
- `--save-json` / `--save-md` serialize full results + a markdown leaderboard.

**Rust side — `tokenizers/benches/matrix_benchmark.rs`**

- New Criterion bench that sweeps the same (batch, input_length) matrix and measures `encode_batch`, `encode_batch_fast`, and **`decode_batch`** — the prior suite (`ci_benchmark`, `llama3_benchmark`) had no decode coverage and no parametric matrix.
- Matrix is env-configurable via `MATRIX_BATCH_SIZES`, `MATRIX_INPUT_LENGTHS`.
- Registered in `tokenizers/Cargo.toml`.
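For reference, the truncate/repeat step above can be sketched like this (a minimal illustration; `adjust_to_length` is a hypothetical name standing in for the logic of fastokens' `_adjust_tokens`):

```python
def adjust_to_length(token_ids: list[int], target_len: int) -> list[int]:
    """Truncate or repeat a tokenized sample to exactly target_len tokens."""
    if not token_ids:
        raise ValueError("empty sample")
    if len(token_ids) >= target_len:
        return token_ids[:target_len]
    reps = -(-target_len // len(token_ids))  # ceil division: enough repeats
    return (token_ids * reps)[:target_len]
```

Pinning every prompt to an exact token length is what makes each (batch × input_length) cell comparable across backends: they all tokenize the same adjusted samples.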
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Summary
Python side

Rewrites `bindings/python/benches/test_tiktoken.py` into a standardized (batch_size × input_length) matrix bench mirroring fastokens and wordchipper ablation knobs. Samples are sourced from `zai-org/LongBench-v2` and truncated/repeated to exact token lengths (same helper as fastokens' `_adjust_tokens`).

Five backends behind a uniform encode/decode API, skipped gracefully if unavailable (see the adapter sketch after this list):

- `tokenizers` (this repo)
- `tiktoken`
- `wordchipper` — https://github.com/zspacelabs/wordchipper
- `iree.tokenizer` — https://github.com/iree-org/iree-tokenizer-py
- `bpe` (via `bpe-openai`) — https://github.com/github/rust-gems/tree/main/crates/bpe

Both encode and decode are timed (best of warmup + iters). `rich` renders live colored tables with per-row winner and geo-mean summary panels. `--hf-models …` sweeps arbitrary HF repo ids (Qwen / DeepSeek / GLM-4.5 / Mistral-Nemo / Yi / starcoder2 / gpt-neox / falcon / Llama-3, etc.) and prints a cross-model leaderboard. A cross-backend correctness probe runs before timing: all available backends agree on `cl100k_base` / `o200k_base`; tokenizers + tiktoken + iree agree on llama-3. A fairness preflight runs first, and `--strict-fairness` aborts above 50% nproc. Results can be saved via `--save-json` / `--save-md`.
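A minimal sketch of the uniform adapter pattern, with illustrative class names rather than the script's actual ones; each backend imports lazily and is skipped when the import fails:

```python
class TiktokenBackend:
    name = "tiktoken"

    def __init__(self, encoding_name: str):
        import tiktoken  # raises ImportError when the backend is absent
        self.enc = tiktoken.get_encoding(encoding_name)

    def encode_batch(self, texts: list[str]) -> list[list[int]]:
        return self.enc.encode_ordinary_batch(texts)

    def decode_batch(self, ids: list[list[int]]) -> list[str]:
        return self.enc.decode_batch(ids)


def load_backends(encoding_name: str, backend_classes: list[type]) -> list:
    """Instantiate each backend, skipping gracefully when not installed."""
    loaded = []
    for cls in backend_classes:
        try:
            loaded.append(cls(encoding_name))
        except ImportError:
            print(f"skipping {cls.name}: not installed")
    return loaded
```

Every backend exposes the same two-method surface, so the timing loop never needs to special-case a library.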
Rust side

Adds a new Criterion bench `tokenizers/benches/matrix_benchmark.rs` that sweeps the same (batch, input_length) matrix and measures:

- `encode_batch` (with offsets)
- `encode_batch_fast` (no offsets)
- `decode_batch` — the prior suite (`ci_benchmark`, `llama3_benchmark`) had no decode bench and no parametric matrix.

Env-configurable via `MATRIX_BATCH_SIZES` / `MATRIX_INPUT_LENGTHS`. Registered in `tokenizers/Cargo.toml`.

Headline findings (pinned 8 cores on AMD EPYC 7R13, llama-3 for apples-to-apples)
Python vs Rust overhead (bs=128, len=8192):
10-model Python leaderboard — decode is the biggest optimization opportunity:
`iree` beats `tokenizers` on decode by a consistent 5.5–7.5× across every non-OpenAI model (Qwen2.5/3, DeepSeek-V3, GLM-4.5, Mistral-Nemo, Yi-1.5, starcoder2, gpt-neox, falcon, Llama-3). On encode we are competitive or ahead on 6/10 models.

Full tables + raw JSON/logs: https://gist.github.com/ArthurZucker/b5f60b51af22ecd62b16939db25efc5f
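For context on how these numbers are produced: each cell keeps the best wall-clock time over the warmup + iters runs, and the summary panels aggregate per-cell speedups with a geometric mean. A minimal sketch, not the script's actual code:

```python
import time
from statistics import geometric_mean


def best_time(fn, warmup: int, iters: int) -> float:
    """Fastest wall-clock time for fn() over `iters` runs after `warmup` runs."""
    for _ in range(warmup):
        fn()
    best = float("inf")
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best


def geo_mean_speedup(ref_times: list[float], times: list[float]) -> float:
    """Geometric mean of per-cell speedups vs. a reference backend."""
    return geometric_mean(r / t for r, t in zip(ref_times, times))
```

Taking the minimum rather than the mean suppresses scheduler jitter, which matters most at small batch sizes.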
Test plan
- `pip install rich tiktoken iree-tokenizer wordchipper bpe-openai`, then `python bindings/python/benches/test_tiktoken.py -e cl100k_base -b 1 32 -l 128 1024 -p 6 --iters 2 --warmup 1`
- `python bindings/python/benches/test_tiktoken.py --hf-models -b 1 32 -l 128 2048 --backends tokenizers iree tiktoken -p 8 --iters 2 --warmup 1 --save-md /tmp/out.md`
- `cd tokenizers && make data/llama-3-tokenizer.json data/big.txt && cargo bench --bench matrix_benchmark -- --warm-up-time 1 --measurement-time 3` runs `matrix/encode-batch`, `matrix/encode-batch-fast`, `matrix/decode-batch`.

🤖 Generated with Claude Code
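The fairness preflight these runs go through amounts to checks like the following (a Linux-only sketch with hypothetical names; the real preflight also reports the CPU model and scaling governor):

```python
import os


def fairness_preflight(strict: bool = False) -> None:
    """Warn, or abort under --strict-fairness, when the host looks busy."""
    nproc = os.cpu_count() or 1
    pinned = len(os.sched_getaffinity(0))  # CPUs this process may run on
    load1 = os.getloadavg()[0]             # 1-minute load average
    print(f"nproc={nproc} pinned={pinned} load1={load1:.2f}")
    if load1 > 0.5 * nproc:
        msg = f"1-minute load {load1:.2f} exceeds 50% of {nproc} CPUs"
        if strict:
            raise SystemExit(f"aborting (--strict-fairness): {msg}")
        print(f"warning: {msg}")
```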
Co-Authored-By: Claude Opus 4.7 (1M context) noreply@anthropic.com