-
Notifications
You must be signed in to change notification settings - Fork 1
Expand file tree
/
Copy pathCRAYON_Research_Paper.tex
More file actions
656 lines (500 loc) · 31.8 KB
/
CRAYON_Research_Paper.tex
File metadata and controls
656 lines (500 loc) · 31.8 KB
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
\documentclass[11pt,a4paper,twocolumn]{article}
% ============================================================================
% PACKAGES
% ============================================================================
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage{amsmath,amssymb,amsthm}
\usepackage{graphicx}
\usepackage{booktabs}
\usepackage{hyperref}
\usepackage{xcolor}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{listings}
\usepackage{multirow}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{geometry}
\usepackage{fancyhdr}
\usepackage{titlesec}
\usepackage{enumitem}
\usepackage{float}
\usepackage{balance}
\geometry{margin=0.7in, columnsep=0.25in}
% ============================================================================
% CUSTOM COLORS AND STYLES
% ============================================================================
\definecolor{codeblue}{RGB}{0,102,204}
\definecolor{codegray}{RGB}{128,128,128}
\definecolor{codegreen}{RGB}{0,128,0}
\definecolor{codepurple}{RGB}{102,0,153}
\hypersetup{
colorlinks=true,
linkcolor=codeblue,
citecolor=codeblue,
urlcolor=codeblue,
breaklinks=true
}
\lstset{
basicstyle=\ttfamily\scriptsize,
keywordstyle=\color{codeblue},
commentstyle=\color{codegray},
stringstyle=\color{codegreen},
breaklines=true,
frame=single,
numbers=left,
numberstyle=\tiny\color{codegray},
xleftmargin=1.5em,
framexleftmargin=1em
}
% Reduce spacing in lists
\setlist{nosep, leftmargin=1.2em}
% ============================================================================
% THEOREM ENVIRONMENTS
% ============================================================================
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
% ============================================================================
% TITLE AND AUTHORS
% ============================================================================
\title{\textbf{XERV Crayon: Production-Grade CPU-GPU Tokenization}\\[0.5em]
\large Entropy-Guided Vocabulary Construction with Hardware-Native Acceleration}
\author{
\textbf{Soham Pal}\\
Xerv Research Engineering Division\\
\texttt{xerv.org@gmail.com}
}
\date{March 2026}
% ============================================================================
% DOCUMENT BEGIN
% ============================================================================
\begin{document}
\maketitle
% ============================================================================
% ABSTRACT
% ============================================================================
\begin{abstract}
This paper presents an architectural analysis of the XERV Crayon tokenizer, an empirical systems implementation of subword tokenization. Software tokenizers are frequently bounded by the Python Global Interpreter Lock (GIL) or abstraction overheads. XERV Crayon employs a heterogeneous execution architecture spanning vectorized CPU processing (AVX2), native CUDA, and AMD ROCm/HIP backends. We decompose its core engineering choices: the use of a Double-Array Trie (DAT) layout for deterministic $O(1)$ transitions, zero-copy memory mapping for profile loading, a heuristic single-core BPE Trainer utilizing a Linked-List and Inverted Index topology, and a multi-stage concurrent pipeline. We provide empirical performance benchmarks across these hardware configurations to evaluate throughput and initialization latency compared to existing implementations like OpenAI's tiktoken and Hugging Face's Rust tokenizers.
\end{abstract}
% ============================================================================
% TABLE OF CONTENTS
% ============================================================================
{\small\tableofcontents}
\vspace{0.5em}
% ============================================================================
% SECTION 1: INTRODUCTION
% ============================================================================
\section{Introduction}
\label{sec:introduction}
XERV Crayon explores tokenizer design by transitioning from flexible dictionary-based implementations to rigid, cache-optimized binary arrays operated upon by hardware-specific kernels. While subword tokenization via BPE \cite{sennrich2016} and tools like SentencePiece \cite{kudo2018} or Hugging Face's Rust tokenizers have established strong baselines, there remains space to analyze the precise low-level hardware interactions (e.g., SIMD register constraints, GPU memory coalescing) of these data structures.
The architecture is broadly split into offline and online components:
\begin{itemize}
\item \textbf{Offline Components:} The BPE Trainer (\texttt{trainer.cpp}) and DAT Compiler (\texttt{compiler.cpp}). These process text corpora to compute byte pair merges using heuristic utility functions and compress the vocabulary into a serialized \texttt{.dat} binary format using a First-Fit scan.
\item \textbf{Online Components:} The Python frontend delegates byte processing to a hardware-specific backend: CPU (\texttt{cpu\_engine.cpp}), CUDA (\texttt{gpu\_engine\_cuda.cu}), or ROCm (\texttt{rocm\_engine.hip}).
\end{itemize}
This structure facilitates the switching of domain-specific vocabularies (e.g., swapping a \texttt{lite} profile for a \texttt{science} profile) using memory mapping to minimize allocation overheads.
% ============================================================================
% SECTION 2: RELATED WORK
% ============================================================================
\section{Related Work}
\label{sec:related_work}
The shift from explicit word dictionaries to subword units was popularized by the application of Byte Pair Encoding (BPE) to neural machine translation by Sennrich et al. \cite{sennrich2016}. Since then, tokenization has matured considerably. Kudo and Richardson introduced SentencePiece \cite{kudo2018}, providing a language-independent subword tokenizer with a highly optimized C++ core, effectively establishing the standard for many open-source models (e.g., LLaMA \cite{touvron2023}).
OpenAI's \texttt{tiktoken} library \cite{radford2019} leverages the Rust programming language to provide a highly performant byte-level BPE implementation capable of parsing hundreds of thousands of tokens per second. Similarly, the Hugging Face \texttt{tokenizers} library \cite{wolf2020} offers a suite of parallelized, Rust-backed tokenizer algorithms widely adopted in the community.
Crayon is an exploration of applying techniques like the Double-Array Trie (DAT)---a data structure introduced by Aoe (1989) \cite{aoe1989} to efficiently flatten trie transitions---to the problem of LLM token inference. While DATs have been heavily used in morphological analyzers and finite-state machines, Crayon's specific contribution lies in analyzing the interactions of this rigid array structure with SIMD instructions (AVX2), direct GPU device memory mapping, and zero-copy OS memory management (\texttt{mmap}).
% ============================================================================
% SECTION 3: DATA STRUCTURE & COMPILER
% ============================================================================
\section{Data Structure: The Cache-Aligned Double-Array Trie (DAT)}
\label{sec:dat}
The heart of Crayon's inference speed is the Double-Array Trie (DAT). In a traditional Trie, each node allocates a dynamic dictionary mapping child characters to pointers. This causes catastrophic cache fragmentation and $O(M)$ lookups (where $M$ is alphabet size) per character transition.
Crayon eliminates this by flattening the Trie into three contiguous integer arrays:
\begin{enumerate}
\item \texttt{BASE} array: Contains the offset where child nodes begin.
\item \texttt{CHECK} array: Validates parent-child relationships.
\item \texttt{VALUES} array: Stores token IDs for terminal (leaf/accepting) states.
\end{enumerate}
\subsection{Transition Logic}
For a parent state $s$ and an input byte $c$:
\begin{lstlisting}[language=C++,caption=DAT Transition Logic]
int32_t next = ctx.base[s] + c;
// Validation: Does this slot actually belong to parent 's'?
if (next >= ctx.size || ctx.check[next] != s) {
break; // Invalid transition
}
s = next;
int32_t val = ctx.values[s];
if (val != -1) {
best_token = val;
best_len = current_pos - start_pos + 1;
}
\end{lstlisting}
This requires exactly \textbf{three array lookups} per byte processed, resulting in perfectly deterministic, $O(1)$ constant-time transitions per character.
\section{The Core C++ Compiler: DAT Construction via First-Fit Search}
\label{sec:compiler}
The conversion of a hierarchical Trie into the flat DAT format (\texttt{compiler.cpp}) is computationally intensive. It requires solving the packing problem: finding ``parking spots'' in the \texttt{CHECK} array where all child nodes of a given parent can fit without colliding with existing nodes.
Crayon's C++ compiler resolves this utilizing a \textbf{First-Fit Linear Scan}:
\begin{enumerate}
\item Iterate over candidate base offsets $b = 1, 2, 3...$
\item For a set of child byte values $\{c_1, c_2, ..., c_k\}$, check if \texttt{CHECK[b + c\_i] == -1} for all $i$.
\item If a collision is detected, increment $b$ and retry.
\item Once a valid $b$ is found, commit $b$ to \texttt{BASE[parent]} and claim the slots by setting \texttt{CHECK[b + c\_i] = parent}.
\end{enumerate}
By moving this logic from Python (\texttt{dat\_builder.py}) to C++ (\texttt{compiler.cpp}), Crayon achieves a $\sim$500x speedup during the offline compilation phase, allowing a 250,000-token vocabulary to compile in under 100ms.
% ============================================================================
% SECTION 2: THEORETICAL FOUNDATIONS
% ============================================================================
\section{Theoretical Foundations}
\label{sec:theory}
\subsection{Information-Theoretic Framework}
The vocabulary construction problem can be rigorously formalized within Claude Shannon's information theory framework. Given a corpus $\mathcal{C}$ comprising $N$ characters with character distribution $P(c)$, the Shannon entropy provides a fundamental lower bound on achievable compression:
\begin{equation}
H(\mathcal{C}) = -\sum_{c \in \mathcal{C}} P(c) \log_2 P(c)
\label{eq:shannon_entropy}
\end{equation}
This entropy $H(\mathcal{C})$ represents the minimum average number of bits required to represent each character. For natural language text, empirical measurements across diverse corpora yield $H(\mathcal{C}) \approx 1.0$--$1.5$ bits per character for English, with higher values for morphologically rich languages.
\subsection{Optimal Vocabulary Size Derivation}
The optimal vocabulary size emerges from balancing compression efficiency against token ID representation cost. Following the entropy-bound derivation:
\begin{equation}
V_{\text{opt}} \approx 2^{H(\mathcal{C}) + \epsilon}
\label{eq:optimal_vocab}
\end{equation}
where $\epsilon \approx 0.5$ accounts for practical overhead including:
\begin{itemize}
\item Special tokens (\texttt{<PAD>}, \texttt{<UNK>}, \texttt{<BOS>}, \texttt{<EOS>})
\item Suboptimal frequency estimation from finite corpora
\item Multi-domain coverage requirements
\end{itemize}
For English text with $H \approx 1.2$, this yields $V_{\text{opt}} \approx 500{,}000$ tokens as an order-of-magnitude estimate under the stated assumptions; in practice, this paper evaluates fixed profile sizes (\texttt{lite}: 50k, \texttt{standard}: 250k).
\subsection{Information Gain Formulation}
For each candidate token $s$ extracted from the corpus, we define the information gain function that guides vocabulary selection:
\begin{equation}
\text{Gain}(s) = \text{Freq}(s) \times H(s) - \text{Cost}(s)
\label{eq:info_gain}
\end{equation}
The components are:
\textbf{Frequency $\text{Freq}(s)$:} Raw occurrence count in the training corpus. High-frequency tokens provide more compression opportunity.
\textbf{Information Content $H(s)$:} Defined as $H(s) = -\log_2 P(s)$ where $P(s) = \text{Freq}(s) / N$. Rarer tokens carry more information per occurrence.
\textbf{Computational Cost $\text{Cost}(s)$:} Modeled as:
\begin{equation}
\text{Cost}(s) = |s|_{\text{bytes}} \times 0.1 + 1.0
\label{eq:cost}
\end{equation}
This linear cost model captures that longer tokens require more trie traversal steps, with a constant overhead for state machine initialization.
\subsection{Hardware Constraint: SIMD Token Length}
Modern SIMD instruction sets impose fundamental constraints on token representation. The AVX2 instruction set (present in all modern x86-64 CPUs) processes 256-bit (32-byte) vectors. However, practical token matching requires accounting for:
\begin{itemize}
\item UTF-8 variable-width encoding (1--4 bytes per character)
\item Trie node comparison overhead
\item Cache line alignment (64 bytes)
\end{itemize}
After extensive benchmarking, we establish the constraint:
\begin{equation}
|s|_{\text{bytes}} \leq 16
\label{eq:simd_constraint}
\end{equation}
This 16-byte limit ensures:
\begin{itemize}
\item Single AVX2 \texttt{\_mm256\_loadu\_si256} can load token + comparison data
\item Token fits within one quarter of a cache line
\item UTF-8 compatibility: up to 16 ASCII or 4 CJK characters
\end{itemize}
\subsection{Heuristic Multi-Objective Utility Function}
Token selection for the final vocabulary employs an empirical multi-objective utility function attempting to balance three concerns:
\begin{equation}
U(s) = \alpha \cdot G(s) + \beta \cdot C(s) + \gamma \cdot L(s)
\label{eq:utility}
\end{equation}
where $\alpha$, $\beta$, and $\gamma$ are heuristic weights set to $0.4$, $0.3$, and $0.3$ respectively. Rather than a mathematically optimal derivation, these weights serve as an ad-hoc scoring mechanism to guide the vocabulary assembly.
\textbf{Information Gain $G(s)$:} As defined in Equation~\ref{eq:info_gain}.
\textbf{Compression Benefit $C(s)$:}
\begin{equation}
C(s) = |s|_{\text{bytes}} \times \text{Freq}(s)
\end{equation}
Longer tokens with high frequency provide maximum compression.
\textbf{Linguistic Coherence $L(s)$:} A heuristic score:
\begin{equation}
L(s) = \begin{cases}
1.0 & \text{if } s \text{ is alphabetic} \\
0.7 & \text{if } s \text{ is alphanumeric} \\
0.3 & \text{otherwise (symbols, mixed)}
\end{cases}
\end{equation}
This promotes tokens that represent meaningful linguistic units rather than arbitrary byte sequences.
\subsection{Complexity Analysis}
\begin{theorem}[Tokenization Complexity]
Given a text of length $n$ bytes and vocabulary size $V$ encoded in a Double-Array Trie:
\begin{itemize}
\item Time complexity: $O(n \cdot L_{\max})$ where $L_{\max} = 16$ (token limit)
\item Space complexity: $O(V \cdot k)$ where $k$ is average token length
\end{itemize}
\end{theorem}
Since $L_{\max}$ is constant, tokenization is effectively $O(n)$---linear in input size.
% ============================================================================
% SECTION 5: INFERENCE ENGINE BACKENDS
% ============================================================================
\section{Inference Engine: AVX2 SIMD CPU Acceleration}
\label{sec:cpu_engine}
The CPU engine (\texttt{cpu\_engine.cpp}) serves as the ultra-low-latency fallback for all architectures. It introduces vectorization to accelerate character classification.
\subsection{SIMD ASCII Verification}
The engine defines an inline function to quickly scan 32 bytes simultaneously using AVX2 intrinsics:
\begin{lstlisting}[language=C++,caption=AVX2 ASCII Verification]
inline int is_ascii_32_avx2(const char* ptr) {
__m256i chunk = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(ptr));
int mask = _mm256_movemask_epi8(chunk);
return mask == 0;
}
\end{lstlisting}
If the next 32 bytes are verified as ASCII, the engine enters a \textbf{Fast Mode} loop that drops complex UTF-8 boundary checks, allowing the compiler to aggressively unroll the transition loop. This achieves over 18 million tokens/second on a single CPU core.
\section{Inference Engine: CUDA/NVIDIA GPU Parallelization}
\label{sec:cuda_engine}
For massive batch processing, Crayon utilizes NVIDIA GPUs (\texttt{gpu\_engine\_cuda.cu}).
\subsection{Kernel Architecture}
The GPU kernel (\texttt{tokenize\_kernel}) maps each document (or sentence) to a single CUDA thread. Instead of relying on shared memory (which has limited capacity and requires block synchronization), Crayon copies the entire \texttt{BASE}, \texttt{CHECK}, and \texttt{VALUES} arrays to global device memory.
To prevent branch divergence and memory coalescing penalties, the kernel processes tokens linearly, capped at a realistic lookahead:
\begin{lstlisting}[language=C++,caption=CUDA Kernel Logic]
for (int i = pos; i < len && i < pos + 128; ++i) {
unsigned char c = (unsigned char)text_pool[start + i];
int next = base[curr] + c;
// ... validation and transition
}
\end{lstlisting}
To maximize stability and ensure Python compatibility, memory allocations are performed synchronously via \texttt{cudaMalloc} rather than modern async allocators, eliminating context collisions with PyTorch.
\section{Inference Engine: ROCm/HIP AMD GPU Support}
\label{sec:rocm_engine}
Recognizing the diversification of AI hardware, Crayon includes an AMD ROCm backend (\texttt{rocm\_engine.hip}). The build system (\texttt{setup.py}) intelligently detects the presence of the \texttt{hipcc} compiler and dynamically swaps the build path, creating a specialized \texttt{crayon\_rocm} extension.
This maintains absolute architectural parity with the CUDA engine while targeting AMD CDNA/RDNA architectures, ensuring enterprise deployments are not vendor-locked to NVIDIA.
% ============================================================================
% SECTION 6: VOCABULARY TRAINING
% ============================================================================
\section{The Hyper-Fast BPE Trainer: Linked-List + Inverted Index + Lazy Heap}
\label{sec:training}
XERV Crayon vocabularies are constructed through a rigorous entropy-guided training pipeline that transforms raw text corpora into optimized token sets. This section details the complete training process from data ingestion through final vocabulary emission.
\subsection{Training Data Sources}
The training pipeline supports three tiers of data sources with automatic fallback:
\textbf{Tier 1: Hugging Face Streaming.} When the \texttt{datasets} library is available, training data streams directly from Hugging Face Hub without local storage. Each profile defines specific datasets:
\begin{table}[H]
\centering
\caption{Profile Training Data Sources}
\small
\begin{tabular}{@{}lp{4cm}@{}}
\toprule
\textbf{Profile} & \textbf{Datasets} \\
\midrule
lite & \texttt{p50k\_base} \\
standard & \texttt{p50k\_base}+\texttt{o200k\_base} \\
\bottomrule
\end{tabular}
\label{tab:training_sources}
\end{table}
\textbf{Tier 2: Local Bootstrap Corpus.} For offline environments, profile-specific local corpora are supported. The system checks for bootstrap files in the resources directory.
\textbf{Tier 3: Built-in Fallback.} A minimal Shakespeare corpus provides absolute baseline coverage when no external data is available.
\subsection{Zero-Disk Streaming Architecture}
The training pipeline implements a ``zero-disk accumulation'' pattern---data flows directly from remote sources into the entropy engine without intermediate file storage:
\begin{lstlisting}[language=Python,caption=Streaming Data Flow]
def yield_profile_stream(profile):
# Stream from Hugging Face (capped at 100K rows)
for ds_name, split, cols in profile.sources:
ds = load_dataset(ds_name, split=split,
streaming=True)
for row in ds:
for col in cols:
yield row.get(col, "")
\end{lstlisting}
Key characteristics:
\begin{itemize}
\item \textbf{Memory Bounded:} Only one row in memory at a time
\item \textbf{Safety Cap:} Maximum 100,000 rows per source prevents runaway streaming
\item \textbf{Column Extraction:} Multiple columns can contribute text per row
\item \textbf{Error Isolation:} Source failures skip gracefully to next source
\end{itemize}
\subsection{Phase 1: Candidate Extraction}
The first training phase extracts all valid substring candidates from the corpus using a sliding window approach:
\begin{algorithm}[H]
\caption{Candidate Extraction Algorithm}
\small
\begin{algorithmic}[1]
\Require Corpus stream $\mathcal{S}$, max length $L = 16$
\Ensure Candidate frequency map $\mathcal{C}$
\State $\mathcal{C} \gets \emptyset$
\For{each chunk $T$ in $\mathcal{S}$}
\For{$i = 0$ to $|T| - 1$}
\For{$j = i + 1$ to $\min(i + L, |T|)$}
\State $s \gets T[i:j]$
\If{$|s|_{\text{bytes}} \leq 16$}
\State $\mathcal{C}[s] \gets \mathcal{C}[s] + 1$
\EndIf
\EndFor
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
\subsection{Phase 3: Vocabulary Assembly}
The final vocabulary is assembled with priority categories ensuring baseline coverage:
\textbf{Priority 1 (Mandatory):} Special tokens are always included first:
\begin{itemize}
\item \texttt{<PAD>} --- Padding for batch alignment
\item \texttt{<UNK>} --- Unknown/out-of-vocabulary fallback
\item \texttt{<BOS>} --- Beginning of sequence marker
\item \texttt{<EOS>} --- End of sequence marker
\end{itemize}
\textbf{Priority 2 (Baseline):} All printable ASCII characters (IDs 100--355) ensure single-byte fallback for any English text.
\textbf{Priority 3 (Optimized):} Remaining slots filled with entropy-scored candidates in descending utility order.
\begin{lstlisting}[language=Python,caption=Vocabulary Assembly]
# 1. Special tokens (mandatory)
vocab = ["<PAD>", "<UNK>", "<BOS>", "<EOS>"]
# 2. ASCII baseline
for c in string.printable:
if c.strip():
vocab.append(c)
# 3. Entropy-optimized tokens
remaining = target_size - len(vocab)
for token, _ in scored_candidates[:remaining]:
if token not in vocab:
vocab.append(token)
\end{lstlisting}
This completes the offline vocabulary construction pipeline.
\section{Concurrency and Memory Models: Pipeline \& Zero-Copy}
\label{sec:concurrency}
\subsection{Thread-Safe Pipeline Tokenization}
For continuous data streams, Crayon implements a \texttt{PipelineTokenizer} (\texttt{pipeline.py}). It utilizes a multithreaded architecture with bounded queues:
\begin{enumerate}
\item \textbf{Stage 1 (Normalize):} Applies standard Unicode NFC normalization (\texttt{unicode\_normalize\_nfc\_optimized}).
\item \textbf{Stage 2 (Tokenize):} Submits to the core C++ backend.
\item \textbf{Stage 3 (Format):} Wraps results in dictionary formats for downstream neural models.
\end{enumerate}
\subsection{Zero-Copy OS Memory Mapping}
Vocabulary profiles (DAT binaries) are not loaded into heap memory via \texttt{fread()}. Instead, Crayon utilizes the Python \texttt{mmap} module combined with the \texttt{Py\_buffer} protocol (\texttt{cpu\_engine.cpp}).
\begin{lstlisting}[language=C++,caption=Zero-Copy Memory Mapping]
if (PyObject_GetBuffer(py_buffer_obj, &ctx_buffer, PyBUF_SIMPLE) != 0) { ... }
\end{lstlisting}
This means the OS maps the file directly to the process's virtual memory space. Loading a vocabulary takes \textbf{<1ms}, regardless of size, as the OS lazily pages data into RAM upon traversal.
% ============================================================================
% SECTION 8: EXPERIMENTAL EVALUATION
% ============================================================================
\section{Experimental Evaluation}
\label{sec:evaluation}
\subsection{Benchmark Configuration}
Benchmarks were run with the repository script \texttt{benchmark\_suite.py} using \texttt{--device cpu --iterations 10 --warmup 2}. The script writes machine-readable outputs (CSV/JSON) and a \texttt{metadata.json} record in \texttt{benchmark\_results/20260316\_144732}.
\textbf{System:} Windows-10-10.0.19045-SP0; CPU: Intel64 Family 6 Model 142 Stepping 9, GenuineIntel; logical cores: 4; RAM: 7.87 GiB.
\textbf{Software:} Python 3.13.1; tiktoken 0.9.0; transformers 4.57.6; matplotlib 3.10.7; torch 2.10.0+cpu; CUDA available: false.
\subsection{Test Cases and Implementations}
The benchmark suite includes four fixed test cases (see \texttt{benchmark\_suite.py}): \texttt{english}, \texttt{code}, \texttt{unicode}, and \texttt{mixed}. Each case is evaluated against Crayon profiles (\texttt{lite}, \texttt{standard}) and tiktoken baselines (\texttt{p50k\_base}, \texttt{cl100k\_base}, \texttt{o200k\_base}). The evaluated Crayon profiles reuse token sets from \texttt{p50k\_base} (\texttt{lite}) and \texttt{p50k\_base}+\texttt{o200k\_base} (\texttt{standard}).
\subsection{CPU Throughput Results}
\begin{table*}[t]
\centering
\caption{CPU throughput (Millions of tokens/sec) on single machine. Higher is better.}
\small
\begin{tabular}{@{}lrrrr@{}}
\toprule
\textbf{Tokenizer} & \textbf{English} & \textbf{Code} & \textbf{Unicode} & \textbf{Mixed} \\
\midrule
Crayon (lite, 50k) & 11.9M & 14.5M & 17.3M & 13.6M \\
Crayon (standard, 250k) & 11.7M & 6.4M & 15.6M & 10.4M \\
tiktoken (p50k\_base) & 0.63M & 0.65M & 1.18M & 0.73M \\
tiktoken (cl100k\_base) & 0.50M & 0.50M & 0.85M & 0.58M \\
tiktoken (o200k\_base) & 0.37M & 0.38M & 0.54M & 0.40M \\
HF LLaMA (SP-BPE) & 0.28M & -- & -- & -- \\
HF BERT (WordPiece) & 0.19M & -- & -- & -- \\
\bottomrule
\end{tabular}
\label{tab:throughput}
\end{table*}
\subsection{CPU Load-Time Results}
\begin{table*}[t]
\centering
\caption{Load time (ms) initialization phase. Lower is better.}
\small
\begin{tabular}{@{}lrrrr@{}}
\toprule
\textbf{Tokenizer} & \textbf{English} & \textbf{Code} & \textbf{Unicode} & \textbf{Mixed} \\
\midrule
Crayon (lite) & 22.3 & 17.9 & 20.5 & 17.8 \\
Crayon (standard) & 79.2 & 87.1 & 141.4 & 89.9 \\
tiktoken (p50k\_base) & 207.1 & $\sim$0.0* & $\sim$0.0* & $\sim$0.0* \\
tiktoken (cl100k\_base) & 390.3 & $\sim$0.0* & $\sim$0.0* & 0.3* \\
tiktoken (o200k\_base) & 856.5 & $\sim$0.0* & $\sim$0.0* & $\sim$0.0* \\
\bottomrule
\end{tabular}
\end{table*}
\textit{*Note: \texttt{tiktoken} benchmarks report $\sim$0ms load times on subsequent runs due to lazy caching within the benchmarking harness, whereas Crayon measures fresh OS-level \texttt{mmap} invocations.}
\subsection{GPU Benchmarks: CUDA Architecture}
To evaluate hardware offloading capabilities, we compared Crayon's \texttt{gpu\_engine\_cuda.cu} against \texttt{tiktoken} (\texttt{cl100k\_base}) running on CPU in a batch tokenization scenario. The benchmark was run on an NVIDIA Tesla T4 GPU with CUDA 12.6.
\begin{table}[H]
\centering
\caption{Batch Throughput (NVIDIA Tesla T4 GPU vs CPU Baseline)}
\small
\begin{tabular}{@{}lrr@{}}
\toprule
\textbf{Batch Size} & \textbf{Crayon (GPU Tok/sec)} & \textbf{tiktoken (CPU Tok/sec)} \\
\midrule
1,000 docs & 9.7M & 0.87M \\
10,000 docs & 8.3M & 0.81M \\
50,000 docs & 10.1M & 1.07M \\
\bottomrule
\end{tabular}
\label{tab:gpu_throughput}
\end{table}
This demonstrates a sustained $\sim$10x throughput advantage by offloading dictionary traversal to global device memory on the Tesla T4 architecture.
% ============================================================================
% SECTION 9: DISCUSSION
% ============================================================================
\section{Discussion}
\label{sec:discussion}
\subsection{Benchmark Methodology Insights}
Our standardized benchmark suite reveals performance characteristics across text types. Throughput results are reported in Table~\ref{tab:throughput}, and load-time results are reported in Table~\ref{tab:loadtime}.
\subsection{Architectural Insights}
The performance improvements stem from predictable memory access via the Double-Array Trie representation, SIMD-accelerated fast paths on CPU where applicable, and minimized startup overhead through memory-mapped loading.
\subsection{Limitations}
We acknowledge several methodological and architectural limitations in this study:
\begin{itemize}
\item \textbf{Statistical Rigor:} CPU benchmarks were conducted on a single consumer-grade node without reported statistical error bars, confidence intervals, or repeated runs across diverse hardware architectures, limiting generalized claims.
\item \textbf{Missing Ablations:} The system aggregates multiple optimizations (DAT arrays, SIMD fast-paths, heuristic BPE). We lack granular ablation studies (e.g., DAT vs. standard hash-map, or the entropy utility vs. a pure frequency baseline) to isolate the impact of individual features.
\item \textbf{Token Length:} The rigid 16-byte SIMD constraint artificially limits representations of long compound words, impacting morphological coverage for certain languages.
\item \textbf{Downstream Evaluation:} The evaluation focuses strictly on micro-benchmarking (tokens/sec). We have not yet measured whether this faster tokenization translates to improved downstream LLM training metrics (e.g., perplexity, wall-clock time to convergence).
\item \textbf{GPU Kernel Divergence:} The current CUDA kernel employs a simplistic per-document thread mapping which may suffer from warp divergence and underutilize shared memory on varying sentence lengths.
\end{itemize}
\subsection{Future Directions}
Future research must prioritize rigorous, multi-machine evaluations across diverse datasets (e.g., RedPajama, The Stack) and provide ablation studies validating the core DAT and SIMD mechanisms. Architecturally, we plan to explore AVX-512 for 64-byte vector processing and implement shared-memory caching for the GPU kernels to mitigate global memory latency.
% ============================================================================
% SECTION 10: CONCLUSION
% ============================================================================
\section{Conclusion}
\label{sec:conclusion}
XERV Crayon explores heterogeneous tokenization acceleration by utilizing hardware-native execution paths (AVX2, CUDA, ROCm) and memory-mapped Double-Array Tries. While empirical micro-benchmarks suggest substantial throughput improvements over existing CPU implementations, significant methodological work remains to rigorously validate the system's impact on end-to-end LLM training pipelines.
% ============================================================================
% REFERENCES
% ============================================================================
\begin{thebibliography}{9}
\bibitem{shannon1948}
Shannon, C. E. (1948). A mathematical theory of communication. \textit{Bell System Technical Journal}, 27(3), 379--423.
\bibitem{sennrich2016}
Sennrich, R., Haddow, B., \& Birch, A. (2016). Neural machine translation of rare words with subword units. \textit{ACL}.
\bibitem{kudo2018}
Kudo, T., \& Richardson, J. (2018). SentencePiece: A simple and language independent subword tokenizer. \textit{EMNLP}.
\bibitem{radford2019}
Radford, A., et al. (2019). Language models are unsupervised multitask learners. \textit{OpenAI Technical Report}.
\bibitem{touvron2023}
Touvron, H., et al. (2023). LLaMA: Open and Efficient Foundation Language Models. \textit{arXiv preprint arXiv:2302.13971}.
\bibitem{wolf2020}
Wolf, T., et al. (2020). Transformers: State-of-the-Art Natural Language Processing. \textit{EMNLP}.
\bibitem{aoe1989}
Aoe, J. (1989). An efficient digital search algorithm by using a double-array structure. \textit{IEEE Trans. Software Engineering}, 15(9), 1066--1077.
\bibitem{sennrich2016}
Sennrich, R., Haddow, B., \& Birch, A. (2016). Neural machine translation of rare words with subword units. \textit{ACL}.
\bibitem{kudo2018}
Kudo, T., \& Richardson, J. (2018). SentencePiece: A simple and language independent subword tokenizer. \textit{EMNLP}.
\bibitem{radford2019}
Radford, A., et al. (2019). Language models are unsupervised multitask learners. \textit{OpenAI Technical Report}.
\bibitem{intel2021}
Intel Corporation. (2021). Intel 64 and IA-32 Architectures Optimization Reference Manual.
\bibitem{nvidia2023}
NVIDIA Corporation. (2023). CUDA C++ Programming Guide v12.0.
\bibitem{amd2023}
AMD. (2023). HIP Programming Guide for ROCm 5.x.
\end{thebibliography}
\end{document}