Conversation

compilade
Collaborator

SuperBPE (from https://huggingface.co/UW/OLMo2-8B-SuperBPE-t180k) uses superword tokens, so some of its tokens span multiple words.

This PR simply adds the necessary regular expressions for this tokenizer since the BPE algorithm in src/llama-vocab.cpp apparently already works properly for this.
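
For context, wiring up a new pre-tokenizer in src/llama-vocab.cpp amounts to adding a case to the switch that selects the split regexes (regex_exprs) applied before the BPE merge loop. A rough sketch of the shape of that change follows; the enum name and the patterns shown here are illustrative placeholders, not the literal ones added by this PR (the real patterns mirror the model's tokenizer.json pre-tokenizer):

// src/llama-vocab.cpp (sketch): the pre-tokenizer type picks the split
// regexes that run before the usual BPE merges.
// NOTE: LLAMA_VOCAB_PRE_TYPE_SUPERBPE and the patterns below are
// illustrative placeholders; the actual patterns are taken from the
// model's tokenizer.json pre-tokenizer.
case LLAMA_VOCAB_PRE_TYPE_SUPERBPE:
    regex_exprs = {
        // unlike a GPT-2-style split, a SuperBPE-style split must not cut on
        // every space, so superword pieces like " of the" can survive
        "\\p{N}+",
        " ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*",
    };
    break;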

Here's the same example as the one from the model card linked above:

$ ./bin/llama-tokenize -m ../models/ggml-vocab-superbpe.gguf -p "By the way, I am a fan of the Milky Way." --log-disable
189205 -> 'By the way'
181251 -> ', I am'
   244 -> ' a'
  4332 -> ' fan'
180235 -> ' of the'
199785 -> ' Milky Way'
    13 -> '.'

I've also tested this tokenizer with test-tokenizer-0, and it passes.

Test output for superbpe with test-tokenizer-0
$  ./bin/test-tokenizer-0 ../models/ggml-vocab-superbpe.gguf
main : reading vocab from: '../models/ggml-vocab-superbpe.gguf'
llama_model_loader: loaded meta data with 22 key-value pairs and 0 tensors from ../models/ggml-vocab-superbpe.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = olmo2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Superbpe
llama_model_loader: - kv   3:                          olmo2.block_count u32              = 32
llama_model_loader: - kv   4:                       olmo2.context_length u32              = 3000
llama_model_loader: - kv   5:                     olmo2.embedding_length u32              = 4096
llama_model_loader: - kv   6:                  olmo2.feed_forward_length u32              = 11008
llama_model_loader: - kv   7:                 olmo2.attention.head_count u32              = 32
llama_model_loader: - kv   8:              olmo2.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:                       olmo2.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  10:     olmo2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  11:                          general.file_type u32              = 1
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  13:                         tokenizer.ggml.pre str              = superbpe
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,200064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,200064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,199757]  = ["Ġ t", "Ġ a", "h e", "i n", "r e",...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 200004
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 200004
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 200004
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 200001
llama_model_loader: - kv  21:               general.quantization_version u32              = 2
print_info: file format = GGUF V3 (latest)
print_info: file type   = F16
print_info: file size   = 0.00 MiB (-nan BPW) 
init_tokenizer: initializing tokenizer for type 2
load: control token: 200001 '<|padding|>' is not marked as EOG
load: special tokens cache size = 5
load: token to piece cache size = 1.3823 MB
print_info: arch             = olmo2
print_info: vocab_only       = 1
print_info: model type       = ?B
print_info: model params     = 0.00 K
print_info: general.name     = Superbpe
print_info: vocab type       = BPE
print_info: n_vocab          = 200064
print_info: n_merges         = 199757
print_info: BOS token        = 200004 '<|endoftext|>'
print_info: EOS token        = 200004 '<|endoftext|>'
print_info: EOT token        = 200004 '<|endoftext|>'
print_info: UNK token        = 200004 '<|endoftext|>'
print_info: PAD token        = 200001 '<|padding|>'
print_info: LF token         = 185 'Ċ'
print_info: EOG token        = 200004 '<|endoftext|>'
print_info: max token length = 512
llama_model_load: vocab only - skipping tensors
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 512
llama_context: n_ctx_per_seq = 512
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 0.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (512) > n_ctx_train (0) -- possible training context overflow

src: ''
res: ''
tok: 

src: '  '
res: '  '
tok: 184 

src: '
'
res: '
'
tok: 23409 

src: '
'
res: '
'
tok: 185 

src: '

'
res: '

'
tok: 185 185 

src: '


'
res: '


'
tok: 185 185 185 

src: '
 

 


                         
  
   
    
     
🚀 (normal) 😶‍🌫(multiple emojis concatenated) ✅ 🦙🦙 3 33 333 3333 33333 333333 3333333 33333333 3.3 3..3 3...3 កាន់តែពិសេសអាច😁 ?我想在apple工作1314151天~ ------======= нещо на Български ''''''```````""""......!!!!!!?????? I've been 'told he's there, 'RE you sure? 'M not sure I'll make it, 'D you like some tea? We'Ve a'lL'
res: '
 

 


                         
  
   
    
     
🚀 (normal) 😶‍🌫(multiple emojis concatenated) ✅ 🦙🦙 3 33 333 3333 33333 333333 3333333 33333333 3.3 3..3 3...3 កាន់តែពិសេសអាច😁 ?我想在apple工作1314151天~ ------======= нещо на Български ''''''```````""""......!!!!!!?????? I've been 'told he's there, 'RE you sure? 'M not sure I'll make it, 'D you like some tea? We'Ve a'lL'
tok: 185 2492 185 2492 185 185 207 36818 94071 23409 11791 38518 16210 67908 174535 367 7945 8 19559 114 73191 87410 104 34832 367 42386 81097 83890 8 148574 9804 99 234 7340 99 234 207 18 207 3173 207 21863 207 18 21863 207 3173 21863 207 21863 21863 207 18 21863 21863 207 3173 21863 21863 207 18 13 18 207 18 463 18 207 18 722 18 155170 209 128460 172612 69472 220 39810 224 69472 211 39810 231 39810 115 39810 240 69472 210 39810 240 39810 95 128460 39810 214 36569 210 4396 28504 73058 25375 20885 126420 16 31306 23655 57865 126847 87763 134034 55940 46539 4947 32784 86579 80774 10908 22928 62768 123060 35229 29599 80630 8847 115301 16451 51395 95685 182848 855 37709 180710 526 181224 2519 306 1242 30 855 44 181237 181542 690 180669 855 35 184678 514 6899 182578 6 19685 244 6 75 43 

src: '
 ='
res: '
 ='
tok: 185 817 

src: ' '
res: ' '
tok: 207 

src: '  '
res: '  '
tok: 283 

src: '   '
res: '   '
tok: 456 

src: '    Hello'
res: '    Hello'
tok: 350 12586 

src: '    Hello
    Hello'
res: '    Hello
    Hello'
tok: 350 12586 185 350 12586 

src: '   Hello'
res: '   Hello'
tok: 283 16151 

src: '  Hello'
res: '  Hello'
tok: 283 12586 

src: ' ('
res: ' ('
tok: 367 

src: ' Hello'
res: ' Hello'
tok: 16151 

src: ' Hello World'
res: ' Hello World'
tok: 16151 2707 

src: ' Hello World!'
res: ' Hello World!'
tok: 16151 2707 0 

src: ' Hello world'
res: ' Hello world'
tok: 16151 902 

src: ' Hello, world!'
res: ' Hello, world!'
tok: 16151 11 902 0 

src: ' discards'
res: ' discards'
tok: 84445 

src: ' this is 🦙.cpp'
res: ' this is 🦙.cpp'
tok: 180505 9804 99 234 13 29808 

src: '!!!!!!'
res: '!!!!!!'
tok: 51395 

src: '' era'
res: '' era'
tok: 6 6282 

src: '3'
res: '3'
tok: 18 

src: '33'
res: '33'
tok: 3173 

src: '333'
res: '333'
tok: 21863 

src: '3333'
res: '3333'
tok: 18 21863 

src: '33333'
res: '33333'
tok: 3173 21863 

src: '333333'
res: '333333'
tok: 21863 21863 

src: '3333333'
res: '3333333'
tok: 18 21863 21863 

src: '33333333'
res: '33333333'
tok: 3173 21863 21863 

src: '333333333'
res: '333333333'
tok: 21863 21863 21863 

src: 'Cửa Việt'
res: 'Cửa Việt'
tok: 34 148753 64 131865 

src: 'Führer'
res: 'Führer'
tok: 37 84923 

src: 'Hello'
res: 'Hello'
tok: 12586 

src: 'Hello World'
res: 'Hello World'
tok: 12586 2707 

src: 'Hello world'
res: 'Hello world'
tok: 12586 902 

src: 'Hello, world!'
res: 'Hello, world!'
tok: 12586 11 902 0 

src: 'Hello, y'all! How are you 😁 ?我想在apple工作1314151天~'
res: 'Hello, y'all! How are you 😁 ?我想在apple工作1314151天~'
tok: 12586 184466 193427 190724 181027 161635 4396 28504 73058 25375 20885 126420 16 31306 23655 57865 126847 

src: 'ied 4 ½ months'
res: 'ied 4 ½ months'
tok: 920 207 19 207 16705 2158 

src: 'w048 7tuijk dsdfhu'
res: 'w048 7tuijk dsdfhu'
tok: 86 32519 207 22 24777 46426 35260 7387 14619 

src: 'нещо на Български'
res: 'нещо на Български'
tok: 55707 46539 4947 32784 86579 80774 10908 22928 62768 123060 

src: 'កាន់តែពិសេសអាចខលចេញ'
res: 'កាន់តែពិសេសអាចខលចេញ'
tok: 158227 128460 172612 69472 220 39810 224 69472 211 39810 231 39810 115 39810 240 69472 210 39810 240 39810 95 128460 39810 214 39810 210 39810 236 39810 214 69472 210 39810 218 

src: '🚀 (normal) 😶‍🌫(multiple emojis concatenated) ✅ (only emoji that has its own token)'
res: '🚀 (normal) 😶‍🌫(multiple emojis concatenated) ✅ (only emoji that has its own token)'
tok: 174535 367 7945 8 19559 114 73191 87410 104 34832 367 42386 81097 83890 8 148574 195935 52672 180867 181064 11997 8 

Tests passed
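
For reference, the same check can be reproduced programmatically through the C API. This is only a minimal sketch, assuming the current llama.h entry points where tokenization goes through a llama_vocab pointer; a vocab-only GGUF is enough:

#include "llama.h"
#include <cstdio>
#include <string>
#include <vector>

int main() {
    llama_backend_init();

    // a vocab-only model is enough for tokenization; no tensors are loaded
    llama_model_params mparams = llama_model_default_params();
    mparams.vocab_only = true;

    llama_model * model = llama_model_load_from_file("../models/ggml-vocab-superbpe.gguf", mparams);
    const llama_vocab * vocab = llama_model_get_vocab(model);

    const std::string text = "By the way, I am a fan of the Milky Way.";

    // first call with an empty buffer: the negated return value is the token count
    const int n = -llama_tokenize(vocab, text.c_str(), (int32_t) text.size(), nullptr, 0, false, false);
    std::vector<llama_token> tokens(n);
    llama_tokenize(vocab, text.c_str(), (int32_t) text.size(), tokens.data(), (int32_t) tokens.size(), false, false);

    for (const llama_token id : tokens) {
        char buf[128];
        const int len = llama_token_to_piece(vocab, id, buf, sizeof(buf), 0, false);
        printf("%6d -> '%.*s'\n", id, len, buf);
    }

    llama_model_free(model);
    llama_backend_free();
    return 0;
}

Built against libllama, this should print the same id/piece pairs as the llama-tokenize output above.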

compilade requested a review from ngxson on March 23, 2025.
github-actions bot added the python (python script changes) label on March 23, 2025.
compilade added the model (Model specific) label on March 23, 2025.
ngxson merged commit 00d5380 into master on March 24, 2025.
53 checks passed