---
title: Use Microsoft.ML.Tokenizers for text tokenization
description: Learn how to use the Microsoft.ML.Tokenizers library to tokenize text for AI models, manage token counts, and work with various tokenization algorithms.
ms.topic: how-to
ms.date: 10/29/2025
ai-usage: ai-assisted
#customer intent: As a .NET developer, I want to use the Microsoft.ML.Tokenizers library to tokenize text so I can work with AI models, manage costs, and handle token limits effectively.
---
# Use Microsoft.ML.Tokenizers for text tokenization

The [Microsoft.ML.Tokenizers](https://www.nuget.org/packages/Microsoft.ML.Tokenizers) library provides a comprehensive set of tools for tokenizing text in .NET applications. Tokenization is essential when working with large language models (LLMs), as it allows you to manage token counts, estimate costs, and preprocess text for AI models.

This article shows you how to use the library's key features and work with different tokenizer models.

## Prerequisites

- [.NET 8.0 SDK](https://dotnet.microsoft.com/download/dotnet/8.0) or later

## Install the package

Install the Microsoft.ML.Tokenizers NuGet package:

```dotnetcli
dotnet add package Microsoft.ML.Tokenizers
```

## Key features

The Microsoft.ML.Tokenizers library provides:

- **Extensible tokenizer architecture**: Allows specialization of Normalizer, PreTokenizer, Model/Encoder, and Decoder components.
- **Multiple tokenization algorithms**: Supports BPE (Byte Pair Encoding), Tiktoken, Llama, CodeGen, and more.
- **Token counting and estimation**: Helps manage costs and context limits when working with AI services.
- **Flexible encoding options**: Provides methods to encode text to token IDs, count tokens, and decode tokens back to text.

## Use Tiktoken tokenizer

The Tiktoken tokenizer is commonly used with OpenAI models like GPT-4. The following example shows how to initialize a Tiktoken tokenizer and perform common operations:

:::code language="csharp" source="./snippets/use-tokenizers/csharp/TokenizersExamples/TiktokenExample.cs" id="TiktokenBasic":::

Cache and reuse the tokenizer instance throughout your application for better performance; creating a tokenizer loads vocabulary data, which can be expensive.
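
The core pattern is sketched below. It's a minimal sketch that assumes the `TiktokenTokenizer.CreateForModel` factory method; the model name and sample text are placeholders:

```csharp
using Microsoft.ML.Tokenizers;

// Create (and ideally cache) a tokenizer for a specific OpenAI model.
Tokenizer tokenizer = TiktokenTokenizer.CreateForModel("gpt-4");

string text = "Hello, World!";

// Count tokens without needing the IDs themselves.
int tokenCount = tokenizer.CountTokens(text);

// Encode the text to token IDs, then decode the IDs back to text.
IReadOnlyList<int> ids = tokenizer.EncodeToIds(text);
string roundTripped = tokenizer.Decode(ids);

Console.WriteLine($"{tokenCount} tokens: {string.Join(", ", ids)}");
```

The example assumes a .NET 8 project with top-level statements and implicit usings enabled.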

### Manage token limits

When working with LLMs, you often need to manage text within token limits. The following example shows how to trim text to a specific token count:

:::code language="csharp" source="./snippets/use-tokenizers/csharp/TokenizersExamples/TiktokenExample.cs" id="TiktokenTrim":::
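
One way to implement trimming is with `GetIndexByTokenCount`. This sketch assumes the overload that returns a character index along with `out` parameters for the normalized text and actual token count; the budget value is arbitrary:

```csharp
using Microsoft.ML.Tokenizers;

Tokenizer tokenizer = TiktokenTokenizer.CreateForModel("gpt-4");

string prompt = "This prompt might not fit within the model's context window.";
int budget = 8; // hypothetical token budget

// Find the character index where the first `budget` tokens end.
int charIndex = tokenizer.GetIndexByTokenCount(
    prompt, budget, out string? normalizedText, out int tokenCount);

// Keep only the prefix that fits within the budget.
string trimmed = (normalizedText ?? prompt)[..charIndex];
```

`GetIndexByTokenCountFromEnd` works the same way but keeps the end of the text instead of the start, which is useful for trimming conversation history.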

## Use Llama tokenizer

The Llama tokenizer is designed for the Llama family of models. It requires a tokenizer model file, which you can download from model repositories like Hugging Face:

:::code language="csharp" source="./snippets/use-tokenizers/csharp/TokenizersExamples/LlamaExample.cs" id="LlamaBasic":::
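
As a minimal sketch, assuming the `LlamaTokenizer.Create(Stream)` factory, loading a model file might look like this (`tokenizer.model` is a placeholder file name):

```csharp
using Microsoft.ML.Tokenizers;

// "tokenizer.model" is a placeholder for a SentencePiece model file
// downloaded from a model repository such as Hugging Face.
using Stream modelStream = File.OpenRead("tokenizer.model");
LlamaTokenizer tokenizer = LlamaTokenizer.Create(modelStream);

IReadOnlyList<int> ids = tokenizer.EncodeToIds("Hello, World!");
Console.WriteLine($"{ids.Count} tokens");
</imports>
```
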

### Advanced encoding options

The tokenizer supports advanced encoding options, such as controlling normalization and pretokenization:

:::code language="csharp" source="./snippets/use-tokenizers/csharp/TokenizersExamples/LlamaExample.cs" id="LlamaAdvanced":::
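
A sketch of the idea, assuming the encoding methods take optional `considerPreTokenization` and `considerNormalization` parameters (the file path is a placeholder):

```csharp
using Microsoft.ML.Tokenizers;

using Stream modelStream = File.OpenRead("tokenizer.model"); // placeholder path
LlamaTokenizer tokenizer = LlamaTokenizer.Create(modelStream);

string text = "Hello, World!";

// Default: normalization and pretokenization both run before encoding.
IReadOnlyList<int> defaultIds = tokenizer.EncodeToIds(text);

// Skip normalization when the input has already been prepared.
IReadOnlyList<int> rawIds = tokenizer.EncodeToIds(
    text, considerPreTokenization: true, considerNormalization: false);
```

Skipping steps you've already performed avoids redundant work, but the resulting IDs can differ from the default encoding, so use these options only when you know the input is in the expected form.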

## Use BPE tokenizer

Byte Pair Encoding (BPE) is the underlying algorithm used by many tokenizers, including Tiktoken. The following example demonstrates BPE tokenization:

:::code language="csharp" source="./snippets/use-tokenizers/csharp/TokenizersExamples/BpeExample.cs" id="BpeBasic":::

The library also provides specialized tokenizers like `BpeTokenizer` and `EnglishRobertaTokenizer` that you can configure with custom vocabularies for specific models.
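
For example, assuming a `BpeTokenizer.Create` overload that takes vocabulary and merge-rule file paths, loading a custom vocabulary might look like this sketch (both file names are placeholders):

```csharp
using Microsoft.ML.Tokenizers;

// "vocab.json" and "merges.txt" are placeholder names for a model's BPE
// vocabulary and merge-rule files, typically downloaded alongside the model.
BpeTokenizer tokenizer = BpeTokenizer.Create("vocab.json", "merges.txt");

IReadOnlyList<int> ids = tokenizer.EncodeToIds("Hello, World!");
```
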

## Common tokenizer operations

All tokenizers in the library derive from the `Tokenizer` base class, which provides a consistent API:

- **`EncodeToIds`**: Converts text to a list of token IDs.
- **`Decode`**: Converts token IDs back to text.
- **`CountTokens`**: Returns the number of tokens in a text string.
- **`EncodeToTokens`**: Returns detailed token information, including values and IDs.
- **`GetIndexByTokenCount`**: Finds the character index for a specific token count, counting from the start of the text.
- **`GetIndexByTokenCountFromEnd`**: Finds the character index for a specific token count, counting from the end of the text.
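
Because these members are defined on the base class, code written against `Tokenizer` works with any concrete tokenizer. The following sketch uses Tiktoken as the concrete type and assumes `EncodeToTokens` returns `EncodedToken` values with `Value` and `Id` properties:

```csharp
using Microsoft.ML.Tokenizers;

Tokenizer tokenizer = TiktokenTokenizer.CreateForModel("gpt-4");
string text = "Tokenizers convert text to IDs.";

// Detailed view: the value and ID of each token.
foreach (EncodedToken token in tokenizer.EncodeToTokens(text, out string? normalized))
{
    Console.WriteLine($"{token.Value} -> {token.Id}");
}

// The same instance supports counting and round-tripping.
int count = tokenizer.CountTokens(text);
string decoded = tokenizer.Decode(tokenizer.EncodeToIds(text));
```
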

## Migration from other libraries

If you're currently using `DeepDev.TokenizerLib` or `SharpToken`, consider migrating to Microsoft.ML.Tokenizers. The library has been enhanced to cover scenarios from those libraries and provides better performance and support. For migration guidance, see the [migration guide](https://github.com/dotnet/machinelearning/blob/main/docs/code/microsoft-ml-tokenizers-migration-guide.md).

## Related content

- [Understanding tokens](../conceptual/understanding-tokens.md)
- [Microsoft.ML.Tokenizers API reference](/dotnet/api/microsoft.ml.tokenizers)
- [Microsoft.ML.Tokenizers NuGet package](https://www.nuget.org/packages/Microsoft.ML.Tokenizers)