I added Vietnamese words as AddedTokens to a pretrained tokenizer, for example "đem". However, when I decode the token ID of that added token, the result is wrong. This is caused by the character đ, which is mapped to the byte \x11. Here is my code:
from tokenizers import Tokenizer
tokenizer = Tokenizer.from_file('/home/ec2-user/efs/OLMo/olmo_data/tokenizers/allenai_dolma2.json')
tokenizer.add_tokens(["đem"])
tokenizer.decode([tokenizer.token_to_id("đem")])
and here is the result:
'\x11em'
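I think the mapping comes from the byte-level alphabet. Below is a minimal sketch (assuming the dolma2 tokenizer uses a GPT-2-style ByteLevel decoder) that reproduces the bytes_to_unicode table used by byte-level BPE tokenizers: every raw byte gets a printable stand-in character, and byte 0x11 happens to get the stand-in chr(0x111), which is đ.

# Sketch of the GPT-2-style byte-to-unicode table used by ByteLevel BPE.
# Assumption: the dolma2 tokenizer's decoder applies the inverse of this table.
def bytes_to_unicode():
    # Printable bytes keep their own codepoint as the stand-in character.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("\xa1"), ord("\xac") + 1))
          + list(range(ord("\xae"), ord("\xff") + 1)))
    cs = bs[:]
    n = 0
    # Every remaining byte is assigned a stand-in starting at codepoint 256.
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, map(chr, cs)))

table = bytes_to_unicode()
print(table[0x11])        # 'đ' -- the stand-in character for byte 0x11 is U+0111
inverse = {c: b for b, c in table.items()}
print(hex(inverse["đ"]))  # 0x11 -- so the decoder maps a literal 'đ' back to \x11

Since the added token is stored literally rather than in its byte-level-encoded form, the decoder treats the đ in "đem" as the stand-in for byte 0x11 and emits '\x11em'.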