
Conversation

@ilan-theodoro

This addresses issue #1765.

@ngxson (Collaborator) commented Nov 2, 2024

I think many things have changed since the issue was created, so the approach in this PR is no longer valid.

Users can now choose between using llama_tokenize and using their own implementation, so adding llama_set_custom_tokenizer provides no additional value.

Furthermore, adding a custom tokenizer could prevent the tokenizer API from being thread-safe.
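
Below is a minimal sketch (not part of this PR) of the point above, assuming the llama.cpp C API as of late 2024; exact signatures vary between versions, and my_custom_tokenize is a hypothetical user-side function.

```cpp
#include "llama.h"

#include <string>
#include <vector>

// Hypothetical user-side tokenizer: any code that produces llama_token
// IDs valid for the loaded model's vocabulary can stand in here.
static std::vector<llama_token> my_custom_tokenize(const std::string & text) {
    std::vector<llama_token> out;
    // ... map `text` to token IDs with your own vocabulary logic ...
    (void) text;
    return out;
}

// The caller picks the tokenizer; llama_decode consumes the resulting
// token IDs either way, so no llama_set_custom_tokenizer hook is needed.
// Keeping user code outside the library also keeps the core tokenizer
// API free of user-injected state (the thread-safety concern above).
static std::vector<llama_token> tokenize(const llama_model * model,
                                         const std::string & text,
                                         bool use_builtin) {
    if (!use_builtin) {
        return my_custom_tokenize(text); // user-owned path
    }
    // Built-in path: over-size the buffer; llama_tokenize returns the
    // token count on success (negative means the buffer was too small).
    std::vector<llama_token> tokens(text.size() + 16);
    const int32_t n = llama_tokenize(model, text.c_str(), (int32_t) text.size(),
                                     tokens.data(), (int32_t) tokens.size(),
                                     /*add_special=*/true, /*parse_special=*/false);
    tokens.resize(n > 0 ? n : 0);
    return tokens;
}
```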

