Hello @CISC @MaggotHATE @slaren and @juliendenize (tagged due to prior related discussion).
I recognize that Mistral has been... enthusiastically... promoting the usage of `mistral-common`, and I am sure many people do find that library helpful. However, in my opinion it is not correct to mandate its installation for all users. For example, users who simply want to convert a Qwen, Gemma, or GLM-Air model have been forced to install `mistral-common` ever since #14737 was merged.
For a practical example, @TheLocalDrummer ran into the same issue here: #15420 (comment)
I would also point you to the concerns raised by u/dobomex761604 and various other redditors in this comment chain: https://www.reddit.com/r/LocalLLaMA/comments/1njgovj/magistral_small_2509_has_been_released/neq8w8p/
Ideally this dependency would not be mandated at all, with a llama.cpp community solution preferred. Failing that, perhaps we could place the imports behind a try/except block so that users who do not convert Mistral models are not burdened by this limitation:
```python
try:
    from mistral_common.tokens.tokenizers.base import TokenizerVersion
    from mistral_common.tokens.tokenizers.multimodal import DATASET_MEAN, DATASET_STD
    from mistral_common.tokens.tokenizers.tekken import Tekkenizer
    from mistral_common.tokens.tokenizers.sentencepiece import (
        SentencePieceTokenizer,
    )
    _mistral_common_installed = True
except ImportError:
    _mistral_common_installed = False
    print("Warning: mistral-common is not installed; you will not be able to convert Mistral models.")
```
Alternative suggestions are welcome. I do like and use Mistral models, but I don't think they should be given special treatment in this project.