I am trying to replicate these results with the quantization code.
Is the environment setup any different from `eval.sh` in the original LLaDA code? Also, I'm getting an error that says `llada` isn't supported yet:
```
[rank3]: Traceback (most recent call last):
[rank3]:   File "/nvme-data2/atharvchagi/LLaDA/quantization/eval_llada.py", line 580, in <module>
[rank3]:     cli_evaluate()
[rank3]:   File "/data/atharvchagi/miniforge/envs/llm-inf/lib/python3.10/site-packages/lm_eval/__main__.py", line 389, in cli_evaluate
[rank3]:     results = evaluator.simple_evaluate(
[rank3]:   File "/data/atharvchagi/miniforge/envs/llm-inf/lib/python3.10/site-packages/lm_eval/utils.py", line 422, in _wrapper
[rank3]:     return fn(*args, **kwargs)
[rank3]:   File "/data/atharvchagi/miniforge/envs/llm-inf/lib/python3.10/site-packages/lm_eval/evaluator.py", line 209, in simple_evaluate
[rank3]:     lm = lm_eval.api.registry.get_model(model).create_from_arg_string(
[rank3]:   File "/data/atharvchagi/miniforge/envs/llm-inf/lib/python3.10/site-packages/lm_eval/api/model.py", line 151, in create_from_arg_string
[rank3]:     return cls(**args, **args2)
[rank3]:   File "/nvme-data2/atharvchagi/LLaDA/quantization/eval_llada.py", line 278, in __init__
[rank3]:     self.model = GPTQModel.load(model_path, device='cuda', trust_remote_code=True)
[rank3]:   File "/data/atharvchagi/miniforge/envs/llm-inf/lib/python3.10/site-packages/gptqmodel/models/auto.py", line 293, in load
[rank3]:     m = cls.from_quantized(
[rank3]:   File "/data/atharvchagi/miniforge/envs/llm-inf/lib/python3.10/site-packages/gptqmodel/models/auto.py", line 365, in from_quantized
[rank3]:     model_type = check_and_get_model_type(model_id_or_path, trust_remote_code)
[rank3]:   File "/data/atharvchagi/miniforge/envs/llm-inf/lib/python3.10/site-packages/gptqmodel/models/auto.py", line 239, in check_and_get_model_type
[rank3]:     raise TypeError(f"{config.model_type} isn't supported yet.")
[rank3]: TypeError: llada isn't supported yet.
```
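For what it's worth, the traceback shows that GPTQModel dispatches on the `model_type` field of the checkpoint's `config.json` (`check_and_get_model_type`) and raises this `TypeError` for any architecture missing from its supported-model registry, so a stock GPTQModel install will reject LLaDA regardless of environment setup. A minimal sketch to confirm what your checkpoint reports, using `AutoConfig` from Hugging Face `transformers` (the checkpoint path is a placeholder):

```python
from transformers import AutoConfig

# Placeholder path: point this at the checkpoint you passed to GPTQModel.load.
config = AutoConfig.from_pretrained(
    "path/to/LLaDA-8B-Base",
    trust_remote_code=True,  # LLaDA's config class ships as custom code in the repo
)

# GPTQModel's check_and_get_model_type looks this value up in its supported-model
# table and raises `TypeError: llada isn't supported yet.` when it is absent.
print(config.model_type)  # prints "llada" for LLaDA checkpoints
```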
Thank you for your great work. We have released the 4-bit GPTQ-quantized LLaDA model on Hugging Face:
Based on the published evaluation code, we have evaluated the quantized base model. The results are as follows:
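As a pointer for anyone reproducing this, loading the released checkpoint should look roughly like the `GPTQModel.load` call in `eval_llada.py` above. A minimal sketch, assuming a GPTQModel build that recognizes the `llada` model type (the repo id below is a placeholder, not the actual Hugging Face id):

```python
from gptqmodel import GPTQModel

# Placeholder repo id: substitute the actual Hugging Face id of the
# released 4-bit GPTQ LLaDA checkpoint.
model = GPTQModel.load(
    "your-org/LLaDA-8B-Base-GPTQ-4bit",
    device="cuda",
    trust_remote_code=True,  # LLaDA ships custom modeling code
)
```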