Commit d8ef6f8

[Bug fix] remove max_memory from from_config init in hf_ptq (#373)
Signed-off-by: realAsma <[email protected]>
1 parent 70abfb4 commit d8ef6f8

File tree: 1 file changed (+1, -3)

examples/llm_ptq/example_utils.py (1 addition, 3 deletions)
@@ -204,9 +204,7 @@ def get_model(
         if auto_model_module != AutoModelForCausalLM:
             model_kwargs2.pop("trust_remote_code", None)
         model_kwargs2["torch_dtype"] = torch_dtype
-        # DeciLMForCausalLM does not support max_memory argument
-        if "architectures" in hf_config and "DeciLMForCausalLM" in hf_config.architectures:
-            model_kwargs2.pop("max_memory", None)
+        model_kwargs2.pop("max_memory", None)
         model = from_config(hf_config, **model_kwargs2)

         max_memory = get_max_memory()
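The gist of the fix: `max_memory` is a device-placement argument meant for weight-loading paths, and a config-based constructor call will reject it as an unexpected keyword, so the patch pops it unconditionally instead of only for `DeciLMForCausalLM`. A minimal, self-contained sketch of that failure mode and the fix, using a hypothetical stand-in for the real `from_config` (names and accepted kwargs here are illustrative assumptions, not the actual Transformers API):

```python
def from_config(config, **kwargs):
    # Hypothetical stand-in for a config-based model constructor:
    # like the real one, it raises TypeError on unsupported kwargs
    # such as max_memory.
    allowed = {"torch_dtype", "trust_remote_code"}
    unexpected = set(kwargs) - allowed
    if unexpected:
        raise TypeError(f"unexpected kwargs: {sorted(unexpected)}")
    return {"config": config, **kwargs}

model_kwargs2 = {"torch_dtype": "float16", "max_memory": {0: "40GiB"}}

# The fix: drop max_memory unconditionally before the from_config call,
# rather than only when the architecture is DeciLMForCausalLM.
model_kwargs2.pop("max_memory", None)
model = from_config("dummy-config", **model_kwargs2)
```

Without the `pop`, the stand-in (and the real constructor) would raise a `TypeError`; with it, the remaining kwargs pass through cleanly.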
