Lm-eval usage #73
yiliu30
announced in
Announcements
Replies: 1 comment
-
PT_HPU_LAZY_MODE=1 \
VLLM_SKIP_WARMUP=true \
PT_HPU_ENABLE_LAZY_COLLECTIVES=true \
PT_HPU_WEIGHT_SHARING=0 \
lm_eval --model vllm-vlm \
--model_args "pretrained=${model_path},tensor_parallel_size=${tp_size},max_model_len=4096,max_num_seqs=128,gpu_memory_utilization=0.8,use_v2_block_manager=True,dtype=bfloat16,max_gen_toks=2048,disable_log_stats=True,max_images=1" \
--tasks mmmu_val \
--apply_chat_template \
--batch_size 128 --log_samples --output_path ${output_dir} --show_config 2>&1 | tee ${output_dir}/log.txt
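With --output_path set, lm-eval writes a results JSON under ${output_dir}. A minimal sketch of pulling the headline score back out of that file afterwards (the results[task][metric] layout and the "acc,none" metric key are assumptions based on typical lm-eval output; adjust to what your run actually writes):

```python
import json

def headline_metric(path, task="mmmu_val", metric="acc,none"):
    """Read an lm-eval results JSON and return one task metric.

    Assumes the common layout: {"results": {task: {metric: value, ...}}}.
    The metric key "acc,none" is a guess at the default accuracy entry.
    """
    with open(path) as f:
        data = json.load(f)
    return data["results"][task][metric]
```

For example, headline_metric(f"{output_dir}/results.json") would return the MMMU validation accuracy of the run above, assuming that filename.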
-
Multimodal model support was added in lm-evaluation-harness v0.4.5:
https://github.com/EleutherAI/lm-evaluation-harness/releases/tag/v0.4.5
lm_eval --model hf-multimodal \
    --model_args pretrained=llava-hf/llava-1.5-7b-hf,attn_implementation=flash_attention_2,max_images=1,interleave=True,image_string="<image>" \
    --tasks mmmu_val \
    --apply_chat_template

lm_eval --model vllm-vlm \
    --model_args pretrained=llava-hf/llava-1.5-7b-hf,max_images=1,interleave=True \
    --tasks mmmu_val \
    --apply_chat_template