How to do inference? #7

@Himanshunitrr

Description

I just want to do inference on a few images.
I followed these steps (kindly correct me if I am wrong):

  • git clone pa-llava
  • git clone xtuner
  • downloaded the instruction_tuning_weight.pth weights from Hugging Face
  • moved pa-llava inside xtuner
  • so the layout is:
    -- xtuner
    -- -- pa-llava
    -- -- tools
    -- -- xtuner

and then copied all the files from pa-llava/tool_add into the tools directory inside xtuner.
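The setup steps above can be sketched as a few shell commands. This is just an illustration: the git clone URLs are omitted because I am not certain of them, and the mkdir lines merely stand in for the two cloned checkouts.

```shell
# Stand-ins for the two cloned repos (in practice these come from git clone):
#   git clone <pa-llava repo url>
#   git clone <xtuner repo url>
mkdir -p pa-llava/tool_add xtuner/tools

# Move pa-llava inside xtuner, giving the layout shown above.
mv pa-llava xtuner/

# Copy everything from pa-llava/tool_add into xtuner's tools directory.
cp -r xtuner/pa-llava/tool_add/. xtuner/tools/
```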

After this I had these doubts:

  • Can you explain, or give an example of, the oscc.json file expected by --data-path absolute_path/OSCC/oscc.json?
  • Also, I have assumed that ./instruction_tuning_weight_ft is the folder containing instruction_tuning_weight.pth.
  • What is --work-dir?

NPROC_PER_NODE=1 xtuner zero_shot meta-llama/Meta-Llama-3-8B-Instruct --visual-encoder PLIP --llava ./instruction_tuning_weight_ft --prompt-template llama3_chat --data-path absolute_path/OSCC/oscc.json --work-dir absolute_path/logs/oscc --launcher pytorch --anyres-image
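For the --data-path question above, here is a guess at what oscc.json might look like. The real schema is defined by the PA-LLaVA tooling; the field names (image, question, answer) and all values below are assumptions modeled on common LLaVA-style evaluation files, so please correct this if the actual format differs.

```shell
# Hypothetical oscc.json -- the field names are assumptions, not the confirmed schema.
cat > oscc.json <<'EOF'
[
  {
    "image": "images/oscc_0001.png",
    "question": "Does this patch show oral squamous cell carcinoma?",
    "answer": "yes"
  },
  {
    "image": "images/oscc_0002.png",
    "question": "Does this patch show oral squamous cell carcinoma?",
    "answer": "no"
  }
]
EOF

# Sanity-check that the file at least parses as JSON.
python3 -c "import json; json.load(open('oscc.json'))"
```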
