@@ -47,11 +47,11 @@ pip install git+https://github.com/xinyu1205/recognize-anything.git --no-deps
Download the pre-trained model:

``` bash
-huggingface-cli download --resume-download xinyu1205/recognize_anything_model ram_swin_large_14m.pth
-huggingface-cli download --resume-download IDEA-Research/grounding-dino-base
-huggingface-cli download --resume-download Salesforce/blip2-flan-t5-xxl
-huggingface-cli download --resume-download clip-vit-large-patch14
-huggingface-cli download --resume-download masterful/gligen-1-4-generation-text-box
+hf download --resume-download xinyu1205/recognize_anything_model ram_swin_large_14m.pth
+hf download --resume-download IDEA-Research/grounding-dino-base
+hf download --resume-download Salesforce/blip2-flan-t5-xxl
+hf download --resume-download openai/clip-vit-large-patch14
+hf download --resume-download masterful/gligen-1-4-generation-text-box
```
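If you prefer fetching these checkpoints from Python and pinning them to a local folder instead of the shared Hugging Face cache, `huggingface_hub` can do the same job; the `./checkpoints` paths below are illustrative, not paths the training scripts require:

``` python
from huggingface_hub import hf_hub_download, snapshot_download

# Single file: the RAM tagging checkpoint (local_dir choice is illustrative).
hf_hub_download(
    repo_id="xinyu1205/recognize_anything_model",
    filename="ram_swin_large_14m.pth",
    local_dir="./checkpoints/ram",
)

# Whole repositories, e.g. Grounding DINO; repeat for the other models listed above.
snapshot_download(
    repo_id="IDEA-Research/grounding-dino-base",
    local_dir="./checkpoints/grounding-dino-base",
)
```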

Make the training data on 8 GPUs:
@@ -66,7 +66,7 @@ torchrun --master_port 17673 --nproc_per_node=8 make_datasets.py \
You can download the COCO training data from

``` bash
-huggingface-cli download --resume-download Hzzone/GLIGEN_COCO coco_train2017.pth
+hf download --resume-download Hzzone/GLIGEN_COCO coco_train2017.pth
```
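Since the annotations come as a regular PyTorch `.pth` file, you can inspect them before training; a quick sketch (the path is wherever `hf download` placed the file, shown here as a bare filename):

``` python
import torch

# Path to the downloaded file; `hf download` prints where it stored it.
# weights_only=False is needed on recent PyTorch, where it defaults to True.
ann = torch.load("coco_train2017.pth", map_location="cpu", weights_only=False)

# Print a small summary so the structure described below is easy to verify.
print(type(ann))
if isinstance(ann, dict):
    print(list(ann.keys())[:5])
elif isinstance(ann, (list, tuple)):
    print(len(ann), "entries; first entry:", ann[0])
```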

It's in the format of
@@ -125,7 +125,7 @@ Note that although the pre-trained GLIGEN model has been loaded, the parameters
The trained model can be downloaded from

``` bash
-huggingface-cli download --resume-download Hzzone/GLIGEN_COCO config.json diffusion_pytorch_model.safetensors
+hf download --resume-download Hzzone/GLIGEN_COCO config.json diffusion_pytorch_model.safetensors
```

You can run `demo.ipynb` to visualize the generated images.
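To poke at the trained weights programmatically (for example before wiring them into `demo.ipynb`), the safetensors file can be opened directly; a minimal sketch using `huggingface_hub` and `safetensors`:

``` python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Fetch the two files the CLI command above downloads (cache paths are returned).
config_path = hf_hub_download("Hzzone/GLIGEN_COCO", "config.json")
weights_path = hf_hub_download("Hzzone/GLIGEN_COCO", "diffusion_pytorch_model.safetensors")

state_dict = load_file(weights_path)
print(len(state_dict), "tensors")
# List a few parameter names, e.g. to see which modules carry trained weights.
for name in list(state_dict)[:5]:
    print(name, tuple(state_dict[name].shape))
```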