
Commit f70c9de ("update"), 1 parent: 9745024

File tree: 2 files changed, +43 −32 lines


README.md

Lines changed: 41 additions & 32 deletions
## 📝 Pixi Cheat Sheet
Here are some useful tasks you can run with Pixi. You must install pixi on your machine first. See the [installation](#-installation) section for more details.

> [!NOTE]
> For all commands below, you can add `-e cuda` to run in a CUDA-enabled environment instead of CPU.

### 🚀 Getting Started
```bash
# Check environment setup (CPU)
pixi run quickstart

# Check environment setup (CUDA)
pixi run -e cuda quickstart
```

### 🎥 Inference Commands
```bash
# Live inference with pretrained model (webcam)
pixi run -e cuda live-inference-pretrained --webcam

# Live inference with custom ONNX model (webcam)
pixi run -e cuda live-inference \
    --onnx model.onnx \
    --webcam \
    --provider cuda \
    --class-names classes.txt \
    --inference-size 640

# Video inference (CPU)
pixi run -e cpu live-inference \
    --onnx model.onnx \
    --input video.mp4 \
    --class-names classes.txt \
    --inference-size 320
```

### 🖥️ Gradio Demo
```bash
# Launch Gradio demo with examples
pixi run gradio-demo \
    --model "best_prep.onnx" \
    --classes "classes.txt" \
    --examples "Rock Paper Scissors SXSW.v14i.coco/test"

# Launch Gradio demo (CPU only)
pixi run -e cpu gradio-demo
```

### 🏋️ Training & Export
```bash
# Train model (CUDA)
pixi run -e cuda train-model

# Train model (CPU)
pixi run -e cpu train-model

# Export model to ONNX
pixi run export \
    --config config.yml \
    --checkpoint model.pth \
    --output model.onnx
```

> [!TIP]
> For TensorRT inference, set the `LD_LIBRARY_PATH` environment variable:
> ```bash
> export LD_LIBRARY_PATH=".pixi/envs/cuda/lib/python3.11/site-packages/tensorrt_libs:$LD_LIBRARY_PATH"
> ```
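The TensorRT tip above hard-codes a Python 3.11 path inside the pixi environment. A hedged sketch of a more version-agnostic setup — the `find`-based lookup and the variable names are illustrative assumptions, not part of this repo:

```shell
#!/bin/sh
# Locate the tensorrt_libs directory inside the pixi CUDA environment
# without hard-coding the Python version (illustrative helper).
env_root=".pixi/envs/cuda"
trt_libs=$(find "$env_root" -type d -name tensorrt_libs 2>/dev/null | head -n 1)

if [ -n "$trt_libs" ]; then
    # Prepend the discovered directory so the TensorRT shared libraries resolve.
    export LD_LIBRARY_PATH="$trt_libs:${LD_LIBRARY_PATH:-}"
    echo "LD_LIBRARY_PATH now includes $trt_libs"
else
    echo "tensorrt_libs not found under $env_root (is the cuda environment installed?)"
fi
```

Source it (`. ./set_trt_path.sh`) rather than executing it, so the exported variable persists in your current shell.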

## ⚠️ Disclaimer
I'm not affiliated with the original DEIM authors. I just found the model interesting and wanted to try it out. The changes made here are my own. Please cite and star the original repo if you find this useful.

scripts/gradio_demo.py

Lines changed: 2 additions & 0 deletions
```python
from PIL import Image, ImageDraw
import cv2

ort.preload_dlls()  # added: preload ONNX Runtime's bundled DLLs before first use

# Use absolute paths instead of relative paths
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
```
