Commit 4b8b3e4: Update README.md

1 parent c86af37

File tree: 1 file changed, +4 −1 lines changed

README.md

Lines changed: 4 additions & 1 deletion
```diff
@@ -56,6 +56,8 @@ import torch
 
 from depth_anything_v2.dpt import DepthAnythingV2
 
+DEVICE = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
+
 model_configs = {
     'vits': {'encoder': 'vits', 'features': 64, 'out_channels': [48, 96, 192, 384]},
     'vitb': {'encoder': 'vitb', 'features': 128, 'out_channels': [96, 192, 384, 768]},
```
```diff
@@ -67,7 +69,7 @@ encoder = 'vitl' # or 'vits', 'vitb', 'vitg'
 
 model = DepthAnythingV2(**model_configs[encoder])
 model.load_state_dict(torch.load(f'checkpoints/depth_anything_v2_{encoder}.pth', map_location='cpu'))
-model.eval()
+model = model.to(DEVICE).eval()
 
 raw_img = cv2.imread('your/image/path')
 depth = model.infer_image(raw_img) # HxW raw depth map in numpy
```
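The substance of this commit is the chained device fallback (CUDA, then Apple MPS, then CPU) plus moving the model onto the chosen device before inference. A minimal standalone sketch of that selection logic, with torch's availability checks replaced by plain booleans (`pick_device` is a hypothetical helper name; the README inlines the expression) so it runs without PyTorch installed:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Mirror the diff's chained conditional: prefer CUDA, then Apple
    Silicon's MPS backend, then fall back to CPU. Hypothetical helper;
    the README inlines this with torch.cuda.is_available() and
    torch.backends.mps.is_available()."""
    return 'cuda' if cuda_available else 'mps' if mps_available else 'cpu'

# Chained conditional expressions associate right-to-left, so the
# CUDA check always wins when both backends are available.
print(pick_device(True, True))    # cuda
print(pick_device(False, True))   # mps
print(pick_device(False, False))  # cpu
```

Note the diff still loads the checkpoint with `map_location='cpu'` and only then calls `model.to(DEVICE)`: the weights are deserialized to host memory first, which avoids failures on machines where the checkpoint's original device is unavailable.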
```diff
@@ -132,6 +134,7 @@ Please refer to [DA-2K benchmark](./DA-2K.md).
 **We sincerely appreciate all the community support for our Depth Anything series. Thank you a lot!**
 
 - TensorRT: https://github.com/spacewalk01/depth-anything-tensorrt
+- ONNX: https://github.com/fabio-sim/Depth-Anything-ONNX
 - ComfyUI: https://github.com/kijai/ComfyUI-DepthAnythingV2
 - Transformers.js (real-time depth in web): https://huggingface.co/spaces/Xenova/webgpu-realtime-depth-estimation
 - Android:
```
