Run the inference examples as usual.

- If you have trouble with your Ascend NPU device, please create an issue with the **[CANN]** prefix/tag.
- If you run successfully with your Ascend NPU device, please help update the `Verified devices` table.
## Docker
### Prerequisites
- Docker must be installed and running on your system.
- Create a folder to store big models and intermediate files (e.g. `/whisper/models`).
### Images
We have two Docker images available for this project:
1. `ghcr.io/ggerganov/whisper.cpp:main`: This image includes the main executable file as well as `curl` and `ffmpeg`. (platforms: `linux/amd64`, `linux/arm64`)
2. `ghcr.io/ggerganov/whisper.cpp:main-cuda`: Same as `main` but compiled with CUDA support. (platforms: `linux/amd64`)
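If you want to fetch an image ahead of time, you can pull it from the GitHub Container Registry with a standard `docker pull` of one of the image names listed above:

```shell
# pull the CPU image from the GitHub Container Registry
docker pull ghcr.io/ggerganov/whisper.cpp:main
```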
### Usage
```shell
# download model and persist it in a local folder
docker run -it --rm \
  -v path/to/models:/models \
  whisper.cpp:main "./models/download-ggml-model.sh base /models"
```
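Once a model has been downloaded into the mounted folder, the same image can be used to run inference. A sketch, assuming an audio file on the host and a `whisper-cli` binary inside the image (adjust the host paths, model name, and binary name to your setup):

```shell
# transcribe an audio file using the previously downloaded model
docker run -it --rm \
  -v path/to/models:/models \
  -v path/to/audios:/audios \
  whisper.cpp:main "whisper-cli -m /models/ggml-base.bin -f /audios/jfk.wav"
```

Mounting a second volume for audio files keeps the container stateless: models and inputs live on the host, and the container can be discarded after each run (`--rm`).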