## Container Environment Preparation
First, download the image we provide:
```bash
docker pull xllm/xllm-ai:xllm-0.6.1-dev-hb-rc2-x86
```
Then create the corresponding container:
```bash
sudo docker run -it --ipc=host -u 0 --privileged --name mydocker --network=host --device=/dev/davinci0 --device=/dev/davinci_manager --device=/dev/devmm_svm --device=/dev/hisi_hdc -v /var/queue_schedule:/var/queue_schedule -v /mnt/cfs/9n-das-admin/llm_models:/mnt/cfs/9n-das-admin/llm_models -v /usr/local/Ascend/driver:/usr/local/Ascend/driver -v /usr/local/Ascend/add-ons/:/usr/local/Ascend/add-ons/ -v /usr/local/sbin/npu-smi:/usr/local/sbin/npu-smi -v /usr/local/sbin/:/usr/local/sbin/ -v /var/log/npu/conf/slog/slog.conf:/var/log/npu/conf/slog/slog.conf -v /var/log/npu/slog/:/var/log/npu/slog -v /export/home:/export/home -w /export/home -v ~/.ssh:/root/.ssh -v /var/log/npu/profiling/:/var/log/npu/profiling -v /var/log/npu/dump/:/var/log/npu/dump -v /home/:/home/ -v /runtime/:/runtime/ -v /etc/hccn.conf:/etc/hccn.conf xllm/xllm-ai:xllm-0.6.1-dev-hb-rc2-x86
```
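The `docker run` command above exposes a single NPU card via `--device=/dev/davinci0`. To expose several cards, you pass one `--device` flag per card. A minimal sketch of generating those flags (the card IDs 0–3 are an assumption; check `ls /dev/davinci*` on your host for the real set):

```shell
#!/bin/sh
# Build one --device flag per NPU card. The card IDs below are an
# assumption; list /dev/davinci* on your host to see the real set.
DEVICE_FLAGS=""
for i in 0 1 2 3; do
  DEVICE_FLAGS="$DEVICE_FLAGS --device=/dev/davinci$i"
done
# Splice the result into the docker run command, e.g.:
#   sudo docker run -it $DEVICE_FLAGS ... <image>
echo "$DEVICE_FLAGS"
```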

### Docker images

| Device | Arch | Images |
|:---------:|:-----------:|:-------------:|
| A2 | x86 | xllm/xllm-ai:xllm-0.6.1-dev-hb-rc2-x86 |
| A2 | arm | xllm/xllm-ai:xllm-0.6.1-dev-hb-rc2-arm |
| A3 | arm | xllm/xllm-ai:xllm-0.6.1-dev-hc-rc2-arm |

If you can't download it, you can use the following source instead:

| Device | Arch | Images |
|:---------:|:-----------:|:-------------:|
| A2 | x86 | quay.io/jd_xllm/xllm-ai:xllm-0.6.1-dev-hb-rc2-x86 |
| A2 | arm | quay.io/jd_xllm/xllm-ai:xllm-0.6.1-dev-hb-rc2-arm |
| A3 | arm | quay.io/jd_xllm/xllm-ai:xllm-0.6.1-dev-hc-rc2-arm |

## Installation
After entering the container, download and compile using our [official repository](https://github.com/jd-opensource/xllm):