Merged
13 changes: 10 additions & 3 deletions docs/CN/source/getting_started/installation.rst
@@ -23,9 +23,16 @@ Lightllm is a pure-Python inference framework whose operators are implemented in Triton
$ # Pull the official image
$ docker pull ghcr.io/modeltc/lightllm:main
$
$ # Run
$ # Run the service. Note that the current LightLLM service relies heavily on shared memory.
$ # Before starting, make sure enough shared memory is allocated in your Docker settings;
$ # otherwise the service may fail to start.
$ # 1. For text-only services, allocate more than 2GB of shared memory; if RAM is ample, 16GB or more is recommended.
$ # 2. For multimodal services, allocate 16GB or more of shared memory, adjusting to your actual needs.
$ # If you do not have enough shared memory, try lowering the --running_max_req_size parameter when
$ # starting the service; this reduces the number of concurrent requests but also reduces shared memory
$ # usage. For multimodal services, you can also lower the --cache_capacity parameter to the same end.
$ docker run -it --gpus all -p 8080:8080 \
$ --shm-size 1g -v your_local_path:/data/ \
$ --shm-size 2g -v your_local_path:/data/ \
$ ghcr.io/modeltc/lightllm:main /bin/bash

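The shared-memory guidance above can be checked empirically: inside a running container, the size of `/dev/shm` reflects the value passed via `--shm-size`. A minimal sketch, where the container name `lightllm` and the `sleep infinity` placeholder command are illustrative assumptions, not from this diff:

```shell
# Start the container with an explicit shared-memory size (2g is illustrative;
# multimodal deployments may need 16g or more, as the docs note).
docker run -d --name lightllm --gpus all -p 8080:8080 \
    --shm-size 2g -v your_local_path:/data/ \
    ghcr.io/modeltc/lightllm:main sleep infinity

# /dev/shm inside the container should report the --shm-size value.
docker exec lightllm df -h /dev/shm
```

If `df` reports less than the recommended size, the service may fail at startup or under concurrent load.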
You can also manually build the image from source and run it:
@@ -37,7 +44,7 @@ Lightllm is a pure-Python inference framework whose operators are implemented in Triton
$
Contributor comment (medium):

For consistency, it is recommended to add the same detailed note about shared-memory requirements here for the manually built image, as was done for running the official image. This avoids confusing users who choose to build from source.

    $ # Run the service. Note that the current LightLLM service relies heavily on shared memory.
    $ # Before starting, make sure enough shared memory is allocated in your Docker settings;
    $ # otherwise the service may fail to start.
    $ # 1. For text-only services, allocate more than 2GB of shared memory; if RAM is ample, 16GB or more is recommended.
    $ # 2. For multimodal services, allocate 16GB or more of shared memory, adjusting to your actual needs.
    $ # If you do not have enough shared memory, try lowering the --running_max_req_size parameter when
    $ # starting the service; this reduces the number of concurrent requests but also reduces shared memory usage.

$ # Run
$ docker run -it --gpus all -p 8080:8080 \
$ --shm-size 1g -v your_local_path:/data/ \
$ --shm-size 2g -v your_local_path:/data/ \
$ <image_name> /bin/bash

Or you can directly use the script to launch the image and run it with one click:
19 changes: 15 additions & 4 deletions docs/EN/source/getting_started/installation.rst
@@ -23,9 +23,20 @@ The easiest way to install Lightllm is using the official image. You can directly
$ # Pull the official image
$ docker pull ghcr.io/modeltc/lightllm:main
$
$ # Run
$ # Run,The current LightLLM service relies heavily on shared memory.
Contributor comment (medium):

There appears to be a typo here. For readability of the English documentation, the full-width comma (,) should be replaced with a standard colon or period.

    $ # Run: The current LightLLM service relies heavily on shared memory.

$ # Before starting, please make sure that you have allocated enough shared memory
$ # in your Docker settings; otherwise, the service may fail to start properly.
$ #
$ # 1. For text-only services, it is recommended to allocate more than 2GB of shared memory.
$ # If your system has sufficient RAM, allocating 16GB or more is recommended.
$ # 2.For multimodal services, it is recommended to allocate 16GB or more of shared memory.
Contributor comment (medium):

A space is missing between 2. and For; it should be corrected for better formatting.

    $ # 2. For multimodal services, it is recommended to allocate 16GB or more of shared memory. 

$ # You can adjust this value according to your specific requirements.
$ #
$ # If you do not have enough shared memory available, you can try lowering
$ # the --running_max_req_size parameter when starting the service.
$ # This will reduce the number of concurrent requests, but also decrease shared memory usage.
$ docker run -it --gpus all -p 8080:8080 \
$ --shm-size 1g -v your_local_path:/data/ \
$ --shm-size 2g -v your_local_path:/data/ \
$ ghcr.io/modeltc/lightllm:main /bin/bash

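As a concrete illustration of the `--running_max_req_size` advice above, a launch under tight shared memory might look like the sketch below. The model path, host/port values, and the request cap of 128 are illustrative assumptions; the `lightllm.server.api_server` module path follows LightLLM's documented launch command, but verify it against your installed version:

```shell
# Inside the container: start the server with a reduced concurrent-request cap
# to shrink shared-memory usage (128 is an illustrative value, not a recommendation).
python -m lightllm.server.api_server \
    --model_dir /data/your_model \
    --host 0.0.0.0 --port 8080 \
    --running_max_req_size 128
```

Lowering the cap trades peak concurrency for a smaller shared-memory footprint, which is the same trade-off the comment block describes.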
You can also manually build the image from source and run it:
@@ -35,9 +46,9 @@ You can also manually build the image from source and run it:
$ # Manually build the image
$ docker build -t <image_name> .
$
$ # Run
$ # Run,
Contributor comment (medium):

This section is missing the detailed shared-memory note that was added for the official image. The line $ # Run, appears to be an incomplete edit. To ensure consistency and give users complete information, please add the full note here as well.

    $ # Run: The current LightLLM service relies heavily on shared memory.
    $ # Before starting, please make sure that you have allocated enough shared memory 
    $ # in your Docker settings; otherwise, the service may fail to start properly.
    $ #
    $ # 1. For text-only services, it is recommended to allocate more than 2GB of shared memory. 
    $ # If your system has sufficient RAM, allocating 16GB or more is recommended.
    $ # 2. For multimodal services, it is recommended to allocate 16GB or more of shared memory. 
    $ # You can adjust this value according to your specific requirements.
    $ #
    $ # If you do not have enough shared memory available, you can try lowering 
    $ # the --running_max_req_size parameter when starting the service. 
    $ # This will reduce the number of concurrent requests, but also decrease shared memory usage.

$ docker run -it --gpus all -p 8080:8080 \
$ --shm-size 1g -v your_local_path:/data/ \
$ --shm-size 2g -v your_local_path:/data/ \
$ <image_name> /bin/bash

Or you can directly use the script to launch the image and run it with one click: