update docs #990
Changes from all commits
@@ -23,9 +23,20 @@ The easiest way to install Lightllm is using the official image. You can directly
 $ # Pull the official image
 $ docker pull ghcr.io/modeltc/lightllm:main
 $
-$ # Run
+$ # Run. The current LightLLM service relies heavily on shared memory.
+$ # Before starting, please make sure that you have allocated enough shared memory
+$ # in your Docker settings; otherwise, the service may fail to start properly.
+$ #
+$ # 1. For text-only services, it is recommended to allocate more than 2GB of shared memory.
+$ #    If your system has sufficient RAM, allocating 16GB or more is recommended.
+$ # 2. For multimodal services, it is recommended to allocate 16GB or more of shared memory.
+$ # You can adjust this value according to your specific requirements.
+$ #
+$ # If you do not have enough shared memory available, you can try lowering
+$ # the --running_max_req_size parameter when starting the service.
+$ # This will reduce the number of concurrent requests, but also decrease shared memory usage.
 $ docker run -it --gpus all -p 8080:8080 \
-$ --shm-size 1g -v your_local_path:/data/ \
+$ --shm-size 2g -v your_local_path:/data/ \
 $ ghcr.io/modeltc/lightllm:main /bin/bash
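The shared-memory guidance in the new comments can be sanity-checked before launching the container. A minimal sketch, assuming a Linux host with a tmpfs at /dev/shm; the 16g value and the paths are illustrative choices following the multimodal recommendation, not values mandated by this PR:

```shell
# Inspect the shared memory available in the current environment.
# The docs above recommend >2GB for text-only services and 16GB or more
# for multimodal services.
df -h /dev/shm

# Illustrative launch with a larger shared-memory allocation
# (your_local_path is the placeholder used by the docs):
# docker run -it --gpus all -p 8080:8080 \
#   --shm-size 16g -v your_local_path:/data/ \
#   ghcr.io/modeltc/lightllm:main /bin/bash
```

Docker's `--shm-size` flag sets the size of the container's /dev/shm, which defaults to 64MB; that default is far below what the service needs, which is why the diff raises the documented value.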
You can also manually build the image from source and run it:
@@ -35,9 +46,9 @@ You can also manually build the image from source and run it:
 $ # Manually build the image
 $ docker build -t <image_name> .
 $
-$ # Run
+$ # Run,

Contributor: This section is missing the detailed shared-memory notes that were added for the official image.

 $ docker run -it --gpus all -p 8080:8080 \
-$ --shm-size 1g -v your_local_path:/data/ \
+$ --shm-size 2g -v your_local_path:/data/ \
 $ <image_name> /bin/bash

Or you can directly use the script to launch the image and run it with one click:
Contributor: For consistency, it is recommended to add the same detailed shared-memory notes here for the manually built image, as was done for running the official image. This will avoid confusion for users who choose to build from source.
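The diff's fallback advice, lowering `--running_max_req_size`, can be sketched as a launch command. Only `--running_max_req_size` is named in the diff above; the `lightllm.server.api_server` module path and the other flags are assumptions drawn from LightLLM's README, and the values are illustrative:

```shell
# Inside the container: launch the service with a reduced concurrent-request
# cap to lower shared-memory usage (all values illustrative).
python -m lightllm.server.api_server \
    --host 0.0.0.0 \
    --port 8080 \
    --model_dir /data/your_model \
    --running_max_req_size 100
```

Lowering this value trades peak concurrency for a smaller shared-memory footprint, which is the mitigation the new documentation describes for hosts where `--shm-size` cannot be raised.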