`colossalai/inference/README.md`
The request handler is responsible for managing requests and scheduling a proper batch from existing requests. Based on [Orca's](https://www.usenix.org/conference/osdi22/presentation/yu) and [vLLM's](https://github.com/vllm-project/vllm) research and work on batching requests, we apply continuous batching with unpadded sequences, which enables a varying number of sequences to pass through the projections (i.e. Q, K, and V) together at different steps by hiding the sequence-count dimension, and reduces the latency of incoming sequences by inserting a prefill batch during a decoding step and then decoding together.
<em>Continuous Batching: dynamically adjust the batch size by popping out finished sequences and inserting a prefill batch</em>
</p>
Modeling contains models, layers, and policies, which are hand-crafted for better performance and easier usage. Integrated with `shardformer`, users can define their own policy or use our preset policies for specific models. Our modeling files are aligned with [Transformers](https://github.com/huggingface/transformers). For more details about the usage of modeling and policy, please check `colossalai/shardformer`.
## Online Service
Colossal-Inference supports a FastAPI-based online service. Both simple completion and chat are supported. Follow the commands below to construct a server with both completion and chat functionalities. For now we support the `Llama2`, `Llama3`, and `Baichuan2` models, and more models will be supported soon.
### API
- GET '/ping':

  Ping is used to check whether the server can receive and send information (a quick command-line check is shown after this list).

- GET '/engine_check':

  Check whether the background engine is working.

- POST '/completion':

  The completion API is used for single-sequence requests, such as answering a question or completing a few words.

- POST '/chat':

  The chat API is used for conversation-style requests, which often include dialogue participants (i.e. roles) and their corresponding messages. Since the input data are very different from normal inputs, we introduce a chat template to match the data format expected by chat models.
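A minimal check of the `/ping` endpoint from the command line might look like the sketch below; it assumes the server is already running on the default local address used in the Examples section (`/engine_check` and the two POST endpoints are demonstrated there).

```bash
# Check that the server can receive and send information.
# Assumes the API server is already running locally on port 8000,
# as in the Examples section below.
curl http://127.0.0.1:8000/ping
```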
#### chat-template
Following `transformers`, we add the chat-template argument. Chat models have been trained with very different formats for converting conversations into a single tokenizable string, so using a format that matches the training data is extremely important. This attribute (`chat_template`) is included in HuggingFace tokenizers and contains a Jinja template that converts conversation histories into a correctly formatted string. You can refer to the [HuggingFace blog](https://huggingface.co/blog/chat-templates) for more information. We also provide a simple example template below. Both string-style and file-style chat templates are supported.
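As a sketch, the example template (the same one passed inline in the Examples section) can be saved to a file and passed by path; the file name here is only illustrative.

```bash
# Save the example chat template to a file (the file name is illustrative);
# pass either this file path or the raw template string via --chat-template.
cat > chat_template.jinja <<'EOF'
{% for message in messages %}{{'<|im_start|>'+message['role']+'\n'+message['content']+'<|im_end|>'+'\n'}}{% endfor %}
EOF
```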
### Usage
#### Args for customizing your server
The configuration for the API server covers both the serving interface and the engine backend.

For Interface:

- `--host`: The host URL on your device for the server.
- `--port`: The port for the service.
- `--model`: The model used by the backend engine; both a local path and a Transformers model card are supported.
- `--chat-template`: The file path of the chat template, or the template string itself.
- `--response-role`: The role that Colossal-Inference plays in the chat response.

For Engine Backend:

- `--block_size`: The memory usage for each block.
- `--max_batch_size`: The max batch size for the engine to infer. This affects the speed of inference.
- `--max_input_len`: The max input length of a request.
- `--max_output_len`: The max output length of the response.
- `--dtype` and `--use_cuda_kernel`: Decide the precision and whether CUDA kernels are used.

For more detailed arguments, please refer to the source code.
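A hypothetical launch command combining a few of these arguments is sketched below; the model path, port, and numeric values are placeholders only, and the exact accepted formats should be confirmed against the source code.

```bash
# Illustrative only: combining interface and engine-backend flags.
# The model path, port, and numeric values are placeholders; check the
# source code for the exact accepted formats.
python3 -m colossalai.inference.server.api_server \
    --model /path/to/your/model \
    --host 127.0.0.1 \
    --port 8000 \
    --max_batch_size 8 \
    --max_input_len 1024 \
    --max_output_len 256 \
    --chat-template ./chat_template.jinja
```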
### Examples
```bash
# First, launch an API server locally.
python3 -m colossalai.inference.server.api_server --model <path_of_your_model> --chat-template "{% for message in messages %}{{'<|im_start|>'+message['role']+'\n'+message['content']+'<|im_end|>'+'\n'}}{% endfor %}"

# Second, you can open http://127.0.0.1:8000/docs to check the API.

# For the completion service, you can invoke it like this:
curl -X POST http://127.0.0.1:8000/completion -H 'Content-Type: application/json' -d '{"prompt":"hello, who are you? "}'

# For the chat service, you can invoke it like this:
curl -X POST http://127.0.0.1:8000/chat -H 'Content-Type: application/json' -d '{"messages":[{"role":"system","content":"you are a helpful assistant"},{"role":"user","content":"what is 1+1?"}]}'

# You can check the engine status now
curl http://localhost:8000/engine_check
```
## 🌟 Acknowledgement
This project was written from scratch but we learned a lot from several other great open-source projects during development. Therefore, we wish to fully acknowledge their contribution to the open-source community. These projects include