@@ -8,7 +8,7 @@ Roy is an HTTP server compatible with the OpenAI platform format that simulates e
 test your client's behaviour under weird circumstances. Once started, Roy will run the server on port 8000 and will
 return responses using [Lorem Ipsum](https://www.lipsum.com/) dummy text.
 
-## :floppy_disk: Installation
+## 💾 Installation
 
 If you have Rust available, you can install Roy from [crates.io](https://crates.io/) with:
 ```
 # Roy server running on http://127.0.0.1:8000
 ```
 
-## :memo: Control text responses
+## 📝 Control text responses
 
 Roy will return responses containing fragments of "Lorem Ipsum". The length of the responses will determine the
 number of tokens consumed and can be controlled. The length of the response is measured in characters, not
@@ -48,7 +48,7 @@ For example:
 roy --response-length 10:100
 ```
 
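Since Roy measures response length in characters while API accounting happens in tokens, a test client may want a rough conversion between the two. A minimal sketch, assuming the common rule of thumb of roughly four characters per English token (a generic heuristic, not Roy's actual tokenizer):

```python
import math

def estimate_tokens(char_length: int, chars_per_token: float = 4.0) -> int:
    """Rule-of-thumb token estimate for a response of char_length characters."""
    if char_length < 0:
        raise ValueError("char_length must be non-negative")
    # Round up: even a short fragment consumes at least one token.
    return math.ceil(char_length / chars_per_token)

# A response capped at 100 characters consumes roughly 25 tokens.
print(estimate_tokens(100))  # 25
```
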
-## :boom: Simulate errors
+## 💥 Simulate errors
 
 ### HTTP Errors
 
@@ -83,7 +83,7 @@ Or you can introduce random slowness within a range of milliseconds:
 roy --slowdown 0:1000
 ```
 
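When exercising `--slowdown`, what is usually under test is the client's own timeout handling. The snippet below does not contact Roy at all; it sketches the client-side pattern with a stubbed slow call, where the 0.2 s sleep stands in for a response delayed by the server:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def call_with_timeout(fn, timeout_s: float):
    """Run fn and raise TimeoutError if it exceeds timeout_s, as a client
    hitting a slowed-down endpoint might."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(fn).result(timeout=timeout_s)

def slow_response():
    time.sleep(0.2)  # stands in for a response delayed by --slowdown
    return "ok"

try:
    call_with_timeout(slow_response, timeout_s=0.05)
except TimeoutError:
    print("client timed out, as expected under --slowdown")
```
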
-## :control_knobs: Control rate limits
+## 🎛️ Control rate limits
 
 Roy comes with a tokenizer, so that it can compute the number of tokens contained both in the request and in the
 response with a decent approximation. The number of tokens will be used to set the proper headers in the response and
@@ -122,7 +122,9 @@ To set the tokens per minute limit:
 roy --tpm 45000
 ```
 
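A client can turn a tokens-per-minute budget into a request budget with simple arithmetic. Illustrative only: the 500-token average per request below is a made-up figure for the example, not something Roy reports:

```python
def max_requests_per_minute(tpm_limit: int, avg_tokens_per_request: int) -> int:
    """How many requests per minute fit inside a tokens-per-minute budget."""
    if avg_tokens_per_request <= 0:
        raise ValueError("avg_tokens_per_request must be positive")
    return tpm_limit // avg_tokens_per_request

# With `roy --tpm 45000`, requests averaging 500 tokens (prompt + response)
# can be sent at most 90 times per minute before hitting the limit.
print(max_requests_per_minute(45000, 500))  # 90
```
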
-## :card_index_dividers: Supported APIs
+## 🗂️ Supported APIs
 
 - https://platform.openai.com/docs/api-reference/responses/create
+- https://platform.openai.com/docs/api-reference/responses-streaming
 - https://platform.openai.com/docs/api-reference/chat/create
+- https://platform.openai.com/docs/api-reference/chat-streaming