
Commit 0e1592f

update supported APIs
1 parent 63f34a2 commit 0e1592f

File tree

1 file changed: +7 -5 lines changed

1 file changed

+7
-5
lines changed

README.md

Lines changed: 7 additions & 5 deletions
````diff
@@ -8,7 +8,7 @@ Roy is a HTTP server compatible with the OpenAI platform format that simulates e
 test your client's behaviour under weird circumstances. Once started, Roy will run the server on port 8000 and will
 return responses using [Lorem Ipsum](https://www.lipsum.com/) dummy text.
 
-## :floppy_disk: Installation
+## 💾 Installation
 
 If you have Rust available, you can install Roy from [crates.io](https://crates.io/) with:
 ```
@@ -25,7 +25,7 @@ roy
 # Roy server running on http://127.0.0.1:8000
 ```
 
-## :memo: Control text responses
+## 📝 Control text responses
 
 Roy will return responses containing fragments of "Lorem Ipsum". The length of the responses will determine the
 number of tokens consumed and can be controlled. The length of the response is measured in number of characters, not
````
````diff
@@ -48,7 +48,7 @@ For example:
 roy --response-length 10:100
 ```
 
-## :boom: Simulate errors
+## 💥 Simulate errors
 
 ### HTTP Errors
 
@@ -83,7 +83,7 @@ Or you can introduce random slowness between a range of milliseconds:
 roy --slowdown 0:1000
 ```
 
-## :control_knobs: Control rate limits
+## 🎛️ Control rate limits
 
 Roy comes with a tokenizer, so that it can compute the number of tokens contained both in the request and in the
 response with a decent approximation. The number of tokens will be used to set the proper headers in the response and
````
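The hunk above notes that Roy's tokenizer computes token counts and uses them to set rate-limit headers on each response. As a rough, hedged illustration (not part of this commit), a client could simply dump the headers Roy returns; the `/v1/chat/completions` path, the dummy API key, and the model name below are assumptions rather than anything the README specifies.

```python
# Sketch: inspect whatever rate-limit headers a local Roy instance attaches
# to a chat completion response. The endpoint path, bearer token, and model
# name are assumptions, not taken from the README.
import requests

resp = requests.post(
    "http://127.0.0.1:8000/v1/chat/completions",  # assumed OpenAI-style path
    headers={"Authorization": "Bearer dummy-key"},  # placeholder credential
    json={
        "model": "gpt-4o",  # any model name; Roy only simulates the platform
        "messages": [{"role": "user", "content": "Hello, Roy!"}],
    },
    timeout=30,
)

# Print every response header; the exact rate-limit field names depend on Roy.
for name, value in resp.headers.items():
    print(f"{name}: {value}")
```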
````diff
@@ -122,7 +122,9 @@ To set the tokens per minute limit:
 roy --tpm 45000
 ```
 
-## :card_index_dividers: Supported APIs
+## 🗂️ Supported APIs
 
 - https://platform.openai.com/docs/api-reference/responses/create
+- https://platform.openai.com/docs/api-reference/responses-streaming
 - https://platform.openai.com/docs/api-reference/chat/create
+- https://platform.openai.com/docs/api-reference/chat-streaming
````
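The two added entries are the streaming variants of the Responses and Chat APIs. Below is a minimal sketch of how a client might exercise the newly listed chat-streaming endpoint against a local Roy instance; the `/v1` base path, the dummy API key, and the model name are assumptions, and the official `openai` Python SDK is used purely for illustration.

```python
# Sketch: stream a chat completion from a local Roy server using the openai
# Python SDK. Base path, API key, and model name are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="dummy-key")

stream = client.chat.completions.create(
    model="gpt-4o",  # Roy simulates the platform, so any model name should do
    messages=[{"role": "user", "content": "Stream some Lorem Ipsum, please."}],
    stream=True,
)

# Each chunk should carry a delta with a fragment of the Lorem Ipsum response.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```

Since Roy returns Lorem Ipsum fragments, the printed output should simply be dummy text arriving chunk by chunk.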
