Commit de785b3: Update README.md
1 parent f8fe285

1 file changed (+32, −1 lines)

README.md

Lines changed: 32 additions & 1 deletion
@@ -66,7 +66,7 @@ Install LitServe via pip ([more options](https://lightning.ai/docs/litserve/home
 pip install litserve
 ```
 
-### Define a server
+### Toy example
 
 This toy example with 2 models (inference pipeline) shows LitServe's flexibility ([see real examples](#featured-examples)):
 
 ```python
@@ -99,6 +99,37 @@ if __name__ == "__main__":
     server.run(port=8000)
 ```
 
+### Agentic example
+
+```python
+import re, requests, openai
+import litserve as ls
+
+class NewsAgent(ls.LitAPI):
+    def setup(self, device):
+        self.openai_client = openai.OpenAI(api_key="OPENAI_API_KEY")
+
+    def decode_request(self, request):
+        return request.get("website_url", "https://text.npr.org/")
+
+    def predict(self, website_url):
+        # fetch the page and strip HTML tags to get plain news text
+        news_text = re.sub(r'<[^>]+>', ' ', requests.get(website_url).text)
+
+        # ask the LLM to tell you about the news
+        llm_response = self.openai_client.completions.create(model="text-davinci-003", prompt=f"Based on this, what is the latest: {news_text}")
+        answer = llm_response.choices[0].text.strip()
+
+        return {"answer": answer}
+
+    def encode_response(self, output):
+        return {"response": output}
+
+if __name__ == "__main__":
+    server = ls.LitServer(NewsAgent())
+    server.run(port=8000)
+```
+
 Now deploy for free to [Lightning cloud](#hosting-options) (or self host anywhere):
 
 ```bash
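The `predict` hook in the diff strips markup with a bare regex before prompting the model. A standalone sketch of that step (the `strip_html` helper is hypothetical, not part of LitServe; the regex is the one from the diff):

```python
import re

def strip_html(html: str) -> str:
    # replace every HTML tag with a space, then collapse runs of whitespace
    text = re.sub(r'<[^>]+>', ' ', html)
    return ' '.join(text.split())

print(strip_html("<h1>News</h1><p>Markets rally</p>"))  # → News Markets rally
```

Note this is a crude heuristic, not an HTML parser: it keeps the contents of `<script>` and `<style>` blocks and can be fooled by `<` inside attribute values, which is fine for a toy example but not for production scraping.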

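For context on the `LitAPI` subclass added in this commit: LitServe invokes the three hooks in order for each request, `decode_request` → `predict` → `encode_response`. A conceptual sketch of that flow (the `MiniPipeline` and `EchoAPI` classes are hypothetical stand-ins, not LitServe's actual internals):

```python
# Hypothetical sketch of the per-request hook order, not LitServe's real code.
class MiniPipeline:
    def __init__(self, api):
        self.api = api

    def handle(self, request):
        payload = self.api.decode_request(request)   # raw request -> model input
        result = self.api.predict(payload)           # model input -> model output
        return self.api.encode_response(result)      # model output -> response body

class EchoAPI:
    # stand-in for a LitAPI subclass like NewsAgent
    def decode_request(self, request):
        return request.get("text", "")

    def predict(self, text):
        return {"answer": text.upper()}

    def encode_response(self, output):
        return {"response": output}

print(MiniPipeline(EchoAPI()).handle({"text": "hello"}))
# → {'response': {'answer': 'HELLO'}}
```

Keeping parsing, inference, and serialization in separate hooks is what lets the same `predict` serve different request and response schemas.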