Commit da36606

Enhance README with examples and usage details
Updated README to include examples and clarify usage.
1 parent b628cdf commit da36606

File tree

1 file changed (+61, -2 lines)

README.md

Lines changed: 61 additions & 2 deletions
@@ -15,6 +15,7 @@ ______________________________________________________________________
 
 <p align="center">
 <a href="#quick-start">Quick start</a> •
+<a href="#examples">Examples</a> •
 <a href="https://lightning.ai/docs/overview/experiment-management">Docs</a>
 </p>
 
@@ -36,7 +37,7 @@ pip install litlogger
 ```
 
 ### Hello world example
-Use LitLogger with any Python code (PyTorch, vLLM, LangChain, etc), and any usecase (training, inference, agents, etc).
+Use LitLogger with any Python code (PyTorch, vLLM, LangChain, etc.).
 
 ```python
 from litlogger import LightningLogger
@@ -53,7 +54,8 @@ for i in range(10):
 logger.finalize()
 ```
 
-### More examples:
+# Examples
+Use LitLogger for any use case (training, inference, agents, etc.).
 
 <details>
 <summary>Model training</summary>
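The "Hello world" block is only partially visible in the hunk contexts above (the import, the `for i in range(10):` loop header, and `logger.finalize()`). As a hedged sketch of how those pieces fit together, here is the same loop run against a hypothetical in-memory stand-in for `LightningLogger`, so it executes without `litlogger` installed:

```python
# FakeLogger is a hypothetical stand-in for litlogger's LightningLogger.
# Its `log_metrics` / `finalize` methods mirror the calls shown in this
# diff, but the class itself is not part of the library.
class FakeLogger:
    def __init__(self):
        self.records = []     # each call to log_metrics appends one dict
        self.finalized = False

    def log_metrics(self, metrics):
        # litlogger's log_metrics takes a dict of metric name -> value
        self.records.append(metrics)

    def finalize(self):
        # flush/close, mirroring logger.finalize() in the README snippet
        self.finalized = True


logger = FakeLogger()
for i in range(10):
    logger.log_metrics({"loss": 1.0 / (i + 1)})
logger.finalize()
```

Swapping `FakeLogger` for the real `LightningLogger` leaves the loop body unchanged, which is the point of the hello-world example.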
@@ -92,6 +94,63 @@ litlogger.finalize()
 ```
 </details>
 
+<details>
+<summary>Model inference</summary>
+
+Add LitLogger to any inference engine: LitServe, vLLM, FastAPI, etc.
+
+<div align='center'>
+
+<img alt="LitServe" src="https://github.com/user-attachments/assets/ac454da2-0825-4fcf-b422-c6d3a1526cf0" width="800px" style="max-width: 100%;">
+
+&nbsp;
+</div>
+
+```python
+import time
+import litserve as ls
+from litlogger import LightningLogger
+
+class InferenceEngine(ls.LitAPI):
+    def setup(self, device):
+        # initialize your models here
+        self.text_model = lambda x: x**2
+        self.vision_model = lambda x: x**3
+        # initialize LightningLogger
+        self.logger = LightningLogger(metadata={"service_name": "InferenceEngine", "device": device})
+
+    def predict(self, request):
+        start_time = time.time()
+        x = request["input"]
+
+        # perform calculations using both models
+        a = self.text_model(x)
+        b = self.vision_model(x)
+        c = a + b
+        output = {"output": c}
+
+        end_time = time.time()
+        latency = end_time - start_time
+
+        # log inference metrics
+        self.logger.log_metrics({
+            "input_value": x,
+            "output_value": c,
+            "prediction_latency_ms": latency * 1000,
+        })
+
+        return output
+
+    def teardown(self):
+        # ensure the logger is finalized when the service shuts down
+        self.logger.finalize()
+
+if __name__ == "__main__":
+    server = ls.LitServer(InferenceEngine(max_batch_size=1), accelerator="auto")
+    server.run(port=8000)
+```
+</details>
+
 <details>
 <summary>PyTorch Lightning</summary>
 
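A small aside on the latency measurement in the new `predict` hook: `time.time()` follows the wall clock, which can jump if the system clock is adjusted while a request is in flight. A sketch of the same measurement using the monotonic `time.perf_counter()` instead (the `timed_predict` helper here is illustrative, not part of litserve or litlogger):

```python
import time

def timed_predict(fn, x):
    # perf_counter() is monotonic: it never runs backwards, even if the
    # system clock is adjusted, so the measured latency is always >= 0
    start = time.perf_counter()
    output = fn(x)
    latency_ms = (time.perf_counter() - start) * 1000
    return output, latency_ms

# same toy computation as the diff above: text_model + vision_model = x**2 + x**3
out, latency_ms = timed_predict(lambda x: x**2 + x**3, 3)
```

The returned `latency_ms` can be passed straight into the `log_metrics` call shown in the diff.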
0 commit comments