**Log, track, compare, and share AI model experiments**

<pre>
✅ Lightweight        ✅ Zero-setup          ✅ Any Python code
✅ Artifacts          ✅ Machine metadata    ✅ Cloud or on-prem
✅ Training           ✅ Inference           ✅ Agents, multi-modal
✅ Fine-grain RBAC    ✅ Share experiments   ✅ Free tier
</pre>

______________________________________________________________________
</div>

# Why LitLogger?

Reproducible model building is hard. As teams iterate on models, data, or prompts, it quickly becomes difficult to track what changed and why results improved or regressed. LitLogger is a **lightweight, minimal** experiment logger that tracks every run, including inputs, metrics, prompts, and model outputs, so teams can trace changes, compare results, and audit decisions over time without feature bloat or re-running everything from scratch.

LitLogger is free for developers and built into [Lightning AI](https://lightning.ai/), an independent platform trusted by enterprises. It runs in the cloud or on-prem, giving teams long-term stability, clear auditability, and control over experiment history.

<img width="3024" height="1716" alt="image" src="https://github.com/user-attachments/assets/27f9d8f1-2a13-4080-a64f-374d957712fa" />

# Quick start

Install LitLogger with pip.

```bash
pip install litlogger
```

### Hello world example

Use LitLogger with any Python code (PyTorch, vLLM, LangChain, etc.).

```python
# ... set up a logger and log your metrics ...
logger.finalize()
```

# Examples

Use LitLogger for any use case (training, inference, agents, etc.).

<details>
<summary>Model training</summary>

Add LitLogger to any training framework: PyTorch, JAX, TensorFlow, NumPy, scikit-learn, etc.

<div align='center'>

<img alt="LitServe" src="https://github.com/user-attachments/assets/50d9a2f7-17d0-4448-ad21-6be600ab53fc" width="800px" style="max-width: 100%;">

&nbsp;
</div>

```python
import os

def train():
    # ... model setup and training loop ...

    # write a small config file and log it as an artifact
    with open("model_config.txt", "w") as f:
        f.write(f"num_epochs: {num_epochs}\n")
    logger.log_model_artifact("model_config.txt")
    print("model config artifact logged.")

    # Clean up the dummy artifact file after logging
    os.remove("model_config.txt")

if __name__ == "__main__":
    train()
```

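The artifact step above follows a simple pattern: write a small file, hand its path to the logger, then delete the local copy. Below is a stdlib-only sketch of that pattern; the `log_fn` callback is a hypothetical stand-in for a logger call such as `log_model_artifact`, so nothing here is LitLogger API.

```python
import os
import tempfile

def log_temp_artifact(content: str, log_fn) -> None:
    """Write content to a temp file, pass its path to log_fn, then clean up."""
    path = os.path.join(tempfile.gettempdir(), "model_config.txt")
    with open(path, "w") as f:
        f.write(content)
    try:
        log_fn(path)  # e.g. upload/track the file
    finally:
        os.remove(path)  # local copy is no longer needed once logged

logged = []
log_temp_artifact("num_epochs: 10\n", lambda p: logged.append(open(p).read()))
print(logged)  # ['num_epochs: 10\n']
```

The `try`/`finally` guarantees the temp file is removed even if logging raises, which is why the cleanup belongs there rather than on a separate line after the call.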
</details>

<details>
<summary>Model inference</summary>

Add LitLogger to any inference engine: LitServe, vLLM, FastAPI, etc.

<div align='center'>

<img alt="LitServe" src="https://github.com/user-attachments/assets/ac454da2-0825-4fcf-b422-c6d3a1526cf0" width="800px" style="max-width: 100%;">

&nbsp;
</div>

```python
import time

import litserve as ls

class InferenceEngine(ls.LitAPI):
    # ... load self.text_model and self.vision_model in setup() ...

    def predict(self, request):
        start_time = time.time()
        x = request["input"]

        # perform calculations using both models
        a = self.text_model(x)
        b = self.vision_model(x)

        # ... combine a and b into c, compute latency = time.time() - start_time,
        # then log {"output_value": c, "prediction_latency_ms": latency * 1000}

        return output

    def teardown(self):
        ...

if __name__ == "__main__":
    ...
```

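The `prediction_latency_ms` metric above is plain wall-clock timing converted to milliseconds. A self-contained sketch of the same measurement, using only the standard library (`time.perf_counter` is the recommended clock for intervals, unlike `time.time`):

```python
import time

def timed_call(fn, *args):
    """Run fn(*args) and return (result, latency in milliseconds)."""
    start = time.perf_counter()
    result = fn(*args)
    latency_ms = (time.perf_counter() - start) * 1000
    return result, latency_ms

result, ms = timed_call(lambda x: x * 2, 21)
print(result)  # 42
```

`ms` is always non-negative and, unlike `time.time()`, is unaffected by system clock adjustments during the call.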
Ping the server from the terminal to have it generate some metrics:

```bash
curl -X POST http://127.0.0.1:8000/predict -H "Content-Type: application/json" -d '{"input": 4.0}'
curl -X POST http://127.0.0.1:8000/predict -H "Content-Type: application/json" -d '{"input": 5.5}'
```

</details>

<details>
<summary>PyTorch Lightning</summary>

PyTorch Lightning now comes with LitLogger natively built in. It's built by the PyTorch Lightning team, so it's tuned for fast performance at multi-node GPU scale.

```python
# ...
```

</details>

<details>
<summary>Long model training run</summary>

This is a fun example that simulates a long model training run.

<div align='center'>

<img alt="LitServe" src="https://github.com/user-attachments/assets/fd15aa32-2b56-4324-81b6-c87c86db8a3b" width="800px" style="max-width: 100%;">

&nbsp;
</div>

```python
# ...
litlogger.finalize()
```

</details>

# Community

LitLogger accepts community contributions. Let's make the world's most advanced AI experiment manager.

💬 [Get help on Discord](https://discord.com/invite/XncpTy7DSt)
📋 [License: Apache 2.0](https://github.com/Lightning-AI/litlogger/blob/main/LICENSE)