**Log, track, compare, and share AI model experiments**

<pre>
✅ Lightweight        ✅ Zero-setup         ✅ Any Python code
✅ Artifacts          ✅ Machine metadata   ✅ Cloud or on-prem
✅ Training           ✅ Inference          ✅ Agents, multi-modal
✅ Fine-grained RBAC  ✅ Share experiments  ✅ Free tier
</pre>

______________________________________________________________________

# Why LitLogger?

Reproducible model building is hard. As teams iterate on models, data, or prompts, it quickly becomes difficult to track what changed and why results improved or regressed. LitLogger is a ***lightweight, minimal*** experiment logger that tracks every run, including inputs, metrics, prompts, and model outputs, so teams can trace changes, compare results, and audit decisions over time without feature bloat or re-running everything from scratch.

LitLogger is free for developers and built into [Lightning AI](https://lightning.ai/), an independent platform trusted by enterprises. It runs in the cloud or on-prem, giving teams long-term stability, clear auditability, and control over experiment history.

<img width="3024" height="1716" alt="image" src="https://github.com/user-attachments/assets/27f9d8f1-2a13-4080-a64f-374d957712fa" />

# Quick start

Install LitLogger with pip.

```bash
pip install litlogger
```

### Hello world example

Use LitLogger with any Python code (PyTorch, vLLM, LangChain, etc.).

```python
# ...
logger.finalize()
```
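
The snippet above ends a run with `logger.finalize()`; the general pattern is to create a logger, log metrics while your code runs, and finalize at the end. The sketch below illustrates that flow with a stand-in `StubLogger` class defined on the spot so it runs anywhere; its method names are illustrative assumptions, not LitLogger's actual API:

```python
import random


class StubLogger:
    """Hypothetical stand-in logger; method names are illustrative, not LitLogger's API."""

    def log_metrics(self, metrics: dict) -> None:
        # a real experiment logger would record these values for the current run
        print(metrics)

    def finalize(self) -> None:
        # a real experiment logger would mark the run as complete here
        print("run finalized")


logger = StubLogger()
for step in range(5):
    fake_loss = 1.0 / (step + 1) + random.random() * 0.01  # pretend training signal
    logger.log_metrics({"step": step, "train_loss": fake_loss})
logger.finalize()
```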

# Examples

Use LitLogger for any use case (training, inference, agents, etc.).

<details>
<summary>Model training</summary>

Add LitLogger to any training framework: PyTorch, JAX, TensorFlow, NumPy, scikit-learn, etc.

<div align='center'>

<img alt="LitServe" src="https://github.com/user-attachments/assets/50d9a2f7-17d0-4448-ad21-6be600ab53fc" width="800px" style="max-width: 100%;">

&nbsp;

</div>

```python
    # ...
    with open("model_config.txt", "w") as f:
        f.write(f"num_epochs: {num_epochs}\n")
    logger.log_model_artifact("model_config.txt")
    print("model config artifact logged.")

    # Clean up the dummy artifact file after logging
    os.remove("model_config.txt")

    # ...

if __name__ == "__main__":
    train()
```

</details>

<details>
<summary>Model inference</summary>

Add LitLogger to any inference engine: LitServe, vLLM, FastAPI, etc.

<div align='center'>

<img alt="LitServe" src="https://github.com/user-attachments/assets/ac454da2-0825-4fcf-b422-c6d3a1526cf0" width="800px" style="max-width: 100%;">

&nbsp;

</div>

```python
# ...

class InferenceEngine(ls.LitAPI):
    # ...

    def predict(self, request):
        start_time = time.time()
        x = request["input"]

        # perform calculations using both models
        a = self.text_model(x)
        b = self.vision_model(x)
        # ...
            "output_value": c,
            "prediction_latency_ms": latency * 1000,
        })

        return output

    def teardown(self):
        # ...

if __name__ == "__main__":
    # ...
```

Ping the server from the terminal to generate some metrics:

```bash
curl -X POST http://127.0.0.1:8000/predict -H "Content-Type: application/json" -d '{"input": 4.0}'
curl -X POST http://127.0.0.1:8000/predict -H "Content-Type: application/json" -d '{"input": 5.5}'
```
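
The same request can also be sent from Python. Here is a small sketch that assumes the third-party `requests` package is installed and the example server above is running locally:

```python
import requests

# send one prediction request to the locally running LitServe example server
response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"input": 4.0},
    timeout=10,
)
print(response.json())
```
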
</details>

<details>
<summary>PyTorch Lightning</summary>

PyTorch Lightning now comes with LitLogger natively built in. LitLogger is built by the PyTorch Lightning team and designed for fast performance at multi-node GPU scale.

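Any Lightning logger is wired into training the same way: pass it to `Trainer` via the `logger=` argument, and `self.log(...)` calls inside your `LightningModule` are routed to it. Below is a minimal, self-contained sketch of that wiring; it uses PyTorch Lightning's built-in `CSVLogger` as a stand-in, since LitLogger's own logger class plugs into the same `logger=` argument:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

import lightning.pytorch as pl
from lightning.pytorch.loggers import CSVLogger  # stand-in; swap in the LitLogger logger class


class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.layer(x), y)
        self.log("train_loss", loss)  # routed to whichever logger the Trainer holds
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)


# random regression data, just enough to make the sketch runnable
dataset = TensorDataset(torch.randn(64, 4), torch.randn(64, 1))
loader = DataLoader(dataset, batch_size=16)

trainer = pl.Trainer(max_epochs=1, logger=CSVLogger("logs"), enable_progress_bar=False)
trainer.fit(TinyModel(), loader)
```
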
</details>

<details>
<summary>Simulate a long model training run</summary>

This is a fun example that simulates a long model training run.

<div align='center'>

<img alt="LitServe" src="https://github.com/user-attachments/assets/fd15aa32-2b56-4324-81b6-c87c86db8a3b" width="800px" style="max-width: 100%;">

&nbsp;

</div>

```python
# ...
litlogger.finalize()
```

</details>

# Community

LitLogger welcomes community contributions. Let's build the world's most advanced AI experiment manager together.

💬 [Get help on Discord](https://discord.com/invite/XncpTy7DSt)
📋 [License: Apache 2.0](https://github.com/Lightning-AI/litlogger/blob/main/LICENSE)