Commit f308bc1 ("linting Readme", parent d8b1bfb)

README.md: 26 additions, 12 deletions

______________________________________________________________________

Save, load, host, and share models without slowing down training.

**LitModels** minimizes training slowdowns from checkpoint saving. Share public links on Lightning AI or your own cloud with enterprise-grade access controls.

<pre>
✅ Checkpoint without slowing training.
✅ Instant model loading anywhere.
✅ Share with secure, link-based access.
✅ Host on Lightning or your own cloud.
</pre>

# Quick start

```
pip install litmodels
```

Toy example ([see real examples](#examples)):

```python
import litmodels as lm
import torch

# save a model
model = torch.nn.Module()
lm.upload_model(model=model, name="model-name")

# load a model
model = lm.load_model(name="model-name")
```

# Examples

<details>
<summary>PyTorch</summary>

Save model:

```python
import torch
from litmodels import load_model, upload_model

# any torch.nn.Module works here; a bare module stands in for a real model
model = torch.nn.Module()

upload_model(model=model, name="your_org/your_team/torch-model")
```

Load model:

```python
model_ = load_model(name="your_org/your_team/torch-model")
```

</details>

<details>
<summary>PyTorch Lightning</summary>

Save model:

```python
from lightning import Trainer
from litmodels import upload_model

# ... train your LightningModule with the Trainer and set `checkpoint_path` to the checkpoint it saved ...

upload_model(model=checkpoint_path, name="<organization>/<teamspace>/<model-name>")
```

Load model:

```python
from lightning import Trainer
from lightning.pytorch.demos.boring_classes import BoringModel
from litmodels import download_model

# ... download the checkpoint with download_model and point `checkpoint_path` at the local file ...

trainer.fit(BoringModel(), ckpt_path=checkpoint_path)
```

</details>

<details>
<summary>SKLearn</summary>

Save model:

```python
from sklearn import datasets, model_selection, svm
from litmodels import upload_model

# ... load a dataset, split it with model_selection, and fit an svm.SVC classifier as `model` ...

upload_model(model=model, name="your_org/your_team/sklearn-svm-model")
```

Use model:

```python
from litmodels import load_model

model = load_model(name="your_org/your_team/sklearn-svm-model")
# ... run the loaded model on a held-out sample to obtain `prediction` ...

print(f"Prediction: {prediction}")
```

</details>

# Features

<details>
<summary>PyTorch Lightning Callback</summary>
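
The snippet below is only a minimal sketch of the idea, not LitModels' built-in callback: a custom Lightning `Callback` (the class name and hook choice are illustrative) pushes the best checkpoint to the registry with `upload_model` once fitting ends.

```python
from lightning import Trainer
from lightning.pytorch.callbacks import Callback
from litmodels import upload_model


class UploadBestCheckpoint(Callback):
    """Illustrative callback: upload the best checkpoint when training finishes."""

    def __init__(self, name: str):
        self.name = name

    def on_fit_end(self, trainer, pl_module):
        best_path = trainer.checkpoint_callback.best_model_path
        if best_path:  # only upload if a checkpoint was actually written
            upload_model(model=best_path, name=self.name)


trainer = Trainer(callbacks=[UploadBestCheckpoint(name="my-org/my-team/my-model")])
```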

</details>

<details>
<summary>Pickle Registry Mixin</summary>

Why is this useful? Mixing `PickleRegistryMixin` into any picklable Python class gives it `upload_model` and `download_model` methods, so custom models can be pushed to and pulled from the registry without extra serialization code.

Save model:

```python
from litmodels.integrations.mixins import PickleRegistryMixin

# ... define a picklable class `MyModel` that mixes in PickleRegistryMixin, then instantiate it as `model` ...

model.upload_model(name="my-org/my-team/my-model")
```

Load model:

```python
loaded_model = MyModel.download_model(name="my-org/my-team/my-model")
```

</details>

<details>
<summary>PyTorch Registry Mixin</summary>

Why is this useful? `PyTorchRegistryMixin` adds the same `upload_model` / `download_model` methods to a regular `torch.nn.Module`, so the model can be pulled back later with its architecture intact.

Save model:

```python
import torch
from litmodels.integrations.mixins import PyTorchRegistryMixin


# Important: PyTorchRegistryMixin must be first in the inheritance order
class MyTorchModel(PyTorchRegistryMixin, torch.nn.Module):
    def __init__(self, input_size, hidden_size=128):
        super().__init__()
        # minimal body: one linear layer plus a nonlinearity (any layers work here)
        self.linear = torch.nn.Linear(input_size, hidden_size)
        self.activation = torch.nn.ReLU()

    def forward(self, x):
        return self.activation(self.linear(x))


# Create and push the model
model = MyTorchModel(input_size=784)
model.upload_model(name="my-org/my-team/torch-model")
```

Use the model:

```python
# Pull the model with the same architecture
loaded_model = MyTorchModel.download_model(name="my-org/my-team/torch-model")
```
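
As a quick sanity check (purely illustrative, reusing `loaded_model` and the layer sizes defined above), a dummy forward pass confirms the pulled model is usable right away:

```python
import torch

# with input_size=784 and hidden_size=128 above, this prints torch.Size([1, 128])
x = torch.randn(1, 784)
with torch.no_grad():
    print(loaded_model(x).shape)
```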

</details>

# Performance

TODO: show a chart comparing training with and without LitModels (GPU utilization side by side), plus the tangible speed-ups in training and inference.

# Community

💬 [Get help on Discord](https://discord.com/invite/XncpTy7DSt)
📋 [License: Apache 2.0](https://github.com/Lightning-AI/litModels/blob/main/LICENSE)