Commit 13e2ddc

readme: logo 800px (#17108)
1 parent e993747 commit 13e2ddc

File tree

1 file changed: +42 −31 lines changed

README.md

Lines changed: 42 additions & 31 deletions
@@ -1,6 +1,6 @@
 <div align="center">

-<img alt="Lightning" src="https://pl-public-data.s3.amazonaws.com/assets_lightning/LightningColor.png" width="600px" style="max-width: 100%;">
+<img alt="Lightning" src="https://pl-public-data.s3.amazonaws.com/assets_lightning/LightningColor.png" width="800px" style="max-width: 100%;">

 <br/>
 <br/>
@@ -9,7 +9,7 @@

 **NEW- Lightning 2.0 is featuring a clean and stable API!!**

-----
+______________________________________________________________________

 <p align="center">
 <a href="https://www.lightning.ai/">Lightning.ai</a> •
@@ -40,7 +40,6 @@

 </div>

-
 ## Install Lightning

 Simple installation from PyPI
@@ -92,31 +91,31 @@ pip install -iU https://test.pypi.org/simple/ pytorch-lightning
 </details>
 <!-- end skipping PyPI description -->

-----
+______________________________________________________________________

 ## Lightning has 3 core packages

-[PyTorch Lightning: Train and deploy PyTorch at scale](#pytorch-lightning-train-and-deploy-pytorch-at-scale).
-[Lightning Fabric: Expert control](#lightning-fabric-expert-control).
-[Lightning Apps: Build AI products and ML workflows](#lightning-apps-build-ai-products-and-ml-workflows).
+[PyTorch Lightning: Train and deploy PyTorch at scale](#pytorch-lightning-train-and-deploy-pytorch-at-scale).
+[Lightning Fabric: Expert control](#lightning-fabric-expert-control).
+[Lightning Apps: Build AI products and ML workflows](#lightning-apps-build-ai-products-and-ml-workflows).

-Lightning gives you granular control over how much abstraction you want to add over PyTorch.
+Lightning gives you granular control over how much abstraction you want to add over PyTorch.

 <div align="center">
 <img src="https://pl-public-data.s3.amazonaws.com/assets_lightning/continuum.png" width="80%">
 </div>

-----
+______________________________________________________________________

 # PyTorch Lightning: Train and Deploy PyTorch at Scale

-PyTorch Lightning is just organized PyTorch - Lightning disentangles PyTorch code to decouple the science from the engineering.
+PyTorch Lightning is just organized PyTorch - Lightning disentangles PyTorch code to decouple the science from the engineering.

 ![PT to PL](docs/source-pytorch/_static/images/general/pl_quick_start_full_compressed.gif)

-----
+______________________________________________________________________

-### Hello simple model
+### Hello simple model

 ```python
 # main.py
@@ -125,11 +124,12 @@ import os, torch, torch.nn as nn, torch.utils.data as data, torchvision as tv, t
 import lightning as L

 # --------------------------------
-# Step 1: Define a LightningModule
+# Step 1: Define a LightningModule
 # --------------------------------
-# A LightningModule (nn.Module subclass) defines a full *system*
+# A LightningModule (nn.Module subclass) defines a full *system*
 # (ie: an LLM, difussion model, autoencoder, or simple image classifier).

+
 class LitAutoEncoder(L.LightningModule):
     def __init__(self):
         super().__init__()
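Note: the diff jumps from the start of `LitAutoEncoder` (old line 135) straight to its optimizer hunk (old line 155), so the body of the module is not visible here. For orientation only, a LightningModule of this kind looks roughly like the sketch below; the layer sizes and `training_step` are illustrative assumptions, not the README's exact code.

```python
import torch
import torch.nn as nn
import lightning as L


class LitAutoEncoder(L.LightningModule):
    def __init__(self):
        super().__init__()
        # illustrative encoder/decoder; the README's exact layers may differ
        self.encoder = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 3))
        self.decoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 28 * 28))

    def training_step(self, batch, batch_idx):
        # training_step defines one optimization step on a batch
        x, _ = batch
        x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
        return nn.functional.mse_loss(x_hat, x)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```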
@@ -155,6 +155,7 @@ class LitAutoEncoder(L.LightningModule):
         optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
         return optimizer

+
 # -------------------
 # Step 2: Define data
 # -------------------
@@ -170,11 +171,13 @@ trainer.fit(autoencoder, data.DataLoader(train), data.DataLoader(val))
 ```

 Run the model on your terminal
-``` bash
+
+```bash
 pip install torchvision
 python main.py
 ```
-----
+
+______________________________________________________________________

 ## Advanced features

@@ -197,6 +200,7 @@ trainer = Trainer(accelerator="gpu", devices=8)
 # 256 GPUs
 trainer = Trainer(accelerator="gpu", devices=8, num_nodes=32)
 ```
+
 </details>

 <details>
@@ -206,6 +210,7 @@ trainer = Trainer(accelerator="gpu", devices=8, num_nodes=32)
 # no code changes needed
 trainer = Trainer(accelerator="tpu", devices=8)
 ```
+
 </details>

 <details>
@@ -246,12 +251,13 @@ trainer = Trainer(logger=loggers.NeptuneLogger())

 <details>

-<summary>Early Stopping</summary>
+<summary>Early Stopping</summary>

 ```python
 es = EarlyStopping(monitor="val_loss")
 trainer = Trainer(callbacks=[es])
 ```
+
 </details>

 <details>
@@ -261,6 +267,7 @@ trainer = Trainer(callbacks=[es])
 checkpointing = ModelCheckpoint(monitor="val_loss")
 trainer = Trainer(callbacks=[checkpointing])
 ```
+
 </details>

 <details>
@@ -271,6 +278,7 @@ trainer = Trainer(callbacks=[checkpointing])
 autoencoder = LitAutoEncoder()
 torch.jit.save(autoencoder.to_torchscript(), "model.pt")
 ```
+
 </details>

 <details>
@@ -287,7 +295,7 @@ with tempfile.NamedTemporaryFile(suffix=".onnx", delete=False) as tmpfile:

 </details>

-----
+______________________________________________________________________

 ## Advantages over unstructured PyTorch

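The hunk headers above and below quote the README's ONNX export example (`with tempfile.NamedTemporaryFile(suffix=".onnx", ...)`), whose body lies outside the changed lines. For orientation, exporting a LightningModule to ONNX generally follows the pattern sketched below; `TinyModel` and the input shape are placeholder assumptions.

```python
import tempfile

import torch
import torch.nn as nn
import lightning as L


class TinyModel(L.LightningModule):
    # placeholder module; it only needs a forward() so the export can trace it
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.layer(x)


model = TinyModel()
with tempfile.NamedTemporaryFile(suffix=".onnx", delete=False) as tmpfile:
    # to_onnx traces the model with an example input and writes the ONNX file
    model.to_onnx(tmpfile.name, input_sample=torch.randn(1, 28 * 28), export_params=True)
```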
@@ -300,21 +308,20 @@ with tempfile.NamedTemporaryFile(suffix=".onnx", delete=False) as tmpfile:
 - [Tested rigorously with every new PR](https://github.com/Lightning-AI/lightning/tree/master/tests). We test every combination of PyTorch and Python supported versions, every OS, multi GPUs and even TPUs.
 - Minimal running speed overhead (about 300 ms per epoch compared with pure PyTorch).

-----
+______________________________________________________________________

 <div align="center">
 <a href="https://lightning.ai/docs/pytorch/stable/">Read the PyTorch Lightning docs</a>
 </div>

-----
+______________________________________________________________________

 # Lightning Fabric: Expert control.

 Run on any device at any scale with expert-level control over PyTorch training loop and scaling strategy. You can even write your own Trainer.

 Fabric is designed for the most complex models like foundation model scaling, LLMs, diffussion, transformers, reinforcement learning, active learning.

-
 ```diff
 + import lightning as L
 import torch
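The README's Fabric example (the `diff`-formatted code block that starts at the end of this hunk) is cut off after `import torch`. As context for the claim about expert-level control over the training loop, a bare-bones Fabric loop typically looks like the following sketch; the model, data, and hyperparameters are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.utils.data as data
import lightning as L

# placeholder model and synthetic dataset; Fabric works with any nn.Module and DataLoader
model = nn.Linear(32, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
dataset = data.TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
dataloader = data.DataLoader(dataset, batch_size=8)

fabric = L.Fabric(accelerator="auto", devices=1)
fabric.launch()

# Fabric moves the model/optimizer/dataloader to the chosen device and strategy
model, optimizer = fabric.setup(model, optimizer)
dataloader = fabric.setup_dataloaders(dataloader)

model.train()
for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(inputs), targets)
    fabric.backward(loss)  # replaces loss.backward()
    optimizer.step()
```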
@@ -354,13 +361,13 @@ Fabric is designed for the most complex models like foundation model scaling, LL
 - Designed with multi-billion parameter models in mind
 - Build your own custom Trainer using Fabric primitives for training checkpointing, logging, and more

-----
+______________________________________________________________________

 <div align="center">
 <a href="https://lightning.ai/docs/fabric/stable/">Read the Lightning Fabric docs</a>
 </div>

-----
+______________________________________________________________________

 # Lightning Apps: Build AI products and ML workflows

@@ -376,24 +383,28 @@ Lightning Apps remove the cloud infrastructure boilerplate so you can focus on s
 # app.py
 import lightning as L

+
 class TrainComponent(L.LightningWork):
     def run(self, x):
-        print(f'train a model on {x}')
+        print(f"train a model on {x}")
+

 class AnalyzeComponent(L.LightningWork):
     def run(self, x):
-        print(f'analyze model on {x}')
+        print(f"analyze model on {x}")
+

 class WorkflowOrchestrator(L.LightningFlow):
     def __init__(self) -> None:
         super().__init__()
-        self.train = TrainComponent(cloud_compute=L.CloudCompute('cpu'))
-        self.analyze = AnalyzeComponent(cloud_compute=L.CloudCompute('gpu'))
+        self.train = TrainComponent(cloud_compute=L.CloudCompute("cpu"))
+        self.analyze = AnalyzeComponent(cloud_compute=L.CloudCompute("gpu"))

     def run(self):
         self.train.run("CPU machine 1")
         self.analyze.run("GPU machine 2")

+
 app = L.LightningApp(WorkflowOrchestrator())
 ```

@@ -407,13 +418,13 @@ lightning run app app.py --setup --cloud
 lightning run app app.py
 ```

-----
+______________________________________________________________________

 <div align="center">
 <a href="https://lightning.ai/docs/app/stable/">Read the Lightning Apps docs</a>
 </div>

-----
+______________________________________________________________________

 ## Examples

@@ -444,7 +455,7 @@ lightning run app app.py
 - [Logistic Regression](https://lightning-bolts.readthedocs.io/en/stable/models/classic_ml.html#logistic-regression)
 - [Linear Regression](https://lightning-bolts.readthedocs.io/en/stable/models/classic_ml.html#linear-regression)

-----
+______________________________________________________________________

 ## Continuous Integration

@@ -470,7 +481,7 @@ Lightning is rigorously tested across multiple CPUs, GPUs, TPUs, IPUs, and HPUs
 </center>
 </details>

-----
+______________________________________________________________________

 ## Community

0 commit comments

Comments
 (0)