diff --git a/README.md b/README.md
index 555fc6cc2e903..cc89a9563b083 100644
--- a/README.md
+++ b/README.md
@@ -55,6 +55,8 @@
 ______________________________________________________________________
 
+English | [繁體中文](./README_zh.md)
+
 # Looking for GPUs? Over 340,000 developers use [Lightning Cloud](https://lightning.ai/?utm_source=ptl_readme&utm_medium=referral&utm_campaign=ptl_readme) - purpose-built for PyTorch and PyTorch Lightning.
 
 - [GPUs](https://lightning.ai/pricing?utm_source=ptl_readme&utm_medium=referral&utm_campaign=ptl_readme) from $0.19.
diff --git a/README_zh.md b/README_zh.md
new file mode 100644
index 0000000000000..eca45ac8955f6
--- /dev/null
+++ b/README_zh.md
@@ -0,0 +1,632 @@
快速開始 • 範例 • PyTorch Lightning • Fabric • Lightning Cloud • 社群 • 文件
[PyPI](https://pypi.org/project/pytorch-lightning/)
[PyPI version](https://badge.fury.io/py/pytorch-lightning)
[Downloads](https://pepy.tech/project/pytorch-lightning)
[conda-forge](https://anaconda.org/conda-forge/lightning)
[codecov](https://codecov.io/gh/Lightning-AI/pytorch-lightning)
[Discord](https://discord.gg/VptPCZkGNa)
[License](https://github.com/Lightning-AI/pytorch-lightning/blob/master/LICENSE)
+
**調整部分**

```diff
+ import lightning as L
  import torch; import torchvision as tv

  dataset = tv.datasets.CIFAR10("data", download=True,
                                train=True,
                                transform=tv.transforms.ToTensor())

+ fabric = L.Fabric()
+ fabric.launch()

  model = tv.models.resnet18()
  optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
- device = "cuda" if torch.cuda.is_available() else "cpu"
- model.to(device)
+ model, optimizer = fabric.setup(model, optimizer)

  dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
+ dataloader = fabric.setup_dataloaders(dataloader)

  model.train()
  num_epochs = 10
  for epoch in range(num_epochs):
      for batch in dataloader:
          inputs, labels = batch
-         inputs, labels = inputs.to(device), labels.to(device)
          optimizer.zero_grad()
          outputs = model(inputs)
          loss = torch.nn.functional.cross_entropy(outputs, labels)
-         loss.backward()
+         fabric.backward(loss)
          optimizer.step()
          print(loss.data)
```

**使用 Fabric 的結果 (copy me!)**

```Python
import lightning as L
import torch
import torchvision as tv

dataset = tv.datasets.CIFAR10("data", download=True,
                              train=True,
                              transform=tv.transforms.ToTensor())

fabric = L.Fabric()
fabric.launch()

model = tv.models.resnet18()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
model, optimizer = fabric.setup(model, optimizer)

dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
dataloader = fabric.setup_dataloaders(dataloader)

model.train()
num_epochs = 10
for epoch in range(num_epochs):
    for batch in dataloader:
        inputs, labels = batch
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = torch.nn.functional.cross_entropy(outputs, labels)
        fabric.backward(loss)
        optimizer.step()
        print(loss.data)
```
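The two code blocks above show the same script before and after the Fabric conversion. To illustrate what that conversion buys, here is a minimal sketch (not part of the diff above) of the same loop with scaling options moved into the `Fabric(...)` constructor. The constructor arguments `accelerator`, `devices`, `strategy`, and `precision` are standard Fabric options, but the specific values chosen here (`"ddp"`, `"bf16-mixed"`) are illustrative assumptions rather than settings taken from the README.

```python
import lightning as L
import torch
import torchvision as tv

# Illustrative assumptions: the strategy and precision values are examples only.
fabric = L.Fabric(
    accelerator="auto",      # pick CPU/GPU automatically
    devices="auto",          # use whatever devices are visible
    strategy="ddp",          # assumed: DistributedDataParallel across devices
    precision="bf16-mixed",  # assumed: bfloat16 mixed precision
)
fabric.launch()

dataset = tv.datasets.CIFAR10("data", download=True, train=True,
                              transform=tv.transforms.ToTensor())
dataloader = fabric.setup_dataloaders(
    torch.utils.data.DataLoader(dataset, batch_size=8)
)

model = tv.models.resnet18()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
model, optimizer = fabric.setup(model, optimizer)

model.train()
for epoch in range(10):
    for inputs, labels in dataloader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(inputs), labels)
        fabric.backward(loss)  # replaces loss.backward() so Fabric can apply precision/strategy handling
        optimizer.step()
```

The point of the sketch is that changing hardware, device count, or precision only touches the `Fabric(...)` constructor; the training loop itself stays exactly as in the converted example above.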