diff --git a/README.md b/README.md index 546458bf80bc9..c37a195654ce3 100644 --- a/README.md +++ b/README.md @@ -55,6 +55,8 @@ ______________________________________________________________________ +English | [繁體中文](./README_zh.md) + # Why PyTorch Lightning? Training models in plain PyTorch is tedious and error-prone - you have to manually handle things like backprop, mixed precision, multi-GPU, and distributed training, often rewriting code for every new project. PyTorch Lightning organizes PyTorch code to automate those complexities so you can focus on your model and data, while keeping full control and scaling from CPU to multi-node without changing your core code. But if you want control of those things, you can still opt into more DIY. diff --git a/README_zh.md b/README_zh.md new file mode 100644 index 0000000000000..dd09e407a5a72 --- /dev/null +++ b/README_zh.md @@ -0,0 +1,632 @@ +
快速開始 • 範例 • PyTorch Lightning • Fabric • Lightning AI • 社群 • 文件
[PyPI](https://pypi.org/project/pytorch-lightning/)
[PyPI version](https://badge.fury.io/py/pytorch-lightning)
[Downloads](https://pepy.tech/project/pytorch-lightning)
[Conda](https://anaconda.org/conda-forge/lightning)
[Codecov](https://codecov.io/gh/Lightning-AI/pytorch-lightning)

[Discord](https://discord.gg/VptPCZkGNa)

[License](https://github.com/Lightning-AI/pytorch-lightning/blob/master/LICENSE)
**調整部分**

```diff
+ import lightning as L
  import torch; import torchvision as tv

  dataset = tv.datasets.CIFAR10("data", download=True,
                                train=True,
                                transform=tv.transforms.ToTensor())

+ fabric = L.Fabric()
+ fabric.launch()

  model = tv.models.resnet18()
  optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
- device = "cuda" if torch.cuda.is_available() else "cpu"
- model.to(device)
+ model, optimizer = fabric.setup(model, optimizer)

  dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
+ dataloader = fabric.setup_dataloaders(dataloader)

  model.train()
  num_epochs = 10
  for epoch in range(num_epochs):
      for batch in dataloader:
          inputs, labels = batch
-         inputs, labels = inputs.to(device), labels.to(device)
          optimizer.zero_grad()
          outputs = model(inputs)
          loss = torch.nn.functional.cross_entropy(outputs, labels)
-         loss.backward()
+         fabric.backward(loss)
          optimizer.step()
          print(loss.data)
```

**使用 Fabric 的結果 (copy me!)**

```python
import lightning as L
import torch
import torchvision as tv

dataset = tv.datasets.CIFAR10("data", download=True,
                              train=True,
                              transform=tv.transforms.ToTensor())

fabric = L.Fabric()
fabric.launch()

model = tv.models.resnet18()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
model, optimizer = fabric.setup(model, optimizer)

dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
dataloader = fabric.setup_dataloaders(dataloader)

model.train()
num_epochs = 10
for epoch in range(num_epochs):
    for batch in dataloader:
        inputs, labels = batch
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = torch.nn.functional.cross_entropy(outputs, labels)
        fabric.backward(loss)
        optimizer.step()
        print(loss.data)
```
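The comparison above only moves the existing single-device training loop onto Fabric. As a rough follow-up sketch (not part of the original README content), the same script can typically be scaled out by changing only the `Fabric(...)` arguments; the `accelerator`, `devices`, `strategy`, and `precision` values below are illustrative assumptions and should be matched to your hardware.

```python
import lightning as L
import torch
import torchvision as tv

# Illustrative, assumed settings: 2 CUDA devices, DDP, bf16 mixed precision.
fabric = L.Fabric(accelerator="cuda", devices=2, strategy="ddp", precision="bf16-mixed")
fabric.launch()

dataset = tv.datasets.CIFAR10("data", download=True, train=True,
                              transform=tv.transforms.ToTensor())
dataloader = fabric.setup_dataloaders(
    torch.utils.data.DataLoader(dataset, batch_size=8)
)

model = tv.models.resnet18()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
model, optimizer = fabric.setup(model, optimizer)

model.train()
for epoch in range(10):
    for inputs, labels in dataloader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(inputs), labels)
        fabric.backward(loss)  # replaces loss.backward(); handles gradient sync and precision
        optimizer.step()
```

With a multi-device strategy such as `"ddp"`, `fabric.launch()` starts one process per device and `setup_dataloaders()` adds a distributed sampler by default, so the loop body itself does not change.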