# Personalized Federated Learning Algorithms

FedRep learns a shared data representation (the global layers) across clients and a unique, personalized local "head" (the local layers) for each client. In this implementation, after each round of local training, only the representation on each client is retrieved and uploaded to the server for aggregation.

```bash
cd examples/personalized_fl
uv run fedrep/fedrep.py -c configs/fedrep_CIFAR10_resnet18.yml
```

**Reference:** Collins et al., "[Exploiting Shared Representations for Personalized Federated Learning](http://proceedings.mlr.press/v139/collins21a/collins21a.pdf)," in Proc. International Conference on Machine Learning (ICML), 2021.
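
For intuition, here is a minimal PyTorch sketch of the upload step described above: filtering the client's weights so that only the representation layers are sent to the server. This is not Plato's actual implementation; the layer names `representation` and `head` are illustrative assumptions.

```python
import torch.nn as nn

class ClientModel(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Shared representation: aggregated on the server across clients.
        self.representation = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        # Personalized head: stays on the client and is never uploaded.
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.head(self.representation(x))

def payload_for_server(model: nn.Module) -> dict:
    """Keep only the representation layers in the uploaded weights."""
    return {
        name: weight
        for name, weight in model.state_dict().items()
        if name.startswith("representation")
    }

model = ClientModel()
print(sorted(payload_for_server(model).keys()))  # only representation.* tensors
```
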
FedBABU only updates the global layers of the model during FL training. The local layers are frozen at the beginning of each local training epoch.

```bash
cd examples/personalized_fl
uv run fedbabu/fedbabu.py -c configs/fedbabu_CIFAR10_resnet18.yml
```

**Reference:** Oh et al., "[FedBABU: Towards Enhanced Representation for Federated Image Classification](https://openreview.net/forum?id=HuaYQfggn5u)," in Proc. International Conference on Learning Representations (ICLR), 2022.
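
The freezing rule can be sketched in a few lines of PyTorch. This is not Plato's code; the body/head split and the learning rate are illustrative assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential()
model.add_module("body", nn.Sequential(nn.Linear(32, 64), nn.ReLU()))
model.add_module("head", nn.Linear(64, 10))

# Freeze the head before local training starts.
for param in model.head.parameters():
    param.requires_grad = False

# The optimizer only sees the trainable (body) parameters.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01
)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()  # updates body weights only; the frozen head keeps its values
```
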
APFL jointly optimizes the global model and the personalized models by interpolating between them. Once the global model is received, each client carries out a regular local update, and then conducts a personalized optimization to obtain a trained personalized model. The locally trained global model and the personalized model are subsequently combined using the parameter "alpha," which can be updated dynamically.

```bash
cd examples/personalized_fl
uv run apfl/apfl.py -c configs/apfl_CIFAR10_resnet18.yml
```

**Reference:** Deng et al., "[Adaptive Personalized Federated Learning](https://arxiv.org/abs/2003.13461)," arXiv preprint, 2021.
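
The combination step can be sketched as a simple parameter interpolation. This is not Plato's code; the fixed value of `alpha` is an illustrative assumption (APFL can also update it adaptively).

```python
import torch
import torch.nn as nn

global_model = nn.Linear(32, 10)    # freshly trained copy of the global model
personal_model = nn.Linear(32, 10)  # the client's personalized model
alpha = 0.5  # interpolation weight; larger alpha favors the personalized model

global_weights = global_model.state_dict()
personal_weights = personal_model.state_dict()

# Alpha-weighted interpolation of the two sets of weights.
combined = {
    name: alpha * personal_weights[name] + (1 - alpha) * global_weights[name]
    for name in global_weights
}
personal_model.load_state_dict(combined)
```
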
FedPer learns a global representation and personalized heads, but makes simultaneous local updates for both sets of parameters, and therefore performs the same number of local updates for the head and the representation in each local round.

```bash
cd examples/personalized_fl
uv run fedper/fedper.py -c configs/fedper_CIFAR10_resnet18.yml
```

**Reference:** Arivazhagan et al., "[Federated Learning with Personalization Layers](https://arxiv.org/abs/1912.00818)," arXiv preprint, 2019.
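
A minimal PyTorch sketch of the simultaneous update, not Plato's code: the shared base and the personalized head sit in a single optimizer, so both receive the same number of local updates. Layer names and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

base = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # aggregated on the server
head = nn.Linear(64, 10)                             # kept on the client
optimizer = torch.optim.SGD(
    list(base.parameters()) + list(head.parameters()), lr=0.01
)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(head(base(x)), y)
loss.backward()
optimizer.step()  # one joint update: base and head move in the same step
```
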
With LG-FedAvg, only the global layers of a model are sent to the server for aggregation, while each client keeps its local layers to itself.

```bash
cd examples/personalized_fl
uv run lgfedavg/lgfedavg.py -c configs/lgfedavg_CIFAR10_resnet18.yml
```

**Reference:** Liang et al., "[Think Locally, Act Globally: Federated Learning with Local and Global Representations](https://arxiv.org/abs/2001.01523)," in Proc. NeurIPS, 2019.
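
One way to picture this is how a client merges the server payload: since the payload contains only the global layers, loading it with `strict=False` leaves the client's local layers untouched. This sketch is not Plato's code, and the layer names are illustrative assumptions.

```python
import torch.nn as nn

class ClientModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.local_layers = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # never uploaded
        self.global_layers = nn.Linear(64, 10)                           # aggregated globally

    def forward(self, x):
        return self.global_layers(self.local_layers(x))

model = ClientModel()
# Pretend this payload came back from the server after aggregation.
server_payload = {
    name: weight for name, weight in model.state_dict().items()
    if name.startswith("global_layers")
}
# strict=False: the missing local-layer keys are expected and left as-is.
model.load_state_dict(server_payload, strict=False)
```
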
Ditto jointly optimizes the global model and personalized models by learning local models that are encouraged, through global regularization, to stay close to the global model. In this example, once the global model is received, each client carries out a regular local update and then optimizes its personalized model.

```bash
cd examples/personalized_fl
uv run ditto/ditto.py -c configs/ditto_CIFAR10_resnet18.yml
```

**Reference:** Li et al., "[Ditto: Fair and robust federated learning through personalization](https://proceedings.mlr.press/v139/li21h.html)," in Proc. International Conference on Machine Learning (ICML), 2021.
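
The personalized objective can be sketched as the local task loss plus a proximal term that pulls the personalized weights toward the received global model. This is not Plato's code; `ditto_lambda` and the toy model are illustrative assumptions.

```python
import torch
import torch.nn as nn

global_model = nn.Linear(32, 10)    # received from the server, held fixed here
personal_model = nn.Linear(32, 10)  # trained locally
optimizer = torch.optim.SGD(personal_model.parameters(), lr=0.01)
ditto_lambda = 0.1  # strength of the pull toward the global model

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
task_loss = nn.functional.cross_entropy(personal_model(x), y)

# Proximal regularizer: squared distance to the global model's weights.
proximal = sum(
    ((p_local - p_global.detach()) ** 2).sum()
    for p_local, p_global in zip(
        personal_model.parameters(), global_model.parameters()
    )
)
loss = task_loss + 0.5 * ditto_lambda * proximal
loss.backward()
optimizer.step()
```
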
Per-FedAvg uses the Model-Agnostic Meta-Learning (MAML) framework to perform local training during the regular training rounds. It performs two forward and backward passes with fixed learning rates in each iteration.

```bash
cd examples/personalized_fl
uv run perfedavg/perfedavg.py -c configs/perfedavg_CIFAR10_resnet18.yml
```

**Reference:** Fallah et al., "[Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach](https://proceedings.neurips.cc/paper/2020/hash/24389bfe4fe2eba8bf9aa9203a44cdad-Abstract.html)," in Proc. NeurIPS, 2020.
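
The two passes can be sketched with a first-order approximation of the MAML update: the first pass adapts a temporary copy of the weights with learning rate `alpha`, and the second pass evaluates the adapted weights on a fresh batch and applies that gradient to the actual weights with learning rate `beta`. This is a simplified sketch, not Plato's code; the learning rates and the first-order simplification are assumptions.

```python
import copy
import torch
import torch.nn as nn

model = nn.Linear(32, 10)
alpha, beta = 0.01, 0.001  # inner and outer learning rates (assumed values)

def batch():
    return torch.randn(8, 32), torch.randint(0, 10, (8,))

# Pass 1: temporary adaptation step (inner update) on a copy of the weights.
x1, y1 = batch()
adapted = copy.deepcopy(model)
loss1 = nn.functional.cross_entropy(adapted(x1), y1)
grads1 = torch.autograd.grad(loss1, adapted.parameters())
with torch.no_grad():
    for p, g in zip(adapted.parameters(), grads1):
        p -= alpha * g

# Pass 2: evaluate the adapted weights on a new batch and apply that gradient
# to the original model (first-order approximation of the MAML update).
x2, y2 = batch()
loss2 = nn.functional.cross_entropy(adapted(x2), y2)
grads2 = torch.autograd.grad(loss2, adapted.parameters())
with torch.no_grad():
    for p, g in zip(model.parameters(), grads2):
        p -= beta * g
```
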
Hermes utilizes structured pruning to improve both communication efficiency and inference efficiency of federated learning. It prunes channels with the lowest magnitudes in each local model and adjusts the pruning amount based on each local model's test accuracy and its previous pruning amount. When the server aggregates pruned updates, it only averages parameters that were not pruned on all clients.

```bash
cd examples/personalized_fl
uv run hermes/hermes.py -c configs/hermes_CIFAR10_resnet18.yml
```

**Reference:** Li et al., "[Hermes: An Efficient Federated Learning Framework for Heterogeneous Mobile Clients](https://sites.duke.edu/angli/files/2021/10/2021_Mobicom_Hermes_v1.pdf)," in Proc. 27th Annual International Conference on Mobile Computing and Networking (MobiCom), 2021.
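
One possible reading of that aggregation rule is sketched below with NumPy: each client uploads its pruned weights together with a binary mask, and the server averages every position only over the clients whose mask kept it, leaving fully pruned positions at zero. This is an illustrative sketch, not Plato's implementation.

```python
import numpy as np

# Toy uploads from two clients: pruned weights plus binary masks (1 = kept, 0 = pruned).
client_weights = [np.array([1.0, 2.0, 0.0, 4.0]), np.array([3.0, 0.0, 0.0, 2.0])]
client_masks = [np.array([1, 1, 0, 1]), np.array([1, 0, 0, 1])]

kept_counts = np.sum(client_masks, axis=0)
weight_sums = np.sum([w * m for w, m in zip(client_weights, client_masks)], axis=0)

# Average only over the clients that kept each position; fully pruned positions stay 0.
aggregated = np.divide(
    weight_sums, kept_counts, out=np.zeros_like(weight_sums), where=kept_counts > 0
)
print(aggregated)  # averages: 2.0, 2.0, 0.0 (pruned everywhere), 3.0
```
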
When using the strategy pattern is no longer feasible, it is also possible to customize the training or testing procedure through subclassing and overriding hook methods. To customize the training loop this way, subclass the `basic.Trainer` class in `plato.trainers` and override the following hook methods: