content/patterns/multicloud-federated-learning/_index.adoc
14 additions & 14 deletions
@@ -28,15 +28,15 @@ As machine learning (ML) evolves, protecting data privacy becomes increasingly i
Federated learning addresses this by allowing multiple clusters or organizations to collaboratively train models without sharing sensitive data. Computation happens where the data lives, ensuring privacy, regulatory compliance, and efficiency.

-By integrating federated learning with {rh-rhacm-first}, this pattern provides an automated and scalable solution for deploying FL workloads across hybrid and multicluster environments.
+By integrating federated learning with {rh-rhacm-first}, this pattern provides an automated and scalable solution for deploying federated learning workloads across hybrid and multicluster environments.

==== Technologies
* Open Cluster Management (OCM)
  - ManagedCluster
  - ManifestWork
  - Placement
  ...
-* Federated Learning frameworks
+* Federated Learning Frameworks
  - Flower
  - OpenFL
  ...
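
For orientation, this is roughly what the first of those OCM resources looks like on the hub. A minimal sketch, assuming a cluster named `cluster1`; the `fl-dataset` label is a hypothetical marker for where training data lives, not a value defined by this pattern:

[source,yaml]
----
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: cluster1              # illustrative name of a cluster registered to the hub
  labels:
    fl-dataset: chest-xray    # hypothetical label advertising locally held data
spec:
  hubAcceptsClient: true      # the hub has accepted this cluster's registration
----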
@@ -45,35 +45,35 @@ By integrating federated learning with {rh-rhacm-first}, this pattern provides
=== Why Use Advanced Cluster Management for Federated Learning?

-**Advanced Cluster Management (ACM)** simplifies and automates the deployment and orchestration of Federated Learning (FL) workloads across clusters:
+**Advanced Cluster Management (ACM)** simplifies and automates the deployment and orchestration of federated learning workloads across clusters:

-- **Automatic Deployment & Simplified Operations**: ACM provides a unified and automated approach to running FL workflows across different runtimes (e.g., Flower, OpenFL). Its controller manages the entire FL lifecycle—including setup, coordination, status tracking, and teardown—across multiple clusters in a multicloud environment. This eliminates repetitive manual configurations, significantly reduces operational overhead, and ensures consistent, scalable FL deployments.
+- **Automatic Deployment & Simplified Operations**: ACM provides a unified and automated approach to running federated learning workflows across different runtimes (e.g., Flower, OpenFL). Its controller manages the entire federated learning lifecycle—including setup, coordination, status tracking, and teardown—across multiple clusters in a multicloud environment. This eliminates repetitive manual configurations, significantly reduces operational overhead, and ensures consistent, scalable federated learning deployments.

-- **Dynamic Client Selection**: ACM's scheduling capabilities allow FL clients to be selected not only based on where the data resides, but also dynamically based on cluster labels, resource availability, and governance criteria. This enables a more adaptive and intelligent approach to client participation.
+- **Dynamic Client Selection**: ACM's scheduling capabilities allow federated learning clients to be selected not only based on where the data resides, but also dynamically based on cluster labels, resource availability, and governance criteria. This enables a more adaptive and intelligent approach to client participation (see the `Placement` sketch after this list).

-Together, these capabilities support a **flexible FL client model**, where clusters can join or exit the training process dynamically, without requiring static or manual configuration.
+Together, these capabilities support a **flexible federated learning client model**, where clusters can join or exit the training process dynamically, without requiring static or manual configuration.
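
As a concrete illustration of that label-driven selection, the sketch below shows a minimal OCM `Placement`. The name, namespace, and `fl-dataset` label are assumptions for this example (and the namespace would need a `ManagedClusterSetBinding` for the selection to resolve); they are not values mandated by the pattern:

[source,yaml]
----
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: fl-clients                  # hypothetical placement selecting FL clients
  namespace: federated-learning     # assumed hub namespace, bound to a ManagedClusterSet
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchLabels:
            fl-dataset: chest-xray  # only clusters advertising this data participate
----

Under such a selection, a cluster joins or exits training simply by gaining or losing the label on the hub, for example `kubectl label managedcluster cluster1 fl-dataset=chest-xray`; no static client roster is required.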

=== Benefits

-- 🔒 Privacy-preserving training without moving sensitive data
+- Privacy-preserving training without moving sensitive data
-- ⚙️ Automated dynamic FL client orchestration across distributed clusters
+- Automated dynamic federated learning client orchestration across distributed clusters
-- 🧩 Adaptable to different FL frameworks, such as OpenFL and Flower
+- Adaptable to different federated learning frameworks, such as OpenFL and Flower
-- 🌍 Scalability across hybrid and edge clusters
+- Scalability across hybrid and edge clusters
-- 📉 Lower infrastructure and operational costs
+- Lower infrastructure and operational costs
This approach empowers organizations to build smarter, privacy-first AI solutions with less complexity and more flexibility.

-- In this architecture, a central **Hub Cluster** acts as the aggregator, running the Federated Learning (FL) controller and scheduling workloads using ACM APIs like `Placement` and `ManifestWork`.
+- In this architecture, a central **Hub Cluster** acts as the aggregator, running the Federated Learning controller and scheduling workloads using ACM APIs like `Placement` and `ManifestWork`.
-- Multiple **Managed Clusters**, potentially across different clouds, serve as FL clients—each holding private data. These clusters pull the global model from the hub, train it locally, and push model updates back.
+- Multiple **Managed Clusters**, potentially across different clouds, serve as federated learning clients—each holding private data. These clusters pull the global model from the hub, train it locally, and push model updates back.
-- The controller manages this lifecycle using custom resources and supports runtimes like Flower and OpenFL. This setup enables scalable, multi-cloud model training with **data privacy preserved by design**, requiring no changes to existing FL training code.
+- The controller manages this lifecycle using custom resources and supports runtimes like Flower and OpenFL. This setup enables scalable, multi-cloud model training with **data privacy preserved by design**, requiring no changes to existing federated learning training code.
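
To make the hub-to-client handoff concrete, here is a hedged sketch of how the controller could deliver a client workload to one selected cluster via `ManifestWork`. The namespace, image, and server address are placeholders; the pattern's actual manifests may differ:

[source,yaml]
----
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: fl-client-training
  namespace: cluster1        # each ManagedCluster has a matching namespace on the hub
spec:
  workload:
    manifests:               # raw manifests applied verbatim on the managed cluster
      - apiVersion: batch/v1
        kind: Job
        metadata:
          name: fl-client
          namespace: federated-learning
        spec:
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: client
                  image: quay.io/example/flower-client:latest          # placeholder image
                  args: ["--server-address=fl-server.example.com:443"] # hypothetical aggregator endpoint
----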