We want to hear from you! EMF v3.1.0 Launch Discussion: What’s New & What's Next for Edge AI Infrastructure? #764
-
That's a great overview of the newest release. Thanks for your thoughts! Looking forward to hearing others' perspectives as well.
-
@scottmbaker could you share a link to the CLI docs? I'm sure the CLI will reduce the pain and make it easy to automate a few tasks. Additionally, if there is a recorded demo available to the community, please share the link. Thanks!
-
Would it make sense to have a demo that deploys and starts multiple instances of the same reference app on different nodes, then shows their metrics side by side in a chart to compare where the app performs better or worse (as criteria for the load balancer)?
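For illustration, a rough sketch of what such a side-by-side comparison could look like, assuming each instance's metrics are already scraped into a reachable Prometheus endpoint; the metric name app_inference_latency_seconds, the node label, and the URL are placeholders, not anything EMF-defined:

```python
# Hypothetical sketch: compare a latency metric for the same reference app
# running on different nodes, assuming metrics are scraped into Prometheus.
import requests

PROM_URL = "http://prometheus.example.local:9090"  # assumption: Prometheus endpoint
QUERY = (
    "avg by (node) ("
    "rate(app_inference_latency_seconds_sum[5m]) / "
    "rate(app_inference_latency_seconds_count[5m]))"
)

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
results = resp.json()["data"]["result"]

# Print one row per node so the numbers can be compared side by side.
print(f"{'node':<20} {'avg latency (s)':>15}")
for series in sorted(results, key=lambda s: float(s["value"][1])):
    node = series["metric"].get("node", "<unknown>")
    value = float(series["value"][1])
    print(f"{node:<20} {value:>15.4f}")
```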
-
Would it make sense to have a demo showing metrics for a "fail-safe" or "robustness" use case, e.g., deploying workloads on the available nodes, then killing one or more instances (of the same or of different workloads) and measuring how long it takes to re-deploy and restart them back to a stable state (for example, until "time-to-first-frame" or "time-to-first-inference" is reached again)?
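A rough sketch of how the recovery time could be measured, assuming the workload runs as a Deployment on the k3s cluster and kubeconfig access is available; the namespace and label selector below are placeholders, and application-level marks like time-to-first-inference would need instrumentation inside the app itself:

```python
# Hypothetical "kill and measure recovery" sketch using the Kubernetes Python client.
import time
from kubernetes import client, config

NAMESPACE = "demo"                      # assumption
LABEL_SELECTOR = "app=weld-porosity"    # assumption

config.load_kube_config()
core = client.CoreV1Api()

# Pick one running pod of the workload and delete it to simulate a failure.
victim = core.list_namespaced_pod(NAMESPACE, label_selector=LABEL_SELECTOR).items[0]
core.delete_namespaced_pod(victim.metadata.name, NAMESPACE)
start = time.monotonic()

# Poll until a replacement pod (different name) reports Ready again.
while True:
    pods = core.list_namespaced_pod(NAMESPACE, label_selector=LABEL_SELECTOR).items
    ready = [
        p for p in pods
        if p.metadata.name != victim.metadata.name
        and any(c.type == "Ready" and c.status == "True" for c in (p.status.conditions or []))
    ]
    if ready:
        break
    time.sleep(1)

print(f"Recovered to a Ready replacement pod in {time.monotonic() - start:.1f}s")
```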
-
The primary reason we switched from RKE2 to K3s was K3s's simplicity and reduced resource consumption, which frees up more resources for user applications. However, we understand that K3s has certain limitations and may not be the ideal Kubernetes distribution for all use cases. Including a full-stack Kubernetes option in our roadmap would make our platform more comprehensive, and we'd appreciate hearing about specific use cases. One thing we may want to improve soon: we've disabled most default K3s add-ons, including metrics-server, Traefik, and ServiceLB, and it would be great to have an easier way for users to re-enable these components without requiring a cluster reset.
-
Enhancing Remote-First Edge Operations
The Edge Manageability Framework 3.1 delivers powerful enhancements to streamline remote onboarding, provisioning, AI deployment workflows, and observability for edge infrastructure. Here’s a breakdown of key updates and how they empower Edge AI workloads:
What’s New in EMF v3.1.0 (GitHub)
Out-of-band Management via Intel AMT / vPro™
You can now perform Remote Power ON/OFF and Reboot even when the OS is unreachable, which is ideal for maintaining or power-cycling nodes at remote sites.
Scalable Provisioning Enhancements
Support for provisioning Standalone Edge Nodes with lightweight Kubernetes (OXM profile) using PXE, HTTPS, or USB boot adds flexibility and scalability for rolling out nodes in diverse environments.
Desktop Virtualization Image with GPU SR-IOV
A new EMT Desktop virtualization image now supports GPU SR-IOV, enabling fine-grained GPU access: ideal for Edge AI workloads requiring GPU partitioning.
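As an illustration (not an EMF API), a quick way to confirm on a node that a GPU actually exposes SR-IOV virtual functions is to inspect the standard Linux sysfs entries:

```python
# Generic Linux sysfs check: list PCI display-class devices that advertise SR-IOV.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    cls = (dev / "class").read_text().strip()
    if not cls.startswith("0x03"):        # 0x03xxxx = display controller
        continue
    total_vfs = dev / "sriov_totalvfs"
    if not total_vfs.exists():            # device has no SR-IOV capability
        continue
    num_vfs = (dev / "sriov_numvfs").read_text().strip()
    vendor = (dev / "vendor").read_text().strip()
    print(f"{dev.name}: vendor={vendor}, VFs enabled={num_vfs}, VFs max={total_vfs.read_text().strip()}")
```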
Custom Cloud-Init Configuration
Edge node configuration can now be customized at provisioning via cloud-init scripts—enabling static IPs, proxy settings, Kubernetes setup, GPU SR-IOV, or X11 straight from installation.
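For example, a minimal, generic #cloud-config document could inject proxy settings and a post-install command at provisioning time. Which keys EMF's provisioning flow honours should be checked against the documentation; the proxy host and command below are placeholders:

```python
# Minimal sketch: build a generic cloud-init user-data document (not the
# EMF-specific schema) and write it to disk for use during provisioning.
import textwrap

user_data = textwrap.dedent("""\
    #cloud-config
    write_files:
      - path: /etc/environment
        permissions: "0644"
        content: |
          http_proxy=http://proxy.example.local:3128
          https_proxy=http://proxy.example.local:3128
    runcmd:
      - [ sh, -c, "echo provisioning-complete > /var/log/emf-cloud-init.marker" ]
    """)

with open("user-data", "w") as f:
    f.write(user_data)
print(f"wrote cloud-init user-data ({len(user_data)} bytes)")
```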
Optimized Provisioning Engine – HookOS removed
The previous HookOS layer has been replaced with a slimmer, EMT-native provisioning model for improved performance and control.
Serial Number Optionality
Onboarding now supports devices without serial numbers, easing deployment for certain hardware classes.
UI Overhaul for Host Provisioning
Onboarding is simplified with an updated UI, e.g., selecting Kubernetes clusters during registration.
Expanded GPU Support
New support added for Intel Battlemage B580 dGPU (on EMT), NVIDIA P100 (Ubuntu 22.04), and Intel integrated GPUs with SR-IOV—accelerating AI inferencing at the edge.
Security Compliance via CVE Tracking
Ongoing security monitoring that tracks CVEs in installed packages provides a foundational layer for secure AI workloads.
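For context only: EMF's built-in tracking covers this on the platform side. The general idea of checking installed package versions against a vulnerability database can be sketched with the public OSV.dev API, which is unrelated to EMF's internal mechanism; the package name, version, and ecosystem string below are placeholders to adapt to the node's actual distro:

```python
# Illustration of CVE lookup via the public OSV.dev query API.
import json
import urllib.request

def query_osv(name: str, version: str, ecosystem: str) -> list:
    body = json.dumps(
        {"package": {"name": name, "ecosystem": ecosystem}, "version": version}
    ).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=15) as resp:
        return json.load(resp).get("vulns", [])

# Placeholder package/version/ecosystem; adjust to the packages installed on the node.
for vuln in query_osv("openssl", "3.0.2-0ubuntu1", "Ubuntu:22.04:LTS"):
    print(vuln["id"], vuln.get("summary", ""))
```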
Kubernetes Migration: rke2 → k3s
The edge node orchestration engine has transitioned to k3s, reducing the resource footprint and increasing responsiveness in constrained environments. Note: the upgrade requires full node reprovisioning.
Improved Application Onboarding
New CLI Tool: orch-cli
Manage EMF configurations from the terminal—much like kubectl, enabling scriptable and programmatic workflows.
Reference Applications Included
Prebuilt samples like Weld Porosity, Smart Parking, and Loitering Detection are now accessible for rapid evaluation and experimentation.
Trusted Compute Enhancements
Enhanced workload isolation and trusted OS benchmarking bolster Edge AI deployments with stronger security assurances.
Upgrade Notice
Upgrading from EMF 3.0 to 3.1 entails a significant change: the migration from rke2 to k3s. This requires complete node reprovisioning to ensure full compatibility. Direct upgrade paths are supported only from EMF 3.0, using the documented on-prem and cloud guides. (GitHub)
What This Means for Edge AI Infrastructure Developers
Greater Hardware Control
GPU SR-IOV and AMT/vPro support empower GPU-accelerated AI deployments with OS-independent management.
Flexible Provisioning
PXE, cloud-init, and desktop virtualization options simplify scale-out and experimental workflows.
UI & CLI Efficiency
Helm integration, UI downloadability, and orch-cli reduce friction from integration to operations.
Lean Kubernetes with k3s
Reduced resource usage and faster orchestration, well-suited for constrained or remote nodes.
Secure & Compliant
Built-in CVE tracking and trusted compute support help meet enterprise-grade security needs.
We want to hear from you
Join the discussion and help us guide the future of EMF for edge-first AI infrastructure!
References & Guides