How do I Build, Flash, and Validate an Edge OS? #317
Started by biapalmeiro in General
Learn how to go from cloning the repo to flashing your first Edge AI node. By the end of this guide, you’ll have:
Built a USB installer from source (or used a prebuilt ISO)
Flashed and installed Edge Microvisor Toolkit (EMT) on hardware
Validated your system post-install with a quick check
Let's get started.
Step 1: Clone the Toolkit
First, clone the stable version of the toolkit from GitHub:
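The clone command isn't shown above; a minimal sketch (the repository URL is an assumption, so use the URL and stable branch name published on the project's GitHub page):

```shell
# Clone the toolkit; repo path and branch name are assumptions,
# check the project's GitHub page for the authoritative ones.
git clone https://github.com/open-edge-platform/edge-microvisor-toolkit.git
cd edge-microvisor-toolkit
```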
Step 2: Build the ISO Installer (Optional)
If you want to build the ISO from source instead of using a published release:
Install dependencies (Ubuntu example):
Build the toolchain:
Create the ISO image (without real-time extensions):
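The concrete commands for these three sub-steps aren't shown here. A hedged sketch, modeled on typical Azure Linux-derived build systems (the package names, make targets, and flags below are all assumptions; consult the repository's build documentation for the real ones):

```shell
# 1. Install build dependencies (Ubuntu example; package list is an assumption)
sudo apt-get update
sudo apt-get install -y git make golang-go genisoimage qemu-utils

# 2. Build the toolchain (target name and flag are assumptions)
cd toolkit
sudo make toolchain REBUILD_TOOLS=y

# 3. Create the ISO image without real-time extensions (target/flags assumed)
sudo make iso REBUILD_TOOLS=y REBUILD_PACKAGES=n
```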
Step 3: Flash ISO to USB
Now you’ll write the ISO to a USB stick for bare-metal installation.
Insert your USB stick (8GB+ recommended), and identify the device:
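One common way to identify the device (assuming a Linux host with util-linux installed):

```shell
# List block devices; USB sticks usually report TRAN=usb,
# and the SIZE column helps confirm you have the right one.
lsblk -o NAME,SIZE,MODEL,TRAN
```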
Flash the image using `dd`:

```shell
sudo dd if=./build/iso/EdgeMicrovisorToolkit-*.iso of=/dev/sdX bs=4M status=progress oflag=sync
```

Replace `/dev/sdX` with your actual USB device (⚠️ double-check to avoid wiping other drives!). Once complete:
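Before unplugging the stick, it's good practice to make sure all writes have been flushed:

```shell
# Flush filesystem buffers so the image is fully written to the stick
sync
```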
Step 4: Install EMT on Edge Node
Insert the USB into your edge device (Intel-based system recommended).
Boot into the USB via BIOS/UEFI menu.
Follow the installer UI (graphical or terminal):
Choose disk
Optional: skip disk encryption for dev testing
Set a username/password (`root`/`root` works for testing)
Start the installation
Reboot after installation completes. The USB should automatically eject.
Step 5: Post-Install Validation
After booting into your new EMT system:
Log in:
Verify OS Version:
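The version-check command isn't shown above; a common way on systemd-style distributions (assuming EMT ships a standard `/etc/os-release` file):

```shell
# Print distribution name and version fields
cat /etc/os-release
```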
You should see something like:
Check kernel and partitioning:
```shell
uname -r
mount | grep root
```
Verify update system (if A/B enabled):
EMT supports atomic updates via dual-root partitioning. To check:
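The check command isn't shown here. One way to inspect the root mount and partition labels (the `rootA`/`rootB` label names are assumptions taken from the example below; your image may use different labels):

```shell
# Show what is mounted at / and with which options (look for "ro")
findmnt -o SOURCE,TARGET,OPTIONS /

# Show partition labels; an A/B setup typically exposes two root partitions
lsblk -o NAME,LABEL,MOUNTPOINT
```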
Confirm you are running on a read-only root or a `LABEL=rootA`-style setup.
What’s Next
With your base OS installed, you can now:
Deploy container-based AI workloads using Docker/K3s
Install OpenVINO™, PyTorch, or your inference stack
Use EMT's real-time image if your model is latency-sensitive
Explore `toolkit/SPECS` to create custom RPMs for agents/models

You can find the complete step-by-step walkthrough in the Install Edge Microvisor Toolkit section of the Developer Guide.