Pre-built RouterOS Cloud Hosted Router (CHR) packages are available on GitHub Releases. Each release ZIP contains a ready-to-run QEMU configuration — download, extract, run ./qemu.sh, and you have a RouterOS instance in under 30 seconds.
No GUI needed. No disk image wrangling. Every release includes the CHR disk image, a QEMU config file (qemu.cfg), and a launch script (qemu.sh) that handles platform detection automatically.
Tip
These packages originate from mikropkl, which builds UTM virtual machine bundles for macOS. A .utm file is just a ZIP archive — and every bundle ships with qemu.cfg + qemu.sh so you can run CHR directly via QEMU on macOS or Linux without UTM installed.
Install QEMU once, then every CHR package works.
```
brew install qemu
```

Homebrew provides qemu-system-x86_64, qemu-system-aarch64, and all UEFI firmware files. On Intel Macs, QEMU uses Apple's Hypervisor.framework (HVF) for near-native speed. On Apple Silicon, HVF accelerates aarch64 guests; x86_64 guests run under TCG emulation.
x86_64 host:

```
sudo apt-get install qemu-system-x86 qemu-system-arm qemu-efi-aarch64 qemu-utils
```

aarch64 host:

```
sudo apt-get install qemu-system-arm qemu-efi-aarch64 qemu-utils
```

For hardware-accelerated virtualization (recommended on bare-metal Linux):

```
sudo apt-get install qemu-kvm
sudo usermod -aG kvm "$USER"
# Log out and back in to activate
```

Fedora/RHEL:

```
sudo dnf install qemu-system-x86 qemu-system-arm edk2-aarch64 qemu-img
sudo dnf install qemu-kvm   # optional, for KVM
```

Note
qemu.sh auto-detects KVM, HVF, or TCG — no manual accelerator configuration needed.
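The selection logic is easy to sketch. The following is an assumed approximation, not the actual script: prefer KVM when /dev/kvm is usable, HVF when macOS reports hypervisor support, otherwise TCG.

```shell
#!/bin/sh
# Rough sketch of accelerator auto-detection (assumed logic, not the real
# qemu.sh): KVM on Linux, HVF on macOS, TCG as the universal fallback.
detect_accel() {
  if [ "$(uname -s)" = "Linux" ] && [ -w /dev/kvm ]; then
    echo kvm
  elif [ "$(uname -s)" = "Darwin" ] && \
       [ "$(sysctl -n kern.hv_support 2>/dev/null)" = "1" ]; then
    echo hvf
  else
    echo tcg   # pure emulation: works everywhere, just slower
  fi
}
detect_accel
```

The same check is handy in lab scripts that want to warn before a slow TCG boot.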
The CHR Images page is the quickest way — pick a version and architecture and it generates the download commands for your platform. Or use the methods below directly.
```
cd ~/Downloads
curl -fsSL -o chr.x86_64.qemu.7.22.utm.zip \
  https://github.com/tikoci/mikropkl/releases/download/chr-7.22/chr.x86_64.qemu.7.22.utm.zip
unzip chr.x86_64.qemu.7.22.utm.zip
```

Each release typically includes these variants:
| Package | Architecture | Use case |
|---|---|---|
| `chr.x86_64.qemu.<ver>.utm.zip` | x86_64 | Standard CHR — simplest, fastest on Intel/AMD |
| `chr.aarch64.qemu.<ver>.utm.zip` | aarch64 | ARM64 CHR — native on Apple Silicon / ARM servers |
| `rose.chr.x86_64.qemu.<ver>.utm.zip` | x86_64 | CHR + 4×10 GB extra disks for ROSE / disk testing |
| `rose.chr.aarch64.qemu.<ver>.utm.zip` | aarch64 | ARM64 variant with extra disks |
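The naming pattern is regular enough to script against. A hypothetical helper (not part of the release tooling) that composes asset URLs:

```shell
#!/bin/sh
# Compose a release-asset URL from the naming pattern above.
# chr_url <version> <arch> [prefix]   (prefix "rose." selects the ROSE variant)
chr_url() {
  echo "https://github.com/tikoci/mikropkl/releases/download/chr-$1/${3:-}chr.$2.qemu.$1.utm.zip"
}
chr_url 7.22 x86_64          # standard package
chr_url 7.22 aarch64 rose.   # ROSE variant
```

Pipe the output to `curl -fsSL -o` to fetch several versions in one loop.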
```
VERSION=7.22 ARCH=x86_64
curl -fsSL -o chr.utm.zip \
  "https://github.com/tikoci/mikropkl/releases/download/chr-${VERSION}/chr.${ARCH}.qemu.${VERSION}.utm.zip"
unzip chr.utm.zip
```

To build packages locally from source:

```
git clone https://github.com/tikoci/mikropkl.git
cd mikropkl
make CHR_VERSION=7.22
# Output in Machines/chr.x86_64.qemu.7.22.utm/
```

When working from a clone, the Makefile provides targets to manage QEMU machines directly:
```
make qemu-list       # list machines and running state
make qemu-run QEMU_UTM=Machines/chr.x86_64.qemu.7.22.utm    # interactive (foreground)
make qemu-start QEMU_UTM=Machines/chr.x86_64.qemu.7.22.utm  # headless (background)
make qemu-stop QEMU_UTM=Machines/chr.x86_64.qemu.7.22.utm   # stop a background instance
make qemu-status     # debug info for all machines
make qemu-start-all  # start all on ports 9180, 9181, ...
make qemu-stop-all   # stop everything
```

To run interactively in the foreground:

```
cd ~/Downloads/chr.x86_64.qemu.7.22.utm
./qemu.sh
```

```
chr.x86_64.qemu.7.22  accel=hvf  accelerated
WebFig: http://localhost:9180/
Login: admin / no password
Ctrl-A X quit | Ctrl-A C monitor | Ctrl-A H help
Ctrl-C → RouterOS
```
RouterOS serial console appears directly in your terminal. Default login: admin with an empty password (just press Enter).
Exit: press Ctrl-A then X.
Note
Ctrl-C is forwarded to RouterOS (it does not kill QEMU). Use Ctrl-A X to exit.
RouterOS /quit returns to the RouterOS login prompt — it does not exit QEMU.
Tip
RouterOS clears the serial console on boot, which scrolls the banner above out of view. The exit shortcut is also shown in the terminal title bar, and you can press Ctrl-A H at any time to redisplay QEMU's escape key help.
```
cd ~/Downloads/chr.x86_64.qemu.7.22.utm
./qemu.sh --background
```

```
chr.x86_64.qemu.7.22  accel=hvf  accelerated
WebFig: http://localhost:9180/
Login: admin / no password
PID: 54321
Log: /tmp/qemu-chr.x86_64.qemu.7.22.log
Serial: socat - UNIX-CONNECT:/tmp/qemu-chr.x86_64.qemu.7.22-serial.sock
Monitor: socat - UNIX-CONNECT:/tmp/qemu-chr.x86_64.qemu.7.22-monitor.sock
Stop: ./qemu.sh --stop
```
Background mode writes the PID to /tmp/qemu-<name>.pid and provides Unix sockets for serial console and QEMU monitor access. Use ./qemu.sh --stop to terminate the background instance.
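The PID file also makes scripted health checks straightforward. A minimal sketch (the helper is hypothetical; the /tmp/qemu-<name>.pid pattern is the one described above):

```shell
#!/bin/sh
# Liveness check against a qemu.sh-style PID file (hypothetical helper).
is_running() {  # is_running <pidfile>
  [ -f "$1" ] && kill -0 "$(cat "$1")" 2>/dev/null
}
# Demo with this shell's own PID standing in for QEMU's:
pidfile=$(mktemp)
echo $$ > "$pidfile"
is_running "$pidfile" && echo "running"   # → running
```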
Stop:

```
./qemu.sh --stop
```

Print the assembled QEMU command without launching:

```
./qemu.sh --dry-run
```

Useful for debugging or passing a modified command to QEMU manually.
When working from the project source, the Makefile wraps qemu.sh for convenience:
| Command | Equivalent qemu.sh |
|---|---|
| `make qemu-run QEMU_UTM=Machines/<name>.utm` | `cd <name>.utm && ./qemu.sh` (foreground) |
| `make qemu-start QEMU_UTM=Machines/<name>.utm` | `cd <name>.utm && ./qemu.sh --background` |
| `make qemu-stop QEMU_UTM=Machines/<name>.utm` | `cd <name>.utm && ./qemu.sh --stop` |
| `make qemu-start-all` | Start all machines on ports 9180, 9181, ... |
| `make qemu-stop-all` | Stop all running machines |
| `make qemu-list` | List machines with running/stopped state |
| `make qemu-status` | Debug info: PIDs, logs, sockets, CPU/memory |
QEMU_PORT is passed through — e.g. make qemu-start QEMU_UTM=... QEMU_PORT=8080.
Port 80 (HTTP) inside the VM is forwarded to localhost:9180 by default.
Open http://localhost:9180/ in a browser — no authentication required for the initial page.
```
# System identity
curl -u admin: http://localhost:9180/rest/system/identity

# List interfaces
curl -u admin: http://localhost:9180/rest/interface

# RouterOS version
curl -u admin: http://localhost:9180/rest/system/resource
```

The REST API uses HTTP basic auth. The default password is empty, so -u admin: (note the trailing colon) is sufficient.
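If jq is not installed, simple fields can be pulled out with sed. The JSON body below is a trimmed illustration of the /rest/system/resource response shape, not verbatim RouterOS output:

```shell
#!/bin/sh
# Extract "version" from a REST JSON body without jq.
# The sample body is illustrative, not captured RouterOS output.
body='{"uptime":"1m2s","version":"7.22 (stable)","cpu":"QEMU"}'
version=$(printf '%s' "$body" | sed -n 's/.*"version":"\([^"]*\)".*/\1/p')
echo "$version"   # → 7.22 (stable)
```

In practice, substitute `body=$(curl -fsSL -u admin: http://localhost:9180/rest/system/resource)`.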
```
socat - UNIX-CONNECT:/tmp/qemu-chr.x86_64.qemu.7.22-serial.sock
```

Full RouterOS CLI access. Press Ctrl-D to disconnect from socat (the VM keeps running).
RouterOS enables SSH by default on port 22. To expose it, forward an additional port:
```
QEMU_NETDEV="user,id=net0,hostfwd=tcp::9180-:80,hostfwd=tcp::9122-:22" ./qemu.sh
```

Note
QEMU_NETDEV replaces the default user-mode networking entirely. Include all hostfwd= entries you need, including port 80 if you still want HTTP. See Forwarding Additional Ports below.
The --port flag changes which host port maps to RouterOS HTTP (port 80):
```
./qemu.sh --port 8080
# RouterOS at http://localhost:8080/
```

Or via environment variable:

```
QEMU_PORT=8080 ./qemu.sh --background
```

Each instance needs a unique port. Run different versions side-by-side for comparison testing:
```
cd ~/Downloads/chr.x86_64.qemu.7.22.utm
./qemu.sh --background --port 9180

cd ~/Downloads/chr.x86_64.qemu.7.21.utm
./qemu.sh --background --port 9181
```

```
# Compare behavior across versions
curl -u admin: http://localhost:9180/rest/system/resource | jq .version
curl -u admin: http://localhost:9181/rest/system/resource | jq .version
```

Stop each:

```
cd ~/Downloads/chr.x86_64.qemu.7.22.utm && ./qemu.sh --stop
cd ~/Downloads/chr.x86_64.qemu.7.21.utm && ./qemu.sh --stop
```

Tip
This is particularly useful when testing configuration migration between RouterOS versions — export from one, import to the other, and validate via the REST API.
chr.x86_64.qemu.7.22.utm/
├── qemu.sh ← Launch script (run this)
├── qemu.cfg ← QEMU machine definition (edit this for hardware)
├── qemu.env ← Optional: persistent overrides (create this yourself)
├── config.plist ← UTM configuration (ignore unless using UTM)
└── Data/
└── chr-7.22.img ← RouterOS CHR disk image (128 MiB)
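A pre-flight check can catch a bad extraction before launch. This helper is hypothetical, keyed to the two files qemu.sh needs from the layout above:

```shell
#!/bin/sh
# Verify a package directory contains the files needed to launch
# (hypothetical helper based on the layout above).
check_pkg() {  # check_pkg <dir>
  for f in qemu.sh qemu.cfg; do
    [ -f "$1/$f" ] || { echo "missing: $f" >&2; return 1; }
  done
  echo ok
}
# Demo against a throwaway directory:
d="$(mktemp -d)/chr.demo.utm"
mkdir -p "$d/Data"
touch "$d/qemu.sh" "$d/qemu.cfg"
check_pkg "$d"   # → ok
```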
A standard QEMU --readconfig INI file (see QEMU invocation docs). This is where you change VM hardware — memory, CPUs, disks, NIC settings:
```
[machine]
  type = "q35"

[memory]
  size = "1024M"

[smp-opts]
  cpus = "2"

[drive "drive0"]
  file = "./Data/chr-7.22.img"
  format = "raw"
  if = "virtio"

[device "nic0"]
  driver = "virtio-net-pci"
  netdev = "net0"
  mac = "0e:fe:a9:e7:24:09"
```

Editable. Change memory, CPU count, or add drives directly in this file. Paths are relative to the package directory.
Note
About the MAC address: The mac line in qemu.cfg exists because these packages originate from UTM, whose config.plist requires an explicit MAC address. For standalone QEMU use, this line is optional — QEMU auto-generates a unique MAC (from the 52:54:00:xx:xx:xx range) if omitted. You can safely remove or change it. If you run multiple instances on the same bridge network, you should either remove or change the MAC to avoid conflicts, since all packages of the same version share the same generated value.
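If you do clone packages onto a shared bridge, generating fresh MACs is a one-liner. A sketch using QEMU's conventional 52:54:00 prefix (the od/awk pipeline is illustrative, not from the project):

```shell
#!/bin/sh
# Random MAC in QEMU's conventional 52:54:00 unicast prefix.
rand_mac() {
  od -An -N3 -tx1 /dev/urandom | awk '{printf "52:54:00:%s:%s:%s\n", $1, $2, $3}'
}
rand_mac   # e.g. 52:54:00:3f:a1:0c
```

Drop the result into each clone's qemu.cfg `mac =` line.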
A POSIX shell script that wraps qemu.cfg with platform detection (KVM/HVF/TCG), networking, UEFI firmware (aarch64), and serial/display setup. Things that QEMU's --readconfig format cannot express live here.
In most cases, edit qemu.cfg for hardware changes and use qemu.sh flags, environment variables, or a qemu.env file for runtime behavior.
Create a qemu.env file alongside qemu.sh to set persistent environment variable overrides without modifying any generated file. qemu.sh sources it automatically if present:
```
# qemu.env — example overrides
QEMU_PORT=9280
QEMU_ACCEL=tcg
QEMU_EXTRA="-m 2048"
```

Command-line flags (--port, --shared, etc.) take precedence over values in qemu.env. The file is plain shell — any valid VAR=value assignment works. This is the recommended way to persist per-machine settings like a custom port or alternate networking.
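Since qemu.env is plain shell, generating one per machine is trivial. A sketch that assigns sequential ports to a set of clones (directory names are illustrative; QEMU_PORT is the variable documented above):

```shell
#!/bin/sh
# Stamp out per-machine qemu.env files with unique ports so several
# clones can run --background without colliding. Paths are illustrative.
base=$(mktemp -d)   # stand-in for ~/Downloads in this demo
port=9280
for name in chr.x86_64.qemu.7.22.utm chr.x86_64.qemu.7.21.utm; do
  mkdir -p "$base/$name"
  printf 'QEMU_PORT=%s\n' "$port" > "$base/$name/qemu.env"
  port=$((port + 1))
done
cat "$base/chr.x86_64.qemu.7.21.utm/qemu.env"   # → QEMU_PORT=9281
```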
To increase memory, edit qemu.cfg:

```
[memory]
  size = "2048M"
```

Or override at launch without editing:

```
QEMU_EXTRA="-m 2048" ./qemu.sh
```

To add CPUs, edit qemu.cfg:

```
[smp-opts]
  cpus = "4"
```

To add an extra disk, append to qemu.cfg (x86_64 example):

```
[drive "drive1"]
  file = "./Data/extra.qcow2"
  format = "qcow2"
  if = "virtio"
```

Create the disk image first:

```
qemu-img create -f qcow2 ./Data/extra.qcow2 10G
```

RouterOS will see it as an additional drive — format it from the CLI with /disk format-drive.
Note
The ROSE variants (rose.chr.*.qemu) ship with 4×10 GB qcow2 disks pre-configured in qemu.cfg — useful for testing RouterOS disk features without manual setup.
The default configuration uses QEMU user-mode (SLIRP) networking with port forwarding. This works without root privileges and is sufficient for management access. Several alternatives exist for more advanced scenarios — see the QEMU networking docs for full details on each backend.
qemu.sh passes -netdev user,id=net0,hostfwd=tcp::<port>-:80 on the command line. The qemu.cfg defines the NIC hardware:
```
[device "nic0"]
  driver = "virtio-net-pci"
  netdev = "net0"
  mac = "0e:fe:a9:e7:24:09"
```

The netdev in qemu.cfg references net0, which qemu.sh creates. This separation exists because QEMU's --readconfig format does not support the hostfwd= option needed for port forwarding.
To expose SSH (22), WinBox (8291), and API (8728) alongside HTTP:
```
QEMU_NETDEV="user,id=net0,hostfwd=tcp::9180-:80,hostfwd=tcp::9122-:22,hostfwd=tcp::9291-:8291,hostfwd=tcp::9728-:8728" \
  ./qemu.sh
```

QEMU_NETDEV replaces the default netdev completely, so include the hostfwd for port 80 if you still want HTTP access. All services are then reachable on localhost at their mapped ports.
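Long hostfwd lists are easy to mistype. A small helper (hypothetical, not part of qemu.sh) can assemble the string from host:guest pairs:

```shell
#!/bin/sh
# Assemble a user-mode QEMU_NETDEV value from "host:guest" port pairs.
build_netdev() {
  nd="user,id=net0"
  for pair in "$@"; do
    nd="$nd,hostfwd=tcp::${pair%:*}-:${pair#*:}"
  done
  echo "$nd"
}
build_netdev 9180:80 9122:22
# → user,id=net0,hostfwd=tcp::9180-:80,hostfwd=tcp::9122-:22
```

Then launch with `QEMU_NETDEV="$(build_netdev 9180:80 9122:22 9291:8291)" ./qemu.sh`.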
Tip
RouterOS uses many well-known ports. Common ones to forward:
| Service | Guest port | Example host port |
|---|---|---|
| HTTP/WebFig | 80 | 9180 |
| SSH | 22 | 9122 |
| WinBox | 8291 | 9291 |
| API | 8728 | 9728 |
| API-SSL | 8729 | 9729 |
macOS does not have kernel tap/tun support. Instead, QEMU 8+ supports Apple's vmnet.framework, which provides shared (NAT) and bridged networking modes. qemu.sh has built-in flags for both:
Shared networking (NAT):

```
sudo ./qemu.sh --shared
```

Uses vmnet-shared — RouterOS gets an IP on a private NAT network (typically 192.168.64.x/24). The VM can reach the internet through macOS's NAT. Requires sudo because vmnet.framework needs root.
Bridged networking:

```
sudo ./qemu.sh --bridge en0
```

Uses vmnet-bridged on the specified interface plus vmnet-shared as a second NIC. RouterOS sees two interfaces: ether1 bridged to your physical LAN (gets a real LAN IP via DHCP or static config), and ether2 on the private vmnet NAT. This gives full LAN access while keeping a management path. Requires sudo.
Note
When qemu.sh runs as root on macOS without explicit networking flags, it defaults to vmnet-shared automatically (more useful than SLIRP under sudo).
Port forwarding is not used with vmnet. Access RouterOS by its vmnet IP address directly. Find it via the serial console (/ip address print) or check your DHCP leases.
Both modes work in foreground and background:
```
sudo ./qemu.sh --shared --background
sudo ./qemu.sh --bridge en0 --background --port 9180
# (--port is ignored with vmnet, but harmless)
```

Tip
If you only need management access (WebFig, SSH, REST API), the default user-mode networking with --port is simpler and does not require sudo. Use vmnet when you need the CHR to participate on a real network — for example, testing DHCP server, OSPF, or BGP.
Note
Bridging over Wi-Fi (macOS or Linux) introduces variable latency — the vmnet/tap bridge inserts the VM behind the wireless medium, so every packet pays the full 802.11 round-trip. This can add 5–30 ms of jitter, visible in speed tests, latency-sensitive routing protocols (OSPF hello timers, BGP hold timers), and queue testing. Use Ethernet when you need clean baseline numbers. That said, Wi-Fi jitter is great material for experimenting with fq_codel, CAKE, or other AQM techniques — set your queue target above the typical Wi-Fi RTT and watch the latency curve smooth out.
Bridge networking connects the VM directly to a host network — the CHR gets its own IP address on the LAN (or from DHCP). This is essential for testing DHCP server, routing protocols, or any scenario where port forwarding is insufficient.
Note
The commands below illustrate the general approach for Linux bridge + tap networking. Adapt interface names, IP addresses, and routes to your environment. See the QEMU networking docs for full tap/bridge details.
Setup:

```
# Create a bridge and tap interface (requires root)
sudo ip link add br0 type bridge
sudo ip link set br0 up
sudo ip tuntap add tap0 mode tap user "$USER"
sudo ip link set tap0 master br0
sudo ip link set tap0 up

# Attach your physical interface (e.g. eth0) to the bridge
sudo ip link set eth0 master br0

# Move IP from eth0 to br0 (adjust for your network)
sudo ip addr del 192.168.88.100/24 dev eth0
sudo ip addr add 192.168.88.100/24 dev br0
sudo ip route add default via 192.168.88.1 dev br0
```

Launch with the tap interface:

```
QEMU_NETDEV="tap,id=net0,ifname=tap0,script=no,downscript=no" ./qemu.sh
```

This replaces the user-mode netdev. RouterOS will bridge onto your physical network — configure its IP via CLI or DHCP as you would a real router.
Important
Bridge networking replaces user-mode networking entirely. Port forwarding (--port) has no effect in bridge mode — access RouterOS by its bridge IP address.
If you want multiple VMs to talk to each other and reach the internet, but don't want to bridge to a physical interface, create an isolated bridge with NAT.
Note
This is a conceptual example showing the general pattern. Interface names, subnets, and iptables rules will vary by distribution and existing network configuration.
```
# Create an isolated bridge
sudo ip link add br-chr type bridge
sudo ip addr add 10.99.0.1/24 dev br-chr
sudo ip link set br-chr up

# Create tap interfaces for each VM
sudo ip tuntap add tap0 mode tap user "$USER"
sudo ip link set tap0 master br-chr
sudo ip link set tap0 up
sudo ip tuntap add tap1 mode tap user "$USER"
sudo ip link set tap1 master br-chr
sudo ip link set tap1 up

# Enable NAT (outbound internet for the VMs)
sudo iptables -t nat -A POSTROUTING -s 10.99.0.0/24 ! -d 10.99.0.0/24 -j MASQUERADE
sudo sysctl -w net.ipv4.ip_forward=1
```

Launch two CHR instances on the same bridge:

```
cd ~/Downloads/chr.x86_64.qemu.7.22.utm
QEMU_NETDEV="tap,id=net0,ifname=tap0,script=no,downscript=no" ./qemu.sh --background

cd ~/Downloads/chr.x86_64.qemu.7.21.utm
QEMU_NETDEV="tap,id=net0,ifname=tap1,script=no,downscript=no" ./qemu.sh --background
```

Assign static IPs in each RouterOS instance (e.g. 10.99.0.2/24 and 10.99.0.3/24) or run a DHCP server on one of them. The two CHRs can reach each other and the internet via the host's NAT.
QEMU socket networking lets two VMs communicate over a shared Unix socket — no root, no bridge, no tap:
```
# VM 1 (server side)
QEMU_NETDEV="socket,id=net0,listen=:9500" ./qemu.sh --background --port 9180

# VM 2 (client side) — in a different CHR package directory
QEMU_NETDEV="socket,id=net0,connect=:9500" ./qemu.sh --background --port 9181
```

The two VMs share a virtual Ethernet segment. Assign IPs manually in RouterOS and they can ping each other. No host network involvement.
Note
Socket networking supports exactly two peers per socket. For more than two VMs, combine with a QEMU multi-point socket (mcast) or use bridge/tap.
To give RouterOS a WAN + LAN topology, add a second NIC in qemu.cfg:
```
[device "nic1"]
  driver = "virtio-net-pci"
  netdev = "net1"
```

Then provide the second netdev at launch:

```
QEMU_EXTRA="-netdev tap,id=net1,ifname=tap1,script=no,downscript=no" ./qemu.sh
```

Note
Use QEMU_NETDEV to replace the first NIC's netdev (net0). Use QEMU_EXTRA to add a second netdev (net1) — the two work together.
RouterOS will see ether1 (net0, management) and ether2 (net1, your tap). Configure routing, NAT, or bridging in RouterOS as you would on hardware.
Override any default without editing files. For persistent per-machine overrides, put these in a qemu.env file alongside qemu.sh (see What's Inside the Package).
| Variable | Purpose | Example |
|---|---|---|
| `QEMU_PORT` | Host port for HTTP forwarding | `QEMU_PORT=8080 ./qemu.sh` |
| `QEMU_ACCEL` | Force accelerator | `QEMU_ACCEL=tcg ./qemu.sh` |
| `QEMU_BIN` | Path to QEMU binary | `QEMU_BIN=/opt/qemu/bin/qemu-system-x86_64 ./qemu.sh` |
| `QEMU_EXTRA` | Append additional QEMU flags | `QEMU_EXTRA="-m 2048" ./qemu.sh` |
| `QEMU_NETDEV` | Replace default netdev | `QEMU_NETDEV="tap,id=net0,ifname=tap0,script=no,downscript=no" ./qemu.sh` |
| `QEMU_EFI_CODE` | UEFI code ROM path (aarch64) | `QEMU_EFI_CODE=/path/to/AAVMF_CODE.fd ./qemu.sh` |
| `QEMU_EFI_VARS` | UEFI vars template (aarch64) | `QEMU_EFI_VARS=/path/to/AAVMF_VARS.fd ./qemu.sh` |
In background mode, the QEMU Human Monitor Protocol (HMP) is exposed on a Unix socket for diagnostics:
```
# Interactive session
socat - UNIX-CONNECT:/tmp/qemu-chr.x86_64.qemu.7.22-monitor.sock

# One-shot queries
echo "info block" | socat - UNIX-CONNECT:/tmp/qemu-chr.x86_64.qemu.7.22-monitor.sock
echo "info network" | socat - UNIX-CONNECT:/tmp/qemu-chr.x86_64.qemu.7.22-monitor.sock
echo "info cpus" | socat - UNIX-CONNECT:/tmp/qemu-chr.x86_64.qemu.7.22-monitor.sock
```

Useful commands: info block (disk state), info network (netdev/NIC mapping), info snapshots, system_powerdown (graceful shutdown).
In foreground mode, press Ctrl-A then C to toggle between the serial console and the QEMU monitor. Ctrl-C is forwarded to RouterOS — use Ctrl-A X to quit QEMU. Press Ctrl-A H to see all available escape sequences.
x86_64 packages:

- Machine type: `q35`
- Firmware: SeaBIOS (built into QEMU — no external files needed)
- Disk: `if=virtio` in `qemu.cfg` (resolves to `virtio-blk-pci` on `q35`)
- Boot time: ~10s with KVM/HVF, ~30–60s with TCG
aarch64 packages:

- Machine type: `virt`
- Firmware: EDK2 UEFI — `qemu.sh` searches standard paths automatically (`/opt/homebrew/share/qemu/`, `/usr/local/share/qemu/`, `/usr/share/AAVMF/`)
- Disk: Explicit `virtio-blk-pci` device in `qemu.cfg` (the `if=virtio` shorthand maps to VirtIO-MMIO on `virt`, which RouterOS lacks a driver for — see QEMU VirtIO docs)
- Boot time: ~10–20s with KVM/HVF, ~20–30s with TCG
- Cross-arch: aarch64 CHR boots on x86_64 hosts via TCG in ~20s — including macOS Intel
Note
The reverse direction (x86_64 CHR on an aarch64 host) is not viable. x86 firmware probes legacy I/O ports that have no ARM equivalent, making TCG emulation prohibitively slow.
The CHR disk image is a 128 MiB raw disk — small enough to keep multiple versions around.
RouterOS writes its configuration to the disk image. Every run accumulates state. To start fresh:
```
# Re-extract the original image from the ZIP
cd ~/Downloads
unzip -o chr.x86_64.qemu.7.22.utm.zip chr.x86_64.qemu.7.22.utm/Data/chr-7.22.img
```

Convert the raw image to qcow2 for snapshot support:

```
cd ~/Downloads/chr.x86_64.qemu.7.22.utm
qemu-img convert -f raw -O qcow2 ./Data/chr-7.22.img ./Data/chr-7.22.qcow2
```

Update qemu.cfg:
```
[drive "drive0"]
  file = "./Data/chr-7.22.qcow2"
  format = "qcow2"
  if = "virtio"
```

Now you can snapshot from the QEMU monitor:
```
savevm baseline
# ... make changes ...
loadvm baseline
```
Need five CHRs running the same version? Share one base image:
```
cd ~/Downloads
# One base (read-only after this)
BASE=chr.x86_64.qemu.7.22.utm/Data/chr-7.22.img

for i in 1 2 3 4 5; do
  qemu-img create -f qcow2 -b "$(pwd)/$BASE" -F raw "router${i}.qcow2"
done
```

Each router*.qcow2 is a thin clone (~200 KB initially) storing only its own changes. Point each instance's qemu.cfg at its own overlay file.
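Each clone then needs its qemu.cfg pointed at its overlay. A sed sketch (the [drive] stanza mirrors the one shown earlier; file names are illustrative):

```shell
#!/bin/sh
# Derive per-clone qemu.cfg files pointing at overlay images.
# The stanza mirrors the [drive "drive0"] example earlier on this page.
d=$(mktemp -d)
cat > "$d/qemu.cfg" <<'EOF'
[drive "drive0"]
  file = "./Data/chr-7.22.img"
  format = "raw"
  if = "virtio"
EOF
for i in 1 2 3 4 5; do
  sed -e 's|"./Data/chr-7.22.img"|"./router'"$i"'.qcow2"|' \
      -e 's|"raw"|"qcow2"|' "$d/qemu.cfg" > "$d/qemu-router$i.cfg"
done
grep 'file' "$d/qemu-router3.cfg"   # prints the rewritten file path
```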
| Scenario | Boot time | Notes |
|---|---|---|
| x86_64 on Intel/AMD host (KVM) | ~10s | Bare-metal Linux, fastest |
| x86_64 on macOS Intel (HVF) | ~10s | Near-native via Hypervisor.framework |
| aarch64 on ARM host (KVM) | ~10–15s | ARM servers, Raspberry Pi 5, etc. |
| aarch64 on macOS Apple Silicon (HVF) | ~10s | M1/M2/M3 native |
| aarch64 on x86_64 host (TCG) | ~20s | Cross-arch — fast because ARM uses MMIO |
| x86_64 on macOS Intel (TCG) | ~30–60s | Same-arch emulation, no HVF |
qemu.sh selects the fastest available accelerator automatically. Force a specific one with QEMU_ACCEL=tcg or QEMU_ACCEL=kvm if needed.
```
ERROR: qemu-system-x86_64 not found. Install QEMU or set QEMU_BIN.
```
Install QEMU for your platform (see Platform Setup).
If qemu.sh reports accel=tcg on a Linux host, KVM may not be enabled:
```
ls -la /dev/kvm          # should exist and be writable
sudo modprobe kvm-intel  # or kvm-amd
sudo usermod -aG kvm "$USER"
# Log out and back in
```

```
bind: Address already in use
```
Another instance (or process) is using port 9180:
```
./qemu.sh --port 9181
# or find and kill the conflicting process:
lsof -i :9180
```

Check the log:

```
cat /tmp/qemu-chr.x86_64.qemu.7.22.log
```

Common causes: missing UEFI firmware (aarch64), disk image not found (wrong working directory), or port already in use.
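The common causes can also be scanned for mechanically. A rough triage helper — both the keyword patterns and the sample log line are placeholders, not real QEMU output:

```shell
#!/bin/sh
# Rough log triage for a background instance. Patterns are illustrative
# guesses at failure keywords, and the sample line is a placeholder.
triage() {  # triage <logfile>
  grep -E -i 'in use|not found|no such|could not|error' "$1" \
    || echo "no obvious errors in $1"
}
log=$(mktemp)
echo "placeholder example: error, port already in use" > "$log"
triage "$log"
```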
qemu.sh searches standard paths automatically. Force a specific path:
```
QEMU_EFI_CODE=/usr/share/AAVMF/AAVMF_CODE.fd \
QEMU_EFI_VARS=/usr/share/AAVMF/AAVMF_VARS.fd \
./qemu.sh --dry-run
```

RouterOS takes a few seconds after boot to start HTTP. On TCG, wait 30–60 seconds. Verify the process is running and the port is listening:
```
ps aux | grep qemu
lsof -i :9180          # macOS
ss -tlnp | grep 9180   # Linux
```

- Execution blocked in UTM sandbox directory: When a `.utm` bundle is imported via "Open in UTM", macOS applies quarantine attributes (`com.apple.quarantine` with the `0x0004` hard-quarantine flag) to executable files in UTM's sandboxed container (`~/Library/Containers/com.utmapp.UTM/Data/Documents/`). This blocks `./qemu.sh` with `zsh: operation not permitted` — even `sudo` cannot override it. Workaround: run via the interpreter (`sh ./qemu.sh`) or remove the quarantine attribute (`xattr -d com.apple.quarantine qemu.sh`). Alternatively, download the ZIP directly and extract to `~/Downloads/` or any non-sandboxed location. This does not affect UTM itself — UTM runs VMs normally regardless. The likely cause: when UTM imports a QEMU-type machine with a "raw" disk image, it converts the image to `.qcow2` format; that creates a new file, and macOS applies quarantine rules to new files, including the converted `.qcow2` image. `*.apple.*` images are unaffected because they are never converted — Virtualization.framework requires "raw", so the image remains `.img` after import. This means `./qemu.sh` works for `*.apple.*` packages even inside UTM's "sandbox", while QEMU-type packages need the workaround.
- `/system/check-installation` fails on aarch64: The RouterOS `check-installation` command returns an error on all aarch64 QEMU machines. This is a known CHR limitation (ARM checker binary behavior) — RouterOS itself works fine.
- DHCP server with user-mode networking: QEMU's SLIRP backend does not pass broadcast traffic, so running a DHCP server in the CHR for external clients requires bridge or tap networking.
- Disk size: The CHR image is 128 MiB. RouterOS manages its own partition layout — there is no need to resize it for typical use.
- x86_64 on ARM64 hosts: Not viable under TCG emulation (see Architecture Notes).