A chronological record of every action taken, command run, and decision made during the development of K-PHD. Format:
[Date] — Step — Why
Command:
sudo pacman -S base-devel linux-headers git
Why:
- base-devel: Provides gcc, make, and other essential build tools required to compile C code and kernel modules.
- linux-headers: Provides the kernel header files for the currently running kernel version (6.18.7.arch1-1). The LKM Makefile references /lib/modules/$(uname -r)/build, which points here. Without these headers, the kernel module cannot be compiled.
- git: Version control for managing the project source code.
Result: ✅ Installed successfully. linux-headers-6.18.7.arch1-1 and pahole-1:1.31-2 were new installs; base-devel and git were already present and reinstalled.
Command:
sudo pacman -S qemu-full libvirt virt-manager dnsmasq bridge-utils
Why:
Per the project's safety requirement (Section 5 of Readme.md), kernel modules must never be insmod-ed directly on the host machine. A kernel panic inside a module crashes the entire system. The workflow mandates:
- qemu-full: The hypervisor (full QEMU with all device emulators) that runs our sandboxed VM.
- libvirt: A management layer over QEMU that provides a stable API for creating and managing VMs.
- virt-manager: A GUI front-end for libvirt (easier VM management).
- dnsmasq: Provides DHCP/DNS for the virtual network so the VM can get a network address.
- bridge-utils: Intended to provide network bridging utilities (brctl).
Result: ❌ Failed. bridge-utils is not available in Arch Linux repositories (it was dropped; iproute2 handles bridging natively on Arch). Because pacman aborts on any missing target, none of the packages were installed.
Command:
sudo pacman -S qemu-full libvirt virt-manager dnsmasq
Why: Removed bridge-utils from the command. On Arch Linux, network bridge management is handled by iproute2 (the ip command), which is already installed as a core system package. bridge-utils / brctl is a legacy tool not needed here.
Expected Result: Installs the full VM stack.
Command:
sudo systemctl enable --now libvirtd.socket
sudo systemctl enable --now virtnetworkd.socket
Why:
- Modern Arch Linux libvirt (v9+) uses socket-activated daemons instead of a monolithic libvirtd.service. Services start on demand when a connection arrives.
- libvirtd.socket: The main libvirt control socket that virsh and virt-manager connect to.
- virtnetworkd.socket: Manages virtual networks (e.g., the default NAT network that gives the VM internet access).
- Using .socket units instead of .service units is the correct modern approach on Arch.
Result: ✅ Success. Systemd created the necessary symlinks in /etc/systemd/system/sockets.target.wants/.
Command:
sudo usermod -aG libvirt $(whoami)
Why:
By default, only root can communicate with the libvirt daemon socket. Adding the user to the libvirt group grants permission to manage VMs without sudo for every virsh / virt-manager command.
Result: ✅ Success. User added to group.
Command:
virsh list --all
Why:
To confirm that the socket activation worked and that the virsh client can successfully talk to the libvirt daemon without sudo (after applying the group change).
Result: ✅ Success. Returned an empty list of VMs (Id Name State), meaning the daemon is reachable and functioning correctly.
Command:
mkdir -p K-PHD/{kernel,daemon}
Why:
Per Phase 1 objectives, we need a clean workspace to separate the Kernel Module C code (kernel/) from the User-space floating-point predictive logic stack (daemon/).
Result: ✅ Success. Folders kernel and daemon created.
Command:
Created kernel/kphd.c and kernel/Makefile.
Then ran make in kernel/:
make -C /lib/modules/6.18.7.arch1-1/build M=/home/atharva/Desktop/K-PHD/kernel modules
Why:
- kphd.c: Contains the absolute minimum code to register a loadable object in the kernel (module_init() and module_exit()) plus the macros declaring license and author.
- Makefile: Configured as a Kbuild script that pulls in the Arch Linux kernel headers installed in Session 1 and compiles kphd.c into a .ko object.
Result: ✅ Success. The make command succeeded and produced the kphd.ko module.
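For reference, the "absolute minimum" scaffold described above has a well-known shape. The following is a sketch of what such a kphd.c could look like, not the project's actual source (author string elided); it pairs with a Kbuild Makefile whose core is just `obj-m := kphd.o` plus the `make -C /lib/modules/$(uname -r)/build M=$(PWD) modules` invocation shown above.

```c
// kphd.c — minimal loadable kernel module scaffold (illustrative sketch).
// Builds against the running kernel's headers via the Kbuild Makefile.
#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>

static int __init kphd_init(void)
{
    pr_info("K-PHD: Module loaded successfully.\n");
    return 0; /* non-zero here would abort insmod */
}

static void __exit kphd_exit(void)
{
    pr_info("K-PHD: Module unloaded successfully.\n");
}

module_init(kphd_init);
module_exit(kphd_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("K-PHD scheduling latency monitor");
```

MODULE_LICENSE("GPL") matters beyond legalities: many kernel symbols are exported GPL-only, so a missing or proprietary license string would block later phases.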
Command:
Created initramfs/init script with busybox and created run_sandbox.sh.
sudo pacman -S --noconfirm busybox cpio
mkdir -p initramfs/bin
cp /usr/bin/busybox initramfs/bin/
cp kernel/kphd.ko initramfs/
# (Created init script to mount /proc /sys /dev)
cd initramfs && find . | cpio -o -H newc | gzip > ../rootfs.cpio.gz
Why:
The user requested a faster, easier alternative to downloading an ISO and installing Arch Linux via virt-manager.
We constructed an initramfs (Initial RAM Filesystem) that packs our compiled kphd.ko and a minimal busybox shell into a 2MB archive (rootfs.cpio.gz). QEMU can load this archive directly into RAM and boot using the host's existing /boot/vmlinuz-linux kernel in under 3 seconds.
This provides the exact same safety guarantees (ring-0 isolation) as a full VM without the overhead.
Result: ✅ Success. The rootfs.cpio.gz archive and run_sandbox.sh launcher were generated.
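The /init script's contents are elided above, but a busybox initramfs init conventionally needs only a few lines. A hedged sketch of what such a script might look like (mount targets per the comment above; the applet-install step is an assumption):

```sh
#!/bin/busybox sh
# Minimal initramfs /init (sketch, not the project file):
# mount the pseudo-filesystems, then drop to an interactive shell.
/bin/busybox mount -t proc     none /proc
/bin/busybox mount -t sysfs    none /sys
/bin/busybox mount -t devtmpfs none /dev
# Create applet symlinks (ls, mount, insmod, ...) so bare names work.
/bin/busybox --install -s /bin
exec /bin/sh
```

The `exec /bin/sh` at the end is important: PID 1 must never exit, or the kernel panics; exec-ing the shell keeps it as PID 1.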
Problem:
The system uses the Limine bootloader with Unified Kernel Images (UKI). The kernel is bundled inside /boot/EFI/Linux/omarchy_linux.efi rather than the standard /boot/vmlinuz-linux. QEMU's -kernel flag cannot boot a UKI directly.
Solution: Extracted the raw kernel image from the UKI using:
sudo objcopy -O binary -j .linux /boot/EFI/Linux/omarchy_linux.efi vmlinuz-extracted
Why:
A UKI is a PE executable that bundles the kernel, initramfs, and boot parameters into a single .efi file. objcopy -j .linux extracts just the raw kernel binary (the .linux section), which QEMU can then use with its -kernel flag.
Result: ✅ QEMU successfully booted kernel 6.18.7-arch1-1.
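The run_sandbox.sh launcher's contents are not shown in the log; it presumably reduces to a single QEMU invocation along these lines (flag set assumed, not taken from the project file):

```sh
#!/bin/sh
# Sketch of a run_sandbox.sh launcher: boot the extracted kernel with
# the busybox initramfs, all I/O on the serial console.
exec qemu-system-x86_64 \
    -kernel vmlinuz-extracted \
    -initrd rootfs.cpio.gz \
    -append "console=ttyS0 rdinit=/init" \
    -m 512M \
    -nographic
```

`console=ttyS0` routes kernel output to the serial port that `-nographic` attaches to the terminal, and `rdinit=/init` points the kernel at the initramfs init script.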
Command (inside QEMU sandbox):
insmod kphd.ko
dmesg | tail -n 2
rmmod kphd
dmesg | tail -n 2
Why: To prove the LKM scaffolding is correct and that the module can safely load into and unload from a running Linux kernel without crashing.
Result: ✅ Phase 1 Complete.
[ 91.059020] kphd: module verification failed: signature and/or required key missing
[ 91.061876] K-PHD: Module loaded successfully.
[ 106.891348] K-PHD: Module unloaded successfully.
The "module verification failed" warning is expected — it indicates the module isn't cryptographically signed, which is normal for development. The module loaded and unloaded without any kernel panic.
Changes to kphd.c:
- Added probe_sched_wakeup() — records ktime_get_ns() when a task enters the runqueue.
- Added probe_sched_switch() — computes latency = T_switch - T_wakeup for the incoming task and logs any latency > 1 ms.
- Used for_each_kernel_tracepoint() + tracepoint_probe_register() for runtime tracepoint discovery (kernel 6.18 does not export tracepoint symbols to out-of-tree modules).
Why for_each_kernel_tracepoint() instead of register_trace_sched_*:
The convenience macros (register_trace_sched_wakeup, etc.) reference linker symbols (__tracepoint_sched_wakeup) that modern kernels do not export to LKMs. The runtime discovery approach iterates all kernel tracepoints by name and stores pointers, then registers probes manually.
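The runtime-discovery pattern described above can be sketched as follows. This is kernel-side C for illustration only (the probe prototypes match the current sched tracepoints, but treat the details as assumptions rather than the project's exact code):

```c
#include <linux/tracepoint.h>
#include <linux/sched.h>
#include <linux/string.h>

static struct tracepoint *tp_wakeup, *tp_switch;

/* Probe signatures must match the tracepoint prototypes exactly. */
static void probe_sched_wakeup(void *data, struct task_struct *p)
{
    /* record ktime_get_ns() against p->pid here */
}

static void probe_sched_switch(void *data, bool preempt,
                               struct task_struct *prev,
                               struct task_struct *next,
                               unsigned int prev_state)
{
    /* latency = ktime_get_ns() - wakeup timestamp of next->pid */
}

/* Callback for for_each_kernel_tracepoint(): match tracepoints by name. */
static void find_tp(struct tracepoint *tp, void *priv)
{
    if (!strcmp(tp->name, "sched_wakeup"))
        tp_wakeup = tp;
    else if (!strcmp(tp->name, "sched_switch"))
        tp_switch = tp;
}

static int kphd_register_hooks(void)
{
    for_each_kernel_tracepoint(find_tp, NULL);
    if (!tp_wakeup || !tp_switch)
        return -ENODEV;
    tracepoint_probe_register(tp_wakeup, probe_sched_wakeup, NULL);
    tracepoint_probe_register(tp_switch, probe_sched_switch, NULL);
    return 0;
}
```

The extra `void *data` first argument on each probe is the per-probe cookie passed as the last argument of tracepoint_probe_register().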
Compilation issues resolved:
- sched_switch signature mismatch — newer kernels pass an extra unsigned int prev_state parameter to the probe.
- __tracepoint_sched_* undefined symbols — switched to runtime tracepoint lookup.
Result: ✅ Phase 2 Complete.
K-PHD: Initializing Phase 2 — Scheduler Hooks
K-PHD: Scheduler hooks registered successfully.
No latency warnings appeared because the sandbox has minimal scheduling contention (expected).
Changes to kphd.c:
- Replaced the fixed wakeup_table[1024] array with DEFINE_HASHTABLE(kphd_htable, 10) (1024 buckets, O(1) lookup).
- Each new PID entry is dynamically allocated via kmalloc(GFP_ATOMIC) (cannot sleep inside scheduler hooks).
- All hash table access is protected by spin_lock_irqsave(&kphd_lock, flags) for multi-core safety.
- Added max_latency tracking per process.
- Module exit frees all entries via hash_for_each_safe() + kfree().
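The lookup-or-allocate path described above follows a standard kernel pattern; a sketch with the names from the log (struct fields and helper name are assumptions):

```c
#include <linux/hashtable.h>
#include <linux/spinlock.h>
#include <linux/slab.h>

DEFINE_HASHTABLE(kphd_htable, 10);      /* 2^10 = 1024 buckets */
static DEFINE_SPINLOCK(kphd_lock);

struct kphd_entry {
    pid_t pid;
    u64 wakeup_ts;
    u64 max_latency;
    struct hlist_node node;
};

/* Called from a scheduler hook: atomic context, so GFP_ATOMIC and an
 * IRQ-safe spinlock are mandatory (no sleeping allocations or mutexes). */
static struct kphd_entry *kphd_get_entry(pid_t pid)
{
    struct kphd_entry *e;
    unsigned long flags;

    spin_lock_irqsave(&kphd_lock, flags);
    hash_for_each_possible(kphd_htable, e, node, pid)
        if (e->pid == pid)
            goto out;       /* found: e points at the existing entry */
    /* not found: e is NULL here; allocate and insert */
    e = kmalloc(sizeof(*e), GFP_ATOMIC);
    if (e) {
        e->pid = pid;
        e->max_latency = 0;
        hash_add(kphd_htable, &e->node, pid);
    }
out:
    spin_unlock_irqrestore(&kphd_lock, flags);
    return e;
}
```

hash_for_each_possible() only walks the one bucket the PID hashes to, which is why lookups stay near O(1) even with many tracked processes.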
Result: ✅ Phase 3 Complete.
K-PHD: Initializing Phase 3 — Hash Table + Spinlocks
K-PHD: Scheduler hooks registered (hash table ready).
K-PHD: Module unloaded, all resources freed.
Changes to kphd.c:
- Added <linux/proc_fs.h> and <linux/seq_file.h>.
- Created /proc/kphd_stats via proc_create("kphd_stats", 0444, NULL, &kphd_proc_ops).
- Implemented kphd_stats_show() using seq_printf() to safely iterate the hash table and format a latency report table.
- Enriched kphd_entry with last_latency, total_latency, and sample_count for average latency calculation.
- proc_remove() called in kphd_exit() for clean teardown.
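The /proc plumbing above follows the standard seq_file single_open pattern (sketch only; assumes the kernel ≥5.6 proc_ops interface, which matches this kernel):

```c
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

static int kphd_stats_show(struct seq_file *m, void *v)
{
    seq_printf(m, "%-8s %-12s %-12s %-12s %s\n",
               "PID", "LAST(ns)", "MAX(ns)", "AVG(ns)", "SAMPLES");
    /* iterate the hash table under the spinlock, one seq_printf per row;
     * AVG(ns) = total_latency / sample_count */
    return 0;
}

static int kphd_stats_open(struct inode *inode, struct file *file)
{
    return single_open(file, kphd_stats_show, NULL);
}

static const struct proc_ops kphd_proc_ops = {
    .proc_open    = kphd_stats_open,
    .proc_read    = seq_read,
    .proc_lseek   = seq_lseek,
    .proc_release = single_release,
};

/* In kphd_init(): proc_create("kphd_stats", 0444, NULL, &kphd_proc_ops);
 * in kphd_exit(): proc_remove() on the returned entry. */
```

single_open() buffers the whole report, so the show function never has to worry about partial reads or user-space copy sizes.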
Result: ✅ Phase 4 Complete.
K-PHD: Initializing Phase 4 — /proc/kphd_stats
K-PHD: /proc/kphd_stats created, hooks registered.
cat /proc/kphd_stats:
PID LAST(ns) MAX(ns) AVG(ns) SAMPLES
63 255 911 281 28
10 978492 978492 117161 9
Changes to kphd.c:
- Added <net/genetlink.h> for the Generic Netlink API.
- Defined GENL family KPHD with multicast group kphd_alerts.
- Implemented kphd_send_alert() — packs [PID, latency_ns, max_latency] into an sk_buff and broadcasts via genlmsg_multicast().
- Alerts are sent outside the spinlock to prevent deadlocks (genlmsg_new with GFP_ATOMIC).
- Proper cleanup: genl_unregister_family() in the exit path.
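The alert path described above could look roughly like this (kernel-side sketch; attribute enum, command number, and padding choices are assumptions, not the project's definitions):

```c
#include <net/genetlink.h>

enum { KPHD_ATTR_UNSPEC, KPHD_ATTR_PID, KPHD_ATTR_LATENCY,
       KPHD_ATTR_MAX_LATENCY, __KPHD_ATTR_MAX };

static const struct genl_multicast_group kphd_mcgrps[] = {
    { .name = "kphd_alerts" },
};

static struct genl_family kphd_family = {
    .name     = "KPHD",
    .version  = 1,
    .maxattr  = __KPHD_ATTR_MAX - 1,
    .mcgrps   = kphd_mcgrps,
    .n_mcgrps = ARRAY_SIZE(kphd_mcgrps),
};

static void kphd_send_alert(pid_t pid, u64 latency, u64 max_latency)
{
    struct sk_buff *skb = genlmsg_new(NLMSG_GOODSIZE, GFP_ATOMIC);
    void *hdr;

    if (!skb)
        return;
    hdr = genlmsg_put(skb, 0, 0, &kphd_family, 0, 0 /* cmd */);
    if (!hdr ||
        nla_put_u32(skb, KPHD_ATTR_PID, pid) ||
        nla_put_u64_64bit(skb, KPHD_ATTR_LATENCY, latency,
                          KPHD_ATTR_UNSPEC) ||
        nla_put_u64_64bit(skb, KPHD_ATTR_MAX_LATENCY, max_latency,
                          KPHD_ATTR_UNSPEC)) {
        nlmsg_free(skb);
        return;
    }
    genlmsg_end(skb, hdr);
    /* group 0 = first (and only) entry in kphd_mcgrps */
    genlmsg_multicast(&kphd_family, skb, 0, 0, GFP_ATOMIC);
}
```

genlmsg_multicast() consumes the skb on both success and failure, which is why only the nla_put error path calls nlmsg_free() explicitly.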
Result: ✅ Phase 5 Complete.
K-PHD: Initializing Phase 5 — Netlink Integration
K-PHD: Netlink family 'KPHD' registered, /proc ready, hooks active.
K-PHD: Module unloaded, Netlink family removed, resources freed.
New files:
- daemon/kphd_daemon.c — C daemon using libnl-genl-3.0.
- daemon/Makefile — Build system using pkg-config.
Features:
- Connects to GENL family KPHD and subscribes to the kphd_alerts multicast group.
- Per-PID EMA tracker: EMA_t = 0.3 * L_t + 0.7 * EMA_{t-1}.
- Three severity levels: INFO (< 2 ms), WARNING (2-5 ms), DANGER (> 5 ms).
- Predicts CPU starvation when EMA > 5 ms for 3+ consecutive alerts.
- Colorized ANSI terminal output with timestamps.
Result: ✅ Phase 6 Complete. Compiled successfully with gcc + libnl-genl-3.0.
New files in tests/:
- cpu_hog.c — Spawns N threads in tight CPU loops to starve the scheduler.
- io_stall.c — Combines fsync storms with mutex contention.
- run_validation.sh — End-to-end test: loads the module, runs both tests, captures /proc/kphd_stats, validates detection, checks dmesg.
- Makefile — Build system for the test programs.
Result: ✅ Phase 7 Code Complete. Awaiting user validation run.
This log is updated after every significant action. Append new sessions below this line.