A comprehensive RAID storage system implementation for the MIT xv6 educational operating system, supporting RAID 0, RAID 1, and RAID 0+1 configurations with advanced debugging capabilities and fault tolerance.
- Overview
- Requirements & Setup
- Building & Running
- RAID System Architecture
- System Calls API
- RAID Implementation Details
- Debug System
- Testing & Validation
- Project Structure
- Performance & Configuration
- Troubleshooting
This project implements a complete RAID (Redundant Array of Independent Disks) storage system within the xv6 operating system kernel. It was developed as part of an Operating Systems course project and provides:
- RAID 0 (Striping): High performance through data striping across multiple disks
- RAID 1 (Mirroring): Fault tolerance through exact data duplication
- RAID 0+1 (Striped Mirrors): Combines striping and mirroring for both performance and reliability
- Complete System Call Interface: 7 system calls for full RAID management
- Fault Tolerance: Automatic disk failure detection and recovery mechanisms
- Persistent Metadata: Array configuration survives system reboots
- Advanced Debug System: 6-level debug system with ANSI colors
- Apple Silicon Support: Native compilation and execution on Apple M-series Macs
- Professional Implementation: Production-quality error handling and validation
- macOS 11+ (Apple Silicon: M-series chips)
- Homebrew package manager
- Terminal.app or iTerm2 (avoid other terminal emulators)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install QEMU with RISC-V support and cross-compilation toolchain
brew install qemu riscv64-elf-gcc riscv64-elf-binutils
# Confirm tools are available
riscv64-elf-gcc --version
which riscv64-elf-gcc
qemu-system-riscv64 --version
PATH issues? Ensure Homebrew's path is configured:
echo 'export PATH="/opt/homebrew/bin:$PATH"' >> ~/.zprofile
source ~/.zprofile
- Recommended IDE: CLion (free academic license available)
- Alternative: VS Code with C/C++ extensions
- Required: Basic C programming knowledge and OS concepts
# Clone the repository
git clone https://github.com/miloshimself/xv6-raid.git
cd xv6-raid
# Build with default configuration (2 RAID disks, SUCCESS debug level)
make TOOLPREFIX=riscv64-elf-
# Run in QEMU emulator
make TOOLPREFIX=riscv64-elf- qemu
# Configure number of RAID disks (2-7, disk 0 reserved for xv6)
make DISKS=4 TOOLPREFIX=riscv64-elf- qemu
# Note: All RAID disks must be of equal size (configured automatically)
# Debug levels (from silent to verbose)
make RAID_DEBUG=NONE TOOLPREFIX=riscv64-elf- qemu # No debug output
make RAID_DEBUG=ERROR TOOLPREFIX=riscv64-elf- qemu # Critical errors only
make RAID_DEBUG=WARNING TOOLPREFIX=riscv64-elf- qemu # Warnings + errors
make RAID_DEBUG=INFO TOOLPREFIX=riscv64-elf- qemu # General information
make RAID_DEBUG=SUCCESS TOOLPREFIX=riscv64-elf- qemu # Success messages (default)
make RAID_DEBUG=VERBOSE TOOLPREFIX=riscv64-elf- qemu # Detailed operations
make RAID_DEBUG=TRACE TOOLPREFIX=riscv64-elf- qemu # Everything including internals
# Disable colors for better performance
make RAID_DEBUG=INFO RAID_COLORS=OFF TOOLPREFIX=riscv64-elf- qemu
Edit the Makefile and add at the top:
TOOLPREFIX = riscv64-elf-
Then simply run:
make qemu
Once the system boots, you can run the test program:
# In xv6 shell
javni_test # Runs comprehensive RAID functionality tests
The RAID system is implemented as a kernel subsystem with the following architecture:
User Applications
↓
System Call Interface (7 RAID syscalls)
↓
RAID Core Logic (raid.c)
↓
RAID Strategy Pattern (raid0.c, raid1.c, raid01.c)
↓
Disk Access Layer (virtio_disk.c)
↓
Physical Disks (1-7, disk 0 reserved for xv6)
- Disk 0: Reserved for xv6 file system (DO NOT MODIFY)
- Disks 1-7: Available for RAID arrays (configured via the DISKS parameter)
- Block 0 on each RAID disk: Contains RAID metadata
- Blocks 1+ on each RAID disk: User data storage
- BSIZE: Block size constant (typically 1024 bytes)
- Metadata: Stored in first block of each disk
- Strategy Pattern: Runtime selection of RAID implementation
- Error Handling: Comprehensive validation and error reporting
#include "kernel/types.h"
enum RAID_TYPE {
RAID0, // Striping for performance
RAID1, // Mirroring for fault tolerance
RAID0_1 // Striped mirrors for both
};
// Core RAID management
int init_raid(enum RAID_TYPE raid);
int destroy_raid(void);
// Data operations
int read_raid(int blkn, uchar* data);
int write_raid(int blkn, uchar* data);
// Fault management
int disk_fail_raid(int diskn);
int disk_repaired_raid(int diskn);
// Information retrieval
int info_raid(uint *blkn, uint *blks, uint *diskn);
Purpose: Initialize a new RAID array or restore existing configuration.
Parameters:
- raid: RAID type (RAID0, RAID1, or RAID0_1)
Return Value:
- 0 on success
- -1 on error (invalid RAID type, insufficient disks, etc.)
Behavior:
- Automatically detects existing RAID configuration and restores it
- If no existing array found, creates new array with specified type
- Calculates total capacity based on RAID type and available disks
- Writes metadata to block 0 of each participating disk
- Initializes all disks as healthy
Implementation Details:
// RAID0: Total capacity = sum of all disks minus metadata blocks
// RAID1: Total capacity = capacity of smallest disk minus metadata
// RAID0+1: Total capacity = (sum of all disks / 2) minus metadata
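As an illustrative sketch (not the project's exact code), the formulas above could be computed like this; `disk_blocks` is assumed to be the per-disk block count, while `RAIDDISKS` comes from the Makefile configuration described later:

```c
// Illustrative only: compute the logical capacity for each RAID type.
// disk_blocks = blocks per physical disk (all RAID disks are equal size);
// RAIDDISKS is the Makefile-provided disk count.
static int raid_capacity(enum RAID_TYPE type, int disk_blocks)
{
    int data_blocks = disk_blocks - 1;                    // block 0 holds metadata

    switch (type) {
    case RAID0:   return RAIDDISKS * data_blocks;         // striped: sum of all disks
    case RAID1:   return data_blocks;                     // mirrored: one disk's worth
    case RAID0_1: return (RAIDDISKS / 2) * data_blocks;   // half the disks hold unique data
    default:      return -1;                              // unknown RAID type
    }
}
```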
Purpose: Read a logical block from the RAID array.
Parameters:
- blkn: Logical block number (0-based)
- data: Buffer to store read data (must be BSIZE bytes)
Return Value:
- 0 on success
- -1 on error (array not initialized, invalid block number, all disks failed, etc.)
Behavior:
- Validates array is initialized and block number is valid
- Maps logical block to physical disk(s) based on RAID type
- For RAID1/RAID0+1: Automatically tries alternate disk if primary fails
- Reads exactly BSIZE bytes into provided buffer
Purpose: Write a logical block to the RAID array.
Parameters:
- blkn: Logical block number (0-based)
- data: Buffer containing data to write (must be BSIZE bytes)
Return Value:
- 0 on success
- -1 on error (array not initialized, invalid block number, write failure, etc.)
Behavior:
- Validates array is initialized and block number is valid
- Maps logical block to physical disk(s) based on RAID type
- For RAID1/RAID0+1: Writes to all healthy mirrors
- Returns success if at least one write succeeds (for redundant arrays)
Purpose: Simulate disk failure for testing fault tolerance.
Parameters:
- diskn: Physical disk number to mark as failed (1-based)
Return Value:
- 0 on success
- -1 on error (array not initialized, invalid disk number)
Behavior:
- Marks specified disk as failed in RAID state
- Array continues operating in degraded mode (if RAID type supports it)
- For RAID0: Array becomes unusable (no redundancy)
- For RAID1/RAID0+1: Array operates with reduced redundancy
Purpose: Repair a previously failed disk by rebuilding its data.
Parameters:
- diskn: Physical disk number to repair (1-based)
Return Value:
- 0 on success
- -1 on error (array not initialized, invalid disk number, repair failed)
Behavior:
- For RAID0: Always fails (no redundancy to rebuild from)
- For RAID1: Copies all data from healthy mirror to repaired disk
- For RAID0+1: Copies data from mirror partner to repaired disk
- Marks disk as healthy after successful rebuild
- Returns error if no healthy data source available
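For illustration, a minimal sketch of the mirror-based rebuild described above; `find_healthy_disk()`, `read_block()`, `write_block()`, and `mark_disk_healthy()` are assumed helpers, not the project's actual API:

```c
// Illustrative only: copy every block from a healthy mirror to the
// repaired disk, then mark it healthy again.
static int rebuild_from_mirror(int failed_disk, int data_blocks)
{
    int source = find_healthy_disk(failed_disk);   // assumed lookup helper
    if (source < 0)
        return -1;                                 // no healthy data source available

    uchar buf[BSIZE];
    for (int b = 0; b <= data_blocks; b++) {       // block 0 (metadata) plus data blocks
        if (read_block(source, b, buf) != 0)
            return -1;
        if (write_block(failed_disk, b, buf) != 0)
            return -1;
    }
    mark_disk_healthy(failed_disk);                // assumed state update
    return 0;
}
```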
Purpose: Retrieve information about the current RAID array.
Parameters:
- blkn: Pointer to store the total number of logical blocks
- blks: Pointer to store the block size in bytes
- diskn: Pointer to store the total number of physical disks
Return Value:
- 0 on success
- -1 on error (array not initialized)
Behavior:
- Returns current array configuration
- Block count reflects logical capacity (after RAID calculations)
- Disk count shows physical disks participating in array
Purpose: Safely destroy the RAID array and erase data.
Parameters: None
Return Value:
- 0 on success
- -1 on error (array not initialized)
Behavior:
- Performs full erase of all blocks on all disks (configurable)
- Alternative: Quick erase (metadata only) via compile-time flag
- Resets internal RAID state
- Array becomes uninitialized after destruction
All system calls follow consistent error handling:
- Return Values: 0 for success, -1 for any error
- Validation: All parameters validated before processing
- State Checks: Array initialization verified for all operations
- Debug Output: Detailed error messages via debug system
- Graceful Degradation: Operations continue when possible
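A minimal sketch of the shared validation these conventions imply; the `raid_state` structure and its field names are assumptions used for illustration:

```c
// Illustrative only: shared checks performed before any data operation.
// raid_state and its fields are assumed names for the kernel's RAID state.
static int validate_request(int blkn)
{
    if (!raid_state.initialized) {
        DEBUG_ERROR("RAID: array not initialized");
        return -1;
    }
    if (blkn < 0 || blkn >= raid_state.total_blocks) {
        DEBUG_ERROR("RAID: block %d out of range (0-%d)",
                    blkn, raid_state.total_blocks - 1);
        return -1;
    }
    return 0;   // state and parameters are valid
}
```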
File: kernel/raid0.c
Algorithm: Round-robin block distribution across disks
// Block mapping formula
disk_number = (logical_block % total_disks) + 1
physical_block = (logical_block / total_disks) + 1 // +1 for metadata block
Characteristics:
- Performance: Excellent (parallel I/O across all disks)
- Capacity: Sum of all disks minus metadata blocks
- Fault Tolerance: None (any disk failure causes total data loss)
- Minimum Disks: 2
- Use Case: High-performance applications where data loss is acceptable
Implementation Highlights:
- raid0_init(): Calculates total capacity and initializes metadata
- raid0_read() / raid0_write(): Map logical blocks to physical disks
- raid0_repair_disk(): Always fails (no redundancy available)
- Block distribution ensures an even load across all disks
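To make the mapping concrete, here is a minimal, hypothetical read path built from the formula above; `disk_is_failed()` and `read_block()` are assumed helpers, not the project's actual API:

```c
// Illustrative only: RAID0 read using the mapping formula above.
int raid0_read_sketch(int blkn, uchar *data)
{
    int disk = (blkn % RAIDDISKS) + 1;      // RAID disks are numbered from 1
    int pblk = (blkn / RAIDDISKS) + 1;      // +1 skips the metadata block

    if (disk_is_failed(disk))               // assumed state check
        return -1;                          // RAID0 has no redundant copy to fall back on

    return read_block(disk, pblk, data);    // assumed low-level disk helper
}
```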
File: kernel/raid1.c
Algorithm: Complete data duplication across all disks
// All disks contain identical data
for (disk = 1; disk <= total_disks; disk++) {
write_block(disk, physical_block, data);
}
Characteristics:
- Performance: Good for reads (can read from any disk), slower writes
- Capacity: Size of smallest disk minus metadata block
- Fault Tolerance: Excellent (survives n-1 disk failures)
- Minimum Disks: 2
- Use Case: Critical data requiring maximum reliability
Implementation Highlights:
- raid1_init(): Sets capacity to the smallest disk size
- raid1_read(): Tries disks in order until a successful read
- raid1_write(): Writes to all healthy disks; succeeds if any write succeeds
- raid1_repair_disk(): Rebuilds a failed disk from any healthy mirror
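A comparable sketch of the mirrored read fallback described above, reusing the same assumed `disk_is_failed()` / `read_block()` helpers:

```c
// Illustrative only: try each mirror in order, return the first good read.
int raid1_read_sketch(int blkn, uchar *data)
{
    int pblk = blkn + 1;                              // +1 skips the metadata block

    for (int disk = 1; disk <= RAIDDISKS; disk++) {
        if (disk_is_failed(disk))
            continue;                                 // skip failed mirrors
        if (read_block(disk, pblk, data) == 0)
            return 0;                                 // first healthy copy wins
    }
    return -1;                                        // every mirror is unavailable
}
```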
File: kernel/raid01.c
Algorithm: Combines striping and mirroring
// Mirror pairs: (disk1,disk2), (disk3,disk4), etc.
stripe = logical_block % (total_disks / 2)
primary_disk = (stripe * 2) + 1
mirror_disk = primary_disk + 1
physical_block = (logical_block / (total_disks / 2)) + 1
Characteristics:
- Performance: Excellent (striping) with fault tolerance (mirroring)
- Capacity: Half the total disk space minus metadata
- Fault Tolerance: Good (survives mirror partner failures)
- Minimum Disks: 2 (an even number is required)
- Use Case: High-performance applications requiring fault tolerance
Implementation Highlights:
- raid01_init(): Requires an even number of disks and pairs them as mirrors
- raid01_read(): Reads from the primary disk, falls back to the mirror on failure
- raid01_write(): Writes to both disks in the mirror pair
- raid01_repair_disk(): Rebuilds from the mirror partner
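And a sketch of the mirrored-stripe write path using the mapping above; `write_block()` and `disk_is_failed()` are likewise assumed helpers:

```c
// Illustrative only: write a block to both members of its mirror pair.
int raid01_write_sketch(int blkn, const uchar *data)
{
    int pairs   = RAIDDISKS / 2;
    int stripe  = blkn % pairs;
    int primary = (stripe * 2) + 1;
    int mirror  = primary + 1;
    int pblk    = (blkn / pairs) + 1;       // +1 skips the metadata block

    int ok = 0;
    if (!disk_is_failed(primary) && write_block(primary, pblk, data) == 0)
        ok = 1;
    if (!disk_is_failed(mirror) && write_block(mirror, pblk, data) == 0)
        ok = 1;

    return ok ? 0 : -1;                     // succeed if at least one copy was written
}
```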
File: kernel/raid.c
The core RAID system uses the Strategy Pattern for runtime algorithm selection:
struct raid_strategy {
int (*init)(void);
int (*read)(int, uchar *);
int (*write)(int, const uchar *);
int (*repair)(int);
};
static struct raid_strategy strategy; // Current strategy
// Strategy selection at runtime
static void set_raid_strategy(enum RAID_TYPE type) {
switch (type) {
case RAID0: set_raid0_strategy(); break;
case RAID1: set_raid1_strategy(); break;
case RAID0_1: set_raid01_strategy(); break;
}
}
This design provides:
- Runtime flexibility: RAID type determined at init time
- Code reuse: Common validation and error handling
- Maintainability: Clear separation of algorithm implementations
- Extensibility: Easy addition of new RAID levels
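A minimal sketch of how a public entry point can combine shared validation with this dispatch; `validate_request()` is the hypothetical helper sketched in the error-handling section, and the real read_raid() may differ:

```c
// Illustrative only: validate once, then delegate to the selected strategy.
int read_raid_sketch(int blkn, uchar *data)
{
    if (validate_request(blkn) < 0)     // shared checks (initialized, block range)
        return -1;
    return strategy.read(blkn, data);   // RAID0 / RAID1 / RAID0+1 specific logic
}
```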
File: kernel/raid.h
Each RAID disk stores metadata in block 0:
struct raid_metadata {
int magic_number; // RAID_MAGIC (0x52414944 = "RAID")
int raid_type; // RAID0, RAID1, or RAID0_1
int total_disks; // Number of disks in array
int total_blocks; // Logical capacity of array
};
Metadata Features:
- Magic Number: Identifies valid RAID metadata
- Type Detection: Automatic RAID type restoration
- Capacity Calculation: Stored for consistency validation
- Persistence: Survives system reboots and power failures
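A sketch of how the persisted metadata might be probed and restored during init_raid(); `read_block()` and the `raid_state` structure are assumptions, while RAID_MAGIC and struct raid_metadata come from kernel/raid.h:

```c
// Illustrative only: probe disk 1, block 0 for existing RAID metadata and
// restore the saved configuration if the magic number matches.
static int try_restore_metadata(void)
{
    uchar buf[BSIZE];
    if (read_block(1, 0, buf) != 0)                 // assumed low-level disk helper
        return -1;

    struct raid_metadata *md = (struct raid_metadata *)buf;
    if (md->magic_number != RAID_MAGIC)
        return -1;                                  // no existing array on disk

    raid_state.type         = md->raid_type;        // raid_state is an assumed struct
    raid_state.total_disks  = md->total_disks;
    raid_state.total_blocks = md->total_blocks;
    return 0;
}
```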
Files: kernel/raid_debug.h, kernel/console_colors.h
The xv6-raid project includes a sophisticated 6-level debug system with ANSI color support for comprehensive monitoring and troubleshooting.
| Level | Macro | Purpose | Color | Example Output |
|---|---|---|---|---|
| 0 | NONE | No output | - | Silent operation |
| 1 | ERROR | Critical failures | Red | [RAID-ERROR] Array not initialized |
| 2 | WARNING | Degraded operation | Yellow | [RAID-WARN] Disk 2 marked as FAILED |
| 3 | INFO | General status | Blue | [RAID-INFO] RAID1 array initialized |
| 4 | SUCCESS | Success messages | Green | [RAID-SUCCESS] Disk repair completed |
| 5 | VERBOSE | Configuration details | Cyan | [RAID-VERB] Writing metadata to disk 3 |
| 6 | TRACE | Internal operations | Gray | [RAID-TRACE] stripe=2, disk=5, offset=1024 |
// Error reporting
DEBUG_ERROR("init_raid: Invalid RAID type %d", raid_type);
// Status updates
DEBUG_INFO("RAID1: Initializing mirrored array with %d disks", total_disks);
// Success confirmation
DEBUG_SUCCESS("Disk %d repair completed successfully", diskn);
// Detailed tracing
DEBUG_TRACE("Block %d mapped to disk %d, physical block %d", blkn, diskn, pblkn);
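For context, one common way such level-gated macros are implemented; this is a sketch under the assumption that RAID_DEBUG_LEVEL and the color macros behave as described elsewhere in this README, not a copy of kernel/raid_debug.h:

```c
// Illustrative only: level-gated macro that compiles away entirely when
// the configured RAID_DEBUG_LEVEL is too low.
#if RAID_DEBUG_LEVEL >= 1
#define DEBUG_ERROR(fmt, ...) \
    printf(ERROR_COLOR "[RAID-ERROR] " fmt COLOR_RESET "\n", ##__VA_ARGS__)
#else
#define DEBUG_ERROR(fmt, ...)   // stripped at compile time
#endif
```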
# Configure debug level at build time
make RAID_DEBUG=VERBOSE TOOLPREFIX=riscv64-elf- qemu
# Disable colors for better performance or piping
make RAID_DEBUG=INFO RAID_COLORS=OFF TOOLPREFIX=riscv64-elf- qemu
- Array initialization and destruction
- Block read/write operations
- Disk failure and repair management
- Metadata loading and validation
- Striping calculations
- Performance optimizations
- Data loss warnings
- Mirror synchronization
- Degraded mode operation
- Rebuild progress
- Stripe and mirror coordination
- Complex failure scenarios
- Performance balancing
File: kernel/console_colors.h
Provides ANSI color support with fallback for non-color terminals:
// Usage examples
printf(COLOR_GREEN "Success!" COLOR_RESET "\n");
printf(ERROR_COLOR "Critical error" COLOR_RESET "\n");
// Automatic color disable for performance
#ifdef DISABLE_COLORS // All colors become empty strings
Supported Colors:
- Standard: Black, Red, Green, Yellow, Blue, Magenta, Cyan, White
- Bright variants: Enhanced visibility versions of all colors
- Background colors: For highlighting important messages
- Text formatting: Bold, underline, reverse video
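The following is a sketch of how such macros can degrade gracefully; the exact macro set and escape sequences in kernel/console_colors.h may differ:

```c
// Illustrative only: real escape sequences when colors are enabled,
// empty strings when RAID_DEBUG_COLORS is turned off.
#if RAID_DEBUG_COLORS
#define COLOR_GREEN  "\x1b[32m"
#define ERROR_COLOR  "\x1b[31m"   // red
#define COLOR_RESET  "\x1b[0m"
#else
#define COLOR_GREEN  ""
#define ERROR_COLOR  ""
#define COLOR_RESET  ""
#endif
```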
File: user/javni_test.c
Comprehensive test program included with the system:
int main(int argc, char *argv[]) {
// Initialize RAID1 array
init_raid(RAID1);
// Get array information
uint disk_num, block_num, block_size;
info_raid(&block_num, &block_size, &disk_num);
// Write test pattern to blocks
uint blocks = (512 > block_num ? block_num : 512);
uchar *blk = malloc(block_size);
for (uint i = 0; i < blocks; i++) {
for (uint j = 0; j < block_size; j++) {
blk[j] = j + i; // Unique pattern per block
}
write_raid(i, blk);
}
// Verify data integrity
check_data(blocks, blk, block_size);
// Test fault tolerance
disk_fail_raid(2); // Simulate disk failure
check_data(blocks, blk, block_size); // Verify data still accessible
// Test recovery
disk_repaired_raid(2); // Repair failed disk
check_data(blocks, blk, block_size); // Verify full recovery
free(blk);
exit(0);
}
# In xv6 shell after boot
javni_test # Run comprehensive test suite
# Expected output (with SUCCESS debug level):
# [RAID-INFO] RAID1: Initializing mirrored array with 2 disks
# [RAID-SUCCESS] RAID1 initialization completed successfully
# [RAID-WARN] Disk 2 marked as FAILED
# [RAID-SUCCESS] Disk 2 repair completed successfully
// Example custom test in user program
#include "kernel/types.h"
#include "user/user.h"
int main() {
// Test different RAID types
printf("Testing RAID0...\n");
init_raid(RAID0);
// Write/read test
uchar data[1024];
memset(data, 0xAA, 1024);
write_raid(0, data);
uchar read_data[1024];
read_raid(0, read_data);
// Verify data
if (memcmp(data, read_data, 1024) == 0) {
printf("RAID0 basic test: PASSED\n");
} else {
printf("RAID0 basic test: FAILED\n");
}
destroy_raid();
exit(0);
}
# Build with TRACE level to see all operations
make RAID_DEBUG=TRACE TOOLPREFIX=riscv64-elf- qemu
# In xv6, measure operations per second
time javni_test # Basic timing (if time command available)
# Test all failure scenarios
# 1. Single disk failure (RAID1/RAID0+1)
# 2. Multiple disk failures
# 3. Failure during repair operations
# 4. Metadata corruption recovery
xv6-raid/
├── LICENSE # Project license
├── README.md # This documentation
├── Makefile # Build configuration with RAID options
├── xv6-raid.code-workspace # VS Code workspace
│
├── kernel/ # Kernel implementation
│ ├── raid.c # Core RAID logic and strategy pattern
│ ├── raid.h # RAID data structures and constants
│ ├── raid0.c/.h # RAID0 striping implementation
│ ├── raid1.c/.h # RAID1 mirroring implementation
│ ├── raid01.c/.h # RAID0+1 striped mirrors implementation
│ ├── raid_debug.h # Multi-level debug system
│ ├── console_colors.c/.h # ANSI color support for debug output
│ ├── sysproc.c # RAID system call handlers
│ ├── syscall.c/.h # System call registration and dispatch
│ ├── virtio_disk.c # Low-level disk access interface
│ └── defs.h # Function declarations
│
├── user/ # User programs
│ ├── javni_test.c # Comprehensive RAID test program
│ └── user.h # RAID system call declarations
│
└── mkfs/ # File system creation tools
└── mkfs.c # Modified for RAID support
- kernel/raid.c: Central RAID management implementing the strategy pattern for runtime algorithm selection
- kernel/raid.h: Data structures for RAID metadata, state management, and function declarations
- kernel/raid0.c: High-performance striping implementation for parallel disk access
- kernel/raid1.c: Fault-tolerant mirroring with automatic failover capabilities
- kernel/raid01.c: Advanced striped mirroring combining performance and reliability
- kernel/sysproc.c: System call wrappers handling user-kernel data transfer and validation
- kernel/syscall.c/.h: System call registration, dispatch table, and number definitions
- kernel/virtio_disk.c: Hardware abstraction layer for disk operations (provided interface)
- kernel/raid_debug.h: Sophisticated 6-level debug system with conditional compilation
- kernel/console_colors.c/.h: ANSI color support for enhanced debug output readability
- user/javni_test.c: Production-quality test suite covering all RAID functionality
The RAID system integrates seamlessly with the xv6 build system:
# Additional object files in Makefile
OBJS = \
# ... existing xv6 objects ...
$K/raid.o \
$K/raid0.o \
$K/raid1.o \
$K/raid01.o \
$K/console_colors.o
# RAID-specific configuration variables
DISKS := 2 # Number of RAID disks (configurable)
RAID_DEBUG := SUCCESS # Debug level (NONE to TRACE)
RAID_COLORS := ON # ANSI colors (ON/OFF)
# Compile-time configuration passed to compiler
CFLAGS += -DRAIDDISKS=$(DISKS)
CFLAGS += -DRAID_DEBUG_LEVEL=$(RAID_DEBUG_NUM)
CFLAGS += -DRAID_DEBUG_COLORS=$(RAID_COLORS_NUM)
# Minimum configuration (RAID1)
make DISKS=2 TOOLPREFIX=riscv64-elf- qemu
# High-performance RAID0 with 4 disks
make DISKS=4 RAID_DEBUG=NONE TOOLPREFIX=riscv64-elf- qemu
# Maximum configuration (7 RAID disks + 1 xv6 disk)
make DISKS=7 TOOLPREFIX=riscv64-elf- qemu
# Maximum performance: no debug output, no colors
make RAID_DEBUG=NONE RAID_COLORS=OFF TOOLPREFIX=riscv64-elf- qemu
# Development: full debugging with colors
make RAID_DEBUG=TRACE RAID_COLORS=ON TOOLPREFIX=riscv64-elf- qemu
# Production: success messages only
make RAID_DEBUG=SUCCESS RAID_COLORS=ON TOOLPREFIX=riscv64-elf- qemu
RAID 0:
- Read Performance: Excellent (n × single disk speed)
- Write Performance: Excellent (n × single disk speed)
- Capacity Utilization: 100% (minus metadata)
- Best For: High-throughput applications, temporary data
RAID 1:
- Read Performance: Good (can read from any mirror)
- Write Performance: Moderate (must write to all mirrors)
- Capacity Utilization: Equal to a single disk (data is mirrored to all disks)
- Best For: Critical data, database storage
RAID 0+1:
- Read Performance: Excellent (striped reads)
- Write Performance: Good (striped writes to mirror pairs)
- Capacity Utilization: 50% (mirroring overhead)
- Best For: High-performance applications requiring fault tolerance
- Per-array overhead: ~100 bytes (metadata structure)
- Per-operation overhead: 1KB buffer (BSIZE) on stack
- Debug system: Minimal impact when disabled
- Metadata overhead: 1 block per physical disk
- RAID0: No additional overhead
- RAID1: Logical capacity = single disk capacity (data mirrored across all disks)
- RAID0+1: 50% capacity loss to mirroring
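As a rough worked example (assuming 1024-block disk images, which matches the "1023 blocks/disk" figure in the sample debug trace later in this README): with DISKS=4, RAID0 exposes about 4 × 1023 = 4092 logical blocks, RAID1 exposes 1023, and RAID0+1 exposes 2 × 1023 = 2046.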
# Error: riscv64-elf-gcc: command not found
Solution:
brew install riscv64-elf-gcc riscv64-elf-binutils
echo 'export PATH="/opt/homebrew/bin:$PATH"' >> ~/.zprofile
source ~/.zprofile
# Error: qemu-system-riscv64: command not found
Solution:
brew install qemu
# Error: QEMU hangs or crashes
Solutions:
1. Use Terminal.app or iTerm2 (avoid other terminals)
2. Check available RAM (QEMU needs ~512MB)
3. Try: make clean && make TOOLPREFIX=riscv64-elf- qemu
# Symptom: init_raid() returns -1
Debugging steps:
1. Check debug output: make RAID_DEBUG=VERBOSE
2. Verify disk count: minimum 2 for RAID0 and RAID1, even number ≥ 2 for RAID0+1
3. Check disk sizes: all RAID disks must be equal size
# Common error messages:
[RAID-ERROR] RAID0: Insufficient disks (1), minimum 2 required
[RAID-ERROR] RAID0+1: Invalid disk count (3), requires even number ≥ 2
# Symptom: check_data() reports mismatched data
Debugging steps:
1. Enable trace level: make RAID_DEBUG=TRACE
2. Check for disk failure messages
3. Verify write operations completed successfully
4. Test with smaller data sets
# Investigation commands in xv6:
javni_test # Run full test suite
# Check debug output for ERROR or WARN messages
# Symptom: Slow RAID operations
Solutions:
1. Disable debug: make RAID_DEBUG=NONE
2. Disable colors: make RAID_COLORS=OFF
3. Use RAID0 for maximum speed
4. Increase disk count for better parallelism
# Build with full debugging
make RAID_DEBUG=TRACE RAID_COLORS=ON TOOLPREFIX=riscv64-elf- qemu
# Expected output for successful RAID1 init:
[RAID-INFO] RAID: init_raid - Initializing RAID1 array
[RAID-VERBOSE] RAID: set_raid_strategy - Configuring strategy for RAID1
[RAID-TRACE] RAID1: set_raid_strategy - Mirroring strategy configured
[RAID-INFO] RAID1: raid1_init - Initializing mirrored array with 2 disks
[RAID-VERBOSE] RAID1: raid1_init - Configuration: 1023 blocks/disk, 1023 total blocks
[RAID-VERBOSE] RAID1: raid1_init - Writing metadata to all disks
[RAID-TRACE] RAID1: raid1_init - Writing metadata to disk 1
[RAID-TRACE] RAID1: raid1_init - Writing metadata to disk 2
[RAID-SUCCESS] RAID: init_raid - RAID1 initialization completed successfully
# Look for these patterns in debug output:
# Initialization errors
[RAID-ERROR] init_raid: Invalid RAID type
[RAID-ERROR] RAID*: *_init - Insufficient disks
# Runtime errors
[RAID-ERROR] *: validate_diskn - Disk * out of range
[RAID-ERROR] *: *_read - Failed to read block
[RAID-WARN] *: validate_diskn - Disk * is in FAILED state
# Recovery errors
[RAID-ERROR] *: *_repair_disk - Cannot repair * array
[RAID-ERROR] *: *_repair_disk - No healthy disks available
- This README: Complete implementation and usage guide
- Source Code: Extensively commented implementation files
- Debug System: Built-in debugging with multiple verbosity levels
- Author: Miloš Jovanović
- GitHub: github.com/miloshimself
- Repository: xv6-raid
- xv6 Documentation: MIT xv6 Book
- QEMU Documentation: QEMU RISC-V Guide
- RISC-V Resources: RISC-V Foundation
This project is developed as part of an academic assignment for Operating Systems course. The base xv6 operating system is developed by MIT and is available under their license terms.
Academic Use: This implementation is provided for educational purposes. Students using this code should understand and comply with their institution's academic integrity policies.
Technical Attribution:
- Base xv6 system: MIT PDOS (https://pdos.csail.mit.edu/6.S081/)
- RAID implementation: Miloš Jovanović
- Apple Silicon support: Based on community adaptations
- Debug system: Custom implementation with ANSI color support
Last Updated: August 2025
Version: 1.0 (Part 1 Implementation - 20 points)
Compatibility: xv6-riscv, Apple Silicon Macs, QEMU 7.0+