# HAL: Interrupt Management

## Overview
The Linmo kernel provides a comprehensive interrupt management system designed for the RISC-V RV32I architecture.
Its interrupt control mechanisms do not require the caller to declare explicit interrupt status variables.
Instead, the system provides both low-level primitives and high-level critical section macros that automatically manage interrupt state.

## Interrupt Control Levels
Linmo provides two distinct levels of interrupt control to balance system responsiveness with data protection:

### 1. Global Interrupt Control (`CRITICAL_*`)
Controls ALL maskable interrupts, providing the strongest protection against concurrency.

### 2. Scheduler Interrupt Control (`NOSCHED_*`)
Controls ONLY the scheduler timer interrupt, allowing other hardware interrupts to be serviced while preventing task preemption.

## Low-Level Interrupt Control

### Basic Functions
```c
/* Enable/disable global interrupts */
#define _di() hal_interrupt_set(0) /* Disable interrupts */
#define _ei() hal_interrupt_set(1) /* Enable interrupts */

/* Fine-grained control with return value */
int32_t hal_interrupt_set(int32_t enable);
```

The `hal_interrupt_set()` function returns the previous interrupt enable state (1 if enabled, 0 if disabled), allowing for proper nesting.

### Correct Usage Example

```c
void low_level_function(void) {
    int32_t prev_state = _di(); /* Disable interrupts, save previous state */

    /* Critical code section */
    /* ... */

    hal_interrupt_set(prev_state); /* Restore previous state */
}
```

### Incorrect Usage Example

```c
void bad_function(void) {
    _di();
    /* Critical code section */
    _ei(); /* WRONG: Always enables interrupts regardless of previous state */
}
```

Problem: The incorrect example always enables interrupts with `_ei()`,
even if interrupts were already disabled when the function was called.
This can lead to unexpected interrupt behavior in nested critical sections.

## High-Level Critical Section Macros

### Global Critical Sections

Use `CRITICAL_ENTER()` and `CRITICAL_LEAVE()` when protecting data shared with interrupt service routines (ISRs):

```c
void shared_data_access(void) {
    CRITICAL_ENTER();

    /* Access data shared with ISRs */
    global_counter++;
    shared_buffer[index] = value;

    CRITICAL_LEAVE();
}
```

When to use:
- Modifying data structures accessed by ISRs
- Hardware register manipulation
- Memory allocation/deallocation in ISR context
- Any operation that must be atomic with respect to ALL interrupts

**Warning**: Increases interrupt latency - use sparingly and keep critical sections short.

### Scheduler Critical Sections

Use `NOSCHED_ENTER()` and `NOSCHED_LEAVE()` when protecting data shared only between tasks:

```c
void task_shared_data_access(void) {
    NOSCHED_ENTER();

    /* Access data shared between tasks only */
    task_list_modify();
    update_task_counters();

    NOSCHED_LEAVE();
}
```

When to use:
- Modifying task-related data structures
- Protecting against task preemption only
- Operations that should be atomic with respect to the scheduler
- Most kernel synchronization primitives (semaphores, mutexes, etc.)

Advantage: Lower interrupt latency - hardware interrupts (UART, etc.) can still be serviced.

## Automatic State Management

Unlike systems requiring explicit `intsts` declarations, Linmo's macros automatically handle interrupt state:

```c
/* Linmo approach - automatic state management */
void linmo_function(void) {
    CRITICAL_ENTER(); /* Automatically saves current state */
    /* Critical section */
    CRITICAL_LEAVE(); /* Automatically restores previous state */
}

/* Compare with uT-Kernel approach */
void ut_kernel_function(void) {
    UINT intsts; /* Must explicitly declare */
    DI(intsts);  /* Must pass intsts parameter */
    /* Critical section */
    EI(intsts);  /* Must pass intsts parameter */
}
```

## Implementation Details

### Preemptive vs Cooperative Mode
The critical section macros are mode-aware:

```c
#define CRITICAL_ENTER() \
    do { \
        if (kcb->preemptive) \
            _di(); \
    } while (0)

#define NOSCHED_ENTER() \
    do { \
        if (kcb->preemptive) \
            hal_timer_disable(); \
    } while (0)
```

In cooperative mode, these macros become no-ops since tasks yield voluntarily.

### RISC-V Implementation
The interrupt control directly manipulates the `mstatus.MIE` bit:

```c
static inline int32_t hal_interrupt_set(int32_t enable)
{
    uint32_t mstatus_val = read_csr(mstatus);
    uint32_t mie_bit = (1U << 3); /* MSTATUS_MIE bit position */

    if (enable) {
        write_csr(mstatus, mstatus_val | mie_bit);
    } else {
        write_csr(mstatus, mstatus_val & ~mie_bit);
    }
    return (int32_t) ((mstatus_val >> 3) & 1);
}
```

## Common Pitfalls

### 1. Forgetting Critical Sections

```c
/* WRONG - race condition possible */
volatile int shared_var;
void unsafe_increment(void) {
    shared_var++; /* Not atomic on most architectures */
}

/* CORRECT */
void safe_increment(void) {
    CRITICAL_ENTER();
    shared_var++;
    CRITICAL_LEAVE();
}
```

### 2. Using Wrong Protection Level

```c
/* WRONG - insufficient protection for ISR-shared data */
void isr_shared_access(void) {
    NOSCHED_ENTER(); /* Only blocks the scheduler tick, not other ISRs! */
    isr_shared_buffer[0] = value;
    NOSCHED_LEAVE();
}

/* CORRECT */
void isr_shared_access_fixed(void) {
    CRITICAL_ENTER(); /* Blocks all maskable interrupts */
    isr_shared_buffer[0] = value;
    CRITICAL_LEAVE();
}
```

### 3. Calling Blocking Functions in Critical Sections

```c
/* WRONG - can deadlock */
void bad_critical_section(void) {
    CRITICAL_ENTER();
    mo_sem_wait(semaphore); /* May block indefinitely! */
    CRITICAL_LEAVE();
}

/* CORRECT */
void good_pattern(void) {
    mo_sem_wait(semaphore); /* Block outside the critical section */

    CRITICAL_ENTER();
    /* Quick critical work */
    CRITICAL_LEAVE();
}
```