Commit cf9e29b

docs(memory): update the memory-management docs and flesh out the architecture notes

1 parent 4d5beaa commit cf9e29b
File tree

10 files changed: +224 −121 lines

docs/architecture/memory.md

Lines changed: 4 additions & 4 deletions

@@ -1,7 +1,7 @@
 # Memory Management

-Proka Kernel's memory management is split into the following layers:
+Proka Kernel's memory management uses a layered architecture, forming a complete path from low-level page-frame management up to dynamic memory allocation:

-1. **Physical page allocation (PMM)**: manages physical memory frames
-2. **Virtual memory paging (VMM)**: implements page-table mapping and protection
-3. **Kernel heap (Heap)**: provides dynamic memory allocation
+1. **Physical page allocation (PMM)**: a simple page-frame allocator driven by the memory map, responsible for the lifecycle of physical page frames
+2. **Virtual memory management (VMM)**: introduces the `VmArea` and `MemorySet` abstractions, supporting lazy allocation (on-demand paging) and memory-region protection
+3. **Kernel heap (Heap)**: a `talc`-based dynamic allocator that grows its memory automatically through the VMM, removing the fixed-heap-size limitation
docs/architecture/memory/heap.md

Lines changed: 41 additions & 1 deletion

@@ -1 +1,41 @@
-# Heap
+# Kernel Heap Management (Heap)
+
+The Proka kernel provides a dynamically growing heap manager built on the `talc` crate. It backs global allocation (`Box`, `Vec`, `Arc`, etc.) and expands automatically as the kernel's needs grow.
+
+## Allocator Architecture
+
+The kernel uses the `Talck` allocator, a thread-safe wrapper around `talc` guarded by a `spin::Mutex`.
+
+### Dynamic Growth (OOM Handling)
+
+Unlike allocators with a statically reserved arena, Proka implements `talc::OomHandler`:
+
+1. **Trigger**: when the remaining `Span` in the allocator cannot satisfy a request, `handle_oom` fires.
+2. **Area expansion**:
+   - The handler locks `KERNEL_MEMORY_SET`.
+   - It pushes the end address of the "heap" area upward (typically in 1 MiB steps).
+3. **Physical mapping**:
+   - The handler installs the physical mappings for the new range in the page table **manually**.
+   - *Note*: the mapping must be manual; taking a page fault while holding the allocator lock would deadlock.
+4. **Claiming the memory**: `talc.claim()` hands the newly mapped virtual range to the allocator.
+
+## Bootstrapping
+
+Because the VMM itself allocates memory to manage areas (e.g. `Vec<VmArea>`), there is an initialization cycle:
+
+**The VMM needs the heap -> heap growth depends on the VMM**
+
+Proka breaks the cycle with the following steps:
+
+1. **Small pre-mapping**: in `init_heap`, the kernel first maps a 64 KiB physical region by hand.
+2. **Base heap init**: the allocator claims this 64 KiB first.
+3. **VMM start-up**: the VMM initializes and takes over management of the "heap" area.
+4. **Transparent growth**: once demand exceeds 64 KiB, the dynamic-growth mechanism kicks in automatically.
+
+## Key Parameters
+
+- **Start address**: `0x_4444_4444_0000` (in the canonical higher-half kernel address space).
+- **Initial size**: 64 KiB.
+- **Growth step**: 1 MiB.
+
+## Implementation Notes
+
+The code lives in `kernel/src/memory/allocator.rs`:
+- defines `KernelOomHandler`
+- exports the `#[global_allocator]`
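
The growth arithmetic described above (1 MiB steps over 4 KiB pages) can be modeled in a few lines of standalone Rust. `EXPAND_STEP` and `grow_heap` are illustrative names for this sketch, not the kernel's actual API.

```rust
// Standalone model of the heap-growth step described in the doc.
// EXPAND_STEP and grow_heap are illustrative, not kernel APIs.
const EXPAND_STEP: u64 = 1024 * 1024; // 1 MiB growth step
const PAGE_SIZE: u64 = 4096; // 4 KiB pages

/// Advance the heap end by one growth step; return the new end and the
/// number of pages the OOM handler would have to map manually.
fn grow_heap(old_end: u64) -> (u64, u64) {
    let new_end = old_end + EXPAND_STEP;
    let pages = (new_end - old_end).div_ceil(PAGE_SIZE);
    (new_end, pages)
}

fn main() {
    // Heap starts at 0x_4444_4444_0000 with 64 KiB premapped.
    let old_end = 0x4444_4444_0000u64 + 64 * 1024;
    let (new_end, pages) = grow_heap(old_end);
    println!("new_end = {:#x}, pages_to_map = {}", new_end, pages);
}
```

Each growth step therefore costs 256 frame allocations and page-table insertions before `talc.claim()` runs.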

docs/architecture/memory/paging.md

Lines changed: 44 additions & 1 deletion

@@ -1 +1,44 @@
-# Paging
+# Virtual Memory Management (VMM)
+
+Proka manages virtual memory with the x86_64 four-level page table, layering higher-level abstractions on top of the `x86_64` crate to support on-demand allocation and dynamic region management.
+
+## Core Concepts
+
+### VmArea (virtual memory area)
+
+A `VmArea` represents one contiguous virtual memory range. It defines:
+- **Range**: start and end virtual addresses (page-aligned).
+- **Permissions**: read, write, execute, and kernel/user access.
+- **Semantics**: the region's purpose (e.g. the `.text` segment, heap, stack).
+
+### MemorySet (address space)
+
+A `MemorySet` is a collection of `VmArea`s representing a complete virtual address space (usually corresponding to one page table).
+- Manages a set of non-overlapping `VmArea`s.
+- Binds an `OffsetPageTable` instance.
+- Handles logic specific to that address space, such as page faults.
+
+## Page Fault Handling
+
+The kernel implements **lazy allocation** through `MemorySet`. When the CPU touches a virtual address with no physical mapping yet, a `#PF` exception fires:
+
+1. **Exception capture**: the interrupt handler `pagefault_handler` reads the faulting address.
+2. **Area lookup**: `KERNEL_MEMORY_SET` is searched for the `VmArea` containing that address.
+3. **On-demand allocation**:
+   - If the address lies inside a valid `VmArea`, the kernel allocates a physical frame.
+   - A mapping is installed in the page table with the permissions that `VmArea` defines.
+4. **Resume**: the CPU re-executes the faulting instruction, which now succeeds.
+
+This greatly reduces page-table work during kernel initialization and enables seamless heap growth.
+
+## Kernel Address-Space Layout
+
+During kernel init, `MemorySet::new_kernel` scans the kernel image segments via linker symbols and installs the initial mappings:
+- **.text**: read-only, executable.
+- **.rodata**: read-only, non-executable.
+- **.data / .bss**: read-write, non-executable.
+- **Heap**: a small initial mapping, grown on demand.
+
+## Implementation Notes
+
+The code lives in `kernel/src/memory/vmm.rs`:
+- the global `KERNEL_MEMORY_SET` is guarded by a `spin::Mutex`.
+- relies on a `FrameAllocator` for physical frames.
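
The fault-handling decision in steps 1-3 reduces to an interval lookup over the areas. A minimal sketch with plain `u64` addresses and an illustrative `VmArea` struct, not the kernel's real types:

```rust
// Minimal model of the page-fault area lookup described above.
// This VmArea is an illustrative stand-in for the kernel's type.
struct VmArea {
    start: u64, // inclusive, page-aligned
    end: u64,   // exclusive, page-aligned
}

/// A fault inside some area is a lazy-allocation fault (map a frame);
/// a fault outside every area is a genuine error.
fn find_area(areas: &[VmArea], fault_addr: u64) -> Option<&VmArea> {
    areas.iter().find(|a| a.start <= fault_addr && fault_addr < a.end)
}

fn main() {
    let areas = [VmArea { start: 0x4444_4444_0000, end: 0x4444_4445_0000 }];
    assert!(find_area(&areas, 0x4444_4444_8000).is_some()); // lazy-alloc path
    assert!(find_area(&areas, 0xdead_0000).is_none());      // genuine fault
}
```

The real handler additionally consults the area's permission bits before mapping, so a write to a read-only area still faults hard.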

kernel/src/graphics/core.rs

Lines changed: 15 additions & 12 deletions

@@ -59,7 +59,7 @@ impl<'a> Renderer<'a> {
         // Init BF, color = black (0)
         let back_buffer = vec![0; buffer_size];
         Self {
-            framebuffer: framebuffer,
+            framebuffer,
             back_buffer,
             pixel_size,
             clear_color: color::BLACK,
@@ -87,7 +87,7 @@ impl<'a> Renderer<'a> {
             let value: u32 = ((color.r as u32) << self.framebuffer.red_mask_shift())
                 | ((color.g as u32) << self.framebuffer.green_mask_shift())
                 | ((color.b as u32) << self.framebuffer.blue_mask_shift());
-            return value;
+            value
         } else if self.bpp == 24 {
             color.to_u32(false)
         } else {
@@ -96,6 +96,11 @@ impl<'a> Renderer<'a> {
     }

     /// Draw pixel to BF
+    ///
+    /// # Safety
+    ///
+    /// This function is unsafe because it does not check if the coordinates are within the
+    /// framebuffer boundaries. The caller must ensure that `x` and `y` are valid.
     #[inline(always)]
     pub unsafe fn set_pixel_raw_unchecked(&mut self, x: u64, y: u64, color: &color::Color) {
         let offset = self.get_buffer_offset(x, y);
@@ -177,17 +182,15 @@ impl<'a> Renderer<'a> {
     pub fn clear(&mut self) {
         let width = self.framebuffer.width();
         let height = self.framebuffer.height();
-        let color = self.clear_color.clone();
+        let color = self.clear_color;
         // Optimize clear operation
         let masked_clear_color = self.mask_color(&color);
         let pixel_bytes = masked_clear_color.to_le_bytes(); // To byte array
         let bytes_to_fill = &pixel_bytes[..self.pixel_size];
         for y in 0..height {
             for x in 0..width {
                 let offset = self.get_buffer_offset(x, y);
-                for i in 0..self.pixel_size {
-                    self.back_buffer[offset + i] = bytes_to_fill[i];
-                }
+                self.back_buffer[offset..offset + self.pixel_size].copy_from_slice(bytes_to_fill);
             }
         }
@@ -200,8 +203,8 @@ impl<'a> Renderer<'a> {
     /* ======== Drawing Example functions ======== */
     /// Draw a line
     pub fn draw_line(&mut self, p1: Pixel, p2: Pixel, color: color::Color) {
-        let dx_abs = ((p2.x as i64 - p1.x as i64).abs()) as u64;
-        let dy_abs = ((p2.y as i64 - p1.y as i64).abs()) as u64;
+        let dx_abs = (p2.x as i64 - p1.x as i64).unsigned_abs();
+        let dy_abs = (p2.y as i64 - p1.y as i64).unsigned_abs();
         let steep = dy_abs > dx_abs;
         let (mut x1, mut y1) = p1.to_coord();
         let (mut x2, mut y2) = p2.to_coord();
@@ -214,7 +217,7 @@ impl<'a> Renderer<'a> {
             core::mem::swap(&mut y1, &mut y2);
         }
         let dx = x2 - x1;
-        let dy = (y2 as i64 - y1 as i64).abs() as u64;
+        let dy = (y2 as i64 - y1 as i64).unsigned_abs();
         let mut error = (dx / 2) as i64;
         let y_step = if y1 < y2 { 1 } else { -1 };
         let mut y = y1 as i64;
@@ -348,7 +351,7 @@ impl<'a> Renderer<'a> {
     }

     /// Draw a rectangle
-    pub fn draw_rect(&mut self, pixel: Pixel, width: u64, height: u64, color: color::Color) -> () {
+    pub fn draw_rect(&mut self, pixel: Pixel, width: u64, height: u64, color: color::Color) {
         let (x, y) = pixel.to_coord();
         let x2 = x + width;
         let y2 = y + height;
@@ -364,9 +367,9 @@ impl<'a> Renderer<'a> {
         let (x_min, y_min) = pixel.to_coord();
         let x_max = x_min + width;
         let y_max = y_min + height;
-        let x_start = x_min.max(0);
+        let x_start = x_min;
         let x_end = x_max.min(self.width() - 1);
-        let y_start = y_min.max(0);
+        let y_start = y_min;
         let y_end = y_max.min(self.height() - 1);
         for y in y_start..=y_end {
             for x in x_start..=x_end {
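
The `abs() as u64` -> `unsigned_abs()` change above is not just stylistic: `i64::abs()` overflows on `i64::MIN` (panicking in debug builds), while `unsigned_abs()` is total and returns a `u64` directly, which is exactly what the Bresenham deltas need. A quick standalone demonstration:

```rust
fn main() {
    // Equivalent for ordinary values...
    let dx = (7i64 - 100i64).unsigned_abs();
    assert_eq!(dx, 93u64);
    // ...but total at the edge case where abs() would overflow:
    assert_eq!(i64::MIN.unsigned_abs(), 1u64 << 63);
}
```

(The `.max(0)` removals in `fill_rect` are similar dead-code cleanups: the coordinates are already `u64`, so clamping against 0 was a no-op.)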

kernel/src/libs/time/tsc.rs

Lines changed: 1 addition & 5 deletions

@@ -57,11 +57,7 @@ pub fn init() {
         // time = pit_delta / PIT_FREQ
         // freq = tsc_delta * PIT_FREQ / pit_delta

-        if pit_delta == 0 {
-            0 // Failed
-        } else {
-            (tsc_delta * PIT_FREQ) / pit_delta
-        }
+        (tsc_delta * PIT_FREQ).checked_div(pit_delta).unwrap_or(0)
     });

     TSC_FREQUENCY.store(freq, Ordering::Relaxed);
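
The rewrite above folds the zero-guard into `checked_div`: dividing by zero yields `None`, which `unwrap_or(0)` turns back into the "calibration failed" sentinel. A sketch, assuming a PIT base frequency of 1,193,182 Hz (the canonical PIT clock; the kernel's own `PIT_FREQ` constant is not shown in this diff):

```rust
const PIT_FREQ: u64 = 1_193_182; // canonical PIT clock, assumed here

/// Same expression as the patched code: freq = tsc_delta * PIT_FREQ / pit_delta,
/// with a zero pit_delta mapped to 0 ("calibration failed").
fn tsc_freq(tsc_delta: u64, pit_delta: u64) -> u64 {
    (tsc_delta * PIT_FREQ).checked_div(pit_delta).unwrap_or(0)
}

fn main() {
    assert_eq!(tsc_freq(1_000, 0), 0);        // failed calibration
    assert_eq!(tsc_freq(100, 100), PIT_FREQ); // TSC ticking at the PIT rate
}
```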

kernel/src/memory/allocator.rs

Lines changed: 25 additions & 15 deletions

@@ -3,7 +3,6 @@
 //! This module implements the heap allocator for the kernel.
 //! It uses the `talc` crate to manage heap memory with dynamic growth support.

-use crate::config::KERNEL_DEFAULT_HEAP_SIZE;
 use talc::{Span, Talc, Talck};
 use x86_64::{
     structures::paging::{
@@ -22,45 +21,56 @@ impl talc::OomHandler for KernelOomHandler {
     fn handle_oom(talc: &mut Talc<Self>, _layout: core::alloc::Layout) -> Result<(), ()> {
         // Expand by 1MB at least
         let expand_size = 1024 * 1024;

         let mut ms_lock = crate::memory::vmm::KERNEL_MEMORY_SET.lock();
         let memory_set = ms_lock.as_mut().ok_or(())?;

         // Find heap area
         let (old_end, new_end) = {
-            let heap_area = memory_set.areas.iter_mut().find(|a| a.name == "heap").ok_or(())?;
+            let heap_area = memory_set
+                .areas
+                .iter_mut()
+                .find(|a| a.name == "heap")
+                .ok_or(())?;
             let old_end = heap_area.end;
             let new_end = old_end + expand_size;
             heap_area.end = new_end;
             (old_end, new_end)
         };

         // Map the new pages MANUALLY to avoid deadlock via #PF
         let page_range = {
             let start_page = Page::containing_address(old_end);
             let end_page = Page::containing_address(new_end - 1u64);
             Page::range_inclusive(start_page, end_page)
         };

         let memory_map_response = crate::MEMORY_MAP_REQUEST
             .get_response()
             .expect("Failed to get memory map response");
-        let mut frame_allocator = unsafe { crate::memory::paging::init_frame_allocator(memory_map_response) };
+        let mut frame_allocator =
+            unsafe { crate::memory::paging::init_frame_allocator(memory_map_response) };

         for page in page_range {
             let frame = frame_allocator.allocate_frame().ok_or(())?;
-            let flags = PageTableFlags::PRESENT | PageTableFlags::WRITABLE | PageTableFlags::NO_EXECUTE;
+            let flags =
+                PageTableFlags::PRESENT | PageTableFlags::WRITABLE | PageTableFlags::NO_EXECUTE;
             unsafe {
-                memory_set.page_table.map_to(page, frame, flags, &mut frame_allocator).map_err(|_| ())?.flush();
+                memory_set
+                    .page_table
+                    .map_to(page, frame, flags, &mut frame_allocator)
+                    .map_err(|_| ())?
+                    .flush();
             }
         }

         drop(ms_lock);

         unsafe {
-            talc.claim(Span::new(old_end.as_mut_ptr(), new_end.as_mut_ptr())).map_err(|_| ())?;
+            talc.claim(Span::new(old_end.as_mut_ptr(), new_end.as_mut_ptr()))
+                .map_err(|_| ())?;
         }

         Ok(())
     }
 }

kernel/src/memory/mod.rs

Lines changed: 3 additions & 0 deletions

@@ -4,6 +4,9 @@ pub mod paging;
 pub mod protection;
 pub mod vmm;

+pub use paging::{phys_to_virt, virt_to_phys_direct};
+pub use vmm::translate_addr;
+
 pub fn init() {
     let memory_map_response = crate::MEMORY_MAP_REQUEST
         .get_response()

kernel/src/memory/paging.rs

Lines changed: 14 additions & 1 deletion

@@ -14,7 +14,7 @@ use limine::response::MemoryMapResponse;
 use x86_64::{
     registers::control::Cr3,
     structures::paging::{OffsetPageTable, PageTable},
-    VirtAddr,
+    PhysAddr, VirtAddr,
 };

 /// Retrieve the HHDM (Higher Half Direct Map) offset from Limine
@@ -29,6 +29,19 @@ pub fn get_hhdm_offset() -> VirtAddr {
     )
 }

+/// Convert physical address to virtual address using HHDM
+pub fn phys_to_virt(phys: PhysAddr) -> VirtAddr {
+    VirtAddr::new(phys.as_u64() + get_hhdm_offset().as_u64())
+}
+
+/// Convert virtual address to physical address (only for HHDM)
+///
+/// # Safety
+/// The caller must ensure the virtual address is within the HHDM region.
+pub unsafe fn virt_to_phys_direct(virt: VirtAddr) -> PhysAddr {
+    PhysAddr::new(virt.as_u64() - get_hhdm_offset().as_u64())
+}
+
 /// Initialize an OffsetPageTable for accessing page tables
 ///
 /// # Arguments
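
The two helpers added above are inverse affine maps under a fixed HHDM offset. Modeled with plain `u64` instead of `PhysAddr`/`VirtAddr` (the offset value here is illustrative; Limine supplies the real one at boot):

```rust
// Illustrative HHDM offset; the real value comes from Limine at boot.
const HHDM_OFFSET: u64 = 0xFFFF_8000_0000_0000;

fn phys_to_virt(phys: u64) -> u64 {
    phys + HHDM_OFFSET
}

/// Only valid for addresses inside the HHDM window, mirroring the
/// `# Safety` contract on the kernel's virt_to_phys_direct.
fn virt_to_phys_direct(virt: u64) -> u64 {
    virt - HHDM_OFFSET
}

fn main() {
    let phys = 0x0010_0000u64; // 1 MiB
    let virt = phys_to_virt(phys);
    assert_eq!(virt, 0xFFFF_8000_0010_0000);
    assert_eq!(virt_to_phys_direct(virt), phys); // round-trips exactly
}
```

This is why `virt_to_phys_direct` is `unsafe` in the kernel: for addresses outside the HHDM window the subtraction produces garbage (or wraps), with no way to detect it locally.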
