
Add support for non-contiguous physical page mapping#666

Open
sangho2 wants to merge 7 commits into main from sanghle/lvbs/vmap2

Conversation

@sangho2 (Contributor) commented Feb 18, 2026

This PR introduces a vmap subsystem to the LVBS platform to map non-contiguous physical page frames into a virtually contiguous address range.

Changes

  • Add map_non_contiguous_phys_frames, which is analogous to the existing map_phys_frame_range but maps non-contiguous physical page frames.
  • Reserve a 1 TB VA window for vmap and vunmap
  • Maintain separate hash tables for tracking PA<->VA for VMAPs (mm/vmap.rs)
  • Support cross-core TLB flush
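The PA<->VA tracking described above can be sketched roughly as follows. This is an illustrative model only: the struct, method names, and constants here are assumptions for exposition, not the PR's actual mm/vmap.rs API.

```rust
use std::collections::HashMap;

// Illustrative page size; the PR targets x86-64 4 KiB pages.
const PAGE_SIZE: u64 = 4096;

/// Hypothetical PA<->VA bookkeeping in the spirit of mm/vmap.rs:
/// scattered physical frames get consecutive virtual pages, and both
/// directions are recorded so vunmap can find the frames again.
struct VmapTable {
    va_to_pa: HashMap<u64, u64>,
    pa_to_va: HashMap<u64, u64>,
}

impl VmapTable {
    fn new() -> Self {
        VmapTable { va_to_pa: HashMap::new(), pa_to_va: HashMap::new() }
    }

    /// Record scattered frames at a virtually contiguous base address.
    fn map_non_contiguous(&mut self, base_va: u64, frames: &[u64]) {
        for (i, &pa) in frames.iter().enumerate() {
            let va = base_va + (i as u64) * PAGE_SIZE;
            self.va_to_pa.insert(va, pa);
            self.pa_to_va.insert(pa, va);
        }
    }
}

fn main() {
    let mut table = VmapTable::new();
    // Physically scattered frames become virtually contiguous pages.
    let frames = [0x9000_0000u64, 0x1234_5000, 0x7777_7000];
    table.map_non_contiguous(0x6000_0000_0000, &frames);
    assert_eq!(table.va_to_pa[&0x6000_0000_1000], 0x1234_5000);
    assert_eq!(table.pa_to_va[&0x7777_7000], 0x6000_0000_2000);
    println!("mapped {} frames", frames.len()); // prints "mapped 3 frames"
}
```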

@sangho2 sangho2 marked this pull request as ready for review February 18, 2026 00:58
@sangho2 sangho2 force-pushed the sanghle/lvbs/vmap2 branch 4 times, most recently from debb8fe to 2ed4daf Compare February 19, 2026 04:03
@wdcui (Member) left a comment:

Overall, the code looks good to me. I left some comments below.

&mut inner,
Page::range_inclusive(
start_page,
start_page + (mapped_count as u64 - 1),
Member:

why mapped_count - 1?

Contributor Author:

rollback_mapped_pages internally calls the x86_64 crate's clean_up_addr_range, which expects an inclusive range. We can switch rollback_mapped_pages to an exclusive range if that is more intuitive.
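The off-by-one in question can be shown with a small model. Since Page::range_inclusive in the x86_64 crate includes both endpoints, covering exactly mapped_count pages starting at start_page requires end = start_page + (mapped_count - 1). This sketch uses plain page indices instead of the crate's Page type:

```rust
// Model pages as plain indices; an inclusive range [start, end]
// covers end - start + 1 pages, matching Page::range_inclusive
// semantics in the x86_64 crate.
fn pages_covered_inclusive(start: u64, end: u64) -> u64 {
    end - start + 1
}

fn main() {
    let start_page = 100u64;
    let mapped_count = 4u64;
    // The PR's expression: inclusive end is start + (mapped_count - 1).
    let end_page = start_page + (mapped_count - 1);
    assert_eq!(end_page, 103);
    assert_eq!(pages_covered_inclusive(start_page, end_page), mapped_count);
    // Using `start + mapped_count` instead would cover one page too many.
    assert_eq!(pages_covered_inclusive(start_page, start_page + mapped_count), mapped_count + 1);
}
```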

&& FLUSH_TLB
{
fl.flush();
}
Member:

Should we log the error at least?

Contributor Author:

You mean log whether we rolled back mappings? We can log the error, but rollback_mapped_pages is already on an error path.

///
/// Returns `Some(VirtAddr)` with the starting virtual address on success,
/// or `None` if insufficient virtual address space is available.
fn allocate_va_range(&mut self, num_pages: usize) -> Option<VirtAddr> {
Member:

Do we need to support address alignment?

Contributor Author:

Functions in this module are called through vmap, which enforces a type-based strict alignment requirement.


// Try to find a suitable range in the free set (first-fit)
for range in self.free_set.iter() {
if range.end - range.start >= size {
Member:

Here the range's size is measured in bytes, while elsewhere a VA range is measured in pages. It's a little confusing.

Contributor Author:

Makes sense. Let me see whether I can change it.
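A version of the first-fit scan that measures everything in pages, as the reviewer suggests, could look like this. The free-set representation, bounds, and names are illustrative assumptions, not the PR's actual implementation:

```rust
use std::collections::BTreeSet;

const PAGE_SIZE: u64 = 4096;

/// Hypothetical VA allocator whose free ranges are stored in pages
/// rather than bytes, avoiding the unit mismatch raised in the review.
struct VaAllocator {
    // Free ranges stored as (start_page, end_page_exclusive).
    free_set: BTreeSet<(u64, u64)>,
}

impl VaAllocator {
    /// First-fit: take the lowest free range with >= num_pages pages.
    fn allocate_va_range(&mut self, num_pages: u64) -> Option<u64> {
        let &(start, end) = self
            .free_set
            .iter()
            .find(|&&(s, e)| e - s >= num_pages)?;
        self.free_set.remove(&(start, end));
        // Return the leftover tail to the free set, if any.
        if start + num_pages < end {
            self.free_set.insert((start + num_pages, end));
        }
        Some(start * PAGE_SIZE) // virtual address of the first page
    }
}

fn main() {
    let mut a = VaAllocator {
        free_set: BTreeSet::from([(0, 8), (16, 32)]),
    };
    assert_eq!(a.allocate_va_range(4), Some(0));               // fits in (0, 8)
    assert_eq!(a.allocate_va_range(8), Some(16 * PAGE_SIZE));  // (4, 8) too small
    assert_eq!(a.allocate_va_range(100), None);                // nothing fits
    println!("remaining free set: {:?}", a.free_set);
}
```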

}
}

if ALIGN != PAGE_SIZE {
Member:

Nit: check alignment before the more expensive loop?
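The nit amounts to hoisting the constant-time alignment check ahead of the O(n) first-fit scan so unsupported requests fail fast. A sketch under illustrative assumptions (the function shape, units, and rejection rule are not the PR's actual code):

```rust
const PAGE_SIZE: u64 = 4096;

/// Hypothetical allocator illustrating the check ordering: the cheap
/// alignment precondition runs before the more expensive scan.
fn allocate_aligned(free_ranges: &[(u64, u64)], size: u64, align: u64) -> Option<u64> {
    // Cheap check first: reject unsupported alignments up front
    // instead of discovering the problem inside the loop.
    if align != PAGE_SIZE && !align.is_power_of_two() {
        return None;
    }
    // Then the more expensive first-fit loop over free (start, end) ranges.
    for &(start, end) in free_ranges {
        let aligned_start = (start + align - 1) & !(align - 1);
        if aligned_start < end && end - aligned_start >= size {
            return Some(aligned_start);
        }
    }
    None
}

fn main() {
    let free = [(0x1000u64, 0x9000u64)];
    assert_eq!(allocate_aligned(&free, 0x2000, PAGE_SIZE), Some(0x1000));
    assert_eq!(allocate_aligned(&free, 0x2000, 3), None); // early reject
}
```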

/// Start of the vmap virtual address region.
/// This address is chosen to be within the 4-level paging canonical address space
/// and not conflict with VTL1's direct-mapped physical memory.
const VMAP_START: u64 = 0x6000_0000_0000;
Member:

How do we prevent the page table code from using this va range?

Contributor Author:

By default, we use a global offset when mapping physical page frames. Unless the target machine has an extreme amount of memory, there is no overlap. We can add an extra check to the normal map function if needed.
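The no-overlap argument can be made concrete with an interval check. The direct-map offset and window size below are illustrative assumptions (VMAP_START is the PR's constant; the PR reserves a 1 TB window):

```rust
// Assumed direct-map offset for illustration; the PR only says a
// global offset is used, not its value.
const DIRECT_MAP_OFFSET: u64 = 0;
// From the PR: start of the vmap VA region.
const VMAP_START: u64 = 0x6000_0000_0000; // 96 TiB
// From the PR: 1 TB VA window reserved for vmap/vunmap.
const VMAP_SIZE: u64 = 1 << 40;

/// Half-open interval overlap test between the direct physical map
/// and the vmap window.
fn direct_map_overlaps_vmap(phys_mem_bytes: u64) -> bool {
    let direct_end = DIRECT_MAP_OFFSET + phys_mem_bytes;
    DIRECT_MAP_OFFSET < VMAP_START + VMAP_SIZE && VMAP_START < direct_end
}

fn main() {
    // 1 TiB of RAM: the direct map ends far below VMAP_START (96 TiB).
    assert!(!direct_map_overlaps_vmap(1 << 40));
    // Only an extreme amount of memory (> 96 TiB here) would collide,
    // which is where an extra check in the normal map path would matter.
    assert!(direct_map_overlaps_vmap(97 << 40));
}
```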

@sangho2 sangho2 force-pushed the sanghle/lvbs/vmap2 branch 2 times, most recently from 2e1da17 to 5ccdbae Compare February 21, 2026 00:07
@github-actions

🤖 SemverChecks 🤖 ⚠️ Potential breaking API changes detected ⚠️

--- failure enum_variant_missing: pub enum variant removed or renamed ---

Description:
A publicly-visible enum has at least one variant that is no longer available under its prior name. It may have been renamed or removed entirely.
        ref: https://doc.rust-lang.org/cargo/reference/semver.html#item-remove
       impl: https://github.com/obi1kenobi/cargo-semver-checks/tree/v0.46.0/src/lints/enum_variant_missing.ron

Failed in:
  variant PhysPointerError::NonContiguousPages, previously in file /home/runner/work/litebox/litebox/target/semver-checks/git-main/48693f9106fd64135d69f97c954a376d5bd51c97/litebox_common_linux/src/vmap.rs:170
