
Commit f0ff0b5

wenyuzhao and qinsoon authored
Compressed Oops Support (#235)
This PR adds compressed oops support for mmtk-openjdk, and enables it by default.

# Implementation strategy and workarounds

## Heap layout and compression policy

This PR uses the `Edge` type to abstract over compressed and uncompressed edges. Object field loads and stores for uncompressed edges work as before. Loads and stores for compressed edges involve an additional compression or decompression step. In general, this is the function to decode a 32-bit compressed pointer to its uncompressed form:

```rust
fn decode(compressed_oop: u32) -> u64 {
    BASE + ((compressed_oop as u64) << SHIFT)
}
```

OpenJDK has a few optimizations to reduce the add and shift operations in JIT-compiled code; this PR supports them all:

1. For heaps <= 3G, if we set the heap range to `0x4000_0000..0x1_0000_0000`, it is possible to remove both the add and the shift. The compressed and uncompressed forms are identical.
   * Set `BASE = 0` and `SHIFT = 0` for this case.
2. For heaps <= 31G, if we set the heap range to `0x4000_0000..0x8_0000_0000`, it is possible to remove the add.
   * Set `BASE = 0` and `SHIFT = 3` for this case.
3. For heaps > 31G, both the add and the shift are still necessary.

For cases (1) and (2), the JIT-compiled code contains fewer (or even no) encoding/decoding instructions, which improves mutator performance. In Rust code, however, we still do the add and shift unconditionally, even when `BASE` or `SHIFT` is set to zero.

## NULL pointer checking

Generally, `BASE` can be any address as long as the memory is not reserved by others. However, `BASE` must be smaller than `HEAP_START`; otherwise `HEAP_START` would be encoded as `0` and treated as a null pointer. As OpenJDK does, we set `BASE` to `HEAP_START - 4096` to solve this issue.

## Type specialization

Since we only support one edge type per binding, providing two `OpenJDKEdge` types in one `MMTK` instance is not possible.
This PR solves the issue by specializing almost all the types in the binding with a `const COMPRESSED: bool` generic type argument. It provides two `MMTK` singletons: `MMTK<OpenJDK<COMPRESSED = true>>` and `MMTK<OpenJDK<COMPRESSED = false>>`. `MMTK<OpenJDK<COMPRESSED = true>>` has the `OpenJDKEdge<COMPRESSED = true>` edge type that does the extra pointer compression/decompression. The two MMTK singletons are wrapped in two lazy_static global variables. The binding only initializes one of them, depending on the OpenJDK command-line arguments. Initializing the one that does not match the `UseCompressedOops` flag will trigger an assertion failure.

## Pointer tagging

When compressed oops is enabled, all object fields are guaranteed to hold compressed oops. However, stack or other global root pointers may still be uncompressed. The GC needs to handle both compressed and uncompressed edges and be able to distinguish between them. To support this, this PR treats all root `OpenJDKEdge<COMPRESSED = true>`s as tagged pointers. If the 63rd bit is set, the edge points to a 64-bit uncompressed oop instead of a compressed oop, and the `OpenJDKEdge::{load, store}` methods skip the encoding/decoding step. For object field edges, the encoding is performed unconditionally, without the pointer tag check. When compressed oops is disabled, there is no pointer tag check either.

## Embedded pointers

Some (or probably all) pointers embedded in code objects are also compressed. On x64, such a pointer is always compressed to a `u32` integer that sits in an unaligned memory location. This means we need to (1) treat these pointers as compressed oops, just like other roots, and (2) still perform the unaligned loads and stores. On other architectures, however, compressed embedded pointers may not be encoded as a `u32`.

## Compressed `Klass*` pointers

When `UseCompressedOops` is enabled, by default it also enables `UseCompressedClassPointers`.
This compresses the `Klass*` pointer in the object header to a `u32` as well. This PR supports class pointer compression too. However, class pointer compression is only supported and tested when compressed oops is enabled. The two flags must be enabled or disabled together; enabling only one of them is not tested, not supported, and will trigger a runtime assertion failure.

---

# Performance results

[SemiSpace](http://squirrel.anu.edu.au/plotty-public/wenyuz/v8/p/mm26Ra)

[Immix](http://squirrel.anu.edu.au/plotty-public/wenyuz/v8/p/wEDPv4)

---------

Co-authored-by: Yi Lin <[email protected]>
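The compression policy and null-avoidance rule described above can be sketched as a standalone program. This is illustrative only: `CompressPolicy`, `is_tagged_uncompressed`, and the concrete heap-start value are assumptions for the sketch, not the binding's real API.

```rust
// Sketch of the compressed-oop encoding described above (not the binding's code).
// BASE/SHIFT follow the three heap-size cases; bit 63 models the root-edge tag
// that marks a 64-bit uncompressed oop.
const TAG_UNCOMPRESSED: u64 = 1 << 63;

struct CompressPolicy {
    base: u64,
    shift: u64,
}

impl CompressPolicy {
    /// Pick BASE and SHIFT from the heap size (in GB) and a hypothetical heap start.
    fn new(heap_gb: u64, heap_start: u64) -> Self {
        if heap_gb <= 3 {
            CompressPolicy { base: 0, shift: 0 } // heap in 1G..4G: no add, no shift
        } else if heap_gb <= 31 {
            CompressPolicy { base: 0, shift: 3 } // heap in 1G..32G: shift only
        } else {
            // BASE must stay below HEAP_START so no valid oop encodes to 0 (null).
            CompressPolicy { base: heap_start - 4096, shift: 3 }
        }
    }

    fn encode(&self, oop: u64) -> u32 {
        ((oop - self.base) >> self.shift) as u32
    }

    fn decode(&self, compressed: u64) -> u64 {
        self.base + (compressed << self.shift)
    }
}

/// Root edges with bit 63 set hold an uncompressed oop and skip decoding.
fn is_tagged_uncompressed(root: u64) -> bool {
    root & TAG_UNCOMPRESSED != 0
}

fn main() {
    // Case 2 (<= 31G): shift-only encoding round-trips.
    let p = CompressPolicy::new(31, 0x4000_0000);
    let oop = 0x4000_0040u64;
    assert_eq!(p.decode(p.encode(oop) as u64), oop);
    // Case 3 (> 31G): with BASE = HEAP_START - 4096, HEAP_START never encodes to 0.
    let p = CompressPolicy::new(64, 0x4000_0000);
    assert_ne!(p.encode(0x4000_0000), 0);
    // Tagged root edges are recognized as uncompressed.
    assert!(is_tagged_uncompressed(TAG_UNCOMPRESSED | 0x4000_0000));
    println!("ok");
}
```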
1 parent 9ab13ae commit f0ff0b5

24 files changed (+1003 −399 lines)

.github/scripts/ci-matrix-result-check.py

Lines changed: 3 additions & 3 deletions

```diff
@@ -36,11 +36,11 @@ def read_in_plans():
             value = m.group(1)
         else:
             raise ValueError(f"Cannot find a plan string in {prop}")
-
+
         # Store the value in the dictionary
         key = chr(97+i)
         results[key] = value
-
+
     return results

 def read_in_actual_results(line, plan_dict):
@@ -144,7 +144,7 @@ def print_log(directory, search_string):
         if expected[plan] == "ignore":
             print(f"Result for {plan} is ignored")
             continue
-
+
         if expected[plan] != actual[plan]:
             error_no = 1
             if expected[plan] == "pass":
```

.github/scripts/ci-test-assertions.sh

Lines changed: 2 additions & 1 deletion

```diff
@@ -56,4 +56,5 @@ sudo sysctl -w vm.max_map_count=655300
 export MMTK_PLAN=PageProtect
 
 build/linux-x86_64-normal-server-$DEBUG_LEVEL/jdk/bin/java -XX:+UseThirdPartyHeap -server -XX:MetaspaceSize=100M -Xms4G -Xmx4G -jar $DACAPO_PATH/dacapo-2006-10-MR2.jar fop
-build/linux-x86_64-normal-server-$DEBUG_LEVEL/jdk/bin/java -XX:+UseThirdPartyHeap -server -XX:MetaspaceSize=100M -Xms4G -Xmx4G -jar $DACAPO_PATH/dacapo-2006-10-MR2.jar luindex
+# Note: Disable compressed pointers for luindex as it does not work well with GC plans that uses virtual memory excessively.
+build/linux-x86_64-normal-server-$DEBUG_LEVEL/jdk/bin/java -XX:+UseThirdPartyHeap -server -XX:MetaspaceSize=100M -XX:-UseCompressedOops -XX:-UseCompressedClassPointers -Xms4G -Xmx4G -jar $DACAPO_PATH/dacapo-2006-10-MR2.jar luindex
```

.github/scripts/ci-test-malloc-mark-sweep.sh

Lines changed: 1 addition & 1 deletion

```diff
@@ -16,7 +16,7 @@ run_test() {
 # Malloc marksweep is horribly slow. We just run fop.
 
 # build/linux-x86_64-normal-server-$DEBUG_LEVEL/jdk/bin/java -XX:+UseThirdPartyHeap -server -XX:MetaspaceSize=100M -Xms500M -Xmx500M -jar $DACAPO_PATH/dacapo-2006-10-MR2.jar antlr
-build/linux-x86_64-normal-server-$DEBUG_LEVEL/jdk/bin/java -XX:+UseThirdPartyHeap -server -XX:MetaspaceSize=100M -Xms50M -Xmx50M -jar $DACAPO_PATH/dacapo-2006-10-MR2.jar fop
+build/linux-x86_64-normal-server-$DEBUG_LEVEL/jdk/bin/java -XX:-UseCompressedOops -XX:-UseCompressedClassPointers -XX:+UseThirdPartyHeap -server -XX:MetaspaceSize=100M -Xms50M -Xmx50M -jar $DACAPO_PATH/dacapo-2006-10-MR2.jar fop
 # build/linux-x86_64-normal-server-$DEBUG_LEVEL/jdk/bin/java -XX:+UseThirdPartyHeap -server -XX:MetaspaceSize=100M -Xms500M -Xmx500M -jar $DACAPO_PATH/dacapo-2006-10-MR2.jar luindex
 # build/linux-x86_64-normal-server-$DEBUG_LEVEL/jdk/bin/java -XX:+UseThirdPartyHeap -server -XX:MetaspaceSize=100M -Xms500M -Xmx500M -jar $DACAPO_PATH/dacapo-2006-10-MR2.jar pmd
 # build/linux-x86_64-normal-server-$DEBUG_LEVEL/jdk/bin/java -XX:+UseThirdPartyHeap -server -XX:MetaspaceSize=100M -Xms500M -Xmx500M -jar $DACAPO_PATH/dacapo-2006-10-MR2.jar hsqldb
```

.github/scripts/ci-test-only-normal-no-compressed-oops.sh

Lines changed: 164 additions & 0 deletions

.github/scripts/ci-test-only-normal.sh

Lines changed: 0 additions & 11 deletions

```diff
@@ -151,14 +151,3 @@ build/linux-x86_64-normal-server-$DEBUG_LEVEL/jdk/bin/java -XX:+UseThirdPartyHea
 # These benchmarks take 40s+ for slowdebug build, we may consider removing them from the CI
 build/linux-x86_64-normal-server-$DEBUG_LEVEL/jdk/bin/java -XX:+UseThirdPartyHeap -server -XX:MetaspaceSize=100M -XX:TieredStopAtLevel=1 -Xms500M -Xmx500M -jar $DACAPO_PATH/dacapo-2006-10-MR2.jar hsqldb
 build/linux-x86_64-normal-server-$DEBUG_LEVEL/jdk/bin/java -XX:+UseThirdPartyHeap -server -XX:MetaspaceSize=100M -XX:TieredStopAtLevel=1 -Xms500M -Xmx500M -jar $DACAPO_PATH/dacapo-2006-10-MR2.jar eclipse
-
-# --- PageProtect ---
-# Make sure this runs last in our tests unless we want to set it back to the default limit.
-sudo sysctl -w vm.max_map_count=655300
-
-export MMTK_PLAN=PageProtect
-
-build/linux-x86_64-normal-server-$DEBUG_LEVEL/jdk/bin/java -XX:+UseThirdPartyHeap -server -XX:MetaspaceSize=100M -Xms4G -Xmx4G -jar $DACAPO_PATH/dacapo-2006-10-MR2.jar antlr
-build/linux-x86_64-normal-server-$DEBUG_LEVEL/jdk/bin/java -XX:+UseThirdPartyHeap -server -XX:MetaspaceSize=100M -Xms4G -Xmx4G -jar $DACAPO_PATH/dacapo-2006-10-MR2.jar fop
-build/linux-x86_64-normal-server-$DEBUG_LEVEL/jdk/bin/java -XX:+UseThirdPartyHeap -server -XX:MetaspaceSize=100M -Xms4G -Xmx4G -jar $DACAPO_PATH/dacapo-2006-10-MR2.jar luindex
-# build/linux-x86_64-normal-server-$DEBUG_LEVEL/jdk/bin/java -XX:+UseThirdPartyHeap -server -XX:MetaspaceSize=100M -Xms4G -Xmx4G -jar $DACAPO_PATH/dacapo-2006-10-MR2.jar pmd
```

.github/scripts/ci-test.sh

Lines changed: 2 additions & 0 deletions

```diff
@@ -6,6 +6,8 @@ cd $cur
 cd $cur
 ./ci-test-only-normal.sh
 cd $cur
+./ci-test-only-normal-no-compressed-oops.sh
+cd $cur
 ./ci-test-only-weak-ref.sh
 cd $cur
 ./ci-test-assertions.sh
```

mmtk/Cargo.lock

Lines changed: 2 additions & 0 deletions

mmtk/Cargo.toml

Lines changed: 2 additions & 0 deletions

```diff
@@ -24,6 +24,8 @@ openjdk_version = "28e56ee32525c32c5a88391d0b01f24e5cd16c0f"
 libc = "0.2"
 lazy_static = "1.1"
 once_cell = "1.10.0"
+atomic = "0.5.1"
+memoffset = "0.9.0"
 # Be very careful to commit any changes to the following mmtk dependency, as our CI scripts (including mmtk-core CI)
 # rely on matching these lines to modify them: e.g. comment out the git dependency and use the local path.
 # These changes are safe:
```

mmtk/src/abi.rs

Lines changed: 98 additions & 26 deletions

```diff
@@ -1,10 +1,14 @@
-use crate::UPCALLS;
+use super::UPCALLS;
+use crate::OpenJDKEdge;
+use atomic::Atomic;
+use atomic::Ordering;
 use mmtk::util::constants::*;
 use mmtk::util::conversions;
 use mmtk::util::ObjectReference;
 use mmtk::util::{Address, OpaquePointer};
 use std::ffi::CStr;
 use std::fmt;
+use std::sync::atomic::AtomicUsize;
 use std::{mem, slice};
 
 #[repr(i32)]
@@ -80,7 +84,7 @@ impl Klass {
     pub const LH_HEADER_SIZE_SHIFT: i32 = BITS_IN_BYTE as i32 * 2;
     pub const LH_HEADER_SIZE_MASK: i32 = (1 << BITS_IN_BYTE) - 1;
     pub unsafe fn cast<'a, T>(&self) -> &'a T {
-        &*(self as *const _ as usize as *const T)
+        &*(self as *const Self as *const T)
     }
     /// Force slow-path for instance size calculation?
     const fn layout_helper_needs_slow_path(lh: i32) -> bool {
@@ -168,7 +172,7 @@ impl InstanceKlass {
     const VTABLE_START_OFFSET: usize = Self::HEADER_SIZE * BYTES_IN_WORD;
 
     fn start_of_vtable(&self) -> *const usize {
-        unsafe { (self as *const _ as *const u8).add(Self::VTABLE_START_OFFSET) as _ }
+        (Address::from_ref(self) + Self::VTABLE_START_OFFSET).to_ptr()
     }
 
     fn start_of_itable(&self) -> *const usize {
@@ -263,24 +267,53 @@ impl InstanceRefKlass {
         }
         *DISCOVERED_OFFSET
     }
-    pub fn referent_address(oop: Oop) -> Address {
-        oop.get_field_address(Self::referent_offset())
+    pub fn referent_address<const COMPRESSED: bool>(oop: Oop) -> OpenJDKEdge<COMPRESSED> {
+        oop.get_field_address(Self::referent_offset()).into()
     }
-    pub fn discovered_address(oop: Oop) -> Address {
-        oop.get_field_address(Self::discovered_offset())
+    pub fn discovered_address<const COMPRESSED: bool>(oop: Oop) -> OpenJDKEdge<COMPRESSED> {
+        oop.get_field_address(Self::discovered_offset()).into()
     }
 }
 
+#[repr(C)]
+union KlassPointer {
+    /// uncompressed Klass pointer
+    klass: &'static Klass,
+    /// compressed Klass pointer
+    narrow_klass: u32,
+}
+
 #[repr(C)]
 pub struct OopDesc {
     pub mark: usize,
-    pub klass: &'static Klass,
+    klass: KlassPointer,
+}
+
+static COMPRESSED_KLASS_BASE: Atomic<Address> = Atomic::new(Address::ZERO);
+static COMPRESSED_KLASS_SHIFT: AtomicUsize = AtomicUsize::new(0);
+
+/// When enabling compressed pointers, the class pointers are also compressed.
+/// The c++ part of the binding should pass the compressed klass base and shift to rust binding, as object scanning will need it.
+pub fn set_compressed_klass_base_and_shift(base: Address, shift: usize) {
+    COMPRESSED_KLASS_BASE.store(base, Ordering::Relaxed);
+    COMPRESSED_KLASS_SHIFT.store(shift, Ordering::Relaxed);
 }
 
 impl OopDesc {
     pub fn start(&self) -> Address {
         unsafe { mem::transmute(self) }
     }
+
+    pub fn klass<const COMPRESSED: bool>(&self) -> &'static Klass {
+        if COMPRESSED {
+            let compressed = unsafe { self.klass.narrow_klass };
+            let addr = COMPRESSED_KLASS_BASE.load(Ordering::Relaxed)
+                + ((compressed as usize) << COMPRESSED_KLASS_SHIFT.load(Ordering::Relaxed));
+            unsafe { &*addr.to_ptr::<Klass>() }
+        } else {
+            unsafe { self.klass.klass }
+        }
+    }
 }
 
 impl fmt::Debug for OopDesc {
@@ -292,8 +325,24 @@ impl fmt::Debug for OopDesc {
     }
 }
 
+/// 32-bit compressed klass pointers
+#[repr(transparent)]
+#[derive(Clone, Copy)]
+pub struct NarrowKlass(u32);
+
 pub type Oop = &'static OopDesc;
 
+/// 32-bit compressed reference pointers
+#[repr(transparent)]
+#[derive(Clone, Copy)]
+pub struct NarrowOop(u32);
+
+impl NarrowOop {
+    pub fn slot(&self) -> Address {
+        Address::from_ref(self)
+    }
+}
+
 /// Convert ObjectReference to Oop
 impl From<ObjectReference> for &OopDesc {
     fn from(o: ObjectReference) -> Self {
@@ -323,8 +372,8 @@ impl OopDesc {
     }
 
     /// Calculate object instance size
-    pub unsafe fn size(&self) -> usize {
-        let klass = self.klass;
+    pub unsafe fn size<const COMPRESSED: bool>(&self) -> usize {
+        let klass = self.klass::<COMPRESSED>();
         let lh = klass.layout_helper;
         // The (scalar) instance size is pre-recorded in the TIB?
         if lh > Klass::LH_NEUTRAL_VALUE {
@@ -336,7 +385,7 @@ impl OopDesc {
         } else if lh <= Klass::LH_NEUTRAL_VALUE {
             if lh < Klass::LH_NEUTRAL_VALUE {
                 // Calculate array size
-                let array_length = self.as_array_oop().length();
+                let array_length = self.as_array_oop().length::<COMPRESSED>();
                 let mut size_in_bytes: usize =
                     (array_length as usize) << Klass::layout_helper_log2_element_size(lh);
                 size_in_bytes += Klass::layout_helper_header_size(lh) as usize;
@@ -356,34 +405,57 @@ pub struct ArrayOopDesc(OopDesc);
 pub type ArrayOop = &'static ArrayOopDesc;
 
 impl ArrayOopDesc {
-    const LENGTH_OFFSET: usize = mem::size_of::<Self>();
+    fn length_offset<const COMPRESSED: bool>() -> usize {
+        let klass_offset_in_bytes = memoffset::offset_of!(OopDesc, klass);
+        if COMPRESSED {
+            klass_offset_in_bytes + mem::size_of::<NarrowKlass>()
+        } else {
+            klass_offset_in_bytes + mem::size_of::<KlassPointer>()
+        }
+    }
 
     fn element_type_should_be_aligned(ty: BasicType) -> bool {
         ty == BasicType::T_DOUBLE || ty == BasicType::T_LONG
     }
 
-    fn header_size(ty: BasicType) -> usize {
-        let typesize_in_bytes =
-            conversions::raw_align_up(Self::LENGTH_OFFSET + BYTES_IN_INT, BYTES_IN_LONG);
+    fn header_size<const COMPRESSED: bool>(ty: BasicType) -> usize {
+        let typesize_in_bytes = conversions::raw_align_up(
+            Self::length_offset::<COMPRESSED>() + BYTES_IN_INT,
+            BYTES_IN_LONG,
+        );
         if Self::element_type_should_be_aligned(ty) {
             conversions::raw_align_up(typesize_in_bytes / BYTES_IN_WORD, BYTES_IN_LONG)
         } else {
             typesize_in_bytes / BYTES_IN_WORD
        }
     }
-    fn length(&self) -> i32 {
-        unsafe { *((self as *const _ as *const u8).add(Self::LENGTH_OFFSET) as *const i32) }
+    fn length<const COMPRESSED: bool>(&self) -> i32 {
+        unsafe { (Address::from_ref(self) + Self::length_offset::<COMPRESSED>()).load::<i32>() }
     }
-    fn base(&self, ty: BasicType) -> Address {
-        let base_offset_in_bytes = Self::header_size(ty) * BYTES_IN_WORD;
-        Address::from_ptr(unsafe { (self as *const _ as *const u8).add(base_offset_in_bytes) })
+    fn base<const COMPRESSED: bool>(&self, ty: BasicType) -> Address {
+        let base_offset_in_bytes = Self::header_size::<COMPRESSED>(ty) * BYTES_IN_WORD;
+        Address::from_ref(self) + base_offset_in_bytes
     }
-    // This provides an easy way to access the array data in Rust. However, the array data
-    // is Java types, so we have to map Java types to Rust types. The caller needs to guarantee:
-    // 1. <T> matches the actual Java type
-    // 2. <T> matches the argument, BasicType `ty`
-    pub unsafe fn data<T>(&self, ty: BasicType) -> &[T] {
-        slice::from_raw_parts(self.base(ty).to_ptr(), self.length() as _)
+    /// This provides an easy way to access the array data in Rust. However, the array data
+    /// is Java types, so we have to map Java types to Rust types. The caller needs to guarantee:
+    /// 1. `<T>` matches the actual Java type
+    /// 2. `<T>` matches the argument, BasicType `ty`
+    pub unsafe fn data<T, const COMPRESSED: bool>(&self, ty: BasicType) -> &[T] {
+        slice::from_raw_parts(
+            self.base::<COMPRESSED>(ty).to_ptr(),
+            self.length::<COMPRESSED>() as _,
+        )
+    }
+
+    pub unsafe fn slice<const COMPRESSED: bool>(
+        &self,
+        ty: BasicType,
+    ) -> crate::OpenJDKEdgeRange<COMPRESSED> {
+        let base = self.base::<COMPRESSED>(ty);
+        let start = base;
+        let lshift = OpenJDKEdge::<COMPRESSED>::LOG_BYTES_IN_EDGE;
+        let end = base + ((self.length::<COMPRESSED>() as usize) << lshift);
+        (start..end).into()
    }
 }
```
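The header-layout arithmetic behind the `ArrayOopDesc::length_offset` change in this commit can be checked standalone: on a 64-bit target the mark word is 8 bytes, and the klass field that follows it is 8 bytes uncompressed or 4 bytes when `UseCompressedClassPointers` shrinks it to a `u32`, so the array length field moves from offset 16 to offset 12. The sketch below is illustrative only; `length_offset` and `MARK_WORD_BYTES` are names invented here, not the binding's code.

```rust
// Standalone sketch of the array-header layout used by `length_offset`.
// Assumes a 64-bit target; names mirror the diff but this is not the binding's code.
const MARK_WORD_BYTES: usize = 8;

fn length_offset(compressed_klass: bool) -> usize {
    let klass_offset = MARK_WORD_BYTES; // klass field follows the mark word
    let klass_bytes = if compressed_klass { 4 } else { 8 };
    klass_offset + klass_bytes
}

fn main() {
    assert_eq!(length_offset(false), 16); // uncompressed: length at offset 16
    assert_eq!(length_offset(true), 12); // compressed: length at offset 12
}
```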

mmtk/src/active_plan.rs

Lines changed: 12 additions & 13 deletions

```diff
@@ -1,6 +1,5 @@
 use crate::MutatorClosure;
 use crate::OpenJDK;
-use crate::SINGLETON;
 use crate::UPCALLS;
 use mmtk::util::opaque_pointer::*;
 use mmtk::vm::ActivePlan;
@@ -9,12 +8,12 @@ use mmtk::Plan;
 use std::collections::VecDeque;
 use std::marker::PhantomData;
 
-struct OpenJDKMutatorIterator<'a> {
-    mutators: VecDeque<&'a mut Mutator<OpenJDK>>,
+struct OpenJDKMutatorIterator<'a, const COMPRESSED: bool> {
+    mutators: VecDeque<&'a mut Mutator<OpenJDK<COMPRESSED>>>,
     phantom_data: PhantomData<&'a ()>,
 }
 
-impl<'a> OpenJDKMutatorIterator<'a> {
+impl<'a, const COMPRESSED: bool> OpenJDKMutatorIterator<'a, COMPRESSED> {
     fn new() -> Self {
         let mut mutators = VecDeque::new();
         unsafe {
@@ -29,8 +28,8 @@ impl<'a> OpenJDKMutatorIterator<'a> {
     }
 }
 
-impl<'a> Iterator for OpenJDKMutatorIterator<'a> {
-    type Item = &'a mut Mutator<OpenJDK>;
+impl<'a, const COMPRESSED: bool> Iterator for OpenJDKMutatorIterator<'a, COMPRESSED> {
+    type Item = &'a mut Mutator<OpenJDK<COMPRESSED>>;
 
     fn next(&mut self) -> Option<Self::Item> {
         self.mutators.pop_front()
@@ -39,24 +38,24 @@ impl<'a> Iterator for OpenJDKMutatorIterator<'a> {
 
 pub struct VMActivePlan {}
 
-impl ActivePlan<OpenJDK> for VMActivePlan {
-    fn global() -> &'static dyn Plan<VM = OpenJDK> {
-        SINGLETON.get_plan()
+impl<const COMPRESSED: bool> ActivePlan<OpenJDK<COMPRESSED>> for VMActivePlan {
+    fn global() -> &'static dyn Plan<VM = OpenJDK<COMPRESSED>> {
+        crate::singleton::<COMPRESSED>().get_plan()
     }
 
     fn is_mutator(tls: VMThread) -> bool {
         unsafe { ((*UPCALLS).is_mutator)(tls) }
     }
 
-    fn mutator(tls: VMMutatorThread) -> &'static mut Mutator<OpenJDK> {
+    fn mutator(tls: VMMutatorThread) -> &'static mut Mutator<OpenJDK<COMPRESSED>> {
         unsafe {
             let m = ((*UPCALLS).get_mmtk_mutator)(tls);
-            &mut *m
+            &mut *(m as *mut Mutator<OpenJDK<COMPRESSED>>)
         }
     }
 
-    fn mutators<'a>() -> Box<dyn Iterator<Item = &'a mut Mutator<OpenJDK>> + 'a> {
-        Box::new(OpenJDKMutatorIterator::new())
+    fn mutators<'a>() -> Box<dyn Iterator<Item = &'a mut Mutator<OpenJDK<COMPRESSED>>> + 'a> {
+        Box::new(OpenJDKMutatorIterator::<COMPRESSED>::new())
     }
 
     fn number_of_mutators() -> usize {
```
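The `crate::singleton::<COMPRESSED>()` call in this diff relies on two statically-typed singletons, only one of which is initialized at runtime. A minimal standalone sketch of that shape follows; `Vm`, `init`, and `bytes_per_edge` are illustrative names, and the sketch uses `std::sync::OnceLock` where the real binding uses `lazy_static`.

```rust
// Minimal sketch of the `const COMPRESSED: bool` specialization pattern
// (illustrative only; not the binding's real types or API).
use std::sync::OnceLock;

struct Vm<const COMPRESSED: bool>;

impl<const COMPRESSED: bool> Vm<COMPRESSED> {
    // With compressed oops, an object-field edge is a 32-bit slot.
    fn bytes_per_edge(&self) -> usize {
        if COMPRESSED { 4 } else { 8 }
    }
}

static VM_COMPRESSED: OnceLock<Vm<true>> = OnceLock::new();
static VM_UNCOMPRESSED: OnceLock<Vm<false>> = OnceLock::new();

// Exactly one singleton is initialized, chosen by a runtime flag
// (mirroring how the binding dispatches on `UseCompressedOops`).
fn init(use_compressed_oops: bool) -> usize {
    if use_compressed_oops {
        VM_COMPRESSED.get_or_init(|| Vm).bytes_per_edge()
    } else {
        VM_UNCOMPRESSED.get_or_init(|| Vm).bytes_per_edge()
    }
}

fn main() {
    assert_eq!(init(true), 4);
    assert_eq!(init(false), 8);
}
```

Because `COMPRESSED` is a const generic, each monomorphized singleton gets fully specialized code with no per-access branch on the flag.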
