Concurrent immix #1355

base: master
Changes from 3 commits
```diff
@@ -32,6 +32,8 @@ extern crate static_assertions;
extern crate probe;

mod mmtk;
use std::sync::atomic::AtomicUsize;

pub use mmtk::MMTKBuilder;
pub(crate) use mmtk::MMAPPER;
pub use mmtk::MMTK;

@@ -51,3 +53,9 @@ pub mod vm;
pub use crate::plan::{
    AllocationSemantics, BarrierSelector, Mutator, MutatorContext, ObjectQueue, Plan,
};

static NUM_CONCURRENT_TRACING_PACKETS: AtomicUsize = AtomicUsize::new(0);

fn concurrent_marking_packets_drained() -> bool {
    crate::NUM_CONCURRENT_TRACING_PACKETS.load(std::sync::atomic::Ordering::SeqCst) == 0
}
```

Review comment (on `NUM_CONCURRENT_TRACING_PACKETS`): I think this should be moved into …
Reply: Right.
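The new global tracks how many concurrent tracing work packets are still outstanding, and `concurrent_marking_packets_drained` is the check used to decide whether concurrent marking has finished. A minimal standalone sketch of this counter pattern follows; the `packet_created`/`packet_finished` helpers are illustrative assumptions, not MMTk APIs:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Global count of concurrent tracing packets that have been created
// but not yet fully executed by a GC worker.
static NUM_CONCURRENT_TRACING_PACKETS: AtomicUsize = AtomicUsize::new(0);

// Hypothetical hook called when a concurrent tracing packet is scheduled.
fn packet_created() {
    NUM_CONCURRENT_TRACING_PACKETS.fetch_add(1, Ordering::SeqCst);
}

// Hypothetical hook called when a worker finishes executing a packet.
fn packet_finished() {
    NUM_CONCURRENT_TRACING_PACKETS.fetch_sub(1, Ordering::SeqCst);
}

// Mirrors the function added in this hunk: marking is drained when the
// counter reaches zero.
fn concurrent_marking_packets_drained() -> bool {
    NUM_CONCURRENT_TRACING_PACKETS.load(Ordering::SeqCst) == 0
}

fn main() {
    packet_created();
    packet_created();
    assert!(!concurrent_marking_packets_drained());
    packet_finished();
    packet_finished();
    assert!(concurrent_marking_packets_drained());
}
```

The counter must be balanced: every increment at packet-creation time needs a matching decrement when the packet completes, otherwise the drain check never fires.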
```diff
@@ -21,6 +21,7 @@ pub enum BarrierSelector {
    NoBarrier,
    /// Object remembering barrier is used.
    ObjectBarrier,
    SATBBarrier,
}

@@ -45,6 +46,9 @@ impl BarrierSelector {
pub trait Barrier<VM: VMBinding>: 'static + Send + Downcast {
    fn flush(&mut self) {}

    /// Load the referent from a java.lang.Reference.
    fn load_reference(&mut self, _referent: ObjectReference) {}

    /// Subsuming barrier for object reference write
    fn object_reference_write(
        &mut self,

@@ -92,6 +96,8 @@ pub trait Barrier<VM: VMBinding>: 'static + Send + Downcast {
        self.memory_region_copy_post(src, dst);
    }

    fn object_reference_clone_pre(&mut self, _obj: ObjectReference) {}

    /// Full pre-barrier for array copy
    fn memory_region_copy_pre(&mut self, _src: VM::VMMemorySlice, _dst: VM::VMMemorySlice) {}

@@ -159,6 +165,10 @@ pub trait BarrierSemantics: 'static + Send {

    /// Object will probably be modified
    fn object_probable_write_slow(&mut self, _obj: ObjectReference) {}

    fn load_reference(&mut self, _o: ObjectReference) {}

    fn object_reference_clone_pre(&mut self, _obj: ObjectReference) {}
}
```

Review comment (on `object_reference_clone_pre`): This method only has a blank implementation in …
```diff
/// Generic object barrier with a type argument defining its slow-path behaviour.

@@ -250,3 +260,82 @@ impl<S: BarrierSemantics> Barrier<S::VM> for ObjectBarrier<S> {
        }
    }
}

pub struct SATBBarrier<S: BarrierSemantics> {
    semantics: S,
}

impl<S: BarrierSemantics> SATBBarrier<S> {
    pub fn new(semantics: S) -> Self {
        Self { semantics }
    }
    fn object_is_unlogged(&self, object: ObjectReference) -> bool {
        // unsafe { S::UNLOG_BIT_SPEC.load::<S::VM, u8>(object, None) != 0 }
        S::UNLOG_BIT_SPEC.load_atomic::<S::VM, u8>(object, None, Ordering::SeqCst) != 0
    }
}

impl<S: BarrierSemantics> Barrier<S::VM> for SATBBarrier<S> {
    fn flush(&mut self) {
        self.semantics.flush();
    }

    fn load_reference(&mut self, o: ObjectReference) {
        self.semantics.load_reference(o)
    }

    fn object_reference_clone_pre(&mut self, obj: ObjectReference) {
        self.semantics.object_reference_clone_pre(obj);
    }

    fn object_probable_write(&mut self, obj: ObjectReference) {
        self.semantics.object_probable_write_slow(obj);
    }

    fn object_reference_write_pre(
        &mut self,
        src: ObjectReference,
        slot: <S::VM as VMBinding>::VMSlot,
        target: Option<ObjectReference>,
    ) {
        if self.object_is_unlogged(src) {
            self.semantics
                .object_reference_write_slow(src, slot, target);
        }
    }

    fn object_reference_write_post(
        &mut self,
        _src: ObjectReference,
        _slot: <S::VM as VMBinding>::VMSlot,
        _target: Option<ObjectReference>,
    ) {
        unimplemented!()
    }

    fn object_reference_write_slow(
        &mut self,
        src: ObjectReference,
        slot: <S::VM as VMBinding>::VMSlot,
        target: Option<ObjectReference>,
    ) {
        self.semantics
            .object_reference_write_slow(src, slot, target);
    }

    fn memory_region_copy_pre(
        &mut self,
        src: <S::VM as VMBinding>::VMMemorySlice,
        dst: <S::VM as VMBinding>::VMMemorySlice,
    ) {
        self.semantics.memory_region_copy_slow(src, dst);
    }

    fn memory_region_copy_post(
        &mut self,
        _src: <S::VM as VMBinding>::VMMemorySlice,
        _dst: <S::VM as VMBinding>::VMMemorySlice,
    ) {
        unimplemented!()
    }
}
```

Review comment (on `SATBBarrier`): This seems to be a pre-write …
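`object_reference_write_pre` above is the classic snapshot-at-the-beginning shape: the fast path consults a per-object "unlogged" bit, and only the first write to an object since marking began takes the slow path that records the value about to be overwritten. A self-contained toy model of that behaviour follows; `ToyHeap`, integer object ids, and the single-field layout are illustrative assumptions, not MMTk code:

```rust
use std::collections::HashMap;

// Toy model of an SATB pre-write barrier. Each object has one reference
// field and an "unlogged" bit that is set when marking starts.
struct ToyHeap {
    unlogged: HashMap<u32, bool>,      // object id -> unlog bit
    fields: HashMap<u32, Option<u32>>, // object id -> current referent
    satb_buffer: Vec<u32>,             // old referents captured for the marker
}

impl ToyHeap {
    fn write_pre(&mut self, src: u32) {
        // Fast path: already-logged objects are skipped.
        if *self.unlogged.get(&src).unwrap_or(&false) {
            // Slow path: snapshot the to-be-overwritten referent, then log src
            // so later writes to the same object stay on the fast path.
            if let Some(Some(old)) = self.fields.get(&src) {
                self.satb_buffer.push(*old);
            }
            self.unlogged.insert(src, false);
        }
    }

    fn write(&mut self, src: u32, new: Option<u32>) {
        self.write_pre(src);
        self.fields.insert(src, new);
    }
}

fn main() {
    let mut h = ToyHeap {
        unlogged: HashMap::new(),
        fields: HashMap::new(),
        satb_buffer: vec![],
    };
    h.unlogged.insert(1, true);
    h.fields.insert(1, Some(42));
    h.write(1, Some(7)); // first write since marking began: 42 is captured
    h.write(1, Some(8)); // object already logged: no capture
    assert_eq!(h.satb_buffer, vec![42]);
}
```

This is why the real barrier only calls `object_reference_write_slow` when `object_is_unlogged` returns true: the snapshot only needs the value that was reachable when marking started, once per object.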
```diff
@@ -0,0 +1,146 @@
use std::sync::atomic::Ordering;

use crate::{
    plan::{barriers::BarrierSemantics, concurrent::immix::global::ConcurrentImmix, VectorQueue},
    scheduler::WorkBucketStage,
    util::ObjectReference,
    vm::{
        slot::{MemorySlice, Slot},
        VMBinding,
    },
    MMTK,
};

use super::{concurrent_marking_work::ProcessModBufSATB, Pause};

pub struct SATBBarrierSemantics<VM: VMBinding> {
    mmtk: &'static MMTK<VM>,
    satb: VectorQueue<ObjectReference>,
    refs: VectorQueue<ObjectReference>,
    immix: &'static ConcurrentImmix<VM>,
}

impl<VM: VMBinding> SATBBarrierSemantics<VM> {
    pub fn new(mmtk: &'static MMTK<VM>) -> Self {
        Self {
            mmtk,
            satb: VectorQueue::default(),
            refs: VectorQueue::default(),
            immix: mmtk
                .get_plan()
                .downcast_ref::<ConcurrentImmix<VM>>()
                .unwrap(),
        }
    }

    fn slow(&mut self, _src: Option<ObjectReference>, _slot: VM::VMSlot, old: ObjectReference) {
        self.satb.push(old);
        if self.satb.is_full() {
            self.flush_satb();
        }
    }

    fn enqueue_node(
        &mut self,
        src: Option<ObjectReference>,
        slot: VM::VMSlot,
        _new: Option<ObjectReference>,
    ) -> bool {
        if let Some(old) = slot.load() {
            self.slow(src, slot, old);
        }
        true
    }

    /// Attempt to atomically log an object.
    /// Returns true if the object is not logged previously.
    fn log_object(&self, object: ObjectReference) -> bool {
        Self::UNLOG_BIT_SPEC.store_atomic::<VM, u8>(object, 0, None, Ordering::SeqCst);
        true
    }

    fn flush_satb(&mut self) {
        if !self.satb.is_empty() {
            if self.should_create_satb_packets() {
                let satb = self.satb.take();
                if let Some(pause) = self.immix.current_pause() {
                    debug_assert_ne!(pause, Pause::InitialMark);
                    self.mmtk.scheduler.work_buckets[WorkBucketStage::Closure]
                        .add(ProcessModBufSATB::new(satb));
                } else {
                    self.mmtk.scheduler.work_buckets[WorkBucketStage::Unconstrained]
                        .add(ProcessModBufSATB::new(satb));
                }
            } else {
                let _ = self.satb.take();
            };
        }
    }

    #[cold]
    fn flush_weak_refs(&mut self) {
        if !self.refs.is_empty() {
            // debug_assert!(self.should_create_satb_packets());
            let nodes = self.refs.take();
            if let Some(pause) = self.immix.current_pause() {
                debug_assert_ne!(pause, Pause::InitialMark);
                self.mmtk.scheduler.work_buckets[WorkBucketStage::Closure]
                    .add(ProcessModBufSATB::new(nodes));
            } else {
                self.mmtk.scheduler.work_buckets[WorkBucketStage::Unconstrained]
                    .add(ProcessModBufSATB::new(nodes));
            }
        }
    }

    fn should_create_satb_packets(&self) -> bool {
        self.immix.concurrent_marking_in_progress()
            || self.immix.current_pause() == Some(Pause::FinalMark)
    }
}

impl<VM: VMBinding> BarrierSemantics for SATBBarrierSemantics<VM> {
    type VM = VM;

    #[cold]
    fn flush(&mut self) {
        self.flush_satb();
        self.flush_weak_refs();
    }

    fn object_reference_write_slow(
        &mut self,
        src: ObjectReference,
        _slot: <Self::VM as VMBinding>::VMSlot,
        _target: Option<ObjectReference>,
    ) {
        self.object_probable_write_slow(src);
        self.log_object(src);
    }

    fn memory_region_copy_slow(
        &mut self,
        _src: <Self::VM as VMBinding>::VMMemorySlice,
        dst: <Self::VM as VMBinding>::VMMemorySlice,
    ) {
        for s in dst.iter_slots() {
            self.enqueue_node(None, s, None);
        }
    }

    fn load_reference(&mut self, o: ObjectReference) {
        if !self.immix.concurrent_marking_in_progress() {
            return;
        }
        self.refs.push(o);
        if self.refs.is_full() {
            self.flush_weak_refs();
        }
    }

    fn object_probable_write_slow(&mut self, obj: ObjectReference) {
        obj.iterate_fields::<VM, _>(|s| {
            self.enqueue_node(Some(obj), s, None);
        });
    }
}
```

Review comment (on the `immix` field): This could take a …
Reply: Yes, here is also a quick hack …
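`SATBBarrierSemantics` batches barrier hits in bounded `VectorQueue`s and only hands a packet to the scheduler when a buffer fills (or on `flush`). The buffering discipline can be sketched on its own; the stand-in queue and `CAPACITY` below are assumptions for illustration, not the real `VectorQueue` from mmtk-core:

```rust
// Fixed buffer size; the real queue's capacity is an mmtk-core detail.
const CAPACITY: usize = 4;

// Minimal bounded queue mirroring the push / is_full / take protocol
// used by the barrier semantics.
struct VectorQueue<T> {
    buf: Vec<T>,
}

impl<T> VectorQueue<T> {
    fn new() -> Self {
        Self { buf: Vec::with_capacity(CAPACITY) }
    }
    fn push(&mut self, v: T) {
        self.buf.push(v);
    }
    fn is_full(&self) -> bool {
        self.buf.len() >= CAPACITY
    }
    fn is_empty(&self) -> bool {
        self.buf.is_empty()
    }
    // Drain the buffer, leaving it empty for further pushes.
    fn take(&mut self) -> Vec<T> {
        std::mem::take(&mut self.buf)
    }
}

fn main() {
    let mut q = VectorQueue::new();
    let mut packets: Vec<Vec<u32>> = vec![];
    for i in 0..10 {
        q.push(i);
        if q.is_full() {
            // Stand-in for scheduling a ProcessModBufSATB work packet.
            packets.push(q.take());
        }
    }
    // A final drain collects the remainder, as Barrier::flush would.
    if !q.is_empty() {
        packets.push(q.take());
    }
    assert_eq!(packets.len(), 3);
    assert_eq!(packets[2], vec![8, 9]);
}
```

Batching keeps the write barrier's common case cheap: most barrier hits are a push into a thread-local vector, and scheduler synchronization is only paid once per full buffer.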
Review comment: These two fields are maintained here so that all policies can access them. `concurrent_marking_active` basically means concurrent marking is going on, and policies should allocate objects as if they were marked. I think we should add a per-space flag such as `allocate_as_live`, and set the flag for the relevant spaces. `concurrent_marking_threshold` is more like a global counter of allocated pages. We could maintain such a counter regardless of whether the plan is concurrent or not.

Reply: Each policy already has an `initialize_object_metadata` function that takes a parameter `alloc: bool`. The parameter is currently unused. But this is fine-grained, and spaces would usually want to bulk-set this metadata instead of setting it per object.

Reply: I think `alloc` means something different. It was initially from here: https://github.com/JikesRVM/JikesRVM/blob/5072f19761115d987b6ee162f49a03522d36c697/MMTk/src/org/mmtk/policy/ExplicitLargeObjectSpace.java#L108-L109
We also misused it here: mmtk-core/src/mmtk.rs, lines 594 to 597 in d93262b.
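One possible reading of the `allocate_as_live` suggestion above is a per-space atomic flag, flipped on at the InitialMark pause and off when marking ends, so that objects allocated during concurrent marking are treated as already marked. This is only a hedged sketch of that idea; `ToySpace` and all names in it are hypothetical, not mmtk-core APIs:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical space with an allocate-as-live flag, as the review suggests.
struct ToySpace {
    allocate_as_live: AtomicBool,
    live: Vec<usize>, // ids of objects given live/mark metadata at allocation
}

impl ToySpace {
    // Flipped by the plan at InitialMark (true) and after FinalMark (false).
    fn set_allocate_as_live(&self, v: bool) {
        self.allocate_as_live.store(v, Ordering::SeqCst);
    }

    fn alloc(&mut self, id: usize) {
        if self.allocate_as_live.load(Ordering::SeqCst) {
            // Stand-in for bulk-setting mark metadata so the concurrent
            // tracer never treats this fresh object as garbage.
            self.live.push(id);
        }
    }
}

fn main() {
    let mut s = ToySpace {
        allocate_as_live: AtomicBool::new(false),
        live: vec![],
    };
    s.alloc(1); // no concurrent marking: allocated without mark metadata
    s.set_allocate_as_live(true); // InitialMark flips the flag
    s.alloc(2); // allocated black during concurrent marking
    assert_eq!(s.live, vec![2]);
}
```

Keeping the flag per space, rather than one global, lets a plan enable allocate-as-live only for the spaces the concurrent tracer actually scans.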