
Conversation

@aswaterman (Collaborator) commented Dec 27, 2025

I believe that @en-sc's comment here is correct: #2161 (comment)

Nevertheless, failing an assertion when someone sets a trigger on memory accessed by a wide access is not reasonable behavior for Spike. Better to do something that follows the principle of least surprise, despite the debug spec's lack of clarity on this point.

This PR also fixes some unrelated code-quality issues in adjacent code.

riscv/mmu.cc Outdated
Comment on lines 210 to 215

for (size_t offset = 0; offset < data_size; offset += sizeof(reg_t)) {
  auto this_size = std::min(data_size - offset, sizeof(reg_t));
  auto this_data = reg_from_bytes(this_size, bytes + offset);
  check_triggers(operation, addr + offset, virt, this_size, this_data);
}
@en-sc (Contributor) commented:
Please consider the following alternative:

Suggested change
-  for (size_t offset = 0; offset < data_size; offset += sizeof(reg_t)) {
-    auto this_size = std::min(data_size - offset, sizeof(reg_t));
-    auto this_data = reg_from_bytes(this_size, bytes + offset);
-    check_triggers(operation, addr + offset, virt, this_size, this_data);
-  }
+  check_triggers(operation, addr, virt, data_size, reg_from_bytes(std::min(data_size, sizeof(reg_t)), bytes));

The motivation is:

  1. AFAIU, an access wider than sizeof(reg_t) (please note, this is not XLEN) is already split into chunks at this point, e.g.:
    for (reg_t fn = 0; fn < nf; ++fn) { \
      elt_width##_t val = P.VU.elt<elt_width##_t>(vs3 + fn * emul, vreg_inx); \
      MMU.store<elt_width##_t>( \
        baseAddr + (stride) + (offset) * sizeof(elt_width##_t), val); \
    } \

    This is why I think the assertion is valid.
  2. However, if there is such an access, an alternative approach that checks a number of "chunks" no wider than sizeof(reg_t) breaks address matching.
    Consider the following:
  • 16-byte-wide access on address 0x0.
  • There are two triggers in a chain:
    • The first checks whether an address less than 0x1 is accessed.
    • The second checks whether an address greater than or equal to 0x8 is accessed.
  • If the access is split into two 8-byte-wide checks, the chain will not match: the first "chunk" matches the first trigger and the second "chunk" matches the second trigger, but neither matches the chain as a whole.

While the scenario is synthetic, IMHO the new behavior better complies with the spec recommendation for address match triggers (select is 0 (address)) to consider all the virtual addresses being accessed.
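
To make the scenario concrete, here is a minimal, self-contained sketch; the struct, predicate functions, and chunking below are invented for illustration only and are not Spike's trigger code:

#include <cstdint>
#include <cstdio>

// One contiguous memory access covering [addr, addr + size).
struct access_range { uint64_t addr; uint64_t size; };

// Trigger 1: matches if any accessed address is less than 0x1.
static bool trig1(const access_range& a) { return a.addr < 0x1; }
// Trigger 2: matches if any accessed address is greater than or equal to 0x8.
static bool trig2(const access_range& a) { return a.addr + a.size > 0x8; }

int main() {
  const access_range whole{0x0, 16};                    // the 16-byte access
  const access_range chunks[] = {{0x0, 8}, {0x8, 8}};   // split into reg_t-sized pieces

  // Per-chunk evaluation: no single chunk satisfies both triggers,
  // so the chain never matches.
  bool chain_per_chunk = false;
  for (const auto& c : chunks)
    chain_per_chunk |= trig1(c) && trig2(c);

  // Whole-access evaluation: both triggers match the same access,
  // so the chain matches.
  bool chain_whole = trig1(whole) && trig2(whole);

  std::printf("chain matches: per-chunk %d, whole access %d\n",
              chain_per_chunk, chain_whole);
  return 0;
}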

@aswaterman (Collaborator, Author) replied:

@en-sc AMOCAS.Q, FLQ, and FSQ perform accesses wider than sizeof(reg_t). They will fail the assertion. With that in mind, what do you recommend?
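
As a rough illustration of the size mismatch (the check shown is a plausible stand-in for the assertion under discussion, not the exact Spike code):

#include <cstddef>
#include <cstdint>
#include <cstdio>

using reg_t = uint64_t;  // Spike's register type is 64 bits, i.e. 8 bytes

int main() {
  const size_t quad_access_bytes = 16;  // AMOCAS.Q, FLQ, and FSQ each touch 16 bytes
  // A condition of the form data_size <= sizeof(reg_t) cannot hold for these
  // accesses, so an assertion built on it would fire.
  std::printf("16 <= sizeof(reg_t) (%zu)? %s\n", sizeof(reg_t),
              quad_access_bytes <= sizeof(reg_t) ? "yes" : "no");
  return 0;
}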

@aswaterman (Collaborator, Author) replied:

Ah, I see. I can apply your suggestion without reinstating the assertion. I guess that works for me. Seems like an oversight in the trigger spec to not consider atomic accesses greater than XLEN bits, though.

I'd like to remove this routine eventually, but let's make it a bit less
visually unappealing in the meantime.
@aswaterman merged commit 6dda489 into master on Jan 22, 2026 (3 checks passed).
@aswaterman deleted the fix-amocas-q branch on January 22, 2026 at 01:51.