
Conversation

@Ziemas (Contributor) commented Nov 8, 2025

Currently based on #13497 and #13502, I'm not going to try to get this in until after those.

I might try splitting this up into smaller PRs later.

Description of Changes

First, this PR combines the two per-core voice arrays into a single global one. This lets us process all the voices at once and then mix the desired voices into each core's output.
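A minimal sketch of the "one global voice array" idea: the counts match the SPU2's two cores of 24 voices, but the names (`ProcessAllVoices`, `MixCore`) and the placeholder processing are assumptions for illustration, not the actual PCSX2 code.

```cpp
#include <array>
#include <cstdint>

constexpr int NumCores = 2;
constexpr int VoicesPerCore = 24;
constexpr int NumVoices = NumCores * VoicesPerCore; // one global array instead of two per-core ones

// All voices are processed in one pass into a single output buffer...
std::array<int32_t, NumVoices> ProcessAllVoices(const std::array<int32_t, NumVoices>& in)
{
	std::array<int32_t, NumVoices> out{};
	for (int i = 0; i < NumVoices; i++)
		out[i] = in[i]; // placeholder for per-voice processing
	return out;
}

// ...then each core mixes in only the voices that belong to it.
int32_t MixCore(int core, const std::array<int32_t, NumVoices>& voiceOut)
{
	int32_t mix = 0;
	for (int i = 0; i < VoicesPerCore; i++)
		mix += voiceOut[core * VoicesPerCore + i];
	return mix;
}
```

The single flat loop over all 48 voices is what later makes the processing amenable to vectorization.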

We can then begin pulling voice data out into a separate struct with an SOA (structure-of-arrays) layout, to process it more efficiently.
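To illustrate the AoS-to-SoA move (struct and field names here are stand-ins, not the real PCSX2 types):

```cpp
#include <array>
#include <cstdint>

constexpr int NumVoices = 48; // 2 cores x 24 voices

// AoS: each voice's fields are interleaved, so a loop over one field
// strides across the whole struct and resists auto-vectorization.
struct VoiceAoS
{
	int16_t volume;
	uint32_t nextA;
	// ...many more per-voice fields...
};

// SoA: each field lives in its own contiguous array, so a loop over it
// is a straight run over memory that the compiler can vectorize.
struct VoicesSoA
{
	std::array<int16_t, NumVoices> volume;
	std::array<uint32_t, NumVoices> nextA;
};

// Pulling one field out of the per-voice state into the SoA struct.
void GatherNextA(const std::array<VoiceAoS, NumVoices>& aos, VoicesSoA& soa)
{
	for (int i = 0; i < NumVoices; i++)
		soa.nextA[i] = aos[i].nextA;
}
```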

In this PR I first split out the final voice output, so that volume application and voice gate testing can be done in a single straight loop.
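A hedged sketch of that straight loop: the Q15 volume multiply matches how SPU volumes are commonly applied, but the `gateOpen` flag is a simplified stand-in for the actual voice gate logic.

```cpp
#include <array>
#include <cstdint>

constexpr int NumVoices = 48; // 2 cores x 24 voices

// One straight loop over SoA fields: apply the voice volume, then zero
// any voice whose gate is closed. No per-voice branching into separate
// code paths, so the loop body is uniform and vectorizable.
void ApplyVolumeAndGate(const std::array<int32_t, NumVoices>& in,
	const std::array<int16_t, NumVoices>& volume,
	const std::array<bool, NumVoices>& gateOpen,
	std::array<int32_t, NumVoices>& out)
{
	for (int i = 0; i < NumVoices; i++)
	{
		// Q15 fixed-point volume multiply (0x4000 ~= half volume).
		int32_t s = (in[i] * volume[i]) >> 15;
		out[i] = gateOpen[i] ? s : 0;
	}
}
```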

I then split out the NextA member so that IRQ testing can be batched for all voices at once.
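With NextA in its own contiguous array, the IRQ address check can become one loop of compares over all voices (which a compiler or explicit SIMD can turn into a handful of packed comparisons). A minimal sketch, with assumed names:

```cpp
#include <array>
#include <cstdint>

constexpr int NumVoices = 48; // 2 cores x 24 voices

// Batched IRQ test: does any voice's NextA match the IRQ address?
// One tight loop instead of a per-voice check scattered through the
// voice processing code.
bool TestVoiceIrqs(const std::array<uint32_t, NumVoices>& nextA, uint32_t irqAddr)
{
	bool hit = false;
	for (int i = 0; i < NumVoices; i++)
		hit |= (nextA[i] == irqAddr);
	return hit;
}
```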

Rationale behind Changes

Goes vroom vroom 🏎️

Suggested Testing Steps

I don't think anything here should result in any observable differences in behavior, so just test random stuff and make sure nothing blows up, I guess.

Did you use AI to help find, test, or implement this issue or feature?

No.

Ziemas added 12 commits November 6, 2025 21:32
Collect them all in SPU2::InternalReset.
Might as well, if the saveversion is already being bumped.

[SAVEVERSION+]
This makes the timing of NAX advancing more similar to the console, since it
emulates the decode buffer behaviour of rushing ahead of playback
until the buffer is full.

It also makes interpolation of the first four samples more correct by
using real data instead of the zero-filled previous values.

[SAVEVERSION+]
@Ziemas Ziemas changed the title SPU: Start to vectorize voice processing SPU: Start to make voice processing vectorizable Nov 8, 2025
@JordanTheToaster JordanTheToaster added this to the Release 2.8 milestone Dec 5, 2025
2 participants