SPU: Start to make voice processing vectorizable #13509
Currently based on #13497 and #13502, so I'm not going to try to get this in until after those land.
I might try splitting this up into smaller PRs later.
Description of Changes
First, this PR combines the two per-core voice arrays into a single global array. This lets us process all the voices at once and then mix the voices we want into each core's output.
We can then begin pulling hot voice data out into a separate struct with an SoA (structure-of-arrays) layout so it can be processed more efficiently.
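To illustrate the layout change described above, here is a minimal sketch. The names `VoiceSoA`, `NUM_CORES`, and `VOICES_PER_CORE` are hypothetical, not the actual identifiers from this PR; the point is just that merging the two per-core arrays and hoisting hot fields into parallel arrays gives loops over all voices contiguous memory to chew through:

```cpp
#include <array>
#include <cstdint>

// Illustrative constants (assumption: two SPU cores, 24 voices each).
constexpr int NUM_CORES = 2;
constexpr int VOICES_PER_CORE = 24;
constexpr int NUM_VOICES = NUM_CORES * VOICES_PER_CORE;

// Hypothetical SoA container: voice v of core c lives at
// index c * VOICES_PER_CORE + v, so all 48 voices can be
// processed in one pass and mixed per core afterwards.
struct VoiceSoA
{
    std::array<int32_t, NUM_VOICES> out;     // latest decoded sample per voice
    std::array<int32_t, NUM_VOICES> vol_l;   // current left volume
    std::array<int32_t, NUM_VOICES> vol_r;   // current right volume
    std::array<uint32_t, NUM_VOICES> next_a; // NextA: next decode address
};
```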
In this PR I first split out the final voice output, so that volume application and voice gate testing happen in a single straight loop.
I then split out the NextA member so that IRQ address testing can be batched across all voices at once.
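A rough sketch of what a "single straight loop" over SoA data can look like (this is my own illustrative code, not the PR's implementation; the function name, Q15-style volume format, and the bitmask gate are all assumptions). The gate test becomes a mask instead of a branch, which keeps the loop body uniform and friendly to auto-vectorization:

```cpp
#include <cstdint>

// Hypothetical: multiply each voice's output sample by its volume
// (Q15 fixed point) and zero out voices whose gate bit is clear.
void apply_volume(const int32_t* out, const int32_t* vol,
                  uint64_t enabled_mask, int32_t* mixed, int count)
{
    for (int i = 0; i < count; i++)
    {
        // Branch-free gate: -1 (all ones) keeps the sample, 0 discards it.
        const int32_t gate = ((enabled_mask >> i) & 1) ? -1 : 0;
        mixed[i] = ((out[i] * vol[i]) >> 15) & gate;
    }
}
```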
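With every voice's NextA in one contiguous array, the per-voice IRQ check can collapse into a single scan (or, eventually, a SIMD compare) rather than being scattered through each voice's update path. A hedged sketch, with an illustrative function name:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical batched check: does any voice's next decode address
// match the core's IRQ address? A flat array makes this one tight
// loop that the compiler can vectorize.
bool any_voice_hits_irq(const uint32_t* next_a, size_t count, uint32_t irqa)
{
    for (size_t i = 0; i < count; i++)
        if (next_a[i] == irqa)
            return true;
    return false;
}
```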
Rationale behind Changes
Goes vroom vroom 🏎️
Suggested Testing Steps
Nothing here should result in any observable difference in behavior, so just test random stuff and make sure nothing blows up, I guess.
Did you use AI to help find, test, or implement this issue or feature?
No.