
bugfix(network): Fix runahead logic update to better follow the maximum network latency and avoid stutters and slow downs#1482

Merged
xezon merged 2 commits into TheSuperHackers:main from Mauller:fix-runahead-calculation
Sep 9, 2025
Conversation


@Mauller Mauller commented Aug 23, 2025

This PR addresses issues with how the runAhead logic calculated the number of logic frames the local client can progress before receiving data in an online or network game.

If the calculated runahead is too low for the current network latency, it can introduce stutter and game slowdowns, because the client must wait for new commands before progressing the simulation.

The original runahead logic had two fundamental flaws in its calculation that can introduce stutter, even when the network latency is good between clients:

  1. The maximum latency evaluation was incorrect.

Originally, the maximum latency was taken as the average of the two highest latencies.
The issue with this is that each reported latency is itself already an average, so averaging the two highest can yield a value lower than the actual highest latency in the network.

This can leave the runahead lower than required, which results in microstutter as clients wait for new commands.

  2. The runahead calculation needed to round up to the next whole integer instead of implicitly flooring the value when casting.

This issue introduces the largest amount of stutter in a game. With the original calculation, when the latency fell between two runahead values, it would always truncate to the lower value, leaving the runahead trailing behind the latency.

This slowly introduces more stutter as the latency approaches the next runahead threshold.

To correct this, the runahead should be rounded up to the next integer value, so the time covered by the logic frames always exceeds the latency,
e.g. 10 frames (330ms) covering a 310ms network latency. This implicitly adds a buffering effect on top of the additional network slack.
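Both fixes can be sketched side by side. This is a minimal illustration, not the engine's actual code: the function names, the latency representation, and the 33ms logic frame time (30 fps) are assumptions for the example.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <functional>
#include <vector>

constexpr double FRAME_MS = 1000.0 / 30.0; // one logic frame at 30 fps (assumed)

// Old approach (as described above): average the two highest averaged
// latencies, then let the cast implicitly floor the frame count.
int oldRunAhead(std::vector<double> latenciesMs)
{
    std::sort(latenciesMs.begin(), latenciesMs.end(), std::greater<double>());
    double maxLatency = (latenciesMs[0] + latenciesMs[1]) / 2.0; // can undershoot the true maximum
    return static_cast<int>(maxLatency / FRAME_MS);              // implicit floor
}

// Fixed approach: take the true maximum latency and round the frame count up,
// so the runahead in milliseconds always exceeds the measured latency.
int fixedRunAhead(const std::vector<double>& latenciesMs)
{
    double maxLatency = *std::max_element(latenciesMs.begin(), latenciesMs.end());
    return static_cast<int>(std::ceil(maxLatency / FRAME_MS));
}
```

With latencies of {310, 250, 120}ms, the old path averages the two highest to 280ms and floors to 8 frames (about 266ms of cover, below the real 310ms latency), while the fixed path yields 10 frames (about 333ms).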

For the network slack, we now calculate this as part of the latency value instead of the runahead, as the original code did.
This allows the network slack to act as a buffer between the current latency and the calculated runahead.
So if the slack is a 10% window, then when the latency gets within 10% of the current runahead, the runahead is pushed up to the next runahead level.
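The slack change can be sketched the same way. Again this is illustrative: the function name and the 30 fps frame time are assumptions; only the 10% slack window comes from the description above.

```cpp
#include <cmath>

constexpr double FRAME_MS = 1000.0 / 30.0; // one logic frame at 30 fps (assumed)
constexpr double SLACK    = 0.10;          // 10% slack window

// Apply the slack to the measured latency before converting to frames,
// so it buffers the gap between the latency and the runahead threshold.
int runAheadWithSlack(double maxLatencyMs)
{
    double padded = maxLatencyMs * (1.0 + SLACK); // slack on latency, not on runahead
    return static_cast<int>(std::ceil(padded / FRAME_MS));
}
```

For example, a 300ms latency pads to 330ms and stays at 10 frames, but a 310ms latency pads to 341ms, crossing the ~333ms boundary and bumping the runahead to 11 frames before the raw latency itself reaches the threshold.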


There are further optimisations that help with networked game performance, but these will be handled in future PRs.
Most of these revolve around allowing the runahead to go below 10 frames,
increasing the update frequency of the runahead,
decreasing the number of bins the network latency is calculated over, and reducing the frame batching time.

In combination, the above optimisations allow the game to be more responsive when the network latency allows, while also letting the game adapt to network conditions faster.

@Mauller Mauller added Bug Something is not working right, typically is user facing Major Severity: Minor < Major < Critical < Blocker Network Anything related to network, servers Gen Relates to Generals ZH Relates to Zero Hour labels Aug 23, 2025
@Mauller Mauller self-assigned this Aug 23, 2025
@Mauller Mauller added the Performance Is a performance concern label Aug 28, 2025
@Mauller Mauller force-pushed the fix-runahead-calculation branch from cae8af1 to f95daef Compare September 4, 2025 19:19

Mauller commented Sep 4, 2025

Tweaked and pushed

@Mauller Mauller force-pushed the fix-runahead-calculation branch from f95daef to d92eb4c Compare September 4, 2025 19:31

Mauller commented Sep 4, 2025

Forgot to copy across to generals, fixed


@xezon xezon left a comment

Code looks plausible. We trust the Mauller that the Network runs better.


xezon commented Sep 5, 2025

Has merge conflicts.


Mauller commented Sep 5, 2025

> Code looks plausible. We trust the Mauller that the Network runs better.

This code is currently running in GO and it's smooooooth.

> Has merge conflicts.

Will tweak it when I get home, half day on Fridays which is nice.

@Mauller Mauller force-pushed the fix-runahead-calculation branch from d92eb4c to c5b75d8 Compare September 5, 2025 15:00

Mauller commented Sep 5, 2025

Rebased with main

@xezon xezon changed the title bugfix(network): fix runahead update logic so the runahead always follows the maximum network latency bugfix(network): Fix network runahead logic update to better follow the maximum network latency and avoid stutters and slow downs Sep 9, 2025
@xezon xezon changed the title bugfix(network): Fix network runahead logic update to better follow the maximum network latency and avoid stutters and slow downs bugfix(network): Fix runahead logic update to better follow the maximum network latency and avoid stutters and slow downs Sep 9, 2025
@xezon xezon merged commit 8723184 into TheSuperHackers:main Sep 9, 2025
19 checks passed
@xezon xezon deleted the fix-runahead-calculation branch September 9, 2025 06:39
fbraz3 pushed a commit to fbraz3/GeneralsX that referenced this pull request Nov 10, 2025
fbraz3 pushed a commit to fbraz3/GeneralsX that referenced this pull request Feb 23, 2026
