I am currently studying the HPCC SIGCOMM 2019 paper alongside the reference NS-3 implementation in the Alibaba-edu repository.
I am writing to ask for clarification regarding a fundamental difference I observed between the theoretical model and the simulation code:
Control Variable (Window vs. Rate): The paper's Algorithm 1 describes a window-based control loop: the window $W$ is the primary state variable, and the sending rate is derived from it as $R = W / T$, where $T$ is the base propagation delay. The simulation code (rdma-hw.cc), however, implements a rate-based control loop that updates the rate directly ($R_{new} = R / C + R_{AI}$, where $C$ appears to correspond to the paper's normalized utilization $U/\eta$) without deriving it from a window. What was the motivation for this divergence between the paper and the implementation?
Role of Base RTT ($T$): In the paper's formula, $R = W / T$, the base RTT $T$ sets the scaling between in-flight bytes and sending rate. Because the code updates the rate directly, it does not appear to use $T$ explicitly in the update logic. How is the effect of $T$ accounted for in the rate-based implementation? Is it absorbed implicitly into the configured additive-increase parameter, or is the logic fundamentally different?
Thank you in advance for any clarification you can offer.