I don't think the TCP RTT value that Fastly is generating is actually a real TCP RTT value. I think it might be closer to an application-level RTT, which may also be why this detection method works well.
You can reproduce this by taking a TCP proxy, injecting a large sleep between when it reads data from the server and when it writes that data to the client, and then dumping the TCP conversation in Wireshark. Wireshark can graph what it thinks the TCP RTT is from the server's point of view. Say you use a 3-second delay: in my experience on Linux, Wireshark will think the TCP RTT from the server's POV is around 0.25 ms. Importantly, if the tcpdump was run on the client, Wireshark doesn't include the time taken for the packet to reach the server; it is only measuring how quickly the client generates an ACK for each packet. But if you look at the Fastly rtt_us, it will be an implausible 800 ms. If the client is ACKing the server's segments almost immediately, it can't really be 800 ms. It makes more sense if the TCP RTT value is instead calculated from how long the client takes to send data to Fastly in response to the last data Fastly sent to the client. However, that is not a normal TCP RTT value that the kernel would generate.
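The delaying-proxy setup above can be sketched roughly like this. This is a minimal, self-contained illustration, not Fastly-specific: the ports are ephemeral, the echo server stands in for the real upstream, and the delay is shortened from the 3 seconds in the text so the demo runs quickly. In a real reproduction you would point the proxy at an actual server and run tcpdump/Wireshark alongside it.

```python
# Hypothetical reproduction sketch: a TCP proxy that sleeps between reading
# from the server and writing to the client. All names/ports are illustrative.
import socket
import threading
import time

DELAY = 0.5  # injected server->client sleep (the text uses 3 s; shortened here)

def echo_server(listener):
    """Stand-in for the real upstream: replies to one request immediately."""
    conn, _ = listener.accept()
    conn.sendall(conn.recv(4096))
    conn.close()

def pump(src, dst, delay=0.0):
    """Copy bytes src->dst, optionally sleeping between read and write."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        if delay:
            time.sleep(delay)  # the injected gap described in the text
        dst.sendall(data)

def proxy(listener, upstream_addr):
    client, _ = listener.accept()
    server = socket.create_connection(upstream_addr)
    # client->server path is undelayed; server->client path gets the sleep
    threading.Thread(target=pump, args=(client, server), daemon=True).start()
    pump(server, client, delay=DELAY)

# Listeners on ephemeral localhost ports for the demo.
srv_sock = socket.socket(); srv_sock.bind(("127.0.0.1", 0)); srv_sock.listen(1)
pxy_sock = socket.socket(); pxy_sock.bind(("127.0.0.1", 0)); pxy_sock.listen(1)

threading.Thread(target=echo_server, args=(srv_sock,), daemon=True).start()
threading.Thread(target=proxy, args=(pxy_sock, srv_sock.getsockname()),
                 daemon=True).start()

# The client ACKs the delayed segment almost instantly at the TCP layer,
# but its application-level response time is dominated by DELAY.
c = socket.create_connection(pxy_sock.getsockname())
t0 = time.monotonic()
c.sendall(b"ping")
reply = c.recv(4096)
elapsed = time.monotonic() - t0
print(reply, round(elapsed, 2))
```

If you run tcpdump on the client side of this setup and open the capture in Wireshark, the per-segment TCP RTT stays tiny (the ACKs go out immediately), while the request/response gap the application sees is dominated by the injected delay, which is the discrepancy the paragraph above describes.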