Matrix transport latency: does geographic distribution actually help? #5
jevonearth announced in Announcements
Background
The Beacon protocol uses Matrix as its dApp-to-wallet relay transport. Matrix is a federated protocol designed for chat, which means it's chatty by nature: lots of sequential HTTP round-trips for operations that Beacon treats as a single logical step.
To reduce latency for users worldwide, operators run relay servers in multiple geographic regions and rely on federation to connect them. The theory: a dApp in Tokyo talks to a nearby server in Asia, a wallet in London talks to a nearby server in Europe, and federation handles the cross-ocean hop. Fewer long-distance round-trips, lower perceived latency.
We instrumented the protocol to count what actually happens on the wire.
Anatomy of a pairing flow
A complete dApp-to-wallet pairing requires these sequential client-to-server HTTP calls:
1. GET /beacon/info (server timestamp)
2. POST /login (Ed25519 auth)
3. GET /beacon/info
4. POST /login
5. GET /sync (initial state drain)
6. POST /createRoom (with invite)
7. GET /sync (poll for invite, 1-N)
8. POST /join
9. GET /joined_members (poll, 1-N)
10. PUT /send (channel-open)
11. GET /sync (poll for channel-open, 1-N)
12. PUT /send (first encrypted message)
13. GET /sync (poll for message)

That's 13+ sequential HTTP round-trips on the critical path before the first application-level message is exchanged. Most are strictly sequential: you can't join a room before you receive the invite, and you can't send a message before the join completes.
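The critical path above can be written down as data for the latency accounting that follows (a sketch; the endpoints and annotations are copied from the list above, this is not Beacon's actual source):

```python
# The 13 sequential client-to-server calls on the Beacon pairing
# critical path, in order. Each call must complete before the next
# can start (e.g. you can't POST /join until GET /sync delivers the
# invite), so their latencies add up rather than overlap.
PAIRING_CRITICAL_PATH = [
    ("GET",  "/beacon/info",    "server timestamp"),
    ("POST", "/login",          "Ed25519 auth"),
    ("GET",  "/beacon/info",    ""),
    ("POST", "/login",          ""),
    ("GET",  "/sync",           "initial state drain"),
    ("POST", "/createRoom",     "with invite"),
    ("GET",  "/sync",           "poll for invite, 1-N"),
    ("POST", "/join",           ""),
    ("GET",  "/joined_members", "poll, 1-N"),
    ("PUT",  "/send",           "channel-open"),
    ("GET",  "/sync",           "poll for channel-open, 1-N"),
    ("PUT",  "/send",           "first encrypted message"),
    ("GET",  "/sync",           "poll for message"),
]
```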
The latency math
Scenario A: both clients on a single remote server
dApp in Tokyo, wallet in London, one relay server in London. RTT from Tokyo: ~150ms.
Every one of those 13 operations pays the 150ms penalty:
13 x 150ms = ~1,950ms minimum network overhead
Scenario B: federated, each client on a nearby server
dApp in Tokyo talks to a Tokyo server (5ms RTT), wallet in London talks to a London server (5ms RTT). Federation between servers: ~150ms RTT.
Client operations are now fast, but federation adds its own serial round-trips:
Client ops: 13 x 5ms = 65ms
Federation ops: 5 x 150ms = 750ms
Total: ~815ms
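Both scenarios reduce to simple arithmetic over hop counts (a sketch of the model; the op counts and RTTs are the figures quoted above):

```python
def pairing_latency_ms(client_rtt_ms: float,
                       federation_rtt_ms: float = 0.0,
                       client_ops: int = 13,
                       federation_ops: int = 5) -> float:
    """Minimum network overhead of one pairing: every sequential
    client op pays the client RTT, and every cross-server op pays
    the federation RTT on top (zero when both clients share one
    server)."""
    return client_ops * client_rtt_ms + federation_ops * federation_rtt_ms

# Scenario A: both clients on one remote server (Tokyo -> London).
scenario_a = pairing_latency_ms(client_rtt_ms=150, federation_ops=0)

# Scenario B: each client near its own server, federation between them.
scenario_b = pairing_latency_ms(client_rtt_ms=5, federation_rtt_ms=150)
```

Note that in Scenario B the 5 federation round-trips alone account for 750ms of the 815ms total, which is why the improvement plateaus.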
Result
A real improvement, but not transformative. Here's why:
Federation doesn't help with event notification latency. When the dApp is long-polling /sync waiting for an invite, the invite still has to cross the slow server-to-server link before the dApp's nearby server can complete the poll: the federation hop replaces the client hop rather than eliminating it. The win comes entirely from the synchronous request/response calls (login, createRoom, join, sendMessage), where a nearby server turns 150ms into 5ms.
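The notification accounting can be made concrete (a sketch consistent with the 5 × 150ms federation term above; billing each cross-server delivery at the full federation RTT is an assumption of this model):

```python
CROSS_OCEAN_RTT = 150  # ms, Tokyo <-> London
LOCAL_RTT = 5          # ms, client <-> nearby server

# Scenario A: the dApp long-polls the remote server directly, so an
# event (e.g. the invite) arrives after one cross-ocean round-trip.
notify_single_server = CROSS_OCEAN_RTT

# Scenario B: the event crosses the ocean over federation, then the
# local server completes the dApp's pending long-poll. The long-haul
# hop is moved, not removed.
notify_federated = CROSS_OCEAN_RTT + LOCAL_RTT
```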
The structural problem
Geographic distribution treats the symptom (high per-hop latency) rather than the disease (too many hops). The protocol requires 13+ sequential round-trips to pair, and that number doesn't change regardless of server placement.
For comparison, protocols like WalletConnect v2 pair in ~3 round-trips via a simple relay. A transport redesign that collapsed the Beacon pairing into fewer sequential operations would do more for user-perceived latency than any amount of geographic distribution.
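Plugging the post's own figures in makes the point concrete (a sketch; the 3-round-trip figure is the WalletConnect v2 comparison above):

```python
# A 3-round-trip pairing over a single remote relay, paying the full
# 150ms long-haul RTT on every hop, still beats Beacon's federated
# best case from Scenario B: fewer hops beats cheaper hops.
three_rtt_pairing = 3 * 150           # 450 ms
beacon_federated = 13 * 5 + 5 * 150   # 815 ms, Scenario B
```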
Measured test timings
From our E2E test suite running two federated Synapse instances on the same host (so network latency is negligible, but the round-trip count is real): the ~100ms delta between same-server and federated pairing on localhost gives a sense of the federation protocol's intrinsic overhead even with zero network latency.