
Commit 9741c4e

discv5: fix some typos (#201)
1 parent: 591edbd

File tree: 1 file changed (+6 −6 lines)


discv5/discv5-theory.md

Lines changed: 6 additions & 6 deletions
```diff
@@ -107,7 +107,7 @@ public key and the session keys are derived from it using the HKDF key derivation
 function.
 
     dest-pubkey = public key corresponding to node B's static private key
-    secret = ecdh(ephemeral-key, dest-pubkey)
+    secret = ecdh(dest-pubkey, ephemeral-key)
     kdf-info = "discovery v5 key agreement" || node-id-A || node-id-B
     prk = HKDF-Extract(secret, challenge-data)
     key-data = HKDF-Expand(prk, kdf-info)
```
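The derivation in this hunk can be sketched in Python. The HKDF helpers below follow RFC 5869 over SHA-256; the ECDH step itself is omitted (its output is taken as the `secret` input), and the 16-byte initiator/recipient key split is an illustrative assumption, not taken from this diff.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869): PRK = HMAC-SHA256(salt, IKM)
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # HKDF-Expand (RFC 5869): iterate HMAC to produce `length` output bytes.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def derive_session_keys(secret: bytes, challenge_data: bytes,
                        node_id_a: bytes, node_id_b: bytes):
    # `secret` stands in for ecdh(dest-pubkey, ephemeral-key);
    # kdf-info binds both node IDs into the derivation.
    kdf_info = b"discovery v5 key agreement" + node_id_a + node_id_b
    prk = hkdf_extract(challenge_data, secret)
    key_data = hkdf_expand(prk, kdf_info, 32)
    # Hypothetical split: first 16 bytes initiator key, next 16 recipient key.
    return key_data[:16], key_data[16:32]
```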
```diff
@@ -227,7 +227,7 @@ store a limited number of sessions in an in-memory LRU cache.
 To prevent IP spoofing attacks, implementations must ensure that session secrets and the
 handshake are tied to a specific UDP endpoint. This is simple to implement by using the
 node ID and IP/port as the 'key' into the in-memory session cache. When a node switches
-endpoints, e.g. when roaming between different wireless networks, sessions will to be
+endpoints, e.g. when roaming between different wireless networks, sessions will have to be
 re-established by handshaking again. This requires no effort on behalf of the roaming node
 because the recipients of protocol messages will simply refuse to decrypt messages from
 the new endpoint and reply with WHOAREYOU.
```
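The endpoint-keyed cache described in this hunk can be sketched as a small LRU structure. Class and method names here are illustrative, not from the spec; the point is that the lookup key includes the IP and port, so a roaming node's new endpoint misses the cache and triggers a new handshake.

```python
from collections import OrderedDict

class SessionCache:
    """LRU cache of session keys, keyed by (node-id, IP, port).

    A message arriving from an unknown endpoint finds no session keys,
    fails to decrypt, and is answered with WHOAREYOU -- forcing the
    roaming node to handshake again, as the spec text requires."""

    def __init__(self, capacity: int = 1024):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, node_id: bytes, ip: str, port: int):
        key = (node_id, ip, port)
        if key not in self.entries:
            return None  # unknown endpoint -> caller replies with WHOAREYOU
        self.entries.move_to_end(key)  # mark as recently used
        return self.entries[key]

    def put(self, node_id: bytes, ip: str, port: int, session_keys):
        key = (node_id, ip, port)
        self.entries[key] = session_keys
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least-recently used
```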
```diff
@@ -278,7 +278,7 @@ the table, tracking whether the node has ever successfully responded to a PING r
 
 In order to keep all k-bucket positions occupied even when bucket members fail liveness
 checks, it is strongly recommended to maintain a 'replacement cache' alongside each
-bucket. This cache holds recently-seen node which would fall into the corresponding bucket
+bucket. This cache holds recently-seen nodes which would fall into the corresponding bucket
 but cannot become a member of the bucket because it is already at capacity. Once a bucket
 member becomes unresponsive, a replacement can be chosen from the cache.
 
```
```diff
@@ -472,19 +472,19 @@ Since every node may act as an advertisement medium for any topic, advertisers a
 looking for ads must agree on a scheme by which ads for a topic are distributed. When the
 number of nodes advertising a topic is at least a certain percentage of the whole
 discovery network (rough estimate: at least 1%), ads may simply be placed on random nodes
-because searching for the topic on randomly selected will locate the ads quickly enough.
+because searching for the topic on randomly selected nodes will locate the ads quickly enough.
 
 However, topic search should be fast even when the number of advertisers for a topic is
 much smaller than the number of all live nodes. Advertisers and searchers must agree on a
 subset of nodes to serve as advertisement media for the topic. This subset is simply a
-region of node ID address space, consisting of nodes whose Kademlia address is within a
+region of the node ID address space, consisting of nodes whose Kademlia address is within a
 certain distance to the topic hash `sha256(T)`. This distance is called the 'topic
 radius'.
 
 Example: for a topic `f3b2529e...` with a radius of 2^240, the subset covers all nodes
 whose IDs have prefix `f3b2...`. A radius of 2^256 means the entire network, in which case
 advertisements are distributed uniformly among all nodes. The diagram below depicts a
-region of address space with the topic hash `t` in the middle and several nodes close to
+region of the address space with topic hash `t` in the middle and several nodes close to
 `t` surrounding it. Dots above the nodes represent entries in the node's queue for the
 topic.
```
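The topic-radius membership test from this hunk can be sketched in a few lines. Using XOR as the distance metric is an assumption consistent with the Kademlia addressing the hunk refers to; the function name is hypothetical.

```python
import hashlib

def in_topic_radius(node_id: bytes, topic: bytes, radius: int) -> bool:
    # A node serves as an advertisement medium for `topic` when the
    # distance between its 256-bit ID and sha256(topic) is below the
    # topic radius. A radius of 2**256 covers the entire network.
    topic_hash = hashlib.sha256(topic).digest()
    distance = int.from_bytes(node_id, "big") ^ int.from_bytes(topic_hash, "big")
    return distance < radius
```

With a radius of 2^240, only node IDs sharing the first 16 bits with the topic hash pass, matching the `f3b2...` prefix example in the hunk.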