Commit b4d6bc9

Fix typos
1 parent 41851e8 commit b4d6bc9

11 files changed, with 29 additions and 29 deletions.


API_CORE.md

Lines changed: 2 additions & 2 deletions
@@ -12,7 +12,7 @@ The `core` API is the programmatic interface for IPFS, it defines the method sig
 
 # Table of Contents
 
-TODo
+TODO
 
 ## Required for compliant IPFS implementation
 
@@ -83,7 +83,7 @@ TODo
 - tail
 
 
-## Tooling on top of the Core + Extentions
+## Tooling on top of the Core + Extensions
 
 > Everything defined here is optional, and might be specific to the implementation details (like running on the command line).

ARCHITECTURE.md

Lines changed: 2 additions & 2 deletions
@@ -112,7 +112,7 @@ The Routing Sytem is an interface that is satisfied by various kinds of implemen
 
 See more in the [libp2p specs](https://github.com/libp2p/specs).
 
-## 3.3 Block Exchange -- transfering content-addressed data
+## 3.3 Block Exchange -- transferring content-addressed data
 
 The IPFS **Block Exchange** takes care of negotiating bulk data transfers. Once nodes know each other -- and are connected -- the exchange protocols govern how the transfer of content-addressed blocks occurs.
 
@@ -175,7 +175,7 @@ The IPFS **naming** layer -- or IPNS -- handles the creation of:
 
 IPNS is based on [SFS](http://en.wikipedia.org/wiki/Self-certifying_File_System). It is a PKI namespace -- a name is simply the hash of a public key. Whoever controls the private key controls the name. Records are signed by the private key and distributed anywhere (in IPFS, via the routing system). This is an egalitarian way to assign mutable names in the internet at large, without any centralization whatsoever, or certificate authorities.
 
-See more in the namin spec (TODO).
+See more in the naming spec (TODO).
 
 # 4. Applications and Datastructures -- on top of IPFS

BITSWAP.md

Lines changed: 1 addition & 1 deletion
@@ -113,7 +113,7 @@ Task workers watch the message queues, dequeue a waiting message, and send it to
 
 ## Network
 
-The network is the abstraction representing all Bitswap peers that are connected to us by one or more hops. Bitswap messages flow in and out of the network. This is where a game-theoretical analysis of Bitswap becomes relevant – in an arbitrary network we must assume that all of our peers are rational and self-interested, and we act accordingly. Work along these lines can be found in the [research-bitswap respository](https://github.com/ipfs/research-bitswap), with a preliminary game-theoretical analysis currently in-progress [here](https://github.com/ipfs/research-bitswap/blob/docs/strategy_analysis/analysis/prelim_strategy_analysis.pdf).
+The network is the abstraction representing all Bitswap peers that are connected to us by one or more hops. Bitswap messages flow in and out of the network. This is where a game-theoretical analysis of Bitswap becomes relevant – in an arbitrary network we must assume that all of our peers are rational and self-interested, and we act accordingly. Work along these lines can be found in the [research-bitswap repository](https://github.com/ipfs/research-bitswap), with a preliminary game-theoretical analysis currently in-progress [here](https://github.com/ipfs/research-bitswap/blob/docs/strategy_analysis/analysis/prelim_strategy_analysis.pdf).
 
 # Implementation Details

IMPORTERS_EXPORTERS.md

Lines changed: 4 additions & 4 deletions
@@ -30,7 +30,7 @@ Lots of discussions around this topic, some of them here:
 
 Importing data into IPFS can be done in a variety of ways. These are use-case specific, produce different datastructures, produce different graph topologies, and so on. These are not strictly needed in an IPFS implementation, but definitely make it more useful.
 
-These data importing primitivies are really just tools on top of IPLD, meaning that these can be generic and separate from IPFS itself.
+These data importing primitives are really just tools on top of IPLD, meaning that these can be generic and separate from IPFS itself.
 
 Essentially, data importing is divided into two parts:
 
@@ -52,10 +52,10 @@ Essentially, data importing is divided into two parts:
 
 ## Requirements
 
-These are a set of requirements (or guidelines) of the expectations that need to be fullfilled for a layout or a splitter:
+These are a set of requirements (or guidelines) of the expectations that need to be fulfilled for a layout or a splitter:
 
 - a layout should expose an API encoder/decoder like, that is, able to convert data to its format and convert it back to the original format
-- a layout should contain a clear umnambiguous representation of the data that gets converted to its format
+- a layout should contain a clear unambiguous representation of the data that gets converted to its format
 - a layout can leverage one or more splitting strategies, applying the best strategy depending on the data format (dedicated format chunking)
 - a splitter can be:
   - agnostic - chunks any data format in the same way
@@ -77,7 +77,7 @@ These are a set of requirements (or guidelines) of the expectations that need to
 Importer
 ```
 
-- `chunkers or splitters` algorithms that read a stream and produce a series of chunks. for our purposes should be deterministic on the stream. divided into:
+- `chunkers or splitters` algorithms that read a stream and produce a series of chunks. for our purposes should be deterministic on the stream. divided into:
   - `universal chunkers` which work on any streams given to them. (eg size, rabin, etc). should work roughly equally well across inputs.
   - `specific chunkers` which work on specific types of files (tar splitter, mp4 splitter, etc). special purpose but super useful for big files and special types of data.
 - `layouts or topologies` graph topologies (eg balanced vs trickledag vs ext4, ... etc)
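
For a concrete picture of the `universal chunkers` mentioned in this hunk, here is a minimal fixed-size splitter in Go. It is only an illustrative sketch with an invented name (`SplitFixed`), not code from go-ipfs: it reads a stream and emits equal-sized chunks, so the same input always yields the same chunk boundaries -- the determinism asked of a splitter above.

```go
package chunker

import "io"

// SplitFixed is a toy "universal" chunker: it reads the whole stream and
// returns fixed-size chunks (the last chunk may be shorter).
func SplitFixed(r io.Reader, size int) ([][]byte, error) {
	var chunks [][]byte
	buf := make([]byte, size)
	for {
		n, err := io.ReadFull(r, buf)
		if n > 0 {
			chunk := make([]byte, n)
			copy(chunk, buf[:n])
			chunks = append(chunks, chunk)
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return chunks, nil
		}
		if err != nil {
			return nil, err
		}
	}
}
```

A rabin-style chunker would replace the fixed boundary with a content-defined one, which keeps chunk boundaries stable when bytes are inserted upstream; a `specific chunker` would instead split along the container format's own record boundaries.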

IPNS.md

Lines changed: 5 additions & 5 deletions
@@ -25,7 +25,7 @@ All things considered, the IPFS naming layer is responsible for the creation of:
 
 ## Introduction
 
-Each time a file is modified, its content address changes. As a consequence, the address previously used for getting that file needs to be updated by who is using it. As this is not pratical, IPNS was created to solve the problem.
+Each time a file is modified, its content address changes. As a consequence, the address previously used for getting that file needs to be updated by who is using it. As this is not practical, IPNS was created to solve the problem.
 
 IPNS is based on [SFS](http://en.wikipedia.org/wiki/Self-certifying_File_System). It consists of a PKI namespace, where a name is simply the hash of a public key. As a result, whoever controls the private key has full control over the name. Accordingly, records are signed by the private key and then distributed across the network (in IPFS, via the routing system). This is an egalitarian way to assign mutable names on the Internet at large, without any centralization whatsoever, or certificate authorities.
 
@@ -53,7 +53,7 @@ An IPNS record is a data structure containing the following fields:
 - 7. **ttl** (uint64)
   - A hint for how long the record should be cached before going back to, for instance the DHT, in order to check if it has been updated.
 
-These records are stored locally, as well as spread accross the network, in order to be accessible to everyone. For storing this structured data, we use [Protocol Buffers](https://github.com/google/protobuf), which is a language-neutral, platform neutral extensible mechanism for serializing structured data.
+These records are stored locally, as well as spread across the network, in order to be accessible to everyone. For storing this structured data, we use [Protocol Buffers](https://github.com/google/protobuf), which is a language-neutral, platform neutral extensible mechanism for serializing structured data.
 
 ```
 message IpnsEntry {
@@ -79,13 +79,13 @@ message IpnsEntry {
 
 Taking into consideration a p2p network, each peer should be able to publish IPNS records to the network, as well as to resolve the IPNS records published by other peers.
 
-When a node intends to publish a record to the network, an IPNS record needs to be created first. The node needs to have a previously generated assymetric key pair to create the record according to the datastructure previously specified. It is important pointing out that the record needs to be uniquely identified in the network. As a result, the record identifier should be a hash of the public key used to sign the record.
+When a node intends to publish a record to the network, an IPNS record needs to be created first. The node needs to have a previously generated asymmetric key pair to create the record according to the datastructure previously specified. It is important pointing out that the record needs to be uniquely identified in the network. As a result, the record identifier should be a hash of the public key used to sign the record.
 
 As an IPNS record may be updated during its lifetime, a versioning related logic is needed during the publish process. As a consequence, the record must be stored locally, in order to enable the publisher to understand which is the most recent record published. Accordingly, before creating the record, the node must verify if a previous version of the record exists, and update the sequence value for the new record being created.
 
 Once the record is created, it is ready to be spread through the network. This way, a peer can use whatever routing system it supports to make the record accessible to the remaining peers of the network.
 
-On the other side, each peer must be able to get a record published by another node. It only needs to have the unique identifier used to publish the record to the network. Taking into account the routing system being used, we may obtain a set of occurences of the record from the network. In this case, records can be compared using the sequence number, in order to obtain the most recent one.
+On the other side, each peer must be able to get a record published by another node. It only needs to have the unique identifier used to publish the record to the network. Taking into account the routing system being used, we may obtain a set of occurrences of the record from the network. In this case, records can be compared using the sequence number, in order to obtain the most recent one.
 
 As soon as the node has the most recent record, the signature and the validity must be verified, in order to conclude that the record is still valid and not compromised.
 
@@ -120,4 +120,4 @@ The routing record is spread across the network according to the available routi
 
 **Key format:** `/ipns/BINARY_ID`
 
-The two routing systems currenty available in IPFS are the `DHT` and `pubsub`. As the `pubsub` topics must be `utf-8` for interoperability among different implementations
+The two routing systems currently available in IPFS are the `DHT` and `pubsub`. As the `pubsub` topics must be `utf-8` for interoperability among different implementations
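
To make the resolution step described in the hunks above concrete, the Go sketch below picks the most recent of several record occurrences returned by the routing system by comparing sequence numbers. The types and names are simplified stand-ins for illustration, not the actual go-ipfs code; signature and validity checks would then run on the winner.

```go
package ipns

// Record is a simplified stand-in for the IpnsEntry structure, keeping only
// the fields needed for this comparison.
type Record struct {
	Value    []byte
	Sequence uint64
}

// MostRecent returns the occurrence with the highest sequence number from the
// set of records fetched via the routing system.
func MostRecent(records []Record) (Record, bool) {
	if len(records) == 0 {
		return Record{}, false
	}
	best := records[0]
	for _, r := range records[1:] {
		if r.Sequence > best.Sequence {
			best = r
		}
	}
	return best, true
}
```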

KEYSTORE.md

Lines changed: 5 additions & 5 deletions
@@ -33,7 +33,7 @@ in the directory should be readonly, by the owner `400`.
 
 ### Interface
 Several additions and modifications will need to be made to the ipfs toolchain to
-accomodate the changes. First, the creation of two subcommands `ipfs key` and
+accommodate the changes. First, the creation of two subcommands `ipfs key` and
 `ipfs crypt`:
 
 ```
@@ -148,7 +148,7 @@ OPTIONS:
 
 DESCRIPTION:
 
-'ipfs crypt encrypt' is a command used to encypt data so that only holders of a certain
+'ipfs crypt encrypt' is a command used to encrypt data so that only holders of a certain
 key can read it.
 
 ```
@@ -206,7 +206,7 @@ does not linger in memory.
 
 #### Unixfs
 
-- new node types, 'encrypted' and 'signed', probably shouldnt be in unixfs, just understood by it
+- new node types, 'encrypted' and 'signed', probably shouldn't be in unixfs, just understood by it
 - if new node types are not unixfs nodes, special consideration must be given to the interop
 
 - DagReader needs to be able to access keystore to seamlessly stream encrypted data we have keys for
@@ -217,7 +217,7 @@ does not linger in memory.
 - DagBuilderHelper needs to be able to encrypt blocks
 - Dag Nodes should be generated like normal, then encrypted, and their parents should
   link to the hash of the encrypted node
-- DagBuilderParams should have extra parameters to acommodate creating a DBH that encrypts the blocks
+- DagBuilderParams should have extra parameters to accommodate creating a DBH that encrypts the blocks
 
 #### New 'Encrypt' package
 
@@ -230,7 +230,7 @@ public key chosen and stored in the Encrypted DAG structure.
 Note: One option is to simply add it to the key interface.
 
 ### Structures
-Some tenative mockups (in json) of the new DAG structures for signing and encrypting
+Some tentative mockups (in json) of the new DAG structures for signing and encrypting
 
 Signed DAG:
 ```

MERKLE_DAG.md

Lines changed: 2 additions & 2 deletions
@@ -41,7 +41,7 @@ The format has two parts, the logical format, and the serialized format.
 
 ### Logical Format
 
-The merkledag format defines two parts, `Nodes` and `Links` between nodes. `Nodes` embed `Links` in their `Link Segement` (or link table).
+The merkledag format defines two parts, `Nodes` and `Links` between nodes. `Nodes` embed `Links` in their `Link Segment` (or link table).
 
 A node is divided in two parts:
 - a `Link Segment` which contains all the links.
@@ -112,7 +112,7 @@ In a sense, IPFS is a "web of data-structures", with the merkledag as the common
 
 The merkledag is a type of Linked-Data. The links do not follow the standard URI format, and instead opt for a more general and flexible UNIX filesystem path format, but the power is all there. One can trivially map formats like JSON-LD directly onto IPFS (IPFS-LD), making IPFS applications capable of using the full-power of the semantic web.
 
-A powerful result of content (and identity) addressing is that linked data definitions can be distributed directly with the content itself, and do not need to be served from the original location. This enables the creation of Linked Data defintions, specs, and applications which can operate faster (no need to fetch it over the network), disconnected, or even completely offline.
+A powerful result of content (and identity) addressing is that linked data definitions can be distributed directly with the content itself, and do not need to be served from the original location. This enables the creation of Linked Data definitions, specs, and applications which can operate faster (no need to fetch it over the network), disconnected, or even completely offline.
 
 ## Merkledag Notation
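
The logical split the first hunk describes -- a link table plus a data segment -- can be pictured with a pair of Go structs. This is a loose sketch for illustration, not the serialized (protobuf) definition from the spec:

```go
package merkledag

// Link is one entry in a node's link segment (link table).
type Link struct {
	Name string // name of the link, usable in path resolution
	Size uint64 // cumulative size of the linked subgraph
	Hash []byte // multihash of the target node
}

// Node is the logical merkledag node: a link segment plus a data segment.
type Node struct {
	Links []Link // link segment
	Data  []byte // opaque data segment
}
```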

README.md

Lines changed: 2 additions & 2 deletions
@@ -12,7 +12,7 @@
 
 We use the following label system to identify the state of each spec:
 
-- ![](https://img.shields.io/badge/status-wip-orange.svg?style=flat-square) - A work-in-progress, possibly to describe an idea before actually commiting to a full draft of the spec.
+- ![](https://img.shields.io/badge/status-wip-orange.svg?style=flat-square) - A work-in-progress, possibly to describe an idea before actually committing to a full draft of the spec.
 - ![](https://img.shields.io/badge/status-draft-yellow.svg?style=flat-square) - A draft that is ready to review. It should be implementable.
 - ![](https://img.shields.io/badge/status-reliable-green.svg?style=flat-square) - A spec that has been adopted (implemented) and can be used as a reference point to learn how the system works.
 - ![](https://img.shields.io/badge/status-stable-brightgreen.svg?style=flat-square) - We consider this spec to close to final, it might be improved but the system it specifies should not change fundamentally.
@@ -54,7 +54,7 @@ The specs contained in this repository are:
   - [Bitswap](./BITSWAP.md) - BitTorrent-inspired exchange
 - **Key Management:**
   - [KeyStore](./KEYSTORE.md) - Key management on IPFS
-  - [KeyChain](./KEYCHAIN.md) - Distribution of cryptographic Artificats
+  - [KeyChain](./KEYCHAIN.md) - Distribution of cryptographic Artifacts
 - **Networking layer:**
   - [libp2p](https://github.com/libp2p/specs) - libp2p is a modular and extensible network stack, built and use by IPFS, but that it can be reused as a standalone project. Covers:
 - **Records, Naming and Record Systems:**

REPO.md

Lines changed: 1 addition & 1 deletion
@@ -90,7 +90,7 @@ Keys are structured using the [multikey](https://github.com/jbenet/multikey) for
 The node's `config` (configuration) is a tree of variables, used to configure various aspects of operation. For example:
 - the set of bootstrap peers IPFS uses to connect to the network
 - the Swarm, API, and Gateway network listen addresses
-- the Datastore configuration regarding the contruction and operation of the on-disk storage system.
+- the Datastore configuration regarding the construction and operation of the on-disk storage system.
 
 There is a set of properties, which are mandatory for the repo usage. Those are `Addresses`, `Discovery`, `Bootstrap`, `Identity`, `Datastore` and `Keychain`.
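
The mandatory properties listed in this hunk can be sketched as a Go struct. The field shapes below are assumptions for illustration only; the real config tree carries many more options:

```go
package repo

// Config mirrors the mandatory top-level sections of the repo config tree
// named above: Addresses, Discovery, Bootstrap, Identity, Datastore, Keychain.
type Config struct {
	Identity struct {
		PeerID  string
		PrivKey string
	}
	Addresses struct {
		Swarm   []string // multiaddrs the swarm listens on
		API     string   // API listen address
		Gateway string   // Gateway listen address
	}
	Discovery map[string]interface{} // peer discovery settings
	Bootstrap []string               // multiaddrs of the bootstrap peers
	Datastore map[string]interface{} // on-disk storage configuration
	Keychain  map[string]interface{} // key management settings
}
```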

REPO_FS.md

Lines changed: 3 additions & 3 deletions
@@ -47,9 +47,9 @@ This spec defines `fs-repo` version `1`, its formats, and semantics.
 `./api` is a file that exists to denote an API endpoint to listen to.
 - It MAY exist even if the endpoint is no longer live (i.e. it is a _stale_ or left-over `./api` file).
 
-In the presence of an `./api` file, ipfs tools (eg go-ipfs `ipfs daemon`) MUST attempt to delegate to the endpoint, and MAY remove the file if resonably certain the file is stale. (e.g. endpoint is local, but no process is live)
+In the presence of an `./api` file, ipfs tools (eg go-ipfs `ipfs daemon`) MUST attempt to delegate to the endpoint, and MAY remove the file if reasonably certain the file is stale. (e.g. endpoint is local, but no process is live)
 
-The `./api` file is used in conjunction with the `repo.lock`. Clients may opt to use the api service, or wait until the process holding `repo.lock` exits. The file's content is the api endoint as a [multiaddr](https://github.com/jbenet/multiaddr)
+The `./api` file is used in conjunction with the `repo.lock`. Clients may opt to use the api service, or wait until the process holding `repo.lock` exits. The file's content is the api endpoint as a [multiaddr](https://github.com/jbenet/multiaddr)
 
 ```
 > cat .ipfs/api
@@ -107,7 +107,7 @@ configuration variables. It MUST only be changed while holding the
 
 ### hooks/
 
-The `hooks` directory contains exectuable scripts to be called on specific
+The `hooks` directory contains executable scripts to be called on specific
 events to alter ipfs node behavior.
 
 Currently available hooks:
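
The `./api` delegation described in the first hunk above is easy to picture in code. Below is a loose Go sketch using the go-multiaddr package; the helper name is invented and staleness detection is left out:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"

	ma "github.com/multiformats/go-multiaddr"
)

// readAPIFile reads <repo>/api and parses its content as a multiaddr.
func readAPIFile(repoPath string) (ma.Multiaddr, error) {
	raw, err := os.ReadFile(filepath.Join(repoPath, "api"))
	if err != nil {
		return nil, err // no ./api file: open the repo directly instead
	}
	return ma.NewMultiaddr(strings.TrimSpace(string(raw)))
}

func main() {
	addr, err := readAPIFile(os.Getenv("IPFS_PATH"))
	if err != nil {
		fmt.Println("no usable ./api file:", err)
		return
	}
	fmt.Println("delegating to api endpoint:", addr)
}
```

If the file is missing or malformed, the client falls back to the behaviour the spec already describes: open the repo itself, or wait for the process holding `repo.lock` to exit.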
