Commit 067482a

Merge branch 'main' into add-website-guide
2 parents 8a6b170 + fbb66bd

20 files changed: +262 −367 lines

.github/styles/pln-ignore.txt
Lines changed: 2 additions & 0 deletions

@@ -135,9 +135,11 @@ mainnet
 markdown(lint)
 markdownlint
 merkle
+merkleize
 merklizing
 merkleizing
 merkleizes
+merkleized
 merkleization
 metadata('s)
 metamask

.github/workflows/build.yml
Lines changed: 41 additions & 0 deletions (new file)

name: Build and Deploy to IPFS

permissions:
  contents: read
  pull-requests: write
  statuses: write
on:
  push:
    branches:
      - main
  pull_request:

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    outputs: # This exposes the CID output of the action to the rest of the workflow
      cid: ${{ steps.deploy.outputs.cid }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Build project
        run: npm run docs:build

      - uses: ipfs/ipfs-deploy-action@v1
        name: Deploy to IPFS
        id: deploy
        with:
          path-to-deploy: docs/.vuepress/dist
          storacha-key: ${{ secrets.STORACHA_KEY }}
          storacha-proof: ${{ secrets.STORACHA_PROOF }}
          github-token: ${{ github.token }}
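The `cid` output exposed by `build-and-deploy` can be consumed by later jobs in the same workflow via the `needs` context. A minimal sketch (the `report-cid` job and its echo step are illustrative additions, not part of this commit):

```yaml
  # Hypothetical follow-up job, shown only to illustrate consuming the
  # `cid` output exposed by the build-and-deploy job above.
  report-cid:
    needs: build-and-deploy
    runs-on: ubuntu-latest
    steps:
      - name: Print the deployed CID
        run: echo "Deployed to IPFS with CID ${{ needs.build-and-deploy.outputs.cid }}"
```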

docs/.vuepress/config.js
Lines changed: 1 addition & 5 deletions

@@ -207,14 +207,10 @@ module.exports = {
         '/how-to/modify-bootstrap-list',
         '/how-to/nat-configuration',
         '/how-to/kubo-rpc-tls-auth',
-        '/how-to/ipfs-updater',
-        [
-          'https://github.com/ipfs-examples/js-ipfs-examples/tree/master/examples/custom-ipfs-repo',
-          'Customize an IPFS repo'
-        ],
         '/how-to/kubo-garbage-collection',
         '/how-to/troubleshooting',
         '/how-to/webtransport',
+        '/install/run-ipfs-inside-docker',
       ]
     },
     {

docs/.vuepress/redirects
Lines changed: 2 additions & 1 deletion

@@ -43,14 +43,15 @@
 /how-to/browser-tools-frameworks /how-to/ipfs-on-the-web
 /how-to/troubleshoot-file-transfers /how-to/troubleshooting
 /how-to/run-ipfs-inside-docker /install/run-ipfs-inside-docker
+/how-to/ipfs-updater /install/command-line
 /install/command-line-quick-start/ /how-to/command-line-quick-start
 /install/js-ipfs/ https://github.com/ipfs/helia/wiki
 /introduction/ /concepts
 /introduction/faq/ /concepts/faq
 /introduction/how-ipfs-works/ /concepts/how-ipfs-works
 /introduction/overview/ /concepts/what-is-ipfs
 /introduction/usage/ /how-to/command-line-quick-start
-/install/ipfs-updater /how-to/ipfs-updater
+/install/ipfs-updater /install/command-line
 /install/recent-releases/ https://github.com/ipfs/kubo/releases
 /install/recent-releases/go-ipfs-0-5/ https://github.com/ipfs/kubo/releases
 /install/recent-releases/go-ipfs-0-5/features/ https://github.com/ipfs/kubo/releases

docs/README.md
Lines changed: 1 addition & 1 deletion

@@ -33,7 +33,7 @@ You can build apps that leverage IPFS implementations, or use HTTP instead:

 #### Using IPFS

-Build an IPFS-native app using one of the many IPFS <VueCustomTooltip label="Software, written in any programming language, with functionality to process and transmit content-addressed data. Some implementations are optimized for specific use cases or devices, or use different subsystems to handle content-addressed data. There are multiple specififactions in IPFS for handling content-addressed data, and not all implementations implement them." underlined multiline is-medium>implementations</VueCustomTooltip> and tools built by and for Web3 users:
+Build an IPFS-native app using one of the many IPFS <VueCustomTooltip label="Software, written in any programming language, with functionality to process and transmit content-addressed data. Some implementations are optimized for specific use cases or devices, or use different subsystems to handle content-addressed data. There are multiple specifications in IPFS for handling content-addressed data, and not all implementations implement them." underlined multiline is-medium>implementations</VueCustomTooltip> and tools built by and for Web3 users:

 - If you are familiar with JavaScript, checkout the[IPFS in web apps guide](./how-to/ipfs-in-web-apps.md), which covers how to use [Helia](https://github.com/ipfs/helia) and related libraries to build IPFS-native apps.
 - To develop IPFS applications using Go and/or interact with IPFS from the terminal, use the [IPFS Kubo implementation in Go](./install/command-line.md).

docs/concepts/further-reading/academic-papers.md
Lines changed: 1 addition & 1 deletion

@@ -11,7 +11,7 @@ Here are a few papers that are useful for understanding IPFS, whether it be unde

 > Original IPFS white paper

-**Benet, Juan**: _The InterPlanetary File System (IPFS) is a peer-to-peer distributed file system that seeks to connect all computing devices with the same system of files. In some ways, IPFS is similar to the Web, but IPFS could be seen as a single BitTorrent swarm, exchanging objects within one Git repository. In other words, IPFS provides a high throughput content-addressed block storage model, with content-addressed hyperlinks. This forms a generalized Merkle DAG, a data structure upon which one can build versioned file systems, blockchains, and even a Permanent Web. IPFS combines a distributed hashtable, an incentivized block exchange, and a self-certifying namespace. IPFS has no single point of failure, and nodes do not need to trust each other._
+**Benet, Juan**: _The InterPlanetary File System (IPFS) is a peer-to-peer distributed file system that seeks to connect all computing devices with the same system of files. In some ways, IPFS is similar to the Web, but IPFS could be seen as a single BitTorrent swarm, exchanging objects within one Git repository. In other words, IPFS provides a high throughput content-addressed block storage model, with content-addressed hyperlinks. This forms a generalized Merkle DAG, a data structure upon which one can build versioned file systems, blockchains, and even a Permanent Web. IPFS combines a distributed hashtable, an incentivized block exchange, and a self-certifying namespace. IPFS has no single point of failure, and nodes do not need to trust each other._

 ## [Design and Evaluation of IPFS: A Storage Layer for the Decentralized Web](https://ipfs.io/ipfs/bafybeid6doxhzck3me366265u3ony6rbuzv7dze7pjuptxeln24b2qvur4?filename=trautwein2022a.pdf)

docs/concepts/immutability.md
Lines changed: 1 addition & 1 deletion

@@ -91,7 +91,7 @@ In the website example, when we change a variable, the CID of the webpage is dif
 +----------+
 ```

-This process is essentially what the [InterPlantery Naming Service (IPNS)](../concepts/ipns.md) does! CIDs can be difficult to deal with and hard to remember, so IPNS saves users from the cumbersome task of dealing with CIDs directly. More importantly, CIDs change with the content because they are the content. Whereas the inbound reference of URLs/pointers stay the same, and the outbound referral changes:
+This process is essentially what the [InterPlanetary Naming Service (IPNS)](../concepts/ipns.md) does! CIDs can be difficult to deal with and hard to remember, so IPNS saves users from the cumbersome task of dealing with CIDs directly. More importantly, CIDs change with the content because they are the content. Whereas the inbound reference of URLs/pointers stay the same, and the outbound referral changes:

 ```shell
 +--------+ +----------------+ +-------------------------------------------------------------+

docs/concepts/ipns.md
Lines changed: 23 additions & 3 deletions

@@ -18,6 +18,7 @@ description: Learn about mutability in IPFS, InterPlanetary Name System (IPNS),
 - [Publishing IPNS records over PubSub lifecycle](#publishing-ipns-records-over-pubsub-lifecycle)
 - [Tradeoffs between consistency vs. availability](#tradeoffs-between-consistency-vs-availability)
 - [IPNS record validity](#ipns-record-validity)
+- [IPNS record TTL](#ipns-record-ttl)
 - [Practical considerations](#practical-considerations)
 - [IPNS in practice](#ipns-in-practice)
 - [Resolving IPNS names using IPFS gateways](#resolving-ipns-names-using-ipfs-gateways)

@@ -177,13 +178,32 @@ Availability means resolving to a valid IPNS record, at the cost of potentially

 When setting the `validity` (referred to as [`lifetime` by Kubo](https://github.com/ipfs/kubo/blob/master/docs/config.md#ipnsrecordlifetime)) field of an IPNS record, you typically need to choose whether you favor **consistency** (short validity period, e.g. 48 hours) or **availability** (long validity period, e.g. 1 month), due to the inherent trade-off between the two.

+#### IPNS record TTL
+
+If you experience slow IPNS update propagation, the Time-to-Live (TTL) setting is the first thing to check.
+
+##### TTL as a Publisher
+
+When you publish an IPNS Record, the default TTL, which controls caching, might be set to a high value, such as one hour. If you want third-party gateways and nodes to bypass the cache and check for updates more frequently, consider lowering this value.
+
+- **Kubo**: Refer to the `--ttl` option in [`ipfs name publish --help`](https://docs.ipfs.tech/reference/kubo/cli/#ipfs-name-publish) for details on adjusting this setting.
+- **Note**: If your IPNS Record is used behind a DNSLink (e.g., `/ipns/example.com` pointing to `/ipns/k51..libp2p-key`), the DNS TXT record at `_dnslink.example.com` has its own TTL. This DNS TTL also affects caching. Ensure that both TTL values are aligned for consistent behavior.
+
+##### TTL as a Gateway Operator
+
+You should have the ability to override the TTL provided by the publisher and set a lower cap on how long resolution results are cached.
+
+- **Kubo**: Configure this using the [`Ipns.MaxCacheTTL`](https://github.com/ipfs/kubo/blob/master/docs/config.md#ipnsmaxcachettl) setting.
+- **Rainbow**: Adjust this with the [`RAINBOW_IPNS_MAX_CACHE_TTL`](https://github.com/ipfs/rainbow/blob/main/docs/environment-variables.md#rainbow_ipns_max_cache_ttl) environment variable.
+
 #### Practical considerations

-One of the most important things to consider with IPNS names is **how frequently you intend on updating the name**.
+The most important thing to consider with IPNS names is **how frequently you intend on updating the name** and **how long a valid record should be cached before checking for an update**.

-Practically, two levers within your control determine where your IPNS name is on the spectrum between consistency and availability:
+Practically, levers within your control determine where your IPNS name is on the spectrum between consistency and availability:

-- **IPNS record validity:** longer validity will veer towards availability. Moreover, longer validity will reduce the dependence on the key holder (which for most purposes is stored on a single machine and rare shared) since the record can continue to persist without requiring the private key holder to sign a new record. Another benefit of a longer validity is that the transport can be delegated to other nodes or services (such as [w3name](https://staging.web3.storage/docs/how-tos/w3name/)), without compromising the private key.
+- **IPNS record validity:** longer validity will veer towards availability. Moreover, longer validity will reduce the dependence on the key holder (which for most purposes is stored on a single machine and rarely shared) since the record can continue to persist without requiring the private key holder to sign a new record. Another benefit of a longer validity is that the transport can be delegated to other nodes or services (such as [w3name](https://docs.storacha.network/how-to/w3name/)), without compromising the private key.
+- **IPNS record TTL:** longer TTL trades update propagation speed for better page load performance and resiliency.
 - **Transport mechanism:** the DHT veers towards consistency while PubSub veers towards availability. However, with Kubo, IPNS names are always published to the DHT, while PubSub is opt-in. For most purposes, enabling PubSub is a net gain unless you hit the upper limit of connections as a result of too many PubSub subscriptions.

 ## IPNS in practice
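The publisher-side TTL and validity levers described in this diff can be sketched as a single Kubo command (a sketch only: it assumes a running Kubo daemon, an existing key named `mykey`, and a placeholder CID; the `--ttl`, `--lifetime`, and `--key` flags are the ones documented in `ipfs name publish --help`):

```shell
# Publish an IPNS record that asks caches to re-check after 5 minutes (TTL),
# while keeping a 48-hour validity (lifetime) for availability.
ipfs name publish --key=mykey --ttl=5m --lifetime=48h /ipfs/<CID>
```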

docs/concepts/lifecycle.md
Lines changed: 15 additions & 14 deletions

@@ -5,14 +5,14 @@ description: Learn about the lifecycle of data in IPFS.

 # The lifecycle of data in IPFS

-- [1. Content-addressing](#1-content-addressing)
+- [1. Content-addressing / Merkleizing](#1-content-addressing--merkleizing)
 - [2. Providing](#2-providing)
 - [3. Retrieving](#3-retrieving)
 - [Learn more](#learn-more)

 ## 1. Content-addressing / Merkleizing

-The first stage in the lifecycle of data in IPFS is to address it by CID. This is a local operation that takes arbitrary data and encodes it so it can be addressed by a CID.
+The first stage in the lifecycle of data in IPFS is to address it by CID. This is a local operation that takes arbitrary data and encodes it so it can be addressed by a CID. This is also known as _merkleizing_ the data, because the input data is transformed into a [Merkle DAG](./merkle-dag.md).

 The exact process depends on the type of data. For files and directories, this is done by constructing a [UnixFS](./file-systems.md#unix-file-system-unixfs) [Merkle DAG](./merkle-dag.md). For other data types, such as dag-cbor, this is done by encoding the data with [dag-cbor](https://ipld.io/docs/codecs/known/dag-cbor/) which is hashed to produce a CID.

@@ -22,11 +22,20 @@ For example, merkleizing a static web application into a UnixFS DAG looks like t

 ## 2. Providing

-In this stage, the blocks of the CID are saved on an IPFS node (or pinning service) and made retrievable to the network. Simply saving the CID on the node does not mean the CID is retrievable, so pinning must be used. Pinning allows the node to advertise that it has the CID, and provide it to the network.
+Once the input data has been merkleized and addressed by a CID, the node announces itself as a provider of the CID(s) to the IPFS network, thereby creating a public mapping between the CID and the node. This is typically known as **providing**; other names for this step are **publishing** and **advertising**.

-- **Advertising:** In this step, a CID is made discoverable to the IPFS network by advertising a record linking the CID and the server's IP address to the [DHT](./dht.md). Advertising is a continuous process that repeats typically every 12 hours. The term **publishing** is also commonly used to refer to this step.
+IPFS nodes announce CID(s) to either the [DHT](./dht.md) or the [IPNI](./ipni.md) — the two content routing systems supported by [IPFS Mainnet](./glossary.md#mainnet).

-- **Providing:** The content-addressable representation of the CID is persisted on one of web3.storage's IPFS nodes (servers running an IPFS node) and made publicly available to the IPFS network.
+### What about Pinning?
+
+[Pinning](./glossary.md#pinning) can have slightly different meanings depending on the context.
+
+From a high level, pinning can mean either:
+
+- **Pin by CID:** Requesting a pinning service or IPFS node to pin a CID, without uploading the data; in this case the pinning service or IPFS node handles retrieval from provider nodes, a process that can fail if no providers are available. Once pinned, the pinning service or IPFS node will keep a copy of the data locally and typically provide the CIDs it is pinning to the network. The [Pinning API spec](https://ipfs.github.io/pinning-services-api-spec/) provides a standard way to do this with pinning services, though some pinning services have their own APIs. With Kubo, the `ipfs pin add CID` command can be used to pin a CID.
+- **Pin data:** Uploading data (files, directories, etc.) to the pinning service and getting back a CID; in this case the pinning service handles merkleizing the data so it is addressed by a CID. With Kubo, the `ipfs add file` command is used to both merkleize the data and pin it.
+
+To summarize, pinning, when successful, results in a node or pinning service **providing** the CIDs to the network.

@@ -38,15 +47,7 @@ In this stage, an IPFS node fetches the blocks of the CID and constructs the Mer

 - **Verification:** The IPFS node verifies the blocks fetched by hashing them and ensuring that the resulting hash is correct. Note that this type of retrieval is _trustless_; that is, blocks can come from any node in the network.

-- **Local access:** Once all blocks are present, the Merkle DAG can be constructed, making the file or directory underlying the CID successfully replicated and accessible.
-
-<!-- ## 4. Deleting
-
-At this point, the blocks associated with a CID are deleted from a node. **Deletion is always a local operation**. If a CID has been replicated to other nodes, it will continue to be available on the IPFS network.
-
-:::callout
-Once the CID is replicated by another node, it is typically advertised to DHT by default, even if it isn't explicitly pinned.
-::: -->
+- **Local access:** Once all blocks of the DAG with the requested CID are successfully replicated locally, the data is available for local access.

 ## Learn more
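The content-addressing / merkleizing step described in the lifecycle diff above can be illustrated with plain hashing. This is a simplified sketch only: the `toy_cid` and `merkleize` helpers are hypothetical stand-ins, and real IPFS CIDs additionally involve multihash, multicodec, and multibase prefixes plus UnixFS chunking and DAG layout, all of which this toy omits.

```python
import hashlib


def toy_cid(data: bytes) -> str:
    # Toy stand-in for a CID: just the SHA-256 digest of the raw bytes.
    # Real CIDs wrap the digest in multihash + multicodec + multibase.
    return hashlib.sha256(data).hexdigest()


def merkleize(data: bytes, chunk_size: int = 4) -> str:
    # Split data into fixed-size chunks (leaf blocks), hash each one,
    # then hash the concatenated leaf digests to get a root identifier.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    leaf_digests = [toy_cid(chunk) for chunk in chunks]
    return toy_cid("".join(leaf_digests).encode())


# Content addressing is deterministic and purely local: the same bytes
# always produce the same identifier, and any change produces a new one.
a = merkleize(b"hello ipfs")
b = merkleize(b"hello ipfs")
c = merkleize(b"hello ipfs!")
print(a == b, a == c)  # True False
```

This determinism is why addressing is a local operation in stage 1: no network is needed to compute the identifier, only to provide and retrieve it in stages 2 and 3.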
