For the purposes of this guide, we will use the following tools:
- [IPFS Check](https://check.ipfs.network) - A browser-based debugging tool that can help you identify the root cause of a problem with retrieval.
- [Kubo](https://github.com/ipfs/kubo) - A popular implementation of IPFS with a CLI that can be used to troubleshoot retrieval and providing from the terminal.
- [Helia Identify tool](https://ipfs.fyi/identify) - A browser-based tool to run [libp2p identify](https://github.com/libp2p/specs/blob/master/identify/README.md) with a given peer id, testing whether the peer is dialable from a browser.
- [Public Delegated Routing Endpoint](../concepts/public-utilities.md#delegated-routing-endpoint) at `https://delegated-ipfs.dev/routing/v1` - which can be used to find providers for a CID.
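The delegated routing endpoint can also be queried directly over HTTP. A minimal sketch of such a query, assuming `curl` and `jq` are installed, with `<CID>` as a placeholder for the CID you are troubleshooting:

```shell
# Ask the public delegated routing endpoint which peers provide a CID.
curl -s -H "Accept: application/json" \
  "https://delegated-ipfs.dev/routing/v1/providers/<CID>" \
  | jq '.Providers[] | {ID, Protocols}'
```

An empty `Providers` array means the endpoint could not find any providers for that CID in the DHT or IPNI.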
## Troubleshooting retrieval
Looking at the output, you can determine the following:
- The routing query found 9 working providers for the CID.
- Some providers were found in the IPNI, some in the DHT.
- Some providers are providing the data with HTTP (the first result), and others with Bitswap over a libp2p QUIC connection (the second result).
When using IPFS Check, you can identify whether NAT hole punching was necessary to connect to a provider.
This is because when a provider peer is behind NAT, it will acquire a circuit relay reservation as part of the [NAT hole punching process (DCUtR)](https://blog.ipfs.tech/2022-01-20-libp2p-hole-punching/).
If NAT traversal is necessary to connect to a provider, and you are also behind NAT, there's a chance that NAT hole punching will fail for you. Unlike the IPFS Check backend which has a public IP (allowing DCUtR to leverage dialback for direct connection), when two peers are behind NAT, they cannot dial back to each other and require hole punching, which is not guaranteed to be successful.
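When a provider is only reachable through a relay, its address list will contain `/p2p-circuit` addresses. An illustrative example of what such an address looks like (the IP and peer IDs here are placeholders, not real values):

```
/ip4/203.0.113.7/udp/4001/quic-v1/p2p/<RelayPeerID>/p2p-circuit/p2p/<ProviderPeerID>
```

A directly reachable provider, by contrast, lists addresses without the `/p2p-circuit` component.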
### IPFS Check video guide
The following video gives an overview of how to use IPFS Check and its different modes of operation.
@[youtube](XeNOQDOrdC0)
## Debug browser connectivity
[Helia Identify](https://ipfs.fyi/identify) is a browser-based tool to run libp2p identify with a given peer id, testing whether the peer is dialable from a browser. This is useful to test whether a provider is reachable from a browser, which is a common cause of browser-based retrieval failures.
## Troubleshooting with Kubo
This procedure assumes that you have the latest version of Kubo [installed](../install/command-line.md). To debug manually:
1. Open up a terminal window.
1. Using Kubo, determine if any peers are advertising the `<CID>` you are requesting:
```shell
ipfs routing findprovs <CID>
```
If you are the provider for the CID, see the next section on [troubleshooting providing](#troubleshooting-providing).
If you rely on a pinning service to provide the content, check the status page of the pinning service.
If you are not the provider for the CID and cannot find any providers, there's not much more you can do. However, if you still have a copy of the content or can fetch it as a `.car` file from somewhere, you can provide it to the network by importing it into Kubo with `ipfs dag import <file>.car`.
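For example, if another node you control still has the DAG, you can move the data over as a CAR file; a sketch, with `<CID>` and the filename as placeholders:

```shell
# On a node that still has the data: export the entire DAG as a CAR file.
ipfs dag export <CID> > backup.car

# On the node that should provide the data: import the CAR file.
# By default this also pins the root CIDs, so they will be reprovided.
ipfs dag import backup.car
```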
## Troubleshooting providing
With this in mind, if no providers are returned, do the following:
```
LastReprovide: 2025-06-10 09:45:18
```
2. Note the value for `LastReprovideDuration`. If it is close to 48 hours, or if you notice a "reprovide taking too long" warning in your `ipfs daemon` output log, select one of the following options, keeping in mind that each has tradeoffs:
- **Enable the [Accelerated DHT Client](https://github.com/ipfs/go-ipfs/blob/master/docs/experimental-features.md#accelerated-dht-client) in Kubo**. This configuration improves content providing times significantly by maintaining more connections to peers, a larger routing table, and batching the advertising of provider records. However, this performance boost comes at the cost of increased resource consumption, most notably network connections to other peers, and can lead to degraded network performance in home networks.
- **Change the [Reprovider Strategy](https://github.com/ipfs/kubo/blob/master/docs/config.md#reproviderstrategy) from `all` to either `pinned+mfs` or `roots`.** In both cases, fewer provider records are advertised: only explicitly pinned content (and, with `pinned+mfs`, files in MFS) is announced. Differences and tradeoffs are noted below:
- The `pinned+mfs` strategy will advertise both the root CIDs and child block CIDs (the entire DAG) of explicitly pinned content and the locally available part of MFS.
- The `roots` strategy will only advertise the root CIDs of pinned content, reducing the total number of provides in each run. This strategy is the most efficient, but should be done with caution, as it will limit discoverability only to root CIDs. In other words, if you are adding folders of files to an IPFS node, only the CID for the pinned folder will be advertised. All the blocks will still be retrievable with Bitswap once a connection to the node is established.
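Either option can be applied from the command line; a sketch assuming a recent version of Kubo (restart the daemon afterwards for the change to take effect):

```shell
# Option 1: enable the experimental accelerated DHT client.
ipfs config --json Experimental.AcceleratedDHTClient true

# Option 2: switch the reprovider strategy to pinned+mfs (or roots).
ipfs config Reprovider.Strategy pinned+mfs
```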