**docs/how-to/troubleshooting-kubo.md** (2 additions & 75 deletions)
@@ -104,7 +104,7 @@ You should see the peer ID of `node a` printed out.
If this command returns nothing (or returns IDs that are not `node a`), then there is no record of `node a` being a provider for the CID. This can happen if the data was added while `node a` did not have a daemon running.
- If this happens, you can the `ipfs routing provide <cid>` command on `node a` to announce to the network that you have that CID:
+ If this happens, and you don't want to wait for [`Reprovider.Interval`](https://github.com/ipfs/kubo/blob/master/docs/config.md#reproviderinterval) to trigger, you can use the `ipfs routing provide <cid>` command on `node a` to manually announce to the network that you have that CID:
```shell
ipfs routing provide <cid>
```
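After announcing, you can check that the provider record is now discoverable. A sketch (run the second command from a different node; `<cid>` stands for your actual CID):

```shell
# On node a: announce the CID to the routing system
ipfs routing provide <cid>

# From another node: look up who provides the CID.
# node a's peer ID should appear in the output.
ipfs routing findprovs <cid>
```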
@@ -161,78 +161,5 @@ When you see ipfs doing something (using lots of CPU, memory, or otherwise being
There's a command (`ipfs diag profile`) that will do this for you and bundle the results up into a zip file, ready to be attached to a bug report.
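For example (a sketch; the archive name includes a timestamp, so yours will differ):

```shell
# Collect goroutine stacks, CPU, heap, and other profiles into one archive
ipfs diag profile

# List what was captured (archive name will differ)
unzip -l ipfs-profile-*.zip
```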
- If you feel intrepid, you can dump this information and investigate it yourself:
- At the top, you can see that this goroutine (number 2306090) has been waiting to acquire a semaphore for 458 minutes. That seems bad. Looking at the rest of the trace, we see the exact line it's waiting on is line 47 of runtime/sema.go. That's not particularly helpful, so we move on. Next, we see that call was made by line 205 of yamux/session.go in the `Close` method of `yamux.Session`. This one appears to be the issue.
- Given that information, look for another goroutine that might be holding the semaphore in question in the rest of the stack dump.
- There are a few different reasons that goroutines can be hung:
- - `semacquire` means we're waiting to take a lock or semaphore.
- - `select` means that the goroutine is hanging in a select statement, and none of the cases are yielding anything.
- - `chan receive` and `chan send` are waiting for a channel to be received from or sent on, respectively.
- - `IO wait` generally means that we are waiting on a socket to read or write data, although it *can* mean we are waiting on a very slow filesystem.
- If you see any of those tags _without_ a `, X minutes` suffix, that generally means there isn't a problem -- you just caught that goroutine in the middle of a short wait for something. If the wait time is over a few minutes, that either means that goroutine doesn't do much, or something is pretty wrong.
- If you see a lot of goroutines, consider using [stackparse](https://github.com/whyrusleeping/stackparse) to filter, sort, and summarize them.
- ### Analyzing the CPU Profile
- The go team wrote an [excellent article on profiling go programs](http://blog.golang.org/profiling-go-programs). If you've already gathered the above information, you can skip down to where they start talking about `go tool pprof`. My go-to method of analyzing these is to run the `web` command, which generates an SVG dotgraph and opens it in your browser. This is the quickest way to easily point out where the hot spots in the code are.
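A sketch of that workflow, assuming you have extracted a CPU profile file from the bundle (the exact file name may vary):

```shell
# Load the profile into pprof's interactive mode
go tool pprof path/to/cpu.profile

# At the (pprof) prompt:
#   top 10   # list the hottest functions by CPU time
#   web      # render an SVG call graph in the browser (requires graphviz)
```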
- ### Analyzing vars and memory statistics
- The output is JSON formatted and includes badger store statistics, the command line run, and the output from Go's [runtime.ReadMemStats](https://golang.org/pkg/runtime/#ReadMemStats). The [MemStats](https://golang.org/pkg/runtime/#MemStats) has useful information about memory allocation and garbage collection.
+ If you feel intrepid, you can dump this information and investigate it yourself by following the [Advanced Kubo Debug Guide at GitHub](https://github.com/ipfs/kubo/blob/master/docs/debug-guide.md).
**docs/reference/diagnostic-tools.md** (1 addition & 1 deletion)
@@ -39,7 +39,7 @@ Learn more about CID concepts, including components and versions in the [content
## Helia Identify
- [Helia Identify](https://ipfs.fyi/identify) is a browser-based tool to run libp2p identify with Peer IDs / multiaddrs, testing whether an IPFS peer is Web friendly, i.e. whether it can be connected to from a browser. This is useful to test whether content can be directly retrieved from a provider node.
+ [Helia Identify](https://ipfs.fyi/identify) is a browser-based tool to run [libp2p identify](https://github.com/libp2p/specs/blob/master/identify/README.md) with Peer IDs / multiaddrs, testing whether an IPFS peer is Web friendly, i.e. whether it can be connected to from a browser. This is useful to test whether content can be directly retrieved from a provider node.