
Releases: ipfs/kubo

v0.40.1

27 Feb 17:58
v0.40.1
39f8a65


Note

This patch release was brought to you by the Shipyard team.

This is a Windows bugfix release. If you use Linux or macOS, v0.40.0 should be fine.

🚒 Bugfix for Windows

If you run Kubo on Windows, v0.40.0 can crash after running for a while. The daemon starts fine and works normally at first, but eventually hits a memory corruption in Go's network I/O layer and dies. This is likely caused by an upstream Go 1.26 regression in overlapped I/O handling that has known issues (go#77142, #11214).

This patch release downgrades the Go toolchain from 1.26 to 1.25, which does not have this bug. If you are running Kubo on Windows, upgrade to v0.40.1. We will switch back to Go 1.26.x once the upstream fix lands.

📝 Changelog

Full Changelog v0.40.1
  • github.com/ipfs/kubo:

See v0.40.0 for the full list of changes since v0.39.x.

v0.40.0

25 Feb 23:16
v0.40.0
882b7d2


Note

This release was brought to you by the Shipyard team.

🔦 Highlights

This release brings reproducible file imports (CID Profiles), cleanup of interrupted flatfs operations, better connectivity diagnostics, and improved gateway behavior. It also ships with Go 1.26, lowering memory usage and GC overhead across the board.

🔢 IPIP-499: UnixFS CID Profiles

CID Profiles are presets that pin down how files get split into blocks and organized into directories, so you get the same CID for the same data across different software or versions. Defined in IPIP-499.

New configuration profiles

  • unixfs-v1-2025: modern CIDv1 profile with improved defaults
  • unixfs-v0-2015 (alias legacy-cid-v0): best-effort legacy CIDv0 behavior

Apply with: ipfs config profile apply unixfs-v1-2025

The test-cid-v1 and test-cid-v1-wide profiles have been removed. Use unixfs-v1-2025 or manually set specific Import.* settings instead.

New Import.* options

  • Import.UnixFSHAMTDirectorySizeEstimation: estimation mode (links, block, or disabled)
  • Import.UnixFSDAGLayout: DAG layout (balanced or trickle)

New ipfs add CLI flags

  • --dereference-symlinks resolves all symlinks to their target content, replacing the deprecated --dereference-args, which only resolved CLI argument symlinks
  • --empty-dirs / -E controls inclusion of empty directories (default: true)
  • --hidden / -H includes hidden files (default: false)
  • --trickle: the implicit default can now be adjusted via Import.UnixFSDAGLayout

ipfs files write fix for CIDv1 directories

When writing to MFS directories that use CIDv1 (via --cid-version=1 or ipfs files chcid), single-block files now produce raw block CIDs (like bafkrei...), matching the behavior of ipfs add --raw-leaves. Previously, MFS would wrap single-block files in dag-pb even when raw leaves were enabled. CIDv0 directories continue to use dag-pb.
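The leaf codec decision can be pictured with a tiny sketch (the function name and signature are illustrative, not Kubo internals):

```python
# Illustrative sketch: which codec a single-block MFS file gets.
def single_block_codec(cid_version: int, raw_leaves: bool) -> str:
    if cid_version >= 1 and raw_leaves:
        return "raw"      # bafkrei... CIDs, matching ipfs add --raw-leaves
    return "dag-pb"       # CIDv0 directories keep dag-pb wrapping

assert single_block_codec(1, True) == "raw"
assert single_block_codec(0, False) == "dag-pb"
```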

Block size limit raised to 2MiB

ipfs block put, ipfs dag put, and ipfs dag import now accept blocks up to 2MiB without --allow-big-block, matching the bitswap spec. The previous 1MiB limit was too restrictive and broke ipfs dag import of 1MiB-chunked non-raw-leaf data (protobuf wrapping pushes blocks slightly over 1MiB). The max --chunker value for ipfs add is 2MiB - 256 bytes to leave room for protobuf framing. IPIP-499 profiles use lower chunk sizes (256KiB and 1MiB) and are not affected.
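The size arithmetic above can be sketched as follows (the exact dag-pb overhead value is an illustrative assumption; in practice it is a small number of bytes):

```python
# Sketch of why the old 1 MiB limit broke 1 MiB-chunked non-raw-leaf data.
MiB = 1024 * 1024
new_block_limit = 2 * MiB
max_chunker = 2 * MiB - 256   # leaves room for protobuf framing

chunk = MiB                   # a 1 MiB raw chunk...
dagpb_overhead = 16           # ...plus an assumed protobuf envelope
wrapped = chunk + dagpb_overhead

assert wrapped > MiB                # slightly over the old 1 MiB limit
assert wrapped <= new_block_limit   # accepted under the new 2 MiB limit
assert max_chunker + 256 == new_block_limit
```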

HAMT Threshold Fix

HAMT directory sharding threshold changed from >= to > to match the Go docs and JS implementation (ipfs/boxo@6707376). A directory exactly at 256 KiB now stays as a basic directory instead of converting to HAMT. This is a theoretical breaking change, but unlikely to impact real-world users as it requires a directory to be exactly at the threshold boundary. If you depend on the old behavior, adjust Import.UnixFSHAMTShardingSize to be 1 byte lower.
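The one-operator change can be sketched like this (the constant and function names are illustrative):

```python
# Sketch of the sharding decision change at the threshold boundary.
THRESHOLD = 256 * 1024  # Import.UnixFSHAMTShardingSize default (256 KiB)

def should_shard_old(dir_size: int) -> bool:
    return dir_size >= THRESHOLD   # pre-v0.40 behavior

def should_shard_new(dir_size: int) -> bool:
    return dir_size > THRESHOLD    # matches Go docs and JS implementation

# Only a directory exactly at the boundary changes behavior.
assert should_shard_old(THRESHOLD) and not should_shard_new(THRESHOLD)
assert should_shard_old(THRESHOLD + 1) == should_shard_new(THRESHOLD + 1)
```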

🧹 Automatic cleanup of interrupted imports

If you cancel ipfs add or ipfs dag import mid-operation, Kubo now automatically cleans up incomplete data on the next daemon start. Previously, interrupted imports would leave orphan blocks in your repository; because they were never pinned, they were difficult to identify and could only be removed by running explicit garbage collection.

Batch operations also use less memory now. Block data is written to disk immediately rather than held in RAM until the batch commits.

Under the hood, the block storage layer (flatfs) was rewritten to use atomic batch operations via a temporary staging directory. See go-ds-flatfs#142 for details.

🌍 Light clients can now use your node for delegated routing

The Routing V1 HTTP API is now exposed by default at http://127.0.0.1:8080/routing/v1. This allows light clients in browsers to use Kubo Gateway as a delegated routing backend instead of running a full DHT client. Support for IPIP-476: Delegated Routing DHT Closest Peers API is included. It can be disabled via Gateway.ExposeRoutingAPI.

📊 See total size when pinning

ipfs pin add --progress now shows the total size of the pinned DAG as it fetches blocks.

Example output:

Fetched/Processed 336 nodes (83 MB)

🔀 IPIP-523: ?format= takes precedence over Accept header

The ?format= URL query parameter now always wins over the Accept header (IPIP-523), giving you deterministic HTTP caching and protecting against CDN cache-key collisions. Browsers can also use ?format= reliably even when they send Accept headers with specific content types.

The only breaking change is for edge cases where a client sends both a specific Accept header and a different ?format= value for an explicitly supported format (tar, raw, car, dag-json, dag-cbor, etc.). Previously Accept would win. Now ?format= always wins.
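The new precedence can be sketched with a simplified negotiation (one format per header; real gateway negotiation handles quality values and many more formats):

```python
# Illustrative sketch of IPIP-523 precedence: ?format= always wins.
def response_format(query_format, accept):
    if query_format:        # explicit ?format= query parameter
        return query_format
    return accept           # otherwise fall back to the Accept header

assert response_format("car", "application/vnd.ipld.raw") == "car"
assert response_format(None, "application/vnd.ipld.raw") == "application/vnd.ipld.raw"
```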

🚫 IPIP-524: Gateway codec conversion disabled by default

Gateways no longer convert between codecs by default (IPIP-524). This removes gateways from a gatekeeping role: clients can adopt new codecs immediately without waiting for gateway operator updates. Requests for a format that differs from the block's codec now return 406 Not Acceptable.

Migration: Clients should fetch raw blocks (?format=raw or Accept: application/vnd.ipld.raw) and convert client-side using libraries like @helia/verified-fetch.

Set Gateway.AllowCodecConversion to true to restore the previous behavior.
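The new default can be sketched roughly as follows (the function and status handling are illustrative, not Kubo's actual gateway code):

```python
# Illustrative sketch of IPIP-524: no codec conversion by default.
def gateway_status(block_codec, requested_codec, allow_conversion=False):
    if requested_codec in (block_codec, "raw"):   # raw bytes need no conversion
        return 200
    return 200 if allow_conversion else 406       # 406 Not Acceptable by default

assert gateway_status("dag-cbor", "dag-json") == 406
assert gateway_status("dag-cbor", "dag-json", allow_conversion=True) == 200
assert gateway_status("dag-cbor", "raw") == 200
```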

✅ More reliable IPNS over PubSub

The IPNS over PubSub implementation in Kubo is now more reliable. Duplicate messages are rejected even in large networks where messages may cycle back after the in-memory cache expires.

Kubo now persists the maximum seen sequence number per peer to the datastore (go-libp2p-pubsub#BasicSeqnoValidator), providing stronger duplicate detection that survives node restarts. This addresses message flooding issues reported in #9665.
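The validator's idea can be sketched with persistence stubbed out as a dict (the real BasicSeqnoValidator stores the max seqno in the datastore so it survives restarts):

```python
# Illustrative sketch of seqno-based duplicate rejection per peer.
class SeqnoValidator:
    def __init__(self):
        self.max_seen = {}  # peer ID -> highest sequence number seen

    def validate(self, peer: str, seqno: int) -> bool:
        if seqno <= self.max_seen.get(peer, -1):
            return False          # duplicate or stale message: reject
        self.max_seen[peer] = seqno
        return True

v = SeqnoValidator()
assert v.validate("peerA", 1)
assert not v.validate("peerA", 1)   # replayed message rejected
assert v.validate("peerA", 2)
```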

IPNS over PubSub is opt-in via Ipns.UsePubsub. Kubo's pubsub is optimized for the IPNS use case. For custom pubsub applications requiring different va...


v0.40.0-rc2

20 Feb 19:15
v0.40.0-rc2


Pre-release

This Release Preview was brought to you by the Shipyard team.

Draft release notes: docs/changelogs/v0.40.md
Release status: #11008

v0.40.0-rc1

12 Feb 22:13
v0.40.0-rc1


Pre-release

This Release Preview was brought to you by the Shipyard team.

Draft release notes: docs/changelogs/v0.40.md
Release status: #11008

v0.39.0

27 Nov 03:47
v0.39.0
2896aed


Note

This release was brought to you by the Shipyard team.

Overview

This release is an important step toward solving the DHT bottleneck for self-hosting IPFS on consumer hardware and home networks. The DHT sweep provider (now default) announces your content to the network without traffic spikes that overwhelm residential connections. Automatic UPnP recovery means your node stays reachable after router restarts without manual intervention.

New content becomes findable immediately after ipfs add. The provider system persists state across restarts, alerts you when falling behind, and exposes detailed stats for monitoring. This release also finalizes the deprecation of the legacy go-ipfs name.

🔦 Highlights

🎯 DHT Sweep provider is now the default

The Amino DHT Sweep provider system, introduced as experimental in v0.38, is now enabled by default (Provide.DHT.SweepEnabled=true).

What this means: All nodes now benefit from efficient keyspace-sweeping content announcements that reduce memory overhead and create predictable network patterns, especially for nodes providing large content collections.

Migration: The transition is automatic on upgrade. Your existing configuration is preserved:

  • If you explicitly set Provide.DHT.SweepEnabled=false in v0.38, you'll continue using the legacy provider
  • If you were using the default settings, you'll automatically get the sweep provider
  • To opt out and return to legacy behavior: ipfs config --json Provide.DHT.SweepEnabled false
  • Providers with medium to large datasets may need to adjust defaults; see Capacity Planning
  • When Routing.AcceleratedDHTClient is enabled, full sweep efficiency may not be available yet; consider disabling the accelerated client as sweep is sufficient for most workloads. See caveat 4.

New features available with sweep mode:

  • Detailed statistics via ipfs provide stat (see below)
  • Automatic resume after restarts with persistent state (see below)
  • Proactive alerts when reproviding falls behind (see below)
  • Better metrics for monitoring (provider_provides_total) (see below)
  • Fast optimistic provide of new root CIDs (see below)

For background on the sweep provider design and motivations, see Provide.DHT.SweepEnabled and Shipyard's blogpost Provide Sweep: Solving the DHT Provide Bottleneck.

⚡ Fast root CID providing for immediate content discovery

When you add content to IPFS, the sweep provider queues it for efficient DHT provides over time. While this is resource-efficient, other peers won't find your content immediately after ipfs add or ipfs dag import completes.

To make sharing faster, ipfs add and ipfs dag import now do an immediate provide of root CIDs to the DHT in addition to the regular queue (controlled by the new --fast-provide-root flag, enabled by default). This complements the sweep provider system: fast-provide handles the urgent case (root CIDs that users share and reference), while the sweep provider efficiently provides all blocks according to Provide.Strategy over time.

This closes the gap between command completion and content shareability: root CIDs typically become discoverable on the network in under a second (compared to 30+ seconds previously). The feature uses optimistic DHT operations, which are significantly faster with the sweep provider (now enabled by default).

By default, this immediate provide runs in the background without blocking the command. For use cases requiring guaranteed discoverability before the command returns (e.g., sharing a link immediately), use --fast-provide-wait to block until the provide completes.

Simple examples:

ipfs add file.txt                     # Root provided immediately, blocks queued for sweep provider
ipfs add file.txt --fast-provide-wait # Wait for root provide to complete
ipfs dag import file.car              # Same for CAR imports

Configuration: Set defaults via Import.FastProvideRoot (default: true) and Import.FastProvideWait (default: false). See ipfs add --help and ipfs dag import --help for more details and examples.

Fast root CID provide is automatically skipped when DHT routing is unavailable (e.g., Routing.Type=none or delegated-only configurations).
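The skip/wait behavior described above can be sketched as follows (function name, arguments, and return values are illustrative, not Kubo internals):

```python
# Illustrative sketch of the fast-provide decision flow for root CIDs.
def fast_provide(root_cid, wait, dht_available, provide_fn):
    if not dht_available:           # e.g. Routing.Type=none or delegated-only
        return "skipped"
    if wait:                        # --fast-provide-wait
        provide_fn(root_cid)        # block until the DHT provide completes
        return "provided"
    return "queued-in-background"   # default: don't block the command

calls = []
assert fast_provide("bafyroot", wait=False, dht_available=False,
                    provide_fn=calls.append) == "skipped"
assert fast_provide("bafyroot", wait=True, dht_available=True,
                    provide_fn=calls.append) == "provided"
assert calls == ["bafyroot"]
assert fast_provide("bafyroot", wait=False, dht_available=True,
                    provide_fn=calls.append) == "queued-in-background"
```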

⏯️ Provider state persists across restarts

The Sweep provider now persists the reprovide cycle state and automatically resumes where it left off after a restart. This brings several improvements:

  • Persistent progress: The provider saves its position in the reprovide cycle to the datastore. On restart, it continues from where it stopped instead of starting from scratch.
  • Catch-up reproviding: If the node was offline for an extended period, all CIDs that haven't been reprovided within the configured reprovide interval are immediately queued for reproviding when the node starts up. This ensures content availability is maintained even after downtime.
  • Persistent provide queue: The provide queue is persisted to the datastore on shutdown. When the node restarts, queued CIDs are restored and provided as expected, preventing loss of pending provide operations.
  • Resume control: The resume behavior is controlled via Provide.DHT.ResumeEnabled (default: true). Set to false if you don't want to keep the persisted provider state from a previous run.

This feature improves reliability for nodes that experience intermittent connectivity or restarts.
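Catch-up reproviding can be pictured with a small sketch (the interval value here is an assumption for illustration, not necessarily Kubo's configured default):

```python
# Illustrative sketch: which CIDs are queued immediately after downtime.
REPROVIDE_INTERVAL = 22 * 3600  # seconds; assumed interval for illustration

def overdue(last_provided, now):
    # CIDs not reprovided within the interval are queued for immediate
    # reproviding when the node starts up.
    return [cid for cid, t in last_provided.items()
            if now - t > REPROVIDE_INTERVAL]

now = 100 * 3600
state = {"cidA": now - 30 * 3600,   # offline too long: overdue
         "cidB": now - 1 * 3600}    # recently provided: fine
assert overdue(state, now) == ["cidA"]
```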

📊 Detailed statistics with ipfs provide stat

The Sweep provider system now exposes detailed statistics through ipfs provide stat, helping you monitor provider health and troubleshoot issues.

Run ipfs provide stat for a quick summary, or use --all to see complete metrics including connectivity status, queue sizes, reprovide schedules, network statistics, operation rates, and worker utilization. For real-time monitoring, use watch ipfs provide stat --all --compact to observe changes in a 2-column layout. Individual sections can be displayed with flags like --network, --operations, or --workers.

For Dual DHT configurations, use --lan to view LAN DHT statistics instead of the default WAN DHT stats.

For more information, run ipfs provide stat --help or see the Provide Stats documentation, including Capacity Planning.

Note

Legacy provider (when Provide.DHT.SweepEnabled=false) shows basic statistics without flag support.

🔔 Slow reprovide warnings

Kubo now monitors DHT reprovide operations when Provide.DHT.SweepEnabled=true and alerts you if your node is falling behind on reprovides.

When the reprovide queue consistently grows and all periodic workers are busy, a warning displays with:

  • Queue size and worker utilization details
  • Recommended solutions: increase Provide.DHT.MaxWorkers or Provide.DHT.DedicatedPeriodicWorkers
  • Command to monitor real-time progress: watch ipfs provide stat --all --compact

The alert polls every 15 minutes (to avoid alert fatigue while catching persistent issues) and only triggers after sustained growth across multiple intervals. The legacy provider is unaffected by this change.

📊 Metric rename: provider_provides_total

The Amino DHT Sweep provider metric has been renamed from total_provide_count_total to provider_provides_total to follow OpenTelemetry naming conventions and maintain consistency with other kad-dht metrics (which use dot notation like rpc.inbound.messages, rpc.outbound.requests, etc.).

Migration: If y...


v0.39.0-rc1

17 Nov 22:01
v0.39.0-rc1


Pre-release

This Release Preview was brought to you by the Shipyard team.

Draft release notes: docs/changelogs/v0.39.md
Release status: #10946

v0.38.2

30 Oct 02:44
v0.38.2
9fd105a


Note

This release was brought to you by the Shipyard team.

Overview

Kubo 0.38.2 is a quick patch release that improves retrieval, tracing, and memory usage.

🔦 Highlights

  • Updates boxo v0.35.1 with bitswap and HTTP retrieval fixes:
    • Fixed bitswap trace context not being passed to sessions, restoring observability for monitoring tools
    • Kubo now fetches from HTTP gateways that return errors in legacy IPLD format, improving compatibility with older providers
    • Better handling of rate-limited HTTP endpoints and clearer timeout error messages
  • Updates go-libp2p-kad-dht v0.35.1 with memory optimizations for nodes using Provide.DHT.SweepEnabled=true
  • Updates quic-go v0.55.0 to fix memory pooling where stream frames weren't returned to the pool on cancellation

For full release notes of 0.38, see 0.38.1.

📝 Changelog

Full Changelog

👨‍👩‍👧‍👦 Contributors

Contributor        Commits  Lines ±    Files Changed
rvagg              1        +537/-481  3
Carlos Hernandez   9        +556/-218  11
Guillaume Michel   3        +139/-105  6
gammazero          8        +101/-97   14
Hector Sanjuan     1        +87/-28    5
Marcin Rataj       4        +57/-9     7
Marco Munizaga     2        +42/-14    7
Dennis Trautwein   2        +19/-7     7
Andrew Gillis      3        +3/-19     3
Rod Vagg           4        +12/-3     4
web3-bot           1        +2/-1      1
galargh            1        +1/-1      1

v0.38.1

08 Oct 21:34
v0.38.1
6bf52ae


Note

This release was brought to you by the Shipyard team.

Overview

Kubo 0.38 simplifies content announcement configuration, introduces an experimental sweeping DHT provider for efficient large-scale operations, and includes various performance improvements.

v0.38.1 includes fixes for migrations on Windows and the Pebble datastore – if you are using either, make sure to use the .1 release.

🔦 Highlights

🚀 Repository migration: simplified provide configuration

This release migrates the repository from version 17 to version 18, simplifying how you configure content announcements.

The old Provider and Reprovider sections are now combined into a single Provide section. Your existing settings are automatically migrated - no manual changes needed.

Migration happens automatically when you run ipfs daemon --migrate. For manual migration: ipfs repo migrate --to=18.

Read more about the new system below.

🧹 Experimental Sweeping DHT Provider

A new experimental DHT provider is available as an alternative to both the default provider and the resource-intensive accelerated DHT client. Enable it via Provide.DHT.SweepEnabled.

How it works: Instead of providing keys one-by-one, the sweep provider systematically explores DHT keyspace regions in batches.

Reprovide Cycle Comparison

The diagram shows how sweep mode avoids the hourly traffic spikes of Accelerated DHT while maintaining similar effectiveness. By grouping CIDs into keyspace regions and processing them in batches, sweep mode reduces memory overhead and creates predictable network patterns.

Benefits for large-scale operations: Handles hundreds of thousands of CIDs with reduced memory and network connections, spreads operations evenly to eliminate resource spikes, maintains state across restarts through persistent keystore, and provides better metrics visibility.

Monitoring and debugging: Legacy mode (SweepEnabled=false) tracks provider_reprovider_provide_count and provider_reprovider_reprovide_count, while sweep mode (SweepEnabled=true) tracks total_provide_count_total. Enable debug logging with GOLOG_LOG_LEVEL=error,provider=debug,dht/provider=debug to see detailed logs from either system.

Note

This feature is experimental and opt-in. In the future, it will become the default and replace the legacy system. Some commands like ipfs stats provide and ipfs routing provide are not yet available with sweep mode. Run ipfs provide --help for alternatives.

For configuration details, see Provide.DHT. For metrics documentation, see Provide metrics.

📊 Exposed DHT metrics

Kubo now exposes DHT metrics from go-libp2p-kad-dht, including total_provide_count_total for sweep provider operations and RPC metrics prefixed with rpc_inbound_ and rpc_outbound_ for DHT message traffic. See Kubo metrics documentation for details.

🚨 Improved gateway error pages with diagnostic tools

Gateway error pages now provide more actionable information during content retrieval failures. When a 504 Gateway Timeout occurs, users see detailed retrieval state information including which phase failed and a sample of providers that were attempted:

Improved gateway error page showing retrieval diagnostics

  • Gateway.DiagnosticServiceURL (default: https://check.ipfs.network): Configures the diagnostic service URL. When set, 504 errors show a "Check CID retrievability" button that links to this service with ?cid=<failed-cid> for external diagnostics. Set to empty string to disable.
  • Enhanced error details: Timeout errors now display the retrieval phase where failure occurred (e.g., "connecting to providers", "fetching data") and up to 3 peer IDs that were attempted but couldn't deliver the content, making it easier to diagnose network or provider issues.
  • Retry button on all error pages: Every gateway error page now includes a retry button for quick page refresh without manual URL re-entry.

🎨 Updated WebUI

The Web UI has been updated to v4.9 with a new Diagnostics screen for troubleshooting and system monitoring. Access it at http://127.0.0.1:5001/webui when running your local IPFS node.

  • Diagnostics: Logs – debug issues in real-time by adjusting the log level without a restart (global or per-subsystem, like bitswap)
  • Files: Check Retrieval – check if content is available to other peers directly from the Files screen
  • Diagnostics: Retrieval Results – find out why content won't load or who is providing it to the network
  • Peers: Agent Versions – know what software peers run
  • Files: Custom Sorting – find files faster with new sorting options

Additional improvements include a close button in the file viewer, better error handling, and fixed navigation highlighting.

📌 Pin name improvements

ipfs pin ls <cid> --names now correctly returns pin names for specific CIDs (#10649, boxo#1035), RPC no longer incorrectly returns names from other pins (#10966), and pin names are now limited to 255 bytes for better cross-platform compatibility (#10981).

🛠️ Identity CID size enforcement and ipfs files write fixes

Identity CID size limits are now enforced

Identity CIDs use multihash 0x00 to embed data directly in the CID without hashing. This experimental optimization was designed for tiny data where a CID reference would be larger than the data itself, but without size limits it was easy to misuse and could turn into an anti-pattern that wastes resources and enables abuse. This release enforces a maximum of 128 bytes for identity CIDs - attempting to exceed this limit will return a clear error message.

  • ipfs add --inline-limit and --hash=identity now enforce the 128-byte maximum (error when exceeded)
  • ipfs files write prevents creation of oversized identity CIDs
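The limit check can be pictured as follows (the framing below is simplified for illustration and is not real multihash encoding):

```python
# Illustrative sketch of the 128-byte identity CID limit.
IDENTITY_MAX = 128

def make_identity_cid(data: bytes) -> bytes:
    if len(data) > IDENTITY_MAX:
        raise ValueError(f"identity CID data exceeds {IDENTITY_MAX} bytes")
    return bytes([0x00, len(data)]) + data  # 0x00 = identity multihash code

assert make_identity_cid(b"x" * IDENTITY_MAX)       # at the limit: allowed
try:
    make_identity_cid(b"x" * (IDENTITY_MAX + 1))    # over the limit: error
    raise AssertionError("limit not enforced")
except ValueError:
    pass
```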

Multiple ipfs files write bugs have been fixed

This release resolves several long-standing MFS issues: raw nodes now preserve their codec instead of being forced to dag-pb, append operations on raw nodes work correctly by converting to UnixFS when needed, and identity CIDs properly inherit the full CID prefix from parent directories.

📤 Provide Filestore and Urlstore blocks on write

Improvements to the providing system in the last release (provide blocks according to the configured Strategy) left out Filestore and [Urlstore](https:/...


v0.38.0

02 Oct 01:46
v0.38.0
34debcb


Note

This release was brought to you by the Shipyard team.

Overview

Kubo 0.38.0 simplifies content announcement configuration, introduces an experimental sweeping DHT provider for efficient large-scale operations, and includes various performance improvements.

🔦 Highlights

🚀 Repository migration: simplified provide configuration

This release migrates the repository from version 17 to version 18, simplifying how you configure content announcements.

The old Provider and Reprovider sections are now combined into a single Provide section. Your existing settings are automatically migrated - no manual changes needed.

Migration happens automatically when you run ipfs daemon --migrate. For manual migration: ipfs repo migrate --to=18.

Read more about the new system below.

🧹 Experimental Sweeping DHT Provider

A new experimental DHT provider is available as an alternative to both the default provider and the resource-intensive accelerated DHT client. Enable it via Provide.DHT.SweepEnabled.

How it works: Instead of providing keys one-by-one, the sweep provider systematically explores DHT keyspace regions in batches.

Reprovide Cycle Comparison

The diagram shows how sweep mode avoids the hourly traffic spikes of Accelerated DHT while maintaining similar effectiveness. By grouping CIDs into keyspace regions and processing them in batches, sweep mode reduces memory overhead and creates predictable network patterns.

Benefits for large-scale operations: Handles hundreds of thousands of CIDs with reduced memory and network connections, spreads operations evenly to eliminate resource spikes, maintains state across restarts through persistent keystore, and provides better metrics visibility.

Monitoring and debugging: Legacy mode (SweepEnabled=false) tracks provider_reprovider_provide_count and provider_reprovider_reprovide_count, while sweep mode (SweepEnabled=true) tracks total_provide_count_total. Enable debug logging with GOLOG_LOG_LEVEL=error,provider=debug,dht/provider=debug to see detailed logs from either system.

Note

This feature is experimental and opt-in. In the future, it will become the default and replace the legacy system. Some commands like ipfs stats provide and ipfs routing provide are not yet available with sweep mode. Run ipfs provide --help for alternatives.

For configuration details, see Provide.DHT. For metrics documentation, see Provide metrics.

📊 Exposed DHT metrics

Kubo now exposes DHT metrics from go-libp2p-kad-dht, including total_provide_count_total for sweep provider operations and RPC metrics prefixed with rpc_inbound_ and rpc_outbound_ for DHT message traffic. See Kubo metrics documentation for details.

🚨 Improved gateway error pages with diagnostic tools

Gateway error pages now provide more actionable information during content retrieval failures. When a 504 Gateway Timeout occurs, users see detailed retrieval state information including which phase failed and a sample of providers that were attempted:

Improved gateway error page showing retrieval diagnostics

  • Gateway.DiagnosticServiceURL (default: https://check.ipfs.network): Configures the diagnostic service URL. When set, 504 errors show a "Check CID retrievability" button that links to this service with ?cid=<failed-cid> for external diagnostics. Set to empty string to disable.
  • Enhanced error details: Timeout errors now display the retrieval phase where failure occurred (e.g., "connecting to providers", "fetching data") and up to 3 peer IDs that were attempted but couldn't deliver the content, making it easier to diagnose network or provider issues.
  • Retry button on all error pages: Every gateway error page now includes a retry button for quick page refresh without manual URL re-entry.

🎨 Updated WebUI

The Web UI has been updated to v4.9 with a new Diagnostics screen for troubleshooting and system monitoring. Access it at http://127.0.0.1:5001/webui when running your local IPFS node.

  • Diagnostics: Logs – debug issues in real-time by adjusting the log level without a restart (global or per-subsystem, like bitswap)
  • Files: Check Retrieval – check if content is available to other peers directly from the Files screen
  • Diagnostics: Retrieval Results – find out why content won't load or who is providing it to the network
  • Peers: Agent Versions – know what software peers run
  • Files: Custom Sorting – find files faster with new sorting options

Additional improvements include a close button in the file viewer, better error handling, and fixed navigation highlighting.

📌 Pin name improvements

ipfs pin ls <cid> --names now correctly returns pin names for specific CIDs (#10649, boxo#1035), RPC no longer incorrectly returns names from other pins (#10966), and pin names are now limited to 255 bytes for better cross-platform compatibility (#10981).

🛠️ Identity CID size enforcement and ipfs files write fixes

Identity CID size limits are now enforced

Identity CIDs use multihash 0x00 to embed data directly in the CID without hashing. This experimental optimization was designed for tiny data where a CID reference would be larger than the data itself, but without size limits it was easy to misuse and could turn into an anti-pattern that wastes resources and enables abuse. This release enforces a maximum of 128 bytes for identity CIDs - attempting to exceed this limit will return a clear error message.

  • ipfs add --inline-limit and --hash=identity now enforce the 128-byte maximum (error when exceeded)
  • ipfs files write prevents creation of oversized identity CIDs

Multiple ipfs files write bugs have been fixed

This release resolves several long-standing MFS issues: raw nodes now preserve their codec instead of being forced to dag-pb, append operations on raw nodes work correctly by converting to UnixFS when needed, and identity CIDs properly inherit the full CID prefix from parent directories.

📤 Provide Filestore and Urlstore blocks on write

Improvements to the providing system in the last release (provide blocks according to the configured Strategy) left out [Filestore](https://github.com/ipfs/kubo/blob/master/docs/experimental-features.md#ipfs-filesto...


v0.38.0-rc2

27 Sep 02:49
v0.38.0-rc2
070177b


Pre-release

This release was brought to you by the Shipyard team.