diff --git a/docs/documentation/act.md b/docs/documentation/act.md
index c40010d8..eba8211a 100644
--- a/docs/documentation/act.md
+++ b/docs/documentation/act.md
@@ -5,14 +5,368 @@ slug: /act
 sidebar_label: ACT
 ---
-## 🚧 Under Construction 🚧
-:::caution 🚧 This page is under construction
+ACT, or Access Control Trie, is a decentralized permission system built into the Swarm network that allows you to restrict access to uploaded content.
-This section is still being worked on. Check back soon for updates!
+When you upload data to Swarm using ACT, only the original uploader and users with public keys listed in an associated grantee list are able to retrieve and decrypt that data. The grantee list is published separately and cryptographically referenced during upload and download operations.
+ACT is ideal for use cases such as the serialized release of content like a podcast or newsletter where the publisher wishes to limit access to subscribers only.
+
+:::warning
+Once a file is uploaded with ACT, any node whose public key is on the ACT grantees list referenced during the upload ***will have permanent access to that file*** as long as the file reference and history reference returned from the upload have been shared with them.
+
+Updating the grantees list to remove a public key ***will not revoke access*** to the content retroactively.
+
+Likewise, re-uploading the content using the new grantees list will also ***not retroactively revoke access*** to the content.
+:::
+
+## Requirements
+The use of ACT requires the following:
+
+* A Bee light node running with synced postage batch data. (Running at `http://localhost:1633` by default)
+* A valid postage batch ID. [Buy one](/docs/storage/#purchasing-storage) if needed.
+* Public keys of the nodes you want to grant access to.
+* The **public key of the publishing node**. This can be obtained using the [`bee.getNodeAddresses()` method](/docs/status/#3-get-node-addresses).
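+
+A quick sketch of retrieving the publisher's public key, which downloaders will later need for ACT retrieval (assuming the publishing node is the one running at `http://localhost:1633`):
+
+```js
+import { Bee } from '@ethersphere/bee-js';
+
+// Initialize Bee instance on the publishing node
+const bee = new Bee('http://localhost:1633');
+
+async function getPublisherKey() {
+  const addresses = await bee.getNodeAddresses();
+  // The node's `publicKey` is the publisher key used for ACT downloads
+  console.log('Publisher public key:', addresses.publicKey.toCompressedHex());
+}
+
+getPublisherKey();
+```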
+ +## Create Grantees List + +First we create a grantees list with the public keys of anyone we want to grant access to. + +#### Example Script: + +The example script below performs the following key operations: + +1. Initializes a Bee client. +2. Defines a list of grantee public keys. +3. Specifies a valid postage batch ID. +4. Calls `bee.createGrantees()` to create a new grantee list. +5. Logs the resulting `ref` and `historyref`. + +```js +import { Bee, PublicKey, BatchId } from '@ethersphere/bee-js'; + +// Initialize Bee instance +const bee = new Bee('http://localhost:1633'); + +// Grantee's public key (replace with the actual key(s) of the node(s) you wish to grant access to) +const grantees = [ + new PublicKey('027d0c4759f689ea3dd3eb79222870671c492cb99f3fade275bcbf0ea39cd0ef6e'), +]; + +// Your postage batch ID (replace with your own valid postage batch ID) +const postageBatchId = new BatchId('0258a225fe8da54cc6537eb8b12fcf6706c7873dbe19b9381d31729aa0405398'); + +async function createGranteeList() { + try { + // Create the grantee list using `bee.createGrantees()` method + const response = await bee.createGrantees(postageBatchId, grantees); + + // Log the response (ref and history ref) + console.log('Grantee List Created Successfully:'); + console.log('Reference:', response.ref.toHex()); + console.log('History Reference:', response.historyref.toHex()); + } catch (error) { + console.error('Error creating grantee list:', error); + } +} + +// Call the function to create the grantee list +createGranteeList(); +``` + +Example output: + +```bash +Grantee List Created Successfully: +Reference: 69da034fdae049eed9a22ec48b98a08ed5d363d48076f88c44ffe3367a18e306cae6aaf1cfce72d59262b9fb9293e15469c01c6a2626bb62478116cc98fb303b +History Reference: 18d6f58a1d3c8253a5fc47023d49e9011236ead43724e595e898e1b422b77b19 +``` + +The first 64 byte (128 hex digit) reference `Grantee List Reference` (`ref`) is used on its own for reviewing the list contents and updating the list. 
+
+The second, 32 byte (64 hex digit) reference, the `History Reference` (`historyref`), is used for uploading with ACT. It is also used, along with the first `ref`, to create a new updated grantee list based on the original list referred to by that `ref`.
+
+## Update Grantees List
+
+:::info
+Although we refer to this operation as an "update", due to Swarm's immutable nature, the original list is not modified by this operation. Rather, a new list is created with the specified grantee keys added or removed from the original list. This operation ***DOES NOT*** retroactively add or remove access to content uploaded with the original ACT list.
 :::
+To update a grantees list, call the `bee.patchGrantees()` method with the following arguments:
+
+* A valid postage batch ID
+* The original list’s `ref` and `historyref`
+* An object specifying public keys to `add` or `revoke`
+
+```js
+bee.patchGrantees(postageBatchId, ref, historyref, {
+  add: [grantee1, grantee2],
+  revoke: [],
+});
+```
+
+Calling this method returns the new list’s updated `ref` and `historyref`, which you should use for future updates or access.
+
+#### Example Script:
+
+The example script below performs the following key steps:
+
+1. Initializes the Bee client and defines two public keys to add as grantees.
+2. Provides the existing grantee list’s `ref` and `historyref`, and a valid postage batch ID.
+3. Calls `bee.patchGrantees()` to add the new keys to the list.
+4. Logs the updated grantee list’s `ref` and `historyref`.
+ +```js +import { Bee, PublicKey, BatchId, Reference } from '@ethersphere/bee-js'; + +// Initialize Bee instance +const bee = new Bee('http://localhost:1633'); + +// Grantee's public key(s) to be added (replace with the actual key) +const grantee1 = new PublicKey('027d0c4759f689ea3dd3eb79222870671c492cb99f3fade275bcbf0ea39cd0ef6e'); +const grantee2 = new PublicKey('03636056d1e08f100c5acaf14d10070102de9444c97b2e8215305ab3e97254ede6'); + +// Grantee list reference and history reference returned from initial list creation +const granteeListRef = new Reference('69da034fdae049eed9a22ec48b98a08ed5d363d48076f88c44ffe3367a18e306cae6aaf1cfce72d59262b9fb9293e15469c01c6a2626bb62478116cc98fb303b') +const granteeHistoryRef = new Reference('18d6f58a1d3c8253a5fc47023d49e9011236ead43724e595e898e1b422b77b19') + +// Your postage batch ID (replace with a valid one) +const postageBatchId = new BatchId('0258a225fe8da54cc6537eb8b12fcf6706c7873dbe19b9381d31729aa0405398'); + +// Function to update the grantee list by adding the new public key +async function updateGranteeList() { + try { + // Call the patchGrantees function to add the new public key + const response = await bee.patchGrantees(postageBatchId, granteeListRef, granteeHistoryRef, { + add: [grantee1, grantee2], // Add the new grantee + revoke: [], + }); + + // Log the updated grantee list references + console.log('Grantee List Updated Successfully:'); + console.log('Updated Reference:', response.ref.toHex()); + console.log('Updated History Reference:', response.historyref.toHex()); + } catch (error) { + console.error('Error updating grantee list:', error.message); + if (error.response) { + // If there's an error, log the full response for more details + console.error('Response Status:', error.response.status); + console.error('Response Body:', JSON.stringify(error.response.body, null, 2)); + } + } +} + +// Call the function to update the grantee list +updateGranteeList(); +``` + +Example output: + +```bash +Grantee List 
Updated Successfully:
+Updated Reference: a029324c42e7911032b83155f487d545b6e07b521a90fce90a266f308c0a455417e71bc03621868da2f6e84357ba772cb03b408fce79862b03d2e082004eccd8
+Updated History Reference: d904f0790acb7edfda6a078176d64ec026b40298bfdbceb82956533e31489fcd
+```
+
+## Get Grantees List
+
+To view the members of a grantees list, we use the 64 byte `ref` returned when the list was created or updated. Below we view both our original list and the updated list based on it, using the respective `ref` from each list:
+
+:::info
+The grantee list is encrypted and can only be viewed by its owner, so make sure to use the owner node when calling the `bee.getGrantees()` method.
+:::
+
+#### Example Script:
+
+The example script below performs the following operations:
+
+1. Initializes a Bee client.
+2. Defines two existing grantee list 64 byte `ref` values, copied from the results of our previous example scripts.
+3. Calls `bee.getGrantees()` for each `ref` to retrieve the corresponding grantee list.
+4. Logs the status, status text, and list of grantee public keys in compressed hex format.
+
+
+```js
+import { Bee, Reference } from '@ethersphere/bee-js';
+
+// Initialize Bee instance
+const bee = new Bee('http://localhost:1633');
+
+
+// Grantee list references (returned from the `bee.createGrantees()` and `bee.patchGrantees()` calls)
+const granteeListRef_01 = new Reference('69da034fdae049eed9a22ec48b98a08ed5d363d48076f88c44ffe3367a18e306cae6aaf1cfce72d59262b9fb9293e15469c01c6a2626bb62478116cc98fb303b');
+const granteeListRef_02 = new Reference('a029324c42e7911032b83155f487d545b6e07b521a90fce90a266f308c0a455417e71bc03621868da2f6e84357ba772cb03b408fce79862b03d2e082004eccd8');
+
+// Function to get the grantee list
+async function getGranteeList(granteeListRef) {
+  try {
+    // Call the getGrantees function with the reference
+    const result = await bee.getGrantees(granteeListRef);
+
+    // Log the full response
+    console.log('Grantee List Retrieved:');
+    console.log('Status:', result.status);
+    console.log('Status Text:', result.statusText);
+
+    // Log the grantee lists as arrays of their hex string representations
+    console.log('Grantees:', result.grantees.map(grantee => grantee.toCompressedHex()));
+
+  } catch (error) {
+    console.error('Error retrieving grantee list:', error);
+  }
+}
+
+// Call the function to fetch the grantee list
+getGranteeList(granteeListRef_01);
+getGranteeList(granteeListRef_02);
+```
+
+Example output:
+
+```bash
+Grantee List Retrieved:
+Status: 200
+Status Text: OK
+Grantees: [
+  '027d0c4759f689ea3dd3eb79222870671c492cb99f3fade275bcbf0ea39cd0ef6e'
+]
+Grantee List Retrieved:
+Status: 200
+Status Text: OK
+Grantees: [
+  '027d0c4759f689ea3dd3eb79222870671c492cb99f3fade275bcbf0ea39cd0ef6e',
+  '03636056d1e08f100c5acaf14d10070102de9444c97b2e8215305ab3e97254ede6'
+]
+```
+
+The first grantee list contains the single public key we granted access to when we created it, while the second list contains both that key and the additional key we added when we created the updated list based on the first.
+
+## Upload With ACT
+
+We can upload our content with either of the two lists we created depending on which set of users we wish to give access to. In the example below, we use both lists.
+
+#### Example Script:
+
+The example script below performs the following operations:
+
+1. Initializes a Bee client.
+2. Defines a postage batch ID and two ACT grantee list 32 byte `historyref` hashes returned from the operations in the previous examples.
+3. Defines a string to upload as a sample file.
+4. Calls `bee.uploadFile()` twice with ACT enabled, specifying a `historyRef` each time to enforce access control.
+5. Logs the resulting Swarm reference and history reference after each upload.
+
+```js
+import { Bee, BatchId, Reference } from '@ethersphere/bee-js';
+
+// Initialize Bee instance
+const bee = new Bee('http://localhost:1633');
+
+// Your postage batch ID (replace with a valid one)
+const postageBatchId = new BatchId('0258a225fe8da54cc6537eb8b12fcf6706c7873dbe19b9381d31729aa0405398');
+
+// History references (the `historyref` values returned from the `bee.createGrantees()` and `bee.patchGrantees()` calls)
+const historyRef_01 = new Reference('18d6f58a1d3c8253a5fc47023d49e9011236ead43724e595e898e1b422b77b19');
+const historyRef_02 = new Reference('d904f0790acb7edfda6a078176d64ec026b40298bfdbceb82956533e31489fcd');
+
+// Sample data to upload
+const fileData = 'This is a sample string that will be uploaded securely using ACT.';
+
+
+async function uploadWithACT(historyRef) {
+  try {
+    // Upload the string with ACT enabled
+    const result = await bee.uploadFile(postageBatchId, fileData, 'samplefile.txt', {
+      act: true, // Enable ACT for the uploaded data
+      actHistoryAddress: historyRef, // Provide the history reference for ACT
+      contentType: 'text/plain',
+    });
+
+    console.log('File uploaded successfully with ACT:');
+    console.log('Reference:', result.reference.toHex());
+    console.log('History reference');
+    console.log(result.historyAddress.value.toHex());
+  } catch (error) {
+    console.error('Error uploading file with ACT:', error);
+  }
+}
+
+// Call the function to upload the file with each `historyref`
+uploadWithACT(historyRef_01);
+uploadWithACT(historyRef_02);
+```
+
+Example output:
+
+```bash
+File uploaded successfully with ACT:
+Reference: e227acea84e1d55e90baa93a698e79577a5b1c54513925b61476386798b41728
+History reference
+18d6f58a1d3c8253a5fc47023d49e9011236ead43724e595e898e1b422b77b19
+File uploaded successfully with ACT:
+Reference: e227acea84e1d55e90baa93a698e79577a5b1c54513925b61476386798b41728
+History reference
+d904f0790acb7edfda6a078176d64ec026b40298bfdbceb82956533e31489fcd
+```
+
+The reference hash is the same for each upload since the content is the same. The reference hash, a `historyref`, and the uploader's public key are all required in order to access content uploaded with ACT.
+
+You can choose which `historyref` to share depending on which set of public keys you wish to authorize to download the content.
+
+## Download With ACT
+
+In order to download using ACT, we must pass in the public key of the grantee list creator along with the file reference and history reference returned from the file upload operation:
+
+#### Example Script:
+
+The example script below performs the following operations:
+
+1. Initializes a Bee client.
+2. Defines the publisher's public key and the associated file reference and history reference for ACT-protected content, using the values returned from the upload operation.
+3. Calls `bee.downloadFile()` with ACT options (`actPublisher` and `actHistoryAddress`) to access protected data.
+4. Logs the decoded file content.
+ + +```js +import { Bee, Reference, PublicKey } from '@ethersphere/bee-js' + +// Initialize Bee instance +const bee = new Bee('http://localhost:1633') + + +// Publisher public key used during upload +const publisherPublicKey = new PublicKey('0295562f9c1013d1db29f7aaa0c997c4bb3f1fc053bd0ed49a3d98584490cc8f96'); + +// File reference and history reference returned from upload operation +const fileRef_01 = new Reference('e227acea84e1d55e90baa93a698e79577a5b1c54513925b61476386798b41728'); +const historyRef_01 = new Reference('18d6f58a1d3c8253a5fc47023d49e9011236ead43724e595e898e1b422b77b19'); + + +// Function to download ACT-protected content +async function downloadWithACT(fileRef, historyRef, publisherPubKey) { + try { + const result = await bee.downloadFile(fileRef, './', { + actPublisher: publisherPubKey, + actHistoryAddress: historyRef + }) + + console.log('Content:', result.data.toUtf8()) + } catch (error) { + console.error(`Error downloading from reference ${fileRef}:`, error) + } +} + +downloadWithACT( + fileRef_01, + historyRef_01, + publisherPublicKey +) +``` + +Example terminal output: + +```bash +Content: This is a sample string that will be uploaded securely using ACT. +``` + +In the example above, we used the history reference from the file uploaded using the grantees list with only one public key included (`027d0c4759f689ea3dd3eb79222870671c492cb99f3fade275bcbf0ea39cd0ef6e`), and so it will only be able to be retrieved and decrypted by the node with that public key. -* Show example of creating grantee list -* Show example of secure upload -* Show example of secure download +If any other node attempts to download this content then a 404 error will be returned. 
\ No newline at end of file
diff --git a/docs/documentation/gsoc.md b/docs/documentation/gsoc.md
index 8ac87ae2..c4386473 100644
--- a/docs/documentation/gsoc.md
+++ b/docs/documentation/gsoc.md
@@ -5,14 +5,144 @@ slug: /gsoc
 sidebar_label: GSOC
 ---
-## 🚧 Under Construction 🚧
-:::caution 🚧 This page is under construction
-This section is still being worked on. Check back soon for updates!
+The GSOC (Graffiti Several Owner Chunk) feature allows a **full node** to receive messages from many other nodes using a shared Single Owner Chunk (SOC). This enables real-time messaging over Swarm.
-:::
+## Overview
+GSOC messages are updates to a pre-mined SOC that lands in the **neighborhood** of the recipient node. Only full nodes receive these updates because light nodes do not sync neighborhood chunks. However, **any node** (light or full) can send GSOC messages.
+
+### GSOC vs PSS
+
+GSOC shares some similarities with PSS - both features allow full nodes to receive messages from other nodes.
+
+However, there are several key differences:
+
+- Unlike PSS, **GSOC only needs to mine the target chunk once**; multiple messages reuse it, making it **faster, cheaper**, and **more efficient** for recurring updates.
+- Unlike PSS, **no encryption** is used by default, making it unsuitable for handling sensitive data.
+- Unlike PSS, **GSOC chunks are not meant to be retrieved directly**. The SOC used to initiate a GSOC connection is only used to listen for incoming messages. The chunk itself is never retrieved, since the incoming messages do not actually update the SOC (double-signing a SOC is undefined behavior).
+
+## Requirements
+
+To use the example scripts below, you need:
+
+- A Bee full node with a fully synced reserve for receiving GSOC messages.
+- A light node for sending GSOC messages.
+- The batch ID of a usable postage batch. If you don't have one already, you will need to [buy a batch](/docs/storage/#purchasing-storage) to upload data. If you do have one, you will need to [get and save](/docs/storage/#selecting-a-batch) its batch ID.
+
+## Create an Identifier (Receiver and Sender)
+
+Identifiers in GSOC are similar to topics in PSS — they define the stream of messages a receiver node is subscribed to. The sender must use the same identifier so that their messages are received.
+
+Each identifier is a 32 byte (64 hex digit) value. It can be initialized with a 32 byte hex string of your choice or created from any arbitrary string using the `Identifier` utility class. You can also use the zero-initialized `NULL_IDENTIFIER` as a default for cases where you don't need a unique identifier:
+
+
+```js
+import { Identifier, NULL_IDENTIFIER } from '@ethersphere/bee-js'
+
+// Use default (all zeros):
+const defaultIdentifier = NULL_IDENTIFIER
+
+// From a hex string:
+const hexIdentifier = new Identifier('6527217e549e84f98e51b1d8b1ead62ff5cad59acd9713825754555d6975f103')
+
+// From a human-readable string:
+const stringIdentifier = Identifier.fromString('chat:v1')
+```
+
+- `NULL_IDENTIFIER` is a 64 hex digit string of all zeros; use it for quick testing or when uniqueness doesn't matter.
+- Use any 64 hex digit string to initialize a new `Identifier` object.
+- Use `Identifier.fromString()` to generate an identifier derived from your string of choice (allowing easy-to-remember, human-readable identifiers such as `"notifications"` or `"chat:user1"`).
+
+## Get Target Overlay (Receiver Node)
+
+This step **is performed by the receiving full node** to retrieve its overlay address. This overlay address is then shared with the sender node to use as a target overlay for its GSOC messages:
+
+```js
+import { Bee } from '@ethersphere/bee-js'
+
+const bee = new Bee('http://localhost:1633')
+
+async function checkAddresses() {
+  const addresses = await bee.getNodeAddresses()
+  console.log('Node Addresses:', addresses)
+}
+
+checkAddresses()
+```
+
+Example output:
+
+```bash
+Node Addresses:
+Overlay: 1e2054bec3e681aeb0b365a1f9a574a03782176bd3ec0bcf810ebcaf551e4070
+Ethereum: 9a73f283cd9211b96b5ec63f7a81a0ddc847cd93
+Public Key: 7d0c4759f689ea3dd3eb79222870671c492cb99f3fade275bcbf0ea39cd0ef6e25edd43c99985983e49aa528f3f2b6711085354a31acb4e7b03559b02ec868f0
+PSS Public Key: 5ade58d20be7e04ee8f875eabeebf9c53375a8fc73917683155c7c0b572f47ef790daa3328f48482663954d12f6e4739f748572c1e86bfa89af99f17e7dd4d33
+Underlay: [
+  '/ip4/127.0.0.1/tcp/1634/p2p/QmcpSJPHuuQYRgDkNfwziihVcpuVteoNxePvfzaJyp9z7j',
+  '/ip4/172.17.0.2/tcp/1634/p2p/QmcpSJPHuuQYRgDkNfwziihVcpuVteoNxePvfzaJyp9z7j',
+  '/ip6/::1/tcp/1634/p2p/QmcpSJPHuuQYRgDkNfwziihVcpuVteoNxePvfzaJyp9z7j'
+]
+```
+
+The `Overlay` should be saved and shared with sender nodes.
+
+## Set Up a Listener (Receiver Node)
+
+This must be run on a full node. It mines a key that lands within its own neighborhood and starts listening.
+
+```js
+import { Bee, NULL_IDENTIFIER } from '@ethersphere/bee-js'
+
+const bee = new Bee('http://localhost:1633')
+const identifier = NULL_IDENTIFIER
+
+async function listen() {
+  const { overlay } = await bee.getNodeAddresses()
+
+  // The signer is initialized using the receiving node's own overlay and chosen identifier
+  const signer = bee.gsocMine(overlay, identifier)
+
+  // A GSOC subscription is established using the blockchain address derived from the signer and the identifier
+  bee.gsocSubscribe(signer.publicKey().address(), identifier, {
+    // A callback function is used to handle incoming updates - you can include your application logic here
+    onMessage: message => console.log('Received:', message.toUtf8()),
+    onError: error => console.error('Subscription error:', error),
+  })
+
+  console.log('Listening for GSOC updates...')
+}
+
+listen()
+```
+
+## Send a Message (Sender Node)
+
+The sending node must have a ***usable postage batch ID*** and must also know the ***target overlay address*** and ***identifier*** in order to send a message:
+
+```js
+import { Bee, NULL_IDENTIFIER } from '@ethersphere/bee-js'
+
+const bee = new Bee('http://localhost:1643')
+
+// The identifier must match the one used by the receiving node
+const identifier = NULL_IDENTIFIER
+const batchId = '6c84b6d3f5273b969c3df875cde7ccd9920f5580122929aedaf440bfe4484405'
+
+const recipientOverlay = '1e2054bec3e681aeb0b365a1f9a574a03782176bd3ec0bcf810ebcaf551e4070'
+
+async function sendMessage() {
+  // The signer is initialized using the overlay address and identifier shared by the receiving node
+  const signer = bee.gsocMine(recipientOverlay, identifier)
+
+  // bee.gsocSend() is called with the batch ID, initialized signer, identifier, and message payload in order to send a GSOC message
+  await bee.gsocSend(batchId, signer, identifier, 'Hello via GSOC!')
+  console.log('Message sent')
+}
+
+sendMessage()
+```
+
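+Example output on the receiver's terminal, assuming the sender script above has been run against the receiver's overlay (the sender's own terminal prints `Message sent`):
+
+```bash
+Listening for GSOC updates...
+Received: Hello via GSOC!
+```
+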
+For more information about GSOC, refer to the [Bee documentation](https://docs.ethswarm.org/docs/develop/tools-and-features/gsoc). -* Show mining the writer private key for the targeted overlay address [Gist](https://gist.github.com/Cafe137/e76ef081263aaec7a715139d700f3433) -* Show a simple listener, and a simple send invocation [Gist1](https://gist.github.com/Cafe137/7f02fb54ad5a79833f3b718b94df0d41) [Gist2](https://gist.github.com/Cafe137/6277f1d112b3b78ba36f717551357c3b) -* Show Identifier class usage [Gist](https://gist.github.com/Cafe137/25a244d85758480aa1e15c80ff147b72) \ No newline at end of file diff --git a/docs/documentation/pinning.md b/docs/documentation/pinning.md index e415e208..31602a6a 100644 --- a/docs/documentation/pinning.md +++ b/docs/documentation/pinning.md @@ -5,14 +5,130 @@ slug: /pinning sidebar_label: Pinning --- -## 🚧 Under Construction 🚧 -:::caution 🚧 This page is under construction +Pinning allows you to guarantee that content will always be available by storing it **locally on your own Bee node**. -This section is still being worked on. Check back soon for updates! +However, pinning ***does not*** guarantee availability across the Swarm network. Therefore you must use pinning along with the **stewardship utilities** included in `bee-js` to monitor the availability of your pinned content and reupload it if needed. 
-::: +In this section, you'll learn how to: + +- Pin content +- Check whether pinned content is still retrievable from the network +- Reupload missing content +- View all currently pinned references +- Remove pins that are no longer required + +## Pinning and Unpinning Content + +To pin a reference (so it remains stored on your node): + +```js +await bee.pin(reference) +console.log('Reference pinned locally.') +``` + +To stop tracking and remove it from local pin storage: + +```js +await bee.unpin(reference) +console.log('Reference unpinned and no longer tracked.') +``` + +## Checking if a Reference is Retrievable + +Use `isReferenceRetrievable(reference)` to verify if the content for a given Swarm reference is currently accessible on the network: + +```js +const isAvailable = await bee.isReferenceRetrievable(reference) + +if (isAvailable) { + console.log('Data is retrievable from the network.') +} else { + console.log('Data is missing from the network.') +} +``` + +## Reuploading Pinned Data + +If content is missing but was previously pinned, you can reupload it using `reuploadPinnedData(postageBatchId, reference)`: + +```js +await bee.reuploadPinnedData(postageBatchId, reference) +console.log('Data has been reuploaded to the network.') +``` + +## Listing All Pinned References + +You can get all currently pinned references with: + +```js +const pins = await bee.getAllPins() +console.log('Pinned references:', pins.map(ref => ref.toHex())) +``` + +To check if a specific reference is pinned: + +```js +const pinStatus = await bee.getPin(reference) +console.log('Pin info:', pinStatus) +``` + +## Example Script + +The following script automates the process of checking all locally pinned references, verifying their retrievability from the network, and reuploading any that are missing. This ensures that your pinned data remains available even if it has dropped out of the Swarm network. 
+ +```js +import { Bee } from "@ethersphere/bee-js" + +const bee = new Bee('http://localhost:1633') +const postageBatchId = "129903062bedc4eca6fc1c232ed385e93dda72f711caa1ead6018334dd801cee" + +async function reuploadMissingPins() { + try { + // Get all currently pinned references + const pinnedRefs = await bee.getAllPins() + + if (!pinnedRefs.length) { + console.log("No pinned references found.") + return + } + + console.log(`Found ${pinnedRefs.length} pinned references.`) + + let repaired = 0 + + // Loop through all references and check retrievability + for (const ref of pinnedRefs) { + const reference = ref.toHex() + const isAvailable = await bee.isReferenceRetrievable(reference) + + if (isAvailable) { + console.log(`✅ ${reference} is retrievable.`) + } else { + console.log(`⚠️ ${reference} is missing — reuploading...`) + await bee.reuploadPinnedData(postageBatchId, reference) + console.log(`🔁 Reuploaded ${reference}`) + repaired++ + } + } + + console.log(`\nDone. ${repaired} reference(s) were reuploaded.`) + } catch (error) { + console.error("Error:", error.message) + } +} + +reuploadMissingPins() +``` + +Example terminal output: + +```bash +Found 2 pinned references. +⚠️ 1880ff0bbd23997dfa46921ba2ab0098824d967fe60c6ca1ae2e8fd722f4db78 is missing — reuploading... +🔁 Reuploaded 1880ff0bbd23997dfa46921ba2ab0098824d967fe60c6ca1ae2e8fd722f4db78 +✅ fd79d5e0ebd8407e422f53ce1d7c4c41ebf403be55143900f8d1490560294780 is retrievable. + +Done. 1 reference(s) were reuploaded. 
+``` -* Show an example of pinning a reference -* Show an example of listing references -* Show an example of re-uploading a reference diff --git a/docs/documentation/pss.md b/docs/documentation/pss.md index 36c26c8e..f2eb848d 100644 --- a/docs/documentation/pss.md +++ b/docs/documentation/pss.md @@ -2,203 +2,155 @@ title: Postal Service over Swarm id: pss slug: /pss -sidebar_label: Postal Service over Swarm +sidebar_label: PSS --- -## 🚧 Under Construction 🚧 -:::caution 🚧 This page is under construction +Swarm supports sending encrypted messages over the network using a system called **Postal Service over Swarm** (PSS). These messages are embedded in regular Swarm traffic and routed to specific nodes based on their overlay address. -This section is still being worked on. Check back soon for updates! +## What Is PSS? -::: +PSS provides a pub/sub-like functionality for secure messaging. Full nodes can listen for messages on a specific **topic**, and other nodes (light or full) can send them payloads using the recipient node's **overlay address** and optionally encrypted using the recipient's **PSS public key**. +Messages can be received via **subscription** or by a **one-off listener**. -* Remove the unrelated intro section -* Show a listener -* Show a one time receive -* Show a send invocation -* Move it towards the end of the chapters, it is not very important +:::caution +Only full nodes can receive messages since PSS messages are sent as a part of the chunk syncing process which only full nodes take part in. +::: +## Requirements -import Tabs from '@theme/Tabs' -import TabItem from '@theme/TabItem' +To use the example scripts below, you need: -Swarm provides the ability to send messages that appear to be normal Swarm traffic, but are in fact messages that may be received and decrypted to reveal their content only to specific nodes that were intended to receive them. +- A Bee full node with a fully synced reserve for receiving PSS messages. 
+- A light node for sending PSS messages. +- The batch ID of a usable postage batch. If you don't have one already, you will need to [buy a batch](/docs/storage/#purchasing-storage) to upload data. If you do have one, you will need to [get and save](/docs/storage/#selecting-a-batch) its batch ID. -PSS provides a pub-sub facility that can be used for a variety of tasks. Nodes are able to listen to messages received for a specific topic in their nearest neighbourhood and create messages destined for another neighbourhood which are sent over the network using Swarm's usual data dissemination protocols. -The intended use of PSS is to communicate privately with a publicly known identity (to for example initiate further communication directly). Due to the cost of mining the trojan chunks, it is not recommended to use as an instant messaging system. +## Get Recipient Info (Full Node Only) -:::caution Light nodes are unreachable -Be aware! You can not send message to Light nodes! This is because light nodes does not fully participate -in the data exchange in Swarm network and hence the message won't arrive to them. -::: +This step **must be performed by the receiving full node**. It retrieves the node’s **overlay address** and **PSS public key**, which must then be shared with the sending node: -## Getting the relevant data -When you start `bee`, you may find all the necessary information in the log: -```sh -INFO using existing swarm network address: 9e2ebf266369090091620db013aab164afb1574aedb3fcc08ce8dc6e6f28ef54 -INFO swarm public key 03e0cee7e979fa99350fc2e2f8c81d857b525b710380f238742af269bb794dfd3c -INFO pss public key 02fa24cac43531176d21678900b37bd800c93da3b02c5e11572fb6a96ec49527fa -INFO using ethereum address 5f5505033e3b985b88e20616d95201596b463c9a -``` -Let's break it down: -- **Ethereum address** is the public address of your node wallet. 
Together with the corresponding private key, it is used for things such as making Ethereum transactions (receiving and sending ETH and BZZ); receiving, claiming and singing cheques and the Swarm network address is also derived from it. -- The **Swarm network address** defines your location in the kademlia and within the context of PSS is used for addressing the trojan chunks to you. In other words, others may use it to send you a message. -- **PSS public key** can be used by others to encrypt their messages for you. +- The **overlay address** is **required** as the routing target. +- The **PSS public key** is **optional** and only needed for encryption. - +```js +import { Bee } from '@ethersphere/bee-js' -## Sending message +const bee = new Bee('http://localhost:1633') -To send data simply define a topic, prefix of the recipient's swarm network address (we recommend 4-6 character prefix length) and the data to be send. -:::caution Your communication privacy may be at risk -When sending PSS messages without encryption key, any Bee node through which the trojan chunk passes would be able to read the message. -::: +async function checkAddresses() { + const addresses = await bee.getNodeAddresses() - - - -```ts -/** - * @param {string} topic - * @param {string} targetPrefix - * @param {string|Uint8Array} data - * @param {string} encryptionKey - */ -bee.pssSend('topic', '9e2e', 'Hello!') -``` - - - + console.log('Node Addresses:', addresses) +} -```js -/** - * @param {string} topic - * @param {string} targetPrefix - * @param {string|Uint8Array} data - * @param {string} encryptionKey - */ -bee.pssSend('topic', '9e2e', 'Hello!') +checkAddresses() ``` - - - -If you want to encrypt the message, you may provide the recipient's PSS public key. 
- - - - -```ts -bee.pssSend( - 'topic', - '9e2e', - 'Encrypted Hello!', - '02fa24cac43531176d21678900b37bd800c93da3b02c5e11572fb6a96ec49527fa', -) +Example output: + +```bash +Node Addresses: +Overlay: 1e2054bec3e681aeb0b365a1f9a574a03782176bd3ec0bcf810ebcaf551e4070 +Ethereum: 9a73f283cd9211b96b5ec63f7a81a0ddc847cd93 +Public Key: 7d0c4759f689ea3dd3eb79222870671c492cb99f3fade275bcbf0ea39cd0ef6e25edd43c99985983e49aa528f3f2b6711085354a31acb4e7b03559b02ec868f0 +PSS Public Key: 5ade58d20be7e04ee8f875eabeebf9c53375a8fc73917683155c7c0b572f47ef790daa3328f48482663954d12f6e4739f748572c1e86bfa89af99f17e7dd4d33 +Underlay: [ + '/ip4/127.0.0.1/tcp/1634/p2p/QmcpSJPHuuQYRgDkNfwziihVcpuVteoNxePvfzaJyp9z7j', + '/ip4/172.17.0.2/tcp/1634/p2p/QmcpSJPHuuQYRgDkNfwziihVcpuVteoNxePvfzaJyp9z7j', + '/ip6/::1/tcp/1634/p2p/QmcpSJPHuuQYRgDkNfwziihVcpuVteoNxePvfzaJyp9z7j' +] ``` +The `Overlay` and `PSS Public Key` values should be shared with the sending node. - - +The sender (which can be a light node or a full node) needs the **overlay address** to generate the message target, and can optionally use the **PSS public key** to encrypt the message. -```js -bee.pssSend( - 'topic', - '9e2e', - 'Encrypted Hello!', - '02fa24cac43531176d21678900b37bd800c93da3b02c5e11572fb6a96ec49527fa', -) -``` +## Listen for Messages (Full Node) - - +You can listen on a topic using both **continuous subscription** and **one-time receive**: -## Retrieving message -As a recipient, you have two ways how to receive the message. If you are expecting one off message (which is the intended PSS use case to exchange credentials for further direct communication), you can use the `pssReceive` method. - - +- `bee.pssSubscribe()` is used to set up a continuous subscription. +- `bee.pssReceive()` is used to set up a listener on a timeout which closes after receiving a message. 
-```ts
-const message = await bee.pssReceive('topic')
+```js
+import { Bee, Topic } from '@ethersphere/bee-js'
+
+const bee = new Bee('http://localhost:1633')
+
+// Generate a topic from a unique string
+const topic = Topic.fromString('pss-demo')
+
+console.log('Subscribing to topic:', topic.toHex())
+
+// Continuous subscription
+bee.pssSubscribe(topic, {
+  onMessage: msg => console.log('Received via subscription:', msg.toUtf8()),
+  onError: err => console.error('Subscription error:', err.message),
+})
+
+// One-time receive (3 hour timeout)
+async function receiveOnce() {
+  try {
+    console.log('Waiting for one-time message...')
+    const message = await bee.pssReceive(topic, 1000 * 60 * 60 * 3)
+    console.log('One-time received:', message.toUtf8())
+  } catch (err) {
+    console.error('pssReceive error or timeout:', err.message)
+  }
+}
-console.log(message.text()) // prints the received message
+receiveOnce()
```
-
-
+In this script we generate a `topic` from our chosen string with the `Topic.fromString()` method. We then subscribe to incoming PSS messages for that topic with the `bee.pssSubscribe()` method, and also set up a one-time listener with the `bee.pssReceive()` method. When a chunk carrying a PSS message for that topic is synced into our node's neighborhood, the message is handled either by the `onMessage` callback registered with `bee.pssSubscribe()` or through the return value of `bee.pssReceive()` in our `receiveOnce` function.
+ +## Send Message (Light or Full Node) + +The sender must provide: + +- A valid **postage batch ID** +- The recipient’s **overlay address** (used to generate the routing target) +- Optionally the **PSS public key** for encryption ```js -const message = await bee.pssReceive('topic') +import { Bee, Topic, Utils } from '@ethersphere/bee-js' -console.log(message.text()) // prints the received message -``` +const bee = new Bee('http://localhost:1643') +const BATCH_ID = '6d8118c693423eef41796d58edbbffb76881806a0f44da728bf80f0c1aafa783' - - +// The overlay address of the receiving node +const recipientOverlay = '1e2054bec3e681aeb0b365a1f9a574a03782176bd3ec0bcf810ebcaf551e4070' -If you want to subscribe to multiple messagees, use the `pssSubscribe` method. +// Generate a topic using the same string shared by the receiving node +const topic = Topic.fromString('pss-demo') +// Set the number of leading prefix bits to mine for the chunk bearing the PSS message +const target = Utils.makeMaxTarget(recipientOverlay) - - +// The PSS message payload +const message = 'Hello from the light node!' 
-```ts
-const handler = {
-  onMessage: (message: Data) => {console.log(message.text())},
-  onError: (error: BeeError) => {console.log(error)}
+async function send() {
+  try {
+    await bee.pssSend(BATCH_ID, topic, target, message)
+    console.log('Message sent via PSS.')
+  } catch (err) {
+    console.error('Failed to send message:', err.message)
+  }
 }
-// Subscribe
-const subscription = bee.pssSubscribe('topic', handler)
-
-// Terminate the subscription
-subscription.cancel()
+send()
```
-
-
+## Encrypt with PSS Public Key

-```js
-const handler = {
-  onMessage: (message) => {console.log(message.text())},
-  onError: (error) => {console.log(error)}
-}
+To encrypt the message specifically for the recipient, include their **PSS public key** in the send call:

-// Subscribe
-const subscription = bee.pssSubscribe('topic', handler)
+```js
+const recipientPssPublicKey = '5ade58d20be7e04ee8f875eabeebf9c53375a8fc73917683155c7c0b572f47ef790daa3328f48482663954d12f6e4739f748572c1e86bfa89af99f17e7dd4d33'

-// Terminate the subscription
-subscription.cancel()
+await bee.pssSend(BATCH_ID, topic, target, message, recipientPssPublicKey)
+```
-
-
+The message is then encrypted with the recipient's PSS public key before sending. Although the chunk bearing the PSS message is received by every node in the recipient's neighborhood, only the recipient node is able to decrypt it.

diff --git a/docs/documentation/soc-and-feeds.md b/docs/documentation/soc-and-feeds.md
index d3259d15..803b03dd 100644
--- a/docs/documentation/soc-and-feeds.md
+++ b/docs/documentation/soc-and-feeds.md
@@ -5,196 +5,411 @@ slug: /soc-and-feeds
 sidebar_label: SOC and Feeds
 ---

-## 🚧 Under Construction 🚧
-:::caution 🚧 This page is under construction
-This section is still being worked on.
+import Tabs from '@theme/Tabs'
+import TabItem from '@theme/TabItem'

-:::

+Swarm provides the ability to store content in content-addressed [chunks](https://docs.ethswarm.org/docs/concepts/DISC/#content-addressed-chunks-and-single-owner-chunks) (CAC) whose addresses are derived from the chunk data, or Single Owner Chunks (SOC) whose addresses are derived from the uploader's address and chosen identifier. With single owner chunks, a user can assign arbitrary data to an address and attest chunk integrity with their digital signature.

+Feeds are a unique feature of Swarm which simulate the publishing of mutable content on Swarm's immutable storage. ***Feeds constitute the primary use case for SOCs.*** Developers can use Feeds to version revisions of a mutable resource by indexing sequential updates to a topic at predictably calculated addresses. Because Feeds are built on top of SOCs, their interfaces are somewhat similar and use common concepts.

-import Tabs from '@theme/Tabs'
-import TabItem from '@theme/TabItem'

-Swarm provides the ability to store content in content-addressed chunks or Single Owner Chunks (SOC). With single owner chunks, a user can assign arbitrary data to an address and attest chunk integrity with their digital signature.

+## Requirements

-Feeds are a unique feature of Swarm. They constitute the primary use case for single owner chunks. Developers can use Feeds to version revisions of a mutable resource, indexing sequential updates to a topic, publish the parts to streams, or post consecutive messages in a communication channel. Feeds implement persisted pull-messaging and can also be interpreted as a pub-sub system.

+Interactions with SOC and feeds require the following:

-Because Feeds are built on top of SOCs, their interfaces are somewhat similar and use common concepts.

+* A fully initialized Bee light node with synced postage batch data. (Running at `http://localhost:1633` by default)
+* A valid postage batch ID.
+* An Ethereum-compatible private key to sign updates. Using your node or blockchain account wallet's private key is strongly discouraged. Using a dedicated key for SOC / feeds is recommended.

 ## Single Owner Chunks

-Bee-js calculates a SOC address as the hash of an `identifier` and `owner`. The `identifier` is a 32 bytes long arbitrary data, usually expected as a hex string or a `Uint8Array`. The `owner` is an Ethereum address that consists of 20 bytes in a format of a hex string or `Uint8Array`.
+Bee-js calculates a SOC Swarm reference hash as the keccak256 hash of the concatenation of the `identifier` and `owner` Ethereum address. The `identifier` is a 32-byte-long arbitrary value (by default a hex string or a `Uint8Array`). The `owner` is an Ethereum address that consists of 20 bytes in the format of a hex string or `Uint8Array`.
+
+:::info
+SOCs are a powerful and flexible low-level feature which provides the foundation upon which higher level abstractions such as [GSOC](/docs/gsoc/) and [feeds](/docs/soc-and-feeds/#feeds) are built. For most common use cases, developers are encouraged to use these higher level abstractions rather than interacting directly with SOCs themselves.
+:::

 :::warning SOCs are immutable!
-You might be tempted to modify a SOC's content to "update" the chunk. Reuploading of SOC is forbidden in Swarm as it might create uncertain behavior. Bee node will reject the API call if it finds already existing SOC for the given address. Either use a different `identifier`, or you might be looking for Feeds as your use case.
+You might be tempted to modify a SOC's content to "update" the chunk. Reuploading an SOC is discouraged as its behavior is undefined. Either use a different `identifier`, or consider feeds if you need to perform multiple updates to the same content.
 :::

-### Reading SOCs
+### Uploading SOCs

-To read data from a SOC, we need to make a reader object bound to a specific `owner`. Then we can download the data with the provided chunk's `identifier`.
+To write a Single Owner Chunk (SOC), use the `makeSOCWriter()` method from the Bee client. This method requires a signer, which can be an instance of `PrivateKey`, a raw Ethereum private key as a hex string (with or without the `0x` prefix), or a `Uint8Array` representing the private key.

-```js
-const owner = '0x8d3766440f0d7b949a5e32995d09619a7f86e632'
-const socReader = bee.makeSOCReader(owner)
-const identifier = '0000000000000000000000000000000000000000000000000000000000000000'
-const soc = await socReader.download(identifier)
-const data = soc.payload()
-```
+The signer is used to cryptographically sign the chunk, using the same format Ethereum uses for signing transactions.

-### Writing SOCs
+Once the `SOCWriter` is created, you can upload an SOC by providing a `postageBatchId`, a 32-byte `identifier`, and the `data` payload.

-When writing a SOC, first, we need to make a writer object. Because we need to sign the chunk, we need to pass in a `signer` object. The `signer` object can be either an Ethereum private key (as a hex string or `Uint8Array`) or an instance of the `Signer` interface. The `Signer` interface can be used for integration with 3rd party Ethereum wallet applications because Swarm uses the same format for signing chunks that Ethereum uses for signing transactions.

 :::info Default `signer`
+When instantiating the `Bee` class, you can pass an Ethereum private key as the default signer, which will be used whenever you don't specify one directly for `makeSOCWriter`.
+:::
-When you are instantiating `Bee` class you can pass it a default signer that will be used if you won't specify it
-directly for the `makeSOCWriter`. See `Bee` constructor for more info.
-
+:::warning Your assets and/or privacy may be at risk
+We suggest using ephemeral private keys (e.g. randomly generated) when writing to SOC or Feeds. Never use your real Ethereum private keys here (or in any web applications) directly, because doing so would allow others to sign messages with your key, which may compromise your privacy or lead to the loss of funds stored by that account.
 :::

-:::tip Ethereum Wallet signers
+```js
+import { Bee, PrivateKey, NULL_IDENTIFIER, Bytes } from "@ethersphere/bee-js"
-If you want to use your browser Ethereum Wallet like Metamask you can use utility called `makeEthereumWalletSigner` that we ship with bee-js
-which creates a `Signer` object out of given EIP-1193 compatible provider.
+// Define your Ethereum private key (don't use your node's or real wallet's private keys)
+const privateKey = new PrivateKey('0x634fb5a872396d9693e5c9f9d7233cfa93f395c093371017ff44aa9ae6564cdd')
-See it used in our example [here](https://github.com/ethersphere/examples-js/tree/master/eth-wallet-signing).
+// Print the identifier and address which can be used to retrieve the SOC
+console.log("SOC identifier")
+console.log(new Bytes(NULL_IDENTIFIER).toHex())
-```js
-import { Utils } from '@ethersphere/bee-js'
+// Print Ethereum address
+console.log("Ethereum address")
+console.log(privateKey.publicKey().address().toHex())
-const signer = Utils.makeEthereumWalletSigner(window.ethereum)
-...
-``` -::: +// Initialize Bee client with default signer and Swarm node URL +const bee = new Bee('http://localhost:1643', { signer: privateKey }) + +// Replace with your own valid postage batch ID here +const postageBatchId = 'f2949db4cfa4f5140ed3ef29f651d189175a8cb9534c992d3c3212b17f0b67f7' + +// Create the SOC writer +const socWriter = bee.makeSOCWriter(privateKey) +// The data you want to store in the SOC +const data = 'this is my sample data' -```ts -type SyncSigner = (digest: Data) => Signature | string -type AsyncSigner = (digest: Data) => Promise +async function uploadSOC() { + try { + // Upload the data to the SOC using the postage batch and identifier + const response = await socWriter.upload(postageBatchId, NULL_IDENTIFIER, data) -/** - * Interface for implementing Ethereum compatible signing. - * - * @property sign The sign function that can be sync or async - * @property address The Ethereum address of the signer - */ -export type Signer = { - sign: SyncSigner | AsyncSigner - address: EthAddress + // Log the human-readable reference (hex string) + console.log("SOC reference:") + console.log("Reference (Hex):", response.reference.toHex()) + } catch (error) { + // Handle any errors during the upload + console.error("Error uploading SOC:", error) + } } + +// Call the function to write the SOC +uploadSOC() +``` +Example output: + +```bash +SOC identifier +0000000000000000000000000000000000000000000000000000000000000000 +Ethereum address +8d3766440f0d7b949a5e32995d09619a7f86e632 +SOC reference: +Reference (Hex): 9d453ebb73b2fedaaf44ceddcf7a0aa37f3e3d6453fea5841c31f0ea6d61dc85 ``` -:::warning Your communication privacy may be at risk -We suggest using either ephemeral private keys (e.g. randomly generated) or the `Signer` interface when writing to SOC or Feeds. Never use your real Ethereum private keys here (or in any web applications really) directly because you may lose your funds stored on it. 
-:::
-Using the writer interface is similar to using the reader:
+In this example:
+- `privateKey` is used to initialize the writer as `socWriter`, which signs the SOC.
+- `NULL_IDENTIFIER` is the 32-byte value used as the identifier (can be replaced with any user-defined value).
+- `socWriter.upload()` signs and uploads the data, returning a `reference` as confirmation.
+
+The `identifier` and Ethereum address together determine the SOC address and must match exactly when retrieving the chunk later. The returned `reference` is included in the upload response, but unlike non-SOC uploads it is not used to retrieve the chunk; instead, the `identifier` and Ethereum address are used (see the next section for example usage).
+
+### Retrieving SOCs
+
+To retrieve a previously uploaded SOC, you must know the Ethereum address of the owner (the signer used to upload the SOC) and the exact 32-byte `identifier` used during upload. These two values uniquely determine the SOC address in Swarm.
+
+To download a SOC in Bee-JS, use the `makeSOCReader()` method. This method takes the owner's Ethereum address (as an `EthAddress` instance, a hex string, or a `Uint8Array`) and returns a `SOCReader` object. You can then call `.download(identifier)` on the reader to retrieve the chunk's data.
+
+:::info SOC address is derived from the identifier and owner
+Unlike content-addressed chunks, which are retrieved by their Swarm reference hash, SOCs are retrieved using the combination of `identifier` and `owner` rather than a reference hash.
+:::

+```js
+import { Bee, NULL_IDENTIFIER } from "@ethersphere/bee-js"
+
+// Initialize Bee client pointing to the Swarm node
+const bee = new Bee('http://localhost:1633')
+
+// The owner's Ethereum address (20 bytes)
+const owner = '8d3766440f0d7b949a5e32995d09619a7f86e632'
+
+// Create a SOC reader object bound to the owner
+const socReader = bee.makeSOCReader(owner)
+
+async function readSOC() {
+  try {
+    // Download the SOC using the identifier
+    const response = await socReader.download(NULL_IDENTIFIER)
+
+    // Log the data
+    console.log("SOC Data:", response.payload.toUtf8())
+
+    // Optionally, you can use the data in other ways (e.g., process, display, etc.)
+  } catch (error) {
+    // Handle any errors during download
+    console.error("Error downloading SOC:", error)
+  }
+}
+
+// Call the function to read the SOC
+readSOC()
+```

+In this example:
+- The `owner` is the Ethereum address used to sign the SOC.
+- `NULL_IDENTIFIER` is the same default identifier used in the earlier upload example.
+- The returned payload is a `Bytes` object, and `.toUtf8()` converts it to a human-readable string.
+
+Make sure the `owner` and `identifier` values match exactly what was used during upload — any mismatch will result in the chunk not being found.

 ## Feeds

-Feeds are an abstraction built on top of SOCs to provide mutable resources on the otherwise immutable data types that Swarm supports.
+Feeds are an abstraction built on top of Single Owner Chunks (SOCs) that **simulate mutable content** in Swarm. They enable sequenced updates over time while maintaining a stable access point. Feeds are ideal for dynamic content like apps, messages, or websites. Each update is stored in a new immutable chunk at a deterministically calculated address.
+
+> Although feeds appear "mutable," no data is ever modified—new updates are simply written to new indexes.
+
+Similar to how an SOC is defined by an `owner` and `identifier`, a feed is defined by:
+
+* `owner`: an Ethereum-compatible address
+* `topic`: a unique 32-byte identifier
+
+### Concepts
+
+* **Feed Writing**: Publishers sign and write updates associated with specific topics to specific indices using their private key.
+
+* **Append-Only Behavior and Index Resolution**: Feeds are typically used in an append-only fashion, though skipping indices is possible. However, the latest update is resolved as the highest *consecutive* index. Updates at non-consecutively written indices must be retrieved explicitly.
+
+* **No Overwriting**: Each index can be written only once. Updates are permanent.
-One of the most common use cases for feeds is to store mutable data in an immutable address. For example, when hosting a website on Swarm, we may want its address stored in ENS, but we don't want to pay for changing the reference every time the site is updated.
+* **Feed Reading**: Readers resolve updates using the `owner` and `topic`. By default, if no index is specified, they fetch the latest consecutively written index.
-A feed is defined by its `owner` (see above), a `topic` (32 bytes arbitrary data as a hex string or `Uint8Array`), and a `type`. `type` defines how the updates and lookup of the feed index are made (currently only the `sequence` type is supported).
+* **Payloads**: Feed payloads include strings, JSON, or Swarm references. Payload size is limited to a single 4 KB chunk.
-The publisher is the owner of the feed, and only they can post updates to the feed.
Posting an update requires (1) constructing the identifier from the topic and the correct index and (2) signing it concatenated together with the hash of the arbitrary content of the update. -Conversely, users can consume a feed by retrieving the chunk by its address. Retrieving an update requires the consumer to construct the address from the owner’s address and the identifier. To calculate the identifier, they need the topic and the appropriate index. For this, they need to know the indexing scheme. -Feeds enable Swarm users to represent a sequence of content updates. The content of the update is the payload that the feed owner signs against the identifier. The payload can be a swarm reference from which the user can retrieve the associated data. +### Creating and Updating Feeds Using Topics -### Topic +This script demonstrates how to create two distinct feeds using different topics and update them using two methods: `uploadPayload()` and `upload()`. -In Swarm, `topic` is a 32-byte long arbitrary byte array. It's possible to choose an arbitrary topic for each application, and then knowing someone's (or something's) address, it's possible to find their feeds. Also, this can be the hash of a human-readable string, specifying the topic and optionally the subtopic of the feed. There is a helper function provided for that: +* **`upload()`**: Used for uploading references to other content on Swarm. +* **`uploadPayload()`**: Directly stores an arbitrary data payload in the feed. + +Although it is possible to update feeds with an arbitrary data payload using `uploadPayload()`, for most use cases the content should be uploaded separately (such as with `bee.uploadFile()`), and the feed will be updated with the reference of that upload using the `upload()` method. + +The script below performs the following steps: + +1. Initializes the Bee client and derives the feed owner address from a private key. +2. 
Uses the `uploadPayload()` method to upload a plain text string as a **payload update** to the feed with topic `"payload-update"`. +3. Uploads the same string as a file to Swarm and obtains a reference. +4. Uses the `upload()` method to upload the obtained reference as a **reference update** to the feed with topic `"reference-update"`. ```js -const topic = bee.makeFeedTopic('my-dapp.eth/outbox') +import { Bee, Topic, PrivateKey } from '@ethersphere/bee-js' + +const BEE_URL = 'http://localhost:1643' + +// Make sure to replace with a valid batch ID +const POSTAGE_BATCH_ID = 'c119705f257c0015a062b17929e3ca3269e158231324707f2ea6e72c5c9f9b78' + +// Any Ethereum style private key can be used, ideally dedicated to this feed only. Using your node or wallet's key is strongly discouraged. +const privateKey = new PrivateKey('0x0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef') +const bee = new Bee(BEE_URL) +const owner = privateKey.publicKey().address() + +// This owner address can be shared along with the topic to enable anyone to retrieve updates from the feed +console.log('Feed owner address:') +console.log(owner.toHex()) + +async function run() { + const message = 'This is a test announcement.' 
+ const topic1 = Topic.fromString('payload-update') + + // The writer is constructed from the topic and private key, and can be used for writing feed updates (it also supports reading from feeds) + const writer1 = bee.makeFeedWriter(topic1, privateKey) + + await writer1.uploadPayload(POSTAGE_BATCH_ID, message) + console.log('✅ First feed updated with payload.') + + const result = await bee.uploadFile(POSTAGE_BATCH_ID, message, 'announcement.txt') + console.log(result) + console.log(result.reference.toHex()) + + // The second writer is constructed using a new topic and the same private key + const topic2 = Topic.fromString('reference-update') + const writer2 = bee.makeFeedWriter(topic2, privateKey) + await writer2.upload(POSTAGE_BATCH_ID, result.reference) + console.log('✅ Second feed updated with reference.') +} + +run().catch(console.error) ``` -### High level JSON API -Many applications are storing or manipulating data in JSON. bee-js has convenience high level API to use feeds with JSON objects. -It consists of two methods: +### Retrieving from Feeds by Topic and Owner + +This script demonstrates how to retrieve data from feeds using their `topic` and `owner`. There are two primary methods used for reading from feeds: + +* **`downloadPayload()`** – Used to read the raw payload written directly to the feed. +* **`downloadReference()`** – Used to read a Swarm reference from the feed. The returned reference can then be passed to `downloadFile()` to retrieve the associated file. + +The script performs the following steps: + +1. Initializes the Bee client and derives the feed owner address from a private key. +2. Reads the latest **payload update** from the feed with topic `"payload-update"` using `downloadPayload()`. +3. Reads the latest **reference update** from the feed with topic `"reference-update"` using `downloadReference()`, then retrieves the associated file from Swarm using `downloadFile()`. 
- - `setJsonFeed` method to set JSON compatible data to feed - - `getJsonFeed` method to get JSON compatible data (and parse them) from feed +Feed readers always require a topic and an owner address. By default, they fetch the latest *consecutively written* update. To retrieve a specific update, an explicit index can be provided. -:::info Bee's instance signer -You can pass a `Signer` (or compatible data) into `Bee` class constructor, which then -will be used as default `Signer`. +:::warning +While not explicitly enforced, it is strongly recommended to use feeds in an append-only fashion. If instead non-consecutive updates are performed, the only way to discover updates at higher non-consecutively written indexes is to iterate one by one through all indexes up to the number of the index with the update. ::: ```js -const postageBatchId = await bee.createPostageBatch("100", 17) - -await bee.setJsonFeed( - postageBatchId, - 'some cool arbitraty topic', - { some: ['cool', { json: 'compatible' }, 'object']}, - { signer: '0x634fb5a872396d9693e5c9f9d7233cfa93f395c093371017ff44aa9ae6564cdd' } -) - -const data = await bee.getJsonFeed( - 'some cool arbitraty topic', - { signer: '0x634fb5a872396d9693e5c9f9d7233cfa93f395c093371017ff44aa9ae6564cdd' } -) - -console.log(data) -// Prints: { some: ['cool', { json: 'compatible' }, 'object']} +import { Bee, Topic, PrivateKey } from '@ethersphere/bee-js' + +const BEE_URL = 'http://localhost:1643' + +async function run() { + const bee = new Bee(BEE_URL) + const privateKey = new PrivateKey('0x0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef') + const owner = privateKey.publicKey().address() + + // Read from payload-update feed + const topic1 = Topic.fromString('payload-update') + const reader1 = bee.makeFeedReader(topic1, owner) + + console.log('\n Reading latest update for feed: "payload-update"') + + try { + const latest = await reader1.downloadPayload() + const text = latest.payload.toUtf8() + + console.log('Message 
(as plain text):', text) + console.log('Feed index:', BigInt(`0x${Buffer.from(latest.feedIndex.bytes).toString('hex')}`)) + console.log('Next index:', BigInt(`0x${Buffer.from(latest.feedIndexNext.bytes).toString('hex')}`)) + } catch (error) { + console.warn('❌ Failed to read update:', error.message) + } + + // Read from reference-update feed + const topic2 = Topic.fromString('reference-update') + const reader2 = bee.makeFeedReader(topic2, owner) + + console.log('\n Reading latest update for feed: "reference-update"') + + try { + const latest = await reader2.downloadReference() + const referenceHex = latest.reference.toHex() + + console.log('Swarm reference (hex):', referenceHex) + + const file = await bee.downloadFile(referenceHex) + console.log('Retrieved file content:', file.data.toUtf8()) + console.log('Feed index:', BigInt(`0x${Buffer.from(latest.feedIndex.bytes).toString('hex')}`)) + console.log('Next index:', BigInt(`0x${Buffer.from(latest.feedIndexNext.bytes).toString('hex')}`)) + } catch (error) { + console.warn('❌ Failed to read update:', error.message) + } +} + +run().catch(console.error) ``` -### Low level API -Low level API is an API that is more flexible in its usage, but requires better understanding and mainly more method calls. +### Using Feed Manifests for Fixed References + +Feed manifests allow you to expose a feed through a **stable Swarm reference** that always resolves to the latest update. This is especially useful for hosting evolving content like websites or files, without needing to share a new Swarm reference each time content changes. + +With a manifest, you can: + +* Retrieve the **latest feed update** through a static `/bzz//` URL. +* Share a single reference that always resolves to the **current content**. +* Enable compatibility with public gateways and ENS domains. + +The script below performs the following steps: + +1. Initializes the Bee client and creates a feed manifest using a topic and owner. +2. 
Checks the current feed index or starts from index 0. +3. Uploads two updates to Swarm and stores their references in the feed at consecutive indices. +4. After each update, retrieves the content using the same manifest reference, confirming it resolves to the latest. -#### Reading feeds -To read data from a feed, we need to make a reader object for a specific `type`, `topic` and `owner`, then we can download the latest update containing a reference. ```js -const topic = '0000000000000000000000000000000000000000000000000000000000000000' -const owner = '0x8d3766440f0d7b949a5e32995d09619a7f86e632' -const feedReader = bee.makeFeedReader('sequence', topic, owner) -const feedUpdate = await feedReader.download() -console.log(feedUpdate.reference) // prints the latest reference stored in the feed +import { Bee, Topic, PrivateKey, FeedIndex } from '@ethersphere/bee-js' + +const BEE_URL = 'http://localhost:1643' +const POSTAGE_BATCH_ID = 'c119705f257c0015a062b17929e3ca3269e158231324707f2ea6e72c5c9f9b78' + +const bee = new Bee(BEE_URL) +const privateKey = new PrivateKey('0x0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef') +const owner = privateKey.publicKey().address() + +console.log('Feed owner:', owner.toHex()) + +const topic = Topic.fromString('uploaded-reference-demo') +const manifestReference = await bee.createFeedManifest(POSTAGE_BATCH_ID, topic, owner) +console.log('\n Manifest created:') +console.log('Ref:', manifestReference.toString()) +console.log('URL:', `${BEE_URL}/bzz/${manifestReference.toString()}/`) + +const reader = bee.makeFeedReader(topic, owner) +let index +try { + const latest = await reader.download() + index = latest.feedIndexNext + const latestIndex = BigInt(`0x${Buffer.from(latest.feedIndex.bytes).toString('hex')}`) + console.log(`\n Latest index: ${latestIndex}`) +} catch { + index = FeedIndex.fromBigInt(0n) + console.log('\n No updates yet. 
Starting at index 0') +} + +for (let i = 0; i < 2; i++) { + const content = `Update ${BigInt(`0x${Buffer.from(index.bytes).toString('hex')}`)}` + const upload = await bee.uploadFile(POSTAGE_BATCH_ID, content, `update-${i}.txt`) + const writer = bee.makeFeedWriter(topic, privateKey) + await writer.upload(POSTAGE_BATCH_ID, upload.reference, { index }) + + console.log(`\n✅ Updated to index ${BigInt(`0x${Buffer.from(index.bytes).toString('hex')}`)}: "${content}"`) + + const result = await bee.downloadFile(manifestReference) + console.log(` Retrieved via manifest: "${result.data.toUtf8()}"`) + console.log(`URL: ${BEE_URL}/bzz/${manifestReference.toString()}/`) + + index = index.next() +} ``` -#### Writing feeds -When writing a feed, typically an immutable content is uploaded first, and then its reference is updated in the feed. The `signer` here is the same as with [writing the SOCs](#writing-socs) (with the same caveats!). +### Non-Sequential Feed Updates (Discouraged) + +Although feeds are typically updated in a sequential, append-only manner, it is possible to manually write to a specific index using the `index` option. However, only the **highest consecutively filled index** is considered the latest. Any gaps will result in newer updates at higher indices being ignored when resolving the latest feed content. 
+ +For example: ```js -const postageBatchId = await bee.createPostageBatch("100", 17) -const data = new Uint8Array([1, 2, 3]) -const reference = await bee.uploadData(data) -const topic = '0000000000000000000000000000000000000000000000000000000000000000' -const signer = '0x634fb5a872396d9693e5c9f9d7233cfa93f395c093371017ff44aa9ae6564cdd' -const feedWriter = bee.makeFeedWriter('sequence', topic, signer) -const response = await feedWriter.upload(postageBatchId, reference) +await writer.uploadPayload(POSTAGE_BATCH_ID, 'Initial update') // Goes to index 0 +await writer.uploadPayload(POSTAGE_BATCH_ID, 'Out-of-order update', { index: 5 }) ``` -### Using feed manifest +Now, attempting to read the latest update without specifying an index will still return the update at index 0: -One of the most common use cases for feeds is to store mutable data in an immutable address. For example, when hosting a website on Swarm, we may want its address stored in ENS, but we don't want to pay for changing the reference every time the site is updated. +```js +const latest = await reader.downloadPayload() +console.log(latest.payload.toUtf8()) +// → "Initial update" +``` -Swarm provides a feature called "feed manifests" for this use case. It is a content-addressed chunk that stores a feed's definition (the `type`, the `topic`, and the `owner`). When it is looked up using the `bzz` endpoint, Swarm recognizes that it refers to a feed and continues the lookup according to the feed parameters. 
+To read the out-of-order update at index 5, you must explicitly specify it:

```js
-const postageBatchId = await bee.createPostageBatch("100", 17)
-const topic = '0000000000000000000000000000000000000000000000000000000000000000'
-const owner = '0x8d3766440f0d7b949a5e32995d09619a7f86e632'
-const reference = bee.createFeedManifest(postageBatchId, 'sequence', topic, owner)
+const manual = await reader.downloadPayload({ index: 5 })
+console.log(manual.payload.toUtf8())
+// → "Out-of-order update"
```

-This creates the feed manifest chunk on Swarm. You can use the returned reference to look up with the `/bzz` endpoint or use it with ENS.
+:::caution
+Manually writing to skipped indices is supported but discouraged. Stick to the default behavior (no `index` specified) to maintain a clean, append-only feed history and ensure new updates are easily discoverable.
+:::
diff --git a/docs/documentation/staking.md b/docs/documentation/staking.md
index b7ea495a..5687499b 100644
--- a/docs/documentation/staking.md
+++ b/docs/documentation/staking.md
@@ -5,14 +5,104 @@ slug: /staking
sidebar_label: Staking
---
-## 🚧 Under Construction 🚧
-:::caution 🚧 This page is under construction
-This section is still being worked on. Check back soon for updates!
+Operating a Bee full node and staking BZZ makes you eligible to participate in the redistribution game — a mechanism for earning additional BZZ by sharing disk space with the Swarm network. This guide shows how to use `bee-js` to deposit stake and check your node's staking status.
+
+:::danger
+⚠️ **Important:** Staked BZZ is **non-refundable** — once deposited, it **cannot be withdrawn**.
+::: + + +:::info +Currently, `bee-js` supports depositing stake and checking staking status, but does **not yet support** advanced features like [partial stake withdrawals](https://docs.ethswarm.org/docs/bee/working-with-bee/staking#partial-stake-withdrawals) or [reserve doubling](https://docs.ethswarm.org/docs/bee/working-with-bee/staking#reserve-doubling). + + +For a complete guide to the requirements and configuration for staking, refer to the [Bee documentation](https://docs.ethswarm.org/docs/bee/working-with-bee/staking). ::: -* Get staked xBZZ -* Stake xBZZ -* Get redistribution state + + + +## Stake BZZ + +To stake, use the `depositStake` method provided by `bee-js`. It accepts a value in PLUR, the smallest unit of BZZ (like wei in Ethereum). The `BZZ` utility class simplifies conversion from decimal string to PLUR. + +```js +import { Bee, BZZ } from '@ethersphere/bee-js' + +const bee = new Bee('http://localhost:1633') + +async function main() { + + // Convert 10 BZZ to PLUR + const amount = BZZ.fromDecimalString('10') + + const txHash = await bee.depositStake(amount) + console.log('Stake deposited. Transaction hash:', txHash.toHex()) +} + +main().catch(console.error) +``` + +Example output: + +```bash +Stake deposited. Transaction hash: e1b86eebc54b465d84ab278da94a387e9786076557ab8f3fe04ba1b52dc065c8 +``` +A successful staking transaction will return the transaction hash which you can look up on a blockchain explorer like [Gnosisscan](https://gnosisscan.io/tx/0xe1b86eebc54b465d84ab278da94a387e9786076557ab8f3fe04ba1b52dc065c8). 
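For reference, PLUR relates to BZZ by a fixed factor: the BZZ token uses 16 decimal places, so 1 BZZ = 10^16 PLUR. The conversion performed by the `BZZ` helper can be sketched with plain `BigInt` arithmetic (an illustrative sketch, not the library's implementation):

```js
// Illustrative sketch: converting a whole-number BZZ amount to PLUR.
// 1 BZZ = 10^16 PLUR, since the BZZ token uses 16 decimal places.
const PLUR_PER_BZZ = 10n ** 16n

function bzzToPlur(wholeBzz) {
  return BigInt(wholeBzz) * PLUR_PER_BZZ
}

console.log(bzzToPlur(10).toString()) // "100000000000000000"
```

So the 10 BZZ staked above corresponds to 100000000000000000 PLUR on chain.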
+
+## Check Staking Status
+
+After staking, you can confirm the deposited amount and monitor your node’s participation in the redistribution game:
+
+```js
+import { Bee } from '@ethersphere/bee-js'
+
+const bee = new Bee('http://localhost:1633')
+
+async function main() {
+  const stake = await bee.getStake()
+  const redistributionState = await bee.getRedistributionState()
+
+  console.log('Current staked amount:', stake.toDecimalString(), 'BZZ')
+  console.log('\nRedistribution State:')
+  console.log(JSON.stringify(redistributionState, null, 2))
+}
+
+main().catch(console.error)
+```
+
+Example output:
+
+```bash
+Current staked amount: 10.0000000000000001 BZZ
+
+Redistribution State:
+{
+  "minimumGasFunds": {
+    "state": "274506772500000"
+  },
+  "hasSufficientFunds": true,
+  "isFrozen": false,
+  "isFullySynced": true,
+  "phase": "claim",
+  "round": 261311,
+  "lastWonRound": 0,
+  "lastPlayedRound": 0,
+  "lastFrozenRound": 0,
+  "lastSelectedRound": 0,
+  "lastSampleDurationSeconds": 0,
+  "block": 39719372,
+  "reward": {
+    "state": "0"
+  },
+  "fees": {
+    "state": "0"
+  },
+  "isHealthy": true
+}
+```
+
+For details on interpreting these values, refer to the [staking status section](https://docs.ethswarm.org/docs/bee/working-with-bee/staking#check-status) of the Bee documentation.
\ No newline at end of file
diff --git a/docs/documentation/tracking-uploads.md b/docs/documentation/tracking-uploads.md
new file mode 100644
index 00000000..a59ca1fd
--- /dev/null
+++ b/docs/documentation/tracking-uploads.md
@@ -0,0 +1,151 @@
+---
+title: Tracking Uploads
+id: tracking-uploads
+slug: /tracking-uploads
+sidebar_label: Tracking Uploads
+---
+
+You can track the progress of deferred uploads using "tags". Each tag tracks how many chunks were **split**, **stored**, **seen**, and **synced** by the network. By creating a tag before uploading and passing it to the upload function, you make the upload *trackable*, allowing you to confirm whether your uploaded data has been fully synced.
+ +:::info +Tracking with tags is used ***only for [deferred uploads](/docs/upload-download/#deferred-uploads)***. That is because when content is uploaded in a deferred manner, the content's reference hash will be returned *immediately*, often before the content has been fully synced to the network. Therefore tags should be used in order to confirm when the content has been fully synced and is retrievable. + +With direct uploads, the reference hash is not returned until after the content has already been uploaded and fully synced to the network, so there is no need to track it after uploading. +::: + +## How It Works + +### 1. Create a Tag + +Before uploading, create a new tag using `bee.createTag()`. This returns a unique tag UID that will be used to monitor the upload. + +```js +const tag = await bee.createTag() +console.log("Created new tag with UID:", tag.uid) +``` + +Alternatively, you can use an existing tag from `bee.getAllTags()` (useful for testing or reuse): + +```js +const allTags = await bee.getAllTags() +if (allTags.length > 0) { + tag = allTags[0] + console.log("Using existing tag with UID:", tag.uid) +} +``` + +### 2. Upload a File with the Tag + +To enable tracking, pass the tag UID into the upload options under the `tag` key: + +```js +const result = await bee.uploadFile(postageBatchId, fileData, 'nodes.json', { + tag: tag.uid, + contentType: 'application/json' +}) +``` + +This links the upload to your tag so you can monitor its progress. + +### 3. Track Tag Progress + +Use `bee.retrieveTag(tagUid)` to monitor upload progress. Chunks that have **already been synced in the past** are counted in `seen`, while newly synced ones are in `synced`. Poll repeatedly until: + +```text +synced + seen === split +``` + +This indicates that all chunks have either just synced or were already present in the network. 
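The check above can be expressed as a small pure helper. The tag shape `{ split, seen, synced }` is assumed from the fields described on this page; this is an illustrative sketch, not part of the `bee-js` API:

```js
// Illustrative sketch: completion predicate built from the tag counters.
// `>=` is used so the check also passes if seen + synced overshoots split.
function isFullySynced(tag) {
  return tag.split > 0 && tag.synced + tag.seen >= tag.split
}

console.log(isFullySynced({ split: 1078, seen: 0, synced: 546 })) // false
console.log(isFullySynced({ split: 1078, seen: 100, synced: 978 })) // true
```

The `tag.split > 0` guard avoids reporting an empty, not-yet-started tag as complete.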
+ +```js +const tag = await bee.retrieveTag(tagUid) +console.log(` - Total split: ${tag.split}`) +console.log(` - Synced: ${tag.synced}`) +console.log(` - Seen: ${tag.seen}`) +``` + +## Example Script + +```js +import { Bee } from "@ethersphere/bee-js" +import fs from "fs/promises" + +const bee = new Bee('http://localhost:1633') +const postageBatchId = "129903062bedc4eca6fc1c232ed385e93dda72f711caa1ead6018334dd801cee" + +async function waitForTagSync(tagUid, interval = 800) { + while (true) { + const tag = await bee.retrieveTag(tagUid) + + console.log(`Progress (Tag ${tagUid}):`) + console.log(` - Total split: ${tag.split}`) + console.log(` - Stored: ${tag.stored}`) + console.log(` - Seen: ${tag.seen}`) + console.log(` - Synced: ${tag.synced}`) + + if (tag.split > 0 && tag.synced + tag.seen >= tag.split) { + console.log("Upload fully synced!") + break + } + + await new Promise(resolve => setTimeout(resolve, interval)) + } +} + +async function uploadNodesJsonWithTag() { + try { + const fileData = await fs.readFile('./nodes.json') + + const tag = await bee.createTag() + console.log("Created new tag with UID:", tag.uid) + + const result = await bee.uploadFile(postageBatchId, fileData, 'nodes.json', { + tag: tag.uid, + contentType: 'application/json' + }) + + console.log("Uploaded reference:", result.reference.toHex()) + + await waitForTagSync(tag.uid) + } catch (error) { + console.error("Error uploading nodes.json:", error.message) + } +} + +uploadNodesJsonWithTag() +``` + + +### Example Terminal Output + +```bash +Created new tag with UID: 85 +Uploaded reference: 78e5247e97b1a3362b6c3f054924dce734e0ffd7df0cb5ed9b636cb6a4a14d93 +Progress (Tag 85): + - Total split: 1078 + - Stored: 0 + - Seen: 0 + - Synced: 546 +Progress (Tag 85): + - Total split: 1078 + - Stored: 0 + - Seen: 0 + - Synced: 1078 +Upload fully synced! 
+``` + + +## Deleting Tags + +You can delete tags you no longer need using their uid: + +```js +await bee.deleteTag(tag.uid) +console.log("Deleted tag:", tag.uid) +``` + +## References + +- [Bee docs – Syncing / Tags](https://docs.ethswarm.org/docs/develop/access-the-swarm/syncing) +- [Bee API Reference – `/tags`](https://docs.ethswarm.org/api/#tag/Tag) + diff --git a/docs/documentation/upload-download.md b/docs/documentation/upload-download.md index e003f7a8..03a5a18f 100644 --- a/docs/documentation/upload-download.md +++ b/docs/documentation/upload-download.md @@ -5,143 +5,324 @@ slug: /upload-download sidebar_label: Upload and Download --- import Tabs from '@theme/Tabs' import TabItem from '@theme/TabItem' -Uploading your data to Swarm is easy with `bee-js`. Based on your needs you can either upload directly unstructured data, single file or even complex directories. Let's walk through the options one by one. +Uploading and downloading with Swarm is easy with `bee-js`. Based on your needs you can either upload unstructured data directly, single files, lists of files, or entire directories. Each upload will return a Swarm reference hash, which is a unique identifier for the upload that can be used to download the uploaded content. + ## Requirements To use the example scripts below, you need: - An instance of `bee-js`'s `Bee` [initialized](/docs/getting-started/) as `bee` using the API endpoint of a currently operating Bee node. -- (Uploads only) The batch ID of a previously purchased usable postage batch with enough `remainingSize` left to upload the desired data. If you don't have one already, you will need to [buy a batch](/docs/storage/#purchasing-storage) to upload data. If you do have one, you will need to [get and save](/docs/storage/#selecting-a-batch) its batch ID. - -## Uploading +- The batch ID of a previously purchased usable postage batch with enough `remainingSize` left to upload the desired data. 
If you don't have one already, you will need to [buy a batch](/docs/storage/#purchasing-storage) to upload data. If you do have one, you will need to [get and save](/docs/storage/#selecting-a-batch) its batch ID.

-The examples below assume you already have an instance of the `Bee` class [initialized](/docs/getting-started/) as `bee`, and the [batch ID](/docs/storage/#purchasing-storage) of a valid postage stamp batch saved as a string in `postageBatchId`.

-### Upload Data
+## Arbitrary Data

You can upload and retrieve any `string` or `Uint8Array` data with the `uploadData` and `downloadData` functions.

-When you download data the return type is the `Data` interface which extends `Uint8Array` with convenience functions like:
+When you download data, the return type is `Bytes`. The `Bytes` class includes various convenience functions like:

 - `toUtf8()` that converts the bytes into UTF-8 encoded string
- - `hex()` that converts the bytes into **non-prefixed** hex string
- - `json()` that converts the bytes into JSON object
+ - `toHex()` that converts the bytes to a hex string
+ - `toJSON()` that converts the bytes into a JSON object
+
+:::info
+The `Bytes` class is a core data type in `bee-js`. It includes a variety of useful utility methods which you can learn more about on the [Utility Classes](/docs/utilities/) page.
+:::

```js
import { Bee } from "@ethersphere/bee-js"

const bee = new Bee('http://localhost:1633')
-const postageBatchId = "177da0994ed3000d241b183d33758aec42495bf9008fab059f0e3f208f3a1ade"
+const postageBatchId = "129903062bedc4eca6fc1c232ed385e93dda72f711caa1ead6018334dd801cee"
+
+const jsonData = {
+  message: "Bee is awesome!",
+  features: ["decentralized", "reliable", "scalable"],
+  version: 1.0
+};

-const result = await bee.uploadData(postageBatchId, "Bee is awesome!")
+const jsonString = JSON.stringify(jsonData);

-console.log(result.reference.toHex())
+const result = await bee.uploadData(postageBatchId, jsonString)

+// Prints the 64 character long hex string Swarm reference - make sure to save the reference in order to access the content later
+console.log(result.reference.toHex())
+
+// Use the Swarm reference hash to download the data
const retrievedData = await bee.downloadData(result.reference)
-console.log(retrievedData.toUtf8()) // prints 'Bee is awesome!'
+
+console.log(retrievedData) // Prints the raw data
+console.log(retrievedData.toUtf8()) // Prints the data as UTF-8 text
+console.log(retrievedData.toJSON()) // Prints the data as JSON
```

:::info Tip
A Swarm reference or hash is a 64 character long hex string which is the address of the uploaded data, file, or directory. It must be saved so it can be used later to retrieve the uploaded content.
:::

-### Upload Single file
+Example terminal output:
+
+```bash
+e2d9d04da9f8a000ddcc50e1b86fbff00c6202b406a9dd7aa55d283747858c33
+Bytes {
+  bytes: ,
+  length: 92
+}
+{"message":"Bee is awesome!","features":["decentralized","reliable","scalable"],"version":1}
+{
+  message: 'Bee is awesome!',
+  features: [ 'decentralized', 'reliable', 'scalable' ],
+  version: 1
+}
+```
+

-You can also upload files by specifying a filename. When you download the file, `bee-js` will return additional information like the `contentType` or `name` of the file.
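As a quick sanity check, the reference format described in the tip above can be validated with a regular expression. This standalone sketch (not a `bee-js` API) accepts both plain 64 character references and the 128 character references returned by encrypted uploads:

```js
// Illustrative sketch: does a string look like a Swarm reference?
// Plain references are 64 hex characters; encrypted ones are 128.
function looksLikeSwarmReference(ref) {
  return /^[0-9a-f]{64}([0-9a-f]{64})?$/i.test(ref)
}

console.log(looksLikeSwarmReference('e2d9d04da9f8a000ddcc50e1b86fbff00c6202b406a9dd7aa55d283747858c33')) // true
console.log(looksLikeSwarmReference('not-a-reference')) // false
```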
+## Single Files + +The `uploadFile` function accepts a `string`, `Uint8Array`, Node.js `Readable` stream, or browser `File` object as input data, along with an optional filename and upload options. + +:::info +When working with browsers you can use the [`File` interface](https://developer.mozilla.org/en-US/docs/Web/API/File). The filename is taken from the `File` object itself, but can be overwritten through the second argument of the `uploadFile` function. +::: ```js import { Bee } from "@ethersphere/bee-js" +import { readFileSync } from "fs" const bee = new Bee('http://localhost:1633') -const postageBatchId = "177da0994ed3000d241b183d33758aec42495bf9008fab059f0e3f208f3a1ade" +const postageBatchId = "ec4d7e3acbd626471b33135164335dfcb0bed889dd4a951c09da8ea7b59c1fc9" + +async function uploadFromDisk() { + try { + // Read the file from the local file system + const filePath = "./textfile.txt" + const fileData = readFileSync(filePath) + + // Upload the file data + const result = await bee.uploadFile(postageBatchId, fileData, "textfile.txt", { + contentType: "text/plain" + }) + + // Print the reference hash used to retrieve the content + console.log(result.reference.toHex()) + + + // Download the file + const retrievedFile = await bee.downloadFile(result.reference.toHex()) + + console.log(retrievedFile.name) // Prints 'textfile.txt' + console.log(retrievedFile.contentType) // Prints 'text/plain' + console.log(retrievedFile.data.toUtf8()) // Prints file content -const result = await bee.uploadFile(postageBatchId, "Bee is awesome!", "textfile.txt") -const retrievedFile = await bee.downloadFile(result.reference.toHex()) + return result.reference.toHex() + } catch (error) { + console.error("Error:", error.message) + } +} -console.log(retrievedFile.name) // prints 'textfile.txt' -console.log(retrievedFile.contentType) // prints 'application/x-www-form-urlencoded' -console.log(retrievedFile.data.toUtf8()) // prints 'Bee is awesome!' 
+uploadFromDisk()
```

-You can directly upload using the [`File` interface](https://developer.mozilla.org/en-US/docs/Web/API/File). The filename is taken from the `File` object itself, but can be overwritten through the second argument of the `uploadFile` function.
+Example terminal output:
+
+```bash
+0dca369c5a1a1ef5be1f5e293089c695920323805568382d9e97e3cd17678a3a
+textfile.txt
+text/plain
+this is a sample file
+```
+
+## Multiple Files
+
+For uploading multiple files at once in a browser environment you can use the `uploadFiles` function.
+
+*Note that it preserves the relative paths of all uploaded files, so the full paths (e.g. "folder/nested.txt") must be used when downloading them.*
+
+:::caution Browser Only
+
+
+The `uploadFiles` function is only supported for **browser environments**, as it requires input in the form of a [`FileList`](https://developer.mozilla.org/en-US/docs/Web/API/FileList) or array of [`File`](https://developer.mozilla.org/en-US/docs/Web/API/File) objects.
+
+While `File` is available in Node v20+, it is still not fully supported and may not always work as expected.
+::: ```js import { Bee } from "@ethersphere/bee-js" -import fs from 'fs' const bee = new Bee('http://localhost:1633') -const postageBatchId = "177da0994ed3000d241b183d33758aec42495bf9008fab059f0e3f208f3a1ade" +const postageBatchId = "ec4d7e3acbd626471b33135164335dfcb0bed889dd4a951c09da8ea7b59c1fc9" + +async function uploadFiles() { + try { + // Create 3 File objects from strings with nested paths + const rootFile = new File(["Root file content"], "root.txt", { type: "text/plain" }) + const nestedFile = new File(["Nested file content"], "folder/nested.txt", { type: "text/plain" }) + const deepFile = new File(["Deeply nested file content"], "folder/subfolder/deep.txt", { type: "text/plain" }) + + // Upload all files + const result = await bee.uploadFiles(postageBatchId, [rootFile, nestedFile, deepFile]) + console.log("Files uploaded with reference:", result.reference.toHex()) + + // Download each file by full path + const downloadedRoot = await bee.downloadFile(result.reference, 'root.txt') + const downloadedNested = await bee.downloadFile(result.reference, 'folder/nested.txt') + const downloadedDeep = await bee.downloadFile(result.reference, 'folder/subfolder/deep.txt') + + // Display contents + console.log("Root file:", downloadedRoot.data.toUtf8()) + console.log("Nested file:", downloadedNested.data.toUtf8()) + console.log("Deep file:", downloadedDeep.data.toUtf8()) + + return result.reference.toHex() + } catch (error) { + console.error("Error:", error.message) + } +} + +uploadFiles() +``` -// Read the file content -const fileContent = fs.readFileSync("./textFile.txt", "utf8") +## Directories -// Upload the file content with a name -const result = await bee.uploadFile(postageBatchId, fileContent, "textfile.txt") +:::info +`uploadFilesFromDirectory` is not available in the browser as it relies on [`fs` from NodeJS](https://nodejs.org/api/fs.html). 
+::: -// Download the file -const retrievedFile = await bee.downloadFile(result.reference) -console.log(retrievedFile.name) // prints 'textfile.txt' -console.log(retrievedFile.contentType) // should print 'application/x-www-form-urlencoded -console.log(retrievedFile.data.toUtf8()) // prints the file content -``` +The `uploadFilesFromDirectory` function takes a directory path as input and recursively uploads all files within it, including those in all nested subdirectories, while preserving their relative paths. -### Files and Directories +*When downloading files later, you must use the full relative paths exactly as they appeared during upload.* -In browsers, you can easily upload an array of `File` objects coming from your form directly with [`FileList`](https://developer.mozilla.org/en-US/docs/Web/API/FileList). If the files uploaded through `uploadFiles` have a relative path, they are added relative to this filepath. Otherwise, the whole structure is flattened into single directory. +Let's assume we have the following file structure: + +```bash +. 
+├── folder +│   ├── nested.txt +│   └── subfolder +│   └── deep.txt +└── root.txt +``` ```js import { Bee } from "@ethersphere/bee-js" -import fs from 'fs' const bee = new Bee('http://localhost:1633') -const postageBatchId = "177da0994ed3000d241b183d33758aec42495bf9008fab059f0e3f208f3a1ade" -const foo = new File(["foo"], "foo.txt", { type: "text/plain" }) -const bar = new File(["bar"], "bar.txt", { type: "text/plain" }) +const postageBatchId = "ec4d7e3acbd626471b33135164335dfcb0bed889dd4a951c09da8ea7b59c1fc9" + +async function uploadDirectory() { + try { + // Upload the current directory (where the script is run) + const result = await bee.uploadFilesFromDirectory(postageBatchId, process.cwd()) + + console.log("Directory uploaded successfully!") + console.log("Swarm reference:", result.reference.toHex()) + + // Download each file using its relative path + const root = await bee.downloadFile(result.reference, 'root.txt') + const nested = await bee.downloadFile(result.reference, 'folder/nested.txt') + const deep = await bee.downloadFile(result.reference, 'folder/subfolder/deep.txt') + + // Print out file contents + console.log("root.txt:", root.data.toUtf8()) + console.log("folder/nested.txt:", nested.data.toUtf8()) + console.log("folder/subfolder/deep.txt:", deep.data.toUtf8()) + } catch (error) { + console.error("Error during upload or download:", error.message) + } +} + +uploadDirectory() +``` -const result = await bee.uploadFiles(postageBatchId, [ foo, bar ]) // upload +Example terminal output: + +```bash +Directory uploaded successfully! +Swarm reference: 3e42f7cfbeec140129211fa24b9b57b2bff5932416dc8ff30b44c8446b259e92 +root.txt: Root level content +folder/nested.txt: Nested content +folder/subfolder/deep.txt: Deeply nested content +``` + + +## Upload Options + +The `uploadData`, `uploadFile`, `uploadFiles`, and other similar methods accept an **options object** as their third argument. 
This object lets you modify how the upload is handled — for example, enabling encryption, pinning data locally, or attaching a tag to track the upload. -const Foo = await bee.downloadFile(result.reference, './foo.txt') // download foo -const Bar = await bee.downloadFile(result.reference, './bar.txt') // download bar -console.log(Foo.data.toUtf8()) // prints 'foo' -console.log(Bar.data.toUtf8()) // prints 'bar' +### Pinning + +If you set `pin: true`, the uploaded data will be **stored locally** on your Bee node, even if it becomes unavailable on the wider Swarm network. This lets your node re-upload the content if needed: + +```js +await bee.uploadData(postageBatchId, 'my content', { pin: true }) ``` -You may also utilize the `uploadFilesFromDirectory` function, which takes the directory path as input and uploads all files in that directory. Let's assume we have the following file structure: +:::info +Pinning is local only. It doesn't make data permanent across the network. +::: -```sh -. -+-- foo.txt -+-- dir -| +-- bar.txt +More info: +- [Bee docs – Pinning](https://docs.ethswarm.org/docs/develop/access-the-swarm/pinning) + + +### Encryption + +You can enable **client-side encryption** by setting `encrypt: true`. This encrypts the content before uploading and returns a longer Swarm reference that includes the decryption key. + +**Example:** + +```js +await bee.uploadData(postageBatchId, 'sensitive content', { encrypt: true }) ``` +When you later download the content, `bee-js` will decrypt it automatically if the reference contains the embedded key. + +More info: +- [Store with Encryption](https://docs.ethswarm.org/docs/develop/access-the-swarm/store-with-encryption) + + +### Tags + +Tags let you **track upload progress** through the Bee API. You can create a new tag using `bee.createTag()`, then pass its ID when uploading. 
**Example:**

```js
-import { Bee } from "@ethersphere/bee-js"
-import fs from 'fs'
+const tag = await bee.createTag()
+await bee.uploadData(postageBatchId, 'track me', { tag: tag.uid })
+```

-const bee = new Bee('http://localhost:1633')
-const postageBatchId = "177da0994ed3000d241b183d33758aec42495bf9008fab059f0e3f208f3a1ade"
-const result = await bee.uploadFilesFromDirectory(postageBatchId, './') // upload recursively current folder
+You can use the tag ID to monitor syncing status: how many chunks were split, stored, seen, and synced.

-const Foo = await bee.downloadFile(result.reference, './foo.txt') // download foo
-const Bar = await bee.downloadFile(result.reference, './dir/bar.txt') // download bar
+See the [Tracking Uploads page](/docs/tracking-uploads/) for more info on creating, monitoring, and managing tags.

-console.log(Foo.data.toUtf8()) // prints 'foo'
-console.log(Bar.data.toUtf8()) // prints 'bar'
+
+### Deferred Uploads
+
+By default, uploads are **deferred**, meaning the client sends the data to the Bee node, generates and returns the Swarm reference hash, and the node starts syncing it to the network **in the background**. The function returns as soon as the chunks are processed and a Swarm reference is generated — **even before the data is actually available on the network**.
+
+This can be risky if you assume the data is already retrievable using the returned reference.
+
+**To ensure reliability, it is recommended to set `deferred: false`** unless you're explicitly tracking the upload using tags.
+
+**Recommended usage:**
+
+```js
+await bee.uploadData(postageBatchId, 'data', { deferred: false })
+```

+:::tip
+If you do use `deferred: true`, make sure to use a [tag](/docs/tracking-uploads/) to track upload progress and confirm the success of the upload.
+:::
diff --git a/docs/introduction.md b/docs/introduction.md
index defc68e9..68d07751 100644
--- a/docs/introduction.md
+++ b/docs/introduction.md
@@ -9,6 +9,10 @@ sidebar_label: Introduction

The following documentation will guide you through installing and using `bee-js`. Take your first steps with `bee-js` in the [Getting Started section](./getting-started).

+:::info Note on Ethereum addresses
+Ethereum addresses / keys are mentioned throughout the documentation. Please note that this refers to the Ethereum address format only, and does not indicate use of the Ethereum blockchain itself. Swarm is built on Gnosis Chain, which is a fork of Ethereum and uses the same key and addressing schemes.
+:::
+
## Development

We'd love you to join us! Are you up to the challenge of helping us to create bee-js and the other incredible technologies we're building? Have a look at our Github - [Ethersphere](https://github.com/ethersphere).
diff --git a/sidebars.js b/sidebars.js
index 1cb529e4..c24b730a 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -11,13 +11,17 @@ module.exports = {
      'documentation/chequebook',
      'documentation/storage',
      'documentation/upload-download',
-      // 'documentation/pinning',
-      // 'documentation/staking',
+      'documentation/tracking-uploads',
+      'documentation/pinning',
+      'documentation/staking',
+      'documentation/pss',
+      'documentation/gsoc',
+      'documentation/soc-and-feeds',
+      'documentation/act',
      // 'documentation/manifests',
-      // 'documentation/soc-and-feeds',
-      // 'documentation/pss',
-      // 'documentation/gsoc',
-      // 'documentation/act',
+
+
+      'documentation/utilities',
    ],
    collapsed: false