
Commit 46ff2b5

Merge pull request #739 from ethersphere/rchash-correction

Correct rchash usage and redundancy calc

2 parents: fc7209e + 96930f7

3 files changed: +45 −160 lines

docs/bee/working-with-bee/bee-api.md

Lines changed: 27 additions & 2 deletions
@@ -458,10 +458,26 @@ From the results we can see that we have a healthy neighborhood size when compar

 ### */rchash*

-Calling the /rchash endpoint triggers the generation of a reserve commitment hash, which is used in the [redistribution game](/docs/concepts/incentives/redistribution-game), and will report the amount of time it took to generate the hash. This is useful for getting a performance benchmark to ensure that your node's hardware is sufficient.
+Calling the `/rchash` endpoint triggers the generation of a reserve commitment hash, which is used in the [redistribution game](/docs/concepts/incentives/redistribution-game), and will report the amount of time it took to generate the hash. This is useful for getting a performance benchmark to ensure that your node's hardware is sufficient.
+
+The `/rchash` endpoint takes three parameters: `depth`, `anchor_01`, and `anchor_02`. For both anchor parameters, use the first four digits of your node's overlay address:
+
+```
+/rchash/{depth}/{anchor_01}/{anchor_02}
+```
+
+:::info anchor parameter details
+- The anchor parameters must match the prefix bits of the node's overlay address up to at least the current storage depth (each hex digit equals 4 bits).
+- The anchor parameters must also have an even number of digits.
+
+The first four digits of your node's overlay address therefore work for any depth up to 16. The network is unlikely to reach depth 17 in the near future; if it does, use the first six overlay digits instead.
+:::

 ```bash
-sudo curl -sX GET http://localhost:1633/rchash/10/aaaa/aaaa | jq
+sudo curl -sX GET http://localhost:1633/rchash/10/1e20/1e20 | jq
 ```
 It should not take much longer than 6 minutes at most for results to be returned:
 ```bash

@@ -485,6 +501,15 @@ It should not take much longer than 6 minutes at most for results to be returned

 If the `Time` value is much longer than 6 minutes then it likely means that the node's hardware performance is not sufficient. Consider upgrading to use faster memory or processor.

+If the `/rchash` call fails with an evictions-related error such as the one below, try calling the endpoint again.
+
+```
+error: "level"="error" "logger"="node/storageincentives" "msg"="make sample" "error"="sampler: failed creating sample: sampler stopped due to ongoing evictions"
+```
+
+While evictions are a normal part of Bee's standard operation, an eviction will interrupt the sampler process.
+
 ### */health*

 The `/health` endpoint provides a quick status check for your Bee node which simply indicates whether the node is operating or not. It is often used in tools like Docker and Kubernetes.
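The anchor rules added in this hunk can be sketched in Python. This is a minimal illustration only; `rchash_anchor` is a hypothetical helper name, not part of the Bee API:

```python
import math

def rchash_anchor(overlay_hex: str, storage_depth: int) -> str:
    """Return an overlay-address prefix usable as an /rchash anchor.

    The anchor must cover at least `storage_depth` bits of the overlay
    address (each hex digit is 4 bits) and have an even number of digits.
    """
    digits = math.ceil(storage_depth / 4)  # hex digits needed to cover the depth
    if digits % 2:
        digits += 1                        # pad to an even digit count
    return overlay_hex[:digits]

# The first 4 overlay digits cover any depth up to 16;
# depth 17 would require the first 6 digits.
print(rchash_anchor("1e20a4c9", 10))  # -> 1e20
print(rchash_anchor("1e20a4c9", 17))  # -> 1e20a4
```

With the anchor in hand, the call takes the form `sudo curl -sX GET http://localhost:1633/rchash/10/1e20/1e20 | jq`, using the anchor for both anchor parameters.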

docs/bee/working-with-bee/staking.md

Lines changed: 1 addition & 54 deletions
@@ -765,61 +765,8 @@ Confirm that `hasSufficientFunds` is `true`, and `isFullySynced` is `true` befor

 #### Run sampler process to benchmark performance

-One of the most common issues affecting staking is the `sampler` process failing. The sampler is a resource intensive process which is run by nodes which are selected to take part in redistribution. The process may fail or time out if the node's hardware specifications aren't high enough. To check a node's performance the `/rchash/{depth}/{anchor_01}/{anchor_02}` endpoint of the API may be used. The `anchor_01` and `anchor_02` must be a hex string with an even number of digits. For simplicity, you can just use `aaaa` for both anchors as we do in the example further down.
+One of the most common issues affecting staking is the `sampler` process failing. The sampler is a resource-intensive process run by nodes selected to take part in redistribution. It may fail or time out if the node's hardware specifications aren't high enough. To check a node's performance, use the `/rchash` endpoint of the API. See the `/rchash` section of the [Bee API page](/docs/bee/working-with-bee/bee-api/) for usage details.

-The `{anchor}` value can be set to any random hexadecimal string, while `{depth}` should be set to the current depth.
-
-To get the current depth, call the `/reservestate` endpoint
-
-```bash
-sudo curl -sX GET http://localhost:1633/reservestate | jq
-```
-Copy the `storageRadius` value from the output (this represents the ACTUAL depth for your node, in other words, the depth to which your node is responsible for storing files. The `radius` value is the hypothetical depth your node would be at if every postage batch was fully utilised.)
-
-```bash
-{
-  "radius": 15,
-  "storageRadius": 10,
-  "commitment": 128332464128
-}
-```
-
-Call the endpoint like so:
-
-```bash
-sudo curl -sX GET http://localhost:1633/rchash/10/aaaa/aaaa | jq
-```
-
-If the sampler runs successfully, you should see output like this:
-
-```bash
-{
-  "Sample": {
-    "Items": [
-      "000003dac2b2f75842e410474dfa4c1e6e0b9970d81b57b33564c5620667ba96",
-      "00000baace30916f7445dbcc44d9b55cb699925acfbe157e4498c63bde834f40",
-      "0000126f48fb1e99e471efc683565e4b245703c922b9956f89cbe09e1238e983",
-      "000012db04a281b7cc0e6436a49bdc5b06ff85396fcb327330ca307e409d2a04",
-      "000014f365b1a381dda85bbeabdd3040fb1395ca9e222e72a597f4cc76ecf6c2",
-      "00001869a9216b3da6814a877fdbc31f156fc2e983b52bc68ffc6d3f3cc79af0",
-      "0000198c0456230b555d5261091cf9206e75b4ad738495a60640b425ecdf408f",
-      "00001a523bd1b688472c6ea5a3c87c697db64d54744829372ac808de8ec1d427"
-    ],
-    "Hash": "7f7d93c6235855fedc34e32c6b67253e27910ca4e3b8f2d942efcd758a6d8829"
-  },
-  "Time": "2m54.087909745s"
-}
-```
-
-If the `Time` value is higher than 6 minutes, then the hardware specifications for the node may need to be upgraded.
-
-If there is an evictions related error such as the one below, try running the call to the `/rchash/` endpoint again.
-
-```
-error: "level"="error" "logger"="node/storageincentives" "msg"="make sample" "error"="sampler: failed creating sample: sampler stopped due to ongoing evictions"
-```
-
-While evictions are a normal part of Bee's standard operation, the event of an eviction will interrupt the sampler process.

 If you are still experiencing problems, you can find more help in the [node-operators](https://discord.gg/kHRyMNpw7t) Discord channel (for your safety, do not accept advice from anyone sending a private message on Discord).

src/components/RedundancyCalc.js

Lines changed: 17 additions & 104 deletions
@@ -13,20 +13,17 @@ export default function UploadCostCalc() {
   const maxParities = [9, 21, 31, 90];
   const maxChunksEncrypted = [59, 53, 48, 18];
   const errorTolerances = {
-    None: "0%",
     Medium: "1%",
     Strong: "5%",
     Insane: "10%",
     Paranoid: "50%"
   };

+
   const formatNumberCustom = (num) => {
-    // Modify to display integers without .00 for chunks
-    if (unit === "chunks") {
-      return Math.round(num).toString();
-    }
     const isScientific = Math.abs(num) > 0 && Math.abs(num) < 0.0001;
-    return isScientific ? num.toExponential(2) : num.toFixed(2);
+    let formattedNum = isScientific ? num.toExponential(2) : num.toFixed(2);
+    return formattedNum;
   };

   const handleDataSizeChange = (e) => {

@@ -45,6 +42,7 @@ export default function UploadCostCalc() {
     setUnit(e.target.value);
   };

+
   const calculateCost = () => {
     setErrorMessage("");
     setResult([]);

@@ -54,12 +52,6 @@ export default function UploadCostCalc() {
       return;
     }

-    // If redundancy is None, skip erasure coding calculations
-    if (redundancy === "None") {
-      calculateNonRedundancyCost();
-      return;
-    }
-
    let parityDataInGb = 0;
    let totalChunks, sizeInKb, sizeInGb;

@@ -101,15 +93,17 @@ export default function UploadCostCalc() {
       : Math.ceil(totalChunks / 127); // 0.8% overhead for unencrypted files

     // Add PAC overhead to total chunks
-    const redundancyLevels = { None: -1, Medium: 0, Strong: 1, Insane: 2, Paranoid: 3 };
+    const totalChunksWithPac = totalChunks + pacOverheadChunks;
+
+    const redundancyLevels = { Medium: 0, Strong: 1, Insane: 2, Paranoid: 3 };
     const redundancyLevel = redundancyLevels[redundancy];

     const quotient = isEncrypted
-      ? Math.floor(totalChunks / maxChunksEncrypted[redundancyLevel])
-      : Math.floor(totalChunks / maxChunks[redundancyLevel]);
+      ? Math.floor(totalChunksWithPac / maxChunksEncrypted[redundancyLevel])
+      : Math.floor(totalChunksWithPac / maxChunks[redundancyLevel]);
     const remainder = isEncrypted
-      ? totalChunks % maxChunksEncrypted[redundancyLevel]
-      : totalChunks % maxChunks[redundancyLevel];
+      ? totalChunksWithPac % maxChunksEncrypted[redundancyLevel]
+      : totalChunksWithPac % maxChunks[redundancyLevel];

     let remainderParities = 0;
     if (remainder > 0) {

@@ -119,7 +113,6 @@ export default function UploadCostCalc() {
     }

     const totalParities = quotient * maxParities[redundancyLevel] + remainderParities;
-    const totalChunksWithPac = totalChunks + pacOverheadChunks;
     const totalDataWithParity = totalChunksWithPac + totalParities;
     const percentDifference = ((totalDataWithParity - totalChunks) / totalChunks) * 100;

@@ -183,89 +176,7 @@ export default function UploadCostCalc() {

     setResult(resultsArray);
   };
-
-  const calculateNonRedundancyCost = () => {
-    let totalChunks, sizeInKb, sizeInGb;

-    if (!dataSize || isNaN(parseFloat(dataSize))) {
-      setErrorMessage("Please enter a valid data size.");
-      return;
-    }
-
-    if (unit === "kb") {
-      const kbValue = parseFloat(dataSize);
-      if (isNaN(kbValue) || kbValue <= 0) {
-        setErrorMessage("Please input a valid KB value above 0.");
-        return;
-      }
-      sizeInKb = kbValue;
-      totalChunks = Math.ceil((kbValue * 1024) / (2 ** 12));
-    } else if (unit === "gb") {
-      const gbValue = parseFloat(dataSize);
-      if (isNaN(gbValue) || gbValue <= 0) {
-        setErrorMessage("Please input a valid GB value above 0.");
-        return;
-      }
-      sizeInGb = gbValue;
-      sizeInKb = gbValue * 1024 * 1024; // Convert GB to KB
-      totalChunks = Math.ceil((sizeInKb * 1024) / (2 ** 12));
-    } else {
-      const chunksValue = parseFloat(dataSize);
-      if (isNaN(chunksValue) || chunksValue <= 1 || chunksValue % 1 > 0) {
-        setErrorMessage("Please input an integer greater than 1 for chunk values");
-        return;
-      }
-      totalChunks = Math.ceil(chunksValue);
-      sizeInKb = (totalChunks * (2 ** 12)) / 1024;
-    }
-
-    // Calculate PAC overhead
-    const pacOverheadChunks = isEncrypted
-      ? Math.ceil(totalChunks / 63) // 1.6% overhead for encrypted files
-      : Math.ceil(totalChunks / 127); // 0.8% overhead for unencrypted files
-
-    const totalChunksWithPac = totalChunks + pacOverheadChunks;
-    const percentDifference = ((totalChunksWithPac - totalChunks) / totalChunks) * 100;
-
-    // Calculate PAC overhead size in KB and GB
-    const pacOverheadInKb = (pacOverheadChunks * (2 ** 12)) / 1024;
-    const sizeInGbCalculated = unit === "gb" ? sizeInKb / (1024 * 1024) : null;
-
-    const resultsArray = [
-      {
-        name: "Source data size",
-        value: unit === "gb" ? `${formatNumberCustom(sizeInGbCalculated)} GB` : `${formatNumberCustom(sizeInKb)} KB`,
-      },
-      {
-        name: "PAC overhead",
-        value: unit === "gb"
-          ? `${formatNumberCustom(pacOverheadInKb / (1024 * 1024))} GB (${formatNumberCustom(pacOverheadChunks)} chunks)`
-          : `${formatNumberCustom(pacOverheadInKb)} KB (${formatNumberCustom(pacOverheadChunks)} chunks)`,
-      },
-      {
-        name: "Source data in chunks",
-        value: formatNumberCustom(totalChunks)
-      },
-      {
-        name: "Source with PAC overhead",
-        value: formatNumberCustom(totalChunksWithPac)
-      },
-      {
-        name: "Percent cost increase",
-        value: `${percentDifference.toFixed(2)}%`
-      },
-      {
-        name: "Selected redundancy level",
-        value: `${redundancy}`
-      },
-      {
-        name: "Error tolerance",
-        value: errorTolerances[redundancy]
-      },
-    ];
-
-    setResult(resultsArray);
-  };

   const styles = {
     container: { padding: '20px', fontFamily: 'Arial', maxWidth: '650px', margin: '0 auto' },

@@ -314,7 +225,6 @@ export default function UploadCostCalc() {
         <div style={styles.title}>Redundancy Level:</div>
         <select value={redundancy} onChange={handleRedundancyChange} style={styles.select}>
           <option value="" disabled>Select Redundancy Level</option>
-          <option value="None">None</option>
           <option value="Medium">Medium</option>
           <option value="Strong">Strong</option>
           <option value="Insane">Insane</option>

@@ -335,10 +245,13 @@ export default function UploadCostCalc() {
         <tbody>
           {result.map((item, index) => (
             <tr key={index}>
-              <td style={{...styles.tdName, ...styles.bold}}>{item.name}</td>
+              <td style={{...styles.tdName, ...styles.bold}}>{item.name}</td> {/* Apply bold style here */}
               <td style={styles.tdValue}>{item.value}</td>
             </tr>
           ))}
         </tbody>
       </table>
-    </div>
+      </div>
+    </div>
+  );
+}
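The corrected ordering in this commit (PAC overhead added before the chunks are split into erasure-coding groups) can be sketched in Python. This covers only the encrypted path, since the unencrypted `maxChunks` table is not visible in this diff, and the remainder-parity step is an assumption because the hunk elides that branch:

```python
import math

# Tables copied from the diff above (Medium, Strong, Insane, Paranoid).
MAX_PARITIES = [9, 21, 31, 90]           # parity chunks per full group
MAX_CHUNKS_ENCRYPTED = [59, 53, 48, 18]  # data chunks per group (encrypted)
LEVELS = {"Medium": 0, "Strong": 1, "Insane": 2, "Paranoid": 3}

def chunks_with_redundancy(total_chunks: int, level: str) -> int:
    """Total stored chunks after PAC overhead and erasure-coding parity.

    The fix in this commit: PAC overhead is added *before* computing the
    quotient and remainder, so parity covers all stored chunks.
    """
    pac_overhead = math.ceil(total_chunks / 63)  # 1.6% overhead, encrypted files
    with_pac = total_chunks + pac_overhead
    i = LEVELS[level]
    quotient, remainder = divmod(with_pac, MAX_CHUNKS_ENCRYPTED[i])
    remainder_parities = 0
    if remainder > 0:
        # Assumed: a partial final group gets proportionally scaled parity.
        remainder_parities = math.ceil(remainder / MAX_CHUNKS_ENCRYPTED[i] * MAX_PARITIES[i])
    return with_pac + quotient * MAX_PARITIES[i] + remainder_parities
```

For example, 100 source chunks at `Medium` become 102 chunks with PAC overhead, split into one full group of 59 data chunks plus a remainder of 43 before parity is added.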
