Commit 956d3e0

Merge pull request #981 from multiversx/remove-GCP-archives-download
Remove option to download archives from GCP
2 parents e1ab4bc + b2c899f commit 956d3e0


docs/integrators/deep-history-squad.md

Lines changed: 4 additions & 34 deletions
@@ -117,46 +117,16 @@ Apart from the flag mentioned above, the setup of a deep-history observer is ide
 Never attach a non-pruned database to a regular observer (i.e. one that does not have the above **operation-mode**) - unless you are not interested in the deep-history features. The regular observer irremediably removes, truncates and prunes the data (as configured, for storage efficiency).
 :::

-Now that we have finished with the installation part, we can proceed with populating the database from a non-pruned database archive. There are two options here:
-- Download non-pruned database
-- Reconstruct non-pruned database
+Now that we have finished with the installation part, we can proceed to populate the non-pruned database. There are two options here:
+- Reconstruct non-pruned database (recommended).
+- Download non-pruned database (we can provide archives for the required epochs, on request).

-[comment]: # (mx-context-auto)
-
-## Downloading non-pruned database
-
-Archives supporting historical lookup are available to download from a Google Cloud Storage [bucket](https://console.cloud.google.com/storage/browser/multiversx-deep-history-archives-mainnet).
-
-In order to avoid unintentional downloads and promote careful fetching of archives, we've enabled the [requester pays](https://cloud.google.com/storage/docs/requester-pays) feature on the bucket that holds the deep-history archives for mainnet.
-
-### Requirements
-
-1. **Google Cloud Platform Account**: An account on Google Cloud Platform with billing enabled is required. Find out how you can manage your billing account and modify your project here:
-- https://cloud.google.com/billing/docs/how-to/manage-billing-account
-- https://cloud.google.com/billing/docs/how-to/modify-project
-
-2. **Google Cloud SDK**: The Google Cloud SDK includes the `gcloud` command-line tool, which you'll use to interact with Google Cloud Storage. In order to install it, please follow the instructions provided on the [Google Cloud SDK webpage](https://cloud.google.com/sdk/docs/install).
-
-### Downloading archives
-
-Once you have the Google Cloud SDK installed and you're [authenticated](https://cloud.google.com/docs/authentication/gcloud), you can download archives from the Google Cloud Storage bucket using the `gcloud storage cp` command.
-
-Here's an example command that downloads an archive from the `multiversx-deep-history-archives-mainnet` bucket:
-```
-gcloud storage cp gs://multiversx-deep-history-archives-mainnet/shard-0/Epoch_01200.tar ~/DOWNLOAD_LOCATION --billing-project=BILLING_PROJECT
-```
-Replace **BILLING_PROJECT** with the name of your billing project and **~/DOWNLOAD_LOCATION** with the directory where the archives should be downloaded.
-
-The following example will download epochs starting with Epoch_01000.tar up to Epoch_01300.tar, for a billing project called **multiversx**:
-```
-gcloud storage cp gs://multiversx-deep-history-archives-mainnet/shard-0/Epoch_0{1000..1300}.tar ~/Downloads/ --billing-project=multiversx
-```

 [comment]: # (mx-context-auto)

 ## Reconstructing non-pruned databases

-An alternative to downloading the non-pruned history is to reconstruct it locally (on your own infrastructure).
+The recommended method for populating a non-pruned database is to reconstruct it locally (on your own infrastructure).

 There are also two options for reconstructing a non-pruned database:
 - Based on the **[import-db](/validators/import-db/)** feature, which re-processes past blocks - and, while doing so, retains the whole, non-pruned accounts history.
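For orientation, here is a minimal sketch of what an import-db based reconstruction run can look like. Everything in it is an assumption taken from the linked [import-db](/validators/import-db/) page rather than from this commit: the install path, the directory layout and the flag names (`--import-db`, `--destination-shard-as-observer`, `--operation-mode`) should all be verified against that page and against `./node --help` for your node version.

```
# Minimal, illustrative sketch; flag names and paths are assumptions,
# verify them with `./node --help` and the import-db documentation.

# Assumed layout: a copy of the database to re-process has been placed
# in ./import-db/db, next to the observer's binary.
cd ~/elrond-nodes/node-0   # hypothetical install location of the deep-history observer

./node \
  --import-db ./import-db \
  --destination-shard-as-observer 0 \
  --operation-mode historical-balances
```

Once such a run completes, the observer's own database holds the whole, non-pruned accounts history for the re-processed epochs.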
