
Commit 20e1c43

Merge branch 'develop' into master-merge

2 parents 32a7105 + b95eac7

33 files changed: +5680 −579 lines

.env

Lines changed: 8 additions & 5 deletions

@@ -172,12 +172,15 @@ STACKS_NODE_TYPE=L1
 # STACKS_ADDRESS_CACHE_SIZE=10000

 # Specify a URL to redirect from /doc. If this URL is not provided, server renders local documentation
-# of openapi.yaml for test / development NODE_ENV.
+# of openapi.yaml for test / development NODE_ENV.
 # For production, /doc is not served if this env var is not provided.
-# API_DOCS_URL="https://docs.hiro.so/api"
+# API_DOCS_URL="https://docs.hiro.so/api"

-# For use while syncing. Places the API into an "Initial Block Download(IBD)" mode,
-# forcing it to stop any redundant processing until the node is fully synced up to its peers.
-# Some examples of processing that are avoided are:
+# For use while syncing. Places the API into an "Initial Block Download(IBD)" mode,
+# forcing it to stop any redundant processing until the node is fully synced up to its peers.
+# Some examples of processing that are avoided are:
 # REFRESH MATERIALIZED VIEW SQLs that are extremely CPU intensive on the PG instance, Mempool messages, etc.,
 # IBD_MODE_UNTIL_BLOCK=
+
+# Folder with events to be imported by the event-replay.
+STACKS_EVENTS_DIR=./eventssds
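The new `STACKS_EVENTS_DIR` setting points the event-replay at a folder of recorded events. A hypothetical pre-flight sketch (not part of the repo) can verify the folder exists before an import is attempted; the `./events` fallback is an assumption for illustration.

```shell
# Hypothetical pre-flight check for the STACKS_EVENTS_DIR setting;
# the "./events" default is illustrative, not the project's.
check_events_dir() {
  dir="${STACKS_EVENTS_DIR:-./events}"
  if [ -d "$dir" ]; then
    echo "events dir ok: $dir"
  else
    echo "missing events dir: $dir" >&2
    return 1
  fi
}
```

Usage: `STACKS_EVENTS_DIR=/data/events check_events_dir` before launching the replay.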

.github/workflows/netlify.yml

Lines changed: 0 additions & 58 deletions
This file was deleted.

.github/workflows/vercel.yml

Lines changed: 71 additions & 0 deletions

@@ -0,0 +1,71 @@
+name: Vercel
+
+env:
+  VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
+  VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}
+
+on:
+  push:
+    branches:
+      - master
+      - beta
+  pull_request:
+
+jobs:
+  vercel:
+    runs-on: ubuntu-latest
+
+    environment:
+      name: ${{ github.ref_name == 'master' && 'Production' || 'Preview' }}
+      url: ${{ github.ref_name == 'master' && 'https://stacks-blockchain-api.vercel.app/' || 'https://stacks-blockchain-api-pbcblockstack-blockstack.vercel.app/' }}
+
+    env:
+      PROD: ${{ github.ref_name == 'master' }}
+
+    steps:
+      - uses: actions/checkout@v2
+        with:
+          fetch-depth: 0
+
+      - name: Use Node.js
+        uses: actions/setup-node@v2
+        with:
+          node-version-file: '.nvmrc'
+
+      - name: Cache node modules
+        uses: actions/cache@v2
+        env:
+          cache-name: cache-node-modules
+        with:
+          path: |
+            ~/.npm
+            **/node_modules
+          key: ${{ runner.os }}-build-${{ env.cache-name }}-${{ hashFiles('**/package-lock.json') }}
+          restore-keys: |
+            ${{ runner.os }}-build-${{ env.cache-name }}-
+            ${{ runner.os }}-build-
+            ${{ runner.os }}-
+
+      - name: Install deps
+        run: npm ci --audit=false
+
+      - name: Install Vercel CLI
+        run: npm install --global vercel@latest
+
+      - name: Pull Vercel environment information
+        run: vercel pull --yes --environment=${{ env.PROD && 'production' || 'preview' }} --token=${{ secrets.VERCEL_TOKEN }}
+
+      - name: Build project artifacts
+        run: vercel build ${{ env.PROD && '--prod' || '' }} --token=${{ secrets.VERCEL_TOKEN }}
+
+      - name: Deploy project artifacts to Vercel
+        id: deploy
+        run: vercel ${{ env.PROD && '--prod' || 'deploy' }} --prebuilt --token=${{ secrets.VERCEL_TOKEN }} | awk '{print "deployment_url="$1}' >> $GITHUB_OUTPUT
+
+      - name: Add comment with Vercel deployment URL
+        if: ${{ github.event_name == 'pull_request' }}
+        uses: thollander/actions-comment-pull-request@v2
+        with:
+          comment_tag: vercel
+          message: |
+            Vercel deployment URL: ${{ steps.deploy.outputs.deployment_url }} :rocket:
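The deploy step above captures the deployment URL by piping the CLI's stdout through `awk` into `$GITHUB_OUTPUT`. A minimal local sketch of that capture, using an echoed stand-in for the `vercel` command (the URL below is made up):

```shell
# Stand-in for `vercel deploy` stdout; the URL is hypothetical.
GITHUB_OUTPUT=$(mktemp)  # provided by the Actions runner in real CI

echo "https://stacks-blockchain-api-abc123.vercel.app" \
  | awk '{print "deployment_url="$1}' >> "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"
# deployment_url=https://stacks-blockchain-api-abc123.vercel.app
```

Writing `key=value` lines into `$GITHUB_OUTPUT` is how later steps read `steps.deploy.outputs.deployment_url`.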

Dockerfile

Lines changed: 14 additions & 0 deletions

@@ -8,4 +8,18 @@ RUN echo "GIT_TAG=$(git tag --points-at HEAD)" >> .env
 RUN npm config set unsafe-perm true && npm ci && npm run build && npm run build:docs && npm prune --production
 RUN apk del .build-deps

+# As no pre-built binaries of duckdb can be found for Alpine (musl based),
+# a rebuild of the duckdb package is needed.
+#
+# Library used by the event-replay based on parquet files.
+ARG DUCKDB_VERSION=0.8.1
+WORKDIR /duckdb
+RUN apk add --no-cache --virtual .duckdb-build-deps python3 git g++ make
+RUN git clone https://github.com/duckdb/duckdb.git -b v${DUCKDB_VERSION} --depth 1 \
+    && cd duckdb/tools/nodejs \
+    && ./configure && make all
+WORKDIR /app
+RUN npm uninstall duckdb && npm install /duckdb/duckdb/tools/nodejs
+RUN apk del .duckdb-build-deps
+
 CMD ["node", "./lib/index.js"]
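The rebuild above is needed only because Alpine links against musl rather than glibc, so prebuilt glibc binaries of the duckdb Node bindings will not load. A hedged way to check which libc a host uses (assumes a Linux host with `ldd`; this helper is illustrative, not part of the Dockerfile):

```shell
# Print "musl" on musl-based systems (e.g. Alpine), "glibc" otherwise.
# If `ldd` is missing entirely, the grep finds nothing and we fall
# through to "glibc" as a best-effort default.
libc_flavor() {
  if ldd --version 2>&1 | grep -qi musl; then
    echo musl
  else
    echo glibc
  fi
}
libc_flavor
```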

README.md

Lines changed: 45 additions & 31 deletions

@@ -7,7 +7,7 @@

 ## Quick start

-A self-contained Docker image is provided which starts a Stacks 2.05 blockchain and API instance.
+A self-contained Docker image is provided, which starts a Stacks 2.05 blockchain and API instance.

 Ensure Docker is installed, then run the command:

@@ -21,7 +21,7 @@ Similarly, a "mocknet" instance can be started. This runs a local node, isolated
 docker run -p 3999:3999 -e STACKS_NETWORK=mocknet hirosystems/stacks-blockchain-api-standalone
 ```

-Once the blockchain has synced with network, the API will be available at:
+Once the blockchain has synced with the network, the API will be available at:
 [http://localhost:3999](http://localhost:3999)

 ## Development quick start

@@ -40,7 +40,7 @@ Check to see if the server started successfully by visiting http://localhost:399

 ### Setup Services

-Then run `npm run devenv:deploy` which uses docker-compose to deploy the service dependencies (e.g. PostgreSQL, Stacks core node, etc).
+Then run `npm run devenv:deploy`, which uses docker-compose to deploy the service dependencies (e.g., PostgreSQL, Stacks core node, etc.).

 ### Running the server

@@ -54,66 +54,80 @@ See [overview.md](overview.md) for architecture details.

 # Deployment

-For optimal performance, we recommend running the API database on PostgreSQL version 14 or newer.
+We recommend running the API database on PostgreSQL version 14 or newer for optimal performance.

 ## Upgrading

-If upgrading the API to a new major version (e.g. `3.0.0` to `4.0.0`) then the Postgres database from the previous version will not be compatible and the process will fail to start.
+If upgrading the API to a new major version (e.g., `3.0.0` to `4.0.0`), then the Postgres database from the previous version will not be compatible, and the process will fail to start.

-[Event Replay](#event-replay) must be used when upgrading major versions. Follow the event replay [instructions](#event-replay-instructions) below. Failure to do so will require wiping both the Stacks Blockchain chainstate data and the API Postgres database, and re-syncing from scratch.
+[Event Replay](#event-replay) must be used when upgrading major versions. Follow the event replay [instructions](#event-replay-instructions) below. Failure to do so will require wiping the Stacks Blockchain chain state data and the API Postgres database and re-syncing from scratch.

 ## API Run Modes

-The API supports a series of run modes, each accommodating different use cases for scaling and data access by toggling [architecture](#architecture) components on or off depending on its objective.
+The API supports a series of run modes, each accommodating different use cases for scaling and data access by toggling [architecture](#architecture) components on or off, depending on its objective.

 ### Default mode (Read-write)

-The default mode runs with all components enabled. It consumes events from a Stacks node, writes them to a postgres database, and serves API endpoints.
+The default mode runs with all components enabled. It consumes events from a Stacks node, writes them to a Postgres database, and serves API endpoints.

 ### Write-only mode

-During Write-only mode, the API only runs the Stacks node events server to populate the postgres database but it does not serve any API endpoints.
+During Write-only mode, the API only runs the Stacks node events server to populate the Postgres database, but it does not serve any API endpoints.

-This mode is very useful when you need to consume blockchain data from the postgres database directly and you're not interested in taking on the overhead of running an API web server.
+This mode is very useful when you need to consume blockchain data from the Postgres database directly, and you're not interested in taking on the overhead of running an API web server.

 For write-only mode, set the environment variable `STACKS_API_MODE=writeonly`.

 ### Read-only mode

 During Read-only mode, the API runs without an internal event server that listens to events from a Stacks node.
-The API only reads data from the connected postgres database when building endpoint responses.
+The API only reads data from the connected Postgres database when building endpoint responses.
 In order to keep serving updated blockchain data, this mode requires the presence of another API instance that keeps writing stacks-node events to the same database.

-This mode is very useful when building an environment that load-balances incoming HTTP requests between multiple API instances that can be scaled up and down very quickly.
-Read-only instances support websockets and socket.io clients.
+This mode is very useful when building an environment that load-balances incoming HTTP requests between multiple API instances that can be scaled up and down quickly.
+Read-only instances support WebSockets and socket.io clients.

 For read-only mode, set the environment variable `STACKS_API_MODE=readonly`.

 ### Offline mode

-In Offline mode app runs without a stacks-node or postgres connection. In this mode, only the given rosetta endpoints are supported:
+In Offline mode, the app runs without a stacks-node or Postgres connection. In this mode, only the given Rosetta endpoints are supported:
 https://www.rosetta-api.org/docs/node_deployment.html#offline-mode-endpoints.

-For running offline mode set an environment variable `STACKS_API_MODE=offline`
+To run in offline mode, set the environment variable `STACKS_API_MODE=offline`.

 ## Event Replay

 The stacks-node is only able to emit events live as they happen. This poses a problem in the
-scenario where the stacks-blockchain-api needs to be upgraded and its database cannot be migrated to
+scenario where the stacks-blockchain-api needs to be upgraded, and its database cannot be migrated to
 a new schema. One way to handle this upgrade is to wipe the stacks-blockchain-api's database and
-stacks-node working directory, and re-sync from scratch.
+stacks-node working directory and re-sync from scratch.

 Alternatively, an event-replay feature is available where the API records the HTTP POST requests
 from the stacks-node event emitter, then streams these events back to itself. Essentially simulating
 a wipe & full re-sync, but much quicker.

-The feature can be used via program args. For example, if there are breaking changes in the API's
-sql schema, like adding a new column which requires event's to be re-played, the following steps
-could be ran:
+The feature can be used via program args. For example, if there are breaking changes in the API's
+SQL schema, like adding a new column that requires events to be re-played, the following steps
+could be run:

-### Event Replay Instructions
+### Event Replay V2

-#### V1 BNS Data
+This version of the replay process relies on processing parquet files instead of TSV files.
+
+There are some improvements to the replay process, and this version is around 10x faster than the previous (V1) one.
+
+__Note: the previous event-replay version is still available and can be used for the same purpose.__
+
+#### Instructions
+
+To run the new event-replay, please follow the instructions in the [stacks-event-replay](https://github.com/hirosystems/stacks-event-replay#installation) repository.
+
+### Event Replay V1
+
+#### Instructions
+
+##### V1 BNS Data

 **Optional but recommended** - If you want the V1 BNS data, there are going to be a few extra steps:

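The run modes in the README hunk above are all selected through a single environment variable. A hypothetical launch-wrapper sketch (mode names come from the README; the validation itself is illustrative, not project code):

```shell
# Validate STACKS_API_MODE against the modes documented above.
# An unset variable selects the default read-write mode.
start_mode() {
  mode="${STACKS_API_MODE:-default}"
  case "$mode" in
    default|writeonly|readonly|offline)
      echo "starting API in '$mode' mode" ;;
    *)
      echo "unknown STACKS_API_MODE: $mode" >&2
      return 1 ;;
  esac
}
```

Usage: `STACKS_API_MODE=readonly start_mode` prints `starting API in 'readonly' mode`.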
@@ -139,7 +153,7 @@ could be ran:
 ```
 1. Set the data's location as the value of `BNS_IMPORT_DIR` in your `.env` file.

-#### Export and Import
+##### Export and Import

 1. Ensure the API process is not running. When stopping the API, let the process exit gracefully so
    that any in-progress SQL writes can finish.
@@ -162,29 +176,29 @@
 * `archival` (default): The process will import and ingest *all* blockchain events that have
   happened since the first block.
 * `pruned`: The import process will ignore some prunable events (mempool, microblocks) until the
-  import block height has reached `chain tip - 256` blocks. This saves a considerable amount of
-  time during import, but sacrifices some historical data. You can use this mode if you're mostly
-  interested in running an API that prioritizes real time information.
+  import block height has reached `chain tip - 256` blocks. This saves considerable
+  time during import but sacrifices some historical data. You can use this mode if you're mostly
+  interested in running an API that prioritizes real-time information.

 ## Bugs and feature requests

 If you encounter a bug or have a feature request, we encourage you to follow the steps below:

 1. **Search for existing issues:** Before submitting a new issue, please search [existing and closed issues](../../issues) to check if a similar problem or feature request has already been reported.
 1. **Open a new issue:** If it hasn't been addressed, please [open a new issue](../../issues/new/choose). Choose the appropriate issue template and provide as much detail as possible, including steps to reproduce the bug or a clear description of the requested feature.
-1. **Evaluation SLA:** Our team reads and evaluates all the issues and pull requests. We are avaliable Monday to Friday and we make a best effort to respond within 7 business days.
+1. **Evaluation SLA:** Our team reads and evaluates all the issues and pull requests. We are available Monday to Friday and make our best effort to respond within 7 business days.

 Please **do not** use the issue tracker for personal support requests or to ask for the status of a transaction. You'll find help at the [#support Discord channel](https://discord.gg/SK3DxdsP).

 ## Contribute

-Development of this product happens in the open on GitHub, and we are grateful to the community for contributing bugfixes and improvements. Read below to learn how you can take part in improving the product.
+Development of this product happens in the open on GitHub, and we are grateful to the community for contributing bug fixes and improvements. Read below to learn how you can take part in improving the product.

 ### Code of Conduct
-Please read our [Code of conduct](../../../.github/blob/main/CODE_OF_CONDUCT.md) since we expect project participants to adhere to it.
+Please read our [Code of Conduct](../../../.github/blob/main/CODE_OF_CONDUCT.md) since we expect project participants to adhere to it.

 ### Contributing Guide
-Read our [contributing guide](.github/CONTRIBUTING.md) to learn about our development process, how to propose bugfixes and improvements, and how to build and test your changes.
+Read our [contributing guide](.github/CONTRIBUTING.md) to learn about our development process, how to propose bug fixes and improvements, and how to build and test your changes.

 ### Community

@@ -199,4 +213,4 @@ Join our community and stay connected with the latest updates and discussions:

 ## Client library

-You can use the Stacks Blockchain API Client library if you require a way to call the API via JavaScript or receive real-time updates via Websockets or Socket.io. Learn more [here](client/README.md).
+You can use the Stacks Blockchain API Client library if you need to call the API via JavaScript or receive real-time updates via WebSockets or Socket.io. Learn more [here](client/README.md).

content/faqs.md

Lines changed: 2 additions & 2 deletions

@@ -1,8 +1,8 @@
 ---
-Title: FAQ's
+Title: FAQs
 ---

-# FAQ's
+# FAQs

 #### **I am attempting to receive the status from a local Stacks Blockchain node API and present to a user how close it is to being synced. I can retrieve the current height of the local node (`/v2/info`). Is there any way for me to retrieve the real current height from an API node that is not completely synced? I want to avoid going directly to the centrally-hosted node.**

content/feature-guides/rate-limiting.md

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@ Rate limiting will be applied to all API endpoints and [faucet requests](https:/

 You can refer to the rate limit for each endpoint in the table below:

-| **Endpoint** | **Rate-Limit (RPM)** |
+| **Endpoint** | **Rate Per Minute (RPM) limit** |
 | ------------------------------------------------------------------------------------------- |-----------------------|
 | api.mainnet.hiro.so/extended/ <br/> api.hiro.so/extended/ <br/> | <br/> 500 <br/> <br/> |
 | api.mainnet.hiro.so/rosetta/ <br/> api.hiro.so/rosetta/<br/> | <br/> 200 <br/> <br/> |
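Clients that exceed the per-minute limits in the table above receive HTTP 429 responses. A hedged retry sketch: `probe` stands in for something like `curl -s -o /dev/null -w '%{http_code}' <url>`, and the retry count and sleep interval are arbitrary illustration values, not recommendations from the docs.

```shell
# Retry a request helper while it reports HTTP 429, then return the
# final status code. "$1" is a command that prints just a status code.
fetch_with_backoff() {
  probe="$1"
  attempt=0
  code=429
  while [ "$attempt" -lt 3 ]; do
    code=$($probe)
    [ "$code" != "429" ] && break
    attempt=$((attempt + 1))
    sleep 1  # real backoff should be longer; the limits are per minute
  done
  echo "$code"
}
```

Example: `fetch_with_backoff 'curl -s -o /dev/null -w %{http_code} https://api.hiro.so/extended/'`.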
