
Commit 46d7a7f

Base. Changed Cloud9 for CloudShell in README

1 parent 6068760 commit 46d7a7f

1 file changed: lib/base/README.md (+283, -0 lines)
# Sample AWS Blockchain Node Runner app for Base Nodes

| Contributed by |
|:---------------|
|[@frbrkoala](https://github.com/frbrkoala), [@danyalprout](https://github.com/danyalprout)|

[Base](https://base.org/) is a "Layer 2" scaling solution for Ethereum. This blueprint helps to deploy Base RPC nodes on AWS. It is meant to be used for development, testing or Proof of Concept purposes.
## Overview of Deployment Architectures for Single Node setups

### Single node setup

![Single Node Deployment](./doc/assets/Architecture-SingleNode-v3.png)

1. A Base node deployed in the [Default VPC](https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html) continuously synchronizes with the rest of the nodes on the Base blockchain network through an [Internet Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html).
2. The Base node is used by dApps or development tools internally from within the Default VPC. The JSON-RPC API is not exposed to the Internet directly to protect nodes from unauthorized access.
3. Your Base node needs access to a fully-synced [Ethereum Mainnet or Sepolia RPC endpoint](https://docs.base.org/tools/node-providers).
4. The Base node sends various monitoring metrics for both EC2 and Base nodes to Amazon CloudWatch.
## Additional materials

<details>
<summary>Review the checklist for pros and cons of this solution.</summary>

### Well-Architected Checklist

This is the Well-Architected checklist for the Base nodes implementation of the AWS Blockchain Node Runner app. This checklist takes into account questions from the [AWS Well-Architected Framework](https://aws.amazon.com/architecture/well-architected/) which are relevant to this workload. Please feel free to add more checks from the framework if required for your workload.

| Pillar | Control | Question/Check | Remarks |
|:------------------------|:----------------------------------|:---------------------------------------------------------------------------------|:-----------------|
| Security | Network protection | Are there unnecessary open ports in security groups? | Please note that port 9222 (TCP and UDP) for Base is open to the public to support P2P protocols. We have to rely on the protection mechanisms built into the Base software to protect those ports. |
| | | Traffic inspection | AWS WAF could be implemented for traffic inspection. Additional charges will apply. |
| | Compute protection | Reduce attack surface | This solution uses the Amazon Linux 2 AMI. You may choose to run hardening scripts on it. |
| | | Enable people to perform actions at a distance | This solution uses AWS Systems Manager for terminal sessions, not SSH ports. |
| | Data protection at rest | Use encrypted Amazon Elastic Block Store (Amazon EBS) volumes | This solution uses encrypted Amazon EBS volumes. |
| | Data protection in transit | Use TLS | By design, TLS is not used in Base RPC and P2P protocols because the data is considered public. To protect RPC traffic we expose the port only for internal use. |
| | Authorization and access control | Use instance profile with Amazon Elastic Compute Cloud (Amazon EC2) instances | This solution uses an AWS Identity and Access Management (AWS IAM) role instead of an IAM user. |
| | | Following principle of least privilege access | In the node, the root user is not used (a dedicated user "bcuser" is used instead). |
| | Application security | Security focused development practices | cdk-nag is being used with documented suppressions. |
| Cost optimization | Service selection | Use cost effective resources | Base nodes work well on the ARM architecture and we use Graviton3-powered EC2 instances for better cost effectiveness. |
| | Cost awareness | Estimate costs | One Base node with an On-Demand priced m7g.2xlarge instance and a 3TiB EBS gp3 volume will cost around US$599.27 per month in the US East (N. Virginia) region. Additional charges will apply for the Ethereum L1 node and will depend on the service used. |
| Reliability | Resiliency implementation | Withstand component failures | This solution currently does not have high availability and is deployed to a single availability zone. |
| | Data backup | How is data backed up? | The data is not specially backed up. The node will have to re-sync its state from other nodes in the Base network to recover. |
| | Resource monitoring | How are workload resources monitored? | Resources are being monitored using Amazon CloudWatch dashboards. Amazon CloudWatch custom metrics are being pushed via the CloudWatch Agent. |
| Performance efficiency | Compute selection | How is the compute solution selected? | The compute solution is selected based on the recommendations from the Base community to provide stable and cost-effective operations. |
| | Storage selection | How is the storage solution selected? | The storage solution is selected based on the recommendations from the Base community to provide stable and cost-effective operations. |
| | Architecture selection | How is the best performance architecture selected? | In this solution we try to balance price and performance to achieve better cost efficiency, but not necessarily the best performance. |
| Operational excellence | Workload health | How is the health of the workload determined? | We rely on the standard EC2 instance monitoring tool to detect stalled instances. |
| Sustainability | Hardware & services | Select the most efficient hardware for your workload | We use ARM-powered EC2 instance types for a better cost/performance balance. |

</details>
<details>
<summary>Recommended Infrastructure</summary>

## Hardware Requirements

**Minimum for a Base node on Sepolia**

- Instance type [m7g.2xlarge](https://aws.amazon.com/ec2/instance-types/m7g/).
- 1500GB EBS gp3 storage with at least 5000 IOPS.

**Recommended for a Base node on Mainnet**

- Instance type [m7g.2xlarge](https://aws.amazon.com/ec2/instance-types/m7g/).
- 4100GB EBS gp3 storage with at least 5000 IOPS.

</details>
## Setup Instructions

### Open AWS CloudShell

To begin, make sure you are logged in to your AWS account with permissions to create and modify resources in IAM, EC2, EBS, VPC, S3, KMS, and Secrets Manager.

From the AWS Management Console, open [AWS CloudShell](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html), a web-based shell environment. If unfamiliar, review the [2-minute YouTube video](https://youtu.be/fz4rbjRaiQM) for an overview and check out [CloudShell with VPC environment](https://docs.aws.amazon.com/cloudshell/latest/userguide/creating-vpc-environment.html), which we'll use to test the nodes' API from the internal IP address space.

Once ready, you can run the commands to deploy and test blueprints in the CloudShell.
### Make sure you have access to an Ethereum L1 node

The Base node needs a URL to a full Ethereum node to validate the blocks it receives. You can run your own with the [Ethereum node blueprint](https://aws-samples.github.io/aws-blockchain-node-runners/docs/Blueprints/Ethereum) or use [one of Base's partner providers](https://docs.base.org/tools/node-providers).
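Before deploying, you can quickly sanity-check that both L1 endpoints respond. This is an optional sketch; the example URLs are the public Sepolia endpoints used later in this guide, so swap in whichever execution and consensus endpoints you plan to use:

```bash
# Illustrative placeholders -- replace with your own L1 endpoints
EXECUTION_ENDPOINT="https://ethereum-sepolia-rpc.publicnode.com"
CONSENSUS_ENDPOINT="https://ethereum-sepolia-beacon-api.publicnode.com"

# Execution client (JSON-RPC): should return the latest block number in hex
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' $EXECUTION_ENDPOINT

# Consensus client (Beacon API): "is_syncing" should be false on a fully-synced node
curl -s $CONSENSUS_ENDPOINT/eth/v1/node/syncing
```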
### On your CloudShell: Clone this repository and install dependencies

```bash
git clone https://github.com/aws-samples/aws-blockchain-node-runners
cd aws-blockchain-node-runners
npm install
```
### From your CloudShell: Deploy required dependencies

1. Make sure you are in the root directory of the cloned repository.

2. If you have deleted or don't have the default VPC, create a default VPC:

```bash
aws ec2 create-default-vpc
```

> NOTE:
> You may see the following error if the default VPC already exists: `An error occurred (DefaultVpcAlreadyExists) when calling the CreateDefaultVpc operation: A Default VPC already exists for this account in this region.`. That means you can just continue with the following steps.
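If you are not sure whether a default VPC already exists, you can check first (an optional check using standard AWS CLI flags):

```bash
# Prints the ID of the default VPC in the current region, if one exists
aws ec2 describe-vpcs --filters Name=isDefault,Values=true \
  --query "Vpcs[].VpcId" --output text
```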
3. Configure your setup

Create your own copy of the `.env` file and edit it to set your AWS Account ID and Region:

```bash
# Make sure you are in aws-blockchain-node-runners/lib/base
cd lib/base
pwd
cp ./sample-configs/.env-sample-rpc .env
nano .env
```
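At a minimum, set your target account and Region. A minimal sketch of the relevant entries, assuming the sample file exposes them under these names (keep whatever names `.env-sample-rpc` actually uses if they differ):

```bash
# Illustrative values only -- use your own account ID and Region
AWS_ACCOUNT_ID="111122223333"
AWS_REGION="us-east-1"
```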
4. Deploy common components such as the IAM role

```bash
pwd
# Make sure you are in aws-blockchain-node-runners/lib/base
npx cdk deploy base-common
```

> IMPORTANT:
> All AWS CDK v2 deployments use dedicated AWS resources to hold data during deployment. Therefore, your AWS account and Region must be [bootstrapped](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html) to create these resources before you can deploy. If you haven't already bootstrapped, issue the following command:
> ```bash
> cdk bootstrap aws://ACCOUNT-NUMBER/REGION
> ```
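For example, if you have exported `AWS_ACCOUNT_ID` and `AWS_REGION` in your shell (as done in the clean-up section below), the bootstrap command can reuse them:

```bash
# Assumes AWS_ACCOUNT_ID and AWS_REGION are already exported in your shell
npx cdk bootstrap aws://$AWS_ACCOUNT_ID/$AWS_REGION
```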
### Option 1: Deploy Single Node

1. For the L1 node you can set your own URLs in the `BASE_L1_EXECUTION_ENDPOINT` and `BASE_L1_CONSENSUS_ENDPOINT` properties of the `.env` file. It can be one of [the providers recommended by Base](https://docs.base.org/tools/node-providers) or you can run your own Ethereum node [with the Node Runner Ethereum blueprint](https://aws-samples.github.io/aws-blockchain-node-runners/docs/Blueprints/Ethereum) (tested with a single-node geth-lighthouse combination). For example:

```bash
#For Sepolia:
BASE_L1_EXECUTION_ENDPOINT="https://ethereum-sepolia-rpc.publicnode.com"
BASE_L1_CONSENSUS_ENDPOINT="https://ethereum-sepolia-beacon-api.publicnode.com"
```
2. Deploy the Base RPC node and wait for it to sync. For a full node on Mainnet it might take about a day when using snapshots, or about a week if syncing from block 0. You can use snapshots provided by the Base team by setting `BASE_RESTORE_FROM_SNAPSHOT="true"` in the `.env` file.

```bash
pwd
# Make sure you are in aws-blockchain-node-runners/lib/base
npx cdk deploy base-single-node --json --outputs-file single-node-deploy.json
```

After deployment you can watch the progress with the CloudWatch dashboard (see [Monitoring](#monitoring)) or check the progress manually. For manual access, use SSM to connect to the EC2 instance first and watch the log like this:

```bash
export INSTANCE_ID=$(cat single-node-deploy.json | jq -r '..|.node-instance-id? | select(. != null)')
echo "INSTANCE_ID=" $INSTANCE_ID
export AWS_REGION=us-east-1
aws ssm start-session --target $INSTANCE_ID --region $AWS_REGION
echo Latest synced block behind by: $((($(date +%s)-$( \
curl -d '{"id":0,"jsonrpc":"2.0","method":"optimism_syncStatus"}' \
-H "Content-Type: application/json" http://localhost:7545 | \
jq -r .result.unsafe_l2.timestamp))/60)) minutes
```
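You can also ask the execution client directly whether it is still syncing (an optional check from inside the same SSM session; `eth_syncing` is the standard JSON-RPC method and returns `false` once the client considers itself synced):

```bash
# Returns false when the execution client reports it is fully synced
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
  http://localhost:8545 | jq .result
```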
3. Test Base RPC API

Use curl to query from within the node instance:

```bash
export INSTANCE_ID=$(cat single-node-deploy.json | jq -r '..|.node-instance-id? | select(. != null)')
echo "INSTANCE_ID=" $INSTANCE_ID
export AWS_REGION=us-east-1
aws ssm start-session --target $INSTANCE_ID --region $AWS_REGION

curl -s -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' http://localhost:8545
```
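The call returns the latest block number as a hex string in the `result` field. If a decimal value is easier to read, you can convert it in the same shell (an optional helper, assuming `jq` is available on the instance as used above):

```bash
# Fetch the latest block number and print it as a decimal value
BLOCK_HEX=$(curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://localhost:8545 | jq -r .result)
printf "Latest block: %d\n" "$BLOCK_HEX"
```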
### Option 2: Highly Available RPC Nodes

1. For the L1 node you can set your own URLs in the `BASE_L1_EXECUTION_ENDPOINT` and `BASE_L1_CONSENSUS_ENDPOINT` properties of the `.env` file. It can be one of [the providers recommended by Base](https://docs.base.org/tools/node-providers) or you can run your own Ethereum node [with the Node Runner Ethereum blueprint](https://aws-samples.github.io/aws-blockchain-node-runners/docs/Blueprints/Ethereum) (tested with a geth-lighthouse combination). For example:

```bash
#For Sepolia:
BASE_L1_EXECUTION_ENDPOINT="https://ethereum-sepolia-rpc.publicnode.com"
BASE_L1_CONSENSUS_ENDPOINT="https://ethereum-sepolia-beacon-api.publicnode.com"
```
2. Deploy the Base RPC nodes and wait for them to sync. For Mainnet it might take about a day when using snapshots, or about a week if syncing from block 0. You can use snapshots provided by the Base team by setting `BASE_RESTORE_FROM_SNAPSHOT="true"` in the `.env` file.

```bash
pwd
# Make sure you are in aws-blockchain-node-runners/lib/base
npx cdk deploy base-ha-nodes --json --outputs-file ha-nodes-deploy.json
```
3. Give the new RPC **full** nodes about 5 hours to initialize and then run the following query against the load balancer behind the RPC nodes that were created:

```bash
export RPC_ALB_URL=$(cat ha-nodes-deploy.json | jq -r '..|.alburl? | select(. != null)')
echo RPC_ALB_URL=$RPC_ALB_URL
```

Copy the output from the last `echo` command with `RPC_ALB_URL=<internal_IP>` and open a [CloudShell tab with VPC environment](https://docs.aws.amazon.com/cloudshell/latest/userguide/creating-vpc-environment.html) to access the internal IP address space. Paste `RPC_ALB_URL=<internal_IP>` into the new CloudShell tab. Then query the API:

```bash
# IMPORTANT: Run from CloudShell VPC environment tab
curl http://$RPC_ALB_URL:8545 -X POST -H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```

**NOTE:** By default and for security reasons the load balancer is available only from within the default VPC in the region where it is deployed. It is not available from the Internet and is not open for external connections. Before opening it up please make sure you protect your RPC APIs.

**NOTE:** We currently don't recommend running **archive** nodes in an HA setup, because it takes far too long to get them synced. Use the single-node setup instead.
### Monitoring

Every 5 minutes a script on the Base node publishes metrics to CloudWatch: the current block for the L1 and L2 clients, the number of blocks behind for L1, and the number of minutes behind for L2. When the node is fully synced, the blocks behind metric should get down to 4 and the minutes behind metric should get down to 0. To see the metrics for a **single node only**:

- Navigate to the CloudWatch service (make sure you are in the region you have specified for AWS_REGION)
- Open Dashboards and select `base-single-node-<network>-<your_ec2_instance_id>` from the list of dashboards.
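If you prefer the CLI, you can also list the node dashboards from CloudShell (an optional check; the dashboard name follows the pattern above):

```bash
# Lists CloudWatch dashboards created for single Base nodes in the current region
aws cloudwatch list-dashboards --dashboard-name-prefix base-single-node \
  --query "DashboardEntries[].DashboardName" --output text
```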
Metrics for the **ha nodes** configuration are not yet implemented (contributions are welcome!)
## From your CloudShell: Clear up and undeploy everything

1. Undeploy all Nodes and Common stacks

```bash
# Setting the AWS account id and region in case local .env file is lost
export AWS_ACCOUNT_ID=<your_target_AWS_account_id>
export AWS_REGION=<your_target_AWS_region>

pwd
# Make sure you are in aws-blockchain-node-runners/lib/base

# Undeploy Single Node
npx cdk destroy base-single-node

# Undeploy HA Nodes
npx cdk destroy base-ha-nodes

# Delete all common components like IAM role and Security Group
npx cdk destroy base-common
```
## FAQ

1. How to check the logs of the clients running on my Base node?

**Note:** In this tutorial we chose not to use SSH and use Session Manager instead. That allows you to log all sessions in AWS CloudTrail to see who logged into the server and when. If you receive an error saying `SessionManagerPlugin is not found`, [install the Session Manager plugin for the AWS CLI](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html).

```bash
pwd
# Make sure you are in aws-blockchain-node-runners/lib/base

export INSTANCE_ID=$(cat single-node-deploy.json | jq -r '..|.nodeinstanceid? | select(. != null)')
echo "INSTANCE_ID=" $INSTANCE_ID
export AWS_REGION=us-east-1
aws ssm start-session --target $INSTANCE_ID --region $AWS_REGION
sudo su bcuser
# Geth logs:
docker logs --tail 50 node_geth_1 -f
# Base logs:
docker logs --tail 50 node_node_1 -f
```
2. How to check the logs from the EC2 user-data script?

```bash
pwd
# Make sure you are in aws-blockchain-node-runners/lib/base

export INSTANCE_ID=$(cat single-node-deploy.json | jq -r '..|.nodeinstanceid? | select(. != null)')
echo "INSTANCE_ID=" $INSTANCE_ID
export AWS_REGION=us-east-1
aws ssm start-session --target $INSTANCE_ID --region $AWS_REGION
sudo cat /var/log/cloud-init-output.log
```
3. How can I restart the Base node?

```bash
export INSTANCE_ID=$(cat single-node-deploy.json | jq -r '..|.nodeinstanceid? | select(. != null)')
echo "INSTANCE_ID=" $INSTANCE_ID
export AWS_REGION=us-east-1
aws ssm start-session --target $INSTANCE_ID --region $AWS_REGION
sudo su bcuser
/usr/local/bin/docker-compose -f /home/bcuser/node/docker-compose.yml down && \
/usr/local/bin/docker-compose -f /home/bcuser/node/docker-compose.yml up -d
```

4. Where to find the key Base client directories?

- The data directory is `/data`
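To check how much space the node data currently uses on its volume, run the following from an SSM session on the instance (an optional check):

```bash
# Size of the Base data directory and free space on the filesystem that holds it
sudo du -sh /data
df -h /data
```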
