diff --git a/.gitignore b/.gitignore
index 46ce6cd2..738a7d47 100644
--- a/.gitignore
+++ b/.gitignore
@@ -38,3 +38,4 @@ ha-nodes-deploy*.json
.env
.idea
.vscode
+.venv
diff --git a/README.md b/README.md
index 8aa4971f..ef1eecce 100644
--- a/README.md
+++ b/README.md
@@ -16,7 +16,7 @@ If you'd like propose a Node Runner Blueprint for your node, see [Adding new Nod
- `lib/constructs` - [CDK constructs](https://docs.aws.amazon.com/cdk/v2/guide/constructs.html) used in Node Runner Blueprints
- `lib/your-chain` - Node Runner Blueprint for a specific chain
- `website` - Content for the project web site built with [Docusaurus](https://docusaurus.io/)
-- `website/docs` - Place for the new blueprint deployment instructions. (If you are adding a new blueprint, use on of the existing examples to refer to the `README.md` file within your Node Runner Blueprint directory inside `lib`).
+- `website/docs` - Place for the new blueprint deployment instructions. (If you are adding a new blueprint, use one of the existing examples and refer to the `README.md` file within your Node Runner Blueprint directory inside `lib`.)
### License
This repository uses MIT License. See more in [LICENSE](./LICENSE).
diff --git a/lib/xrp/README.md b/lib/xrp/README.md
new file mode 100644
index 00000000..a08fe558
--- /dev/null
+++ b/lib/xrp/README.md
@@ -0,0 +1,229 @@
+# Sample AWS Blockchain Node Runner app for XRP Nodes
+
+| Contributed by |
+|:--------------------------------:|
+| [Pedro Aceves](https://github.com/acevesp)|
+
+XRP node deployment on AWS. All nodes are configured as ["Stock Servers"](https://xrpl.org/docs/infrastructure/configuration/server-modes/run-rippled-as-a-stock-server).
+
+## Overview of Deployment Architectures for Single and HA setups
+
+### Single node setup
+
+
+
+1. An XRP node deployed in the [Default VPC](https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html) continuously synchronizes with the rest of the nodes on the configured XRP network through an [Internet Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html).
+2. The XRP node is used by dApps or development tools internally from within the Default VPC. The RPC API is not directly exposed to the Internet to protect the node from unauthorized access.
+3. The XRP node sends various monitoring metrics for both the EC2 instance and the current XRP ledger sequence to Amazon CloudWatch. It also updates the dashboard with the correct storage device names so the respective metrics display properly.
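+
+If you prefer the CLI over the console, you can later list the dashboards this blueprint creates (their names start with the stack name, for example `XRP-single-node`, as shown in the deployment steps below). This is a convenience sketch, not part of the blueprint:
+
+```bash
+aws cloudwatch list-dashboards --dashboard-name-prefix XRP --query 'DashboardEntries[].DashboardName'
+```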
+
+### HA setup
+
+
+
+1. A set of XRP nodes is deployed within an [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-groups.html) in the [Default VPC](https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html), continuously synchronizing with the rest of the nodes on the configured XRP network through an [Internet Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html).
+2. The XRP nodes are accessed by dApps or development tools internally through an [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html). The RPC API is not exposed to the Internet to protect the nodes from unauthorized access.
+3. The XRP nodes send various EC2 monitoring metrics to Amazon CloudWatch.
+
+## Well-Architected
+
+
+Review pros and cons of this solution.
+
+### Well-Architected Checklist
+
+This is the Well-Architected checklist for the XRP nodes implementation of the AWS Blockchain Node Runner app. This checklist takes into account questions from the [AWS Well-Architected Framework](https://aws.amazon.com/architecture/well-architected/) which are relevant to this workload. Please feel free to add more checks from the framework if required for your workload.
+
+| Pillar | Control | Question/Check | Remarks |
+|:------------------------|:----------------------------------|:---------------------------------------------------------------------------------|:-----------------|
+| Security | Network protection | Are there unnecessary open ports in security groups? | Please note that the XRP sync ports 2459 and 51235 (TCP/UDP) remain open for outbound connections. |
+| | | Traffic inspection | AWS WAF could be implemented for traffic inspection. Additional charges will apply. |
+| | Compute protection | Reduce attack surface | This solution uses Amazon Linux 2 AMI. You may choose to run hardening scripts on it. |
+| | | Enable people to perform actions at a distance | This solution uses AWS Systems Manager for terminal sessions instead of opening SSH ports. |
+| | Data protection at rest | Use encrypted Amazon Elastic Block Store (Amazon EBS) volumes | This solution uses encrypted Amazon EBS volumes. |
+| | | Use encrypted Amazon Simple Storage Service (Amazon S3) buckets | This solution uses Amazon S3 managed keys (SSE-S3) encryption. |
+| | Data protection in transit | Use TLS | The AWS Application Load Balancer currently uses an HTTP listener. Create an HTTPS listener with a self-signed certificate if TLS is required. |
+| | Authorization and access control | Use instance profile with Amazon Elastic Compute Cloud (Amazon EC2) instances | This solution uses an AWS Identity and Access Management (AWS IAM) role instead of an IAM user. |
+| | | Following principle of least privilege access | Privileges are scoped down. |
+| | Application security | Security focused development practices | cdk-nag is being used with appropriate suppressions. |
+| Cost optimization | Service selection | Use cost effective resources | Cost-efficient R7a instances are used, which are well suited for high-transaction, low-latency workloads. |
+| Reliability | Resiliency implementation | Withstand component failures | This solution uses AWS Application Load Balancer with RPC nodes for high availability. |
+| | Resource monitoring | How are workload resources monitored? | Resources are being monitored using Amazon CloudWatch dashboards. Amazon CloudWatch custom metrics are being pushed via CloudWatch Agent. |
+| Performance efficiency | Compute selection | How is compute solution selected? | Compute solution is selected based on best price-performance. |
+| | Storage selection | How is storage solution selected? | Storage solution is selected based on best price-performance. |
+| Operational excellence | Workload health | How is health of workload determined? | Health of workload is determined via AWS Application Load Balancer Target Group Health Checks, on port 6005. |
+| Sustainability | Hardware & services | Select most efficient hardware for your workload | Amazon EC2 R7a instances support the Sustainability Pillar of the AWS Well-Architected Framework by offering memory optimization that enables more efficient resource utilization, potentially reducing overall energy consumption and hardware requirements for data-intensive workloads. |
+
+
+
+## Setup Instructions
+
+### Open AWS CloudShell
+
+To begin, make sure you are logged in to your AWS account with permissions to create and modify resources in IAM, EC2, EBS, VPC, S3, and KMS.
+
+From the AWS Management Console, open the [AWS CloudShell](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html), a web-based shell environment. If you are unfamiliar with it, review this [2-minute YouTube video](https://youtu.be/fz4rbjRaiQM) for an overview, and check out [CloudShell with VPC environment](https://docs.aws.amazon.com/cloudshell/latest/userguide/creating-vpc-environment.html), which we'll use to test the nodes' API from the internal IP address space.
+
+Once ready, you can run the commands to deploy and test blueprints in the CloudShell.
+
+### Clone this repository and install dependencies
+
+```bash
+git clone https://github.com/aws-samples/aws-blockchain-node-runners.git
+cd aws-blockchain-node-runners
+npm install
+```
+
+### Configure your setup
+
+1. Make sure you are in the root directory of the cloned repository
+
+2. If you have deleted or don't have the default VPC, create a default VPC:
+
+```bash
+aws ec2 create-default-vpc
+```
+
+> **NOTE:** *You may see the following error if the default VPC already exists: `An error occurred (DefaultVpcAlreadyExists) when calling the CreateDefaultVpc operation: A Default VPC already exists for this account in this region.`. That means you can just continue with the following steps.*
+
+3. Configure your setup
+
+Create your own copy of the `.env` file and edit it to add your AWS Account ID and Region (a sample sketch follows this list):
+```bash
+cd lib/xrp
+cp ./sample-configs/.env-sample-testnet .env
+nano .env
+```
+> **NOTE:** *You can find more examples inside `sample-configs`*
+
+
+4. Deploy common components such as IAM role:
+
+```bash
+npx cdk deploy XRP-common
+```
+
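+For reference, a minimal `.env` for a testnet setup might look like the sketch below. `AWS_REGION` and `HUB_NETWORK_ID` are referenced elsewhere in these instructions; the account ID variable name here is an assumption, so always start from the samples in `sample-configs` rather than writing the file from scratch.
+
+```bash
+# Hypothetical .env sketch; start from sample-configs/.env-sample-testnet
+AWS_ACCOUNT_ID="111111111111"   # variable name is an assumption, check the sample file
+AWS_REGION="us-east-1"
+HUB_NETWORK_ID="testnet"
+```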
+
+### Deploy a Single Node
+
+1. Deploy the node
+
+```bash
+npx cdk deploy XRP-single-node --json --outputs-file single-node-deploy.json
+```
+
+2. After starting the node, you need to wait for the initial synchronization process to finish. You can use Amazon CloudWatch to track the progress. There is a script that publishes CloudWatch metrics every 5 minutes, where you can watch the `XRP Sequence` metric. When the node is fully synced, the sequence should match that of the configured XRP network (testnet, mainnet, etc.). To see the metrics:
+
+ - Navigate to [CloudWatch service](https://console.aws.amazon.com/cloudwatch/) (make sure you are in the region you have specified for `AWS_REGION`)
+ - Open `Dashboards` and select dashboard that starts with `XRP-single-node` from the list of dashboards.
+
+3. Once the initial synchronization is done, you should be able to access the RPC API of that node from within the same VPC. The RPC port is not exposed to the Internet. Run the following command to retrieve the private IP of the single RPC node you deployed:
+
+```bash
+export INSTANCE_ID=$(cat single-node-deploy.json | jq -r '.["XRP-single-node"].nodeinstanceid')
+NODE_INTERNAL_IP=$(aws ec2 describe-instances --instance-ids $INSTANCE_ID --query 'Reservations[*].Instances[*].PrivateIpAddress' --output text)
+echo "NODE_INTERNAL_IP=$NODE_INTERNAL_IP"
+```
+
+Copy the output of the last `echo` command (starting with `NODE_INTERNAL_IP=`), open a [CloudShell tab with VPC environment](https://docs.aws.amazon.com/cloudshell/latest/userguide/creating-vpc-environment.html) to access the internal IP address space, and paste the `NODE_INTERNAL_IP=...` line into the new CloudShell tab.
+
+Then query the RPC API to retrieve the latest block height:
+
+```bash
+# IMPORTANT: Run from CloudShell VPC environment tab
+curl -X POST -H "Content-Type: application/json" http://$NODE_INTERNAL_IP:6005/ -d '{
+ "method": "ledger_current",
+ "params": [{}]
+}'
+```
+You will get a response similar to this:
+
+```json
+{"result":{"ledger_current_index":5147254,"status":"success"}}
+```
+
+Note: If the node is still syncing, you will receive the following response:
+
+```json
+{"result":{"error":"noNetwork","error_code":17,"error_message":"Not synced to the network.","request":{"command":"ledger_current"},"status":"error"}}
+```
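+
+To check overall sync progress rather than a single ledger index, you can also call the `server_info` method (listed as an `rpc_startup` example in the bundled `rippled.cfg`), which reports the server state and the range of complete ledgers. A convenience sketch, assuming the same internal RPC endpoint on port 6005:
+
+```bash
+# IMPORTANT: Run from CloudShell VPC environment tab
+curl -s -X POST -H "Content-Type: application/json" http://$NODE_INTERNAL_IP:6005/ -d '{
+   "method": "server_info",
+   "params": [{}]
+}' | jq '.result.info | {server_state, complete_ledgers}'
+```
+
+A `server_state` of `full` (or `proposing`) indicates the node is synced.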
+
+### Deploy HA Nodes
+
+1. Deploy multiple HA Nodes
+
+```bash
+pwd
+# Make sure you are in aws-blockchain-node-runners/lib/xrp
+npx cdk deploy XRP-ha-nodes --json --outputs-file ha-nodes-deploy.json
+```
+
+2. Give the new nodes time to initialize
+
+3. To perform an RPC request to your load balancer, run the following command to retrieve the ALB URL:
+
+```bash
+export XRP_RPC_ALB_URL=$(cat ha-nodes-deploy.json | jq -r '..|.alburl? | select(. != null)')
+echo XRP_RPC_ALB_URL=$XRP_RPC_ALB_URL
+```
+
+Copy the output of the last `echo` command (starting with `XRP_RPC_ALB_URL=`), open a [CloudShell tab with VPC environment](https://docs.aws.amazon.com/cloudshell/latest/userguide/creating-vpc-environment.html) to access the internal IP address space, and paste the `XRP_RPC_ALB_URL=...` line into the VPC CloudShell tab.
+
+Then query the load balancer to retrieve the current block height:
+
+```bash
+curl -X POST -H "Content-Type: application/json" http://$XRP_RPC_ALB_URL:6005/ -d '{
+ "method": "ledger_current",
+ "params": [{}]
+}'
+```
+
+You will get a response similar to this:
+
+```json
+{"result":{"ledger_current_index":5147300,"status":"success"}}
+```
+
+> **NOTE:** *By default and for security reasons the load balancer is available only from within the default VPC in the region where it is deployed. It is not available from the Internet and is not open for external connections. Before opening it up please make sure you protect your RPC APIs.*
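+
+To see which EC2 instances the Auto Scaling Group launched, one option is to filter by the `Project` tag that `app.ts` applies to all resources in this app (`AWSXRP`). This is a convenience sketch, not part of the blueprint, and it will also list the single node if you deployed it:
+
+```bash
+aws ec2 describe-instances \
+  --filters "Name=tag:Project,Values=AWSXRP" "Name=instance-state-name,Values=running" \
+  --query 'Reservations[].Instances[].{Id:InstanceId,AZ:Placement.AvailabilityZone,PrivateIp:PrivateIpAddress}' \
+  --output table
+```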
+
+### Cleaning up and undeploying everything
+
+Destroy the HA Nodes, Single Node, and Common stacks:
+
+```bash
+pwd
+# Make sure you are in aws-blockchain-node-runners/lib/xrp
+
+# Destroy HA Nodes
+npx cdk destroy XRP-ha-nodes
+
+# Destroy Single Node
+npx cdk destroy XRP-single-node
+
+# Delete all common components like IAM role and Security Group
+npx cdk destroy XRP-common
+```
+
+### FAQ
+
+1. How do I check the logs from the EC2 user-data script?
+
+```bash
+pwd
+# Make sure you are in aws-blockchain-node-runners/lib/xrp
+
+export INSTANCE_ID=$(cat single-node-deploy.json | jq -r '.["XRP-single-node"].nodeinstanceid')
+echo "INSTANCE_ID=" $INSTANCE_ID
+aws ssm start-session --target $INSTANCE_ID --region $AWS_REGION
+sudo cat /var/log/cloud-init-output.log
+sudo cat /var/log/user-data.log
+```
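+
+Once the Session Manager session is open, you can also check the `rippled` service itself. This sketch assumes the blueprint installs `rippled` from the RPM repository in `lib/xrp/lib/assets/rippled/ripple.repo`, which registers it as a systemd service, and that the RPC listener accepts local connections on the same port 6005 used above:
+
+```bash
+# Run inside the SSM session on the node
+sudo systemctl status rippled
+curl -s -X POST -H "Content-Type: application/json" http://localhost:6005/ -d '{"method": "server_info", "params": [{}]}'
+```
+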
+2. How can I change the rippled (XRP) configuration?
+    There are two places where the XRP nodes are configured:
+
+    a. The `.env` file. This is where you specify the XRP network you want. Its value is the key used to select the network configuration in part "b":
+
+    ```bash
+    HUB_NETWORK_ID="testnet"
+    ```
+
+    b. The `lib/xrp/lib/assets/rippled/rippledconfig.py` file. Here you can set up the listeners and the network configuration for the network specified in part "a".
diff --git a/lib/xrp/app.ts b/lib/xrp/app.ts
new file mode 100644
index 00000000..e74d2a80
--- /dev/null
+++ b/lib/xrp/app.ts
@@ -0,0 +1,49 @@
+#!/usr/bin/env node
+import "dotenv/config";
+import * as cdk from "aws-cdk-lib";
+import * as nag from "cdk-nag";
+import * as config from "./lib/config/XRPConfig";
+
+import { XRPSingleNodeStack } from "./lib/single-node-stack";
+import { XRPCommonStack } from "./lib/common-stack";
+import { XRPHANodesStack } from "./lib/ha-nodes-stack";
+
+const app = new cdk.App();
+cdk.Tags.of(app).add("Project", "AWSXRP");
+
+const commonStack = new XRPCommonStack(app, "XRP-common", {
+ stackName: `XRP-nodes-common`,
+ env: { account: config.baseConfig.accountId, region: config.baseConfig.region },
+});
+
+new XRPSingleNodeStack(app, "XRP-single-node", {
+ env: { account: config.baseConfig.accountId, region: config.baseConfig.region },
+ stackName: `XRP-single-node`,
+ instanceType: config.baseNodeConfig.instanceType,
+ instanceCpuType: config.baseNodeConfig.instanceCpuType,
+ dataVolume: config.baseNodeConfig.dataVolume,
+ hubNetworkID: config.baseNodeConfig.hubNetworkID,
+ instanceRole: commonStack.instanceRole,
+});
+
+new XRPHANodesStack(app, "XRP-ha-nodes", {
+    stackName: "xrp-ha-nodes",
+    env: { account: config.baseConfig.accountId, region: config.baseConfig.region },
+    instanceType: config.baseNodeConfig.instanceType,
+    instanceCpuType: config.baseNodeConfig.instanceCpuType,
+    dataVolume: config.baseNodeConfig.dataVolume,
+    hubNetworkID: config.baseNodeConfig.hubNetworkID,
+    instanceRole: commonStack.instanceRole,
+    albHealthCheckGracePeriodMin: config.haNodeConfig.albHealthCheckGracePeriodMin,
+    heartBeatDelayMin: config.haNodeConfig.heartBeatDelayMin,
+    numberOfNodes: config.haNodeConfig.numberOfNodes,
+});
+
+// Security Check
+cdk.Aspects.of(app).add(
+ new nag.AwsSolutionsChecks({
+ verbose: false,
+ reports: true,
+ logIgnores: false,
+ })
+);
diff --git a/lib/xrp/cdk.json b/lib/xrp/cdk.json
new file mode 100644
index 00000000..d374005c
--- /dev/null
+++ b/lib/xrp/cdk.json
@@ -0,0 +1,44 @@
+{
+ "app": "npx ts-node --prefer-ts-exts app.ts",
+ "watch": {
+ "include": [
+ "**"
+ ],
+ "exclude": [
+ "README.md",
+ "cdk*.json",
+ "**/*.d.ts",
+ "**/*.js",
+ "tsconfig.json",
+ "package*.json",
+ "yarn.lock",
+ "node_modules",
+ "test"
+ ]
+ },
+ "context": {
+ "@aws-cdk/aws-lambda:recognizeLayerVersion": true,
+ "@aws-cdk/core:checkSecretUsage": true,
+ "@aws-cdk/core:target-partitions": [
+ "aws",
+ "aws-cn"
+ ],
+ "@aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver": true,
+ "@aws-cdk/aws-ec2:uniqueImdsv2TemplateName": true,
+ "@aws-cdk/aws-ecs:arnFormatIncludesClusterName": true,
+ "@aws-cdk/aws-iam:minimizePolicies": true,
+ "@aws-cdk/core:validateSnapshotRemovalPolicy": true,
+ "@aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName": true,
+ "@aws-cdk/aws-s3:createDefaultLoggingPolicy": true,
+ "@aws-cdk/aws-sns-subscriptions:restrictSqsDescryption": true,
+ "@aws-cdk/aws-apigateway:disableCloudWatchRole": true,
+ "@aws-cdk/core:enablePartitionLiterals": true,
+ "@aws-cdk/aws-events:eventsTargetQueueSameAccount": true,
+ "@aws-cdk/aws-iam:standardizedServicePrincipals": true,
+ "@aws-cdk/aws-ecs:disableExplicitDeploymentControllerForCircuitBreaker": true,
+ "@aws-cdk/aws-iam:importedRoleStackSafeDefaultPolicyName": true,
+ "@aws-cdk/aws-s3:serverAccessLogsUseBucketPolicy": true,
+ "@aws-cdk/aws-route53-patters:useCertificate": true,
+ "@aws-cdk/customresources:installLatestAwsSdkDefault": false
+ }
+}
diff --git a/lib/xrp/doc/assets/Architecture-HA Nodes.drawio.png b/lib/xrp/doc/assets/Architecture-HA Nodes.drawio.png
new file mode 100644
index 00000000..e52cb7b8
Binary files /dev/null and b/lib/xrp/doc/assets/Architecture-HA Nodes.drawio.png differ
diff --git a/lib/xrp/doc/assets/Architecture-Single node.drawio.png b/lib/xrp/doc/assets/Architecture-Single node.drawio.png
new file mode 100644
index 00000000..cc3770c0
Binary files /dev/null and b/lib/xrp/doc/assets/Architecture-Single node.drawio.png differ
diff --git a/lib/xrp/jest.config.js b/lib/xrp/jest.config.js
new file mode 100644
index 00000000..9b9c4172
--- /dev/null
+++ b/lib/xrp/jest.config.js
@@ -0,0 +1,11 @@
+module.exports = {
+ testEnvironment: "node",
+  roots: ["<rootDir>/test"],
+ testMatch: ["**/*.test.ts"],
+ transform: {
+ "^.+\\.tsx?$": "ts-jest"
+ },
+ setupFiles: [
+ "dotenv/config"
+ ]
+};
diff --git a/lib/xrp/lib/assets/cw-agent.json b/lib/xrp/lib/assets/cw-agent.json
new file mode 100644
index 00000000..28833017
--- /dev/null
+++ b/lib/xrp/lib/assets/cw-agent.json
@@ -0,0 +1,76 @@
+{
+ "agent": {
+ "metrics_collection_interval": 60,
+ "run_as_user": "root"
+ },
+ "metrics": {
+ "aggregation_dimensions": [
+ [
+ "InstanceId"
+ ]
+ ],
+ "append_dimensions": {
+ "InstanceId": "${aws:InstanceId}"
+ },
+ "metrics_collected": {
+ "cpu": {
+ "measurement": [
+ "cpu_usage_idle",
+ "cpu_usage_iowait",
+ "cpu_usage_user",
+ "cpu_usage_system"
+ ],
+ "metrics_collection_interval": 60,
+ "resources": [
+ "*"
+ ],
+ "totalcpu": false
+ },
+ "disk": {
+ "measurement": [
+ "used_percent"
+ ],
+ "metrics_collection_interval": 60,
+ "resources": [
+ "*"
+ ]
+ },
+ "diskio": {
+ "measurement": [
+ "io_time",
+ "write_bytes",
+ "read_bytes",
+ "writes",
+ "reads",
+ "write_time",
+ "read_time",
+ "iops_in_progress"
+ ],
+ "metrics_collection_interval": 60,
+ "resources": [
+ "*"
+ ]
+ },
+ "mem": {
+ "measurement": [
+ "mem_used_percent",
+ "mem_cached"
+ ],
+ "metrics_collection_interval": 60
+ },
+ "netstat": {
+ "measurement": [
+ "tcp_established",
+ "tcp_time_wait"
+ ],
+ "metrics_collection_interval": 60
+ },
+ "swap": {
+ "measurement": [
+ "swap_used_percent"
+ ],
+ "metrics_collection_interval": 60
+ }
+ }
+ }
+}
diff --git a/lib/xrp/lib/assets/rippled/configBuilder.py b/lib/xrp/lib/assets/rippled/configBuilder.py
new file mode 100644
index 00000000..30e7e8dd
--- /dev/null
+++ b/lib/xrp/lib/assets/rippled/configBuilder.py
@@ -0,0 +1,133 @@
+import configparser
+import os
+import sys
+from dataclasses import dataclass
+from pathlib import Path
+from typing import Dict, Any, Tuple
+
+import rippledconfig
+
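+# Usage (as inferred from main() below):
+#   python3 configBuilder.py <assets_path>
+# The script expects rippled.cfg.template and validators.txt.template under
+# <assets_path>/rippled, reads the target network from the XRP_NETWORK
+# environment variable (default: "mainnet"), and writes the resulting
+# rippled.cfg and validators.txt to the paths defined in the rippledconfig module.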
+
+@dataclass
+class RippledConfig:
+ """Class to handle Rippled configuration settings"""
+ assets_path: Path
+ xrp_network: str
+
+ def __init__(self, assets_path: str):
+ self.assets_path = Path(assets_path) / "rippled"
+ self.xrp_network = os.environ.get("XRP_NETWORK", "mainnet")
+ self.server_ports = rippledconfig.xrp_defaults["server_ports"]
+ self.node_db_defaults = rippledconfig.xrp_defaults["db_defaults"]
+ self.network_defaults = rippledconfig.xrp_defaults["network_defaults"]
+
+ def load_config_files(self) -> Tuple[configparser.ConfigParser, configparser.ConfigParser]:
+ """Load and parse configuration template files"""
+ ripple_cfg = self._create_config_parser()
+ validator_cfg = self._create_config_parser()
+
+ ripple_cfg.read_string(self._read_template_file("rippled.cfg.template"))
+ validator_cfg.read_string(self._read_template_file("validators.txt.template"))
+
+ return ripple_cfg, validator_cfg
+
+ def _read_template_file(self, filename: str) -> str:
+ """Read a template file from the assets directory"""
+ try:
+ with open(self.assets_path / filename) as f:
+ return f.read()
+ except FileNotFoundError as e:
+ raise FileNotFoundError(f"Template file {filename} not found in {self.assets_path}") from e
+
+ @staticmethod
+ def _create_config_parser() -> configparser.ConfigParser:
+ """Create a configured ConfigParser instance"""
+ parser = configparser.ConfigParser(
+ allow_no_value=True,
+ delimiters="=",
+ empty_lines_in_values=False
+ )
+ parser.optionxform = str
+ return parser
+
+ def apply_network_configuration(self, ripple_cfg: configparser.ConfigParser,
+ validator_cfg: configparser.ConfigParser) -> None:
+ """Apply network-specific configuration settings"""
+ network_config = self.network_defaults[self.xrp_network]
+
+ if self.xrp_network == "mainnet":
+ self._configure_mainnet(ripple_cfg, validator_cfg, network_config)
+ elif self.xrp_network == "testnet":
+ self._configure_testnet(ripple_cfg, validator_cfg, network_config)
+
+ def _configure_mainnet(self, ripple_cfg: configparser.ConfigParser,
+ validator_cfg: configparser.ConfigParser,
+ network_config: Dict[str, Any]) -> None:
+ """Configure settings for mainnet"""
+ ripple_cfg.remove_section("ips")
+ ripple_cfg.set("network_id", network_config["network_id"])
+ ripple_cfg['ssl_verify'].clear()
+ ripple_cfg.set("ssl_verify", network_config["ssl_verify"])
+ self._apply_common_config(ripple_cfg, validator_cfg, network_config)
+
+ def _configure_testnet(self, ripple_cfg: configparser.ConfigParser,
+ validator_cfg: configparser.ConfigParser,
+ network_config: Dict[str, Any]) -> None:
+ """Configure settings for testnet"""
+ ripple_cfg.set("ips", network_config["ips"])
+ ripple_cfg.set("network_id", network_config["network_id"])
+ ripple_cfg['ssl_verify'].clear()
+ ripple_cfg.set("ssl_verify", network_config["ssl_verify"])
+ self._apply_common_config(ripple_cfg, validator_cfg, network_config)
+
+ def _apply_common_config(self, ripple_cfg: configparser.ConfigParser,
+ validator_cfg: configparser.ConfigParser,
+ network_config: Dict[str, Any]) -> None:
+ """Apply common configuration settings"""
+ self._configure_server_ports(ripple_cfg)
+ self._configure_node_db(ripple_cfg)
+ self._configure_validators(validator_cfg, network_config)
+
+ def _configure_server_ports(self, config: configparser.ConfigParser) -> None:
+ """Configure server ports settings"""
+ for section, settings in self.server_ports.items():
+ for key, value in settings.items():
+ config.set(section, key, value)
+
+ def _configure_node_db(self, config: configparser.ConfigParser) -> None:
+ """Configure node database settings"""
+ for section, settings in self.node_db_defaults.items():
+ for key, value in settings.items():
+ config.set(section, key, value)
+
+ def _configure_validators(self, config: configparser.ConfigParser,
+ network_config: Dict[str, Any]) -> None:
+ """Configure validator settings"""
+ for section in config.sections():
+ config[section].clear()
+ config.set(section, "\n".join(map(str, network_config[section])))
+
+def main():
+ """Main function to generate Rippled configuration"""
+ try:
+ assets_path = sys.argv[1]
+ config_handler = RippledConfig(assets_path)
+
+ ripple_cfg, validator_cfg = config_handler.load_config_files()
+ config_handler.apply_network_configuration(ripple_cfg, validator_cfg)
+
+ # Write configurations to files
+ with open(rippledconfig.rippled_cfg_file, "w") as r_cfg:
+ ripple_cfg.write(r_cfg, space_around_delimiters=True)
+ with open(rippledconfig.rippled_validator_file, "w") as val_cfg:
+ validator_cfg.write(val_cfg, space_around_delimiters=True)
+
+ except IndexError:
+ print("Error: Please provide the assets path as a command line argument")
+ sys.exit(1)
+ except Exception as e:
+ print(f"Error: {str(e)}")
+ sys.exit(1)
+
+if __name__ == "__main__":
+ main()
diff --git a/lib/xrp/lib/assets/rippled/ripple.repo b/lib/xrp/lib/assets/rippled/ripple.repo
new file mode 100644
index 00000000..da3c851d
--- /dev/null
+++ b/lib/xrp/lib/assets/rippled/ripple.repo
@@ -0,0 +1,7 @@
+[ripple-stable]
+name=XRP Ledger Packages
+enabled=1
+gpgcheck=0
+repo_gpgcheck=1
+baseurl=https://repos.ripple.com/repos/rippled-rpm/stable/
+gpgkey=https://repos.ripple.com/repos/rippled-rpm/stable/repodata/repomd.xml.key
diff --git a/lib/xrp/lib/assets/rippled/rippled.cfg b/lib/xrp/lib/assets/rippled/rippled.cfg
new file mode 100644
index 00000000..fd568927
--- /dev/null
+++ b/lib/xrp/lib/assets/rippled/rippled.cfg
@@ -0,0 +1,1507 @@
+#-------------------------------------------------------------------------------
+#
+#
+#-------------------------------------------------------------------------------
+#
+# Contents
+#
+# 1. Server
+#
+# 2. Peer Protocol
+#
+# 3. Ripple Protocol
+#
+# 4. HTTPS Client
+#
+# 5.
+#
+# 6. Database
+#
+# 7. Diagnostics
+#
+# 8. Voting
+#
+# 9. Misc Settings
+#
+# 10. Example Settings
+#
+#-------------------------------------------------------------------------------
+#
+# Purpose
+#
+# This file documents and provides examples of all rippled server process
+# configuration options. When the rippled server instance is launched, it
+# looks for a file with the following name:
+#
+# rippled.cfg
+#
+# For more information on where the rippled server instance searches for the
+# file, visit:
+#
+# https://xrpl.org/commandline-usage.html#generic-options
+#
+# This file should be named rippled.cfg. This file is UTF-8 with DOS, UNIX,
+# or Mac style end of lines. Blank lines and lines beginning with '#' are
+# ignored. Undefined sections are reserved. No escapes are currently defined.
+#
+# Notation
+#
+# In this document a simple BNF notation is used. Angle brackets denote
+# required elements, square brackets denote optional elements, and single
+# quotes indicate string literals. A vertical bar separating 1 or more
+# elements is a logical "or"; any one of the elements may be chosen.
+# Parentheses are notational only, and used to group elements; they are not
+# part of the syntax unless they appear in quotes. White space may always
+# appear between elements, it has no effect on values.
+#
+# <identifier> A required identifier
+# '=' The equals sign character
+# | Logical "or"
+# ( ) Used for grouping
+#
+#
+# An identifier is a string of upper or lower case letters, digits, or
+# underscores subject to the requirement that the first character of an
+# identifier must be a letter. Identifiers are not case sensitive (but
+# values may be).
+#
+# Some configuration sections contain key/value pairs. A line containing
+# a key/value pair has this syntax:
+#
+# <key> '=' <value>
+#
+# Depending on the section and key, different value types are possible:
+#
+# A signed integer
+# An unsigned integer
+# A boolean. 1 = true/yes/on, 0 = false/no/off.
+#
+# Consult the documentation on the key in question to determine the possible
+# value types.
+#
+#
+#
+#-------------------------------------------------------------------------------
+#
+# 1. Server
+#
+#----------
+#
+#
+#
+# rippled offers various server protocols to clients making inbound
+# connections. The listening ports rippled uses are "universal" ports
+# which may be configured to handshake in one or more of the available
+# supported protocols. These universal ports simplify administration:
+# A single open port can be used for multiple protocols.
+#
+# NOTE At least one server port must be defined in order
+# to accept incoming network connections.
+#
+#
+# [server]
+#
+# A list of port names and key/value pairs. A port name must start with a
+# letter and contain only letters and numbers. The name is not case-sensitive.
+# For each name in this list, rippled will look for a configuration file
+# section with the same name and use it to create a listening port. The
+# name is informational only; the choice of name does not affect the function
+# of the listening port.
+#
+# Key/value pairs specified in this section are optional, and apply to all
+# listening ports unless the port overrides the value in its section. They
+# may be considered default values.
+#
+# Suggestion:
+#
+# To avoid a conflict with port names and future configuration sections,
+# we recommend prepending "port_" to the port name. This prefix is not
+# required, but suggested.
+#
+# This example defines two ports with different port numbers and settings:
+#
+# [server]
+# port_public
+# port_private
+# port = 80
+#
+# [port_public]
+# ip = 0.0.0.0
+# port = 443
+# protocol = peer,https
+#
+# [port_private]
+# ip = 127.0.0.1
+# protocol = http
+#
+# When rippled is used as a command line client (for example, issuing a
+# server stop command), the first port advertising the http or https
+# protocol will be used to make the connection.
+#
+#
+#
+# [<name>]
+#
+# A series of key/value pairs that define the settings for the port with
+# the corresponding name. These keys are possible:
+#
+# ip = <IP-address>
+#
+# Required. Determines the IP address of the network interface to bind
+# to. To bind to all available IPv4 interfaces, use 0.0.0.0
+# To bind to all IPv4 and IPv6 interfaces, use ::
+#
+# NOTE if the ip value is ::, then any incoming IPv4 connections will
+# be made as mapped IPv4 addresses.
+#
+# port = <number>
+#
+# Required. Sets the port number to use for this port.
+#
+# protocol = [ http, https, peer ]
+#
+# Required. A comma-separated list of protocols to support:
+#
+# http JSON-RPC over HTTP
+# https JSON-RPC over HTTPS
+# ws Websockets
+# wss Secure Websockets
+# peer Peer Protocol
+#
+# Restrictions:
+#
+# Only one port may be configured to support the peer protocol.
+# A port cannot have websocket and non websocket protocols at the
+# same time. It is possible to have both Websockets and Secure Websockets
+# together in one port.
+#
+# NOTE If no ports support the peer protocol, rippled cannot
+# receive incoming peer connections or become a superpeer.
+#
+# limit = <number>
+#
+# Optional. An integer value that will limit the number of connected
+# clients that the port will accept. Once the limit is reached, new
+# connections will be refused until other clients disconnect.
+# Omit or set to 0 to allow unlimited numbers of clients.
+#
+# user = <text>
+# password = <text>
+#
+# When set, these credentials will be required on HTTP/S requests.
+# The credentials must be provided using HTTP's Basic Authentication
+# headers. If either or both fields are empty, then no credentials are
+# required. IP address restrictions, if any, will be checked in addition
+# to the credentials specified here.
+#
+# When acting in the client role, rippled will supply these credentials
+# using HTTP's Basic Authentication headers when making outbound HTTP/S
+# requests.
+#
+# admin = [ IP, IP, IP, ... ]
+#
+# A comma-separated list of IP addresses or subnets. Subnets
+# should be represented in "slash" notation, such as:
+# 10.0.0.0/8
+# 172.16.0.0/12
+# 192.168.0.0/16
+# Those examples are ipv4, but ipv6 is also supported.
+# When configuring subnets, the address must match the
+# underlying network address. Otherwise, the desired IP range is
+# ambiguous. For example, 10.1.2.3/24 has a network address of
+# 10.1.2.0. Therefore, that subnet should be configured as
+# 10.1.2.0/24.
+#
+# When set, grants administrative command access to the specified
+# addresses. These commands may be issued over http, https, ws, or wss
+# if configured on the port. If not provided, the default is to not allow
+# administrative commands.
+#
+# NOTE A common configuration value for the admin field is "localhost".
+# If you are listening on all IPv4/IPv6 addresses by specifying
+# ip = :: then you can use admin = ::ffff:127.0.0.1,::1 to allow
+# administrative access from both IPv4 and IPv6 localhost
+# connections.
+#
+# *SECURITY WARNING*
+# 0.0.0.0 or :: may be used to allow access from any IP address. It must
+# be the only address specified and cannot be combined with other IPs.
+# Use of this address can compromise server security, please consider its
+# use carefully.
+#
+# admin_user = <text>
+# admin_password = <text>
+#
+# When set, clients must provide these credentials in the submitted
+# JSON for any administrative command requests submitted to the HTTP/S,
+# WS, or WSS protocol interfaces. If administrative commands are
+# disabled for a port, these credentials have no effect.
+#
+# When acting in the client role, rippled will supply these credentials
+# in the submitted JSON for any administrative command requests when
+# invoking JSON-RPC commands on remote servers.
+#
+# secure_gateway = [ IP, IP, IP, ... ]
+#
+# A comma-separated list of IP addresses or subnets. See the
+# details for the "admin" option above.
+#
+# When set, allows the specified addresses to pass HTTP headers
+# containing username and remote IP address for each session. If a
+# non-empty username is passed in this way, then resource controls
+# such as often resulting in "tooBusy" errors will be lifted. However,
+# administrative RPC commands such as "stop" will not be allowed.
+# The HTTP headers that secure_gateway hosts can set are X-User and
+# X-Forwarded-For. Only the X-User header affects resource controls.
+# However, both header values are logged to help identify user activity.
+# If no X-User header is passed, or if its value is empty, then
+# resource controls will default to those for non-administrative users.
+#
+# The secure_gateway IP addresses are intended to represent
+# proxies. Since rippled trusts these hosts, they must be
+# responsible for properly authenticating the remote user.
+#
+# If some IP addresses are included for both "admin" and
+# "secure_gateway" connections, then they will be treated as
+# "admin" addresses.
+#
+# ssl_key = <filename>
+# ssl_cert = <filename>
+# ssl_chain = <filename>
+#
+# Use the specified files when configuring SSL on the port.
+#
+# NOTE If no files are specified and secure protocols are selected,
+# rippled will generate an internal self-signed certificate.
+#
+# The files have these meanings:
+#
+# ssl_key
+#
+# Specifies the filename holding the SSL key in PEM format.
+#
+# ssl_cert
+#
+# Specifies the path to the SSL certificate file in PEM format.
+# This is not needed if the chain includes it. Use ssl_chain if
+# your certificate includes one or more intermediates.
+#
+# ssl_chain
+#
+# If you need a certificate chain, specify the path to the
+# certificate chain here. The chain may include the end certificate.
+# This must be used if the certificate includes intermediates.
+#
+# ssl_ciphers = <cipherlist>
+#
+# Control the ciphers which the server will support over SSL on the port,
+# specified using the OpenSSL "cipher list format".
+#
+# NOTE If unspecified, rippled will automatically configure a modern
+# cipher suite. This default suite should be widely supported.
+#
+# You should not modify this string unless you have a specific
+# reason and cryptographic expertise. Incorrect modification may
+# keep rippled from connecting to other instances of rippled or
+# prevent RPC and WebSocket clients from connecting.
+#
+# send_queue_limit = [1..65535]
+#
+# A Websocket will disconnect when its send queue exceeds this limit.
+# The default is 100. A larger value may help with erratic disconnects but
+# may adversely affect server performance.
+#
+# WebSocket permessage-deflate extension options
+#
+# These settings configure the optional permessage-deflate extension
+# options and may appear on any port configuration entry. They are meaningful
+# only to ports which have enabled a WebSocket protocol.
+#
+# permessage_deflate =
+#
+# Determines if permessage_deflate extension negotiations are enabled.
+# When enabled, clients may request the extension and the server will
+# offer the enabled extension in response.
+#
+# client_max_window_bits = [9..15]
+# server_max_window_bits = [9..15]
+# client_no_context_takeover =
+# server_no_context_takeover =
+#
+# These optional settings control options related to the permessage-deflate
+# extension negotiation. For precise definitions of these fields please see
+# the RFC 7692, "Compression Extensions for WebSocket":
+# https://tools.ietf.org/html/rfc7692
+#
+# compress_level = [0..9]
+#
+# When set, determines the amount of compression attempted, where 0 is
+# the least amount and 9 is the most amount. Higher levels require more
+# CPU resources. Levels 1 through 3 use a fast compression algorithm,
+# while levels 4 through 9 use a more compact algorithm which uses more
+# CPU resources. If unspecified, a default of 3 is used.
+#
+# memory_level = [1..9]
+#
+# When set, determines the relative amount of memory used to hold
+# intermediate compression data. Higher numbers can give better compression
+# ratios at the cost of higher memory and CPU resources.
+#
+# [rpc_startup]
+#
+# Specify a list of RPC commands to run at startup.
+#
+# Examples:
+# { "command" : "server_info" }
+# { "command" : "log_level", "partition" : "ripplecalc", "severity" : "trace" }
+#
+#
+#
+# [websocket_ping_frequency]
+#
+# <number>
+#
+# The amount of time to wait in seconds, before sending a websocket 'ping'
+# message. Ping messages are used to determine if the remote end of the
+# connection is no longer available.
+#
+#
+# [server_domain]
+#
+# domain name
+#
+# The domain under which a TOML file applicable to this server can be
+# found. A server may lie about its domain so the TOML should contain
+# a reference to this server by pubkey in the [nodes] array.
+#
+#
+#-------------------------------------------------------------------------------
+#
+# 2. Peer Protocol
+#
+#-----------------
+#
+# These settings control security and access attributes of the Peer to Peer
+# server section of the rippled process. Peer Protocol implements the
+# Ripple Payment protocol. It is over peer connections that transactions
+# and validations are passed from machine to machine, to determine the
+# contents of validated ledgers.
+#
+#
+#
+# [compression]
+#
+# true or false
+#
+# true - enables compression
+# false - disables compression [default].
+#
+# The rippled server can save bandwidth by compressing its peer-to-peer communications,
+# at a cost of greater CPU usage. If you enable link compression,
+# the server automatically compresses communications with peer servers
+# that also have link compression enabled.
+# https://xrpl.org/enable-link-compression.html
+#
+#
+#
+# [ips]
+#
+# List of hostnames or ips where the Ripple protocol is served. A default
+# starter list is included in the code and used if no other hostnames are
+# available.
+#
+# One address or domain name per line is allowed. A port may be
+# specified after adding a space to the address. The ordering of entries
+# does not generally matter.
+#
+# The default list of entries is:
+# - r.ripple.com 51235
+# - sahyadri.isrdc.in 51235
+# - hubs.xrpkuwait.com 51235
+#
+# Examples:
+#
+# [ips]
+# 192.168.0.1
+# 192.168.0.1 2459
+# r.ripple.com 51235
+#
+#
+# [ips_fixed]
+#
+# List of IP addresses or hostnames to which rippled should always attempt to
+# maintain peer connections with. This is useful for manually forming private
+# networks, for example to configure a validation server that connects to the
+# Ripple network through a public-facing server, or for building a set
+# of cluster peers.
+#
+# One address or domain name per line is allowed. A port must be specified
+# after adding a space to the address.
+#
+#
+#
+# [peer_private]
+#
+# 0 or 1.
+#
+# 0: Request peers to broadcast your address. Normal outbound peer connections [default]
+# 1: Request peers not to broadcast your address. Only connect to configured peers.
+#
+#
+#
+# [peers_max]
+#
+# The largest number of desired peer connections (incoming or outgoing).
+# Cluster and fixed peers do not count towards this total. There are
+# implementation-defined lower limits imposed on this value for security
+# purposes.
+#
+#
+#
+# [node_seed]
+#
+# This is used for clustering. To force a particular node seed or key, the
+# key can be set here. The format is the same as the validation_seed field.
+# To obtain a validation seed, use the validation_create command.
+#
+# Examples: RASH BUSH MILK LOOK BAD BRIM AVID GAFF BAIT ROT POD LOVE
+# shfArahZT9Q9ckTf3s1psJ7C7qzVN
+#
+#
+#
+# [cluster_nodes]
+#
+# To extend full trust to other nodes, place their node public keys here.
+# Generally, you should only do this for nodes under common administration.
+# Node public keys start with an 'n'. To give a node a name for identification
+# place a space after the public key and then the name.
+#
+#
+#
+# [max_transactions]
+#
+# Configure the maximum number of transactions to have in the job queue
+#
+# Must be a number between 100 and 1000, defaults to 250
+#
+#
+# [overlay]
+#
+# Controls settings related to the peer to peer overlay.
+#
+# A set of key/value pair parameters to configure the overlay.
+#
+# public_ip = <IP-address>
+#
+# If the server has a known, fixed public IPv4 address,
+# specify that IP address here in dotted decimal notation.
+# Peers will use this information to reject attempts to proxy
+# connections to or from this server.
+#
+# ip_limit = <number>
+#
+# The maximum number of incoming peer connections allowed by a single
+# IP that isn't classified as "private" in RFC1918. The implementation
+# imposes some hard and soft upper limits on this value to prevent a
+# single host from consuming all inbound slots. If the value is not
+# present the server will autoconfigure an appropriate limit.
+#
+# max_unknown_time = <number>
+#
+# The maximum amount of time, in seconds, that an outbound connection
+# is allowed to stay in the "unknown" tracking state. This option can
+# take any value between 300 and 1800 seconds, inclusive. If the option
+# is not present the server will autoconfigure an appropriate limit.
+#
+# The current default (which is subject to change) is 600 seconds.
+#
+# max_diverged_time = <number>
+#
+# The maximum amount of time, in seconds, that an outbound connection
+# is allowed to stay in the "diverged" tracking state. The option can
+# take any value between 60 and 900 seconds, inclusive. If the option
+# is not present the server will autoconfigure an appropriate limit.
+#
+# The current default (which is subject to change) is 300 seconds.
+#
+#
+# [transaction_queue] EXPERIMENTAL
+#
+# This section is EXPERIMENTAL, and should not be
+# present for production configuration settings.
+#
+# A set of key/value pair parameters to tune the performance of the
+# transaction queue.
+#
+# ledgers_in_queue = <number>
+#
+# The queue will be limited to this <number> of average ledgers'
+# worth of transactions. If the queue fills up, the transactions
+# with the lowest fee levels will be dropped from the queue any
+# time a transaction with a higher fee level is added.
+# Default: 20.
+#
+# minimum_queue_size = <number>
+#
+# The queue will always be able to hold at least this <number> of
+# transactions, regardless of recent ledger sizes or the value of
+# ledgers_in_queue. Default: 2000.
+#
+# retry_sequence_percent = <percent>
+#
+# If a client replaces a transaction in the queue (same sequence
+# number as a transaction already in the queue), the new
+# transaction's fee must be more than <percent> percent higher
+# than the original transaction's fee, or meet the current open
+# ledger fee to be considered. Default: 25.
+#
+# minimum_escalation_multiplier =
+#
+# At ledger close time, the median fee level of the transactions
+# in that ledger is used as a multiplier in escalation
+# calculations of the next ledger. This minimum value ensures that
+# the escalation is significant. Default: 500.
+#
+# minimum_txn_in_ledger =
+#
+# Minimum number of transactions that must be allowed into the
+# ledger at the minimum required fee before the required fee
+# escalates. Default: 5.
+#
+# minimum_txn_in_ledger_standalone =
+#
+# Like minimum_txn_in_ledger when rippled is running in standalone
+# mode. Default: 1000.
+#
+# target_txn_in_ledger =
+#
+# Number of transactions allowed into the ledger at the minimum
+# required fee that the queue will "work toward" as long as
+# consensus stays healthy. The limit will grow quickly until it
+# reaches or exceeds this number. After that the limit may still
+# change, but will stay above the target. If consensus is not
+# healthy, the limit will be clamped to this value or lower.
+# Default: 50.
+#
+# maximum_txn_in_ledger =
+#
+# (Optional) Maximum number of transactions that will be allowed
+# into the ledger at the minimum required fee before the required
+# fee escalates. Default: no maximum.
+#
+# normal_consensus_increase_percent =
+#
+# (Optional) When the ledger has more transactions than "expected",
+# and performance is humming along nicely, the expected ledger size
+# is updated to the previous ledger size plus this percentage.
+# Default: 20
+#
+# slow_consensus_decrease_percent =
+#
+# (Optional) When consensus takes longer than appropriate, the
+# expected ledger size is updated to the minimum of the previous
+# ledger size or the "expected" ledger size minus this percentage.
+# Default: 50
+#
+# maximum_txn_per_account =
+#
+# Maximum number of transactions that one account can have in the
+# queue at any given time. Default: 10.
+#
+# minimum_last_ledger_buffer =
+#
+# If a transaction has a LastLedgerSequence, it must be at least
+# this much larger than the current open ledger sequence number.
+# Default: 2.
+#
+# zero_basefee_transaction_feelevel =
+#
+# So we don't deal with infinite fee levels, treat any transaction
+# with a 0 base fee (ie. SetRegularKey password recovery) as
+# having this fee level.
+# Default: 256000.
+#
+#
+#-------------------------------------------------------------------------------
+#
+# 3. Protocol
+#
+#-------------------
+#
+# These settings affect the behavior of the server instance with respect
+# to protocol level activities such as validating and closing ledgers
+# adjusting fees in response to server overloads.
+#
+#
+#
+#
+# [relay_proposals]
+#
+# Controls the relay and processing behavior for proposals received by this
+# server that are issued by validators that are not on the server's UNL.
+#
+# Legal values are:
+# "all" - Relay and process all incoming proposals
+# "trusted" - Relay only trusted proposals, but locally process all
+# "drop_untrusted" - Relay only trusted proposals, do not process untrusted
+#
+# The default is "trusted".
+#
+#
+# [relay_validations]
+#
+# Controls the relay and processing behavior for validations received by this
+# server that are issued by validators that are not on the server's UNL.
+#
+# Legal values are:
+# "all" - Relay and process all incoming validations
+# "trusted" - Relay only trusted validations, but locally process all
+# "drop_untrusted" - Relay only trusted validations, do not process untrusted
+#
+# The default is "all".
+#
+#
+#
+#
+#
+# [ledger_history]
+#
+# The number of past ledgers to acquire on server startup and the minimum to
+# maintain while running.
+#
+# To serve clients, servers need historical ledger data. Servers that don't
+# need to serve clients can set this to "none". Servers that want complete
+# history can set this to "full".
+#
+# This must be less than or equal to online_delete (if online_delete is used)
+#
+# The default is: 256
+#
+#
+#
+# [fetch_depth]
+#
+# The number of past ledgers to serve to other peers that request historical
+# ledger data (or "full" for no limit).
+#
+# Servers that require low latency and high local performance may wish to
+# restrict the historical ledgers they are willing to serve. Setting this
+# below 32 can harm network stability as servers require easy access to
+# recent history to stay in sync. Values below 128 are not recommended.
+#
+# The default is: full
+#
+#
+#
+# [validation_seed]
+#
+# To perform validation, this section should contain either a validation seed
+# or key. The validation seed is used to generate the validation
+# public/private key pair. To obtain a validation seed, use the
+# validation_create command.
+#
+# Examples: RASH BUSH MILK LOOK BAD BRIM AVID GAFF BAIT ROT POD LOVE
+# shfArahZT9Q9ckTf3s1psJ7C7qzVN
+#
+#
+#
+# [validator_token]
+#
+# This is an alternative to [validation_seed] that allows rippled to perform
+# validation without having to store the validator keys on the network
+# connected server. The field should contain a single token in the form of a
+# base64-encoded blob.
+# An external tool is available for generating validator keys and tokens.
+#
+#
+#
+# [validator_key_revocation]
+#
+# If a validator's secret key has been compromised, a revocation must be
+# generated and added to this field. The revocation notifies peers that it is
+# no longer safe to trust the revoked key. The field should contain a single
+# revocation in the form of a base64-encoded blob.
+# An external tool is available for generating and revoking validator keys.
+#
+#
+#
+# [validators_file]
+#
+# Path or name of a file that determines the nodes to always accept as validators.
+#
+# The contents of the file should include a [validators] and/or
+# [validator_list_sites] and [validator_list_keys] entries.
+# [validators] should be followed by a list of validation public keys of
+# nodes, one per line.
+# [validator_list_sites] should be followed by a list of URIs each serving a
+# list of recommended validators.
+# [validator_list_keys] should be followed by a list of keys belonging to
+# trusted validator list publishers. Validator lists fetched from configured
+# sites will only be considered if the list is accompanied by a valid
+# signature from a trusted publisher key.
+#
+# Specify the file by its name or path.
+# Unless an absolute path is specified, it will be considered relative to
+# the folder in which the rippled.cfg file is located.
+#
+# Examples:
+# /home/ripple/validators.txt
+# C:/home/ripple/validators.txt
+#
+# Example content:
+# [validators]
+# n949f75evCHwgyP4fPVgaHqNHxUVN15PsJEZ3B3HnXPcPjcZAoy7
+# n9MD5h24qrQqiyBC8aeqqCWvpiBiYQ3jxSr91uiDvmrkyHRdYLUj
+# n9L81uNCaPgtUJfaHh89gmdvXKAmSt5Gdsw2g1iPWaPkAHW5Nm4C
+# n9KiYM9CgngLvtRCQHZwgC2gjpdaZcCcbt3VboxiNFcKuwFVujzS
+# n9LdgEtkmGB9E2h3K4Vp7iGUaKuq23Zr32ehxiU8FWY7xoxbWTSA
+#
+#
+#
+# [path_search]
+# When searching for paths, the default search aggressiveness. This can take
+# exponentially more resources as the size is increased.
+#
+# The recommended value to support advanced pathfinding is: 7
+#
+# The default is: 2
+#
+# [path_search_fast]
+# [path_search_max]
+# When searching for paths, the minimum and maximum search aggressiveness.
+#
+# If you do not need pathfinding, you can set path_search_max to zero to
+# disable it and avoid some expensive bookkeeping.
+#
+# To support advanced pathfinding the recommended value for
+# 'path_search_fast' is 2, and for 'path_search_max' is 10.
+#
+# The default for 'path_search_fast' is 2. The default for 'path_search_max' is 3.
+#
+# [path_search_old]
+#
+# For clients that use the legacy path finding interfaces, the search
+# aggressiveness to use.
+#
+# The recommended value to support advanced pathfinding is: 7.
+#
+# The default is: 2
+#
+#
+#
+# [fee_default]
+#
+# Sets the base cost of a transaction in drops. Used when the server has
+# no other source of fee information, such as signing transactions offline.
+#
+#
+#
+# [workers]
+#
+# Configures the number of threads for processing work submitted by peers
+# and clients. If not specified, then the value is automatically set to the
+# number of processor threads plus 2 for networked nodes. Nodes running in
+# stand alone mode default to 1 worker.
+#
+# [io_workers]
+#
+# Configures the number of threads for processing raw inbound and outbound IO.
+#
+# [prefetch_workers]
+#
+# Configures the number of threads for performing nodestore prefetching.
+#
+#
+#
+# [network_id]
+#
+# Specify the network which this server is configured to connect to and
+# track. If set, the server will not establish connections with servers
+# that are explicitly configured to track another network.
+#
+# Network identifiers are usually unsigned integers in the range 0 to
+# 4294967295 inclusive. The server also maps the following well-known
+# names to the corresponding numerical identifier:
+#
+# main -> 0
+# testnet -> 1
+# devnet -> 2
+#
+# If this value is not specified the server is not explicitly configured
+# to track a particular network.
+#
+#
+# [ledger_replay]
+#
+# 0 or 1.
+#
+# 0: Disable the ledger replay feature [default]
+# 1: Enable the ledger replay feature. With this feature enabled, when
+# acquiring a ledger from the network, a rippled node only downloads
+# the ledger header and the transactions instead of the whole ledger.
+# And the ledger is built by applying the transactions to the parent
+# ledger.
+#
+#-------------------------------------------------------------------------------
+#
+# 4. HTTPS Client
+#
+#----------------
+#
+# The rippled server instance uses HTTPS GET requests in a variety of
+# circumstances, including but not limited to contacting trusted domains to
+# fetch information such as mapping an email address to a Ripple Payment
+# Network address.
+#
+# [ssl_verify]
+#
+# 0 or 1.
+#
+# 0. HTTPS client connections will not verify certificates.
+# 1. Certificates will be checked for HTTPS client connections.
+#
+# If not specified, this parameter defaults to 1.
+#
+#
+#
+# [ssl_verify_file]
+#
+#
+#
+# A file system path leading to the certificate verification file for
+# HTTPS client requests.
+#
+#
+#
+# [ssl_verify_dir]
+#
+#
+#
+#
+# A file system path leading to a file or directory containing the root
+# certificates that the server will accept for verifying HTTP servers.
+# Used only for outbound HTTPS client connections.
+#
+#-------------------------------------------------------------------------------
+#
+# 6. Database
+#
+#------------
+#
+# rippled creates 4 SQLite databases to hold bookkeeping information
+# about transactions, local credentials, and various other things.
+# It also creates the NodeDB, which holds all the objects that
+# make up the current and historical ledgers.
+#
+# The size of the NodeDB grows in proportion to the amount of new data and the
+# amount of historical data (a configurable setting) so the performance of the
+# underlying storage media where the NodeDB is placed can significantly affect
+# the performance of the server.
+#
+# Partial pathnames will be considered relative to the location of
+# the rippled.cfg file.
+#
+# [node_db] Settings for the Node Database (required)
+#
+# Format (without spaces):
+# One or more lines of case-insensitive key / value pairs:
+# <key> '=' <value>
+# ...
+#
+# Example:
+# type=nudb
+# path=db/nudb
+#
+# The "type" field must be present and controls the choice of backend:
+#
+# type = NuDB
+#
+# NuDB is a high-performance database written by Ripple Labs and optimized
+# for rippled and solid-state drives.
+#
+# NuDB maintains its high speed regardless of the amount of history
+# stored. Online delete may be selected, but is not required. NuDB is
+# available on all platforms that rippled runs on.
+#
+# type = RocksDB
+#
+# RocksDB is an open-source, general-purpose key/value store - see
+# http://rocksdb.org/ for more details.
+#
+# RocksDB is an alternative backend for systems that don't use solid-state
+# drives. Because RocksDB's performance degrades as it stores more data,
+# keeping full history is not advised, and using online delete is
+# recommended.
+#
+# Required keys for NuDB and RocksDB:
+#
+# path Location to store the database
+#
+# Optional keys
+#
+# cache_size Size of cache for database records. Default is 16384.
+# Setting this value to 0 will use the default value.
+#
+# cache_age Length of time in minutes to keep database records
+# cached. Default is 5 minutes. Setting this value to
+# 0 will use the default value.
+#
+# Note: if neither cache_size nor cache_age is
+# specified, the cache for database records will not
+# be created. If only one of cache_size or cache_age
+# is specified, the cache will be created using the
+# default value for the unspecified parameter.
+#
+# Note: the cache will not be created if online_delete
+# is specified.
+#
+# fast_load Boolean. If set, load the last persisted ledger
+# from disk upon process start before syncing to
+# the network. This is likely to improve performance
+# if sufficient IOPS capacity is available.
+# Default 0.
+#
+# Optional keys for NuDB or RocksDB:
+#
+# earliest_seq The default is 32570 to match the XRP ledger
+# network's earliest allowed sequence. Alternate
+# networks may set this value. Minimum value of 1.
+#
+# online_delete Minimum value of 256. Enable automatic purging
+# of older ledger information. Maintain at least this
+# number of ledger records online. Must be greater
+# than or equal to ledger_history.
+#
+# These keys modify the behavior of online_delete, and thus are only
+# relevant if online_delete is defined and non-zero:
+#
+# advisory_delete 0 for disabled, 1 for enabled. If set, the
+# administrative RPC call "can_delete" is required
+# to enable online deletion of ledger records.
+# Online deletion does not run automatically if
+# non-zero and the last deletion was on a ledger
+# greater than the current "can_delete" setting.
+# Default is 0.
+#
+# delete_batch When automatically purging, SQLite database
+# records are deleted in batches. This value
+# controls the maximum size of each batch. Larger
+# batches keep the databases locked for more time,
+# which may cause other functions to fall behind,
+# and thus cause the node to lose sync.
+# Default is 100.
+#
+# back_off_milliseconds
+# Number of milliseconds to wait between
+# online_delete batches to allow other functions
+# to catch up.
+# Default is 100.
+#
+# age_threshold_seconds
+# The online delete process will only run if the
+# latest validated ledger is younger than this
+# number of seconds.
+# Default is 60.
+#
+# recovery_wait_seconds
+# The online delete process checks periodically
+# that rippled is still in sync with the network,
+# and that the validated ledger is less than
+# 'age_threshold_seconds' old. If not, then continue
+# sleeping for this number of seconds and
+# checking until healthy.
+# Default is 5.
+#
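+# Example (with automatic purging enabled; these are the same values suggested
+# in the deployment comments near the bottom of this file):
+#
+# [node_db]
+# type=NuDB
+# path=db/nudb
+# online_delete=512
+# advisory_delete=0
+#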
+# Notes:
+# The 'node_db' entry configures the primary, persistent storage.
+#
+# The 'import_db' is used with the '--import' command line option to
+# migrate the specified database into the current database given
+# in the [node_db] section.
+#
+# [import_db] Settings for performing a one-time import (optional)
+# [database_path] Path to the book-keeping databases.
+#
+# The server creates and maintains 4 to 5 bookkeeping SQLite databases in
+# the 'database_path' location. If you omit this configuration setting,
+# the server creates a directory called "db" located in the same place as
+# your rippled.cfg file.
+# Partial pathnames are relative to the location of the rippled executable.
+#
+# [sqlite] Tuning settings for the SQLite databases (optional)
+#
+# Format (without spaces):
+# One or more lines of case-insensitive key / value pairs:
+# <key> '=' <value>
+# ...
+#
+# Example 1:
+# safety_level=low
+#
+# Example 2:
+# journal_mode=off
+# synchronous=off
+#
+# WARNING: These settings can have significant effects on data integrity,
+# particularly in systemic failure scenarios. It is strongly recommended
+# that they be left at their defaults unless the server is having
+# performance issues during normal operation or during automatic purging
+# (online_delete) operations. A warning will be logged on startup if
+# 'ledger_history' is configured to store more than 10,000,000 ledgers and
+# any of these settings are less safe than the default. This is due to the
+# inordinate amount of time and bandwidth it will take to safely rebuild a
+# corrupted database of that size from other peers.
+#
+# Optional keys:
+#
+# safety_level Valid values: high, low
+# The default is "high", which tunes the SQLite
+# databases in the most reliable mode, and is
+# equivalent to:
+# journal_mode=wal
+# synchronous=normal
+# temp_store=file
+# "low" is equivalent to:
+# journal_mode=memory
+# synchronous=off
+# temp_store=memory
+# These "low" settings trade speed and reduced I/O
+# for a higher risk of data loss. See the
+# individual settings below for more information.
+# This setting may not be combined with any of the
+# other tuning settings: "journal_mode",
+# "synchronous", or "temp_store".
+#
+# journal_mode Valid values: delete, truncate, persist, memory, wal, off
+# The default is "wal", which uses a write-ahead
+# log to implement database transactions.
+# Alternately, "memory" saves disk I/O, but if
+# rippled crashes during a transaction, the
+# database is likely to be corrupted.
+# See https://www.sqlite.org/pragma.html#pragma_journal_mode
+# for more details about the available options.
+# This setting may not be combined with the
+# "safety_level" setting.
+#
+# synchronous Valid values: off, normal, full, extra
+# The default is "normal", which works well with
+# the "wal" journal mode. Alternatively, "off"
+# allows rippled to continue as soon as data is
+# passed to the OS, which can significantly
+# increase speed, but risks data corruption if
+# the host computer crashes before writing that
+# data to disk.
+# See https://www.sqlite.org/pragma.html#pragma_synchronous
+# for more details about the available options.
+# This setting may not be combined with the
+# "safety_level" setting.
+#
+# temp_store Valid values: default, file, memory
+# The default is "file", which will use files
+# for temporary database tables and indices.
+# Alternatively, "memory" may save I/O, but
+# rippled does not currently use many, if any,
+# of these temporary objects.
+# See https://www.sqlite.org/pragma.html#pragma_temp_store
+# for more details about the available options.
+# This setting may not be combined with the
+# "safety_level" setting.
+#
+# page_size Valid values: integer (MUST be power of 2 between 512 and 65536)
+# The default is 4096 bytes. This setting determines
+# the size of a page in the transaction.db file.
+# See https://www.sqlite.org/pragma.html#pragma_page_size
+# for more details about the available options.
+#
+# journal_size_limit Valid values: integer
+# The default is 1582080. This setting limits
+# the size of the journal for transaction.db file. When the limit is
+# reached, older entries will be deleted.
+# See https://www.sqlite.org/pragma.html#pragma_journal_size_limit
+# for more details about the available options.
+#
+#
+#-------------------------------------------------------------------------------
+#
+# 7. Diagnostics
+#
+#---------------
+#
+# These settings are designed to help server administrators diagnose
+# problems, and obtain detailed information about the activities being
+# performed by the rippled process.
+#
+#
+#
+# [debug_logfile]
+#
+# Specifies where a debug logfile is kept. By default, no debug log is kept.
+# Unless absolute, the path is relative to the directory containing this file.
+#
+# Example: debug.log
+#
+#
+#
+# [insight]
+#
+# Configuration parameters for the Beast.Insight stats collection module.
+#
+# Insight is a module that collects information from the areas of rippled
+# that have instrumentation. The configuration parameters control where the
+# collection metrics are sent. The parameters are expressed as key = value
+# pairs with no white space. The main parameter is the choice of server:
+#
+# "server"
+#
+# Choice of server to send metrics to. Currently the only choice is
+# "statsd" which sends UDP packets to a StatsD daemon, which must be
+# running while rippled is running. More information on StatsD is
+# available here:
+# https://github.com/b/statsd_spec
+#
+# When server=statsd, these additional keys are used:
+#
+# "address" The UDP address and port of the listening StatsD server,
+# in the format, n.n.n.n:port.
+#
+# "prefix" A string prepended to each collected metric. This is used
+# to distinguish between different running instances of rippled.
+#
+# If this section is missing, or the server type is unspecified or unknown,
+# statistics are not collected or reported.
+#
+# Example:
+#
+# [insight]
+# server=statsd
+# address=192.168.0.95:4201
+# prefix=my_validator
+#
+# [perf]
+#
+# Configuration of performance logging. If enabled, write Json-formatted
+# performance-oriented data periodically to a distinct log file.
+#
+# "perf_log" A string specifying the pathname of the performance log
+# file. A relative pathname will log relative to the
+# configuration directory. Required to enable
+# performance logging.
+#
+# "log_interval" Integer value for number of seconds between writing
+# to performance log. Default 1.
+#
+# Example:
+# [perf]
+# perf_log=/var/log/rippled/perf.log
+# log_interval=2
+#
+#-------------------------------------------------------------------------------
+#
+# 8. Voting
+#
+#----------
+#
+# The vote settings configure settings for the entire Ripple network.
+# While a single instance of rippled cannot unilaterally enforce network-wide
+# settings, these choices become part of the instance's vote during the
+# consensus process for each voting ledger.
+#
+# [voting]
+#
+# A set of key/value pair parameters used during voting ledgers.
+#
+# reference_fee = <drops>
+#
+# The cost of the reference transaction fee, specified in drops.
+# The reference transaction is the simplest form of transaction.
+# It represents an XRP payment between two parties.
+#
+# If this parameter is unspecified, rippled will use an internal
+# default. Don't change this without understanding the consequences.
+#
+# Example:
+# reference_fee = 10 # 10 drops
+#
+# account_reserve = <drops>
+#
+# The account reserve requirement is specified in drops. The portion of an
+# account's XRP balance that is at or below the reserve may only be
+# spent on transaction fees, and not transferred out of the account.
+#
+# If this parameter is unspecified, rippled will use an internal
+# default. Don't change this without understanding the consequences.
+#
+# Example:
+# account_reserve = 10000000 # 10 XRP
+#
+# owner_reserve = <drops>
+#
+# The owner reserve is the amount of XRP reserved in the account for
+# each ledger item owned by the account. Ledger items an account may
+# own include trust lines, open orders, and tickets.
+#
+# If this parameter is unspecified, rippled will use an internal
+# default. Don't change this without understanding the consequences.
+#
+# Example:
+# owner_reserve = 2000000 # 2 XRP
+#
+#-------------------------------------------------------------------------------
+#
+# 9. Misc Settings
+#
+#-----------------
+#
+# [node_size]
+#
+# Tunes the servers based on the expected load and available memory. Legal
+# sizes are "tiny", "small", "medium", "large", and "huge". We recommend
+# you start at the default and raise the setting if you have extra memory.
+#
+# The code attempts to automatically determine the appropriate size for
+# this parameter based on the amount of RAM and the number of execution
+# cores available to the server. The current decision matrix is:
+#
+# | | Cores |
+# |---------|------------------------|
+# | RAM | 1 | 2 or 3 | ≥ 4 |
+# |---------|------|--------|--------|
+# | < ~8GB | tiny | tiny | tiny |
+# | < ~12GB | tiny | small | small |
+# | < ~16GB | tiny | small | medium |
+# | < ~24GB | tiny | small | large |
+# | < ~32GB | tiny | small | huge |
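+#
+# For example, a host with 14 GB of RAM and 4 or more cores maps to "medium"
+# in the table above, and can be configured explicitly as:
+#
+# [node_size]
+# medium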
+#
+# [signing_support]
+#
+# Specifies whether the server will accept "sign" and "sign_for" commands
+# from remote users. Even if the commands are sent over a secure protocol
+# like secure websocket, this should generally be discouraged, because it
+# requires sending the secret to use for signing to the server. In order
+# to sign transactions, users should prefer to use a standalone signing
+# tool instead.
+#
+# This flag has no effect on the "sign" and "sign_for" command line options
+# that rippled makes available.
+#
+# The default value of this field is "false"
+#
+# Example:
+#
+# [signing_support]
+# true
+#
+# [crawl]
+#
+# List of options to control what data is reported through the /crawl endpoint
+# See https://xrpl.org/peer-crawler.html
+#
+#
+#
+# Enable or disable access to /crawl requests. Default is '1' which
+# enables access.
+#
+# overlay = <0|1>
+#
+# Report information about peers this server is connected to, similar
+# to the "peers" RPC API. Default is '1' which means to report peer
+# overlay info.
+#
+# server = <0|1>
+#
+# Report information about the local server, similar to the "server_state"
+# RPC API. Default is '1' which means to report local server info.
+#
+# counts = <0|1>
+#
+# Report information about the local server health counters, similar to
+# the "get_counts" RPC API. Default is '0' which means not to report
+# server counts.
+#
+# unl = <0|1>
+#
+# Report information about the local server's validator lists, similar to
+# the "validators" and "validator_list_sites" RPC APIs. Default is '1'
+# which means to report server validator lists.
+#
+# Examples:
+#
+# [crawl]
+# 0
+#
+# [crawl]
+# overlay = 1
+# server = 1
+# counts = 0
+# unl = 1
+#
+# [vl]
+#
+# Options to control what data is reported through the /vl endpoint
+# See [...]
+#
+# enable = <0|1>
+#
+# Enable or disable access to /vl requests. Default is '1' which
+# enables access.
+#
+# [beta_rpc_api]
+#
+# 0 or 1.
+#
+# 0: Disable the beta API version for JSON-RPC and WebSocket [default]
+# 1: Enable the beta API version for testing. The beta API version
+# contains breaking changes that require a new API version number.
+# They are not ready for public consumption.
+#
+#-------------------------------------------------------------------------------
+#
+# 10. Example Settings
+#
+#--------------------
+#
+# Administrators can use these values as a starting point for configuring
+# their instance of rippled, but each value should be checked to make sure
+# it meets the business requirements for the organization.
+#
+# Server
+#
+# These example configuration settings create these ports:
+#
+# "peer"
+#
+# Peer protocol open to everyone. This is required to accept
+# incoming rippled connections. This does not affect automatic
+# or manual outgoing Peer protocol connections.
+#
+# "rpc"
+#
+# Administrative RPC commands over HTTPS, when originating from
+# the same machine (via the loopback adapter at 127.0.0.1).
+#
+# "wss_admin"
+#
+# Admin level API commands over Secure Websockets, when originating
+# from the same machine (via the loopback adapter at 127.0.0.1).
+#
+# "grpc"
+#
+# ETL commands for Clio. We recommend setting secure_gateway
+# in this section to a comma-separated list of the addresses
+# of your Clio servers, in order to bypass rippled's rate limiting.
+#
+# This port is commented out but can be enabled by removing
+# the '#' from each corresponding line including the entry under [server]
+#
+# "wss_public"
+#
+# Guest level API commands over Secure Websockets, open to everyone.
+#
+# For HTTPS and Secure Websockets ports, if no certificate and key file
+# are specified then a self-signed certificate will be generated on startup.
+# If you have a certificate and key file, uncomment the corresponding lines
+# and ensure the paths to the files are correct.
+#
+# NOTE
+#
+# To accept connections on well known ports such as 80 (HTTP) or
+# 443 (HTTPS), most operating systems will require rippled to
+# run with administrator privileges, or else rippled will not start.
+
+[server]
+port_rpc_admin_local
+port_peer
+port_ws_admin_local
+port_ws_public
+#ssl_key = /etc/ssl/private/server.key
+#ssl_cert = /etc/ssl/certs/server.crt
+
+[port_rpc_admin_local]
+port = 5005
+ip = 127.0.0.1
+admin = 127.0.0.1
+protocol = http
+
+[port_peer]
+port = 51235
+ip = 0.0.0.0
+# alternatively, to accept connections on IPv4 + IPv6, use:
+#ip = ::
+protocol = peer
+
+[port_ws_admin_local]
+port = 6006
+ip = 127.0.0.1
+admin = 127.0.0.1
+protocol = ws
+send_queue_limit = 500
+
+[port_grpc]
+port = 50051
+ip = 127.0.0.1
+secure_gateway = 127.0.0.1
+
+[port_ws_public]
+port = 6005
+ip = 0.0.0.0
+protocol = wss,ws,http
+send_queue_limit = 500
+
+#-------------------------------------------------------------------------------
+
+# This is primary persistent datastore for rippled. This includes transaction
+# metadata, account states, and ledger headers. Helpful information can be
+# found at https://xrpl.org/capacity-planning.html#node-db-type
+# type=NuDB is recommended for non-validators with fast SSDs. Validators or
+# slow / spinning disks should use RocksDB. Caution: Spinning disks are
+# not recommended. They do not perform well enough to consistently remain
+# synced to the network.
+# online_delete=512 is recommended to delete old ledgers while maintaining at
+# least 512.
+# advisory_delete=0 allows the online delete process to run automatically
+# when the node has approximately two times the "online_delete" value of
+# ledgers. No external administrative command is required to initiate
+# deletion.
+
+[node_db]
+type=NuDB
+path=/var/lib/rippled/db/nudb
+online_delete=<>
+advisory_delete=<>
+
+[database_path]
+/var/lib/rippled/db
+
+
+# This needs to be an absolute directory reference, not a relative one.
+# Modify this value as required.
+[debug_logfile]
+/var/log/rippled/debug.log
+
+# To use the XRP test network
+# (see https://xrpl.org/connect-your-rippled-to-the-xrp-test-net.html),
+# use the following [ips] section:
+# [ips]
+# r.altnet.rippletest.net 51235
+[ips]
+<>
+[network_id]
+<>
+# File containing trusted validator keys or validator list publishers.
+# Unless an absolute path is specified, it will be considered relative to the
+# folder in which the rippled.cfg file is located.
+[validators_file]
+validators.txt
+
+# Turn down default logging to save disk space in the long run.
+# Valid values here are trace, debug, info, warning, error, and fatal
+[rpc_startup]
+{ "command": "log_level", "severity": "warning" }
+
+# If ssl_verify is 1, certificates will be validated.
+# To allow the use of self-signed certificates for development or internal use,
+# set ssl_verify to 0.
+[ssl_verify]
+1
+[crawl]
+1
diff --git a/lib/xrp/lib/assets/rippled/rippled.cfg.template b/lib/xrp/lib/assets/rippled/rippled.cfg.template
new file mode 100644
index 00000000..7637f60c
--- /dev/null
+++ b/lib/xrp/lib/assets/rippled/rippled.cfg.template
@@ -0,0 +1,33 @@
+[server]
+port_peer
+port_rpc_admin_local
+port_ws_admin_local
+port_ws_public
+[port_ws_public]
+[port_rpc_admin_local]
+[port_peer]
+[port_ws_admin_local]
+
+[node_db]
+[database_path]
+/var/lib/rippled/db
+# This needs to be an absolute directory reference, not a relative one.
+# Modify this value as required.
+[debug_logfile]
+/var/log/rippled/debug.log
+[ips]
+[network_id]
+[validators_file]
+validators.txt
+
+# Turn down default logging to save disk space in the long run.
+# Valid values here are trace, debug, info, warning, error, and fatal
+[rpc_startup]
+{ "command": "log_level", "severity": "warning" }
+# If ssl_verify is 1, certificates will be validated.
+# To allow the use of self-signed certificates for development or internal use,
+# set ssl_verify to 0.
+[ssl_verify]
+1
+[crawl]
+1
diff --git a/lib/xrp/lib/assets/rippled/rippledconfig.py b/lib/xrp/lib/assets/rippled/rippledconfig.py
new file mode 100644
index 00000000..9e3ae5b0
--- /dev/null
+++ b/lib/xrp/lib/assets/rippled/rippledconfig.py
@@ -0,0 +1,56 @@
+# amazonq-ignore-next-line
+rippled_cfg_file = "/opt/ripple/etc/rippled.cfg"
+rippled_validator_file = "/opt/ripple/etc/validators.txt"
+xrp_defaults = {
+ "server_ports": {
+ "port_peer": {
+ "port": "51235",
+ "protocol": "peer",
+ "ip": "0.0.0.0",
+ },
+ "port_rpc_admin_local": {
+ "port": "5005",
+ "ip": "127.0.0.1",
+ "admin": "127.0.0.1",
+ "protocol": "http,https",
+ },
+ "port_ws_admin_local": {
+ "port": "6006",
+ "ip": "127.0.0.1",
+ "admin": "127.0.0.1",
+ "protocol": "ws,wss",
+ },
+ "port_ws_public": {
+ "port": "6005",
+ "ip": "0.0.0.0",
+ "protocol": "ws,wss,http",
+ },
+ },
+ "db_defaults": {
+ "node_db": {
+ "type": "NuDB",
+ "path": "/var/lib/rippled/db/nudb",
+ "online_delete": "512",
+ "advisory_delete": "1",
+ }
+ },
+ "network_defaults": {
+ "mainnet": {
+ "network_id": "main",
+ "ssl_verify": "1",
+ "validator_list_sites": ["https://vl.ripple.com"],
+ "validator_list_keys": [
+ "ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734"
+ ],
+ },
+ "testnet": {
+ "network_id": "testnet",
+ "ssl_verify": "0",
+ "ips": "s.altnet.rippletest.net 51235",
+ "validator_list_sites": ["https://vl.altnet.rippletest.net"],
+ "validator_list_keys": [
+ "ED264807102805220DA0F312E71FC2C69E1552C9C5790F6C25E3729DEB573D5860"
+ ],
+ },
+ },
+}
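+
+# Minimal usage sketch (hypothetical): configBuilder.py, which node.sh invokes
+# but which is not shown here, is expected to import these defaults.
+#   from rippledconfig import xrp_defaults, rippled_cfg_file
+#   net = xrp_defaults["network_defaults"]["testnet"]
+#   print(net["validator_list_sites"][0])   # https://vl.altnet.rippletest.net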
diff --git a/lib/xrp/lib/assets/rippled/validators.txt.template b/lib/xrp/lib/assets/rippled/validators.txt.template
new file mode 100644
index 00000000..df9f68cf
--- /dev/null
+++ b/lib/xrp/lib/assets/rippled/validators.txt.template
@@ -0,0 +1,59 @@
+#
+# Default validators.txt
+#
+# This file is located in the same folder as your rippled.cfg file
+# and defines which validators your server trusts not to collude.
+#
+# This file is UTF-8 with DOS, UNIX, or Mac style line endings.
+# Blank lines and lines starting with a '#' are ignored.
+#
+#
+#
+# [validators]
+#
+# List of the validation public keys of nodes to always accept as validators.
+#
+# Manually listing validator keys is not recommended for production networks.
+# See validator_list_sites and validator_list_keys below.
+#
+# Examples:
+# n9KorY8QtTdRx7TVDpwnG9NvyxsDwHUKUEeDLY3AkiGncVaSXZi5
+# n9MqiExBcoG19UXwoLjBJnhsxEhAZMuWwJDRdkyDz1EkEkwzQTNt
+#
+# [validator_list_sites]
+#
+# List of URIs serving lists of recommended validators.
+#
+# Examples:
+# https://vl.ripple.com
+# https://vl.xrplf.org
+# http://127.0.0.1:8000
+# file:///etc/opt/ripple/vl.txt
+#
+# [validator_list_keys]
+#
+# List of keys belonging to trusted validator list publishers.
+# Validator lists fetched from configured sites will only be considered
+# if the list is accompanied by a valid signature from a trusted
+# publisher key.
+# Validator list keys should be hex-encoded.
+#
+# Examples:
+# ED2677ABFFD1B33AC6FBC3062B71F1E8397C1505E1C42C64D11AD1B28FF73F4734
+# ED307A760EE34F2D0CAA103377B1969117C38B8AA0AA1E2A24DAC1F32FC97087ED
+#
+
+# The default validator list publishers that the rippled instance
+# trusts.
+#
+# WARNING: Changing these values can cause your rippled instance to see a
+# validated ledger that contradicts other rippled instances'
+# validated ledgers (aka a ledger fork) if your validator list(s)
+# do not sufficiently overlap with the list(s) used by others.
+# See: https://arxiv.org/pdf/1802.07242.pdf
+
+[validator_list_sites]
+<>
+
+[validator_list_keys]
+<>
diff --git a/lib/xrp/lib/assets/user-data/check_xrp_sequence.sh b/lib/xrp/lib/assets/user-data/check_xrp_sequence.sh
new file mode 100644
index 00000000..706bd3be
--- /dev/null
+++ b/lib/xrp/lib/assets/user-data/check_xrp_sequence.sh
@@ -0,0 +1,252 @@
+#!/bin/bash
+###############################################################################
+# check_xrp_sequence.sh
+#
+# This script retrieves the current validated ledger sequence number from a local
+# XRP node and sends it to AWS CloudWatch as a metric. Includes retry logic,
+# proper error handling, and ensures only one instance runs at a time.
+#
+# Requirements:
+# - AWS CLI
+# - jq
+# - curl
+# - Local rippled node running on port 5005
+#
+# The script is idempotent and includes the following features:
+# - Lockfile to prevent multiple concurrent executions
+# - Retry mechanism for all external calls
+# - Proper signal handling and cleanup
+# - Consistent logging
+# - Comprehensive error handling
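+#
+# The two rippled JSON-RPC calls used below ("ledger_current" and "server_info")
+# return payloads shaped roughly as follows (sample values, abridged):
+#   {"result": {"ledger_current_index": 93000000, ...}}
+#   {"result": {"info": {"validated_ledger": {"seq": 92999998, ...}}}}
+# The difference between the two values is published as the XRP_Delta_Sequence
+# metric alongside XRP_Current_Sequence.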
+###############################################################################
+
+set -euo pipefail
+
+# Configuration
+MAX_RETRIES=3
+RETRY_DELAY=5
+NAMESPACE="CWAgent"
+CURRENT_METRIC_NAME="XRP_Current_Sequence"
+DELTA_METRIC_NAME="XRP_Delta_Sequence"
+LOCKFILE="/tmp/check_xrp_sequence.lock"
+LOCK_FD=200
+
+# Logging functions
+log() {
+ local level=$1
+ local message=$2
+ echo "[$(date -u +"%Y-%m-%dT%H:%M:%SZ")] [${level}] ${message}"
+}
+
+log_info() {
+ log "INFO" "$1"
+}
+
+log_error() {
+ log "ERROR" "$1"
+}
+
+log_warning() {
+ log "WARN" "$1"
+}
+
+# Error handling
+handle_error() {
+ local exit_code=$1
+ local error_msg=$2
+ log_error "${error_msg}"
+ exit "${exit_code}"
+}
+
+# Function to clean up lock file
+cleanup() {
+ local exit_code=$?
+ log_info "Cleaning up..."
+ # Release lock file
+ flock -u ${LOCK_FD}
+ rm -f "${LOCKFILE}"
+ exit ${exit_code}
+}
+
+# Handle signals
+trap cleanup EXIT
+trap 'exit 1' INT TERM
+# Get instance metadata with retries
+get_metadata() {
+ local endpoint=$1
+ local retry_count=0
+ local result
+
+ while [[ ${retry_count} -lt ${MAX_RETRIES} ]]; do
+ if result=$(curl -s -f -H "X-aws-ec2-metadata-token: $(curl -s -f -X PUT 'http://169.254.169.254/latest/api/token' -H 'X-aws-ec2-metadata-token-ttl-seconds: 21600')" "http://169.254.169.254/latest/meta-data/${endpoint}"); then
+ echo "${result}"
+ return 0
+ fi
+ log_warning "Failed to get metadata from ${endpoint}, attempt $((retry_count + 1))/${MAX_RETRIES}"
+ retry_count=$((retry_count + 1))
+ sleep ${RETRY_DELAY}
+ done
+
+ log_error "Failed to retrieve metadata from ${endpoint} after ${MAX_RETRIES} attempts"
+ return 1
+}
+
+# Check dependencies
+check_dependencies() {
+ log_info "Checking dependencies..."
+ local missing_deps=()
+
+ for cmd in aws jq curl; do
+ if ! command -v "${cmd}" >/dev/null 2>&1; then
+ missing_deps+=("${cmd}")
+ fi
+ done
+
+ if [[ ${#missing_deps[@]} -gt 0 ]]; then
+ log_error "Missing required dependencies: ${missing_deps[*]}"
+ return 1
+ fi
+
+ log_info "All dependencies satisfied"
+ return 0
+}
+
+# Function to get current sequence from rippled with retries
+get_current_sequence() {
+ local retry_count=0
+ local seq
+
+ while [[ ${retry_count} -lt ${MAX_RETRIES} ]]; do
+ if seq=$(curl -s -f -H 'Content-Type: application/json' \
+ -d '{"method":"ledger_current","params":[{}]}' \
+ http://localhost:5005/ | \
+ jq -e '.result.ledger_current_index // 0'); then
+ if [[ "${seq}" != "0" ]]; then
+ echo "${seq}"
+ return 0
+ fi
+ fi
+ log_warning "Failed to get sequence, attempt $((retry_count + 1))/${MAX_RETRIES}"
+ retry_count=$((retry_count + 1))
+ sleep ${RETRY_DELAY}
+ done
+
+ log_error "Failed to get current sequence after ${MAX_RETRIES} attempts"
+ return 1
+}
+
+get_validated_sequence() {
+ local retry_count=0
+ local seq
+
+ while [[ ${retry_count} -lt ${MAX_RETRIES} ]]; do
+ if seq=$(curl -s -f -H 'Content-Type: application/json' \
+ -d '{"method":"server_info","params":[{}]}' \
+ http://localhost:5005/ | \
+ jq -e '.result.info.validated_ledger.seq // 0'); then
+ if [[ "${seq}" != "0" ]]; then
+ echo "${seq}"
+ return 0
+ fi
+ fi
+ log_warning "Failed to get sequence, attempt $((retry_count + 1))/${MAX_RETRIES}"
+ retry_count=$((retry_count + 1))
+ sleep ${RETRY_DELAY}
+ done
+
+    log_error "Failed to get validated sequence after ${MAX_RETRIES} attempts"
+ return 1
+}
+
+# Function to send metric to CloudWatch with retries
+send_to_cloudwatch() {
+ local sequence=$1
+ local metric_name=$2
+ local retry_count=0
+
+ while [[ ${retry_count} -lt ${MAX_RETRIES} ]]; do
+ if aws cloudwatch put-metric-data \
+ --namespace "${NAMESPACE}" \
+ --metric-name "${metric_name}" \
+ --value "${sequence}" \
+ --region "${REGION}" \
+ --dimensions "InstanceId=${INSTANCE_ID}" \
+ --timestamp "${TIMESTAMP}"; then
+ log_info "Successfully sent sequence ${sequence} to CloudWatch"
+ return 0
+ fi
+ log_warning "Failed to send metrics to CloudWatch, attempt $((retry_count + 1))/${MAX_RETRIES}"
+ retry_count=$((retry_count + 1))
+ sleep ${RETRY_DELAY}
+ done
+
+ log_error "Failed to send metrics to CloudWatch after ${MAX_RETRIES} attempts"
+ return 1
+}
+
+# Initialize environment variables
+init_environment() {
+ log_info "Initializing environment variables"
+ REGION=$(get_metadata "placement/region") || return 1
+ INSTANCE_ID=$(get_metadata "instance-id") || return 1
+ TIMESTAMP=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
+ return 0
+}
+
+# Main function
+main() {
+ local sequence
+
+ log_info "Starting XRP sequence check"
+
+ # Ensure only one instance is running
+ exec {LOCK_FD}>"${LOCKFILE}"
+ if ! flock -n "${LOCK_FD}"; then
+ log_error "Another instance is already running"
+ return 1
+ fi
+
+ # Check dependencies first
+ if ! check_dependencies; then
+ return 1
+ fi
+
+ # Initialize environment variables
+ if ! init_environment; then
+ return 1
+ fi
+
+ # Get current sequence
+ if ! current_sequence=$(get_current_sequence); then
+ return 1
+ fi
+
+    # Get validated sequence
+ if ! validated_sequence=$(get_validated_sequence); then
+ return 1
+ fi
+
+ log_info "Retrieved current sequence: ${current_sequence}"
+ log_info "Retrieved validated sequence: ${validated_sequence}"
+
+ # Send to CloudWatch
+ if ! send_to_cloudwatch "${current_sequence}" "${CURRENT_METRIC_NAME}"; then
+ return 1
+ fi
+
+    # Compute the delta between current and validated sequence and send it to CloudWatch
+ delta_sequence=$((current_sequence - validated_sequence))
+ if ! send_to_cloudwatch "${delta_sequence}" "${DELTA_METRIC_NAME}"; then
+ return 1
+ fi
+
+ log_info "XRP sequence check completed successfully"
+ return 0
+}
+
+# Execute main function
+if ! main; then
+ handle_error 1 "Failed to complete XRP sequence check"
+fi
+
+exit 0
diff --git a/lib/xrp/lib/assets/user-data/node.sh b/lib/xrp/lib/assets/user-data/node.sh
new file mode 100644
index 00000000..240d8cbd
--- /dev/null
+++ b/lib/xrp/lib/assets/user-data/node.sh
@@ -0,0 +1,447 @@
+#!/bin/bash
+
+# Enable error handling and debugging
+set -eo pipefail
+
+exec > >(tee /var/log/user-data.log | logger -t user-data -s 2>/dev/console) 2>&1
+###################
+# Constants
+###################
+readonly RIPPLED_CONFIG_DIR="/opt/ripple/etc"
+readonly YUM_REPO_DIR="/etc/yum.repos.d"
+readonly ENV_FILE="/etc/environment"
+readonly RIPPLED_USER="rippled"
+readonly RIPPLED_GROUP="rippled"
+readonly RIPPLED_UID=1111
+readonly RIPPLED_GID=1111
+readonly MOUNT_POINT="/var/lib/rippled"
+readonly MAX_RETRIES=3
+readonly RETRY_DELAY=5
+readonly DATA_VOLUME_NAME="/dev/sdf"
+readonly ASSETS_DIR="/root/assets"
+readonly ASSETS_ZIP="/root/assets.zip"
+
+###################
+# Logging Functions
+###################
+log() {
+ local level="$1"
+ local message="$2"
+ echo "[$(date +'%Y-%m-%d %H:%M:%S')] [${level}] ${message}" | tee -a /var/log/rippled-setup.log
+}
+
+log_info() {
+ log "INFO" "$1"
+}
+
+log_error() {
+ log "ERROR" "$1" >&2
+}
+
+log_warning() {
+ log "WARNING" "$1"
+}
+
+###################
+# Error Handling
+###################
+handle_error() {
+ local exit_code=$?
+ local line_number=$1
+ log_error "Failed at line ${line_number} with exit code ${exit_code}"
+
+ exit "${exit_code}"
+}
+
+trap 'handle_error ${LINENO}' ERR
+
+###################
+# Environment Setup
+###################
+setup_environment() {
+ log_info "Setting up environment variables"
+
+ # Backup existing environment file
+ if [[ -f "${ENV_FILE}" ]]; then
+ cp "${ENV_FILE}" "${ENV_FILE}.$(date +%Y%m%d_%H%M%S).backup"
+ fi
+
+ declare -A env_vars=(
+ ["AWS_REGION"]="_AWS_REGION_"
+ ["ASSETS_S3_PATH"]="_ASSETS_S3_PATH_"
+ ["STACK_NAME"]="_STACK_NAME_"
+ ["STACK_ID"]="_STACK_ID_"
+ ["RESOURCE_ID"]="_NODE_CF_LOGICAL_ID_"
+ # ["HUB_NETWORK_IP"]="_HUB_NETWORK_IP_"
+ ["XRP_NETWORK"]="_HUB_NETWORK_ID_"
+ # ["VALIDATOR_LIST_SITES"]="_VALIDATOR_LIST_SITES_"
+ # ["VALIDATOR_LIST_KEYS"]="_VALIDATOR_LIST_KEYS_"
+ # ["ONLINE_DELETE"]="_ONLINE_DELETE_"
+ # ["ADVISORY_DELETE"]="_ADVISORY_DELETE_"
+ ["DATA_VOLUME_TYPE"]="_DATA_VOLUME_TYPE_"
+ ["DATA_VOLUME_SIZE"]="_DATA_VOLUME_SIZE_"
+ ["LIFECYCLE_HOOK_NAME"]="_LIFECYCLE_HOOK_NAME_"
+ ["ASG_NAME"]="_ASG_NAME_"
+ )
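+
+    # Note: the _PLACEHOLDER_ tokens above are substituted by the CDK node
+    # stacks (see the .replace(...) calls in ha-nodes-stack.ts) before this
+    # script is attached as EC2 user data. After substitution, a line written
+    # to /etc/environment looks like, for example:
+    #   export AWS_REGION=us-east-2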
+
+ # Clear and recreate environment file
+ : >"${ENV_FILE}"
+
+ for key in "${!env_vars[@]}"; do
+ local value="${env_vars[${key}]}"
+ if [[ "${value}" =~ [[:space:]] || "${value}" =~ [^a-zA-Z0-9_./-] ]]; then
+ echo "export ${key}=\"${value}\"" >>"${ENV_FILE}"
+ else
+ echo "export ${key}=${value}" >>"${ENV_FILE}"
+ fi
+ done
+
+ # Source the environment file
+ # shellcheck source=/dev/null
+ source "${ENV_FILE}"
+}
+install_rippled() {
+ log_info "Installing/updating rippled on Amazon Linux 2..."
+ setup_environment
+
+ # Setup repository if not exists
+ if [[ ! -f "$YUM_REPO_DIR/ripple.repo" ]]; then
+ log_info "Setting up ripple repository..."
+ sudo cp ${ASSETS_DIR}/rippled/ripple.repo "$YUM_REPO_DIR/ripple.repo"
+ fi
+
+ sudo yum -y update
+
+ # Install/update rippled if needed
+ if ! rpm -q rippled &>/dev/null; then
+ log_info "Installing rippled package..."
+ sudo yum install -y rippled
+ else
+ log_info "rippled package already installed, checking for updates..."
+ sudo yum update -y rippled
+ fi
+
+    log_info "Building and writing rippled.cfg and validators.txt"
+ python3 ${ASSETS_DIR}/rippled/configBuilder.py ${ASSETS_DIR}
+
+}
+
+# Function to start and verify rippled service
+start_rippled() {
+ echo "Starting rippled service..."
+    sudo systemctl enable --now rippled
+
+ # Verify service status
+ if ! sudo systemctl status rippled; then
+ echo "Failed to start rippled service"
+ return 1
+ fi
+ echo "rippled service started successfully"
+}
+
+###################
+# System Setup
+###################
+install_dependencies() {
+ log_info "Installing system dependencies"
+
+ local packages=(
+ "cmake"
+ "git"
+ "gcc-c++"
+ "snappy-devel"
+ "libicu-devel"
+ "zlib-devel"
+ "jq"
+ "unzip"
+ "amazon-cloudwatch-agent"
+ "openssl-devel"
+ "libffi-devel"
+ "bzip2-devel"
+ "wget"
+ )
+
+ # Check for packages that need to be installed
+ local packages_to_install=()
+ for package in "${packages[@]}"; do
+ if ! rpm -q "$package" &>/dev/null; then
+ log_info "Package $package needs to be installed"
+ packages_to_install+=("$package")
+ else
+ log_info "Package $package is already installed"
+ fi
+ done
+
+ # If no packages need installation, we're done
+ if [ ${#packages_to_install[@]} -eq 0 ]; then
+ log_info "All required packages are already installed"
+ return 0
+ fi
+
+ local retry_count=0
+ while [[ ${retry_count} -lt ${MAX_RETRIES} ]]; do
+ if sudo yum update -y &&
+ sudo yum groupinstall -y "Development Tools" &&
+        sudo yum install -y "${packages_to_install[@]}"; then
+ return 0
+ fi
+
+ retry_count=$((retry_count + 1))
+ log_warning "Retry ${retry_count}/${MAX_RETRIES} for package installation"
+ sleep "${RETRY_DELAY}"
+ done
+ log_error "Failed to install dependencies after ${MAX_RETRIES} attempts"
+ return 1
+}
+
+###################
+# User Management
+###################
+setup_user_and_group() {
+ log_info "Setting up rippled user and group"
+
+ # Create group if it doesn't exist
+ if ! getent group "${RIPPLED_GROUP}" >/dev/null; then
+ sudo groupadd -g "${RIPPLED_GID}" "${RIPPLED_GROUP}"
+ fi
+
+ # Create user if it doesn't exist
+ if ! getent passwd "${RIPPLED_USER}" >/dev/null; then
+ sudo useradd -u "${RIPPLED_UID}" -g "${RIPPLED_GID}" -m -s /bin/bash "${RIPPLED_USER}"
+ fi
+
+ # Ensure home directory permissions are correct
+ sudo chown -R "${RIPPLED_USER}:${RIPPLED_GROUP}" "/home/${RIPPLED_USER}"
+}
+
+###################
+# Asset Management
+###################
+setup_assets() {
+ log_info "Downloading and extracting assets"
+
+ # Clean up any existing assets
+ rm -rf "${ASSETS_DIR}" "${ASSETS_ZIP}"
+
+ # Download and extract assets with retry logic
+ local retry_count=0
+ while [[ ${retry_count} -lt ${MAX_RETRIES} ]]; do
+ if aws s3 cp "${ASSETS_S3_PATH}" "${ASSETS_ZIP}" --region "${AWS_REGION}" &&
+ unzip -q "${ASSETS_ZIP}" -d "${ASSETS_DIR}"; then
+ return 0
+ fi
+
+ retry_count=$((retry_count + 1))
+ log_warning "Retry ${retry_count}/${MAX_RETRIES} for asset download"
+ sleep "${RETRY_DELAY}"
+ done
+
+ log_error "Failed to setup assets after ${MAX_RETRIES} attempts"
+ return 1
+}
+
+###################
+# Volume Management
+###################
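+# Helper: returns the name of the block device whose size in bytes matches the
+# value passed in (e.g. "get_data_volume_id ${DATA_VOLUME_SIZE}"). Note that the
+# steps in main() below currently reference DATA_VOLUME_NAME directly instead.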
+get_data_volume_id() {
+    local volume_size="${1}"
+    lsblk -lnb | awk -v VOLUME_SIZE_BYTES="${volume_size}" '{if ($4 == VOLUME_SIZE_BYTES) {print $1}}'
+}
+
+setup_data_volume() {
+ log_info "Setting up data volume"
+
+ local volume_id
+ volume_id="$DATA_VOLUME_NAME"
+
+ log_info "Data volume ID: ${volume_id}"
+
+ # Verify volume exists
+ if [[ -z "${volume_id}" ]]; then
+ log_error "Data volume not found"
+ return 1
+ fi
+
+ # Check if device exists
+ local device="${volume_id}"
+ if [[ ! -b "${device}" ]]; then
+ log_error "Device ${device} not found"
+ return 1
+ fi
+
+ # Check if already mounted
+ if is_volume_mounted "${MOUNT_POINT}"; then
+ log_info "Data volume already mounted at ${MOUNT_POINT}"
+ # Verify correct permissions even if already mounted
+ sudo chown "${RIPPLED_USER}:${RIPPLED_GROUP}" "${MOUNT_POINT}"
+ return 0
+ fi
+
+ # Ensure mount point exists
+ if [[ ! -d "${MOUNT_POINT}" ]]; then
+ log_info "Creating mount point directory ${MOUNT_POINT}"
+ sudo mkdir -p "${MOUNT_POINT}"
+ fi
+
+ # Format and mount
+ if ! format_and_mount_volume "${volume_id}"; then
+ log_error "Failed to format and mount volume ${volume_id}"
+ return 1
+ fi
+
+ log_info "Data volume setup completed successfully"
+ return 0
+}
+
+is_volume_mounted() {
+ local mount_point="${1}"
+ mountpoint -q "${mount_point}"
+}
+
+format_and_mount_volume() {
+ local volume_id="${1}"
+ local device="${volume_id}"
+ local fstype="xfs"
+
+ # Check if filesystem already exists
+ if ! blkid "${device}" | grep -q "${fstype}"; then
+ log_info "Formatting volume ${device} with ${fstype}"
+ if ! sudo mkfs.${fstype} "${device}"; then
+ log_error "Failed to format volume ${device}"
+ return 1
+ fi
+ # Wait for filesystem to be ready
+ sleep 5
+ else
+ log_info "Volume ${device} already formatted with ${fstype}"
+ fi
+
+ # Get UUID
+ local volume_uuid
+ volume_uuid=$(lsblk -fn -o UUID "${volume_id}")
+
+ if [[ -z "${volume_uuid}" ]]; then
+ log_error "Failed to get UUID for volume ${volume_id}"
+ return 1
+ fi
+
+ local fstab_entry="UUID=${volume_uuid} ${MOUNT_POINT} xfs defaults 0 2"
+
+ # Update fstab
+ update_fstab "${fstab_entry}"
+
+ # Create mount point and mount
+ sudo mkdir -p "${MOUNT_POINT}/db"
+ sudo chown -R "${RIPPLED_USER}:${RIPPLED_GROUP}" "${MOUNT_POINT}"
+ sudo mount -a
+
+ # Set permissions
+ sudo chown -R "${RIPPLED_USER}:${RIPPLED_GROUP}" "${MOUNT_POINT}"
+}
+
+update_fstab() {
+ local fstab_entry="${1}"
+
+ # Backup fstab
+ sudo cp /etc/fstab "/etc/fstab.$(date +%Y%m%d_%H%M%S).backup"
+
+ if grep -q "${MOUNT_POINT}" /etc/fstab; then
+ local line_num
+ line_num=$(grep -n "${MOUNT_POINT}" /etc/fstab | cut -d: -f1)
+ sudo sed -i "${line_num}s#.*#${fstab_entry}#" /etc/fstab
+ else
+ echo "${fstab_entry}" | sudo tee -a /etc/fstab
+ fi
+}
+
+check_volume() {
+ local volume="$1"
+ local max_attempts=10
+ local attempt=1
+ local sleep_time=5
+
+ while ! blockdev --getro "$volume" 2>/dev/null; do
+ if [ $attempt -ge $max_attempts ]; then
+ log_error "Volume $volume not ready after $max_attempts attempts"
+ return 1
+ fi
+ log_info "Waiting for volume $volume (attempt $attempt/$max_attempts)"
+ sleep $((sleep_time * attempt)) # Exponential backoff
+ ((attempt++))
+ done
+ return 0
+}
+
+setup_cloud_watch() {
+ sudo cp ${ASSETS_DIR}/cw-agent.json "/opt/aws/amazon-cloudwatch-agent/etc/custom-amazon-cloudwatch-agent.json"
+ /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
+ -a fetch-config -c file:/opt/aws/amazon-cloudwatch-agent/etc/custom-amazon-cloudwatch-agent.json -m ec2 -s
+ systemctl restart amazon-cloudwatch-agent
+
+ systemctl daemon-reload
+}
+
+setup_seq_check() {
+
+    echo "Configuring XRP ledger sync check script"
+
+ sudo cp ${ASSETS_DIR}/user-data/check_xrp_sequence.sh "/opt/check_xrp_sequence.sh"
+ sudo chmod +x /opt/check_xrp_sequence.sh
+ sudo chown rippled:rippled /opt/check_xrp_sequence.sh
+
+ sudo cp "$ASSETS_DIR/user-data/synch-check.service" /etc/systemd/system/synch-check.service
+ sudo cp "$ASSETS_DIR/user-data/synch-check.timer" /etc/systemd/system/synch-check.timer
+
+ sudo systemctl start synch-check.timer
+ sudo systemctl enable synch-check.timer
+
+}
+
+###################
+# Main Function
+###################
+main() {
+ log_info "Starting rippled node installation"
+ setup_environment
+ if [[ "$RESOURCE_ID" != "none" ]]; then
+ cfn-signal --stack "${STACK_NAME}" --resource "${RESOURCE_ID}" --region "${AWS_REGION}"
+ fi
+
+ #Check volume availability
+ if ! check_volume "${DATA_VOLUME_NAME}"; then
+ log_error "Volume check failed"
+ return 1
+ fi
+
+ local steps=(
+ install_dependencies
+ setup_user_and_group
+ setup_assets
+ setup_data_volume
+ setup_cloud_watch
+ install_rippled
+ start_rippled
+ setup_seq_check
+ )
+
+ for step in "${steps[@]}"; do
+ log_info "Executing step: ${step}"
+ if ! ${step}; then
+ log_error "Step ${step} failed"
+ return 1
+ fi
+ done
+ if [[ "$LIFECYCLE_HOOK_NAME" != "none" ]]; then
+ setup_environment
+ echo "Signaling ASG lifecycle hook to complete"
+ TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
+ INSTANCE_ID=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" -s http://169.254.169.254/latest/meta-data/instance-id)
+ aws autoscaling complete-lifecycle-action --lifecycle-action-result CONTINUE --instance-id "${INSTANCE_ID}" --lifecycle-hook-name "${LIFECYCLE_HOOK_NAME}" --auto-scaling-group-name "${ASG_NAME}" --region "${AWS_REGION}"
+ fi
+
+ log_info "rippled installation completed successfully"
+}
+
+# Execute main function
+main
diff --git a/lib/xrp/lib/assets/user-data/synch-check.service b/lib/xrp/lib/assets/user-data/synch-check.service
new file mode 100644
index 00000000..cdb736b6
--- /dev/null
+++ b/lib/xrp/lib/assets/user-data/synch-check.service
@@ -0,0 +1,7 @@
+[Unit]
+Description="XRP ledger sync status; gets current seq this ledger is on"
+After=rippled.service
+
+[Service]
+Type=oneshot
+ExecStart=/opt/check_xrp_sequence.sh
diff --git a/lib/xrp/lib/assets/user-data/synch-check.timer b/lib/xrp/lib/assets/user-data/synch-check.timer
new file mode 100644
index 00000000..1eacbee5
--- /dev/null
+++ b/lib/xrp/lib/assets/user-data/synch-check.timer
@@ -0,0 +1,10 @@
+[Unit]
+Description="Run Sync check service every 1 min"
+
+[Timer]
+OnBootSec=1min
+OnUnitActiveSec=1min
+Unit=synch-check.service
+
+[Install]
+WantedBy=timers.target
diff --git a/lib/xrp/lib/common-stack.ts b/lib/xrp/lib/common-stack.ts
new file mode 100644
index 00000000..71b99a88
--- /dev/null
+++ b/lib/xrp/lib/common-stack.ts
@@ -0,0 +1,74 @@
+import * as cdk from "aws-cdk-lib";
+import * as cdkConstructs from "constructs";
+import * as iam from 'aws-cdk-lib/aws-iam';
+import * as nag from "cdk-nag";
+
+export interface XRPCommonStackProps extends cdk.StackProps {
+
+}
+
+export class XRPCommonStack extends cdk.Stack {
+ AWS_STACKNAME = cdk.Stack.of(this).stackName;
+ AWS_ACCOUNT_ID = cdk.Stack.of(this).account;
+ instanceRole: iam.Role;
+
+ constructor(scope: cdkConstructs.Construct, id: string, props: XRPCommonStackProps) {
+ super(scope, id, props);
+
+ const region = cdk.Stack.of(this).region;
+
+ this.instanceRole = new iam.Role(this, `node-role`, {
+ assumedBy: new iam.ServicePrincipal("ec2.amazonaws.com"),
+ managedPolicies: [
+ iam.ManagedPolicy.fromAwsManagedPolicyName("SecretsManagerReadWrite"),
+ iam.ManagedPolicy.fromAwsManagedPolicyName("AmazonSSMManagedInstanceCore"),
+ iam.ManagedPolicy.fromAwsManagedPolicyName("CloudWatchAgentServerPolicy"),
+ ],
+ });
+
+ this.instanceRole.addToPolicy(new iam.PolicyStatement({
+ // Can't target specific stack: https://github.com/aws/aws-cdk/issues/22657
+ resources: ["*"],
+ actions: ["cloudformation:SignalResource"],
+ }));
+
+ this.instanceRole.addToPolicy(new iam.PolicyStatement({
+ resources: [`arn:aws:autoscaling:${region}:${this.AWS_ACCOUNT_ID}:autoScalingGroup:*:autoScalingGroupName/xrp-*`],
+ actions: ["autoscaling:CompleteLifecycleAction"],
+ }));
+
+ this.instanceRole.addToPolicy(
+ new iam.PolicyStatement({
+ resources: [
+ `arn:aws:s3:::cloudformation-examples`,
+ `arn:aws:s3:::cloudformation-examples/*`,
+ ],
+ actions: ["s3:ListBucket", "s3:*Object", "s3:GetBucket*"],
+ })
+ );
+
+ new cdk.CfnOutput(this, "Instance Role ARN", {
+ value: this.instanceRole.roleArn,
+ exportName: "XRPNodeInstanceRoleArn",
+ });
+
+ /**
+ * cdk-nag suppressions
+ */
+
+ nag.NagSuppressions.addResourceSuppressions(
+ this,
+ [
+ {
+ id: "AwsSolutions-IAM4",
+ reason: "AmazonSSMManagedInstanceCore and CloudWatchAgentServerPolicy are restrictive enough",
+ },
+ {
+ id: "AwsSolutions-IAM5",
+ reason: "Can't target specific stack: https://github.com/aws/aws-cdk/issues/22657",
+ },
+ ],
+ true
+ );
+ }
+}
diff --git a/lib/xrp/lib/config/XRPConfig.interface.ts b/lib/xrp/lib/config/XRPConfig.interface.ts
new file mode 100644
index 00000000..0b71d58a
--- /dev/null
+++ b/lib/xrp/lib/config/XRPConfig.interface.ts
@@ -0,0 +1,17 @@
+import * as configTypes from "../../../constructs/config.interface";
+
+export interface XRPBaseNodeConfig extends configTypes.BaseNodeConfig {
+ hubNetworkID: string;
+ // hubNetworkIP: string;
+ // onlineDelete: string;
+ // advisoryDelete: string;
+ // validatorListSites: string;
+ // validatorListKeys: string;
+ dataVolume: configTypes.DataVolumeConfig;
+}
+
+export interface HAXRPBaseNodeConfig extends XRPBaseNodeConfig {
+ albHealthCheckGracePeriodMin: number;
+ heartBeatDelayMin: number;
+ numberOfNodes: number;
+}
diff --git a/lib/xrp/lib/config/XRPConfig.ts b/lib/xrp/lib/config/XRPConfig.ts
new file mode 100644
index 00000000..93b0ea68
--- /dev/null
+++ b/lib/xrp/lib/config/XRPConfig.ts
@@ -0,0 +1,48 @@
+import * as ec2 from "aws-cdk-lib/aws-ec2";
+import * as configTypes from "../../../constructs/config.interface";
+import * as constants from "../../../constructs/constants";
+import * as xrp from "./XRPConfig.interface";
+
+
+const parseDataVolumeType = (dataVolumeType: string) => {
+ switch (dataVolumeType) {
+ case "gp3":
+ return ec2.EbsDeviceVolumeType.GP3;
+ case "io2":
+ return ec2.EbsDeviceVolumeType.IO2;
+ case "io1":
+ return ec2.EbsDeviceVolumeType.IO1;
+ case "instance-store":
+ return constants.InstanceStoreageDeviceVolumeType;
+ default:
+ return ec2.EbsDeviceVolumeType.GP3;
+ }
+}
+
+export const baseConfig: configTypes.BaseConfig = {
+ accountId: process.env.AWS_ACCOUNT_ID || "xxxxxxxxxxx",
+ region: process.env.AWS_REGION || "us-east-2",
+}
+
+
+
+export const baseNodeConfig: xrp.XRPBaseNodeConfig = {
+ instanceType: new ec2.InstanceType(process.env.XRP_INSTANCE_TYPE ? process.env.XRP_INSTANCE_TYPE : "r6a.8xlarge"),
+ instanceCpuType: process.env.XRP_CPU_TYPE?.toLowerCase() == "x86_64" ? ec2.AmazonLinuxCpuType.X86_64 : ec2.AmazonLinuxCpuType.ARM_64,
+ dataVolume: {
+ sizeGiB: process.env.DATA_VOL_SIZE ? parseInt(process.env.DATA_VOL_SIZE): 2000,
+ type: parseDataVolumeType(process.env.DATA_VOL_TYPE?.toLowerCase() ? process.env.DATA_VOL_TYPE?.toLowerCase() : "gp3"),
+ iops: process.env.DATA_VOL_IOPS ? parseInt(process.env.DATA_VOL_IOPS): 12000,
+ throughput: process.env.DATA_VOL_THROUGHPUT ? parseInt(process.env.DATA_VOL_THROUGHPUT): 700,
+ },
+ hubNetworkID: process.env.HUB_NETWORK_ID || "testnet"
+};
+
+
+
+export const haNodeConfig: xrp.HAXRPBaseNodeConfig = {
+ ...baseNodeConfig,
+ albHealthCheckGracePeriodMin: process.env.XRP_HA_ALB_HEALTHCHECK_GRACE_PERIOD_MIN ? parseInt(process.env.XRP_HA_ALB_HEALTHCHECK_GRACE_PERIOD_MIN) : 10,
+ heartBeatDelayMin: process.env.XRP_HA_NODES_HEARTBEAT_DELAY_MIN ? parseInt(process.env.XRP_HA_NODES_HEARTBEAT_DELAY_MIN) : 40,
+ numberOfNodes: process.env.XRP_HA_NUMBER_OF_NODES ? parseInt(process.env.XRP_HA_NUMBER_OF_NODES) : 2,
+};
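+
+// Example (hypothetical shell session): the defaults above can be overridden
+// through environment variables before deploying the stacks, e.g.:
+//   export XRP_INSTANCE_TYPE="r6a.2xlarge"
+//   export DATA_VOL_SIZE="1500"
+//   export HUB_NETWORK_ID="mainnet"
+//   export XRP_HA_NUMBER_OF_NODES="3"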
diff --git a/lib/xrp/lib/config/createIniFile.ts b/lib/xrp/lib/config/createIniFile.ts
new file mode 100644
index 00000000..6e828327
--- /dev/null
+++ b/lib/xrp/lib/config/createIniFile.ts
@@ -0,0 +1,34 @@
+import * as fs from 'fs';
+interface RippledConfig {
+  [section: string]: string[] | Record<string, string>;
+}
+export function parseRippledConfig(filePath: string): RippledConfig {
+ const config: RippledConfig = {};
+ const fileContent = fs.readFileSync(filePath, 'utf-8');
+ const lines = fileContent.split(/\r?\n/);
+ let currentSection: string | null = null;
+ lines.forEach((line) => {
+ line = line.trim();
+ // Ignore empty lines and comments
+ if (!line || line.startsWith('#') || line.startsWith(';')) {
+ return;
+ }
+ // Section header
+ if (line.startsWith('[') && line.endsWith(']')) {
+ currentSection = line.slice(1, -1).trim();
+ config[currentSection] = [];
+ } else if (currentSection) {
+ // Handle list-like sections (e.g., `[ips]`)
+ if (!line.includes('=')) {
+ (config[currentSection] as string[]).push(line);
+ } else {
+ // Handle key-value pairs
+ const [key, value] = line.split('=').map((part) => part.trim());
+ if (typeof config[currentSection] === 'object') {
+          (config[currentSection] as Record<string, string>)[key] = value || '';
+ }
+ }
+ }
+ });
+ return config;
+  return config;
+}
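+
+// Example usage (hypothetical path; the result shape follows the parser above):
+//   const cfg = parseRippledConfig("/opt/ripple/etc/rippled.cfg");
+//   cfg["port_peer"];      // => { port: "51235", ip: "0.0.0.0", protocol: "peer" }
+//   cfg["database_path"];  // => [ "/var/lib/rippled/db" ]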
diff --git a/lib/xrp/lib/constructs/node-cw-dashboard.ts b/lib/xrp/lib/constructs/node-cw-dashboard.ts
new file mode 100644
index 00000000..ee1dad10
--- /dev/null
+++ b/lib/xrp/lib/constructs/node-cw-dashboard.ts
@@ -0,0 +1,237 @@
+export const SingleNodeCWDashboardJSON = {
+ "widgets": [
+ {
+ "height": 4,
+ "width": 8,
+ "y": 0,
+ "x": 0,
+ "type": "metric",
+ "properties": {
+ "view": "timeSeries",
+ "stat": "Average",
+ "period": 300,
+ "stacked": false,
+ "yAxis": {
+ "left": {
+ "min": 0
+ }
+ },
+ "region": "${REGION}",
+ "metrics": [
+ [ "AWS/EC2", "CPUUtilization", "InstanceId", "${INSTANCE_ID}", { "label": "${INSTANCE_ID}-${INSTANCE_NAME}" } ]
+ ],
+ "title": "CPU utilization (%)"
+ }
+ },
+ {
+ "height": 4,
+ "width": 8,
+ "y": 0,
+ "x": 8,
+ "type": "metric",
+ "properties": {
+ "metrics": [
+ [ { "expression": "m7/PERIOD(m7)", "label": "Read", "id": "e7" } ],
+ [ "CWAgent", "diskio_reads", "InstanceId", "${INSTANCE_ID}", "name", "nvme1n1", { "id": "m7", "visible": false, "stat": "Sum", "period": 60 } ],
+ [ { "expression": "m8/PERIOD(m8)", "label": "Write", "id": "e8" } ],
+ [ "CWAgent", "diskio_writes", "InstanceId", "${INSTANCE_ID}", "name", "nvme1n1", { "id": "m8", "visible": false, "stat": "Sum", "period": 60 } ]
+ ],
+ "view": "timeSeries",
+ "stacked": false,
+ "region": "${REGION}",
+ "stat": "Sum",
+ "period": 60,
+ "title": "nvme1n1 Volume Read/Write (IO/sec)"
+ }
+ },
+ {
+ "height": 4,
+ "width": 8,
+ "y": 0,
+ "x": 16,
+ "type": "metric",
+ "properties": {
+ "metrics": [
+ [ "CWAgent", "XRP_Current_Sequence", "InstanceId", "${INSTANCE_ID}", { "label": "${INSTANCE_ID}-${INSTANCE_NAME}", "region": "${REGION}" } ]
+ ],
+ "sparkline": false,
+ "view": "timeSeries",
+ "region": "${REGION}",
+ "stacked": false,
+ "singleValueFullPrecision": true,
+ "liveData": true,
+ "setPeriodToTimeRange": false,
+ "trend": true,
+ "title": "XRP Current Sequence",
+ "period": 300
+ }
+ },
+ {
+ "height": 4,
+ "width": 8,
+ "y": 12,
+ "x": 16,
+ "type": "metric",
+ "properties": {
+ "view": "timeSeries",
+ "stat": "Average",
+ "period": 300,
+ "stacked": false,
+ "yAxis": {
+ "left": {
+ "min": 0
+ }
+ },
+ "region": "${REGION}",
+ "metrics": [
+ [ "AWS/EC2", "NetworkIn", "InstanceId", "${INSTANCE_ID}", { "label": "${INSTANCE_ID}-${INSTANCE_NAME}" } ]
+ ],
+ "title": "Network in (bytes)"
+ }
+ },
+ {
+ "height": 4,
+ "width": 8,
+ "y": 4,
+ "x": 0,
+ "type": "metric",
+ "properties": {
+ "view": "timeSeries",
+ "stacked": false,
+ "region": "${REGION}",
+ "stat": "Average",
+ "period": 300,
+ "metrics": [
+ [ "CWAgent", "cpu_usage_iowait", "InstanceId", "${INSTANCE_ID}", { "label": "${INSTANCE_ID}-${INSTANCE_NAME}" } ]
+ ],
+ "title": "CPU Usage IO wait (%)"
+ }
+ },
+ {
+ "height": 4,
+ "width": 8,
+ "y": 4,
+ "x": 8,
+ "type": "metric",
+ "properties": {
+ "view": "timeSeries",
+ "stat": "Sum",
+ "period": 60,
+ "stacked": false,
+ "yAxis": {
+ "left": {
+ "min": 0
+ }
+ },
+ "region": "${REGION}",
+ "metrics": [
+ [ { "expression": "IF(m7_2 !=0, (m7_1 / m7_2), 0)", "label": "Read", "id": "e7" } ],
+ [ "CWAgent", "diskio_read_time", "InstanceId", "${INSTANCE_ID}", "name", "nvme1n1", { "id": "m7_1", "visible": false, "stat": "Sum", "period": 60 } ],
+ [ "CWAgent", "diskio_reads", "InstanceId", "${INSTANCE_ID}", "name", "nvme1n1", { "id": "m7_2", "visible": false, "stat": "Sum", "period": 60 } ],
+ [ { "expression": "IF(m7_4 !=0, (m7_3 / m7_4), 0)", "label": "Write", "id": "e8" } ],
+ [ "CWAgent", "diskio_write_time", "InstanceId", "${INSTANCE_ID}", "name", "nvme1n1", { "id": "m7_3", "visible": false, "stat": "Sum", "period": 60 } ],
+ [ "CWAgent", "diskio_writes", "InstanceId", "${INSTANCE_ID}", "name", "nvme1n1", { "id": "m7_4", "visible": false, "stat": "Sum", "period": 60 } ]
+ ],
+ "title": "nvme1n1 Volume Read/Write latency (ms/op)"
+ }
+ },
+ {
+ "height": 4,
+ "width": 8,
+ "y": 8,
+ "x": 16,
+ "type": "metric",
+ "properties": {
+ "view": "timeSeries",
+ "stat": "Average",
+ "period": 300,
+ "stacked": false,
+ "yAxis": {
+ "left": {
+ "min": 0
+ }
+ },
+ "region": "${REGION}",
+ "metrics": [
+ [ "AWS/EC2", "NetworkOut", "InstanceId", "${INSTANCE_ID}", { "label": "${INSTANCE_ID}-${INSTANCE_NAME}" } ]
+ ],
+ "title": "Network out (bytes)"
+ }
+ },
+ {
+ "height": 4,
+ "width": 8,
+ "y": 8,
+ "x": 0,
+ "type": "metric",
+ "properties": {
+ "view": "timeSeries",
+ "stacked": false,
+ "region": "${REGION}",
+ "stat": "Average",
+ "period": 300,
+ "metrics": [
+ [ "CWAgent", "mem_used_percent", "InstanceId", "${INSTANCE_ID}", { "label": "${INSTANCE_ID}-${INSTANCE_NAME}" } ]
+ ],
+ "title": "Mem Used (%)"
+ }
+ },
+ {
+ "height": 4,
+ "width": 8,
+ "y": 8,
+ "x": 8,
+ "type": "metric",
+ "properties": {
+ "metrics": [
+ [ { "expression": "m2/PERIOD(m2)", "label": "Read", "id": "e2", "period": 60, "region": "${REGION}" } ],
+ [ "CWAgent", "diskio_read_bytes", "InstanceId", "${INSTANCE_ID}", "name", "nvme1n1", { "id": "m2", "stat": "Sum", "visible": false, "period": 60 } ],
+ [ { "expression": "m3/PERIOD(m3)", "label": "Write", "id": "e3", "period": 60, "region": "${REGION}" } ],
+ [ "CWAgent", "diskio_write_bytes", "InstanceId", "${INSTANCE_ID}", "name", "nvme1n1", { "id": "m3", "stat": "Sum", "visible": false, "period": 60 } ]
+ ],
+ "view": "timeSeries",
+ "stacked": false,
+ "region": "${REGION}",
+ "stat": "Average",
+ "period": 60,
+ "title": "nvme1n1 Volume Read/Write throughput (bytes/sec)"
+ }
+ },
+ {
+ "height": 4,
+ "width": 8,
+ "y": 12,
+ "x": 8,
+ "type": "metric",
+ "properties": {
+ "metrics": [
+ [ "CWAgent", "disk_used_percent", "InstanceId", "${INSTANCE_ID}", "device", "nvme1n1", "path", "/var/lib/rippled", "fstype", "xfs", { "region": "${REGION}", "label": "/var/lib/rippled" } ]
+ ],
+ "sparkline": true,
+ "view": "singleValue",
+ "region": "${REGION}",
+ "title": "nvme1n1 Disk Used (%)",
+ "period": 60,
+ "stat": "Maximum"
+ }
+ },
+ {
+ "type": "metric",
+ "x": 16,
+ "y": 4,
+ "width": 8,
+ "height": 4,
+ "properties": {
+ "metrics": [
+ [ "CWAgent", "XRP_Delta_Sequence", "InstanceId", "${INSTANCE_ID}", { "region": "${REGION}", "label": "XRP Current - Validated Sequence" } ]
+ ],
+ "view": "timeSeries",
+ "stacked": false,
+ "region": "${REGION}",
+ "period": 300,
+ "stat": "Maximum",
+ "title": "XRP Current - Validated Sequence"
+ }
+ }
+ ]
+}
diff --git a/lib/xrp/lib/constructs/xrp-node-security-group.ts b/lib/xrp/lib/constructs/xrp-node-security-group.ts
new file mode 100644
index 00000000..f0cbb6a6
--- /dev/null
+++ b/lib/xrp/lib/constructs/xrp-node-security-group.ts
@@ -0,0 +1,51 @@
+import * as cdk from "aws-cdk-lib";
+import * as cdkContructs from "constructs";
+import * as ec2 from "aws-cdk-lib/aws-ec2";
+import * as nag from "cdk-nag";
+
+export interface XRPNodeSecurityGroupConstructProps {
+ vpc: cdk.aws_ec2.IVpc;
+}
+
+export class XRPNodeSecurityGroupConstruct extends cdkContructs.Construct {
+ public securityGroup: cdk.aws_ec2.ISecurityGroup;
+
+ constructor(scope: cdkContructs.Construct, id: string, props: XRPNodeSecurityGroupConstructProps) {
+ super(scope, id);
+
+ const {
+ vpc
+ } = props;
+
+ const sg = new ec2.SecurityGroup(this, `rpc-node-security-group`, {
+ vpc,
+ description: "Security Group for Blockchain nodes",
+ allowAllOutbound: true
+ });
+
+ // Public ports
+    sg.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(51235), "P2P protocols");
+    sg.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(2459), "P2P protocols");
+
+
+ // Private ports restricted only to the VPC IP range
+ sg.addIngressRule(ec2.Peer.ipv4(vpc.vpcCidrBlock), ec2.Port.tcp(6005), "RPC port HTTP (user access needs to be restricted. Allowed access only from internal IPs)");
+
+ this.securityGroup = sg;
+
+ /**
+ * cdk-nag suppressions
+ */
+
+ nag.NagSuppressions.addResourceSuppressions(
+ this,
+ [
+ {
+ id: "AwsSolutions-EC23",
+ reason: "Need to use wildcard for P2P ports"
+ }
+ ],
+ true
+ );
+ }
+}
diff --git a/lib/xrp/lib/ha-nodes-stack.ts b/lib/xrp/lib/ha-nodes-stack.ts
new file mode 100644
index 00000000..ca9a0078
--- /dev/null
+++ b/lib/xrp/lib/ha-nodes-stack.ts
@@ -0,0 +1,139 @@
+import * as cdk from "aws-cdk-lib";
+import * as cdkConstructs from "constructs";
+import * as ec2 from "aws-cdk-lib/aws-ec2";
+import * as s3Assets from "aws-cdk-lib/aws-s3-assets";
+import * as nag from "cdk-nag";
+import * as path from "path";
+import * as fs from "fs";
+import { HANodesConstruct } from "../../constructs/ha-rpc-nodes-with-alb";
+import * as constants from "../../constructs/constants";
+import { XRPSingleNodeStackProps } from "./single-node-stack";
+import { XRPNodeSecurityGroupConstruct } from "./constructs/xrp-node-security-group";
+
+export interface XRPHANodesStackProps extends XRPSingleNodeStackProps {
+ albHealthCheckGracePeriodMin: number;
+ heartBeatDelayMin: number;
+ numberOfNodes: number;
+}
+
+export class XRPHANodesStack extends cdk.Stack {
+ constructor(scope: cdkConstructs.Construct, id: string, props: XRPHANodesStackProps) {
+ super(scope, id, props);
+
+ // Setting up necessary environment variables
+ const REGION = cdk.Stack.of(this).region;
+ const STACK_NAME = cdk.Stack.of(this).stackName;
+ const STACK_ID = cdk.Stack.of(this).stackId;
+ const lifecycleHookName = STACK_NAME;
+ const autoScalingGroupName = STACK_NAME;
+
+ // Getting our config from initialization properties
+ const {
+ instanceType,
+ instanceCpuType,
+      dataVolume,
+ stackName,
+ hubNetworkID,
+ albHealthCheckGracePeriodMin,
+ heartBeatDelayMin,
+ numberOfNodes
+ } = props;
+
+ // Using default VPC
+ const vpc = ec2.Vpc.fromLookup(this, "vpc", { isDefault: true });
+
+ // Setting up the security group for the node from xrp-specific construct
+ const instanceSG = new XRPNodeSecurityGroupConstruct(this, "security-group", {
+ vpc: vpc
+ });
+
+    // Making our scripts and configs from the local "assets" directory available for the instance to download
+ const asset = new s3Assets.Asset(this, "assets", {
+ path: path.join(__dirname, "assets")
+ });
+
+ const instanceRole = props.instanceRole;
+
+ // Making sure our instance will be able to read the assets
+ asset.bucket.grantRead(instanceRole);
+
+    // This blueprint currently supports only x86_64 instances
+ if (instanceCpuType === ec2.AmazonLinuxCpuType.ARM_64) {
+ throw new Error("ARM_64 is not yet supported");
+ }
+
+ // Parsing user data script and injecting necessary variables
+ const nodeStartScript = fs.readFileSync(path.join(__dirname, "assets", "user-data", "node.sh")).toString();
+ const dataVolumeSizeBytes = dataVolume.sizeGiB * constants.GibibytesToBytesConversionCoefficient;
+
+ const modifiedInitNodeScript = cdk.Token.asString(
+ cdk.Lazy.string({
+ produce: () => {
+ return nodeStartScript
+ .replace("_AWS_REGION_", REGION)
+ .replace("_ASSETS_S3_PATH_", `s3://${asset.s3BucketName}/${asset.s3ObjectKey}`)
+ .replace("_STACK_NAME_", STACK_NAME)
+ .replace("_STACK_ID_", STACK_ID)
+ .replace("_NODE_CF_LOGICAL_ID_", constants.NoneValue)
+ .replace("_DATA_VOLUME_TYPE_", dataVolume.type)
+ .replace("_DATA_VOLUME_SIZE_", dataVolumeSizeBytes.toString())
+ .replace("_HUB_NETWORK_ID_", hubNetworkID)
+ .replace("_LIFECYCLE_HOOK_NAME_", lifecycleHookName)
+ .replace("_ASG_NAME_", autoScalingGroupName);
+ }
+ })
+ );
+
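+    // The ALB target group health-checks each node over HTTP on the RPC port (6005) at this path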
+ const healthCheckPath = "/";
+ const nodeASG = new HANodesConstruct(this, "stock-server-node", {
+ instanceType,
+ dataVolumes: [dataVolume],
+ rootDataVolumeDeviceName: "/dev/xvda",
+ machineImage: new ec2.AmazonLinuxImage({
+ generation: ec2.AmazonLinuxGeneration.AMAZON_LINUX_2,
+ cpuType: ec2.AmazonLinuxCpuType.X86_64
+ }),
+ vpc,
+ role: instanceRole,
+ securityGroup: instanceSG.securityGroup,
+ userData: modifiedInitNodeScript,
+ numberOfNodes,
+ albHealthCheckGracePeriodMin,
+ healthCheckPath,
+ heartBeatDelayMin,
+ lifecycleHookName: lifecycleHookName,
+ autoScalingGroupName: autoScalingGroupName,
+ rpcPortForALB: 6005
+ });
+
+
+    // Making sure we output the URL of our Application Load Balancer
+ new cdk.CfnOutput(this, "alb-url", {
+ value: nodeASG.loadBalancerDnsName
+ });
+
+ // Adding suppressions to the stack
+ nag.NagSuppressions.addResourceSuppressions(
+ this,
+ [
+ {
+ id: "AwsSolutions-AS3",
+ reason: "No notifications needed"
+ },
+ {
+ id: "AwsSolutions-S1",
+ reason: "No access log needed for ALB logs bucket"
+ },
+ {
+ id: "AwsSolutions-EC28",
+ reason: "Using basic monitoring to save costs"
+ },
+ {
+ id: "AwsSolutions-IAM5",
+ reason: "Need read access to the S3 bucket with assets"
+ }
+ ],
+ true
+ );
+ }
+}
diff --git a/lib/xrp/lib/single-node-stack.ts b/lib/xrp/lib/single-node-stack.ts
new file mode 100644
index 00000000..03b1f0fc
--- /dev/null
+++ b/lib/xrp/lib/single-node-stack.ts
@@ -0,0 +1,143 @@
+import * as cdk from "aws-cdk-lib";
+import * as cdkConstructs from "constructs";
+import * as ec2 from "aws-cdk-lib/aws-ec2";
+import * as iam from "aws-cdk-lib/aws-iam";
+import * as s3Assets from "aws-cdk-lib/aws-s3-assets";
+import * as path from "path";
+import * as fs from "fs";
+import * as cw from "aws-cdk-lib/aws-cloudwatch";
+import * as nag from "cdk-nag";
+import { SingleNodeConstruct } from "../../constructs/single-node";
+import { XRPNodeSecurityGroupConstruct } from "./constructs/xrp-node-security-group";
+import { SingleNodeCWDashboardJSON } from "./constructs/node-cw-dashboard";
+import { DataVolumeConfig } from "../../constructs/config.interface";
+import * as constants from "../../constructs/constants";
+
+
+export interface XRPSingleNodeStackProps extends cdk.StackProps {
+ instanceType: ec2.InstanceType;
+ instanceCpuType: ec2.AmazonLinuxCpuType;
+ dataVolume: DataVolumeConfig;
+ stackName: string;
+ hubNetworkID: string;
+ instanceRole: iam.Role;
+
+}
+
+export class XRPSingleNodeStack extends cdk.Stack {
+ constructor(scope: cdkConstructs.Construct, id: string, props: XRPSingleNodeStackProps) {
+ super(scope, id, props);
+
+ // Setting up necessary environment variables
+ const REGION = cdk.Stack.of(this).region;
+ const STACK_NAME = cdk.Stack.of(this).stackName;
+ const STACK_ID = cdk.Stack.of(this).stackId;
+ const availabilityZones = cdk.Stack.of(this).availabilityZones;
+ const chosenAvailabilityZone = availabilityZones.slice(0, 1)[0];
+
+ // Getting our config from initialization properties
+ const {
+ instanceType,
+ instanceCpuType,
+      dataVolume,
+ stackName,
+ hubNetworkID
+ } = props;
+
+ // Using default VPC
+ const vpc = ec2.Vpc.fromLookup(this, "vpc", { isDefault: true });
+
+ // Setting up the security group for the node from XRP-specific construct
+ const instanceSG = new XRPNodeSecurityGroupConstruct(this, "security-group", {
+ vpc: vpc
+ });
+
+    // Making our scripts and configs from the local "assets" directory available for the instance to download
+ const asset = new s3Assets.Asset(this, "assets", {
+ path: path.join(__dirname, "assets")
+ });
+
+    const instanceRole = props.instanceRole;
+
+ // Making sure our instance will be able to read the assets
+ asset.bucket.grantRead(instanceRole);
+
+    // Setting up the node using the generic Single Node construct
+ if (instanceCpuType === ec2.AmazonLinuxCpuType.ARM_64) {
+ throw new Error("ARM_64 is not yet supported");
+ }
+
+
+ const node = new SingleNodeConstruct(this, "stock-server-node", {
+ instanceName: STACK_NAME,
+ instanceType,
+ dataVolumes: [dataVolume],
+ rootDataVolumeDeviceName: "/dev/xvda",
+ machineImage: new ec2.AmazonLinuxImage({
+ generation: ec2.AmazonLinuxGeneration.AMAZON_LINUX_2,
+ cpuType: ec2.AmazonLinuxCpuType.X86_64
+ }),
+ vpc,
+ availabilityZone: chosenAvailabilityZone,
+ role: instanceRole,
+ securityGroup: instanceSG.securityGroup,
+ vpcSubnets: {
+ subnetType: ec2.SubnetType.PUBLIC
+ }
+ });
+
+
+ // Parsing user data script and injecting necessary variables
+ const nodeStartScript = fs.readFileSync(path.join(__dirname, "assets", "user-data", "node.sh")).toString();
+ const dataVolumeSizeBytes = dataVolume.sizeGiB * constants.GibibytesToBytesConversionCoefficient;
+
+ const modifiedInitNodeScript = cdk.Token.asString(
+ cdk.Lazy.string({
+ produce: () => {
+ return nodeStartScript
+ .replace("_AWS_REGION_", REGION)
+ .replace("_ASSETS_S3_PATH_", `s3://${asset.s3BucketName}/${asset.s3ObjectKey}`)
+ .replace("_STACK_NAME_", STACK_NAME)
+ .replace("_STACK_ID_", STACK_ID)
+ .replace("_NODE_CF_LOGICAL_ID_", node.nodeCFLogicalId)
+ .replace("_DATA_VOLUME_TYPE_", dataVolume.type)
+ .replace("_DATA_VOLUME_SIZE_", dataVolumeSizeBytes.toString())
+ .replace("_HUB_NETWORK_ID_", hubNetworkID)
+ .replace("_LIFECYCLE_HOOK_NAME_", constants.NoneValue);
+ }
+ })
+ );
+
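+    // Rendering the substituted start script as EC2 user data so it runs on the instance's first boot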
+ const userData = ec2.UserData.forLinux();
+ userData.addCommands(modifiedInitNodeScript);
+ node.instance.addUserData(userData.render());
+
+ // Adding CloudWatch dashboard to the node
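+    // cdk.Fn.sub fills the ${INSTANCE_ID}, ${INSTANCE_NAME}, and ${REGION} placeholders in the dashboard JSON at deploy time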
+ const dashboardString = cdk.Fn.sub(JSON.stringify(SingleNodeCWDashboardJSON), {
+ INSTANCE_ID: node.instanceId,
+ INSTANCE_NAME: STACK_NAME,
+ REGION: REGION
+ });
+
+ new cw.CfnDashboard(this, "xrp-cw-dashboard", {
+ dashboardName: `${STACK_NAME}-${node.instanceId}`,
+ dashboardBody: dashboardString
+ });
+
+ new cdk.CfnOutput(this, "node-instance-id", {
+ value: node.instanceId
+ });
+
+ // Adding suppressions to the stack
+ nag.NagSuppressions.addResourceSuppressions(
+ this,
+ [
+ {
+ id: "AwsSolutions-IAM5",
+ reason: "Need read access to the S3 bucket with assets"
+ }
+ ],
+ true
+ );
+ }
+}
diff --git a/lib/xrp/package.json b/lib/xrp/package.json
new file mode 100644
index 00000000..eca6f9fb
--- /dev/null
+++ b/lib/xrp/package.json
@@ -0,0 +1,11 @@
+{
+ "name": "aws-blockchain-node-runners-xrp",
+ "version": "0.2.0",
+ "scripts": {
+ "build": "npx tsc",
+ "watch": "npx tsc -w",
+ "test": "npx jest --detectOpenHandles",
+ "cdk": "npx cdk",
+ "scan-cdk": "npx cdk synth"
+ }
+}
diff --git a/lib/xrp/sample-configs/.env-sample-mainnet b/lib/xrp/sample-configs/.env-sample-mainnet
new file mode 100644
index 00000000..d55445ae
--- /dev/null
+++ b/lib/xrp/sample-configs/.env-sample-mainnet
@@ -0,0 +1,12 @@
+AWS_ACCOUNT_ID="xxxxxxxxxxx"
+AWS_REGION="xxxxxxxxxx"
+XRP_INSTANCE_TYPE="r7a.2xlarge" # The solution was originally tested with the r7a.12xlarge instance type. Other instance types may also work but have not been extensively tested.
+XRP_CPU_TYPE="x86_64" # All options: "x86_64". ARM currently not supported
+DATA_VOL_TYPE="gp3" # Other options: "io1" | "io2" | "gp3" | "instance-store" . IMPORTANT: Use "instance-store" option only with instance types that support that feature, like popular for node im4gn, d3, i3en, and i4i instance families
+DATA_VOL_SIZE="2000" # Current required data size to keep both smapshot archive and unarchived version of it
+DATA_VOL_IOPS="12000" # Max IOPS for EBS volumes (not applicable for "instance-store")
+DATA_VOL_THROUGHPUT="700"
+XRP_HA_ALB_HEALTHCHECK_GRACE_PERIOD_MIN="60"
+XRP_HA_NODES_HEARTBEAT_DELAY_MIN="5"
+XRP_HA_NUMBER_OF_NODES="2"
+HUB_NETWORK_ID="mainnet"
diff --git a/lib/xrp/sample-configs/.env-sample-testnet b/lib/xrp/sample-configs/.env-sample-testnet
new file mode 100644
index 00000000..855e9787
--- /dev/null
+++ b/lib/xrp/sample-configs/.env-sample-testnet
@@ -0,0 +1,12 @@
+AWS_ACCOUNT_ID="xxxxxxxxxxx"
+AWS_REGION="xxxxxxxxxx"
+XRP_INSTANCE_TYPE="r7a.2xlarge" # The solution was originally tested with the r7a.12xlarge instance type. Other instance types may also work but have not been extensively tested.
+XRP_CPU_TYPE="x86_64" # All options: "x86_64". ARM currently not supported
+DATA_VOL_TYPE="gp3" # Other options: "io1" | "io2" | "gp3" | "instance-store" . IMPORTANT: Use "instance-store" option only with instance types that support that feature, like popular for node im4gn, d3, i3en, and i4i instance families
+DATA_VOL_SIZE="2000" # Current required data size to keep both smapshot archive and unarchived version of it
+DATA_VOL_IOPS="12000" # Max IOPS for EBS volumes (not applicable for "instance-store")
+DATA_VOL_THROUGHPUT="700"
+XRP_HA_ALB_HEALTHCHECK_GRACE_PERIOD_MIN="60"
+XRP_HA_NODES_HEARTBEAT_DELAY_MIN="5"
+XRP_HA_NUMBER_OF_NODES="2"
+HUB_NETWORK_ID="testnet"
diff --git a/lib/xrp/test/.env-test b/lib/xrp/test/.env-test
new file mode 100644
index 00000000..15a3928c
--- /dev/null
+++ b/lib/xrp/test/.env-test
@@ -0,0 +1,12 @@
+AWS_ACCOUNT_ID="xxxxxxxxxxx"
+AWS_REGION="xxxxxxxxxx"
+XRP_INSTANCE_TYPE="r7a.2xlarge"
+XRP_CPU_TYPE="x86_64" # All options: "x86_64". ARM currently not supported
+DATA_VOL_TYPE="gp3" # Other options: "io1" | "io2" | "gp3" | "instance-store" . IMPORTANT: Use "instance-store" option only with instance types that support that feature, like popular for node im4gn, d3, i3en, and i4i instance families
+DATA_VOL_SIZE="2000" # Current required data size to keep both smapshot archive and unarchived version of it
+DATA_VOL_IOPS="12000" # Max IOPS for EBS volumes (not applicable for "instance-store")
+DATA_VOL_THROUGHPUT="700"
+XRP_HA_ALB_HEALTHCHECK_GRACE_PERIOD_MIN="60"
+XRP_HA_NODES_HEARTBEAT_DELAY_MIN="5"
+XRP_HA_NUMBER_OF_NODES="2"
+HUB_NETWORK_ID="testnet"
diff --git a/lib/xrp/test/common-stack.test.ts b/lib/xrp/test/common-stack.test.ts
new file mode 100644
index 00000000..75570240
--- /dev/null
+++ b/lib/xrp/test/common-stack.test.ts
@@ -0,0 +1,76 @@
+import { Template } from "aws-cdk-lib/assertions";
+import * as cdk from "aws-cdk-lib";
+import * as dotenv from "dotenv";
+import * as config from "../lib/config/XRPConfig";
+import { XRPCommonStack } from "../lib/common-stack";
+
+dotenv.config({ path: "./test/.env-test" });
+
+describe("XRPCommonStack", () => {
+ test("synthesizes the way we expect", () => {
+ const app = new cdk.App();
+
+ // Create the XRPCommonStack.
+ const xrpCommonStack = new XRPCommonStack(app, "xrp-common", {
+ env: { account: config.baseConfig.accountId, region: config.baseConfig.region },
+ stackName: `xrp-nodes-common`
+ });
+
+ // Prepare the stack for assertions.
+ const template = Template.fromStack(xrpCommonStack);
+
+ // Has EC2 instance role.
+ template.hasResourceProperties("AWS::IAM::Role", {
+ AssumeRolePolicyDocument: {
+ Statement: [
+ {
+ Action: "sts:AssumeRole",
+ Effect: "Allow",
+ Principal: {
+ Service: "ec2.amazonaws.com"
+ }
+ }
+ ]
+ },
+ ManagedPolicyArns: [
+ {
+ "Fn::Join": [
+ "",
+ [
+ "arn:",
+ {
+ "Ref": "AWS::Partition"
+ },
+ ":iam::aws:policy/SecretsManagerReadWrite"
+ ]
+ ]
+ },
+ {
+ "Fn::Join": [
+ "",
+ [
+ "arn:",
+ {
+ Ref: "AWS::Partition"
+ },
+ ":iam::aws:policy/AmazonSSMManagedInstanceCore"
+ ]
+ ]
+ },
+ {
+ "Fn::Join": [
+ "",
+ [
+ "arn:",
+ {
+ "Ref": "AWS::Partition"
+ },
+ ":iam::aws:policy/CloudWatchAgentServerPolicy"
+ ]
+ ]
+ }
+ ]
+ });
+
+ });
+});
diff --git a/lib/xrp/test/ha-nodes-stack.test.ts b/lib/xrp/test/ha-nodes-stack.test.ts
new file mode 100644
index 00000000..0703a790
--- /dev/null
+++ b/lib/xrp/test/ha-nodes-stack.test.ts
@@ -0,0 +1,244 @@
+import { Match, Template } from "aws-cdk-lib/assertions";
+import * as cdk from "aws-cdk-lib";
+import * as dotenv from "dotenv";
+dotenv.config({ path: "./test/.env-test" });
+import * as config from "../lib/config/XRPConfig";
+import { XRPCommonStack } from "../lib/common-stack";
+import { XRPHANodesStack } from "../lib/ha-nodes-stack";
+
+describe("XRPHANodesStackProps", () => {
+ test("synthesizes the way we expect", () => {
+ const app = new cdk.App();
+
+ const xrpCommonStack = new XRPCommonStack(app, "xrp-common", {
+ env: { account: config.baseConfig.accountId, region: config.baseConfig.region },
+ stackName: `xrp-nodes-common`,
+ });
+
+ // Create the XRPHANodesStackProps.
+ const xRPHANodesStack = new XRPHANodesStack(app, "XRP-ha-nodes", {
+ stackName: "xrp-ha-nodes",
+ env: { account: config.baseConfig.accountId, region: config.baseConfig.region },
+ instanceType: config.baseNodeConfig.instanceType,
+ instanceCpuType: config.baseNodeConfig.instanceCpuType,
+ dataVolume: config.baseNodeConfig.dataVolume,
+ hubNetworkID: config.baseNodeConfig.hubNetworkID,
+ instanceRole: xrpCommonStack.instanceRole,
+ albHealthCheckGracePeriodMin: config.haNodeConfig.albHealthCheckGracePeriodMin,
+ heartBeatDelayMin: config.haNodeConfig.heartBeatDelayMin,
+ numberOfNodes: config.haNodeConfig.numberOfNodes,
+ });
+
+ // Prepare the stack for assertions.
+ const template = Template.fromStack(xRPHANodesStack);
+
+ // Has EC2 instance security group.
+ template.hasResourceProperties("AWS::EC2::SecurityGroup", {
+ GroupDescription: Match.anyValue(),
+ VpcId: Match.anyValue(),
+ SecurityGroupEgress: [
+ {
+ "CidrIp": "0.0.0.0/0",
+ "Description": "Allow all outbound traffic by default",
+ "IpProtocol": "-1"
+ }
+ ],
+ SecurityGroupIngress: [
+ {
+ "CidrIp": "0.0.0.0/0",
+ "Description": "P2P protocols",
+ "FromPort": 51235,
+ "IpProtocol": "tcp",
+ "ToPort": 51235
+ },
+ {
+ "CidrIp": "0.0.0.0/0",
+ "Description": "P2P protocols",
+ "FromPort": 2459,
+ "IpProtocol": "tcp",
+ "ToPort": 2459
+ },
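+        // Note: 1.2.3.4/5 is the placeholder CIDR that CDK uses for Vpc.fromLookup when no VPC context is available, as in unit tests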
+ {
+ "CidrIp": "1.2.3.4/5",
+ "Description": "RPC port HTTP (user access needs to be restricted. Allowed access only from internal IPs)",
+ "FromPort": 6005,
+ "IpProtocol": "tcp",
+ "ToPort": 6005
+ },
+ {
+ "Description": "Allow access from ALB to Blockchain Node",
+ "FromPort": 0,
+ "IpProtocol": "tcp",
+ "SourceSecurityGroupId": Match.anyValue(),
+ "ToPort": 65535
+ },
+ ]
+ })
+
+ // Has security group from ALB to EC2.
+ template.hasResourceProperties("AWS::EC2::SecurityGroupIngress", {
+ Description: Match.anyValue(),
+ FromPort: 6005,
+ GroupId: Match.anyValue(),
+ IpProtocol: "tcp",
+ SourceSecurityGroupId: Match.anyValue(),
+ ToPort: 6005,
+ })
+
+ // Has launch template profile for EC2 instances.
+ template.hasResourceProperties("AWS::IAM::InstanceProfile", {
+ Roles: [Match.anyValue()]
+ });
+
+ // Has EC2 launch template.
+ template.hasResourceProperties("AWS::EC2::LaunchTemplate", {
+ LaunchTemplateData: {
+ BlockDeviceMappings: [
+ {
+ "DeviceName": "/dev/xvda",
+ "Ebs": {
+ "DeleteOnTermination": true,
+ "Encrypted": true,
+ "Iops": 3000,
+ "Throughput": 125,
+ "VolumeSize": 46,
+ "VolumeType": "gp3"
+ }
+ },
+ {
+ "DeviceName": "/dev/sdf",
+ "Ebs": {
+ "DeleteOnTermination": true,
+ "Encrypted": true,
+ "Iops": 12000,
+ "Throughput": 700,
+ "VolumeSize": 2000,
+ "VolumeType": "gp3"
+ }
+ }
+ ],
+ EbsOptimized: true,
+ IamInstanceProfile: Match.anyValue(),
+ ImageId: Match.anyValue(),
+ InstanceType:"r7a.2xlarge",
+ SecurityGroupIds: [Match.anyValue()],
+ UserData: Match.anyValue(),
+ TagSpecifications: Match.anyValue(),
+ }
+ })
+
+ // Has Auto Scaling Group.
+ template.hasResourceProperties("AWS::AutoScaling::AutoScalingGroup", {
+ AutoScalingGroupName: `xrp-ha-nodes`,
+ HealthCheckGracePeriod: config.haNodeConfig.albHealthCheckGracePeriodMin * 60,
+ HealthCheckType: "ELB",
+ DefaultInstanceWarmup: 60,
+ MinSize: "0",
+ MaxSize: "4",
+ DesiredCapacity: config.haNodeConfig.numberOfNodes.toString(),
+ VPCZoneIdentifier: Match.anyValue(),
+ TargetGroupARNs: Match.anyValue(),
+ });
+
+ // Has Auto Scaling Lifecycle Hook.
+ template.hasResourceProperties("AWS::AutoScaling::LifecycleHook", {
+ DefaultResult: "ABANDON",
+ HeartbeatTimeout: config.haNodeConfig.heartBeatDelayMin * 60,
+ LifecycleHookName: `xrp-ha-nodes`,
+ LifecycleTransition: "autoscaling:EC2_INSTANCE_LAUNCHING",
+ });
+
+ // Has Auto Scaling Security Group.
+ template.hasResourceProperties("AWS::EC2::SecurityGroup", {
+ GroupDescription: "Security Group for Load Balancer",
+ SecurityGroupEgress: [
+ {
+ "CidrIp": "0.0.0.0/0",
+ "Description": "Allow all outbound traffic by default",
+ "IpProtocol": "-1"
+ }
+ ],
+ SecurityGroupIngress: [
+ {
+ "CidrIp": "1.2.3.4/5",
+ "Description": "Blockchain Node RPC",
+ "FromPort": 6005,
+ "IpProtocol": "tcp",
+ "ToPort": 6005
+ }
+ ],
+ VpcId: Match.anyValue(),
+ });
+
+ // Has ALB.
+ template.hasResourceProperties("AWS::ElasticLoadBalancingV2::LoadBalancer", {
+ LoadBalancerAttributes: [
+ {
+ Key: "deletion_protection.enabled",
+ Value: "false"
+ },
+ {
+ Key: "access_logs.s3.enabled",
+ Value: "true"
+ },
+ {
+ Key: "access_logs.s3.bucket",
+ Value: Match.anyValue(),
+ },
+ {
+ Key: "access_logs.s3.prefix",
+ Value: `xrp-ha-nodes`
+ }
+ ],
+ Scheme: "internal",
+ SecurityGroups: [
+ Match.anyValue()
+ ],
+ "Subnets": [
+ Match.anyValue(),
+ Match.anyValue()
+ ],
+ Type: "application",
+ });
+
+ // Has ALB listener.
+ template.hasResourceProperties("AWS::ElasticLoadBalancingV2::Listener", {
+ "DefaultActions": [
+ {
+ "TargetGroupArn": Match.anyValue(),
+ Type: "forward"
+ }
+ ],
+ LoadBalancerArn: Match.anyValue(),
+ Port: 6005,
+ Protocol: "HTTP"
+ })
+
+ // Has ALB target group.
+ template.hasResourceProperties("AWS::ElasticLoadBalancingV2::TargetGroup", {
+ HealthCheckEnabled: true,
+ HealthCheckIntervalSeconds: 30,
+ HealthCheckPath: "/",
+ HealthCheckPort: "6005",
+ HealthyThresholdCount: 3,
+ Matcher: {
+ HttpCode: "200-299"
+ },
+ Port: 6005,
+ Protocol: "HTTP",
+ TargetGroupAttributes: [
+ {
+ Key: "deregistration_delay.timeout_seconds",
+ Value: "30"
+ },
+ {
+ Key: "stickiness.enabled",
+ Value: "false"
+ }
+ ],
+ TargetType: "instance",
+ UnhealthyThresholdCount: 2,
+ VpcId: Match.anyValue(),
+ })
+ });
+});
diff --git a/lib/xrp/test/single-node-stack.test.ts b/lib/xrp/test/single-node-stack.test.ts
new file mode 100644
index 00000000..311e556e
--- /dev/null
+++ b/lib/xrp/test/single-node-stack.test.ts
@@ -0,0 +1,118 @@
+import { Match, Template } from "aws-cdk-lib/assertions";
+import * as cdk from "aws-cdk-lib";
+import * as dotenv from "dotenv";
+dotenv.config({ path: "./test/.env-test" });
+import * as config from "../lib/config/XRPConfig";
+import { XRPCommonStack } from "../lib/common-stack";
+import { XRPSingleNodeStack } from "../lib/single-node-stack";
+
+
+describe("XRPSingleNodeStack", () => {
+ test("synthesizes the way we expect", () => {
+ const app = new cdk.App();
+ const xrpCommonStack = new XRPCommonStack(app, "xrp-common", {
+ env: { account: config.baseConfig.accountId, region: config.baseConfig.region },
+ stackName: `xrp-nodes-common`,
+ });
+
+ // Create the XRPSingleNodeStack.
+ const xrpSingleNodeStack = new XRPSingleNodeStack(app, "XRP-sync-node", {
+ env: { account: config.baseConfig.accountId, region: config.baseConfig.region },
+ stackName: `XRP-single-node`,
+ instanceType: config.baseNodeConfig.instanceType,
+ instanceCpuType: config.baseNodeConfig.instanceCpuType,
+ dataVolume: config.baseNodeConfig.dataVolume,
+ hubNetworkID: config.baseNodeConfig.hubNetworkID,
+ instanceRole: xrpCommonStack.instanceRole,
+ });
+
+ // Prepare the stack for assertions.
+ const template = Template.fromStack(xrpSingleNodeStack);
+
+ // Has EC2 instance security group.
+ template.hasResourceProperties("AWS::EC2::SecurityGroup", {
+ GroupDescription: Match.anyValue(),
+ VpcId: Match.anyValue(),
+ SecurityGroupEgress: [
+ {
+ "CidrIp": "0.0.0.0/0",
+ "Description": "Allow all outbound traffic by default",
+ "IpProtocol": "-1"
+ }
+ ],
+ SecurityGroupIngress: [
+ {
+ "CidrIp": "0.0.0.0/0",
+ "Description": "P2P protocols",
+ "FromPort": 51235,
+ "IpProtocol": "tcp",
+ "ToPort": 51235
+ },
+ {
+ "CidrIp": "0.0.0.0/0",
+ "Description": "P2P protocols",
+ "FromPort": 2459,
+ "IpProtocol": "tcp",
+ "ToPort": 2459
+ },
+ {
+ "CidrIp": "1.2.3.4/5",
+ "Description": "RPC port HTTP (user access needs to be restricted. Allowed access only from internal IPs)",
+ "FromPort": 6005,
+ "IpProtocol": "tcp",
+ "ToPort": 6005
+ }
+ ]
+ })
+
+ // Has EC2 instance with node configuration
+ template.hasResourceProperties("AWS::EC2::Instance", {
+ AvailabilityZone: Match.anyValue(),
+ UserData: Match.anyValue(),
+ BlockDeviceMappings: [
+ {
+ DeviceName: "/dev/xvda",
+ Ebs: {
+ DeleteOnTermination: true,
+ Encrypted: true,
+ Iops: 3000,
+ VolumeSize: 46,
+ VolumeType: "gp3"
+ }
+ }
+ ],
+ IamInstanceProfile: Match.anyValue(),
+ ImageId: Match.anyValue(),
+ InstanceType: "r7a.2xlarge",
+ Monitoring: true,
+ PropagateTagsToVolumeOnCreation: true,
+ SecurityGroupIds: Match.anyValue(),
+ SubnetId: Match.anyValue(),
+ })
+
+ // Has EBS data volume.
+ template.hasResourceProperties("AWS::EC2::Volume", {
+ AvailabilityZone: Match.anyValue(),
+ Encrypted: true,
+ Iops: 12000,
+ MultiAttachEnabled: false,
+ Size: 2000,
+ Throughput: 700,
+ VolumeType: "gp3"
+ })
+
+ // Has EBS data volume attachment.
+ template.hasResourceProperties("AWS::EC2::VolumeAttachment", {
+ Device: "/dev/sdf",
+ InstanceId: Match.anyValue(),
+ VolumeId: Match.anyValue(),
+ })
+
+ // Has CloudWatch dashboard.
+ template.hasResourceProperties("AWS::CloudWatch::Dashboard", {
+ DashboardBody: Match.anyValue(),
+ DashboardName: {"Fn::Join": ["", ["XRP-single-node-",{ "Ref": Match.anyValue() }]]}
+ })
+
+ });
+});
diff --git a/lib/xrp/tsconfig.json b/lib/xrp/tsconfig.json
new file mode 100644
index 00000000..8e1979f3
--- /dev/null
+++ b/lib/xrp/tsconfig.json
@@ -0,0 +1,31 @@
+{
+ "compilerOptions": {
+ "target": "ES2020",
+ "module": "commonjs",
+ "lib": [
+ "es2020",
+ "dom"
+ ],
+ "declaration": true,
+ "strict": true,
+ "noImplicitAny": true,
+ "strictNullChecks": true,
+ "noImplicitThis": true,
+ "alwaysStrict": true,
+ "noUnusedLocals": false,
+ "noUnusedParameters": false,
+ "noImplicitReturns": true,
+ "noFallthroughCasesInSwitch": false,
+ "inlineSourceMap": true,
+ "inlineSources": true,
+ "experimentalDecorators": true,
+ "strictPropertyInitialization": false,
+ "typeRoots": [
+ "../../node_modules/@types"
+ ]
+ },
+ "exclude": [
+ "node_modules",
+ "cdk.out"
+ ]
+}
diff --git a/scripts/run-all-cdk-tests.sh b/scripts/run-all-cdk-tests.sh
index 57fb1374..2d9304b7 100755
--- a/scripts/run-all-cdk-tests.sh
+++ b/scripts/run-all-cdk-tests.sh
@@ -10,6 +10,12 @@ run_test(){
local workdir=$1
cd "$workdir" || exit 1
echo "Running tests for $workdir"
+ if [ -f "jest.config.js" ]; then
+ echo "Using jest configuration at ${workdir}jest.config.js"
+ npx jest --config jest.config.js
+ else
+ npx jest
+ fi
npm run test
if [ $? -ne 0 ]; then
echo "Tests failed for $workdir"
diff --git a/website/docs/Blueprints/XRP.md b/website/docs/Blueprints/XRP.md
new file mode 100644
index 00000000..16e53e26
--- /dev/null
+++ b/website/docs/Blueprints/XRP.md
@@ -0,0 +1,8 @@
+---
+sidebar_label: XRP
+---
+#
+
+import Readme from '../../../lib/xrp/README.md';
+
+<Readme />