- AWS CLI installed
- AWS CLI configured with programmatic access
- An S3 bucket accessible by the AWS CLI user
- [Optionally] kubectl installed for interacting with EKS
- Configure variables in `terraform.tfvars`:
  - `project_name` - AWS organization project name;
  - `region` - AWS deployment region; default is `eu-central-1`;
  - `docker_image` - Docker image of the collator;
  - `container_args` - collator arguments, specific to the collator you are spinning up; no spaces are allowed inside an argument - separate tokens with `", "` instead of spaces;
  - `container_command` - command passed to the collator container.
- Configure container ports with `container_args` in `terraform.tfvars` if your collator doesn't use the default ports `30333`, `9933`, `9944`
- Configure variables in `backend.tf`:
  - `bucket` - name of the bucket where tfvars are stored;
  - `region` - bucket region.
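For orientation, an S3 backend of roughly this shape is assumed; the bucket name and state key below are placeholders, not this project's actual values:

```hcl
# Hypothetical backend.tf sketch - bucket and key are placeholders.
terraform {
  backend "s3" {
    bucket = "my-collator-tfstate" # your bucket name
    key    = "terraform/state"     # placeholder state key
    region = "eu-central-1"        # your bucket region
  }
}
```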
- Configure variables in `terraform.tfvars`:
  - `eks_node_groups[0].disk_size` - you may specify the disk size of the node; default is 500 GB;
  - `eks_node_groups[0].instance_types` - you may specify the instance size; default is `"m5.xlarge"`.
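Putting the variables together, a `terraform.tfvars` might look like the sketch below. Every value is an illustrative placeholder (image name, arguments, ports, node-group fields), not taken from a real deployment, and `eks_node_groups` objects may carry more fields than shown:

```hcl
# Hypothetical terraform.tfvars sketch - all values are placeholders.
project_name = "my-parachain"
region       = "eu-central-1"
docker_image = "example/collator:v1.0.0"

# No spaces inside a single argument - each token is its own list element.
container_args = [
  "--collator",
  "--name", "my-collator",
  "--port", "30333",
  "--rpc-port", "9944",
]
container_command = ["/usr/bin/collator"]

eks_node_groups = [{
  disk_size      = 500           # GB
  instance_types = ["m5.xlarge"]
}]
```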
Once you have configured everything, follow the steps below to deploy the collator:

- Upload the tfvars file to the bucket with `aws s3 cp terraform.tfvars s3://${NAME_OF_THE_BUCKET}/terraform/tfvars/terraform.tfvars --profile ${PROFILE}`
- Install all dependencies with `terraform init`
- [Optionally] create a workspace with `terraform workspace new ${COLLATOR_NAME}` if you need to support several collators
- [Optionally] select the workspace you are going to work with: `terraform workspace select ${COLLATOR_NAME}`
- Check the deployment with `terraform plan`
- If everything is planned correctly, apply the deployment with `terraform apply`
- Verify that your node is syncing via https://telemetry.polkadot.io/
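The step sequence above can be sketched as a small helper that only *prints* the commands, so the sequence can be reviewed before anything is executed. The function name `print_deploy_steps` is ours, not part of this project:

```shell
#!/bin/sh
# Print the deployment command sequence without executing it.
# Arguments: bucket name, AWS profile, collator/workspace name.
print_deploy_steps() {
    bucket="$1"; profile="$2"; collator="$3"
    echo "aws s3 cp terraform.tfvars s3://${bucket}/terraform/tfvars/terraform.tfvars --profile ${profile}"
    echo "terraform init"
    echo "terraform workspace select ${collator} || terraform workspace new ${collator}"
    echo "terraform plan"
    echo "terraform apply"
}

print_deploy_steps "my-bucket" "default" "my-collator"
```

Removing the `echo`s (and adding error handling) turns the sketch into an actual deployment wrapper.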
If you need to update the existing configuration:

- [Optionally] select the workspace you are going to work with: `terraform workspace select ${COLLATOR_NAME}`
- Fetch the tfvars you have stored previously with `aws s3 cp s3://${NAME_OF_THE_BUCKET}/terraform/tfvars/terraform.tfvars terraform.tfvars --profile ${PROFILE}`
- Verify that only the required updates are planned with `terraform plan`
- If everything is planned correctly, apply the deployment with `terraform apply`
Examples of deployments for concrete parachains can be found here
The current setup supports the following maintenance actions:

- Update collator image - update the `docker_image` variable in the Terraform vars and run `terraform apply` to apply the changes.
- Change disk size - update `eks_node_groups[0].disk_size` to the desired value and run `terraform apply` to apply the changes.
- Change instance size - update `eks_node_groups[0].instance_types` to the desired value and run `terraform apply` to apply the changes.
- Destroy - if you need to free all resources, you can do that with `terraform destroy`. NOTE: this action is irreversible and causes recreation of the node.
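As a concrete illustration of the first action, bumping the collator image is a one-line edit in `terraform.tfvars` (both tags below are placeholders), followed by `terraform apply`:

```hcl
# Before (placeholder tag):
# docker_image = "example/collator:v1.0.0"

# After - bump the tag, then run `terraform plan` and `terraform apply`:
docker_image = "example/collator:v1.1.0"
```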
After Terraform has executed successfully, use the following command to authenticate to the cluster. Do not forget to change the name (in case the Terraform code was changed and the cluster name differs), region and profile if needed:

`aws eks update-kubeconfig --name collator-cluster --region ${AWS_REGION} --profile ${PROFILE}`

After executing the command above, use kubectl to interact with the cluster, or install k8slens (an IDE for Kubernetes).
In case a subscription is required, use this link
Two monitoring options come out of the box with this project:

- Detailed monitoring within CloudWatch (do not forget to change the region if it differs), which provides dashboards and log aggregation for the EC2 instance, EKS cluster, collator pod, etc. The only thing you need to do is choose the correct log groups from your AWS account; by default the log group is called `/aws/eks/collator-cluster/cluster`.
- Resource monitoring located directly on the EC2 instance, in the Monitoring tab.

## Requirements

| Name | Version |
|---|---|
| terraform | >= 1.3.0 |
| aws | 4.48.0 |

## Providers

| Name | Version |
|---|---|
| aws | 4.48.0 |
| kubernetes | n/a |
| random | n/a |
| time | n/a |

## Modules

| Name | Source | Version |
|---|---|---|
| ec2_label | cloudposse/label/null | 0.25.0 |
| eks_cluster | cloudposse/eks-cluster/aws | 2.6.0 |
| eks_node_groups | cloudposse/eks-node-group/aws | 2.6.1 |
| label | cloudposse/label/null | 0.25.0 |
| subnets | cloudposse/dynamic-subnets/aws | 2.0.4 |
| vpc | cloudposse/vpc/aws | 1.1.1 |
| vpc_label | cloudposse/label/null | 0.25.0 |

## Resources

| Name | Type |
|---|---|
| aws_autoscaling_group_tag.eks_node_groups | resource |
| aws_eks_addon.coredns | resource |
| aws_vpc_ipv4_cidr_block_association.secondary_ipv4_cidr | resource |
| kubernetes_deployment.collator | resource |
| random_integer.octet1 | resource |
| random_integer.octet2 | resource |
| time_sleep.eks_node_groups_wait | resource |
| aws_availability_zones.available | data source |
| aws_caller_identity.current | data source |
| aws_eks_cluster.cluster | data source |
| aws_eks_cluster_auth.cluster | data source |

## Inputs

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| container_args | n/a | `list(string)` | n/a | yes |
| docker_image | n/a | `string` | n/a | yes |
| eks_node_groups | n/a | `list(object({…}))` | n/a | yes |
| project_name | n/a | `string` | n/a | yes |
| aws_profile_name | n/a | `string` | `"collator"` | no |
| aws_region | n/a | `string` | `"eu-central-1"` | no |
| container_command | n/a | `list(string)` | `[…]` | no |

## Outputs

| Name | Description |
|---|---|
| account_id | n/a |
| environment | n/a |
| project_name | n/a |