
AWS Deployment

Prerequisites

  1. AWS CLI installed
  2. AWS CLI configured with programmatic access
  3. An S3 bucket accessible to the AWS CLI user
  4. [Optionally] kubectl installed for interacting with EKS

Usage

Configuration

  1. Configure variables in terraform.tfvars
  • project_name - AWS organization project name;

  • region - AWS deployment region; default is eu-central-1;

  • docker_image - Docker image of the collator;

  • container_args - collator arguments, specific to the collator you are spinning up; no spaces are allowed inside an argument - separate arguments with ", " instead of spaces;

  • container_command - command passed to the collator container.

  2. Configure container ports with container_args in terraform.tfvars if your collator does not use the default ports 30333, 9933, 9944
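As an illustration, a terraform.tfvars for a hypothetical collator might look like the sketch below. The project name, image, chain spec, and ports are placeholders, not values from this repository; variable names follow the Configuration list above.

```hcl
# terraform.tfvars -- illustrative values only; adjust for your collator.
project_name = "my-parachain"   # hypothetical project name
region       = "eu-central-1"   # default region from this README

docker_image = "example/collator:latest"   # placeholder image

# Arguments must not contain spaces; split them into separate list items.
container_args = [
  "--collator",
  "--chain=my-parachain-spec.json",   # placeholder chain spec
  "--port=30333",
  "--rpc-port=9933",
  "--ws-port=9944",
]

container_command = [""]
```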

Optional Configurations

  1. Configure variables in backend.tf
  • bucket - bucket name where tfvars are stored;
  • region - bucket region.
  2. Configure variables in terraform.tfvars
  • eks_node_groups[0].disk_size - you may specify the disk size of the node; default is 500 GB;
  • eks_node_groups[0].instance_types - you may specify the instance size; default is "m5.xlarge".
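For reference, the bucket and region variables above would typically live in an S3 backend block. This is a hedged sketch with placeholder bucket and key names, not the repository's actual backend.tf:

```hcl
# backend.tf -- sketch of an S3 backend; bucket and key are placeholders.
terraform {
  backend "s3" {
    bucket = "my-collator-state"     # bucket name where state/tfvars are stored
    key    = "terraform/state/terraform.tfstate"
    region = "eu-central-1"          # bucket region
  }
}
```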

Deployment

Once you have configured everything, follow the steps below to deploy the collator:

  • Upload the tfvars file to the bucket with aws s3 cp terraform.tfvars s3://${NAME_OF_THE_BUCKET}/terraform/tfvars/terraform.tfvars --profile ${PROFILE}
  • Install all dependencies with terraform init
  • [optionally] Create a workspace with terraform workspace new ${COLLATOR_NAME} if you need to support several collators
  • [optionally] Select the workspace you are going to work in with terraform workspace select ${COLLATOR_NAME}
  • Check the deployment with terraform plan
  • If everything is planned correctly, apply the deployment with terraform apply
  • Verify that your node is syncing via https://telemetry.polkadot.io/
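The steps above can be sketched as one shell session. The bucket, profile, and collator names are placeholders; these commands require configured AWS credentials, so this is a walkthrough rather than a runnable script.

```shell
# Placeholders -- substitute your own values.
NAME_OF_THE_BUCKET="my-tfvars-bucket"
PROFILE="collator"
COLLATOR_NAME="my-collator"

# Store the tfvars in S3 so they can be fetched again for later updates.
aws s3 cp terraform.tfvars \
  "s3://${NAME_OF_THE_BUCKET}/terraform/tfvars/terraform.tfvars" \
  --profile "${PROFILE}"

terraform init                                 # install providers and modules
terraform workspace new "${COLLATOR_NAME}"     # optional: one workspace per collator
terraform workspace select "${COLLATOR_NAME}"  # optional
terraform plan                                 # review the planned changes
terraform apply                                # deploy
```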

Update configuration

If you need to update the existing configuration:

  • [optionally] Select the workspace you are going to work in with terraform workspace select ${COLLATOR_NAME}
  • Fetch the tfvars you stored previously with aws s3 cp s3://${NAME_OF_THE_BUCKET}/terraform/tfvars/terraform.tfvars terraform.tfvars --profile ${PROFILE}
  • Verify that only the required updates are planned with terraform plan
  • If everything is planned correctly, apply the deployment with terraform apply
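An update pass can be sketched the same way (placeholder names as before; note that here the tfvars are fetched from S3 rather than uploaded):

```shell
NAME_OF_THE_BUCKET="my-tfvars-bucket"   # placeholder
PROFILE="collator"                      # placeholder
COLLATOR_NAME="my-collator"             # placeholder

terraform workspace select "${COLLATOR_NAME}"   # optional

# Fetch the tfvars stored during the initial deployment.
aws s3 cp \
  "s3://${NAME_OF_THE_BUCKET}/terraform/tfvars/terraform.tfvars" \
  terraform.tfvars --profile "${PROFILE}"

terraform plan    # confirm only the intended changes are planned
terraform apply
```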

Examples

You can find examples of deployments for concrete parachains here

Maintenance

The current setup supports the following maintenance actions:

  • update the collator image - update the docker_image variable in terraform.tfvars and run terraform apply to apply the change.
  • change disk size - update eks_node_groups[0].disk_size to the desired value and run terraform apply to apply the change.
  • change instance size - update eks_node_groups[0].instance_types to the desired value and run terraform apply to apply the change.
  • destroy - if you need to free all resources, you can do that with terraform destroy. NOTE: This action is irreversible and causes recreation of the node.

EKS Maintenance

After Terraform has executed successfully, use the following command to authenticate to the cluster. Do not forget to change the name (in case the Terraform code was changed and the cluster has a different name), the region, and the profile if needed.

aws eks update-kubeconfig --name collator-cluster --region ${AWS_REGION} --profile ${PROFILE}

After executing the command above, use kubectl to interact with the cluster, or install k8slens (an IDE for Kubernetes). In case a subscription is required, use this link
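As an example, once the kubeconfig is updated you might verify the collator workload with standard kubectl commands. The deployment resource kubernetes_deployment.collator appears in the Resources list below, but the exact deployment name and namespace are assumptions here:

```shell
# Authenticate against the cluster created by this module.
aws eks update-kubeconfig --name collator-cluster \
  --region "${AWS_REGION}" --profile "${PROFILE}"

# Inspect the cluster and locate the collator workload.
kubectl get nodes
kubectl get deployments --all-namespaces   # find the collator deployment
kubectl get pods --all-namespaces          # check the collator pod is Running

# Tail the collator logs; assumes the deployment is named "collator"
# and lives in the default namespace.
kubectl logs -f deployment/collator
```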

Monitoring

Two monitoring options come out of the box with this project:

  • Detailed monitoring within CloudWatch (do not forget to change the region if yours differs), which provides dashboards and log aggregation for the EC2 instance, the EKS cluster, the collator pod, etc. You only need to choose the correct log group from your AWS account; by default it is called /aws/eks/collator-cluster/cluster
  • Resource monitoring is available directly on the EC2 instance, under the Monitoring tab

Requirements

Name Version
terraform >= 1.3.0
aws 4.48.0

Providers

Name Version
aws 4.48.0
kubernetes n/a
random n/a
time n/a

Modules

Name Source Version
ec2_label cloudposse/label/null 0.25.0
eks_cluster cloudposse/eks-cluster/aws 2.6.0
eks_node_groups cloudposse/eks-node-group/aws 2.6.1
label cloudposse/label/null 0.25.0
subnets cloudposse/dynamic-subnets/aws 2.0.4
vpc cloudposse/vpc/aws 1.1.1
vpc_label cloudposse/label/null 0.25.0

Resources

Name Type
aws_autoscaling_group_tag.eks_node_groups resource
aws_eks_addon.coredns resource
aws_vpc_ipv4_cidr_block_association.secondary_ipv4_cidr resource
kubernetes_deployment.collator resource
random_integer.octet1 resource
random_integer.octet2 resource
time_sleep.eks_node_groups_wait resource
aws_availability_zones.available data source
aws_caller_identity.current data source
aws_eks_cluster.cluster data source
aws_eks_cluster_auth.cluster data source

Inputs

Name               Description  Type           Default         Required
container_args     n/a          list(string)   n/a             yes
docker_image       n/a          string         n/a             yes
eks_node_groups    n/a          (object below) n/a             yes
project_name       n/a          string         n/a             yes
aws_profile_name   n/a          string         "collator"      no
aws_region         n/a          string         "eu-central-1"  no
container_command  n/a          list(string)   [""]            no

The eks_node_groups type:

list(object({
  name                = optional(string, "default")
  desired_size        = optional(number, "1")
  min_size            = optional(number, "1")
  max_size            = optional(number, "1")
  disk_size           = optional(number, "20")
  multi_az            = optional(bool, "true")
  kubernetes_version  = optional(string, "1.23")
  capacity_type       = optional(string, "ON_DEMAND")
  instance_types      = optional(list(string), ["t3.medium"])
  ami_release_version = optional(list(string), [])
  arch                = optional(string, "amd64")
  kubernetes_labels   = optional(map(string), {"node-group-purpose" = "default"})
}))

Outputs

Name Description
account_id n/a
environment n/a
project_name n/a