Add a Terraform configuration to deploy lnt.llvm.org #128
base: main
Changes from 1 commit
New file: GitHub Actions deployment workflow (34 lines added)

```yaml
name: Deploy lnt.llvm.org

on:
  push:
    tags:
      - 'v*'

permissions:
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-24.04

    steps:
      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Initialize Terraform
        run: terraform init

      - name: Apply Terraform changes
        run: terraform apply -auto-approve
        env:
          TF_VAR_lnt_db_password: ${{ secrets.LNT_DB_PASSWORD }}
          TF_VAR_lnt_auth_token: ${{ secrets.LNT_AUTH_TOKEN }}
```

Review thread on the "Apply Terraform changes" step:

> If the instance already exists and then we re-deploy it, what's going to happen? My understanding is that we'd start over from scratch with an empty EC2 instance, which means we would lose all of the existing data stored on the instance. Is that not the case? Do you understand the mechanism by which the data we store in VOLUMES in the Docker container ends up being persisted across re-deployments of the EC2 instance? I don't.

> I'm pretty sure it just calls whatever AWS API calls it needs to update the instance to match your Terraform file; it won't get destroyed. Terraform holds state about this type of stuff. The volume will just be stored on the root block device since we haven't attached any EBS storage or anything.

> I see. But the Terraform state is not kept across invocations of the GitHub Action, so I don't really understand how Terraform can tell that we even already have an instance.
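The state-persistence concern raised in this thread is usually solved with a remote backend, so `terraform apply` in CI can diff against the existing instance instead of trying to create a new one. A minimal sketch, assuming a pre-created S3 bucket (the bucket and key names here are hypothetical; the premerge cluster uses a GCS bucket for the same purpose):

```hcl
# Hypothetical remote backend: persists terraform.tfstate across CI runs.
# The bucket must exist before `terraform init` is run.
terraform {
  backend "s3" {
    bucket = "llvm-lnt-terraform-state" # hypothetical bucket name
    key    = "lnt/terraform.tfstate"
    region = "us-west-2"
  }
}
```

With this block in place, the workflow's `terraform init` step picks up the shared state automatically; no change to the `terraform apply` step is needed.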
New file: ec2-startup.sh.tpl (17 lines added)

```bash
#!/bin/bash

#
# This is a template for the startup script that gets run on the EC2
# instance running lnt.llvm.org. This template gets filled in by the
# Terraform configuration file.
#

sudo yum update -y
sudo amazon-linux-extras install docker git -y
sudo service docker start
sudo usermod -a -G docker ec2-user
sudo chkconfig docker on

LNT_DB_PASSWORD=${__db_password__}
LNT_AUTH_TOKEN=${__auth_token__}
docker compose --file compose.yaml up
```

Review comment on the `LNT_DB_PASSWORD` line:

> Where are these env variables coming from?
ldionne marked this conversation as resolved.

Review thread on the `docker compose` invocation:

> I think we need to daemonize this, otherwise cloud-init will never finish.

> IIUC these user data scripts are only called when the instance is first initialized, but not e.g. rebooted. So we probably want to change the docker-compose restart policy to be …

> This depends upon how we set it up. I was thinking it might be better to set up the machine to be a clean slate on every boot, and mount a persistent volume that actually contains the DB. That makes it super easy to change system software inside TF.
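The daemonization and reboot concerns above could be addressed together: run compose detached so cloud-init can finish, and give the service a restart policy so the Docker daemon (already enabled at boot via `chkconfig docker on`) brings the container back after a reboot even though user-data runs only once. A sketch; the `lnt` service name is hypothetical, since compose.yaml is not part of this diff:

```yaml
# Hypothetical compose.yaml fragment (service name is illustrative).
# `restart: unless-stopped` makes dockerd restart the container on reboot.
services:
  lnt:
    restart: unless-stopped
```

The startup script would then end with `docker compose --file compose.yaml up --detach` instead of the blocking foreground invocation.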
New file: Terraform configuration (38 lines added)

```hcl
#
# Terraform file for deploying lnt.llvm.org.
#

provider "aws" {
  region = "us-west-2"
}

variable "lnt_db_password" {
  type        = string
  description = "The database password for the lnt.llvm.org database."
  sensitive   = true
}

variable "lnt_auth_token" {
  type        = string
  description = "The authentication token to perform destructive operations on lnt.llvm.org."
  sensitive   = true
}

resource "local_file" "docker-compose-file" {
  source   = "../compose.yaml"
  filename = "${path.module}/compose.yaml"
}

resource "aws_instance" "docker_server" {
  ami           = "ami-0c97bd51d598d45e4" # Amazon Linux 2023 kernel-6.12 AMI in us-west-2
  instance_type = "t2.micro"
  key_name      = "test-key-name" # TODO

  tags = {
    Name = "lnt.llvm.org"
  }

  user_data = templatefile("${path.module}/ec2-startup.sh.tpl", {
    __db_password__ = var.lnt_db_password,
    __auth_token__  = var.lnt_auth_token,
  })
}
```

Review comment on the `provider` block:

> We also need a way to set the Terraform state. We use a GCS bucket in the premerge cluster to do this: https://github.com/llvm/llvm-zorg/blob/87d07e600970abf419046d2ab6083b2d64240bce/premerge/main.tf#L31. Otherwise state isn't saved across checkouts, which means things won't work.

Review comment on the `variable` blocks:

> These should probably be … https://github.com/llvm/llvm-zorg/blob/87d07e600970abf419046d2ab6083b2d64240bce/premerge/main.tf#L113 is how we set this up for premerge. Not sure exactly how to do this for AWS.

Review thread on the `ami` line:

> @boomanaiden154 Are we OK with hardcoding the AMI? What do you folks usually do?

> Not familiar with how AWS does things. Hardcoding it doesn't seem like a big deal. But we want to be able to change it, which would probably force instance recreation. I think we should do what I suggested above where the instance is a clean slate on every boot but mounts a persistent volume that has the DB info.

Review thread on the `instance_type` line:

> The default block storage on these instances is tiny (~8GB IIRC?); you probably want to expand it by a few GB.

> A couple GB boot disk should be fine, but slightly bigger might be good. The DB should probably be on a separate volume.

Review thread on the `key_name` line:

> I'm not sure what to put here; I presume this needs to match a key in the LLVM Foundation's actual AWS account.

> Any keys should be specified in the provider. I think this is a different type of key.
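Two of the concerns above (the hardcoded AMI and the small default root disk) can both be handled inside the instance resource. A sketch, assuming the latest Amazon Linux 2023 AMI is acceptable; the filter pattern and disk size are illustrative, not verified:

```hcl
# Hypothetical alternative to hardcoding the AMI ID: resolve the most
# recent Amazon Linux 2023 AMI at plan time. Note that a newer AMI match
# forces instance recreation, as discussed above.
data "aws_ami" "al2023" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-2023*-x86_64"] # illustrative filter pattern
  }
}

resource "aws_instance" "docker_server" {
  ami           = data.aws_ami.al2023.id
  instance_type = "t2.micro"

  # Expand the root disk beyond the small default (size is illustrative).
  root_block_device {
    volume_size = 20 # GiB
  }
}
```

If the database lives on a separate persistent volume, as suggested in the review, an `aws_ebs_volume` plus `aws_volume_attachment` pair would keep the data independent of instance recreation.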
Review thread on the workflow trigger:

> I am deploying on tags at the moment: I don't think we want to re-deploy on every commit, since we risk bringing down the instance. Actually, I even wonder whether that should be a manually triggered job. WDYT?

> We want to tag images by commit SHA, but explicitly version them in the Terraform. That means we get a new image per commit, but we only redeploy when we explicitly bump the commit of the images we're running.
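If the deploy becomes a manually triggered job, as floated above, only the trigger block of the workflow needs to change. A sketch; the `confirm` input is hypothetical:

```yaml
# Hypothetical trigger: deploy only when a maintainer runs the workflow
# from the Actions tab, instead of on every v* tag push.
on:
  workflow_dispatch:
    inputs:
      confirm:
        description: "Type 'deploy' to confirm redeploying lnt.llvm.org"
        required: true
```

The rest of the job (checkout, `terraform init`, `terraform apply`) would stay as-is.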