Commit 285320d

Merge pull request #1614 from JoeStech/copilot-deployment-lp
Graviton Infrastructure for GitHub Copilot Extensions Learning Path
2 parents 3bb6d4e + 68d7872 commit 285320d

File tree

8 files changed: +434 −1 lines changed

assets/contributors.csv

Lines changed: 1 addition & 1 deletion

```diff
@@ -63,7 +63,7 @@ Albin Bernhardsson,,,,,
 Przemyslaw Wirkus,,,,,
 Zach Lasiuk,,,,,
 Daniel Nguyen,,,,,
-Joe Stech,Arm,,,,
+Joe Stech,Arm,JoeStech,joestech,,
 visualSilicon,,,,,
 Konstantinos Margaritis,VectorCamp,,,,
 Kieran Hejmadi,,,,,
```
Lines changed: 47 additions & 0 deletions

---
title: CDK installation
weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## What is AWS CDK?

AWS CDK is an AWS-native Infrastructure as Code (IaC) tool that lets cloud engineers write infrastructure definitions in several general-purpose languages. The CDK libraries are authored in TypeScript and exposed to the other languages through JSII; regardless of the language used, your CDK app synthesizes CloudFormation templates, which then deploy the specified resources.

This Learning Path uses the Python flavor of AWS CDK, because the Copilot Extension that will be deployed is also written in Python. Writing both IaC and application code in the same language is helpful for certain teams, especially those without dedicated platform engineers.
## How do I install AWS CDK?

You will need npm and Python installed. To install the CDK CLI, run:

```bash
npm install -g aws-cdk
```

To verify that the installation was successful, run:

```bash
cdk --version
```

You should see a version number returned, signifying success.

After the CDK CLI is installed, you can use it to create a new Python CDK project:

```bash
mkdir copilot-extension-deployment
cd copilot-extension-deployment
cdk init app --language python
```

This sets up convenient file stubs, as well as a `requirements.txt` file listing the required Python CDK libraries. The `init` command uses the name of the project folder to name various elements of the project, with hyphens converted to underscores. Activate the virtual environment that `cdk init` created and install the packages from `requirements.txt`:
```bash
source .venv/bin/activate
pip install -r requirements.txt
```

Now you are ready to specify the AWS services needed for your GitHub Copilot Extension.
Lines changed: 243 additions & 0 deletions

---
title: Deploying AWS services
weight: 3

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## What AWS services do I need?
In [the first GitHub Copilot Extension Learning Path](/learning-paths/servers-and-cloud-computing/gh-copilot-simple) you ran a GitHub Copilot Extension from a single Linux computer, with the public URL provided by an ngrok tunnel to your localhost.

In an actual production environment, you'll want:

* A domain that you own, with DNS settings that you control (you can get this through AWS Route 53)
* A load balancer (AWS ALB)
* An auto-scaling group (AWS ASG) in a private virtual cloud subnet (AWS VPC) that you can resize based on load

To use your custom domain with your ALB, you'll also need a custom TLS certificate, so that the ALB can terminate TLS before forwarding traffic to your ASG instances.

The following sections walk you through setting up all these required services in AWS CDK.
## Imports

You will have an auto-generated folder called `copilot_extension_deployment` within the `copilot-extension-deployment` directory that you previously created. It contains a file called `copilot_extension_deployment_stack.py`. Open this file and add the following import lines (`os` is needed later to read configuration from environment variables):

```python
import os

from aws_cdk import (
    Stack,
    aws_ec2 as ec2,
    aws_elasticloadbalancingv2 as elbv2,
    aws_autoscaling as autoscaling,
    aws_iam as iam,
    CfnOutput,
    aws_certificatemanager as acm,
    aws_route53 as route53,
    aws_route53_targets as targets
)
```
39+
40+
Then, within the generated class (`class CopilotExtensionDeploymentStack(Stack):`) in the same file, add all the AWS services needed for your Extension deployment as described in the following sections.
41+
42+
## Virtual Private Cloud (VPC)
43+
44+
The code below will create a VPC with a public and private subnet. These subnets have a CIDR mask of 24, which means you'll get 256 total IPs in each subnet. If you need more than this, adjust accordingly.
45+
46+
```python
47+
vpc = ec2.Vpc(self, "FlaskStackVPC",
48+
max_azs=2,
49+
subnet_configuration=[
50+
ec2.SubnetConfiguration(
51+
name="Private",
52+
subnet_type=ec2.SubnetType.PRIVATE_WITH_EGRESS,
53+
cidr_mask=24
54+
),
55+
ec2.SubnetConfiguration(
56+
name="Public",
57+
subnet_type=ec2.SubnetType.PUBLIC,
58+
cidr_mask=24
59+
)
60+
]
61+
)
62+
```
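As a sanity check on the /24 sizing above: a subnet with CIDR mask `m` holds 2^(32−m) addresses, and AWS reserves five addresses in every subnet, so slightly fewer are usable. A small sketch of the arithmetic:

```python
def subnet_addresses(cidr_mask: int) -> int:
    """Total IPv4 addresses in a subnet with the given CIDR mask."""
    return 2 ** (32 - cidr_mask)

# AWS reserves 5 addresses per subnet (network address, VPC router,
# DNS, future use, and broadcast), so usable = total - 5.
total = subnet_addresses(24)
print(total, total - 5)
```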
You'll also need a security group for the EC2 instances:

```python
security_group = ec2.SecurityGroup(self, "EC2SecurityGroup",
    vpc=vpc,
    allow_all_outbound=True,
    description="Security group for EC2 instances"
)
```

## EC2

Once you have your VPC templates set up, you can use them in your EC2 templates.

First, create a User Data script for all the EC2 instances that will launch in your auto-scaling group. This installs an SSM agent and the AWS CLI, for later convenience:

```python
user_data = ec2.UserData.for_linux()
user_data.add_commands(
    "apt-get update",
    # Install SSM agent
    "sudo snap install amazon-ssm-agent --classic",
    "sudo systemctl enable snap.amazon-ssm-agent.amazon-ssm-agent.service",
    "sudo systemctl start snap.amazon-ssm-agent.amazon-ssm-agent.service",
    # Install AWS CLI v2 (-y so apt doesn't prompt during launch)
    "apt install -y unzip",
    'curl "https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip" -o "awscliv2.zip"',
    "unzip awscliv2.zip",
    "sudo ./aws/install",
    # add any additional commands that you'd like to run on instance launch here
)
```
After the User Data, you'll want to look up the latest Ubuntu 24.04 Arm AMI:

```python
ubuntu_arm_ami = ec2.MachineImage.lookup(
    name="ubuntu/images/hvm-ssd-gp3/ubuntu-noble-24.04-arm64-server-*",
    owners=["099720109477"],  # Canonical's AWS account ID
    filters={"architecture": ["arm64"]}
)
```
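If you'd like to see which AMI that lookup resolves to before synthesizing the stack, you can run the equivalent query with the AWS CLI (this assumes the CLI is configured with credentials for your account; it mirrors the CDK lookup's owner and filters):

```shell
aws ec2 describe-images \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd-gp3/ubuntu-noble-24.04-arm64-server-*" \
            "Name=architecture,Values=arm64" \
  --query "sort_by(Images, &CreationDate)[-1].[Name,ImageId]" \
  --output text
```

Note that `MachineImage.lookup` caches its result in `cdk.context.json`, so the deployed AMI stays stable until you refresh the context.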
Next, create an IAM role that allows your EC2 instances to use the SSM agent, write logs to CloudWatch, and access AWS S3:

```python
ec2_role_name = "Proj-Flask-LLM-ALB-EC2-Role"
ec2_role = iam.Role(self, "EC2Role",
    assumed_by=iam.ServicePrincipal("ec2.amazonaws.com"),
    managed_policies=[
        iam.ManagedPolicy.from_aws_managed_policy_name("AmazonSSMManagedInstanceCore"),
        iam.ManagedPolicy.from_aws_managed_policy_name("CloudWatchAgentServerPolicy"),
        iam.ManagedPolicy.from_aws_managed_policy_name("CloudWatchLogsFullAccess"),
        iam.ManagedPolicy.from_aws_managed_policy_name("AmazonS3FullAccess")
    ],
    role_name=ec2_role_name,
)
```
Now pull all these elements together in the launch template that the ASG will use:

```python
launch_template = ec2.LaunchTemplate(self, "LaunchTemplate",
    instance_type=ec2.InstanceType("c8g.xlarge"),
    machine_image=ubuntu_arm_ami,
    user_data=user_data,
    security_group=security_group,
    role=ec2_role,
    detailed_monitoring=True,
    block_devices=[
        ec2.BlockDevice(
            device_name="/dev/sda1",
            volume=ec2.BlockDeviceVolume.ebs(
                volume_size=50,
                volume_type=ec2.EbsDeviceVolumeType.GP3,
                delete_on_termination=True
            )
        )
    ]
)
```
Finally, create the ASG, specifying the launch template you just created for the EC2 instances within the ASG:

```python
asg = autoscaling.AutoScalingGroup(self, "ASG",
    vpc=vpc,
    vpc_subnets=ec2.SubnetSelection(
        subnet_type=ec2.SubnetType.PRIVATE_WITH_EGRESS),
    launch_template=launch_template,
    min_capacity=1,
    max_capacity=1,
    desired_capacity=1
)
```

The instances live inside your private subnet for security, and you only need one instance to begin with. You can scale manually later on, or add a scaling policy, depending on your needs.
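If you later want the ASG to scale on its own rather than stay at a fixed size, one option (a sketch, not part of the original stack) is a target-tracking policy on average CPU. Raise `max_capacity` in the ASG definition first, then add:

```python
# Hypothetical target-tracking policy: the ASG adds or removes
# instances to keep average CPU utilization near 60%.
asg.scale_on_cpu_utilization("CpuScaling",
    target_utilization_percent=60
)
```

This is a CDK stack fragment and assumes the `asg` object defined above; it only takes effect once `max_capacity` allows more than one instance.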
## Application Load Balancer (ALB)

First, create an ALB in the public subnet, using the VPC resources you previously specified:

```python
alb = elbv2.ApplicationLoadBalancer(self, "ALB",
    vpc=vpc,
    internet_facing=True,
    vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC)
)
```
Next, add a custom certificate. You'll need to generate this certificate beforehand; if you want to do this from the AWS console, see [Getting Started with AWS Certificate Manager](https://aws.amazon.com/certificate-manager/getting-started/).

Set the `ACM_CERTIFICATE_ARN` environment variable to the ARN of your newly created certificate:

```python
certificate = acm.Certificate.from_certificate_arn(
    self,
    "Certificate",
    os.environ["ACM_CERTIFICATE_ARN"]
)
```
Next, configure a listener for the ALB that uses the certificate and adds the ASG as a target on port 8080 (this is where you'll serve your Flask app):

```python
# Add a listener to the ALB with HTTPS
listener = alb.add_listener("HttpsListener",
    port=443,
    certificates=[certificate],
    ssl_policy=elbv2.SslPolicy.RECOMMENDED)

# Add the ASG as a target to the ALB listener
listener.add_targets("ASGTarget",
    port=8080,
    targets=[asg],
    protocol=elbv2.ApplicationProtocol.HTTP,
    health_check=elbv2.HealthCheck(
        path="/health",
        healthy_http_codes="200-299"
    ))
```
## Custom domain setup in Route 53

The final step in setting up your AWS services is to add an ALB-linked A record to the hosted zone for your domain. This ensures that when GitHub invokes your API, DNS resolves to your ALB. Set the `HOSTED_ZONE_DOMAIN_NAME` environment variable to your hosted zone domain, and `SUBDOMAIN_NAME` to the subdomain covered by the ACM certificate that you generated and attached to your ALB.

```python
hosted_zone = route53.HostedZone.from_lookup(self, "HostedZone",
    domain_name=os.environ["HOSTED_ZONE_DOMAIN_NAME"],
)

# Create an A record for the subdomain
route53.ARecord(self, "ALBDnsRecord",
    zone=hosted_zone,
    record_name=os.environ["SUBDOMAIN_NAME"],
    target=route53.RecordTarget.from_alias(targets.LoadBalancerTarget(alb))
)
```
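The imports at the top of the stack file include `CfnOutput`, which is otherwise unused. You can optionally use it to surface the ALB's DNS name when deployment finishes; a small sketch, assuming the `alb` object defined earlier:

```python
# Optional: export the ALB DNS name as a CloudFormation stack output,
# so `cdk deploy` prints it when the stack finishes deploying.
CfnOutput(self, "ALBDnsName", value=alb.load_balancer_dns_name)
```

This is handy for quickly checking that the load balancer is reachable before DNS propagates.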
## How do I deploy?

Once you have added all of the sections above to your `copilot_extension_deployment_stack.py` file, you can deploy your services to AWS. You must first ensure that your CDK environment in AWS is 'bootstrapped', which means that the AWS CDK has created all the resources it needs to use when deploying (IAM roles, an ECR repo for images, and buckets for artifacts). Bootstrapping is a one-time step per account and region, and can generally be done by running:

```bash
cdk bootstrap aws://123456789012/us-east-1
```

Replace the AWS account number and region with your own.
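If you're not sure which account number your current credentials map to, the AWS CLI can report it (assumes the CLI is configured):

```shell
# Prints the 12-digit account ID for the active credentials.
aws sts get-caller-identity --query Account --output text
```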
{{% notice Note %}}
If your organization has governance rules in place regarding naming conventions, you'll need a custom bootstrap yaml. To learn more about custom bootstrapping, see the [AWS guide on Bootstrapping your environment for use with the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping-env.html).
{{% /notice %}}

Once your environment has been bootstrapped, you can run:

```bash
cdk deploy
```

from within the directory that contains your stack file. The deployment will take a few minutes, as CloudFormation creates your resources.
Lines changed: 58 additions & 0 deletions

---
title: Deploying Flask
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall
---
## How do I deploy my Copilot Extension Flask app to my newly created EC2 instance?

In the first GitHub Copilot Extension Learning Path you created a Flask app in the section titled "[How can I create my own private GitHub Copilot Extension?](/learning-paths/servers-and-cloud-computing/gh-copilot-simple/run-python/)".

You will deploy this Flask app on your newly created EC2 instance. First, get your EC2 instance ID:

```bash
aws ec2 describe-instances --filters "Name=tag:Name,Values=CopilotExtensionDeploymentStack/LaunchTemplate" --query "Reservations[*].Instances[*].InstanceId" --output text
```

Then use that ID to log in with AWS SSM. You must use AWS SSM because your instance is in a private subnet for security purposes; because the SSM agent is running on the instance, it creates a tunnel that lets you open a shell on the machine with the following command:

```bash
aws ssm start-session --target [your instance ID]
```

You should now be able to go through the steps in "[How can I create my own private GitHub Copilot Extension?](/learning-paths/servers-and-cloud-computing/gh-copilot-simple/run-python/)" to create your Flask app, create a Python virtual environment, and install the appropriate packages.

The only two changes you'll make are to add a health check endpoint (for the ALB health check), and to run your app on 0.0.0.0 port 8080, where the ALB target group expects it.
First, add the following endpoint to your main Flask file (make sure `Response` is imported from `flask`):

```python
from flask import Response

@app.route('/health')
def health():
    return Response(status=200)
```
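Before deploying, you can sanity-check the endpoint with Flask's built-in test client; this self-contained sketch assumes a minimal app with just the health route:

```python
from flask import Flask, Response

app = Flask(__name__)

@app.route('/health')
def health():
    # The ALB health check only needs a 2xx status; no body is required.
    return Response(status=200)

# The test client exercises the route without starting a server.
with app.test_client() as client:
    print(client.get('/health').status_code)
```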
Next, add the `host` argument to the `app.run` call at the end of the file and update the port number. The final result should look like this:

```python
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)
```

This exposes your app on the port that your ALB listener targets.

Run the simple extension:

```bash
python ./simple-extension.py
```

You should now be able to navigate to your API subdomain from any browser and see:

```text
"Hello! Welcome to the example GitHub Copilot Extension in Python!"
```
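You can run the same checks from a terminal; the hostname below is a placeholder, so substitute the subdomain you configured in Route 53:

```shell
# Hypothetical hostname -- replace with your actual subdomain.
curl -i https://copilot.example.com/health   # expect a 200 status line
curl https://copilot.example.com/            # expect the welcome message
```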
Your API is now complete and ready to be configured in your GitHub Application.
Lines changed: 25 additions & 0 deletions

---
title: Configuring GitHub
weight: 5

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## How do I configure my GitHub Application to use my API?

Open the GitHub App that you created in [the first GitHub Copilot Extension Learning Path](/learning-paths/servers-and-cloud-computing/gh-copilot-simple).

Navigate to the 'Copilot' tab, and add your URL to the field under the 'Agent Definition' section:

![Configure URL](configure.png)

You will also want to change the 'Callback URL' under the General tab. This is the full URL to redirect to after a user authorizes an installation.

## Test your Extension

You are now ready to test your productionized Extension. For guidance on testing, see [Test your Copilot Extension](/learning-paths/servers-and-cloud-computing/gh-copilot-simple/copilot-test/) in the previous Copilot Extension Learning Path.

## Next Steps

You are now ready to build a more advanced Copilot Extension that uses RAG techniques in [Create a RAG-based GitHub Copilot Extension in Python](../copilot-extension).
