
MonikaJassova/14-devops-automation-python


DevOps Automation with Python

This repo showcases using Python to automate various DevOps tasks:

  • EC2 status checks
  • configuring EC2 Instances (bulk operations on resources)
  • displaying EKS cluster information
  • creating backups of EC2 Volumes, cleaning them up and restoring from them
  • website monitoring and recovery

Technologies used

Python, Boto3, AWS, EKS, Terraform, PyCharm, Linode, Docker, Linux

EC2 Status Checks

  1. Provisioned 3 EC2 Instances in an AWS region using Terraform: terraform init & terraform apply in the eu-west-3 directory
  2. Wrote ec2-status-checks.py using Boto3 and Schedule that fetches and prints the statuses of EC2 Instances in the region at a specified interval.
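
As a rough sketch of how such a status-check script could be structured (the region, the 5-minute interval, and all function names here are my assumptions, not code from the repo):

```python
def format_instance_status(instance_id, instance_status, system_status):
    """Render one printable status line for an instance."""
    return (f"Instance {instance_id}: instance status {instance_status}, "
            f"system status {system_status}")

def check_instance_statuses(ec2_client):
    """Fetch and print the status of every EC2 instance in the client's region."""
    response = ec2_client.describe_instance_status(IncludeAllInstances=True)
    for status in response["InstanceStatuses"]:
        print(format_instance_status(
            status["InstanceId"],
            status["InstanceStatus"]["Status"],
            status["SystemStatus"]["Status"],
        ))

def main():
    # Deferred imports: the helpers above stay usable without boto3,
    # schedule, or AWS credentials. Call main() only against a real account.
    import time
    import boto3
    import schedule

    ec2 = boto3.client("ec2", region_name="eu-west-3")
    schedule.every(5).minutes.do(check_instance_statuses, ec2)  # interval is an assumption
    while True:
        schedule.run_pending()
        time.sleep(1)
```

IncludeAllInstances=True makes describe_instance_status report stopped instances too, not just running ones.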

Adding Tags to EC2 Instances

  1. Provisioned 3 EC2 Instances in 2 AWS regions using Terraform: terraform init & terraform apply in both the eu-central-1 and eu-west-3 directories
  2. Wrote add-env-tags.py using Boto3 that gets all EC2 Instances in those 2 regions and adds a specific tag to them.
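
A minimal sketch of that tagging logic, assuming the tag is an environment tag whose value is the region name (the tag key/value and function names are my guesses, not the repo's actual code):

```python
def instance_ids(reservations):
    """Flatten the Reservations structure from describe_instances into instance IDs."""
    return [inst["InstanceId"]
            for reservation in reservations
            for inst in reservation["Instances"]]

def tag_instances(ec2_client, tag_key, tag_value):
    """Add one tag to every EC2 instance in the client's region; returns the tagged IDs."""
    ids = instance_ids(ec2_client.describe_instances()["Reservations"])
    if ids:
        ec2_client.create_tags(Resources=ids,
                               Tags=[{"Key": tag_key, "Value": tag_value}])
    return ids

def main():
    import boto3  # deferred; calling main() requires AWS credentials
    for region in ("eu-central-1", "eu-west-3"):
        ec2 = boto3.client("ec2", region_name=region)
        tagged = tag_instances(ec2, "environment", region)  # tag key/value are assumptions
        print(f"{region}: tagged {len(tagged)} instance(s)")
```

create_tags accepts a batch of resource IDs, so each region needs only one API call regardless of instance count.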

Displaying EKS Cluster Information

  1. Provisioned an EKS cluster using Terraform: terraform init & terraform apply in the root of the project
  2. Wrote eks-status-checks.py using Boto3 that fetches and prints the status, K8s version, and cluster endpoint of each EKS cluster in the region.
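
A hedged sketch of how such a script could pull those fields out of the EKS API (function names and output format are assumptions):

```python
def summarize_cluster(cluster):
    """Extract the fields the script prints from a describe_cluster response body."""
    return {
        "name": cluster["name"],
        "status": cluster["status"],
        "version": cluster["version"],
        "endpoint": cluster["endpoint"],
    }

def main():
    import boto3  # deferred; calling main() requires AWS credentials
    eks = boto3.client("eks")  # region comes from the default AWS config
    for name in eks.list_clusters()["clusters"]:
        info = summarize_cluster(eks.describe_cluster(name=name)["cluster"])
        print(f"{info['name']}: {info['status']}, "
              f"K8s {info['version']}, endpoint {info['endpoint']}")
```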

EC2 Data Backup and Restore

  1. Provisioned 3 EC2 Instances in an AWS region using Terraform (one of them with a Volume tagged prod): terraform init & terraform apply in the eu-central-1 directory
  2. Wrote volume-backups.py using Boto3 and Schedule that creates snapshots of EC2 Instance Volumes tagged with prod daily at a specified time.
  3. Wrote cleanup-snapshots.py using Boto3 that deletes all snapshots except the two most recent ones for each Volume tagged prod.
  4. Wrote restore-volume.py using Boto3 that creates a new Volume from the most recent snapshot of the Volume attached to a specific EC2 Instance and attaches the new Volume to that instance.
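
The retention rule behind the cleanup step can be sketched like this; the tag key (the repo only says the Volume is "tagged prod") and function names are assumptions:

```python
def snapshots_to_delete(snapshots, keep=2):
    """Return the snapshots to delete, keeping only the `keep` most recent by StartTime."""
    ordered = sorted(snapshots, key=lambda snap: snap["StartTime"], reverse=True)
    return ordered[keep:]

def main():
    import boto3  # deferred; calling main() requires AWS credentials
    ec2 = boto3.client("ec2", region_name="eu-central-1")
    # "tag:Name" with value "prod" is a guess at how the Volume is tagged.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:Name", "Values": ["prod"]}])["Volumes"]
    for volume in volumes:
        snapshots = ec2.describe_snapshots(
            OwnerIds=["self"],
            Filters=[{"Name": "volume-id", "Values": [volume["VolumeId"]]}],
        )["Snapshots"]
        for snapshot in snapshots_to_delete(snapshots):
            ec2.delete_snapshot(SnapshotId=snapshot["SnapshotId"])
```

Scoping describe_snapshots to OwnerIds=["self"] avoids listing public snapshots, and filtering by volume-id keeps the two-newest rule per Volume rather than globally.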

Website Monitoring and Recovery

  1. Created a server on Linode: Create Linode -> selected the closest region, the most recent Debian, a Shared CPU 2 GB plan, set a root password, and added an SSH key
  2. Created a Personal Access Token in Linode with Read/Write scope
  3. SSHed into the Linode server and installed Docker on it by following https://docs.docker.com/engine/install/debian
  4. Started an nginx container on the server: docker run -d -p 8080:80 nginx
    • application accessible on port 8080 via both the public IP address and the DNS hostname
  5. In my Gmail account (2FA-enabled), created an app-specific password at https://myaccount.google.com/apppasswords
  6. Wrote monitor-website.py using paramiko and linode-api4 that
    • checks whether an HTTP request to the server returns a 200 status code
    • if it doesn't, sends a notification email to a specified address
    • restarts the nginx container if the server returned a different status code
    • reboots the Linode server and then starts the container if the server does not respond at all
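
The three-way decision above can be sketched as follows; the hostname, environment-variable names, and Linode label are placeholders I invented, and the email notification step is elided:

```python
def recovery_action(status_code):
    """Map a probe result to an action.

    status_code is the HTTP status of the request, or None if it failed entirely.
    """
    if status_code == 200:
        return "ok"
    if status_code is None:
        return "reboot"   # no response at all: reboot the Linode, then start the container
    return "restart"      # responded with a non-200 code: restart the nginx container

def main():
    # Deferred imports; calling main() needs a reachable server, SSH access,
    # and a Linode Personal Access Token.
    import os
    import requests
    import paramiko
    from linode_api4 import LinodeClient

    host = os.environ["SERVER_HOST"]  # placeholder
    try:
        code = requests.get(f"http://{host}:8080", timeout=5).status_code
    except requests.RequestException:
        code = None

    action = recovery_action(code)
    if action == "restart":
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect(host, username="root", key_filename=os.environ["SSH_KEY_PATH"])
        ssh.exec_command("docker restart $(docker ps -aq --filter ancestor=nginx)")
        ssh.close()
    elif action == "reboot":
        client = LinodeClient(os.environ["LINODE_TOKEN"])
        server = next(i for i in client.linode.instances()
                      if i.label == "monitoring-server")  # label is an assumption
        server.reboot()
        # once the server is back up, SSH in again and `docker start` the container
```

Keeping the decision in a pure function makes the unhappy paths (restart vs. reboot) easy to test without touching SSH or the Linode API.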
