From dcf03a37478ff80c4015cd85fcb855e009f290f1 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 29 Oct 2018 23:49:36 -0400
Subject: [PATCH 01/64] Round 1 notes part 1
---
README.md | 47 +++++++++++++++++++++++++++++++++++++++++------
1 file changed, 41 insertions(+), 6 deletions(-)
diff --git a/README.md b/README.md
index 37d2d46..219f53e 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,43 @@
-# Distributed Systems Practice
-Notes from learning about distributed systems in [GW CS 6421](https://gwdistsys18.github.io/) with [Prof. Wood](https://faculty.cs.gwu.edu/timwood/)
+Big Data and Machine Learning (Beginner level + Intermediate Level)
-## Area 1
-> Include notes here about each of the links
+For Video: Hadoop Intro, it takes 35 minutes to learn. The video tutorial gives the basic ideas of the Hadoop framework. In recent years there has been an incredible explosion in the volume of data: IBM reported that 2.5 billion gigabytes of data were generated every day in 2012, and roughly 40,000 search queries are made on Google every second. After 2000, solutions that only used the computation power of the computers already available could not keep up, so we need computers with larger memories and faster processors, or other more advanced solutions. The idea of a distributed system is to use multiple computers to do the processing work, which gives much better performance. There are also challenges: there is a high chance of failure since a distributed system uses multiple computers, bandwidth is limited, and programming complexity is high because it is difficult to synchronize data and processes. The solution is Hadoop. Hadoop is a framework that allows for distributed processing of large data sets across clusters of commodity computers using simple programming models. The four key characteristics of Hadoop are that it is economical, scalable, reliable and flexible. Compared to a traditional DBMS, Hadoop distributes the data to multiple systems and later runs the computation wherever the data is located. Hadoop has an ecosystem which evolved from its three core components: data processing, resource management and the Hadoop distributed file system. It is now comprised of 12 components, including HDFS, HBase, Sqoop, Flume, Spark, Hadoop MapReduce, Pig, Impala, Hive, Cloudera Search, Oozie and Hue.
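+
+Just as a rough sketch of what this "simple programming model" looks like in practice (not part of the video): with a working Hadoop installation you can push a file into HDFS and run the bundled WordCount MapReduce example; the file names and jar path below are placeholders.
+
+```bash
+# Copy a local log file into HDFS (paths are placeholders)
+hdfs dfs -mkdir -p /user/hadoop/input
+hdfs dfs -put access.log /user/hadoop/input/
+# Run the stock WordCount MapReduce job that ships with Hadoop
+hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
+  wordcount /user/hadoop/input /user/hadoop/output
+# Inspect the first few lines of the reducer output
+hdfs dfs -cat /user/hadoop/output/part-r-00000 | head
+```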
-## Area 2
-> Include notes here about each of the links
+For QwikLab: Analyze Big Data with Hadoop, it takes me more than one hour to learn and write up a summary. I learned how to create an Amazon S3 bucket to store my log files and output data, launch a fully functional Hadoop cluster using Amazon EMR, define the schema and create a table for sample log data stored in Amazon S3, analyze the data using a HiveQL script, and write the results back to Amazon S3. It is interesting to learn.
+
+For QwikLab: Intro to S3, it takes 50 minutes. In this lab, I learned how to:
+• Create a bucket in the Amazon S3 service
+• Add an object (for example, a picture) to the bucket
+• Manage access permissions on an object: change it from private to public and see the difference in access
+• Create a bucket policy by using the AWS policy generator, which requires the Amazon Resource Name
+• Use bucket versioning to access pictures that have the same name but were uploaded at different times by changing the bucket policy
+The bucket is a really useful service and the versioning feature is quite cool.
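+
+As a rough CLI companion to the console steps above (not part of the lab; the bucket and file names are placeholders), the same actions can be done with the AWS CLI:
+
+```bash
+# Create a bucket and upload an object
+aws s3 mb s3://my-lab-bucket-12345
+aws s3 cp picture.jpg s3://my-lab-bucket-12345/
+# Change the object's permissions from private to public
+aws s3api put-object-acl --bucket my-lab-bucket-12345 --key picture.jpg --acl public-read
+# Turn on bucket versioning so re-uploads of the same key keep older versions
+aws s3api put-bucket-versioning --bucket my-lab-bucket-12345 \
+  --versioning-configuration Status=Enabled
+```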
+
+For QwikLab: Intro to Amazon Redshift, it takes me 60 minutes. This lab covers how to:
+• Launch a Redshift cluster: a cluster is a fully managed data warehouse that consists of a set of compute nodes; when launching a cluster, you have to specify the node type, which determines the CPU, RAM, storage capacity and storage drive type.
+• Connect an SQL client called Pgweb to the Amazon Redshift cluster: we can write and run queries in Pgweb and also view the database information and structure.
+• Load sample data from an S3 bucket into the Amazon Redshift cluster, which will hold the data for querying.
+• Run queries against data stored in Amazon Redshift: we can use SQL to query the data we need.
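+
+Outside of Pgweb, any PostgreSQL-compatible client can talk to Redshift; a minimal sketch (the cluster endpoint, database, user and the usual port 5439 below are placeholders, not values from the lab):
+
+```bash
+# Connect with the standard psql client and run an ad-hoc query
+psql -h my-cluster.abc123xyz0.us-east-1.redshift.amazonaws.com -p 5439 -d dev -U awsuser
+# then, inside the psql prompt, for example:
+#   SELECT COUNT(*) FROM users;
+```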
+
+
+In regard to Video: Short AWS Machine Learning Overview, it takes me 10 minutes. It talks about machine learning on AWS, which has three layers: framework interfaces for experts, ML platforms for developers and data scientists, and application services for machine learning API calls in applications. The AWS Deep Learning AMI belongs to the frameworks layer, and Zillow uses it. Amazon SageMaker is a good fit for the ML platform layer.
+
+For Video Tutorial: Overview of AWS SageMaker, it takes me 35 minutes. AWS SageMaker has four parts: notebook instances, jobs, models and endpoints. A notebook instance is about using algorithms to create models via training jobs. Training jobs are instances that train the model. We create models for hosting from job outputs, or import externally trained models into Amazon SageMaker. Endpoints are how developers use SageMaker in production. The tutor elaborates on XGBoost, k-means and scikit-learn, and talks about setting up the training parameters. We can train on a single instance or on multiple instances. Then we import models into hosting. The last step is to build an endpoint configuration and create an endpoint for developers to call.
+
+For AWS Tutorial: Analyze Big Data with Hadoop, it takes me 80 minutes. I followed the following steps to finish the tutorial:
+• Step 1: Set Up Prerequisites: you have to have a personal AWS account; create an Amazon S3 Bucket and folder to store the output data from a Hive query; create an Amazon EC2 Key Pair to connect to the nodes in your cluster over a secure channel using the Secure Shell (SSH) protocol.
+• Step 2: Launch The Cluster: user launches sample Amazon EMR cluster by using Quick Options in the Amazon EMR console and leaving most options to their default values; Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects, such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. Additionally, you can use Amazon EMR to transform and move large amounts of data into and out of other AWS data stores and databases, such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB.
+• Step 3: Allow SSH Connections to the Cluster From Your Client: Security groups act as virtual firewalls to control inbound and outbound traffic to your cluster. The default Amazon EMR-managed security groups associated with cluster instances do not allow inbound SSH connections as a security precaution. To connect to cluster nodes using SSH so that you can use the command line and view web interfaces that are hosted on the cluster, you need to add inbound rules that allow SSH traffic from trusted clients.
+• Step 4: Run a Hive Script to Process Data: the sample data is a series of Amazon CloudFront access log files; the sample script calculates the total number of requests per operating system over a specified time frame. The script uses HiveQL, which is a SQL-like scripting language for data warehousing and analysis.
+• Step 5: Terminate the resources you do not need to keep: terminating your cluster terminates the associated Amazon EC2 instances and stops the accrual of Amazon EMR charges. Amazon EMR preserves metadata information about completed clusters for your reference, at no charge, for two months. The console does not provide a way to delete terminated clusters so that they aren't viewable in the console; terminated clusters are removed from the console when the metadata is removed.
+There is more information on how to plan and configure clusters in a custom way, set up security, manage clusters, and troubleshoot a cluster if it is not performing correctly.
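+
+For reference, a hedged sketch of launching and terminating a similar cluster from the AWS CLI instead of the Quick Options wizard (the release label, key pair name and cluster ID are placeholders):
+
+```bash
+# Launch a small EMR cluster with Hadoop and Hive installed
+aws emr create-cluster \
+  --name "Hadoop tutorial cluster" \
+  --release-label emr-5.20.0 \
+  --applications Name=Hadoop Name=Hive \
+  --instance-type m4.large --instance-count 3 \
+  --use-default-roles \
+  --ec2-attributes KeyName=MyKeyPair
+# Terminate it when finished (Step 5) to stop EMR charges
+aws emr terminate-clusters --cluster-ids j-XXXXXXXXXXXXX
+```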
+
+For QwikLab: Intro to Amazon Machine Learning, it takes me 75 minutes. The lab tutorial consists of several parts:
+• Part 1 - Upload training data: we put restaurant customer review data into an Amazon S3 bucket and save it for analysis
+• Part 2 - Create a datasource: configure Amazon ML to use the restaurant data set; we set the customer review data as the data source for the Amazon ML model
+• Part 3 - Create an ML Model from the Datasource: we use the datasource to train and validate the model created in this part; the datasource also contains metadata, such as the column data types and the target variable, which will also be used by the model algorithm; the ML modeling process takes 5 to 10 minutes to complete and we can follow its progress in the message section
+• Part 4 - Evaluate an ML model: the Amazon Machine Learning service evaluates the model automatically as part of the model creation process; it takes 70 percent of the data source to train the model and 30 percent to evaluate it
+• Part 5 - Generate predictions from the ML model: batch mode and real-time mode are two ways to generate predictions from an ML model; batch mode is asynchronous, while real-time mode returns each prediction immediately
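+
+A sketch of the two prediction modes from the AWS CLI, assuming an already trained model (the IDs, endpoint URL and record fields are placeholders, not values from the lab):
+
+```bash
+# Batch mode: asynchronous, writes predictions for a whole datasource to S3
+aws machinelearning create-batch-prediction \
+  --batch-prediction-id bp-example \
+  --ml-model-id ml-exampleModelId \
+  --batch-prediction-data-source-id ds-exampleDataSourceId \
+  --output-uri s3://my-bucket/predictions/
+# Real-time mode: one record at a time against a real-time endpoint
+aws machinelearning predict \
+  --ml-model-id ml-exampleModelId \
+  --record '{"review": "The food was great and the service was friendly"}' \
+  --predict-endpoint https://realtime.machinelearning.us-east-1.amazonaws.com
+```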
+
+For AWS Tutorial: Build a Machine Learning Model, it takes me 50 minutes. It is about using Amazon ML to predict responses to a marketing offer:
+• Step 1: Prepare Your Data: in machine learning, you typically obtain the data and ensure that it is well formatted before starting the training process; we use customer purchase history to predict whether a customer will subscribe to a new product
+• Step 2: Create a Training Datasource
From ade52dec75ebc697b00a931974390522b553cdc3 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Tue, 30 Oct 2018 00:00:33 -0400
Subject: [PATCH 02/64] Round 1
---
README.md | 25 ++++++++++++++++++++++++-
1 file changed, 24 insertions(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 219f53e..a40459c 100644
--- a/README.md
+++ b/README.md
@@ -40,4 +40,27 @@ For QwikLab: Intro to Amazon Machine Learning, it takes me 75 minutes. The lab t
For AWS Tutorial: Build a Machine Learning Model, it takes me 50 minutes. It is about using Amazon ML to Predict Responses to a Marketing Offer:
• Step 1: Prepare Your Data: In machine learning, you typically obtain the data and ensure that it is well formatted before starting the training process; we use customer purchase history to predict if this customer will subscribe to my new product
-• Step 2: Create a Training Datasource
+• Step 2: Create a Training Datasource using the Amazon S3 service
+• Step 3: Create an ML Model: After you've created the training datasource, you use it to create an ML model, train the model, and then evaluate the results
+• Step 4: Review the ML Model's Predictive Performance and Set a Score Threshold
+• Step 5: Use the ML Model to Generate Predictions
+
+For Video Tutorial: Overview of AWS SageMaker, it takes me 40 minutes. AWS SageMaker has four parts: notebook instances, jobs, models and endpoints. A notebook instance is about using algorithms to create models via training jobs. Training jobs are instances that train the model. We create models for hosting from job outputs, or import externally trained models into Amazon SageMaker. Endpoints are how developers use SageMaker in production. The tutor elaborates on XGBoost, k-means and scikit-learn, and talks about setting up the training parameters. We can train on a single instance or on multiple instances. Then we import models into hosting. The last step is to build an endpoint configuration and create an endpoint for developers to call.
+For AWS Tutorial: AWS SageMaker, it takes me 80 minutes.
+Step 1: Setting Up
+Step 2: Create an Amazon SageMaker Notebook Instance
+Step 3: Train and Deploy a Model
+Step 4: Clean up
+Step 5: Additional Considerations
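+
+As a sketch of Steps 2 and 4 using the AWS CLI rather than the console (the instance name and role ARN are placeholders):
+
+```bash
+# Step 2: create a notebook instance backed by an execution role
+aws sagemaker create-notebook-instance \
+  --notebook-instance-name my-notebook \
+  --instance-type ml.t2.medium \
+  --role-arn arn:aws:iam::123456789012:role/MySageMakerExecutionRole
+aws sagemaker describe-notebook-instance --notebook-instance-name my-notebook
+# Step 4: clean up to stop charges
+aws sagemaker stop-notebook-instance --notebook-instance-name my-notebook
+aws sagemaker delete-notebook-instance --notebook-instance-name my-notebook
+```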
+
+For Build a Serverless Real-Time Data Processing App, it takes 150 minutes.
+
+Cloud web application
+For QwikLab: Intro to S3, it takes 50 minutes. In this lab, I learned how to:
+• Create a bucket in the Amazon S3 service
+• Add an object (for example, a picture) to the bucket
+• Manage access permissions on an object: change it from private to public and see the difference in access
+• Create a bucket policy by using the AWS policy generator, which requires the Amazon Resource Name
+• Use bucket versioning to access pictures that have the same name but were uploaded at different times by changing the bucket policy
+The bucket is a really useful service and the versioning feature is quite cool.
+
From 8681573460d93a2ad5ae4d5ce11b587a51a71244 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Sun, 9 Dec 2018 22:16:28 -0500
Subject: [PATCH 03/64] Update README.md
---
README.md | 939 ++++++++++++++++++++++++++++++++++++++++++++++++++----
1 file changed, 874 insertions(+), 65 deletions(-)
diff --git a/README.md b/README.md
index a40459c..c86e0e6 100644
--- a/README.md
+++ b/README.md
@@ -1,66 +1,875 @@
-Big Data and Machine Learning (Beginner level + Intermediate Level)
-
-For Video: Hadoop Intro, it takes 35 minutes to learn it. The video tutorial gives basic ideas of Hadoop framework. After 2000, the solution which uses the computation power provided by available computers to process data could not help. In recent years, there is an incredible explosion in the volume of data. IBM reported that 2.5 billion gigabytes of data was generated every day in 2012. 40000 search queries were done on Google every second. Therefore, we need computers with larger memories and faster processors or other more advanced solutions. The idea distributed system is using multiple computers to do the processing work which has much better performance. There are also challenges for this. There are high chances of failure since a distributed system uses multiple computers. There is also limit on bandwidth. Because it is difficult to synchronize data and process, the programming complexity is also high. The solution is Hadoop. Hadoop is a framework that allows for distributed processing of large data sets across clusters of commodity computers using simple programming models. The four key characters of Hadoop are economical, scalable, reliable and flexible. Compared to traditional DBMS, Hadoop distributes the data to multiple systems and later runs the computation wherever the data is located. The Hadoop has an ecosystem which is evolved from its three core components, data processing, resource management and Hadoop distributed file system. It is now comprised of 12 components including Hadoop distributed file system, HBase, scoop, flume, spark, Hadoop MapReduce, Pig, Impala, Hive, Cloudera Search, Oozie, Hue.
-
-For QwikLab: Analyze Big Data with Hadoop, it takes me more than one hour to learn and write up a summary. I have acquired how to create Amazon S3 bucket store my log files and output data, Launch a fully functional Hadoop cluster using Amazon EMR, define the schema, create a table for sample log data stored in Amazon S3, analyze the data using a HiveQL script and write the results back to Amazon S3. It is interesting to learn.
-
-For QwikLab: Intro to S3, it takes 50 minutes. In this lab, I learned:
-• Create a bucket in Amazon S3 service
-• Add an object for example a picture to the bucket
-• Manage access permissions on an object: change from private to public and see the access difference
-• Create a bucket policy by using the AWS policy generator which require the Amazon Resource Name.
-• Use bucket versioning to get access the picture with same name but uploaded at different time by changing the bucket policy
-The bucket is a really useful service and the versioning feature is quite cool.
-
-For QwikLab: Intro to Amazon Redshift, it takes me 60 minutes. In this lab, it covers
-• Launch a Redshift cluster: a cluster is a fully managed data warehouse that consists of a set of compute nodes; when launching a cluster, you have to specify the node type which determines the CPU, RAM, storage capacity and storage drive type.
-• Connect an SQL client called Pgweb to the Amazon Redshift cluster: we can write and run queries in Pgweb and also view the database information and structure.
-• Load sample data from an S3 bucket into the Amazon Redshift cluster which will hold the data for querying.
-• Run queries against data stored in Amazon Redshift: we could use SQL to query the data we need.
-
-
-In regard to Video: Short AWS Machine Learning Overview, it takes me 10 minutes. it talks about the Machine learning on AWS. Machine learning has three layers, framework interfaces for expert, ML platforms for developers and data scientists and application services for machine learning API calls in the application. Amazon Deep Learning AMI is for the frameworks layer and Zillow uses it. Amazon SageMaker is a good for ML platform layer.
-
-For Video Tutorial: Overview of AWS SageMaker, it takes me 35 minutes. The AWS SageMaker has four parts, including the notebook instance, jobs, models and endpoints. Notebook instance is about using algorithms to create model via training jobs. Training jobs are instances to train the model. We create models for hosting from job outputs, or import externally trained models into Amazon SageMaker. Endpoints are for developers to use the SageMaker in production. The tutor elaborate on xgboost, kmeans, scikit . He talks about setting up the training parameters. We can train it on single or multiple instances. Then we import models into hosts. The last step is build endpoint configuration and create endpoint for developers to call.
-
-For AWS Tutorial: Analyze Big Data with Hadoop, it takes me 80 minutes. I followed the following steps to finish the tutorial:
-• Step 1: Set Up Prerequisites: you have to have a personal AWS account; create an Amazon S3 Bucket and folder to store the output data from a Hive query; create an Amazon EC2 Key Pair to to connect to the nodes in your cluster over a secure channel using the Secure Shell (SSH) protocol.
-• Step 2: Launch The Cluster: user launches sample Amazon EMR cluster by using Quick Options in the Amazon EMR console and leaving most options to their default values; Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects, such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. Additionally, you can use Amazon EMR to transform and move large amounts of data into and out of other AWS data stores and databases, such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB.
-• Step 3: Allow SSH Connections to the Cluster From Your Client: Security groups act as virtual firewalls to control inbound and outbound traffic to your cluster. The default Amazon EMR-managed security groups associated with cluster instances do not allow inbound SSH connections as a security precaution. To connect to cluster nodes using SSH so that you can use the command line and view web interfaces that are hosted on the cluster, you need to add inbound rules that allow SSH traffic from trusted clients.
-• Step 4: Run a Hive Script to Process Data: The sample data is a series of Amazon CloudFront access log files; The sample script calculates the total number of requests per operating system over a specified time frame. The script uses HiveQL, which is a SQL-like scripting language for data warehousing and analysis
-• Step 5: Terminate the resources you do not need to save for the future: terminating your cluster terminates the associated Amazon EC2 instances and stops the accrual of Amazon EMR charges. Amazon EMR preserves metadata information about completed clusters for your reference, at no charge, for two months. The console does not provide a way to delete terminated clusters so that they aren't viewable in the console. Terminated clusters are removed from the cluster when the metadata is removed
-There is more information on how to plan and configure clusters in your custom way, set up the security, manage clusters and trouble shoot cluster if it is performing in a wrong way.
-
-For QwikLab: Intro to Amazon Machine Learning, it takes me 75 minutes. The lab tutorial consists of several parts:
-• Part 1- Upload training data : we put restaurant customer reviews data into Amazon S3 bucket and save it for analyzing
-• Part2- Create a datasource: configure Amazon ML to use the restaurant data set; we set customer review data as the data source for Amazon ML model
-• Part3- Create an ML Model from the Datasource: we will use data source to train and validate the model created in this part; the data source also contains metadata, such as the column data types and target variable which will also be used by the model algorithm; the ML modeling process will take 5 to 10 minutes to complete and we can see that in message section
-• Evaluate an ML model: the Amazon Machine Learning service evaluate the model automatically as part of the model creation process; it takes 70 percent of the data source to train the model and 30 percent to evaluate it.
-• Generate predictions from ML model: batch mode and real-time mode are two ways to generate predictions from ML model; batch mode is asynchronous while the real-time mode is real time.
-
-For AWS Tutorial: Build a Machine Learning Model, it takes me 50 minutes. It is about using Amazon ML to Predict Responses to a Marketing Offer:
-• Step 1: Prepare Your Data: In machine learning, you typically obtain the data and ensure that it is well formatted before starting the training process; we use customer purchase history to predict if this customer will subscribe to my new product
-• Step 2: Create a Training Datasource using the Amazon S3 service
-• Step 3: Create an ML Model: After you've created the training datasource, you use it to create an ML model, train the model, and then evaluate the results
-• Step 4: Review the ML Model's Predictive Performance and Set a Score Threshold
-• Step 5: Use the ML Model to Generate Predictions
-
-For Video Tutorial: Overview of AWS SageMaker, it takes me 40 minutes: The AWS SageMaker has four parts, including the notebook instance, jobs, models and endpoints. Notebook instance is about using algorithms to create model via training jobs. Training jobs are instances to train the model. We create models for hosting from job outputs, or import externally trained models into Amazon SageMaker. Endpoints are for developers to use the SageMaker in production. The tutor elaborate on xgboost, kmeans, scikit . He talks about setting up the training parameters. We can train it on single or multiple instances. Then we import models into hosts. The last step is build endpoint configuration and create endpoint for developers to call
-For AWS Tutorial: AWS SageMaker, it takes me 80 minutes.
-Step 1: Setting Up
-Step 2: Create an Amazon SageMaker Notebook Instance
-Step 3: Train and Deploy a Model
-Step 4: Clean up
-Step 5: Additional Considerations
-
-For Build a Serverless Real-Time Data Processing App, it takes 150 minutes,
-
-Cloud web application
-For QwikLab: Intro to S3, it takes 50 minutes. In this lab, I learned:
-• Create a bucket in Amazon S3 service
-• Add an object for example a picture to the bucket
-• Manage access permissions on an object: change from private to public and see the access difference
-• Create a bucket policy by using the AWS policy generator which require the Amazon Resource Name.
-• Use bucket versioning to get access the picture with same name but uploaded at different time by changing the bucket policy
-The bucket is a really useful service and the versioning feature is quite cool.
+[AWS Tutorial: Launch a VM](https://aws.amazon.com/getting-started/tutorials/launch-a-virtual-machine/)
+Time Spent: 40 min
+### Step 1. Sign-up for AWS
+You can use your own personal account to register, and you can also choose to set up an IAM user for better management.
+### Step 2. Launch an Amazon EC2 Instance
+a. Enter the Amazon EC2 Console
+Open the AWS Management Console, so you can keep this step-by-step guide open. When the screen loads, enter your user name and password to get started. Then type EC2 in the search bar and select Amazon EC2 to open the service console.
+
+b. Launch an Instance
+Select Launch Instance to create and configure your virtual machine.
+
+
+### Step 3. Configure your Instance
+You are now in the EC2 Launch Instance Wizard, which will help you configure and launch your instance.
+a. In this screen, you are shown options to choose an Amazon Machine Image (AMI). AMIs are preconfigured server templates you can use to launch an instance. Each AMI includes an operating system, and can also include applications and application servers. For this tutorial, find Amazon Linux AMI and click Select.
+
+b. You will now choose an instance type. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity so you can choose the appropriate mix for your applications.
+The default option of t2.micro should already be checked. This instance type is covered within the Free Tier and offers enough compute capacity to tackle simple workloads. Click Review and Launch at the bottom of the page.
+c. You can review the configuration, storage, tagging, and security settings that have been selected for your instance. While you have the option to customize these settings, we recommend accepting the default values for this tutorial.
+Click Launch at the bottom of the page.
+
+d. On the next screen you will be asked to choose an existing key pair or create a new key pair. A key pair is used to securely access your Linux instance using SSH. AWS stores the public part of the key pair which is just like a house lock. You download and use the private part of the key pair which is just like a house key.
+Select Create a new key pair and give it the name MyKeyPair. Next click the Download Key Pair button.
+After you download the MyKeyPair key, you will want to store your key in a secure location. If you lose your key, you won't be able to access your instance. If someone else gets access to your key, they will be able to access your instance.
+Windows users: We recommend saving your key pair in your user directory in a sub-directory called .ssh (ex. C:\user\{yourusername}\.ssh\MyKeyPair.pem).
+Tip: You can't use Windows Explorer to create a folder with a name that begins with a period unless you also end the folder name with a period. After you enter the name (.ssh.), the final period is removed automatically.
+Mac/Linux users: We recommend saving your key pair in the .ssh sub-directory from your home directory (ex. ~/.ssh/MyKeyPair.pem).
+Tip: On MacOS, the key pair is downloaded to your Downloads directory by default. To move the key pair into the .ssh sub-directory, enter the following command in a terminal window: mv ~/Downloads/MyKeyPair.pem ~/.ssh/MyKeyPair.pem
+After you have stored your key pair, click Launch Instance to start your Linux instance.
+e. Click View Instances on the next screen to view your instances and see the status of the instance you have just started.
+f. In a few minutes, the Instance State column on your instance will change to "running" and a Public IP address will be shown. You can refresh these Instance State columns by pressing the refresh button on the right just above the table. Copy the Public IP address of your AWS instance, so you can use it when we connect to the instance using SSH in Step 4.
+
+
+### Step 4. Connect to your Instance
+After launching your instance, it's time to connect to it using SSH.
+These notes follow the Mac/Linux path; the tutorial page has a separate tab with instructions for Windows users.
+a. Your Mac or Linux computer most likely includes an SSH client by default. You can check for an SSH client by typing ssh at the command line. If your computer doesn't recognize the command, the OpenSSH project provides a free implementation of the full suite of SSH tools that you can download.
+Mac and Linux users: open a terminal window.
+b. Use the chmod command to make sure your private key file is not publicly viewable by entering the following command to restrict permissions to your private SSH key:
+chmod 400 ~/.ssh/MyKeyPair.pem
+You do not need to do this every time you connect to your instance; you only need to set it once per SSH key that you have.
+c. Use SSH to connect to your instance. In this case the user name is ec2-user, the SSH key is stored in the directory we saved it to in step 3 part d, and the IP address is from step 3 part f. The format is:
+ssh -i {full path of your .pem file} ec2-user@{instance IP address}
+Enter the following:
+ssh -i ~/.ssh/MyKeyPair.pem ec2-user@{IP_Address}
+Example: ssh -i ~/.ssh/MyKeyPair.pem ec2-user@52.27.212.125
+You'll see a response similar to the following:
+The authenticity of host 'ec2-198-51-100-1.compute-1.amazonaws.com (10.254.142.33)' can't be established. RSA key fingerprint is 1f:51:ae:28:df:63:e9:d8:cf:38:5d:87:2d:7b:b8:ca:9f:f5:b1:6f. Are you sure you want to continue connecting (yes/no)?
+Type yes and press enter.
+d. You'll see a response similar to the following:
+Warning: Permanently added 'ec2-198-51-100-1.compute-1.amazonaws.com' (RSA) to the list of known hosts.
+You should then see the welcome screen for your instance and you are now connected to your AWS Linux virtual machine in the cloud.
+
+### Step 5. Terminate Your Instance
+You can easily terminate the instance from the EC2 console. In fact, it is a best practice to terminate instances you are no longer using so you don’t keep getting charged for them.
+a. Back on the EC2 Console, select the box next to the instance you created. Then click the Actions button, navigate to Instance State, and click Terminate.
+b. You will be asked to confirm your termination - select Yes, Terminate.
+Note: This process can take several seconds to complete. Once your instance has been terminated, the Instance State will change to terminated on your EC2 Console.
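+
+The whole launch/terminate flow can also be driven from the AWS CLI; a minimal sketch (the AMI ID, key pair name and instance ID below are placeholders):
+
+```bash
+# Launch one Free Tier t2.micro instance from an Amazon Linux AMI
+aws ec2 run-instances \
+  --image-id ami-0abcdef1234567890 \
+  --instance-type t2.micro \
+  --key-name MyKeyPair \
+  --count 1
+# Check state and public IP (Step 3f)
+aws ec2 describe-instances \
+  --query 'Reservations[].Instances[].{Id:InstanceId,State:State.Name,Ip:PublicIpAddress}'
+# Terminate the instance when done (Step 5)
+aws ec2 terminate-instances --instance-ids i-0abcdef1234567890
+```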
+
+[Video: Virtualization](https://www.youtube.com/watch?v=GIdVRB5yNsk)
+
+Cloud computing is booming, so we need virtualization to meet the demand. Virtualization first emerged in the 1970s and was brought out by IBM, since there were different computers running different systems.
+
+It started with VMware, when a group of students at Stanford wanted to do software emulation of virtual machines. We usually have two levels of privilege when running computer software: the operating system and our applications. The OS part runs in ring 0 and the application part runs in ring 3.
+
+Since the x86 processor is not easy to virtualize purely in software, Xen was developed. It is a hypervisor using a microkernel design, providing services that allow multiple operating systems to execute on the same computer hardware concurrently. But it has some drawbacks, like high overhead, which means that an instruction executed through the hypervisor turns into more instructions on the underlying system.
+
+Intel realized it had to support virtualization in the processor itself, so the VT technology was developed and released in nearly all Intel processors.
+
+In summary, cloud computing companies use virtualization of processors and other hardware resources to rent them out to customers and give back the results the customers want.
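+
+A quick way to see the hardware virtualization support discussed above on a Linux machine, as a small aside to the video:
+
+```bash
+# vmx = Intel VT-x, svm = AMD-V; a count greater than 0 means the CPU advertises hardware virtualization
+egrep -c '(vmx|svm)' /proc/cpuinfo
+# List loaded KVM modules, if any (prints nothing when KVM is not in use)
+lsmod | grep kvm
+```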
+
+[AWS Tutorial: Install a LAMP Web Server on Amazon Linux 2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-lamp-amazon-linux-2.html)
+Time: 80 minutes
+
+### Step 1: Prepare the LAMP Server
+Prerequisites:
+* Create an IAM User:
+https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/get-set-up-for-amazon-ec2.html#create-an-iam-user
+
+
+
+* Create a Key Pair:
+Amazon EC2 uses public–key cryptography to encrypt and decrypt login information. Public–key cryptography uses a public key to encrypt a piece of data, such as a password, then the recipient uses the private key to decrypt the data. The public and private keys are known as a key pair.
+
+
+* Create a Virtual Private Cloud (VPC):
+Amazon VPC enables you to launch AWS resources into a virtual network that you've defined, known as a virtual private cloud (VPC). The newer EC2 instance types require that you launch your instances in a VPC
+
+
+
+* Create a Security Group:
+Security groups act as a firewall for associated instances, controlling both inbound and outbound traffic at the instance level. You must add rules to a security group that enable you to connect to your instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere.
+
+* Launch an Instance
+1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
+2. From the console dashboard, choose Launch Instance.
+3. The Choose an Amazon Machine Image (AMI) page displays a list of basic configurations, called Amazon Machine Images (AMIs), that serve as templates for your instance. Select an HVM version of Amazon Linux 2. Notice that these AMIs are marked "Free tier eligible."
+4. On the Choose an Instance Type page, you can select the hardware configuration of your instance. Select the t2.micro type, which is selected by default. Notice that this instance type is eligible for the free tier.
+5. Choose Review and Launch to let the wizard complete the other configuration settings for you.
+6. On the Review Instance Launch page, under Security Groups, you'll see that the wizard created and selected a security group for you. You can use this security group, or alternatively you can select the security group that you created when getting set up using the following steps:
+a. Choose Edit security groups.
+b. On the Configure Security Group page, ensure that Select an existing security group is selected.
+c. Select your security group from the list of existing security groups, and then choose Review and Launch.
+7. On the Review Instance Launch page, choose Launch.
+8. When prompted for a key pair, select Choose an existing key pair, then select the key pair that you created when getting set up.
+Alternatively, you can create a new key pair. Select Create a new key pair, enter a name for the key pair, and then choose Download Key Pair. This is the only chance for you to save the private key file, so be sure to download it. Save the private key file in a safe place. You'll need to provide the name of your key pair when you launch an instance and the corresponding private key each time you connect to the instance.
+Warning
+Don't select the Proceed without a key pair option. If you launch your instance without a key pair, then you can't connect to it.
+When you are ready, select the acknowledgement check box, and then choose Launch Instances.
+9. A confirmation page lets you know that your instance is launching. Choose View Instances to close the confirmation page and return to the console.
+10. On the Instances screen, you can view the status of the launch. It takes a short time for an instance to launch. When you launch an instance, its initial state is pending. After the instance starts, its state changes to running and it receives a public DNS name. (If the Public DNS (IPv4) column is hidden, choose Show/Hide Columns (the gear-shaped icon) in the top right corner of the page and then select Public DNS (IPv4).) Note: if you use a VPC for the security group, you have to allocate an Elastic IP address and associate it with the instance to get a public DNS name; otherwise you do not get one automatically.
+11. It can take a few minutes for the instance to be ready so that you can connect to it. Check that your instance has passed its status checks; you can view this information in the Status Checks column.
+
+* Now that we are done with the prerequisites, we are going to prepare the LAMP server
+
+1. Connect to my instance using SSH
+Before you connect to your Linux instance, complete the following prerequisites:
+• Install an SSH client
+Your Linux computer most likely includes an SSH client by default. You can check for an SSH client by typing ssh at the command line. If your computer doesn't recognize the command, the OpenSSH project provides a free implementation of the full suite of SSH tools. For more information, see http://www.openssh.com.
+• Install the AWS CLI Tools
+(Optional) If you're using a public AMI from a third party, you can use the command line tools to verify the fingerprint. For more information about installing the AWS CLI, see Getting Set Up in the AWS Command Line Interface User Guide.
+• Get the ID of the instance
+You can get the ID of your instance using the Amazon EC2 console (from the Instance ID column). If you prefer, you can use the describe-instances (AWS CLI) or Get-EC2Instance (AWS Tools for Windows PowerShell) command.
+• Get the public DNS name of the instance
+You can get the public DNS for your instance using the Amazon EC2 console. Check the Public DNS (IPv4) column. If this column is hidden, choose the Show/Hide icon and select Public DNS (IPv4). If you prefer, you can use the describe-instances (AWS CLI) or Get-EC2Instance (AWS Tools for Windows PowerShell) command.
+• (IPv6 only) Get the IPv6 address of the instance
+If you've assigned an IPv6 address to your instance, you can optionally connect to the instance using its IPv6 address instead of a public IPv4 address or public IPv4 DNS hostname. Your local computer must have an IPv6 address and must be configured to use IPv6. You can get the IPv6 address of your instance using the Amazon EC2 console. Check the IPv6 IPs field. If you prefer, you can use the describe-instances (AWS CLI) or Get-EC2Instance(AWS Tools for Windows PowerShell) command. For more information about IPv6, see IPv6 Addresses.
+• Locate the private key and verify permissions
+Get the fully-qualified path to the location on your computer of the .pem file for the key pair that you specified when you launched the instance. Verify that the .pem file has permissions of 0400, not 0777. For more information, see Error: Unprotected Private Key File.
+• Get the default user name for the AMI that you used to launch your instance
+o For Amazon Linux 2 or the Amazon Linux AMI, the user name is ec2-user.
+o For a Centos AMI, the user name is centos.
+o For a Debian AMI, the user name is admin or root.
+o For a Fedora AMI, the user name is ec2-user or fedora.
+o For a RHEL AMI, the user name is ec2-user or root.
+o For a SUSE AMI, the user name is ec2-user or root.
+o For an Ubuntu AMI, the user name is ubuntu.
+o Otherwise, if ec2-user and root don't work, check with the AMI provider.
+• Enable inbound SSH traffic from your IP address to your instance
+Ensure that the security group associated with your instance allows incoming SSH traffic from your IP address. The default security group for the VPC does not allow incoming SSH traffic by default. The security group created by the launch wizard enables SSH traffic by default. For more information, see Authorizing Inbound Traffic for Your Linux Instances.
+
+1) (Optional) You can verify the RSA key fingerprint on your running instance by using one of the following commands on your local system (not on the instance). This is useful if you've launched your instance from a public AMI from a third party. Locate the SSH HOST KEY FINGERPRINTS section, and note the RSA fingerprint (for example, 1f:51:ae:28:bf:89:e9:d8:1f:25:5d:37:2d:7d:b8:ca:9f:f5:f1:6f) and compare it to the fingerprint of the instance.
+aws ec2 get-console-output --instance-id instance_id
+2) In a command-line shell, change directories to the location of the private key file that you created when you launched the instance.
+
+3) Use the following command to set the permissions of your private key file so that only you can read it.
+chmod 400 /path/my-key-pair.pem
+4) Use the ssh command to connect to the instance. You specify the private key (.pem) file and user_name@public_dns_name. For example, if you used Amazon Linux 2 or the Amazon Linux AMI, the user name is ec2-user.
+ssh -i /path/my-key-pair.pem ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com
+5) Enter yes.
+You see a response like the following:
+Warning: Permanently added 'ec2-***-**-***-*.compute-1.amazonaws.com' (RSA)
+to the list of known hosts.
+2. To ensure that all of your software packages are up to date, perform a quick software update on your instance. This process may take a few minutes, but it is important to make sure that you have the latest security updates and bug fixes.
+The -y option installs the updates without asking for confirmation. If you would like to examine the updates before installing, you can omit this option.
+[ec2-user ~]$ sudo yum update -y
+
+3. Install the lamp-mariadb10.2-php7.2 and php7.2 Amazon Linux Extras repositories to get the latest versions of the LAMP MariaDB and PHP packages for Amazon Linux 2.
+[ec2-user ~]$ sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2
+Note
+If you receive an error stating sudo: amazon-linux-extras: command not found, then your instance was not launched with an Amazon Linux 2 AMI (perhaps you are using the Amazon Linux AMI instead). You can view your version of Amazon Linux with the following command.
+cat /etc/system-release
+4. Now that your instance is current, you can install the Apache web server, MariaDB, and PHP software packages.
+Use the yum install command to install multiple software packages and all related dependencies at the same time.
+[ec2-user ~]$ sudo yum install -y httpd mariadb-server
+Note
+You can view the current versions of these packages with the following command:
+yum info package_name
+5. Start the Apache web server.
+[ec2-user ~]$ sudo systemctl start httpd
+6. Use the systemctl command to configure the Apache web server to start at each system boot.
+[ec2-user ~]$ sudo systemctl enable httpd
+You can verify that httpd is on by running the following command:
+[ec2-user ~]$ sudo systemctl is-enabled httpd
+7. Add a security rule to allow inbound HTTP (port 80) connections to your instance if you have not already done so. By default, a launch-wizard-N security group was set up for your instance during initialization. This group contains a single rule to allow SSH connections.
+8. Test your web server. In a web browser, type the public DNS address (or the public IP address) of your instance. If there is no content in /var/www/html, you should see the Apache test page. You can get the public DNS for your instance using the Amazon EC2 console (check the Public DNS column; if this column is hidden, choose Show/Hide Columns (the gear-shaped icon) and choose Public DNS).
+
+
+Apache httpd serves files that are kept in a directory called the Apache document root. The Amazon Linux Apache document root is /var/www/html, which by default is owned by root.
+
+To allow the ec2-user account to manipulate files in this directory, you must modify the ownership and permissions of the directory. There are many ways to accomplish this task. In this tutorial, you add ec2-user to the apache group, to give the apache group ownership of the /var/www directory and assign write permissions to the group.
+
+To set file permissions
+1. Add your user (in this case, ec2-user) to the apache group.
+[ec2-user ~]$ sudo usermod -a -G apache ec2-user
+2. Log out and then log back in again to pick up the new group, and then verify your membership.
+a. Log out of the ec2 (use the exit command or close the terminal window):
+[ec2-user ~]$ exit
+b. To verify your membership in the apache group, reconnect to your instance, and then run the following command:
+[ec2-user ~]$ groups
+ec2-user adm wheel apache systemd-journal
+3. Change the group ownership of /var/www and its contents to the apache group.
+[ec2-user ~]$ sudo chown -R ec2-user:apache /var/www
+4. To add group write permissions and to set the group ID on future subdirectories, change the directory permissions of /var/www and its subdirectories.
+[ec2-user ~]$ sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;
+5. To add group write permissions, recursively change the file permissions of /var/www and its subdirectories:
+[ec2-user ~]$ find /var/www -type f -exec sudo chmod 0664 {} \;
+Now, ec2-user (and any future members of the apache group) can add, delete, and edit files in the Apache document root, enabling you to add content, such as a static website or a PHP application.
+### Step 2: Test Your LAMP Server
+1. Create a PHP file in the Apache document root.
+[ec2-user ~]$ echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php
+If you get a "Permission denied" error when trying to run this command, try logging out and logging back in again to pick up the proper group permissions that you configured in To set file permissions.
+2. In a web browser, type the URL of the file that you just created. This URL is the public DNS address of your instance followed by a forward slash and the file name. For example:
+http://my.public.dns.amazonaws.com/phpinfo.php
+You should see the PHP information page:
+
+Note
+If you do not see this page, verify that the /var/www/html/phpinfo.php file was created properly in the previous step. You can also verify that all of the required packages were installed with the following command.
+[ec2-user ~]$ sudo yum list installed httpd mariadb-server php-mysqlnd
+If any of the required packages are not listed in your output, install them with the sudo yum install package command. Also verify that the php7.2 and lamp-mariadb10.2-php7.2 extras are enabled in the output of the amazon-linux-extras command.
+3. Delete the phpinfo.php file. Although this can be useful information, it should not be broadcast to the internet for security reasons.
+[ec2-user ~]$ rm /var/www/html/phpinfo.php
+You should now have a fully functional LAMP web server. If you add content to the Apache document root at /var/www/html, you should be able to view that content at the public DNS address for your instance.
+### Step 3: Secure the Database Server
+To secure the MariaDB server
+1. Start the MariaDB server.
+[ec2-user ~]$ sudo systemctl start mariadb
+2. Run mysql_secure_installation.
+[ec2-user ~]$ sudo mysql_secure_installation
+a. When prompted, type a password for the root account.
+i. Type the current root password. By default, the root account does not have a password set. Press Enter.
+ii. Type Y to set a password, and type a secure password twice. For more information about creating a secure password, see https://identitysafe.norton.com/password-generator/. Make sure to store this password in a safe place.
+Note
+Setting a root password for MariaDB is only the most basic measure for securing your database. When you build or install a database-driven application, you typically create a database service user for that application and avoid using the root account for anything but database administration.
+b. Type Y to remove the anonymous user accounts.
+c. Type Y to disable the remote root login.
+d. Type Y to remove the test database.
+e. Type Y to reload the privilege tables and save your changes.
+3. (Optional) If you do not plan to use the MariaDB server right away, stop it. You can restart it when you need it again.
+[ec2-user ~]$ sudo systemctl stop mariadb
+4. (Optional) If you want the MariaDB server to start at every boot, type the following command.
+[ec2-user ~]$ sudo systemctl enable mariadb
+
+### Step 4: (Optional) Install phpMyAdmin
+phpMyAdmin is a web-based database management tool that you can use to view and edit the MySQL databases on your EC2 instance. Follow the steps below to install and configure phpMyAdmin on your Amazon Linux instance.
+Important
+We do not recommend using phpMyAdmin to access a LAMP server unless you have enabled SSL/TLS in Apache; otherwise, your database administrator password and other data are transmitted insecurely across the internet. For security recommendations from the developers, see Securing your phpMyAdmin installation. For general information about securing a web server on an EC2 instance, see Tutorial: Configure Apache Web Server on Amazon Linux to use SSL/TLS.
+To install phpMyAdmin
+1. Install the required dependencies.
+[ec2-user ~]$ sudo yum install php-mbstring -y
+2. Restart Apache.
+[ec2-user ~]$ sudo systemctl restart httpd
+3. Restart php-fpm.
+[ec2-user ~]$ sudo systemctl restart php-fpm
+4. Navigate to the Apache document root at /var/www/html.
+[ec2-user ~]$ cd /var/www/html
+5. Select a source package for the latest phpMyAdmin release from https://www.phpmyadmin.net/downloads. To download the file directly to your instance, copy the link and paste it into a wget command, as in this example:
+[ec2-user html]$ wget https://www.phpmyadmin.net/downloads/phpMyAdmin-latest-all-languages.tar.gz
+6. Create a phpMyAdmin folder and extract the package into it with the following command.
+[ec2-user html]$ mkdir phpMyAdmin && tar -xvzf phpMyAdmin-latest-all-languages.tar.gz -C phpMyAdmin --strip-components 1
+7. Delete the phpMyAdmin-latest-all-languages.tar.gz tarball.
+[ec2-user html]$ rm phpMyAdmin-latest-all-languages.tar.gz
+8. (Optional) If the MySQL server is not running, start it now.
+[ec2-user ~]$ sudo systemctl start mariadb
+9. In a web browser, type the URL of your phpMyAdmin installation. This URL is the public DNS address (or the public IP address) of your instance followed by a forward slash and the name of your installation directory. For example:
+http://my.public.dns.amazonaws.com/phpMyAdmin
+You should see the phpMyAdmin login page:
+
+10. Log in to your phpMyAdmin installation with the root user name and the MySQL root password you created earlier.
+Your installation must still be configured before you put it into service. To configure phpMyAdmin, you can manually create a configuration file, use the setup console, or combine both approaches.
+For information about using phpMyAdmin, see the phpMyAdmin User Guide.
+### Troubleshooting
+This section offers suggestions for resolving common problems you may encounter while setting up a new LAMP server.
+I can't connect to my server using a web browser.
+Perform the following checks to see if your Apache web server is running and accessible.
+• Check the status of the web server
+You can verify that httpd is on by running the following command:
+[ec2-user ~]$ sudo systemctl is-enabled httpd
+If the httpd process is not running, repeat the steps described in To prepare the LAMP server.
+• Check the firewall configuration
+If you are unable to see the Apache test page, check that the security group you are using contains a rule to allow HTTP (port 80) traffic. For information about adding an HTTP rule to your security group, see Adding Rules to a Security Group.
+
+[QwikLab: Intro to DynamoDB](https://awseducate.qwiklabs.com/focuses/23?parent=catalog)
+Time spent: 30 min
+Introduction: Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed database and supports both document and key-value data models. This lab is about creating a table in Amazon DynamoDB to store information about a music library, executing some queries, and finally deleting the table.
+
+Task 1: create the music library table
+Use Music as the table name to create the NoSQL table and set it up.
+
+Task 2: Add data to the music table.
+In a NoSQL database, a table is a collection of data on a particular topic. Each table contains multiple items, and an item is a group of attributes that is uniquely identifiable among all of the other items. An attribute is a fundamental data element.
+
+Task 3: Modify an existing item
+You can edit an item after creation if you find something wrong with it, which is convenient.
+
+Task 4: Query the table
+I learned that there are two ways to retrieve data from a DynamoDB table: Query and Scan. A query is the most efficient way to retrieve data from a DynamoDB table, while a scan looks through every item in the table, which is less efficient.
+
+
+Task 5: Delete the table.
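+
+A CLI sketch of the same five tasks (the attribute names and sample items are my own placeholders, not necessarily the ones the lab uses):
+
+```bash
+# Task 1: create the Music table with a partition key and a sort key
+aws dynamodb create-table \
+  --table-name Music \
+  --attribute-definitions AttributeName=Artist,AttributeType=S AttributeName=SongTitle,AttributeType=S \
+  --key-schema AttributeName=Artist,KeyType=HASH AttributeName=SongTitle,KeyType=RANGE \
+  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
+# Task 2: add an item (a group of attributes)
+aws dynamodb put-item --table-name Music \
+  --item '{"Artist": {"S": "Psy"}, "SongTitle": {"S": "Gangnam Style"}}'
+# Task 4: Query reads only the items under one partition key; Scan reads the whole table
+aws dynamodb query --table-name Music \
+  --key-condition-expression "Artist = :a" \
+  --expression-attribute-values '{":a": {"S": "Psy"}}'
+aws dynamodb scan --table-name Music
+# Task 5: delete the table
+aws dynamodb delete-table --table-name Music
+```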
+
+---
+
+[AWS Tutorial: Deploy a Scalable Node.js Web App](https://aws.amazon.com/getting-started/projects/deploy-nodejs-web-app/?trk=gs_card)
+
+ add image
+
+## Learning Steps
+
+### 1. Create Elastic Beanstalk App and Launch
+
+- Choose __Platform__ as _Node.js_
+
+ add image
+- Choose __Application code__ as _Sample application_
+- Create app and Launch, it should look like below,
+
+ add image
+
+- I learned that Elastic Beanstalk creates the environment with the following resources: EC2 instance, instance security group, load balancer, load balancer security group, Auto Scaling group, Amazon S3 bucket, Amazon CloudWatch alarms, AWS CloudFormation stack, and a domain name
+
+
+### 2. Add Permissions to Your Environment's Instances
+
+- Open the [Roles page](https://console.aws.amazon.com/iam/home#roles) in the IAM console.
+- Choose __aws-elasticbeanstalk-ec2-role__
+- Attach Policies
+ - _AmazonSNSFullAccess_
+ - _AmazonDynamoDBFullAccess_
+
+### 3. Deploy the Sample Application
+
+- Download the [source bundle](https://github.com/awslabs/eb-node-express-sample/releases/download/v1.1/eb-node-express-sample-v1.1.zip) from Github
+- Open the Elastic Beanstalk console
+- Choose __Upload and Deploy__, select the source bundle
+
+
+ add image
+
+### 4. Create a DynamoDB Table outside Elastic Beanstalk
+
+- Table name: _nodejs-tutorial_
+- Primary key: _email_
+- Primary key type: _String_
+
+### 5. Update the Application's Configuration Files
+
+- Unzip the source file bundle
+- Open _.ebextensions/options.config_ and change the values of the following settings:
+ - NewSignupEmail: _YOUR EMAIL_
+ - STARTUP-SIGNUP-TABLE: _nodejs-tutorial_
+
+This configures the application to use the __nodejs-tutorial__ table instead of the one created by
+_.ebextensions/create-dynamodb-table.config_, and sets the email address that the Amazon SNS topic uses for notifications.
+
+- Remove _.ebextensions/create-dynamodb-table.config_, so that the next time you deploy the application,
+the table created by this configuration file will be deleted
+
+```bash
+~/nodejs-tutorial$ rm .ebextensions/create-dynamodb-table.config
+```
+
+- Zip the modified source bundle and deploy again
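+
+One way to do the re-zip step from a shell, as a hedged sketch (the output file name is arbitrary, and the .git exclusion only matters if the directory is a git checkout):
+
+```bash
+# Package the modified application directory into a new source bundle
+cd ~/nodejs-tutorial
+zip -r ../nodejs-tutorial-v2.zip . -x '*.git*'
+# Then upload nodejs-tutorial-v2.zip through "Upload and Deploy" in the Elastic Beanstalk console
+```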
+
+### 6. Configure Your Environment for High Availability
+
+- Open the Elastic Beanstalk console
+- Choose __Configuration__
+- On the __Capacity__ configuration card, choose Modify
+- In the __Auto Scaling Group__ section, set __Min instances__ to 2.
+
+
+ add image
+
+### 7. Cleanup
+
+- Open the Elastic Beanstalk console
+- Choose __Actions__, and then choose __Terminate Environment__
+- Delete DynamoDB table __nodejs-tutorial__
+
+
+[QwikLab: Intro to AWS Lambda](https://awseducate.qwiklabs.com/focuses/36?parent=catalog)
+
+> AWS Lambda is a compute service that runs your code in response to events and automatically manages the compute resources for you, making it easy to build applications that respond quickly to new information.
+This lab creates a Lambda function to handle S3 image uploads by resizing them to thumbnails
+and storing the thumbnails in another S3 bucket.
+
+
+## Learning process
+
+### 1. Create 2 Amazon S3 Buckets as Input and Output Destination
+
+- On the __Services__ menu, select __S3__
+- Create bucket, with name _images-1234_, as the source bucket for original uploads
+- Create another bucket, with name _images-1234-resized_, as the output bucket for thumbnails
+- Upload the _HappyFace.jpg_ to source bucket
+
+### 2. Create an AWS Lambda Function
+
+- On the __Services__ menu, select __Lambda__
+- Create function and configure
+ - Name: Create-Thumbnail
+ - Runtime: Python 3.6
+ - Existing role: lambda-execution-role
+
+ This role grants permission to the Lambda function to read and write images in S3
+- Finish the rest of the configuration by providing the URL of the zipped Python script, which handles the upload event
+and creates the thumbnail in the output bucket (a CLI sketch of creating such a function follows below)
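+
+As a hedged sketch, the equivalent function creation with the AWS CLI might look like this (the role ARN, handler name and zip file are placeholders; the lab itself does this through the console):
+
+```bash
+# Create the Lambda function from a zipped deployment package
+aws lambda create-function \
+  --function-name Create-Thumbnail \
+  --runtime python3.6 \
+  --role arn:aws:iam::123456789012:role/lambda-execution-role \
+  --handler CreateThumbnail.handler \
+  --zip-file fileb://CreateThumbnail.zip
+```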
+
+### 3. Trigger Your Function by Uploads
+
+- Click _Test_ button and configure
+ - Event template: Amazon S3 put
+ - Event name: Upload
+- Modify the template
+ - replace _example-bucket_ with _images-1234_
+ - replace _test.key_ with _HappyFace.jpg_
+- Save and run
+- If it succeeds, the thumbnail image can be found in the output bucket
+
+
+### 4. Monitoring and Logging
+- __Monitoring__ tab displays graphs showing:
+ - Invocations: The number of times the function has been invoked.
+ - Duration: How long the function took to execute (in milliseconds).
+ - Errors: How many times the function failed.
+ - Throttles: When too many functions are invoked simultaneously, they will be throttled. The default is 1000 concurrent executions.
+ - Iterator Age: Measures the age of the last record processed from streaming triggers (Amazon Kinesis and Amazon DynamoDB Streams).
+ - Dead Letter Errors: Failures when sending messages to the Dead Letter Queue.
+- __Amazon CloudWatch Logs__ stores detailed log messages for the function in log streams
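+
+For example, recent log events for the function can be pulled from its default log group (assuming the standard `/aws/lambda/<function-name>` naming):
+
+```bash
+aws logs filter-log-events --log-group-name /aws/lambda/Create-Thumbnail --limit 20
+```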
+
+
+[QwikLab: Intro to Amazon API Gateway](https://awseducate.qwiklabs.com/focuses/21?parent=catalog)
+
+> API Gateway is a managed service provided by AWS that makes creating, deploying and maintaining APIs easy.
+The lab creates a Lambda function and triggers it by accessing the API Gateway endpoint URL.
+The lab also introduces best practices for building a RESTful API and the use of microservices.
+
+## API Gateway includes features to:
+
+- Transform the body and headers of incoming API requests to match backend systems
+- Transform the body and headers of the outgoing API responses to match API requirements
+- Control API access via AWS Identity and Access Management (IAM)
+- Create and apply API keys for third-party development
+- Enable Amazon CloudWatch integration for API monitoring
+- Cache API responses via Amazon CloudFront for faster response times
+- Deploy an API to multiple stages, allowing easy differentiation between development, test, and production, as well as versioning
+- Connect custom domains to an API
+- Define models to help standardize your API request and response transformations
+
+## Amazon API Gateway and AWS Lambda Terminology:
+
+- __Resource__: Represented as a URL endpoint and path.
+For example, _api.mysite.com/questions_.
+You can associate HTTP methods with resources and define different backend targets for each method.
+In a microservices architecture, a resource would represent a single microservice within your system.
+
+- __Method__: In API Gateway, a method is identified by the combination of a resource path and an HTTP verb,
+such as GET, POST, and DELETE.
+
+- __Method Request__: The method request settings in API Gateway store the method's authorization settings
+and define the URL query string parameters and HTTP request headers that are received from the client.
+
+- __Integration Request__: The integration request settings define the backend target used with the method.
+It is also where you can define mapping templates, to transform the incoming request to match what the backend target is expecting.
+
+- __Integration Response__: The integration response settings are where the mappings are defined
+between the response from the backend target and the method response in API Gateway.
+You can also transform the data that is returned from your backend target to fit what your end users and applications are expecting.
+
+- __Method Response__: The method response settings define the method response types, their headers and content types.
+
+- __Model__: In API Gateway, a model defines the format, also known as the schema or shape, of some data.
+You create and use models to make it easier to create mapping templates.
+Because API Gateway is designed to work primarily with JavaScript Object Notation (JSON)-formatted data,
+API Gateway uses JSON Schema to define the expected schema of the data.
+
+- __Stage__: In API Gateway, a stage defines the path through which an API deployment is accessible.
+This is commonly used to differentiate between versions, as well as development vs. production endpoints, etc.
+
+- __Blueprint__: A Lambda blueprint is an example Lambda function that can be used as a base to build out new Lambda functions.
+
+## Useful Resources
+- [White House RESTful API Standards](https://github.com/WhiteHouse/api-standards#pragmatic-rest)
+- [Spotify RESTful API Standards](https://developer.spotify.com/web-api/)
+
+
+
+## Learning Notes
+
+### 1. Create a Lambda Function on API Gateway
+
+- Same as [the last Lambda tutorial](../cloud-web-apps-lab-aws-lambda), use _Author from Scratch_, and configure:
+ - __Name__: FAQ
+ - __Runtime__: Node.js 8.10
+ - __Existing Role__: lambda-basic-execution
+
+- Create function and replace the event handling script, which performs:
+ - Define a list of FAQs
+ - Return a random FAQ
+
+- Add an API Gateway trigger, so the Lambda function is triggered whenever a call is made to API Gateway:
+ - __API__: Create a new API
+ - __Security__: Open
+ - __API name__: FAQ-API
+ - __Deployment stage__: myDeployment
+
+
+### 2. Trigger Lambda Function by API Gateway URL
+
+- Access the API Gateway endpoint URL in a browser; a random JSON object will be returned
+- Create a test by configuring:
+ - __Event name__: BasicTest
+ - Replace keys and values with an empty JSON object
+ - Save, run and check logs
+
+
+
+[AWS Tutorial: Build a Serverless Web Application](https://aws.amazon.com/getting-started/projects/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/?trk=gs_card)
+
+> We will build a simple serverless (AWS Lambda) web application that enables users to request unicorn rides from the Wild Rydes fleet.
+The application will present users with an HTML-based user interface for indicating the location
+where they would like to be picked up and will interface on the backend with a RESTful web service
+to submit the request and dispatch a nearby unicorn.
+The application will also provide facilities for users to register with the service and log in before requesting rides.
+
+
+### Static Web Hosting on S3
+
+Amazon S3 hosts static web resources including HTML, CSS, JavaScript, and image files which are loaded in the user's browser.
+
+
+
+- [Download the zip that has everything of the static site](https://github.com/awslabs/aws-serverless-workshops/archive/master.zip)
+- Create an S3 bucket with name _wildrydes-FIRSTNAME-LASTNAME_ as suggested
+- Unzip and upload everything in folder */WebApplication/1_StaticWebHosting/website/*
+- Make the bucket content public by attaching a bucket policy
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Principal": "*",
+ "Action": "s3:GetObject",
+ "Resource": "arn:aws:s3:::wildrydes-warren/*"
+ }
+ ]
+}
+```
+- Enable __Static website hosting__ under __Properties__ tab, and set _index.html_ for the Index document
+- Save and [see static website](http://wildrydes-warren.s3-website-us-east-1.amazonaws.com/)
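+
+The same bucket setup can also be scripted with the CLI (mirroring the commands used later in the Modern Web Application workshop; the local path is the folder mentioned above, and the bucket name is the one just created):
+
+```bash
+aws s3 sync ./WebApplication/1_StaticWebHosting/website s3://wildrydes-FIRSTNAME-LASTNAME
+aws s3 website s3://wildrydes-FIRSTNAME-LASTNAME --index-document index.html
+```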
+
+### User Management on Cognito
+
+Amazon Cognito provides user management and authentication functions to secure the backend API.
+
+
+
+- Create a Cognito user pool with name _WildRydes_, then get __Pool Id__
+- Add app client to pool with name _WildRydesWebApp_, uncheck the __Generate client secret__ option, since client secrets aren't
+currently supported for use with browser-based applications, then get __App client id__
+
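+A hedged CLI equivalent of these two console steps (the pool ID placeholder must be filled in from the first command's output):
+
+```bash
+aws cognito-idp create-user-pool --pool-name WildRydes
+aws cognito-idp create-user-pool-client --user-pool-id REPLACE_ME_POOL_ID \
+    --client-name WildRydesWebApp --no-generate-secret
+```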
+
+
+- Modify __/js/config.js__ by filling in __Pool Id__, __App client id__, and region
+```javascript
+window._config = {
+ cognito: {
+ userPoolId: 'us-east-1_65cLrZQkK', // e.g. us-east-2_uXboG5pAb
+ userPoolClientId: '3m1t3bi2d9p62qa79pj930r65p', // e.g. 25ddkmj4v6hfsfvruhpfi7n4hv
+ region: 'us-east-1' // e.g. us-east-2
+ },
+ api: {
+ invokeUrl: '' // e.g. https://rc7nyt4tql.execute-api.us-west-2.amazonaws.com/prod',
+ }
+};
+```
+
+- Visit [register.html](http://wildrydes-warren.s3-website-us-east-1.amazonaws.com/register.html) to create an account,
+either with a real mailbox or a dummy one
+
+- Visit [verify.html](http://wildrydes-warren.s3-website-us-east-1.amazonaws.com/verify.html), fill in the verification code
+or __confirm__ user in Cognito console (General settings/Users and groups) manually
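+
+  Confirming the user manually can also be done from the CLI (pool ID and username are placeholders):
+
+  ```bash
+  aws cognito-idp admin-confirm-sign-up --user-pool-id REPLACE_ME_POOL_ID --username REPLACE_ME_EMAIL
+  ```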
+
+- Visit [ride.html](http://wildrydes-warren.s3-website-us-east-1.amazonaws.com/ride.html), log in with email and password,
+and you should see the ride request page
+
+
+
+### Serverless Backend with AWS Lambda
+
+Amazon DynamoDB provides a persistence layer where data can be stored by the API's Lambda function.
+
+
+
+- Create a DynamoDB table named __Rides__, with __RideId__ as the partition key
+- Create an IAM role for your Lambda function, name it _WildRydesLambda_
+
+ Every Lambda function has an IAM role associated with it.
+ This role defines what other AWS services the function is allowed to interact with.
+
+- Grant the IAM role _WildRydesLambda_ permission to write to DynamoDB
+
+
+
+- Scope the role's write permission to this table by specifying the table ARN
+
+
+
+
+- Create a Lambda Function for Handling Requests, name it _RequestUnicorn_
+
+- Choose the existing role _WildRydesLambda_ for function _RequestUnicorn_, so that the function
+is able to write to DynamoDB
+
+
+
+- Test the function
+
+
+
+### RESTful APIs with API Gateway
+
+In this module you'll use Amazon API Gateway to expose the Lambda function _RequestUnicorn_ as a RESTful API.
+This API will be accessible on the public Internet.
+It will be secured using the Amazon Cognito user pool you created in the previous module.
+
+
+
+- Create a New REST API in API Gateway, name it _WildRydes_
+- Create a Cognito User Pools Authorizer, name it _WildRydes_, then test with the __Authorization Token__
+
+
+
+
+- Create a new resource, name it __ride__ and create a _POST_ method for it
+- Use Lambda function _RequestUnicorn_ to handle the _POST_ method
+- Deploy API in stage _prod_
+- Update _config.js_ in S3 with the _invokeUrl_
+- Log in and request a unicorn pickup on the White House South Lawn :)
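+
+The module performs all of this in the console; a minimal CLI sketch of just creating and deploying the API (resource, method, and authorizer wiring omitted; the API ID placeholder comes from the first command's output) could look like:
+
+```bash
+aws apigateway create-rest-api --name WildRydes
+aws apigateway create-deployment --rest-api-id REPLACE_ME_API_ID --stage-name prod
+```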
+
+
+
+
+> Because the web application is built on a rather complex architecture, the CI/CD configuration
+is not included; please refer to the Module 2 tutorial. These notes mainly focus on implementing the features of the app
+with AWS CLI commands.
+
+## Official Links
+
+[AWS Tutorial: Build a Modern Web Application](https://aws.amazon.com/getting-started/projects/build-modern-app-fargate-lambda-dynamodb-python/?trk=gs_card)
+
+## Application Architecture
+
+
+_(image placeholder)_
+
+The Mythical Mysfits website serves its static content directly from __Amazon S3__,
+provides a microservice API backend deployed as a container through __AWS Fargate__ on __Amazon ECS__,
+stores data in a managed NoSQL database provided by __Amazon DynamoDB__,
+with authentication and authorization for the application enabled through __Amazon API Gateway__ and its integration with __Amazon Cognito__.
+User clicks on the website will be sent as records to an __Amazon Kinesis Firehose delivery stream__,
+where those records will be processed by serverless __AWS Lambda__ functions and then stored in __Amazon S3__.
+
+## Learning Notes
+
+### [Module 1: IDE Setup and Static Website Hosting](https://github.com/aws-samples/aws-modern-application-workshop/tree/python/module-1)
+
+- The AWS Cloud9 IDE ships with a free-tier _t2.micro_ EC2 instance; the whole environment resembles the _CodeAnywhere_ IDE we
+used for Project 1
+
+
+_(image placeholder)_
+
+- Several aws-cli commands
+ - Create S3 bucket
+ ```bash
+ aws s3 mb s3://REPLACE_ME_BUCKET_NAME
+ ```
+ - Set website homepage in bucket
+ ```bash
+ aws s3 website s3://REPLACE_ME_BUCKET_NAME --index-document index.html
+ ```
+ - Set bucket access policy to public
+ ```bash
+  aws s3api put-bucket-policy \
+  --bucket REPLACE_ME_BUCKET_NAME \
+  --policy file://~/environment/aws-modern-application-workshop/module-1/aws-cli/website-bucket-policy.json
+ ```
+ - Publish website on S3
+ ```bash
+  aws s3 cp \
+  ~/environment/aws-modern-application-workshop/module-1/web/index.html \
+  s3://REPLACE_ME_BUCKET_NAME/index.html
+ ```
+
+- Visit static website [s3 index](http://mythical-bucket-warren.s3-website-us-east-1.amazonaws.com/)
+
+### [Module 2: Creating a Service with AWS Fargate](https://github.com/aws-samples/aws-modern-application-workshop/tree/python/module-2)
+
+AWS Fargate is a deployment option in Amazon ECS that allows you to deploy containers without having to manage any clusters or servers.
+For our Mythical Mysfits backend, we will use Python and create a Flask app in a Docker container behind a Network Load Balancer.
+These will form the microservice backend for the frontend website to integrate with.
+
+- Create the core infrastructure stack in the cloud using AWS CloudFormation (takes about 10 minutes), including:
+ - An Amazon VPC
+ - Two NAT Gateways (cost $1 per day)
+ - A DynamoDB VPC Endpoint
+ - A Security Group
+ - IAM Roles
+
+```bash
+aws cloudformation create-stack --stack-name MythicalMysfitsCoreStack --capabilities CAPABILITY_NAMED_IAM \
+--template-body file://~/environment/aws-modern-application-workshop/module-2/cfn/core.yml
+```
+
+Stack components are specified in _core.yml_.
+
+- Save stack information when creation completes
+
+```bash
+aws cloudformation describe-stacks --stack-name MythicalMysfitsCoreStack > \
+~/environment/cloudformation-core-output.json
+```
+
+- Dockerize backend Flask webservice
+  - Change to the app directory, which contains the _Dockerfile_ that tells Docker all of the instructions
+  that should take place when the build command is executed
+ ```bash
+ cd ~/environment/aws-modern-application-workshop/module-2/app
+ ```
+ - Build Docker image
+ ```bash
+ docker build . -t REPLACE_ME_ACCOUNT_ID.dkr.ecr.REPLACE_ME_REGION.amazonaws.com/mythicalmysfits/service:latest
+ ```
+ - Run image locally
+ ```bash
+ docker run -p 8080:8080 REPLACE_ME_WITH_DOCKER_IMAGE_TAG
+ ```
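+
+  With the container running, the service can be checked locally (assuming it exposes the same _/mysfits_ path referenced later in Module 4):
+
+  ```bash
+  curl http://localhost:8080/mysfits
+  ```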
+
+ - Push the Docker Image to Amazon ECR (Amazon Elastic Container Registry)
+ ```bash
+ aws ecr create-repository --repository-name mythicalmysfits/service
+ $(aws ecr get-login --no-include-email)
+ docker push REPLACE_ME_WITH_DOCKER_IMAGE_TAG
+ ```
+- Deploy the container on a cluster in Amazon Elastic Container Service (ECS)
+ - Create cluster
+ ```bash
+ aws ecs create-cluster --cluster-name MythicalMysfits-Cluster
+ ```
+ - Create an AWS CloudWatch Logs Group
+ ```bash
+ aws logs create-log-group --log-group-name mythicalmysfits-logs
+ ```
+
+__AWS Fargate__ allows you to specify that your containers be deployed to a cluster without having to actually provision or manage any servers yourself.
+
+
+
+- Enabling a Load Balanced Fargate Service
+
+ - Create a Network Load Balancer
+ ```bash
+  aws elbv2 create-load-balancer --name mysfits-nlb --scheme internet-facing --type network \
+  --subnets REPLACE_ME_PUBLIC_SUBNET_ONE REPLACE_ME_PUBLIC_SUBNET_TWO > ~/environment/nlb-output.json
+ ```
+
+ - Create a Load Balancer Target Group
+
+ A target group allows AWS resources to register themselves as targets for requests that the load balancer receives to forward.
+
+ ```bash
+  aws elbv2 create-target-group --name MythicalMysfits-TargetGroup --port 8080 --protocol TCP --target-type ip \
+  --vpc-id REPLACE_ME_VPC_ID --health-check-interval-seconds 10 --health-check-path / \
+  --health-check-protocol HTTP --healthy-threshold-count 3 --unhealthy-threshold-count 3 > \
+  ~/environment/target-group-output.json
+ ```
+
+ - Create a Load Balancer Listener
+
+  This informs the load balancer that requests received on a specific port
+  should be forwarded to targets that have registered with the above target group.
+
+ ```bash
+  aws elbv2 create-listener --default-actions TargetGroupArn=REPLACE_ME_NLB_TARGET_GROUP_ARN,Type=forward \
+  --load-balancer-arn REPLACE_ME_NLB_ARN --port 80 --protocol TCP
+ ```
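+
+  - Create the Fargate Service
+
+  A condensed sketch of creating the service that registers with the target group above (this assumes a task definition has already been registered; all REPLACE_ME values are placeholders):
+
+  ```bash
+  aws ecs create-service --cluster MythicalMysfits-Cluster \
+      --service-name MythicalMysfits-Service \
+      --task-definition REPLACE_ME_TASK_DEFINITION \
+      --launch-type FARGATE --desired-count 1 \
+      --network-configuration "awsvpcConfiguration={subnets=[REPLACE_ME_PRIVATE_SUBNET_ONE],securityGroups=[REPLACE_ME_SECURITY_GROUP],assignPublicIp=DISABLED}" \
+      --load-balancers targetGroupArn=REPLACE_ME_NLB_TARGET_GROUP_ARN,containerName=REPLACE_ME_CONTAINER_NAME,containerPort=8080
+  ```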
+
+- Visit the website [s3 index](http://mythical-bucket-warren.s3-website-us-east-1.amazonaws.com/) again; the website now retrieves
+data through the load balancer
+
+
+
+### [Module 3 - Adding a Data Tier with Amazon DynamoDB](https://github.com/aws-samples/aws-modern-application-workshop/tree/python/module-3)
+
+Rather than having all of the Mysfits stored in a static JSON file,
+we will store them in a database to make the website more extensible and scalable in the future.
+
+- Create a DynamoDB Table
+
+```bash
+aws dynamodb create-table --cli-input-json \
+file://~/environment/aws-modern-application-workshop/module-3/aws-cli/dynamodb-table.json
+```
+
+- Populate the Table
+
+```bash
+aws dynamodb batch-write-item \
+--request-items file://~/environment/aws-modern-application-workshop/module-3/aws-cli/populate-dynamodb.json
+```
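+
+To verify the items loaded, a quick scan can be run (the table name is defined in _dynamodb-table.json_; _MysfitsTable_ is assumed here):
+
+```bash
+aws dynamodb scan --table-name MysfitsTable --max-items 3
+```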
+
+- Update Flask code to read data from DynamoDB
+
+- Visit the website [s3 index](http://mythical-bucket-warren.s3-website-us-east-1.amazonaws.com/) again; the website now displays
+data from DynamoDB.
+
+### [Module 4: Adding User and API features with Amazon API Gateway and AWS Cognito](https://github.com/aws-samples/aws-modern-application-workshop/tree/python/module-4)
+
+To make sure that only registered users are authorized to like or adopt mysfits on the website,
+we will deploy a REST API with Amazon API Gateway to sit in front of our NLB.
+
+- Adding a User Pool for Website Users
+ - Create the Cognito User Pool
+ ```bash
+ aws cognito-idp create-user-pool --pool-name MysfitsUserPool --auto-verified-attributes email
+ ```
+ - Create a Cognito User Pool Client
+ ```bash
+ aws cognito-idp create-user-pool-client --user-pool-id REPLACE_ME --client-name MysfitsUserPoolClient
+ ```
+- Adding a new REST API with Amazon API Gateway
+ - Create an API Gateway VPC Link
+ ```bash
+  aws apigateway create-vpc-link --name MysfitsApiVpcLink --target-arns REPLACE_ME_NLB_ARN > \
+  ~/environment/api-gateway-link-output.json
+ ```
+ In order for API Gateway to privately integrate with our NLB,
+ we will configure an API Gateway VPC Link that enables API Gateway APIs to directly integrate with backend web services
+ that are privately hosted inside a VPC.
+
+  - Create the REST API using Swagger
+  The REST API and all of its resources, methods, and configuration are defined within a Swagger JSON file.
+
+ - Deploy the API
+ A stage is a named reference to a deployment, which is a snapshot of the API.
+ You can use a Stage to manage and optimize a particular deployment.
+
+- Updating the Mythical Mysfits Website
+ - Update the Flask Service Backend
+
+  Deploy an updated Flask service that matches the newly defined API in API Gateway
+
+ - Update the Mythical Mysfits Website in S3
+
+  Switch the API endpoint from the NLB to API Gateway; see [API Gateway health check](https://jigpafa4ti.execute-api.us-east-1.amazonaws.com/prod/mysfits)
+
+### [Module 5: Capturing User Behavior](https://github.com/aws-samples/aws-modern-application-workshop/tree/python/module-5)
+To help us gather more insight into user activity,
+we will implement the ability for the website frontend to submit a tiny request,
+each time a mysfit profile is clicked by a user,
+to a new microservice API we'll create.
+Those records will be processed in real time by a serverless code function,
+aggregated, and stored for any future analysis that you may want to perform.
+
+- Creating the Streaming Service Stack
+ - Create an S3 Bucket for Lambda Function Code Packages
+ - Use the SAM CLI to Package your Code for Lambda
+ - Deploy the Stack using AWS CloudFormation
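+
+  A condensed sketch of these three steps (the bucket name, template file names, and stack name are placeholders or assumptions, not values taken from the workshop):
+
+  ```bash
+  aws s3 mb s3://REPLACE_ME_CODE_BUCKET_NAME
+  sam package --template-file ./real-time-streaming.yml \
+      --output-template-file ./transformed-streaming.yml \
+      --s3-bucket REPLACE_ME_CODE_BUCKET_NAME
+  aws cloudformation deploy --template-file ./transformed-streaming.yml \
+      --stack-name MythicalMysfitsStreamingStack --capabilities CAPABILITY_IAM
+  ```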
+
+- Sending Mysfit Profile Clicks to the Service
+ - Update the Website Content
+ - Push the New Site Version to S3
+
+- Log in and click on website items, then check the user behavior data gathered in the bucket
+
+
+- Workshop Clean-Up
+
+Clean up the workshop resources to avoid additional charges:
+
+```bash
+aws cloudformation delete-stack --stack-name STACK-NAME-HERE
+```
From fefb8454359d1a3cef6622a754595bee97923ffe Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Sun, 9 Dec 2018 22:19:18 -0500
Subject: [PATCH 04/64] Update README.md
---
README.md | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/README.md b/README.md
index c86e0e6..e19faf5 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,5 @@
-[AWS Tutorial: Launch a VM] (https://aws.amazon.com/getting-started/tutorials/launch-a-virtual-machine/)
+[AWS Tutorial: Launch a VM](https://aws.amazon.com/getting-started/tutorials/launch-a-virtual-machine/)
+
Time Spent: 40 min
### Step 1. Sign-up for AWS
You could use your only personal account to register and you could also choose to set up IAM user for better management
From 97aaf80a65a4e674e7c0791b9a6c4c6ae21491f1 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Sun, 9 Dec 2018 22:20:27 -0500
Subject: [PATCH 05/64] Update README.md
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index e19faf5..464eb46 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
Time Spent: 40 min
### Step 1. Sign-up for AWS
-You could use your only personal account to register and you could also choose to set up IAM user for better management
+*You could use your only personal account to register and you could also choose to set up IAM user for better management
### Step 2. Launch an Amazon EC2 Instance
### a. Enter the Amazon EC2 Console
From 0ea96007196129df0595996eb2cfdd7e868c3418 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Sun, 9 Dec 2018 22:22:52 -0500
Subject: [PATCH 06/64] Update README.md
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 464eb46..150cd26 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
Time Spent: 40 min
### Step 1. Sign-up for AWS
-*You could use your only personal account to register and you could also choose to set up IAM user for better management
+* You could use your only personal account to register and you could also choose to set up IAM user for better management
### Step 2. Launch an Amazon EC2 Instance
### a. Enter the Amazon EC2 Console
From 43dde1ca459380645edcc5a1fc685cefc84d82ea Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Sun, 9 Dec 2018 22:26:36 -0500
Subject: [PATCH 07/64] Update README.md
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 150cd26..98bc391 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-[AWS Tutorial: Launch a VM](https://aws.amazon.com/getting-started/tutorials/launch-a-virtual-machine/)
+## [AWS Tutorial: Launch a VM](https://aws.amazon.com/getting-started/tutorials/launch-a-virtual-machine/)
Time Spent: 40 min
### Step 1. Sign-up for AWS
From f5e7cf8ec9912ef52c4f4e798b5dbf83faf30ff2 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Sun, 9 Dec 2018 22:29:47 -0500
Subject: [PATCH 08/64] Update README.md
---
README.md | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/README.md b/README.md
index 98bc391..eab39cb 100644
--- a/README.md
+++ b/README.md
@@ -1,10 +1,10 @@
## [AWS Tutorial: Launch a VM](https://aws.amazon.com/getting-started/tutorials/launch-a-virtual-machine/)
Time Spent: 40 min
-### Step 1. Sign-up for AWS
+### 1. Sign-up for AWS
* You could use your only personal account to register and you could also choose to set up IAM user for better management
-### Step 2. Launch an Amazon EC2 Instance
+### 2. Launch an Amazon EC2 Instance
### a. Enter the Amazon EC2 Console
Open the AWS Management Console, so you can keep this step-by-step guide open. When the screen loads, enter your user name and password to get started. Then type EC2 in the search bar and select Amazon EC2 to open the service console.
@@ -12,7 +12,7 @@ b. Launch an Instance
Select Launch Instance to create and configure your virtual machine.
-### Step 3. Configure your Instance
+### 3. Configure your Instance
You are now in the EC2 Launch Instance Wizard, which will help you configure and launch your instance.
#### a. In this screen, you are shown options to choose an Amazon Machine Image (AMI). AMIs are preconfigured server templates you can use to launch an instance. Each AMI includes an operating system, and can also include applications and application servers. For this tutorial, find Amazon Linux AMI and click Select.
@@ -33,7 +33,7 @@ e. Click View Instances on the next screen to view your instances and see the st
f. In a few minutes, the Instance State column on your instance will change to "running" and a Public IP address will be shown. You can refresh these Instance State columns by pressing the refresh button on the right just above the table. Copy the Public IP address of your AWS instance, so you can use it when we connect to the instance using SSH in Step 4.
-### Step 4. Connect to your Instance
+### 4. Connect to your Instance
After launching your instance, it's time to connect to it using SSH.
Mac/Linux user: Select Mac / Linux below to see instructions for opening a terminal window.
• Windows
@@ -57,7 +57,7 @@ d. You'll see a response similar to the following:
Warning: Permanently added 'ec2-198-51-100-1.compute-1.amazonaws.com' (RSA) to the list of known hosts.
You should then see the welcome screen for your instance and you are now connected to your AWS Linux virtual machine in the cloud.
-### Step 5. Terminate Your Instance
+### 5. Terminate Your Instance
You can easily terminate the instance from the EC2 console. In fact, it is a best practice to terminate instances you are no longer using so you don’t keep getting charged for them.
a. Back on the EC2 Console, select the box next to the instance you created. Then click the Actions button, navigate to Instance State, and click Terminate.
b. You will be asked to confirm your termination - select Yes, Terminate.
@@ -78,7 +78,7 @@ In summary, the cloud computing companies just utilizes the software virtualizat
[AWS Tutorial: Install a LAMP Web Server on Amazon Linux 2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-lamp-amazon-linux-2.html)
Time: 80 minutes
-### Step 1: Prepare the LAMP Server
+### 1: Prepare the LAMP Server
Prerequisites:
*Create an IAM User:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/get-set-up-for-amazon-ec2.html#create-an-iam-user
From c9cce91c8801eb424ded44db2f0a7e11779c8a01 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Sun, 9 Dec 2018 22:31:02 -0500
Subject: [PATCH 09/64] Update README.md
---
README.md | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index eab39cb..890585f 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,7 @@
## [AWS Tutorial: Launch a VM](https://aws.amazon.com/getting-started/tutorials/launch-a-virtual-machine/)
Time Spent: 40 min
+
### 1. Sign-up for AWS
* You could use your only personal account to register and you could also choose to set up IAM user for better management
@@ -75,11 +76,12 @@ The Intel realized they have to do the virtualization itself thus the VT technol
In summary, the cloud computing companies just utilizes the software virtualization of the processors and other hardware resources they have to rent it the customer and gives the results they want back.
-[AWS Tutorial: Install a LAMP Web Server on Amazon Linux 2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-lamp-amazon-linux-2.html)
+## [AWS Tutorial: Install a LAMP Web Server on Amazon Linux 2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-lamp-amazon-linux-2.html)
+
Time: 80 minutes
### 1: Prepare the LAMP Server
- Prerequisites:
+#### Prerequisites:
*Create an IAM User:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/get-set-up-for-amazon-ec2.html#create-an-iam-user
From 659c58e2c389f7ef64aa802473ebcb9883a09b58 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Sun, 9 Dec 2018 23:02:48 -0500
Subject: [PATCH 10/64] Update README.md
---
README.md | 91 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 91 insertions(+)
diff --git a/README.md b/README.md
index 890585f..19b10f5 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,94 @@
+## Big Data and Machine Learning (Beginner level + Intermediate Level)
+
+## [Video: Hadoop Intro](https://www.youtube.com/watch?v=jKCj4BxGTi8&feature=youtu.be)
+
+Time: it takes 35 minutes to learn it.
+
+* The video tutorial gives basic ideas of Hadoop framework. After 2000, the solution which uses the computation power provided by available computers to process data could not help. In recent years, there is an incredible explosion in the volume of data. IBM reported that 2.5 billion gigabytes of data was generated every day in 2012. 40000 search queries were done on Google every second. Therefore, we need computers with larger memories and faster processors or other more advanced solutions. The idea distributed system is using multiple computers to do the processing work which has much better performance. There are also challenges for this. There are high chances of failure since a distributed system uses multiple computers. There is also limit on bandwidth. Because it is difficult to synchronize data and process, the programming complexity is also high. The solution is Hadoop. Hadoop is a framework that allows for distributed processing of large data sets across clusters of commodity computers using simple programming models. The four key characters of Hadoop are economical, scalable, reliable and flexible. Compared to traditional DBMS, Hadoop distributes the data to multiple systems and later runs the computation wherever the data is located. The Hadoop has an ecosystem which is evolved from its three core components, data processing, resource management and Hadoop distributed file system. It is now comprised of 12 components including Hadoop distributed file system, HBase, scoop, flume, spark, Hadoop MapReduce, Pig, Impala, Hive, Cloudera Search, Oozie, Hue.
+
+[AWS Tutorial: Analyze Big Data with Hadoop](https://aws.amazon.com/getting-started/projects/analyze-big-data/?trk=gs_card)
+
+Time: it takes me more than one hour to learn and write up a summary.
+
+* I have acquired how to create Amazon S3 bucket store my log files and output data, Launch a fully functional Hadoop cluster using Amazon EMR, define the schema, create a table for sample log data stored in Amazon S3, analyze the data using a HiveQL script and write the results back to Amazon S3. It is interesting to learn.
+
+[QwikLab: Intro to S3]
+
+Time: it takes 50 minutes.
+
+* In this lab, I learned:
+ * Create a bucket in Amazon S3 service
+ * Add an object for example a picture to the bucket
+ * Manage access permissions on an object: change from private to public and see the access difference
+ * Create a bucket policy by using the AWS policy generator which require the Amazon Resource Name.
+ * Use bucket versioning to get access the picture with same name but uploaded at different time by changing the bucket policy
+
+ * The bucket is a really useful service and the versioning feature is quite cool.
+
+For QwikLab: Intro to Amazon Redshift, it takes me 60 minutes. In this lab, it covers
+* Launch a Redshift cluster: a cluster is a fully managed data warehouse that consists of a set of compute nodes; when launching a cluster, you have to specify the node type which determines the CPU, RAM, storage capacity and storage drive type.
+* Connect an SQL client called Pgweb to the Amazon Redshift cluster: we can write and run queries in Pgweb and also view the database information and structure.
+* Load sample data from an S3 bucket into the Amazon Redshift cluster which will hold the data for querying.
+* Run queries against data stored in Amazon Redshift: we could use SQL to query the data we need.
+
+
+In regard to Video: Short AWS Machine Learning Overview, it takes me 10 minutes. it talks about the Machine learning on AWS. Machine learning has three layers, framework interfaces for expert, ML platforms for developers and data scientists and application services for machine learning API calls in the application. Amazon Deep Learning AMI is for the frameworks layer and Zillow uses it. Amazon SageMaker is a good for ML platform layer.
+
+[Overview of AWS SageMaker]
+
+Time: it takes me 35 minutes.
+
+* The AWS SageMaker has four parts, including the notebook instance, jobs, models and endpoints. Notebook instance is about using algorithms to create model via training jobs. Training jobs are instances to train the model. We create models for hosting from job outputs, or import externally trained models into Amazon SageMaker. Endpoints are for developers to use the SageMaker in production. The tutor elaborate on xgboost, kmeans, scikit . He talks about setting up the training parameters. We can train it on single or multiple instances. Then we import models into hosts. The last step is build endpoint configuration and create endpoint for developers to call.
+
+For AWS Tutorial: Analyze Big Data with Hadoop, it takes me 80 minutes. I followed the following steps to finish the tutorial:
+* Step 1: Set Up Prerequisites: you have to have a personal AWS account; create an Amazon S3 Bucket and folder to store the output data from a Hive query; create an Amazon EC2 Key Pair to to connect to the nodes in your cluster over a secure channel using the Secure Shell (SSH) protocol.
+* 2: Launch The Cluster: user launches sample Amazon EMR cluster by using Quick Options in the Amazon EMR console and leaving most options to their default values; Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects, such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. Additionally, you can use Amazon EMR to transform and move large amounts of data into and out of other AWS data stores and databases, such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB.
+* 3: Allow SSH Connections to the Cluster From Your Client: Security groups act as virtual firewalls to control inbound and outbound traffic to your cluster. The default Amazon EMR-managed security groups associated with cluster instances do not allow inbound SSH connections as a security precaution. To connect to cluster nodes using SSH so that you can use the command line and view web interfaces that are hosted on the cluster, you need to add inbound rules that allow SSH traffic from trusted clients.
+* 4: Run a Hive Script to Process Data: The sample data is a series of Amazon CloudFront access log files; The sample script calculates the total number of requests per operating system over a specified time frame. The script uses HiveQL, which is a SQL-like scripting language for data warehousing and analysis
+* 5:Terminate the resources you do not need to save for the future
+
+terminating your cluster terminates the associated Amazon EC2 instances and stops the accrual of Amazon EMR charges. Amazon EMR preserves metadata information about completed clusters for your reference, at no charge, for two months. The console does not provide a way to delete terminated clusters so that they aren't viewable in the console. Terminated clusters are removed from the cluster when the metadata is removed
+There is more information on how to plan and configure clusters in your custom way, set up the security, manage clusters and trouble shoot cluster if it is performing in a wrong way.
+
+[QwikLab: Intro to Amazon Machine Learning]
+
+Time : it takes me 75 minutes.
+
+The lab tutorial consists of several parts:
+* Part 1- Upload training data : we put restaurant customer reviews data into Amazon S3 bucket and save it for analyzing
+* Part2- Create a datasource: configure Amazon ML to use the restaurant data set; we set customer review data as the data source for Amazon ML model
+* Part3- Create an ML Model from the Datasource: we will use data source to train and validate the model created in this part; the data source also contains metadata, such as the column data types and target variable which will also be used by the model algorithm; the ML modeling process will take 5 to 10 minutes to complete and we can see that in message section
+* Evaluate an ML model: the Amazon Machine Learning service evaluate the model automatically as part of the model creation process; it takes 70 percent of the data source to train the model and 30 percent to evaluate it.
+* Generate predictions from ML model: batch mode and real-time mode are two ways to generate predictions from ML model; batch mode is asynchronous while the real-time mode is real time.
+
+For AWS Tutorial: Build a Machine Learning Model, it takes me 50 minutes. It is about using Amazon ML to Predict Responses to a Marketing Offer:
+* Step 1: Prepare Your Data: In machine learning, you typically obtain the data and ensure that it is well formatted before starting the training process; we use customer purchase history to predict if this customer will subscribe to my new product
+* Step 2: Create a Training Datasource using the Amazon S3 service
+* Step 3: Create an ML Model: After you've created the training datasource, you use it to create an ML model, train the model, and then evaluate the results
+* Step 4: Review the ML Model's Predictive Performance and Set a Score Threshold
+* Step 5: Use the ML Model to Generate Predictions
+
+For Video Tutorial: Overview of AWS SageMaker, it takes me 40 minutes: The AWS SageMaker has four parts, including the notebook instance, jobs, models and endpoints. Notebook instance is about using algorithms to create model via training jobs. Training jobs are instances to train the model. We create models for hosting from job outputs, or import externally trained models into Amazon SageMaker. Endpoints are for developers to use the SageMaker in production. The tutor elaborate on xgboost, kmeans, scikit . He talks about setting up the training parameters. We can train it on single or multiple instances. Then we import models into hosts. The last step is build endpoint configuration and create endpoint for developers to call
+For AWS Tutorial: AWS SageMaker, it takes me 80 minutes.
+Step 1: Setting Up
+Step 2: Create an Amazon SageMaker Notebook Instance
+Step 3: Train and Deploy a Model
+Step 4: Clean up
+Step 5: Additional Considerations
+
+For Build a Serverless Real-Time Data Processing App, it takes 150 minutes,
+
+Cloud web application
+For QwikLab: Intro to S3, it takes 50 minutes. In this lab, I learned:
+• Create a bucket in Amazon S3 service
+• Add an object for example a picture to the bucket
+• Manage access permissions on an object: change from private to public and see the access difference
+• Create a bucket policy by using the AWS policy generator which require the Amazon Resource Name.
+• Use bucket versioning to get access the picture with same name but uploaded at different time by changing the bucket policy
+The bucket is a really useful service and the versioning feature is quite cool.
+
+
+
## [AWS Tutorial: Launch a VM](https://aws.amazon.com/getting-started/tutorials/launch-a-virtual-machine/)
Time Spent: 40 min
From f88931c7265b2265b077ff6f2343f2c4fde684e8 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Sun, 9 Dec 2018 23:07:15 -0500
Subject: [PATCH 11/64] Update README.md
---
README.md | 28 +++++++++++++++++++---------
1 file changed, 19 insertions(+), 9 deletions(-)
diff --git a/README.md b/README.md
index 19b10f5..59b7fc1 100644
--- a/README.md
+++ b/README.md
@@ -12,7 +12,7 @@ Time: it takes me more than one hour to learn and write up a summary.
* I have acquired how to create Amazon S3 bucket store my log files and output data, Launch a fully functional Hadoop cluster using Amazon EMR, define the schema, create a table for sample log data stored in Amazon S3, analyze the data using a HiveQL script and write the results back to Amazon S3. It is interesting to learn.
-[QwikLab: Intro to S3]
+[QwikLab: Intro to S3](https://awseducate.qwiklabs.com/focuses/30?parent=catalog)
Time: it takes 50 minutes.
@@ -25,22 +25,32 @@ Time: it takes 50 minutes.
* The bucket is a really useful service and the versioning feature is quite cool.
-For QwikLab: Intro to Amazon Redshift, it takes me 60 minutes. In this lab, it covers
-* Launch a Redshift cluster: a cluster is a fully managed data warehouse that consists of a set of compute nodes; when launching a cluster, you have to specify the node type which determines the CPU, RAM, storage capacity and storage drive type.
-* Connect an SQL client called Pgweb to the Amazon Redshift cluster: we can write and run queries in Pgweb and also view the database information and structure.
-* Load sample data from an S3 bucket into the Amazon Redshift cluster which will hold the data for querying.
-* Run queries against data stored in Amazon Redshift: we could use SQL to query the data we need.
+[QwikLab: Intro to Amazon Redshift](https://awseducate.qwiklabs.com/focuses/28?parent=catalog)
+Time: it takes me 60 minutes. In this lab, it covers
-In regard to Video: Short AWS Machine Learning Overview, it takes me 10 minutes. it talks about the Machine learning on AWS. Machine learning has three layers, framework interfaces for expert, ML platforms for developers and data scientists and application services for machine learning API calls in the application. Amazon Deep Learning AMI is for the frameworks layer and Zillow uses it. Amazon SageMaker is a good for ML platform layer.
+* Launch a Redshift cluster: a cluster is a fully managed data warehouse that consists of a set of compute nodes; when launching a cluster, you have to specify the node type which determines the CPU, RAM, storage capacity and storage drive type.
+* Connect an SQL client called Pgweb to the Amazon Redshift cluster: we can write and run queries in Pgweb and also view the database information and structure.
+* Load sample data from an S3 bucket into the Amazon Redshift cluster which will hold the data for querying.
+* Run queries against data stored in Amazon Redshift: we could use SQL to query the data we need.
-[Overview of AWS SageMaker]
+
+[Video: Short AWS Machine Learning Overview](https://www.youtube.com/watch?v=soG1B4jMl2s)
+
+Time: it takes me 10 minutes
+
+* it talks about the Machine learning on AWS. Machine learning has three layers, framework interfaces for expert, ML platforms for developers and data scientists and application services for machine learning API calls in the application. Amazon Deep Learning AMI is for the frameworks layer and Zillow uses it. Amazon SageMaker is a good for ML platform layer.
+
+[Overview of AWS SageMaker](https://www.youtube.com/watch?v=ym7NEYEx9x4&index=12&list=RDMWhrLw7YK38)
Time: it takes me 35 minutes.
* The AWS SageMaker has four parts, including the notebook instance, jobs, models and endpoints. Notebook instance is about using algorithms to create model via training jobs. Training jobs are instances to train the model. We create models for hosting from job outputs, or import externally trained models into Amazon SageMaker. Endpoints are for developers to use the SageMaker in production. The tutor elaborate on xgboost, kmeans, scikit . He talks about setting up the training parameters. We can train it on single or multiple instances. Then we import models into hosts. The last step is build endpoint configuration and create endpoint for developers to call.
-For AWS Tutorial: Analyze Big Data with Hadoop, it takes me 80 minutes. I followed the following steps to finish the tutorial:
+[AWS Tutorial: Analyze Big Data with Hadoop](https://aws.amazon.com/getting-started/projects/analyze-big-data/?trk=gs_card)
+
+it takes me 80 minutes. I followed the following steps to finish the tutorial:
+
* Step 1: Set Up Prerequisites: you have to have a personal AWS account; create an Amazon S3 Bucket and folder to store the output data from a Hive query; create an Amazon EC2 Key Pair to to connect to the nodes in your cluster over a secure channel using the Secure Shell (SSH) protocol.
* 2: Launch The Cluster: user launches sample Amazon EMR cluster by using Quick Options in the Amazon EMR console and leaving most options to their default values; Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects, such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. Additionally, you can use Amazon EMR to transform and move large amounts of data into and out of other AWS data stores and databases, such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB.
* 3: Allow SSH Connections to the Cluster From Your Client: Security groups act as virtual firewalls to control inbound and outbound traffic to your cluster. The default Amazon EMR-managed security groups associated with cluster instances do not allow inbound SSH connections as a security precaution. To connect to cluster nodes using SSH so that you can use the command line and view web interfaces that are hosted on the cluster, you need to add inbound rules that allow SSH traffic from trusted clients.
From 171ff33ab079f32c2c014a6f0fc69c6c842504d2 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Sun, 9 Dec 2018 23:08:23 -0500
Subject: [PATCH 12/64] Update README.md
---
README.md | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/README.md b/README.md
index 59b7fc1..c3455cd 100644
--- a/README.md
+++ b/README.md
@@ -6,13 +6,13 @@ Time: it takes 35 minutes to learn it.
* The video tutorial gives basic ideas of Hadoop framework. After 2000, the solution which uses the computation power provided by available computers to process data could not help. In recent years, there is an incredible explosion in the volume of data. IBM reported that 2.5 billion gigabytes of data was generated every day in 2012. 40000 search queries were done on Google every second. Therefore, we need computers with larger memories and faster processors or other more advanced solutions. The idea distributed system is using multiple computers to do the processing work which has much better performance. There are also challenges for this. There are high chances of failure since a distributed system uses multiple computers. There is also limit on bandwidth. Because it is difficult to synchronize data and process, the programming complexity is also high. The solution is Hadoop. Hadoop is a framework that allows for distributed processing of large data sets across clusters of commodity computers using simple programming models. The four key characters of Hadoop are economical, scalable, reliable and flexible. Compared to traditional DBMS, Hadoop distributes the data to multiple systems and later runs the computation wherever the data is located. The Hadoop has an ecosystem which is evolved from its three core components, data processing, resource management and Hadoop distributed file system. It is now comprised of 12 components including Hadoop distributed file system, HBase, scoop, flume, spark, Hadoop MapReduce, Pig, Impala, Hive, Cloudera Search, Oozie, Hue.
-[AWS Tutorial: Analyze Big Data with Hadoop](https://aws.amazon.com/getting-started/projects/analyze-big-data/?trk=gs_card)
+## [AWS Tutorial: Analyze Big Data with Hadoop](https://aws.amazon.com/getting-started/projects/analyze-big-data/?trk=gs_card)
Time: it takes me more than one hour to learn and write up a summary.
* I have acquired how to create Amazon S3 bucket store my log files and output data, Launch a fully functional Hadoop cluster using Amazon EMR, define the schema, create a table for sample log data stored in Amazon S3, analyze the data using a HiveQL script and write the results back to Amazon S3. It is interesting to learn.
-[QwikLab: Intro to S3](https://awseducate.qwiklabs.com/focuses/30?parent=catalog)
+## [QwikLab: Intro to S3](https://awseducate.qwiklabs.com/focuses/30?parent=catalog)
Time: it takes 50 minutes.
@@ -25,7 +25,7 @@ Time: it takes 50 minutes.
* The bucket is a really useful service and the versioning feature is quite cool.
-[QwikLab: Intro to Amazon Redshift](https://awseducate.qwiklabs.com/focuses/28?parent=catalog)
+## [QwikLab: Intro to Amazon Redshift](https://awseducate.qwiklabs.com/focuses/28?parent=catalog)
Time: it takes me 60 minutes. In this lab, it covers
@@ -35,19 +35,19 @@ Time: it takes me 60 minutes. In this lab, it covers
* Run queries against data stored in Amazon Redshift: we could use SQL to query the data we need.
-[Video: Short AWS Machine Learning Overview](https://www.youtube.com/watch?v=soG1B4jMl2s)
+## [Video: Short AWS Machine Learning Overview](https://www.youtube.com/watch?v=soG1B4jMl2s)
Time: it takes me 10 minutes
* it talks about the Machine learning on AWS. Machine learning has three layers, framework interfaces for expert, ML platforms for developers and data scientists and application services for machine learning API calls in the application. Amazon Deep Learning AMI is for the frameworks layer and Zillow uses it. Amazon SageMaker is a good for ML platform layer.
-[Overview of AWS SageMaker](https://www.youtube.com/watch?v=ym7NEYEx9x4&index=12&list=RDMWhrLw7YK38)
+## [Overview of AWS SageMaker](https://www.youtube.com/watch?v=ym7NEYEx9x4&index=12&list=RDMWhrLw7YK38)
Time: it takes me 35 minutes.
* The AWS SageMaker has four parts, including the notebook instance, jobs, models and endpoints. Notebook instance is about using algorithms to create model via training jobs. Training jobs are instances to train the model. We create models for hosting from job outputs, or import externally trained models into Amazon SageMaker. Endpoints are for developers to use the SageMaker in production. The tutor elaborate on xgboost, kmeans, scikit . He talks about setting up the training parameters. We can train it on single or multiple instances. Then we import models into hosts. The last step is build endpoint configuration and create endpoint for developers to call.
-[AWS Tutorial: Analyze Big Data with Hadoop](https://aws.amazon.com/getting-started/projects/analyze-big-data/?trk=gs_card)
+## [AWS Tutorial: Analyze Big Data with Hadoop](https://aws.amazon.com/getting-started/projects/analyze-big-data/?trk=gs_card)
it takes me 80 minutes. I followed the following steps to finish the tutorial:
@@ -60,7 +60,7 @@ it takes me 80 minutes. I followed the following steps to finish the tutorial:
terminating your cluster terminates the associated Amazon EC2 instances and stops the accrual of Amazon EMR charges. Amazon EMR preserves metadata information about completed clusters for your reference, at no charge, for two months. The console does not provide a way to delete terminated clusters so that they aren't viewable in the console. Terminated clusters are removed from the cluster when the metadata is removed
There is more information on how to plan and configure clusters in your custom way, set up the security, manage clusters and trouble shoot cluster if it is performing in a wrong way.
-[QwikLab: Intro to Amazon Machine Learning]
+## [QwikLab: Intro to Amazon Machine Learning]
Time : it takes me 75 minutes.
From c337f96484e8261a8859a15a16c5f82854cb7003 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 10 Dec 2018 00:40:21 -0500
Subject: [PATCH 13/64] Update README.md
---
README.md | 46 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 46 insertions(+)
diff --git a/README.md b/README.md
index c3455cd..424bd8f 100644
--- a/README.md
+++ b/README.md
@@ -1,11 +1,57 @@
+Notes Contents
+=================
+
+ * [Big Data and Machine Learning -(Beginner level + Intermediate Level)](#bigDataAndMachineLearning)
+ * [ Video: Hadoop Intro](#hadoopIntro)
+ * [ AWS Tutorial: Analyze Big Data with Hadoop](#Analyze-Big-Data-with-Hadoop)
+ * [3. What is docker](#3-what-is-docker)
+ * [4. Docker Architecture](#4-docker-architecture)
+ * [4.1 Client](#41-client)
+ * [4.2 Docker Host](#42-docker-host)
+ * [4.3 Registry](#43-registry)
+ * [5. Docker Command Line](#5-docker-command-line)
+ * [6. Docker image](#6-docker-image)
+ * [6.1 Image creation from a container](#61-image-creation-from-a-container)
+ * [6.2 Image creation using a Dockerfile](#62-image-creation-using-a-dockerfile)
+ * [docker file](#docker-file)
+ * [7. Docker networking](#7-docker-networking)
+ * [7.1 Network drivers](#71-network-drivers)
+ * [7.2 Bridge networks](#72-bridge-networks)
+ * [Manage a user-defined bridge](#manage-a-user-defined-bridge)
+ * [Connect a container to a user-defined bridge](#connect-a-container-to-a-user-defined-bridge)
+ * [Enable forwarding from Docker containers to the outside world](#enable-forwarding-from-docker-containers-to-the-outside-world)
+ * [7.3 Overlay networks](#73-overlay-networks)
+ * [Create an overlay network](#create-an-overlay-network)
+ * [Create a service](#create-a-service)
+ * [7.4 Access from outside](#74-access-from-outside)
+ * [8. Swarm Mode Introduction for IT Pros](#8-swarm-mode-introduction-for-it-pros)
+ * [8.1 Docker Compose and Docker Swarm Mode](#81-docker-compose-and-docker-swarm-mode)
+ * [8.2 Swarm](#82-swarm)
+ * [8.3 Initialize a new Swarm](#83-initialize-a-new-swarm)
+ * [8.4 Show Swarm Members](#84-show-swarm-members)
+ * [9. Kubernetes](#9-kubernetes)
+ * [Cloud Web Apps](#cloud-web-apps)
+ * [1.Launch a linux VM](#1launch-a-linux-vm)
+ * [1.1 Launch an Amazon EC2 Instance](#11-launch-an-amazon-ec2-instance)
+ * [1.2 Configure your Instance](#12-configure-your-instance)
+ * [1.3 Download key pair to securely access your Linux instance using SSH](#13-download-key-pair-to-securely-access-your-linux-instance-using-ssh)
+ * [1.4 Connect to your Instance](#14-connect-to-your-instance)
+ * [2. Amazon Simple Storage Service(S3)](#2-amazon-simple-storage-services3)
+ * [2.1 Create a bucket](#21-create-a-bucket)
+ * [2.2 Upload an object](#22-upload-an-object)
+ * [2.3 Create a bucket policy](#23-create-a-bucket-policy)
+ * [2.4 Versioning](#24-versioning)
+
## Big Data and Machine Learning (Beginner level + Intermediate Level)
+
## [Video: Hadoop Intro](https://www.youtube.com/watch?v=jKCj4BxGTi8&feature=youtu.be)
Time: it takes 35 minutes to learn it.
* The video tutorial gives basic ideas of Hadoop framework. After 2000, the solution which uses the computation power provided by available computers to process data could not help. In recent years, there is an incredible explosion in the volume of data. IBM reported that 2.5 billion gigabytes of data was generated every day in 2012. 40000 search queries were done on Google every second. Therefore, we need computers with larger memories and faster processors or other more advanced solutions. The idea distributed system is using multiple computers to do the processing work which has much better performance. There are also challenges for this. There are high chances of failure since a distributed system uses multiple computers. There is also limit on bandwidth. Because it is difficult to synchronize data and process, the programming complexity is also high. The solution is Hadoop. Hadoop is a framework that allows for distributed processing of large data sets across clusters of commodity computers using simple programming models. The four key characters of Hadoop are economical, scalable, reliable and flexible. Compared to traditional DBMS, Hadoop distributes the data to multiple systems and later runs the computation wherever the data is located. The Hadoop has an ecosystem which is evolved from its three core components, data processing, resource management and Hadoop distributed file system. It is now comprised of 12 components including Hadoop distributed file system, HBase, scoop, flume, spark, Hadoop MapReduce, Pig, Impala, Hive, Cloudera Search, Oozie, Hue.
+
## [AWS Tutorial: Analyze Big Data with Hadoop](https://aws.amazon.com/getting-started/projects/analyze-big-data/?trk=gs_card)
Time: it takes me more than one hour to learn and write up a summary.
From 69c5cc788e3fd6d48d9cf863e4701956e4444ea5 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 10 Dec 2018 00:46:26 -0500
Subject: [PATCH 14/64] Update README.md
---
README.md | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/README.md b/README.md
index 424bd8f..e1e3ef1 100644
--- a/README.md
+++ b/README.md
@@ -4,11 +4,8 @@ Notes Contents
* [Big Data and Machine Learning -(Beginner level + Intermediate Level)](#bigDataAndMachineLearning)
* [ Video: Hadoop Intro](#hadoopIntro)
* [ AWS Tutorial: Analyze Big Data with Hadoop](#Analyze-Big-Data-with-Hadoop)
- * [3. What is docker](#3-what-is-docker)
+ * [QwikLab: Intro to S3](#Intro-to-S3)
* [4. Docker Architecture](#4-docker-architecture)
- * [4.1 Client](#41-client)
- * [4.2 Docker Host](#42-docker-host)
- * [4.3 Registry](#43-registry)
* [5. Docker Command Line](#5-docker-command-line)
* [6. Docker image](#6-docker-image)
* [6.1 Image creation from a container](#61-image-creation-from-a-container)
@@ -41,6 +38,7 @@ Notes Contents
* [2.2 Upload an object](#22-upload-an-object)
* [2.3 Create a bucket policy](#23-create-a-bucket-policy)
* [2.4 Versioning](#24-versioning)
+
## Big Data and Machine Learning (Beginner level + Intermediate Level)
@@ -58,6 +56,7 @@ Time: it takes me more than one hour to learn and write up a summary.
* I learned how to create an Amazon S3 bucket to store my log files and output data, launch a fully functional Hadoop cluster using Amazon EMR, define the schema and create a table for sample log data stored in Amazon S3, analyze the data using a HiveQL script, and write the results back to Amazon S3. It is interesting to learn.
+
## [QwikLab: Intro to S3](https://awseducate.qwiklabs.com/focuses/30?parent=catalog)
Time: it takes 50 minutes.
From bebb37113017e0f121d834d5e69e29ccb2fa99c7 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 10 Dec 2018 00:47:58 -0500
Subject: [PATCH 15/64] Update README.md
---
README.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index e1e3ef1..7854f0c 100644
--- a/README.md
+++ b/README.md
@@ -49,14 +49,14 @@ Time: it takes 35 minutes to learn it.
* The video tutorial gives basic ideas of the Hadoop framework. After 2000, the approach of processing data with the computation power of a single available computer could no longer keep up. In recent years there has been an incredible explosion in the volume of data: IBM reported that 2.5 billion gigabytes of data were generated every day in 2012, and 40,000 search queries were run on Google every second. Therefore, we need computers with larger memories and faster processors, or other more advanced solutions. The idea of a distributed system is to use multiple computers to do the processing work, which gives much better performance. There are also challenges: there is a high chance of failure since a distributed system uses multiple computers, bandwidth is limited, and because it is difficult to synchronize data and processes, programming complexity is high. The solution is Hadoop. Hadoop is a framework that allows distributed processing of large data sets across clusters of commodity computers using simple programming models. The four key characteristics of Hadoop are economical, scalable, reliable and flexible. Compared to a traditional DBMS, Hadoop distributes the data to multiple systems and later runs the computation wherever the data is located. Hadoop has an ecosystem that evolved from its three core components: data processing, resource management and the Hadoop Distributed File System. It is now comprised of 12 components: HDFS, HBase, Sqoop, Flume, Spark, Hadoop MapReduce, Pig, Impala, Hive, Cloudera Search, Oozie and Hue.
-
+
## [AWS Tutorial: Analyze Big Data with Hadoop](https://aws.amazon.com/getting-started/projects/analyze-big-data/?trk=gs_card)
Time: it takes me more than one hour to learn and write up a summary.
* I learned how to create an Amazon S3 bucket to store my log files and output data, launch a fully functional Hadoop cluster using Amazon EMR, define the schema and create a table for sample log data stored in Amazon S3, analyze the data using a HiveQL script, and write the results back to Amazon S3. It is interesting to learn.
-
+
## [QwikLab: Intro to S3](https://awseducate.qwiklabs.com/focuses/30?parent=catalog)
Time: it takes 50 minutes.
From 5d355a645a471f9cafad78fe972377d2726ec7a8 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 10 Dec 2018 00:50:10 -0500
Subject: [PATCH 16/64] Update README.md
---
README.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index 7854f0c..da491f8 100644
--- a/README.md
+++ b/README.md
@@ -39,10 +39,10 @@ Notes Contents
* [2.3 Create a bucket policy](#23-create-a-bucket-policy)
* [2.4 Versioning](#24-versioning)
-
+
## Big Data and Machine Learning (Beginner level + Intermediate Level)
-
+
## [Video: Hadoop Intro](https://www.youtube.com/watch?v=jKCj4BxGTi8&feature=youtu.be)
Time: it takes 35 minutes to learn it.
From 8a51041a30603d8ff3b9168335c8c13faf0dd524 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 10 Dec 2018 00:51:20 -0500
Subject: [PATCH 17/64] Update README.md
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index da491f8..d061633 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
Notes Contents
=================
- * [Big Data and Machine Learning -(Beginner level + Intermediate Level)](#bigDataAndMachineLearning)
+ [Big Data and Machine Learning -(Beginner level + Intermediate Level)](#bigDataAndMachineLearning)
* [ Video: Hadoop Intro](#hadoopIntro)
* [ AWS Tutorial: Analyze Big Data with Hadoop](#Analyze-Big-Data-with-Hadoop)
* [QwikLab: Intro to S3](#Intro-to-S3)
From 8ccc87e078210ac0a6a56d356f353426f479783b Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 10 Dec 2018 00:54:20 -0500
Subject: [PATCH 18/64] Update README.md
---
README.md | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/README.md b/README.md
index d061633..b486d91 100644
--- a/README.md
+++ b/README.md
@@ -1,15 +1,13 @@
Notes Contents
=================
- [Big Data and Machine Learning -(Beginner level + Intermediate Level)](#bigDataAndMachineLearning)
+ * [Big Data and Machine Learning -(Beginner level + Intermediate Level)](#bigDataAndMachineLearning)
* [ Video: Hadoop Intro](#hadoopIntro)
* [ AWS Tutorial: Analyze Big Data with Hadoop](#Analyze-Big-Data-with-Hadoop)
* [QwikLab: Intro to S3](#Intro-to-S3)
* [4. Docker Architecture](#4-docker-architecture)
* [5. Docker Command Line](#5-docker-command-line)
* [6. Docker image](#6-docker-image)
- * [6.1 Image creation from a container](#61-image-creation-from-a-container)
- * [6.2 Image creation using a Dockerfile](#62-image-creation-using-a-dockerfile)
* [docker file](#docker-file)
* [7. Docker networking](#7-docker-networking)
* [7.1 Network drivers](#71-network-drivers)
From 9f963f97781fee8ef826692c987e339d2a31b931 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 10 Dec 2018 01:01:46 -0500
Subject: [PATCH 19/64] Update README.md
---
README.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index b486d91..cc78513 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@ Notes Contents
=================
* [Big Data and Machine Learning -(Beginner level + Intermediate Level)](#bigDataAndMachineLearning)
- * [ Video: Hadoop Intro](#hadoopIntro)
+ * [ Video: Hadoop Intro](#introduction)
* [ AWS Tutorial: Analyze Big Data with Hadoop](#Analyze-Big-Data-with-Hadoop)
* [QwikLab: Intro to S3](#Intro-to-S3)
* [4. Docker Architecture](#4-docker-architecture)
@@ -40,7 +40,7 @@ Notes Contents
## Big Data and Machine Learning (Beginner level + Intermediate Level)
-
+
## [Video: Hadoop Intro](https://www.youtube.com/watch?v=jKCj4BxGTi8&feature=youtu.be)
Time: it takes 35 minutes to learn it.
From e8ce1881f554d02b94d0fd50cb280d6b89428d0c Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 10 Dec 2018 01:02:48 -0500
Subject: [PATCH 20/64] Update README.md
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index cc78513..ad6159c 100644
--- a/README.md
+++ b/README.md
@@ -37,7 +37,7 @@ Notes Contents
* [2.3 Create a bucket policy](#23-create-a-bucket-policy)
* [2.4 Versioning](#24-versioning)
-
+
## Big Data and Machine Learning (Beginner level + Intermediate Level)
From 3b4c8a5e02e1f5aa0b8b9deecc4f33f194698a11 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 10 Dec 2018 01:35:52 -0500
Subject: [PATCH 21/64] Update README.md
---
README.md | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/README.md b/README.md
index ad6159c..a835239 100644
--- a/README.md
+++ b/README.md
@@ -5,10 +5,9 @@ Notes Contents
* [ Video: Hadoop Intro](#introduction)
* [ AWS Tutorial: Analyze Big Data with Hadoop](#Analyze-Big-Data-with-Hadoop)
* [QwikLab: Intro to S3](#Intro-to-S3)
- * [4. Docker Architecture](#4-docker-architecture)
- * [5. Docker Command Line](#5-docker-command-line)
- * [6. Docker image](#6-docker-image)
- * [docker file](#docker-file)
+ * [QwikLab: Intro to Amazon Redshift](#Intro-to-Amazon-Redshift)
+ * [Video: Short AWS Machine Learning Overview](#Short-AWS-Machine-Learning-Overview)
+ * [Video Tutorial: Overview of AWS SageMaker](#Overview-of-AWS-SageMaker)
* [7. Docker networking](#7-docker-networking)
* [7.1 Network drivers](#71-network-drivers)
* [7.2 Bridge networks](#72-bridge-networks)
@@ -47,14 +46,14 @@ Time: it takes 35 minutes to learn it.
* The video tutorial gives basic ideas of the Hadoop framework. After 2000, the approach of processing data with the computation power of a single available computer could no longer keep up. In recent years there has been an incredible explosion in the volume of data: IBM reported that 2.5 billion gigabytes of data were generated every day in 2012, and 40,000 search queries were run on Google every second. Therefore, we need computers with larger memories and faster processors, or other more advanced solutions. The idea of a distributed system is to use multiple computers to do the processing work, which gives much better performance. There are also challenges: there is a high chance of failure since a distributed system uses multiple computers, bandwidth is limited, and because it is difficult to synchronize data and processes, programming complexity is high. The solution is Hadoop. Hadoop is a framework that allows distributed processing of large data sets across clusters of commodity computers using simple programming models. The four key characteristics of Hadoop are economical, scalable, reliable and flexible. Compared to a traditional DBMS, Hadoop distributes the data to multiple systems and later runs the computation wherever the data is located. Hadoop has an ecosystem that evolved from its three core components: data processing, resource management and the Hadoop Distributed File System. It is now comprised of 12 components: HDFS, HBase, Sqoop, Flume, Spark, Hadoop MapReduce, Pig, Impala, Hive, Cloudera Search, Oozie and Hue.
-
+
## [AWS Tutorial: Analyze Big Data with Hadoop](https://aws.amazon.com/getting-started/projects/analyze-big-data/?trk=gs_card)
Time: it takes me more than one hour to learn and write up a summary.
* I learned how to create an Amazon S3 bucket to store my log files and output data, launch a fully functional Hadoop cluster using Amazon EMR, define the schema and create a table for sample log data stored in Amazon S3, analyze the data using a HiveQL script, and write the results back to Amazon S3. It is interesting to learn.
-
+
## [QwikLab: Intro to S3](https://awseducate.qwiklabs.com/focuses/30?parent=catalog)
Time: it takes 50 minutes.
@@ -68,6 +67,7 @@ Time: it takes 50 minutes.
* The bucket is a really useful service and the versioning feature is quite cool.
+
## [QwikLab: Intro to Amazon Redshift](https://awseducate.qwiklabs.com/focuses/28?parent=catalog)
Time: it takes me 60 minutes. In this lab, it covers
@@ -77,13 +77,14 @@ Time: it takes me 60 minutes. In this lab, it covers
* Load sample data from an S3 bucket into the Amazon Redshift cluster which will hold the data for querying.
* Run queries against data stored in Amazon Redshift: we could use SQL to query the data we need.
-
+
## [Video: Short AWS Machine Learning Overview](https://www.youtube.com/watch?v=soG1B4jMl2s)
Time: it takes me 10 minutes
* It talks about machine learning on AWS. Machine learning has three layers: framework interfaces for experts, ML platforms for developers and data scientists, and application services for machine-learning API calls in applications. The AWS Deep Learning AMI is for the frameworks layer, and Zillow uses it. Amazon SageMaker is a good fit for the ML platform layer.
+
## [Overview of AWS SageMaker](https://www.youtube.com/watch?v=ym7NEYEx9x4&index=12&list=RDMWhrLw7YK38)
Time: it takes me 35 minutes.
From 1295cc4c421238c2603e414ee62f78c30729b853 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 10 Dec 2018 01:50:39 -0500
Subject: [PATCH 22/64] Update README.md
---
README.md | 49 +++++++++++++++++++++++++++----------------------
1 file changed, 27 insertions(+), 22 deletions(-)
diff --git a/README.md b/README.md
index a835239..781f30e 100644
--- a/README.md
+++ b/README.md
@@ -8,22 +8,12 @@ Notes Contents
* [QwikLab: Intro to Amazon Redshift](#Intro-to-Amazon-Redshift)
* [Video: Short AWS Machine Learning Overview](#Short-AWS-Machine-Learning-Overview)
* [Video Tutorial: Overview of AWS SageMaker](#Overview-of-AWS-SageMaker)
- * [7. Docker networking](#7-docker-networking)
- * [7.1 Network drivers](#71-network-drivers)
- * [7.2 Bridge networks](#72-bridge-networks)
- * [Manage a user-defined bridge](#manage-a-user-defined-bridge)
- * [Connect a container to a user-defined bridge](#connect-a-container-to-a-user-defined-bridge)
- * [Enable forwarding from Docker containers to the outside world](#enable-forwarding-from-docker-containers-to-the-outside-world)
- * [7.3 Overlay networks](#73-overlay-networks)
- * [Create an overlay network](#create-an-overlay-network)
- * [Create a service](#create-a-service)
- * [7.4 Access from outside](#74-access-from-outside)
- * [8. Swarm Mode Introduction for IT Pros](#8-swarm-mode-introduction-for-it-pros)
- * [8.1 Docker Compose and Docker Swarm Mode](#81-docker-compose-and-docker-swarm-mode)
- * [8.2 Swarm](#82-swarm)
- * [8.3 Initialize a new Swarm](#83-initialize-a-new-swarm)
- * [8.4 Show Swarm Members](#84-show-swarm-members)
- * [9. Kubernetes](#9-kubernetes)
+ * [AWS Tutorial: Analyze Big Data with Hadoop](#AWS-Tutorial-Analyze-Big-Data-with-Hadoop)
+ * [QwikLab: Intro to Amazon Machine Learning](#QwikLab-Intro-to-Amazon-Machine-Learning)
+ * [AWS Tutorial: Build a Machine Learning Model](#AWS-Tutorial-Build-a-Machine-Learning-Model)
+ * [Video Tutorial: Overview of AWS SageMaker](#VideoTutorialOverviewofAWSSageMaker)
+ * [Build a Serverless Real-Time Data Processing App](#BuildaServerlessReal-TimeDataProcessingApp)
+
* [Cloud Web Apps](#cloud-web-apps)
* [1.Launch a linux VM](#1launch-a-linux-vm)
* [1.1 Launch an Amazon EC2 Instance](#11-launch-an-amazon-ec2-instance)
@@ -84,18 +74,21 @@ Time: it takes me 10 minutes
* It talks about machine learning on AWS. Machine learning has three layers: framework interfaces for experts, ML platforms for developers and data scientists, and application services for machine-learning API calls in applications. The AWS Deep Learning AMI is for the frameworks layer, and Zillow uses it. Amazon SageMaker is a good fit for the ML platform layer.
-
+
## [Overview of AWS SageMaker](https://www.youtube.com/watch?v=ym7NEYEx9x4&index=12&list=RDMWhrLw7YK38)
Time: it takes me 35 minutes.
* The AWS SageMaker has four parts: notebook instances, training jobs, models, and endpoints. A notebook instance is where you use algorithms to create models via training jobs. Training jobs are instances that train the model. We create models for hosting from job outputs, or import externally trained models into Amazon SageMaker. Endpoints are for developers to use SageMaker in production. The tutor elaborates on XGBoost, k-means, and scikit-learn. He talks about setting up the training parameters. We can train on a single instance or on multiple instances. Then we import the models into hosts. The last step is to build an endpoint configuration and create an endpoint for developers to call.
+
## [AWS Tutorial: Analyze Big Data with Hadoop](https://aws.amazon.com/getting-started/projects/analyze-big-data/?trk=gs_card)
-it takes me 80 minutes. I followed the following steps to finish the tutorial:
+Time: it takes me 80 minutes.
-* Step 1: Set Up Prerequisites: you have to have a personal AWS account; create an Amazon S3 Bucket and folder to store the output data from a Hive query; create an Amazon EC2 Key Pair to to connect to the nodes in your cluster over a secure channel using the Secure Shell (SSH) protocol.
+I followed these steps to finish the tutorial:
+
+* 1: Set Up Prerequisites: you have to have a personal AWS account; create an Amazon S3 Bucket and folder to store the output data from a Hive query; create an Amazon EC2 Key Pair to connect to the nodes in your cluster over a secure channel using the Secure Shell (SSH) protocol.
* 2: Launch The Cluster: the user launches a sample Amazon EMR cluster by using Quick Options in the Amazon EMR console, leaving most options at their default values; Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects, such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. Additionally, you can use Amazon EMR to transform and move large amounts of data into and out of other AWS data stores and databases, such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB.
* 3: Allow SSH Connections to the Cluster From Your Client: Security groups act as virtual firewalls to control inbound and outbound traffic to your cluster. The default Amazon EMR-managed security groups associated with cluster instances do not allow inbound SSH connections as a security precaution. To connect to cluster nodes using SSH so that you can use the command line and view web interfaces that are hosted on the cluster, you need to add inbound rules that allow SSH traffic from trusted clients.
* 4: Run a Hive Script to Process Data: The sample data is a series of Amazon CloudFront access log files; The sample script calculates the total number of requests per operating system over a specified time frame. The script uses HiveQL, which is a SQL-like scripting language for data warehousing and analysis
@@ -104,6 +97,7 @@ it takes me 80 minutes. I followed the following steps to finish the tutorial:
terminating your cluster terminates the associated Amazon EC2 instances and stops the accrual of Amazon EMR charges. Amazon EMR preserves metadata information about completed clusters for your reference, at no charge, for two months. The console does not provide a way to delete terminated clusters so that they aren't viewable in the console; terminated clusters are removed from the console when the metadata is removed.
There is more information on how to plan and configure clusters in your custom way, set up security, manage clusters, and troubleshoot a cluster if it is not performing as expected.
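As a rough sketch of what the console's Quick Options do, the same sample cluster can also be launched with boto3; this is my own illustration rather than part of the tutorial, and the log bucket, key pair name and region are placeholders:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # placeholder region

response = emr.run_job_flow(
    Name="hive-sample-cluster",
    ReleaseLabel="emr-5.29.0",
    Applications=[{"Name": "Hadoop"}, {"Name": "Hive"}],
    LogUri="s3://my-example-bucket/emr-logs/",        # placeholder log bucket
    Instances={
        "MasterInstanceType": "m4.large",
        "SlaveInstanceType": "m4.large",
        "InstanceCount": 3,
        "Ec2KeyName": "MyKeyPair",                    # key pair from the prerequisites step
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",   # default instance profile created by the console
    ServiceRole="EMR_DefaultRole",       # default service role created by the console
)
print("Cluster id:", response["JobFlowId"])
```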
+
## [QwikLab: Intro to Amazon Machine Learning]
Time : it takes me 75 minutes.
@@ -115,14 +109,22 @@ The lab tutorial consists of several parts:
* Evaluate an ML model: the Amazon Machine Learning service evaluates the model automatically as part of the model creation process; it takes 70 percent of the data source to train the model and 30 percent to evaluate it.
* Generate predictions from ML model: batch mode and real-time mode are two ways to generate predictions from an ML model; batch mode is asynchronous, while real-time mode returns predictions synchronously with low latency.
-For AWS Tutorial: Build a Machine Learning Model, it takes me 50 minutes. It is about using Amazon ML to Predict Responses to a Marketing Offer:
+
+## [AWS Tutorial: Build a Machine Learning Model]
+
+Time: it takes me 50 minutes. It is about using Amazon ML to Predict Responses to a Marketing Offer:
* Step 1: Prepare Your Data: In machine learning, you typically obtain the data and ensure that it is well formatted before starting the training process; we use customer purchase history to predict whether a customer will subscribe to the new product
* Step 2: Create a Training Datasource using the Amazon S3 service
* Step 3: Create an ML Model: After you've created the training datasource, you use it to create an ML model, train the model, and then evaluate the results
* Step 4: Review the ML Model's Predictive Performance and Set a Score Threshold
* Step 5: Use the ML Model to Generate Predictions
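The same train / evaluate / set-a-threshold workflow can be sketched outside the Amazon ML console with scikit-learn. This is only an illustration of the idea (a 70/30 split, probability scores, and a score threshold), not the service's actual implementation; the file and column names are made up:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical purchase-history data; "subscribed" is the binary target column.
df = pd.read_csv("customer_history.csv")
X, y = df.drop(columns=["subscribed"]), df["subscribed"]

# Amazon ML-style 70/30 split between training and evaluation data.
X_train, X_eval, y_train, y_eval = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Review predictive performance (AUC), then apply a score threshold to get yes/no answers.
scores = model.predict_proba(X_eval)[:, 1]
print("AUC:", roc_auc_score(y_eval, scores))

threshold = 0.5  # raise to favor precision, lower to favor recall
predictions = (scores >= threshold).astype(int)
```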
-For Video Tutorial: Overview of AWS SageMaker, it takes me 40 minutes: The AWS SageMaker has four parts, including the notebook instance, jobs, models and endpoints. Notebook instance is about using algorithms to create model via training jobs. Training jobs are instances to train the model. We create models for hosting from job outputs, or import externally trained models into Amazon SageMaker. Endpoints are for developers to use the SageMaker in production. The tutor elaborate on xgboost, kmeans, scikit . He talks about setting up the training parameters. We can train it on single or multiple instances. Then we import models into hosts. The last step is build endpoint configuration and create endpoint for developers to call
+
+[Video Tutorial: Overview of AWS SageMaker](https://www.youtube.com/watch?v=ym7NEYEx9x4&index=12&list=RDMWhrLw7YK38)
+
+Time: it takes me 40 minutes
+
+The AWS SageMaker has four parts: notebook instances, training jobs, models, and endpoints. A notebook instance is where you use algorithms to create models via training jobs. Training jobs are instances that train the model. We create models for hosting from job outputs, or import externally trained models into Amazon SageMaker. Endpoints are for developers to use SageMaker in production. The tutor elaborates on XGBoost, k-means, and scikit-learn. He talks about setting up the training parameters. We can train on a single instance or on multiple instances. Then we import the models into hosts. The last step is to build an endpoint configuration and create an endpoint for developers to call.
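A minimal sketch of that notebook instance → training job → model → endpoint flow using the SageMaker Python SDK (assuming a recent SDK version; the container image URI, IAM role and S3 paths are placeholders, not values from the video):

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role ARN

# Training job: a built-in algorithm container (e.g. XGBoost) trained on data in S3.
estimator = Estimator(
    image_uri="<xgboost-container-image-uri>",           # placeholder image URI
    role=role,
    instance_count=1,                                     # single- or multi-instance training
    instance_type="ml.m5.xlarge",
    output_path="s3://my-example-bucket/model-artifacts/",
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=100)
estimator.fit({"train": "s3://my-example-bucket/train/"})

# Hosting: build the endpoint configuration and deploy an endpoint for applications to call.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```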
For AWS Tutorial: AWS SageMaker, it takes me 80 minutes.
Step 1: Setting Up
Step 2: Create an Amazon SageMaker Notebook Instance
@@ -130,7 +132,10 @@ Step 3: Train and Deploy a Model
Step 4: Clean up
Step 5: Additional Considerations
-For Build a Serverless Real-Time Data Processing App, it takes 150 minutes,
+
+[Build a Serverless Real-Time Data Processing App](https://aws.amazon.com/getting-started/projects/build-serverless-real-time-data-processing-app-lambda-kinesis-s3-dynamodb-cognito-athena/?trk=gs_card)
+
+Time: it takes 150 minutes.
Cloud web application
For QwikLab: Intro to S3, it takes 50 minutes. In this lab, I learned:
From 9ac3012d7a1747dd8b539ec0396be35fd96d599f Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 10 Dec 2018 09:47:46 -0500
Subject: [PATCH 23/64] Update README.md
---
README.md | 41 +++++++++++++++++++----------------------
1 file changed, 19 insertions(+), 22 deletions(-)
diff --git a/README.md b/README.md
index 781f30e..9c57c0b 100644
--- a/README.md
+++ b/README.md
@@ -14,17 +14,13 @@ Notes Contents
* [Video Tutorial: Overview of AWS SageMaker](#VideoTutorialOverviewofAWSSageMaker)
* [Build a Serverless Real-Time Data Processing App](#BuildaServerlessReal-TimeDataProcessingApp)
- * [Cloud Web Apps](#cloud-web-apps)
- * [1.Launch a linux VM](#1launch-a-linux-vm)
- * [1.1 Launch an Amazon EC2 Instance](#11-launch-an-amazon-ec2-instance)
- * [1.2 Configure your Instance](#12-configure-your-instance)
- * [1.3 Download key pair to securely access your Linux instance using SSH](#13-download-key-pair-to-securely-access-your-linux-instance-using-ssh)
- * [1.4 Connect to your Instance](#14-connect-to-your-instance)
- * [2. Amazon Simple Storage Service(S3)](#2-amazon-simple-storage-services3)
- * [2.1 Create a bucket](#21-create-a-bucket)
- * [2.2 Upload an object](#22-upload-an-object)
- * [2.3 Create a bucket policy](#23-create-a-bucket-policy)
- * [2.4 Versioning](#24-versioning)
+ * [Cloud Web Apps(Beginner level + Intermediate Level)](#cloud-web-apps)
+ * [AWS Tutorial: Launch a VM](#AWS-Tutorial-Launch-a-VM)
+ * [Video: Virtualization](#VideoVirtualization)
+ * [AWS Tutorial: Install a LAMP Web Server on Amazon Linux 2](#AWSTutorialInstallaLAMPWebServeronAmazonLinux2)
+ * [AWS Tutorial: Install a LAMP Web Server on Amazon Linux 2](#AWSTutorialInstallaLAMPWebServeronAmazonLinux2)
+
+
## Big Data and Machine Learning (Beginner level + Intermediate Level)
@@ -146,8 +142,8 @@ For QwikLab: Intro to S3, it takes 50 minutes. In this lab, I learned:
• Use bucket versioning to access a picture with the same name that was uploaded at a different time, by changing the bucket policy
The bucket is a really useful service and the versioning feature is quite cool.
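A small boto3 sketch of the versioning part of the lab (my own illustration; the bucket and file names are placeholders):

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-versioning-bucket"  # placeholder bucket name

# Turn on versioning, then upload the same key twice to create two versions.
s3.put_bucket_versioning(Bucket=bucket, VersioningConfiguration={"Status": "Enabled"})
s3.upload_file("picture-v1.jpg", bucket, "picture.jpg")
s3.upload_file("picture-v2.jpg", bucket, "picture.jpg")

# List every version of the object and fetch the older one by its version id.
versions = s3.list_object_versions(Bucket=bucket, Prefix="picture.jpg")["Versions"]
oldest = sorted(versions, key=lambda v: v["LastModified"])[0]
s3.get_object(Bucket=bucket, Key="picture.jpg", VersionId=oldest["VersionId"])
```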
-
-
+## Cloud Web Apps(Beginner level + Intermediate Level)
+
## [AWS Tutorial: Launch a VM](https://aws.amazon.com/getting-started/tutorials/launch-a-virtual-machine/)
Time Spent: 40 min
@@ -156,7 +152,7 @@ Time Spent: 40 min
* You could use your own personal account to register, and you could also choose to set up an IAM user for better management
### 2. Launch an Amazon EC2 Instance
- ### a. Enter the Amazon EC2 Console
+ a. Enter the Amazon EC2 Console
Open the AWS Management Console, so you can keep this step-by-step guide open. When the screen loads, enter your user name and password to get started. Then type EC2 in the search bar and select Amazon EC2 to open the service console.
b. Launch an Instance
@@ -176,8 +172,10 @@ d. On the next screen you will be asked to choose an existing key pair or create
Select Create a new key pair and give it the name MyKeyPair. Next click the Download Key Pair button.
After you download the MyKeyPair key, you will want to store your key in a secure location. If you lose your key, you won't be able to access your instance. If someone else gets access to your key, they will be able to access your instance.
Windows users: We recommend saving your key pair in your user directory in a sub-directory called .ssh (ex. C:\user\{yourusername}\.ssh\MyKeyPair.pem).
+
Tip: You can't use Windows Explorer to create a folder with a name that begins with a period unless you also end the folder name with a period. After you enter the name (.ssh.), the final period is removed automatically.
Mac/Linux users: We recommend saving your key pair in the .ssh sub-directory from your home directory (ex. ~/.ssh/MyKeyPair.pem).
+
Tip: On MacOS, the key pair is downloaded to your Downloads directory by default. To move the key pair into the .ssh sub-directory, enter the following command in a terminal window: mv ~/Downloads/MyKeyPair.pem ~/.ssh/MyKeyPair.pem
After you have stored your key pair, click Launch Instance to start your Linux instance.
e. Click View Instances on the next screen to view your instances and see the status of the instance you have just started.
@@ -187,10 +185,10 @@ f. In a few minutes, the Instance State column on your instance will change to "
### 4. Connect to your Instance
After launching your instance, it's time to connect to it using SSH.
Mac/Linux user: Select Mac / Linux below to see instructions for opening a terminal window.
-• Windows
+* Windows
-• Mac
-• a. Your Mac or Linux computer most likely includes an SSH client by default. You can check for an SSH client by typing ssh at the command line. If your computer doesn't recognize the command, the OpenSSH project provides a free implementation of the full suite of SSH tools that you can download.
+* Mac
+* a. Your Mac or Linux computer most likely includes an SSH client by default. You can check for an SSH client by typing ssh at the command line. If your computer doesn't recognize the command, the OpenSSH project provides a free implementation of the full suite of SSH tools that you can download.
Mac users: Open a terminal window by searching for "terminal", then press Enter.
Linux users: Open a terminal window.
b. Use the chmod command to make sure your private key file is not publicly viewable by entering the following command to restrict permissions to your private SSH key:
@@ -214,7 +212,8 @@ a. Back on the EC2 Console, select the box next to the instance you created. The
b. You will be asked to confirm your termination - select Yes, Terminate.
Note: This process can take several seconds to complete. Once your instance has been terminated, the Instance State will change to terminated on your EC2 Console.
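The same launch-and-terminate lifecycle can be scripted with boto3 as a rough sketch (not part of the tutorial; the AMI id is a placeholder and MyKeyPair is the key pair created above):

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a single t2.micro instance from a placeholder Amazon Linux AMI id.
reservation = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI id
    InstanceType="t2.micro",
    KeyName="MyKeyPair",
    MinCount=1,
    MaxCount=1,
)
instance_id = reservation["Instances"][0]["InstanceId"]

# Wait until it is running, then terminate it to stop the charges.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
ec2.terminate_instances(InstanceIds=[instance_id])
```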
-[Video: Virtualization] https://www.youtube.com/watch?v=GIdVRB5yNsk
+
+[Video: Virtualization](https://www.youtube.com/watch?v=GIdVRB5yNsk)
Cloud computing is booming, so we need virtualization to meet the demand. Virtualization first emerged in the 1970s and was brought out by IBM, since there were different computers with different systems.
@@ -226,6 +225,7 @@ The Intel realized they have to do the virtualization itself thus the VT technol
In summary, cloud computing companies use software virtualization of the processors and other hardware resources they own to rent them out to customers and return the results the customers want.
+
## [AWS Tutorial: Install a LAMP Web Server on Amazon Linux 2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-lamp-amazon-linux-2.html)
Time: 80 minutes
@@ -658,12 +658,9 @@ This is commonly used to deviate between versions, as well as development vs pro
- Replace keys and values with an empty JSON object
- Save, run and check logs
-
-
[AWS Tutorial: Build a Serverless Web Application](https://aws.amazon.com/getting-started/projects/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/?trk=gs_card)
-> We will build a simple serverless (AWS Lambda) web application that enables users to request unicorn rides from the Wild Rydes fleet.
-The application will present users with an HTML based user interface for indicating the location
+> In this lab I build a simple serverless (AWS Lambda) web application that enables users to request unicorn rides from the Wild Rydes fleet. The application presents users with an HTML-based user interface for indicating the location
where they would like to be picked up and will interface on the backend with a RESTful web service
to submit the request and dispatch a nearby unicorn.
The application will also provide facilities for users to register with the service and log in before requesting rides.
From 06ae80351a19331b589a1d34f8086e68ea038462 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 10 Dec 2018 10:02:56 -0500
Subject: [PATCH 24/64] Update README.md
---
README.md | 22 +++++++++++++---------
1 file changed, 13 insertions(+), 9 deletions(-)
diff --git a/README.md b/README.md
index 9c57c0b..ff313e3 100644
--- a/README.md
+++ b/README.md
@@ -18,8 +18,10 @@ Notes Contents
* [AWS Tutorial: Launch a VM](#AWS-Tutorial-Launch-a-VM)
* [Video: Virtualization](#VideoVirtualization)
* [AWS Tutorial: Install a LAMP Web Server on Amazon Linux 2](#AWSTutorialInstallaLAMPWebServeronAmazonLinux2)
- * [AWS Tutorial: Install a LAMP Web Server on Amazon Linux 2](#AWSTutorialInstallaLAMPWebServeronAmazonLinux2)
-
+ * [AWS Tutorial: Deploy a Scalable Node.js Web App](#AWSTutorialDeployaScalableNodejsWebApp)
+ * [QwikLab: Intro to DynamoDB](#QwikLabIntrotoDynamoDB)
+ * [QwikLab: Intro to AWS Lambda](#QwikLabIntrotoAWSLambda)
+ * [QwikLab: Intro to Amazon API Gateway](#QwikLabIntrotoAmazonAPIGateway)
@@ -331,8 +333,7 @@ yum info package_name
You can verify that httpd is on by running the following command:
[ec2-user ~]$ sudo systemctl is-enabled httpd
7. Add a security rule to allow inbound HTTP (port 80) connections to your instance if you have not already done so. By default, a launch-wizard-N security group was set up for your instance during initialization. This group contains a single rule to allow SSH connections.
-8. Test your web server. In a web browser, type the public DNS address (or the public IP address) of your instance. If there is no content in /var/www/html, you should see the Apache test page. You can get the public DNS for your instance using the Amazon EC2 console (check the Public DNS column; if this column is hidden, chooseShow/Hide Columns (the gear-shaped icon) and choose Public DNS).
-
+8. Test your web server. In a web browser, type the public DNS address (or the public IP address) of your instance. If there is no content in /var/www/html, you should see the Apache test page. You can get the public DNS for your instance using the Amazon EC2 console (check the Public DNS column; if this column is hidden, choose Show/Hide Columns (the gear-shaped icon) and choose Public DNS).
Apache httpd serves files that are kept in a directory called the Apache document root. The Amazon Linux Apache document root is /var/www/html, which by default is owned by root.
@@ -428,7 +429,9 @@ If the httpd process is not running, repeat the steps described in To prepare th
• Check the firewall configuration
If you are unable to see the Apache test page, check that the security group you are using contains a rule to allow HTTP (port 80) traffic. For information about adding an HTTP rule to your security group, see Adding Rules to a Security Group.
-* QwikLab: Intro to DynamoDB: https://awseducate.qwiklabs.com/focuses/23?parent=catalog
+
+[QwikLab: Intro to DynamoDB](https://awseducate.qwiklabs.com/focuses/23?parent=catalog)
+
Time spent: 30 min
Introduction: Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed database and supports both document and key-value data models. This lab covers creating a table in Amazon DynamoDB to store information about a music library, executing some queries, and finally deleting the table.
@@ -447,8 +450,8 @@ I learned that there are two ways to query a DynamoDB table, one is Query and an
Task 5: Delete the table.
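A boto3 sketch of the same table lifecycle (my own illustration of the lab tasks; the Music table and its attributes are assumptions):

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")

# Tasks 1-2: create a table with Artist as the partition key and SongTitle as the sort key.
table = dynamodb.create_table(
    TableName="Music",
    KeySchema=[
        {"AttributeName": "Artist", "KeyType": "HASH"},
        {"AttributeName": "SongTitle", "KeyType": "RANGE"},
    ],
    AttributeDefinitions=[
        {"AttributeName": "Artist", "AttributeType": "S"},
        {"AttributeName": "SongTitle", "AttributeType": "S"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

# Tasks 3-4: insert an item, then Query by partition key (a Scan would read the whole table).
table.put_item(Item={"Artist": "No One You Know", "SongTitle": "Call Me Today"})
result = table.query(KeyConditionExpression=Key("Artist").eq("No One You Know"))
print(result["Items"])

# Task 5: delete the table.
table.delete()
```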
-fengexian
+
[AWS Tutorial: Deploy a Scalable Node.js Web App](https://aws.amazon.com/getting-started/projects/deploy-nodejs-web-app/?trk=gs_card)
add image
@@ -522,7 +525,7 @@ the table created by this configuration file will be deleted
- Choose __Actions__, and then choose __Terminate Environment__
- Delete DynamoDB table __nodejs-tutorial__
-
+
[QwikLab: Intro to AWS Lambda](https://awseducate.qwiklabs.com/focuses/36?parent=catalog)
> AWS Lambda is a compute service that runs your code in response to events and automatically manages the compute resources for you, making it easy to build applications that respond quickly to new information.
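A minimal sketch of the kind of handler this lab wires up, assuming an S3 put event as the trigger (the output bucket name is a placeholder and the real thumbnailing code is omitted):

```python
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Each record describes one object that was put into the source bucket.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"Processing s3://{bucket}/{key}")  # shows up in CloudWatch Logs

        # Placeholder "processing": copy the object into an output bucket; the real
        # lab would generate a thumbnail here instead.
        s3.copy_object(
            Bucket="my-example-output-bucket",          # placeholder output bucket
            Key=f"thumbnails/{key}",
            CopySource={"Bucket": bucket, "Key": key},
        )
    return {"processed": len(event["Records"])}
```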
@@ -573,8 +576,8 @@ thumbnail in output bucket
- Dead Letter Errors: Failures when sending messages to the Dead Letter Queue.
- __Amazon CloudWatch Logs__ have detailed log messages in stream
-
-- [QwikLab: Intro to Amazon API Gateway](https://awseducate.qwiklabs.com/focuses/21?parent=catalog)
+
+[QwikLab: Intro to Amazon API Gateway](https://awseducate.qwiklabs.com/focuses/21?parent=catalog)
> API Gateway is a managed service provided by AWS that makes creating, deploying and maintaining APIs easy.
The lab creates a Lambda function and triggers it by accessing the API Gateway endpoint URL.
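Once the API is deployed, triggering the Lambda function is just an HTTP call to the stage's invoke URL; a quick sketch (the URL is a placeholder):

```python
import requests

# Placeholder invoke URL of the deployed stage; the real one comes from the API Gateway console.
url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/hello"

response = requests.get(url, timeout=10)
print(response.status_code, response.json())
```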
@@ -658,6 +661,7 @@ This is commonly used to deviate between versions, as well as development vs pro
- Replace keys and values with an empty JSON object
- Save, run and check logs
+
[AWS Tutorial: Build a Serverless Web Application](https://aws.amazon.com/getting-started/projects/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/?trk=gs_card)
> In this lab I build a simple serverless (AWS Lambda) web application that enables users to request unicorn rides from the Wild Rydes fleet. The application presents users with an HTML-based user interface for indicating the location
From 6d4217d34277b2168d3fca631f21b01a0f5e3364 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 10 Dec 2018 10:18:38 -0500
Subject: [PATCH 25/64] Update README.md
---
README.md | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/README.md b/README.md
index ff313e3..7b667f0 100644
--- a/README.md
+++ b/README.md
@@ -22,7 +22,9 @@ Notes Contents
* [QwikLab: Intro to DynamoDB](#QwikLabIntrotoDynamoDB)
* [QwikLab: Intro to AWS Lambda](#QwikLabIntrotoAWSLambda)
* [QwikLab: Intro to Amazon API Gateway](#QwikLabIntrotoAmazonAPIGateway)
-
+ * [AWS Tutorial: Build a Serverless Web Application](#AWSTutorialBuildaServerlessWebApplication)
+ * [AWS Tutorial: Build a Modern Web Application](#AWSTutorialBuildaModernWebApplication)
+
## Big Data and Machine Learning (Beginner level + Intermediate Level)
@@ -786,16 +788,13 @@ It will be secured using the Amazon Cognito user pool you created in the previou
- Login and request a unicorn pickup on white house south lawn :)

-
+
+[AWS Tutorial: Build a Modern Web Application] (https://aws.amazon.com/getting-started/projects/build-modern-app-fargate-lambda-dynamodb-python/?trk=gs_card)
> Because the web application is built with a rather complex architecture, the CI/CD configuration
is not included; please refer to the Module 2 tutorial. This article mainly focuses on implementing the features of the app
with AWS-CLI commands.
-## Official Links
-
-[AWS Tutorial: Build a Modern Web Application] (https://aws.amazon.com/getting-started/projects/build-modern-app-fargate-lambda-dynamodb-python/?trk=gs_card)
-
## Application Architecture
add image
From 08960580e83020e8b2ad28097a8c3dbb181b1897 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 10 Dec 2018 10:20:07 -0500
Subject: [PATCH 26/64] Update README.md
---
README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 7b667f0..0d5f2d6 100644
--- a/README.md
+++ b/README.md
@@ -789,7 +789,7 @@ It will be secured using the Amazon Cognito user pool you created in the previou

-[AWS Tutorial: Build a Modern Web Application] (https://aws.amazon.com/getting-started/projects/build-modern-app-fargate-lambda-dynamodb-python/?trk=gs_card)
+[AWS Tutorial: Build a Modern Web Application](https://aws.amazon.com/getting-started/projects/build-modern-app-fargate-lambda-dynamodb-python/?trk=gs_card)
> Because the web application is built with a rather complex architecture, the CI/CD configuration
is not included; please refer to the Module 2 tutorial. This article mainly focuses on implementing the features of the app
From 1ff89cddd5202c5272ce009876fd7b12926c045d Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 10 Dec 2018 10:26:24 -0500
Subject: [PATCH 27/64] Update README.md
---
README.md | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/README.md b/README.md
index 0d5f2d6..7582ed3 100644
--- a/README.md
+++ b/README.md
@@ -98,7 +98,7 @@ terminating your cluster terminates the associated Amazon EC2 instances and stop
There is more information on how to plan and configure clusters in your custom way, set up security, manage clusters, and troubleshoot a cluster if it is not performing as expected.
-## [QwikLab: Intro to Amazon Machine Learning]
+## [QwikLab: Intro to Amazon Machine Learning](https://awseducate.qwiklabs.com/focuses/27?parent=catalog)
Time : it takes me 75 minutes.
@@ -109,8 +109,7 @@ The lab tutorial consists of several parts:
* Evaluate an ML model: the Amazon Machine Learning service evaluate the model automatically as part of the model creation process; it takes 70 percent of the data source to train the model and 30 percent to evaluate it.
* Generate predictions from ML model: batch mode and real-time mode are two ways to generate predictions from ML model; batch mode is asynchronous while the real-time mode is real time.
-
-## [AWS Tutorial: Build a Machine Learning Model]
+## [AWS Tutorial: Build a Machine Learning Model](https://aws.amazon.com/getting-started/projects/build-machine-learning-model/?trk=gs_card)
Time: it takes me 50 minutes. It is about using Amazon ML to Predict Responses to a Marketing Offer:
* Step 1: Prepare Your Data: In machine learning, you typically obtain the data and ensure that it is well formatted before starting the training process; we use customer purchase history to predict if this customer will subscribe to my new product
@@ -120,7 +119,7 @@ Time: it takes me 50 minutes. It is about using Amazon ML to Predict Responses t
* Step 5: Use the ML Model to Generate Predictions
-[Video Tutorial: Overview of AWS SageMaker](https://www.youtube.com/watch?v=ym7NEYEx9x4&index=12&list=RDMWhrLw7YK38)
+## [Video Tutorial: Overview of AWS SageMaker](https://www.youtube.com/watch?v=ym7NEYEx9x4&index=12&list=RDMWhrLw7YK38)
Time:it takes me 40 minutes
@@ -133,7 +132,7 @@ Step 4: Clean up
Step 5: Additional Considerations
-[Build a Serverless Real-Time Data Processing App](https://aws.amazon.com/getting-started/projects/build-serverless-real-time-data-processing-app-lambda-kinesis-s3-dynamodb-cognito-athena/?trk=gs_card)
+## [Build a Serverless Real-Time Data Processing App](https://aws.amazon.com/getting-started/projects/build-serverless-real-time-data-processing-app-lambda-kinesis-s3-dynamodb-cognito-athena/?trk=gs_card)
Time: it takes 150 minutes,
From e893f70e94f5325a6cf49a0c9cbd13e12d63fa77 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 10 Dec 2018 10:36:56 -0500
Subject: [PATCH 28/64] Update README.md
---
README.md | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/README.md b/README.md
index 7582ed3..79fb3bf 100644
--- a/README.md
+++ b/README.md
@@ -807,7 +807,7 @@ where those records will be processed by serverless __AWS Lambda__ functions and
## Learning Notes
-### [Module 1: IDE Setup and Static Website Hosting](https://github.com/aws-samples/aws-modern-application-workshop/tree/python/module-1)
+### Module 1: IDE Setup and Static Website Hosting
- AWS Cloud9 IDE ships with a _t2.micro_ EC2 instance for free tier, the whole environment resembles _CodeAnywhere_ we
have used for project 1
@@ -838,8 +838,7 @@ have used for project 1
- Visit static website [s3 index](http://mythical-bucket-warren.s3-website-us-east-1.amazonaws.com/)
-### [Module 2: Creating a Service with AWS Fargate](https://github.com/aws-samples/aws-modern-application-workshop/tree/python/module-2)
-
+### Module 2: Creating a Service with AWS Fargate
AWS Fargate is a deployment option in Amazon ECS that allows you to deploy containers without having to manage any clusters or servers.
For our Mythical Mysfits backend, we will use Python and create a Flask app in a Docker container behind a Network Load Balancer.
These will form the microservice backend for the frontend website to integrate with.
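A minimal sketch of what that Flask service could look like before it is containerized and put behind the Network Load Balancer (my own illustration; the route and sample data are placeholders):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Placeholder in-memory data; module 3 replaces this with DynamoDB.
MYSFITS = [{"name": "Evangeline", "species": "Chimera"}]

@app.route("/mysfits")
def list_mysfits():
    return jsonify({"mysfits": MYSFITS})

if __name__ == "__main__":
    # Listen on all interfaces so the container port can be mapped by ECS/Fargate.
    app.run(host="0.0.0.0", port=8080)
```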
@@ -934,7 +933,7 @@ load balancer to retrieve data

-### [Module 3 - Adding a Data Tier with Amazon DynamoDB](https://github.com/aws-samples/aws-modern-application-workshop/tree/python/module-3)
+### Module 3 - Adding a Data Tier with Amazon DynamoDB
Rather than have all of the Mysfits be stored in a static JSON file,
we will store them in a database to make the website more extensible and scalable in the future.
@@ -958,7 +957,7 @@ aws dynamodb batch-write-item
- Visit website [s3 index](http://mythical-bucket-warren.s3-website-us-east-1.amazonaws.com/) again, website now displays
data from DynamoDB.
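A rough boto3 equivalent of that batch-write-item step (illustration only; the table name, file name and item shape are assumptions):

```python
import json
import boto3

table = boto3.resource("dynamodb").Table("MysfitsTable")  # placeholder table name

# Load the former static JSON file and bulk-insert its items; batch_writer batches the writes.
with open("mysfits.json") as f:
    mysfits = json.load(f)["mysfits"]

with table.batch_writer() as batch:
    for mysfit in mysfits:
        batch.put_item(Item=mysfit)
```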
-### [Module 4: Adding User and API features with Amazon API Gateway and AWS Cognito](https://github.com/aws-samples/aws-modern-application-workshop/tree/python/module-4)
+### Module 4: Adding User and API features with Amazon API Gateway and AWS Cognito
To make sure that only registered users are authorized to like or adopt mysfits on the website,
we will deploy a REST API with Amazon API Gateway to sit in front of our NLB.
@@ -998,7 +997,7 @@ we will deploy an REST API with Amazon API Gateway to sit in front of our NLB.
Switch API endpoint to API Gateway from NLB, see [API gateway health check](https://jigpafa4ti.execute-api.us-east-1.amazonaws.com/prod/mysfits)
-### [Module 5: Capturing User Behavior](https://github.com/aws-samples/aws-modern-application-workshop/tree/python/module-5)
+### Module 5: Capturing User Behavior
To help us gather more insight into user activity,
we will implement the ability for the website frontend to submit a tiny request,
each time a mysfit profile is clicked by a user,
From 08ce4e3c3e3c41cebf7f20d6dc70e05f9c9d1b39 Mon Sep 17 00:00:00 2001
From: gowarrior <34692832+gowarrior@users.noreply.github.com>
Date: Mon, 10 Dec 2018 10:55:14 -0500
Subject: [PATCH 29/64] Create Technical Report.md
---
Technical Report.md | 4 ++++
1 file changed, 4 insertions(+)
create mode 100644 Technical Report.md
diff --git a/Technical Report.md b/Technical Report.md
new file mode 100644
index 0000000..cb8bd21
--- /dev/null
+++ b/Technical Report.md
@@ -0,0 +1,4 @@
+