Commit dac59d2

Added Pranay's updates.
1 parent 8e1f335 commit dac59d2

File tree

3 files changed: +42, -13 lines


content/learning-paths/servers-and-cloud-computing/sentiment-analysis-eks/cluster-monitoring.md

Lines changed: 6 additions & 2 deletions
@@ -12,9 +12,13 @@ layout: learningpathall
 
 * Grafana is a visualization and analytics tool. It integrates with data sources from Prometheus to create interactive dashboards to monitor and analyze Kubernetes metrics.
 
+{{% notice Note %}}
+The Terraform script executed in the previous step automatically installs Prometheus and Grafana in the EKS cluster. However, if you want more flexibility over which versions of each are installed, follow the instructions below.
+{{% /notice %}}
+
 ## Install Prometheus on your EKS cluster
 
-You can use Helm to install prometheus on the Kubernetes cluster.
+You can use Helm to install Prometheus on the Kubernetes cluster.
 
 Follow the [Helm documentation](https://helm.sh/docs/intro/install/) to install it on your computer.

@@ -105,7 +109,7 @@ kubectl get pods -n grafana
 
 Log in to the Grafana dashboard using the LoadBalancer IP and click on **Dashboards** in the left navigation pane.
 
-Locate a `Kubernetes/Compute Resources/Node` dashboard and click on it.
+Locate a `Kubernetes/Compute Resources/Node (Pods)` dashboard and click on it.
 
 You should see a dashboard like the one below for your Kubernetes cluster:

content/learning-paths/servers-and-cloud-computing/sentiment-analysis-eks/elasticsearch-and-kibana.md

Lines changed: 4 additions & 4 deletions
@@ -1,6 +1,6 @@
 ---
 title: Monitoring sentiment with Elasticsearch and Kibana
-weight: 4
+weight: 3
 
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
@@ -65,12 +65,12 @@ docker compose up
 ```
 If you do not have the `docker compose` plugin already installed, you can install it through the following commands:
 
-```Note
+{{% notice Note %}}
 sudo apt-get update
 sudo apt-get install docker-compose-plugin
-```
+{{% /notice %}}
 
-After the dashboard is set up, use the public IP of your server on port 5601 to access the Kibana dashboard. See Figure 2.
+After the dashboard is set up, use the public IP of your server on port `5601` to access the Kibana dashboard. See Figure 2.
 
 ![kibana #center](_images/kibana.png "Figure 2: Kibana Dashboard Setup.")

content/learning-paths/servers-and-cloud-computing/sentiment-analysis-eks/sentiment-analysis.md

Lines changed: 32 additions & 7 deletions
@@ -1,6 +1,6 @@
 ---
 title: Set up Sentiment Analysis with Amazon EKS
-weight: 3
+weight: 4
 
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
@@ -147,9 +147,9 @@ The following commands run the application with two executors, each with 12 cores
 Before executing the `spark-submit` command, set the following variables:
 
 ```console
-export MASTER_ADDRESS=<K8S_MASTER_ADDRESS>
+export K8S_API_SERVER_ADDRESS=<K8S_API_SERVER_ENDPOINT>
 export ES_ADDRESS=<IP_ADDRESS_OF_ELASTICS_SEARCH>
-export CHECKPOINT_BUCKET=<BUCKET_NAME>
+export CHECKPOINT_BUCKET=<S3_BUCKET_NAME>
 export ECR_ADDRESS=<ECR_REGISTERY_ADDRESS>
 ```

@@ -158,14 +158,14 @@ Execute the `spark-submit` command:
 ```console
 bin/spark-submit \
 --class bigdata.SentimentAnalysis \
---master k8s://$MASTER_ADDRESS:443 \
+--master k8s://$K8S_API_SERVER_ADDRESS:443 \
 --deploy-mode cluster \
 --conf spark.executor.instances=2 \
 --conf spark.kubernetes.container.image=$ECR_ADDRESS \
 --conf spark.kubernetes.driver.pod.name="spark-twitter" \
 --conf spark.kubernetes.namespace=default \
 --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
---conf spark.driver.extraJavaOptions="-DES_NODES=4$ES_ADDRESS -DCHECKPOINT_LOCATION=s3a://$CHECKPOINT_BUCKET/checkpoints/" \
+--conf spark.driver.extraJavaOptions="-DES_NODES=$ES_ADDRESS -DCHECKPOINT_LOCATION=s3a://$CHECKPOINT_BUCKET/checkpoints/" \
 --conf spark.executor.extraJavaOptions="-DES_NODES=$ES_ADDRESS -DCHECKPOINT_LOCATION=s3a://$CHECKPOINT_BUCKET/checkpoints/" \
 --conf spark.executor.cores=12 \
 --conf spark.driver.cores=12 \
@@ -187,14 +187,19 @@ spark-twitter 1/1 Running 0 12m
 
 ## X Sentiment Analysis
 
-Create a twitter(X) [developer account](https://developer.x.com/en/docs/x-api/getting-started/getting-access-to-the-x-api) and create a `bearer token`.
+Create a Twitter (X) [developer account](https://developer.x.com/en/docs/x-api/getting-started/getting-access-to-the-x-api) and download the `bearer token`.
 
-Use the following commands to set the token and fetch the posts:
+Use the following commands to set the bearer token and fetch the posts:
 
 ```console
 export BEARER_TOKEN=<BEARER_TOKEN_FROM_X>
 python3 scripts/xapi_tweets.py
 ```
+{{% notice Note %}}
+If you run into dependency issues, you might need to install the following Python packages:
+* pip3 install requests
+* pip3 install boto3
+{{% /notice %}}
 
 You can modify the script `xapi_tweets.py` and use your own keywords.

@@ -204,3 +209,23 @@ Here is the code which includes some sample keywords:
 query_params = {'query': "(#onArm OR @Arm OR #Arm OR #GenAI) -is:retweet lang:en",
 'tweet.fields': 'lang'}
 ```
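The sample query above can be sanity-checked offline before spending API quota. A minimal sketch, assuming a hypothetical `build_query_params` helper (the function name is illustrative and not part of `xapi_tweets.py`):

```python
# Illustrative sketch only: build X API search parameters like the
# query_params shown above. The helper name is an assumption, not
# taken from xapi_tweets.py.
def build_query_params(keywords):
    # Join keywords with OR, drop retweets, and keep English posts only.
    query = "(" + " OR ".join(keywords) + ") -is:retweet lang:en"
    return {"query": query, "tweet.fields": "lang"}

params = build_query_params(["#onArm", "@Arm", "#Arm", "#GenAI"])
print(params["query"])
# (#onArm OR @Arm OR #Arm OR #GenAI) -is:retweet lang:en
```

Swapping in your own keyword list here mirrors the "use your own keywords" step without touching the script itself.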
+
+Use the following command to send the processed tweets to Elasticsearch:
+
+```console
+python3 csv_to_kinesis.py
+```
+
+Navigate to the Kibana dashboard using the following URL and analyze the tweets:
+
+```console
+http://<IP_Address_of_ES_and_Kibana>:5601
+```
+
+## Environment Clean-up
+
+Following this Learning Path deploys many artifacts in your cloud account. Remember to destroy the resources after you have finished. Use the following command to clean up the resources:
+
+```console
+terraform destroy
+```
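The `csv_to_kinesis.py` step added above presumably forwards CSV rows into a Kinesis stream for the Elasticsearch pipeline. A hedged sketch of how one record might be shaped for boto3's `put_record` call; the stream name, field names, and helper are assumptions, not taken from the actual script:

```python
import json

# Illustrative sketch only: shape one CSV row as the arguments that
# boto3's kinesis put_record expects. Stream name, field names, and
# the helper itself are assumptions, not from csv_to_kinesis.py.
def make_kinesis_record(row, stream_name="tweets-stream"):
    return {
        "StreamName": stream_name,
        "Data": json.dumps(row).encode("utf-8"),   # Kinesis Data is bytes
        "PartitionKey": str(row["id"]),            # shards by tweet id
    }

record = make_kinesis_record({"id": "1", "text": "Arm servers are fast"})
# A real sender would then call: boto3.client("kinesis").put_record(**record)
```

Keeping the serialization in a small helper like this makes the row format easy to test without AWS credentials.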
