
Commit cf3f990

Merge branch 'rhds' into 6.8
2 parents: 588d5ec + 6bfc481

15 files changed: +166 −191 lines

documentation/antora.yml

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 name: inner-loop
-title: OpenShift Inner Loop Workshop
-version: "6.8"
+title: OpenShift Inner Loop Workshop for Developer Sandbox
+version: "rhds"
 nav:
 - modules/ROOT/nav.adoc

4 binary image files changed (168 KB, 136 KB, 164 KB, 691 KB)

documentation/modules/ROOT/nav.adoc

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 * xref:introduction.adoc[1. Introduction]
-* xref:developer-workspace.adoc[2. Get your Developer Workspace]
+* xref:developer-workspace.adoc[2. Get your Developer Sandbox and Workspace]
 * xref:inventory-quarkus.adoc[3. Create Inventory Service with Quarkus]
 * xref:catalog-spring-boot.adoc[4. Create Catalog Service with Spring Boot]
 * xref:gateway-dotnet.adoc[5. Create Gateway Service with .NET]

documentation/modules/ROOT/pages/app-config.adoc

Lines changed: 16 additions & 20 deletions
@@ -1,7 +1,5 @@
 :markup-in-source: verbatim,attributes,quotes
-:CHE_URL: http://devspaces.%APPS_HOSTNAME_SUFFIX%
-:USER_ID: %USER_ID%
-:OPENSHIFT_CONSOLE_URL: https://console-openshift-console.%APPS_HOSTNAME_SUFFIX%/topology/ns/my-project{USER_ID}
+:PROJECT: %PROJECT%

 = Externalize Application Configuration
 :navtitle: Externalize Application Configuration
@@ -64,7 +62,7 @@ making it simple to re-create complex deployments by just deploying a single tem
 be parameterized to get input for fields like service names and generate values for fields like passwords.
 ====

-In the {OPENSHIFT_CONSOLE_URL}[OpenShift Web Console^, role='params-link'], `*click on the 'Home' menu, and choose 'Software Catalog'*`
+In the OpenShift Web Console, `*click on the 'Home' menu, and choose 'Software Catalog'*`
 and then select `*Databases*` within the catalog.

 image::openshift-add-database.png[OpenShift - Add database, 800]
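
The catalog entries used in this step are backed by OpenShift Templates. As an aside (not part of the commit), the same library can be browsed from the terminal; a sketch assuming the cluster ships the default template set in the openshift namespace:

    # list the database templates backing the Software Catalog entries
    oc get templates -n openshift | grep -i mariadb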
@@ -80,7 +78,7 @@ Then, enter the following information:
 |Value

 |Namespace*
-|my-project{USER_ID}
+|_user_-dev

 |Memory Limit*
 |512Mi
@@ -124,7 +122,7 @@ Then, enter the following information:
 |Value

 |Namespace*
-|my-project{USER_ID}
+|_user_-dev

 |Memory Limit*
 |512Mi
@@ -159,11 +157,11 @@ Now you can move on to configure the Inventory and Catalog service to use these

 By default, due to security reasons, **containers are not allowed to snoop around OpenShift clusters and discover objects**. Security comes first and discovery is a privilege that needs to be granted to containers in each project.

-Since you do want our applications to discover the config maps inside the **my-project{USER_ID}** project, you need `*to grant permission to the Service Account to access the OpenShift REST API*` and find the config maps.
+Since you do want your applications to discover the config maps inside your project, you need `*to grant permission to the Service Account to access the OpenShift REST API*` and find the config maps.

 [source,shell,subs="{markup-in-source}",role=copypaste]
 ----
-oc policy add-role-to-user view -n my-project{USER_ID} -z default
+oc policy add-role-to-user view -z default
 ----

 [#externalize_quarkus_configuration]
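
A quick way to confirm the grant took effect, as a sketch outside the diff; it assumes your oc session is logged in to the workshop project:

    # should print "yes" once the view role is bound to the default service account
    oc auth can-i list configmaps --as=system:serviceaccount:$(oc project -q):default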
@@ -178,7 +176,7 @@ In Quarkus, Driver is a build time property and cannot be overridden. So as you
 technology, you need to change the 'quarkus.datasource.driver' parameter
 in **/projects/workshop/labs/inventory-quarkus/src/main/resources/application.properties** and rebuild the application.

-In your {CHE_URL}[Workspace^, role='params-link'], `*edit the '/projects/workshop/labs/inventory-quarkus/pom.xml' file and add the
+In your Workspace, `*edit the '/projects/workshop/labs/inventory-quarkus/pom.xml' file and add the
 'JDBC Driver - MariaDB' dependency*`

 [source,xml,subs="{markup-in-source}",role=copypaste]
@@ -218,7 +216,7 @@ parameters unchanged. They will be overridden later.

 Now, let's create the Quarkus configuration content using the database credentials.

-In the {OPENSHIFT_CONSOLE_URL}[OpenShift Web Console^, role='params-link'], from the `*Workloads*` menu
+In the OpenShift Web Console, from the `*Workloads*` menu,
 `*click on 'Config Maps' then click on the 'Create Config Map' button*`.

 image::openshift-create-configmap.png[Che - OpenShift Create Config Map, 900]
@@ -231,13 +229,12 @@ apiVersion: v1
 kind: ConfigMap
 metadata:
   name: inventory
-  namespace: my-project{USER_ID}
   labels:
     app: coolstore
     app.kubernetes.io/instance: inventory
 data:
   application.properties: |-
-    quarkus.datasource.jdbc.url=jdbc:mariadb://inventory-mariadb.my-project{USER_ID}.svc:3306/inventorydb
+    quarkus.datasource.jdbc.url=jdbc:mariadb://inventory-mariadb:3306/inventorydb
     quarkus.datasource.username=inventory
     quarkus.datasource.password=inventory
 ----
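
The same config map can be created from the terminal instead of the console. A sketch of my own, using only standard oc commands and the values from the YAML above:

    # write the properties to a temp file, then create and label the config map
    cat <<'EOF' > /tmp/application.properties
    quarkus.datasource.jdbc.url=jdbc:mariadb://inventory-mariadb:3306/inventorydb
    quarkus.datasource.username=inventory
    quarkus.datasource.password=inventory
    EOF
    oc create configmap inventory --from-file=/tmp/application.properties
    oc label configmap inventory app=coolstore app.kubernetes.io/instance=inventory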
@@ -250,7 +247,7 @@ Wait till the build is complete then, `*Delete the Inventory Pod*` to make it st

 [source,shell,subs="{markup-in-source}",role=copypaste]
 ----
-oc delete pod -l component=inventory -n my-project{USER_ID}
+oc delete pod -l component=inventory
 ----

 Now **verify that the config map is in fact injected into the container by checking if the seed data is
@@ -260,7 +257,7 @@ loaded into the database**.

 [source,shell,subs="{markup-in-source}",role=copypaste]
 ----
-oc rsh -n my-project{USER_ID} dc/inventory-mariadb
+oc rsh dc/inventory-mariadb
 ----

 Once connected to the MariaDB container, `*run the following*`:
@@ -321,7 +318,7 @@ configuration using an alternative **application.properties** backed by a config

 Let's create the Spring Boot configuration content using the database credentials and create the Config Map.

-In the {OPENSHIFT_CONSOLE_URL}[OpenShift Web Console^, role='params-link'], from the `*Workloads*` menu,
+In the OpenShift Web Console, from the `*Workloads*` menu,
 `*click on 'Config Maps' then click on the 'Create Config Map' button*`.

 image::openshift-create-configmap.png[Che - OpenShift Create Config Map, 900]
@@ -334,13 +331,12 @@ apiVersion: v1
 kind: ConfigMap
 metadata:
   name: catalog
-  namespace: my-project{USER_ID}
   labels:
     app: coolstore
     app.kubernetes.io/instance: catalog
 data:
   application.properties: |-
-    spring.datasource.url=jdbc:postgresql://catalog-postgresql.my-project%USER_ID%.svc:5432/catalogdb
+    spring.datasource.url=jdbc:postgresql://catalog-postgresql:5432/catalogdb
     spring.datasource.username=catalog
     spring.datasource.password=catalog
     spring.datasource.driver-class-name=org.postgresql.Driver
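
To double-check what the running pod will see, the key can be read back out of the config map. A sketch; the only subtlety is escaping the dot in the key name for jsonpath:

    # print the application.properties key stored in the catalog config map
    oc get configmap catalog -o jsonpath='{.data.application\.properties}'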
@@ -358,15 +354,15 @@ if enabled, triggers hot reloading of beans or Spring context when changes are d

 [source,shell,subs="{markup-in-source}",role=copypaste]
 ----
-oc delete pod -l component=catalog -n my-project{USER_ID}
+oc delete pod -l component=catalog
 ----

 When the Catalog container is *ready* (wait at least 30 seconds), verify that the PostgreSQL database is being
 used. Check the Catalog pod logs repeatedly:

 [source,shell,subs="{markup-in-source}",role=copypaste]
 ----
-oc logs deployment/catalog-coolstore -n my-project{USER_ID} | grep hibernate.dialect
+oc logs deployment/catalog-coolstore | grep hibernate.dialect
 ----

 You should have the following output:
@@ -380,7 +376,7 @@ You can also connect to the Catalog PostgreSQL database and verify that the seed

 [source,shell,subs="{markup-in-source}",role=copypaste]
 ----
-oc rsh -n my-project{USER_ID} dc/catalog-postgresql
+oc rsh dc/catalog-postgresql
 ----

 Once connected to the PostgreSQL container, run the following:
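
As a hedged alternative to the interactive session, the check can be scripted by passing the command straight through oc rsh; this sketch assumes the catalog user and catalogdb database defined in the config map above:

    # list the tables in catalogdb without opening an interactive shell
    oc rsh dc/catalog-postgresql psql -U catalog -d catalogdb -c '\dt'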

documentation/modules/ROOT/pages/app-health.adoc

Lines changed: 22 additions & 24 deletions
@@ -1,8 +1,5 @@
 :markup-in-source: verbatim,attributes,quotes
-:APPS_HOSTNAME_SUFFIX: %APPS_HOSTNAME_SUFFIX%
-:CHE_URL: http://devspaces.%APPS_HOSTNAME_SUFFIX%
-:USER_ID: %USER_ID%
-:OPENSHIFT_CONSOLE_URL: https://console-openshift-console.%APPS_HOSTNAME_SUFFIX%/topology/ns/my-project{USER_ID}
+:PROJECT: %PROJECT%

 = Monitor Application Health
 :navtitle: Monitor Application Health
@@ -63,16 +60,16 @@ start listening until initialization is complete.
 By default Pods are designed to be resilient, if a pod dies it will get restarted. Let's see
 this happening.

-In the {OPENSHIFT_CONSOLE_URL}[OpenShift Web Console^, role='params-link'], under the Workloads menu
+In the OpenShift Web Console, under the Workloads menu,
 `*click on 'Topology' -> '(D) inventory-coolstore' -> 'Resources' -> 'P inventory-coolstore-x-xxxxx'*`

 image::openshift-inventory-pod.png[OpenShift Inventory Pod, 700]

-In the {OPENSHIFT_CONSOLE_URL}[OpenShift Web Console^, role='params-link'], `*click on 'Actions' -> 'Delete Pod' -> 'Delete'*`
+In the OpenShift Web Console, `*click on 'Actions' -> 'Delete Pod' -> 'Delete'*`

 image::openshift-inventory-delete-pod.png[OpenShift Inventory Delete Pod, 700]

-A new instance (pod) will be redeployed very quickly. Once deleted `*try to access your http://inventory-coolstore-my-project{USER_ID}.{APPS_HOSTNAME_SUFFIX}[Inventory Service^, role='params-link']*`.
+A new instance (pod) will be redeployed very quickly. Once deleted, `*try to access your Inventory application test page*`.

 However, imagine the _Inventory Service_ is stuck in a state (Stopped listening, Deadlock, etc)
 where it cannot perform as it should. In this case, the pod will not immediately die, it will be in a zombie state.
@@ -88,7 +85,7 @@ the container could automatically restart (based on its restart policy).

 Let's imagine you have traffic coming into the _Inventory Service_. We can do that with simple script.

-In your {CHE_URL}[Workspace^, role='params-link'],
+In your Workspace,

 [tabs, subs="attributes+,+macros"]
 ====
@@ -110,7 +107,7 @@ CLI::
 ----
 for i in {1..60}
 do
-  if [ $(curl -s -w "%{http_code}" -o /dev/null http://inventory-coolstore.my-project{USER_ID}.svc:8080/api/inventory/329299) == "200" ]
+  if [ $(curl -s -w "%{http_code}" -o /dev/null http://inventory-coolstore.${DEVWORKSPACE_NAMESPACE}.svc:8080/api/inventory/329299) == "200" ]
   then
     MSG="\033[0;32mThe request to Inventory Service has succeeded\033[0m"
   else

132129

133130
Now let's scale out your _Inventory Service_ to 2 instances.
134131

135-
In the {OPENSHIFT_CONSOLE_URL}[OpenShift Web Console^, role='params-link'], under the Workloads menu,
132+
In the OpenShift Web Console under the Workloads menu,
136133
`*click on 'Topology' -> '(D) inventory-coolstore' -> 'Details' then click once on the up arrows
137134
on the right side of the pod blue circle*`.
138135

139136
image::openshift-scale-out-inventory.png[OpenShift Scale Out Catalog, 700]
140137

141138
You should see the 2 instances (pods) running.
142-
Now, `*switch back to your {CHE_URL}[Workspace^, role='params-link'] and check the output of the 'Inventory Generate Traffic' task*`.
139+
Now, `*switch back to your Workspace and check the output of the 'Inventory Generate Traffic' task*`.
143140

144141
image::che-inventory-traffic-ko.png[Che - Catalog Traffic KO, 500]
145142

@@ -150,7 +147,7 @@ In order to prevent this behaviour, a **Readiness check** is needed. It determin
150147
If the readiness probe fails, the endpoints controller ensures the container has its IP address removed from the endpoints of all services.
151148
A readiness probe can be used to signal to the endpoints controller that even though a container is running, it should not receive any traffic from a proxy.
152149

153-
First, scale down your _Inventory Service_ to 1 instance. In the {OPENSHIFT_CONSOLE_URL}[OpenShift Web Console^, role='params-link'], under the Workloads menu,
150+
First, scale down your _Inventory Service_ to 1 instance. In the OpenShift Web Console, under the Workloads menu,
154151
`*click on 'Topology' -> '(D) inventory-coolstore' -> 'Details' then click once on the down arrows
155152
on the right side of the pod blue circle*`.
156153

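
Probes can also be defined from the CLI rather than the console. A sketch using oc set probe; the /q/health/ready path is the standard Quarkus readiness endpoint, and the delay value is illustrative rather than taken from the workshop:

    # attach an HTTP readiness probe to the inventory deployment
    oc set probe deployment/inventory-coolstore --readiness \
      --get-url=http://:8080/q/health/ready --initial-delay-seconds=10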
@@ -164,7 +161,7 @@ It allows applications to provide information about their state to external view
 in cloud environments where automated processes must be able to determine whether the application should be discarded or restarted.

 Let's add the needed dependencies to **/projects/workshop/labs/inventory-quarkus/pom.xml**.
-In your {CHE_URL}[Workspace^, role='params-link'], `*edit the '/projects/workshop/labs/inventory-quarkus/pom.xml' file*`:
+In your Workspace, `*edit the '/projects/workshop/labs/inventory-quarkus/pom.xml' file*`:

 [source,xml,subs="{markup-in-source}",role=copypaste]
 ----
@@ -206,20 +203,20 @@ Wait till the build is complete then use the Dev Spaces terminal window to `*Del

 [source,shell,subs="{markup-in-source}",role=copypaste]
 ----
-oc delete pod -l component=inventory -n my-project{USER_ID}
+oc delete pod -l component=inventory
 ----

 NOTE: In Dev Spaces, to open a terminal window, `*click on 'Terminal' -> 'New Terminal'*`


 It will take a few seconds to restart, then verify that the health endpoint works for the **Inventory Service** using `*curl*`

-In your {CHE_URL}[Workspace^, role='params-link'],
+In your Workspace,
 `*execute the following commands in the terminal window*` - it may take a few attempts while the pod restarts.

 [source,shell,subs="{markup-in-source}",role=copypaste]
 ----
-curl -w "\n" http://inventory-coolstore.my-project{USER_ID}.svc:8080/q/health
+curl -w "\n" http://inventory-coolstore.${DEVWORKSPACE_NAMESPACE}.svc:8080/q/health
 ----

 NOTE: To open a terminal window, `*click on 'Terminal' -> 'New Terminal'*`
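
The aggregate /q/health endpoint can also be queried per concern. Quarkus exposes separate liveness and readiness paths (the standard SmallRye Health URLs, under the same ${DEVWORKSPACE_NAMESPACE} assumption the diff already uses):

    # liveness and readiness checks, individually
    curl -w "\n" http://inventory-coolstore.${DEVWORKSPACE_NAMESPACE}.svc:8080/q/health/live
    curl -w "\n" http://inventory-coolstore.${DEVWORKSPACE_NAMESPACE}.svc:8080/q/health/ready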
@@ -239,7 +236,7 @@ You should have the following output:
 }
 ----

-In the {OPENSHIFT_CONSOLE_URL}[OpenShift Web Console^, role='params-link'], under the Workloads menu,
+In the OpenShift Web Console, under the Workloads menu,
 `*click on 'Topology' -> '(D) inventory-coolstore' -> 'Add Health Checks'*`.

 image::openshift-inventory-add-health-check.png[Che - Inventory Add Health Check, 700]
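
Once the health checks are saved, the probe lands in the Deployment spec, where it can be inspected from the terminal. A sketch:

    # show the readiness probe the console just configured
    oc get deployment inventory-coolstore \
      -o jsonpath='{.spec.template.spec.containers[0].readinessProbe}'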
@@ -342,10 +339,10 @@ Therefore, as soon as you define the probe, OpenShift automatically redeploys th
 == Testing Inventory Readiness Probes

 Now let's test it as you did previously.
-`*Generate traffic to Inventory Service*` and then, in the {OPENSHIFT_CONSOLE_URL}[OpenShift Web Console^, role='params-link'],
+`*Generate traffic to Inventory Service*` and then, in the OpenShift Web Console,
 `*scale out the Inventory Service to 2 instances (pods)*`

-In your {CHE_URL}[Workspace^, role='params-link'], `*check the output of the 'Inventory Generate Traffic' task*`.
+In your Workspace, `*check the output of the 'Inventory Generate Traffic' task*`.

 You should not see any errors, this means that you can now **scale out your _Inventory Service_ with no downtime.**

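
Scaling out can equally be done from the terminal; a sketch equivalent to the console clicks described above:

    # scale the inventory deployment to 2 replicas
    oc scale deployment/inventory-coolstore --replicas=2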
@@ -362,12 +359,12 @@ dependencies which is already done for the **Catalog Service**.

 Verify that the health endpoint works for the **Catalog Service** using `*curl*`.

-In your {CHE_URL}[Workspace^, role='params-link'], in the *terminal* window,
+In your Workspace, in the *terminal* window,
 `*execute the following commands*`:

 [source,shell,subs="{markup-in-source}",role=copypaste]
 ----
-curl -w "\n" http://catalog-coolstore.my-project{USER_ID}.svc:8080/actuator/health
+curl -w "\n" http://catalog-coolstore.${DEVWORKSPACE_NAMESPACE}.svc:8080/actuator/health
 ----

 You should have the following output:
@@ -380,7 +377,7 @@ You should have the following output:
 Liveness and Readiness health checks values have already been set for this service as part of the build and deploying
 using Eclipse JKube in combination with the Spring Boot actuator.

-You can check this in the {OPENSHIFT_CONSOLE_URL}[OpenShift Web Console^, role='params-link'], under the Workloads menu,
+You can check this in the OpenShift Web Console, under the Workloads menu,
 `*click on 'Topology' -> '(D) catalog-coolstore' -> 'Actions' -> 'Edit Health Checks'*`.

 image::openshift-catalog-edit-health.png[Che - Catalog Add Health Check, 700]
@@ -480,7 +477,7 @@ Now you understand and know how to configure Readiness, Liveness and Startup pro
 |===

 Finally, let's configure probes for Gateway and Web Service.
-In your {CHE_URL}[Workspace^, role='params-link'], `*click on 'Terminal' -> 'Run Task...' -> 'devfile: Probes - Configure Gateway & Web'*`
+In your Workspace, `*click on 'Terminal' -> 'Run Task...' -> 'devfile: Probes - Configure Gateway & Web'*`

 image::che-runtask.png[Che - RunTask, 600]

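
After the task runs, you can confirm the probes landed on both deployments. A sketch; the gateway-coolstore and web-coolstore names are assumptions based on the naming pattern of the other services:

    # list the configured probes on the gateway and web deployments
    oc describe deployment gateway-coolstore web-coolstore | grep -E 'Liveness|Readiness'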
@@ -493,7 +490,7 @@ gain visibility into how the application behaves and particularly in identifying
 OpenShift provides container metrics out-of-the-box and displays how much memory, cpu and network
 each container has been consuming over time.
 //
-//In the {OPENSHIFT_CONSOLE_URL}[OpenShift Web Console^, role='params-link'],
+//In the OpenShift Web Console,
 //`*click on 'Observe' then select your 'my-project{USER_ID}' project*`.
 //
 //In the project overview, you can see the different **Resource Usage** sections.
@@ -504,6 +501,7 @@ each container has been consuming over time.
 From the Workloads menu, `*click on 'Topology' -> any Deployment (D) and click on the associated Pod (P)*`

 In the pod `*Metrics*` tab, you can see a more detailed view of the pod consumption.
+
 The graphs can be found under the Metrics heading, or Details in earlier versions of the OpenShift console.

 image::openshift-pod-details.png[OpenShift Pod Details,740]
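
The same numbers are available from the CLI, assuming the cluster's metrics stack is running; a sketch:

    # show current cpu and memory consumption per pod
    oc adm top pods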
