
Instructions for the workshop

Login

NOTE: It's good to have the oc command-line tool pre-installed.

Use an OpenShift cluster with OpenShift Virtualization capabilities. See README_INFRA_PREPARE.md for how to set up the environment for Service Mesh and the user projects.

1. Monolith application deployment

NOTE: Make sure you are logged in to OpenShift in your current terminal session.

Command:

# switch to the target namespace for deployment
oc project userX-apps

# apply the deployment manifests to OpenShift
oc apply -f k8s-1 -n userX-apps

With the above command, the monolith application will be deployed in a virtual machine (see monolith-app.yml), and a web application will be deployed on OpenShift using a pre-built container image (see web-app.yml).

You can run the following command to check the pod status:

oc get pods -n userX-apps

Envoy proxies are deployed as sidecars and run in the same pod as the application services.

This is done by adding the following annotation to the deployments:

sidecar.istio.io/inject: "true"

We can see an additional container (the Envoy sidecar) in the ready state for each pod.
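As an illustration, a minimal Deployment fragment carrying this annotation might look like the following sketch. The names and image are placeholders; the actual manifests live in the k8s-1 directory and may differ:

```yaml
# Hypothetical sketch -- the real manifest is in k8s-1 (e.g. web-app.yml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
      annotations:
        sidecar.istio.io/inject: "true"   # triggers Envoy sidecar injection
    spec:
      containers:
        - name: web-app
          image: quay.io/example/web-app:latest   # placeholder image
```

Note that the annotation goes on the pod template, not on the Deployment's own metadata; otherwise the injector will not pick it up.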

In this stage, a DestinationRule and a VirtualService will be created. A VirtualService defines a set of traffic routing rules to apply when a host is addressed, while a DestinationRule defines the policies that apply to the traffic for a service after the routing has occurred.

Here we set a simple route rule for the monolith-app version 1.
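As a sketch (the actual manifests are in k8s-1 and may differ), the DestinationRule/VirtualService pair could look like this, with subsets matching the version-v1 and version-v2 names used later in the canary steps:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: monolith-app
spec:
  host: monolith-app
  subsets:
    - name: version-v1
      labels:
        version: v1
    - name: version-v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: monolith-app
spec:
  hosts:
    - monolith-app
  http:
    - route:
        - destination:
            host: monolith-app
            subset: version-v1
          weight: 100   # all traffic goes to v1 for now
```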

2. Microservice application deployment

In this step, we will deploy service-b as a microservice simply by running:

oc apply -f k8s-2

We don't route any traffic to the microservice yet. On Kiali we can see the traffic graph as the following:

Traffic graph with only monolith application

3. Canary release with Service Mesh

In a canary deployment, newer versions of services are incrementally rolled out to users to minimize the risk and impact of any bugs introduced by the newer version. To begin incrementally routing traffic to the newer version of the monolith-app service, we modify the weights in the VirtualService rule:

oc apply -f k8s-3

The VirtualService route in k8s-3 splits the traffic as follows:

  route:
    - destination:
        host: monolith-app
        subset: version-v1
      weight: 80
    - destination:
        host: monolith-app
        subset: version-v2
      weight: 20
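The weights mean that, on average, 80% of requests land on v1 and 20% on v2. A local shell sketch (no cluster needed; uses bash's RANDOM) illustrates the kind of split the mesh performs per request:

```shell
# Simulate the 80/20 weighted split over 100 requests (illustration only)
v1=0; v2=0
for i in $(seq 1 100); do
  if [ $((RANDOM % 100)) -lt 80 ]; then
    v1=$((v1 + 1))   # request routed to subset version-v1
  else
    v2=$((v2 + 1))   # request routed to subset version-v2
  fi
done
echo "v1: $v1, v2: $v2 (of $((v1 + v2)) requests)"
```

The exact counts vary run to run; only the long-run proportions converge to the configured weights.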

Now we can send requests to the /service-b endpoint and check the traffic graph on Kiali:

Traffic graph with canary release

4. Canary release with Service Mesh: finish

If we're happy with our results, we gradually increase the traffic sent to the new microservice, finally sending 100% of the traffic to the new version.

oc apply -f k8s-4

The VirtualService route in k8s-4 now sends all traffic to version-v2:

  route:
    - destination:
        host: monolith-app
        subset: version-v1
      weight: 0
    - destination:
        host: monolith-app
        subset: version-v2
      weight: 100
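Since v1 now receives weight 0, the rule can equivalently be simplified to a single destination; in Istio, a route with one destination and no weight receives all traffic. A sketch of the simplified form:

```yaml
  route:
    - destination:
        host: monolith-app
        subset: version-v2
```

Keeping the explicit 0/100 weights, as the workshop manifests do, makes it easy to dial traffic back to v1 if a problem appears.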

Now it's time to move forward with our app modernization: select the next "bounded context" in the monolithic app and move it into a microservice, as we've done before.