The aks-microservice-chart-blueprint chart is the recommended way to release your
microservice into the PagoPA K8s environment. It contains all the components
required to get started, and it has several architectural aspects already
configured.
Some of the key benefits of this chart are:
- Highly secure environment thanks to Workload Identity and secrets loaded by SecretProviderClass;
- Ingress HTTPS connection;
- Improved scalability and reliability thanks to KEDA;
- Simplified way to set up secrets and ConfigMaps.
⚠️ The 7.x version drops compatibility with podIdentity and uses Workload Identity.
To see the entire architecture, please see the architecture page.
See the CHANGELOG for the new features and the breaking changes.
To learn how to manage a migration from one version to another, please see MIGRATION_GUIDE.md.
- Helm & Kubernetes
This is the official and recommended method to adopt this chart.
To support the various teams, we have decided that the 2.x and 5.x releases will have LTS support.
By LTS we mean support aimed at fixing bugs or blocking problems, but not new features, for which it will be necessary to upgrade the version.
These are the supported LTS releases and their support end dates:
- 2.x: March 2024
- 5.x: July 2024
Create a helm folder inside your microservice project, in which you will install the Helm chart:

mkdir helm && cd helm

Add the Helm repo:

helm repo add pagopa-microservice https://pagopa.github.io/aks-microservice-chart-blueprint

If you had already added this repo earlier, run helm repo update to retrieve the latest versions of the packages.
Add a very basic configuration in Chart.yaml:
cat <<EOF > Chart.yaml
apiVersion: v2
name: my-microservice
description: My microservice description
type: application
version: 1.0.0
appVersion: 1.0.0
dependencies:
  - name: microservice-chart
    version: 7.1.1
    repository: "https://pagopa.github.io/aks-microservice-chart-blueprint"
EOF

Install the dependency:

helm dep build

Create a values-<env>.yaml file for each environment:

touch values-dev.yaml values-uat.yaml values-prod.yaml

Override all the values that you need and, from the root of your project, install the chart:

helm upgrade -i -n <namespace name> -f <file with values> <name of the helm chart> <chart folder>
helm upgrade -i -n mynamespace -f helm/values-dev.yaml mymicroservice helm

To upgrade, change the version of the dependency and run the update:

cd helm && helm dep update .
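As a starting point when overriding values, here is a minimal values-dev.yaml sketch. It only uses keys documented later in this README; the host and path are hypothetical placeholders to adapt to your environment:

microservice-chart:
  envConfig:
    APP: foo
  service:
    create: true
    type: ClusterIP
    ports:
      - 8080
  ingress:
    create: true
    host: "dev01.my-product.internal.dev.pagopa.it" # hypothetical host
    path: /my-microservice/(.*)
    servicePort: 8080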
To work as expected, this chart requires that:

App:
- the app exposes liveness and readiness endpoints;
- you know which probes fit your application, because they are mandatory.

Azure:
- the TLS certificates are present in the Key Vault (for the ingress);
- the managed pod identity was created.

K8s:
- Reloader or other tools that restart the pods when one of the ConfigMaps or Secrets changes.
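For reference, a minimal sketch of how the stakater Reloader is typically enabled on a workload, assuming the Reloader controller is already installed in the cluster (the deployment name below is hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-microservice # hypothetical name
  annotations:
    reloader.stakater.com/auto: "true" # restart pods when referenced ConfigMaps or Secrets change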
See README/Microservice Chart configuration to understand how to use the values.
topologySpreadConstraints in Kubernetes is used to control how Pods are distributed across a cluster to improve high availability and fault tolerance. It ensures that Pods are spread across different topology domains (such as nodes, zones, or racks) according to defined rules, minimizing the risk of impact if a single domain fails.
Default configuration
topologySpreadConstraints:
  create: true
  useDefaultConfiguration: false

This configuration automatically generates the following snippet:
topologySpreadConstraints:
  - labelSelector:
      matchLabels:
        app.kubernetes.io/instance: <APP NAME>
    maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
  - labelSelector:
      matchLabels:
        app.kubernetes.io/instance: <APP NAME>
    maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
Custom configuration
topologySpreadConstraints:
  create: true
  config:
    - labelSelector:
        matchLabels:
          app.kubernetes.io/instance: v7-java-helm-basic-test
      maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: DoNotSchedule

Starting with version 2.15, KEDA deprecated the azure TriggerAuthentication provider (provider: azure).
You now need to use azure-workload or one of the other providers listed in the documentation:
https://keda.sh/docs/2.17/authentication-providers/
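A minimal sketch of a TriggerAuthentication using the azure-workload provider (the resource name is a hypothetical placeholder):

apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: my-trigger-auth # hypothetical name
spec:
  podIdentity:
    provider: azure-workload # replaces the deprecated provider: azure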
To use Workload Identity and be able to load secrets directly from the Key Vault, follow the MIGRATION_GUIDE.md.
If you want to force the redeploy of the pods without changing any values, you can use this value:

microservice-chart:
  deployment:
    forceRedeploy: true

This can be very useful when you need to update the values of a ConfigMap or Secret that are out of sync.
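For a one-off redeploy you can also set it from the command line instead of editing the values file; a sketch, reusing the install command shown earlier:

helm upgrade -i -n mynamespace -f helm/values-dev.yaml --set microservice-chart.deployment.forceRedeploy=true mymicroservice helm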
It is possible to load environment variables inside the pod through the creation of a ConfigMap named after the release name:
envConfig:
  <env variable name>: <value>

envConfig:
  APP: foo
  MY_APP_COLOR: "green"

envSecret:
  <env variable name>: <secret name inside kv>

envSecret:
  MY_MANDATORY_SECRET: dvopla-d-neu-dev01-aks-apiserver-url
# configuration
keyvault:
  name: "dvopla-d-blueprint-kv"
  tenantId: "7788edaf-0346-4068-9d79-c868aed15b3d"
imagePullSecret: use a Secret to pull an image from a private container image registry or repository.
Specify the name of the Secret to pull the image from a private registry:

imagePullSecret:
  name: NAME_OF_IMAGE_PULL_SECRET

configMapFromFile: this property allows you to load a file from a ConfigMap (defined inside the values) and mount it into the pod as a file.
The default file path is /mnt/file-config/<file name>.
configMapFromFile:
  <key and filename>: <value>
configMapFromFile:
  logback.xml: |-
    <?xml version="1.0" encoding="UTF-8"?>
    <configuration scan="true" scanPeriod="30 seconds">
      <root level="INFO">
        <appender-ref ref="CONSOLE_APPENDER_ASYNC" />
      </root>
    </configuration>

externalConfigMapValues: useful when you have a shared ConfigMap and you want to load its values.
- ENV VARIABLE NAME: how the variable must be named inside the pod; very useful, for example, for Spring, which has some problems with variables that contain hyphens in the name.
- key inside config maps: which key to load into the env variable.
NOTE: the external config file template uses range flow control.
externalConfigMapValues:
  <configmap name>:
    <ENV VARIABLE NAME>: <key inside config maps>

externalConfigMapValues:
  external-configmap-values-complete-1:
    DATABASE_DB_NAME: database-db-name
  external-configmap-values-complete-2:
    PLAYER-INITIAL-LIVES: player-initial-lives
    UI_PROPERTIES_FILE_NAME: ui-properties-file-name

externalConfigMapFiles: useful when you have a shared ConfigMap containing files and you want to load them inside your pod.
All the files are created inside the path: /mnt/file-config-external/<config-map-name>/
externalConfigMapFiles:
  create: true
  configMaps:
    - name: <config map name>
      key: <config map key>
      mountPath: <complete mount path with file name> # (Optional)

externalConfigMapFiles:
  create: true
  configMaps:
    - name: external-configmap-files
      key: game.properties
      mountPath: "/config/game.properties"
    - name: external-configmap-files
      key: user-interface.xml

tmpVolumeMount: this volume is created on the AKS default disk; please don't use it to store data, but only as a tmp folder.
tmpVolumeMount:
  create: true
  mounts:
    - name: tmp
      mountPath: /tmp
    - name: logs
      mountPath: /logs

persistentVolumeClaimsMounts: allows you to create local folders with persistent volumes (PV) and write permissions.
This volume uses a PVC to persist the data:
persistentVolumeClaimsMounts:
  create: true
  mounts:
    - name: pdf-pvc
      mountPath: /pdf
      pvcName: blueprint-hdd-pvc
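The referenced PVC must already exist in the namespace; a minimal sketch of one, where the storage class and size are assumptions to adapt to your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blueprint-hdd-pvc # must match pvcName above
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: managed-csi # assumption: AKS built-in storage class
  resources:
    requests:
      storage: 1Gi # assumption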
podDisruptionBudget:
  create: true
  minAvailable: 0

This snippet allows you to schedule the pods on different nodes, created in different AZs:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node_type
              operator: In
              values:
                - user
        - matchExpressions:
            - key: elastic
              operator: In
              values:
                - eck
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              aadpodidbinding: blueprint-pod-identity
          namespaces: ["blueprint"]
          topologyKey: topology.kubernetes.io/zone

On AKS, this code snippet encourages the pods not to end up all on the same node but to spread as much as possible across nodes created in different AZs. This is not blocking, only a preference: if it is not possible, the pods will still be deployed on a node.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              aadpodidbinding: blueprint-pod-identity
          namespaces: ["blueprint"]
          topologyKey: topology.kubernetes.io/zone

tolerations:
  - effect: "NoSchedule"
    key: "paymentWalletOnly"
    operator: "Equal"
    value: "true"

livenessProbe:
  httpGet:
    path: /status/live
    port: 8080
  initialDelaySeconds: 10
  failureThreshold: 6
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  terminationGracePeriodSeconds: 30

readinessProbe:
  httpGet:
    path: /status/ready
    port: 8080
  initialDelaySeconds: 30
  failureThreshold: 6
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1

startupProbe:
  create: true
  handlerType: "exec"
  exec:
    command: ["/bin/sh", "-c", '[ -d "/csv" ]']
  httpGet:
    path: /status/ready
    port: 8080
  initialDelaySeconds: 1
  failureThreshold: 6
  periodSeconds: 10
  timeoutSeconds: 10
  successThreshold: 1
  terminationGracePeriodSeconds: 30

...
envConfig:
  TO_OVERWRITE: "original-value"
  COMMON: "same"
envSecret:
  SEC_COMMON: 'common-secret'
  SEC_TO_OVERWRITE: 'value-to-overwrite'
keyvault:
  name: "pagopa-kv"
  tenantId: "123123123123123"
canaryDelivery:
  ...
  envConfig:
    TO_OVERWRITE: "over-witten"
    NEW_ITEM: "new item"
  envSecret:
    SEC_NEW_ITEM: 'new-secret'
    SEC_TO_OVERWRITE: 'new-value'

In a Canary deployment, the configuration from the envConfig of the stable version is merged with the canary's own configuration.
It is the same for the envSecret.
You can add new variables to the canary (see SEC_NEW_ITEM) or overwrite values of the stable version (see SEC_TO_OVERWRITE).
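To make the merge explicit, this is the resulting environment the canary pod would see in the example above (derived by hand from the merge rules just described):

TO_OVERWRITE=over-witten   # overwritten by the canary
COMMON=same                # inherited from the stable version
NEW_ITEM=new item          # added by the canary
SEC_COMMON                 # value of the KV secret 'common-secret', inherited
SEC_TO_OVERWRITE           # value of the KV secret 'new-value', overwritten by the canary
SEC_NEW_ITEM               # value of the KV secret 'new-secret', added by the canary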
ℹ️ This section is not directly used by the AKS blueprint, but is a placeholder for release pipelines. It was inserted here to centralize the information and its documentation.
In order to use Postman tests, you need to configure your YAML as follows:
postman-test:
  run: true
  repoName: arc-be
  dir: postman
  collectionName: "pagopa-arc-E2E.postman_collection.json"
  envVariablesFile: "arc_DEV.postman_environment.json" # inside AzDO secure files

The Service Monitor allows you to configure and send metrics from your application to Prometheus, both locally hosted and managed, using this new version of the configuration module. This feature is essential for monitoring application health and gaining real-time insights into its performance.
serviceMonitor:
  create: true
  endpoints:
    - interval: 10s
      targetPort: 9092
      path: /
    - interval: 10s
      targetPort: 9091
      path: /metrics
  promethuesManaged: true

- create: true enables the automatic creation of a dedicated Service Monitor for Prometheus.
- endpoints is a list of endpoints defined to allow Prometheus to scrape metrics from the monitoring targets.
  - interval: specifies how frequently Prometheus should scrape metrics from the endpoint (e.g., every 10 seconds).
  - targetPort: sets the port Prometheus should connect to in order to access metrics.
  - path: defines the HTTP path where metrics are exposed, e.g., / for general information and /metrics for specific monitoring data.
- promethuesManaged: true enables integration with a managed Prometheus system. This option is useful when using Prometheus as a managed service (e.g., in cloud environments), allowing the Service Monitor to automatically adjust to such setups.
See README/Postman tests to understand how to use the values.
For more information, visit the complete documentation.
Clone the repository and run the setup script:
git clone git@github.com:pagopa/aks-microservice-chart-blueprint.git
cd aks-microservice-chart-blueprint
sh bin/setup

The setup script installs a version manager tool that may introduce compatibility issues in your environment. To prevent any potential problems, you can install these dependencies manually or with your favourite tool (see the sketch after this list):
- NodeJS 14.17.3
- Helm 3.8.0
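A sketch of the manual route, assuming you use asdf and its community nodejs and helm plugins as the version manager:

asdf plugin add nodejs
asdf plugin add helm
asdf install nodejs 14.17.3
asdf install helm 3.8.0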
The branch gh-pages contains the GitHub page content and all released charts.
To update the page content, use bin/publish.
- None.
livenessProbe, readinessProbe: now choose whether to enable tcpSocket or httpGet.
livenessProbe:
  handlerType: httpGet # <httpGet|tcpSocket>
readinessProbe:
  handlerType: httpGet # <httpGet|tcpSocket>

fileConfigExternals:
Now create files from an external ConfigMap:
fileConfigExternals:
  create: true
  configMaps:
    - name: nodo-cacerts
      key: cacerts

serviceMonitor:
Now create a Service Monitor to send metrics to Prometheus:
serviceMonitor:
  create: true
  endpoints:
    - interval: 10s # micrometer
      targetPort: 9092
      path: /
    - interval: 10s # cinnamon
      targetPort: 9091
      path: /metrics

fileShare:
Now use an Azure Storage file share and mount it in a pod at /mnt/file-azure/{{ name }}/.. (e.g. /mnt/file-azure/certificates/java-cacerts).
(Attention: the Key Vault must contain two keys, azurestorageaccountname and azurestorageaccountkey, and the storage file share must be named as fileShare.folders.name. See https://learn.microsoft.com/en-us/azure/aks/azure-files-volume. A sketch of how to create the two keys follows the example below.)
fileShare:
  create: true
  folders:
    - name: certificates
      readOnly: false
      mountOptions: "dir_mode=0777,file_mode=0777,cache=strict,actimeo=30"
    - name: firmatore
      readOnly: false
      mountOptions: "dir_mode=0777,file_mode=0777,cache=strict,actimeo=30"
envFieldRef:
Now map environment variables from Pod information:
envFieldRef:
  NAMESPACE: "metadata.namespace"
  SERVICE_HTTP_HOST: "status.podIP"
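For reference, a sketch of the plain Kubernetes Downward API entries this maps to on the container:

env:
  - name: NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: SERVICE_HTTP_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.podIP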
fileConfig:
Now load a file inside a ConfigMap and mount it in a pod at /mnt/file-config/.. (e.g. /mnt/file-config/logback.xml):
fileConfig:
  logback.xml: |-
    <?xml version="1.0" encoding="UTF-8"?>
    <configuration scan="true" scanPeriod="30 seconds">
      <property name="CONSOLE_PATTERN" value="%d %-5level [sid:%X{sessionId}] [can:%X{idCanale}] [sta:%X{idStazione}] [%logger] - %msg [%X{akkaSource}]%n"/>
      <appender name="CONSOLE_APPENDER" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
          <pattern>${CONSOLE_PATTERN}</pattern>
          <charset>utf8</charset>
        </encoder>
      </appender>
      <root level="INFO">
        <appender-ref ref="CONSOLE_APPENDER_ASYNC" />
      </root>
    </configuration>

Or use the helm command to load the file when using a subchart:

--set-file 'microservice-chart.fileConfig.logback\.xml'=helm/config/dev/logback.xml

service:
Now use a list of ports and no longer a single value:
service:
  create: true
  type: ClusterIP
  ports:
    - 8080
    - 4000

ingress:
Now you need to specify the service port:
ingress:
  create: true
  host: "dev01.rtd.internal.dev.cstar.pagopa.it"
  path: /rtd/progressive-delivery/(.*)
  servicePort: 8080

Install:
In the example folder, you can find working examples.
Use spring-boot-app-color to test the canary deployment.
It is an elementary version of an Azure Function App written in NodeJS.
It has three functions:
- ready, which responds to the readiness probe;
- live, which responds to the liveness probe;
- secrets, which returns a USER and a PASS taken respectively from a K8s ConfigMap and an Azure Key Vault.
To try it locally use either the Azure Functions Core Tools or Docker.
You can also find a generic pipeline.
https://github.com/pagopa/devops-java-springboot-color
There are two folders called:
- spring-boot-app-bar
- spring-boot-app-foo

These are simply Helm charts that install a simple web application written in Java Spring Boot.
They can be useful to check how AKS works with two applications.
We strongly suggest performing SAST on your microservice Helm chart. You could look at this GitHub Action.