From ae88a1cb767dabb724f9e4ff5b5165a49ca26570 Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Tue, 17 Sep 2024 22:39:16 +0100 Subject: [PATCH 01/22] TELCODOCS-2000 documenting the AGOF 2 --- content/learn/vp__agof .adoc | 311 +++++++++++++++++++++++++++++++++++ 1 file changed, 311 insertions(+) create mode 100644 content/learn/vp__agof .adoc diff --git a/content/learn/vp__agof .adoc b/content/learn/vp__agof .adoc new file mode 100644 index 000000000..ddb37c51d --- /dev/null +++ b/content/learn/vp__agof .adoc @@ -0,0 +1,311 @@ +--- +menu: + learn: + parent: Validated patterns frameworks +title: Ansible GitOps Framework +weight: 23 +aliases: /ocp-framework/agof/ +--- + +:toc: +:imagesdir: /images +:_content-type: ASSEMBLY +include::modules/comm-attributes.adoc[] + +== About the Ansible GitOps framework (AGOF)for validated patternS + +The link:/patterns/agof/[Ansible GitOps Framework] provides an extensible framework to do GitOps with https://docs.ansible.com/platform.html[Ansible Automation Platform] (AAP), and as such provides useful facilities for developing patterns (community and validated) that function with Ansible Automation Platform as the GitOps engine. + +AGOF comes with code to install VMs in AWS, if desired, or else it can work with previously provisioned VMs, or a functional AAP Controller endpoint + +When there is no access to OpenShift Container Platform, AGOF provides a standalone solution for deploying and managing validated patterns. This framework leverages GitOps principles without relying on OpenShift. + +== Pattern directories tour + +Examining any of the existing patterns reveals the important organizational part of the validated patterns framework. Let's take a look at a couple of the existing validated patterns: Multicluster GitOps and Industrial Edge. + +=== Multicloud GitOps + +The Multicloud GitOps approach enables centralized management of multiple cloud deployments across both public and private clouds, including workloads and the secure handling of secrets across environments. This approach is built on patterns that consist of two key components: a "common" element, which serves as a foundational framework shared by nearly all patterns, and a pattern-specific element that builds on the common framework with tailored content. This section focuses on the standardized directory structure used in Multicloud GitOps repositories for validated patterns in multicloud setups. By following the well-structured directory layout illustrated here, teams can streamline deployment processes across clouds, reduce configuration drift, enhance automation, and maintain a single source of truth for both infrastructure and application code. + +[source,text] +---- +~/g/multicloud-gitops on main ◦ tree -L 2 +. +├── ansible +├── ansible.cfg +├── charts +│ └── all +│ └── region +├── common +│ ├── acm +| ├── ansible +| ├── Changes.md +│ ├── clustergroup +│ ├── common -> . 
+│ ├── examples +│ ├── golang-external-secrets +│ ├── hashicorp-vault +│ ├── letsencrypt +| ├── LICENSE +| ├── Makefile +| ├── operator-install +| ├── README.md +│ ├── reference-output.yaml +│ ├── scripts +│ ├── tests +│ └── values-global.yaml +├── LICENSE +├── Makefile +├── overrides +│ ├── values-AWS.yaml +│ └── values-IBMCloud.yaml +├── pattern.sh -> ./common/scripts/pattern-util.sh +├── README.md +├── tests +├── values-global.yaml +├── values-global-one.yaml +├── values-hub.yaml +├── values-secret-multicloud-gitops.yaml +└── values-secret.yaml.template + +20 directories, 77 files +---- + +First we notice some subdirectories: charts and common, along with `values-` yaml files. + +=== Industrial edge + +[source,text] +---- +~/g/industrial-edge on stable-2.0 ◦ tree -L 2 +. +├── ansible +├── ansible.cfg +├── Changes.md +├── charts +│ ├── datacenter +│ └─ factory +| └── secrets +├── common +│ ├── acm +| ├── ansible +| ├── Changes.md +│ ├── clustergroup +│ ├── common -> . +│ ├── examples +│ ├── golang-external-secrets +│ ├── hashicorp-vault +│ ├── letsencrypt +| ├── LICENSE +| ├── Makefile +| ├── operator-install +│ ├── scripts +│ ├── tests +│ └── values-global.yaml +├── docs +│ ├── images +│ └── old-deployment-map.txt +├── images +│ ├── import-cluster.png +│ ├── import-with-kubeconfig.png +│ └── launch-acm-console.png +├── LICENSE +├── Makefile +├── overrides +| ├── values-prod-imagedata.yaml +│ └── values-test-imagedata.yaml +├── README.md +|── pattern.sh -> ./common/scripts/pattern-util.sh +├── scripts +│ ├── secret.sh +│ └── sleep-seed.sh +├── SUPPORT_AGREEMENT.md +├── tests +├── values-datacenter.yaml +├── values-factory.yaml +├── values-global.yaml +├── values-hub.yaml -> values-datacenter.yaml +└── values-secret.yaml.template + +25 directories, 98 files +---- + +We see the same or similar files in the both patterns directories. + +== The `common` directory + +The core components that make the Validated Patterns framework are contained in the common repository. These include: + +* OpenShift GitOps configuration +* Supports our clusterGroup and GitOps policies +* Validated Pattern framework build scripts and Makefiles +* Secrets Management with HashiCorp Vault +* Operator CRDs and other assets +* Various utility scripts + +The common repository contains all the shared manifests for the Validated Patterns Framework. These components are configured to work together within the GitOps framework. Instead of duplicating configurations across patterns, shared technologies are centralized in this common directory. Pattern-specific post-deployment configurations, if needed, should be added to the Helm charts in the charts directory. Typically, you won't need to modify the common directory unless working on the framework itself. + +=== Breakdown of common repository + +This table details the key components of the common repository: + +[cols="2,2", options="header"] +|=== +| Component | Description + +| acm | Contains the helm charts which contains policies and is used to configure the deployment of the Advance Cluster Manager. +| ansible | This directory contains the ansible roles and modules that support the secrets management for a pattern. +| clustergroup | Contains the helm chart used to create namespace, subscriptions, projects, and applications described in the values files. This is the seed for all patterns. 
+| golang-external-secrets | Helm chart for External Secrets Operator that integrates external secret management systems like AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault, IBM Cloud Secrets Manager, Akeyless. +| hashicorp-vault | Contains the helm chart for HashiCorp Vault. +|operator-install | Contains the helm chart used by the Validated Patterns Operator to create the openshift-gitops component and create the initial ArgoCD applications for the Validated Pattern. +| scripts | Directory which contains utility scripts used by the Validated Pattern Framework. +|=== + +== The `charts` directory + +This is where validated patterns keep the helm charts for a pattern. The helm charts are used to deploy and manage the various components of the applications deployed at a site. By convention, the charts are broken out by site location. You may see `datacenter`, `hub`, `factory`, or other site names in there. + +[NOTE] +==== +The site naming convention is flexible, allowing users to modify it to suit their environment. +==== + +Each site has sub-directories based on application or library component groupings. + +From https://helm.sh/docs/chart_template_guide/getting_started/[Helm documentation:] +_Application charts_ are a collection of templates that can be packaged into versioned archives to be deployed. + +_Library charts_ provide useful utilities or functions for the chart developer. They're included as a dependency of application charts to inject those utilities and functions into the rendering +pipeline. Library charts do not define any templates and therefore cannot be deployed. + +These groupings are used by OpenShift GitOps to deploy into the cluster. The configurations for each of the components inside an application are synced every three minutes by OpenShift GitOps to make sure that the site is up to date. The configuration can also be synced manually if you do not wish to wait up to three minutes. + +[source,text] +---- +. +├── datacenter +│ ├── external-secrets +│ ├── manuela-data-lake +│ ├── manuela-tst +│ ├── opendatahub +│ └── pipelines +├── factory +│ └── manuela-stormshift +└── secrets + └── pipeline-setup +---- +The configuration YAML for each component of the application is stored in the templates subdirectory. + +[source,text] +---- +. +├── external-secrets +│ ├── Chart.yaml +│ ├── templates +│ └── values.yaml +├── manuela-data-lake +│ ├── Chart.yaml +│ ├── templates +│ └── values.yaml +├── manuela-tst +│ ├── Chart.yaml +│ ├── templates +│ └── values.yaml +├── opendatahub +│ ├── Chart.yaml +│ ├── templates +│ └── values.yaml +└── pipelines + ├── Chart.yaml + ├── extra + ├── images + ├── README.md + ├── templates + └── values.yaml +---- +== The `scripts` directory + +In some cases, an Operator or Helm chart may require additional configuration. When extra code is needed for deployment, it should be placed in the scripts directory. Typically, consumers of a validated pattern won't interact directly with these scripts, as they are executed by the existing automation (for example through the Makefile or OpenShift GitOps). If extra adjustments are required for your application, place the scripts here and run them through automation. The scripts directory should generally be treated as off-limits unless you're modifying the framework itself. + +== Applications and `values-` files + +Helm uses `values.yaml` files to pass variables into charts. Variables in the `values.yaml` file can be overridden in the following ways: + +. 
By a `values.yaml` file in the parent directory +. By a values file passed into the `helm ` command using `-f` +. By specifying an override individual value in the the `helm` command with `--set` + +For more information on values files and their usage see the https://helm.sh/docs/chart_template_guide/values_files/[values files section] of the Helm documentation. + +This section is meant as an introduction to the `values-` files that the framework uses to override values in the chart templates. In the Getting Started pages there will be more specific usage details. + +=== There are three types of `value-` files. + +. *`values-global.yaml`*: +This is used to set variables for helm charts across the pattern. It contains the name of the pattern and sets some other variables for artifacts like, image registry, Git repositories, GitOps syncPolicy etc. +. *`values-.yaml`*: +Each specific site requires information regarding what applications and subscriptions are required for that site. This file contains a list of namespaces, applications, subscriptions, the operator versions etc. for that site. +. *`values-secret.yaml.template`*: +All patterns require some secrets for artifacts included in the pattern. For example credentials for GitHub, AWS, or Quay.io. The framework provides a safe way to load those secrets into a vault for consumption by the pattern. This template file can be copied to your home directory, the secret values applied, and the validated pattern will go look for `values-secrets.yaml` in your home directory. *Do not leave a `values-secrets.yaml` file in your cloned git directory or it may end up in your (often public) Git repository, like GitHub.* ++ +[NOTE] +==== +This file has nothing to do with helm and can be copied either to your home directory or to the `~/.config/validatedpatterns/` folder. The naming should be `values-secret-.yaml`. Ideally it should be encrypted with the ansible-vault. +==== + +=== Values files can have some overrides. + +. Version overrides can be used to set specific values for OCP versions. For example *`values-hub-4.16.yaml`* allows you to tweak a specific value for OCP 4.16 on the Hub cluster. +. Version overrides can be used to set specific values for specific cloud providers. For example *`values-AWS.yaml`* would allow you to tweak a specific value for all cluster groups deployed on AWS. + +=== Other combination examples include: + +. *`values-hub-Azure.yaml`* only apply this Azure tweak on the hub cluster. +. *`values-4.16.yaml`* apply these OCP 4.16 tweaks to all cluster groups in this pattern. + +Current supported cloud providers include *`AWS`*, *`Azure`*, *`GCP`*, and *Nutanix*. + +== Environment values and Helm + +The purpose of the values files is to leverage Helm's templating capabilities, allowing you to dynamically substitute values into your charts. This makes the pattern very portable. + +The following `messaging-route.yaml` example shows how the AMQ messaging service is using values set in the `values-global.yaml` file for Industrial Edge. + +[source,yaml] +---- +apiVersion: route.openshift.io/v1 +kind: Route +metadata: + labels: + app: messaging + name: messaging +spec: + host: messaging-manuela-tst-all.apps.{{ .Values.global.datacenter.clustername }}.{{ .Values.global.datacenter.domain }} + port: + targetPort: 3000-tcp + to: + kind: Service + name: messaging + weight: 100 + wildcardPolicy: None +---- + +The values in the `values-global.yaml` will be substituted when the YAML is applied to the cluster. 
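
For a quick local check of that substitution before OpenShift GitOps ever syncs the application, you can render the chart with the same values file. This is only a sketch: it assumes you run it from the root of an Industrial Edge checkout, the release name is arbitrary, and the chart path follows the `charts/datacenter/manuela-tst` layout shown earlier.

[source,shell]
----
# Render the chart locally with the pattern-wide values and inspect the Route host
helm template manuela-tst ./charts/datacenter/manuela-tst \
  -f values-global.yaml | grep "host:"
----

The `values-global.yaml` excerpt that feeds this template follows.
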
+ +[source,yaml] +---- +global: + pattern: industrial-edge + +... + + datacenter: + clustername: ipbabble-dc + domain: blueprints.rhecoeng.com + + edge: + clustername: ipbabble-f1 + domain: blueprints.rhecoeng.com +---- \ No newline at end of file From 501d201ce192e69fdb4a64fb560fc42b4e00c104 Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Wed, 18 Sep 2024 12:31:34 +0100 Subject: [PATCH 02/22] Trying to fix build --- content/learn/vp__agof .adoc | 311 ----------------------------------- 1 file changed, 311 deletions(-) delete mode 100644 content/learn/vp__agof .adoc diff --git a/content/learn/vp__agof .adoc b/content/learn/vp__agof .adoc deleted file mode 100644 index ddb37c51d..000000000 --- a/content/learn/vp__agof .adoc +++ /dev/null @@ -1,311 +0,0 @@ ---- -menu: - learn: - parent: Validated patterns frameworks -title: Ansible GitOps Framework -weight: 23 -aliases: /ocp-framework/agof/ ---- - -:toc: -:imagesdir: /images -:_content-type: ASSEMBLY -include::modules/comm-attributes.adoc[] - -== About the Ansible GitOps framework (AGOF)for validated patternS - -The link:/patterns/agof/[Ansible GitOps Framework] provides an extensible framework to do GitOps with https://docs.ansible.com/platform.html[Ansible Automation Platform] (AAP), and as such provides useful facilities for developing patterns (community and validated) that function with Ansible Automation Platform as the GitOps engine. - -AGOF comes with code to install VMs in AWS, if desired, or else it can work with previously provisioned VMs, or a functional AAP Controller endpoint - -When there is no access to OpenShift Container Platform, AGOF provides a standalone solution for deploying and managing validated patterns. This framework leverages GitOps principles without relying on OpenShift. - -== Pattern directories tour - -Examining any of the existing patterns reveals the important organizational part of the validated patterns framework. Let's take a look at a couple of the existing validated patterns: Multicluster GitOps and Industrial Edge. - -=== Multicloud GitOps - -The Multicloud GitOps approach enables centralized management of multiple cloud deployments across both public and private clouds, including workloads and the secure handling of secrets across environments. This approach is built on patterns that consist of two key components: a "common" element, which serves as a foundational framework shared by nearly all patterns, and a pattern-specific element that builds on the common framework with tailored content. This section focuses on the standardized directory structure used in Multicloud GitOps repositories for validated patterns in multicloud setups. By following the well-structured directory layout illustrated here, teams can streamline deployment processes across clouds, reduce configuration drift, enhance automation, and maintain a single source of truth for both infrastructure and application code. - -[source,text] ----- -~/g/multicloud-gitops on main ◦ tree -L 2 -. -├── ansible -├── ansible.cfg -├── charts -│ └── all -│ └── region -├── common -│ ├── acm -| ├── ansible -| ├── Changes.md -│ ├── clustergroup -│ ├── common -> . 
-│ ├── examples -│ ├── golang-external-secrets -│ ├── hashicorp-vault -│ ├── letsencrypt -| ├── LICENSE -| ├── Makefile -| ├── operator-install -| ├── README.md -│ ├── reference-output.yaml -│ ├── scripts -│ ├── tests -│ └── values-global.yaml -├── LICENSE -├── Makefile -├── overrides -│ ├── values-AWS.yaml -│ └── values-IBMCloud.yaml -├── pattern.sh -> ./common/scripts/pattern-util.sh -├── README.md -├── tests -├── values-global.yaml -├── values-global-one.yaml -├── values-hub.yaml -├── values-secret-multicloud-gitops.yaml -└── values-secret.yaml.template - -20 directories, 77 files ----- - -First we notice some subdirectories: charts and common, along with `values-` yaml files. - -=== Industrial edge - -[source,text] ----- -~/g/industrial-edge on stable-2.0 ◦ tree -L 2 -. -├── ansible -├── ansible.cfg -├── Changes.md -├── charts -│ ├── datacenter -│ └─ factory -| └── secrets -├── common -│ ├── acm -| ├── ansible -| ├── Changes.md -│ ├── clustergroup -│ ├── common -> . -│ ├── examples -│ ├── golang-external-secrets -│ ├── hashicorp-vault -│ ├── letsencrypt -| ├── LICENSE -| ├── Makefile -| ├── operator-install -│ ├── scripts -│ ├── tests -│ └── values-global.yaml -├── docs -│ ├── images -│ └── old-deployment-map.txt -├── images -│ ├── import-cluster.png -│ ├── import-with-kubeconfig.png -│ └── launch-acm-console.png -├── LICENSE -├── Makefile -├── overrides -| ├── values-prod-imagedata.yaml -│ └── values-test-imagedata.yaml -├── README.md -|── pattern.sh -> ./common/scripts/pattern-util.sh -├── scripts -│ ├── secret.sh -│ └── sleep-seed.sh -├── SUPPORT_AGREEMENT.md -├── tests -├── values-datacenter.yaml -├── values-factory.yaml -├── values-global.yaml -├── values-hub.yaml -> values-datacenter.yaml -└── values-secret.yaml.template - -25 directories, 98 files ----- - -We see the same or similar files in the both patterns directories. - -== The `common` directory - -The core components that make the Validated Patterns framework are contained in the common repository. These include: - -* OpenShift GitOps configuration -* Supports our clusterGroup and GitOps policies -* Validated Pattern framework build scripts and Makefiles -* Secrets Management with HashiCorp Vault -* Operator CRDs and other assets -* Various utility scripts - -The common repository contains all the shared manifests for the Validated Patterns Framework. These components are configured to work together within the GitOps framework. Instead of duplicating configurations across patterns, shared technologies are centralized in this common directory. Pattern-specific post-deployment configurations, if needed, should be added to the Helm charts in the charts directory. Typically, you won't need to modify the common directory unless working on the framework itself. - -=== Breakdown of common repository - -This table details the key components of the common repository: - -[cols="2,2", options="header"] -|=== -| Component | Description - -| acm | Contains the helm charts which contains policies and is used to configure the deployment of the Advance Cluster Manager. -| ansible | This directory contains the ansible roles and modules that support the secrets management for a pattern. -| clustergroup | Contains the helm chart used to create namespace, subscriptions, projects, and applications described in the values files. This is the seed for all patterns. 
-| golang-external-secrets | Helm chart for External Secrets Operator that integrates external secret management systems like AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault, IBM Cloud Secrets Manager, Akeyless. -| hashicorp-vault | Contains the helm chart for HashiCorp Vault. -|operator-install | Contains the helm chart used by the Validated Patterns Operator to create the openshift-gitops component and create the initial ArgoCD applications for the Validated Pattern. -| scripts | Directory which contains utility scripts used by the Validated Pattern Framework. -|=== - -== The `charts` directory - -This is where validated patterns keep the helm charts for a pattern. The helm charts are used to deploy and manage the various components of the applications deployed at a site. By convention, the charts are broken out by site location. You may see `datacenter`, `hub`, `factory`, or other site names in there. - -[NOTE] -==== -The site naming convention is flexible, allowing users to modify it to suit their environment. -==== - -Each site has sub-directories based on application or library component groupings. - -From https://helm.sh/docs/chart_template_guide/getting_started/[Helm documentation:] -_Application charts_ are a collection of templates that can be packaged into versioned archives to be deployed. - -_Library charts_ provide useful utilities or functions for the chart developer. They're included as a dependency of application charts to inject those utilities and functions into the rendering -pipeline. Library charts do not define any templates and therefore cannot be deployed. - -These groupings are used by OpenShift GitOps to deploy into the cluster. The configurations for each of the components inside an application are synced every three minutes by OpenShift GitOps to make sure that the site is up to date. The configuration can also be synced manually if you do not wish to wait up to three minutes. - -[source,text] ----- -. -├── datacenter -│ ├── external-secrets -│ ├── manuela-data-lake -│ ├── manuela-tst -│ ├── opendatahub -│ └── pipelines -├── factory -│ └── manuela-stormshift -└── secrets - └── pipeline-setup ----- -The configuration YAML for each component of the application is stored in the templates subdirectory. - -[source,text] ----- -. -├── external-secrets -│ ├── Chart.yaml -│ ├── templates -│ └── values.yaml -├── manuela-data-lake -│ ├── Chart.yaml -│ ├── templates -│ └── values.yaml -├── manuela-tst -│ ├── Chart.yaml -│ ├── templates -│ └── values.yaml -├── opendatahub -│ ├── Chart.yaml -│ ├── templates -│ └── values.yaml -└── pipelines - ├── Chart.yaml - ├── extra - ├── images - ├── README.md - ├── templates - └── values.yaml ----- -== The `scripts` directory - -In some cases, an Operator or Helm chart may require additional configuration. When extra code is needed for deployment, it should be placed in the scripts directory. Typically, consumers of a validated pattern won't interact directly with these scripts, as they are executed by the existing automation (for example through the Makefile or OpenShift GitOps). If extra adjustments are required for your application, place the scripts here and run them through automation. The scripts directory should generally be treated as off-limits unless you're modifying the framework itself. - -== Applications and `values-` files - -Helm uses `values.yaml` files to pass variables into charts. Variables in the `values.yaml` file can be overridden in the following ways: - -. 
By a `values.yaml` file in the parent directory -. By a values file passed into the `helm ` command using `-f` -. By specifying an override individual value in the the `helm` command with `--set` - -For more information on values files and their usage see the https://helm.sh/docs/chart_template_guide/values_files/[values files section] of the Helm documentation. - -This section is meant as an introduction to the `values-` files that the framework uses to override values in the chart templates. In the Getting Started pages there will be more specific usage details. - -=== There are three types of `value-` files. - -. *`values-global.yaml`*: -This is used to set variables for helm charts across the pattern. It contains the name of the pattern and sets some other variables for artifacts like, image registry, Git repositories, GitOps syncPolicy etc. -. *`values-.yaml`*: -Each specific site requires information regarding what applications and subscriptions are required for that site. This file contains a list of namespaces, applications, subscriptions, the operator versions etc. for that site. -. *`values-secret.yaml.template`*: -All patterns require some secrets for artifacts included in the pattern. For example credentials for GitHub, AWS, or Quay.io. The framework provides a safe way to load those secrets into a vault for consumption by the pattern. This template file can be copied to your home directory, the secret values applied, and the validated pattern will go look for `values-secrets.yaml` in your home directory. *Do not leave a `values-secrets.yaml` file in your cloned git directory or it may end up in your (often public) Git repository, like GitHub.* -+ -[NOTE] -==== -This file has nothing to do with helm and can be copied either to your home directory or to the `~/.config/validatedpatterns/` folder. The naming should be `values-secret-.yaml`. Ideally it should be encrypted with the ansible-vault. -==== - -=== Values files can have some overrides. - -. Version overrides can be used to set specific values for OCP versions. For example *`values-hub-4.16.yaml`* allows you to tweak a specific value for OCP 4.16 on the Hub cluster. -. Version overrides can be used to set specific values for specific cloud providers. For example *`values-AWS.yaml`* would allow you to tweak a specific value for all cluster groups deployed on AWS. - -=== Other combination examples include: - -. *`values-hub-Azure.yaml`* only apply this Azure tweak on the hub cluster. -. *`values-4.16.yaml`* apply these OCP 4.16 tweaks to all cluster groups in this pattern. - -Current supported cloud providers include *`AWS`*, *`Azure`*, *`GCP`*, and *Nutanix*. - -== Environment values and Helm - -The purpose of the values files is to leverage Helm's templating capabilities, allowing you to dynamically substitute values into your charts. This makes the pattern very portable. - -The following `messaging-route.yaml` example shows how the AMQ messaging service is using values set in the `values-global.yaml` file for Industrial Edge. - -[source,yaml] ----- -apiVersion: route.openshift.io/v1 -kind: Route -metadata: - labels: - app: messaging - name: messaging -spec: - host: messaging-manuela-tst-all.apps.{{ .Values.global.datacenter.clustername }}.{{ .Values.global.datacenter.domain }} - port: - targetPort: 3000-tcp - to: - kind: Service - name: messaging - weight: 100 - wildcardPolicy: None ----- - -The values in the `values-global.yaml` will be substituted when the YAML is applied to the cluster. 
- -[source,yaml] ----- -global: - pattern: industrial-edge - -... - - datacenter: - clustername: ipbabble-dc - domain: blueprints.rhecoeng.com - - edge: - clustername: ipbabble-f1 - domain: blueprints.rhecoeng.com ----- \ No newline at end of file From 31aba8205f17a98dd01e714c56f55168a7b0394a Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Wed, 18 Sep 2024 12:31:42 +0100 Subject: [PATCH 03/22] Trying to fix build 2 --- content/learn/vp_agof.adoc | 311 +++++++++++++++++++++++++++++++++++++ 1 file changed, 311 insertions(+) create mode 100644 content/learn/vp_agof.adoc diff --git a/content/learn/vp_agof.adoc b/content/learn/vp_agof.adoc new file mode 100644 index 000000000..ddb37c51d --- /dev/null +++ b/content/learn/vp_agof.adoc @@ -0,0 +1,311 @@ +--- +menu: + learn: + parent: Validated patterns frameworks +title: Ansible GitOps Framework +weight: 23 +aliases: /ocp-framework/agof/ +--- + +:toc: +:imagesdir: /images +:_content-type: ASSEMBLY +include::modules/comm-attributes.adoc[] + +== About the Ansible GitOps framework (AGOF)for validated patternS + +The link:/patterns/agof/[Ansible GitOps Framework] provides an extensible framework to do GitOps with https://docs.ansible.com/platform.html[Ansible Automation Platform] (AAP), and as such provides useful facilities for developing patterns (community and validated) that function with Ansible Automation Platform as the GitOps engine. + +AGOF comes with code to install VMs in AWS, if desired, or else it can work with previously provisioned VMs, or a functional AAP Controller endpoint + +When there is no access to OpenShift Container Platform, AGOF provides a standalone solution for deploying and managing validated patterns. This framework leverages GitOps principles without relying on OpenShift. + +== Pattern directories tour + +Examining any of the existing patterns reveals the important organizational part of the validated patterns framework. Let's take a look at a couple of the existing validated patterns: Multicluster GitOps and Industrial Edge. + +=== Multicloud GitOps + +The Multicloud GitOps approach enables centralized management of multiple cloud deployments across both public and private clouds, including workloads and the secure handling of secrets across environments. This approach is built on patterns that consist of two key components: a "common" element, which serves as a foundational framework shared by nearly all patterns, and a pattern-specific element that builds on the common framework with tailored content. This section focuses on the standardized directory structure used in Multicloud GitOps repositories for validated patterns in multicloud setups. By following the well-structured directory layout illustrated here, teams can streamline deployment processes across clouds, reduce configuration drift, enhance automation, and maintain a single source of truth for both infrastructure and application code. + +[source,text] +---- +~/g/multicloud-gitops on main ◦ tree -L 2 +. +├── ansible +├── ansible.cfg +├── charts +│ └── all +│ └── region +├── common +│ ├── acm +| ├── ansible +| ├── Changes.md +│ ├── clustergroup +│ ├── common -> . 
+│ ├── examples +│ ├── golang-external-secrets +│ ├── hashicorp-vault +│ ├── letsencrypt +| ├── LICENSE +| ├── Makefile +| ├── operator-install +| ├── README.md +│ ├── reference-output.yaml +│ ├── scripts +│ ├── tests +│ └── values-global.yaml +├── LICENSE +├── Makefile +├── overrides +│ ├── values-AWS.yaml +│ └── values-IBMCloud.yaml +├── pattern.sh -> ./common/scripts/pattern-util.sh +├── README.md +├── tests +├── values-global.yaml +├── values-global-one.yaml +├── values-hub.yaml +├── values-secret-multicloud-gitops.yaml +└── values-secret.yaml.template + +20 directories, 77 files +---- + +First we notice some subdirectories: charts and common, along with `values-` yaml files. + +=== Industrial edge + +[source,text] +---- +~/g/industrial-edge on stable-2.0 ◦ tree -L 2 +. +├── ansible +├── ansible.cfg +├── Changes.md +├── charts +│ ├── datacenter +│ └─ factory +| └── secrets +├── common +│ ├── acm +| ├── ansible +| ├── Changes.md +│ ├── clustergroup +│ ├── common -> . +│ ├── examples +│ ├── golang-external-secrets +│ ├── hashicorp-vault +│ ├── letsencrypt +| ├── LICENSE +| ├── Makefile +| ├── operator-install +│ ├── scripts +│ ├── tests +│ └── values-global.yaml +├── docs +│ ├── images +│ └── old-deployment-map.txt +├── images +│ ├── import-cluster.png +│ ├── import-with-kubeconfig.png +│ └── launch-acm-console.png +├── LICENSE +├── Makefile +├── overrides +| ├── values-prod-imagedata.yaml +│ └── values-test-imagedata.yaml +├── README.md +|── pattern.sh -> ./common/scripts/pattern-util.sh +├── scripts +│ ├── secret.sh +│ └── sleep-seed.sh +├── SUPPORT_AGREEMENT.md +├── tests +├── values-datacenter.yaml +├── values-factory.yaml +├── values-global.yaml +├── values-hub.yaml -> values-datacenter.yaml +└── values-secret.yaml.template + +25 directories, 98 files +---- + +We see the same or similar files in the both patterns directories. + +== The `common` directory + +The core components that make the Validated Patterns framework are contained in the common repository. These include: + +* OpenShift GitOps configuration +* Supports our clusterGroup and GitOps policies +* Validated Pattern framework build scripts and Makefiles +* Secrets Management with HashiCorp Vault +* Operator CRDs and other assets +* Various utility scripts + +The common repository contains all the shared manifests for the Validated Patterns Framework. These components are configured to work together within the GitOps framework. Instead of duplicating configurations across patterns, shared technologies are centralized in this common directory. Pattern-specific post-deployment configurations, if needed, should be added to the Helm charts in the charts directory. Typically, you won't need to modify the common directory unless working on the framework itself. + +=== Breakdown of common repository + +This table details the key components of the common repository: + +[cols="2,2", options="header"] +|=== +| Component | Description + +| acm | Contains the helm charts which contains policies and is used to configure the deployment of the Advance Cluster Manager. +| ansible | This directory contains the ansible roles and modules that support the secrets management for a pattern. +| clustergroup | Contains the helm chart used to create namespace, subscriptions, projects, and applications described in the values files. This is the seed for all patterns. 
+| golang-external-secrets | Helm chart for External Secrets Operator that integrates external secret management systems like AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault, IBM Cloud Secrets Manager, Akeyless. +| hashicorp-vault | Contains the helm chart for HashiCorp Vault. +|operator-install | Contains the helm chart used by the Validated Patterns Operator to create the openshift-gitops component and create the initial ArgoCD applications for the Validated Pattern. +| scripts | Directory which contains utility scripts used by the Validated Pattern Framework. +|=== + +== The `charts` directory + +This is where validated patterns keep the helm charts for a pattern. The helm charts are used to deploy and manage the various components of the applications deployed at a site. By convention, the charts are broken out by site location. You may see `datacenter`, `hub`, `factory`, or other site names in there. + +[NOTE] +==== +The site naming convention is flexible, allowing users to modify it to suit their environment. +==== + +Each site has sub-directories based on application or library component groupings. + +From https://helm.sh/docs/chart_template_guide/getting_started/[Helm documentation:] +_Application charts_ are a collection of templates that can be packaged into versioned archives to be deployed. + +_Library charts_ provide useful utilities or functions for the chart developer. They're included as a dependency of application charts to inject those utilities and functions into the rendering +pipeline. Library charts do not define any templates and therefore cannot be deployed. + +These groupings are used by OpenShift GitOps to deploy into the cluster. The configurations for each of the components inside an application are synced every three minutes by OpenShift GitOps to make sure that the site is up to date. The configuration can also be synced manually if you do not wish to wait up to three minutes. + +[source,text] +---- +. +├── datacenter +│ ├── external-secrets +│ ├── manuela-data-lake +│ ├── manuela-tst +│ ├── opendatahub +│ └── pipelines +├── factory +│ └── manuela-stormshift +└── secrets + └── pipeline-setup +---- +The configuration YAML for each component of the application is stored in the templates subdirectory. + +[source,text] +---- +. +├── external-secrets +│ ├── Chart.yaml +│ ├── templates +│ └── values.yaml +├── manuela-data-lake +│ ├── Chart.yaml +│ ├── templates +│ └── values.yaml +├── manuela-tst +│ ├── Chart.yaml +│ ├── templates +│ └── values.yaml +├── opendatahub +│ ├── Chart.yaml +│ ├── templates +│ └── values.yaml +└── pipelines + ├── Chart.yaml + ├── extra + ├── images + ├── README.md + ├── templates + └── values.yaml +---- +== The `scripts` directory + +In some cases, an Operator or Helm chart may require additional configuration. When extra code is needed for deployment, it should be placed in the scripts directory. Typically, consumers of a validated pattern won't interact directly with these scripts, as they are executed by the existing automation (for example through the Makefile or OpenShift GitOps). If extra adjustments are required for your application, place the scripts here and run them through automation. The scripts directory should generally be treated as off-limits unless you're modifying the framework itself. + +== Applications and `values-` files + +Helm uses `values.yaml` files to pass variables into charts. Variables in the `values.yaml` file can be overridden in the following ways: + +. 
By a `values.yaml` file in the parent directory +. By a values file passed into the `helm ` command using `-f` +. By specifying an override individual value in the the `helm` command with `--set` + +For more information on values files and their usage see the https://helm.sh/docs/chart_template_guide/values_files/[values files section] of the Helm documentation. + +This section is meant as an introduction to the `values-` files that the framework uses to override values in the chart templates. In the Getting Started pages there will be more specific usage details. + +=== There are three types of `value-` files. + +. *`values-global.yaml`*: +This is used to set variables for helm charts across the pattern. It contains the name of the pattern and sets some other variables for artifacts like, image registry, Git repositories, GitOps syncPolicy etc. +. *`values-.yaml`*: +Each specific site requires information regarding what applications and subscriptions are required for that site. This file contains a list of namespaces, applications, subscriptions, the operator versions etc. for that site. +. *`values-secret.yaml.template`*: +All patterns require some secrets for artifacts included in the pattern. For example credentials for GitHub, AWS, or Quay.io. The framework provides a safe way to load those secrets into a vault for consumption by the pattern. This template file can be copied to your home directory, the secret values applied, and the validated pattern will go look for `values-secrets.yaml` in your home directory. *Do not leave a `values-secrets.yaml` file in your cloned git directory or it may end up in your (often public) Git repository, like GitHub.* ++ +[NOTE] +==== +This file has nothing to do with helm and can be copied either to your home directory or to the `~/.config/validatedpatterns/` folder. The naming should be `values-secret-.yaml`. Ideally it should be encrypted with the ansible-vault. +==== + +=== Values files can have some overrides. + +. Version overrides can be used to set specific values for OCP versions. For example *`values-hub-4.16.yaml`* allows you to tweak a specific value for OCP 4.16 on the Hub cluster. +. Version overrides can be used to set specific values for specific cloud providers. For example *`values-AWS.yaml`* would allow you to tweak a specific value for all cluster groups deployed on AWS. + +=== Other combination examples include: + +. *`values-hub-Azure.yaml`* only apply this Azure tweak on the hub cluster. +. *`values-4.16.yaml`* apply these OCP 4.16 tweaks to all cluster groups in this pattern. + +Current supported cloud providers include *`AWS`*, *`Azure`*, *`GCP`*, and *Nutanix*. + +== Environment values and Helm + +The purpose of the values files is to leverage Helm's templating capabilities, allowing you to dynamically substitute values into your charts. This makes the pattern very portable. + +The following `messaging-route.yaml` example shows how the AMQ messaging service is using values set in the `values-global.yaml` file for Industrial Edge. + +[source,yaml] +---- +apiVersion: route.openshift.io/v1 +kind: Route +metadata: + labels: + app: messaging + name: messaging +spec: + host: messaging-manuela-tst-all.apps.{{ .Values.global.datacenter.clustername }}.{{ .Values.global.datacenter.domain }} + port: + targetPort: 3000-tcp + to: + kind: Service + name: messaging + weight: 100 + wildcardPolicy: None +---- + +The values in the `values-global.yaml` will be substituted when the YAML is applied to the cluster. 
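
To see the override order from the list above in action, you can render the same chart twice and compare the output. This is a sketch only: the release name is arbitrary and the chart path assumes an Industrial Edge checkout.

[source,shell]
----
# File-based values first ...
helm template manuela-tst ./charts/datacenter/manuela-tst -f values-global.yaml

# ... then override a single key on the command line; --set takes precedence over -f for that key
helm template manuela-tst ./charts/datacenter/manuela-tst \
  -f values-global.yaml \
  --set global.datacenter.clustername=my-test-cluster
----

The `values-global.yaml` excerpt that supplies these defaults follows.
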
+ +[source,yaml] +---- +global: + pattern: industrial-edge + +... + + datacenter: + clustername: ipbabble-dc + domain: blueprints.rhecoeng.com + + edge: + clustername: ipbabble-f1 + domain: blueprints.rhecoeng.com +---- \ No newline at end of file From a8e47b3ef183e536aa40bdf80bfc1c3ebeae89f0 Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Wed, 18 Sep 2024 23:11:12 +0100 Subject: [PATCH 04/22] adding more content --- content/learn/vp_agof.adoc | 2 ++ 1 file changed, 2 insertions(+) diff --git a/content/learn/vp_agof.adoc b/content/learn/vp_agof.adoc index ddb37c51d..d907769fd 100644 --- a/content/learn/vp_agof.adoc +++ b/content/learn/vp_agof.adoc @@ -20,6 +20,8 @@ AGOF comes with code to install VMs in AWS, if desired, or else it can work with When there is no access to OpenShift Container Platform, AGOF provides a standalone solution for deploying and managing validated patterns. This framework leverages GitOps principles without relying on OpenShift. +Ansible Automation Platform includes automation controller, a web-based UI interface allows users to define, operate, scale, and delegate automation across their enterprise. + == Pattern directories tour Examining any of the existing patterns reveals the important organizational part of the validated patterns framework. Let's take a look at a couple of the existing validated patterns: Multicluster GitOps and Industrial Edge. From c9055d4a61387e44445c7efad0d41c9023a50d33 Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Mon, 23 Sep 2024 14:51:31 +0100 Subject: [PATCH 05/22] Adding more info --- content/learn/vp_agof.adoc | 296 ++----------------------------------- 1 file changed, 11 insertions(+), 285 deletions(-) diff --git a/content/learn/vp_agof.adoc b/content/learn/vp_agof.adoc index d907769fd..039684ee2 100644 --- a/content/learn/vp_agof.adoc +++ b/content/learn/vp_agof.adoc @@ -14,300 +14,26 @@ include::modules/comm-attributes.adoc[] == About the Ansible GitOps framework (AGOF)for validated patternS -The link:/patterns/agof/[Ansible GitOps Framework] provides an extensible framework to do GitOps with https://docs.ansible.com/platform.html[Ansible Automation Platform] (AAP), and as such provides useful facilities for developing patterns (community and validated) that function with Ansible Automation Platform as the GitOps engine. - -AGOF comes with code to install VMs in AWS, if desired, or else it can work with previously provisioned VMs, or a functional AAP Controller endpoint +The link:/patterns/ansible-gitops-framework/[Ansible GitOps Framework] provides an extensible framework to do GitOps with https://docs.ansible.com/platform.html[Ansible Automation Platform] (AAP), and as such provides useful facilities for developing patterns (community and validated) that function with Ansible Automation Platform as the GitOps engine. When there is no access to OpenShift Container Platform, AGOF provides a standalone solution for deploying and managing validated patterns. This framework leverages GitOps principles without relying on OpenShift. -Ansible Automation Platform includes automation controller, a web-based UI interface allows users to define, operate, scale, and delegate automation across their enterprise. - -== Pattern directories tour - -Examining any of the existing patterns reveals the important organizational part of the validated patterns framework. Let's take a look at a couple of the existing validated patterns: Multicluster GitOps and Industrial Edge. 
- -=== Multicloud GitOps - -The Multicloud GitOps approach enables centralized management of multiple cloud deployments across both public and private clouds, including workloads and the secure handling of secrets across environments. This approach is built on patterns that consist of two key components: a "common" element, which serves as a foundational framework shared by nearly all patterns, and a pattern-specific element that builds on the common framework with tailored content. This section focuses on the standardized directory structure used in Multicloud GitOps repositories for validated patterns in multicloud setups. By following the well-structured directory layout illustrated here, teams can streamline deployment processes across clouds, reduce configuration drift, enhance automation, and maintain a single source of truth for both infrastructure and application code. - -[source,text] ----- -~/g/multicloud-gitops on main ◦ tree -L 2 -. -├── ansible -├── ansible.cfg -├── charts -│ └── all -│ └── region -├── common -│ ├── acm -| ├── ansible -| ├── Changes.md -│ ├── clustergroup -│ ├── common -> . -│ ├── examples -│ ├── golang-external-secrets -│ ├── hashicorp-vault -│ ├── letsencrypt -| ├── LICENSE -| ├── Makefile -| ├── operator-install -| ├── README.md -│ ├── reference-output.yaml -│ ├── scripts -│ ├── tests -│ └── values-global.yaml -├── LICENSE -├── Makefile -├── overrides -│ ├── values-AWS.yaml -│ └── values-IBMCloud.yaml -├── pattern.sh -> ./common/scripts/pattern-util.sh -├── README.md -├── tests -├── values-global.yaml -├── values-global-one.yaml -├── values-hub.yaml -├── values-secret-multicloud-gitops.yaml -└── values-secret.yaml.template - -20 directories, 77 files ----- - -First we notice some subdirectories: charts and common, along with `values-` yaml files. - -=== Industrial edge - -[source,text] ----- -~/g/industrial-edge on stable-2.0 ◦ tree -L 2 -. -├── ansible -├── ansible.cfg -├── Changes.md -├── charts -│ ├── datacenter -│ └─ factory -| └── secrets -├── common -│ ├── acm -| ├── ansible -| ├── Changes.md -│ ├── clustergroup -│ ├── common -> . -│ ├── examples -│ ├── golang-external-secrets -│ ├── hashicorp-vault -│ ├── letsencrypt -| ├── LICENSE -| ├── Makefile -| ├── operator-install -│ ├── scripts -│ ├── tests -│ └── values-global.yaml -├── docs -│ ├── images -│ └── old-deployment-map.txt -├── images -│ ├── import-cluster.png -│ ├── import-with-kubeconfig.png -│ └── launch-acm-console.png -├── LICENSE -├── Makefile -├── overrides -| ├── values-prod-imagedata.yaml -│ └── values-test-imagedata.yaml -├── README.md -|── pattern.sh -> ./common/scripts/pattern-util.sh -├── scripts -│ ├── secret.sh -│ └── sleep-seed.sh -├── SUPPORT_AGREEMENT.md -├── tests -├── values-datacenter.yaml -├── values-factory.yaml -├── values-global.yaml -├── values-hub.yaml -> values-datacenter.yaml -└── values-secret.yaml.template - -25 directories, 98 files ----- - -We see the same or similar files in the both patterns directories. - -== The `common` directory - -The core components that make the Validated Patterns framework are contained in the common repository. These include: - -* OpenShift GitOps configuration -* Supports our clusterGroup and GitOps policies -* Validated Pattern framework build scripts and Makefiles -* Secrets Management with HashiCorp Vault -* Operator CRDs and other assets -* Various utility scripts - -The common repository contains all the shared manifests for the Validated Patterns Framework. 
These components are configured to work together within the GitOps framework. Instead of duplicating configurations across patterns, shared technologies are centralized in this common directory. Pattern-specific post-deployment configurations, if needed, should be added to the Helm charts in the charts directory. Typically, you won't need to modify the common directory unless working on the framework itself. - -=== Breakdown of common repository - -This table details the key components of the common repository: - -[cols="2,2", options="header"] -|=== -| Component | Description - -| acm | Contains the helm charts which contains policies and is used to configure the deployment of the Advance Cluster Manager. -| ansible | This directory contains the ansible roles and modules that support the secrets management for a pattern. -| clustergroup | Contains the helm chart used to create namespace, subscriptions, projects, and applications described in the values files. This is the seed for all patterns. -| golang-external-secrets | Helm chart for External Secrets Operator that integrates external secret management systems like AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault, IBM Cloud Secrets Manager, Akeyless. -| hashicorp-vault | Contains the helm chart for HashiCorp Vault. -|operator-install | Contains the helm chart used by the Validated Patterns Operator to create the openshift-gitops component and create the initial ArgoCD applications for the Validated Pattern. -| scripts | Directory which contains utility scripts used by the Validated Pattern Framework. -|=== - -== The `charts` directory - -This is where validated patterns keep the helm charts for a pattern. The helm charts are used to deploy and manage the various components of the applications deployed at a site. By convention, the charts are broken out by site location. You may see `datacenter`, `hub`, `factory`, or other site names in there. - -[NOTE] -==== -The site naming convention is flexible, allowing users to modify it to suit their environment. -==== - -Each site has sub-directories based on application or library component groupings. - -From https://helm.sh/docs/chart_template_guide/getting_started/[Helm documentation:] -_Application charts_ are a collection of templates that can be packaged into versioned archives to be deployed. - -_Library charts_ provide useful utilities or functions for the chart developer. They're included as a dependency of application charts to inject those utilities and functions into the rendering -pipeline. Library charts do not define any templates and therefore cannot be deployed. - -These groupings are used by OpenShift GitOps to deploy into the cluster. The configurations for each of the components inside an application are synced every three minutes by OpenShift GitOps to make sure that the site is up to date. The configuration can also be synced manually if you do not wish to wait up to three minutes. - -[source,text] ----- -. -├── datacenter -│ ├── external-secrets -│ ├── manuela-data-lake -│ ├── manuela-tst -│ ├── opendatahub -│ └── pipelines -├── factory -│ └── manuela-stormshift -└── secrets - └── pipeline-setup ----- -The configuration YAML for each component of the application is stored in the templates subdirectory. - -[source,text] ----- -. 
-├── external-secrets -│ ├── Chart.yaml -│ ├── templates -│ └── values.yaml -├── manuela-data-lake -│ ├── Chart.yaml -│ ├── templates -│ └── values.yaml -├── manuela-tst -│ ├── Chart.yaml -│ ├── templates -│ └── values.yaml -├── opendatahub -│ ├── Chart.yaml -│ ├── templates -│ └── values.yaml -└── pipelines - ├── Chart.yaml - ├── extra - ├── images - ├── README.md - ├── templates - └── values.yaml ----- -== The `scripts` directory - -In some cases, an Operator or Helm chart may require additional configuration. When extra code is needed for deployment, it should be placed in the scripts directory. Typically, consumers of a validated pattern won't interact directly with these scripts, as they are executed by the existing automation (for example through the Makefile or OpenShift GitOps). If extra adjustments are required for your application, place the scripts here and run them through automation. The scripts directory should generally be treated as off-limits unless you're modifying the framework itself. - -== Applications and `values-` files - -Helm uses `values.yaml` files to pass variables into charts. Variables in the `values.yaml` file can be overridden in the following ways: - -. By a `values.yaml` file in the parent directory -. By a values file passed into the `helm ` command using `-f` -. By specifying an override individual value in the the `helm` command with `--set` - -For more information on values files and their usage see the https://helm.sh/docs/chart_template_guide/values_files/[values files section] of the Helm documentation. - -This section is meant as an introduction to the `values-` files that the framework uses to override values in the chart templates. In the Getting Started pages there will be more specific usage details. - -=== There are three types of `value-` files. - -. *`values-global.yaml`*: -This is used to set variables for helm charts across the pattern. It contains the name of the pattern and sets some other variables for artifacts like, image registry, Git repositories, GitOps syncPolicy etc. -. *`values-.yaml`*: -Each specific site requires information regarding what applications and subscriptions are required for that site. This file contains a list of namespaces, applications, subscriptions, the operator versions etc. for that site. -. *`values-secret.yaml.template`*: -All patterns require some secrets for artifacts included in the pattern. For example credentials for GitHub, AWS, or Quay.io. The framework provides a safe way to load those secrets into a vault for consumption by the pattern. This template file can be copied to your home directory, the secret values applied, and the validated pattern will go look for `values-secrets.yaml` in your home directory. *Do not leave a `values-secrets.yaml` file in your cloned git directory or it may end up in your (often public) Git repository, like GitHub.* -+ -[NOTE] -==== -This file has nothing to do with helm and can be copied either to your home directory or to the `~/.config/validatedpatterns/` folder. The naming should be `values-secret-.yaml`. Ideally it should be encrypted with the ansible-vault. -==== - -=== Values files can have some overrides. - -. Version overrides can be used to set specific values for OCP versions. For example *`values-hub-4.16.yaml`* allows you to tweak a specific value for OCP 4.16 on the Hub cluster. -. Version overrides can be used to set specific values for specific cloud providers. 
For example *`values-AWS.yaml`* would allow you to tweak a specific value for all cluster groups deployed on AWS. - -=== Other combination examples include: - -. *`values-hub-Azure.yaml`* only apply this Azure tweak on the hub cluster. -. *`values-4.16.yaml`* apply these OCP 4.16 tweaks to all cluster groups in this pattern. - -Current supported cloud providers include *`AWS`*, *`Azure`*, *`GCP`*, and *Nutanix*. +Ansible Automation Platform includes automation controller, a web-based UI interface allows users to define, operate, scale, and delegate automation across their enterprise -== Environment values and Helm +The repository at https://github.com/validatedpatterns/agof/[Ansible GitOps Framework] provides code for installing VMs on AWS, if needed. Alternatively, it can be used with existing VMs or a functional AAP Controller endpoint. -The purpose of the values files is to leverage Helm's templating capabilities, allowing you to dynamically substitute values into your charts. This makes the pattern very portable. +The link:https://github.com/validatedpatterns/agof/[Ansible GitOps Framework] repository contains code to set up the infrastructure for applying `controller_configuration` to an AAP instance. It includes some predefined configurations and practices to make the infrastructure as code repository more user-friendly and standardized. An AGOF pattern for example link:https://https://github.com/mhjacks/agof_demo_config[demo] is primarily an IaC infrastructure as code (IaC) artifact designed to be used with the `controller_configuration` collection. -The following `messaging-route.yaml` example shows how the AMQ messaging service is using values set in the `values-global.yaml` file for Industrial Edge. +== Role of the Ansible Controller -[source,yaml] ----- -apiVersion: route.openshift.io/v1 -kind: Route -metadata: - labels: - app: messaging - name: messaging -spec: - host: messaging-manuela-tst-all.apps.{{ .Values.global.datacenter.clustername }}.{{ .Values.global.datacenter.domain }} - port: - targetPort: 3000-tcp - to: - kind: Service - name: messaging - weight: 100 - wildcardPolicy: None ----- +In the Ansible GitOps framework, an ansible controller refers to any machine that initiates and runs a playbook. This includes systems running ansible core or open source Ansible, where the machine executing the playbooks also acts as the controller. AAP provides the automation controller which is Red Hat's productized ansible controller. -The values in the `values-global.yaml` will be substituted when the YAML is applied to the cluster. +THe link:https://docs.ansible.com/platform.html[Automation controller] is the command and control center for Red Hat Ansible Automation Platform, replacing Ansible Tower. It includes a webUI, API, role-based access control (RBAC), a workflow visualizer, and continuous integration and continuous delivery (CI/CD) integrations to help you organize and manage automation across your enterprise. -[source,yaml] ----- -global: - pattern: industrial-edge +The Automation Controller serves as the centralized platform for managing and executing automation across infrastructure. It allows users to create job templates that standardize the deployment and execution of Ansible playbooks, making automation more consistent and reusable. It integrates essential components such as execution environments for consistent automation execution, projects (repositories for automation content), inventories (target endpoints), and credentials (for secure access to resources). -... 
+The webUI provides an intuitive interface to build, monitor, and manage automation workflows, while the API offers seamless integration with other tools, such as CI/CD pipelines or orchestration platforms. Overall, the Automation Controller streamlines the automation lifecycle, ensuring a scalable, secure, and maintainable automation environment. - datacenter: - clustername: ipbabble-dc - domain: blueprints.rhecoeng.com +== Ansible framework methods - edge: - clustername: ipbabble-f1 - domain: blueprints.rhecoeng.com ----- \ No newline at end of file +The default installation provides an AAP 2.4 installation deployed by using the Containerized Installer, with services deployed this way: \ No newline at end of file From 04ada1e82a4f2e1c10bf78b5181b0994cf7656bc Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Wed, 25 Sep 2024 12:26:39 +0100 Subject: [PATCH 06/22] Adding new section on running default install --- content/learn/vp_agof.adoc | 83 +++++++++- content/learn/vp_agof_proc.adoc | 277 ++++++++++++++++++++++++++++++++ 2 files changed, 357 insertions(+), 3 deletions(-) create mode 100644 content/learn/vp_agof_proc.adoc diff --git a/content/learn/vp_agof.adoc b/content/learn/vp_agof.adoc index 039684ee2..f43ec4d45 100644 --- a/content/learn/vp_agof.adoc +++ b/content/learn/vp_agof.adoc @@ -12,7 +12,7 @@ aliases: /ocp-framework/agof/ :_content-type: ASSEMBLY include::modules/comm-attributes.adoc[] -== About the Ansible GitOps framework (AGOF)for validated patternS +== About the Ansible GitOps framework (AGOF)for validated patterns The link:/patterns/ansible-gitops-framework/[Ansible GitOps Framework] provides an extensible framework to do GitOps with https://docs.ansible.com/platform.html[Ansible Automation Platform] (AAP), and as such provides useful facilities for developing patterns (community and validated) that function with Ansible Automation Platform as the GitOps engine. @@ -22,7 +22,7 @@ Ansible Automation Platform includes automation controller, a web-based UI inter The repository at https://github.com/validatedpatterns/agof/[Ansible GitOps Framework] provides code for installing VMs on AWS, if needed. Alternatively, it can be used with existing VMs or a functional AAP Controller endpoint. -The link:https://github.com/validatedpatterns/agof/[Ansible GitOps Framework] repository contains code to set up the infrastructure for applying `controller_configuration` to an AAP instance. It includes some predefined configurations and practices to make the infrastructure as code repository more user-friendly and standardized. An AGOF pattern for example link:https://https://github.com/mhjacks/agof_demo_config[demo] is primarily an IaC infrastructure as code (IaC) artifact designed to be used with the `controller_configuration` collection. +The link:https://github.com/validatedpatterns/agof/[Ansible GitOps Framework] repository contains code to set up the infrastructure for applying `controller_configuration` to an AAP instance. It includes some predefined configurations and practices to make the infrastructure as code repository more user-friendly and standardized. An AGOF pattern for example link:https://github.com/mhjacks/agof_demo_config[demo] is primarily an IaC infrastructure as code (IaC) artifact designed to be used with the `controller_configuration` collection. 
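+
+To make the relationship between an AGOF pattern repository and the collection concrete, the following is a minimal sketch (not taken from the AGOF repository itself) of a playbook that loads a pattern's `controller_config.yml` and hands it to the collection's `dispatch` role, assuming the collection is installed as `infra.controller_configuration` and that the controller connection variables (`controller_hostname`, `controller_username`, `controller_password`) are supplied alongside it. The file path and play name are illustrative assumptions.
+
+[source,yaml]
+----
+---
+# Minimal sketch: hand an AGOF-style IaC configuration to the dispatch role,
+# which applies the controller_* variables (projects, inventories, templates,
+# and so on) to the target AAP controller in the correct order.
+- name: Apply controller configuration from the IaC repository
+  hosts: localhost
+  connection: local
+  gather_facts: false
+  vars_files:
+    - controller_config.yml   # illustrative path to the pattern's config file
+  roles:
+    - infra.controller_configuration.dispatch
+----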
== Role of the Ansible Controller @@ -36,4 +36,81 @@ The webUI provides an intuitive interface to build, monitor, and manage automati == Ansible framework methods -The default installation provides an AAP 2.4 installation deployed by using the Containerized Installer, with services deployed this way: \ No newline at end of file +The three main methods for setting up an Ansible framework in relation to Ansible Automation Platform (AAP) 2.4 can be summarized as follows: + +=== Method 1: AWS-based install + +This method is ideal for organizations that prefer deploying AAP on AWS infrastructure. This default install process in AAP 2.4 uses AWS by default and offers a fully automated setup. It requires AWS credentials, builds an AWS image with Red Hat's ImageBuilder, and sets up AAP within an AWS VPC and subnet. The installer creates all the necessary resources, including AAP Controllers and, optionally, additional components such as Automation Hub. + +*Pros*: This is the easiest method if you already use AWS, as it automates the provisioning of resources, including VMs and network configurations. + +*Cons*: This requires AWS infrastructure and credentials. This is not ideal if you're working in an on-premises environment or a cloud platform other than AWS. + +[source,shell] +---- +./pattern.sh make install +---- + +=== Method 2: Pre-configured VMs Install + +This method allows the installation of AAP on pre-configured Red Hat Enterprise Linux (RHEL) VMs. It requires you to provide an inventory file that specifies details about the VMs or instances where AAP will be installed. It is designed for users with existing infrastructure who want to deploy AAP without depending on AWS. If you need to install a pattern on a cluster with a different topology than this, use the API install mechanism. + +*Pros*: Useful if you already have pre-configured VMs or bare-metal instances running RHEL. It allows greater flexibility and control over the environment. + +*Cons*: Requires more manual effort to configure VMs and may need additional customization for non-standard topologies. + +This model has been tested with up to two RHEL VMs (one for AAP and one for Hub). + +The requirements for this mode are as follows: + +* Must be running a version of RHEL that AAP supports +* Must be properly entitled with a subscription that makes the appropriate AAP repository available + +[source,shell] +---- +./pattern.sh make legacy_from_os_install INVENTORY=(your_inventory_file) +---- + +The path to `your_inventory_file` defaults to ~/inventory_agof if you do not specify one. + +.Example agof_inventory file + +[source,yaml] +---- +[build_control] +localhost + +[aap_controllers] +192.168.5.207 + +[automation_hub] + +[eda_controllers] + +[aap_controllers:vars] + +[automation_hub:vars] + +[all:vars] +ansible_user=myuser +ansible_ssh_pass=mypass +ansible_become_pass=mypass +ansible_remote_tmp=/tmp/.ansible +username=myuser +controller_hostname=192.168.5.207 +---- +=== Method 3: Custom Ansible controller (API install aka "Bare") + +In this method, you provide an existing Ansible Automation Platform (AAP) Controller endpoint, either on bare metal or in a private cloud, without needing AWS or pre-configured VMs. You specify the manifest, endpoint hostname, admin credentials, and pass the installation process to a predefined `controller_config_dir`. This is suitable for complex or custom topologies where you want full control over the deployment. 
+ +*Pros*: Provides maximum flexibility and is designed for advanced users who have their own AAP installations, either on-prem or in complex environments that do not fit into the default or AWS-centric model. + +*Cons*: Requires an existing AAP controller, which might not be ideal for users new to AAP or those looking for more hands-off installations. + +[source,shell] +---- +./pattern.sh make api_install +---- + +=== Containerized Installer Feature +It is important to note that AAP 2.4 offers a _containerized installer feature_, currently in Tech Preview. This new method has advantages, such as ease of deployment and managing containerized environments, making it the preferred choice for some. AGOF (Automation Governance on OpenShift Framework) uses this installer by default, and it may become a common method for AAP deployment as it matures. diff --git a/content/learn/vp_agof_proc.adoc b/content/learn/vp_agof_proc.adoc new file mode 100644 index 000000000..1cec14991 --- /dev/null +++ b/content/learn/vp_agof_proc.adoc @@ -0,0 +1,277 @@ +--- +menu: + learn: + parent: Validated patterns frameworks +title: Creating a validated pattern using the AGOF framework +weight: 24 +aliases: /ocp-framework/agof/ +--- + +:toc: +:imagesdir: /images +:_content-type: ASSEMBLY +include::modules/comm-attributes.adoc[] + +== Creating a validated pattern using the AGOF framework + +Whichever method you use to deploy a validated pattern you need to: + +. Clone the link:https://github.com/validatedpatterns-demos/agof_minimal_demo[Ansible GitOps Framework Minimal Demo] repository. This is a minimalistic pattern that demonstrates how to use the Ansible GitOps Framework. ++ +[source,shell] +---- +$ git clone https://github.com/validatedpatterns-demos/agof_minimal_demo.git +---- + +. Clone the link:https://github.com/validatedpatterns/agof[AGOF] repository. This serves as the provisioner for the pattern ++ +[source,shell] +---- +$ git clone https://github.com/validatedpatterns-demos/agof_minimal_demo.git +---- + +=== Deploying using the AWS-based install method + +This method: + +. Builds an AWS image using Red Hat's ImageBuilder. + +. Deploys that image onto a new AWS VPC and subnet. + +. Deploys AAP on that image using the command line installer. + +. Hands over the configuration of the AAP installation to the specified `controller_config_dir`. + +.Prerequisites + +You need to provide some key information to a file named `agof_vault.yml` created in your home directory. The key pieces of information needed are: + +* The name of a hosted zone created under Route 53 in AWS. For example this could be `demo-project.aws.validatedpatterns.io`. + +* Your AWS account `Account ID` + +* Your `aws key` + +* Your `secret access key` + + +.Procedure + +. Copy the `agof_vault_template.yml` from the cloned `agof` directory to your home directory and rename it `agof_vault.yml`. ++ +[source,shell] +---- +$ cp ~/agof/agof_vault_template.yml ~/agof_vault.yml +---- ++ +.agof_vault settings +[cols="30%,70%",options="header"] +|=== +| Argument | Description + +| `aws_account_nbr_vault` +a|Your AWS account number. This is needed for sharing composer images. + +| `aws_access_key_vault` +a|Your AWS access key + +| `aws_secret_key_vault` +a| Your AWS secret key + +| `pattern_prefix` +|A unique prefix to distinguish instances in AWS. Used as the pattern name and in public DNS entries. + +| `ec2_region` +| An AWS region that your account has access to. 
+ +| `offline_token` +a| A Red Hat offline token used to build the RHEL image on link:https://console.redhat.com[console.redhat.com]. + +[NOTE] +==== +Click the `GENERATE TOKEN` link at link:https://access.redhat.com/management/api[Red Hat API Tokens] to create the token. +==== + +| `redhat_username` +a|Red Hat Subscription username, used to log in to link:https://registry.redhat.io[registry.redhat.io] + +| `redhat_password` +a|Red Hat Subscription password, used to log in to link:https://registry.redhat.io[registry.redhat.io] + +| `admin_password` +| An admin password for AAP Controller and Automation Hub + +| `manifest_content` +a| Content for a manifest file to entitle AAP Controller. See below for an example of how to point to a local file. + +| `org_number_vault` +| The Organization Number (Org ID) attached to your Red Hat Subscription for RHEL and AAP. + +| `activation_key_vault` +| The name of an Activation Key to embed in the imagebuilder image. + +[NOTE] +==== +Click the `Create Activation Keys` link at link:https://console.redhat.com[console.redhat.com] > *Red Hat Enterprise Linux* > *Inventory* > *System Configuration* > *Activation Keys* to create an activation key. +==== + +| `skip_imagebuilder_build` +a| Set these variables to provide your own AMI, or to re-use an AMI previously generated with this process. When a previously built AMI is found this check does not take place. +#imagebuilder_ami: 'The ID of an AWS AMI image, preferably one that was built with this toolkit' + +| `automation_hub_token_vault` +a| A token associated with your AAP subscription used to retrieve Automation Hub content. + +[NOTE] +==== +Click the `Load token` link at link:https://console.redhat.com[console.redhat.com] > *Ansible Automation Platform* > *Automation Hub* > *Connect to Hub* to generate a token. +==== + +| `automation_hub_url_certified_vault` +| Optional: The private automation hub URL for certified content. + +| `automation_hub_url_validated_vault` +| Optional: The private automation hub URL for validated content. + +|=== + +. Edit the file and add the following: + +* `controller_config_dir:` setting it value to `{{ '~/agof_minimal_demo/config' | expanduser }}`. +* `db_password:`` setting an appropriate value for the postgres password for the DB instance for example `test`. + +. Optional: Create a subscription manifest by following the guidance at link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/htmlred_hat_ansible_automation_platform_operations_guide/assembly-aap-obtain-manifest-files#assembly-aap-obtain-manifest-files[Obtaining a manifest file] + +.. Update your `agof_vault.yml` file with the path to downloaded manifest zip file. 
For example add the following: ++ +[source,shell] +---- +controller_license_src_file: '~/Downloads/manifest__20240924T131518Z.zip' +manifest_content: "{{ lookup('file', controller_license_src_file) | b64encode }}" +---- + +.Example agof_vault.yml file + +[source,yaml] +---- +--- +aws_account_nbr_vault: '293265215425' +aws_access_key_vault: 'AKIAIJ6ZIKPAUZGF2643' +aws_secret_key_vault: 'gMC3Jy3/MZtOosjUDHy0Nl/2mp2HQok1JDfCQGKUR' + +pattern_prefix: 'foga' +pattern_dns_zone: 'aws.validatedpatterns.io' + +ec2_name_prefix: 'foga-testing' +ec2_region: 'us-east-1' + +offline_token: 'eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICI0NzQzYTkzMC03YmJiLTRkZGQtOTgzMS00ODcxNGRlZDc0YjUifQ.eyJpYXQiOjE3MjY4MjcxNDcsImp0aSI6ImJlNjdhZDQ4LWUzNWQtNDg1Yy04OTM2LTMwMzNmODgwM2Q0MSIsImlzcyI6Imh0dHBzOi8vc3NvLnJlZGhhdC5jb20vYXV0aC9yZWFsbXMvcmVkaGF0LWV4dGVybmFsIiwiYXVkIjoiaHR0cHM6Ly9zc28ucmVkaGF0LmNvbS9hdXRoL3JlYWxtcy9yZWRoYXQtZXh0ZXJuYWwiLCJzdWIiOiJmOjUyOGQ3NmZmLWY3MDgtNDNlZC04Y2Q1LWZlMTZmNGZlMGNlNjpyaG4tc3VwcG9ydC1rcXVpbm4iLCJ0eXAiOiJPZmZsaW5lIiwiYXpwIjoicmhzbS1hcGkiLCJzaWQiOiJhMGQ3YTEzYy1kNDM2LTQ1ZGEtOGZjZi1kOThlY2VjZDczNzkiLCJzY29wZSI6ImJhc2ljIHJvbGVzIHdlYi1vcmlnaW5zIGNsaWVudF90eXBlLnByZV9rYzI1IG9mZmxpbmVfYWNjZXNzIn0.482J5PUtr2TQCl-EUjmIwwCXe-o9_SYCuOABfpunxzGoTOdHevW7LQTGUrJ6FOrNPReuOdIwiA5KijowEVxeRw' + +redhat_username: 'rhn-support-myusername' +redhat_password: 'passwd01' + +admin_password: 'redhat123!' + +manifest_content: "Content for a manifest file to entitle AAP Controller. See below for an example of how to point to a local file" +#manifest_content: "{{ lookup('file', '~/Downloads/manifest_AVP_20230510T202608Z.zip') | b64encode }}" + +org_number_vault: "1778713" +activation_key_vault: "redhat123" + +# Set these variables to provide your own AMI, or to re-use an AMI previously generated with this process +#skip_imagebuilder_build: 'boolean: skips imagebuilder build process' +#imagebuilder_ami: 'The ID of an AWS AMI image, preferably one that was built with this toolkit' + +automation_hub_token_vault: 'eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICI0NzQzYTkzMC03YmJiLTRkZGQtOTgzMS00ODcxNGRlZDc0YjUifQ.eyJpYXQiOjE3MjY4MjIxNzEsImp0aSI6IjQ2Y2ZjZmU2LTY0ZTgtNDgyYS04Mjc1LWFlZjEzNTY3NjUxMiIsImlzcyI6Imh0dHBzOi8vc3NvLnJlZGhhdC5jb20vYXV0aC9yZWFsbXMvcmVkaGF0LWV4dGVybmFsIiwiYXVkIjoiaHR0cHM6Ly9zc28ucmVkaGF0LmNvbS9hdXRoL3JlYWxtcy9yZWRoYXQtZXh0ZXJuYWwiLCJzdWIiOiJmOjUyOGQ3NmZmLWY3MDgtNDNlZC04Y2Q1LWZlMTZmNGZlMGNlNjpyaG4tc3VwcG9ydC1rcXVpbm4iLCJ0eXAiOiJPZmZsaW5lIiwiYXpwIjoiY2xvdWQtc2VydmljZXMiLCJub25jZSI6IjI1NzQ3MDg1LTc2ZTEtNGIzZS05Y2U2LTE0MjJiNDY1ODAwMiIsInNpZCI6ImEwZDdhMTNjLWQ0MzYtNDVkYS04ZmNmLWQ5OGVjZWNkNzM3OSIsInNjb3BlIjoib3BlbmlkIGJhc2ljIGFwaS5pYW0uc2VydmljZV9hY2NvdW50cyByb2xlcyB3ZWItb3JpZ2lucyBjbGllbnRfdHlwZS5wcmVfa2MyNSBvZmZsaW5lX2FjY2VzcyJ9.pL3Ls9m0-AJqlRWdGnv7HEyIxA-PcQNR0RCtc8vqeMHaTgSME1h6Xd5rysyVzChRAHsPNv_kGwsgfx0DlnQ9jA' + +# These variables can be set but are optional. The previous (before AAP 2.4) conncept of sync-list was private +# to an account. +#automation_hub_url_certified_vault: 'The private automation hub URL for certified content' +#automation_hub_url_validated_vault: 'The private automation hub URL for validated content' + +controller_config_dir: "{{ '~/agof_minimal_demo/config' | expanduser }}" + +db_password: 'test' +---- + +. 
Run from the AGOF repository directory the following command : ++ +[source,shell] +---- +$ /pattern.sh make install +---- + + + +.Verification + +The default installation provides an AAP 2.4 installation deployed using the Containerized Installer, with services deployed this way: + +.agof_vault settings +[cols="30%,70%",options="header"] +|=== +| URL | Service + +| `https:{{ ec2_name_prefix }}.{{ domain }}:8443` +a| Controller API. + +| `https:{{ ec2_name_prefix }}.{{ domain }}:8444/` +a|Private Automation Hub + +| `https:/{{ ec2_name_prefix }}.{{ domain }}:8445/`` +a| EDA Automation Controller + +|=== + + + +=== Method 2: Pre-configured VMs Install + +This method allows the installation of AAP on pre-configured Red Hat Enterprise Linux (RHEL) VMs. It requires you to provide an inventory file that specifies details about the VMs or instances where AAP will be installed. It is designed for users with existing infrastructure who want to deploy AAP without depending on AWS. If you need to install a pattern on a cluster with a different topology than this, use the API install mechanism. + + + +[source,shell] +---- +./pattern.sh make legacy_from_os_install INVENTORY=(your_inventory_file) +---- + +The path to `your_inventory_file` defaults to ~/inventory_agof if you do not specify one. + +.Example agof_inventory file + +[source,yaml] +---- +[build_control] +localhost + +[aap_controllers] +192.168.5.207 + +[automation_hub] + +[eda_controllers] + +[aap_controllers:vars] + +[automation_hub:vars] + +[all:vars] +ansible_user=myuser +ansible_ssh_pass=mypass +ansible_become_pass=mypass +ansible_remote_tmp=/tmp/.ansible +username=myuser +controller_hostname=192.168.5.207 +---- + + +=== Method 3: Custom Ansible controller (API install) + +In this method, you provide an existing Ansible Automation Platform (AAP) Controller endpoint, either on bare metal or in a private cloud, without needing AWS or pre-configured VMs. You specify the manifest, endpoint hostname, admin credentials, and pass the installation process to a predefined `controller_config_dir`. This is suitable for complex or custom topologies where you want full control over the deployment. + + +[source,shell] +---- +./pattern.sh make api_install +---- \ No newline at end of file From ada7fc10424e505868e4b1b8a9686c3c3a6acc8b Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Wed, 25 Sep 2024 12:53:04 +0100 Subject: [PATCH 07/22] Trying to fix table not displaying --- content/learn/vp_agof.adoc | 48 +-------------------------------- content/learn/vp_agof_proc.adoc | 41 ++++++++++++++++------------ 2 files changed, 25 insertions(+), 64 deletions(-) diff --git a/content/learn/vp_agof.adoc b/content/learn/vp_agof.adoc index f43ec4d45..2837411d5 100644 --- a/content/learn/vp_agof.adoc +++ b/content/learn/vp_agof.adoc @@ -46,11 +46,6 @@ This method is ideal for organizations that prefer deploying AAP on AWS infrastr *Cons*: This requires AWS infrastructure and credentials. This is not ideal if you're working in an on-premises environment or a cloud platform other than AWS. -[source,shell] ----- -./pattern.sh make install ----- - === Method 2: Pre-configured VMs Install This method allows the installation of AAP on pre-configured Red Hat Enterprise Linux (RHEL) VMs. It requires you to provide an inventory file that specifies details about the VMs or instances where AAP will be installed. It is designed for users with existing infrastructure who want to deploy AAP without depending on AWS. 
If you need to install a pattern on a cluster with a different topology than this, use the API install mechanism. @@ -66,51 +61,10 @@ The requirements for this mode are as follows: * Must be running a version of RHEL that AAP supports * Must be properly entitled with a subscription that makes the appropriate AAP repository available -[source,shell] ----- -./pattern.sh make legacy_from_os_install INVENTORY=(your_inventory_file) ----- - -The path to `your_inventory_file` defaults to ~/inventory_agof if you do not specify one. - -.Example agof_inventory file - -[source,yaml] ----- -[build_control] -localhost - -[aap_controllers] -192.168.5.207 - -[automation_hub] - -[eda_controllers] - -[aap_controllers:vars] - -[automation_hub:vars] - -[all:vars] -ansible_user=myuser -ansible_ssh_pass=mypass -ansible_become_pass=mypass -ansible_remote_tmp=/tmp/.ansible -username=myuser -controller_hostname=192.168.5.207 ----- === Method 3: Custom Ansible controller (API install aka "Bare") In this method, you provide an existing Ansible Automation Platform (AAP) Controller endpoint, either on bare metal or in a private cloud, without needing AWS or pre-configured VMs. You specify the manifest, endpoint hostname, admin credentials, and pass the installation process to a predefined `controller_config_dir`. This is suitable for complex or custom topologies where you want full control over the deployment. *Pros*: Provides maximum flexibility and is designed for advanced users who have their own AAP installations, either on-prem or in complex environments that do not fit into the default or AWS-centric model. -*Cons*: Requires an existing AAP controller, which might not be ideal for users new to AAP or those looking for more hands-off installations. - -[source,shell] ----- -./pattern.sh make api_install ----- - -=== Containerized Installer Feature -It is important to note that AAP 2.4 offers a _containerized installer feature_, currently in Tech Preview. This new method has advantages, such as ease of deployment and managing containerized environments, making it the preferred choice for some. AGOF (Automation Governance on OpenShift Framework) uses this installer by default, and it may become a common method for AAP deployment as it matures. +*Cons*: Requires an existing AAP controller, which might not be ideal for users new to AAP or those looking for more hands-off installation \ No newline at end of file diff --git a/content/learn/vp_agof_proc.adoc b/content/learn/vp_agof_proc.adoc index 1cec14991..fb7c03392 100644 --- a/content/learn/vp_agof_proc.adoc +++ b/content/learn/vp_agof_proc.adoc @@ -46,7 +46,7 @@ This method: You need to provide some key information to a file named `agof_vault.yml` created in your home directory. The key pieces of information needed are: -* The name of a hosted zone created under Route 53 in AWS. For example this could be `demo-project.aws.validatedpatterns.io`. +* The name of a hosted zone created under Route 53 in AWS. For example this could be `aws.validatedpatterns.io`. * Your AWS account `Account ID` @@ -54,7 +54,6 @@ You need to provide some key information to a file named `agof_vault.yml` create * Your `secret access key` - .Procedure . Copy the `agof_vault_template.yml` from the cloned `agof` directory to your home directory and rename it `agof_vault.yml`. @@ -70,19 +69,19 @@ $ cp ~/agof/agof_vault_template.yml ~/agof_vault.yml | Argument | Description | `aws_account_nbr_vault` -a|Your AWS account number. This is needed for sharing composer images. 
+a| Your AWS account number. This is needed for sharing composer images. -| `aws_access_key_vault` -a|Your AWS access key +| `aws_access_key_vault` +a| Your AWS access key. | `aws_secret_key_vault` -a| Your AWS secret key +a| Your AWS secret key. | `pattern_prefix` -|A unique prefix to distinguish instances in AWS. Used as the pattern name and in public DNS entries. +a| A unique prefix to distinguish instances in AWS. Used as the pattern name and in public DNS entries. | `ec2_region` -| An AWS region that your account has access to. +a| An AWS region that your account has access to. | `offline_token` a| A Red Hat offline token used to build the RHEL image on link:https://console.redhat.com[console.redhat.com]. @@ -93,22 +92,22 @@ Click the `GENERATE TOKEN` link at link:https://access.redhat.com/management/api ==== | `redhat_username` -a|Red Hat Subscription username, used to log in to link:https://registry.redhat.io[registry.redhat.io] +a| Red Hat Subscription username, used to log in to link:https://registry.redhat.io[registry.redhat.io]. | `redhat_password` -a|Red Hat Subscription password, used to log in to link:https://registry.redhat.io[registry.redhat.io] +a| Red Hat Subscription password, used to log in to link:https://registry.redhat.io[registry.redhat.io]. | `admin_password` -| An admin password for AAP Controller and Automation Hub +a| An admin password for AAP Controller and Automation Hub. | `manifest_content` a| Content for a manifest file to entitle AAP Controller. See below for an example of how to point to a local file. | `org_number_vault` -| The Organization Number (Org ID) attached to your Red Hat Subscription for RHEL and AAP. +a| The Organization Number (Org ID) attached to your Red Hat Subscription for RHEL and AAP. | `activation_key_vault` -| The name of an Activation Key to embed in the imagebuilder image. +a| The name of an Activation Key to embed in the imagebuilder image. [NOTE] ==== @@ -128,13 +127,14 @@ Click the `Load token` link at link:https://console.redhat.com[console.redhat.co ==== | `automation_hub_url_certified_vault` -| Optional: The private automation hub URL for certified content. +a| Optional: The private automation hub URL for certified content. | `automation_hub_url_validated_vault` -| Optional: The private automation hub URL for validated content. +a| Optional: The private automation hub URL for validated content. |=== + . Edit the file and add the following: * `controller_config_dir:` setting it value to `{{ '~/agof_minimal_demo/config' | expanduser }}`. @@ -202,7 +202,6 @@ $ /pattern.sh make install ---- - .Verification The default installation provides an AAP 2.4 installation deployed using the Containerized Installer, with services deployed this way: @@ -224,6 +223,8 @@ a| EDA Automation Controller |=== +. Log in to `https:{{ ec2_name_prefix }}.{{ domain }}:8443` + === Method 2: Pre-configured VMs Install @@ -268,7 +269,13 @@ controller_hostname=192.168.5.207 === Method 3: Custom Ansible controller (API install) -In this method, you provide an existing Ansible Automation Platform (AAP) Controller endpoint, either on bare metal or in a private cloud, without needing AWS or pre-configured VMs. You specify the manifest, endpoint hostname, admin credentials, and pass the installation process to a predefined `controller_config_dir`. This is suitable for complex or custom topologies where you want full control over the deployment. 
+In this method, you provide an existing Ansible Automation Platform (AAP) Controller endpoint, either on bare metal or in a private cloud, without needing AWS or pre-configured VMs. + +Specify + +manifest +endpoint hostname +admin credentials, and pass the installation process to a predefined `controller_config_dir`. [source,shell] From ba41cd50a5647470dafc8598ce79e867fd477ce8 Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Wed, 25 Sep 2024 19:52:12 +0100 Subject: [PATCH 08/22] Adding more content --- content/learn/vp_agof_proc.adoc | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/content/learn/vp_agof_proc.adoc b/content/learn/vp_agof_proc.adoc index fb7c03392..7dbf3548a 100644 --- a/content/learn/vp_agof_proc.adoc +++ b/content/learn/vp_agof_proc.adoc @@ -222,13 +222,16 @@ a| EDA Automation Controller |=== +Once the demo is complete, though, what will happen is that you will have a project (the demo project), an inventory (consisting of the AAP controller), a credential (the private key from ec2), a job template (which runs a fact gather on the AAP controller) and a schedule that will run the job template every 5 minutes . Log in to `https:{{ ec2_name_prefix }}.{{ domain }}:8443` === Method 2: Pre-configured VMs Install -This method allows the installation of AAP on pre-configured Red Hat Enterprise Linux (RHEL) VMs. It requires you to provide an inventory file that specifies details about the VMs or instances where AAP will be installed. It is designed for users with existing infrastructure who want to deploy AAP without depending on AWS. If you need to install a pattern on a cluster with a different topology than this, use the API install mechanism. +This method allows the installation of AAP on pre-configured Red Hat Enterprise Linux (RHEL) VMs. It requires you to provide an inventory file that specifies details about the VMs or instances where AAP will be installed. + +It is designed for users with existing infrastructure who want to deploy AAP without depending on AWS. If you need to install a pattern on a cluster with a different topology than this, use the API install mechanism. From b2be9971c0f901798ad5e4f232e2f1bf83c5ac6f Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Fri, 27 Sep 2024 12:15:03 +0100 Subject: [PATCH 09/22] Adding content about collections --- content/learn/vp_agof_proc.adoc | 35 ++++++++++++++++++++++----------- 1 file changed, 24 insertions(+), 11 deletions(-) diff --git a/content/learn/vp_agof_proc.adoc b/content/learn/vp_agof_proc.adoc index 7dbf3548a..8bc295a6d 100644 --- a/content/learn/vp_agof_proc.adoc +++ b/content/learn/vp_agof_proc.adoc @@ -16,16 +16,16 @@ include::modules/comm-attributes.adoc[] Whichever method you use to deploy a validated pattern you need to: -. Clone the link:https://github.com/validatedpatterns-demos/agof_minimal_demo[Ansible GitOps Framework Minimal Demo] repository. This is a minimalistic pattern that demonstrates how to use the Ansible GitOps Framework. +. Clone the link:https://github.com/mhjacks/agof_demo_config[Ansible GitOps Framework Minimal Demo] repository. This is a minimal pattern that demonstrates how to use the Ansible GitOps Framework. + -[source,shell] +[source,terminal] ---- $ git clone https://github.com/validatedpatterns-demos/agof_minimal_demo.git ---- . Clone the link:https://github.com/validatedpatterns/agof[AGOF] repository. 
This serves as the provisioner for the pattern + -[source,shell] +[source,terminal] ---- $ git clone https://github.com/validatedpatterns-demos/agof_minimal_demo.git ---- @@ -137,8 +137,10 @@ a| Optional: The private automation hub URL for validated content. . Edit the file and add the following: -* `controller_config_dir:` setting it value to `{{ '~/agof_minimal_demo/config' | expanduser }}`. -* `db_password:`` setting an appropriate value for the postgres password for the DB instance for example `test`. +* `controller_config_dir:` set it's value to `{{ '~/agof_minimal_demo/config' | expanduser }}`. +* `db_password:` set an appropriate value for the postgres password for the DB instance for example `test`. +* `agof_statedir:` set its value to "{{ '~/agof' | expanduser }}" +* `agof_iac_repo:` set its value to "https://github.com/mhjacks/agof_demo_config.git" . Optional: Create a subscription manifest by following the guidance at link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/htmlred_hat_ansible_automation_platform_operations_guide/assembly-aap-obtain-manifest-files#assembly-aap-obtain-manifest-files[Obtaining a manifest file] @@ -192,19 +194,22 @@ automation_hub_token_vault: 'eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICI controller_config_dir: "{{ '~/agof_minimal_demo/config' | expanduser }}" db_password: 'test' + +agof_statedir: "{{ '~/agof' | expanduser }}" +agof_iac_repo: "https://github.com/mhjacks/agof_demo_config.git" ---- . Run from the AGOF repository directory the following command : + [source,shell] ---- -$ /pattern.sh make install +$ ./pattern.sh make install ---- .Verification -The default installation provides an AAP 2.4 installation deployed using the Containerized Installer, with services deployed this way: +The default installation provides an AAP 2.4 installation deployed using the containerized installer, with services deployed this way: .agof_vault settings [cols="30%,70%",options="header"] @@ -222,9 +227,19 @@ a| EDA Automation Controller |=== -Once the demo is complete, though, what will happen is that you will have a project (the demo project), an inventory (consisting of the AAP controller), a credential (the private key from ec2), a job template (which runs a fact gather on the AAP controller) and a schedule that will run the job template every 5 minutes +Once the install completes, you will have a project, an inventory (consisting of the AAP controller), a credential (the private key from ec2), a job template (which runs a fact gather on the AAP controller) and a schedule that will run the job template every 5 minutes, + +. Log in to `https:{{ ec2_name_prefix }}.{{ domain }}:8443` with the username `admin` and the password as configured in `admin_password` field of `agof_vault.yml`. + +. Under *Resources* > *Projects* verify the project *Ansible GitOps Framework Minimal Demo* is created with status *Successful*. -. Log in to `https:{{ ec2_name_prefix }}.{{ domain }}:8443` +. Under *Resources* > *Inventories* verify the inventory *AGOF Demo Inventory* is created with sync status *Success*. + +. Under *Resources* > *Templates* verify the job template *Ping Playbook* is created. + +. Under *Resources* > *Credentials* verify the ec2 ssh credential *ec2_ssh_credential* is created. + +. Under *Views* > *Schedules* verify the schedules are created. 
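+
+As an additional, optional check from the command line, the automation controller exposes an unauthenticated ping endpoint. The following is a quick sanity check, assuming the Controller API is published on port 8443 as shown in the table above; the `-k` flag skips certificate verification for the self-signed certificate:
+
+[source,terminal]
+----
+$ curl -k https://{{ ec2_name_prefix }}.{{ domain }}:8443/api/v2/ping/
+----
+
+A successful response returns a small JSON document that includes the installed controller version and instance details.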
=== Method 2: Pre-configured VMs Install @@ -233,8 +248,6 @@ This method allows the installation of AAP on pre-configured Red Hat Enterprise It is designed for users with existing infrastructure who want to deploy AAP without depending on AWS. If you need to install a pattern on a cluster with a different topology than this, use the API install mechanism. - - [source,shell] ---- ./pattern.sh make legacy_from_os_install INVENTORY=(your_inventory_file) From 9d3c41aa4cfa718875c206017e466c94dadcfc7c Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Fri, 27 Sep 2024 12:15:11 +0100 Subject: [PATCH 10/22] Adding content about collections 2 --- content/learn/vp_agof_config_controller.adoc | 102 +++++++++++++++++++ 1 file changed, 102 insertions(+) create mode 100644 content/learn/vp_agof_config_controller.adoc diff --git a/content/learn/vp_agof_config_controller.adoc b/content/learn/vp_agof_config_controller.adoc new file mode 100644 index 000000000..980e73d92 --- /dev/null +++ b/content/learn/vp_agof_config_controller.adoc @@ -0,0 +1,102 @@ +--- +menu: + learn: + parent: Validated patterns frameworks +title: Using the Controller Configuration collection +weight: 25 +aliases: /ocp-framework/agof/ +--- + +:toc: +:imagesdir: /images +:_content-type: ASSEMBLY +include::modules/comm-attributes.adoc[] + +== Using the Controller Configuration collection + +An AGOF pattern for example https://github.com/mhjacks/agof_demo_config is primarily an IaC (infrastructure as code) artifact designed to be used with the `controller_configuration` collections. + +The AGOF (Ansible GitOps Framework) repository https://github.com/validatedpatterns/agof contains the code and tools needed to set up a new Ansible Automation Platform (AAP) instance. This setup is automated, using Infrastructure as Code (IaC) practices. It also includes some specific preferences to make it easier for others to publicly share and manage this type of infrastructure setup. + +This approach ensures the automation controller configuration is version-controlled, dynamic, and reproducible. This method enables deployment automation with minimal manual intervention, which is useful for managing multiple controller instances or different environments in a CI/CD pipeline. + +For example for the AGOF minimal configuration demo the file https://github.com/mhjacks/agof_demo_config/blob/main/controller_config.yml is used with Ansible's Controller Configuration Collection, allowing the automation and management of Red Hat Ansible Automation Controller (formerly known as Ansible Tower). + +This file automates the creation, updating, or deletion of Ansible Controller objects (organizations, projects, inventories, credentials, templates, schedules). Sensitive information like passwords and keys are pulled dynamically from vaults, ensuring they are not hardcoded in the configuration. The project’s inventory and playbooks are managed through a Git repository, allowing for continuous integration and delivery (CI/CD) practices. Recurring playbook executions are scheduled automatically, eliminating the need for manual job triggers. + +== Key sections and parameters + +=== Vault variables + +*`orgname_vault: 'Demo Organization'`*:: +This specifies the organization name stored in a vault for security purposes. + +*`controller_username_vault: 'admin'`*:: +This is the Ansible Controller's username stored in a vault. + +*`controller_password_vault: '{{ admin_password }}'`*:: +The password is fetched dynamically from a vault for security purposes. 
+ +=== Dynamic variables + +*`controller_username: '{{ controller_username_vault }}'`*:: +The Ansible Controller username is retrieved from the vault variable. + +*`controller_password: '{{ controller_password_vault }}'`*:: +The password is dynamically fetched from the vault. + +=== Project configuration + +*`agof_demo_project_name: 'Ansible GitOps Framework Minimal Demo'`*:: +This variable holds the name of the project being managed in the controller. + +*`controller_projects`*:: +Two projects are defined: + * One with the name *'Demo Project'*, marked for deletion (`state: absent`). + * The other is the actual project that will be created, associated with the Git repository hosted on GitHub. + +For more information see, link:https://github.com/redhat-cop/controller_configuration/blob/devel/roles/projects/README.md[controller_configuration.projects]. + +=== Organizations + +*`controller_organizations`*:: +Ensures that the organization, defined in `orgname_vault`, exists within the controller. For more information see, link:https://github.com/redhat-cop/controller_configuration/blob/devel/roles/organizations/README.md[controller_configuration.organizations]. + +=== Inventory and inventory sources + +*`controller_inventories`*:: +Defines an inventory called *'AGOF Demo Inventory'* under the *'Demo Organization'*. For more information see, link:https://github.com/redhat-cop/controller_configuration/blob/devel/roles/inventories/README.md[controller_configuration.inventories]. + + +*`controller_inventory_sources`*:: +Configures an inventory source tied to the Git project. The inventory is pulled from source control management (SCM) and associated with credentials for SSH access (`ec2_ssh_credential`). For more information see, link:https://github.com/redhat-cop/controller_configuration/blob/devel/roles/inventory_sources/README.md[controller_configuration.inventory_sources]. + +=== Job templates + +*`controller_templates`*:: +Two job templates are managed: + * One named *'Demo Job Template'*, marked for deletion. + * The other, *'Ping Playbook'*, is tied to a specific playbook (`ping.yml`), inventory, and project, and will use the defined credentials for execution. + +For more information see, link:https://github.com/redhat-cop/controller_configuration/blob/devel/roles/job_templates/README.md[controller_configuration.job_templates]. + +=== Job scheduling + +*`controller_schedules`*:: +Configures a recurring job schedule to run the *'Ping Playbook'* template every 120 minutes. The schedule uses an iCal `RRULE` format. For more information see, link:https://github.com/redhat-cop/controller_configuration/blob/devel/roles/schedules/README.md[controller_configuration.schedules]. + +=== Credentials + +*`controller_credentials`*:: +A credential named *'ec2_ssh_credential'* is created with SSH access to the EC2 instances using the private key stored at the path specified in `demo_ssh_key_file`. For more information see, link:https://github.com/redhat-cop/controller_configuration/blob/devel/roles/credentials/README.md[controller_configuration.credentials]. + +=== Job Launching + +*`controller_launch_jobs`*:: +Automatically launches the *'Ping Playbook'* job template within the organization defined in `orgname_vault`. For more information see, link:https://github.com/redhat-cop/controller_configuration/blob/devel/roles/job_launch/README.md[controller_configuration.job_launch]. 
+ +For more information about the controller configuration see: + +* link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html-single/automation_controller_administration_guide/index[Red Hat Automation Controller Admin Guide] + +* link:https://github.com/redhat-cop/controller_configuration[Red Hat Communities of Practice Controller Configuration Collection] From 29eb43d86726aca572dff7622ab51c833c464b45 Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Fri, 27 Sep 2024 17:34:55 +0100 Subject: [PATCH 11/22] Adding content about collections 3 --- content/learn/vp_agof_proc.adoc | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/content/learn/vp_agof_proc.adoc b/content/learn/vp_agof_proc.adoc index 8bc295a6d..032335a4c 100644 --- a/content/learn/vp_agof_proc.adoc +++ b/content/learn/vp_agof_proc.adoc @@ -178,7 +178,7 @@ manifest_content: "Content for a manifest file to entitle AAP Controller. See be #manifest_content: "{{ lookup('file', '~/Downloads/manifest_AVP_20230510T202608Z.zip') | b64encode }}" org_number_vault: "1778713" -activation_key_vault: "redhat123" +activation_key_vault: "kevs-agof-key" # Set these variables to provide your own AMI, or to re-use an AMI previously generated with this process #skip_imagebuilder_build: 'boolean: skips imagebuilder build process' @@ -206,7 +206,6 @@ agof_iac_repo: "https://github.com/mhjacks/agof_demo_config.git" $ ./pattern.sh make install ---- - .Verification The default installation provides an AAP 2.4 installation deployed using the containerized installer, with services deployed this way: From 8bd6cf0a9c28bd1a8897a8b68968db49e5a34797 Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Mon, 30 Sep 2024 15:36:38 +0100 Subject: [PATCH 12/22] Adding docs on overview of install --- content/learn/vp_agof_config_controller.adoc | 98 ++++++++++++++++++++ content/learn/vp_agof_install_process.adoc | 33 +++++++ content/learn/vp_agof_proc.adoc | 19 +++- 3 files changed, 145 insertions(+), 5 deletions(-) create mode 100644 content/learn/vp_agof_install_process.adoc diff --git a/content/learn/vp_agof_config_controller.adoc b/content/learn/vp_agof_config_controller.adoc index 980e73d92..539d6eadc 100644 --- a/content/learn/vp_agof_config_controller.adoc +++ b/content/learn/vp_agof_config_controller.adoc @@ -22,6 +22,95 @@ This approach ensures the automation controller configuration is version-control For example for the AGOF minimal configuration demo the file https://github.com/mhjacks/agof_demo_config/blob/main/controller_config.yml is used with Ansible's Controller Configuration Collection, allowing the automation and management of Red Hat Ansible Automation Controller (formerly known as Ansible Tower). 
+[source,yaml] +---- +# vim: ft=yaml.ansible +--- +orgname_vault: 'Demo Organization' + +controller_username_vault: 'admin' +controller_password_vault: '{{ admin_password }}' + +controller_username: '{{ controller_username_vault }}' +controller_password: '{{ controller_password_vault }}' + +agof_demo_project_name: 'Ansible GitOps Framework Minimal Demo' + +controller_validate_certs: false + +controller_configuration_async_retries: 30 + +controller_settings: [] + +controller_projects: + - name: Demo Project + state: absent + + - name: '{{ agof_demo_project_name }}' + organization: "{{ orgname_vault }}" + scm_branch: main + scm_clean: "no" + scm_delete_on_update: "no" + scm_type: git + scm_update_on_launch: "yes" + scm_url: 'https://github.com/hybrid-cloud-demos/agof_minimal_demo.git' + +controller_organizations: + - name: '{{ orgname_vault }}' + +controller_inventories: + - name: 'AGOF Demo Inventory' + organization: '{{ orgname_vault }}' + +controller_inventory_sources: + - name: 'AGOF Demo Inventory Source' + inventory: 'AGOF Demo Inventory' + credential: 'ec2_ssh_credential' + overwrite: true + overwrite_vars: true + update_on_launch: true + source: scm + source_project: '{{ agof_demo_project_name }}' + source_path: 'inventory' + +controller_credential_types: [] + +controller_templates: + - name: Demo Job Template + state: absent + + - name: Ping Playbook + organization: "{{ orgname_vault }}" + project: '{{ agof_demo_project_name }}' + job_type: run + playbook: 'ansible/playbooks/ping.yml' + inventory: "AGOF Demo Inventory" + credentials: + - ec2_ssh_credential + +controller_schedules: + - name: Ping Playbook + organization: "{{ orgname_vault }}" + unified_job_template: Ping Playbook + rrule: DTSTART:20191219T130500Z RRULE:FREQ=MINUTELY;INTERVAL=120 + +demo_ssh_key_file: '~/{{ ec2_name_prefix }}/{{ ec2_name_prefix }}-private.pem' + +controller_credentials: + - name: ec2_ssh_credential + description: "EC2 SSH credential" + organization: '{{ orgname_vault }}' + credential_type: Machine + inputs: + username: 'ec2-user' + ssh_key_data: "{{ lookup('file', demo_ssh_key_file) }}" + become_method: sudo + +controller_launch_jobs: + - name: Ping Playbook + organization: "{{ orgname_vault }}" +---- + This file automates the creation, updating, or deletion of Ansible Controller objects (organizations, projects, inventories, credentials, templates, schedules). Sensitive information like passwords and keys are pulled dynamically from vaults, ensuring they are not hardcoded in the configuration. The project’s inventory and playbooks are managed through a Git repository, allowing for continuous integration and delivery (CI/CD) practices. Recurring playbook executions are scheduled automatically, eliminating the need for manual job triggers. == Key sections and parameters @@ -46,6 +135,7 @@ The Ansible Controller username is retrieved from the vault variable. The password is dynamically fetched from the vault. === Project configuration +Projects are collections of playbooks that are stored in a Git repository or SCM. This section can define how projects are configured in the Controller. *`agof_demo_project_name: 'Ansible GitOps Framework Minimal Demo'`*:: This variable holds the name of the project being managed in the controller. @@ -59,6 +149,8 @@ For more information see, link:https://github.com/redhat-cop/controller_configur === Organizations +Organizations represent a logical grouping for managing resources such as projects and inventories. 
+ *`controller_organizations`*:: Ensures that the organization, defined in `orgname_vault`, exists within the controller. For more information see, link:https://github.com/redhat-cop/controller_configuration/blob/devel/roles/organizations/README.md[controller_configuration.organizations]. @@ -73,6 +165,8 @@ Configures an inventory source tied to the Git project. The inventory is pulled === Job templates +Job Templates define a specific playbook run, associating it with inventories, credentials, and other settings. + *`controller_templates`*:: Two job templates are managed: * One named *'Demo Job Template'*, marked for deletion. @@ -87,6 +181,8 @@ Configures a recurring job schedule to run the *'Ping Playbook'* template every === Credentials +Credentials store authentication details for accessing external systems like clouds, networks, and SCMs. + *`controller_credentials`*:: A credential named *'ec2_ssh_credential'* is created with SSH access to the EC2 instances using the private key stored at the path specified in `demo_ssh_key_file`. For more information see, link:https://github.com/redhat-cop/controller_configuration/blob/devel/roles/credentials/README.md[controller_configuration.credentials]. @@ -100,3 +196,5 @@ For more information about the controller configuration see: * link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html-single/automation_controller_administration_guide/index[Red Hat Automation Controller Admin Guide] * link:https://github.com/redhat-cop/controller_configuration[Red Hat Communities of Practice Controller Configuration Collection] + +* link:https://galaxy.ansible.com/ui/repo/published/infra/controller_configuration/docs/[Galaxy Red Hat Communities of Practice Controller Configuration Collection documentation] diff --git a/content/learn/vp_agof_install_process.adoc b/content/learn/vp_agof_install_process.adoc new file mode 100644 index 000000000..4ac318cae --- /dev/null +++ b/content/learn/vp_agof_install_process.adoc @@ -0,0 +1,33 @@ +--- +menu: + learn: + parent: Validated patterns frameworks +title: Providing data to controller configuration collection +weight: 26 +aliases: /ocp-framework/agof/ +--- + +:toc: +:imagesdir: /images +:_content-type: ASSEMBLY +include::modules/comm-attributes.adoc[] + +== Understanding the Ansible GitOps Framework (AGOF) installation process + +The Ansible GitOps Framework (AGOF) is a powerful solution designed to automate the deployment and configuration of Ansible Automation Platform (AAP) environments using GitOps principles. It leverages Ansible to manage infrastructure and application provisioning in a declarative, version-controlled way. AGOF provides a structured approach to setting up cloud infrastructure, installing AAP components, and handing over control to the AAP Controller for ongoing automation and management. An overview of the steps involved in configuring a basic demo minimal demo application are listed here: + +=== 1. Pre-Init Environment (Bootstrap Ansible) + +* *Ansible Configuration*: The environment is initialized by generating an `ansible.cfg` file, which is configured with Automation Hub and public Galaxy endpoints. This process includes vault configuration to inject the Automation Hub token and install required Ansible collections from `requirements.yml`. +* *Optional Image Build*: Images are created using Red Hat Image Builder to produce AMIs, which include `cloud-init`, activation keys, and organization details. These images can be reused in future installations. 
+ +=== 2. Infrastructure Building (AWS Only) + +* *AWS Setup*: The framework sets up AWS infrastructure, including VPC, subnets, and security groups, using predefined roles. It also manages Route53 DNS entries for VMs. +* *VM Deployment*: Virtual machines are provisioned on EC2 with persistent hostnames and updates to `/etc/hosts` for AWS nodes. DNS entries are updated when IPs change after VM reboots. + +=== 3. Handover to Ansible Controller + +* *Controller Setup*: The Ansible Automation Platform (AAP) Controller and optionally the Automation Hub are installed and configured. Entitlements are managed via a manifest, and execution environments and collections are downloaded and prepared. +* *GitOps Mode*: After configuration, AGOF transitions to GitOps mode. All environment changes are managed via Git commits to the repositories used by the controller, ensuring declarative and automated infrastructure management from this point onward. + diff --git a/content/learn/vp_agof_proc.adoc b/content/learn/vp_agof_proc.adoc index 032335a4c..e3fa9f158 100644 --- a/content/learn/vp_agof_proc.adoc +++ b/content/learn/vp_agof_proc.adoc @@ -58,7 +58,7 @@ You need to provide some key information to a file named `agof_vault.yml` create . Copy the `agof_vault_template.yml` from the cloned `agof` directory to your home directory and rename it `agof_vault.yml`. + -[source,shell] +[source,terminal] ---- $ cp ~/agof/agof_vault_template.yml ~/agof_vault.yml ---- @@ -201,10 +201,11 @@ agof_iac_repo: "https://github.com/mhjacks/agof_demo_config.git" . Run from the AGOF repository directory the following command : + -[source,shell] +[source,terminal] ---- $ ./pattern.sh make install ---- +This command invokes the `controller_configuration` `dispatch` role on the controller endpoint based on the configuration found in the `controller_configuration_dir` and in your `agof_vault.yml` file. This controls all of the controller configuration. Based on the variables defined in the controller configuration, `dispatch` calls the necessary roles and modules in the right order to configure AAP to run your pattern. .Verification @@ -281,7 +282,6 @@ username=myuser controller_hostname=192.168.5.207 ---- - === Method 3: Custom Ansible controller (API install) In this method, you provide an existing Ansible Automation Platform (AAP) Controller endpoint, either on bare metal or in a private cloud, without needing AWS or pre-configured VMs. @@ -293,7 +293,16 @@ endpoint hostname admin credentials, and pass the installation process to a predefined `controller_config_dir`. 
-[source,shell] +[source,terminal] ---- ./pattern.sh make api_install ----- \ No newline at end of file +---- + +=== Tearing down the installation + +To tear down the installation run the following command: ++ +[source,terminal] +---- +$ ./pattern.sh make aws_uninstall +---- From a81150264b49f107e4f781e3e88ebaa7caabbcb0 Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Tue, 1 Oct 2024 12:31:01 +0100 Subject: [PATCH 13/22] Reordering content --- content/learn/vp_agof.adoc | 297 ++++++++++++++++++- content/learn/vp_agof_config_controller.adoc | 19 ++ content/learn/vp_agof_proc.adoc | 2 +- 3 files changed, 316 insertions(+), 2 deletions(-) diff --git a/content/learn/vp_agof.adoc b/content/learn/vp_agof.adoc index 2837411d5..b79cd5151 100644 --- a/content/learn/vp_agof.adoc +++ b/content/learn/vp_agof.adoc @@ -67,4 +67,299 @@ In this method, you provide an existing Ansible Automation Platform (AAP) Contro *Pros*: Provides maximum flexibility and is designed for advanced users who have their own AAP installations, either on-prem or in complex environments that do not fit into the default or AWS-centric model. -*Cons*: Requires an existing AAP controller, which might not be ideal for users new to AAP or those looking for more hands-off installation \ No newline at end of file +*Cons*: Requires an existing AAP controller, which might not be ideal for users new to AAP or those looking for more hands-off installation + +== Creating a validated pattern using the AGOF framework + +Whichever method you use to deploy a validated pattern you need to: + +. Clone the link:https://github.com/mhjacks/agof_demo_config[Ansible GitOps Framework Minimal Demo] repository. This is a minimal pattern that demonstrates how to use the Ansible GitOps Framework. ++ +[source,terminal] +---- +$ git clone https://github.com/validatedpatterns-demos/agof_minimal_demo.git +---- + +. Clone the link:https://github.com/validatedpatterns/agof[AGOF] repository. This serves as the provisioner for the pattern ++ +[source,terminal] +---- +$ git clone https://github.com/validatedpatterns-demos/agof_minimal_demo.git +---- + +=== Deploying using the AWS-based install method + +This method: + +. Builds an AWS image using Red Hat's ImageBuilder. + +. Deploys that image onto a new AWS VPC and subnet. + +. Deploys AAP on that image using the command line installer. + +. Hands over the configuration of the AAP installation to the specified `controller_config_dir`. + +.Prerequisites + +You need to provide some key information to a file named `agof_vault.yml` created in your home directory. The key pieces of information needed are: + +* The name of a hosted zone created under Route 53 in AWS. For example this could be `aws.validatedpatterns.io`. + +* Your AWS account `Account ID` + +* Your `aws key` + +* Your `secret access key` + +.Procedure + +. Copy the `agof_vault_template.yml` from the cloned `agof` directory to your home directory and rename it `agof_vault.yml`. ++ +[source,terminal] +---- +$ cp ~/agof/agof_vault_template.yml ~/agof_vault.yml +---- ++ +.agof_vault settings +[cols="30%,70%",options="header"] +|=== +| Argument | Description + +| `aws_account_nbr_vault` +a| Your AWS account number. This is needed for sharing composer images. + +| `aws_access_key_vault` +a| Your AWS access key. + +| `aws_secret_key_vault` +a| Your AWS secret key. + +| `pattern_prefix` +a| A unique prefix to distinguish instances in AWS. Used as the pattern name and in public DNS entries. + +| `ec2_region` +a| An AWS region that your account has access to. 
+ +| `offline_token` +a| A Red Hat offline token used to build the RHEL image on link:https://console.redhat.com[console.redhat.com]. + +[NOTE] +==== +Click the `GENERATE TOKEN` link at link:https://access.redhat.com/management/api[Red Hat API Tokens] to create the token. +==== + +| `redhat_username` +a| Red Hat Subscription username, used to log in to link:https://registry.redhat.io[registry.redhat.io]. + +| `redhat_password` +a| Red Hat Subscription password, used to log in to link:https://registry.redhat.io[registry.redhat.io]. + +| `admin_password` +a| An admin password for AAP Controller and Automation Hub. + +| `manifest_content` +a| Content for a manifest file to entitle AAP Controller. See below for an example of how to point to a local file. + +| `org_number_vault` +a| The Organization Number (Org ID) attached to your Red Hat Subscription for RHEL and AAP. + +| `activation_key_vault` +a| The name of an Activation Key to embed in the imagebuilder image. + +[NOTE] +==== +Click the `Create Activation Keys` link at link:https://console.redhat.com[console.redhat.com] > *Red Hat Enterprise Linux* > *Inventory* > *System Configuration* > *Activation Keys* to create an activation key. +==== + +| `skip_imagebuilder_build` +a| Set these variables to provide your own AMI, or to re-use an AMI previously generated with this process. When a previously built AMI is found this check does not take place. +#imagebuilder_ami: 'The ID of an AWS AMI image, preferably one that was built with this toolkit' + +| `automation_hub_token_vault` +a| A token associated with your AAP subscription used to retrieve Automation Hub content. + +[NOTE] +==== +Click the `Load token` link at link:https://console.redhat.com[console.redhat.com] > *Ansible Automation Platform* > *Automation Hub* > *Connect to Hub* to generate a token. +==== + +| `automation_hub_url_certified_vault` +a| Optional: The private automation hub URL for certified content. + +| `automation_hub_url_validated_vault` +a| Optional: The private automation hub URL for validated content. + +|=== + + +. Edit the file and add the following: + +* `controller_config_dir:` set it's value to `{{ '~/agof_minimal_demo/config' | expanduser }}`. +* `db_password:` set an appropriate value for the postgres password for the DB instance for example `test`. +* `agof_statedir:` set its value to "{{ '~/agof' | expanduser }}" +* `agof_iac_repo:` set its value to "https://github.com/mhjacks/agof_demo_config.git" + +. Optional: Create a subscription manifest by following the guidance at link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/htmlred_hat_ansible_automation_platform_operations_guide/assembly-aap-obtain-manifest-files#assembly-aap-obtain-manifest-files[Obtaining a manifest file] + +.. Update your `agof_vault.yml` file with the path to downloaded manifest zip file. 
For example add the following: ++ +[source,shell] +---- +controller_license_src_file: '~/Downloads/manifest__20240924T131518Z.zip' +manifest_content: "{{ lookup('file', controller_license_src_file) | b64encode }}" +---- + +.Example agof_vault.yml file + +[source,yaml] +---- +--- +aws_account_nbr_vault: '293265215425' +aws_access_key_vault: 'AKIAIJ6ZIKPAUZGF2643' +aws_secret_key_vault: 'gMC3Jy3/MZtOosjUDHy0Nl/2mp2HQok1JDfCQGKUR' + +pattern_prefix: 'foga' +pattern_dns_zone: 'aws.validatedpatterns.io' + +ec2_name_prefix: 'foga-testing' +ec2_region: 'us-east-1' + +offline_token: 'eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICI0NzQzYTkzMC03YmJiLTRkZGQtOTgzMS00ODcxNGRlZDc0YjUifQ.eyJpYXQiOjE3MjY4MjcxNDcsImp0aSI6ImJlNjdhZDQ4LWUzNWQtNDg1Yy04OTM2LTMwMzNmODgwM2Q0MSIsImlzcyI6Imh0dHBzOi8vc3NvLnJlZGhhdC5jb20vYXV0aC9yZWFsbXMvcmVkaGF0LWV4dGVybmFsIiwiYXVkIjoiaHR0cHM6Ly9zc28ucmVkaGF0LmNvbS9hdXRoL3JlYWxtcy9yZWRoYXQtZXh0ZXJuYWwiLCJzdWIiOiJmOjUyOGQ3NmZmLWY3MDgtNDNlZC04Y2Q1LWZlMTZmNGZlMGNlNjpyaG4tc3VwcG9ydC1rcXVpbm4iLCJ0eXAiOiJPZmZsaW5lIiwiYXpwIjoicmhzbS1hcGkiLCJzaWQiOiJhMGQ3YTEzYy1kNDM2LTQ1ZGEtOGZjZi1kOThlY2VjZDczNzkiLCJzY29wZSI6ImJhc2ljIHJvbGVzIHdlYi1vcmlnaW5zIGNsaWVudF90eXBlLnByZV9rYzI1IG9mZmxpbmVfYWNjZXNzIn0.482J5PUtr2TQCl-EUjmIwwCXe-o9_SYCuOABfpunxzGoTOdHevW7LQTGUrJ6FOrNPReuOdIwiA5KijowEVxeRw' + +redhat_username: 'rhn-support-myusername' +redhat_password: 'passwd01' + +admin_password: 'redhat123!' + +manifest_content: "Content for a manifest file to entitle AAP Controller. See below for an example of how to point to a local file" +#manifest_content: "{{ lookup('file', '~/Downloads/manifest_AVP_20230510T202608Z.zip') | b64encode }}" + +org_number_vault: "1778713" +activation_key_vault: "kevs-agof-key" + +# Set these variables to provide your own AMI, or to re-use an AMI previously generated with this process +#skip_imagebuilder_build: 'boolean: skips imagebuilder build process' +#imagebuilder_ami: 'The ID of an AWS AMI image, preferably one that was built with this toolkit' + +automation_hub_token_vault: 'eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICI0NzQzYTkzMC03YmJiLTRkZGQtOTgzMS00ODcxNGRlZDc0YjUifQ.eyJpYXQiOjE3MjY4MjIxNzEsImp0aSI6IjQ2Y2ZjZmU2LTY0ZTgtNDgyYS04Mjc1LWFlZjEzNTY3NjUxMiIsImlzcyI6Imh0dHBzOi8vc3NvLnJlZGhhdC5jb20vYXV0aC9yZWFsbXMvcmVkaGF0LWV4dGVybmFsIiwiYXVkIjoiaHR0cHM6Ly9zc28ucmVkaGF0LmNvbS9hdXRoL3JlYWxtcy9yZWRoYXQtZXh0ZXJuYWwiLCJzdWIiOiJmOjUyOGQ3NmZmLWY3MDgtNDNlZC04Y2Q1LWZlMTZmNGZlMGNlNjpyaG4tc3VwcG9ydC1rcXVpbm4iLCJ0eXAiOiJPZmZsaW5lIiwiYXpwIjoiY2xvdWQtc2VydmljZXMiLCJub25jZSI6IjI1NzQ3MDg1LTc2ZTEtNGIzZS05Y2U2LTE0MjJiNDY1ODAwMiIsInNpZCI6ImEwZDdhMTNjLWQ0MzYtNDVkYS04ZmNmLWQ5OGVjZWNkNzM3OSIsInNjb3BlIjoib3BlbmlkIGJhc2ljIGFwaS5pYW0uc2VydmljZV9hY2NvdW50cyByb2xlcyB3ZWItb3JpZ2lucyBjbGllbnRfdHlwZS5wcmVfa2MyNSBvZmZsaW5lX2FjY2VzcyJ9.pL3Ls9m0-AJqlRWdGnv7HEyIxA-PcQNR0RCtc8vqeMHaTgSME1h6Xd5rysyVzChRAHsPNv_kGwsgfx0DlnQ9jA' + +# These variables can be set but are optional. The previous (before AAP 2.4) conncept of sync-list was private +# to an account. +#automation_hub_url_certified_vault: 'The private automation hub URL for certified content' +#automation_hub_url_validated_vault: 'The private automation hub URL for validated content' + +controller_config_dir: "{{ '~/agof_minimal_demo/config' | expanduser }}" + +db_password: 'test' + +agof_statedir: "{{ '~/agof' | expanduser }}" +agof_iac_repo: "https://github.com/mhjacks/agof_demo_config.git" +---- + +. 
Run from the AGOF repository directory the following command : ++ +[source,terminal] +---- +$ ./pattern.sh make install +---- +This command invokes the `controller_configuration` `dispatch` role on the controller endpoint based on the configuration found in the `controller_configuration_dir` and in your `agof_vault.yml` file. This controls all of the controller configuration. Based on the variables defined in the controller configuration, `dispatch` calls the necessary roles and modules in the right order to configure AAP to run your pattern. + +.Verification + +The default installation provides an AAP 2.4 installation deployed using the containerized installer, with services deployed this way: + +.agof_vault settings +[cols="30%,70%",options="header"] +|=== +| URL | Service + +| `https:{{ ec2_name_prefix }}.{{ domain }}:8443` +a| Controller API. + +| `https:{{ ec2_name_prefix }}.{{ domain }}:8444/` +a|Private Automation Hub + +| `https:/{{ ec2_name_prefix }}.{{ domain }}:8445/`` +a| EDA Automation Controller + +|=== + +Once the install completes, you will have a project, an inventory (consisting of the AAP controller), a credential (the private key from ec2), a job template (which runs a fact gather on the AAP controller) and a schedule that will run the job template every 5 minutes, + +. Log in to `https:{{ ec2_name_prefix }}.{{ domain }}:8443` with the username `admin` and the password as configured in `admin_password` field of `agof_vault.yml`. + +. Under *Resources* > *Projects* verify the project *Ansible GitOps Framework Minimal Demo* is created with status *Successful*. + +. Under *Resources* > *Inventories* verify the inventory *AGOF Demo Inventory* is created with sync status *Success*. + +. Under *Resources* > *Templates* verify the job template *Ping Playbook* is created. + +. Under *Resources* > *Credentials* verify the ec2 ssh credential *ec2_ssh_credential* is created. + +. Under *Views* > *Schedules* verify the schedules are created. + + +=== Method 2: Pre-configured VMs Install + +This method allows the installation of AAP on pre-configured Red Hat Enterprise Linux (RHEL) VMs. It requires you to provide an inventory file that specifies details about the VMs or instances where AAP will be installed. + +It is designed for users with existing infrastructure who want to deploy AAP without depending on AWS. If you need to install a pattern on a cluster with a different topology than this, use the API install mechanism. + +[source,shell] +---- +./pattern.sh make legacy_from_os_install INVENTORY=(your_inventory_file) +---- + +The path to `your_inventory_file` defaults to ~/inventory_agof if you do not specify one. + +.Example agof_inventory file + +[source,yaml] +---- +[build_control] +localhost + +[aap_controllers] +192.168.5.207 + +[automation_hub] + +[eda_controllers] + +[aap_controllers:vars] + +[automation_hub:vars] + +[all:vars] +ansible_user=myuser +ansible_ssh_pass=mypass +ansible_become_pass=mypass +ansible_remote_tmp=/tmp/.ansible +username=myuser +controller_hostname=192.168.5.207 +---- + +=== Method 3: Custom Ansible controller (API install) + +In this method, you provide an existing Ansible Automation Platform (AAP) Controller endpoint, either on bare metal or in a private cloud, without needing AWS or pre-configured VMs. + +Specify + +manifest +endpoint hostname +admin credentials, and pass the installation process to a predefined `controller_config_dir`. 
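For illustration only, the corresponding entries in `agof_vault.yml` for this method might look like the following sketch. The `admin_password`, `manifest_content`, and `controller_config_dir` settings are described elsewhere in this document; `controller_hostname` is used here as a hypothetical placeholder for the existing controller endpoint, and the values are not taken from any real environment.

[source,yaml]
----
# Sketch only: assumed agof_vault.yml entries for an API (bare) install.
# admin_password, manifest_content, and controller_config_dir are described
# elsewhere in this document; controller_hostname is a hypothetical placeholder.
controller_hostname: 'aap.example.com'
admin_password: 'changeme'
manifest_content: "{{ lookup('file', '~/Downloads/manifest.zip') | b64encode }}"
controller_config_dir: "{{ '~/agof_minimal_demo/config' | expanduser }}"
----

With values such as these in place, the API install target shown next hands configuration over to that directory.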
+ + +[source,terminal] +---- +./pattern.sh make api_install +---- + +=== Tearing down the installation + +To tear down the installation run the following command: + +[source,terminal] +---- +$ ./pattern.sh make aws_uninstall +---- \ No newline at end of file diff --git a/content/learn/vp_agof_config_controller.adoc b/content/learn/vp_agof_config_controller.adoc index 539d6eadc..ba7f75977 100644 --- a/content/learn/vp_agof_config_controller.adoc +++ b/content/learn/vp_agof_config_controller.adoc @@ -12,6 +12,25 @@ aliases: /ocp-framework/agof/ :_content-type: ASSEMBLY include::modules/comm-attributes.adoc[] +== Understanding the Ansible GitOps Framework (AGOF) installation process + +The Ansible GitOps Framework (AGOF) is a powerful solution designed to automate the deployment and configuration of Ansible Automation Platform (AAP) environments using GitOps principles. It leverages Ansible to manage infrastructure and application provisioning in a declarative, version-controlled way. AGOF provides a structured approach to setting up cloud infrastructure, installing AAP components, and handing over control to the AAP Controller for ongoing automation and management. An overview of the steps involved in configuring a basic demo minimal demo application are listed here: + +=== 1. Pre-Init Environment (Bootstrap Ansible) + +* *Ansible Configuration*: The environment is initialized by generating an `ansible.cfg` file, which is configured with Automation Hub and public Galaxy endpoints. This process includes vault configuration to inject the Automation Hub token and install required Ansible collections from `requirements.yml`. +* *Optional Image Build*: Images are created using Red Hat Image Builder to produce AMIs, which include `cloud-init`, activation keys, and organization details. These images can be reused in future installations. + +=== 2. Infrastructure Building (AWS Only) + +* *AWS Setup*: The framework sets up AWS infrastructure, including VPC, subnets, and security groups, using predefined roles. It also manages Route53 DNS entries for VMs. +* *VM Deployment*: Virtual machines are provisioned on EC2 with persistent hostnames and updates to `/etc/hosts` for AWS nodes. DNS entries are updated when IPs change after VM reboots. + +=== 3. Handover to Ansible Controller + +* *Controller Setup*: The Ansible Automation Platform (AAP) Controller and optionally the Automation Hub are installed and configured. Entitlements are managed via a manifest, and execution environments and collections are downloaded and prepared. +* *GitOps Mode*: After configuration, AGOF transitions to GitOps mode. All environment changes are managed via Git commits to the repositories used by the controller, ensuring declarative and automated infrastructure management from this point onward. + == Using the Controller Configuration collection An AGOF pattern for example https://github.com/mhjacks/agof_demo_config is primarily an IaC (infrastructure as code) artifact designed to be used with the `controller_configuration` collections. 
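As a purely illustrative sketch, the kind of data such an IaC repository carries looks like the following. The `controller_projects` and `controller_templates` structures follow the conventions of the `controller_configuration` collection; the organization, repository URL, and playbook path shown here are placeholders rather than values taken from the demo repository.

[source,yaml]
----
# Illustrative sketch of controller_configuration-style IaC data.
# The organization, scm_url, and playbook values are placeholders.
controller_projects:
  - name: Ansible GitOps Framework Minimal Demo
    organization: Default
    scm_type: git
    scm_url: https://github.com/validatedpatterns-demos/agof_minimal_demo.git

controller_templates:
  - name: Ping Playbook
    organization: Default
    project: Ansible GitOps Framework Minimal Demo
    playbook: ping.yml
----

Committing changes to files such as these, rather than editing objects in the web UI, is what keeps the controller configuration declarative and reviewable.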
diff --git a/content/learn/vp_agof_proc.adoc b/content/learn/vp_agof_proc.adoc index e3fa9f158..9cdbcf42d 100644 --- a/content/learn/vp_agof_proc.adoc +++ b/content/learn/vp_agof_proc.adoc @@ -301,7 +301,7 @@ admin credentials, and pass the installation process to a predefined `controller === Tearing down the installation To tear down the installation run the following command: -+ + [source,terminal] ---- $ ./pattern.sh make aws_uninstall From 2a17ad36ae20207ea4e2b8699922fd279b67f792 Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Wed, 2 Oct 2024 14:54:47 +0100 Subject: [PATCH 14/22] Trying to fix display issues --- content/learn/vp_agof.adoc | 56 ++++++++++++++++++++------------------ 1 file changed, 29 insertions(+), 27 deletions(-) diff --git a/content/learn/vp_agof.adoc b/content/learn/vp_agof.adoc index b79cd5151..65ab7eda1 100644 --- a/content/learn/vp_agof.adoc +++ b/content/learn/vp_agof.adoc @@ -18,9 +18,9 @@ The link:/patterns/ansible-gitops-framework/[Ansible GitOps Framework] provides When there is no access to OpenShift Container Platform, AGOF provides a standalone solution for deploying and managing validated patterns. This framework leverages GitOps principles without relying on OpenShift. -Ansible Automation Platform includes automation controller, a web-based UI interface allows users to define, operate, scale, and delegate automation across their enterprise +Administrators can use Ansible Automation Platform includes automation controller, a web-based UI interface to define, operate, scale, and delegate automation across their enterprise. -The repository at https://github.com/validatedpatterns/agof/[Ansible GitOps Framework] provides code for installing VMs on AWS, if needed. Alternatively, it can be used with existing VMs or a functional AAP Controller endpoint. +The repository at https://github.com/validatedpatterns/agof/[Ansible GitOps Framework] provides code for installing VMs on AWS, if needed. It can also be used with existing VMs or a functional AAP Controller endpoint. The link:https://github.com/validatedpatterns/agof/[Ansible GitOps Framework] repository contains code to set up the infrastructure for applying `controller_configuration` to an AAP instance. It includes some predefined configurations and practices to make the infrastructure as code repository more user-friendly and standardized. An AGOF pattern for example link:https://github.com/mhjacks/agof_demo_config[demo] is primarily an IaC infrastructure as code (IaC) artifact designed to be used with the `controller_configuration` collection. @@ -40,7 +40,7 @@ The three main methods for setting up an Ansible framework in relation to Ansibl === Method 1: AWS-based install -This method is ideal for organizations that prefer deploying AAP on AWS infrastructure. This default install process in AAP 2.4 uses AWS by default and offers a fully automated setup. It requires AWS credentials, builds an AWS image with Red Hat's ImageBuilder, and sets up AAP within an AWS VPC and subnet. The installer creates all the necessary resources, including AAP Controllers and, optionally, additional components such as Automation Hub. +This method is ideal for organizations that prefer deploying AAP on AWS infrastructure. This default install process in AAP 2.4 uses AWS by default and offers a fully automated setup. It requires AWS credentials, builds an AWS image with Red Hat's ImageBuilder, and sets up AAP within an AWS VPC and subnet. 
The installation program creates all the necessary resources, including AAP Controllers and, optionally, additional components such as Automation Hub. *Pros*: This is the easiest method if you already use AWS, as it automates the provisioning of resources, including VMs and network configurations. @@ -120,78 +120,80 @@ You need to provide some key information to a file named `agof_vault.yml` create $ cp ~/agof/agof_vault_template.yml ~/agof_vault.yml ---- + -.agof_vault settings -[cols="30%,70%",options="header"] + +. Agof vault settings + +[cols="30%,70%", options="header"] |=== | Argument | Description | `aws_account_nbr_vault` -a| Your AWS account number. This is needed for sharing composer images. +| Your AWS account number. This is needed for sharing composer images. | `aws_access_key_vault` -a| Your AWS access key. +| Your AWS access key. | `aws_secret_key_vault` -a| Your AWS secret key. +| Your AWS secret key. | `pattern_prefix` -a| A unique prefix to distinguish instances in AWS. Used as the pattern name and in public DNS entries. +| A unique prefix to distinguish instances in AWS. Used as the pattern name and in public DNS entries. | `ec2_region` -a| An AWS region that your account has access to. +| An AWS region that your account has access to. | `offline_token` -a| A Red Hat offline token used to build the RHEL image on link:https://console.redhat.com[console.redhat.com]. +| A Red Hat offline token used to build the RHEL image on https://console.redhat.com[console.redhat.com]. [NOTE] ==== -Click the `GENERATE TOKEN` link at link:https://access.redhat.com/management/api[Red Hat API Tokens] to create the token. +Click the `GENERATE TOKEN` link at https://access.redhat.com/management/api[Red Hat API Tokens] to create the token. ==== | `redhat_username` -a| Red Hat Subscription username, used to log in to link:https://registry.redhat.io[registry.redhat.io]. +| Red Hat Subscription username, used to log in to https://registry.redhat.io[registry.redhat.io]. | `redhat_password` -a| Red Hat Subscription password, used to log in to link:https://registry.redhat.io[registry.redhat.io]. +| Red Hat Subscription password, used to log in to https://registry.redhat.io[registry.redhat.io]. | `admin_password` -a| An admin password for AAP Controller and Automation Hub. +| An admin password for AAP Controller and Automation Hub. | `manifest_content` -a| Content for a manifest file to entitle AAP Controller. See below for an example of how to point to a local file. +| Content for a manifest file to entitle AAP Controller. See below for an example of how to point to a local file. | `org_number_vault` -a| The Organization Number (Org ID) attached to your Red Hat Subscription for RHEL and AAP. +| The Organization Number (Org ID) attached to your Red Hat Subscription for RHEL and AAP. | `activation_key_vault` -a| The name of an Activation Key to embed in the imagebuilder image. +| The name of an Activation Key to embed in the imagebuilder image. [NOTE] ==== -Click the `Create Activation Keys` link at link:https://console.redhat.com[console.redhat.com] > *Red Hat Enterprise Linux* > *Inventory* > *System Configuration* > *Activation Keys* to create an activation key. +Click the `Create Activation Keys` link at https://console.redhat.com[console.redhat.com] > *Red Hat Enterprise Linux* > *Inventory* > *System Configuration* > *Activation Keys* to create an activation key. 
==== | `skip_imagebuilder_build` -a| Set these variables to provide your own AMI, or to re-use an AMI previously generated with this process. When a previously built AMI is found this check does not take place. +| Set these variables to provide your own AMI, or to re-use an AMI previously generated with this process. When a previously built AMI is found this check does not take place. #imagebuilder_ami: 'The ID of an AWS AMI image, preferably one that was built with this toolkit' | `automation_hub_token_vault` -a| A token associated with your AAP subscription used to retrieve Automation Hub content. +| A token associated with your AAP subscription used to retrieve Automation Hub content. [NOTE] ==== -Click the `Load token` link at link:https://console.redhat.com[console.redhat.com] > *Ansible Automation Platform* > *Automation Hub* > *Connect to Hub* to generate a token. +Click the `Load token` link at https://console.redhat.com[console.redhat.com] > *Ansible Automation Platform* > *Automation Hub* > *Connect to Hub* to generate a token. ==== | `automation_hub_url_certified_vault` -a| Optional: The private automation hub URL for certified content. +| Optional: The private automation hub URL for certified content. | `automation_hub_url_validated_vault` -a| Optional: The private automation hub URL for validated content. - +| Optional: The private automation hub URL for validated content. |=== + . Edit the file and add the following: * `controller_config_dir:` set it's value to `{{ '~/agof_minimal_demo/config' | expanduser }}`. @@ -224,7 +226,7 @@ pattern_dns_zone: 'aws.validatedpatterns.io' ec2_name_prefix: 'foga-testing' ec2_region: 'us-east-1' -offline_token: 'eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICI0NzQzYTkzMC03YmJiLTRkZGQtOTgzMS00ODcxNGRlZDc0YjUifQ.eyJpYXQiOjE3MjY4MjcxNDcsImp0aSI6ImJlNjdhZDQ4LWUzNWQtNDg1Yy04OTM2LTMwMzNmODgwM2Q0MSIsImlzcyI6Imh0dHBzOi8vc3NvLnJlZGhhdC5jb20vYXV0aC9yZWFsbXMvcmVkaGF0LWV4dGVybmFsIiwiYXVkIjoiaHR0cHM6Ly9zc28ucmVkaGF0LmNvbS9hdXRoL3JlYWxtcy9yZWRoYXQtZXh0ZXJuYWwiLCJzdWIiOiJmOjUyOGQ3NmZmLWY3MDgtNDNlZC04Y2Q1LWZlMTZmNGZlMGNlNjpyaG4tc3VwcG9ydC1rcXVpbm4iLCJ0eXAiOiJPZmZsaW5lIiwiYXpwIjoicmhzbS1hcGkiLCJzaWQiOiJhMGQ3YTEzYy1kNDM2LTQ1ZGEtOGZjZi1kOThlY2VjZDczNzkiLCJzY29wZSI6ImJhc2ljIHJvbGVzIHdlYi1vcmlnaW5zIGNsaWVudF90eXBlLnByZV9rYzI1IG9mZmxpbmVfYWNjZXNzIn0.482J5PUtr2TQCl-EUjmIwwCXe-o9_SYCuOABfpunxzGoTOdHevW7LQTGUrJ6FOrNPReuOdIwiA5KijowEVxeRw' +offline_token: '' redhat_username: 'rhn-support-myusername' redhat_password: 'passwd01' @@ -241,7 +243,7 @@ activation_key_vault: "kevs-agof-key" #skip_imagebuilder_build: 'boolean: skips imagebuilder build process' #imagebuilder_ami: 'The ID of an AWS AMI image, preferably one that was built with this toolkit' -automation_hub_token_vault: 
'eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICI0NzQzYTkzMC03YmJiLTRkZGQtOTgzMS00ODcxNGRlZDc0YjUifQ.eyJpYXQiOjE3MjY4MjIxNzEsImp0aSI6IjQ2Y2ZjZmU2LTY0ZTgtNDgyYS04Mjc1LWFlZjEzNTY3NjUxMiIsImlzcyI6Imh0dHBzOi8vc3NvLnJlZGhhdC5jb20vYXV0aC9yZWFsbXMvcmVkaGF0LWV4dGVybmFsIiwiYXVkIjoiaHR0cHM6Ly9zc28ucmVkaGF0LmNvbS9hdXRoL3JlYWxtcy9yZWRoYXQtZXh0ZXJuYWwiLCJzdWIiOiJmOjUyOGQ3NmZmLWY3MDgtNDNlZC04Y2Q1LWZlMTZmNGZlMGNlNjpyaG4tc3VwcG9ydC1rcXVpbm4iLCJ0eXAiOiJPZmZsaW5lIiwiYXpwIjoiY2xvdWQtc2VydmljZXMiLCJub25jZSI6IjI1NzQ3MDg1LTc2ZTEtNGIzZS05Y2U2LTE0MjJiNDY1ODAwMiIsInNpZCI6ImEwZDdhMTNjLWQ0MzYtNDVkYS04ZmNmLWQ5OGVjZWNkNzM3OSIsInNjb3BlIjoib3BlbmlkIGJhc2ljIGFwaS5pYW0uc2VydmljZV9hY2NvdW50cyByb2xlcyB3ZWItb3JpZ2lucyBjbGllbnRfdHlwZS5wcmVfa2MyNSBvZmZsaW5lX2FjY2VzcyJ9.pL3Ls9m0-AJqlRWdGnv7HEyIxA-PcQNR0RCtc8vqeMHaTgSME1h6Xd5rysyVzChRAHsPNv_kGwsgfx0DlnQ9jA' +automation_hub_token_vault: '' # These variables can be set but are optional. The previous (before AAP 2.4) conncept of sync-list was private # to an account. From 51f7664c0045d620b51b0a72dde4895aea64cb0f Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Wed, 2 Oct 2024 15:13:40 +0100 Subject: [PATCH 15/22] Trying to fix display issues 2 --- content/learn/vp_agof_install_process.adoc | 33 --- content/learn/vp_agof_proc.adoc | 308 --------------------- 2 files changed, 341 deletions(-) delete mode 100644 content/learn/vp_agof_install_process.adoc delete mode 100644 content/learn/vp_agof_proc.adoc diff --git a/content/learn/vp_agof_install_process.adoc b/content/learn/vp_agof_install_process.adoc deleted file mode 100644 index 4ac318cae..000000000 --- a/content/learn/vp_agof_install_process.adoc +++ /dev/null @@ -1,33 +0,0 @@ ---- -menu: - learn: - parent: Validated patterns frameworks -title: Providing data to controller configuration collection -weight: 26 -aliases: /ocp-framework/agof/ ---- - -:toc: -:imagesdir: /images -:_content-type: ASSEMBLY -include::modules/comm-attributes.adoc[] - -== Understanding the Ansible GitOps Framework (AGOF) installation process - -The Ansible GitOps Framework (AGOF) is a powerful solution designed to automate the deployment and configuration of Ansible Automation Platform (AAP) environments using GitOps principles. It leverages Ansible to manage infrastructure and application provisioning in a declarative, version-controlled way. AGOF provides a structured approach to setting up cloud infrastructure, installing AAP components, and handing over control to the AAP Controller for ongoing automation and management. An overview of the steps involved in configuring a basic demo minimal demo application are listed here: - -=== 1. Pre-Init Environment (Bootstrap Ansible) - -* *Ansible Configuration*: The environment is initialized by generating an `ansible.cfg` file, which is configured with Automation Hub and public Galaxy endpoints. This process includes vault configuration to inject the Automation Hub token and install required Ansible collections from `requirements.yml`. -* *Optional Image Build*: Images are created using Red Hat Image Builder to produce AMIs, which include `cloud-init`, activation keys, and organization details. These images can be reused in future installations. - -=== 2. Infrastructure Building (AWS Only) - -* *AWS Setup*: The framework sets up AWS infrastructure, including VPC, subnets, and security groups, using predefined roles. It also manages Route53 DNS entries for VMs. -* *VM Deployment*: Virtual machines are provisioned on EC2 with persistent hostnames and updates to `/etc/hosts` for AWS nodes. 
DNS entries are updated when IPs change after VM reboots. - -=== 3. Handover to Ansible Controller - -* *Controller Setup*: The Ansible Automation Platform (AAP) Controller and optionally the Automation Hub are installed and configured. Entitlements are managed via a manifest, and execution environments and collections are downloaded and prepared. -* *GitOps Mode*: After configuration, AGOF transitions to GitOps mode. All environment changes are managed via Git commits to the repositories used by the controller, ensuring declarative and automated infrastructure management from this point onward. - diff --git a/content/learn/vp_agof_proc.adoc b/content/learn/vp_agof_proc.adoc deleted file mode 100644 index 9cdbcf42d..000000000 --- a/content/learn/vp_agof_proc.adoc +++ /dev/null @@ -1,308 +0,0 @@ ---- -menu: - learn: - parent: Validated patterns frameworks -title: Creating a validated pattern using the AGOF framework -weight: 24 -aliases: /ocp-framework/agof/ ---- - -:toc: -:imagesdir: /images -:_content-type: ASSEMBLY -include::modules/comm-attributes.adoc[] - -== Creating a validated pattern using the AGOF framework - -Whichever method you use to deploy a validated pattern you need to: - -. Clone the link:https://github.com/mhjacks/agof_demo_config[Ansible GitOps Framework Minimal Demo] repository. This is a minimal pattern that demonstrates how to use the Ansible GitOps Framework. -+ -[source,terminal] ----- -$ git clone https://github.com/validatedpatterns-demos/agof_minimal_demo.git ----- - -. Clone the link:https://github.com/validatedpatterns/agof[AGOF] repository. This serves as the provisioner for the pattern -+ -[source,terminal] ----- -$ git clone https://github.com/validatedpatterns-demos/agof_minimal_demo.git ----- - -=== Deploying using the AWS-based install method - -This method: - -. Builds an AWS image using Red Hat's ImageBuilder. - -. Deploys that image onto a new AWS VPC and subnet. - -. Deploys AAP on that image using the command line installer. - -. Hands over the configuration of the AAP installation to the specified `controller_config_dir`. - -.Prerequisites - -You need to provide some key information to a file named `agof_vault.yml` created in your home directory. The key pieces of information needed are: - -* The name of a hosted zone created under Route 53 in AWS. For example this could be `aws.validatedpatterns.io`. - -* Your AWS account `Account ID` - -* Your `aws key` - -* Your `secret access key` - -.Procedure - -. Copy the `agof_vault_template.yml` from the cloned `agof` directory to your home directory and rename it `agof_vault.yml`. -+ -[source,terminal] ----- -$ cp ~/agof/agof_vault_template.yml ~/agof_vault.yml ----- -+ -.agof_vault settings -[cols="30%,70%",options="header"] -|=== -| Argument | Description - -| `aws_account_nbr_vault` -a| Your AWS account number. This is needed for sharing composer images. - -| `aws_access_key_vault` -a| Your AWS access key. - -| `aws_secret_key_vault` -a| Your AWS secret key. - -| `pattern_prefix` -a| A unique prefix to distinguish instances in AWS. Used as the pattern name and in public DNS entries. - -| `ec2_region` -a| An AWS region that your account has access to. - -| `offline_token` -a| A Red Hat offline token used to build the RHEL image on link:https://console.redhat.com[console.redhat.com]. - -[NOTE] -==== -Click the `GENERATE TOKEN` link at link:https://access.redhat.com/management/api[Red Hat API Tokens] to create the token. 
-==== - -| `redhat_username` -a| Red Hat Subscription username, used to log in to link:https://registry.redhat.io[registry.redhat.io]. - -| `redhat_password` -a| Red Hat Subscription password, used to log in to link:https://registry.redhat.io[registry.redhat.io]. - -| `admin_password` -a| An admin password for AAP Controller and Automation Hub. - -| `manifest_content` -a| Content for a manifest file to entitle AAP Controller. See below for an example of how to point to a local file. - -| `org_number_vault` -a| The Organization Number (Org ID) attached to your Red Hat Subscription for RHEL and AAP. - -| `activation_key_vault` -a| The name of an Activation Key to embed in the imagebuilder image. - -[NOTE] -==== -Click the `Create Activation Keys` link at link:https://console.redhat.com[console.redhat.com] > *Red Hat Enterprise Linux* > *Inventory* > *System Configuration* > *Activation Keys* to create an activation key. -==== - -| `skip_imagebuilder_build` -a| Set these variables to provide your own AMI, or to re-use an AMI previously generated with this process. When a previously built AMI is found this check does not take place. -#imagebuilder_ami: 'The ID of an AWS AMI image, preferably one that was built with this toolkit' - -| `automation_hub_token_vault` -a| A token associated with your AAP subscription used to retrieve Automation Hub content. - -[NOTE] -==== -Click the `Load token` link at link:https://console.redhat.com[console.redhat.com] > *Ansible Automation Platform* > *Automation Hub* > *Connect to Hub* to generate a token. -==== - -| `automation_hub_url_certified_vault` -a| Optional: The private automation hub URL for certified content. - -| `automation_hub_url_validated_vault` -a| Optional: The private automation hub URL for validated content. - -|=== - - -. Edit the file and add the following: - -* `controller_config_dir:` set it's value to `{{ '~/agof_minimal_demo/config' | expanduser }}`. -* `db_password:` set an appropriate value for the postgres password for the DB instance for example `test`. -* `agof_statedir:` set its value to "{{ '~/agof' | expanduser }}" -* `agof_iac_repo:` set its value to "https://github.com/mhjacks/agof_demo_config.git" - -. Optional: Create a subscription manifest by following the guidance at link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/htmlred_hat_ansible_automation_platform_operations_guide/assembly-aap-obtain-manifest-files#assembly-aap-obtain-manifest-files[Obtaining a manifest file] - -.. Update your `agof_vault.yml` file with the path to downloaded manifest zip file. 
For example add the following: -+ -[source,shell] ----- -controller_license_src_file: '~/Downloads/manifest__20240924T131518Z.zip' -manifest_content: "{{ lookup('file', controller_license_src_file) | b64encode }}" ----- - -.Example agof_vault.yml file - -[source,yaml] ----- ---- -aws_account_nbr_vault: '293265215425' -aws_access_key_vault: 'AKIAIJ6ZIKPAUZGF2643' -aws_secret_key_vault: 'gMC3Jy3/MZtOosjUDHy0Nl/2mp2HQok1JDfCQGKUR' - -pattern_prefix: 'foga' -pattern_dns_zone: 'aws.validatedpatterns.io' - -ec2_name_prefix: 'foga-testing' -ec2_region: 'us-east-1' - -offline_token: 'eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICI0NzQzYTkzMC03YmJiLTRkZGQtOTgzMS00ODcxNGRlZDc0YjUifQ.eyJpYXQiOjE3MjY4MjcxNDcsImp0aSI6ImJlNjdhZDQ4LWUzNWQtNDg1Yy04OTM2LTMwMzNmODgwM2Q0MSIsImlzcyI6Imh0dHBzOi8vc3NvLnJlZGhhdC5jb20vYXV0aC9yZWFsbXMvcmVkaGF0LWV4dGVybmFsIiwiYXVkIjoiaHR0cHM6Ly9zc28ucmVkaGF0LmNvbS9hdXRoL3JlYWxtcy9yZWRoYXQtZXh0ZXJuYWwiLCJzdWIiOiJmOjUyOGQ3NmZmLWY3MDgtNDNlZC04Y2Q1LWZlMTZmNGZlMGNlNjpyaG4tc3VwcG9ydC1rcXVpbm4iLCJ0eXAiOiJPZmZsaW5lIiwiYXpwIjoicmhzbS1hcGkiLCJzaWQiOiJhMGQ3YTEzYy1kNDM2LTQ1ZGEtOGZjZi1kOThlY2VjZDczNzkiLCJzY29wZSI6ImJhc2ljIHJvbGVzIHdlYi1vcmlnaW5zIGNsaWVudF90eXBlLnByZV9rYzI1IG9mZmxpbmVfYWNjZXNzIn0.482J5PUtr2TQCl-EUjmIwwCXe-o9_SYCuOABfpunxzGoTOdHevW7LQTGUrJ6FOrNPReuOdIwiA5KijowEVxeRw' - -redhat_username: 'rhn-support-myusername' -redhat_password: 'passwd01' - -admin_password: 'redhat123!' - -manifest_content: "Content for a manifest file to entitle AAP Controller. See below for an example of how to point to a local file" -#manifest_content: "{{ lookup('file', '~/Downloads/manifest_AVP_20230510T202608Z.zip') | b64encode }}" - -org_number_vault: "1778713" -activation_key_vault: "kevs-agof-key" - -# Set these variables to provide your own AMI, or to re-use an AMI previously generated with this process -#skip_imagebuilder_build: 'boolean: skips imagebuilder build process' -#imagebuilder_ami: 'The ID of an AWS AMI image, preferably one that was built with this toolkit' - -automation_hub_token_vault: 'eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICI0NzQzYTkzMC03YmJiLTRkZGQtOTgzMS00ODcxNGRlZDc0YjUifQ.eyJpYXQiOjE3MjY4MjIxNzEsImp0aSI6IjQ2Y2ZjZmU2LTY0ZTgtNDgyYS04Mjc1LWFlZjEzNTY3NjUxMiIsImlzcyI6Imh0dHBzOi8vc3NvLnJlZGhhdC5jb20vYXV0aC9yZWFsbXMvcmVkaGF0LWV4dGVybmFsIiwiYXVkIjoiaHR0cHM6Ly9zc28ucmVkaGF0LmNvbS9hdXRoL3JlYWxtcy9yZWRoYXQtZXh0ZXJuYWwiLCJzdWIiOiJmOjUyOGQ3NmZmLWY3MDgtNDNlZC04Y2Q1LWZlMTZmNGZlMGNlNjpyaG4tc3VwcG9ydC1rcXVpbm4iLCJ0eXAiOiJPZmZsaW5lIiwiYXpwIjoiY2xvdWQtc2VydmljZXMiLCJub25jZSI6IjI1NzQ3MDg1LTc2ZTEtNGIzZS05Y2U2LTE0MjJiNDY1ODAwMiIsInNpZCI6ImEwZDdhMTNjLWQ0MzYtNDVkYS04ZmNmLWQ5OGVjZWNkNzM3OSIsInNjb3BlIjoib3BlbmlkIGJhc2ljIGFwaS5pYW0uc2VydmljZV9hY2NvdW50cyByb2xlcyB3ZWItb3JpZ2lucyBjbGllbnRfdHlwZS5wcmVfa2MyNSBvZmZsaW5lX2FjY2VzcyJ9.pL3Ls9m0-AJqlRWdGnv7HEyIxA-PcQNR0RCtc8vqeMHaTgSME1h6Xd5rysyVzChRAHsPNv_kGwsgfx0DlnQ9jA' - -# These variables can be set but are optional. The previous (before AAP 2.4) conncept of sync-list was private -# to an account. -#automation_hub_url_certified_vault: 'The private automation hub URL for certified content' -#automation_hub_url_validated_vault: 'The private automation hub URL for validated content' - -controller_config_dir: "{{ '~/agof_minimal_demo/config' | expanduser }}" - -db_password: 'test' - -agof_statedir: "{{ '~/agof' | expanduser }}" -agof_iac_repo: "https://github.com/mhjacks/agof_demo_config.git" ----- - -. 
Run from the AGOF repository directory the following command : -+ -[source,terminal] ----- -$ ./pattern.sh make install ----- -This command invokes the `controller_configuration` `dispatch` role on the controller endpoint based on the configuration found in the `controller_configuration_dir` and in your `agof_vault.yml` file. This controls all of the controller configuration. Based on the variables defined in the controller configuration, `dispatch` calls the necessary roles and modules in the right order to configure AAP to run your pattern. - -.Verification - -The default installation provides an AAP 2.4 installation deployed using the containerized installer, with services deployed this way: - -.agof_vault settings -[cols="30%,70%",options="header"] -|=== -| URL | Service - -| `https:{{ ec2_name_prefix }}.{{ domain }}:8443` -a| Controller API. - -| `https:{{ ec2_name_prefix }}.{{ domain }}:8444/` -a|Private Automation Hub - -| `https:/{{ ec2_name_prefix }}.{{ domain }}:8445/`` -a| EDA Automation Controller - -|=== - -Once the install completes, you will have a project, an inventory (consisting of the AAP controller), a credential (the private key from ec2), a job template (which runs a fact gather on the AAP controller) and a schedule that will run the job template every 5 minutes, - -. Log in to `https:{{ ec2_name_prefix }}.{{ domain }}:8443` with the username `admin` and the password as configured in `admin_password` field of `agof_vault.yml`. - -. Under *Resources* > *Projects* verify the project *Ansible GitOps Framework Minimal Demo* is created with status *Successful*. - -. Under *Resources* > *Inventories* verify the inventory *AGOF Demo Inventory* is created with sync status *Success*. - -. Under *Resources* > *Templates* verify the job template *Ping Playbook* is created. - -. Under *Resources* > *Credentials* verify the ec2 ssh credential *ec2_ssh_credential* is created. - -. Under *Views* > *Schedules* verify the schedules are created. - - -=== Method 2: Pre-configured VMs Install - -This method allows the installation of AAP on pre-configured Red Hat Enterprise Linux (RHEL) VMs. It requires you to provide an inventory file that specifies details about the VMs or instances where AAP will be installed. - -It is designed for users with existing infrastructure who want to deploy AAP without depending on AWS. If you need to install a pattern on a cluster with a different topology than this, use the API install mechanism. - -[source,shell] ----- -./pattern.sh make legacy_from_os_install INVENTORY=(your_inventory_file) ----- - -The path to `your_inventory_file` defaults to ~/inventory_agof if you do not specify one. - -.Example agof_inventory file - -[source,yaml] ----- -[build_control] -localhost - -[aap_controllers] -192.168.5.207 - -[automation_hub] - -[eda_controllers] - -[aap_controllers:vars] - -[automation_hub:vars] - -[all:vars] -ansible_user=myuser -ansible_ssh_pass=mypass -ansible_become_pass=mypass -ansible_remote_tmp=/tmp/.ansible -username=myuser -controller_hostname=192.168.5.207 ----- - -=== Method 3: Custom Ansible controller (API install) - -In this method, you provide an existing Ansible Automation Platform (AAP) Controller endpoint, either on bare metal or in a private cloud, without needing AWS or pre-configured VMs. - -Specify - -manifest -endpoint hostname -admin credentials, and pass the installation process to a predefined `controller_config_dir`. 
- - -[source,terminal] ----- -./pattern.sh make api_install ----- - -=== Tearing down the installation - -To tear down the installation run the following command: - -[source,terminal] ----- -$ ./pattern.sh make aws_uninstall ----- From 26d3477d6ea912634bc16c1c422828e551577624 Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Wed, 2 Oct 2024 17:00:36 +0100 Subject: [PATCH 16/22] Trying to fix display issues 3 --- content/learn/vp_agof.adoc | 96 +++++++++++--------- content/learn/vp_agof_config_controller.adoc | 14 ++- 2 files changed, 64 insertions(+), 46 deletions(-) diff --git a/content/learn/vp_agof.adoc b/content/learn/vp_agof.adoc index 65ab7eda1..a0cf0320c 100644 --- a/content/learn/vp_agof.adoc +++ b/content/learn/vp_agof.adoc @@ -30,44 +30,36 @@ In the Ansible GitOps framework, an ansible controller refers to any machine tha THe link:https://docs.ansible.com/platform.html[Automation controller] is the command and control center for Red Hat Ansible Automation Platform, replacing Ansible Tower. It includes a webUI, API, role-based access control (RBAC), a workflow visualizer, and continuous integration and continuous delivery (CI/CD) integrations to help you organize and manage automation across your enterprise. -The Automation Controller serves as the centralized platform for managing and executing automation across infrastructure. It allows users to create job templates that standardize the deployment and execution of Ansible playbooks, making automation more consistent and reusable. It integrates essential components such as execution environments for consistent automation execution, projects (repositories for automation content), inventories (target endpoints), and credentials (for secure access to resources). +The Automation Controller serves as the centralized platform for managing and executing automation across infrastructure. Use the Automation Controller to create job templates that standardize the deployment and execution of Ansible playbooks, making automation more consistent and reusable. It integrates essential components such as execution environments for consistent automation execution, projects (repositories for automation content), inventories (target endpoints), and credentials (for secure access to resources). The webUI provides an intuitive interface to build, monitor, and manage automation workflows, while the API offers seamless integration with other tools, such as CI/CD pipelines or orchestration platforms. Overall, the Automation Controller streamlines the automation lifecycle, ensuring a scalable, secure, and maintainable automation environment. == Ansible framework methods -The three main methods for setting up an Ansible framework in relation to Ansible Automation Platform (AAP) 2.4 can be summarized as follows: +The three main methods for setting up an Ansible framework are as follows: === Method 1: AWS-based install -This method is ideal for organizations that prefer deploying AAP on AWS infrastructure. This default install process in AAP 2.4 uses AWS by default and offers a fully automated setup. It requires AWS credentials, builds an AWS image with Red Hat's ImageBuilder, and sets up AAP within an AWS VPC and subnet. The installation program creates all the necessary resources, including AAP Controllers and, optionally, additional components such as Automation Hub. +This method is ideal for organizations that prefer deploying AAP on AWS infrastructure. This default install process in AAP 2.4 uses AWS by default and offers a fully automated setup. 
It requires AWS credentials, builds an AWS image with Red Hat's ImageBuilder, and sets up AAP within an AWS VPC and subnet. The installation program creates all the necessary resources, including AAP Controllers and, optionally, additional components such as the Automation Hub. -*Pros*: This is the easiest method if you already use AWS, as it automates the provisioning of resources, including VMs and network configurations. - -*Cons*: This requires AWS infrastructure and credentials. This is not ideal if you're working in an on-premises environment or a cloud platform other than AWS. +This is the easiest method if you already use AWS, as it automates the provisioning of resources, including VMs and network configurations. This requires AWS infrastructure and credentials. === Method 2: Pre-configured VMs Install -This method allows the installation of AAP on pre-configured Red Hat Enterprise Linux (RHEL) VMs. It requires you to provide an inventory file that specifies details about the VMs or instances where AAP will be installed. It is designed for users with existing infrastructure who want to deploy AAP without depending on AWS. If you need to install a pattern on a cluster with a different topology than this, use the API install mechanism. - -*Pros*: Useful if you already have pre-configured VMs or bare-metal instances running RHEL. It allows greater flexibility and control over the environment. - -*Cons*: Requires more manual effort to configure VMs and may need additional customization for non-standard topologies. +This method allows the installation of AAP on pre-configured Red Hat Enterprise Linux (RHEL) VMs. You need to provide an inventory file that specifies details about the VMs or instances where AAP will be installed. It is designed for users with existing infrastructure who want to deploy AAP without depending on AWS. If you need to install a pattern on a cluster with a different topology than this, use the API install mechanism. -This model has been tested with up to two RHEL VMs (one for AAP and one for Hub). +THis method is useful if you already have pre-configured VMs or bare-metal instances running RHEL. It allows greater flexibility and control over the environment. Using this method requires more manual effort to configure VMs and might need additional customization for non-standard topologies. This model has been tested with up to two RHEL VMs (one for AAP and one for Hub). The requirements for this mode are as follows: * Must be running a version of RHEL that AAP supports * Must be properly entitled with a subscription that makes the appropriate AAP repository available -=== Method 3: Custom Ansible controller (API install aka "Bare") +=== Method 3: Custom Ansible controller (API install) In this method, you provide an existing Ansible Automation Platform (AAP) Controller endpoint, either on bare metal or in a private cloud, without needing AWS or pre-configured VMs. You specify the manifest, endpoint hostname, admin credentials, and pass the installation process to a predefined `controller_config_dir`. This is suitable for complex or custom topologies where you want full control over the deployment. -*Pros*: Provides maximum flexibility and is designed for advanced users who have their own AAP installations, either on-prem or in complex environments that do not fit into the default or AWS-centric model. 
- -*Cons*: Requires an existing AAP controller, which might not be ideal for users new to AAP or those looking for more hands-off installation +This method provides maximum flexibility and is designed for advanced users who have their own AAP installations, either on-prem or in complex environments that do not fit into the default or AWS-centric model. You need an existing AAP controller, which might not be ideal for users new to AAP or those looking for more hands-off installation == Creating a validated pattern using the AGOF framework @@ -120,9 +112,8 @@ You need to provide some key information to a file named `agof_vault.yml` create $ cp ~/agof/agof_vault_template.yml ~/agof_vault.yml ---- + - -. Agof vault settings - +.Agof vault settings ++ [cols="30%,70%", options="header"] |=== | Argument | Description @@ -143,7 +134,7 @@ $ cp ~/agof/agof_vault_template.yml ~/agof_vault.yml | An AWS region that your account has access to. | `offline_token` -| A Red Hat offline token used to build the RHEL image on https://console.redhat.com[console.redhat.com]. +a| A Red Hat offline token used to build the RHEL image on https://console.redhat.com[console.redhat.com]. [NOTE] ==== @@ -166,7 +157,7 @@ Click the `GENERATE TOKEN` link at https://access.redhat.com/management/api[Red | The Organization Number (Org ID) attached to your Red Hat Subscription for RHEL and AAP. | `activation_key_vault` -| The name of an Activation Key to embed in the imagebuilder image. +a| The name of an Activation Key to embed in the imagebuilder image. [NOTE] ==== @@ -178,7 +169,7 @@ Click the `Create Activation Keys` link at https://console.redhat.com[console.re #imagebuilder_ami: 'The ID of an AWS AMI image, preferably one that was built with this toolkit' | `automation_hub_token_vault` -| A token associated with your AAP subscription used to retrieve Automation Hub content. +a| A token associated with your AAP subscription used to retrieve Automation Hub content. [NOTE] ==== @@ -192,25 +183,23 @@ Click the `Load token` link at https://console.redhat.com[console.redhat.com] > | Optional: The private automation hub URL for validated content. |=== - - . Edit the file and add the following: * `controller_config_dir:` set it's value to `{{ '~/agof_minimal_demo/config' | expanduser }}`. -* `db_password:` set an appropriate value for the postgres password for the DB instance for example `test`. -* `agof_statedir:` set its value to "{{ '~/agof' | expanduser }}" -* `agof_iac_repo:` set its value to "https://github.com/mhjacks/agof_demo_config.git" +* `db_password:` sets an appropriate value for the postgres password for the DB instance for example `test`. +* `agof_statedir:` set its value to `"{{ '~/agof' | expanduser }}"` +* `agof_iac_repo:` set its value to `"https://github.com/mhjacks/agof_demo_config.git"` -. Optional: Create a subscription manifest by following the guidance at link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/htmlred_hat_ansible_automation_platform_operations_guide/assembly-aap-obtain-manifest-files#assembly-aap-obtain-manifest-files[Obtaining a manifest file] +. Optional: Create a subscription manifest by following the guidance at link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/htmlred_hat_ansible_automation_platform_operations_guide/assembly-aap-obtain-manifest-files#assembly-aap-obtain-manifest-files[Obtaining a manifest file]. -.. Update your `agof_vault.yml` file with the path to downloaded manifest zip file. 
For example add the following: +.. Update your `agof_vault.yml` file with the path to the downloaded manifest zip file. For example add the following: + [source,shell] ---- controller_license_src_file: '~/Downloads/manifest__20240924T131518Z.zip' manifest_content: "{{ lookup('file', controller_license_src_file) | b64encode }}" ---- - ++ .Example agof_vault.yml file [source,yaml] @@ -255,6 +244,7 @@ controller_config_dir: "{{ '~/agof_minimal_demo/config' | expanduser }}" db_password: 'test' agof_statedir: "{{ '~/agof' | expanduser }}" + agof_iac_repo: "https://github.com/mhjacks/agof_demo_config.git" ---- @@ -312,9 +302,9 @@ It is designed for users with existing infrastructure who want to deploy AAP wit ./pattern.sh make legacy_from_os_install INVENTORY=(your_inventory_file) ---- -The path to `your_inventory_file` defaults to ~/inventory_agof if you do not specify one. +The path to `your_inventory_file` defaults to `~/inventory_agof` if you do not specify one. -.Example agof_inventory file +.Example agof_inventory file for just AAP (default) [source,yaml] ---- @@ -341,26 +331,50 @@ username=myuser controller_hostname=192.168.5.207 ---- -=== Method 3: Custom Ansible controller (API install) +.Example agof_inventory file including AAP and Hub -In this method, you provide an existing Ansible Automation Platform (AAP) Controller endpoint, either on bare metal or in a private cloud, without needing AWS or pre-configured VMs. +[source,yaml] +---- +[build_control] +localhost + +[aap_controllers] +192.168.5.207 -Specify +[automation_hub] + +[eda_controllers] + +[aap_controllers:vars] + +[automation_hub:vars] + +[all:vars] +ansible_user=myuser +ansible_ssh_pass=mypass +ansible_become_pass=mypass +ansible_remote_tmp=/tmp/.ansible +username=myuser +controller_hostname=192.168.5.207 +---- -manifest -endpoint hostname -admin credentials, and pass the installation process to a predefined `controller_config_dir`. +=== Method 3: Custom Ansible controller (API install) +In this method, you provide an existing Ansible Automation Platform (AAP) Controller endpoint, either on bare metal or in a private cloud, without needing AWS or pre-configured VMs. + +You supply the manifest contents, endpoint hostname, admin username (defaults to "admin"), and admin password, and then the installation hands off to a `controller_config_dir` you define. +* Run the following command to install using this method: ++ [source,terminal] ---- -./pattern.sh make api_install +$ ./pattern.sh make api_install ---- === Tearing down the installation -To tear down the installation run the following command: - +* To tear down the installation run the following command: ++ [source,terminal] ---- $ ./pattern.sh make aws_uninstall diff --git a/content/learn/vp_agof_config_controller.adoc b/content/learn/vp_agof_config_controller.adoc index ba7f75977..6df3cdbec 100644 --- a/content/learn/vp_agof_config_controller.adoc +++ b/content/learn/vp_agof_config_controller.adoc @@ -14,7 +14,7 @@ include::modules/comm-attributes.adoc[] == Understanding the Ansible GitOps Framework (AGOF) installation process -The Ansible GitOps Framework (AGOF) is a powerful solution designed to automate the deployment and configuration of Ansible Automation Platform (AAP) environments using GitOps principles. It leverages Ansible to manage infrastructure and application provisioning in a declarative, version-controlled way. 
AGOF provides a structured approach to setting up cloud infrastructure, installing AAP components, and handing over control to the AAP Controller for ongoing automation and management. An overview of the steps involved in configuring a basic demo minimal demo application are listed here: +The Ansible GitOps Framework (AGOF) is a powerful solution designed to automate the deployment and configuration of Ansible Automation Platform (AAP) environments using link:https://opengitops.dev/[GitOps principles]. It leverages Ansible to manage infrastructure and application provisioning in a declarative, version-controlled way. AGOF provides a structured approach to setting up cloud infrastructure, installing AAP components, and handing over control to the AAP Controller for ongoing automation and management. An overview of the steps involved in configuring a basic demo minimal demo application are listed here: === 1. Pre-Init Environment (Bootstrap Ansible) @@ -28,8 +28,8 @@ The Ansible GitOps Framework (AGOF) is a powerful solution designed to automate === 3. Handover to Ansible Controller -* *Controller Setup*: The Ansible Automation Platform (AAP) Controller and optionally the Automation Hub are installed and configured. Entitlements are managed via a manifest, and execution environments and collections are downloaded and prepared. -* *GitOps Mode*: After configuration, AGOF transitions to GitOps mode. All environment changes are managed via Git commits to the repositories used by the controller, ensuring declarative and automated infrastructure management from this point onward. +* *Controller Setup*: The Ansible Automation Platform (AAP) Controller and optionally the Automation Hub are installed and configured. Entitlements are managed through a manifest, and execution environments and collections are downloaded and prepared. +* *GitOps Mode*: After configuration, AGOF transitions to GitOps mode. Git commits to the repositories by the controller manage all environment changes, ensuring declarative and automated infrastructure management from this point onward. == Using the Controller Configuration collection @@ -130,9 +130,13 @@ controller_launch_jobs: organization: "{{ orgname_vault }}" ---- -This file automates the creation, updating, or deletion of Ansible Controller objects (organizations, projects, inventories, credentials, templates, schedules). Sensitive information like passwords and keys are pulled dynamically from vaults, ensuring they are not hardcoded in the configuration. The project’s inventory and playbooks are managed through a Git repository, allowing for continuous integration and delivery (CI/CD) practices. Recurring playbook executions are scheduled automatically, eliminating the need for manual job triggers. +This file automates the creation, updating, or deletion of Ansible Controller objects (organizations, projects, inventories, credentials, templates, schedules). Sensitive information like passwords and keys are pulled dynamically from vaults, ensuring they are not hardcoded in the configuration. -== Key sections and parameters +A a Git repository manages the project’s inventory and playbooks, allowing for continuous integration and delivery (CI/CD) practices. AAP automatically schedules recurring playbook executions, eliminating the need for manual job triggers. + +== Key sections and parameters + +This section describes the parameters associated with the Ansible GitOps Framework minimal configuration demo. 
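As a minimal sketch of the indirection described in the following subsections, vault-backed values are mapped onto the variables that the roles consume. The `controller_username` and `controller_password` names below are assumptions based on the description of the dynamic variables; only the `_vault` variables appear verbatim in this demo.

[source,yaml]
----
# Sketch: vault-backed values feeding the variables the roles consume.
# controller_username and controller_password are assumed names; the *_vault
# entries match the demo parameters described below.
controller_username_vault: 'admin'
controller_password_vault: '{{ admin_password }}'

controller_username: '{{ controller_username_vault }}'
controller_password: '{{ controller_password_vault }}'
----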
=== Vault variables From 2c61033f7af859b9c72497542410e6f7341ecd10 Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Fri, 4 Oct 2024 12:13:43 +0100 Subject: [PATCH 17/22] Updating based on SME review --- content/learn/vp_agof.adoc | 34 ++++++++------------ content/learn/vp_agof_config_controller.adoc | 4 +-- 2 files changed, 16 insertions(+), 22 deletions(-) diff --git a/content/learn/vp_agof.adoc b/content/learn/vp_agof.adoc index a0cf0320c..487b665dd 100644 --- a/content/learn/vp_agof.adoc +++ b/content/learn/vp_agof.adoc @@ -36,26 +36,20 @@ The webUI provides an intuitive interface to build, monitor, and manage automati == Ansible framework methods -The three main methods for setting up an Ansible framework are as follows: +The main methods for setting up an Ansible framework are as follows: -=== Method 1: AWS-based install +[NOTE] +==== +This section focussed mainly on the AWS based installation method. +==== + +=== AWS based install This method is ideal for organizations that prefer deploying AAP on AWS infrastructure. This default install process in AAP 2.4 uses AWS by default and offers a fully automated setup. It requires AWS credentials, builds an AWS image with Red Hat's ImageBuilder, and sets up AAP within an AWS VPC and subnet. The installation program creates all the necessary resources, including AAP Controllers and, optionally, additional components such as the Automation Hub. This is the easiest method if you already use AWS, as it automates the provisioning of resources, including VMs and network configurations. This requires AWS infrastructure and credentials. -=== Method 2: Pre-configured VMs Install - -This method allows the installation of AAP on pre-configured Red Hat Enterprise Linux (RHEL) VMs. You need to provide an inventory file that specifies details about the VMs or instances where AAP will be installed. It is designed for users with existing infrastructure who want to deploy AAP without depending on AWS. If you need to install a pattern on a cluster with a different topology than this, use the API install mechanism. - -THis method is useful if you already have pre-configured VMs or bare-metal instances running RHEL. It allows greater flexibility and control over the environment. Using this method requires more manual effort to configure VMs and might need additional customization for non-standard topologies. This model has been tested with up to two RHEL VMs (one for AAP and one for Hub). - -The requirements for this mode are as follows: - -* Must be running a version of RHEL that AAP supports -* Must be properly entitled with a subscription that makes the appropriate AAP repository available - -=== Method 3: Custom Ansible controller (API install) +=== Custom Ansible controller (API install) In this method, you provide an existing Ansible Automation Platform (AAP) Controller endpoint, either on bare metal or in a private cloud, without needing AWS or pre-configured VMs. You specify the manifest, endpoint hostname, admin credentials, and pass the installation process to a predefined `controller_config_dir`. This is suitable for complex or custom topologies where you want full control over the deployment. 
@@ -196,7 +190,7 @@ Click the `Load token` link at https://console.redhat.com[console.redhat.com] > + [source,shell] ---- -controller_license_src_file: '~/Downloads/manifest__20240924T131518Z.zip' +controller_license_src_file: '~/Downloads/.zip' manifest_content: "{{ lookup('file', controller_license_src_file) | b64encode }}" ---- + @@ -205,9 +199,9 @@ manifest_content: "{{ lookup('file', controller_license_src_file) | b64encode }} [source,yaml] ---- --- -aws_account_nbr_vault: '293265215425' -aws_access_key_vault: 'AKIAIJ6ZIKPAUZGF2643' -aws_secret_key_vault: 'gMC3Jy3/MZtOosjUDHy0Nl/2mp2HQok1JDfCQGKUR' +aws_account_nbr_vault: '' +aws_access_key_vault: '' +aws_secret_key_vault: '' pattern_prefix: 'foga' pattern_dns_zone: 'aws.validatedpatterns.io' @@ -225,7 +219,7 @@ admin_password: 'redhat123!' manifest_content: "Content for a manifest file to entitle AAP Controller. See below for an example of how to point to a local file" #manifest_content: "{{ lookup('file', '~/Downloads/manifest_AVP_20230510T202608Z.zip') | b64encode }}" -org_number_vault: "1778713" +org_number_vault: "" activation_key_vault: "kevs-agof-key" # Set these variables to provide your own AMI, or to re-use an AMI previously generated with this process @@ -278,7 +272,7 @@ a| EDA Automation Controller Once the install completes, you will have a project, an inventory (consisting of the AAP controller), a credential (the private key from ec2), a job template (which runs a fact gather on the AAP controller) and a schedule that will run the job template every 5 minutes, -. Log in to `https:{{ ec2_name_prefix }}.{{ domain }}:8443` with the username `admin` and the password as configured in `admin_password` field of `agof_vault.yml`. +. Log in to `https://aap.{{ ec2_name_prefix }}.{{ domain }}:8443` with the username `admin` and the password as configured in `admin_password` field of `agof_vault.yml`. . Under *Resources* > *Projects* verify the project *Ansible GitOps Framework Minimal Demo* is created with status *Successful*. diff --git a/content/learn/vp_agof_config_controller.adoc b/content/learn/vp_agof_config_controller.adoc index 6df3cdbec..be25b68ed 100644 --- a/content/learn/vp_agof_config_controller.adoc +++ b/content/learn/vp_agof_config_controller.adoc @@ -147,7 +147,7 @@ This specifies the organization name stored in a vault for security purposes. This is the Ansible Controller's username stored in a vault. *`controller_password_vault: '{{ admin_password }}'`*:: -The password is fetched dynamically from a vault for security purposes. +The initial admin password that AAP is configured with to allow the controller_username to log in. This particular password is not retrieved from a vault. === Dynamic variables @@ -158,7 +158,7 @@ The Ansible Controller username is retrieved from the vault variable. The password is dynamically fetched from the vault. === Project configuration -Projects are collections of playbooks that are stored in a Git repository or SCM. This section can define how projects are configured in the Controller. +Projects are git repositories that can contain inventories and collections (and collections can contain playbooks). *`agof_demo_project_name: 'Ansible GitOps Framework Minimal Demo'`*:: This variable holds the name of the project being managed in the controller. 
From 44038cb59d9cabe97a91946296a18818c099f5b7 Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Mon, 7 Oct 2024 10:40:58 +0100 Subject: [PATCH 18/22] Adding peer review feedback --- content/learn/vp_agof.adoc | 84 ++------------------ content/learn/vp_agof_config_controller.adoc | 12 +-- 2 files changed, 14 insertions(+), 82 deletions(-) diff --git a/content/learn/vp_agof.adoc b/content/learn/vp_agof.adoc index 487b665dd..7b5231926 100644 --- a/content/learn/vp_agof.adoc +++ b/content/learn/vp_agof.adoc @@ -14,7 +14,7 @@ include::modules/comm-attributes.adoc[] == About the Ansible GitOps framework (AGOF)for validated patterns -The link:/patterns/ansible-gitops-framework/[Ansible GitOps Framework] provides an extensible framework to do GitOps with https://docs.ansible.com/platform.html[Ansible Automation Platform] (AAP), and as such provides useful facilities for developing patterns (community and validated) that function with Ansible Automation Platform as the GitOps engine. +The link:/patterns/ansible-gitops-framework/[Ansible GitOps Framework] provides an extensible framework to do GitOps with https://docs.ansible.com/platform.html[Ansible Automation Platform] (AAP). It offers useful facilities for developing patterns (community and validated) that work with AAP as the GitOps engine. When there is no access to OpenShift Container Platform, AGOF provides a standalone solution for deploying and managing validated patterns. This framework leverages GitOps principles without relying on OpenShift. @@ -26,9 +26,9 @@ The link:https://github.com/validatedpatterns/agof/[Ansible GitOps Framework] re == Role of the Ansible Controller -In the Ansible GitOps framework, an ansible controller refers to any machine that initiates and runs a playbook. This includes systems running ansible core or open source Ansible, where the machine executing the playbooks also acts as the controller. AAP provides the automation controller which is Red Hat's productized ansible controller. +In the Ansible GitOps framework, the Ansible Controller refers to any machine that initiates and runs playbooks. This includes systems running Ansible Core or open source Ansible, where the machine executing the playbooks also acts as the controller. The Ansible Automation Platform (AAP) provides the Automation Controller, which is Red Hat’s productized Ansible Controller. -THe link:https://docs.ansible.com/platform.html[Automation controller] is the command and control center for Red Hat Ansible Automation Platform, replacing Ansible Tower. It includes a webUI, API, role-based access control (RBAC), a workflow visualizer, and continuous integration and continuous delivery (CI/CD) integrations to help you organize and manage automation across your enterprise. +The link:https://docs.ansible.com/platform.html[Automation controller] is the command and control center for Red Hat Ansible Automation Platform, replacing Ansible Tower. It includes a web UI, API, role-based access control (RBAC), a workflow visualizer, and continuous integration and continuous delivery (CI/CD) integrations to help you organize and manage automation across your enterprise. The Automation Controller serves as the centralized platform for managing and executing automation across infrastructure. Use the Automation Controller to create job templates that standardize the deployment and execution of Ansible playbooks, making automation more consistent and reusable. 
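As a sketch of how such a job template can be expressed declaratively, the fragment below uses the `controller_templates` variable from the `controller_configuration` collection. The template, inventory, and credential names and the playbook path are assumptions for demonstration purposes; only the overall shape reflects how AGOF drives the controller.

[source,yaml]
----
# Illustrative job template definition; all names and paths are assumed.
controller_templates:
  - name: 'Gather AAP Controller Facts'
    organization: '{{ orgname_vault }}'
    project: 'Ansible GitOps Framework Minimal Demo'
    inventory: 'AAP Controller Inventory'
    playbook: 'playbooks/gather_facts.yml'
    job_type: run
    credentials:
      - 'AAP Controller SSH Key'
----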
It integrates essential components such as execution environments for consistent automation execution, projects (repositories for automation content), inventories (target endpoints), and credentials (for secure access to resources). @@ -40,12 +40,12 @@ The main methods for setting up an Ansible framework are as follows: [NOTE] ==== -This section focussed mainly on the AWS based installation method. +The focus here is primarily on the AWS-based installation method. ==== === AWS based install -This method is ideal for organizations that prefer deploying AAP on AWS infrastructure. This default install process in AAP 2.4 uses AWS by default and offers a fully automated setup. It requires AWS credentials, builds an AWS image with Red Hat's ImageBuilder, and sets up AAP within an AWS VPC and subnet. The installation program creates all the necessary resources, including AAP Controllers and, optionally, additional components such as the Automation Hub. +This method is ideal for organizations deploying AAP on AWS infrastructure. In AAP 2.4, the default installation process uses AWS and offers a fully automated setup. It requires AWS credentials and builds an AWS image with Red Hat’s ImageBuilder, setting up AAP within an AWS VPC and subnet. The installation program creates all necessary resources, including AAP Controllers and, optionally, additional components such as the Automation Hub. This is the easiest method if you already use AWS, as it automates the provisioning of resources, including VMs and network configurations. This requires AWS infrastructure and credentials. @@ -57,7 +57,7 @@ This method provides maximum flexibility and is designed for advanced users who == Creating a validated pattern using the AGOF framework -Whichever method you use to deploy a validated pattern you need to: +To deploy a validated pattern, follow these steps: . Clone the link:https://github.com/mhjacks/agof_demo_config[Ansible GitOps Framework Minimal Demo] repository. This is a minimal pattern that demonstrates how to use the Ansible GitOps Framework. + @@ -242,7 +242,7 @@ agof_statedir: "{{ '~/agof' | expanduser }}" agof_iac_repo: "https://github.com/mhjacks/agof_demo_config.git" ---- -. Run from the AGOF repository directory the following command : +. Run the following command from the AGOF repository directory: + [source,terminal] ---- @@ -284,75 +284,7 @@ Once the install completes, you will have a project, an inventory (consisting of . Under *Views* > *Schedules* verify the schedules are created. - -=== Method 2: Pre-configured VMs Install - -This method allows the installation of AAP on pre-configured Red Hat Enterprise Linux (RHEL) VMs. It requires you to provide an inventory file that specifies details about the VMs or instances where AAP will be installed. - -It is designed for users with existing infrastructure who want to deploy AAP without depending on AWS. If you need to install a pattern on a cluster with a different topology than this, use the API install mechanism. - -[source,shell] ----- -./pattern.sh make legacy_from_os_install INVENTORY=(your_inventory_file) ----- - -The path to `your_inventory_file` defaults to `~/inventory_agof` if you do not specify one. 
-
-.Example agof_inventory file for just AAP (default)
-
-[source,yaml]
----
-[build_control]
-localhost
-
-[aap_controllers]
-192.168.5.207
-
-[automation_hub]
-
-[eda_controllers]
-
-[aap_controllers:vars]
-
-[automation_hub:vars]
-
-[all:vars]
-ansible_user=myuser
-ansible_ssh_pass=mypass
-ansible_become_pass=mypass
-ansible_remote_tmp=/tmp/.ansible
-username=myuser
-controller_hostname=192.168.5.207
----
-
-.Example agof_inventory file including AAP and Hub
-
-[source,yaml]
----
-[build_control]
-localhost
-
-[aap_controllers]
-192.168.5.207
-
-[automation_hub]
-
-[eda_controllers]
-
-[aap_controllers:vars]
-
-[automation_hub:vars]
-
-[all:vars]
-ansible_user=myuser
-ansible_ssh_pass=mypass
-ansible_become_pass=mypass
-ansible_remote_tmp=/tmp/.ansible
-username=myuser
-controller_hostname=192.168.5.207
----
-
-=== Method 3: Custom Ansible controller (API install)
+=== Custom Ansible controller (API install)

In this method, you provide an existing Ansible Automation Platform (AAP) Controller endpoint, either on bare metal or in a private cloud, without needing AWS or pre-configured VMs.

diff --git a/content/learn/vp_agof_config_controller.adoc b/content/learn/vp_agof_config_controller.adoc
index be25b68ed..4e5282a6b 100644
--- a/content/learn/vp_agof_config_controller.adoc
+++ b/content/learn/vp_agof_config_controller.adoc
@@ -12,7 +12,7 @@ aliases: /ocp-framework/agof/
:_content-type: ASSEMBLY
include::modules/comm-attributes.adoc[]

-== Understanding the Ansible GitOps Framework (AGOF) installation process
+== Overview of the Ansible GitOps Framework (AGOF) Installation Process

The Ansible GitOps Framework (AGOF) is a powerful solution designed to automate the deployment and configuration of Ansible Automation Platform (AAP) environments using link:https://opengitops.dev/[GitOps principles]. It leverages Ansible to manage infrastructure and application provisioning in a declarative, version-controlled way. AGOF provides a structured approach to setting up cloud infrastructure, installing AAP components, and handing over control to the AAP Controller for ongoing automation and management. An overview of the steps involved in configuring a basic demo minimal demo application are listed here:

@@ -29,17 +29,17 @@ The Ansible GitOps Framework (AGOF) is a powerful solution designed to automate
=== 3. Handover to Ansible Controller

* *Controller Setup*: The Ansible Automation Platform (AAP) Controller and optionally the Automation Hub are installed and configured. Entitlements are managed through a manifest, and execution environments and collections are downloaded and prepared.
-* *GitOps Mode*: After configuration, AGOF transitions to GitOps mode. Git commits to the repositories by the controller manage all environment changes, ensuring declarative and automated infrastructure management from this point onward.
+* *GitOps Mode*: After configuration, AGOF transitions to GitOps mode. Git commits made by the controller to the repositories manage all environment changes, ensuring declarative and automated infrastructure management from this point onward.

-== Using the Controller Configuration collection
+== Controller configuration collection

-An AGOF pattern for example https://github.com/mhjacks/agof_demo_config is primarily an IaC (infrastructure as code) artifact designed to be used with the `controller_configuration` collections.
+An AGOF pattern, for example, https://github.com/mhjacks/agof_demo_config, is primarily an IaC (infrastructure as code) artifact designed to be used with the `controller_configuration` collections.

The AGOF (Ansible GitOps Framework) repository https://github.com/validatedpatterns/agof contains the code and tools needed to set up a new Ansible Automation Platform (AAP) instance. This setup is automated, using Infrastructure as Code (IaC) practices. It also includes some specific preferences to make it easier for others to publicly share and manage this type of infrastructure setup.

This approach ensures the automation controller configuration is version-controlled, dynamic, and reproducible. This method enables deployment automation with minimal manual intervention, which is useful for managing multiple controller instances or different environments in a CI/CD pipeline.

-For example for the AGOF minimal configuration demo the file https://github.com/mhjacks/agof_demo_config/blob/main/controller_config.yml is used with Ansible's Controller Configuration Collection, allowing the automation and management of Red Hat Ansible Automation Controller (formerly known as Ansible Tower).
+For example, for the AGOF minimal configuration demo, the file https://github.com/mhjacks/agof_demo_config/blob/main/controller_config.yml is used with Ansible's Controller Configuration Collection, allowing the automation and management of Red Hat Ansible Automation Controller (formerly known as Ansible Tower).

[source,yaml]
----
@@ -132,7 +132,7 @@ controller_launch_jobs:

This file automates the creation, updating, or deletion of Ansible Controller objects (organizations, projects, inventories, credentials, templates, schedules). Sensitive information like passwords and keys are pulled dynamically from vaults, ensuring they are not hardcoded in the configuration.

-A a Git repository manages the project's inventory and playbooks, allowing for continuous integration and delivery (CI/CD) practices. AAP automatically schedules recurring playbook executions, eliminating the need for manual job triggers.
+A Git repository manages the project's inventory and playbooks, allowing for continuous integration and delivery (CI/CD) practices. AAP automatically schedules recurring playbook executions, eliminating the need for manual job triggers.
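For example, a recurring schedule of the kind the minimal demo creates (a fact-gather run every 5 minutes) could be declared with a `controller_schedules` entry such as the sketch below. The schedule and template names are assumptions, and the `rrule` string follows the standard iCal recurrence syntax used by the controller.

[source,yaml]
----
# Illustrative schedule definition; names are assumed, the rrule runs every 5 minutes.
controller_schedules:
  - name: 'Fact gather every 5 minutes'
    organization: '{{ orgname_vault }}'
    unified_job_template: 'Gather AAP Controller Facts'
    rrule: 'DTSTART:20250101T000000Z RRULE:FREQ=MINUTELY;INTERVAL=5'
----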
== Key sections and parameters From fec7e5022e2f24ffecaae01d34d9d2bd7e70065a Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Mon, 14 Oct 2024 15:27:51 +0100 Subject: [PATCH 19/22] Content updated on peer review --- content/learn/vp_agof.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/learn/vp_agof.adoc b/content/learn/vp_agof.adoc index 7b5231926..d200a34e6 100644 --- a/content/learn/vp_agof.adoc +++ b/content/learn/vp_agof.adoc @@ -233,7 +233,7 @@ automation_hub_token_vault: '' #automation_hub_url_certified_vault: 'The private automation hub URL for certified content' #automation_hub_url_validated_vault: 'The private automation hub URL for validated content' -controller_config_dir: "{{ '~/agof_minimal_demo/config' | expanduser }}" +controller_config_dir: "{{ '~/agof_minimal_demo' | expanduser }}" db_password: 'test' From e6e0a82cb3c711cd0945683655d0f215bcbd9abf Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Mon, 14 Oct 2024 15:40:42 +0100 Subject: [PATCH 20/22] Content updated on peer review3 --- content/learn/vp_agof.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/learn/vp_agof.adoc b/content/learn/vp_agof.adoc index d200a34e6..c220e0e86 100644 --- a/content/learn/vp_agof.adoc +++ b/content/learn/vp_agof.adoc @@ -179,7 +179,7 @@ Click the `Load token` link at https://console.redhat.com[console.redhat.com] > . Edit the file and add the following: -* `controller_config_dir:` set it's value to `{{ '~/agof_minimal_demo/config' | expanduser }}`. +* `controller_config_dir:` set it's value to `{{ '~/agof_minimal_demo' | expanduser }}`. * `db_password:` sets an appropriate value for the postgres password for the DB instance for example `test`. * `agof_statedir:` set its value to `"{{ '~/agof' | expanduser }}"` * `agof_iac_repo:` set its value to `"https://github.com/mhjacks/agof_demo_config.git"` From 5cfb970ebd7ca3236bc37dc9e0dd1610cb974c9f Mon Sep 17 00:00:00 2001 From: Kevin Quinn Date: Tue, 15 Oct 2024 10:20:17 +0100 Subject: [PATCH 21/22] Content updated on SME review --- content/learn/vp_agof.adoc | 19 +++++++++++-------- content/learn/vp_agof_config_controller.adoc | 2 +- 2 files changed, 12 insertions(+), 9 deletions(-) diff --git a/content/learn/vp_agof.adoc b/content/learn/vp_agof.adoc index c220e0e86..348de37c1 100644 --- a/content/learn/vp_agof.adoc +++ b/content/learn/vp_agof.adoc @@ -18,7 +18,7 @@ The link:/patterns/ansible-gitops-framework/[Ansible GitOps Framework] provides When there is no access to OpenShift Container Platform, AGOF provides a standalone solution for deploying and managing validated patterns. This framework leverages GitOps principles without relying on OpenShift. -Administrators can use Ansible Automation Platform includes automation controller, a web-based UI interface to define, operate, scale, and delegate automation across their enterprise. +Administrators can use the Ansible Automation Platform, which includes automation controller, a web-based UI interface to define, operate, scale, and delegate automation across their enterprise. The repository at https://github.com/validatedpatterns/agof/[Ansible GitOps Framework] provides code for installing VMs on AWS, if needed. It can also be used with existing VMs or a functional AAP Controller endpoint. @@ -59,19 +59,22 @@ This method provides maximum flexibility and is designed for advanced users who To deploy a validated pattern, follow these steps: -. 
Clone the link:https://github.com/mhjacks/agof_demo_config[Ansible GitOps Framework Minimal Demo] repository. This is a minimal pattern that demonstrates how to use the Ansible GitOps Framework.
+. Clone the link:https://github.com/mhjacks/agof_demo_config[Ansible GitOps Framework Minimal Configuration Demo] repository by running the following command:
+
[source,terminal]
----
-$ git clone https://github.com/validatedpatterns-demos/agof_minimal_demo.git
+$ git clone git@github.com:mhjacks/agof_demo_config.git
----
-
-. Clone the link:https://github.com/validatedpatterns/agof[AGOF] repository. This serves as the provisioner for the pattern
++
+This is a minimal pattern that demonstrates how to use the Ansible GitOps Framework.
+
+. Clone the link:https://github.com/validatedpatterns/agof[AGOF] repository by running the following command:
[source,terminal]
----
-$ git clone https://github.com/validatedpatterns-demos/agof_minimal_demo.git
+$ git clone git@github.com:validatedpatterns/agof.git
----
++
+This serves as the provisioner for the pattern.

=== Deploying using the AWS-based install method

This method:

. Deploys AAP on that image using the command line installer.

-. Hands over the configuration of the AAP installation to the specified `controller_config_dir`.
+. Hands over the configuration of the AAP installation to the specified `agof_controller_config_dir`.

.Prerequisites

. Edit the file and add the following:

-* `controller_config_dir:` set it's value to `{{ '~/agof_minimal_demo' | expanduser }}`.
+* `agof_controller_config_dir:` set its value to `{{ '~/agof_minimal_demo' | expanduser }}`.
* `db_password:` sets an appropriate value for the postgres password for the DB instance for example `test`.
* `agof_statedir:` set its value to `"{{ '~/agof' | expanduser }}"`
* `agof_iac_repo:` set its value to `"https://github.com/mhjacks/agof_demo_config.git"`
diff --git a/content/learn/vp_agof_config_controller.adoc b/content/learn/vp_agof_config_controller.adoc
index 4e5282a6b..c2c45cc0b 100644
--- a/content/learn/vp_agof_config_controller.adoc
+++ b/content/learn/vp_agof_config_controller.adoc
@@ -72,7 +72,7 @@ controller_projects:
scm_delete_on_update: "no"
scm_type: git
scm_update_on_launch: "yes"
- scm_url: 'https://github.com/hybrid-cloud-demos/agof_minimal_demo.git'
+ scm_url: 'https://github.com/validatedpatterns-demos/agof_minimal_demo.git'

controller_organizations:
- name: '{{ orgname_vault }}'

From a7d5d3554f32316cbbf5631358551fd614353648 Mon Sep 17 00:00:00 2001
From: Kevin Quinn
Date: Tue, 15 Oct 2024 13:41:55 +0100
Subject: [PATCH 22/22] Content updated on SME review 2

---
 content/learn/vp_agof.adoc | 1 +
 1 file changed, 1 insertion(+)

diff --git a/content/learn/vp_agof.adoc b/content/learn/vp_agof.adoc
index 348de37c1..9be958acd 100644
--- a/content/learn/vp_agof.adoc
+++ b/content/learn/vp_agof.adoc
@@ -69,6 +69,7 @@ $ git clone git@github.com:mhjacks/agof_demo_config.git

This is a minimal pattern that demonstrates how to use the Ansible GitOps Framework.

. Clone the link:https://github.com/validatedpatterns/agof[AGOF] repository by running the following command:
++
[source,terminal]
----
$ git clone git@github.com:validatedpatterns/agof.git