PoC: Deploy GitLab Helm on HA Kubeadm Cluster using QEMU + KVM with Packer, Terraform, Vault, and Ansible
Note
Refer to README-zh-TW.md for Traditional Chinese (Taiwan) version.
This repository (hereinafter referred to as "this repo") is a Proof of Concept (PoC) for Infrastructure as Code. It primarily achieves automated deployment of a High Availability (HA) Kubernetes cluster (Kubeadm / microk8s) in a purely on-premise environment using QEMU-KVM. This repo was developed based on personal exercises conducted during an internship at Cathay General Hospital. The objective is to establish an on-premise GitLab instance capable of automated infrastructure deployment, with the aim of creating a reusable IaC pipeline for legacy systems.
Note
This repo has been approved for public release by the relevant company department as part of a technical portfolio.
The machine specifications used for development are listed below for reference only:
- Chipset: Intel® HM770
- CPU: Intel® Core™ i7 processor 14700HX
- RAM: Micron Crucial Pro 64GB Kit (32GBx2) DDR5-5600 UDIMM
- SSD: WD PC SN560 SDDPNQE-1T00-1032
The project can be cloned using the following command:

```shell
git clone -b v1.7.2 --depth 1 https://github.com/csning1998-old/on-premise-gitlab-deployment.git
```

The following resource allocation is configured based on RAM constraints:
| Network Segment (CIDR) | Service Tier | Usage (Service) | Storage Pool Name | VIP (HAProxy/Ingress) | Node IP Allocation | Component (Role) | Quantity | Unit vCPU | Unit RAM | Subtotal RAM | Notes |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 172.16.134.0/24 | App (GitLab) | Kubeadm Cluster | iac-kubeadm | 172.16.134.250 | .200 (Master), .21x (Worker) | Kubeadm Master | 1 | 2 | 6.0 GiB | 6,144 MiB | Used for GitLab Helm Chart deployment |
| | | | | | | Kubeadm Worker | 2 | 4 | 8.0 GiB | 16,384 MiB | For Rails/Sidekiq, GitLab Runner, etc. |
| 172.16.135.0/24 | App (Harbor) | MicroK8s Cluster | iac-harbor | 172.16.135.250 | .20x (Nodes) | MicroK8s Node | 1 | 4 | 6.0 GiB | 6,144 MiB | Full Harbor consumes ~4-5 GB |
| 172.16.136.0/24 | Shared | Vault HA | iac-vault | 172.16.136.250 | .20x (Vault), .21x (HAProxy) | Vault (Raft) | 1 | 2 | 1.0 GiB | 1,024 MiB | Raft is lightweight; shared secrets management center |
| | | | | | | HAProxy | 1 | 1 | 0.5 GiB | 512 MiB | TCP forwarding only |
| 172.16.137.0/24 | Data (Harbor) | Postgres HA | iac-postgres-harbor | 172.16.137.250 | .20x (Postgres), .21x (Etcd), .22x (HAProxy) | Postgres | 1 | 2 | 2.0 GiB | 2,048 MiB | shared_buffers set to 512MB; instantiated via Module 21 |
| | | | | | | Etcd | 1 | 1 | 1.0 GiB | 1,024 MiB | Patroni low usage |
| | | | | | | HAProxy | 1 | 1 | 0.5 GiB | 512 MiB | |
| 172.16.138.0/24 | Data (Harbor) | Redis HA | iac-redis-harbor | 172.16.138.250 | .20x (Redis), .21x (HAProxy) | Redis | 1 | 1 | 1.0 GiB | 1,024 MiB | |
| | | | | | | HAProxy | 1 | 1 | 0.5 GiB | 512 MiB | |
| 172.16.139.0/24 | Data (Harbor) | MinIO HA | iac-minio-harbor | 172.16.139.250 | .20x (MinIO), .21x (HAProxy) | MinIO | 1 | 2 | 1.5 GiB | 1,536 MiB | Go heap not that heavy |
| | | | | | | HAProxy | 1 | 1 | 0.5 GiB | 512 MiB | |
| 172.16.140.0/24 | Data (GitLab) | Postgres HA | iac-postgres-gitlab | 172.16.140.250 | .20x (Postgres), .21x (Etcd), .22x (HAProxy) | Postgres | 1 | 2 | 4.0 GiB | 4,096 MiB | Replication of Layer 20 |
| | | | | | | Etcd | 1 | 1 | 1.0 GiB | 1,024 MiB | Same as Harbor Postgres |
| | | | | | | HAProxy | 1 | 1 | 0.5 GiB | 512 MiB | |
| 172.16.141.0/24 | Data (GitLab) | Redis HA | iac-redis-gitlab | 172.16.141.250 | .20x (Redis), .21x (HAProxy) | Redis | 1 | 1 | 2.0 GiB | 2,048 MiB | Same as Harbor Redis |
| | | | | | | HAProxy | 1 | 1 | 0.5 GiB | 512 MiB | |
| 172.16.142.0/24 | Data (GitLab) | MinIO HA | iac-minio-gitlab | 172.16.142.250 | .20x (MinIO), .21x (HAProxy) | MinIO | 1 | 2 | 3.0 GiB | 3,072 MiB | Same as Harbor MinIO |
| | | | | | | HAProxy | 1 | 1 | 0.5 GiB | 512 MiB | |
| **Total** | | | | | | | **20** | | | **49,152 MiB** | ≈ 48.0 GiB |
- This repo currently only supports Linux hosts with CPU virtualization support. It has not been tested on other distributions such as Fedora, Arch, CentOS, or WSL2. The following command checks whether the development machine supports virtualization:

  ```shell
  lscpu | grep Virtualization
  ```

  Possible outputs include:

  - `Virtualization: VT-x` (Intel)
  - `Virtualization: AMD-V` (AMD)
  - If there is no output, virtualization may not be supported.
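As a complement to the `lscpu` check, the kernel side can be verified as well. The sketch below is an illustration only (it is not part of this repo's scripts) and assumes a Linux host with `/proc/cpuinfo`:

```shell
# Illustrative probe: check the CPU flag and whether the kvm module exposed /dev/kvm.
virt_status() {
  if grep -Eqw 'vmx|svm' /proc/cpuinfo; then
    echo "CPU flags: VT-x/AMD-V present"
  else
    echo "CPU flags: no VT-x/AMD-V found"
  fi
  if [ -e /dev/kvm ]; then
    echo "/dev/kvm: available"
  else
    echo "/dev/kvm: missing (KVM module not loaded, or no hardware support)"
  fi
}
virt_status
```

A present CPU flag with a missing `/dev/kvm` usually means the `kvm_intel`/`kvm_amd` module is not loaded or virtualization is disabled in firmware.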
Warning
Compatibility Warning
This repo currently only supports Linux hosts with CPU virtualization functionality. If the host CPU does not support virtualization (e.g., lacking VT-x/AMD-V), please switch to the legacy-workstation-on-ubuntu branch, which supports basic HA Kubeadm cluster setup.
Additionally, this repo is currently an independent personal project and may contain edge cases. Issues will be addressed as they are identified.
Before proceeding, ensure the host system meets the following requirements:
- Linux host (RHEL 10 or Ubuntu 24 recommended).
- CPU virtualization support (VT-x or AMD-V).
- `sudo` privileges for Libvirt management.
- `podman` and `podman compose` installed for containerized operations.
- `openssl` package (provides the `openssl passwd` command).
- `jq` package (for JSON parsing).
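The prerequisites above can be verified in one pass before running `entry.sh`. The helper below is hypothetical (not provided by this repo):

```shell
#!/usr/bin/env bash
# Hypothetical helper: report any missing prerequisite commands up front,
# so failures surface early rather than mid-provisioning.
check_tools() {
  local missing=()
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || missing+=("$cmd")
  done
  if [ "${#missing[@]}" -gt 0 ]; then
    printf 'missing: %s\n' "${missing[*]}"
    return 1
  fi
  echo "all prerequisites found"
}
check_tools sudo podman openssl jq || true
```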
This project currently provisions the following services (Items 1–5 are configured with HAProxy and Keepalived):
- HA HashiCorp Vault with Raft Storage.
- Postgres / Patroni (includes etcd).
- Redis / Sentinel.
- MinIO (S3) / Distributed MinIO.
- Harbor Container Registry.
- [WIP] GitLab / Runner / Gitaly etc.
- Private Key Encryption.
- OpenTofu migration, for its native `*.tfstate` file encryption feature.
Note
Section 1 and Section 2 cover the pre-execution setup tasks. See below for details.
The entry.sh script located in the root directory handles all service initialization and lifecycle management. Executing ./entry.sh from the repo root displays the following interface:
➜ on-premise-gitlab-deployment git:(main) ✗ ./entry.sh
... (Some preflight check)
======= IaC-Driven Virtualization Management =======
[INFO] Environment: NATIVE
--------------------------------------------------
[OK] Development Vault (Local): Running (Unsealed)
[OK] Production Vault (Layer10): Running (Unsealed)
------------------------------------------------------------
1) [DEV] Set up TLS for Dev Vault (Local) 7) Setup Core IaC Tools 13) Switch Environment Strategy
2) [DEV] Initialize Dev Vault (Local) 8) Verify IaC Environment 14) Purge Specific Terraform Layer
3) [DEV] Unseal Dev Vault (Local) 9) Build Packer Base Image 15) Purge All Libvirt Resources
4) [PROD] Unseal Production Vault (via Ansible) 10) Provision Terraform Layer 16) Purge All Packer and Terraform Resources
5) Generate SSH Key 11) Rebuild Terraform Layer via Ansible 17) Quit
6) Setup KVM / QEMU for Native 12) Verify SSH
[INPUT] Please select an action:
Options 9, 10, and 11 dynamically populate submenus by scanning the packer/output and terraform/layers directories. The submenus for a complete configuration are shown below:
Note
Option 11 is currently malfunctioning.
- When selecting `9) Build Packer Base Image`:

  ```shell
  [INPUT] Please select an action: 9
  [INFO] Checking status of libvirt service...
  [OK] libvirt service is already running.
  1) 01A-docker-harbor   4) 04-base-postgres   7) 07-base-vault     10) Build ALL Packer Images
  2) 02-base-kubeadm     5) 05-base-redis      8) 08-base-haproxy   11) Back to Main Menu
  3) 03-base-microk8s    6) 06-base-minio      9) 09-base-etcd
  [INPUT] Select a Packer build to run:
  ```
- When selecting `10) Provision Terraform Layer`:

  ```shell
  [INPUT] Please select an action: 10
  [INFO] Checking status of libvirt service...
  [OK] libvirt service is already running.
  1) 10-vault-raft       4) 30-gitlab-minio     7) 30-harbor-minio     10) 40-gitlab-kubeadm   13) 50-harbor-platform  16) 90-github-meta
  2) 20-vault-pki        5) 30-gitlab-postgres  8) 30-harbor-postgres  11) 40-harbor-microk8s  14) 60-gitlab-service   17) Back to Main Menu
  3) 30-dev-harbor-core  6) 30-gitlab-redis     9) 30-harbor-redis     12) 50-gitlab-platform  15) 60-harbor-service
  [INPUT] Select a Terraform layer to UPDATE / PROVISION:
  ```
- When selecting `11) Rebuild Layer via Ansible`:

  ```shell
  [INPUT] Please select an action: 11
  [INFO] Checking status of libvirt service...
  [OK] libvirt service is already running.
  1) inventory-10-vault-core.yaml       6) inventory-20-harbor-postgres.yaml
  2) inventory-20-gitlab-minio.yaml     7) inventory-20-harbor-redis.yaml
  3) inventory-20-gitlab-postgres.yaml  8) inventory-30-gitlab-kubeadm.yaml
  4) inventory-20-gitlab-redis.yaml     9) inventory-30-harbor-microk8s.yaml
  5) inventory-20-harbor-minio.yaml    10) Back to Main Menu
  [INPUT] Select a Cluster Inventory to run its Playbook:
  ```
The following sections detail the usage instructions for entry.sh.
Option 6 in entry.sh automates the installation of the QEMU/KVM environment. This process is currently tested only on Ubuntu 24 and RHEL 10. For other platforms, refer to official documentation to manually configure the KVM and QEMU environment.
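For reference, the manual steps on Ubuntu 24.04 roughly correspond to the sketch below. This is an assumption about a typical setup, not the exact package set `entry.sh` installs; RHEL-family hosts use different package names via `dnf`:

```shell
# Manual KVM/QEMU setup sketch for Ubuntu 24.04 (what option 6 automates).
sudo apt-get update
sudo apt-get install -y qemu-kvm libvirt-daemon-system libvirt-clients virtinst
sudo systemctl enable --now libvirtd
sudo usermod -aG libvirt "$USER"   # re-login required for group membership
```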
- Install the HashiCorp toolkit: Terraform and Packer

  Execute `entry.sh` in the project root directory and select option `7` "Setup Core IaC Tools for Native" to install Terraform, Packer, and Ansible. Refer to the official installation guides for more details:

  Reference: Terraform Installation
  Reference: Packer Installation
  Reference: Ansible Installation

  The expected output should show the latest versions. For instance (in zsh):

  ```shell
  ...
  [INPUT] Please select an action: 7
  [STEP] Verifying Core IaC Tools (HashiCorp/Ansible)...
  [STEP] Setting up core IaC tools...
  [TASK] Installing OS-specific base packages for RHEL...
  ...
  [TASK] Installing Ansible Core using pip...
  ...
  [INFO] Installing HashiCorp Toolkits (Terraform, Packer, Vault)...
  [TASK] Installing terraform...
  ...
  [TASK] Installing packer...
  ...
  [TASK] Installing vault...
  ...
  [TASK] Installing to /usr/local/bin/vault
  [INFO] Verifying installed tools...
  [STEP] Verifying Core IaC Tools (HashiCorp/Ansible)...
  [INFO] HashiCorp Packer: Installed
  [INFO] HashiCorp Terraform: Installed
  [INFO] HashiCorp Vault: Installed
  [INFO] Red Hat Ansible: Installed
  [OK] Core IaC tools setup and verification completed.
  ```
- Verify that Podman or Docker is correctly installed. Select the appropriate installation method for the host operating system by following the official documentation linked below:

  Reference: Podman Installation
  Reference: Docker Installation
- For Podman-based setups, navigate to the project root directory after the installation:
- The default memlock limit (`ulimit -l`) is typically insufficient, causing HashiCorp Vault's `mlock` system calls to fail. In rootless Podman environments, processes are mapped via UID to a standard host user and inherit that user's limits. To resolve this, apply the following configuration to `/etc/security/limits.conf`:

  ```shell
  sudo tee -a /etc/security/limits.conf <<EOT
  ${USER} soft memlock unlimited
  ${USER} hard memlock unlimited
  EOT
  ```

  This configuration enables the Vault process within the user namespace to lock memory, preventing sensitive data from being paged to unencrypted swap space. A system reboot is required for the changes to take effect.
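After re-login, the effective limit can be checked before starting the Vault container. The helper and its 64 MiB threshold below are illustrative assumptions, not values mandated by Vault:

```shell
# Illustrative check of the effective memlock limit for the current session.
memlock_ok() {
  # $1 is the value reported by `ulimit -l` (KiB, or the string "unlimited")
  [ "$1" = "unlimited" ] && return 0
  # 64 KiB is a common restrictive default; 64 MiB here is an assumed floor.
  [ "$1" -ge 65536 ] 2>/dev/null
}
if memlock_ok "$(ulimit -l)"; then
  echo "memlock limit looks sufficient"
else
  echo "memlock limit too low; apply the limits.conf change and re-login"
fi
```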
- For the initial deployment, execute:

  ```shell
  podman compose up --build
  ```
- Once the containers are created, use the following command to start the services:

  ```shell
  podman compose up -d
  ```
- The default environment is set to `DEBIAN_FRONTEND=noninteractive`. To access a container for inspection or modification, execute:

  ```shell
  podman exec -it iac-controller-base bash
  ```

  In this context, `iac-controller-base` refers to the project's root container name.
The default container status after running
podman compose --profile all up -dandpodman ps -ashould resemble the following:CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 974baf0177f6 docker.io/hashicorp/vault:1.20.2 server -config=/v... 24 seconds ago Up 14 seconds (healthy) 8200/tcp iac-vault-server ea3b31db9a5c localhost/on-premise-iac-controller:qemu-latest /bin/bash -c whil... 24 seconds ago Up 14 seconds iac-runner
Note
Resolved: Data Loss Warning
When switching between Podman container and Native environments, all Libvirt resources provisioned by Terraform will be automatically deleted. This measure prevents permission and context conflicts associated with the Libvirt UNIX socket.
- Recommended VSCode Plugins: These extensions provide syntax highlighting for the languages used in this project:
- Ansible language support extension. Marketplace Link of Ansible

  ```shell
  code --install-extension redhat.ansible
  ```

- HCL language support extension for Terraform. Marketplace Link of HashiCorp HCL

  ```shell
  code --install-extension HashiCorp.HCL
  ```

- Packer tool extension. Marketplace Link of Packer Powertools

  ```shell
  code --install-extension szTheory.vscode-packer-powertools
  ```
Important
Initialization must be completed in the following order to ensure proper operation of this repo.
- Environment Variables File: `entry.sh` automatically generates a `.env` file for internal shell script use. This file typically requires no manual intervention.
- SSH Key Generation: SSH keys enable automated configuration by allowing services to authenticate with virtual machines during Terraform and Ansible execution. Use option `5` "Generate SSH Key" in `./entry.sh` to create a key pair. The default key name is `id_ed25519_on-premise-gitlab-deployment`, and keys are stored in the `~/.ssh/` directory.
- Environment Switching: Option `13` in `./entry.sh` toggles between "Container" and "Native" environments.

  This repo utilizes Podman as the container runtime to prevent SELinux permission conflicts. On systems with SELinux enabled (e.g., Fedora, RHEL, CentOS Stream), Docker containers run within the `container_t` domain by default. In such environments, the SELinux policy prohibits `container_t` from connecting to the `virt_var_run_t` UNIX socket, even if `/var/run/libvirt/libvirt-sock` is correctly mounted with `0770` permissions and proper group ownership. This results in "Permission denied" errors for `virsh` or the Terraform libvirt provider.

  Conversely, the process context (`task_struct`) of rootless Podman is typically the user's `unconfined_t` or a similar SELinux type, rather than being confined to `container_t`. Therefore, assuming the user is a member of the `libvirt` group, connection to the libvirt socket succeeds without additional SELinux policy adjustments. If Docker must be used, workarounds include disabling SELinux (not recommended), implementing custom SELinux modules, or enabling TCP connections for `libvirtd` at the cost of reduced security.
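The SELinux facts described above can be inspected directly. The probe below is a hypothetical diagnostic (not part of `entry.sh`) and is only meaningful on SELinux-enabled hosts:

```shell
# Hypothetical diagnostic: show the SELinux state, socket label, and user context.
selinux_probe() {
  if ! command -v getenforce >/dev/null 2>&1; then
    echo "SELinux tooling not installed; this check does not apply"
    return 0
  fi
  echo "SELinux mode: $(getenforce)"
  # The socket should carry virt_var_run_t; a Docker process runs as container_t
  ls -Z /var/run/libvirt/libvirt-sock 2>/dev/null || echo "libvirt socket not present"
  id -Z 2>/dev/null || echo "no SELinux context for the current user"
}
selinux_probe
```

If `id -Z` reports `unconfined_t` (or similar) rather than `container_t`, the rootless-Podman path described above applies.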
Note
Incorrect Libvirt file permissions will directly obstruct the Terraform Libvirt Provider. The following permission checks should be performed before proceeding.
- Ensure the user account is a member of the `libvirt` group:

  ```shell
  sudo usermod -aG libvirt $(whoami)
  ```

  A full logout and login, or a system reboot, is required for the group membership change to take effect in the current shell session.
- Modify the `libvirtd` configuration to delegate socket management to the `libvirt` group:

  ```shell
  # Using Vim
  sudo vim /etc/libvirt/libvirtd.conf
  # Using Nano
  sudo nano /etc/libvirt/libvirtd.conf
  ```

  Uncomment the following lines within the file:

  ```shell
  unix_sock_group = "libvirt"
  # ...
  unix_sock_rw_perms = "0770"
  ```
- Override the systemd socket unit settings, as systemd configurations take precedence over `libvirtd.conf`.

  - Open the systemd editor for the socket unit:

    ```shell
    sudo systemctl edit libvirtd.socket
    ```

  - Insert the following configuration above the `### Edits below this comment will be discarded` line to ensure the settings are applied:

    ```shell
    [Socket]
    SocketGroup=libvirt
    SocketMode=0770
    ```

    Save and exit the editor (press `Ctrl+O`, `Enter`, then `Ctrl+X` in Nano).
- Restart the services in the following order to apply the changes.

  - Reload the systemd manager configuration:

    ```shell
    sudo systemctl daemon-reload
    ```

  - Stop all `libvirtd`-related services to ensure a clean transition:

    ```shell
    sudo systemctl stop libvirtd.service libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket
    ```

  - Disable `libvirtd.service` to delegate service management to systemd socket activation:

    ```shell
    sudo systemctl disable libvirtd.service
    ```

  - Restart `libvirtd.socket`:

    ```shell
    sudo systemctl restart libvirtd.socket
    ```
- Verification.

  - Inspect the socket permissions; the output should indicate the `libvirt` group and `srwxrwx---` permissions:

    ```shell
    ls -la /var/run/libvirt/libvirt-sock
    ```

  - Execute the `virsh` command as a non-root user:

    ```shell
    virsh list --all
    ```

  Successful execution and the display of virtual machines, regardless of whether the list is empty, confirms that permissions are correctly configured.
Note
This project utilizes Terraform GitHub Integration by default for repository management. Consequently, a Fine-grained Personal Access Token must be configured. If the cloned repo is not managed via this integration, the terraform/layers/90-github-meta layer may be skipped or deleted without affecting subsequent operations.
- Navigate to GitHub Developer Settings to generate a Fine-grained Personal Access Token.

- Click `Generate new token` and specify the token name, expiration period, and repository access scope.

- In the Permissions section, configure the following:

  | Permission | Access Level | Description |
  |---|---|---|
  | Metadata | Read-only | Mandatory |
  | Administration | Read and Write | For modifying repo settings and Rulesets |
  | Contents | Read and Write | For reading Ref and Git information |
  | Repository security advisories | Read and Write | For managing security advisories |
  | Dependabot alerts | Read and Write | For managing dependency alerts |
  | Secrets | Read and Write | (Optional) for managing Actions Secrets |
  | Variables | Read and Write | (Optional) for managing Actions Variables |
  | Webhooks | Read and Write | (Optional) for managing Webhooks |

- Click `Generate token` and save the value for the following steps.
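Before wiring the token into Vault, a quick smoke test against the GitHub API can confirm it authenticates. This sketch assumes the token is exported as `GITHUB_TOKEN` (a hypothetical variable name, matching the one used later for the governance layer):

```shell
# Hypothetical smoke test: a valid fine-grained PAT returns the account login.
check_pat() {
  if [ -z "${GITHUB_TOKEN:-}" ]; then
    echo "GITHUB_TOKEN not set"
    return 1
  fi
  curl -sf \
    -H "Authorization: Bearer ${GITHUB_TOKEN}" \
    -H "X-GitHub-Api-Version: 2022-11-28" \
    https://api.github.com/user | jq -r .login
}
check_pat || true
```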
Important
Confidential data is centralized within HashiCorp Vault and categorized into Development and Production modes. By default, the Vault instances in this repo utilize HTTPS secured by a self-signed CA. Follow these steps for correct configuration.
- The Development Vault is a prerequisite for establishing the Production Vault. The Dev Vault serves exclusively to provision the Prod Vault and Packer images; thereafter, all sensitive project data is managed by the Prod Vault.
- Execute `entry.sh` and select option `1` to generate the required TLS handshake files. Fields may be left blank when creating the self-signed CA. If the TLS files need to be regenerated, execute option `1` again.

- Navigate to the project root and execute the following command to start the Development Vault server. This repo defaults to running Vault in sidecar mode within the container:

  ```shell
  podman compose up -d iac-vault-server
  ```

  Upon initialization, the Dev Vault generates `vault.db` and Raft-related files in `vault/data/`. To recreate the Dev Vault, all files within `vault/data/` and `vault/keys/` must be manually deleted. Open a new terminal window or tab for subsequent operations to prevent environment variable conflicts in the current shell session.
- After completing the previous steps, execute `entry.sh` and select option `2` to initialize the Dev Vault. This process also automatically performs the unseal operation.
- Manually update the following variables. All default passwords must be replaced with unique values to ensure security.

  - Purging sensitive variables from shell history after executing `vault kv put` commands is strongly recommended to mitigate data exposure. Refer to Note 0 for details.
- For the Development Vault

  The following variables are required for provisioning the production HashiCorp Vault across Packer and Terraform Layer 10:

  - `github_pat`: The GitHub Personal Access Token obtained in the previous step.
  - `ssh_username`, `ssh_password`: Credentials for SSH access.
  - `vm_username`, `vm_password`: Credentials for the virtual machine.
  - `ssh_public_key_path`, `ssh_private_key_path`: Paths to the SSH public and private keys on the host.

  ```shell
  printf "Enter ssh Password: "
  read -s ssh_password

  vault kv put \
    -address="https://127.0.0.1:8200" \
    -ca-cert="${PWD}/vault/tls/ca.pem" \
    secret/on-premise-gitlab-deployment/variables \
    github_pat="your-github-personal-access-token" \
    ssh_username="some-user-name-for-ssh" \
    ssh_password="$ssh_password" \
    ssh_password_hash="$(printf '%s' "$ssh_password" | openssl passwd -6 -stdin)" \
    vm_username="some-user-name-for-vm" \
    vm_password="$ssh_password" \
    ssh_public_key_path="~/.ssh/some-ssh-key-name.pub" \
    ssh_private_key_path="~/.ssh/some-ssh-key-name"

  vault kv put \
    -address="https://127.0.0.1:8200" \
    -ca-cert="${PWD}/vault/tls/ca.pem" \
    secret/on-premise-gitlab-deployment/infrastructure \
    vault_haproxy_stats_pass="some-password-for-vault-haproxy-stats-pass-for-development-mode" \
    vault_keepalived_auth_pass="some-password-for-vault-keepalived-auth-pass-for-development-mode"
  ```
  If `90-github-meta` is not used to manage GitHub repository settings, the `github_pat` secret can be deleted.
- For the Production Vault

  The following variables are required for provisioning the Terraform layers for the Patroni, Sentinel, MinIO (S3), Harbor, and GitLab clusters:

  - `ssh_username`, `ssh_password`: SSH login credentials.
  - `vm_username`, `vm_password`: Virtual machine login credentials.
  - `ssh_public_key_path`, `ssh_private_key_path`: Paths to the SSH public and private keys on the host machine.
  - `pg_superuser_password`: Password for the PostgreSQL superuser (`postgres`). Required for database initialization (`initdb`), Patroni management operations, and manual maintenance tasks.
  - `pg_replication_password`: Credentials for the streaming replication user. Patroni uses this password when provisioning standby nodes to enable WAL synchronization with the primary.
  - `pg_vrrp_secret`: VRRP authentication key for Keepalived nodes. Ensures that only authorized nodes participate in Virtual IP (VIP) election and failover, mitigating malicious interference within the local network.
  - `redis_requirepass`: Authentication password for Redis clients. All clients connecting to Redis, such as GitLab or Harbor, must authenticate via the `AUTH` command using this password.
  - `redis_masterauth`: Authentication password used by Redis replicas to synchronize with the master. During failover, new replicas use this password for handshakes with the newly promoted master. This is typically set identical to `redis_requirepass` to ensure seamless replication in Sentinel + HA configurations.
  - `redis_vrrp_secret`: VRRP authentication key for the Redis load balancing layer (HAProxy/Keepalived). Follows the same operational principle as `pg_vrrp_secret`.
  - `minio_root_user`: MinIO root administrator account (formerly Access Key), used for MinIO Console access and managing buckets or policies via the MinIO Client (`mc`).
  - `minio_root_password`: MinIO root administrator password (formerly Secret Key).
  - `minio_vrrp_secret`: VRRP authentication key for the MinIO load balancing layer (HAProxy/Keepalived). Follows the same operational principle as `pg_vrrp_secret`.
  - `vault_haproxy_stats_pass`: Password for the HAProxy Stats Dashboard (typically on port `8404`), used for monitoring backend server health and traffic statistics via the Web UI.
  - `vault_keepalived_auth_pass`: VRRP authentication key for the Vault cluster load balancer to secure the Vault service VIP.
  - `harbor_admin_password`: Default password for the Harbor Web Portal `admin` account, required for initial project creation and robot account configuration.
  - `harbor_pg_db_password`: Dedicated password for Harbor services (Core, Notary, Clair) to connect to PostgreSQL. This application-level credential (assigned to the `harbor` DB user) is granted fewer privileges than `pg_superuser_password`.

  ```shell
  export VAULT_ADDR="https://172.16.136.250:443"
  export VAULT_CACERT="${PWD}/terraform/layers/10-vault-raft/tls/vault-ca.crt"
  export VAULT_TOKEN=$(jq -r .root_token ansible/fetched/vault/vault_init_output.json)

  vault secrets enable -path=secret kv-v2

  vault kv put secret/on-premise-gitlab-deployment/variables \
    ssh_username="some-username-for-ssh-for-production-mode" \
    ssh_password="some-password-for-ssh-for-production-mode" \
    ssh_password_hash='$some-password-for-ssh-for-production-mode' \
    ssh_public_key_path="~/.ssh/id_ed25519_on-premise-gitlab-deployment.pub" \
    ssh_private_key_path="~/.ssh/id_ed25519_on-premise-gitlab-deployment" \
    vm_username="some-username-for-vm-for-production-mode" \
    vm_password="some-password-for-vm-for-production-mode"

  vault kv put secret/on-premise-gitlab-deployment/infrastructure \
    vault_haproxy_stats_pass="some-password-for-vault-haproxy-stats-pass-for-production-mode" \
    vault_keepalived_auth_pass="some-password-for-vault-keepalived-auth-pass-for-production-mode"

  vault kv put secret/on-premise-gitlab-deployment/gitlab/databases \
    pg_superuser_password="some-password-for-gitlab-pg-superuser-for-production-mode" \
    pg_replication_password="some-password-for-gitlab-pg-replication-for-production-mode" \
    pg_vrrp_secret="some-password-for-gitlab-pg-vrrp-for-production-mode" \
    redis_requirepass="some-password-for-gitlab-redis-requirepass-for-production-mode" \
    redis_masterauth="some-password-for-gitlab-redis-masterauth-for-production-mode" \
    redis_vrrp_secret="some-password-for-gitlab-redis-vrrp-secret-for-production-mode" \
    minio_root_password="some-password-for-gitlab-minio-root-password-for-production-mode" \
    minio_vrrp_secret="some-password-for-gitlab-minio-vrrp-secret-for-production-mode" \
    minio_root_user="some-username-for-gitlab-minio-root-user-for-production-mode"

  vault kv put secret/on-premise-gitlab-deployment/harbor/databases \
    pg_superuser_password="some-password-for-harbor-pg-superuser-for-production-mode" \
    pg_replication_password="some-password-for-harbor-pg-replication-for-production-mode" \
    pg_vrrp_secret="some-password-for-harbor-pg-vrrp-for-production-mode" \
    redis_requirepass="some-password-for-harbor-redis-requirepass-for-production-mode" \
    redis_masterauth="some-password-for-harbor-redis-masterauth-for-production-mode" \
    redis_vrrp_secret="some-password-for-harbor-redis-vrrp-secret-for-production-mode" \
    minio_root_password="some-password-for-harbor-minio-root-password-for-production-mode" \
    minio_vrrp_secret="some-password-for-harbor-minio-vrrp-secret-for-production-mode" \
    minio_root_user="some-username-for-harbor-minio-root-user-for-production-mode"

  vault kv put secret/on-premise-gitlab-deployment/harbor/app \
    harbor_admin_password="some-password-for-harbor-admin-password-for-production-mode" \
    harbor_pg_db_password="some-password-for-harbor-pg-db-password-for-production-mode"
  ```
- Note 0. Security Notice: Clearing the shell history after executing `vault kv put` commands is strongly recommended to mitigate sensitive data exposure.
- Note 1. Secret Retrieval

  - Use the following command to retrieve credentials from Vault. For example, to fetch the PostgreSQL superuser password:

    ```shell
    export VAULT_ADDR="https://172.16.136.250:443"
    export VAULT_CACERT="${PWD}/terraform/layers/10-vault-core/tls/vault-ca.crt"
    export VAULT_TOKEN=$(jq -r .root_token ansible/fetched/vault/vault_init_output.json)

    vault kv get -field=pg_superuser_password secret/on-premise-gitlab-deployment/databases
    ```

  - To prevent exposing secrets in the shell output, subshells can be utilized:

    ```shell
    export PG_SUPERUSER_PASSWORD=$(vault kv get -field=pg_superuser_password secret/on-premise-gitlab-deployment/databases)
    ```

  - For a more streamlined execution, use a single-line command:

    ```shell
    export PG_SUPERUSER_PASSWORD=$(VAULT_ADDR="https://172.16.136.250:443" VAULT_CACERT="${PWD}/terraform/layers/10-vault-core/tls/vault-ca.crt" VAULT_TOKEN=$(jq -r .root_token ansible/fetched/vault/vault_init_output.json) vault kv get -field=pg_superuser_password secret/on-premise-gitlab-deployment/databases)
    ```

  The same procedure applies to the Development Vault and other secrets.
- Note 2: For reference only, since the passwords are already combined into the single-line command above.

  - `ssh_username` and `ssh_password` refer to the credentials used for virtual machine access.
  - `ssh_password_hash` is the hashed value required by cloud-init for automated installation, which must be derived from the `ssh_password` string. For instance, if the password is `HelloWorld@k8s`, generate the hash using the following command:

    ```shell
    printf '%s' "HelloWorld@k8s" | openssl passwd -6 -stdin
    ```

    If a "command not found" error occurs for `openssl`, ensure the `openssl` package is installed.

  - `ssh_public_key_path` should point to the filename of the previously generated public key (typically in `*.pub` format).
- Note 3: SSH identity variables (`ssh_*`) are primarily utilized by Packer for one-time provisioning, whereas VM identity variables (`vm_*`) are used by Terraform during VM cloning. Both may be set to identical values. While it is possible to configure unique credentials for different VMs by modifying the `ansible_runner.vm_credentials` variable and implementing `for_each` loops in the HCL code, this approach introduces unnecessary complexity. Unless specific requirements dictate otherwise, maintaining identical values for the SSH and VM identity variables is recommended.
- In this repo, Vault must be unsealed after every startup. The following options are available:

  - Option `3` in `entry.sh` unseals the Development Vault. This operation is performed by the `vault_dev_unseal_handler()` shell function.
  - Option `4` in `entry.sh` unseals the Production Vault. This is managed via the `90-operation-vault-unseal.yaml` Ansible playbook.

  Alternatively, the containerized approach described in sections B.1 and B.2 provides a more streamlined workflow.
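For illustration, the unseal flow that option `4` automates looks roughly like the following. It assumes `vault_init_output.json` has the standard `vault operator init -format=json` shape (field `unseal_keys_b64`) and the default 3-of-5 key threshold; both are assumptions, not guarantees about this repo's playbook:

```shell
# Sketch of a manual unseal using keys fetched by Ansible during initialization.
INIT_JSON="ansible/fetched/vault/vault_init_output.json"

unseal_keys() {
  # print the first three base64 unseal keys (default threshold is 3 of 5)
  jq -r '.unseal_keys_b64[0:3][]' "$1"
}

if [ -r "$INIT_JSON" ]; then
  unseal_keys "$INIT_JSON" | while read -r key; do
    vault operator unseal "$key"
  done
  vault status
else
  echo "init output not found at $INIT_JSON"
fi
```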
Note
These variable files define the configuration for cluster provisioning.
- Initialize the required `.tfvars` files by copying the examples for each layer:

  ```shell
  for f in terraform/layers/*/terraform.tfvars.example; do cp -n "$f" "${f%.example}"; done
  ```
- For High Availability (HA) configurations:

  - Services such as Vault (Production mode), Patroni (including etcd), Sentinel, MicroK8s (Harbor), and Kubeadm Master (GitLab) must follow an odd-node configuration (`n % 2 != 0`).
  - MinIO Distributed requires a node count divisible by four (`n % 4 == 0`).

- Static IPs assigned during node provisioning must align with the designated host-only network subnet.
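These node-count rules can be sanity-checked before committing a `.tfvars` change. The helpers below are illustrative only and are not part of `entry.sh`:

```shell
# Illustrative validators encoding the HA node-count rules.
quorum_ok() { [ $(( $1 % 2 )) -eq 1 ]; }                     # odd-node quorum services
minio_ok()  { [ "$1" -gt 0 ] && [ $(( $1 % 4 )) -eq 0 ]; }   # distributed MinIO sets

quorum_ok 3 && echo "3 Patroni nodes: quorum OK"
minio_ok 4 && echo "4 MinIO nodes: erasure set OK"
quorum_ok 2 || echo "2 etcd nodes: split-brain risk, rejected"
```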
- This project utilizes Ubuntu Server 24.04.3 LTS (Noble) as the default Guest OS.

  - The latest release is available at: https://cdimage.ubuntu.com/ubuntu/releases/24.04/release/.
  - The specific version tested for this project is available at: https://old-releases.ubuntu.com/releases/noble/.
  - Verify the checksum after downloading:
    - Latest Noble: https://releases.ubuntu.com/noble/SHA256SUMS
    - Old-release Noble: https://old-releases.ubuntu.com/releases/noble/SHA256SUMS

  Support for additional Linux distributions, such as Fedora 43 or RHEL 10, is planned for future updates.
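The checksum verification can be scripted as follows. The ISO filename is an assumption based on the 24.04.3 release; adjust it to the image actually downloaded:

```shell
# Verify a downloaded image against the SHA256SUMS list from the same directory.
verify_iso() {
  # $1 = image filename, $2 = checksum list containing its entry
  grep "$1" "$2" | sha256sum -c -
}

ISO="ubuntu-24.04.3-live-server-amd64.iso"   # assumed filename
if [ -r "$ISO" ] && [ -r SHA256SUMS ]; then
  verify_iso "$ISO" SHA256SUMS
else
  echo "download $ISO and SHA256SUMS into the current directory first"
fi
```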
- Independent Testing and Development:

  - Use menu option `9) Build Packer Base Image` to generate a base image.

  - Use menu option `10) Provision Terraform Layer` to test or redeploy specific layers (e.g., Harbor, Postgres).

    Note: When rebuilding Harbor in Layer 50, a `module.harbor_system_config.harbor_garbage_collection.gc` "Resource not found" error may occur. This is resolved by removing `terraform.tfstate` and `terraform.tfstate.backup` from `terraform/layers/50-harbor-platform` before re-executing `terraform apply`.

  - To test Ansible playbooks on existing hosts without reprovisioning virtual machines, use option `11) Rebuild Layer via Ansible`.
- Resource Cleanup:

  - `14) Purge Specific Terraform Layer`: Destroys a specific layer's virtual machines, associated libvirt resources (networks, storage pools), and its Terraform state file. This allows a clean reprovisioning of that specific layer.
  - `15) Purge All Libvirt Resources`: Clears virtualization resources while maintaining the project state. This executes `libvirt_resource_purger "all"`, which deletes all guest VMs, networks, and storage pools created by this project, while preserving Packer images and Terraform local state files.
  - `16) Purge All Packer and Terraform Resources`: Performs a complete cleanup of all artifacts. This deletes all Packer output images and all Terraform local state files, resetting the project environment to a pristine state.
Note
For local management of a cloned repository, this step can be automated by selecting `90-github-meta` via option `10) Provision Terraform Layer`. The following instructions detail the imperative manual procedure for reference:
- Inject the GitHub token from Vault using command substitution. Execute this from the project root so that `${PWD}` resolves to the Vault credential directory:

  ```shell
  export GITHUB_TOKEN=$(VAULT_ADDR="https://127.0.0.1:8200" VAULT_CACERT="${PWD}/vault/tls/ca.pem" VAULT_TOKEN=$(cat ${PWD}/vault/keys/root-token.txt) vault kv get -field=github_pat secret/on-premise-gitlab-deployment/variables)
  ```
- Existing repositories must be imported into the Terraform state before the initial execution of the governance layer:

  ```shell
  cd terraform/layers/90-github-meta
  ```
- Initialization and Import:

  - Scenario A (Existing Repository): When managing an existing repository (such as this project), the import operation is mandatory.
  - Scenario B (New Repository): When creating a new repository from scratch, the import step can be skipped.

  ```shell
  terraform init
  terraform import github_repository.this on-premise-gitlab-deployment
  ```
- Apply Ruleset: Executing `terraform plan` to preview changes before applying is recommended.

  ```shell
  terraform apply -auto-approve
  ```

  The output should look similar to the following:

  ```
  Apply complete! Resources: x added, y changed, z destroyed.

  Outputs:

  repository_ssh_url = "git@github.com:username/on-premise-gitlab-deployment.git"
  ruleset_id = <a-numeric-id>
  ```
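Before the first apply, a quick local check confirms that the `export GITHUB_TOKEN=...` step actually populated the environment; an empty token otherwise surfaces later as an opaque 401 from the GitHub provider. This sketch is purely local and makes no API call (the function name is illustrative, not part of this repo):

```shell
# check_token: fail early if the Vault lookup returned an empty string.
check_token() {
  if [ -z "${GITHUB_TOKEN:-}" ]; then
    echo "GITHUB_TOKEN is empty; re-run the export step above" >&2
    return 1
  fi
  echo "GITHUB_TOKEN is set (${#GITHUB_TOKEN} characters)"
}
check_token || true
```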
Importing service certificates into the host trust store enables secure access to the following services without triggering browser security warnings:
- Prod Vault: `https://vault.iac.local`
- Harbor: `https://harbor.iac.local`
- Harbor MinIO Console: `https://minio.harbor.iac.local`
- GitLab: `https://gitlab.iac.local`
- GitLab MinIO Console: `https://minio.gitlab.iac.local`
Complete the following configuration steps in sequence:
- Configure DNS resolution by appending the following entries to the host's `/etc/hosts` file. These values must align with the actual static IPs provisioned by Terraform:

  ```
  172.16.134.250 gitlab.iac.local
  172.16.135.250 harbor.iac.local notary.harbor.iac.local
  172.16.136.250 vault.iac.local
  172.16.139.250 minio.harbor.iac.local
  172.16.142.250 minio.gitlab.iac.local
  ```
- Establish host-level trust (Infrastructure & Service CAs). Since the `tls/` directory is not tracked by git, the Service Root CA must be retrieved from the live Vault server before it can be imported. Use `curl` to fetch the public key of the Service CA directly from the Vault PKI engine; `-k` is required here because the trust chain is not yet established. Set the Vault address (VIP) and download the Service CA to the local tls directory:

  ```shell
  export VAULT_ADDR="https://172.16.136.250:443"
  curl -k $VAULT_ADDR/v1/pki/prod/ca/pem -o terraform/layers/10-vault-core/tls/vault-pki-ca.crt
  ```
- Import BOTH certificates into the system trust store. Two CA files now exist in `terraform/layers/10-vault-core/tls/`:

  - `vault-ca.crt`: The Infrastructure CA (generated locally by Terraform).
  - `vault-pki-ca.crt`: The Service CA (downloaded from the Vault API).
Execute the import commands based on your OS:
- RHEL / CentOS / Fedora:

  ```shell
  # 1. Copy both CAs to the anchors directory
  sudo cp terraform/layers/10-vault-core/tls/vault-ca.crt /etc/pki/ca-trust/source/anchors/
  sudo cp terraform/layers/10-vault-core/tls/vault-pki-ca.crt /etc/pki/ca-trust/source/anchors/
  # 2. Update the trust store
  sudo update-ca-trust
  ```
- Ubuntu / Debian:

  ```shell
  # 1. Copy both CAs to the shared certificates directory
  sudo cp terraform/layers/10-vault-core/tls/vault-ca.crt /usr/local/share/ca-certificates/vault-ca.crt
  sudo cp terraform/layers/10-vault-core/tls/vault-pki-ca.crt /usr/local/share/ca-certificates/vault-pki-ca.crt
  # 2. Update the certificates
  sudo update-ca-certificates
  ```
- Verify the trust store configuration by testing connectivity to MinIO. This confirms that the host trusts the Service CA:

  ```shell
  curl -I https://minio.harbor.iac.local:9000/minio/health/live
  ```

  An `HTTP/1.1 200 OK` response confirms that the trust store is correctly configured.
- Verify the complete certificate chain by accessing the Harbor interface:

  ```shell
  curl -vI https://harbor.iac.local
  ```

  If the output displays `SSL certificate verify ok` and `HTTP/2 200`, the full PKI chain (Vault issuance, cert-manager signing, Ingress deployment, and host-level trust) is successfully established.
This repo leverages Packer, Terraform, and Ansible to implement an automated pipeline. Adhering to immutable infrastructure principles, it automates the entire lifecycle, from VM image creation to the provisioning of a complete Kubernetes cluster.
- Core Bootstrap Workflow: The Development Vault centralizes initial secrets management, followed by the provisioning of the Production Vault.

  ```mermaid
  sequenceDiagram
      autonumber
      actor User
      participant Entry as entry.sh
      participant DevVault as Dev Vault<br>(Local)
      participant TF as Terraform<br>(Layer 10)
      participant Libvirt
      participant Ansible
      participant ProdVault as Prod Vault<br>(Layer 10)

      %% Step 1: Bootstrap
      Note over User, DevVault: [Bootstrap Phase]
      User->>Entry: [DEV] Initialize Dev Vault
      Entry->>DevVault: Init & Unseal
      Entry->>DevVault: Enable KV Engine (secret/)
      User->>DevVault: Write Initial Secrets (SSH Keys, Root Pass)

      %% Step 2: Infrastructure
      Note over User, ProdVault: [Layer 10: Infrastructure]
      User->>Entry: Provision Layer 10
      Entry->>TF: Apply (Stage 1)
      TF->>DevVault: Read SSH Keys/Creds
      TF->>Libvirt: Create Vault VMs (Active/Standby)
      TF->>Ansible: Trigger Provisioning
      Ansible->>ProdVault: Install Vault Binary & Config

      %% Step 3: Operation
      Note over User, ProdVault: [Layer 10: Operation]
      User->>Entry: [PROD] Unseal Production Vault
      Entry->>Ansible: Run Playbook (90-operation-unseal)
      Ansible->>ProdVault: Init (if new) & Unseal
      Ansible-->>Entry: Return Root Token (Saved to Artifacts)

      %% Step 4: Configuration
      Note over User, ProdVault: [Layer 10: Configuration]
      Entry->>TF: Apply (Stage 2 - Vault Provider)
      TF->>ProdVault: Enable PKI Engine (Root CA)
      TF->>ProdVault: Configure Roles (postgres, redis, minio)
      TF->>ProdVault: Enable AppRole Auth
  ```
- Data Services and PKI: Provisions data services through automated pipelines. MinIO serves as the representative model for these workflows, which follow the same architectural patterns applied to PostgreSQL and Redis.

  ```mermaid
  sequenceDiagram
      autonumber
      actor User
      participant TF as Terraform<br>(Layer 20)
      participant ProdVault as Prod Vault<br>(Layer 10)
      participant Libvirt
      participant Ansible
      participant Agent as Vault Agent<br>(On Guest)
      participant Service as MinIO Service

      Note over User, Service: [Layer 20: Provisioning MinIO]

      %% Terraform Phase
      User->>TF: Apply Layer 20 (MinIO)
      TF->>ProdVault: 1. Create AppRole 'harbor-minio'
      ProdVault-->>TF: Return RoleID & SecretID
      TF->>Libvirt: 2. Create MinIO VMs & LBs

      %% Ansible Phase
      TF->>Ansible: 3. Trigger Playbook (Pass AppRole Creds)
      Ansible->>Agent: 3a. Install Vault Agent
      Ansible->>Agent: 3b. Write RoleID/SecretID to /etc/vault.d/approle/
      Ansible->>Agent: 3c. Configure Agent Templates (public.crt, private.key)
      Ansible->>Agent: 3d. Start Vault Agent Service

      %% Runtime Phase
      Agent->>ProdVault: 4. Auth (AppRole Login)
      ProdVault-->>Agent: Return Client Token
      Agent->>ProdVault: 5. Request Cert (pki/prod/issue/minio-role)
      ProdVault-->>Agent: Return Signed Cert & Key
      Agent->>Service: 6. Render Certs to /etc/minio/certs/
      Agent->>Service: 7. Restart/Reload MinIO Service
      Service->>Service: 8. Start with TLS (HTTPS)

      %% Client Config
      Ansible->>Service: 9. Trust CA & Configure 'mc' Client
  ```
The cluster configurations in this project draw upon the following resources:
Note
Procedures derived directly from official documentation are omitted from the list below.
- Wilson, B. (2025). How To Setup Kubernetes Cluster Using Kubeadm. devopscube.
- Sangave, A. (2025). How to Setup HashiCorp Vault HA Cluster with Integrated Storage (Raft). Velotio Tech Blog.
- Gathima, D. (2025). Building a Highly Available PostgreSQL Cluster with Patroni, etcd, and HAProxy. Medium.
- Türkmen, D. (2025). Redis Cluster Provisioning — Fully Automated with Ansible. Medium.
(To be continued...)