
chore(deps): update terraform libvirt to v0.9.3 #125

Open
renovate[bot] wants to merge 1 commit into main from
renovate/libvirt-0.x

Conversation


@renovate renovate bot commented Jun 30, 2021

This PR contains the following updates:

  • Package: libvirt (source)
  • Type: required_provider
  • Update: minor
  • Change: 0.6.2 → 0.9.3

Release Notes

dmacvicar/terraform-provider-libvirt (libvirt)

v0.9.3

Compare Source

What's Changed

New Contributors

And thanks for those unmerged PRs @​SkinGad @​yannlambret @​nicholas-rees

Full Changelog: dmacvicar/terraform-provider-libvirt@v0.9.2...v0.9.3

v0.9.2

Compare Source

Full Changelog: dmacvicar/terraform-provider-libvirt@v0.9.1...v0.9.2

Thanks to @​BohdanTkachenko for the feedback, reports and PR suggestions.

v0.9.1

Compare Source

Bugfixes

  • Domains are now undefined with flags VIR_DOMAIN_UNDEFINE_NVRAM and VIR_DOMAIN_UNDEFINE_TPM. These defaults may change in the future and be part of a domain block like create and delete. (#​1203 )
  • Fix SIGSEGV when connecting to a hypervisor over qemu+ssh. Fixes #​1210 (#​1211)
  • Misc documentation fixes #​1209, #​1210

Features

Support for full libvirt API (XML) (#​1208 )

The provider now supports the whole libvirt API 🥳 (that is, the parts supported by libvirtxml), thanks to a code generation engine that generates all the terraform glue for the schemas and conversions.

For now, the usual resources (domain, network, volume, pool) are included, but this opens the door to handle other resources (secrets, etc) with little effort.

Migration Guide: 0.9.0 → v0.9.1

⚠️ As the schema is now generated, the documentation is injected into the code generation. Since there is no machine-readable documentation for libvirt XML, we generated a set of documentation metadata using AI. This process can be improved over time.

Due to the introduction of the generator and some bugs in the 0.9.0 schema, we had to make some changes to the schema.

This document explains how to move Terraform configurations from provider v0.9.0 (the last manual schema) to the current HEAD that uses the libvirt-schema code generator. It only covers resources and attributes that existed in 0.9.0: domains, networks, storage pools, and storage volumes. Anything new that HEAD exposes can simply be added following the generated schema documentation.

What Changed Globally

  1. Attr names now mirror libvirt XML – the generator emits snake_case names derived from the XML schema (e.g., accessmode → access_mode, portgroup → port_group). Set exactly the fields you care about; anything left null stays absent in the XML.
  2. Value/unit pairs are explicit – whenever libvirt exposes a value with a unit attribute the provider now has two attributes (memory + memory_unit, capacity + capacity_unit, etc.). Leaving the unit unset lets libvirt use its default.
  3. Presence/"yes"/"no" semantics follow libvirt – booleans that previously toggled simple structs may now expect yes/no strings when libvirt models them as attributes (e.g. os.loader_readonly). True presence booleans (like features.acpi) still use Terraform bools.
  4. Nested objects match the XML tree exactly – device sources, interfaces, backing stores, etc. now use the full nested structure. Plan to touch every place where v0.9 flattened things like source.pool or filesystem.source.
  5. Metadata is structured – string blobs became metadata = { xml = <<EOF ... EOF } so we can extend later without breaking state.
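A minimal sketch combining points 1 to 3 above (attribute names as listed in this guide; the values are illustrative):

```hcl
resource "libvirt_domain" "example" {
  # Value/unit pair: leaving memory_unit unset lets libvirt use its default.
  memory      = 2
  memory_unit = "GiB"

  os = {
    # XML-attribute boolean: a "yes"/"no" string, not a Terraform bool.
    loader_readonly = "yes"
  }

  features = {
    # Presence boolean: still a Terraform bool.
    acpi = true
  }
}
```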

Domain Resource

Top-level attribute mapping

  • unit → memory_unit: same semantics, renamed so every value/unit pair is consistent.
  • max_memory → maximum_memory: value only; use maximum_memory_unit if you previously used a non-default unit.
  • max_memory_slots → maximum_memory_slots: same semantics.
  • current_memory → current_memory + optional current_memory_unit: the value stays the same; set the unit explicitly if you relied on non-default units.
  • metadata (string) → metadata = { xml = <<EOF ... EOF }: wrap your XML in the nested object.
  • os.arch → os.type_arch: the type_* prefix mirrors <os><type arch="..."/>.
  • os.machine → os.type_machine: same rationale as above.
  • os.kernel_args → os.cmdline: the field name matches the XML <cmdline> element.
  • os.loader_path → os.loader: 0.9 kept the loader path in a separate attribute; now it is the element’s value (see “value + attributes” below).
  • os.loader_readonly (bool) → os.loader_readonly (string): accepts "yes"/"no" because the XML attribute is a string.
  • os.nvram.* → os.nv_ram = { file, template, format = { type = ... } }: rename plus richer structure.
  • devices.filesystems[*].accessmode → access_mode: all camelCase names were converted to snake_case.
  • devices.filesystems[*].readonly → read_only: same semantics.
  • devices.interfaces[*].source.portgroup → source = { network = { port_group = ... } }: see below for the full source mapping.
  • devices.rngs[*].device → backend = { random = "/dev/urandom" } or backend = { egd = { ... } }: backends are now modeled exactly like the XML.

OS block specifics
  • os.boot_devices is still a list, but if you previously stored strings you now provide objects: boot_devices = [{ dev = "hd" }, { dev = "network" }].
  • Loader/read-only/secure/stateless flags now accept the literal XML strings ("yes"/"no"). Wrap them in tostring() if you had boolean locals.
  • NVRAM becomes os = { nv_ram = { file = "/var/lib/libvirt/nvram.bin", template = "/usr/share/OVMF/OVMF_VARS.fd", format = { type = "raw" } } }.
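If you kept loader flags in boolean locals, one way to produce the literal strings is a conditional (a sketch; the local name is hypothetical):

```hcl
locals {
  loader_ro = true # hypothetical boolean local carried over from a 0.9 config
}

# HEAD: the XML attribute expects the literal string "yes"/"no"
os = {
  loader          = "/usr/share/OVMF/OVMF_CODE.fd"
  loader_readonly = local.loader_ro ? "yes" : "no"
}
```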
Loader value + attributes

<loader> is a “value + attributes” element. The path is the value (os.loader), and every XML attribute becomes a sibling attribute:

os = {
  loader          = "/usr/share/OVMF/OVMF_CODE.fd"
  loader_type     = "pflash"
  loader_readonly = "yes"
  loader_secure   = "no"
  loader_format   = "raw"
}

Leave the attribute unset to let libvirt pick its default (the provider preserves user intent for optional attributes).

Disks and filesystems

0.9 flattened every disk source. HEAD requires you to pick the XML variant explicitly:

# v0.9
source = {
  pool   = libvirt_pool.test.name
  volume = libvirt_volume.test.name
}

# HEAD
source = {
  volume = {
    pool   = libvirt_pool.test.name
    volume = libvirt_volume.test.name
  }
}

# File-based disk (previously source.file)
source = { file = "/var/lib/libvirt/images/disk.qcow2" }

# Block device
source = { block = "/dev/sdb" }

Filesystems follow the same pattern. Replace the old flat fields with nested objects:

# v0.9
filesystems = [{
  source     = "/exports/share"
  target     = "shared"
  accessmode = "mapped"
  readonly   = true
}]

# HEAD
filesystems = [{
  source = { mount = { dir = "/exports/share" } }
  target = { dir = "shared" }
  access_mode = "mapped"
  read_only   = true
}]

Variant notation

Every <source> element with mutually exclusive children (files, volumes, blocks, etc.) becomes an object whose attributes map 1:1 to the libvirt XML children. Only set the branch you need:

# Filesystem backed by a block device
source = { block = { dev = "/dev/vdb" } }

# RAM-backed filesystem with extra attributes
source = { ram = { usage = 1024, unit = "MiB" } }

Even if a variant has additional attributes in XML, the generated struct exposes them in that nested object (e.g., ram = { usage = 1024, unit = "MiB" }). This pattern is consistent across disks, filesystems, host devices, etc.

Interfaces

source.network, source.bridge, and source.dev are now mutually exclusive nested objects. Example conversions:

# Network-backed NIC (v0.9)
source = { network = "default" }

# HEAD
source = { network = { network = "default" } }

# Direct/Macvtap NIC (v0.9)
source = { dev = "eth0", mode = "bridge" }

# HEAD
source = { direct = { dev = "eth0", mode = "bridge" } }

portgroup became port_group; wait_for_ip stays the same helper object.

RNG / TPM / other devices
  • RNG devices now mirror <backend>. Use backend = { random = "/dev/urandom" } for /dev/random or backend = { egd = { source = { mode = "connect", host = "unix", service = "..." } } } for EGD sockets.
  • TPM backends are nested (backend = { emulator = { path = "/var/lib/swtpm/sock" } }). Map your previous backend_type to one of the backend objects: emulator, passthrough, or external.
  • Graphics, consoles, serials, and video devices already used nested objects in 0.9; the only change is snake_case attribute names (auto_port, websocket, etc.).
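For instance, a TPM conversion along the lines above might look like this (a sketch based on the mapping described here; the swtpm socket path is illustrative):

```hcl
# v0.9: a flat backend_type attribute selected the backend
tpm = {
  backend_type = "emulator"
}

# HEAD: pick one of the nested backend objects (emulator, passthrough, external)
tpm = {
  backend = {
    emulator = { path = "/var/lib/swtpm/sock" }
  }
}
```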
Metadata

0.9 stored raw XML as a string. Now wrap it:

metadata = {
  xml = <<EOFXML
<libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
  <libosinfo:os id="http://libosinfo.org/linux/2024" />
</libosinfo:libosinfo>
EOFXML
}

Storage Volume Resource

Key differences:

  • format (string) → target = { format = { type = "qcow2" } }: the format lives under the target block now.
  • permissions.* → target.permissions.*: same keys, just explicitly under target.
  • backing_store.format → backing_store = { format = { type = "qcow2" } }: mirrors the libvirt <format> element.
  • capacity → capacity + optional capacity_unit: leave capacity_unit unset to keep KiB.
  • allocation → allocation + allocation_unit (read-only): useful when libvirt reports GiB/MiB units.
  • path (computed): still path, but it mirrors target.path; you no longer set this manually. Use pool target paths to control locations.

Everything else (name, pool, create/content) behaves exactly like 0.9. Plan/apply will touch terraform state automatically once you update the config.
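For example, a qcow2 volume converted per the mapping above (a sketch; names and sizes are illustrative):

```hcl
# v0.9
resource "libvirt_volume" "disk" {
  name   = "disk.qcow2"
  pool   = "default"
  format = "qcow2"
}

# HEAD
resource "libvirt_volume" "disk" {
  name = "disk.qcow2"
  pool = "default"
  target = {
    format = { type = "qcow2" }
  }
  capacity = 10485760 # capacity_unit left unset: stays KiB per the note above
}
```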

Storage Pool Resource

The generated schema simply fills in additional optional sub-objects (source.host, source.auth, features, etc.). All attributes that existed in 0.9 keep their names and shapes:

  • target = { path = "/var/lib/libvirt/pools" } works unchanged.
  • target.permissions.* still take strings, not integers.
  • source.device = [{ path = "/dev/sdb" }] keeps the same structure.

Unless you opt into the new nested fields you do not need to change existing pool configurations.

Network Resource

  • mode → forward = { mode = "nat" }: the forwarding mode now lives under the <forward> element.
  • bridge (string) → bridge = { name = "virbr0" }: bridge attributes are grouped together so libvirt can add more knobs.
  • autostart: still autostart; works the same.
  • ips: still ips, but nested attribute names are now snake_case (local_ptr, dhcp.hosts, etc.); the structures are the same, you may only need to rename portgroup → port_group inside DHCP hosts.

Example conversion:

# v0.9
mode   = "nat"
bridge = "virbr1"

# HEAD
forward = { mode = "nat" }
bridge  = { name = "virbr1" }

DHCP ranges/hosts did not change other than automatic snake_case normalisation.

Contributors

v0.9.0

Compare Source

⚠️ ⚠️ ⚠️ ⚠️ This version of the provider breaks compatibility ⚠️ ⚠️ ⚠️ ⚠️

Background

When this provider was developed, the idea was to mimic a cloud experience on top of libvirt. Because of this, the schema was kept as flat as possible, features were abstracted, and some features, like disks from remote sources, were added as a convenience.

The initial users of the provider were usually makers of infrastructure software who needed complex network setups. A lot of code was contributed, which added complexity outside of the initial design.

So for a long time I wanted to restart the provider under new design principles:

  • HCL maps almost 1:1 to libvirt XML, and therefore almost any libvirt feature can be supported
  • Most of the validation work is left to libvirt, which is already doing it
  • No abstractions or extra features; where they do exist, they should be designed to be quite independent
  • More consistency: most libvirt APIs can be separated into lifecycle operations (create, destroy), which map quite well to terraform resources, and query APIs, which map well to data sources. This was not the case with, for example, how we implemented querying IP addresses
  • No unnecessary defensive code: for example, checking that a volume exists when referenced is a problem that terraform solves if the ID is interpolated, and that libvirt solves with its own checks if the volume is referenced by a hardcoded string

I knew 1.0 would never come in the current form.

The new provider

The new provider is based on the new plugin framework. This gives us some room for better diagnostics and better plans.

It makes definitions more verbose, but it also means we can implement any libvirt feature. Defaults work as long as they are defaults in libvirt.

Migration plan

You can find the legacy provider in the v0.8 branch. New 0.8.x releases can be made to add bugfixes, so people who rely on it have a path forward. I will likely not maintain much of 0.8.x, but I expect many people will help here, as they do today with various PRs.

There is no automated way to migrate the HCL of previous provider versions, but given that the new schema definition is documented, which was not the case with the previous schema, it should be much easier to drive LLMs to perform a conversion.

You should check the documentation and README, which will give you an idea of the main differences and equivalences, but here is an example of the new schema to get an idea:

terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

# Base Alpine Linux cloud image stored in the default pool.
resource "libvirt_volume" "alpine_base" {
  name   = "alpine-3.22-base.qcow2"
  pool   = "default"
  format = "qcow2"

  create = {
    content = {
      url = "https://dl-cdn.alpinelinux.org/alpine/v3.22/releases/cloud/generic_alpine-3.22.2-x86_64-bios-cloudinit-r0.qcow2"
    }
  }
}

# Writable copy-on-write layer for the VM.
resource "libvirt_volume" "alpine_disk" {
  name     = "alpine-vm.qcow2"
  pool     = "default"
  format   = "qcow2"
  capacity = 2147483648

  backing_store = {
    path   = libvirt_volume.alpine_base.path
    format = "qcow2"
  }
}

# Cloud-init seed ISO.
resource "libvirt_cloudinit_disk" "alpine_seed" {
  name = "alpine-cloudinit"

  user_data = <<-EOF
    #cloud-config
    chpasswd:
      list: |
        root:password
      expire: false

    ssh_pwauth: true

    packages:
      - openssh-server
    timezone: UTC
  EOF

  meta_data = <<-EOF
    instance-id: alpine-001
    local-hostname: alpine-vm
  EOF

  network_config = <<-EOF
    version: 2
    ethernets:
      eth0:
        dhcp4: true
  EOF
}

# Upload the cloud-init ISO into the pool.
resource "libvirt_volume" "alpine_seed_volume" {
  name = "alpine-cloudinit.iso"
  pool = "default"

  create = {
    content = {
      url = libvirt_cloudinit_disk.alpine_seed.path
    }
  }
}

# Virtual machine definition.
resource "libvirt_domain" "alpine" {
  name   = "alpine-vm"
  memory = 1048576
  vcpu   = 1

  os = {
    type    = "hvm"
    arch    = "x86_64"
    machine = "q35"
  }

  features = {
    acpi = true
  }

  devices = {
    disks = [
      {
        source = {
          pool   = libvirt_volume.alpine_disk.pool
          volume = libvirt_volume.alpine_disk.name
        }
        target = {
          dev = "vda"
          bus = "virtio"
        }
      },
      {
        device = "cdrom"
        source = {
          pool   = libvirt_volume.alpine_seed_volume.pool
          volume = libvirt_volume.alpine_seed_volume.name
        }
        target = {
          dev = "sdb"
          bus = "sata"
        }
      }
    ]

    interfaces = [
      {
        type  = "network"
        model = "virtio"  # or "e1000", which some guests handle better than virtio
        source = {
          network = "default"
        }
        # TODO: wait_for_ip not implemented yet (Phase 2)
        # This will wait during creation until the interface gets an IP
        wait_for_ip = {
          timeout = 300    # seconds, default 300
          source  = "any"  # "lease" (DHCP), "agent" (qemu-guest-agent), or "any" (try both)
        }
      }
    ]

    graphics = {
      vnc = {
        autoport = "yes"
        listen   = "127.0.0.1"
      }
    }
  }

  running = true
}

# Query the domain's interface addresses

# This data source can be used at any time to retrieve current IP addresses
# without blocking operations like Delete
data "libvirt_domain_interface_addresses" "alpine" {
  domain = libvirt_domain.alpine.name
  source = "lease" # optional: "lease" (DHCP), "agent" (qemu-guest-agent), or "any"
}

# Output all interface information
output "vm_interfaces" {
  description = "All network interfaces with their IP addresses"
  value       = data.libvirt_domain_interface_addresses.alpine.interfaces
}

# Output the first IP address found
output "vm_ip" {
  description = "First IP address of the VM"
  value = (
    length(data.libvirt_domain_interface_addresses.alpine.interfaces) > 0 &&
    length(data.libvirt_domain_interface_addresses.alpine.interfaces[0].addrs) > 0
    ? data.libvirt_domain_interface_addresses.alpine.interfaces[0].addrs[0].addr
    : "No IP address found"
  )
}

# Output all IP addresses across all interfaces
output "vm_all_ips" {
  description = "All IP addresses across all interfaces"
  value = flatten([
    for iface in data.libvirt_domain_interface_addresses.alpine.interfaces : [
      for addr in iface.addrs : addr.addr
    ]
  ])
}

Feedback is appreciated. It will be a long journey for people to port their configurations and iron out all the issues, but it is clear this is the path forward.

Docs: https://registry.terraform.io/providers/dmacvicar/libvirt/latest/docs

v0.8.3

Compare Source

Full Changelog: dmacvicar/terraform-provider-libvirt@v0.8.2...v0.8.3

v0.8.2

Compare Source

What's Changed

Content sniffing
  • The provider no longer detects the qcow2 image format using content sniffing for remote HTTP images. If you leave the format blank, it is set based on the file extension. This allows using HTTP servers without HTTP Range support.
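A sketch with the legacy libvirt_volume remote-image attribute (the URL is a placeholder):

```hcl
resource "libvirt_volume" "remote_image" {
  name = "image.qcow2"
  pool = "default"
  # format left unset: inferred from the ".qcow2" extension,
  # so no HTTP Range requests are needed for sniffing.
  source = "https://example.com/images/image.qcow2"
}
```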
Upgrade dependencies
Bug fixes

New Contributors

Full Changelog: dmacvicar/terraform-provider-libvirt@v0.8.1...v0.8.2

v0.8.1

Compare Source

What's Changed

This release is mostly about fixes for the SSH transport, which was released with many bugs in v0.8.0.

Experimental LVM storage pool support

There is a new experimental feature: support for LVM storage pools. I don't use this type of pool myself, so I put together all the contributions and made the code ready for release mostly based on integration tests. Try it and give feedback.

New Contributors

Full Changelog: dmacvicar/terraform-provider-libvirt@v0.8.0...v0.8.1

v0.8.0

Compare Source

What's Changed

Two big features include improved ssh config support (for example for supporting jump hosts) and a new data source for host information.

Breaking changes
  • DNS is enabled by default, like in libvirt. #​1100
  • Wait intervals for polling libvirt are reduced, making everything faster (including testsuite)
Other highlights:
  • Acceptance testsuite is finally fully passing again
  • Many code cleanups
  • Updated golangci-lint
  • Many updated dependencies
  • Mark disk wwn and nvram arguments as computed by @​wfdewith in #​1064
  • Default machine by @​e4t in #​1014
  • Add Combustion resource to use instead of the ignition one by @​cbosdo in #​1068
Community

We activated discussions, so that the community can share useful files, help each other and also get announcements.

Contributors

Thanks to all the community for their contributions and for supporting other users:

Full Changelog: dmacvicar/terraform-provider-libvirt@v0.7.6...v0.8.0

v0.7.6

Compare Source

Features

  • initial ssh config file support (#​933 )

Thanks @​jbeisser 🥳

v0.7.5

Compare Source

Fixes

  • Fix for configuring network when guest agent is not ready (#​1037)
  • Make IP address configuration more robust by not stopping prematurely (#​1048)
  • build with go 1.21

Special thanks to @​rgl , @​pstrzelczak 🙏

v0.7.4

This release was done to fix the expired GPG key (#​1035)

v0.7.2

Compare Source

Fixes

  • upgrade ignition dependency
  • port to the new libvirt-go dialer constructor
  • make 'option_value' for dnsmasq optional (#​960)
  • Fix malformed connection remote name when using ssh remote uri (#​1030)
  • Fix test make target to run all tests (#​1034)
  • Update URL to show how to setup cert (#​1007)

Thanks to contributors @​michaelbeaumont @​flat35hd99 @​tiaden @​e4t

v0.7.1

Compare Source

Thanks to contributors: @​omertuc, @​rbbratta

Fixes
  • tls: fix typo, use clientCertSearchPath for clientcert.pem (#​940)
  • Fix IPv6 subnet size regression (#​983)

v0.7.0

Compare Source

Thanks to contributors: @​omertuc, @​MusicDin, @​cfergeau, @​jschoone

Major changes
  • Port to Terraform v2 SDK (#​969). Please see the MR #​969 for details and changes.
    While changes should not break anything, there are semantic differences and different checks and validations performed.

    There is one crash I have seen a few times but did not manage to pin down to something specific. Please report if you see something.

Other fixes
  • SCSI use the sd* prefix and not the vd* prefix (#​964)
  • Update reference to Kubitect project (#​966)
  • Rework NetworkUpdate workaround (#​950)
  • Switch from github.com/libvirt/libvirt-go-xml to libvirt.org/go/libvirtxml
  • Typo in destroy network error msg (#​955)
  • Fix networkRange race condition and global state corruption (#​945)

v0.6.14

Compare Source

This release adds support for SHA2 signatures with RSA keys in servers with SHA1 disabled (RFC8332).

It should fix the issues seen in issues #​916 and #​886.

For this, we are using a fork of x/crypto with two patches:

v0.6.13

Compare Source

This release only contains upgrades:

  • build with go 1.17
  • update golang.org/x/crypto (first step to fix #​916 and related bugs)
  • update github.com/digitalocean/go-libvirt

Special thanks to @​davidalger for debugging the ssh problems and providing valuable information.

v0.6.12

Compare Source

This release contains the following fixes:

  • Support TPM devices (#​888)

  • Support specifying websocket port for VNC

  • Fix regression supporting querying qemu-guest-agent for network interfaces (#​873)

  • Fix dead links to XSLT examples (#​912)

  • Fix removal of domains with snapshots or checkpoints (#​899)

  • Support specifying "open" forward mode (#​900)

    "The new forward mode 'open' is just like mode='route', except that no
    firewall rules are added to assure that any traffic does or doesn't
    pass. It is assumed that either they aren't necessary, or they will be
    setup outside the scope of libvirt."

    See: libvirt/libvirt@25e8112

  • Speed up copying images (#​902)

  • Add support for passwords using SSH URIs (#​887)

  • Fix: force new domain if graphics changed

Also:

  • add generated binary under PHONY section for recurring builds to actually happen (#​903)
  • We have enabled golangci-lint for all new commits, and we will slowly fix code retroactively.

Thanks to our contributors:

v0.6.11

Compare Source

This release contains the following fixes:

  • Enhanced ssh transport support (qemu+ssh), including support for ssh agent and the ability to disable host verification (#​870).
    Fixes #​864.
  • Fix cpu.mode block to use a list. Fixes a provider internal validation error.

Thanks:

v0.6.10

Compare Source

This is a preview release of the next major version of terraform-provider-libvirt.

New Features
Terraform Registry
  • The provider is now available in the Terraform Registry and can be automatically installed by Terraform by using provider requirements.
terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
      version = "0.6.9-pre3"
    }
  }
}

provider "libvirt" {
  # Configuration options
}
$ terraform init

This should automatically install the provider.

Single Linux build
  • The Linux build should work on all Linux distributions.

The provider does not link to libvirt anymore. Instead it uses the amazing go-libvirt, which implements the libvirt XDR-based RPC protocol.

Windows and macOS support
  • Because of the above, the provider should work on Windows and macOS.

This release is brought to you by the community. Contributors like @​kskewes and @​MalloZup made this big port possible. Thanks also to the go-libvirt developers who helped getting digitalocean/go-libvirt#138 and digitalocean/go-libvirt#125 merged.

Other fixes
  • Set pool "type" attribute at import time (#​824)
Release Notes
  • There is support for the TLS, SSH and Unix domain sockets transports. They haven't been extensively tested yet. Help is appreciated.
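For example, the SSH transport can be selected through the connection URI (a sketch; the host name is a placeholder):

```hcl
provider "libvirt" {
  # qemu+tls://... and qemu+unix://... select the other transports
  uri = "qemu+ssh://root@host.example.com/system"
}
```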
Changes since last pre-release

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@github-actions

Terraform Format and Style 🖌 success

Terraform Initialization ⚙️ success

Terraform Validation 🤖 Success! The configuration is valid.

Terraform Plan 📖 success

Show Plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_compute_network.vpc will be created
  + resource "google_compute_network" "vpc" {
      + auto_create_subnetworks         = false
      + delete_default_routes_on_create = false
      + gateway_ipv4                    = (known after apply)
      + id                              = (known after apply)
      + mtu                             = (known after apply)
      + name                            = "raspbernetes-vpc"
      + project                         = (known after apply)
      + routing_mode                    = (known after apply)
      + self_link                       = (known after apply)
    }

  # google_compute_subnetwork.subnet will be created
  + resource "google_compute_subnetwork" "subnet" {
      + creation_timestamp         = (known after apply)
      + fingerprint                = (known after apply)
      + gateway_address            = (known after apply)
      + id                         = (known after apply)
      + ip_cidr_range              = "10.10.0.0/24"
      + name                       = "raspbernetes-subnet"
      + network                    = "raspbernetes-vpc"
      + private_ipv6_google_access = (known after apply)
      + project                    = (known after apply)
      + region                     = "us-central1"
      + secondary_ip_range         = (known after apply)
      + self_link                  = (known after apply)
    }

  # google_container_cluster.primary will be created
  + resource "google_container_cluster" "primary" {
      + cluster_ipv4_cidr           = (known after apply)
      + datapath_provider           = (known after apply)
      + default_max_pods_per_node   = (known after apply)
      + enable_binary_authorization = false
      + enable_intranode_visibility = (known after apply)
      + enable_kubernetes_alpha     = false
      + enable_legacy_abac          = false
      + enable_shielded_nodes       = (known after apply)
      + endpoint                    = (known after apply)
      + id                          = (known after apply)
      + initial_node_count          = 1
      + instance_group_urls         = (known after apply)
      + label_fingerprint           = (known after apply)
      + location                    = "us-central1"
      + logging_service             = (known after apply)
      + master_version              = (known after apply)
      + monitoring_service          = (known after apply)
      + name                        = "raspbernetes-gke"
      + network                     = "raspbernetes-vpc"
      + networking_mode             = (known after apply)
      + node_locations              = (known after apply)
      + node_version                = (known after apply)
      + operation                   = (known after apply)
      + private_ipv6_google_access  = (known after apply)
      + project                     = (known after apply)
      + remove_default_node_pool    = true
      + self_link                   = (known after apply)
      + services_ipv4_cidr          = (known after apply)
      + subnetwork                  = "raspbernetes-subnet"
      + tpu_ipv4_cidr_block         = (known after apply)

      + addons_config {
          + cloudrun_config {
              + disabled           = (known after apply)
              + load_balancer_type = (known after apply)
            }

          + horizontal_pod_autoscaling {
              + disabled = (known after apply)
            }

          + http_load_balancing {
              + disabled = (known after apply)
            }

          + network_policy_config {
              + disabled = (known after apply)
            }
        }

      + authenticator_groups_config {
          + security_group = (known after apply)
        }

      + cluster_autoscaling {
          + enabled = (known after apply)

          + auto_provisioning_defaults {
              + oauth_scopes    = (known after apply)
              + service_account = (known after apply)
            }

          + resource_limits {
              + maximum       = (known after apply)
              + minimum       = (known after apply)
              + resource_type = (known after apply)
            }
        }

      + database_encryption {
          + key_name = (known after apply)
          + state    = (known after apply)
        }

      + default_snat_status {
          + disabled = (known after apply)
        }

      + ip_allocation_policy {
          + cluster_ipv4_cidr_block       = (known after apply)
          + cluster_secondary_range_name  = (known after apply)
          + services_ipv4_cidr_block      = (known after apply)
          + services_secondary_range_name = (known after apply)
        }

      + master_auth {
          + client_certificate     = (known after apply)
          + client_key             = (sensitive value)
          + cluster_ca_certificate = (known after apply)

          + client_certificate_config {
              + issue_client_certificate = false
            }
        }

      + network_policy {
          + enabled  = (known after apply)
          + provider = (known after apply)
        }

      + node_config {
          + disk_size_gb      = (known after apply)
          + disk_type         = (known after apply)
          + guest_accelerator = (known after apply)
          + image_type        = (known after apply)
          + labels            = (known after apply)
          + local_ssd_count   = (known after apply)
          + machine_type      = (known after apply)
          + metadata          = (known after apply)
          + min_cpu_platform  = (known after apply)
          + oauth_scopes      = (known after apply)
          + preemptible       = (known after apply)
          + service_account   = (known after apply)
          + tags              = (known after apply)
          + taint             = (known after apply)

          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config {
              + node_metadata = (known after apply)
            }
        }

      + node_pool {
          + initial_node_count  = (known after apply)
          + instance_group_urls = (known after apply)
          + max_pods_per_node   = (known after apply)
          + name                = (known after apply)
          + name_prefix         = (known after apply)
          + node_count          = (known after apply)
          + node_locations      = (known after apply)
          + version             = (known after apply)

          + autoscaling {
              + max_node_count = (known after apply)
              + min_node_count = (known after apply)
            }

          + management {
              + auto_repair  = (known after apply)
              + auto_upgrade = (known after apply)
            }

          + node_config {
              + disk_size_gb      = (known after apply)
              + disk_type         = (known after apply)
              + guest_accelerator = (known after apply)
              + image_type        = (known after apply)
              + labels            = (known after apply)
              + local_ssd_count   = (known after apply)
              + machine_type      = (known after apply)
              + metadata          = (known after apply)
              + min_cpu_platform  = (known after apply)
              + oauth_scopes      = (known after apply)
              + preemptible       = (known after apply)
              + service_account   = (known after apply)
              + tags              = (known after apply)
              + taint             = (known after apply)

              + shielded_instance_config {
                  + enable_integrity_monitoring = (known after apply)
                  + enable_secure_boot          = (known after apply)
                }

              + workload_metadata_config {
                  + node_metadata = (known after apply)
                }
            }

          + upgrade_settings {
              + max_surge       = (known after apply)
              + max_unavailable = (known after apply)
            }
        }

      + release_channel {
          + channel = (known after apply)
        }

      + workload_identity_config {
          + identity_namespace = (known after apply)
        }
    }

  # google_container_node_pool.primary_nodes will be created
  + resource "google_container_node_pool" "primary_nodes" {
      + cluster             = "raspbernetes-gke"
      + id                  = (known after apply)
      + initial_node_count  = (known after apply)
      + instance_group_urls = (known after apply)
      + location            = "us-central1"
      + max_pods_per_node   = (known after apply)
      + name                = "raspbernetes-gke-node-pool"
      + name_prefix         = (known after apply)
      + node_count          = 1
      + node_locations      = (known after apply)
      + operation           = (known after apply)
      + project             = (known after apply)
      + version             = (known after apply)

      + management {
          + auto_repair  = (known after apply)
          + auto_upgrade = (known after apply)
        }

      + node_config {
          + disk_size_gb      = (known after apply)
          + disk_type         = (known after apply)
          + guest_accelerator = (known after apply)
          + image_type        = (known after apply)
          + labels            = {
              + "env" = "raspbernetes"
            }
          + local_ssd_count   = (known after apply)
          + machine_type      = "n1-standard-4"
          + metadata          = {
              + "disable-legacy-endpoints" = "true"
            }
          + oauth_scopes      = [
              + "https://www.googleapis.com/auth/logging.write",
              + "https://www.googleapis.com/auth/monitoring",
            ]
          + preemptible       = false
          + service_account   = (known after apply)
          + tags              = [
              + "gke-node",
              + "raspbernetes-gke",
            ]
          + taint             = (known after apply)

          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config {
              + node_metadata = (known after apply)
            }
        }

      + upgrade_settings {
          + max_surge       = (known after apply)
          + max_unavailable = (known after apply)
        }
    }

Plan: 4 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + kubernetes_cluster_name = "raspbernetes-gke"
  + region                  = "us-central1"

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Pusher: @renovate[bot], Action: pull_request, Working Directory: infrastructure/gcp, Workflow: terraform-plan

renovate bot force-pushed the renovate/libvirt-0.x branch from 256e3b7 to cf8ef7d on July 3, 2021 00:31
renovate bot changed the title from "Update Terraform libvirt to v0.6.9" to "Update Terraform libvirt to v0.6.10" on Jul 3, 2021

github-actions bot commented Jul 3, 2021

Terraform Format and Style 🖌 success

Terraform Initialization ⚙️ success

Terraform Validation 🤖 Success! The configuration is valid.

Terraform Plan 📖 success

Show Plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_compute_network.vpc will be created
  + resource "google_compute_network" "vpc" {
      + auto_create_subnetworks         = false
      + delete_default_routes_on_create = false
      + gateway_ipv4                    = (known after apply)
      + id                              = (known after apply)
      + mtu                             = (known after apply)
      + name                            = "raspbernetes-vpc"
      + project                         = (known after apply)
      + routing_mode                    = (known after apply)
      + self_link                       = (known after apply)
    }

  # google_compute_subnetwork.subnet will be created
  + resource "google_compute_subnetwork" "subnet" {
      + creation_timestamp         = (known after apply)
      + fingerprint                = (known after apply)
      + gateway_address            = (known after apply)
      + id                         = (known after apply)
      + ip_cidr_range              = "10.10.0.0/24"
      + name                       = "raspbernetes-subnet"
      + network                    = "raspbernetes-vpc"
      + private_ipv6_google_access = (known after apply)
      + project                    = (known after apply)
      + region                     = "us-central1"
      + secondary_ip_range         = (known after apply)
      + self_link                  = (known after apply)
    }
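
For reference, a minimal network configuration that would produce a plan like the one above might look as follows. This is a sketch reconstructed from the plan output, not the repository's actual code: the resource and attribute values shown in the plan are used directly, and everything else is an assumption.

```hcl
# Hypothetical sketch inferred from the plan output above.
resource "google_compute_network" "vpc" {
  name                    = "raspbernetes-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "subnet" {
  name          = "raspbernetes-subnet"
  region        = "us-central1"
  network       = google_compute_network.vpc.name
  ip_cidr_range = "10.10.0.0/24"
}
```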

  # google_container_cluster.primary will be created
  + resource "google_container_cluster" "primary" {
      + cluster_ipv4_cidr           = (known after apply)
      + datapath_provider           = (known after apply)
      + default_max_pods_per_node   = (known after apply)
      + enable_binary_authorization = false
      + enable_intranode_visibility = (known after apply)
      + enable_kubernetes_alpha     = false
      + enable_legacy_abac          = false
      + enable_shielded_nodes       = (known after apply)
      + endpoint                    = (known after apply)
      + id                          = (known after apply)
      + initial_node_count          = 1
      + instance_group_urls         = (known after apply)
      + label_fingerprint           = (known after apply)
      + location                    = "us-central1"
      + logging_service             = (known after apply)
      + master_version              = (known after apply)
      + monitoring_service          = (known after apply)
      + name                        = "raspbernetes-gke"
      + network                     = "raspbernetes-vpc"
      + networking_mode             = (known after apply)
      + node_locations              = (known after apply)
      + node_version                = (known after apply)
      + operation                   = (known after apply)
      + private_ipv6_google_access  = (known after apply)
      + project                     = (known after apply)
      + remove_default_node_pool    = true
      + self_link                   = (known after apply)
      + services_ipv4_cidr          = (known after apply)
      + subnetwork                  = "raspbernetes-subnet"
      + tpu_ipv4_cidr_block         = (known after apply)

      + addons_config {
          + cloudrun_config {
              + disabled           = (known after apply)
              + load_balancer_type = (known after apply)
            }

          + horizontal_pod_autoscaling {
              + disabled = (known after apply)
            }

          + http_load_balancing {
              + disabled = (known after apply)
            }

          + network_policy_config {
              + disabled = (known after apply)
            }
        }

      + authenticator_groups_config {
          + security_group = (known after apply)
        }

      + cluster_autoscaling {
          + enabled = (known after apply)

          + auto_provisioning_defaults {
              + oauth_scopes    = (known after apply)
              + service_account = (known after apply)
            }

          + resource_limits {
              + maximum       = (known after apply)
              + minimum       = (known after apply)
              + resource_type = (known after apply)
            }
        }

      + database_encryption {
          + key_name = (known after apply)
          + state    = (known after apply)
        }

      + default_snat_status {
          + disabled = (known after apply)
        }

      + ip_allocation_policy {
          + cluster_ipv4_cidr_block       = (known after apply)
          + cluster_secondary_range_name  = (known after apply)
          + services_ipv4_cidr_block      = (known after apply)
          + services_secondary_range_name = (known after apply)
        }

      + master_auth {
          + client_certificate     = (known after apply)
          + client_key             = (sensitive value)
          + cluster_ca_certificate = (known after apply)

          + client_certificate_config {
              + issue_client_certificate = false
            }
        }

      + network_policy {
          + enabled  = (known after apply)
          + provider = (known after apply)
        }

      + node_config {
          + disk_size_gb      = (known after apply)
          + disk_type         = (known after apply)
          + guest_accelerator = (known after apply)
          + image_type        = (known after apply)
          + labels            = (known after apply)
          + local_ssd_count   = (known after apply)
          + machine_type      = (known after apply)
          + metadata          = (known after apply)
          + min_cpu_platform  = (known after apply)
          + oauth_scopes      = (known after apply)
          + preemptible       = (known after apply)
          + service_account   = (known after apply)
          + tags              = (known after apply)
          + taint             = (known after apply)

          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config {
              + node_metadata = (known after apply)
            }
        }

      + node_pool {
          + initial_node_count  = (known after apply)
          + instance_group_urls = (known after apply)
          + max_pods_per_node   = (known after apply)
          + name                = (known after apply)
          + name_prefix         = (known after apply)
          + node_count          = (known after apply)
          + node_locations      = (known after apply)
          + version             = (known after apply)

          + autoscaling {
              + max_node_count = (known after apply)
              + min_node_count = (known after apply)
            }

          + management {
              + auto_repair  = (known after apply)
              + auto_upgrade = (known after apply)
            }

          + node_config {
              + disk_size_gb      = (known after apply)
              + disk_type         = (known after apply)
              + guest_accelerator = (known after apply)
              + image_type        = (known after apply)
              + labels            = (known after apply)
              + local_ssd_count   = (known after apply)
              + machine_type      = (known after apply)
              + metadata          = (known after apply)
              + min_cpu_platform  = (known after apply)
              + oauth_scopes      = (known after apply)
              + preemptible       = (known after apply)
              + service_account   = (known after apply)
              + tags              = (known after apply)
              + taint             = (known after apply)

              + shielded_instance_config {
                  + enable_integrity_monitoring = (known after apply)
                  + enable_secure_boot          = (known after apply)
                }

              + workload_metadata_config {
                  + node_metadata = (known after apply)
                }
            }

          + upgrade_settings {
              + max_surge       = (known after apply)
              + max_unavailable = (known after apply)
            }
        }

      + release_channel {
          + channel = (known after apply)
        }

      + workload_identity_config {
          + identity_namespace = (known after apply)
        }
    }
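
A cluster definition consistent with the plan above might look like the following sketch. Only the values that appear explicitly in the plan (name, location, network, `remove_default_node_pool = true`, `initial_node_count = 1`, and the disabled client certificate) are taken from the source; the rest is assumed.

```hcl
# Hypothetical sketch inferred from the plan output above.
resource "google_container_cluster" "primary" {
  name     = "raspbernetes-gke"
  location = "us-central1"

  network    = google_compute_network.vpc.name
  subnetwork = google_compute_subnetwork.subnet.name

  # The default pool is removed so a separately managed node pool
  # (see google_container_node_pool below in the plan) supplies the nodes.
  remove_default_node_pool = true
  initial_node_count       = 1

  master_auth {
    client_certificate_config {
      issue_client_certificate = false
    }
  }
}
```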

  # google_container_node_pool.primary_nodes will be created
  + resource "google_container_node_pool" "primary_nodes" {
      + cluster             = "raspbernetes-gke"
      + id                  = (known after apply)
      + initial_node_count  = (known after apply)
      + instance_group_urls = (known after apply)
      + location            = "us-central1"
      + max_pods_per_node   = (known after apply)
      + name                = "raspbernetes-gke-node-pool"
      + name_prefix         = (known after apply)
      + node_count          = 1
      + node_locations      = (known after apply)
      + operation           = (known after apply)
      + project             = (known after apply)
      + version             = (known after apply)

      + management {
          + auto_repair  = (known after apply)
          + auto_upgrade = (known after apply)
        }

      + node_config {
          + disk_size_gb      = (known after apply)
          + disk_type         = (known after apply)
          + guest_accelerator = (known after apply)
          + image_type        = (known after apply)
          + labels            = {
              + "env" = "raspbernetes"
            }
          + local_ssd_count   = (known after apply)
          + machine_type      = "n1-standard-4"
          + metadata          = {
              + "disable-legacy-endpoints" = "true"
            }
          + oauth_scopes      = [
              + "https://www.googleapis.com/auth/logging.write",
              + "https://www.googleapis.com/auth/monitoring",
            ]
          + preemptible       = false
          + service_account   = (known after apply)
          + tags              = [
              + "gke-node",
              + "raspbernetes-gke",
            ]
          + taint             = (known after apply)

          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config {
              + node_metadata = (known after apply)
            }
        }

      + upgrade_settings {
          + max_surge       = (known after apply)
          + max_unavailable = (known after apply)
        }
    }
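
The node pool plan above could be produced by a configuration along these lines. The labels, machine type, metadata, OAuth scopes, and tags are copied from the plan; the overall shape of the resource is an assumption, not the repository's actual code.

```hcl
# Hypothetical sketch inferred from the plan output above.
resource "google_container_node_pool" "primary_nodes" {
  name       = "raspbernetes-gke-node-pool"
  location   = "us-central1"
  cluster    = google_container_cluster.primary.name
  node_count = 1

  node_config {
    machine_type = "n1-standard-4"
    preemptible  = false

    labels = {
      env = "raspbernetes"
    }

    # Recommended hardening: disable legacy GCE metadata endpoints.
    metadata = {
      disable-legacy-endpoints = "true"
    }

    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]

    tags = ["gke-node", "raspbernetes-gke"]
  }
}
```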

Plan: 4 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + kubernetes_cluster_name = "raspbernetes-gke"
  + region                  = "us-central1"
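
Outputs matching the "Changes to Outputs" section above could be declared as follows; this is a sketch inferred from the plan, assuming the cluster resource is named `google_container_cluster.primary`.

```hcl
# Hypothetical sketch of the outputs shown in the plan.
output "kubernetes_cluster_name" {
  value = google_container_cluster.primary.name
}

output "region" {
  value = "us-central1"
}
```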

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Pusher: @renovate[bot], Action: pull_request, Working Directory: infrastructure/gcp, Workflow: terraform-plan

renovate bot force-pushed the renovate/libvirt-0.x branch from cf8ef7d to fa0e012 on October 19, 2021 01:54
renovate bot changed the title from "Update Terraform libvirt to v0.6.10" to "chore(deps): update terraform libvirt to v0.6.11" on Oct 19, 2021
renovate bot changed the title from "chore(deps): update terraform libvirt to v0.6.11" to "chore(deps): update terraform libvirt to v0.6.14" on Mar 7, 2022
renovate bot force-pushed the renovate/libvirt-0.x branch from fa0e012 to 50ba164 on March 7, 2022 15:11
renovate bot force-pushed the renovate/libvirt-0.x branch from 50ba164 to 7fc0a35 on November 20, 2022 12:11
renovate bot changed the title from "chore(deps): update terraform libvirt to v0.6.14" to "chore(deps): update terraform libvirt to v0.7.0" on Nov 20, 2022
renovate bot force-pushed the renovate/libvirt-0.x branch from 7fc0a35 to f048313 on March 16, 2023 13:31
renovate bot changed the title from "chore(deps): update terraform libvirt to v0.7.0" to "chore(deps): update terraform libvirt to v0.7.1" on Mar 16, 2023
renovate bot force-pushed the renovate/libvirt-0.x branch from f048313 to e6f41ab on October 10, 2023 00:51
renovate bot changed the title from "chore(deps): update terraform libvirt to v0.7.1" to "chore(deps): update terraform libvirt to v0.7.2" on Oct 10, 2023
renovate bot force-pushed the renovate/libvirt-0.x branch from e6f41ab to 4573300 on October 10, 2023 22:46
renovate bot changed the title from "chore(deps): update terraform libvirt to v0.7.2" to "chore(deps): update terraform libvirt to v0.7.4" on Oct 10, 2023
renovate bot changed the title from "chore(deps): update terraform libvirt to v0.7.4" to "chore(deps): update terraform libvirt to v0.7.6" on Nov 20, 2023
renovate bot force-pushed the renovate/libvirt-0.x branch from 4573300 to 29c723a on November 20, 2023 01:30
renovate bot force-pushed the renovate/libvirt-0.x branch from 29c723a to af39ba5 on September 23, 2024 00:12
renovate bot changed the title from "chore(deps): update terraform libvirt to v0.7.6" to "chore(deps): update terraform libvirt to v0.8.0" on Sep 23, 2024
renovate bot force-pushed the renovate/libvirt-0.x branch from af39ba5 to 46dfe0d on October 19, 2024 12:52
renovate bot changed the title from "chore(deps): update terraform libvirt to v0.8.0" to "chore(deps): update terraform libvirt to v0.8.1" on Oct 19, 2024
renovate bot force-pushed the renovate/libvirt-0.x branch from 46dfe0d to ddfaaee on March 3, 2025 03:26
renovate bot changed the title from "chore(deps): update terraform libvirt to v0.8.1" to "chore(deps): update terraform libvirt to v0.8.2" on Mar 3, 2025
renovate bot force-pushed the renovate/libvirt-0.x branch from ddfaaee to e6c6112 on March 4, 2025 12:13
renovate bot changed the title from "chore(deps): update terraform libvirt to v0.8.2" to "chore(deps): update terraform libvirt to v0.8.3" on Mar 4, 2025
renovate bot force-pushed the renovate/libvirt-0.x branch from e6c6112 to e2943d7 on November 8, 2025 05:45
renovate bot changed the title from "chore(deps): update terraform libvirt to v0.8.3" to "chore(deps): update terraform libvirt to v0.9.0" on Nov 8, 2025
renovate bot force-pushed the renovate/libvirt-0.x branch from e2943d7 to 2741337 on December 1, 2025 01:38
renovate bot changed the title from "chore(deps): update terraform libvirt to v0.9.0" to "chore(deps): update terraform libvirt to v0.9.1" on Dec 1, 2025
renovate bot force-pushed the renovate/libvirt-0.x branch from 2741337 to 2df2592 on January 25, 2026 16:56
renovate bot changed the title from "chore(deps): update terraform libvirt to v0.9.1" to "chore(deps): update terraform libvirt to v0.9.2" on Jan 25, 2026
Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
renovate bot force-pushed the renovate/libvirt-0.x branch from 2df2592 to adbd3cd on February 23, 2026 01:44
renovate bot changed the title from "chore(deps): update terraform libvirt to v0.9.2" to "chore(deps): update terraform libvirt to v0.9.3" on Feb 23, 2026