kubernetes_manifest sends more fields in the request than in the input manifest #2796

@matthewjwhite

Question

I have been testing the kubernetes_manifest resource and wanted to get clarification on the expected behavior.

To summarize, I've noticed that the request sent to the Kubernetes API contains more than just the fields in the input manifest, causing Terraform to become a shared owner of fields beyond those in the manifest. This makes it difficult for a separate field manager using Server-Side Apply to own a different set of fields. I also expected to be able to apply the resource without importing it first.

In contrast, kubernetes_config_map_v1_data (which also uses Server-Side Apply) can be applied without importing (importing isn't allowed), and the managed fields are set only for the fields in the input data.
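
For reference, here is roughly how I'm using that resource. This is just a minimal sketch: the ConfigMap name, namespace, and data below are placeholders, and (if I recall correctly) field_manager is a plain string attribute on this resource rather than a block. Applying it without an import only patches the listed keys, and only those keys show up under the given field manager in managedFields:

resource "kubernetes_config_map_v1_data" "test" {
  # Placeholder ConfigMap; it must already exist in the cluster.
  metadata {
    name      = "my-config"
    namespace = "default"
  }

  # Only these keys are sent in the Server-Side Apply patch, so only
  # these keys end up owned by the field manager below.
  data = {
    "my-key" = "my-value"
  }

  field_manager = "blah"
}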

Is this expected, or is there a potential bug here (or perhaps even grounds for an enhancement)? Please see the recreation steps below and my notes on the code.

Recreation Steps

For example, consider the following Service that is initially created through server-side apply outside of Terraform:

$ kubectl apply --server-side=true -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
EOF
service/test serverside-applied

Getting the resource afterwards shows managedFields set for the above fields, as expected:

$ kubectl get service test -o yaml --show-managed-fields=true
...
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:ports:
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
    manager: kubectl
    operation: Apply
    time: "2025-10-10T15:39:04Z"
...

With kubernetes_manifest, I only want to manage the selector field. Sample configuration:

resource "kubernetes_manifest" "test" {
  manifest = yamldecode(<<EOF
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: default
spec:
  selector:
    app: my-app
EOF
  )

  field_manager {
    name = "blah"
  }
}

After importing and applying the resource, here is the new managedFields entry for the provider's field manager:

$ kubectl get service test -o json --show-managed-fields=true
...
            {
                "apiVersion": "v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:spec": {
                        "f:clusterIP": {},
                        "f:clusterIPs": {},
                        "f:internalTrafficPolicy": {},
                        "f:ipFamilies": {},
                        "f:ipFamilyPolicy": {},
                        "f:ports": {
                            "k:{\"port\":80,\"protocol\":\"TCP\"}": {
                                ".": {},
                                "f:port": {},
                                "f:protocol": {},
                                "f:targetPort": {}
                            }
                        },
                        "f:selector": {},
                        "f:sessionAffinity": {},
                        "f:type": {}
                    }
                },
                "manager": "blah",
                "operation": "Apply",
                "time": "2025-10-10T15:49:28Z"
            }
...

Notice that in addition to selector, it also contains ports and several other fields, most of which not even kubectl owned.

Modifying the resource again from kubectl, changing only one of the fields that kubectl owns, results in a conflict:

$ kubectl apply --server-side=true -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 500
EOF
error: Apply failed with 1 conflict: conflict with "blah": .spec.ports[port=80,protocol="TCP"].targetPort
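
As far as I know, the only way for the kubectl side to move forward at that point is to force the conflict with --force-conflicts, which hands ownership of the contested field over to kubectl (the command below is a sketch of that standard Server-Side Apply escape hatch):

$ kubectl apply --server-side=true --force-conflicts -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 500
EOF
service/test serverside-applied

That works, but it only transfers ownership of targetPort and doesn't address the provider claiming fields that aren't in the manifest.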

Further Investigation

I'm not an expert on the Provider, or Provider development in general, but this is my understanding of why this happens:

  1. The current state of the resource is read from the server and stored in the object field of the state.
  2. As part of processing the proposed manifest field, the provider appears to additionally pull in values for fields that aren't in the manifest but are in the previous object. The Trace logs printed before and after the Transform corroborate this. This is then saved as the new planned state.
  3. The Provider then applies the planned state's object field (see the check below).
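
A quick way to check what the provider's field manager actually ended up owning after each step (assuming jq is available; "blah" is the manager name from the configuration above):

$ kubectl get service test -o json --show-managed-fields=true \
    | jq '.metadata.managedFields[] | select(.manager == "blah") | .fieldsV1'

After the apply above, this prints the full f:spec block shown earlier rather than just f:selector, which is consistent with the planned object (not just the manifest) being what is sent in the Apply request.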
