@SarahFrench SarahFrench commented Dec 1, 2025

Fixes #37954

TODO:

  • Ask around for context about BackendForLocalPlan
  • Check that this doesn't negatively impact use of the cloud backend

This PR

This PR adds an explicit error if the currently selected workspace doesn't match the workspace a plan file was generated against:

╷
│ Error: The plan file describes changes to the "" workspace, but the "default" workspace is currently in use.
│ 
│ Applying this plan with the incorrect workspace selected could result in state being stored in an unexpected location, or a downstream error
│ when Terraform attempts to apply a plan using the other workspace's state.
│ 
│ If you'd like to continue to use the plan file, you must run "terraform workspace select " to select the correct workspace.
│ In future make sure the selected workspace is not changed between creating and applying a plan file.
│ 
│ 

We might need slightly different wording based on whether a user is using a cloud block or a backend block, as workspace commands aren't supported with HCPTF.
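
Roughly, the new check compares the workspace name recorded in the plan file against the currently selected workspace before the backend is used. The sketch below is illustrative only: planWorkspace is an assumed variable name (currentWorkspace appears in the real diff further down), and the summary/detail wording here is not the exact text added by the PR.

// Illustrative sketch, not the exact code added to BackendForLocalPlan:
// refuse to continue if the plan was created against a different workspace.
if planWorkspace != currentWorkspace {
    return nil, diags.Append(&hcl.Diagnostic{
        Severity: hcl.DiagError,
        Summary:  "Plan was created for a different workspace",
        Detail: fmt.Sprintf("The plan file describes changes to the %q workspace, but the %q workspace is currently in use. Run \"terraform workspace select %s\" to select the correct workspace before applying this plan.",
            planWorkspace, currentWorkspace, planWorkspace,
        ),
    })
}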

Has this been implemented and removed in the past?

I took a look at the git blame data around the BackendForLocalPlan (originally "BackendForPlan") method and it doesn't look like a comparison was ever implemented and then removed. The closest thing in the blame data related to workspace names was #25262, but that change only checks that the current workspace name is valid; it doesn't compare it to the value in the plan file.

Does this affect the cloud backend?

Conclusion: No

When using the cloud backend, workspace CLI commands are blocked because HCP Terraform doesn't support CE workspace features. The value in the .terraform/environment file is controlled by the init command only: when the user changes the workspace value in the cloud block and runs init, the selected workspace is updated in that file.

This is in contrast to how users manage the .terraform/environment file directly via workspace commands when using other backends, and that (i.e. workspaces being managed outside of the init-plan-apply workflow) is the cause of the problem this PR addresses. Using the cloud backend means that users cannot cause a mismatch; manually editing that file would be the only way to create the same type of problem, and (1) users are encouraged to leave .terraform/ alone and out of VCS, and (2) Terraform would identify the mismatch and prompt for a re-run of init.
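
For reference, the selected workspace for CLI-managed backends ultimately comes from the working directory's data dir. The sketch below is a rough approximation of that resolution, not Terraform's actual implementation; the helper name is made up and the precedence details may differ.

import (
    "os"
    "path/filepath"
    "strings"
)

// selectedWorkspace is a rough approximation (assumed helper) of how the
// selected workspace could be resolved: TF_WORKSPACE wins, otherwise the
// .terraform/environment file is read, falling back to "default".
func selectedWorkspace(dataDir string) string {
    if ws := os.Getenv("TF_WORKSPACE"); ws != "" {
        return ws
    }
    raw, err := os.ReadFile(filepath.Join(dataDir, "environment"))
    if err != nil {
        return "default" // no file means the default workspace
    }
    if ws := strings.TrimSpace(string(raw)); ws != "" {
        return ws
    }
    return "default"
}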

Using HCP Terraform in Local execution mode

  • terraform init sets the selected workspace (i.e. the .terraform/environment file) to match the HCPTF workspace present in the configuration (or a workspace chosen interactively).
  • terraform plan -out=tfplan creates a local plan file that contains the HCPTF workspace in use, i.e. matching what's in the cloud block in the config.
  • When the workspace comparison added in this PR occurs, the plan's value is compared to the workspace selected via configuration.
    • This means BackendForLocalPlan isn't interrupted and an apply completes as expected.

I did find that if you change the workspace value in the configuration after making a plan, a user might think they're applying the plan against workspace B when the plan (and working directory) actually describe workspace A. This is because, when a plan file is used, Terraform doesn't check whether the cloud block matches the backend state file; the plan file is used as the only source of config for configuring the backend. If that counts as a bug, it feels sufficiently different from #37954 to not address in this PR.

Using HCP Terraform in Remote execution mode

In this execution mode, if you run terraform plan -out=tfplan the local plan file doesn't contain workspace data; instead it references a specific run in HCPTF that produced the plan remotely. If a user changes the workspace value in the config between the plan and apply commands, Terraform does detect that change and prompts the user to run init. Due to this, there doesn't seem to be a scenario in Remote execution mode where a user can accidentally change what a preexisting plan is applied to.

Target Release

1.15.x

Rollback Plan

  • If a change needs to be reverted, we will roll out an update to the code within 7 days.

Changes to Security Controls

Are there any changes to security controls (access controls, encryption, logging) in this pull request? If so, explain.

CHANGELOG entry

  • This change is user-facing and I added a changelog entry.
  • This change is not user-facing.

@SarahFrench SarahFrench force-pushed the sarah/error-plan-workspace-mismatch branch 2 times, most recently from 857c07b to 317492f on December 1, 2025 14:06
@SarahFrench SarahFrench force-pushed the sarah/error-plan-workspace-mismatch branch from 4c8812b to eca7931 on January 13, 2026 10:58
@SarahFrench SarahFrench marked this pull request as ready for review January 13, 2026 11:32
@SarahFrench SarahFrench requested a review from a team as a code owner January 13, 2026 11:32
@SarahFrench SarahFrench marked this pull request as draft January 15, 2026 15:19
@SarahFrench SarahFrench force-pushed the sarah/error-plan-workspace-mismatch branch from eca7931 to 4aba168 on January 22, 2026 13:18
@SarahFrench SarahFrench marked this pull request as ready for review January 22, 2026 13:25
Comment on lines +355 to +362
default:
    return nil, diags.Append(&hcl.Diagnostic{
        Severity: hcl.DiagError,
        Summary:  "Workspace data missing from plan file",
        Detail: fmt.Sprintf("The plan file does not contain a named workspace, so Terraform cannot determine if it was intended to be used with current workspace %q. This is a bug in Terraform and should be reported.",
            currentWorkspace,
        ),
    })
@SarahFrench (Member, Author) commented:


When writing this I wondered whether a panic would be more appropriate here (i.e. more likely to elicit a bug report), as it's a 'this should never happen' scenario.
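
For comparison, a panic-based version of that default: branch might look roughly like the following (illustrative only, reusing the currentWorkspace variable from the snippet above; any internal helpers Terraform has for reporting "this is a bug" errors aren't shown here):

// Illustrative alternative: treat a plan file with no workspace name as an
// internal invariant violation instead of returning a diagnostic.
default:
    panic(fmt.Sprintf("plan file has no workspace name to compare against current workspace %q; this is a bug in Terraform", currentWorkspace))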


Development

Successfully merging this pull request may close these issues.

Confusing behaviour when applying a plan against a workspace that doesn't match the plan
