Add rule number to the terraform state #245
Conversation
@Pearl1594 Below you will see why I have a concern about the current state of #242. I feel like there shouldn't be this many issues when "upgrading" the provider for things that will have ACL rules like this. I'm OK with having to split up and replace. Here were the steps taken: I used 0.5.0 initially and made sure all was good with terraform plan. I updated my TF binary to be the one built from this branch and ran plan.
I then fixed the ports -> port change by creating new rules and splitting them up. I originally had 7 rules, and 2 of the rules had multiple ports in them. Then I immediately ran another plan to see what it shows, and this is it:
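The split described above can be sketched as follows. This is an illustrative fragment, not the exact rules from my config; the port values "80" and "443" are hypothetical:

```hcl
# Old schema (0.5.0): one rule carrying multiple ports
# rule {
#   action       = "allow"
#   cidr_list    = ["10.0.0.0/24"]
#   protocol     = "tcp"
#   ports        = ["80", "443"]   # plural `ports` list
#   traffic_type = "ingress"
# }

# New schema: one rule block per port, each using the singular `port`
rule {
  action       = "allow"
  cidr_list    = ["10.0.0.0/24"]
  protocol     = "tcp"
  port         = "80"
  traffic_type = "ingress"
}

rule {
  action       = "allow"
  cidr_list    = ["10.0.0.0/24"]
  protocol     = "tcp"
  port         = "443"
  traffic_type = "ingress"
}
```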
@Pearl1594 After deleting all rules and applying, I then tested doing things with … However, I was able to change the … But nothing on … This rule still stays the same and no …
Thanks @CodeBleu - taking your feedback, I tried to see if it was possible to map whether the rules in the new schema (with port) match existing ones and update them should there be any change, but I was hitting multiple issues, so I went ahead with a workflow of replacing the rules. So this is how it works now:
I then applied the new config, where I separated ports to the new schema. This results in the following in ACS. Then I attempt to update rule number 1: successfully updated rule 1, and it is seen on ACS as well.
Do you see this as an acceptable workflow @CodeBleu ?
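For reference, a rule in the new schema can carry an explicit rule_number, which is the field being updated in the workflow above. This fragment mirrors the example values used later in this thread:

```hcl
rule {
  action       = "allow"
  cidr_list    = ["10.0.0.0/24"]
  protocol     = "tcp"
  port         = "8086"
  traffic_type = "egress"
  rule_number  = 190  # explicit rule number; updated in place in the new workflow
}
```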
I am observing that existing ACL rules are deleted and recreated (added again) after upgrading the provider.
With terraform provider 0.5.0 release and the following config.
Created acl rules
terraform {
  required_providers {
    cloudstack = {
      source  = "cloudstack/cloudstack"
      version = "0.5.0"
    }
  }
}

resource "cloudstack_network_acl_rule" "test" {
  acl_id = "1f1d916b-30c3-41bd-bc00-88cef443a0e2"
  rule {
    action       = "allow"
    cidr_list    = ["10.0.0.0/24"]
    protocol     = "tcp"
    ports        = ["81-83"]
    traffic_type = "ingress"
  }
  rule {
    action       = "allow"
    cidr_list    = ["10.0.0.0/24"]
    protocol     = "tcp"
    ports        = ["2222-2223"]
    traffic_type = "ingress"
  }
  rule {
    action       = "allow"
    cidr_list    = ["10.0.0.0/24"]
    protocol     = "tcp"
    ports        = ["8081"]
    traffic_type = "ingress"
  }
  rule {
    action       = "allow"
    cidr_list    = ["10.0.0.0/24"]
    protocol     = "tcp"
    ports        = ["8086"]
    traffic_type = "egress"
  }
}
mysql> select * from network_acl_item;
+----+--------------------------------------+--------+------------+----------+--------+----------+---------------------+-----------+-----------+--------------+--------+--------+---------+--------+
| id | uuid | acl_id | start_port | end_port | state | protocol | created | icmp_code | icmp_type | traffic_type | number | action | display | reason |
+----+--------------------------------------+--------+------------+----------+--------+----------+---------------------+-----------+-----------+--------------+--------+--------+---------+--------+
| 63 | 99cbb31a-5ffe-4fa3-b8d6-8382fc01ecb8 | 11 | 8086 | 8086 | Active | tcp | 2025-10-16 07:21:58 | NULL | NULL | Egress | 1 | Allow | 1 | NULL |
| 64 | ff35f826-2164-48ca-ad6c-9394bdb85d08 | 11 | 2222 | 2223 | Active | tcp | 2025-10-16 07:21:58 | NULL | NULL | Ingress | 2 | Allow | 1 | NULL |
| 65 | 22896291-6a17-40ed-b65e-f8caa852af25 | 11 | 81 | 83 | Active | tcp | 2025-10-16 07:21:59 | NULL | NULL | Ingress | 3 | Allow | 1 | NULL |
| 66 | 006b69b3-43fe-4323-bf72-ea3d1a6bebdc | 11 | 8081 | 8081 | Active | tcp | 2025-10-16 07:21:59 | NULL | NULL | Ingress | 4 | Allow | 1 | NULL |
+----+--------------------------------------+--------+------------+----------+--------+----------+---------------------+-----------+-----------+--------------+--------+--------+---------+--------+
4 rows in set (0.00 sec)
Based on your PR, I built the terraform binary locally and updated the following config:
terraform {
  required_providers {
    cloudstack = {
      source  = "localdomain/provider/cloudstack"
      version = "0.4.0"
    }
  }
}

resource "cloudstack_network_acl_rule" "test" {
  acl_id = "1f1d916b-30c3-41bd-bc00-88cef443a0e2"
  rule {
    action       = "allow"
    cidr_list    = ["10.0.0.0/24"]
    protocol     = "tcp"
    port         = "81-83"
    traffic_type = "ingress"
  }
  rule {
    action       = "allow"
    cidr_list    = ["10.0.0.0/24"]
    protocol     = "tcp"
    port         = "2222-2223"
    traffic_type = "ingress"
  }
  rule {
    action       = "allow"
    cidr_list    = ["10.0.0.0/24"]
    protocol     = "tcp"
    port         = "8081"
    traffic_type = "ingress"
  }
  rule {
    action       = "allow"
    cidr_list    = ["10.0.0.0/24"]
    protocol     = "tcp"
    port         = "8086"
    traffic_type = "egress"
  }
}
Did terraform upgrade
terraform init --upgrade
Initializing the backend...
Initializing provider plugins...
- Finding localdomain/provider/cloudstack versions matching "0.4.0"...
- Finding latest version of cloudstack/cloudstack...
- Installing localdomain/provider/cloudstack v0.4.0...
- Installed localdomain/provider/cloudstack v0.4.0 (unauthenticated)
- Using previously-installed cloudstack/cloudstack v0.5.0
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
Terraform apply
terraform apply
cloudstack_network_acl_rule.test: Refreshing state... [id=1f1d916b-30c3-41bd-bc00-88cef443a0e2]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
  # cloudstack_network_acl_rule.test will be updated in-place
  ~ resource "cloudstack_network_acl_rule" "test" {
        id = "1f1d916b-30c3-41bd-bc00-88cef443a0e2"
        # (3 unchanged attributes hidden)

      ~ rule {
          + port  = "81-83"
          ~ ports = [
              - "2222-2223",
            ]
            # (9 unchanged attributes hidden)
        }
      ~ rule {
          + port  = "2222-2223"
          ~ ports = [
              - "8081",
            ]
            # (9 unchanged attributes hidden)
        }
      ~ rule {
          + port         = "8081"
          ~ ports        = [
              - "8086",
            ]
          ~ traffic_type = "egress" -> "ingress"
            # (8 unchanged attributes hidden)
        }
      ~ rule {
          + port         = "8086"
          ~ ports        = [
              - "81-83",
            ]
          ~ traffic_type = "ingress" -> "egress"
            # (8 unchanged attributes hidden)
        }
    }
Plan: 0 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
cloudstack_network_acl_rule.test: Modifying... [id=1f1d916b-30c3-41bd-bc00-88cef443a0e2]
cloudstack_network_acl_rule.test: Modifications complete after 8s [id=1f1d916b-30c3-41bd-bc00-88cef443a0e2]
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
New resources got created
mysql> select * from network_acl_item;
+----+--------------------------------------+--------+------------+----------+--------+----------+---------------------+-----------+-----------+--------------+--------+--------+---------+--------+
| id | uuid | acl_id | start_port | end_port | state | protocol | created | icmp_code | icmp_type | traffic_type | number | action | display | reason |
+----+--------------------------------------+--------+------------+----------+--------+----------+---------------------+-----------+-----------+--------------+--------+--------+---------+--------+
| 67 | d38f29e1-207b-4456-8e5c-e995bcc6d985 | 11 | 81 | 83 | Active | tcp | 2025-10-16 07:24:29 | NULL | NULL | Ingress | 2 | Allow | 1 | NULL |
| 68 | a6685e26-62e8-4a00-8e11-5299f4805a11 | 11 | 2222 | 2223 | Active | tcp | 2025-10-16 07:24:29 | NULL | NULL | Ingress | 4 | Allow | 1 | NULL |
| 69 | f2792fb8-2454-4da7-904b-d2487ccb5abb | 11 | 8081 | 8081 | Active | tcp | 2025-10-16 07:24:30 | NULL | NULL | Ingress | 1 | Allow | 1 | NULL |
| 70 | 454d4d66-8c9c-4038-a646-1c07c3f83314 | 11 | 8086 | 8086 | Active | tcp | 2025-10-16 07:24:30 | NULL | NULL | Egress | 3 | Allow | 1 | NULL |
+----+--------------------------------------+--------+------------+----------+--------+----------+---------------------+-----------+-----------+--------------+--------+--------+---------+--------+
4 rows in set (0.00 sec)
Now, after adding a rule_number to one of the rules, another 4 ACL rules got created:
rule {
  action       = "allow"
  cidr_list    = ["10.0.0.0/24"]
  protocol     = "tcp"
  port         = "8086"
  traffic_type = "egress"
  rule_number  = 190
}
mysql> select * from network_acl_item;
+----+--------------------------------------+--------+------------+----------+--------+----------+---------------------+-----------+-----------+--------------+--------+--------+---------+--------+
| id | uuid | acl_id | start_port | end_port | state | protocol | created | icmp_code | icmp_type | traffic_type | number | action | display | reason |
+----+--------------------------------------+--------+------------+----------+--------+----------+---------------------+-----------+-----------+--------------+--------+--------+---------+--------+
| 67 | d38f29e1-207b-4456-8e5c-e995bcc6d985 | 11 | 81 | 83 | Active | tcp | 2025-10-16 07:24:29 | NULL | NULL | Ingress | 2 | Allow | 1 | NULL |
| 68 | a6685e26-62e8-4a00-8e11-5299f4805a11 | 11 | 2222 | 2223 | Active | tcp | 2025-10-16 07:24:29 | NULL | NULL | Ingress | 4 | Allow | 1 | NULL |
| 69 | f2792fb8-2454-4da7-904b-d2487ccb5abb | 11 | 8081 | 8081 | Active | tcp | 2025-10-16 07:24:30 | NULL | NULL | Ingress | 1 | Allow | 1 | NULL |
| 70 | 454d4d66-8c9c-4038-a646-1c07c3f83314 | 11 | 8086 | 8086 | Active | tcp | 2025-10-16 07:24:30 | NULL | NULL | Egress | 3 | Allow | 1 | NULL |
| 71 | faa678c2-7538-484b-a268-6d665e0519e4 | 11 | 81 | 83 | Active | tcp | 2025-10-16 07:27:47 | NULL | NULL | Ingress | 5 | Allow | 1 | NULL |
| 72 | 4569c2d8-3a80-4a48-a943-e40dab7992a0 | 11 | 2222 | 2223 | Active | tcp | 2025-10-16 07:27:48 | NULL | NULL | Ingress | 6 | Allow | 1 | NULL |
| 73 | 74031fc7-18cd-48b3-83f2-9e2136583847 | 11 | 8081 | 8081 | Active | tcp | 2025-10-16 07:27:48 | NULL | NULL | Ingress | 7 | Allow | 1 | NULL |
| 74 | 953c1734-d701-4408-9c17-b6a4aea8c947 | 11 | 8086 | 8086 | Active | tcp | 2025-10-16 07:27:49 | NULL | NULL | Egress | 190 | Allow | 1 | NULL |
+----+--------------------------------------+--------+------------+----------+--------+----------+---------------------+-----------+-----------+--------------+--------+--------+---------+--------+
8 rows in set (0.00 sec)
Strangely, I am not hitting this when I use your config, after terraform init --upgrade.
@Pearl1594 I was actually doing other TF work, still using this PR's provider code, and saw the following, which was a concern. If I revert my provider version back, I do not see that replace/destroy. I went ahead to change it from … I would assume the ACL rule updates should NOT impact other things like this.
@CodeBleu Is this the latest code change that you were testing with? And by reverting your provider version back, you mean to version 0.5.0?
@Pearl1594
@CodeBleu Could you please share your terraform config, so that I can test it out?
LGTM
Followed the same steps mentioned in the comment, and it works fine.
When an end user does an upgrade from the 0.5 to the 0.6 release:
terraform init --upgrade
The migration happens from ports to port - terraform will delete and recreate the same ACL rules.
Because during migration from ports to port one may decide to add the rule_number and description parameters, terraform deletes and recreates the same ACL rules in order to keep it simple.
The issue will not occur if the user modifies or adds new rules on the same 0.6 release - terraform does not delete and recreate the rules then.
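The upgrade path being described would look roughly like this in the provider block before running terraform init --upgrade; "0.6.0" is an assumed target version string for illustration:

```hcl
terraform {
  required_providers {
    cloudstack = {
      source  = "cloudstack/cloudstack"
      version = "0.6.0" # assumed release; the one-time delete/recreate migration happens on this upgrade
    }
  }
}
```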
@CodeBleu Also, I would suggest limiting testing only to terraform for now and ignoring opentofu. Also, support is only included for a terraform provider upgrade, i.e. 0.5 to 0.6, and not for a downgrade of the provider, i.e. 0.6 to 0.5.
@kiranchavala |
My biggest concern is why upgrading the provider causes it to want to delete/replace my k8s cluster. My terraform code is a lot more than this, but these are the parts that are being impacted when I test: k8s.tf acl.tf
This change you are observing with the CKS cluster is most likely unrelated to the ACL rule resource.
@Pearl1594 The only change is the updated provider from here! So, how is it not related to the new provider version?
Was the diff in CKS resource not observed on the 0.6.0-rc3 version? I haven't tested it, I'll give it a go; I've been testing just the network ACL rules with this PR. |
Yes. This is what I was saying here.
My apologies @CodeBleu - I didn't understand that comment previously. I've tried addressing that issue now by adding some checks to the cloudstack_kubernetes_cluster resource read function to set the state parameters accordingly. Hope this will address the issue. It would be helpful if you could also review it. Thanks |
I have also tested with the CKS resource and, with the latest build, found that the CKS resource is not affected.
Deploy a CKS cluster in a VPC network and add an ACL rule:
terraform init --upgrade
Thanks @CodeBleu. I found that there is existing automation in the opentofu registry to pick up the latest release of the cloudstack terraform provider whenever there is a release.
CodeBleu
left a comment
I was able to run the terraform plan with the CKS and it worked fine this time without trying to replace it.
@Pearl1594 I am curious why the resource_cloudstack_kubernetes_cluster.go file had to have these modifications, as it doesn't appear the changes in 0.6.0-rc3 included modifying that file, and the issue didn't happen on the provider before.
I was able to modify ACL rule_number on rules as well, and confirm they are changing as expected now.
@CodeBleu I was able to reproduce the issue you mentioned regarding the change in the CKS config, despite only changing ACLs, on the rc3 packages as well. The changes in the network ACL rule didn't cause the behavioural change in the CKS resource, as they are independent of each other. It could have been that the configuration used while testing rc3 (0.6.0) didn't use parameters like autoscaler, etc. I'm not sure why you weren't able to see the same on the 0.6.0-rc3 package.
I did see the same behaviour with the CKS issue with 0.6.0-rc3 😄 |
Addresses #242 (comment)