Zonewizard #67
Conversation
cloudstack/provider.go
Outdated
ResourcesMap: map[string]*schema.Resource{
    "cloudstack_affinity_group":       resourceCloudStackAffinityGroup(),
    "cloudstack_autoscale_vm_profile": resourceCloudStackAutoScaleVMProfile(),
    "cloudstack_cluster":              resourceCloudStackCluster(),
Great work @poddm. I'll try testing these changes.
Do we also need to add "cloudstack_zone" here?
@harikrishna-patnala - cloudstack_zone was partially added in a previous merge. I extended the resource to support all of the fields and updated the ID from name to uuid.
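For illustration, a minimal sketch of what the extended zone resource could look like, using only attribute names that appear in the destroy output later in this thread; the placeholder values, and which attributes are settable rather than computed, are assumptions:
resource "cloudstack_zone" "example" {
  name          = "example-zone"
  dns1          = "8.8.8.8"
  dns2          = "8.8.8.8"
  internal_dns1 = "8.8.4.4"
  internal_dns2 = "8.8.4.4"
  network_type  = "Advanced"
  domain        = "cloudstack.apache.org"
  # allocation_state, dhcp_provider, local_storage_enabled and
  # security_group_enabled also appear in the destroy plan below; treating
  # them as configurable arguments here is an assumption.
}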
Added additional resources
This also requires the following
@poddm could you please help in resolving the conflicts
Update - I'd still like to get these merged once the cloudstack SDK is updated. cloudstack go SDK changes
@poddm can you address conflicts on the PR?
@poddm can you review/rebase the PR & look at failing tests. Thanks.
@rohityadavcloud @vishesh92. This is now rebased and updated to the latest cloudstack library. The ACC tests are failing here. Is there a way to bypass this storage pool check in the simulator?
=== RUN   TestAccCloudStackStoragePool_basic
    resource_cloudstack_storage_pool_test.go:29: Step 1/2 error: Error running apply: exit status 1
        Error: CloudStack API error 530 (CSExceptionErrorCode: 9999): Failed to add data store: No host up to associate a storage pool with in cluster 1
          with cloudstack_storage_pool.test,
          on terraform_plugin_test.tf line 37, in resource "cloudstack_storage_pool" "test":
          37: resource "cloudstack_storage_pool" "test" {
--- FAIL: TestAccCloudStackStoragePool_basic (53.68s)
FAIL
FAIL    github.com/terraform-providers/terraform-provider-cloudstack/cloudstack    55.537s
FAIL
resource "cloudstack_storage_pool" "test" {
name = "acc_primarystorage1"
url = "nfs://10.147.28.6/export/home/sandbox/primary11"
zone_id = cloudstack_zone.test.id
cluster_id = cloudstack_cluster.test.id
pod_id = cloudstack_pod.test.id
scope = "CLUSTER"
hypervisor = "Simulator"
state = "Maintenance"
tags = "XYZ,123,456"
}
…tack_vlan_ip_range
@kiranchavala, I saw on the devlist that you are releasing v0.6.0. I'd like to get traction here - I've had this PR open for quite a while and am using it in our internal fork. I've rebased from the latest, and a number of these resources have been added since. However, there are some minor issues regarding implementation and provider conventions: https://developer.hashicorp.com/terraform/plugin/best-practices.
resource "cloudstack_zone" "foo" {
name = "terraform-zone"
dns1 = "8.8.8.8"
internal_dns1 = "8.8.4.4"
network_type = "Advanced"
}
resource "cloudstack_physical_network" "foo" {
name = "terraform-physical-network"
zone = cloudstack_zone.foo.name # ----> If the name is the same the physical network won't be recreated
broadcast_domain_range = "ZONE"
isolation_methods = ["VLAN"]
}
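For comparison, a hedged sketch of the id-based wiring that the best-practices guide generally favors; the zone_id argument is an assumption and may not match the schema in this PR, which takes the zone by name:
resource "cloudstack_physical_network" "foo" {
  name = "terraform-physical-network"
  # zone_id is assumed here for illustration; referencing the zone by its id
  # keeps the reference stable even if the zone is later renamed.
  zone_id = cloudstack_zone.foo.id
  broadcast_domain_range = "ZONE"
  isolation_methods = ["VLAN"]
}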
Thanks @poddm for the suggestion. Henceforth, we will check the PR for proper naming conventions.
kiranchavala
left a comment
Hitting the following exception when I delete the following resources:
resource "cloudstack_network_service_provider_state" "virtualrouter" {
name = "VirtualRouter"
physical_network_id = cloudstack_physical_network.test.id
enabled = false
}
resource "cloudstack_storage_pool" "test" {
name = "acc_primarystorage1"
url = "nfs://10.0.32.4/acs/primary/ref-trl-9433-k-Mol8-kiran-chavala/ref-trl-9433-k-Mol8-kiran-chavala-kvm-pri1/test"
zone_id = "05d9863d-bd94-41c2-bba8-251aab44637a"
cluster_id = "110d6845-647d-4339-b084-8f9d278fc568"
pod_id = "6079c0a3-e6d5-4eaf-ab47-551f657e5379"
scope = "CLUSTER"
hypervisor = "KVM"
state = "Maintenance"
tags = "XYZ,123,456"
}
The storage pool goes into the Up state.
cloudstack_pod.test: Destruction complete after 2s
╷
│ Error: Error deleting storage pool: CloudStack API error 431 (CSExceptionErrorCode: 4350): Unable to delete storage due to it is not in Maintenance state, pool: StoragePool {"id":8,"name":"acc_primarystorage1","poolType":"NetworkFilesystem","uuid":"8378d30a-9c47-3f13-a3c9-63bf9a0ac607"}
│
│
╵
╷
│ Error: Undefined error: {"errorcode":431,"errortext":"Network Service Provider id=28doesn't exist in the system"}
@kiranchavala - updates on these errors:
Also, for the service provider error, something deleted it. Did you happen to use
Here are the service providers I see created as part of a newly initialized physical network.
{
"listnetworkserviceprovidersresponse": {
"count": 7,
"networkserviceprovider": [
{
"name": "ConfigDrive",
"physicalnetworkid": "220ed3ff-eb74-41b3-9547-e4995975e4e5",
"state": "Disabled",
"id": "5af78cb7-3486-4eaa-940f-3af93c4b1cc2",
"servicelist": [
"UserData"
],
"canenableindividualservice": false
},
{
"name": "Tungsten",
"physicalnetworkid": "220ed3ff-eb74-41b3-9547-e4995975e4e5",
"state": "Disabled",
"id": "ad858b62-7935-404f-9469-a1a023503d51",
"servicelist": [
"Dhcp",
"Dns",
"Firewall",
"Lb",
"SourceNat",
"StaticNat",
"PortForwarding",
"SecurityGroup"
],
"canenableindividualservice": true
},
{
"name": "InternalLbVm",
"physicalnetworkid": "220ed3ff-eb74-41b3-9547-e4995975e4e5",
"state": "Disabled",
"id": "262247b7-7366-4c07-ba3b-0be10192f464",
"servicelist": [
"Lb"
],
"canenableindividualservice": true
},
{
"name": "BaremetalPxeProvider",
"physicalnetworkid": "220ed3ff-eb74-41b3-9547-e4995975e4e5",
"state": "Disabled",
"id": "76913e03-e263-4a54-be98-80c06b1580a6",
"servicelist": [],
"canenableindividualservice": false
},
{
"name": "VpcVirtualRouter",
"physicalnetworkid": "220ed3ff-eb74-41b3-9547-e4995975e4e5",
"state": "Disabled",
"id": "06113b93-f532-4002-9b3d-20787d7479b7",
"servicelist": [
"Vpn",
"Dhcp",
"Dns",
"Gateway",
"Lb",
"SourceNat",
"StaticNat",
"PortForwarding",
"UserData"
],
"canenableindividualservice": true
},
{
"name": "SecurityGroupProvider",
"physicalnetworkid": "220ed3ff-eb74-41b3-9547-e4995975e4e5",
"state": "Disabled",
"id": "34a5dac1-1599-484a-9ded-7a54fa92593b",
"servicelist": [
"SecurityGroup"
],
"canenableindividualservice": false
},
{
"name": "VirtualRouter",
"physicalnetworkid": "220ed3ff-eb74-41b3-9547-e4995975e4e5",
"state": "Enabled",
"id": "aa37a9cc-6638-4d72-a6b5-0da241df1e8d",
"servicelist": [
"Vpn",
"Dhcp",
"Dns",
"Gateway",
"Firewall",
"Lb",
"SourceNat",
"StaticNat",
"PortForwarding",
"UserData"
],
"canenableindividualservice": true
}
]
}
}
Here is my local destroy output.
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
# cloudstack_network_service_provider_state.virtualrouter will be destroyed
- resource "cloudstack_network_service_provider_state" "virtualrouter" {
- enabled = true -> null
- id = "d97959df-be50-4ec0-a888-ffc898d9b935" -> null
- name = "VirtualRouter" -> null
- physical_network_id = "3eabff91-debb-42f0-bb97-63da296bc1b4" -> null
}
# cloudstack_physical_network.test will be destroyed
- resource "cloudstack_physical_network" "test" {
- broadcast_domain_range = "ZONE" -> null
- id = "3eabff91-debb-42f0-bb97-63da296bc1b4" -> null
- isolation_methods = "VLAN" -> null
- name = "test01" -> null
- network_speed = "1G" -> null
- tags = "vlan" -> null
- zone_id = "8e1e5ae2-37dd-4215-a5cf-d6b223e94cba" -> null
# (2 unchanged attributes hidden)
}
# cloudstack_zone.test will be destroyed
- resource "cloudstack_zone" "test" {
- allocation_state = "Disabled" -> null
- dhcp_provider = "VirtualRouter" -> null
- dns1 = "8.8.8.8" -> null
- dns2 = "8.8.8.8" -> null
- domain = "cloudstack.apache.org" -> null
- id = "8e1e5ae2-37dd-4215-a5cf-d6b223e94cba" -> null
- internal_dns1 = "8.8.4.4" -> null
- internal_dns2 = "8.8.4.4" -> null
- local_storage_enabled = false -> null
- name = "acctest" -> null
- network_type = "Advanced" -> null
- security_group_enabled = false -> null
# (4 unchanged attributes hidden)
}
Plan: 0 to add, 0 to change, 3 to destroy.
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
cloudstack_network_service_provider_state.virtualrouter: Destroying... [id=d97959df-be50-4ec0-a888-ffc898d9b935]
cloudstack_network_service_provider_state.virtualrouter: Destruction complete after 0s
cloudstack_physical_network.test: Destroying... [id=3eabff91-debb-42f0-bb97-63da296bc1b4]
cloudstack_physical_network.test: Destruction complete after 0s
cloudstack_zone.test: Destroying... [id=8e1e5ae2-37dd-4215-a5cf-d6b223e94cba]
cloudstack_zone.test: Destruction complete after 0s
Destroy complete! Resources: 3 destroyed.
Thanks @poddm for the clarification
kiranchavala
left a comment
LGTM, tested the creation of the following resources:
cloudstack_zone
cloudstack_pod
cloudstack_cluster
cloudstack_physical_network
cloudstack_network_service_provider_state
cloudstack_network_service_provider
cloudstack_secondary_storage
cloudstack_storage_pool
cloudstack_traffic_type
cloudstack_vlan_ip_range
Zone wizard resources:
Also, updated the cloudstack_zone data source to use the uuid rather than the name as the id.
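A minimal usage sketch of that data source change, assuming the filter-block syntax used by the provider's other data sources; the filter name and the example zone name are assumptions, not confirmed by this PR:
data "cloudstack_zone" "selected" {
  filter {
    name  = "name"
    value = "acctest"
  }
}

output "zone_id" {
  # With this change the data source id is the zone uuid rather than its name.
  value = data.cloudstack_zone.selected.id
}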