---
title: How to use the DSCC API and Ansible to collect a storage configuration
date: 2025-07-15T12:18:34.895Z
priority: -1
author: Dr. Thomas Beha
authorimage: /img/tb07112025.png
disable: false
tags:
  - data-services-on-the-hpe-greenlake-platform
---
<style> li { font-size: 27px; line-height: 33px; max-width: none; } </style>

Capturing the current storage configuration to verify it against best practices or configuration rules is something that customers request regularly. If a customer uses Ansible as their automation platform, the [HPE 3PAR Ansible module](https://github.com/HewlettPackard/hpe3par_ansible_module?tab=readme-ov-file) can be used to create and delete hosts, volumes, etc., but it is not really a solution for gathering the complete configuration.

Furthermore, this module uses the WSAPI of individual Alletra storage systems. The HPE Data Services Cloud Console (DSCC) is the better option for collecting storage configuration data from multiple systems, even those that might be distributed across multiple sites. From a single location, the DSCC can retrieve the data of all storage systems.

[Ansible playbooks for the DSCC](https://developer.hpe.com/blog/automating-operations-on-dscc-using-ansible-playbooks/) were discussed in one of the previous HPE Developer Community blog posts. The playbooks offer fact gathering for storage systems, hosts and volumes. However, once you dig into the details, you will find that the modules have not been updated for more than two years and do not support the HPE Alletra MP B10000 storage array. In this blog post, I will discuss a possible approach for DSCC data gathering using Ansible built-in functionality to overcome the lack of continuous playbook development.

# Capture the storage system configuration

Upon learning that the playbooks for the DSCC were not well maintained, I looked for a different way to capture the configuration data of all arrays in the HPE Customer Technology Center in Böblingen. The [HPE 3PAR Ansible module](https://github.com/HewlettPackard/hpe3par_ansible_module?tab=readme-ov-file) requires one to connect to each array individually and does not provide a complete capture of the array configuration. Hence, it is not a solution to the problem. A way forward is to use the HPE Data Services Cloud Console and the corresponding Data Services REST API (the basics were already discussed in previous posts on the HPE Developer Community blog: [Data Services on the HPE GreenLake platform | HPE Developer Portal](https://developer.hpe.com/greenlake/data-services-on-the-hpe-greenlake-platform/home/)). The Data Services REST API offers a complete list of commands that can be issued on the DSCC.

The configuration of a storage system generally includes the configuration data of the storage system itself, the details of the volumes configured on the array, and the host group and host details. The first step in gathering the configuration information is to get a list of the storage arrays connected to the Data Services Cloud Console. Once you have the list, you can gather the details of each storage array. The [Data Services REST API](https://console-us1.data.cloud.hpe.com/doc/api/v1/) supports this data gathering by supplying, with every array, a list of associated links that refer to the controller, disk, etc. information of this array. An example of a REST API call response is given below:

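A hypothetical, abbreviated response to a GET on /api/v1/storage-systems could look like the following sketch; the IDs, names and link URIs are illustrative placeholders only, and the exact schema is defined in the Data Services API reference:

```json
{
  "items": [
    {
      "id": "<system-id>",
      "name": "alletra-mp-01",
      "resourceUri": "/api/v1/storage-systems/<system-id>",
      "associatedLinks": [
        { "type": "controllers", "resourceUri": "/api/v1/storage-systems/<system-id>/controllers" },
        { "type": "disks", "resourceUri": "/api/v1/storage-systems/<system-id>/disks" }
      ]
    }
  ],
  "total": 1
}
```

The associatedLinks list is what makes the later per-system loop possible: each entry points directly at a detail endpoint for that array.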
In order to be independent of any Python library (or the lack of updates to a Python library), I decided to use Ansible's built-in functionality to create the DSCC capture playbooks. The basic tasks used by the playbooks are the DSCC REST API call using the ansible.builtin.uri module and, as a special call variant, the retrieval of the DSCC access token (which is special in terms of the URI used to get the access token).

# Basic tasks

## Retrieving a DSCC access token

The steps to first generate the client ID and the client secret used to access the DSCC REST API were already described in a post on the HPE Developer Community blog: [Using HPE GreenLake Console's API Gateway for Data Services Cloud Console](https://developer.hpe.com/blog/api-console-for-data-services-cloud-console/).

Once you have your client ID and client secret, you can generate an access token that is valid for two hours. This access token allows you to issue REST API calls to the Data Services Cloud Console, as it identifies you as the user linked to the client ID and secret used to create it. Hence, it is best practice to store the client ID and secret in a secure place.

The code example below stores the client credentials in the credentials.yml file, which was encrypted using ansible-vault. The playbook stores the access token in a file that is accessible only to the current user (hence the access mode 600 for this file) to avoid misuse of the retrieved access token.

```yaml
- name: Include encrypted vars
  ansible.builtin.include_vars: credentials.yml

- name: Get Access Token
  ansible.builtin.uri:
    url: "{{ sso_url }}"
    headers:
      Content-Type: "application/x-www-form-urlencoded"
      Authorization: "Basic {{ (dscc_id + ':' + dscc_secret) | b64encode }}"
    method: POST
    body: "grant_type=client_credentials"
    validate_certs: false
  register: oauth_response

- name: Define header
  ansible.builtin.set_fact:
    token: "Bearer {{ oauth_response.json.access_token }}"

- name: Store Token
  ansible.builtin.copy:
    content: "{{ token }}"
    dest: 'vars/token.txt'
    mode: "0600"
```
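
Because the token expires after two hours, long-running capture jobs may want to refresh it proactively. A minimal sketch, assuming the token-retrieval tasks above are saved in a file named Get-Token.yaml (a hypothetical name) and that fact gathering is enabled so ansible_date_time is available:

```yaml
- name: Check the age of the stored token file
  ansible.builtin.stat:
    path: vars/token.txt
  register: token_file

# Re-run the token retrieval when the file is missing or older than 7200 seconds.
- name: Refresh the token if it is missing or older than two hours
  ansible.builtin.include_tasks:
    file: Get-Token.yaml
  when: not token_file.stat.exists or
        (ansible_date_time.epoch | int) - (token_file.stat.mtime | int) > 7200
```

An alternative, used later in this post, is to react to a 401 response instead of checking the file age up front.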

## DSCC REST API call

A DSCC REST API call can be issued with or without a request body, and the possible responses depend on the actual API call. Nevertheless, it is good practice to build a modular code approach that uses a generalized REST API call to access the Data Services Cloud Console. The generalized DSCC REST API call, shown in the following code block, has these parameters:

* request URI (as documented in the [Data Services REST API](https://console-us1.data.cloud.hpe.com/doc/api/v1/))
* request method (GET, POST, DELETE, PUT)
* request body (optional)

```yaml
- name: Include encrypted vars
  ansible.builtin.include_vars: vars/credentials.yml

- name: Get Access Token
  ansible.builtin.set_fact:
    token: "{{ lookup('file', 'vars/token.txt') }}"

- name: Check the Method
  ansible.builtin.fail:
    msg: "DSCC-API-CALL: RestAPI Method is not defined!"
  when: method is not defined

- name: Check for the request Uri
  ansible.builtin.fail:
    msg: "DSCC-API-Call: Request URI is not defined!"
  when: request_uri is not defined

- name: DSCC Command - {{ request_uri }}
  ansible.builtin.uri:
    url: "{{ base_url }}{{ request_uri }}"
    headers:
      Authorization: "{{ token }}"
      Content-Type: "application/json"
    method: "{{ method }}"
    validate_certs: false
    status_code: [200, 201, 202, 401, 404]
  register: result
  when: body is not defined

- name: Set result status
  ansible.builtin.set_fact:
    status: "{{ result.status }}"
    tmpres: "{{ result }}"
  when: body is not defined

- name: DSCC Command with body {{ request_uri }}
  ansible.builtin.uri:
    url: "{{ base_url }}{{ request_uri }}"
    headers:
      Authorization: "{{ token }}"
      Content-Type: "application/json"
    method: "{{ method }}"
    body_format: json
    body: "{{ body | to_json }}"
    validate_certs: false
    status_code: [200, 201, 202, 400, 401, 404]
  register: result2
  when: body is defined

- name: Set result status
  ansible.builtin.set_fact:
    status: "{{ result2.status }}"
    tmpres: "{{ result2 }}"
  when: body is defined

- name: Set response when status in [200, 201, 202, 401]
  ansible.builtin.set_fact:
    response: "{{ tmpres }}"
  when: status in ['200', '201', '202', '401']

- name: Undefine response when status not in [200, 201, 202, 401]
  ansible.builtin.set_fact:
    response: ""
  when: status not in ['200', '201', '202', '401']
```

You can see that it first retrieves the stored access token, then checks that the method and the request URI are defined. Next, it issues the API call either with or without a request body before the response status is checked and the call response is set.

# System configuration capture

The complete workflow of the DSCC data capture is shown in the following flow diagram. First, the list of connected storage arrays is compiled and stored in a dictionary. Next, the playbook loops through the storage array dictionary to capture the details of each connected storage array (this includes looping through all associated links of a storage array and gathering all volumes defined on the array). Afterwards, the host group and host details are also captured and stored.

This system configuration capture flow chart can now be implemented using the above-mentioned basic tasks in combination with the correct request URIs and request bodies. You can see in the example below that the playbook first gets the list of storage arrays (request URI: /api/v1/storage-systems). If the command returns a status code of 401 (i.e., unauthorized access), it repeats the same call after retrieving a refreshed access token (this is the difference between the DSCC-API-Call.yaml and the DSCC-API-401.yaml playbooks). After successfully retrieving the system list, a system dictionary is populated, followed by looping through the dictionary (Loop-Systems.yaml playbook) and storing the system configuration information. Afterwards, the host group and host details are retrieved and stored.

```yaml
- hosts: localhost
  vars:
    method: "GET"

  tasks:
    - name: DSCC API Call GET storage systems
      vars:
        request_uri: "/api/v1/storage-systems"
      ansible.builtin.include_tasks:
        file: DSCC-API-Call.yaml

    - name: Retry the command if status 401
      vars:
        request_uri: "/api/v1/storage-systems"
      ansible.builtin.include_tasks:
        file: DSCC-API-401.yaml
      when: status == '401'

    - name: Set Systems
      ansible.builtin.set_fact:
        systems: "{{ response.json['items'] }}"
      when: status in ['200', '201']

    - name: Initialize Storage system dictionary if not defined
      ansible.builtin.set_fact:
        storage_systems: "{{ storage_systems | default({}) }}"

    - name: Create StorageSystems Dictionary
      ansible.builtin.set_fact:
        storage_systems: "{{ storage_systems | combine({item.name: {'id': item.id, 'resourceUri': item.resourceUri}}) }}"
      with_items: "{{ systems }}"

    - name: Loop Systems
      ansible.builtin.include_tasks:
        file: Loop-Systems.yaml
      with_dict: "{{ storage_systems }}"
      loop_control:
        loop_var: my_system

    - name: Get HostGroups
      vars:
        request_uri: "/api/v1/host-initiator-groups"
      ansible.builtin.include_tasks:
        file: DSCC-API-Call.yaml

    - name: Store the HostGroups
      ansible.builtin.copy:
        content: "{{ response.json | to_nice_json }}"
        dest: "../Outputs/hostGroups.json"
        mode: '0600'
      when: response.json is defined

    - name: Get Hosts
      ansible.builtin.include_tasks:
        file: GetAllHosts.yaml

    - name: Store the Hosts
      ansible.builtin.copy:
        content: "{{ response.json | to_nice_json }}"
        dest: "../Outputs/hosts.json"
        mode: '0600'
      when: response.json is defined
```
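
The DSCC-API-401.yaml playbook mentioned above is not shown in full here; a simplified sketch of the idea, assuming the token-retrieval tasks from earlier are saved in a file named Get-Token.yaml (a hypothetical name), would be:

```yaml
# Sketch of DSCC-API-401.yaml: refresh the access token, then repeat the call.
- name: Retrieve a fresh access token and store it in vars/token.txt
  ansible.builtin.include_tasks:
    file: Get-Token.yaml

# DSCC-API-Call.yaml reads the token from vars/token.txt, so the repeated
# call automatically picks up the refreshed token.
- name: Repeat the original DSCC call
  ansible.builtin.include_tasks:
    file: DSCC-API-Call.yaml
```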

The Loop-Systems.yaml playbook retrieves the storage system details and, for each system, loops through all the associated links of this array, providing you with a complete capture of the storage array configuration. The captured data is stored in multiple files with the naming structure **SystemName.associatedLink-Keyname.json**.

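The per-link processing can be sketched as follows; this is a simplified, hypothetical version of the Loop-Links.yaml idea (my_system comes from the loop_control settings of the main playbook, while my_link is an assumed loop variable over the associatedLinks; the actual playbooks are on GitHub):

```yaml
# Sketch: capture the data behind one associated link of one storage system.
- name: Get the details behind one associated link
  vars:
    method: "GET"
    request_uri: "{{ my_link.value }}"
  ansible.builtin.include_tasks:
    file: DSCC-API-Call.yaml

# File name follows the SystemName.associatedLink-Keyname.json structure.
- name: Store the captured details
  ansible.builtin.copy:
    content: "{{ response.json | to_nice_json }}"
    dest: "../Outputs/{{ my_system.key }}.{{ my_link.key }}.json"
    mode: '0600'
  when: response.json is defined
```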
The Ansible playbooks used to capture the system configuration are:

* Capture-Systems.yaml

  * DSCC-API-Call.yaml
  * DSCC-API-401.yaml
  * Loop-Systems.yaml

    * Loop-Links.yaml
    * GetAllSystemVolumes.yaml
    * GetAllHosts.yaml

In order to keep this blog post readable and not overloaded with code, only a few of the playbooks used are shown here, but all playbooks (and even some more) can be retrieved on GitHub at [https://github.com/tbeha/DSCC-Ansible](https://github.com/tbeha/DSCC-Ansible).

# Conclusion

It is possible to use Ansible playbooks to capture the storage array configuration using the HPE Data Services Cloud Console REST API and built-in Ansible functions. Having the storage array configuration captured in one or multiple JSON files leads to an obvious next step: using the captured configuration information to automate the redeployment of a storage array. This is one of my planned next activities. Stay tuned to the [HPE Developer Community blog](https://developer.hpe.com/) for more.