= Leveraging the Multicloud Object Gateway Component in the {odf} Operator for {productname}

Since {productname} does not support local filesystem storage, users can leverage the Multicloud Object Gateway component in the {odf} Operator for their storage needs.

By the nature of `PersistentVolume`, this is not a scale-out, highly available solution and does not replace a scale-out storage system like {odf}. Only a single instance of the gateway is running. If the pod running the gateway becomes unavailable due to rescheduling, updates, or unplanned downtime, this causes temporary degradation of the connected {productname} instances.

Using the following procedures, you will install the Local Storage Operator, {odf}, and create a standalone Multicloud Object Gateway to deploy {productname} on {ocp}.

[NOTE]
====
The following documentation shares commonality with the official link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.12/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/deploy-standalone-multicloud-object-gateway#doc-wrapper[{odf} documentation].
====

[id="installing-local-storage-operator"]
== Installing the Local Storage Operator on {ocp}

Use the following procedure to install the Local Storage Operator from the *OperatorHub* before creating {odf} clusters on local storage devices.

. Log in to the *OpenShift Web Console*.

. Click *Operators* → *OperatorHub*.

. Type *local storage* into the search box to find the Local Storage Operator from the list of Operators. Click *Local Storage*.

. Click *Install*.

. Set the following options on the Install Operator page:
+
* For Update channel, select *stable*.
* For Installation mode, select *A specific namespace on the cluster*.
* For Installed Namespace, select *Operator recommended namespace openshift-local-storage*.
* For Update approval, select *Automatic*.

. Click *Install*.

[id="installing-odf"]
== Installing {odf} on {ocp}

Use the following procedure to install {odf} on {ocp}.

.Prerequisites
* Access to an {ocp} cluster using an account with `cluster-admin` and Operator installation permissions.
* You must have at least three worker nodes in the {ocp} cluster.
* For additional resource requirements, see the link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.12/html-single/planning_your_deployment/index[Planning your deployment] guide.

.Procedure
. Log in to the *OpenShift Web Console*.

. Click *Operators* → *OperatorHub*.

. Type *OpenShift Data Foundation* in the search box. Click *OpenShift Data Foundation*.

. Click *Install*.

. Set the following options on the Install Operator page:
+
* For Update channel, select the most recent stable version.
* For Installation mode, select *A specific namespace on the cluster*.
* For Installed Namespace, select *Operator recommended Namespace: openshift-storage*.
* For Update approval, select *Automatic* or *Manual*.
+
If you select *Automatic* updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
+
If you select *Manual* updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.

* For Console plugin, select *Enable*.

. Click *Install*.
+
After the Operator is installed, a pop-up with the message `Web console update is available` appears on the user interface. Click *Refresh web console* from this pop-up for the console changes to take effect.

. Continue to the following section, "Creating a standalone Multicloud Object Gateway", to leverage the Multicloud Object Gateway component for {productname}.

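If you drive the installation from the CLI instead of the web console, the same update approval choice is expressed through the `installPlanApproval` field of the Operator's `Subscription` resource. The following is a minimal sketch only; the channel and package names are assumptions that you should verify against your Operator catalog:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-operator
  namespace: openshift-storage
spec:
  channel: stable-4.12            # assumed channel; match your {odf} version
  name: odf-operator              # assumed package name in the catalog
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic  # set to Manual to require administrator approval
----
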
[id="creating-mcg"]
== Creating a standalone Multicloud Object Gateway using the {ocp} UI

Use the following procedure to create a standalone Multicloud Object Gateway.

.Prerequisites
* You have installed the Local Storage Operator.
* You have installed the {odf} Operator.

.Procedure
. In the *OpenShift Web Console*, click *Operators* -> *Installed Operators* to view all installed Operators.
+
Ensure that the namespace is `openshift-storage`.

. Click *Create StorageSystem*.

. On the *Backing storage* page, select the following:
.. Select *Multicloud Object Gateway* for *Deployment type*.
.. Select the *Create a new StorageClass using the local storage devices* option.
.. Click *Next*.
+
[NOTE]
====
You are prompted to install the Local Storage Operator if it is not already installed. Click *Install*, and follow the procedure as described in "Installing the Local Storage Operator on {ocp}".
====

. On the *Create local volume set* page, provide the following information:
.. Enter a name for the *LocalVolumeSet* and the *StorageClass*. By default, the local volume set name appears for the storage class name. You can change the name.
.. Choose one of the following:
+
* *Disk on all nodes*
+
Uses the available disks that match the selected filters on all the nodes.
+
* *Disk on selected nodes*
+
Uses the available disks that match the selected filters only on the selected nodes.

.. From the available list of *Disk Type*, select *SSD/NVMe*.

.. Expand the *Advanced* section and set the following options:
+
|===
|*Volume Mode* | Filesystem is selected by default. Always ensure that Filesystem is selected for Volume Mode.
|*Device Type* | Select one or more device types from the drop-down list.
|*Disk Size* | Set a minimum size of 100 GB for the device and the maximum available size of the device that needs to be included.
|*Maximum Disks Limit* | This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes.
|===

.. Click *Next*.
+
A pop-up to confirm the creation of the `LocalVolumeSet` is displayed.

.. Click *Yes* to continue.
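+
Behind the scenes, this page creates a `LocalVolumeSet` custom resource. The following is a minimal sketch with hypothetical names, shown only to illustrate roughly how the wizard fields map to the API:
+
[source,yaml]
----
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: local-disk-set              # hypothetical local volume set name
  namespace: openshift-local-storage
spec:
  storageClassName: local-disk-set  # the StorageClass name entered in the wizard
  volumeMode: Filesystem            # Volume Mode from the Advanced section
  maxDeviceCount: 10                # Maximum Disks Limit; omit to use all available disks
  deviceInclusionSpec:
    deviceTypes:                    # Device Type selection
    - disk
    minSize: 100Gi                  # minimum Disk Size
----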

. In the *Capacity and nodes* page, configure the following:
+
.. *Available raw capacity* is populated with the capacity value based on all the attached disks associated with the storage class. This value takes some time to appear. The *Selected nodes* list shows the nodes based on the storage class.
.. Click *Next* to continue.

. Optional: Select the *Connect to an external key management service* checkbox. This is optional for cluster-wide encryption.
.. From the *Key Management Service Provider* drop-down list, select either *Vault* or *Thales CipherTrust Manager (using KMIP)*. If you selected *Vault*, go to the next step. If you selected *Thales CipherTrust Manager (using KMIP)*, go to step iii.
.. Select an *Authentication Method*.
+
Using Token authentication method
+
* Enter a unique *Connection Name*, host *Address* of the Vault server (`https://<hostname or ip>`), *Port* number, and *Token*.
+
* Expand *Advanced Settings* to enter additional settings and certificate details based on your `Vault` configuration:
+
** Enter the Key Value secret path in *Backend Path* that is dedicated and unique to {odf}.
** Optional: Enter *TLS Server Name* and *Vault Enterprise Namespace*.
** Upload the respective PEM encoded certificate file to provide the *CA Certificate*, *Client Certificate*, and *Client Private Key*.
** Click *Save* and skip to step iv.
+
Using Kubernetes authentication method
+
* Enter a unique Vault *Connection Name*, host *Address* of the Vault server (`https://<hostname or ip>`), *Port* number, and *Role* name.
* Expand *Advanced Settings* to enter additional settings and certificate details based on your Vault configuration:
** Enter the Key Value secret path in *Backend Path* that is dedicated and unique to {odf}.
** Optional: Enter *TLS Server Name* and *Authentication Path* if applicable.
** Upload the respective PEM encoded certificate file to provide the *CA Certificate*, *Client Certificate*, and *Client Private Key*.
** Click *Save* and skip to step iv.

.. To use *Thales CipherTrust Manager (using KMIP)* as the KMS provider, follow the steps below:

... Enter a unique *Connection Name* for the Key Management service within the project.
... In the *Address* and *Port* sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example:
+
* *Address*: 123.34.3.2
* *Port*: 5696
... Upload the *Client Certificate*, *CA certificate*, and *Client Private Key*.
... If StorageClass encryption is enabled, enter the unique identifier generated above to be used for encryption and decryption.
... The *TLS Server* field is optional and used when there is no DNS entry for the KMIP endpoint. For example, `kmip_all_<port>.ciphertrustmanager.local`.

.. Select a *Network*.
.. Click *Next*.

. In the *Review and create* page, review the configuration details. To modify any configuration settings, click *Back*.

. Click *Create StorageSystem*.

[id="creating-standalone-object-gateway"]
== Creating a standalone Multicloud Object Gateway using the CLI

Use the following procedure to install the {odf} (formerly known as OpenShift Container Storage) Operator and configure a single-instance Multicloud Object Gateway service.

[NOTE]
====
The following configuration cannot be run in parallel on a cluster with {odf} installed.
====

.Procedure
. In the *OpenShift Web Console*, select *Operators* -> *OperatorHub*.

. Search for *{odf}*, and then select *Install*.

. Accept all default options, and then select *Install*.

. Confirm that the Operator has been installed by viewing the *Status* column, which should be marked as *Succeeded*.
+
[WARNING]
====
When the installation of the {odf} Operator is finished, you are prompted to create a storage system. Do not follow this instruction. Instead, create NooBaa object storage as outlined in the following steps.
====

. On your machine, create a file named `noobaa.yaml` with the following information:
+
[source,yaml]
----
apiVersion: noobaa.io/v1alpha1
kind: NooBaa
metadata:
  name: noobaa
  namespace: openshift-storage
spec:
  dbResources:
    requests:
      cpu: '0.1'
      memory: 1Gi
  dbType: postgres
  coreResources:
    requests:
      cpu: '0.1'
      memory: 1Gi
----
+
This creates a single-instance deployment of the _Multi-cloud Object Gateway_.

. Apply the configuration with the following command:
+
[source,terminal]
----
$ oc create -n openshift-storage -f noobaa.yaml
----
+
.Example output
+
[source,terminal]
----
noobaa.noobaa.io/noobaa created
----

. After a few minutes, the _Multi-cloud Object Gateway_ should finish provisioning. You can enter the following command to check its status:
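+
[source,terminal]
----
# Watch the noobaa resource created above until its phase reports Ready
$ oc get -n openshift-storage noobaas noobaa -w
----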
. Configure a backing store for the gateway by creating the following YAML file, named `noobaa-pv-backing-store.yaml`:
+
[source,yaml]
----
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  finalizers:
  - noobaa.io/finalizer
  labels:
    app: noobaa
  name: noobaa-pv-backing-store
  namespace: openshift-storage
spec:
  pvPool:
    numVolumes: 1
    resources:
      requests:
        storage: 50Gi <1>
    storageClass: STORAGE-CLASS-NAME <2>
  type: pv-pool
----
<1> The overall capacity of the object storage service. Adjust as needed.
<2> The `StorageClass` to use for the `PersistentVolumes` requested. Delete this property to use the cluster default.

. Enter the following command to apply the configuration:
+
[source,terminal]
----
$ oc create -f noobaa-pv-backing-store.yaml
----
+
.Example output
+
[source,terminal]
----
backingstore.noobaa.io/noobaa-pv-backing-store created
----
+
This creates the backing store configuration for the gateway. All images in {productname} will be stored as objects through the gateway in a `PersistentVolume` created by the above configuration.
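+
Optionally, you can confirm that the backing store reports a `Ready` phase before continuing. This check assumes the `noobaa-pv-backing-store` name used above:
+
[source,terminal]
----
$ oc get -n openshift-storage backingstores noobaa-pv-backing-store -o jsonpath='{.status.phase}'
----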

. Run the following command to make the `PersistentVolume` backing store the default for all `ObjectBucketClaims` issued by the {productname} Operator:
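+
[source,terminal]
----
# Assumes the default bucket class created by the Operator is named noobaa-default-bucket-class
$ oc patch bucketclass noobaa-default-bucket-class --patch '{"spec":{"placementPolicy":{"tiers":[{"backingStores":["noobaa-pv-backing-store"]}]}}}' --type merge -n openshift-storage
----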