
Commit e1ca6f0

Docs: Update README.md
This patch updates the README.md file to be a bit more detailed on how to get a working deployment with Ceph as a backend for Cinder and Glance.
1 parent 46162b5 commit e1ca6f0


README.md

Lines changed: 191 additions & 102 deletions
# CINDER-OPERATOR

The cinder-operator is an OpenShift Operator built using the Operator Framework
for Go. The Operator provides a way to easily install and manage an OpenStack
Cinder installation on OpenShift. This Operator was developed using RDO
containers for OpenStack.

## Getting started

**NOTES:**

- *The project is in a rapid development phase and not yet intended for
  production consumption, so instructions are meant for developers.*

- *If possible don't run things on your own machine to avoid the risk of
  affecting the development of your other projects.*

Here we'll explain how to get a functional OpenShift deployment running inside a
VM, with the MariaDB, RabbitMQ, Keystone, Glance, and Cinder services running
against a Ceph backend.

There are 4 steps:

- [Install prerequisites](#prerequisites)
- [Deploy an OpenShift cluster](#openshift-cluster)
- [Prepare Storage](#storage)
- [Deploy OpenStack](#deploy)

### Prerequisites

There are some tools that will be required throughout this process, so the first
thing we do is install them:

```sh
sudo dnf install -y git wget make ansible-core python-pip podman gcc
```

We'll also need this repository as well as `install_yamls`:

```sh
cd ~
git clone https://github.com/openstack-k8s-operators/install_yamls.git
git clone https://github.com/openstack-k8s-operators/cinder-operator.git
```
### OpenShift cluster

There are many ways to get an OpenShift cluster, and our recommendation for the
time being is to use [OpenShift Local](https://access.redhat.com/documentation/en-us/red_hat_openshift_local/2.5/html/getting_started_guide/index)
(formerly known as CRC / Code Ready Containers).

To help with the deployment we have [companion development tools](https://github.com/openstack-k8s-operators/install_yamls/blob/master/devsetup)
available that will install OpenShift Local for you and will also help with
later steps.

Running OpenShift requires a considerable amount of resources, even more when
running all the operators and services required for an OpenStack deployment,
so make sure that you have enough resources in the machine to run everything.

You will need at least 5 CPUs and 16GB of RAM, preferably more, just for the
local OpenShift VM.

**You will also need to get your [pull secret from Red Hat](https://cloud.redhat.com/openshift/create/local)
and store it on the machine, for example in your home directory as `pull-secret`.**

```sh
cd ~/install_yamls/devsetup
PULL_SECRET=~/pull-secret CPUS=6 MEMORY=20480 make download_tools crc
```

This will take a while, but once it has completed you'll have an OpenShift
cluster ready.
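Optionally, you can double-check the state of the VM and the cluster with CRC
itself; this is just a sanity check and not required for the following steps:

```sh
# Show the status of the OpenShift Local VM and the cluster running inside it.
crc status
```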
Now you need to set the right environment variables for the OCP cluster, and
you may want to log in to the cluster manually (although the previous step
already logs in at the end):

```sh
eval $(crc oc-env)
```

**NOTE**: When CRC finishes the deployment the `oc` client is logged in, but
the token will eventually expire; in that case we can log in again with
`oc login -u kubeadmin -p 12345678 https://api.crc.testing:6443`, or use the
[helper functions](CONTRIBUTING.md#helpful-scripts).

Let's now get the cluster version to confirm we have access to it:

```sh
oc get clusterversion
```
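If this fails with an authentication error the token has most likely expired;
in that case you can log in again as described in the note above and confirm
which user you are logged in as (the kubeadmin password shown here is the
default one mentioned earlier):

```sh
oc login -u kubeadmin -p 12345678 https://api.crc.testing:6443
# Confirm the current user.
oc whoami
```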

If you are running OCP on a different machine you'll need additional steps to
[access its dashboard from an external system](https://github.com/openstack-k8s-operators/install_yamls/tree/master/devsetup#access-ocp-from-external-systems).

### Storage

There are 2 kinds of storage we'll need: one for the pods to run, for example
for the MariaDB database files, and another for the OpenStack services to use
for the VMs.

To create the pod storage we run:

```sh
cd ~/install_yamls
make crc_storage
```
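To verify that the pod storage is in place you can list the PersistentVolumes;
this is only a sanity check, and the exact number and names depend on the
install_yamls defaults:

```sh
# The freshly created PVs should be Available (they become Bound once the
# services claim them).
oc get pv
```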
As for the storage for the OpenStack services, at the time of this writing only
NFS and Ceph are supported.

For simplicity's sake we'll use a *toy* Ceph cluster that runs in a single
local container using a simple script provided by this project. Beware that
this script overwrites things under `/etc/ceph`.

**NOTE**: This step must be run after the OpenShift VM is running because it
binds to an IP address created by it.

```sh
~/cinder-operator/hack/dev/create-ceph.sh
```
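Since the script runs the whole *toy* cluster in a single local container and
writes the client configuration under `/etc/ceph`, a quick sanity check could
look like the following (the container name is whatever the script chooses):

```sh
# The toy Ceph cluster should show up as a running container.
podman ps
# The Ceph client configuration and keyring the script generated.
sudo ls -l /etc/ceph
```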

Using an external Ceph cluster is also possible, but it is out of the scope of
this document, and the manifests we'll use have been tailor-made for this
specific *toy* Ceph cluster.

### Deploy

Deploying the podified OpenStack control plane is a 2-step process: first
deploying the operators, and then telling the openstack-operator what we want
our OpenStack deployment to look like.

Deploying the openstack operator:

```sh
cd ~/install_yamls
make openstack
```

Once the operator is ready we'll see its pod with:

```sh
oc get pod -l control-plane=controller-manager
```

And now we can tell this operator to deploy RabbitMQ, MariaDB, Keystone, Glance
and Cinder using the Ceph *toy* cluster:

```sh
export OPENSTACK_CR=`realpath ~/cinder-operator/hack/dev/openstack-ceph.yaml`
cd ~/install_yamls
make openstack_deploy
```
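The `OPENSTACK_CR` variable simply points to a plain YAML manifest, so if you
want to review what we are asking the openstack-operator to deploy you can just
inspect it:

```sh
cat "$OPENSTACK_CR"
```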

After a bit we can see the 5 operators are running:

```sh
oc get pods -l control-plane=controller-manager
```

And a while later the services will also appear:

```sh
oc get pods -l app=mariadb
oc get pods -l app.kubernetes.io/component=rabbitmq
oc get pods -l service=keystone
oc get pods -l service=glance
oc get pods -l service=cinder
```
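If you prefer to block until a given service is up instead of polling, `oc
wait` can do that once the pods exist; a sketch for Cinder (the label matches
the one above, the timeout is illustrative):

```sh
# Wait up to 5 minutes for the Cinder pods to report Ready.
oc wait pod -l service=cinder --for=condition=Ready --timeout=300s
```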

### Configure Clients

Now that we have the OpenStack services running we'll want to set up the
different OpenStack clients.

For convenience this project has a simple script that sets them up for us:

```sh
source ~/cinder-operator/hack/dev/osp-clients-cfg.sh
```

We can now see available endpoints and services to confirm that the clients and
the Keystone service work as expected:

```sh
openstack service list
openstack endpoint list
```
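Since Cinder is the focus of this operator, it is also worth checking that its
internal services have registered with the API; assuming the clients configured
above, this is one way to do it:

```sh
openstack volume service list
```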
Upload a Glance image:

```sh
cd
wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img -O cirros.img
openstack image create cirros --container-format=bare --disk-format=qcow2 < cirros.img
openstack image list
```

And create a Cinder volume:

```sh
openstack volume create --size 1 myvolume
```
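To confirm the volume was created successfully on the backend, check that it
reaches the `available` status (`myvolume` is the volume created above):

```sh
openstack volume list
# Or query just the status of the new volume.
openstack volume show myvolume -c status
```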
## Cleanup

To delete the deployed OpenStack we can do:

```sh
cd ~/install_yamls
make openstack_deploy_cleanup
```

Once we've done this we need to recreate the PVs that we created at the start,
since some of them will be in a failed state:

```sh
make crc_storage_cleanup crc_storage
```
We can now remove the openstack-operator as well:

```sh
make openstack_cleanup
```

# ADDITIONAL INFORMATION

**NOTE:** Run `make --help` for more information on all potential `make`
targets.

More information about the Makefile can be found via the [Kubebuilder
Documentation](https://book.kubebuilder.io/introduction.html).

# LICENSE

Copyright 2022.

[…]

distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.