- ===================
- System architecture
- ===================
+ ========================
+ Nova System Architecture
+ ========================

- OpenStack Compute contains several main components.
+ Nova comprises multiple server processes, each performing different
+ functions. The user-facing interface is a REST API, while internally Nova
+ components communicate via an RPC message passing mechanism.

- - The cloud controller represents the global state and interacts with the
-   other components. The ``API server`` acts as the web services front end for
-   the cloud controller. The ``compute controller`` provides compute server
-   resources and usually also contains the Compute service.
+ The API servers process REST requests, which typically involve database
+ reads/writes, optionally sending RPC messages to other Nova services,
+ and generating responses to the REST calls.
+ RPC messaging is done via the **oslo.messaging** library,
+ an abstraction on top of message queues.
+ Nova uses a messaging-based, ``shared nothing`` architecture, and most of the
+ major Nova components can be run on multiple servers and have a manager that
+ listens for RPC messages.
+ The one major exception is ``nova-compute``, where a single process runs on the
+ hypervisor it is managing (except when using the VMware or Ironic drivers).
+ The manager also, optionally, has periodic tasks.
+ For more details on our RPC system, please see :doc:`/reference/rpc`.
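The manager-listening-for-RPC-messages pattern described above can be sketched with standard-library primitives. This is an illustrative toy, not oslo.messaging: the class and method names are hypothetical, and real Nova dispatches messages over a broker such as RabbitMQ rather than an in-process queue.

```python
import queue
import threading

class ComputeManager:
    """Toy manager whose methods are invoked via RPC-style messages."""

    def start_instance(self, name):
        return f"instance {name} started"

def rpc_worker(manager, inbox):
    # The manager loop: wait for messages and dispatch by method name.
    while True:
        msg = inbox.get()
        if msg is None:          # shutdown sentinel
            break
        method, args, reply = msg
        reply.put(getattr(manager, method)(*args))

inbox = queue.Queue()
t = threading.Thread(target=rpc_worker, args=(ComputeManager(), inbox))
t.start()

# An RPC "call": send a message, then block until the response arrives.
reply = queue.Queue()
inbox.put(("start_instance", ("vm1",), reply))
result = reply.get()
inbox.put(None)
t.join()
print(result)  # instance vm1 started
```

In real deployments the caller and the manager are separate processes on separate hosts; the queue shown here stands in for the message broker.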

- - The ``object store`` is an optional component that provides storage
-   services; you can also use OpenStack Object Storage instead.
+ Nova also uses a central database that is (logically) shared between all
+ components. However, to aid upgrade, the DB is accessed through an object
+ layer that ensures an upgraded control plane can still communicate with
+ a ``nova-compute`` running the previous release.
+ To make this possible, ``nova-compute`` proxies DB requests over RPC to a
+ central manager called ``nova-conductor``.
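The conductor pattern described above can be illustrated with a minimal sketch: the compute service holds no database handle and forwards every update to a conductor, which performs the DB access on its behalf. All names below are hypothetical, and the direct method call stands in for the RPC hop used by real Nova.

```python
class Database:
    """Stand-in for the central SQL database."""

    def __init__(self):
        self.instances = {"vm1": {"state": "stopped"}}

class Conductor:
    """Central manager with direct DB access."""

    def __init__(self, db):
        self.db = db

    def instance_update(self, uuid, **changes):
        self.db.instances[uuid].update(changes)
        return self.db.instances[uuid]

class Compute:
    """No DB handle; proxies updates through the conductor (over RPC in
    real Nova, a plain call in this sketch)."""

    def __init__(self, conductor):
        self.conductor = conductor

    def power_on(self, uuid):
        return self.conductor.instance_update(uuid, state="running")

db = Database()
compute = Compute(Conductor(db))
print(compute.power_on("vm1"))  # {'state': 'running'}
```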

- - An ``auth manager`` provides authentication and authorization services when
-   used with the Compute system; you can also use OpenStack Identity as a
-   separate authentication service instead.
+ To horizontally expand Nova deployments, we have a deployment sharding
+ concept called cells. For more information, please see :doc:`/admin/cells`.
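As a loose illustration of the sharding idea, each instance is mapped to exactly one cell. Real Nova records this mapping in the API database when an instance is scheduled; the hash-based placement below is only a simplification to show the concept, and the cell names are made up.

```python
import hashlib
import uuid

# Hypothetical cell names; a real deployment defines its own cells.
CELLS = ["cell1", "cell2", "cell3"]

def cell_for_instance(instance_uuid):
    # Deterministically shard an instance onto one cell.
    digest = hashlib.sha256(instance_uuid.encode()).hexdigest()
    return CELLS[int(digest, 16) % len(CELLS)]

inst = str(uuid.uuid4())
print(cell_for_instance(inst) in CELLS)  # True
```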

- - A ``volume controller`` provides fast and permanent block-level storage for
-   the compute servers.

- - The ``network controller`` provides virtual networks to enable compute
-   servers to interact with each other and with the public network. You can also
-   use OpenStack Networking instead.
+ Components
+ ----------

- - The ``scheduler`` is used to select the most suitable compute controller to
-   host an instance.
+ Below you will find a helpful explanation of the key components
+ of a typical Nova deployment.
+
+ .. image:: /_static/images/architecture.svg
+    :width: 100%
+
+ * **DB**: SQL database for data storage.
+
+ * **API**: Component that receives HTTP requests, converts commands and
+   communicates with other components via the **oslo.messaging** queue or HTTP.
+
+ * **Scheduler**: Decides which host gets each instance.
+
+ * **Compute**: Manages communication with hypervisor and virtual machines.
+
+ * **Conductor**: Handles requests that need coordination (build/resize), acts
+   as a database proxy, or handles object conversions.
+
+ * :placement-doc:`Placement <>`: Tracks resource provider inventories and
+   usages.
+
+ While all services are designed to be horizontally scalable, you should have
+ significantly more computes than anything else.
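The scheduler's job ("decides which host gets each instance") can be sketched as a two-phase decision: filter out hosts that cannot fit the request, then weigh the survivors. Real Nova uses configurable filter and weigher classes; the host data and the most-free-RAM weigher below are illustrative assumptions only.

```python
hosts = [
    {"name": "node1", "free_ram_mb": 2048, "free_vcpus": 2},
    {"name": "node2", "free_ram_mb": 8192, "free_vcpus": 8},
    {"name": "node3", "free_ram_mb": 1024, "free_vcpus": 4},
]

def schedule(hosts, ram_mb, vcpus):
    # Filtering phase: drop hosts that cannot fit the request.
    fits = [h for h in hosts
            if h["free_ram_mb"] >= ram_mb and h["free_vcpus"] >= vcpus]
    if not fits:
        raise RuntimeError("No valid host found")
    # Weighing phase: prefer the host with the most free RAM.
    return max(fits, key=lambda h: h["free_ram_mb"])["name"]

print(schedule(hosts, ram_mb=4096, vcpus=2))  # node2
```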

- Compute uses a messaging-based, ``shared nothing`` architecture. All major
- components exist on multiple servers, including the compute, volume, and
- network controllers, and the Object Storage or Image service. The state of the
- entire system is stored in a database. The cloud controller communicates with
- the internal object store using HTTP, but it communicates with the scheduler,
- network controller, and volume controller using Advanced Message Queuing
- Protocol (AMQP). To avoid blocking a component while waiting for a response,
- Compute uses asynchronous calls, with a callback that is triggered when a
- response is received.

Hypervisors
- ~~~~~~~~~~~
+ -----------

- Compute controls hypervisors through an API server. Selecting the best
+ Nova controls hypervisors through an API server. Selecting the best
hypervisor to use can be difficult, and you must take budget, resource
constraints, supported features, and required technical specifications into
account. However, the majority of OpenStack development is done on systems
using KVM-based hypervisors. For a detailed list of features and
support across different hypervisors, see :doc:`/user/support-matrix`.

You can also orchestrate clouds using multiple hypervisors in different
- availability zones. Compute supports the following hypervisors:
+ availability zones. Nova supports the following hypervisors:

- :ironic-doc:`Baremetal <>`
@@ -75,35 +97,29 @@ For more information about hypervisors, see
:doc:`/admin/configuration/hypervisors`
section in the Nova Configuration Reference.

+
Projects, users, and roles
- ~~~~~~~~~~~~~~~~~~~~~~~~~~
+ --------------------------

- To begin using Compute, you must create a user with the
+ To begin using Nova, you must create a user with the
:keystone-doc:`Identity service <>`.

- The Compute system is designed to be used by different consumers in the form of
- projects on a shared system, and role-based access assignments. Roles control
+ The Nova system is designed to be used by different consumers in the form of
+ projects on a shared system, and role-based access assignments. Roles control
the actions that a user is allowed to perform.

Projects are isolated resource containers that form the principal
- organizational structure within the Compute service. They consist of an
+ organizational structure within the Nova service. They typically consist of an
individual VLAN, and volumes, instances, images, keys, and users. A user can
specify the project by appending ``project_id`` to their access key. If no
- project is specified in the API request, Compute attempts to use a project with
+ project is specified in the API request, Nova attempts to use a project with
the same ID as the user.

- For projects, you can use quota controls to limit the:
-
- - Number of volumes that can be launched.
-
- - Number of processor cores and the amount of RAM that can be allocated.
-
- - Floating IP addresses assigned to any instance when it launches. This allows
-   instances to have the same publicly accessible IP addresses.
-
- - Fixed IP addresses assigned to the same instance when it launches. This
-   allows instances to have the same publicly or privately accessible IP
-   addresses.
+ For projects, you can use quota controls to limit the number of processor cores
+ and the amount of RAM that can be allocated. Other OpenStack services also
+ allow quotas on their own resources. For example, :neutron-doc:`neutron
+ </admin/ops-quotas.html>` allows you to manage the number of networks that can
+ be created within a project.
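The quota check described above amounts to comparing a request against per-project limits and current usage. The limit values, field names, and function below are all hypothetical; they only illustrate the bookkeeping, not Nova's actual quota engine.

```python
# Hypothetical per-project limits and current usage.
quota = {"cores": 20, "ram_mb": 51200}
usage = {"cores": 18, "ram_mb": 40960}

def check_quota(request_cores, request_ram_mb):
    # Return the list of resources the request would push over quota;
    # an empty list means the request fits.
    over = []
    if usage["cores"] + request_cores > quota["cores"]:
        over.append("cores")
    if usage["ram_mb"] + request_ram_mb > quota["ram_mb"]:
        over.append("ram_mb")
    return over

print(check_quota(2, 4096))   # []
print(check_quota(4, 16384))  # ['cores', 'ram_mb']
```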

Roles control the actions a user is allowed to perform. By default, most
actions do not require a particular role, but you can configure them by editing
@@ -122,54 +138,52 @@ consumption across available hardware resources.
``project``. Because of this legacy terminology, some command-line tools use
``--tenant_id`` where you would normally expect to enter a project ID.

+
Block storage
- ~~~~~~~~~~~~~
+ -------------

OpenStack provides two classes of block storage: ephemeral storage and
persistent volume.

.. rubric:: Ephemeral storage

Ephemeral storage includes a root ephemeral volume and an additional ephemeral
- volume.
+ volume. These are provided by Nova itself.

The root disk is associated with an instance, and exists only for the life of
this very instance. Generally, it is used to store an instance's root file
system, persists across the guest operating system reboots, and is removed on
an instance deletion. The size of the root ephemeral volume is defined by the
flavor of an instance.

- In addition to the ephemeral root volume, all default types of flavors, except
- ``m1.tiny``, which is the smallest one, provide an additional ephemeral block
- device sized between 20 and 160 GB (a configurable value to suit an
- environment). It is represented as a raw block device with no partition table
- or file system. A cloud-aware operating system can discover, format, and mount
- such a storage device. OpenStack Compute defines the default file system for
- different operating systems as Ext4 for Linux distributions, VFAT for non-Linux
- and non-Windows operating systems, and NTFS for Windows. However, it is
- possible to specify any other filesystem type by using ``virt_mkfs`` or
- ``default_ephemeral_format`` configuration options.
+ In addition to the ephemeral root volume, flavors can provide an additional
+ ephemeral block device. It is represented as a raw block device with no
+ partition table or file system. A cloud-aware operating system can discover,
+ format, and mount such a storage device. Nova defines the default file system
+ for different operating systems as ext4 for Linux distributions, VFAT for
+ non-Linux and non-Windows operating systems, and NTFS for Windows. However, it
+ is possible to configure other filesystem types.
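The default-filesystem rule stated above (ext4 for Linux, VFAT for other non-Windows systems, NTFS for Windows) reduces to a simple mapping. The function name and the ``os_type`` values here are illustrative assumptions, not Nova's actual API.

```python
def default_ephemeral_format(os_type):
    # Mirror the rule from the text: ext4 for Linux, NTFS for Windows,
    # VFAT for everything else.
    if os_type == "linux":
        return "ext4"
    if os_type == "windows":
        return "ntfs"
    return "vfat"

print(default_ephemeral_format("linux"))    # ext4
print(default_ephemeral_format("windows"))  # ntfs
print(default_ephemeral_format("freebsd"))  # vfat
```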

.. note::

   For example, the ``cloud-init`` package included in Ubuntu's stock
-    cloud image, by default, formats this space as an Ext4 file system and
+    cloud image, by default, formats this space as an ext4 file system and
   mounts it on ``/mnt``. This is a cloud-init feature, and is not an OpenStack
   mechanism. OpenStack only provisions the raw storage.

.. rubric:: Persistent volume

A persistent volume is represented by a persistent virtualized block device
- independent of any particular instance, and provided by OpenStack Block
- Storage.
+ independent of any particular instance. These are provided by the OpenStack
+ Block Storage service, cinder.

- Only a single configured instance can access a persistent volume. Multiple
- instances cannot access a persistent volume. This type of configuration
- requires a traditional network file system to allow multiple instances
- accessing the persistent volume. It also requires a traditional network file
- system like NFS, CIFS, or a cluster file system such as GlusterFS. These
- systems can be built within an OpenStack cluster, or provisioned outside of it,
- but OpenStack software does not provide these features.
+ Persistent volumes can be accessed by a single instance or attached to multiple
+ instances. Sharing a volume between multiple instances requires a traditional
+ network file system like NFS or CIFS, or a cluster file system such as
+ GlusterFS. These systems can be built within an OpenStack cluster, or
+ provisioned outside of it, but OpenStack software does not provide these
+ features.

You can configure a persistent volume as bootable and use it to provide a
persistent virtual instance similar to the traditional non-cloud-based
@@ -190,17 +204,17 @@ configuration, see :cinder-doc:`Introduction to the Block Storage service


Building blocks
- ~~~~~~~~~~~~~~~
+ ---------------

In OpenStack the base operating system is usually copied from an image stored
- in the OpenStack Image service. This is the most common case and results in an
- ephemeral instance that starts from a known template state and loses all
- accumulated states on virtual machine deletion. It is also possible to put an
- operating system on a persistent volume in the OpenStack Block Storage volume
- system. This gives a more traditional persistent system that accumulates states
- which are preserved on the OpenStack Block Storage volume across the deletion
- and re-creation of the virtual machine. To get a list of available images on
- your system, run:
+ in the OpenStack Image service, glance. This is the most common case and
+ results in an ephemeral instance that starts from a known template state and
+ loses all accumulated states on virtual machine deletion. It is also possible
+ to put an operating system on a persistent volume in the OpenStack Block
+ Storage service. This gives a more traditional persistent system that
+ accumulates states which are preserved on the OpenStack Block Storage volume
+ across the deletion and re-creation of the virtual machine. To get a list of
+ available images on your system, run:

.. code-block:: console
@@ -230,10 +244,9 @@ The displayed image attributes are:
  field is blank.

Virtual hardware templates are called ``flavors``. By default, these are
- configurable by admin users, however that behavior can be changed by redefining
- the access controls for ``compute_extension:flavormanage`` in
- ``/etc/nova/policy.yaml`` on the ``compute-api`` server.
- For more information, refer to :doc:`/configuration/policy`.
+ configurable by admin users; however, that behavior can be changed by redefining
+ the access controls in ``policy.yaml`` on the ``nova-compute`` server. For more
+ information, refer to :doc:`/configuration/policy`.
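As a hedged illustration of the policy override mentioned above, a ``policy.yaml`` entry restricting flavor management might look like the fragment below. The rule names vary between releases, so verify them against the policy reference for your release before copying.

```yaml
# Hypothetical policy.yaml fragment: restrict flavor creation and
# deletion to admins. Check the exact rule names for your release.
"os_compute_api:os-flavor-manage:create": "rule:admin_api"
"os_compute_api:os-flavor-manage:delete": "rule:admin_api"
```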

For a list of flavors that are available on your system:
@@ -250,8 +263,9 @@ For a list of flavors that are available on your system:
   | 5   | m1.xlarge | 16384 | 160  | 0         | 8     | True      |
   +-----+-----------+-------+------+-----------+-------+-----------+

- Compute service architecture
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ Nova service architecture
+ -------------------------

These basic categories describe the service architecture and information about
the cloud controller.