`dev-install` installs [TripleO standalone](https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/deployment/standalone.html) on a remote system for development use.
## Host configuration

`dev-install` requires that:

* An appropriate OS has already been installed
* EPEL repositories aren't deployed; they aren't compatible with OpenStack repositories
* The machine running `dev-install` can SSH to the standalone host as either root or a user with passwordless sudo access
* This machine has Ansible installed, and some dependencies like `python3-netaddr`.

You need to deploy the right RHEL version depending on which OSP version you want:

| OSP | RHEL |
|------|------|
| 16.2 | 8.4 |
| 17.1* | 9.2 |

> \* Current default.

There is no need to do any other configuration prior to running `dev-install`.

When deploying TripleO from upstream, you need to deploy on CentOS Stream. If CentOS is not Stream, `dev-install` will migrate it.
## Local pre-requisites

`dev-install` requires up-to-date versions of `ansible` and `make`, both of which must be installed manually before invoking `dev-install`.

If installing OSP 16.2 with official RHEL 8.4 cloud images, the `cloud-init` service must be disabled before deployment, as per [this change](https://review.opendev.org/c/openstack/tripleo-heat-templates/+/764933).

At present the deployment depends on a valid DHCP source for the external interface (`br-ex`), as per [this template](https://github.com/shiftstack/dev-install/blob/main/playbooks/templates/dev-install_net_config.yaml.j2#L9).

All other requirements should be configured automatically by Ansible. Note that `dev-install` does require root access (or passwordless sudo) on the machine it is invoked from, in addition to the remote host, in order to install certificate management tools (simpleca).
## Running dev-install

`dev-install` is invoked using its `Makefile`. The simplest invocation is:

```console
make config host=<standalone host>
make osp_full
```
`make config` initialises two local state files:

* `inventory` - this is an Ansible inventory file, initialised such that `standalone` is an alias for your target host.
* `local-overrides.yaml` - this is an Ansible vars file containing configuration which overrides the defaults in `playbooks/vars/defaults.yaml`.

Both of these files can be safely modified.
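
For example, a minimal `local-overrides.yaml` might override just a couple of the defaults (the variable names are used elsewhere in this README; the values here are illustrative only):

```yaml
# Illustrative overrides; see playbooks/vars/defaults.yaml for the full list.
local_cloudname: mystandalone
external_fip_pool_start: 192.168.25.100
external_fip_pool_end: 192.168.25.200
```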
You can keep several overrides files and then use one when deploying with `make osp_full overrides=local-overrides-<name>.yaml`.
## Accessing OpenStack from your workstation

By default, `dev-install` configures OpenStack to use the default public IP of the host.
To access this you just need a correct `clouds.yaml`, which `dev-install` configures with:

```console
make local_os_client
```

This will configure your local `clouds.yaml` with two entries:

* `standalone` - The admin user
* `standalone_openshift` - The appropriately configured non-admin `openshift` user

You can change the name of these entries by editing `local-overrides.yaml` and setting `local_cloudname` to something else.
## Validating the installation

`dev-install` provides an additional playbook to validate the fresh deployment. This can be run with:

```console
make prepare_stack_testconfig
```

This can be used to configure some helpful defaults for validating your cluster, namely:

- Configure SSH access
- Configure routers and security groups to allow external network connectivity
## Network configuration

`dev-install` will create a new OVS bridge called `br-ex` and move the host's external interface on to that bridge.
This bridge is used to provide the `external` provider network if `external_fip_pool_start` and `external_fip_pool_end` are defined in `local-overrides.yaml`.

In addition, it will create OVS bridges called `br-ctlplane` and `br-hostonly`. The former is used internally by OSP. The latter is a second provider network which is only routable from the host.

Note that we don't enable DHCP on provider networks by default, and it is not recommended to enable DHCP on the external network at all. To enable DHCP on the `hostonly` network after installation, run:

```console
OS_CLOUD=standalone openstack subnet set --dhcp hostonly-subnet
```

`make local_os_client` will write an [sshuttle](https://github.com/sshuttle/sshuttle) script to `scripts/sshuttle-standalone.sh` which will route to the `hostonly` provider network over SSH.
## Configuration

`dev-install` is configured by overriding variables in `local-overrides.yaml`.
See the [default variable definitions](https://github.com/shiftstack/dev-install/blob/master/playbooks/vars/defaults.yaml) for what can be overridden.
## Sizing

When idle, a standalone deployment uses approximately:

* 16GB RAM
* 15G on `/`
* 3.5G on `/home`
* 3.6G on `/var/lib/cinder`
* 3.6G on `/var/lib/nova`

There is no need to mount `/var/lib/cinder` and `/var/lib/nova` separately if `/` is large enough for your workload.
## Advanced features

### NFV enablement

This section contains configuration procedures for single root input/output virtualization (SR-IOV) for network functions virtualization infrastructure (NFVi) in your Standalone OpenStack deployment.
Unfortunately, most of these parameters don't have default values, nor can they be automatically calculated in a Standalone-type environment.

#### SR-IOV Variables
| Name | Default | Description |
|------|---------|-------------|
|`sriov_services`|`['OS::TripleO::Services::NeutronSriovAgent', 'OS::TripleO::Services::BootParams']`| List of TripleO services to add to the default Standalone role |
|`sriov_interface`|`[undefined]`| Name of the SR-IOV capable interface. Must be enabled in BIOS (e.g. `ens1f0`). |
|`sriov_nic_numvfs`|`[undefined]`| Number of Virtual Functions that the NIC can handle. |
|`sriov_nova_pci_passthrough`|`[undefined]`| List of PCI Passthrough allowlist parameters. [Guidelines](https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/configuring_the_compute_service_for_instance_creation/configuring-pci-passthrough#guidelines-for-configuring-novapcipassthrough-osp) to configure it. |

Note: when SR-IOV is enabled, a dedicated provider network will be created and bound to the SR-IOV interface.
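
As an illustration only (the values are hypothetical, and the `devname`/`physical_network` entry format follows the Nova PCI passthrough guidelines linked in the table), SR-IOV might be enabled via `local-overrides.yaml` like this:

```yaml
# Hypothetical values; adjust to your hardware.
sriov_interface: ens1f0
sriov_nic_numvfs: 8
sriov_nova_pci_passthrough:
  - devname: ens1f0
    physical_network: sriov
```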
#### Kernel Variables
#### DPDK Variables

It is possible to configure the deployment to be ready for DPDK:

| Name | Default | Description |
|------|---------|-------------|
|`dpdk_services`|`['OS::TripleO::Services::ComputeNeutronOvsDpdk']`| List of TripleO services to add to the default Standalone role |
|`dpdk_interface`|`[undefined]`| Name of the DPDK capable interface. Must be enabled in BIOS (e.g. `ens1f0`). |
|`tuned_isolated_cores`|`[undefined]`| List of logical CPU ids which need to be isolated from the host processes. This input is provided to the tuned `cpu-partitioning` profile to configure systemd and repin interrupts (IRQ repinning). |
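
For example (hypothetical values; check `playbooks/vars/defaults.yaml` for the exact expected format of each variable):

```yaml
# Hypothetical values; adjust to your hardware.
dpdk_interface: ens1f0
tuned_isolated_cores: "2-11"
```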

When deploying DPDK, it is suggested to configure these options:

```yaml
extra_heat_params:
  # A list or range of host CPU cores to which processes for pinned instance
  # CPUs (PCPUs) can be scheduled:
```

#### DCN

It is possible to deploy Edge-style environments, where multiple AZs are configured.
##### Deploy the Central site

Deploy a regular cloud with `dev-install`, and make sure you set these parameters:
* `dcn_az`: has to be `central`.
* `tunnel_remote_ips`: list of known public IPs of the AZ nodes.
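
Putting this together, the Central site's `local-overrides.yaml` might contain (the IPs are examples):

```yaml
dcn_az: central
tunnel_remote_ips:
  - 198.51.100.11
  - 198.51.100.12
```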

Once this is done, you need to collect the content from `/home/stack/exported-data` into a local directory on the host where `dev-install` is executed.
##### Deploy the "AZ" sites

Before deploying OSP, you need to `scp` the content from `exported-data` to the remote hosts at `/opt/exported-data`.
Once this is done, you can deploy the AZ sites with a regular config for `dev-install`, except that you'll need to set these parameters:
* `dcn_az`: must contain `az` in the string (e.g. `az0`, `az1`)
* `local_ip`: choose an available IP in the control plane subnet (e.g. `192.168.24.10`)
* `control_plane_ip`: same as for `local_ip`, pick one that is available (e.g. `192.168.24.11`)
* `hostonly_gateway`: if using provider networks, you'll need to select an available IP (e.g. `192.168.25.2`)
* `tunnel_remote_ips`: the list of known public IPs that will be used to establish the VXLAN tunnels
* `hostname`: you must make sure both central and AZ sites don't use the default hostname (`standalone`), so set it at least on the compute (e.g. `compute1`)
* `octavia_enabled`: set to `false`
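
For instance, an AZ site's `local-overrides.yaml` might combine these settings (the IPs and names are examples, matching those above):

```yaml
dcn_az: az0
local_ip: 192.168.24.10
control_plane_ip: 192.168.24.11
hostonly_gateway: 192.168.25.2
tunnel_remote_ips:
  - 198.51.100.10
hostname: compute1
octavia_enabled: false
```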

Notes:
* Control plane IPs (`192.168.24.x`) are arbitrary; if in doubt, just use the example ones.
* The control plane bridges will be connected via VXLAN tunnels, which is why we need to select control plane IPs for the AZ nodes that were not already taken on the Central site.
* If you deploy the clouds in OpenStack, you need to make sure that the security groups allow VXLAN (`udp/4789`).
* If the public IPs aren't predictable, you'll need to manually change the MTU on `br-ctlplane` and `br-hostonly` on the central site and the AZ sites where needed. You can do this by editing the `os-net-config` configuration file and running `os-net-config` to apply it.

After the installation you can "join" AZs to just have a regular multi-node cloud. E.g.:

```console
openstack aggregate add host central compute1.shiftstack
```

Then if you're using OVN (you probably are) you need to execute this on the compute nodes:

```console
ovs-vsctl set Open_vSwitch . external-ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=central"
```
#### Post Deployment Stack Updates
It is possible to perform stack updates on an ephemeral standalone stack.

Copy the generated `tripleo_deploy.sh` from your deployment user's folder (e.g. `/home/stack/tripleo_deploy.sh`) to `tripleo_update.sh` and add the parameter `--force-stack-update`.
This will allow you to modify the stack configuration without needing to redeploy the entire cloud, which can save you considerable time.
#### Post install script
It is possible to run any script in post-install with the `post_install` parameter:
```yaml
post_install: |
  export OS_CLOUD=standalone
  openstack flavor set --property hw:mem_page_size=large m1.small
```
267
255
And then run `make post_install`.
268
256
269
-
## Tools
257
+
## Troubleshooting
If installation fails, examine the Ansible logs for more information.

Failures will usually occur early in the process or during the `tripleo_deploy` stage. If the latter, you can look at the logs in `/home/stack/standalone_deploy.log`.

Once the issue has been addressed, you can resume the deployment by stopping `heat` and running the `openstack tripleo deploy` command again via the `tripleo_deploy.sh` script.