diff --git a/source/adminguide/api.rst b/source/adminguide/api.rst index b4139a8391..53515d6456 100644 --- a/source/adminguide/api.rst +++ b/source/adminguide/api.rst @@ -64,12 +64,16 @@ the user data: #. Run the following command to find the virtual router. .. code:: bash + # cat /var/lib/dhclient/dhclient-eth0.leases | grep dhcp-server-identifier | tail -1 + #. Access user data by running the following command using the result of the above command .. code:: bash + # curl http://10.1.1.1/latest/user-data + Meta Data can be accessed similarly, using a URL of the form http://10.1.1.1/latest/meta-data/{metadata type}. (For backwards compatibility, the previous URL http://10.1.1.1/latest/{metadata type} diff --git a/source/adminguide/host_and_storage_tags.rst b/source/adminguide/host_and_storage_tags.rst index 9bbb18b833..0bd6528bf5 100644 --- a/source/adminguide/host_and_storage_tags.rst +++ b/source/adminguide/host_and_storage_tags.rst @@ -31,30 +31,39 @@ There are two types of host tags: To explain the behavior of host tags, some examples will be demonstrated with two hosts (Host1 and Host2): #. Tag setup: + * Host1: h1 * Host2: h2 * Offering: h1 + When a VM is created with the offering, the deployment will be carried out on Host1, as it is the one that has the tag compatible with the offering. #. Tag setup: + * Host1: h1 * Host2: h2,h3 * Offering: h3 + Hosts and offerings accept a list of tags, with comma (,) being their separator. So in this example, Host2 has the h2 and h3 tags. When a VM is created with the offering, the deployment will be carried out on Host2, as it is the one that has the tag compatible with the offering. #. Tag setup: + * Host1: h1 * Host2: h2,h3 * Offering: (no tag) + When the offering does not have tags, it will be possible to deploy the VM on any host. #. Tag setup: + * Host1: (no tag) * Host2: h2 * Offering: h3 + None of the hosts have compatible tags and it will not be possible to deploy a VM with the offering. 
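The matching rule the four examples above walk through can be sketched as a small function. This is an illustrative sketch only, not CloudStack's actual implementation; the function name is hypothetical:

```javascript
// Illustrative sketch: every tag on the offering must also be present on
// the host for that host to be eligible. Tag lists are comma-separated.
function hostMatchesOffering(hostTags, offeringTags) {
  const host = hostTags ? hostTags.split(",") : [];
  const offering = offeringTags ? offeringTags.split(",") : [];
  // An offering with no tags matches every host; otherwise all of its
  // tags must exist on the host.
  return offering.every(tag => host.includes(tag));
}

console.log(hostMatchesOffering("h1", "h1"));    // true: Host1 is chosen
console.log(hostMatchesOffering("h2,h3", "h3")); // true: Host2 is chosen
console.log(hostMatchesOffering("h1", ""));      // true: untagged offering fits any host
console.log(hostMatchesOffering("", "h3"));      // false: no compatible host
```

The same subset rule applies to storage tags: every tag on the offering must exist on the primary storage for the volume to be allocated there.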
However, CloudStack ignores this behavior when a host is manually selected. .. _strict-host-tags: + Strict Host Tags ----------------- During certain operations, such as changing the compute offering or starting or @@ -96,23 +105,31 @@ Storage tags are responsible for directing volumes to compatible primary storage To explain the behavior of storage tags, some examples will be demonstrated: #. Tag setup: + * Storage: A * Offering: A,B + Storage and offering accept a list of tags, with the comma (,) being their separator. Therefore, in this example, the offering has tags A and B. In this example, it will not be possible to allocate the volume, as all the offering tags must exist in the storage. Although the storage has the A tag, it does not have the B tag. #. Tag setup: + * Storage: A,B,C,D,X * Offering: A,B,C + In this example, it will be possible to allocate the volume, as all the offering tags exist in the storage. #. Tag setup: + * Storage: A, B, C * Offering: (no tag) + In this example, it will be possible to allocate the volume, as the offering does not have any tag requirements. #. Tag setup: + * Storage: (no tag) * Offering: D,E + In this example, it will not be possible to allocate the volume, as the storage does not have tags, therefore it does not meet the offering requirements. In short, if the offering has tags, the storage will need to have all the tags for the volume to be allocated. If the offering does not have tags, the volume can be allocated, regardless of whether the storage has a tag or not. diff --git a/source/adminguide/networking/virtual_private_cloud_config.rst b/source/adminguide/networking/virtual_private_cloud_config.rst index 0365e98c60..461db04fda 100644 --- a/source/adminguide/networking/virtual_private_cloud_config.rst +++ b/source/adminguide/networking/virtual_private_cloud_config.rst @@ -308,8 +308,8 @@ Configuring Network Access Control List ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. 
note:: -Network Access Control Lists can only be created if the service -"NetworkACL" is supported by the created VPC. + Network Access Control Lists can only be created if the service + "NetworkACL" is supported by the created VPC. Define a Network Access Control List (ACL) to control incoming (ingress) and outgoing (egress) traffic between the associated Network Tier @@ -347,14 +347,14 @@ destination" and / or "allow all ingress source" rule to the ACL. Afterwards traffic can be white- or blacklisted. .. note:: -- ACL Rules in Cloudstack are stateful -- Source / Destination CIDRs are always external Networks -- ACL rules can also been seen on the virtual router of the VPC. Ingress - rules are listed in the table iptables table "filter" while egress rules - are placed in the "mangle" table -- ACL rules for ingress and egress are not correlating. For example a - egress "deny all" won't affect traffic in response to an allowed ingress - connection + - ACL rules in CloudStack are stateful + - Source / Destination CIDRs are always external Networks + - ACL rules can also be seen on the virtual router of the VPC. Ingress + rules are listed in the iptables "filter" table, while egress rules + are placed in the "mangle" table + - ACL rules for ingress and egress do not correlate. For example, an + egress "deny all" won't affect traffic in response to an allowed ingress + connection Creating ACL Lists diff --git a/source/adminguide/networking/vnf_templates_appliances.rst b/source/adminguide/networking/vnf_templates_appliances.rst index fcc57bbac9..2ba354586a 100644 --- a/source/adminguide/networking/vnf_templates_appliances.rst +++ b/source/adminguide/networking/vnf_templates_appliances.rst @@ -15,14 +15,14 @@ VNF Templates and Appliances -======================= +============================ Virtualized Network Functions (VNFs) refers to virtualized software applications which offers network services, for example routers, firewalls, load balancers. 
Adding a VNF template from an URL -------- +----------------------------------------------------------- To create a VNF appliance, user needs to register a VNF template and add VNF settings. @@ -44,7 +44,7 @@ the same page or under Network -> VNF templates. Updating a VM template to VNF template -------- +----------------------------------------------------------- Users are able to update an existing VM template, which is uploaded from HTTP server or local, or created from volume, to be a VNF template. @@ -63,7 +63,7 @@ HTTP server or local, or created from volume, to be a VNF template. Updating the VNF settings of a VNF template -------------------- +----------------------------------------------------------- Users need to add the VNF nics and VNF details of the VNF templates. @@ -115,7 +115,7 @@ Users need to add the VNF nics and VNF details of the VNF templates. Deploying VNF appliances -------------------- +----------------------------------------------------------- #. Log in to the CloudStack UI as an administrator or end user. @@ -147,15 +147,15 @@ Deploying VNF appliances The following network rules will be applied. - If management network is an isolated network, CloudStack will acquire a public - IP, enable static nat on the VNF appliance, and create firewall rules to allow - traffic to ssh/http/https ports based on access_methods in VNF template details. + IP, enable static nat on the VNF appliance, and create firewall rules to allow + traffic to ssh/http/https ports based on access_methods in VNF template details. - If management network is a shared network with security groups, CloudStack will - create a new security group with rules to allow traffic to ssh/http/https ports - based on access_methods in VNF template details, and assign to the VNF appliance. + create a new security group with rules to allow traffic to ssh/http/https ports + based on access_methods in VNF template details, and assign to the VNF appliance. 
- If management network is a L2 network, VPC tier or Shared network without security - groups, no network rules will be configured. + groups, no network rules will be configured. #. Click on the "Launch VNF appliance" button diff --git a/source/adminguide/service_offerings.rst b/source/adminguide/service_offerings.rst index ea50df7a26..4e2368cc54 100644 --- a/source/adminguide/service_offerings.rst +++ b/source/adminguide/service_offerings.rst @@ -653,22 +653,22 @@ on different types of networks in CloudStack. .. cssclass:: table-striped table-bordered table-hover -=========================================== =============================== -Networks Network Rate Is Taken from -=========================================== =============================== -Guest network of Virtual Router Guest Network Offering -Public network of Virtual Router Guest Network Offering -Storage network of Secondary Storage VM System Network Offering -Management network of Secondary Storage VM System Network Offering -Storage network of Console Proxy VM System Network Offering -Management network of Console Proxy VM System Network Offering -Storage network of Virtual Router System Network Offering -Management network of Virtual Router System Network Offering -Public network of Secondary Storage instance System Network Offering -Public network of Console Proxy instance System Network Offering -Default network of a guest instance Compute Offering -Additional networks of a guest instance Corresponding Network Offerings -=========================================== =============================== +============================================ =============================== +Networks Network Rate Is Taken from +============================================ =============================== +Guest network of Virtual Router Guest Network Offering +Public network of Virtual Router Guest Network Offering +Storage network of Secondary Storage VM System Network Offering +Management network of Secondary 
Storage VM System Network Offering +Storage network of Console Proxy VM System Network Offering +Management network of Console Proxy VM System Network Offering +Storage network of Virtual Router System Network Offering +Management network of Virtual Router System Network Offering +Public network of Secondary Storage instance System Network Offering +Public network of Console Proxy instance System Network Offering +Default network of a guest instance Compute Offering +Additional networks of a guest instance Corresponding Network Offerings +============================================ =============================== A guest instance must have a default network, and can also have many additional networks. Depending on various parameters, such as the host diff --git a/source/adminguide/storage.rst b/source/adminguide/storage.rst index b8ed84ccb4..d9b9652a48 100644 --- a/source/adminguide/storage.rst +++ b/source/adminguide/storage.rst @@ -171,14 +171,13 @@ In order to use multiple local storage pools, you need to local.storage.uuid=a43943c1-1759-4073-9db1-bc0ea19203aa,f5b1220b-4446-42dc-a872-cffd281f9f8c local.storage.path=/var/lib/libvirt/images,/var/lib/libvirt/images2 -# #. Restart cloudstack-agent service - Storage pools will be automatically created in libvirt by the CloudStack agent Adding a Local Storage Pool via UI -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ When using UI, ensure that the scope of the storage is set to "Host", and ensure that the protocol is set to "Filesystem". @@ -187,6 +186,7 @@ ensure that the protocol is set to "Filesystem". Changing the Scope of the Primary Storage ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + Scope of a Primary Storage can be changed from Zone-wide to Cluster-wide and vice versa when the Primary Storage is in Disabled state. An action button is displayed in UI for each Primary Storage in Disabled state. @@ -276,7 +276,7 @@ templates, and ISOs. 
Setting NFS Mount Options on the Storage Pool -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ NFS mount options can be added while creating an NFS storage pool for KVM hosts. When the storage pool is mounted on the KVM hypervisor host, @@ -393,6 +393,7 @@ under "Browser" tab for a secondary storage. Read only ~~~~~~~~~ + Secondary storages can also be set to read-only in order to cordon it off from being used for storing any further Templates, Volumes and Snapshots. @@ -401,7 +402,7 @@ from being used for storing any further Templates, Volumes and Snapshots. cmk updateImageStore id=4440f406-b9b6-46f1-93a4-378a75cf15de readonly=true Direct resources to a specific secondary storage -~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ By default, ACS allocates ISOs, volumes, snapshots, and templates to the freest secondary storage of the zone. In order to direct these resources to a specific secondary storage, the user can utilize the functionality of the dynamic secondary storage selectors using heuristic rules. This functionality utilizes JavaScript rules, defined by the user, to direct these resources to a specific secondary storage. When creating the heuristic rule, the script will have access to some preset variables with information about the secondary storage in the zone, about the resource the rule will be applied upon, and about the account that triggered the allocation. 
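As an illustration, a heuristic rule of this kind could pick the NFS secondary storage with the most free space. This is a hypothetical sketch: the variable name `secondaryStorages`, its field names, and the convention of returning the chosen storage's id are assumptions, so check the CloudStack documentation for the exact shape of the rule script and the preset variables it receives:

```javascript
// Hypothetical heuristic rule: among NFS secondary storages, return the id
// of the one with the most free space, or null if none qualifies. The
// field names (usedDiskSize, totalDiskSize, protocol) mirror the preset
// variables listed for the Secondary Storage resource, but the surrounding
// script shape is assumed for illustration.
function selectSecondaryStorage(secondaryStorages) {
  const freeSpace = s => s.totalDiskSize - s.usedDiskSize;
  const nfs = secondaryStorages.filter(s => s.protocol === "NFS");
  nfs.sort((a, b) => freeSpace(b) - freeSpace(a));
  return nfs.length > 0 ? nfs[0].id : null;
}
```

A rule like this would steer snapshots or templates away from non-NFS stores and balance the remaining ones by free capacity.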
These variables are presented in the table below: @@ -409,39 +410,39 @@ By default, ACS allocates ISOs, volumes, snapshots, and templates to the freest | Resource | Variables | +===================================+===================================+ | Secondary Storage | ``id`` | - | +-----------------------------------| + | +-----------------------------------+ | | ``name`` | - | +-----------------------------------| + | +-----------------------------------+ | | ``usedDiskSize`` | - | +-----------------------------------| + | +-----------------------------------+ | | ``totalDiskSize`` | - | +-----------------------------------| + | +-----------------------------------+ | | ``protocol`` | +-----------------------------------+-----------------------------------+ | Snapshot | ``size`` | - | +-----------------------------------| + | +-----------------------------------+ | | ``hypervisorType`` | - | +-----------------------------------| + | +-----------------------------------+ | | ``name`` | +-----------------------------------+-----------------------------------+ | ISO/Template | ``format`` | - | +-----------------------------------| + | +-----------------------------------+ | | ``hypervisorType`` | - | +-----------------------------------| + | +-----------------------------------+ | | ``templateType`` | - | +-----------------------------------| + | +-----------------------------------+ | | ``name`` | +-----------------------------------+-----------------------------------+ | Volume | ``size`` | - | +-----------------------------------| + | +-----------------------------------+ | | ``format`` | +-----------------------------------+-----------------------------------+ | Account | ``id`` | - | +-----------------------------------| + | +-----------------------------------+ | | ``name`` | - | +-----------------------------------| + | +-----------------------------------+ | | ``domain.id`` | - | +-----------------------------------| + | 
+-----------------------------------+ | | ``domain.name`` | +-----------------------------------+-----------------------------------+ @@ -722,7 +723,7 @@ may take several minutes for the volume to be moved to the new Instance. Instance Storage Migration -~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~ Supported in XenServer, KVM, and VMware. @@ -1099,6 +1100,7 @@ API output bellow. "diskkbsread": 343124, "diskkbswrite": 217619, ... + Bytes read/write, as well as the total IO/s, are exposed via UI, as shown in the image below. |volume-metrics.png| @@ -1157,7 +1159,7 @@ Following is the example for checkVolume API usage and the result in the volume Importing and Unmanaging Volumes from Storage Pools -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Since Apache CloudStack 4.19.1.0, importing and unmanaging volumes from primary storage pools are supported. @@ -1526,7 +1528,7 @@ Deleting objects from a bucket 2. Click on the |delete-button.png| button to delete the selected files from the bucket. Shared FileSystems ---------------- +------------------ CloudStack offers fully managed NFS Shared FileSystems to all users. This section gives technical details on how to create/manage a Shared FileSystem @@ -1536,7 +1538,7 @@ using basic lifecycle operations and also some implementation details. This feature is available only on advanced zones without security groups. Creating a New Shared FileSystem -~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ #. Log in to the CloudStack UI as a user or administrator. @@ -1600,7 +1602,8 @@ Supported lifecycle operations are : Shared FileSystem Instance -~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~ + The Shared FileSystem Instance is stateless and HA enabled. A new instance is deployed and will start serving the NFS share if the host or VM goes down. The VM is installed with the SystemVM template which is also used by the CPVM and SSVM. 
@@ -1613,12 +1616,14 @@ required during normal operations. Service Offering ~~~~~~~~~~~~~~~~ + There are two global settings that control what should be the minimum RAM size and minimum CPU count for the Shared FileSystem Instance : 'sharedfsvm.min.cpu.count' and 'sharedfsvm.min.ram.size`. Only those offerings which meet these settings and have HA enabled are shown in the create form. Shared FileSystem Data Volume -~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + The data volume is also visible to the users. It is recommended to use the Shared FileSystem UI/API to manage the data but users or admin can perform actions directly on the data volume or the root volume as well if they wish. Attaching and detaching a disk is not allowed on a Shared FileSystem Instance. diff --git a/source/adminguide/systemvm.rst b/source/adminguide/systemvm.rst index 356334fe27..676f5c1c10 100644 --- a/source/adminguide/systemvm.rst +++ b/source/adminguide/systemvm.rst @@ -198,7 +198,7 @@ Console proxies can be restarted by administrators but this will interrupt existing console sessions for users. Creating an Instance Console Endpoint -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The access to an instance console is created by the API 'createConsoleEndpoint', for the instance specified in the parameter 'virtualmachineid'. By default, @@ -265,7 +265,7 @@ communication with SSL: Changing the Console Proxy SSL Certificate and Domains -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The administrator can configure SSL encryption by selecting a domain and uploading a new SSL certificate and private key. The domain must @@ -656,55 +656,55 @@ column in 'Failed'/'Passed' if there are health check failures of any type. Following global configs have been added for configuring health checks: - ``router.health.checks.enabled`` - If true, router health checks are allowed - to be executed and read. 
If false, all scheduled checks and API calls for on - demand checks are disabled. Default is true. + to be executed and read. If false, all scheduled checks and API calls for on + demand checks are disabled. Default is true. - ``router.health.checks.basic.interval`` - Interval in minutes at which basic - router health checks are performed. If set to 0, no tests are scheduled. Default - is 3 mins as per the pre 4.14 monitor services. + router health checks are performed. If set to 0, no tests are scheduled. Default + is 3 mins as per the pre 4.14 monitor services. - ``router.health.checks.advanced.interval`` - Interval in minutes at which - advanced router health checks are performed. If set to 0, no tests are scheduled. - Default value is 10 minutes. + advanced router health checks are performed. If set to 0, no tests are scheduled. + Default value is 10 minutes. - ``router.health.checks.config.refresh.interval`` - Interval in minutes at which - router health checks config - such as scheduling intervals, excluded checks, etc - is updated on virtual routers by the management server. This value should be - sufficiently high (like 2x) from the router.health.checks.basic.interval and - router.health.checks.advanced.interval so that there is time between new results - generation for passed data. Default is 10 mins. + router health checks config - such as scheduling intervals, excluded checks, etc + is updated on virtual routers by the management server. This value should be + sufficiently high (like 2x) from the router.health.checks.basic.interval and + router.health.checks.advanced.interval so that there is time between new results + generation for passed data. Default is 10 mins. - ``router.health.checks.results.fetch.interval`` - Interval in minutes at which - router health checks results are fetched by management server. On each result fetch, - management server evaluates need to recreate VR as per configuration of - 'router.health.checks.failures.to.recreate.vr'. 
This value should be sufficiently - high (like 2x) from the 'router.health.checks.basic.interval' and - 'router.health.checks.advanced.interval' so that there is time between new - results generation and fetch. + router health checks results are fetched by management server. On each result fetch, + management server evaluates need to recreate VR as per configuration of + 'router.health.checks.failures.to.recreate.vr'. This value should be sufficiently + high (like 2x) from the 'router.health.checks.basic.interval' and + 'router.health.checks.advanced.interval' so that there is time between new + results generation and fetch. - ``router.health.checks.failures.to.recreate.vr`` - Health checks failures defined - by this config are the checks that should cause router recreation. If empty the - recreate is not attempted for any health check failure. Possible values are comma - separated script names from systemvm’s /root/health_scripts/ (namely - cpu_usage_check.py, - dhcp_check.py, disk_space_check.py, dns_check.py, gateways_check.py, haproxy_check.py, - iptables_check.py, memory_usage_check.py, router_version_check.py), connectivity.test - or services (namely - loadbalancing.service, webserver.service, dhcp.service) + by this config are the checks that should cause router recreation. If empty the + recreate is not attempted for any health check failure. Possible values are comma + separated script names from systemvm’s /root/health_scripts/ (namely - cpu_usage_check.py, + dhcp_check.py, disk_space_check.py, dns_check.py, gateways_check.py, haproxy_check.py, + iptables_check.py, memory_usage_check.py, router_version_check.py), connectivity.test + or services (namely - loadbalancing.service, webserver.service, dhcp.service) - ``router.health.checks.to.exclude`` - Health checks that should be excluded when - executing scheduled checks on the router. This can be a comma separated list of - script names placed in the '/root/health_checks/' folder. 
Currently the following - scripts are placed in default systemvm Template - cpu_usage_check.py, - disk_space_check.py, gateways_check.py, iptables_check.py, router_version_check.py, - dhcp_check.py, dns_check.py, haproxy_check.py, memory_usage_check.py. + executing scheduled checks on the router. This can be a comma separated list of + script names placed in the '/root/health_checks/' folder. Currently the following + scripts are placed in default systemvm Template - cpu_usage_check.py, + disk_space_check.py, gateways_check.py, iptables_check.py, router_version_check.py, + dhcp_check.py, dns_check.py, haproxy_check.py, memory_usage_check.py. - ``router.health.checks.free.disk.space.threshold`` - Free disk space threshold - (in MB) on VR below which the check is considered a failure. Default is 100MB. + (in MB) on VR below which the check is considered a failure. Default is 100MB. - ``router.health.checks.max.cpu.usage.threshold`` - Max CPU Usage threshold as - % above which check is considered a failure. + % above which check is considered a failure. - ``router.health.checks.max.memory.usage.threshold`` - Max Memory Usage threshold - as % above which check is considered a failure. + as % above which check is considered a failure. The scripts for following health checks are provided in '/root/health_checks/'. These are not exhaustive and can be modified for covering other scenarios not covered. diff --git a/source/adminguide/templates.rst b/source/adminguide/templates.rst index e92db413c6..dc7d4e53f0 100644 --- a/source/adminguide/templates.rst +++ b/source/adminguide/templates.rst @@ -602,3 +602,5 @@ Attaching an ISO to a Instance :alt: Revoking permsissons from both projects previously added .. |template-permissions-update-5.png| image:: /_static/images/template-permissions-update-5.png :alt: Reseting (removing all) permissions +.. 
|iso.png| image:: /_static/images/iso-icon.png + :alt: depicts adding an iso image diff --git a/source/adminguide/templates/_bypass-secondary-storage-kvm.rst b/source/adminguide/templates/_bypass-secondary-storage-kvm.rst index 0aa5376a32..080e5ef9a9 100644 --- a/source/adminguide/templates/_bypass-secondary-storage-kvm.rst +++ b/source/adminguide/templates/_bypass-secondary-storage-kvm.rst @@ -49,7 +49,8 @@ From CloudStack 4.14.0, system VM Templates also support direct download. An adm Uploading Certificates for Direct Downloads -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + For direct downloads over HTTPS, the KVM hosts must have valid certificates. These certificates can be either self-signed or signed and will allow the KVM hosts to access the Templates/ISOs and download them. CloudStack provides some APIs to handle certificates for direct downloads: @@ -85,7 +86,7 @@ CloudStack provides some APIs to handle certificates for direct downloads: upload templatedirectdownloadcertificate hypervisor=KVM name=CERTIFICATE_ALIAS zoneid=ZONE_ID certificate=CERTIFICATE_FORMATTED hostid=HOST_ID Synchronising Certificates for Direct Downloads -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ As new hosts may be added to a zone which do not include a certificate which was previously uploaded to pre-existing hosts. @@ -97,7 +98,7 @@ CloudStack provides a way to synchronize certificates across all the connected h - Upload missing certificates to hosts Direct Download Timeouts -~~~~~~~~~~~~~~~~~~~~~~~~ +^^^^^^^^^^^^^^^^^^^^^^^^ With 4.14.0, ability to configure different timeout values for the direct downloading of Templates has been added. 
Three new global settings have been added for this: diff --git a/source/adminguide/templates/_create_windows.rst b/source/adminguide/templates/_create_windows.rst index eeeb8768ac..6a86934461 100644 --- a/source/adminguide/templates/_create_windows.rst +++ b/source/adminguide/templates/_create_windows.rst @@ -49,7 +49,7 @@ An overview of the procedure is as follows: System Preparation for Windows Server 2008 R2 -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ For Windows 2008 R2, you run Windows System Image Manager to create a custom sysprep response XML file. Windows System Image Manager is @@ -156,7 +156,7 @@ Use the following steps to run sysprep for Windows 2008 R2: System Preparation for Windows Server 2003 R2 -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Earlier versions of Windows have a different sysprep tool. Follow these steps for Windows Server 2003 R2. diff --git a/source/adminguide/templates/_password.rst b/source/adminguide/templates/_password.rst index 3378acb50e..4b62b372ec 100644 --- a/source/adminguide/templates/_password.rst +++ b/source/adminguide/templates/_password.rst @@ -42,7 +42,7 @@ boot it will not set the password but boot will continue normally. Linux OS Installation -~~~~~~~~~~~~~~~~~~~~~ +^^^^^^^^^^^^^^^^^^^^^ Use the following steps to begin the Linux OS installation: @@ -78,7 +78,7 @@ Use the following steps to begin the Linux OS installation: Windows OS Installation -~~~~~~~~~~~~~~~~~~~~~~~ +^^^^^^^^^^^^^^^^^^^^^^^ Download the installer, CloudInstanceManager.msi, from the `Download page `_ diff --git a/source/adminguide/ui.rst b/source/adminguide/ui.rst index 3a84aa229a..0259e5a53c 100644 --- a/source/adminguide/ui.rst +++ b/source/adminguide/ui.rst @@ -492,7 +492,7 @@ Example for adding custom plugins: ... } -`icon` for the plugin can be chosen from Ant Design icons listed at `Icon - Ant Design Vue`_. 
+`icon` for the plugin can be chosen from Ant Design icons listed at `Icon - Ant Design Vue `_. For displaying a custom HTML in the plugin, HTML file can be stored in the CloudStack management server's web application directory on the server, i.e., */usr/share/cloudstack-management/webapp* and `path` can be set to the name of the file. For displaying a service or a web page, URL can be set as the `path` of the plugin. |ui-custom-plugin.png| diff --git a/source/adminguide/virtual_machines.rst b/source/adminguide/virtual_machines.rst index 1640ab0e74..83fa259734 100644 --- a/source/adminguide/virtual_machines.rst +++ b/source/adminguide/virtual_machines.rst @@ -1035,7 +1035,7 @@ like many other resources in CloudStack. KVM supports Instance Snapshots when using NFS shared storage. If raw block storage is used (i.e. Ceph), then Instance Snapshots are not possible, since there is no possibility to write RAM memory content anywhere. In such cases you can use as an alternative -`Storage-based VM Snapshots on KVM`_ +:ref:`Storage-based-Instance-Snapshots-on-KVM`. If you need more information about Instance Snapshots on VMware, check out the @@ -1044,7 +1044,7 @@ VMware documentation and the VMware Knowledge Base, especially `_. -.. _`Storage-based Instance Snapshots on KVM`: +.. _Storage-based-Instance-Snapshots-on-KVM: Storage-based Instance Snapshots on KVM --------------------------------------- diff --git a/source/adminguide/virtual_machines/importing_unmanaging_vms.rst b/source/adminguide/virtual_machines/importing_unmanaging_vms.rst index 6c60150c5a..b15c9db653 100644 --- a/source/adminguide/virtual_machines/importing_unmanaging_vms.rst +++ b/source/adminguide/virtual_machines/importing_unmanaging_vms.rst @@ -14,13 +14,13 @@ under the License. 
About Import Export Instances -------------------------- +----------------------------- For certain hypervisors, CloudStack supports importing of Instances from Managed Hosts, External Hosts, Local Storage and Shared Storage, into CloudStack. Manage or Unmanage Instances on Managed Hosts -------------------------- +--------------------------------------------- .. note:: This is currently only available for **vSphere** and **KVM** clusters. @@ -72,7 +72,7 @@ Listing unmanaged Instances --------------------------- Prerequisites to list unmanaged Instances (vSphere or KVM) -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In order for CloudStack to list the Instances that are not managed by CloudStack on a host/cluster, the instances must exist on the hosts that are already part to the CloudStack. @@ -407,7 +407,8 @@ Unmanaging Instance actions - For the Instance being unmanaged: stopped and destroyed usage events (similar to the generated usage events when expunging an Instance), with types: ‘VM.STOP’ and ‘VM.DESTROY', unless the instance has been already stopped before being unmanaged and in this case only ‘VM.DESTROY' is generated. Import Instances from External Hosts -------------------------- +------------------------------------ + .. note:: This is currently only available for **KVM** hypervisor. External Host @@ -504,7 +505,7 @@ choose the temporary storage location on the external host for the converted fil Same response as that of deployVirtualMachine API. Import Instances from Local/Shared Storage ----------------------------------------- +------------------------------------------ .. note:: This is currently only available for **KVM** hypervisor. @@ -540,7 +541,7 @@ The importVm API is utilized to create instances using QCOW2 file from an existi Same response as that of deployVirtualMachine API. 
 Import Instances from Shared Storage
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The importVm API is utilized to create instances using QCOW2 file from an
 existing Shared Storage pool of a KVM cluster within the CloudStack
 infrastructure. Only NFS Storage Pool are supported.
diff --git a/source/installguide/configuration.rst b/source/installguide/configuration.rst
index b50a94ae61..8411608ef0 100644
--- a/source/installguide/configuration.rst
+++ b/source/installguide/configuration.rst
@@ -1005,7 +1005,7 @@ XenServer and KVM hosts can be added to a cluster at any time.
 
 Requirements for XenServer and KVM Hosts
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+****************************************
 
 .. warning::
    Make sure the hypervisor host does not have any instances already
   running before
@@ -1026,7 +1026,7 @@ hypervisor in the CloudStack Installation Guide.
 Since CloudStack 4.20.0, the host arch type is auto detected when adding the
 host into CloudStack and it must match the cluster arch type for the
 operation to succeed.
 
 XenServer Host Additional Requirements
-''''''''''''''''''''''''''''''''''''''
+**************************************
 
 If network bonding is in use, the administrator must cable the new host
 identically to other hosts in the cluster.
@@ -1060,7 +1060,7 @@ bonds on the new hosts in the cluster.
 
 KVM Host Additional Requirements
-''''''''''''''''''''''''''''''''
+********************************
 
 - If shared mountpoint storage is in use, the administrator should
   ensure that the new host has all the same mountpoints (with storage
@@ -1082,7 +1082,7 @@ KVM Host Additional Requirements
       defaults:cloudstack !requiretty
 
 Adding a XenServer Host
-^^^^^^^^^^^^^^^^^^^^^^^
+***********************
 
 #. If you have not already done so, install the hypervisor software on the host.
 You will need to know which version of the hypervisor
@@ -1126,7 +1126,7 @@ Adding a XenServer Host
 
 Adding a KVM Host
-^^^^^^^^^^^^^^^^^
+*****************
 
 The steps to add a KVM host are same as adding a XenServer Host as
 mentioned in the above section.
diff --git a/source/installguide/hypervisor/hyperv.rst b/source/installguide/hypervisor/hyperv.rst
index 792e51778b..d8a018074c 100644
--- a/source/installguide/hypervisor/hyperv.rst
+++ b/source/installguide/hypervisor/hyperv.rst
@@ -85,7 +85,7 @@ start:
 |            | y        | the file share for the Hyper-V deployment will be    |
 |            |          | the new folder created in the \\Shares on the        |
 |            |          | selected volume. You can create sub-folders for both |
-|            |          | CloudStack Primary and Secondary storage within the  | 
+|            |          | CloudStack Primary and Secondary storage within the  |
 |            |          | share location. When you select the profile for the  |
 |            |          | file shares, ensure that you select SMB Share        |
 |            |          | -Applications. This creates the file shares with     |
@@ -99,17 +99,17 @@ start:
 +------------+----------+------------------------------------------------------+
 | Virtual    |          | If you are using Hyper-V 2012 R2, manually create an |
 | Switch     |          | external virtual switch before adding the host to    |
-|            |          | CloudStack. If the Hyper-V host is added to the Hyper-V |
-|            |          | manager, select the host, then click Virtual Switch  |
-|            |          | Manager, then New Virtual Switch. In the External    |
-|            |          | Network, select the desired NIC adapter and click    |
-|            |          | Apply.                                               |
+|            |          | CloudStack. If the Hyper-V host is added to the      |
+|            |          | Hyper-V manager, select the host, then click Virtual |
+|            |          | Switch Manager, then New Virtual Switch. In the      |
+|            |          | External Network, select the desired NIC adapter and |
+|            |          | click Apply.                                         |
 |            |          |                                                      |
 |            |          | If you are using Windows 2012 R2, virtual switch is  |
 |            |          | created automatically.                               |
 +------------+----------+------------------------------------------------------+
 | Virtual    |          | Take a note of the name of the virtual switch. You   |
-| Switch     |          | need to specify that when configuring CloudStack     | 
+| Switch     |          | need to specify that when configuring CloudStack     |
 | Name       |          | physical network labels.                             |
 +------------+----------+------------------------------------------------------+
 | Hyper-V    |          | - Add the Hyper-V domain users to the Hyper-V        |
@@ -122,13 +122,13 @@ start:
 |            |          | - This domain user should be part of the Hyper-V     |
 |            |          |   Administrators and Local Administrators group on   |
 |            |          |   the Hyper-V hosts that are to be managed by        |
-|            |          |   CloudStack.                                        | 
+|            |          |   CloudStack.                                        |
 |            |          |                                                      |
 |            |          | - The Hyper-V Agent service runs with the            |
 |            |          |   credentials of this domain user account.           |
 |            |          |                                                      |
 |            |          | - Specify the credential of the domain user while    |
-|            |          |   adding a host to CloudStack so that it can manage  | 
+|            |          |   adding a host to CloudStack so that it can manage  |
 |            |          |   it.                                                |
 |            |          |                                                      |
 |            |          | - Specify the credential of the domain user while    |
@@ -152,6 +152,9 @@ start:
 | Dial-in    |          |                                                      |
 +------------+----------+------------------------------------------------------+
 
+.. NOTE: For this kind of content it might be better to use a CSV table:
+.. https://docutils.sourceforge.io/docs/ref/rst/directives.html#csv-table
+
 Hyper-V Installation Steps
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/source/plugins/nsx-plugin.rst b/source/plugins/nsx-plugin.rst
index 06133701a8..0f7e24a7b6 100644
--- a/source/plugins/nsx-plugin.rst
+++ b/source/plugins/nsx-plugin.rst
@@ -200,13 +200,13 @@ When the first VM is created on the network tier, CloudStack creates the followi
 
 .. note::
-The following notations were used in the above section:
+   The following notations were used in the above section:
 
-  - d_id: the 'id' column on the 'domain' table for the caller domain
-  - a_id: the 'id' column of the 'accounts' table for the owner account
-  - z_id: the 'id' column of the 'datacenter' table for the zone
-  - v_id: the 'id' column of the 'vpcs' table for the new VPC being created
-  - s_id: the 'id' column of the 'networks' table for the network tier being created
+   - d_id: the 'id' column on the 'domain' table for the caller domain
+   - a_id: the 'id' column of the 'accounts' table for the owner account
+   - z_id: the 'id' column of the 'datacenter' table for the zone
+   - v_id: the 'id' column of the 'vpcs' table for the new VPC being created
+   - s_id: the 'id' column of the 'networks' table for the network tier being created
 
 
 CKS on NSX
@@ -226,4 +226,4 @@ Additional Notes
 ~~~~~~~~~~~~~~~~~
 
 - Ports 67-68 need to be manually opened for network tiers of VPCs created in NSX based zones with default_deny ACL for DHCP to work as expected.
-- When creating routed VPC networks in NSX-enabled zones, ensure that no 2 VPCs use the same CIDR, to prevent IP conflicts upstream (BGP).
\ No newline at end of file
+- When creating routed VPC networks in NSX-enabled zones, ensure that no 2 VPCs use the same CIDR, to prevent IP conflicts upstream (BGP).
diff --git a/source/upgrading/upgrade/mysql.rst b/source/upgrading/upgrade/mysql.rst
index ea9e88497a..c5aff07d39 100644
--- a/source/upgrading/upgrade/mysql.rst
+++ b/source/upgrading/upgrade/mysql.rst
@@ -23,8 +23,9 @@ not be able to start any VM.
 
 The following SQL statement needs to be manually executed in order to fix
 such issue:
 
-   .. parsed-literal::
-ALTER TABLE nics MODIFY COLUMN update_time timestamp DEFAULT CURRENT_TIMESTAMP;
+   .. code-block:: mysql
+
+      ALTER TABLE nics MODIFY COLUMN update_time timestamp DEFAULT CURRENT_TIMESTAMP;
 
 The issue is known to affect the following MySQL server versions: