
Commit dc905db

Merge pull request #189657 from ktoliver/public-repo-77252
manually apply approved update from public repo PR 77252
2 parents 4b70084 + 1cf301e commit dc905db

File tree

4 files changed: +30 -20 lines changed


articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-data-studio.md

Lines changed: 13 additions & 11 deletions
@@ -58,17 +58,19 @@ In a few minutes, your creation should successfully complete.
 
 - **the number of worker nodes** you want to deploy to scale out and potentially reach better performances. Before proceeding here, read the [concepts about Postgres Hyperscale](concepts-distributed-postgres-hyperscale.md). The table below indicates the range of supported values and what form of Postgres deployment you get with them. For example, if you want to deploy a server group with 2 worker nodes, indicate 2. This will create three pods, one for the coordinator node/instance and two for the worker nodes/instances (one for each of the workers).
 
-
-
-|You need |Shape of the server group you will deploy |Number of worker nodes to indicate |Note |
-|---|---|---|---|
-|A scaled out form of Postgres to satisfy the scalability needs of your applications. |3 or more Postgres instances, 1 is coordinator, n are workers with n >=2. |n, with n>=2. |The Citus extension that provides the Hyperscale capability is loaded. |
-|A basic form of Postgres Hyperscale for you to do functional validation of your application at minimum cost. Not valid for performance and scalability validation. For that you need to use the type of deployments described above. |1 Postgres instance that is both coordinator and worker. |0 and add Citus to the list of extensions to load. |The Citus extension that provides the Hyperscale capability is loaded. |
-|A simple instance of Postgres that is ready to scale out when you need it. |1 Postgres instance. It is not yet aware of the semantic for coordinator and worker. To scale it out after deployment, edit the configuration, increase the number of worker nodes and distribute the data. |0 |The Citus extension that provides the Hyperscale capability is present on your deployment but is not yet loaded. |
-| | | | |
-
-While indicating 1 worker works, we do not recommend you use it. This deployment will not provide you much value. With it, you will get 2 instances of Postgres: 1 coordinator and 1 worker. With this setup you actually do not scale out the data since you deploy a single worker. As such you will not see an increased level of performance and scalability. We will remove the support of this deployment in a future release.
-
+|You need |Shape of the server group you will deploy |Number of worker nodes to indicate |Note |
+|---|---|---|---|
+|A scaled out form of Postgres to satisfy the scalability needs of your applications. |3 or more Postgres instances, 1 is coordinator, n are workers with n >=2. |n, with n>=2. |The Citus extension that provides the Hyperscale capability is loaded. |
+|A basic form of Postgres Hyperscale for you to do functional validation of your application at minimum cost. Not valid for performance and scalability validation. For that you need to use the type of deployments described above. |1 Postgres instance that is both coordinator and worker. |0 and add Citus to the list of extensions to load. |The Citus extension that provides the Hyperscale capability is loaded. |
+|A simple instance of Postgres that is ready to scale out when you need it. |1 Postgres instance. It is not yet aware of the semantic for coordinator and worker. To scale it out after deployment, edit the configuration, increase the number of worker nodes and distribute the data. |0 |The Citus extension that provides the Hyperscale capability is present on your deployment but is not yet loaded. |
+| | | | |
+
+This table is demonstrated in the following figure:
+
+:::image type="content" source="media/postgres-hyperscale/deployment-parameters.png" alt-text="Diagram that depicts Postgres Hyperscale worker node parameters and associated architecture." border="false":::
+
+While indicating 1 worker works, we do not recommend you use it. This deployment will not provide you much value. With it, you will get 2 instances of Postgres: 1 coordinator and 1 worker. With this setup you actually do not scale out the data since you deploy a single worker. As such you will not see an increased level of performance and scalability. We will remove the support of this deployment in a future release.
+
 
 - **the storage classes** you want your server group to use. It is important you set the storage class right at the time you deploy a server group as this cannot be changed after you deploy. If you were to change the storage class after deployment, you would need to extract the data, delete your server group, create a new server group, and import the data. You may specify the storage classes to use for the data, logs and the backups. By default, if you do not indicate storage classes, the storage classes of the data controller will be used.
 - to set the storage class for the data, indicate the parameter `--storage-class-data` or `-scd` followed by the name of the storage class.
 - to set the storage class for the logs, indicate the parameter `--storage-class-logs` or `-scl` followed by the name of the storage class.
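
Taken together, a minimal command-line sketch of how these parameters combine. The `azdata arc postgres server create` command name and the `managed-premium` storage class are illustrative assumptions from the preview-era tooling; only `-w`, `--storage-class-data` (`-scd`), and `--storage-class-logs` (`-scl`) are taken from the text above.

```bash
# Hypothetical sketch; the command name is assumed from preview-era azdata tooling.
# Deploys a server group named pg01 with 2 worker nodes (one coordinator pod
# plus two worker pods) and explicit storage classes for data and logs.
azdata arc postgres server create -n pg01 \
  -w 2 \
  --storage-class-data managed-premium \
  --storage-class-logs managed-premium
```

Setting both storage classes at create time matters because, as the text above notes, they cannot be changed after deployment without exporting the data and recreating the server group.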

articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-portal.md

Lines changed: 5 additions & 1 deletion
@@ -90,7 +90,11 @@ Be aware of the following considerations when you're deploying:
 |A simple instance of Azure Arc-enabled PostgreSQL Hyperscale that is ready to scale out when you need it. |One instance of Azure Arc-enabled PostgreSQL Hyperscale. It isn't yet aware of the semantic for coordinator and worker. To scale it out after deployment, edit the configuration, increase the number of worker nodes, and distribute the data. |*0*. |The Citus extension that provides the Hyperscale capability is present on your deployment, but isn't yet loaded. |
 | | | | |
 
-Although you can indicate *1* worker, it's not a good idea to do so. This deployment doesn't provide you with much value. With it, you get two instances of Azure Arc-enabled PostgreSQL Hyperscale: one coordinator and one worker. You don't scale out the data because you deploy a single worker. As such, you don't see an increased level of performance and scalability.
+This table is demonstrated in the following figure:
+
+:::image type="content" source="media/postgres-hyperscale/deployment-parameters.png" alt-text="Diagram that depicts Postgres Hyperscale worker node parameters and associated architecture." border="false":::
+
+Although you can indicate *1* worker, it's not a good idea to do so. This deployment doesn't provide you with much value. With it, you get two instances of Azure Arc-enabled PostgreSQL Hyperscale: one coordinator and one worker. You don't scale out the data because you deploy a single worker. As such, you don't see an increased level of performance and scalability.
 
 - **The storage classes you want your server group to use.** It's important to set the storage class right at the time you deploy a server group. You can't change this setting after you deploy. If you don't indicate storage classes, you get the storage classes of the data controller by default.
 - To set the storage class for the data, indicate the parameter `--storage-class-data` or `-scd`, followed by the name of the storage class.
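
The first context row above describes scaling out after deployment: edit the configuration, raise the worker count, then distribute the data. A hedged sketch of that sequence, assuming the preview-era `azdata arc postgres server edit` command and an illustrative `events` table (both assumptions); `create_distributed_table` is the standard Citus function for sharding a table across workers.

```bash
# Hypothetical sketch; command name, endpoint, and table name are assumptions.
# 1. Raise the worker count on an existing server group from 0 to 2.
azdata arc postgres server edit -n pg01 -w 2

# 2. Distribute an existing table across the workers with the Citus
#    create_distributed_table function (shards by the chosen column).
psql -h <coordinator-endpoint> -U postgres -d postgres \
  -c "SELECT create_distributed_table('events', 'customer_id');"
```

The order matters: the workers must exist before the data can be distributed onto them.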

articles/azure-arc/data/create-postgresql-hyperscale-server-group.md

Lines changed: 12 additions & 8 deletions
@@ -60,14 +60,18 @@ The main parameters should consider are:
 
 
 
-|You need |Shape of the server group you will deploy |`-w` parameter to use |Note |
-|---|---|---|---|
-|A scaled out form of Postgres to satisfy the scalability needs of your applications. |Three or more Postgres instances, one is coordinator, n are workers with n >=2. |Use `-w n`, with n>=2. |The Citus extension that provides the Hyperscale capability is loaded. |
-|A basic form of Postgres Hyperscale for you to do functional validation of your application at minimum cost. Not valid for performance and scalability validation. For that you need to use the type of deployments described above. |One Postgres instance that is both coordinator and worker. |Use `-w 0` and load the Citus extension. Use the following parameters if deploying from command line: `-w 0` --extensions Citus. |The Citus extension that provides the Hyperscale capability is loaded. |
-|A simple instance of Postgres that is ready to scale out when you need it. |One Postgres instance. It is not yet aware of the semantic for coordinator and worker. To scale it out after deployment, edit the configuration, increase the number of worker nodes and distribute the data. |Use `-w 0` or do not specify `-w`. |The Citus extension that provides the Hyperscale capability is present on your deployment but is not yet loaded. |
-| | | | |
-
-While using `-w 1` works, we do not recommend you use it. This deployment will not provide you much value. With it, you will get two instances of Postgres: One coordinator and one worker. With this setup, you actually do not scale out the data since you deploy a single worker. As such you will not see an increased level of performance and scalability. We will remove the support of this deployment in a future release.
+|You need |Shape of the server group you will deploy |`-w` parameter to use |Note |
+|---|---|---|---|
+|A scaled out form of Postgres to satisfy the scalability needs of your applications. |Three or more Postgres instances, one is coordinator, n are workers with n >=2. |Use `-w n`, with n>=2. |The Citus extension that provides the Hyperscale capability is loaded. |
+|A basic form of Postgres Hyperscale for you to do functional validation of your application at minimum cost. Not valid for performance and scalability validation. For that you need to use the type of deployments described above. |One Postgres instance that is both coordinator and worker. |Use `-w 0` and load the Citus extension. Use the following parameters if deploying from command line: `-w 0` --extensions Citus. |The Citus extension that provides the Hyperscale capability is loaded. |
+|A simple instance of Postgres that is ready to scale out when you need it. |One Postgres instance. It is not yet aware of the semantic for coordinator and worker. To scale it out after deployment, edit the configuration, increase the number of worker nodes and distribute the data. |Use `-w 0` or do not specify `-w`. |The Citus extension that provides the Hyperscale capability is present on your deployment but is not yet loaded. |
+| | | | |
+
+This table is demonstrated in the following figure:
+
+:::image type="content" source="media/postgres-hyperscale/deployment-parameters.png" alt-text="Diagram that depicts Postgres Hyperscale worker node parameters and associated architecture." border="false":::
+
+While using `-w 1` works, we do not recommend you use it. This deployment will not provide you much value. With it, you will get two instances of Postgres: One coordinator and one worker. With this setup, you actually do not scale out the data since you deploy a single worker. As such you will not see an increased level of performance and scalability. We will remove the support of this deployment in a future release.
 
 - **The storage classes** you want your server group to use. It is important you set the storage class right at the time you deploy a server group as this setting cannot be changed after you deploy. You may specify the storage classes to use for the data, logs and the backups. By default, if you do not indicate storage classes, the storage classes of the data controller will be used.
 - To set the storage class for the data, indicate the parameter `--storage-class-data` or `-scd` followed by the name of the storage class.
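
As a sketch of the two `-w 0` shapes in this table: the `-w 0` and `--extensions Citus` parameters are quoted from the rows above, while the `azdata arc postgres server create` command name and server group names are assumptions from the preview-era tooling.

```bash
# Hypothetical sketch; the command name is assumed from preview-era tooling.
# Minimum-cost Hyperscale validation: a single instance that acts as both
# coordinator and worker, with the Citus extension loaded (-w 0 plus
# --extensions Citus). Good for functional checks, not performance tests.
azdata arc postgres server create -n pg-validate -w 0 --extensions Citus

# Plain Postgres, ready to scale out later: omit -w (or pass -w 0) and do
# not load Citus; the extension is present but stays unloaded until needed.
azdata arc postgres server create -n pg-simple
```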
articles/azure-arc/data/media/postgres-hyperscale/deployment-parameters.png

168 KB (binary image file; no text diff)
