
Commit 31a7767

Merge pull request #202797 from markingmyname/pgsserver
[PostgreSQL] Refresh entire Single Server set (Bulk change) use docutune light
2 parents: 1be34a4 + 4a2f6c3

108 files changed (+848 additions, -625 deletions)


articles/postgresql/single-server/application-best-practices.md

Lines changed: 13 additions & 4 deletions
```diff
@@ -4,10 +4,10 @@ description: Learn about best practices for building an app by using Azure Datab
 ms.service: postgresql
 ms.subservice: single-server
 ms.topic: conceptual
-ms.author: sunila
 author: sunilagarwal
+ms.author: sunila
 ms.reviewer: ""
-ms.date: 12/10/2020
+ms.date: 06/24/2022
 ---
 
 # Best practices for building an application with Azure Database for PostgreSQL
```
```diff
@@ -49,18 +49,21 @@ Here are a few tools and practices that you can use to help debug performance is
 With connection pooling, a fixed set of connections is established at startup time and maintained. This also helps reduce the memory fragmentation on the server that's caused by dynamic new connections established on the database server. Connection pooling can be configured on the application side if the app framework or database driver supports it. If that isn't supported, the other recommended option is to use a proxy connection pooler service like [PgBouncer](https://pgbouncer.github.io/) or [Pgpool](https://pgpool.net/mediawiki/index.php/Main_Page) running outside the application and connecting to the database server. Both PgBouncer and Pgpool are community-based tools that work with Azure Database for PostgreSQL.
 
 ### Retry logic to handle transient errors
+
 Your application might experience transient errors where connections to the database are dropped or lost intermittently. In such situations, the server is typically up and running again after one or two retries in 5 to 10 seconds. A good practice is to wait for 5 seconds before your first retry, and then increase the wait gradually with each retry, up to 60 seconds. Limit the maximum number of retries, after which your application considers the operation failed, so you can then investigate further. See [How to troubleshoot connection errors](./concepts-connectivity.md) to learn more.
 
 ### Enable read replication to mitigate failovers
-You can use [Data-in Replication](./concepts-read-replicas.md) for failover scenarios. When you're using read replicas, no automated failover between source and replica servers occurs. You'll notice a lag between the source and the replica because the replication is asynchronous. Network lag can be influenced by many factors, like the size of the workload running on the source server and the latency between datacenters. In most cases, replica lag ranges from a few seconds to a couple of minutes.
 
+You can use [Data-in Replication](./concepts-read-replicas.md) for failover scenarios. When you're using read replicas, no automated failover between source and replica servers occurs. You'll notice a lag between the source and the replica because the replication is asynchronous. Network lag can be influenced by many factors, like the size of the workload running on the source server and the latency between datacenters. In most cases, replica lag ranges from a few seconds to a couple of minutes.
 
 ## Database deployment
 
 ### Configure CI/CD deployment pipeline
+
 Occasionally, you need to deploy changes to your database. In such cases, you can use continuous integration (CI) through [GitHub Actions](https://github.com/Azure/postgresql/blob/master/README.md) for your PostgreSQL server to update the database by running a custom script against it.
 
 ### Define manual database deployment process
+
 During manual database deployment, follow these steps to minimize downtime or reduce the risk of failed deployment:
 
 - Create a copy of a production database on a new database by using pg_dump.
```
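
The retry guidance in the hunk above maps naturally to a small backoff loop in application code. Here's a minimal sketch in Python with psycopg2 — the driver choice and the `PG_CONNECTION_STRING` environment variable are illustrative assumptions, not part of the article — using the 5-second initial wait, gradual backoff up to 60 seconds, and retry cap described in the text.

```python
# Minimal sketch of the transient-error retry advice above; psycopg2 and the
# PG_CONNECTION_STRING environment variable are assumptions, not from the article.
import os
import time

import psycopg2


def connect_with_retry(dsn: str, max_retries: int = 5):
    """Retry dropped connections: wait 5 seconds first, then back off up to 60 seconds."""
    wait = 5
    for attempt in range(1, max_retries + 1):
        try:
            return psycopg2.connect(dsn)
        except psycopg2.OperationalError:
            if attempt == max_retries:
                raise  # give up so the failure can be investigated
            time.sleep(wait)
            wait = min(wait * 2, 60)  # increase the wait gradually, capped at 60 seconds


conn = connect_with_retry(os.environ["PG_CONNECTION_STRING"])
```

The same pattern applies to retrying individual queries, not just the initial connection.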
```diff
@@ -74,16 +77,19 @@ During manual database deployment, follow these steps to minimize downtime or re
 > If the application is like an e-commerce app and you can't put it in read-only state, deploy the changes directly on the production database after making a backup. These changes should occur during off-peak hours with low traffic to the app to minimize the impact, because some users might experience failed requests. Make sure your application code also handles any failed requests.
 
 ## Database schema and queries
+
 Here are a few tips to keep in mind when you build your database schema and your queries.
 
 ### Use BIGINT or UUID for Primary Keys
+
 Custom applications or frameworks might use `INT` instead of `BIGINT` for primary keys. When you use `INT`, you run the risk that the values in your database exceed the storage capacity of the `INT` data type. Making this change to an existing production application can be time consuming and cost more development time. Another option is to use [UUID](https://www.postgresql.org/docs/current/datatype-uuid.html) for primary keys. This identifier uses an auto-generated 128-bit string, for example `a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11`. Learn more about [PostgreSQL data types](https://www.postgresql.org/docs/8.1/datatype.html).
 
 ### Use indexes
 
 There are many types of [indexes](https://www.postgresql.org/docs/9.1/indexes.html) in Postgres, which can be used in different ways. Using an index helps the server find and retrieve specific rows much faster than it could do without an index. But indexes also add overhead to the database server, so avoid having too many indexes.
 
 ### Use autovacuum
+
 You can optimize your server with autovacuum on an Azure Database for PostgreSQL server. PostgreSQL allows greater database concurrency, but every update results in an insert and a delete. For deletes, the records are soft-marked to be purged later. To carry out these tasks, PostgreSQL runs a vacuum job. If you don't vacuum from time to time, the dead tuples that accumulate can result in:
 
 - Data bloat, such as larger databases and tables.
```
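
To make the primary-key guidance above concrete, here's a hedged sketch in Python with psycopg2 (the driver and the `orders`/`sessions` table names are illustrative assumptions): one table keyed by `BIGSERIAL`, and one keyed by a UUID generated in the application so that no server-side extension is needed.

```python
# Illustrative sketch of BIGINT vs. UUID primary keys; psycopg2 and the table
# names are assumptions, not from the article.
import os
import uuid

import psycopg2

conn = psycopg2.connect(os.environ["PG_CONNECTION_STRING"])
with conn, conn.cursor() as cur:
    # BIGSERIAL gives a BIGINT surrogate key, avoiding the ~2.1 billion ceiling of INT.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS orders (
            order_id   BIGSERIAL PRIMARY KEY,
            created_at TIMESTAMPTZ NOT NULL DEFAULT now()
        )
    """)
    # UUID key generated client-side with uuid4(); PostgreSQL stores it in a UUID column.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS sessions (
            session_id UUID PRIMARY KEY,
            user_name  TEXT NOT NULL
        )
    """)
    cur.execute(
        "INSERT INTO sessions (session_id, user_name) VALUES (%s, %s)",
        (str(uuid.uuid4()), "demo"),
    )
```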
```diff
@@ -93,14 +99,17 @@ You can optimize your server with autovacuum on an Azure Database for PostgreSQL
 Learn more about [how to optimize with autovacuum](how-to-optimize-autovacuum.md).
 
 ### Use pg_stat_statements
-Pg_stat_statements is a PostgreSQL extension that's enabled by default in Azure Database for PostgreSQL. The extension provides a means to track execution statistics for all SQL statements executed by a server. See [how to use pg_stat_statements](how-to-optimize-query-stats-collection.md).
 
+Pg_stat_statements is a PostgreSQL extension that's enabled by default in Azure Database for PostgreSQL. The extension provides a means to track execution statistics for all SQL statements executed by a server. See [how to use pg_stat_statements](how-to-optimize-query-stats-collection.md).
 
 ### Use the Query Store
+
 The [Query Store](./concepts-query-store.md) feature in Azure Database for PostgreSQL provides a more effective method to track query statistics. We recommend this feature as an alternative to using pg_stat_statements.
 
 ### Optimize bulk inserts and use transient data
+
 If you have workload operations that involve transient data or that insert large datasets in bulk, consider using unlogged tables. Regular tables provide atomicity, consistency, isolation, and durability (the ACID properties) by default; unlogged tables skip write-ahead logging, which makes bulk writes faster at the cost of crash safety. See [how to optimize bulk inserts](how-to-optimize-bulk-inserts.md).
 
 ## Next Steps
+
 [Postgres Guide](http://postgresguide.com/)
```
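
As a hedged illustration of the bulk-insert advice above (Python with psycopg2 and a hypothetical `staging_events` table, none of which are prescribed by the article), the sketch below loads transient rows into an unlogged staging table with `COPY`, which avoids write-ahead logging at the cost of crash safety.

```python
# Sketch only: bulk-load transient data into an UNLOGGED staging table with COPY.
# psycopg2, PG_CONNECTION_STRING, and staging_events are illustrative assumptions.
import io
import os

import psycopg2

conn = psycopg2.connect(os.environ["PG_CONNECTION_STRING"])
with conn, conn.cursor() as cur:
    # Unlogged tables skip the write-ahead log: faster bulk writes, but the table is
    # truncated after a crash, so keep only data you can reload.
    cur.execute("""
        CREATE UNLOGGED TABLE IF NOT EXISTS staging_events (
            event_time TIMESTAMPTZ,
            payload    TEXT
        )
    """)
    rows = io.StringIO(
        "2022-06-24 00:00:00+00\thello\n"
        "2022-06-24 00:00:01+00\tworld\n"
    )
    # COPY is much faster than row-by-row INSERTs for large datasets.
    cur.copy_from(rows, "staging_events", columns=("event_time", "payload"))
```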

articles/postgresql/single-server/concept-reserved-pricing.md

Lines changed: 3 additions & 5 deletions
```diff
@@ -7,7 +7,7 @@ ms.topic: conceptual
 ms.author: sunila
 author: sunilagarwal
 ms.reviewer: ""
-ms.date: 10/06/2021
+ms.date: 06/24/2022
 ---
 
 # Prepay for Azure Database for PostgreSQL compute resources with reserved capacity
```
```diff
@@ -17,6 +17,7 @@ ms.date: 10/06/2021
 Azure Database for PostgreSQL now helps you save money by prepaying for compute resources, compared to pay-as-you-go prices. With Azure Database for PostgreSQL reserved capacity, you make an upfront commitment on PostgreSQL servers for a one-year or three-year period to get a significant discount on the compute costs. To purchase Azure Database for PostgreSQL reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term.
 
 ## How does the instance reservation work?
+
 You don't need to assign the reservation to specific Azure Database for PostgreSQL servers. Already running Azure Database for PostgreSQL servers (or ones that are newly deployed) automatically get the benefit of reserved pricing. By purchasing a reservation, you're prepaying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure Database for PostgreSQL compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation doesn't cover software, networking, or storage charges associated with the PostgreSQL database servers. At the end of the reservation term, the billing benefit expires, and the Azure Database for PostgreSQL servers are billed at the pay-as-you-go price. Reservations don't auto-renew. For pricing information, see the [Azure Database for PostgreSQL reserved capacity offering](https://azure.microsoft.com/pricing/details/postgresql/).
 
 > [!IMPORTANT]
```
```diff
@@ -44,18 +45,15 @@ The size of reservation should be based on the total amount of compute used by t
 
 For example, let's suppose that you're running one general purpose Gen5 – 32 vCore PostgreSQL database and two memory-optimized Gen5 – 16 vCore PostgreSQL databases. Further, let's suppose that you plan to deploy within the next month an additional general purpose Gen5 – 8 vCore database server and one memory-optimized Gen5 – 32 vCore database server. Let's suppose that you know that you'll need these resources for at least one year. In this case, you should purchase a 40 (32 + 8) vCore, one-year reservation for single database general purpose - Gen5 and a 64 (2x16 + 32) vCore, one-year reservation for single database memory optimized - Gen5.
 
-
 ## Buy Azure Database for PostgreSQL reserved capacity
 
 1. Sign in to the [Azure portal](https://portal.azure.com/).
 2. Select **All services** > **Reservations**.
 3. Select **Add** and then in the Purchase reservations pane, select **Azure Database for PostgreSQL** to purchase a new reservation for your PostgreSQL databases.
 4. Fill in the required fields. Existing or new databases that match the attributes you select qualify to get the reserved capacity discount. The actual number of your Azure Database for PostgreSQL servers that get the discount depends on the scope and quantity selected.
 
-
 :::image type="content" source="media/concepts-reserved-pricing/postgresql-reserved-price.png" alt-text="Overview of reserved pricing":::
 
-
 The following table describes required fields.
 
 | Field | Description |
```
```diff
@@ -78,7 +76,7 @@ Use Azure APIs to programmatically get information for your organization about A
 - View and manage reservation access
 - Split or merge reservations
 - Change the scope of reservations
-
+
 For more information, see [APIs for Azure reservation automation](../../cost-management-billing/reservations/reservation-apis.md).
 
 ## vCore size flexibility
```

articles/postgresql/single-server/concepts-aks.md

Lines changed: 6 additions & 4 deletions
```diff
@@ -7,7 +7,7 @@ ms.topic: conceptual
 ms.author: sunila
 author: sunilagarwal
 ms.reviewer: ""
-ms.date: 07/14/2020
+ms.date: 06/24/2022
 ---
 
 # Connecting Azure Kubernetes Service and Azure Database for PostgreSQL - Single Server
```
```diff
@@ -17,6 +17,7 @@ ms.date: 07/14/2020
 Azure Kubernetes Service (AKS) provides a managed Kubernetes cluster you can use in Azure. Below are some options to consider when using AKS and Azure Database for PostgreSQL together to create an application.
 
 ## Accelerated networking
+
 Use accelerated networking-enabled underlying VMs in your AKS cluster. When accelerated networking is enabled on a VM, there is lower latency, reduced jitter, and decreased CPU utilization on the VM. Learn more about how accelerated networking works, the supported OS versions, and supported VM instances for [Linux](../../virtual-network/create-vm-accelerated-networking-cli.md).
 
 From November 2018, AKS supports accelerated networking on those supported VM instances. Accelerated networking is enabled by default on new AKS clusters that use those VMs.
```
````diff
@@ -40,10 +41,11 @@ az network nic list --resource-group nodeResourceGroup -o table
 ```
 
 ## Connection pooling
-A connection pooler minimizes the cost and time associated with creating and closing new connections to the database. The pool is a collection of connections that can be reused.
 
-There are multiple connection poolers you can use with PostgreSQL. One of these is [PgBouncer](https://pgbouncer.github.io/). In the Microsoft Container Registry, we provide a lightweight containerized PgBouncer that can be used in a sidecar to pool connections from AKS to Azure Database for PostgreSQL. Visit the [docker hub page](https://hub.docker.com/r/microsoft/azureossdb-tools-pgbouncer/) to learn how to access and use this image.
+A connection pooler minimizes the cost and time associated with creating and closing new connections to the database. The pool is a collection of connections that can be reused.
+
+There are multiple connection poolers you can use with PostgreSQL. One of these is [PgBouncer](https://pgbouncer.github.io/). In the Microsoft Container Registry, we provide a lightweight containerized PgBouncer that can be used in a sidecar to pool connections from AKS to Azure Database for PostgreSQL. Visit the [docker hub page](https://hub.docker.com/r/microsoft/azureossdb-tools-pgbouncer/) to learn how to access and use this image.
 
 ## Next steps
 
-Create an AKS cluster [using the Azure CLI](../../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../../aks/learn/quick-kubernetes-deploy-portal.md).
+Create an AKS cluster [using the Azure CLI](../../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../../aks/learn/quick-kubernetes-deploy-portal.md).
````
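
As a hedged illustration of the sidecar pattern described above: when the containerized PgBouncer runs next to the application in the same pod, the app connects to the pooler's local address instead of the server endpoint. The sketch below uses Python with psycopg2 and assumes the sidecar listens on 127.0.0.1:5432 and that the usual `PGDATABASE`/`PGUSER`/`PGPASSWORD` environment variables are set; none of these specifics come from the article.

```python
# Sketch: connect to the PgBouncer sidecar on localhost; PgBouncer forwards the
# pooled connections to Azure Database for PostgreSQL. Address, port, and the
# environment variable names are assumptions for illustration.
import os

import psycopg2

conn = psycopg2.connect(
    host="127.0.0.1",  # PgBouncer sidecar in the same pod
    port=5432,         # port the sidecar is configured to listen on
    dbname=os.environ["PGDATABASE"],
    user=os.environ["PGUSER"],          # Single Server expects the user@servername format
    password=os.environ["PGPASSWORD"],
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())
conn.close()
```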

articles/postgresql/single-server/concepts-audit.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -6,7 +6,7 @@ ms.subservice: single-server
 ms.topic: conceptual
 ms.author: nlarin
 author: niklarin
-ms.date: 01/28/2020
+ms.date: 06/24/2022
 ---
 
 # Audit logging in Azure Database for PostgreSQL - Single Server
```
````diff
@@ -43,19 +43,19 @@ To use the [portal](https://portal.azure.com):
 1. On the left, under **Settings**, select **Server parameters**.
 1. Search for **shared_preload_libraries**.
 1. Select **PGAUDIT**.
-
+
 :::image type="content" source="./media/concepts-audit/share-preload-parameter.png" alt-text="Screenshot that shows Azure Database for PostgreSQL enabling shared_preload_libraries for PGAUDIT.":::
 
 1. Restart the server to apply the change.
 1. Check that `pgaudit` is loaded in `shared_preload_libraries` by executing the following query in psql:
-
+
 ```SQL
 show shared_preload_libraries;
 ```
 You should see `pgaudit` in the query result for `shared_preload_libraries`.
 
 1. Connect to your server by using a client like psql, and enable the pgAudit extension:
-
+
 ```SQL
 CREATE EXTENSION pgaudit;
 ```
````
```diff
@@ -78,7 +78,7 @@ To configure pgAudit, in the [portal](https://portal.azure.com):
 1. On the left, under **Settings**, select **Server parameters**.
 1. Search for the **pgaudit** parameters.
 1. Select appropriate settings parameters to edit. For example, to start logging, set **pgaudit.log** to **WRITE**.
-
+
 :::image type="content" source="./media/concepts-audit/pgaudit-config.png" alt-text="Screenshot that shows Azure Database for PostgreSQL configuring logging with pgAudit.":::
 1. Select **Save** to save your changes.
 
```
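
If you'd rather verify the pgAudit setup from application code instead of psql, the following sketch (Python with psycopg2; the article itself only shows psql, so the driver and connection variable are assumptions) runs the same checks: that `pgaudit` appears in `shared_preload_libraries` and that the extension has been created.

```python
# Sketch: programmatic check of the pgAudit setup shown above; psycopg2 and
# PG_CONNECTION_STRING are assumptions, not from the article.
import os

import psycopg2

conn = psycopg2.connect(os.environ["PG_CONNECTION_STRING"])
with conn.cursor() as cur:
    cur.execute("SHOW shared_preload_libraries")
    print("shared_preload_libraries =", cur.fetchone()[0])  # should include 'pgaudit'

    cur.execute("SELECT extname, extversion FROM pg_extension WHERE extname = 'pgaudit'")
    row = cur.fetchone()
    print("pgaudit extension:", row if row else "not created yet (run CREATE EXTENSION pgaudit;)")
conn.close()
```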

articles/postgresql/single-server/concepts-azure-ad-authentication.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -6,7 +6,7 @@ ms.subservice: single-server
 ms.topic: conceptual
 ms.author: sunila
 author: sunilagarwal
-ms.date: 07/23/2020
+ms.date: 06/24/2022
 ---
 
 # Use Azure Active Directory for authenticating with PostgreSQL
```

articles/postgresql/single-server/concepts-azure-advisor-recommendations.md

Lines changed: 2 additions & 1 deletion
```diff
@@ -6,8 +6,9 @@ ms.subservice: single-server
 ms.topic: conceptual
 ms.author: alau
 author: alau-ms
-ms.date: 04/08/2021
+ms.date: 06/24/2022
 ---
+
 # Azure Advisor for PostgreSQL
 
 [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
```
