articles/postgresql/single-server/application-best-practices.md — 13 additions & 4 deletions
description: Learn about best practices for building an app by using Azure Database for PostgreSQL.
ms.service: postgresql
ms.subservice: single-server
ms.topic: conceptual
author: sunilagarwal
ms.author: sunila
ms.reviewer: ""
ms.date: 06/24/2022
---

# Best practices for building an application with Azure Database for PostgreSQL

Here are a few tools and practices that you can use to help debug performance issues in your application.

With connection pooling, a fixed set of connections is established at startup and maintained. Pooling also helps reduce the memory fragmentation on the server that's caused by dynamically establishing new connections to the database server. Connection pooling can be configured on the application side if the app framework or database driver supports it. If that isn't supported, the other recommended option is a proxy connection pooler service like [PgBouncer](https://pgbouncer.github.io/) or [Pgpool](https://pgpool.net/mediawiki/index.php/Main_Page) running outside the application and connecting to the database server. Both PgBouncer and Pgpool are community-based tools that work with Azure Database for PostgreSQL.
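As an illustrative sketch (not tied to any specific driver), a fixed-size pool can be modeled as a thread-safe queue of connections created once at startup. In practice you'd use your driver's built-in pooling or PgBouncer/Pgpool; the `connect` callable here is a stand-in for whatever opens a real database connection:

```python
import queue
from contextlib import contextmanager

class ConnectionPool:
    """A minimal fixed-size pool: connections are created once at startup
    and reused, instead of being opened and closed per request."""

    def __init__(self, connect, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(connect())  # establish the fixed set up front

    @contextmanager
    def connection(self, timeout=30):
        conn = self._pool.get(timeout=timeout)  # block until one is free
        try:
            yield conn
        finally:
            self._pool.put(conn)  # return it to the pool for reuse

# Stand-in connect function; a real app would open a PostgreSQL connection here.
pool = ConnectionPool(connect=lambda: object(), size=5)
with pool.connection() as conn:
    pass  # run queries on conn here
```

Because the set of connections is fixed, a burst of requests waits briefly for a free connection instead of opening new ones against the server.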

### Retry logic to handle transient errors

Your application might experience transient errors, where connections to the database are dropped or lost intermittently. In such situations, the server is typically up and running again after one to two retries in 5 to 10 seconds. A good practice is to wait 5 seconds before your first retry, and then gradually increase the wait between retries, up to 60 seconds. Limit the maximum number of retries, at which point your application considers the operation failed, so you can then investigate further. See [How to troubleshoot connection errors](./concepts-connectivity.md) to learn more.
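The pattern above can be sketched as a small retry helper. This is an illustrative example, not a prescribed implementation: the exception type and delays are assumptions you'd adapt to your driver, and the sleep function is injectable so the policy is easy to test.

```python
import time

def run_with_retries(operation, max_retries=5, first_wait=5,
                     max_wait=60, sleep=time.sleep):
    """Run operation(), retrying on ConnectionError with a gradually
    increasing wait: 5 s before the first retry, doubling up to 60 s."""
    wait = first_wait
    for attempt in range(max_retries + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_retries:
                raise  # give up so the failure can be investigated
            sleep(wait)
            wait = min(wait * 2, max_wait)  # back off gradually

# Stand-in operation that fails twice with a transient error, then succeeds:
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient drop")
    return "ok"

result = run_with_retries(flaky, sleep=lambda s: None)  # result == "ok"
```

Capping the wait at 60 seconds and the attempt count keeps a persistent outage from stalling the application indefinitely.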

### Enable read replication to mitigate failovers

You can use [read replicas](./concepts-read-replicas.md) for failover scenarios. When you're using read replicas, no automated failover between source and replica servers occurs. You'll notice a lag between the source and the replica because the replication is asynchronous. Network lag can be influenced by many factors, like the size of the workload running on the source server and the latency between datacenters. In most cases, replica lag ranges from a few seconds to a couple of minutes.

## Database deployment

### Configure CI/CD deployment pipeline

Occasionally, you need to deploy changes to your database. In such cases, you can use continuous integration (CI) through [GitHub Actions](https://github.com/Azure/postgresql/blob/master/README.md) for your PostgreSQL server to update the database by running a custom script against it.

### Define manual database deployment process

During manual database deployment, follow these steps to minimize downtime or reduce the risk of failed deployment:
- Create a copy of a production database on a new database by using pg_dump.
> If the application is like an e-commerce app and you can't put it in a read-only state, deploy the changes directly on the production database after making a backup. These changes should occur during off-peak hours with low traffic to the app to minimize the impact, because some users might experience failed requests. Make sure your application code also handles any failed requests.

## Database schema and queries

Here are a few tips to keep in mind when you build your database schema and your queries.

### Use BIGINT or UUID for Primary Keys

Some custom applications and frameworks use `INT` instead of `BIGINT` for primary keys. When you use `INT`, you run the risk that the values in your database will exceed the storage capacity of the `INT` data type. Making this change to an existing production application can be time consuming and cost more development time. Another option is to use [UUID](https://www.postgresql.org/docs/current/datatype-uuid.html) for primary keys. This identifier uses an auto-generated 128-bit string, for example `a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11`. Learn more about [PostgreSQL data types](https://www.postgresql.org/docs/8.1/datatype.html).
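The headroom difference is easy to quantify: a signed 4-byte `INT` key tops out at about 2.1 billion values, while `BIGINT` is 8 bytes and a UUID is 128 bits. The sketch below illustrates this; the DDL strings are hypothetical examples, not taken from the article:

```python
import uuid

# Range limits of PostgreSQL's signed integer key types.
INT_MAX = 2**31 - 1      # 2,147,483,647 -- exhaustible at scale
BIGINT_MAX = 2**63 - 1   # effectively inexhaustible for row counts

# Hypothetical DDL showing the two alternatives discussed above:
ddl_bigint = "CREATE TABLE orders (id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY);"
ddl_uuid = "CREATE TABLE orders (id UUID PRIMARY KEY);"

# A UUID is an auto-generated 128-bit identifier:
key = uuid.uuid4()
print(INT_MAX, BIGINT_MAX, key)
```

Switching an existing `INT` key to `BIGINT` requires a schema migration, which is why choosing the wider type (or a UUID) up front is cheaper than retrofitting it.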

### Use indexes

There are many types of [indexes](https://www.postgresql.org/docs/9.1/indexes.html) in Postgres that can be used in different ways. Using an index helps the server find and retrieve specific rows much faster than it could without an index. But indexes also add overhead to the database server, so avoid having too many indexes.

### Use autovacuum

You can optimize your server with autovacuum on an Azure Database for PostgreSQL server. PostgreSQL achieves greater database concurrency through multiversion concurrency control, but every update results in an insert and a delete, and deleted records are soft-marked as dead tuples to be purged later. To carry out these tasks, PostgreSQL runs a vacuum job. If you don't vacuum from time to time, the dead tuples that accumulate can result in:
- Data bloat, such as larger databases and tables.
Learn more about [how to optimize with autovacuum](how-to-optimize-autovacuum.md).
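As a rough illustration of why accumulated dead tuples matter, a simple bloat check can compare dead tuples to live ones. The thresholds and the `pg_stat_user_tables` query below are assumptions for the sketch, not values from the article:

```python
# Hypothetical query you might run to collect per-table tuple counts:
BLOAT_QUERY = """
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
"""

def dead_tuple_ratio(live, dead):
    """Fraction of a table's tuples that are dead (0.0 when empty)."""
    total = live + dead
    return dead / total if total else 0.0

def needs_vacuum(live, dead, threshold=0.2):
    """Flag tables where more than `threshold` of tuples are dead."""
    return dead_tuple_ratio(live, dead) > threshold

print(needs_vacuum(live=8000, dead=4000))  # a third of tuples are dead -> True
```

Autovacuum normally handles this for you; a check like this is only a way to spot tables whose dead-tuple ratio suggests the autovacuum settings need tuning.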
### Use pg_stat_statements

pg_stat_statements is a PostgreSQL extension that's enabled by default in Azure Database for PostgreSQL. The extension provides a means to track execution statistics for all SQL statements executed by a server. See [how to use pg_stat_statements](how-to-optimize-query-stats-collection.md).

### Use the Query Store

The [Query Store](./concepts-query-store.md) feature in Azure Database for PostgreSQL provides a more effective method to track query statistics. We recommend this feature as an alternative to using pg_stat_statements.

### Optimize bulk inserts and use transient data

If you have workload operations that involve transient data or that insert large datasets in bulk, consider using unlogged tables. Unlogged tables skip write-ahead logging (WAL), which PostgreSQL otherwise uses to provide atomicity and durability by default (two of the ACID properties: atomicity, consistency, isolation, and durability). Writes to unlogged tables are therefore faster, but their contents aren't crash-safe. See [how to optimize bulk inserts](how-to-optimize-bulk-inserts.md).
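On the application side, a common companion pattern for bulk inserts is to send rows in batches instead of one statement per row. The sketch below is a generic batching helper; the batch size and the INSERT shown in the comment are assumptions, not from the article. Each batch would go to your driver's bulk-execute call, for example against an unlogged staging table:

```python
def batched(rows, batch_size=1000):
    """Yield lists of at most batch_size rows, so large datasets can be
    inserted with one multi-row statement per batch instead of per row."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # final partial batch

# Each batch would feed something like (hypothetical driver call):
#   cursor.executemany("INSERT INTO staging_unlogged VALUES (%s, %s)", batch)
batches = list(batched(range(2500), batch_size=1000))
print([len(b) for b in batches])  # [1000, 1000, 500]
```

Batching reduces round trips and per-statement overhead, which compounds with the WAL savings of an unlogged staging table.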

articles/postgresql/single-server/concept-reserved-pricing.md — 3 additions & 5 deletions

ms.topic: conceptual
ms.author: sunila
author: sunilagarwal
ms.reviewer: ""
ms.date: 06/24/2022
---

# Prepay for Azure Database for PostgreSQL compute resources with reserved capacity
Azure Database for PostgreSQL now helps you save money by prepaying for compute resources, compared to pay-as-you-go prices. With Azure Database for PostgreSQL reserved capacity, you make an upfront commitment on a PostgreSQL server for a one- or three-year period to get a significant discount on the compute costs. To purchase Azure Database for PostgreSQL reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term.
## How does the instance reservation work?
You don't need to assign the reservation to specific Azure Database for PostgreSQL servers. Already running Azure Database for PostgreSQL servers (or ones that are newly deployed) automatically get the benefit of reserved pricing. By purchasing a reservation, you're prepaying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure Database for PostgreSQL compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation doesn't cover software, networking, or storage charges associated with the PostgreSQL database servers. At the end of the reservation term, the billing benefit expires, and the Azure Database for PostgreSQL servers are billed at the pay-as-you-go price. Reservations don't auto-renew. For pricing information, see the [Azure Database for PostgreSQL reserved capacity offering](https://azure.microsoft.com/pricing/details/postgresql/).
> [!IMPORTANT]
For example, let's suppose that you're running one general purpose Gen5 32-vCore PostgreSQL database and two memory-optimized Gen5 16-vCore PostgreSQL databases. Further, let's suppose that within the next month you plan to deploy an additional general purpose Gen5 8-vCore database server and one memory-optimized Gen5 32-vCore database server, and that you know you'll need these resources for at least one year. In this case, you should purchase a 40 (32 + 8) vCore one-year reservation for single database general purpose Gen5 and a 64 (2 × 16 + 32) vCore one-year reservation for single database memory optimized Gen5.
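The sizing arithmetic above can be written out explicitly; the workload numbers are just the ones from the example:

```python
# Existing and planned general purpose (GP) Gen5 servers, in vCores:
gp_vcores = [32]    # running now
gp_planned = [8]    # deploying within the next month

# Existing and planned memory-optimized (MO) Gen5 servers, in vCores:
mo_vcores = [16, 16]
mo_planned = [32]

# Reservation size per tier = total vCores of existing + planned servers.
gp_reservation = sum(gp_vcores) + sum(gp_planned)  # 32 + 8 = 40
mo_reservation = sum(mo_vcores) + sum(mo_planned)  # 2*16 + 32 = 64
print(gp_reservation, mo_reservation)  # 40 64
```

Reservations are purchased per performance tier, which is why the general purpose and memory-optimized totals are computed separately.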
## Buy Azure Database for PostgreSQL reserved capacity

1. Sign in to the [Azure portal](https://portal.azure.com/).
2. Select **All services** > **Reservations**.
3. Select **Add**, and then in the **Purchase reservations** pane, select **Azure Database for PostgreSQL** to purchase a new reservation for your PostgreSQL databases.
4. Fill in the required fields. Existing or new databases that match the attributes you select qualify to get the reserved capacity discount. The actual number of Azure Database for PostgreSQL servers that get the discount depends on the scope and quantity selected.

   :::image type="content" source="media/concepts-reserved-pricing/postgresql-reserved-price.png" alt-text="Overview of reserved pricing":::

The following table describes required fields.
| Field | Description |

Use Azure APIs to programmatically get information for your organization about Azure Database for PostgreSQL reservations. For example, use the APIs to:

- View and manage reservation access
- Split or merge reservations
- Change the scope of reservations

For more information, see [APIs for Azure reservation automation](../../cost-management-billing/reservations/reservation-apis.md).

articles/postgresql/single-server/concepts-aks.md — 6 additions & 4 deletions

ms.topic: conceptual
ms.author: sunila
author: sunilagarwal
ms.reviewer: ""
ms.date: 06/24/2022
---

# Connecting Azure Kubernetes Service and Azure Database for PostgreSQL - Single Server
Azure Kubernetes Service (AKS) provides a managed Kubernetes cluster you can use in Azure. Below are some options to consider when using AKS and Azure Database for PostgreSQL together to create an application.

## Accelerated networking

Use underlying VMs that have accelerated networking enabled in your AKS cluster. When accelerated networking is enabled on a VM, there is lower latency, reduced jitter, and decreased CPU utilization on the VM. Learn more about how accelerated networking works, the supported OS versions, and supported VM instances for [Linux](../../virtual-network/create-vm-accelerated-networking-cli.md).
Since November 2018, AKS has supported accelerated networking on those supported VM instances. Accelerated networking is enabled by default on new AKS clusters that use those VMs.

```
az network nic list --resource-group nodeResourceGroup -o table
```

## Connection pooling
A connection pooler minimizes the cost and time associated with creating and closing new connections to the database. The pool is a collection of connections that can be reused.

There are multiple connection poolers you can use with PostgreSQL. One of these is [PgBouncer](https://pgbouncer.github.io/). In the Microsoft Container Registry, we provide a lightweight containerized PgBouncer that can be used in a sidecar to pool connections from AKS to Azure Database for PostgreSQL. Visit the [docker hub page](https://hub.docker.com/r/microsoft/azureossdb-tools-pgbouncer/) to learn how to access and use this image.

## Next steps

Create an AKS cluster [using the Azure CLI](../../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../../aks/learn/quick-kubernetes-deploy-portal.md).

articles/postgresql/single-server/concepts-audit.md — 5 additions & 5 deletions

ms.subservice: single-server
ms.topic: conceptual
ms.author: nlarin
author: niklarin
ms.date: 06/24/2022
---

# Audit logging in Azure Database for PostgreSQL - Single Server

To use the [portal](https://portal.azure.com):

1. On the left, under **Settings**, select **Server parameters**.
1. Search for **shared_preload_libraries**.
1. Select **PGAUDIT**.

   :::image type="content" source="./media/concepts-audit/share-preload-parameter.png" alt-text="Screenshot that shows Azure Database for PostgreSQL enabling shared_preload_libraries for PGAUDIT.":::

1. Restart the server to apply the change.
1. Check that `pgaudit` is loaded in `shared_preload_libraries` by executing the following query in psql:

   ```SQL
   show shared_preload_libraries;
   ```

   You should see `pgaudit` in the query result returned for `shared_preload_libraries`.

1. Connect to your server by using a client like psql, and enable the pgAudit extension:

   ```SQL
   CREATE EXTENSION pgaudit;
   ```

To configure pgAudit, in the [portal](https://portal.azure.com):

1. On the left, under **Settings**, select **Server parameters**.
1. Search for the **pgaudit** parameters.
1. Select appropriate settings parameters to edit. For example, to start logging, set **pgaudit.log** to **WRITE**.

   :::image type="content" source="./media/concepts-audit/pgaudit-config.png" alt-text="Screenshot that shows Azure Database for PostgreSQL configuring logging with pgAudit.":::