Commit 8a6af45

Merge pull request #176616 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/MicrosoftDocs/azure-docs (branch master)
2 parents fb1e012 + 372ee5a commit 8a6af45

3 files changed (+20 -20 lines changed)

articles/cosmos-db/sql/performance-testing.md

Lines changed: 3 additions & 3 deletions
@@ -22,7 +22,7 @@ After reading this article, you'll be able to answer the following questions:
 * Where can I find a sample .NET client application for performance testing of Azure Cosmos DB?
 * How do I achieve high throughput levels with Azure Cosmos DB from my client application?
 
-To get started with code, download the project from [Azure Cosmos DB performance testing sample](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/documentdb-benchmark).
+To get started with code, download the project from [Azure Cosmos DB performance testing sample](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Tools/Benchmark).
 
 > [!NOTE]
 > The goal of this application is to demonstrate how to get the best performance from Azure Cosmos DB with a small number of client machines. The goal of the sample is not to achieve the peak throughput capacity of Azure Cosmos DB (which can scale without any limits).
@@ -32,7 +32,7 @@ If you're looking for client-side configuration options to improve Azure Cosmos
 ## Run the performance testing application
 The quickest way to get started is to compile and run the .NET sample, as described in the following steps. You can also review the source code and implement similar configurations on your own client applications.
 
-**Step 1:** Download the project from [Azure Cosmos DB performance testing sample](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/documentdb-benchmark), or fork the GitHub repository.
+**Step 1:** Download the project from [Azure Cosmos DB performance testing sample](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Tools/Benchmark), or fork the GitHub repository.
 
 **Step 2:** Modify the settings for EndpointUrl, AuthorizationKey, CollectionThroughput, and DocumentTemplate (optional) in App.config.
 
@@ -92,7 +92,7 @@ After you have the app running, you can try different [indexing policies](../ind
 
 In this article, we looked at how you can perform performance and scale testing with Azure Cosmos DB by using a .NET console app. For more information, see the following articles:
 
-* [Azure Cosmos DB performance testing sample](https://github.com/Azure/azure-cosmos-dotnet-v2/tree/master/samples/documentdb-benchmark)
+* [Azure Cosmos DB performance testing sample](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Tools/Benchmark)
 * [Client configuration options to improve Azure Cosmos DB performance](performance-tips.md)
 * [Server-side partitioning in Azure Cosmos DB](../partitioning-overview.md)
 * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
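Step 2 of the walkthrough above edits App.config. A minimal sketch of what those appSettings might look like, using only the key names the step itself lists; the values are placeholders, and the exact set of keys in the current benchmark project may differ:

```xml
<configuration>
  <appSettings>
    <!-- Placeholder values: copy the real endpoint and primary key from your Azure Cosmos DB account -->
    <add key="EndpointUrl" value="https://FILL-IN-YOUR-ACCOUNT.documents.azure.com:443/" />
    <add key="AuthorizationKey" value="FILL-IN-YOUR-PRIMARY-KEY" />
    <!-- Throughput (RU/s) to provision for the test collection -->
    <add key="CollectionThroughput" value="100000" />
    <!-- Optional: template for the documents the benchmark inserts -->
    <add key="DocumentTemplate" value="Player.json" />
  </appSettings>
</configuration>
```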

articles/data-factory/pricing-concepts.md

Lines changed: 15 additions & 15 deletions
@@ -46,10 +46,10 @@ To accomplish the scenario, you need to create a pipeline with the following ite
 **Total Scenario pricing: $0.16811**
 
 - Data Factory Operations = **$0.0001**
-    - Read/Write = 10\*00001 = $0.0001 [1 R/W = $0.50/50000 = 0.00001]
-    - Monitoring = 2\*000005 = $0.00001 [1 Monitoring = $0.25/50000 = 0.000005]
+    - Read/Write = 10\*0.00001 = $0.0001 [1 R/W = $0.50/50000 = 0.00001]
+    - Monitoring = 2\*0.000005 = $0.00001 [1 Monitoring = $0.25/50000 = 0.000005]
 - Pipeline Orchestration & Execution = **$0.168**
-    - Activity Runs = 001\*2 = 0.002 [1 run = $1/1000 = 0.001]
+    - Activity Runs = 0.001\*2 = $0.002 [1 run = $1/1000 = 0.001]
     - Data Movement Activities = $0.166 (Prorated for 10 minutes of execution time. $0.25/hour on Azure Integration Runtime)
 
 ## Copy data and transform with Azure Databricks hourly
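As a sanity check, the line items of the hourly copy scenario in the hunk above sum to the stated total. A small sketch; note the $0.166 copy charge is consistent with the default 4 data integration units at $0.25/DIU-hour for 10 minutes, an assumption not stated in the table:

```python
# Check the hourly copy scenario's totals, using the table's own per-unit prices.
operations = 10 * 0.00001 + 2 * 0.000005   # Read/Write + Monitoring = $0.00011
# $0.166 ~= 4 DIUs (assumed default) x $0.25/DIU-hour x 10/60 hour, truncated as in the table
data_movement = 0.166
orchestration = 2 * 0.001 + data_movement  # Activity runs + copy = $0.168
total = operations + orchestration
print(round(total, 5))  # 0.16811, matching "Total Scenario pricing: $0.16811"
```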
@@ -78,10 +78,10 @@ To accomplish the scenario, you need to create a pipeline with the following ite
 **Total Scenario pricing: $0.16916**
 
 - Data Factory Operations = **$0.00012**
-    - Read/Write = 11\*00001 = $0.00011 [1 R/W = $0.50/50000 = 0.00001]
-    - Monitoring = 3\*000005 = $0.00001 [1 Monitoring = $0.25/50000 = 0.000005]
+    - Read/Write = 11\*0.00001 = $0.00011 [1 R/W = $0.50/50000 = 0.00001]
+    - Monitoring = 3\*0.000005 = $0.00001 [1 Monitoring = $0.25/50000 = 0.000005]
 - Pipeline Orchestration & Execution = **$0.16904**
-    - Activity Runs = 001\*3 = 0.003 [1 run = $1/1000 = 0.001]
+    - Activity Runs = 0.001\*3 = $0.003 [1 run = $1/1000 = 0.001]
     - Data Movement Activities = $0.166 (Prorated for 10 minutes of execution time. $0.25/hour on Azure Integration Runtime)
     - External Pipeline Activity = $0.000041 (Prorated for 10 minutes of execution time. $0.00025/hour on Azure Integration Runtime)
 

@@ -113,10 +113,10 @@ To accomplish the scenario, you need to create a pipeline with the following ite
 **Total Scenario pricing: $0.17020**
 
 - Data Factory Operations = **$0.00013**
-    - Read/Write = 11\*00001 = $0.00011 [1 R/W = $0.50/50000 = 0.00001]
-    - Monitoring = 4\*000005 = $0.00002 [1 Monitoring = $0.25/50000 = 0.000005]
+    - Read/Write = 11\*0.00001 = $0.00011 [1 R/W = $0.50/50000 = 0.00001]
+    - Monitoring = 4\*0.000005 = $0.00002 [1 Monitoring = $0.25/50000 = 0.000005]
 - Pipeline Orchestration & Execution = **$0.17007**
-    - Activity Runs = 001\*4 = 0.004 [1 run = $1/1000 = 0.001]
+    - Activity Runs = 0.001\*4 = $0.004 [1 run = $1/1000 = 0.001]
     - Data Movement Activities = $0.166 (Prorated for 10 minutes of execution time. $0.25/hour on Azure Integration Runtime)
     - Pipeline Activity = $0.00003 (Prorated for 1 minute of execution time. $0.002/hour on Azure Integration Runtime)
     - External Pipeline Activity = $0.000041 (Prorated for 10 minutes of execution time. $0.00025/hour on Azure Integration Runtime)
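The prorated charges in the scenario above follow directly from the stated hourly rates; a quick check:

```python
# The two smallest prorated charges in this scenario, from the stated hourly rates.
pipeline_activity = 0.002 * (1 / 60)     # 1 minute at $0.002/hour
external_activity = 0.00025 * (10 / 60)  # 10 minutes at $0.00025/hour, truncated to $0.000041
print(f"{pipeline_activity:.5f}")        # 0.00003
# Summing the table's line items reproduces the scenario total (~ $0.17020):
total = (11 * 0.00001 + 4 * 0.000005) + 0.004 + 0.166 + 0.00003 + 0.000041
```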
@@ -168,10 +168,10 @@ To accomplish the scenario, you need to create a pipeline with the following ite
 **Total Scenario pricing: $1.4631**
 
 - Data Factory Operations = **$0.0001**
-    - Read/Write = 10\*00001 = $0.0001 [1 R/W = $0.50/50000 = 0.00001]
-    - Monitoring = 2\*000005 = $0.00001 [1 Monitoring = $0.25/50000 = 0.000005]
+    - Read/Write = 10\*0.00001 = $0.0001 [1 R/W = $0.50/50000 = 0.00001]
+    - Monitoring = 2\*0.000005 = $0.00001 [1 Monitoring = $0.25/50000 = 0.000005]
 - Pipeline Orchestration & Execution = **$1.463**
-    - Activity Runs = 001\*2 = 0.002 [1 run = $1/1000 = 0.001]
+    - Activity Runs = 0.001\*2 = $0.002 [1 run = $1/1000 = 0.001]
     - Data Flow Activities = $1.461 prorated for 20 minutes (10 mins execution time + 10 mins TTL). $0.274/hour on Azure Integration Runtime with 16 cores general compute
 
 ## Data integration in Azure Data Factory Managed VNET
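The data flow charge in the scenario above is worth unpacking: the stated $1.461 matches $0.274 applied per vCore-hour across all 16 cores for the full 20 billed minutes, which is how the rate is read in this sketch:

```python
# Data flow charge: 16 vCores for 20 minutes (10 min run + 10 min TTL)
# at $0.274 per vCore-hour (reading the table's "$0.274/hour ... 16 cores" as per-vCore).
data_flow = 16 * 0.274 * (20 / 60)
print(round(data_flow, 3))  # 1.461
total = (10 * 0.00001 + 2 * 0.000005) + 2 * 0.001 + round(data_flow, 3)
print(round(total, 4))      # 1.4631, matching "Total Scenario pricing: $1.4631"
```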
@@ -199,10 +199,10 @@ To accomplish the scenario, you need to create two pipelines with the following
 **Total Scenario pricing: $1.45523**
 
 - Data Factory Operations = $0.00023
-    - Read/Write = 20*00001 = $0.0002 [1 R/W = $0.50/50000 = 0.00001]
-    - Monitoring = 6*000005 = $0.00003 [1 Monitoring = $0.25/50000 = 0.000005]
+    - Read/Write = 20*0.00001 = $0.0002 [1 R/W = $0.50/50000 = 0.00001]
+    - Monitoring = 6*0.000005 = $0.00003 [1 Monitoring = $0.25/50000 = 0.000005]
 - Pipeline Orchestration & Execution = $1.455
-    - Activity Runs = 0.001*6 = 0.006 [1 run = $1/1000 = 0.001]
+    - Activity Runs = 0.001*6 = $0.006 [1 run = $1/1000 = 0.001]
     - Data Movement Activities = $0.333 (Prorated for 10 minutes of execution time. $0.25/hour on Azure Integration Runtime)
     - Pipeline Activity = $1.116 (Prorated for 7 minutes of execution time plus 60 minutes TTL. $1/hour on Azure Integration Runtime)
 
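The Managed VNET scenario above differs from the earlier ones mainly in that the pipeline activity is billed for execution time plus the 60-minute TTL; a sketch of the arithmetic (the $0.333 copy charge is consistent with two 10-minute copies at $0.25/DIU-hour with an assumed default of 4 DIUs, which the table does not state):

```python
# Managed VNET scenario: pipeline activity is billed for execution plus TTL.
pipeline_activity = 1.0 * (7 + 60) / 60   # $1/hour for 67 min = 1.1167, shown truncated as $1.116
operations = 20 * 0.00001 + 6 * 0.000005  # Read/Write + Monitoring = $0.00023
# $0.333 ~= two 10-minute copies at $0.25/DIU-hour (default 4 DIUs assumed)
total = operations + 6 * 0.001 + 0.333 + 1.116
print(round(total, 5))  # 1.45523
```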
articles/dms/tutorial-postgresql-azure-postgresql-online-portal.md

Lines changed: 2 additions & 2 deletions
@@ -38,9 +38,9 @@ In this tutorial, you learn how to:
 
 To complete this tutorial, you need to:
 
-* Download and install [PostgreSQL community edition](https://www.postgresql.org/download/) 9.4, 9.5, 9.6, or 10. The source PostgreSQL Server version must be 9.4, 9.5, 9.6, 10, 11 or 12. For more information, see the article [Supported PostgreSQL Database Versions](../postgresql/concepts-supported-versions.md).
+* Download and install [PostgreSQL community edition](https://www.postgresql.org/download/) 9.4, 9.5, 9.6, or 10. The source PostgreSQL Server version must be 9.4, 9.5, 9.6, 10, 11, 12, or 13. For more information, see [Supported PostgreSQL database versions](../postgresql/concepts-supported-versions.md).
 
-  Also note that the target Azure Database for PostgreSQL version must be equal to or later than the on-premises PostgreSQL version. For example, PostgreSQL 9.6 can migrate to Azure Database for PostgreSQL 9.6, 10, or 11, but not to Azure Database for PostgreSQL 9.5. Migrations to PostgreSQL 13+ are not supported at this time.
+  Also note that the target Azure Database for PostgreSQL version must be equal to or later than the on-premises PostgreSQL version. For example, PostgreSQL 9.6 can migrate to Azure Database for PostgreSQL 9.6, 10, or 11, but not to Azure Database for PostgreSQL 9.5.
 
 * [Create an Azure Database for PostgreSQL server](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server](../postgresql/quickstart-create-hyperscale-portal.md).
 * Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
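The version constraint in the prerequisite above (target equal to or later than the source, with the listed supported source versions) can be expressed as a small check. This is a hypothetical helper for illustration only, not part of Azure Database Migration Service:

```python
def can_migrate(source: str, target: str) -> bool:
    """Hypothetical check of the rule in the prerequisite: the target
    Azure Database for PostgreSQL version must be equal to or later
    than the on-premises source version, and the source must be a
    supported version (9.4 through 13 per the updated text)."""
    supported_sources = {"9.4", "9.5", "9.6", "10", "11", "12", "13"}
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return source in supported_sources and as_tuple(source) <= as_tuple(target)

# The article's example: 9.6 can migrate to 9.6, 10, or 11, but not back to 9.5.
print(can_migrate("9.6", "10"))   # True
print(can_migrate("9.6", "9.5"))  # False
```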
