Commit edca1cc

committed
scale cosmos on a schedule
1 parent 4b4748e commit edca1cc

File tree

3 files changed: +30 −0 lines

articles/cosmos-db/TOC.yml

Lines changed: 3 additions & 0 deletions
@@ -867,6 +867,9 @@
   - name: Find request unit charge
     displayName: request units, RUs, RU, charge, consumption
     href: find-request-unit-charge.md
+  - name: Scale up-down using Azure Functions Timer
+    displayName: request units, RUs, RU, timer
+    href: scale-on-schedule.md
   - name: Work with containers and items
     items:
       - name: Work with Cosmos DB data

articles/cosmos-db/index.yml

Lines changed: 2 additions & 0 deletions
@@ -152,6 +152,8 @@ landingContent:
     url: how-to-provision-container-throughput.md
   - text: Get the request unit charges
     url: find-request-unit-charge.md
+  - text: Scale using Azure Functions Timer
+    url: scale-on-schedule.md

   # Card
   - title: Build an app with SQL API
articles/cosmos-db/scale-on-schedule.md

Lines changed: 25 additions & 0 deletions
@@ -0,0 +1,25 @@
---
title: Scale Azure Cosmos DB on a schedule
description: Learn how to schedule changes to throughput in Azure Cosmos DB using PowerShell and Azure Functions.
author: markjbrown
ms.service: cosmos-db
ms.topic: conceptual
ms.date: 01/07/2020
ms.author: mjbrown
---
# Scale Azure Cosmos DB using Azure Functions Timer Trigger

Azure Cosmos DB performance is based on the amount of provisioned throughput, expressed in Request Units per second (RU/s). Throughput is provisioned at per-second granularity and billed based on the highest RU/s provisioned in each hour. This provisioned capacity model enables the service to deliver predictable and consistent throughput, guaranteed low latency, and high availability. Most production workloads need that consistency. However, in development and testing environments, where Azure Cosmos DB is often used only during working hours, throughput can be scaled up in the morning and scaled back down in the evening after working hours to reduce costs.
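The billing effect of this pattern can be sketched by summing the highest RU/s provisioned in each hour of a day. The throughput figures below are hypothetical, chosen only for illustration; the 8 AM-6 PM window matches the sample project's default schedule described later in this article:

```python
# Hypothetical illustration: Azure Cosmos DB bills on the highest RU/s
# provisioned during each hour, so scaling down outside working hours
# reduces the billed RU-hours. The RU/s figures are examples, not values
# from the sample project.

WORK_HOURS = range(8, 18)   # 8 AM-6 PM UTC
HIGH_RUS = 10_000           # hypothetical daytime throughput
LOW_RUS = 1_000             # hypothetical nighttime throughput

def provisioned_rus(hour_utc: int) -> int:
    """Return the RU/s provisioned during a given UTC hour."""
    return HIGH_RUS if hour_utc in WORK_HOURS else LOW_RUS

# Billed RU-hours per day, with and without scheduled scaling.
scaled = sum(provisioned_rus(h) for h in range(24))
flat = HIGH_RUS * 24
print(scaled, flat)  # scheduled scaling bills less than half of flat provisioning
```

With these example numbers, the scheduled configuration bills 114,000 RU-hours per day versus 240,000 for flat provisioning.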
Throughput can be set via [Azure Resource Manager (ARM) templates](resource-manager-samples.md), [Azure CLI](cli-samples.md), [PowerShell](powershell-samples-sql.md), or, for Core (SQL) API accounts, the Azure Cosmos DB SDKs. The benefit of using ARM templates, Azure CLI, or PowerShell is that they support all Azure Cosmos DB API models.
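As a sketch of the PowerShell option, a Core (SQL) API container's throughput can be updated with the `Az.CosmosDB` module's `Update-AzCosmosDBSqlContainerThroughput` cmdlet. The resource names and RU/s value below are placeholders:

```powershell
# Requires the Az.CosmosDB module and an authenticated Azure session.
# All names and the throughput value are placeholders for illustration.
Update-AzCosmosDBSqlContainerThroughput `
    -ResourceGroupName "my-resource-group" `
    -AccountName "my-cosmos-account" `
    -DatabaseName "my-database" `
    -Name "my-container" `
    -Throughput 10000
```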
## Azure Cosmos DB throughput scheduler sample project

To simplify the process of scaling Azure Cosmos DB on a schedule, we've created a sample project, [Azure Cosmos Throughput Scheduler](https://github.com/Azure-Samples/azure-cosmos-throughput-scheduler). This project is an Azure Functions app with two timer triggers, ScaleUpTrigger and ScaleDownTrigger. Each trigger runs a PowerShell script that sets the throughput on every resource defined in that trigger's `scale.json` file. By default, ScaleUpTrigger runs at 8 AM UTC and ScaleDownTrigger runs at 6 PM UTC; the schedules can easily be changed in the `function.json` file for each trigger.
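An Azure Functions timer trigger's schedule is an NCRONTAB expression in its `function.json`. A binding that fires daily at 8 AM UTC might look like the following; this is a sketch of a standard timer binding, not copied from the sample project:

```json
{
  "bindings": [
    {
      "name": "Timer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 8 * * *"
    }
  ]
}
```

NCRONTAB expressions have six fields (seconds, minutes, hours, day, month, day-of-week), so `0 0 8 * * *` fires at 08:00:00 every day.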
This project can be cloned locally and modified to specify the Azure Cosmos DB resources to scale up and down and the schedule on which to run. It can then be deployed in an Azure subscription and secured using Managed Service Identity with [role-based access control](role-based-access-control.md) (RBAC), using the Cosmos DB Operator role to set throughput on your Azure Cosmos accounts.
## Next steps

- Learn more and download the sample: [Azure Cosmos Throughput Scheduler](https://github.com/Azure-Samples/azure-cosmos-throughput-scheduler).
