02_Overview_Cosmos_DB/README.md: 4 additions & 2 deletions
Azure Cosmos DB offers three capacity modes: provisioned throughput, serverless, and autoscale. When creating an Azure Cosmos DB account, it's essential to evaluate the workload's characteristics in order to choose the appropriate mode to optimize both performance and cost efficiency.
[**Serverless mode**](https://learn.microsoft.com/en-us/azure/cosmos-db/serverless) offers a more flexible and pay-as-you-go approach, where only the Request Units consumed are billed. This is particularly advantageous for applications with sporadic or unpredictable usage patterns, as it eliminates the need to provision resources upfront.
[**Provisioned throughput mode**](https://learn.microsoft.com/azure/cosmos-db/set-throughput) allocates a fixed amount of resources, measured in [Request Units per second (RU/s)](https://learn.microsoft.com/azure/cosmos-db/request-units), which is ideal for applications with predictable and steady workloads. This ensures consistent performance and can be more cost-effective when there is a constant or high demand for database operations. RU/s can be set at both the database and container levels, allowing for fine-grained control over resource allocation.
[**Autoscale mode**](https://learn.microsoft.com/azure/cosmos-db/provision-throughput-autoscale) builds upon the provisioned throughput mode but allows the database or container to automatically and instantly scale resources up or down based on demand, ensuring that the application can handle varying workloads efficiently. When configuring autoscale, a maximum throughput (Tmax) threshold is set for a predictable maximum cost. This mode is suitable for applications with fluctuating usage patterns or infrequently used applications.
[**Dynamic scaling**](https://learn.microsoft.com/en-us/azure/cosmos-db/autoscale-per-partition-region) allows for the automatic and independent scaling of non-uniform workloads across regions and partitions according to usage patterns. For instance, in a disaster recovery configuration with two regions, the primary region may experience high traffic while the secondary region can scale down to idle, thereby saving costs. This approach is also highly effective for multi-regional applications, where traffic patterns fluctuate based on the time of day in each region.
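To make the distinction concrete, the following is a minimal sketch using the `azure-cosmos` Python SDK; the account endpoint, key, and database/container names are placeholders, not values from this guide. Provisioned throughput and autoscale are chosen per database or container, while a serverless account simply omits any throughput setting.

```python
from azure.cosmos import CosmosClient, PartitionKey, ThroughputProperties

client = CosmosClient(url="https://<account>.documents.azure.com:443/", credential="<key>")
database = client.create_database_if_not_exists(id="appdata")

# Provisioned throughput: a fixed 400 RU/s allocated to this container
manual = database.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),
    offer_throughput=400,
)

# Autoscale: scales between 10% of Tmax and Tmax (here Tmax = 1000 RU/s)
autoscale = database.create_container_if_not_exists(
    id="events",
    partition_key=PartitionKey(path="/deviceId"),
    offer_throughput=ThroughputProperties(auto_scale_max_throughput=1000),
)

# Serverless accounts take no throughput argument at all; billing is per RU consumed
```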
07_Create_First_Cosmos_DB_Project/README.md: 41 additions & 12 deletions
The `azure-cosmos` library is used to create a Cosmos DB API for NoSQL database client. The client enables both DDL (data definition language) and DML (data manipulation language) operations.
The `create_database` method is used to create a database. If the database already exists, an exception is thrown; therefore, verify whether the database already exists before creating it.
The `create_container_if_not_exists` method is used to create a container. If the container already exists, the method will retrieve the existing container.
One method of creating a document is the `create_item` method. This method takes a single document and inserts it into the container; if the item already exists in the container, an exception is thrown. Alternatively, the `upsert_item` method can be used to insert a document, and in this case, if the document already exists, it is updated.
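As a minimal sketch (the endpoint, key, and database/container names below are placeholders), the client, database, and container might be created like this before any documents are written:

```python
from azure.cosmos import CosmosClient, PartitionKey
from azure.cosmos.exceptions import CosmosResourceExistsError

client = CosmosClient(url="https://<account>.documents.azure.com:443/", credential="<key>")

# create_database raises if the database already exists, so guard the call
try:
    database = client.create_database(id="cosmicworks")
except CosmosResourceExistsError:
    database = client.get_database_client("cosmicworks")

# create_container_if_not_exists returns the existing container when present
container = database.create_container_if_not_exists(
    id="products",
    partition_key=PartitionKey(path="/categoryId"),
)
```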
```python
# Create a document
container.upsert_item(product_dict)
```
### Reading documents
The `read_item` method can be used to retrieve a single document if both the `id` value and `partition_key` value are known. Otherwise, the `query_items` method can be used to retrieve a list of documents using a [SQL-like query](https://learn.microsoft.com/azure/cosmos-db/nosql/tutorial-query).
```python
items = container.query_items(query="SELECT * FROM prod", enable_cross_partition_query=True)
```
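For a point read, a sketch (the `id` and partition key values are placeholders) looks like:

```python
# Point read: fetch a single document when both the id and partition key are known
product = container.read_item(item="1", partition_key="accessories")
```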
### Deleting a document
The `delete_item` method is used to delete a document from the container.
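A sketch, again with placeholder values:

```python
# Remove a single document by id and partition key
container.delete_item(item="1", partition_key="accessories")
```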
Azure Cosmos DB automatically indexes all properties for all items in a container. However, the creation of additional indexes can improve performance and add functionality such as spatial querying and vector search.
The following indexes are supported by Azure Cosmos DB:
The **Range Index** supports efficient execution of queries involving numerical and string data types. It is optimized for inequality comparisons (<, <=, >, >=) and sorting operations. Range indexes are particularly useful for time-series data, financial applications, and any scenario that requires filtering or sorting over a numeric range or alphabetically ordered strings.
The **Spatial Index** excels with geospatial data types such as points, lines, and polygons. Spatial queries include operations such as finding intersections, conducting proximity searches, and handling bounding-box queries. Spatial indexes are crucial for applications that require geographic information system (GIS) capabilities, location-based services, and asset tracking.
The **Composite Index** combines multiple properties into a single index entry, optimizing complex queries that use multiple properties for filtering and sorting. Composite indexes significantly improve the performance of multidimensional queries by reducing the number of request units (RUs) consumed during these operations.
The **Vector Index** is specialized for high-dimensional vector data. Use cases include similarity searches, recommendation systems, and any other application requiring efficient handling of high-dimensional vectors. This index type optimizes the storage and retrieval of vectors typically utilized in AI application patterns such as RAG (Retrieval Augmented Generation).
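As an illustration (the property paths and names below are assumptions, not taken from this guide), a custom indexing policy with a composite index can be supplied as a Python dictionary when the container is created:

```python
from azure.cosmos import PartitionKey

# Illustrative policy: index all paths and add one composite index for a common filter/sort pair
indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/*"}],
    "excludedPaths": [{"path": "/\"_etag\"/?"}],
    "compositeIndexes": [
        [
            {"path": "/categoryName", "order": "ascending"},
            {"path": "/price", "order": "descending"},
        ]
    ],
}

# 'database' is the database proxy created earlier in this lab
container = database.create_container_if_not_exists(
    id="products",
    partition_key=PartitionKey(path="/categoryId"),
    indexing_policy=indexing_policy,
)
```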
Learn more about indexing in the [Azure documentation](https://learn.microsoft.com/azure/cosmos-db/index-overview).
Labs/lab_0_explore_and_use_models.ipynb: 3 additions & 19 deletions
"When integrating Azure OpenAI service in a solution written in Python, the OpenAI Python client library is used. This library is maintained by OpenAI, and is compatible with the Azure OpenAI service.\n",
"When using the OpenAI client library, the Azure OpenAI `key` and `endpoint` for the service are needed. In this case, ensure the Azure OpenAI `key` and `endpoint` is located in a `.env` file in the root of this project, you will need to create this file. The `.env` file should contain the following values (replace the value with your own `key` and `endpoint`):\n",