Commit 949422a

Merge pull request #245730 from MicrosoftDocs/main
7/20/2023 PM Publish
2 parents b7bf710 + 98a98ca commit 949422a

249 files changed: 3,902 additions and 5,273 deletions


.openpublishing.redirection.json

Lines changed: 10 additions & 0 deletions
@@ -13338,6 +13338,16 @@
       "redirect_url": "/azure/governance/policy/samples/index",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/governance/policy/samples/PCIv3_2_1_2018_audit.md",
+      "redirect_url": "/azure/governance/policy/samples/pci-dss-3-2-1",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/governance/policy/samples/pci_dss_v4.0.md",
+      "redirect_url": "/azure/governance/policy/samples/pci-dss-4-0",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/azure-policy/create-manage-policy.md",
       "redirect_url": "/azure/governance/policy/tutorials/create-and-manage",

articles/active-directory/fundamentals/whats-new-sovereign-clouds.md

Lines changed: 47 additions & 0 deletions
@@ -21,6 +21,53 @@ Azure AD receives improvements on an ongoing basis. To stay up to date with the

This page updates monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Sovereign Clouds](whats-new-archive.md).

+## June 2023
+
+### General Availability - Apply RegEx Replace to groups claim content
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** SSO
+
+Today, when group claims are added to tokens, Azure Active Directory attempts to include all of the groups the user is a member of. In larger organizations where users are members of hundreds of groups, this can often exceed the limits of what can go in the token. This feature makes connecting apps to Azure Active Directory easier and more robust: it allows the set of groups included in the token to be limited to only those that are assigned to the application. For more information, see: [Regex-based claims transformation](../develop/saml-claims-customization.md#regex-based-claims-transformation).
+
+---
+
+### General Availability - Azure Active Directory SSO integration with Cisco Unified Communications Manager
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** Platform
+
+Cisco Unified Communications Manager (Unified CM) provides reliable, secure, scalable, and manageable call control and session management. When you integrate Cisco Unified Communications Manager with Azure Active Directory, you can:
+
+- Control in Azure Active Directory who has access to Cisco Unified Communications Manager.
+- Enable your users to be automatically signed in to Cisco Unified Communications Manager with their Azure AD accounts.
+- Manage your accounts in one central location - the Azure portal.
+
+For more information, see: [Azure Active Directory SSO integration with Cisco Unified Communications Manager](../saas-apps/cisco-unified-communications-manager-tutorial.md).
+
+---
+
+### General Availability - Number Matching for Microsoft Authenticator notifications
+
+**Type:** Plan for Change
+**Service category:** Microsoft Authenticator App
+**Product capability:** User Authentication
+
+Microsoft Authenticator app’s number matching feature has been Generally Available since November 2022. If you haven't already used the rollout controls (via the Azure portal Admin UX and MSGraph APIs) to smoothly deploy number matching for users of Microsoft Authenticator push notifications, we highly encourage you to do so. We previously announced that we would remove the admin controls and enforce the number match experience tenant-wide for all users of Microsoft Authenticator push notifications starting February 27, 2023. After listening to customers, we extended the availability of the rollout controls for a few more weeks. Organizations can continue to use the existing rollout controls until May 8, 2023, to deploy number matching in their organizations. Microsoft services will start enforcing the number matching experience for all users of Microsoft Authenticator push notifications after May 8, 2023. We'll also remove the rollout controls for number matching after that date.
+
+If customers don’t enable number match for all Microsoft Authenticator push notifications prior to May 8, 2023, Authenticator users may experience inconsistent sign-ins while the services are rolling out this change. To ensure consistent behavior for all users, we highly recommend you enable number match for Microsoft Authenticator push notifications in advance.
+
+For more information, see: [How to use number matching in multifactor authentication (MFA) notifications - Authentication methods policy](../authentication/how-to-mfa-number-match.md)
+
+---
+
 ## May 2023

 ### General Availability - Admins can now restrict users from self-service accessing their BitLocker keys

articles/ai-services/openai/concepts/models.md

Lines changed: 4 additions & 5 deletions
@@ -5,7 +5,7 @@ description: Learn about the different model capabilities that are available wit
 ms.service: cognitive-services
 ms.subservice: openai
 ms.topic: conceptual
-ms.date: 07/12/2023
+ms.date: 07/20/2023
 ms.custom: event-tier1-build-2022, references_regions, build-2023, build-2023-dataai
 manager: nitinme
 author: mrbullwinkle #ChrisHMSFT
@@ -65,11 +65,10 @@ Currently, we offer three families of Embeddings models for different functional

 The DALL-E models, currently in preview, generate images from text prompts that the user provides.

-
 ## Model summary table and region availability

 > [!IMPORTANT]
-> South Central US is temporarily unavailable for creating new resources due to high demand.
+> South Central US and East US are temporarily unavailable for creating new resources and deployments due to high demand.

 ### GPT-4 models

@@ -92,8 +91,8 @@ GPT-3.5 Turbo is used with the Chat Completion API. GPT-3.5 Turbo (0301) can als
 | Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
 | --------- | --------------------- | ------------------- | -------------------- | ---------------------- |
 | `gpt-35-turbo`<sup>1</sup> (0301) | East US, France Central, South Central US, UK South, West Europe | N/A | 4,096 | Sep 2021 |
-| `gpt-35-turbo` (0613) | East US, France Central, UK South | N/A | 4,096 | Sep 2021 |
-| `gpt-35-turbo-16k` (0613) | East US, France Central, UK South | N/A | 16,384 | Sep 2021 |
+| `gpt-35-turbo` (0613) | East US, France Central, Japan East, North Central US, UK South | N/A | 4,096 | Sep 2021 |
+| `gpt-35-turbo-16k` (0613) | East US, France Central, Japan East, North Central US, UK South | N/A | 16,384 | Sep 2021 |

 <sup>1</sup> Version `0301` of gpt-35-turbo will be retired on January 4, 2024. See [model updates](#model-updates) for model upgrade behavior.

articles/ai-services/openai/faq.yml

Lines changed: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ sections:
   - question: |
       I am trying to use embeddings and received the error "InvalidRequestError: Too many inputs. The max number of inputs is 1." How do I fix this?
     answer: |
-      This error typically occurs when you try to send a batch of text to embed in a single API request as an array. Currently Azure OpenAI does not support batching with embedding requests. Embeddings API calls should consist of a single string input per request. The string can be up to 8191 tokens in length when using the text-embedding-ada-002 (Version 2) model.
+      This error typically occurs when you try to send a batch of text to embed in a single API request as an array. Currently Azure OpenAI only supports arrays of embeddings with multiple inputs for the `text-embedding-ada-002` Version 2 model. This model version supports an array consisting of up to 16 inputs per API request. The array can be up to 8191 tokens in length when using the text-embedding-ada-002 (Version 2) model.
   - question: |
       Where can I read about better ways to use Azure OpenAI to get the responses I want from the service?
     answer: |
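
A minimal sketch of the fix described in the updated answer, assuming the pre-1.0 `openai` Python package and an embeddings deployment named `text-embedding-ada-002` (as in the snippets elsewhere in this commit); the endpoint, key, and API version below are placeholders:

```python
import openai

# Assumed Azure OpenAI configuration -- replace with your own values.
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"
openai.api_version = "2023-05-15"  # adjust if your resource targets a different version
openai.api_key = "YOUR-API-KEY"

# Up to 16 strings per request with text-embedding-ada-002 (Version 2).
inputs = ["first text", "second text", "third text"]

response = openai.Embedding.create(
    input=inputs,                            # an array, not a single string
    deployment_id="text-embedding-ada-002",  # name of your embeddings deployment
)

# One embedding vector is returned per input, in the same order.
vectors = [item["embedding"] for item in response["data"]]
print(len(vectors), "embeddings of dimension", len(vectors[0]))
```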

articles/ai-services/openai/how-to/quota.md

Lines changed: 6 additions & 1 deletion
@@ -8,14 +8,19 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: openai
 ms.topic: how-to
-ms.date: 07/18/2023
+ms.date: 07/20/2023
 ms.author: mbullwin
 ---

 # Manage Azure OpenAI Service quota

 Quota provides the flexibility to actively manage the allocation of rate limits across the deployments within your subscription. This article walks through the process of managing your Azure OpenAI quota.

+## Prerequisites
+
+> [!IMPORTANT]
+> Quota requires the **Cognitive Services Usages Reader** role. This role provides the minimal access necessary to view quota usage across an Azure subscription. This role can be found in the Azure portal under **Subscriptions** > **Access control (IAM)** > **Add role assignment** > search for **Cognitive Services Usages Reader**.
+
 ## Introduction to quota

 Azure OpenAI's quota feature enables assignment of rate limits to your deployments, up to a global limit called your “quota.” Quota is assigned to your subscription on a per-region, per-model basis in units of **Tokens-per-Minute (TPM)**. When you onboard a subscription to Azure OpenAI, you'll receive default quota for most available models. Then, you'll assign TPM to each deployment as it is created, and the available quota for that model will be reduced by that amount. You can continue to create deployments and assign them TPM until you reach your quota limit. Once that happens, you can only create new deployments of that model by reducing the TPM assigned to other deployments of the same model (thus freeing TPM for use), or by requesting and being approved for a model quota increase in the desired region.
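
As a companion to the new prerequisite above, a minimal sketch of reading usage programmatically with a principal that holds the Cognitive Services Usages Reader role, assuming the Azure Resource Manager `usages` endpoint for `Microsoft.CognitiveServices` and an `api-version` of `2023-05-01`; the subscription ID, region, and API version are assumptions to adjust for your environment:

```python
import requests
from azure.identity import DefaultAzureCredential

# Assumed values -- replace with your subscription ID and region.
subscription_id = "00000000-0000-0000-0000-000000000000"
location = "eastus"

# Acquire an ARM token; the caller needs the Cognitive Services Usages Reader role.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/providers/Microsoft.CognitiveServices/locations/{location}/usages"
)
resp = requests.get(
    url,
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2023-05-01"},  # assumed; use the version supported in your environment
)
resp.raise_for_status()

# Each entry reports a usage name, current value, and limit (TPM for Azure OpenAI models).
for usage in resp.json().get("value", []):
    print(usage["name"]["value"], usage["currentValue"], "/", usage["limit"])
```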

articles/ai-services/openai/how-to/switching-endpoints.md

Lines changed: 11 additions & 12 deletions
@@ -8,7 +8,7 @@ ms.service: cognitive-services
 ms.subservice: openai
 ms.custom: devx-track-python
 ms.topic: how-to
-ms.date: 05/24/2023
+ms.date: 07/20/2023
 manager: nitinme
 ---

@@ -161,9 +161,9 @@ embedding = openai.Embedding.create(
 </tr>
 </table>

-## Azure OpenAI embeddings doesn't support multiple inputs
+## Azure OpenAI embeddings multiple input support

-Many examples show passing multiple inputs into the embeddings API. For Azure OpenAI, currently we must pass a single text input per call.
+OpenAI currently allows a larger number of array inputs with text-embedding-ada-002. Azure OpenAI currently supports input arrays up to 16 for text-embedding-ada-002 Version 2. Both require the max input token limit per API request to remain under 8191 for this model.

 <table>
 <tr>
@@ -173,7 +173,7 @@ Many examples show passing multiple inputs into the embeddings API. For Azure Op
 <td>

 ```python
-inputs = ["A", "B", "C"]
+inputs = ["A", "B", "C"]

 embedding = openai.Embedding.create(
     input=inputs,
@@ -187,14 +187,13 @@ embedding = openai.Embedding.create(
 <td>

 ```python
-inputs = ["A", "B", "C"]
-
-for text in inputs:
-    embedding = openai.Embedding.create(
-        input=text,
-        deployment_id="text-embedding-ada-002"
-        #engine="text-embedding-ada-002"
-    )
+inputs = ["A", "B", "C"] #max array size=16
+
+embedding = openai.Embedding.create(
+    input=inputs,
+    deployment_id="text-embedding-ada-002"
+    #engine="text-embedding-ada-002"
+)
 ```

 </td>
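
For workloads with more than 16 strings, a minimal batching sketch under the same assumptions as the snippets above (pre-1.0 `openai` Python package, a deployment named `text-embedding-ada-002`, placeholder endpoint and key):

```python
import openai

# Assumed Azure OpenAI configuration -- replace with your own values.
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR-API-KEY"

MAX_ARRAY_SIZE = 16  # current per-request input limit for text-embedding-ada-002 Version 2

def embed_all(texts):
    """Embed an arbitrarily long list of strings in batches of up to 16 inputs."""
    vectors = []
    for start in range(0, len(texts), MAX_ARRAY_SIZE):
        batch = texts[start:start + MAX_ARRAY_SIZE]
        response = openai.Embedding.create(
            input=batch,
            deployment_id="text-embedding-ada-002",
        )
        # Results come back in the same order as the batch.
        vectors.extend(item["embedding"] for item in response["data"])
    return vectors

embeddings = embed_all([f"document {i}" for i in range(40)])
print(len(embeddings))  # 40
```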

articles/ai-services/openai/whats-new.md

Lines changed: 6 additions & 2 deletions
@@ -8,7 +8,7 @@ ms.author: mbullwin
 ms.service: cognitive-services
 ms.subservice: openai
 ms.topic: whats-new
-ms.date: 06/12/2023
+ms.date: 07/20/2023
 recommendations: false
 keywords:
 ---
@@ -17,10 +17,14 @@ keywords:

 ## July 2023

-### Support for function calling
+### Support for function calling

 - [Azure OpenAI now supports function calling](./how-to/function-calling.md) to enable you to work with functions in the chat completions API.

+### Embedding input array increase
+
+- Azure OpenAI now [supports arrays with up to 16 inputs](./how-to/switching-endpoints.md#azure-openai-embeddings-multiple-input-support) per API request with text-embedding-ada-002 Version 2.
+
 ## June 2023

 ### Use Azure OpenAI on your own data (preview)
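
To illustrate the function calling entry added above, a minimal sketch assuming the pre-1.0 `openai` Python package, a chat deployment named `gpt-35-turbo`, and the `2023-07-01-preview` API version; the function definition, endpoint, and key are placeholders:

```python
import json
import openai

# Assumed Azure OpenAI configuration -- replace with your own values.
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"
openai.api_version = "2023-07-01-preview"  # assumed; function calling needs a version that supports it
openai.api_key = "YOUR-API-KEY"

# A hypothetical function the model may ask your code to call.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # assumed chat deployment name
    messages=[{"role": "user", "content": "What's the weather in Seattle?"}],
    functions=functions,
    function_call="auto",
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns the function name and JSON arguments for your code to execute.
    args = json.loads(message["function_call"]["arguments"])
    print("Call", message["function_call"]["name"], "with", args)
else:
    print(message["content"])
```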

articles/api-management/api-management-howto-app-insights.md

Lines changed: 1 addition & 1 deletion
@@ -90,7 +90,7 @@ You can specify loggers on different levels:

 Specifying *both*:
 - By default, the single API logger (more granular level) overrides the one for all APIs.
-- If the loggers configured at the two levels are different, and you need both loggers to receive telemetry (multiplexing), please contact Microsoft Support.
+- If the loggers configured at the two levels are different, and you need both loggers to receive telemetry (multiplexing), contact Microsoft Support. Multiplexing isn't supported if you're using the same logger (Application Insights destination) at the "All APIs" level and the single API level. For multiplexing to work correctly, you must configure different loggers at the "All APIs" and individual API levels and request assistance from Microsoft Support to enable multiplexing for your service.

 ## What data is added to Application Insights


articles/api-management/cosmosdb-data-source-policy.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ author: dlepow

 ms.service: api-management
 ms.custom: devx-track-azurecli
-ms.topic: reference
+ms.topic: article
 ms.date: 06/07/2023
 ms.author: danlep
 ---

articles/api-management/http-data-source-policy.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ services: api-management
 author: dlepow

 ms.service: api-management
-ms.topic: reference
+ms.topic: article
 ms.date: 03/07/2023
 ms.author: danlep
 ---
