articles/communication-services/concepts/ai.md (16 additions, 15 deletions)

@@ -11,9 +11,9 @@ ms.topic: conceptual
 ms.service: azure-communication-services
 ---
 
-# Artificial intelligence overview
+# Artificial intelligence (AI) overview
 
-AI technologies are useful for many communication experiences. AI can help humans communicate better and accomplish their mission more efficiently, for example, a banking employee may use an AI generated meeting summary to help them follow up. AI can reduce human workloads and enable more flexible customer engagement, such as operating a 24/7 phone bot that customers call to check their account balance.
+Artificial intelligence (AI) technologies are useful for many communication experiences. AI can help humans communicate better and accomplish their mission more efficiently. For example, a banking employee might use an AI-generated meeting summary to help them follow up. AI can reduce human workloads and enable more flexible customer engagement, such as operating a 24/7 phone bot that customers call to check their account balance.
 
 More examples include:
 - Operate a chat or voice bot that responds to human conversation.
@@ -33,29 +33,30 @@ This section summarizes features for integrating AI into Azure Communication mes
 
 ### Direct Integrations
 
-- **[Advanced message analysis](../concepts/advanced-messaging/message-analysis/message-analysis.md)** The Azure Communication Services messaging APIs for WhatsApp provide a built-in integration with Azure OpenAI that analyzes and annotates messages. This integration can detect the user's language, recognize their intent, and extract key phrases.
-- **[Azure Bot Service: Chat channel integration](../quickstarts/chat/quickstart-botframework-integration.md)** - The Azure Communication Services chat system is directly integrated with Azure Bot Service. This integration simplifies creating chat bots that engage with human users.
+- **[Advanced message analysis](../concepts/advanced-messaging/message-analysis/message-analysis.md)**: The Azure Communication Services messaging APIs for WhatsApp provide a built-in integration with Azure OpenAI that analyzes and annotates messages. This integration can detect the user's language, recognize their intent, and extract key phrases.
+- **[Azure Bot Service: Chat channel integration](../quickstarts/chat/quickstart-botframework-integration.md)**: The Azure Communication Services chat system is directly integrated with Azure Bot Service. This integration simplifies creating chat bots that engage with human users.
 
 ### Accessors
 All Azure Communication Services messaging capabilities are accessible through REST APIs, server-oriented SDKs, and Event Grid notifications. You can use these SDKs to export content to an external datastore and attach a language model to summarize conversations. Or you can use the SDKs to integrate a bot that directly engages with human users. For example, this [GitHub sample](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/chat-nlp-analysis) shows how Azure Communication Services APIs for chat can be accessed through REST APIs and then analyzed by Azure OpenAI.
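[Editor's note] The export-then-summarize pattern described above can be sketched as follows. This is an illustrative sketch only: the message shape loosely mimics what chat APIs return, and the actual fetch (ACS Chat SDK) and model call (an Azure OpenAI chat-completion deployment) are left out; `build_summary_prompt` is a hypothetical helper, not part of any SDK.

```python
# Hypothetical sketch: turn an exported chat thread into a prompt for a
# summarization model. The message records are illustrative; in a real app
# they would be fetched via the ACS Chat REST APIs or SDKs.

def build_summary_prompt(messages):
    """Flatten chat messages into a transcript and wrap it in a
    summarization instruction for a chat-completion model."""
    transcript = "\n".join(
        f"{m['senderDisplayName']}: {m['content']}" for m in messages
    )
    return (
        "Summarize the following customer conversation in two sentences.\n\n"
        + transcript
    )

messages = [
    {"senderDisplayName": "Customer", "content": "My card was declined."},
    {"senderDisplayName": "Agent", "content": "I can help you check that."},
]

prompt = build_summary_prompt(messages)
# `prompt` would then be sent to an Azure OpenAI chat-completion deployment.
print(prompt)
```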
 
 ## Voice, video, and telephony
 
-This section summarizes features for integrating AI into Azure Communication voice and video calling.
+This section summarizes features for integrating AI into Azure Communication voice and video calling.
 
 ### Direct Integrations
 
-- **[Call Automation REST APIs and SDKs](../concepts/call-automation/call-automation.md)** - Azure Communication Services has simple APIs for [synthesizing](../concepts/call-automation/play-action.md) and [recognizing](../concepts/call-automation/recognize-action.md) speech. The most common scenario for these APIs is implementing voice bots, which is sometimes called interactive voice response (IVR).
-- **[Microsoft Copilot Studio](/microsoft-copilot-studio/voice-overview)** - Copilot Studio is directly integrated with Azure Communication Services telephony. This integration is designed for voice bots and IVR.
-- **[Client captions](../concepts/voice-video-calling/closed-captions.md)** The Calling client SDK provides APIs for real-time closed captions, optimized for accessibility.
-- **[Copilot in the Azure portal](/azure/communication-services/concepts/voice-video-calling/call-diagnostics#copilot-in-azure-for-call-diagnostics)** - You can use Copilot in the Azure portal to ask questions about Azure Communication Services. Copilot uses Azure technical documentation to answer your questions and is best used for asking questions about error codes and API behavior.
-- **[Client background effects](../quickstarts/voice-video-calling/get-started-video-effects.md?pivots=platform-web)** - The Calling client SDKs provide APIs for blurring or replacing a user's background.
-- **[Client noise enhancement and effects](../tutorials/audio-quality-enhancements/add-noise-supression.md?pivots=platform-web)** - The Calling client SDK integrates a [DeepVQE](https://arxiv.org/abs/2306.03177) machine learning model to improve audio quality through echo cancellation and background noise suppression. This transformation is toggled on and off by using the client SDK.
+- **[Call Automation REST APIs and SDKs](../concepts/call-automation/call-automation.md)**: Azure Communication Services has simple APIs for [synthesizing](../concepts/call-automation/play-action.md) and [recognizing](../concepts/call-automation/recognize-action.md) speech. The most common scenario for these APIs is implementing voice bots, which is sometimes called interactive voice response (IVR).
+- **[Microsoft Copilot Studio](/microsoft-copilot-studio/voice-overview)**: Copilot Studio is directly integrated with Azure Communication Services telephony. This integration is designed for voice bots and IVR.
+- **[Client captions](../concepts/voice-video-calling/closed-captions.md)**: The Calling client SDK provides APIs for real-time closed captions, optimized for accessibility.
+- **[Copilot in the Azure portal](/azure/communication-services/concepts/voice-video-calling/call-diagnostics#copilot-in-azure-for-call-diagnostics)**: You can use Copilot in the Azure portal to ask questions about Azure Communication Services. Copilot uses Azure technical documentation to answer your questions and is best used for asking questions about error codes and API behavior.
+- **[Client background effects](../quickstarts/voice-video-calling/get-started-video-effects.md?pivots=platform-web)**: The Calling client SDKs provide APIs for blurring or replacing a user's background.
+- **[Client noise enhancement and effects](../tutorials/audio-quality-enhancements/add-noise-supression.md?pivots=platform-web)**: The Calling client SDK integrates a [DeepVQE](https://arxiv.org/abs/2306.03177) machine learning model to improve audio quality through echo cancellation and background noise suppression. This transformation is toggled on and off by using the client SDK.
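[Editor's note] The play/recognize loop behind an IVR bot can be sketched as a simple routing step. This is a local, hypothetical illustration of the pattern, not the Call Automation SDK surface: in a real app, prompt playback and DTMF tone recognition are performed by the service, and `route_dtmf` would drive the next Call Automation action.

```python
# Illustrative IVR routing sketch (hypothetical, not SDK code):
# after the service plays a menu prompt and recognizes the caller's
# DTMF input, the app decides what to do next.

MENU = {
    "1": "check_balance",
    "2": "report_lost_card",
    "0": "transfer_to_agent",
}

def route_dtmf(tone: str) -> str:
    """Map a recognized DTMF tone to the next call action,
    replaying the menu on unrecognized input."""
    return MENU.get(tone, "replay_menu")

print(route_dtmf("1"))  # check_balance
print(route_dtmf("9"))  # replay_menu
```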
 
 ### Accessors
 Similar to Azure Communication Services messaging, there are REST APIs for many voice and video calling features. However the real-time nature of calling requires closed source SDKs and more complex APIs such as websocket streaming.
 
-- **[Call Automation REST APIs and SDKs](../concepts/call-automation/call-automation.md)** - Services and AI applications use Call Automation REST APIs to answer, route, and manage all types of Azure voice and video calls.
-- **[Service-to-service audio streaming](../concepts/call-automation/audio-streaming-concept.md)** - AI applications use Azure's service-to-service WebSockets API to stream audio data. This works in both directions, your AI can listen to a call, and speak.
-- **[Service-to-service real-time transcription](../concepts/call-automation/real-time-transcription.md)** - AI applications use Azure's service-to-service WebSockets API to stream a real-time, Azure-generated transcription. Compared to audio or video content, transcript data is often easier for AI models to reason upon.
-- **[Client raw audio and video](../concepts/voice-video-calling/media-access.md)** - The Calling client SDK provides APIs for accessing and modifying the raw audio and video feed. An example scenario is taking the video feed, using computer vision to distinguish the human speaker from their background, and customizing that background.
+- **[Call Automation REST APIs and SDKs](../concepts/call-automation/call-automation.md)**: Services and AI applications use Call Automation REST APIs to answer, route, and manage all types of Azure voice and video calls.
+- **[Service-to-service audio streaming](../concepts/call-automation/audio-streaming-concept.md)**: AI applications use Azure's service-to-service WebSockets API to stream audio data. This works in both directions: your AI can listen to a call and speak.
+- **[Service-to-service real-time transcription](../concepts/call-automation/real-time-transcription.md)**: AI applications use Azure's service-to-service WebSockets API to stream a real-time, Azure-generated transcription. Compared to audio or video content, transcript data is often easier for AI models to reason upon.
+- **[Call recording](../concepts/voice-video-calling/call-recording.md)**: You can record Azure calls in your own datastore and then direct AI services to process that content.
+- **[Client raw audio and video](../concepts/voice-video-calling/media-access.md)**: The Calling client SDK provides APIs for accessing and modifying the raw audio and video feed. An example scenario is taking the video feed, using computer vision to distinguish the human speaker from their background, and customizing that background.
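[Editor's note] To give a feel for consuming the streamed transcription, here is a hypothetical sketch of handling transcription frames. The JSON frame shape and field names below are illustrative only, not the documented wire format; a real app would receive these frames over the service-to-service WebSocket connection.

```python
import json

# Hypothetical sketch: accumulate finalized utterances from a streamed
# transcription feed into a transcript an AI model can reason over.
# The frame shape is invented for illustration; consult the real-time
# transcription docs for the actual format.

def collect_final_text(frames, transcript):
    """Append only finalized (non-intermediate) utterances to the
    running transcript, then return it as one string."""
    for raw in frames:
        frame = json.loads(raw)
        if frame.get("resultStatus") == "Final":
            transcript.append(frame["text"])
    return " ".join(transcript)

frames = [
    '{"resultStatus": "Intermediate", "text": "hel"}',
    '{"resultStatus": "Final", "text": "hello, how can I help?"}',
]
print(collect_final_text(frames, []))  # hello, how can I help?
```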
articles/defender-for-iot/organizations/configure-sensor-settings-portal.md (1 addition, 1 deletion)

@@ -187,7 +187,7 @@ Select **Add VLAN** to add more VLANs as needed.
 
 ### Public addresses
 
-Add public addresses that might have been used for internal use and shouldn't be included as suspicious IP addresses or tracking the data.
+Add the public addresses of internal devices into this configuration to ensure that the sensor includes them in the inventory and doesn't treat them as internal communication.
 
 1. In the **Settings** tab, type the **IP address** and **Mask** address.
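[Editor's note] Conceptually, the IP address plus mask pair entered in this setting defines a range of addresses, as the standard-library sketch below shows. This is generic networking illustration, not Defender for IoT code; the example addresses are from the TEST-NET-3 documentation range.

```python
import ipaddress

# Illustration only: an IP address combined with a mask defines the
# range of public addresses the sensor setting covers.
network = ipaddress.ip_network("203.0.113.0/255.255.255.0", strict=False)

print(ipaddress.ip_address("203.0.113.42") in network)   # True
print(ipaddress.ip_address("198.51.100.1") in network)   # False
```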
articles/defender-for-iot/organizations/whats-new.md (1 addition, 1 deletion)

@@ -59,7 +59,7 @@ We now support the OCPI protocol. See [the updated protocol list](concept-suppor
 
 ### New sensor setting type Public addresses
 
-We're adding the **Public addresses** type to the sensor settings, that allows you to exclude public IP addresses that might have been used for internal use and shouldn't be tracked. For more information, see [add sensor settings](configure-sensor-settings-portal.md#add-sensor-settings).
+We're adding the **Public addresses** type to the sensor settings, which allows you to register the public addresses of internal devices and ensure that the sensor doesn't treat them as internal communication. For more information, see [add sensor settings](configure-sensor-settings-portal.md#add-sensor-settings).