- Remove example boilerplate that checks whether environment variables exist. The examples aren't executable, so the checks did nothing except distract readers.
- Switch more eligible examples over to the openai/v1 endpoint.
- Address valid lint warnings.
// Example_audioTranscription demonstrates how to transcribe speech to text using Azure OpenAI's Whisper model.
@@ -22,30 +24,38 @@ import (
 // The example uses environment variables for configuration:
 // - AOAI_WHISPER_ENDPOINT: Your Azure OpenAI endpoint URL
 // - AOAI_WHISPER_MODEL: The deployment name of your Whisper model
+// - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.
 //
 // Audio transcription is useful for accessibility features, creating searchable archives of audio content,
 // generating captions or subtitles, and enabling voice commands in applications.
 // The example uses environment variables for configuration:
 // - AOAI_TTS_ENDPOINT: Your Azure OpenAI endpoint URL
 // - AOAI_TTS_MODEL: The deployment name of your text-to-speech model
+// - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.
 //
 // Text-to-speech conversion is valuable for creating audiobooks, virtual assistants,
 // accessibility tools, and adding voice interfaces to applications.
 // The example uses environment variables for configuration:
 // - AOAI_WHISPER_ENDPOINT: Your Azure OpenAI endpoint URL
 // - AOAI_WHISPER_MODEL: The deployment name of your Whisper model
+// - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.
 //
 // Speech translation is essential for cross-language communication, creating multilingual content,
 // and building applications that break down language barriers.
// Example_usingAzureContentFiltering demonstrates how to use Azure OpenAI's content filtering capabilities.
@@ -23,27 +25,29 @@ import (
 // The example uses environment variables for configuration:
 // - AOAI_ENDPOINT: Your Azure OpenAI endpoint URL
 // - AOAI_MODEL: The deployment name of your model
+// - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.
 // The example uses environment variables for configuration:
 // - AOAI_ENDPOINT: Your Azure OpenAI endpoint URL
 // - AOAI_MODEL: The deployment name of your model
+// - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions.
 //
 // Streaming with prompt filtering is useful for:
 // - Real-time content moderation
 // - Progressive content delivery
 // - Monitoring content safety during generation
 // - Building responsive applications with content safety checks