**`articles/ai-services/containers/azure-container-instance-recipe.md`** (+1 -1)
```diff
@@ -14,7 +14,7 @@ ms.author: aahi
 #Customer intent: As a potential customer, I want to know more about how Azure AI services provides and supports Docker containers for each service.
 ---
-# Deploy and run container on Azure Container Instance
+# Deploy and run containers on Azure Container Instance

 With the following steps, scale Azure AI services applications in the cloud easily with Azure [Container Instances](/azure/container-instances/). Containerization helps you focus on building your applications instead of managing the infrastructure. For more information on using containers, see [features and benefits](../cognitive-services-container-support.md#features-and-benefits).
```
**`articles/ai-services/content-safety/how-to/embedded-content-safety.md`** (+2 -4)
```diff
@@ -81,14 +81,12 @@ Memory load – An embedded content safety text analysis process consumes about
 ### SDK parameters that can impact performance
-Below SDK parameters can impact the inference time of the embedded content safety model.
+The following SDK parameters can impact the inference time of the embedded content safety model.

 `gpuEnabled` Set as **true** to enable GPU; otherwise, CPU is used. Generally, inference time is shorter on GPU.
 `numThreads` This parameter only works for CPU. It defines the number of threads to be used in a multi-threaded environment. We support a maximum of four threads.

-See next section to for the performance benchmark data on popular PC CPUs and GPUs
-
-
+See the next section for performance benchmark data on popular PC CPUs and GPUs.
```
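As an illustration of the `numThreads` guidance, a caller running on CPU might derive the thread count from the machine's core count while respecting the four-thread ceiling the SDK supports. This is a hypothetical helper, not part of the content safety SDK itself; the option names in the comment are illustrative only:

```python
import os

# The embedded content safety SDK supports at most four CPU threads.
MAX_SDK_THREADS = 4

def choose_num_threads() -> int:
    """Pick a numThreads value: all available cores, capped at the SDK maximum."""
    cores = os.cpu_count() or 1
    return min(cores, MAX_SDK_THREADS)

# The result would then be passed wherever the SDK accepts its options,
# e.g. something like {"gpuEnabled": False, "numThreads": choose_num_threads()}.
print(choose_num_threads())
```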
### Performance benchmark data on popular CPUs and GPUs
```diff
-Azure AI Document Intelligence is an Azure AI service that lets you build automated data processing software using machine-learning technology. Document Intelligence enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your documents. The results are delivered as structured data that includes the relationships in the original file.
+Azure AI Document Intelligence is an Azure AI service that lets you build automated data processing software using machine-learning technology. Document Intelligence enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your documents. The results are delivered as structured data that includes the relationships in the original file. Containers process only the data provided to them and solely utilize the resources they are permitted to access. Containers cannot process data from other regions.

 In this article you can learn how to download, install, and run Document Intelligence containers. Containers enable you to run the Document Intelligence service in your own environment. Containers are great for specific security and data governance requirements.
```
**`articles/ai-services/language-service/whats-new.md`** (+5 -1)
```diff
@@ -7,7 +7,7 @@ author: jboback
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: whats-new
-ms.date: 08/22/2024
+ms.date: 10/04/2024
 ms.author: jboback
 ---
@@ -19,6 +19,10 @@ Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
 * Custom Summarization has been discontinued and is no longer available in the Studio and documentation.

+## August 2024
+* The [CLU utterance limit in a project](conversational-language-understanding/service-limits.md#data-limits) increased from 25,000 to 50,000.
+* A [new version of the CLU training configuration, `2024-08-01-preview`, is now available](conversational-language-understanding/concepts/best-practices.md#address-out-of-domain-utterances), which improves the quality of intent identification for out-of-domain utterances.
+
 ## July 2024
 * The [Conversational PII redaction](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/announcing-conversational-pii-detection-service-s-general/ba-p/4162881) service in English-language contexts is now Generally Available (GA).
```
**`articles/ai-services/openai/concepts/models.md`** (+170 -2)
````diff
@@ -49,7 +49,175 @@ Once access has been granted, you will need to create a deployment for each mode
 Support for the **o1 series** models was added in API version `2024-09-01-preview`.

-The `max_tokens` parameter has been deprecated and replaced with the new `max_completion_tokens` parameter. **o1 series** models will only work with the `max_completions_tokens` parameter.
+The `max_tokens` parameter has been deprecated and replaced with the new `max_completion_tokens` parameter. **o1 series** models will only work with the `max_completion_tokens` parameter.
+
+### Usage
+
+These models don't currently support the same set of parameters as other models that use the chat completions API. Only a very limited subset is currently supported, so common parameters like `temperature` and `top_p` aren't available, and including them will cause your request to fail. The `o1-preview` and `o1-mini` models also won't accept the system message role as part of the messages array.
+
+You may need to upgrade your version of the OpenAI Python library to take advantage of the new `max_completion_tokens` parameter.
+
+```cmd
+pip install openai --upgrade
+```
+
+If you are new to using Microsoft Entra ID for authentication, see [How to configure Azure OpenAI Service with Microsoft Entra ID authentication](../how-to/managed-identity.md).
+
+```python
+from openai import AzureOpenAI
+from azure.identity import DefaultAzureCredential, get_bearer_token_provider
+# ... (the intervening lines of this hunk were not captured in this view)
+    model="o1-preview-new", # replace with the model deployment name of your o1-preview, or o1-mini model
+    messages=[
+        {"role": "user", "content": "What steps should I think about when writing my first Python API?"},
+    ],
+    max_completion_tokens=5000
+)
+
+print(response.model_dump_json(indent=2))
+```
````
````diff
+# [Output](#tab/python-output)
+
+```json
+{
+  "id": "chatcmpl-AEj7pKFoiTqDPHuxOcirA9KIvf3yz",
+  "choices": [
+    {
+      "finish_reason": "stop",
+      "index": 0,
+      "logprobs": null,
+      "message": {
+        "content": "Writing your first Python API is an exciting step in developing software that can communicate with other applications. An API (Application Programming Interface) allows different software systems to interact with each other, enabling data exchange and functionality sharing. Here are the steps you should consider when creating your first Python API:\n\n1. **Define the Purpose and Requirements**\n\n - **Identify the Functionality**: Clearly outline what your API is supposed to do. What data or services will it provide to the users?\n - **Determine the Endpoints**: Plan the different URLs (endpoints) through which users can access the API functionalities.\n - **Specify Request and Response Formats**: Decide on the data formats (usually JSON) for incoming requests and outgoing responses.\n\n2. **Choose the Right Framework**\n\n Python offers several frameworks for building APIs. Two of the most popular are:\n\n - **Flask**: A lightweight and flexible web framework, great for small to medium-sized APIs.\n - **FastAPI**: A modern, high-performance framework for building APIs with Python 3.6+ types, offering automatic interactive documentation.\n\n **Example**:\n ```bash\n pip install flask\n ```\n or\n ```bash\n pip install fastapi uvicorn\n ```\n\n3. **Set Up the Development Environment**\n\n - **Create a Virtual Environment**: Isolate your project dependencies using `venv` or `conda`.\n - **Install Required Packages**: Ensure all necessary libraries and packages are installed.\n\n **Example**:\n ```bash\n python -m venv env\n source env/bin/activate # On Windows use `env\\Scripts\\activate`\n ```\n\n4. **Implement the API Endpoints**\n\n - **Write the Code for Each Endpoint**: Implement the logic that handles requests and returns responses.\n - **Use Decorators to Define Routes**: In frameworks like Flask, you use decorators to specify the URL endpoints.\n\n **Example with Flask**:\n ```python\n from flask import Flask, request, jsonify\n\n app = Flask(__name__)\n\n @app.route('/hello', methods=['GET'])\n def hello_world():\n return jsonify({'message': 'Hello, World!'})\n\n if __name__ == '__main__':\n app.run(debug=True)\n ```\n\n5. **Handle Data Serialization and Deserialization**\n\n - **Parsing Incoming Data**: Use libraries to parse JSON or other data formats from requests.\n - **Formatting Output Data**: Ensure that responses are properly formatted in JSON or XML.\n\n6. **Implement Error Handling**\n\n - **Handle Exceptions Gracefully**: Provide meaningful error messages and HTTP status codes.\n - **Validate Input Data**: Check for required fields and appropriate data types to prevent errors.\n\n **Example**:\n ```python\n @app.errorhandler(404)\n def resource_not_found(e):\n return jsonify(error=str(e)), 404\n ```\n\n7. **Add Authentication and Authorization (If Necessary)**\n\n - **Secure Endpoints**: If your API requires, implement security measures such as API keys, tokens (JWT), or OAuth.\n - **Manage User Sessions**: Handle user login states and permissions appropriately.\n\n8. **Document Your API**\n\n - **Use Tools Like Swagger/OpenAPI**: Automatically generate interactive API documentation.\n - **Provide Usage Examples**: Help users understand how to interact with your API.\n\n **Example with FastAPI**:\n FastAPI automatically generates docs at `/docs` using Swagger UI.\n\n9. **Test Your API**\n\n - **Write Unit and Integration Tests**: Ensure each endpoint works as expected.\n - **Use Testing Tools**: Utilize tools like `unittest`, `pytest`, or API testing platforms like Postman.\n\n **Example**:\n ```python\n import unittest\n class TestAPI(unittest.TestCase):\n def test_hello_world(self):\n response = app.test_client().get('/hello')\n self.assertEqual(response.status_code, 200)\n ```\n\n10. **Optimize Performance**\n\n - **Improve Response Times**: Optimize your code and consider using asynchronous programming if necessary.\n - **Manage Resource Utilization**: Ensure your API can handle the expected load.\n\n11. **Deploy Your API**\n\n - **Choose a Hosting Platform**: Options include AWS, Heroku, DigitalOcean, etc.\n - **Configure the Server**: Set up the environment to run your API in a production setting.\n - **Use a Production Server**: Instead of the development server, use WSGI servers like Gunicorn or Uvicorn.\n\n **Example**:\n ```bash\n uvicorn main:app --host 0.0.0.0 --port 80\n ```\n\n12. **Monitor and Maintain**\n\n - **Logging**: Implement logging to track events and errors.\n - **Monitoring**: Use monitoring tools to track performance and uptime.\n - **Update and Patch**: Keep dependencies up to date and patch any security vulnerabilities.\n\n13. **Consider Versioning**\n\n - **Plan for Updates**: Use versioning in your API endpoints to manage changes without breaking existing clients.\n - **Example**:\n ```python\n @app.route('/v1/hello', methods=['GET'])\n ```\n\n14. **Gather Feedback and Iterate**\n\n - **User Feedback**: Encourage users to provide feedback on your API.\n - **Continuous Improvement**: Use the feedback to make improvements and add features.\n\n**Additional Tips**:\n\n- **Keep It Simple**: Start with a minimal viable API and expand functionality over time.\n- **Follow RESTful Principles**: Design your API according to REST standards to make it intuitive and standard-compliant.\n- **Security Best Practices**: Always sanitize inputs and protect against common vulnerabilities like SQL injection and cross-site scripting (XSS).\nBy following these steps, you'll be well on your way to creating a functional and robust Python API. Good luck with your development!",
+        "refusal": null,
+        "role": "assistant",
+        "function_call": null,
+        "tool_calls": null
+      },
+      "content_filter_results": {
+        "hate": {
+          "filtered": false,
+          "severity": "safe"
+        },
+        "protected_material_code": {
+          "filtered": false,
+          "detected": false
+        },
+        "protected_material_text": {
+          "filtered": false,
+          "detected": false
+        },
+        "self_harm": {
+          "filtered": false,
+          "severity": "safe"
+        },
+        "sexual": {
+          "filtered": false,
+          "severity": "safe"
+        },
+        "violence": {
+          "filtered": false,
+          "severity": "safe"
+        }
+      }
+    }
+  ],
+  "created": 1728073417,
+  "model": "o1-preview-2024-09-12",
+  "object": "chat.completion",
+  "service_tier": null,
+  "system_fingerprint": "fp_503a95a7d8",
+  "usage": {
+    "completion_tokens": 1843,
+    "prompt_tokens": 20,
+    "total_tokens": 1863,
+    "completion_tokens_details": {
+      "audio_tokens": null,
+      "reasoning_tokens": 448
+    },
+    "prompt_tokens_details": {
+      "audio_tokens": null,
+      "cached_tokens": 0
+    }
+  },
+  "prompt_filter_results": [
+    {
+      "prompt_index": 0,
+      "content_filter_results": {
+        "custom_blocklists": {
+          "filtered": false
+        },
+        "hate": {
+          "filtered": false,
+          "severity": "safe"
+        },
+        "jailbreak": {
+          "filtered": false,
+          "detected": false
+        },
+        "self_harm": {
+          "filtered": false,
+          "severity": "safe"
+        },
+        "sexual": {
+          "filtered": false,
+          "severity": "safe"
+        },
+        "violence": {
+          "filtered": false,
+          "severity": "safe"
+        }
+      }
+    }
+  ]
+}
+```
+
+---
````
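Because the o1 series rejects common chat completions parameters such as `temperature` and `top_p`, and accepts only `max_completion_tokens` in place of the deprecated `max_tokens`, a thin client-side guard can normalize request arguments before the call is made. This is a minimal sketch under stated assumptions, not part of any SDK; the unsupported-parameter list here is only the subset named in this article:

```python
# Parameters this article says o1-series models reject; strip them before sending.
UNSUPPORTED_O1_PARAMS = {"temperature", "top_p"}

def prepare_o1_kwargs(**kwargs) -> dict:
    """Drop unsupported parameters and map deprecated max_tokens to max_completion_tokens."""
    cleaned = {k: v for k, v in kwargs.items() if k not in UNSUPPORTED_O1_PARAMS}
    if "max_tokens" in cleaned:
        cleaned["max_completion_tokens"] = cleaned.pop("max_tokens")
    return cleaned

# The cleaned kwargs could then be splatted into client.chat.completions.create(...).
print(prepare_o1_kwargs(temperature=0.7, max_tokens=5000))
```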
```diff
 ### Region availability
@@ -196,7 +364,7 @@ You can also use the OpenAI text to speech voices via Azure AI Speech. To learn
-This table doesn't include fine-tuning regional availability information. Consult the the [fine-tuning section](#fine-tuning-models) for this information.
+This table doesn't include fine-tuning regional availability information. Consult the [fine-tuning section](#fine-tuning-models) for this information.

 For information on default quota, refer to the [quota and limits article](../quotas-limits.md).
```
**`articles/ai-services/openai/how-to/batch.md`** (+3 -1)
```diff
@@ -83,7 +83,9 @@ In the Studio UI the deployment type will appear as `Global-Batch`.
 :::image type="content" source="../media/how-to/global-batch/global-batch.png" alt-text="Screenshot that shows the model deployment dialog in Azure OpenAI Studio with Global-Batch deployment type highlighted." lightbox="../media/how-to/global-batch/global-batch.png":::

 > [!TIP]
 > Each line of your input file for batch processing has a `model` attribute that requires a global batch **deployment name**. For a given input file, all names must be the same deployment name. This is different from OpenAI where the concept of model deployments does not exist.
+>
+> For the best performance we recommend submitting large files for batch processing, rather than a large number of small files with only a few lines in each file.
```