`articles/ai-services/agents/how-to/tools/openapi-spec.md` — 1 addition & 13 deletions
@@ -25,7 +25,7 @@ OpenAPI Specified tool improves your function calling experience by providing st
automated, and scalable API integrations that enhance the capabilities and efficiency of your agent.

[OpenAPI specifications](https://spec.openapis.org/oas/latest.html) provide a formal standard for
describing HTTP APIs. This allows people to understand how an API works, how a sequence of APIs
- work together, generate client code, create tests, apply design standards, and more.
+ work together, generate client code, create tests, apply design standards, and more. Currently, we support three authentication types with the OpenAPI 3.0 specified tools: `anonymous`, `API key`, and `managed identity`.

## Set up

1. Ensure you've completed the prerequisites and setup steps in the [quickstart](../../quickstart.md).

@@ -51,18 +51,6 @@ work together, generate client code, create tests, apply design standards, and m
- Connection name: `YOUR_CONNECTION_NAME` (You will use this connection name in the sample code below.)
- Access: you can choose either *this project only* or *shared to all projects*. Just make sure the project whose connection string you use in the sample code below has access to this connection.
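For context, a minimal OpenAPI 3.0 document of the kind such a tool consumes might look like the following sketch. This is illustrative only — the service name, path, and parameter are invented, and an `anonymous`-auth API is assumed:

```yaml
openapi: 3.0.0
info:
  title: Weather API            # illustrative service name
  version: "1.0.0"
paths:
  /weather:
    get:
      operationId: getWeather   # the tool surfaces this as the callable operation
      summary: Get current weather for a city.
      parameters:
        - name: city
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Current weather for the given city.
```

The `operationId` is what a tool typically uses to name the callable operation, so giving each operation a clear, unique ID helps the agent pick the right call.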
`articles/ai-services/agents/how-to/tools/overview.md` — 2 additions & 1 deletion
@@ -6,7 +6,7 @@ services: cognitive-services
manager: nitinme
ms.service: azure
ms.topic: how-to
- ms.date: 12/11/2024
+ ms.date: 12/18/2024
author: aahill
ms.author: aahi
ms.custom: azure-ai-agents

@@ -39,3 +39,4 @@ Agents can access multiple tools in parallel. These can be both Azure OpenAI-hos
|[Code interpreter](./code-interpreter.md)| Enables agents to write and run Python code in a sandboxed execution environment. | ✔️ | ✔️ | ✔️ | ✔️ |
|[Function calling](./function-calling.md)| Allows you to describe the structure of functions to an agent and then return the functions that need to be called along with their arguments. | ✔️ | ✔️ | ✔️ | ✔️ |
|[OpenAPI Specification](./openapi-spec.md)| Connect to an external API using an OpenAPI 3.0 specified tool, allowing for scalable interoperability with various applications. | ✔️ | ✔️ | ✔️ | ✔️ |
+ |[Azure functions](./azure-functions.md)| Use Azure functions to leverage the scalability and flexibility of serverless computing. | ✔️ ||| ✔️ |
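To make the function calling row above concrete, here's a hedged sketch of describing a function's structure and handling the returned call. The tool shape follows the common JSON-schema convention; the function name, arguments, and simulated model output are invented for illustration and not tied to a specific SDK:

```python
import json

# Hypothetical tool definition: a JSON schema describing a function the agent
# can ask the application to call. Name and parameters are illustrative.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# Simulated model output: the function to call plus its arguments. The
# application parses the arguments and dispatches to real code.
tool_call = {"name": "get_weather", "arguments": '{"city": "Seattle"}'}
args = json.loads(tool_call["arguments"])  # arguments arrive as a JSON string

print(tool_call["name"], args["city"])  # get_weather Seattle
```

The key design point is that the model never executes anything: it only returns the name of the function to call and JSON arguments, and your code stays in control of the actual invocation.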
@@ -18,7 +18,7 @@ Azure OpenAI Service is powered by a diverse set of models with different capabi

| Models | Description |
|--|--|
- |[o1-preview and o1-mini](#o1-preview-and-o1-mini-models-limited-access)| Limited access models, specifically designed to tackle reasoning and problem-solving tasks with increased focus and capability. |
+ |[o1 & o1-mini](#o1-and-o1-mini-models-limited-access)| Limited access models, specifically designed to tackle reasoning and problem-solving tasks with increased focus and capability. |
|[GPT-4o & GPT-4o mini & GPT-4 Turbo](#gpt-4o-and-gpt-4-turbo)| The latest most capable Azure OpenAI models with multimodal versions, which can accept both text and images as input. |
|[GPT-4o-Realtime-Preview](#gpt-4o-realtime-preview)| A GPT-4o model that supports low-latency, "speech in, speech out" conversational interactions. |
|[GPT-4](#gpt-4)| A set of models that improve on GPT-3.5 and can understand and generate natural language and code. |
@@ -28,200 +28,33 @@ Azure OpenAI Service is powered by a diverse set of models with different capabi
|[Whisper](#whisper-models)| A series of models in preview that can transcribe and translate speech to text. |
|[Text to speech](#text-to-speech-models-preview) (Preview) | A series of models in preview that can synthesize text to speech. |

- ## o1-preview and o1-mini models limited access
+ ## o1 and o1-mini models limited access

- The Azure OpenAI `o1-preview` and `o1-mini` models are specifically designed to tackle reasoning and problem-solving tasks with increased focus and capability. These models spend more time processing and understanding the user's request, making them exceptionally strong in areas like science, coding, and math compared to previous iterations.
+ The Azure OpenAI `o1` and `o1-mini` models are specifically designed to tackle reasoning and problem-solving tasks with increased focus and capability. These models spend more time processing and understanding the user's request, making them exceptionally strong in areas like science, coding, and math compared to previous iterations.
| Model ID | Description | Max Request (tokens) | Training Data (up to) |
| --- | :--- |:--- |:---: |
- |`o1-preview` (2024-09-12) | The most capable model in the o1 series, offering enhanced reasoning abilities.| Input: 128,000 <br> Output: 32,768 | Oct 2023 |
+ |`o1` (2024-12-17) | The most capable model in the o1 series, offering enhanced reasoning abilities. <br> **Request access: [limited access model application](https://aka.ms/OAI/o1access)** <br> - Structured outputs<br> - Text, image processing <br> - Functions/Tools <br> | Input: 200,000 <br> Output: 100,000 ||
+ |`o1-preview` (2024-09-12) | Older preview version | Input: 128,000 <br> Output: 32,768 | Oct 2023 |
|`o1-mini` (2024-09-12) | A faster and more cost-efficient option in the o1 series, ideal for coding tasks requiring speed and lower resource consumption.| Input: 128,000 <br> Output: 65,536 | Oct 2023 |

### Availability

- The `o1-preview` and `o1-mini` models are now available for API access and model deployment. **Registration is required, and access will be granted based on Microsoft's eligibility criteria**.
+ The `o1` and `o1-mini` models are now available for API access and model deployment. **Registration is required, and access will be granted based on Microsoft's eligibility criteria**. Customers who previously applied and received access to `o1-preview` don't need to reapply; they are automatically on the waitlist for the latest model.

- Request access: [limited access model application](https://aka.ms/oai/modelaccess)
+ Request access: [limited access model application](https://aka.ms/OAI/o1access)

- Once access has been granted, you will need to create a deployment for each model.
+ Once access has been granted, you will need to create a deployment for each model. If you have an existing `o1-preview` deployment, upgrading it in place is currently not supported; you will need to create a new deployment.
- ### API support
-
- Support for the **o1 series** models was added in API version `2024-09-01-preview`.
-
- The `max_tokens` parameter has been deprecated and replaced with the new `max_completion_tokens` parameter. **o1 series** models will only work with the `max_completion_tokens` parameter.
-
- ### Usage
-
- These models don't currently support the same set of parameters as other models that use the chat completions API. Only a very limited subset is currently supported, so common parameters like `temperature` and `top_p` aren't available, and including them will cause your request to fail. `o1-preview` and `o1-mini` models also won't accept the system message role as part of the messages array.
-
- You may need to upgrade your version of the OpenAI Python library to take advantage of the new `max_completion_tokens` parameter.
-
- ```cmd
- pip install openai --upgrade
- ```
-
- If you are new to using Microsoft Entra ID for authentication, see [How to configure Azure OpenAI Service with Microsoft Entra ID authentication](../how-to/managed-identity.md).
-
- ```python
- from openai import AzureOpenAI
- from azure.identity import DefaultAzureCredential, get_bearer_token_provider
-     model="o1-preview-new", # replace with the model deployment name of your o1-preview, or o1-mini model
-     messages=[
-         {"role": "user", "content": "What steps should I think about when writing my first Python API?"},
-     ],
-     max_completion_tokens=5000
- )
-
- print(response.model_dump_json(indent=2))
- ```
- # [Output](#tab/python-output)
-
- ```json
- {
-   "id": "chatcmpl-AEj7pKFoiTqDPHuxOcirA9KIvf3yz",
-   "choices": [
-     {
-       "finish_reason": "stop",
-       "index": 0,
-       "logprobs": null,
-       "message": {
"content": "Writing your first Python API is an exciting step in developing software that can communicate with other applications. An API (Application Programming Interface) allows different software systems to interact with each other, enabling data exchange and functionality sharing. Here are the steps you should consider when creating your first Python API:\n\n1. **Define the Purpose and Requirements**\n\n - **Identify the Functionality**: Clearly outline what your API is supposed to do. What data or services will it provide to the users?\n - **Determine the Endpoints**: Plan the different URLs (endpoints) through which users can access the API functionalities.\n - **Specify Request and Response Formats**: Decide on the data formats (usually JSON) for incoming requests and outgoing responses.\n\n2. **Choose the Right Framework**\n\n Python offers several frameworks for building APIs. Two of the most popular are:\n\n - **Flask**: A lightweight and flexible web framework, great for small to medium-sized APIs.\n - **FastAPI**: A modern, high-performance framework for building APIs with Python 3.6+ types, offering automatic interactive documentation.\n\n **Example**:\n ```bash\n pip install flask\n ```\n or\n ```bash\n pip install fastapi uvicorn\n ```\n\n3. **Set Up the Development Environment**\n\n - **Create a Virtual Environment**: Isolate your project dependencies using `venv` or `conda`.\n - **Install Required Packages**: Ensure all necessary libraries and packages are installed.\n\n **Example**:\n ```bash\n python -m venv env\n source env/bin/activate # On Windows use `env\\Scripts\\activate`\n ```\n\n4. 
**Implement the API Endpoints**\n\n - **Write the Code for Each Endpoint**: Implement the logic that handles requests and returns responses.\n - **Use Decorators to Define Routes**: In frameworks like Flask, you use decorators to specify the URL endpoints.\n\n **Example with Flask**:\n ```python\n from flask import Flask, request, jsonify\n\n app = Flask(__name__)\n\n @app.route('/hello', methods=['GET'])\n def hello_world():\n return jsonify({'message': 'Hello, World!'})\n\n if __name__ == '__main__':\n app.run(debug=True)\n ```\n\n5. **Handle Data Serialization and Deserialization**\n\n - **Parsing Incoming Data**: Use libraries to parse JSON or other data formats from requests.\n - **Formatting Output Data**: Ensure that responses are properly formatted in JSON or XML.\n\n6. **Implement Error Handling**\n\n - **Handle Exceptions Gracefully**: Provide meaningful error messages and HTTP status codes.\n - **Validate Input Data**: Check for required fields and appropriate data types to prevent errors.\n\n **Example**:\n ```python\n @app.errorhandler(404)\n def resource_not_found(e):\n return jsonify(error=str(e)), 404\n ```\n\n7. **Add Authentication and Authorization (If Necessary)**\n\n - **Secure Endpoints**: If your API requires, implement security measures such as API keys, tokens (JWT), or OAuth.\n - **Manage User Sessions**: Handle user login states and permissions appropriately.\n\n8. **Document Your API**\n\n - **Use Tools Like Swagger/OpenAPI**: Automatically generate interactive API documentation.\n - **Provide Usage Examples**: Help users understand how to interact with your API.\n\n **Example with FastAPI**:\n FastAPI automatically generates docs at `/docs` using Swagger UI.\n\n9. 
**Test Your API**\n\n - **Write Unit and Integration Tests**: Ensure each endpoint works as expected.\n - **Use Testing Tools**: Utilize tools like `unittest`, `pytest`, or API testing platforms like Postman.\n\n **Example**:\n ```python\n import unittest\n class TestAPI(unittest.TestCase):\n def test_hello_world(self):\n response = app.test_client().get('/hello')\n self.assertEqual(response.status_code, 200)\n ```\n\n10. **Optimize Performance**\n\n - **Improve Response Times**: Optimize your code and consider using asynchronous programming if necessary.\n - **Manage Resource Utilization**: Ensure your API can handle the expected load.\n\n11. **Deploy Your API**\n\n - **Choose a Hosting Platform**: Options include AWS, Heroku, DigitalOcean, etc.\n - **Configure the Server**: Set up the environment to run your API in a production setting.\n - **Use a Production Server**: Instead of the development server, use WSGI servers like Gunicorn or Uvicorn.\n\n **Example**:\n ```bash\n uvicorn main:app --host 0.0.0.0 --port 80\n ```\n\n12. **Monitor and Maintain**\n\n - **Logging**: Implement logging to track events and errors.\n - **Monitoring**: Use monitoring tools to track performance and uptime.\n - **Update and Patch**: Keep dependencies up to date and patch any security vulnerabilities.\n\n13. **Consider Versioning**\n\n - **Plan for Updates**: Use versioning in your API endpoints to manage changes without breaking existing clients.\n - **Example**:\n ```python\n @app.route('/v1/hello', methods=['GET'])\n ```\n\n14. 
**Gather Feedback and Iterate**\n\n - **User Feedback**: Encourage users to provide feedback on your API.\n - **Continuous Improvement**: Use the feedback to make improvements and add features.\n\n**Additional Tips**:\n\n- **Keep It Simple**: Start with a minimal viable API and expand functionality over time.\n- **Follow RESTful Principles**: Design your API according to REST standards to make it intuitive and standard-compliant.\n- **Security Best Practices**: Always sanitize inputs and protect against common vulnerabilities like SQL injection and cross-site scripting (XSS).\nBy following these steps, you'll be well on your way to creating a functional and robust Python API. Good luck with your development!",
-         "refusal": null,
-         "role": "assistant",
-         "function_call": null,
-         "tool_calls": null
-       },
-       "content_filter_results": {
-         "hate": { "filtered": false, "severity": "safe" },
-         "protected_material_code": { "filtered": false, "detected": false },
-         "protected_material_text": { "filtered": false, "detected": false },
-         "self_harm": { "filtered": false, "severity": "safe" },
-         "sexual": { "filtered": false, "severity": "safe" },
-         "violence": { "filtered": false, "severity": "safe" }
-       }
-     }
-   ],
-   "created": 1728073417,
-   "model": "o1-preview-2024-09-12",
-   "object": "chat.completion",
-   "service_tier": null,
-   "system_fingerprint": "fp_503a95a7d8",
-   "usage": {
-     "completion_tokens": 1843,
-     "prompt_tokens": 20,
-     "total_tokens": 1863,
-     "completion_tokens_details": { "audio_tokens": null, "reasoning_tokens": 448 },
-     "prompt_tokens_details": { "audio_tokens": null, "cached_tokens": 0 }
-   },
-   "prompt_filter_results": [
-     {
-       "prompt_index": 0,
-       "content_filter_results": {
-         "custom_blocklists": { "filtered": false },
-         "hate": { "filtered": false, "severity": "safe" },
-         "jailbreak": { "filtered": false, "detected": false },
-         "self_harm": { "filtered": false, "severity": "safe" },
-         "sexual": { "filtered": false, "severity": "safe" },
-         "violence": { "filtered": false, "severity": "safe" }
-       }
-     }
-   ]
- }
- ```
-
- ---
+ To learn more about the advanced `o1` series models, see [getting started with o1 series reasoning models](../how-to/reasoning.md).
### Region availability

- Available for standard and global standard deployment in East US, East US2, North Central US, South Central US, Sweden Central, West US, and West US3 for approved customers.
+ | Model | Region |
+ |---|---|
+ |`o1`| East US2 (Global Standard) <br> Sweden Central (Global Standard) |
+ |`o1-preview`| See the [models table](#global-standard-model-availability). |
+ |`o1-mini`| See the [models table](#global-provisioned-managed-model-availability). |
`articles/ai-services/openai/how-to/gpt-with-vision.md` — 3 additions & 3 deletions
@@ -15,7 +15,7 @@ manager: nitinme

Vision-enabled chat models are large multimodal models (LMMs) developed by OpenAI that can analyze images and provide textual responses to questions about them. They incorporate both natural language processing and visual understanding. The current vision-enabled models are GPT-4 Turbo with Vision, GPT-4o, and GPT-4o-mini.

- The vision-enabled models answer general questions about what's present in the images or videos you upload.
+ The vision-enabled models answer general questions about what's present in the images you upload.

> [!TIP]
> To use vision-enabled models, you call the Chat Completion API on a supported model that you have deployed. If you're not familiar with the Chat Completion API, see the [Vision-enabled chat how-to guide](/azure/ai-services/openai/how-to/chatgpt?tabs=python&pivots=programming-language-chat-completions).
@@ -290,7 +290,7 @@ Every response includes a `"finish_reason"` field. It has the following possible
- `length`: Incomplete model output due to the `max_tokens` input parameter or model's token limit.
- `content_filter`: Omitted content due to a flag from our content filters.
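The `finish_reason` values above can be handled without a live deployment. This sketch hand-writes a response fragment (no real API call; the truncated content string is invented) and maps `finish_reason` to a caller-facing status:

```python
# A sample Chat Completions response fragment, hand-written for illustration.
response = {
    "choices": [
        {
            "finish_reason": "length",
            "message": {"content": "The image shows a partial desc"},
        }
    ]
}

def check_finish(choice: dict) -> str:
    """Map finish_reason to a short diagnostic for the caller."""
    reason = choice["finish_reason"]
    if reason == "stop":
        return "complete"
    if reason == "length":
        # Output was cut off by max_tokens or the model's token limit;
        # consider raising max_tokens or shortening the prompt.
        return "truncated"
    if reason == "content_filter":
        return "filtered"
    return f"unknown: {reason}"

status = check_finish(response["choices"][0])
print(status)  # truncated
```

Checking `finish_reason` before using `message.content` is a cheap guard against silently consuming truncated or filtered output.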

<!--
### Create a video retrieval index
@@ -366,7 +366,7 @@ Every response includes a `"finish_reason"` field. It has the following possible

```bash
curl.exe -v -X GET "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index/ingestions?api-version=2023-05-01-preview&$top=20" -H "ocp-apim-subscription-key: <YOUR_SUBSCRIPTION_KEY>"
```
0 commit comments