Commit 1566897: update readme for openai tests (Azure#40330)

* update readme for openai tests
* few updates
* python

Parent: 741181a
1 file changed: sdk/openai/azure-openai/README.md (+77 −1 lines)
# Azure OpenAI testing placeholder

> Note: This is not a real package. Its purpose is to run tests that validate Azure OpenAI endpoints with the official [Python SDK](https://github.com/openai/openai-python).

## Prerequisites

- Python >= 3.9
- `pip install -r dev_requirements.txt`

## How to run tests

You can run the tests using pytest. The tests are located in the `tests` directory.

- To run all tests, use the following command:

  ```bash
  pytest tests
  ```

- To run a specific test file, use the following command:

  ```bash
  pytest tests/test_file.py
  ```

- To run a specific test, use the following command:

  ```bash
  pytest tests/test_file.py -k test_name
  ```

## How to write a test

- Test configuration is found in `conftest.py`. This contains the variables that control the API versions we test against, the models we use, and the Azure OpenAI resources we use.
- The `configure` decorator sets up the client and passes the necessary kwargs through to the test function. The client it builds is based on the `api_type` passed in `pytest.mark.parametrize`.
- `build_kwargs` in `conftest.py` builds the kwargs for the test and is where the model name should be set. Any new test file requires that its kwargs are configured to include which model name to use for Azure vs. OpenAI (the names can differ, depending on what the deployment was called in Azure).
- The anatomy of a test is explained below.

```python
import pytest
import openai

from devtools_testutils import AzureRecordedTestCase  # we don't record, but this gives us access to nice helpers
from conftest import (
    GPT_4_AZURE,   # Maps to Azure resource with gpt-4* model deployed
    GPT_4_OPENAI,  # Maps to OpenAI testing with gpt-4* model
    configure,     # Configures the client and necessary kwargs for the test
    PREVIEW,       # Maps to the latest preview version of the API
    STABLE,        # Maps to the latest stable version of the API
)

@pytest.mark.live_test_only  # test is live only
class TestFeature(AzureRecordedTestCase):
    @configure  # creates the client and passes the kwargs through to the test
    @pytest.mark.parametrize(  # parametrizes the test to run with Azure and OpenAI clients
        "api_type, api_version",
        [(GPT_4_AZURE, PREVIEW), (GPT_4_OPENAI, "v1")],  # list[tuple(api_type, api_version), ...]
    )
    def test_responses(self, client: openai.AzureOpenAI | openai.OpenAI, api_type, api_version, **kwargs):
        # call the API feature(s) to test
        response = client.responses.create(
            input="Hello, how are you?",
            **kwargs,  # model is passed through kwargs
        )

        # test response assertions
        assert response.id is not None
        assert response.created_at is not None
        assert response.model
        assert response.object == "response"
        assert response.status in ["completed", "incomplete", "failed", "in_progress"]
        assert response.usage.input_tokens is not None
        assert response.usage.output_tokens is not None
```
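
The `model` kwarg in the example above comes from `build_kwargs`. As a rough sketch of that mapping (the constants and deployment names below are hypothetical, not the actual contents of `conftest.py`), the idea is:

```python
# Hypothetical sketch of the api_type -> model mapping done by build_kwargs.
# The real helper in conftest.py may differ; names here are for illustration.
GPT_4_AZURE = "gpt_4_azure"
GPT_4_OPENAI = "gpt_4_openai"

# Azure deployment names often differ from OpenAI model names, depending on
# what the deployment was called when it was created in Azure.
MODEL_NAMES = {
    GPT_4_AZURE: "my-gpt-4-deployment",  # hypothetical Azure deployment name
    GPT_4_OPENAI: "gpt-4",               # OpenAI model name
}

def build_kwargs(api_type: str) -> dict:
    """Build the kwargs passed through to the test for a given api_type."""
    return {"model": MODEL_NAMES[api_type]}
```

This is why every new test file needs its kwargs configured: the same logical test must resolve to different model identifiers for Azure and OpenAI runs.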

## Other testing info

- [Live pipeline](https://dev.azure.com/azure-sdk/internal/_build?definitionId=6157) and [weekly pipeline](https://dev.azure.com/azure-sdk/internal/_build?definitionId=6158) for testing Azure OpenAI endpoints.
- Tests for each feature should run against the latest stable (if supported) and preview versions of Azure OpenAI.
- Parity testing with non-Azure OpenAI is done weekly in the weekly pipeline.
- Tests are live only; there are no recordings/playback for tests.
- By default, Entra ID is used to authenticate the Azure client.
- Test pipeline configuration and environment variables are found in [tests.yml](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/openai/tests.yml).
