
Commit 2c8aea2

chore: sync repo
Parent: d6efb9d

390 files changed (+2030, -1941 lines)

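Nearly all of the 390 touched files are a mechanical identifier rename. As orientation, here is a throwaway script (not part of the repository) capturing the four rename families that appear in the hunks that follow:

```python
# Throwaway illustration: the four identifier families this commit renames,
# as seen in the diff hunks (repo name, package name, env-var prefix, class).
RENAMES = {
    "digitalocean-genai-sdk": "serverless-inference-sdk-prod",  # repo / dist name
    "digitalocean_genai_sdk": "serverless_inference_sdk_prod",  # Python package
    "DIGITALOCEAN_GENAI_SDK": "SERVERLESS_INFERENCE_SDK_PROD",  # env-var prefix
    "DigitaloceanGenaiSDK": "ServerlessInferenceSDKProd",       # client class
}

def apply_renames(text: str) -> str:
    """Apply every rename pair to a line of source text."""
    for old, new in RENAMES.items():
        text = text.replace(old, new)
    return text

print(apply_renames("from digitalocean_genai_sdk import DigitaloceanGenaiSDK"))
# from serverless_inference_sdk_prod import ServerlessInferenceSDKProd
```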

.github/workflows/ci.yml (3 additions & 3 deletions)

````diff
@@ -12,7 +12,7 @@ jobs:
   lint:
     timeout-minutes: 10
     name: lint
-    runs-on: ${{ github.repository == 'stainless-sdks/digitalocean-genai-sdk-python' && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
+    runs-on: ${{ github.repository == 'stainless-sdks/serverless-inference-sdk-prod-python' && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
     steps:
       - uses: actions/checkout@v4

@@ -31,7 +31,7 @@ jobs:
         run: ./scripts/lint

   upload:
-    if: github.repository == 'stainless-sdks/digitalocean-genai-sdk-python'
+    if: github.repository == 'stainless-sdks/serverless-inference-sdk-prod-python'
     timeout-minutes: 10
     name: upload
     permissions:
@@ -57,7 +57,7 @@ jobs:
   test:
     timeout-minutes: 10
     name: test
-    runs-on: ${{ github.repository == 'stainless-sdks/digitalocean-genai-sdk-python' && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
+    runs-on: ${{ github.repository == 'stainless-sdks/serverless-inference-sdk-prod-python' && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
     steps:
       - uses: actions/checkout@v4
````
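The changed `runs-on` lines use GitHub Actions' `cond && a || b` expression idiom, which acts like a ternary as long as the truthy branch is itself truthy: the upstream repo gets a Depot runner, while forks fall back to a GitHub-hosted one. A small Python model of that selection (the function name is ours; the repository string is taken from the diff):

```python
def select_runner(repository: str) -> str:
    # Mirrors ${{ github.repository == 'stainless-sdks/serverless-inference-sdk-prod-python'
    #             && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
    if repository == "stainless-sdks/serverless-inference-sdk-prod-python":
        return "depot-ubuntu-24.04"
    return "ubuntu-latest"

print(select_runner("stainless-sdks/serverless-inference-sdk-prod-python"))  # depot-ubuntu-24.04
print(select_runner("someone/fork"))  # ubuntu-latest
```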

.stats.yml (2 additions & 2 deletions)

````diff
@@ -1,4 +1,4 @@
 configured_endpoints: 126
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/digitalocean%2Fdigitalocean-genai-sdk-bdf24159c6ebb5402d6c05a5165cb1501dc37cf6c664baa9eb318efb0f89dddd.yml
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/digitalocean%2Fserverless-inference-sdk-prod-bdf24159c6ebb5402d6c05a5165cb1501dc37cf6c664baa9eb318efb0f89dddd.yml
 openapi_spec_hash: 686329a97002025d118dc2367755c18d
-config_hash: 39a1554af43cd406e37b5ed5c943649c
+config_hash: 12e97f0629a2a5d6a96762892ecc1b0c
````
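Note what did and did not change in `.stats.yml`: the spec URL was renamed but `openapi_spec_hash` is identical, while `config_hash` changed, so only the Stainless config (not the OpenAPI spec content) actually differs. A quick sanity check of that reading, with values copied from the hunk:

```python
# Before/after values copied from the .stats.yml hunk above.
old = {
    "configured_endpoints": 126,
    "openapi_spec_hash": "686329a97002025d118dc2367755c18d",
    "config_hash": "39a1554af43cd406e37b5ed5c943649c",
}
new = {
    "configured_endpoints": 126,
    "openapi_spec_hash": "686329a97002025d118dc2367755c18d",
    "config_hash": "12e97f0629a2a5d6a96762892ecc1b0c",
}
changed = sorted(key for key in old if old[key] != new[key])
print(changed)  # ['config_hash']
```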

CONTRIBUTING.md (4 additions & 5 deletions)

````diff
@@ -17,8 +17,7 @@ $ rye sync --all-features
 You can then run scripts using `rye run python script.py` or by activating the virtual environment:

 ```sh
-$ rye shell
-# or manually activate - https://docs.python.org/3/library/venv.html#how-venvs-work
+# Activate the virtual environment - https://docs.python.org/3/library/venv.html#how-venvs-work
 $ source .venv/bin/activate

 # now you can omit the `rye run` prefix
@@ -37,7 +36,7 @@ $ pip install -r requirements-dev.lock

 Most of the SDK is generated code. Modifications to code will be persisted between generations, but may
 result in merge conflicts between manual patches and changes from the generator. The generator will never
-modify the contents of the `src/digitalocean_genai_sdk/lib/` and `examples/` directories.
+modify the contents of the `src/serverless_inference_sdk_prod/lib/` and `examples/` directories.

 ## Adding and running examples

@@ -63,7 +62,7 @@ If you’d like to use the repository from source, you can either install from g
 To install via git:

 ```sh
-$ pip install git+ssh://git@github.com/stainless-sdks/digitalocean-genai-sdk-python.git
+$ pip install git+ssh://git@github.com/stainless-sdks/serverless-inference-sdk-prod-python.git
 ```

 Alternatively, you can build from source and install the wheel file:
@@ -121,7 +120,7 @@ the changes aren't made through the automated pipeline, you may want to make rel

 ### Publish with a GitHub workflow

-You can release to package managers by using [the `Publish PyPI` GitHub action](https://www.github.com/stainless-sdks/digitalocean-genai-sdk-python/actions/workflows/publish-pypi.yml). This requires a setup organization or repository secret to be set up.
+You can release to package managers by using [the `Publish PyPI` GitHub action](https://www.github.com/stainless-sdks/serverless-inference-sdk-prod-python/actions/workflows/publish-pypi.yml). This requires a setup organization or repository secret to be set up.

 ### Publish manually
````
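Since the CONTRIBUTING change drops `rye shell` in favor of sourcing `.venv/bin/activate` directly, a quick generic check that activation worked (standard Python behavior, not something the repo ships) is to compare `sys.prefix` against `sys.base_prefix`:

```python
import sys

def in_virtualenv() -> bool:
    # Inside an activated venv, sys.prefix points at the .venv directory,
    # while sys.base_prefix still points at the base interpreter.
    return sys.prefix != sys.base_prefix

print(in_virtualenv())
```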

LICENSE (1 addition & 1 deletion)

````diff
@@ -1,4 +1,4 @@
-Copyright 2025 digitalocean-genai-sdk
+Copyright 2025 serverless-inference-sdk-prod

 Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

````

README.md (46 additions & 58 deletions)

````diff
@@ -1,8 +1,8 @@
-# Digitalocean Genai SDK Python API library
+# Serverless Inference SDK Prod Python API library

-[![PyPI version](https://img.shields.io/pypi/v/digitalocean_genai_sdk.svg)](https://pypi.org/project/digitalocean_genai_sdk/)
+[![PyPI version](https://img.shields.io/pypi/v/serverless_inference_sdk_prod.svg)](https://pypi.org/project/serverless_inference_sdk_prod/)

-The Digitalocean Genai SDK Python library provides convenient access to the Digitalocean Genai SDK REST API from any Python 3.8+
+The Serverless Inference SDK Prod Python library provides convenient access to the Serverless Inference SDK Prod REST API from any Python 3.8+
 application. The library includes type definitions for all request params and response fields,
 and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).

@@ -16,23 +16,23 @@ The REST API documentation can be found on [help.openai.com](https://help.openai

 ```sh
 # install from this staging repo
-pip install git+ssh://git@github.com/stainless-sdks/digitalocean-genai-sdk-python.git
+pip install git+ssh://git@github.com/stainless-sdks/serverless-inference-sdk-prod-python.git
 ```

 > [!NOTE]
-> Once this package is [published to PyPI](https://app.stainless.com/docs/guides/publish), this will become: `pip install --pre digitalocean_genai_sdk`
+> Once this package is [published to PyPI](https://app.stainless.com/docs/guides/publish), this will become: `pip install --pre serverless_inference_sdk_prod`

 ## Usage

 The full API of this library can be found in [api.md](api.md).

 ```python
 import os
-from digitalocean_genai_sdk import DigitaloceanGenaiSDK
+from serverless_inference_sdk_prod import ServerlessInferenceSDKProd

-client = DigitaloceanGenaiSDK(
+client = ServerlessInferenceSDKProd(
     api_key=os.environ.get(
-        "DIGITALOCEAN_GENAI_SDK_API_KEY"
+        "SERVERLESS_INFERENCE_SDK_PROD_API_KEY"
     ), # This is the default and can be omitted
 )

@@ -42,21 +42,21 @@ print(assistants.first_id)

 While you can provide an `api_key` keyword argument,
 we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
-to add `DIGITALOCEAN_GENAI_SDK_API_KEY="My API Key"` to your `.env` file
+to add `SERVERLESS_INFERENCE_SDK_PROD_API_KEY="My API Key"` to your `.env` file
 so that your API Key is not stored in source control.

 ## Async usage

-Simply import `AsyncDigitaloceanGenaiSDK` instead of `DigitaloceanGenaiSDK` and use `await` with each API call:
+Simply import `AsyncServerlessInferenceSDKProd` instead of `ServerlessInferenceSDKProd` and use `await` with each API call:

 ```python
 import os
 import asyncio
-from digitalocean_genai_sdk import AsyncDigitaloceanGenaiSDK
+from serverless_inference_sdk_prod import AsyncServerlessInferenceSDKProd

-client = AsyncDigitaloceanGenaiSDK(
+client = AsyncServerlessInferenceSDKProd(
     api_key=os.environ.get(
-        "DIGITALOCEAN_GENAI_SDK_API_KEY"
+        "SERVERLESS_INFERENCE_SDK_PROD_API_KEY"
     ), # This is the default and can be omitted
 )

````
````diff
@@ -85,25 +85,13 @@ Typed requests and responses provide autocomplete and documentation within your

 Nested parameters are dictionaries, typed using `TypedDict`, for example:

 ```python
-from digitalocean_genai_sdk import DigitaloceanGenaiSDK
+from serverless_inference_sdk_prod import ServerlessInferenceSDKProd

-client = DigitaloceanGenaiSDK()
+client = ServerlessInferenceSDKProd()

 assistant_object = client.assistants.create(
     model="gpt-4o",
-    tool_resources={
-        "code_interpreter": {"file_ids": ["string"]},
-        "file_search": {
-            "vector_store_ids": ["string"],
-            "vector_stores": [
-                {
-                    "chunking_strategy": {"type": "auto"},
-                    "file_ids": ["string"],
-                    "metadata": {"foo": "string"},
-                }
-            ],
-        },
-    },
+    tool_resources={},
 )
 print(assistant_object.tool_resources)
 ```
@@ -114,9 +102,9 @@ Request parameters that correspond to file uploads can be passed as `bytes`, or

 ```python
 from pathlib import Path
-from digitalocean_genai_sdk import DigitaloceanGenaiSDK
+from serverless_inference_sdk_prod import ServerlessInferenceSDKProd

-client = DigitaloceanGenaiSDK()
+client = ServerlessInferenceSDKProd()

 client.audio.transcribe_audio(
     file=Path("/path/to/file"),
````
````diff
@@ -128,27 +116,27 @@ The async client uses the exact same interface. If you pass a [`PathLike`](https

 ## Handling errors

-When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `digitalocean_genai_sdk.APIConnectionError` is raised.
+When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `serverless_inference_sdk_prod.APIConnectionError` is raised.

 When the API returns a non-success status code (that is, 4xx or 5xx
-response), a subclass of `digitalocean_genai_sdk.APIStatusError` is raised, containing `status_code` and `response` properties.
+response), a subclass of `serverless_inference_sdk_prod.APIStatusError` is raised, containing `status_code` and `response` properties.

-All errors inherit from `digitalocean_genai_sdk.APIError`.
+All errors inherit from `serverless_inference_sdk_prod.APIError`.

 ```python
-import digitalocean_genai_sdk
-from digitalocean_genai_sdk import DigitaloceanGenaiSDK
+import serverless_inference_sdk_prod
+from serverless_inference_sdk_prod import ServerlessInferenceSDKProd

-client = DigitaloceanGenaiSDK()
+client = ServerlessInferenceSDKProd()

 try:
     client.assistants.list()
-except digitalocean_genai_sdk.APIConnectionError as e:
+except serverless_inference_sdk_prod.APIConnectionError as e:
     print("The server could not be reached")
     print(e.__cause__) # an underlying Exception, likely raised within httpx.
-except digitalocean_genai_sdk.RateLimitError as e:
+except serverless_inference_sdk_prod.RateLimitError as e:
     print("A 429 status code was received; we should back off a bit.")
-except digitalocean_genai_sdk.APIStatusError as e:
+except serverless_inference_sdk_prod.APIStatusError as e:
     print("Another non-200-range status code was received")
     print(e.status_code)
     print(e.response)
````
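One detail worth noting in the error-handling example above: `except` clauses are checked top to bottom, so the specific `RateLimitError` must be listed before the general `APIStatusError` or the 429 branch would never run. The stub classes below model that ordering; they are stand-ins, not the SDK's real class definitions, though per the README all of the SDK's errors do inherit from `APIError`:

```python
# Stand-in hierarchy mirroring the one the README describes.
class APIError(Exception): ...
class APIConnectionError(APIError): ...
class APIStatusError(APIError):
    def __init__(self, status_code: int):
        super().__init__(f"status {status_code}")
        self.status_code = status_code
class RateLimitError(APIStatusError): ...

def classify(exc: APIError) -> str:
    try:
        raise exc
    except APIConnectionError:
        return "connection"
    except RateLimitError:
        return "rate-limited"  # must precede the APIStatusError clause
    except APIStatusError:
        return "status"

print(classify(RateLimitError(429)))  # rate-limited
print(classify(APIStatusError(500)))  # status
```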
````diff
@@ -176,10 +164,10 @@ Connection errors (for example, due to a network connectivity problem), 408 Requ
 You can use the `max_retries` option to configure or disable retry settings:

 ```python
-from digitalocean_genai_sdk import DigitaloceanGenaiSDK
+from serverless_inference_sdk_prod import ServerlessInferenceSDKProd

 # Configure the default for all requests:
-client = DigitaloceanGenaiSDK(
+client = ServerlessInferenceSDKProd(
     # default is 2
     max_retries=0,
 )
````
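The README does not show the retry internals, but the behavior it describes (retry connection-type failures up to `max_retries` extra attempts) can be sketched generically; the function and delay scheme below are illustrative, not the SDK's actual implementation:

```python
import time

def with_retries(fn, max_retries: int = 2, base_delay: float = 0.0):
    # Call fn(); on a connection-style failure, wait with a doubling
    # delay and try again, up to max_retries extra attempts.
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient")
    return "ok"

result = with_retries(flaky)  # default max_retries=2 allows three calls total
print(result)  # ok
```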
````diff
@@ -194,16 +182,16 @@ By default requests time out after 1 minute. You can configure this with a `time
 which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/#fine-tuning-the-configuration) object:

 ```python
-from digitalocean_genai_sdk import DigitaloceanGenaiSDK
+from serverless_inference_sdk_prod import ServerlessInferenceSDKProd

 # Configure the default for all requests:
-client = DigitaloceanGenaiSDK(
+client = ServerlessInferenceSDKProd(
     # 20 seconds (default is 1 minute)
     timeout=20.0,
 )

 # More granular control:
-client = DigitaloceanGenaiSDK(
+client = ServerlessInferenceSDKProd(
     timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
 )

````

@@ -221,10 +209,10 @@ Note that requests that time out are [retried twice by default](#retries).
221209

222210
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
223211

224-
You can enable logging by setting the environment variable `DIGITALOCEAN_GENAI_SDK_LOG` to `info`.
212+
You can enable logging by setting the environment variable `SERVERLESS_INFERENCE_SDK_PROD_LOG` to `info`.
225213

226214
```shell
227-
$ export DIGITALOCEAN_GENAI_SDK_LOG=info
215+
$ export SERVERLESS_INFERENCE_SDK_PROD_LOG=info
228216
```
229217

230218
Or to `debug` for more verbose logging.
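The pattern the README describes here, an env var named after the package selecting a stdlib `logging` level, can be sketched as follows. The mapping below is an assumption about intent, not the SDK's actual internals:

```python
import logging
import os

# Assumed mapping from the env-var value to a stdlib logging level.
_LEVELS = {"info": logging.INFO, "debug": logging.DEBUG}

def level_from_env(var: str = "SERVERLESS_INFERENCE_SDK_PROD_LOG"):
    # Return the requested logging level, or None if unset/unrecognized.
    return _LEVELS.get(os.environ.get(var, "").lower())

os.environ["SERVERLESS_INFERENCE_SDK_PROD_LOG"] = "info"
print(level_from_env() == logging.INFO)  # True
```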
````diff
@@ -246,19 +234,19 @@ if response.my_field is None:
 The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,

 ```py
-from digitalocean_genai_sdk import DigitaloceanGenaiSDK
+from serverless_inference_sdk_prod import ServerlessInferenceSDKProd

-client = DigitaloceanGenaiSDK()
+client = ServerlessInferenceSDKProd()
 response = client.assistants.with_raw_response.list()
 print(response.headers.get('X-My-Header'))

 assistant = response.parse()  # get the object that `assistants.list()` would have returned
 print(assistant.first_id)
 ```

-These methods return an [`APIResponse`](https://github.com/stainless-sdks/digitalocean-genai-sdk-python/tree/main/src/digitalocean_genai_sdk/_response.py) object.
+These methods return an [`APIResponse`](https://github.com/stainless-sdks/serverless-inference-sdk-prod-python/tree/main/src/serverless_inference_sdk_prod/_response.py) object.

-The async client returns an [`AsyncAPIResponse`](https://github.com/stainless-sdks/digitalocean-genai-sdk-python/tree/main/src/digitalocean_genai_sdk/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
+The async client returns an [`AsyncAPIResponse`](https://github.com/stainless-sdks/serverless-inference-sdk-prod-python/tree/main/src/serverless_inference_sdk_prod/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.

 #### `.with_streaming_response`

@@ -320,10 +308,10 @@ You can directly override the [httpx client](https://www.python-httpx.org/api/#c

 ```python
 import httpx
-from digitalocean_genai_sdk import DigitaloceanGenaiSDK, DefaultHttpxClient
+from serverless_inference_sdk_prod import ServerlessInferenceSDKProd, DefaultHttpxClient

-client = DigitaloceanGenaiSDK(
-    # Or use the `DIGITALOCEAN_GENAI_SDK_BASE_URL` env var
+client = ServerlessInferenceSDKProd(
+    # Or use the `SERVERLESS_INFERENCE_SDK_PROD_BASE_URL` env var
     base_url="http://my.test.server.example.com:8083",
     http_client=DefaultHttpxClient(
         proxy="http://my.test.proxy.example.com",
@@ -343,9 +331,9 @@ client.with_options(http_client=DefaultHttpxClient(...))
 By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.

 ```py
-from digitalocean_genai_sdk import DigitaloceanGenaiSDK
+from serverless_inference_sdk_prod import ServerlessInferenceSDKProd

-with DigitaloceanGenaiSDK() as client:
+with ServerlessInferenceSDKProd() as client:
     # make requests here
     ...

@@ -362,7 +350,7 @@ This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) con

 We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.

-We are keen for your feedback; please open an [issue](https://www.github.com/stainless-sdks/digitalocean-genai-sdk-python/issues) with questions, bugs, or suggestions.
+We are keen for your feedback; please open an [issue](https://www.github.com/stainless-sdks/serverless-inference-sdk-prod-python/issues) with questions, bugs, or suggestions.

 ### Determining the installed version

@@ -371,8 +359,8 @@ If you've upgraded to the latest version but aren't seeing any new features you
 You can determine the version that is being used at runtime with:

 ```py
-import digitalocean_genai_sdk
-print(digitalocean_genai_sdk.__version__)
+import serverless_inference_sdk_prod
+print(serverless_inference_sdk_prod.__version__)
 ```

 ## Requirements
````

SECURITY.md (1 addition & 1 deletion)

````diff
@@ -16,7 +16,7 @@ before making any information public.
 ## Reporting Non-SDK Related Security Issues

 If you encounter security issues that are not directly related to SDKs but pertain to the services
-or products provided by Digitalocean Genai SDK, please follow the respective company's security reporting guidelines.
+or products provided by Serverless Inference SDK Prod, please follow the respective company's security reporting guidelines.

 ---

````
