Commit b207e9a

feat(api): add gpu droplets
1 parent d940e66 commit b207e9a

File tree

506 files changed: +539 −484 lines


.stats.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,4 +1,4 @@
 configured_endpoints: 168
 openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/digitalocean%2Fgradientai-f8e8c290636c1e218efcf7bfe92ba7570c11690754d21287d838919fbc943a80.yml
 openapi_spec_hash: 1eddf488ecbe415efb45445697716f5d
-config_hash: 683ea6ba4d63037c1c72484e5936e73c
+config_hash: 0a72b6161859b504ed3b5a2a142ba5a5
```

CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -36,7 +36,7 @@ $ pip install -r requirements-dev.lock
 
 Most of the SDK is generated code. Modifications to code will be persisted between generations, but may
 result in merge conflicts between manual patches and changes from the generator. The generator will never
-modify the contents of the `src/gradientai/lib/` and `examples/` directories.
+modify the contents of the `src/do_gradientai/lib/` and `examples/` directories.
 
 ## Adding and running examples
 
```

README.md

Lines changed: 25 additions & 24 deletions

````diff
@@ -26,7 +26,7 @@ The full API of this library can be found in [api.md](api.md).
 
 ```python
 import os
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 api_client = GradientAI(
     api_key=os.environ.get("GRADIENTAI_API_KEY"),  # This is the default and can be omitted
@@ -38,6 +38,7 @@ inference_client = GradientAI(
 )
 agent_client = GradientAI(
     agent_key=os.environ.get("GRADIENTAI_AGENT_KEY"),  # This is the default and can be omitted
+    agent_endpoint="https://my-cool-agent.agents.do-ai.run",
 )
 
 print(api_client.agents.list())
````
````diff
@@ -67,7 +68,7 @@ Simply import `AsyncGradientAI` instead of `GradientAI` and use `await` with eac
 ```python
 import os
 import asyncio
-from gradientai import AsyncGradientAI
+from do_gradientai import AsyncGradientAI
 
 client = AsyncGradientAI(
     api_key=os.environ.get("GRADIENTAI_API_KEY"),  # This is the default and can be omitted
@@ -107,8 +108,8 @@ Then you can enable it by instantiating the client with `http_client=DefaultAioH
 
 ```python
 import asyncio
-from gradientai import DefaultAioHttpClient
-from gradientai import AsyncGradientAI
+from do_gradientai import DefaultAioHttpClient
+from do_gradientai import AsyncGradientAI
 
 
 async def main() -> None:
````
````diff
@@ -136,7 +137,7 @@ asyncio.run(main())
 We provide support for streaming responses using Server Side Events (SSE).
 
 ```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 client = GradientAI()
 
@@ -157,7 +158,7 @@ for completion in stream:
 The async client uses the exact same interface.
 
 ```python
-from gradientai import AsyncGradientAI
+from do_gradientai import AsyncGradientAI
 
 client = AsyncGradientAI()
 
@@ -189,7 +190,7 @@ Typed requests and responses provide autocomplete and documentation within your
 Nested parameters are dictionaries, typed using `TypedDict`, for example:
 
 ```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 client = GradientAI()
 
````
````diff
@@ -208,16 +209,16 @@ print(completion.stream_options)
 
 ## Handling errors
 
-When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `gradientai.APIConnectionError` is raised.
+When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `do_gradientai.APIConnectionError` is raised.
 
 When the API returns a non-success status code (that is, 4xx or 5xx
-response), a subclass of `gradientai.APIStatusError` is raised, containing `status_code` and `response` properties.
+response), a subclass of `do_gradientai.APIStatusError` is raised, containing `status_code` and `response` properties.
 
-All errors inherit from `gradientai.APIError`.
+All errors inherit from `do_gradientai.APIError`.
 
 ```python
-import gradientai
-from gradientai import GradientAI
+import do_gradientai
+from do_gradientai import GradientAI
 
 client = GradientAI()
 
@@ -231,12 +232,12 @@ try:
     ],
     model="llama3.3-70b-instruct",
 )
-except gradientai.APIConnectionError as e:
+except do_gradientai.APIConnectionError as e:
     print("The server could not be reached")
     print(e.__cause__)  # an underlying Exception, likely raised within httpx.
-except gradientai.RateLimitError as e:
+except do_gradientai.RateLimitError as e:
     print("A 429 status code was received; we should back off a bit.")
-except gradientai.APIStatusError as e:
+except do_gradientai.APIStatusError as e:
     print("Another non-200-range status code was received")
     print(e.status_code)
     print(e.response)
````
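The error-handling hunks above only rename the import path; the hierarchy itself is unchanged. As an illustrative sketch (not the SDK's actual source, and the class bodies here are assumptions), the relationships the README describes (`APIError` as the base, `APIStatusError` carrying `status_code`, `RateLimitError` for 429s) could be modeled as:

```python
# Illustrative sketch of the documented error hierarchy; the constructors
# and docstrings here are hypothetical, not the SDK's real implementation.
class APIError(Exception):
    """Base class: all library errors inherit from this."""


class APIConnectionError(APIError):
    """Raised when the API cannot be reached (network problems, timeouts)."""


class APIStatusError(APIError):
    """Raised for non-success (4xx/5xx) responses; carries the status code."""

    def __init__(self, message: str, status_code: int) -> None:
        super().__init__(message)
        self.status_code = status_code


class RateLimitError(APIStatusError):
    """Raised specifically for HTTP 429 responses."""

    def __init__(self, message: str = "rate limited") -> None:
        super().__init__(message, status_code=429)
```

Because `RateLimitError` subclasses `APIStatusError`, the `except` clauses in the diff list the more specific class first: Python matches the first applicable clause.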
````diff
@@ -264,7 +265,7 @@ Connection errors (for example, due to a network connectivity problem), 408 Requ
 You can use the `max_retries` option to configure or disable retry settings:
 
 ```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 # Configure the default for all requests:
 client = GradientAI(
@@ -290,7 +291,7 @@ By default requests time out after 1 minute. You can configure this with a `time
 which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
 
 ```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 # Configure the default for all requests:
 client = GradientAI(
````
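The retry hunks above only change the import. For context, retrying with `max_retries` typically spaces attempts out with exponential backoff plus jitter; a minimal generic sketch of such a schedule follows (the `base` and `cap` values are assumptions for illustration, not the SDK's actual parameters):

```python
import random


def backoff_delay(attempt: int, base: float = 0.5, cap: float = 8.0) -> float:
    """Exponential backoff with full jitter.

    The nominal delay doubles each attempt (base * 2**attempt), is capped
    at `cap` seconds, and a random fraction of it is used so that many
    clients retrying at once do not synchronize.
    """
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

A caller would sleep for `backoff_delay(attempt)` seconds between retry attempts.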
````diff
@@ -350,7 +351,7 @@ if response.my_field is None:
 The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
 
 ```py
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 client = GradientAI()
 response = client.chat.completions.with_raw_response.create(
@@ -366,9 +367,9 @@ completion = response.parse() # get the object that `chat.completions.create()`
 print(completion.choices)
 ```
 
-These methods return an [`APIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/gradientai/_response.py) object.
+These methods return an [`APIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) object.
 
-The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/gradientai/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
+The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
 
 #### `.with_streaming_response`
 
````
````diff
@@ -438,7 +439,7 @@ You can directly override the [httpx client](https://www.python-httpx.org/api/#c
 
 ```python
 import httpx
-from gradientai import GradientAI, DefaultHttpxClient
+from do_gradientai import GradientAI, DefaultHttpxClient
 
 client = GradientAI(
     # Or use the `GRADIENT_AI_BASE_URL` env var
@@ -461,7 +462,7 @@ client.with_options(http_client=DefaultHttpxClient(...))
 By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
 
 ```py
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 with GradientAI() as client:
     # make requests here
````
````diff
@@ -489,8 +490,8 @@ If you've upgraded to the latest version but aren't seeing any new features you
 You can determine the version that is being used at runtime with:
 
 ```py
-import gradientai
-print(gradientai.__version__)
+import do_gradientai
+print(do_gradientai.__version__)
 ```
 
 ## Requirements
````
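Every README hunk in this commit is the same mechanical rename of `gradientai` to `do_gradientai`, so downstream code can be migrated the same way. As a hedged sketch (the `migrate_imports` helper is hypothetical, not part of the SDK or this commit), the rewrite can be applied to source text with a few word-boundary-aware substitutions:

```python
import re

# Hypothetical migration helper: rewrite references to the old package
# name in Python source text. Word boundaries (\b) keep already-migrated
# `do_gradientai` references from being touched, since `_` is a word
# character and therefore no boundary exists inside `do_gradientai`.
_PATTERNS = [
    (re.compile(r"\bfrom gradientai\b"), "from do_gradientai"),
    (re.compile(r"\bimport gradientai\b"), "import do_gradientai"),
    (re.compile(r"\bgradientai\."), "do_gradientai."),
]


def migrate_imports(source: str) -> str:
    """Return `source` with `gradientai` references renamed to `do_gradientai`."""
    for pattern, replacement in _PATTERNS:
        source = pattern.sub(replacement, source)
    return source
```

Running a file's contents through `migrate_imports` performs the same edits this commit makes to the README examples; applying it twice is a no-op.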
