-When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `gradientai.APIConnectionError` is raised.
+When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `do_gradientai.APIConnectionError` is raised.
 
 When the API returns a non-success status code (that is, 4xx or 5xx
-response), a subclass of `gradientai.APIStatusError` is raised, containing `status_code` and `response` properties.
+response), a subclass of `do_gradientai.APIStatusError` is raised, containing `status_code` and `response` properties.
 
-All errors inherit from `gradientai.APIError`.
+All errors inherit from `do_gradientai.APIError`.
 
 ```python
-import gradientai
-from gradientai import GradientAI
+import do_gradientai
+from do_gradientai import GradientAI
 
 client = GradientAI()
@@ -231,12 +232,12 @@ try:
         ],
         model="llama3.3-70b-instruct",
     )
-except gradientai.APIConnectionError as e:
+except do_gradientai.APIConnectionError as e:
     print("The server could not be reached")
     print(e.__cause__)  # an underlying Exception, likely raised within httpx.
-except gradientai.RateLimitError as e:
+except do_gradientai.RateLimitError as e:
     print("A 429 status code was received; we should back off a bit.")
-except gradientai.APIStatusError as e:
+except do_gradientai.APIStatusError as e:
     print("Another non-200-range status code was received")
     print(e.status_code)
     print(e.response)
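The diff above only renames the module; the error hierarchy itself is unchanged. As a rough illustration of why the README's `except` chain works, here is a sketch with stand-in classes (hypothetical simplifications, not the real `do_gradientai` implementations): catching the most specific subclasses first, and the base `APIError` last, routes each failure correctly.

```python
# Stand-in classes sketching the SDK's error hierarchy (illustrative only,
# not the real do_gradientai classes).
class APIError(Exception):
    """Base class all SDK errors inherit from."""

class APIConnectionError(APIError):
    """The client could not reach the server at all."""

class APIStatusError(APIError):
    """The server replied with a 4xx/5xx status code."""
    def __init__(self, status_code: int):
        super().__init__(f"HTTP {status_code}")
        self.status_code = status_code

class RateLimitError(APIStatusError):
    """A 429 response; note it is itself an APIStatusError."""
    def __init__(self):
        super().__init__(429)

def classify(exc: APIError) -> str:
    # Order matters: check the most specific subclasses first,
    # exactly as the README's except chain does.
    if isinstance(exc, RateLimitError):
        return "back off"
    if isinstance(exc, APIConnectionError):
        return "unreachable"
    if isinstance(exc, APIStatusError):
        return f"status {exc.status_code}"
    return "other API error"

print(classify(RateLimitError()))     # back off
print(classify(APIStatusError(500)))  # status 500
```

If `RateLimitError` were checked after `APIStatusError`, the 429 branch would never run, which is why the README orders its handlers this way.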
@@ -264,7 +265,7 @@ Connection errors (for example, due to a network connectivity problem), 408 Requ
 You can use the `max_retries` option to configure or disable retry settings:
 
 ```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 # Configure the default for all requests:
 client = GradientAI(
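Clients in this style of SDK typically implement `max_retries` as a loop with exponential backoff; the schedule below (full jitter, capped delay) is an assumed policy for illustration, not the library's documented behaviour.

```python
import random

def retry_delays(max_retries: int, base: float = 0.5, cap: float = 8.0) -> list[float]:
    """Exponential backoff schedule with full jitter (assumed policy,
    not the documented do_gradientai behaviour)."""
    return [random.uniform(0, min(cap, base * 2 ** attempt))
            for attempt in range(max_retries)]

def call_with_retries(fn, max_retries: int = 2):
    """Invoke fn, retrying on ConnectionError up to max_retries extra times."""
    last_exc = None
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except ConnectionError as exc:
            last_exc = exc  # a real client would sleep(delay) here
    raise last_exc

calls = []
def flaky():
    # Fails twice, then succeeds -- within the default budget of 2 retries.
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("transient")
    return "ok"

print(call_with_retries(flaky, max_retries=2))  # ok
```

Setting `max_retries=0` in this sketch disables retries entirely, mirroring what passing `max_retries=0` to the client is meant to do.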
@@ -290,7 +291,7 @@ By default requests time out after 1 minute. You can configure this with a `time
 which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
 
 ```python
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 # Configure the default for all requests:
 client = GradientAI(
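The float-vs-`httpx.Timeout` distinction can be sketched without httpx installed: in httpx, a bare float applies the same limit to every phase of the request, while `httpx.Timeout` lets you set the connect, read, write, and pool limits individually. The stand-in `TimeoutConfig` below is hypothetical and only mirrors that shape.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class TimeoutConfig:
    # Mirrors the four phases httpx.Timeout distinguishes (illustrative only).
    connect: float
    read: float
    write: float
    pool: float

def normalize_timeout(value: Union[float, TimeoutConfig]) -> TimeoutConfig:
    """A plain float applies the same limit to every phase, matching how
    httpx treats Timeout(20.0); a TimeoutConfig is used as-is."""
    if isinstance(value, TimeoutConfig):
        return value
    return TimeoutConfig(connect=value, read=value, write=value, pool=value)

print(normalize_timeout(20.0))
print(normalize_timeout(TimeoutConfig(connect=5.0, read=60.0, write=10.0, pool=5.0)))
```

A long `read` with a short `connect` is the common pattern for slow model responses: fail fast if the host is unreachable, but allow the streamed reply plenty of time.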
@@ -350,7 +351,7 @@ if response.my_field is None:
 The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
@@ -366,9 +367,9 @@ completion = response.parse()  # get the object that `chat.completions.create()`
 print(completion.choices)
 ```
 
-These methods return an [`APIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/gradientai/_response.py) object.
+These methods return an [`APIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) object.
 
-The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/gradientai/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
+The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
 
 #### `.with_streaming_response`
 
@@ -438,7 +439,7 @@ You can directly override the [httpx client](https://www.python-httpx.org/api/#c
 By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
 
 ```py
-from gradientai import GradientAI
+from do_gradientai import GradientAI
 
 with GradientAI() as client:
     # make requests here
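The `with GradientAI() as client:` form works because the client implements the context manager protocol: `__exit__` runs on the way out of the block, even when an exception is raised inside it. A minimal stand-in sketch (not the real `GradientAI` class):

```python
class DemoClient:
    """Stand-in for an SDK client (hypothetical, not the real GradientAI):
    the context manager protocol guarantees close() runs on exit."""
    def __init__(self):
        self.closed = False

    def close(self):
        # The real client would close its underlying HTTP connections here.
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()
        return False  # don't swallow exceptions raised inside the block

client = DemoClient()
with client:
    pass  # make requests here
print(client.closed)  # True
```

Returning `False` from `__exit__` lets exceptions propagate after cleanup, which is what you want for API errors raised mid-request.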
@@ -489,8 +490,8 @@ If you've upgraded to the latest version but aren't seeing any new features you
 You can determine the version that is being used at runtime with: