Commit b47c9bd

chore(docs): document replicate.use()

1 parent 70c1af2 commit b47c9bd

1 file changed: README.md (+131 −0 lines)
@@ -436,6 +436,137 @@ with Replicate() as replicate:
# HTTP client is now closed
```
## Experimental: Using `replicate.use()`

> [!WARNING]
> The `replicate.use()` interface is experimental and subject to change. We welcome your feedback on this new API design.

The `use()` method provides a more concise way to call Replicate models as functions. This experimental interface offers a more pythonic approach to running models:

```python
import replicate

# Create a model function
flux_dev = replicate.use("black-forest-labs/flux-dev")

# Call it like a regular Python function
outputs = flux_dev(
    prompt="a cat wearing a wizard hat, digital art",
    num_outputs=1,
    aspect_ratio="1:1",
    output_format="webp",
)

# outputs is a list of URLPath objects that auto-download when accessed
for output in outputs:
    print(output)  # e.g., Path(/tmp/a1b2c3/output.webp)
```
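Because the outputs auto-download when accessed, you can treat them like ordinary local paths once the prediction finishes. The snippet below is a minimal sketch, not documented API: it assumes the returned `URLPath` objects are path-like (as the `Path(...)` output above suggests), and the destination filename is just an example.

```python
import shutil

import replicate

flux_dev = replicate.use("black-forest-labs/flux-dev")
outputs = flux_dev(prompt="a cat wearing a wizard hat, digital art")

# Assumption: URLPath is os.PathLike, so shutil.copy() can read from it
# (accessing it triggers the automatic download described above).
shutil.copy(outputs[0], "wizard_cat.webp")
```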
### Language models with streaming

Many models, particularly language models, support streaming output. Use the `streaming=True` parameter to get results as they're generated:

```python
import replicate

# Create a streaming language model function
llama = replicate.use("meta/meta-llama-3-8b-instruct", streaming=True)

# Stream the output
output = llama(prompt="Write a haiku about Python programming", max_tokens=50)

for chunk in output:
    print(chunk, end="", flush=True)
```
### Chaining models

You can easily chain models together by passing the output of one model as input to another:

```python
import replicate

# Create two model functions
flux_dev = replicate.use("black-forest-labs/flux-dev")
llama = replicate.use("meta/meta-llama-3-8b-instruct")

# Generate an image
images = flux_dev(prompt="a mysterious ancient artifact")

# Describe the image
description = llama(
    prompt="Describe this image in detail",
    image=images[0],  # Pass the first image directly
)

print(description)
```
### Async support

For async/await patterns, use the `use_async=True` parameter:

```python
import asyncio
import replicate


async def main():
    # Create an async model function
    flux_dev = replicate.use("black-forest-labs/flux-dev", use_async=True)

    # Await the result
    outputs = await flux_dev(prompt="futuristic city at sunset")

    for output in outputs:
        print(output)


asyncio.run(main())
```
### Accessing URLs without downloading

If you need the URL without downloading the file, use the `get_path_url()` helper:

```python
import replicate
from replicate.lib._predictions_use import get_path_url

flux_dev = replicate.use("black-forest-labs/flux-dev")
outputs = flux_dev(prompt="a serene landscape")

for output in outputs:
    url = get_path_url(output)
    print(f"URL: {url}")  # https://replicate.delivery/...
```
### Creating predictions without waiting

To create a prediction without waiting for it to complete, use the `create()` method:

```python
import replicate

llama = replicate.use("meta/meta-llama-3-8b-instruct")

# Start the prediction
run = llama.create(prompt="Explain quantum computing")

# Check logs while it's running
print(run.logs())

# Get the output when ready
result = run.output()
print(result)
```
### Current limitations

- The `use()` method must be called at the module level (not inside functions or classes); see the sketch below
- Type hints are limited compared to the standard client interface
- This is an experimental API and may change in future releases
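To illustrate the first limitation, a minimal sketch of the intended pattern is to create the model function once at module import time and call it from inside your own functions; the wrapper function below is hypothetical:

```python
import replicate

# Create the model function once, at module level
flux_dev = replicate.use("black-forest-labs/flux-dev")


def generate_artifact_image(prompt: str):
    # Call the module-level function here; avoid calling replicate.use()
    # inside functions or classes, which the current API does not support.
    return flux_dev(prompt=prompt)
```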
## Versioning

This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
