
Enable prompt caching for direct OpenAI integration #2222

@reynaldichernando

Description

Many users who land on https://docs.puter.com/playground/ click Run, but don't bother waiting for the result to show up.

The main example we show uses the gpt-5-nano model, which takes a while to generate the text.

It would be good to have a way to enable prompt caching so we can speed up this response.
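
Roughly what I have in mind (a sketch only, assuming the direct integration goes through the official `openai` Python client; `SYSTEM_PROMPT` and `run_example` are placeholders, not existing code):

```python
# Sketch only: assumes the direct OpenAI integration uses the official
# `openai` Python client. SYSTEM_PROMPT stands in for whatever static
# prefix the playground sends with every request.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = "You are the Puter playground assistant. ..."  # reused verbatim on every call

def run_example(user_text: str) -> str:
    # OpenAI applies prompt caching automatically when the leading tokens of
    # the prompt match a recent request, so keeping the static prefix
    # byte-identical across calls is what makes the cache hit.
    response = client.chat.completions.create(
        model="gpt-5-nano",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_text},
        ],
    )
    # The usage block reports how much of the prompt was served from cache.
    cached = response.usage.prompt_tokens_details.cached_tokens
    print(f"cached prompt tokens: {cached}")
    return response.choices[0].message.content
```

Note that OpenAI's automatic caching only kicks in once the shared prompt prefix is at least ~1024 tokens, so how much this helps depends on how large the playground's fixed prompt actually is.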
