# Integrations: OpenAI

remsky edited this page Feb 2, 2025 · 5 revisions
You can use Kokoro as the model with the OpenAI Python client. Here's how to integrate it:
1. **Install the OpenAI Python library** (if you haven't already):

   ```bash
   pip install openai
   ```
2. **Use the OpenAI client** to generate speech:

   ```python
   from openai import OpenAI

   client = OpenAI(
       base_url="http://localhost:8880/v1",
       api_key="not-needed"
   )

   with client.audio.speech.with_streaming_response.create(
       model="kokoro",
       voice="af_sky+af_bella",  # Single or multiple voicepack combo
       input="Hello world!"
   ) as response:
       response.stream_to_file("output.mp3")
   ```
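Under the hood, the client simply POSTs JSON to the OpenAI-compatible `/v1/audio/speech` endpoint, so you can also call it with nothing but the standard library. The sketch below is an assumption based on the client example above (the `model`, `voice`, and `input` fields mirror its arguments); `build_speech_request` is a hypothetical helper, not part of Kokoro or the OpenAI library:

```python
import json
import urllib.request

def build_speech_request(base_url, text, voice="af_sky+af_bella", model="kokoro"):
    """Build a POST request matching the OpenAI-compatible speech endpoint.

    The JSON body mirrors what the OpenAI client sends for
    client.audio.speech.create(model=..., voice=..., input=...).
    """
    payload = {"model": model, "voice": voice, "input": text}
    return urllib.request.Request(
        url=f"{base_url}/audio/speech",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_speech_request("http://localhost:8880/v1", "Hello world!")

# Sending the request requires a running Kokoro server; the try/except
# lets the sketch degrade gracefully when nothing listens on port 8880.
try:
    with urllib.request.urlopen(req) as resp:
        with open("output.mp3", "wb") as f:
            f.write(resp.read())
except OSError:
    pass  # server not running
```

This can be handy for quick debugging with `curl`-style tooling, or in environments where installing the `openai` package is not an option.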
3. **Running Kokoro FastAPI as a Docker container:** change the base URL to `http://host.docker.internal:8880/v1`:

   ```python
   from openai import OpenAI

   client = OpenAI(
       base_url="http://host.docker.internal:8880/v1",
       api_key="not-needed"
   )

   with client.audio.speech.with_streaming_response.create(
       model="kokoro",
       voice="af_sky+af_bella",  # Single or multiple voicepack combo
       input="Hello world!"
   ) as response:
       response.stream_to_file("output.mp3")
   ```

By following these steps, you can integrate Kokoro with OpenAI and use the OpenAI-Compatible Speech Endpoint effectively.
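If the same script runs both inside and outside Docker, hard-coding the host gets awkward. One option is to resolve the base URL from the environment; the sketch below assumes a hypothetical `KOKORO_HOST` environment variable (it is not defined by Kokoro itself):

```python
import os

def resolve_base_url(default_host="localhost", port=8880):
    """Pick the Kokoro base URL for the current environment.

    KOKORO_HOST is a hypothetical env var for this sketch; set it to
    host.docker.internal when the client runs in a container that
    needs to reach a Kokoro server on the Docker host.
    """
    host = os.environ.get("KOKORO_HOST", default_host)
    return f"http://{host}:{port}/v1"
```

You would then construct the client with `OpenAI(base_url=resolve_base_url(), api_key="not-needed")` and switch environments without touching the code.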