
Add support for optional LLM model parameter in content processing#261

Open
sammydeprez wants to merge 1 commit into souzatharsis:main from sammydeprez:feature/langchain_basemodel_as_input

Conversation

@sammydeprez

Instead of being limited to the models supported in the library via the

  • llm_model_name: str
  • api_key_label: str

parameters, I have added an optional llm_model parameter (a LangChain BaseLanguageModel object).
This lets the user plug in any provider supported by LangChain without making changes to the podcastfy library.

So instead of this:

audio_file = generate_podcast(
    urls=["https://en.wikipedia.org/wiki/Artificial_intelligence"],
    llm_model_name="gpt-4-turbo",
    api_key_label="OPENAI_API_KEY"
)

The user can now do this:

# Define an LLM
from langchain_openai.chat_models import AzureChatOpenAI
llm_model = AzureChatOpenAI(model="gpt-4.1")

# Use the model
audio_file = generate_podcast(
    urls=["https://en.wikipedia.org/wiki/Artificial_intelligence"],
    llm_model=llm_model
)
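To illustrate the pattern behind this change, here is a minimal, self-contained sketch of how an optional model object can take precedence over the existing name-based lookup. The names resolve_llm, EchoModel, and the stand-in BaseLanguageModel class are hypothetical and only mimic the shape of the LangChain interface; they are not podcastfy's actual internals.

```python
from typing import Optional


class BaseLanguageModel:
    """Stand-in for langchain_core.language_models.BaseLanguageModel."""

    def invoke(self, prompt: str) -> str:
        raise NotImplementedError


class EchoModel(BaseLanguageModel):
    """Toy model used here in place of a real provider-backed model."""

    def invoke(self, prompt: str) -> str:
        return f"echo: {prompt}"


def resolve_llm(
    llm_model: Optional[BaseLanguageModel] = None,
    llm_model_name: Optional[str] = None,
) -> BaseLanguageModel:
    # Prefer a user-supplied model object; fall back to the
    # name-based lookup the library already supports.
    if llm_model is not None:
        return llm_model
    if llm_model_name is not None:
        return EchoModel()  # placeholder for the library's model factory
    raise ValueError("Provide either llm_model or llm_model_name")


model = resolve_llm(llm_model=EchoModel())
print(model.invoke("hello"))  # -> echo: hello
```

The key design point is that the object parameter wins when both are given, so existing callers that pass llm_model_name keep working unchanged.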

