Is it possible to register an additional OpenAI model, such as the newer GPT-4 or older GPT-3.5 models, instead of the supported text-davinci-003 model, via the model_name_or_path argument of PromptNode()?
from haystack.document_stores import FAISSDocumentStore
from haystack.nodes import PromptTemplate, DensePassageRetriever, OpenAIAnswerGenerator
from haystack.pipelines import GenerativeQAPipeline
from haystack.utils import print_answers

# Load the FAISS document store.
document_store = FAISSDocumentStore.load(index_path="data/my_faiss", config_path="data/my_faiss_config.json")

# Initialise the DPR retriever to encode the documents and the query.
retriever = DensePassageRetriever(
    document_store=document_store,
    query_embedding_model="facebook/dpr-question_encoder-single-nq-base",
    passage_embedding_model="facebook/dpr-ctx_encoder-single-nq-base",
    use_gpu=True,
    embed_title=True,
)

# Initialise the prompt for the OpenAI answer generator model.
lfqa_prompt = PromptTemplate(
    name="lfqa",
    prompt_text="""Synthesize a detailed answer using only the following related text.
Your answer should be in your own words and be no longer than 250 words.
Your answer should be self-contained, not reference the related text, and not identify specific clause numbers.
Use only the required number of words to answer the question directly.
If you are not certain of the answer, say so.
\n\n Related text: $context \n\n Question: $query \n\n Answer:""",
    prompt_params=["context", "query"],
)

# Initialise the OpenAIAnswerGenerator.
generator = OpenAIAnswerGenerator(
    api_key="",
    model="text-davinci-003",
    max_tokens=300,
    presence_penalty=0.1,
    frequency_penalty=0.1,
    top_k=1,
    temperature=0,
    prompt_template=lfqa_prompt,
)

# Build the GenerativeQAPipeline from the generator and retriever.
pipe = GenerativeQAPipeline(generator=generator, retriever=retriever)

# Run the query.
answer = pipe.run(
    query="What are some methods of smoke and heat management?",
    params={"Generator": {"top_k": 1}, "Retriever": {"top_k": 3}},
)

# Print the output.
print_answers(answer, details="minimum")
I've searched for how to "register a new invocation layer for gpt-3.5-turbo using the register method", but can't find anything about it in the API documentation.
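For reference, this is roughly what I'm hoping to do. It's only a sketch under my own assumptions: that a recent Haystack 1.x release routes chat model names such as "gpt-3.5-turbo" to the appropriate invocation layer automatically when passed to model_name_or_path, so that no explicit register() call is needed. I haven't confirmed this in the docs.

```python
from haystack.nodes import PromptNode, PromptTemplate

# Assumption: passing a chat model name like "gpt-3.5-turbo" (or "gpt-4")
# directly to model_name_or_path selects the matching invocation layer,
# instead of having to register one manually.
lfqa_prompt = PromptTemplate(
    name="lfqa",
    prompt_text="""Synthesize a detailed answer using only the following related text.
\n\n Related text: $context \n\n Question: $query \n\n Answer:""",
)

prompt_node = PromptNode(
    model_name_or_path="gpt-3.5-turbo",  # instead of text-davinci-003
    api_key="",  # OpenAI API key
    default_prompt_template=lfqa_prompt,
    max_length=300,
)

# Would then be used in place of OpenAIAnswerGenerator, e.g.:
# result = prompt_node.prompt(prompt_template=lfqa_prompt, context="...", query="...")
```

If this isn't supported, an example of registering a custom invocation layer for the chat models would be just as helpful.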
Thanks in advance.