Since we call the model as

```python
prediction = model(input)
print(prediction.value)
```

there is some convention being applied behind the scenes as well.
If the prompt contains the input as a placeholder, for instance:

```python
Prompt = f"""Generate a summary using the INPUT {input} and the format should be {OUTPUT}"""
```

then, while declaring the model, we need to pass something like:

```python
model = tg.BlackboxLLM(engine, Prompt.replace() ....)
```

and while predicting, we need to provide:

```python
pred = model(inputs)
```

where `inputs` fills the `{input}` placeholder. Is there any way we can avoid this redundancy?
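One plain-Python way to sketch the idea (this is not a TextGrad API, just an illustration of the pattern being asked about): bind the template to the model once, so the caller passes the input a single time and the prompt is formatted at call time. The `make_prompted_model` and `echo_model` names are hypothetical.

```python
def make_prompted_model(model, template):
    """Return a callable that formats `template` with the user input
    and forwards the full prompt to `model`, so the input is supplied
    only once at call time."""
    def predict(user_input):
        prompt = template.format(input=user_input)
        return model(prompt)
    return predict


# Stand-in for an LLM call, used only to demonstrate the wrapper.
def echo_model(prompt):
    return f"<response to: {prompt}>"


summarize = make_prompted_model(
    echo_model,
    "Generate a summary using the INPUT {input} and return JSON.",
)
print(summarize("quarterly earnings report"))
```

A wrapper in this style would remove the need to both interpolate the prompt when declaring the model and pass the input again at prediction time.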