Replies: 2 comments
-
Great question -- a couple of thoughts here:
-
Using the PaLM API in LangChain can also help differentiate between input questions that should be answered with SQL on BigQuery tables and those that should go down other analysis paths.
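For example, a rough routing sketch (the model name, prompt wording, and labels are only illustrative, and this assumes langchain and google-cloud-aiplatform are installed):

```python
from langchain.llms import VertexAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# PaLM text model served through Vertex AI; adjust the model name as needed.
llm = VertexAI(model_name="text-bison@001", temperature=0)

# Ask the model to label the question before picking an execution path.
router_prompt = PromptTemplate(
    input_variables=["question"],
    template=(
        "Classify the user question with exactly one word.\n"
        "SQL: it should be answered by querying BigQuery tables.\n"
        "OTHER: it needs some other kind of analysis.\n\n"
        "Question: {question}\nLabel:"
    ),
)
router = LLMChain(llm=llm, prompt=router_prompt)

label = router.run(question="What was revenue by region last quarter?").strip()
if label == "SQL":
    pass  # hand off to the text-to-SQL chain against BigQuery
else:
    pass  # fall back to another analysis path
```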
-
Hi everyone
I currently have a pipeline using OpenAI where I pass information about my internal company database tables in the prompt, then ask a user-defined question, and get back an SQL query and a response.
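A simplified version of what I do today (the schema, model, and helper below are placeholders, not my real setup):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The expensive part: the full schema description is re-sent on every call.
SCHEMA = """\
Table orders: order_id INT64, customer_id INT64, amount NUMERIC, created_at TIMESTAMP
Table customers: customer_id INT64, name STRING, region STRING
"""

def text_to_sql(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": f"You write BigQuery SQL.\n{SCHEMA}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```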
As you might have guessed, this takes a lot of tokens, since I need to describe my tables in the prompt, and it costs a lot.
I am now trying to fine-tune a text-bison model by passing it training examples of input text along with an appropriate output response. For training, I can use the same prompt as in the OpenAI pipeline, where I describe my tables and then ask the model to generate a query.
But the Vertex AI page on fine-tuning says to use training examples that match the input you would get in production. That would mean passing the whole table description in the production pipeline as well, which is exactly what I am trying to avoid.
As an example, for training:
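Each record might look roughly like this (the tables and question are invented; from what I can tell, the text-bison tuning dataset is JSONL with "input_text"/"output_text" pairs, but check the current docs):

```json
{"input_text": "Tables:\norders(order_id INT64, customer_id INT64, amount NUMERIC, created_at TIMESTAMP)\ncustomers(customer_id INT64, name STRING, region STRING)\n\ntext: total revenue per region last month\n\nSQL:", "output_text": "SELECT c.region, SUM(o.amount) AS revenue FROM orders AS o JOIN customers AS c ON o.customer_id = c.customer_id WHERE o.created_at >= TIMESTAMP(DATE_TRUNC(DATE_SUB(CURRENT_DATE(), INTERVAL 1 MONTH), MONTH)) GROUP BY c.region"}
```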
In the above example, the model knows the tables through the prompt and then finds the appropriate table for the text it was given.
But in a production environment, I want to give only the 'text' and not the table descriptions, since those would take up tokens and cost more, which is what I am trying to avoid in the first place.
Any idea how to go about this, or am I approaching the problem in the wrong way?
Thanks