🤖 Hi Sergio,

Your approach seems to be on the right track. Here are some suggestions based on your questions.

To persist your embeddings between runs, you can serialize them with `pickle`:
```python
import pickle

# Save embeddings to a file
with open('embeddings.pkl', 'wb') as f:
    pickle.dump(embeddings, f)

# Load embeddings from a file
with open('embeddings.pkl', 'rb') as f:
    embeddings = pickle.load(f)
```
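If the embeddings live in a LangChain vector store rather than a plain Python object, the store's own persistence helpers may be simpler. Here is a minimal sketch with a FAISS index (`docs`, the index path, and the embedding model are placeholders, and `faiss-cpu` needs to be installed):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

embeddings_model = OpenAIEmbeddings()

# Build the index once from your documents, then persist it to disk
db = FAISS.from_documents(docs, embeddings_model)
db.save_local("faiss_index")

# Later runs can reload the index instead of re-embedding everything
db = FAISS.load_local("faiss_index", embeddings_model)
```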
For keeping track of past interactions, you can use LangChain's memory classes. Note that `libs.langchain.langchain.memory` is the repository's internal layout rather than a public import path, and memory is used to carry prior exchanges into later prompts rather than to retrain the model; the public `ConversationBufferMemory` class covers this:

```python
from langchain.memory import ConversationBufferMemory

# Initialize the memory
memory = ConversationBufferMemory()

# Record a past interaction
memory.save_context({"input": user_query}, {"output": model_response})

# Retrieve the stored history to include in the next prompt
history = memory.load_memory_variables({})
```

You can refer to the Memory documentation for more details.

I hope this helps! Let me know if you have any other questions.
Hi guys!
At the moment, I have the following requirement: I want to feed the OpenAI GPT-3.5 model with the information we have about our apps in our documentation. This documentation is stored in a public Confluence space. Our idea is for the model to generate answers for customers in our service desk portal based on that information. It's basically very similar to what the dosu bot is doing here.
What would be the best approach?
I'm going to develop it in Python now; previously I tried it in TypeScript using ApifyDatasetLoader. This was my earlier approach (a rough Python sketch of what I have in mind now follows after it):
```typescript
import { ApifyDatasetLoader } from "langchain/document_loaders/web/apify_dataset";
import { Document } from "langchain/document";

const loader = new ApifyDatasetLoader("dataSetId", {
  datasetMappingFunction: (item) =>
    new Document({
      pageContent: (item.text || "") as string,
      metadata: { source: item.url },
    }),
  clientOptions: {
    token: "tokenApify",
  },
});
```
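In Python, this is roughly what I have in mind (just a sketch, assuming LangChain's `ConfluenceLoader` with `atlassian-python-api` and `faiss-cpu` installed; the Confluence URL, space key, index path, and example question are placeholders):

```python
from langchain.document_loaders import ConfluenceLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

# Load the pages from the public Confluence space
loader = ConfluenceLoader(url="https://example.atlassian.net/wiki")
docs = loader.load(space_key="DOCS", limit=50)

# Split the pages into chunks small enough to embed and retrieve
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# Embed the chunks and index them locally so they can be persisted and reused
db = FAISS.from_documents(chunks, OpenAIEmbeddings())
db.save_local("confluence_index")

# Answer service desk questions with GPT-3.5 over the retrieved chunks
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    retriever=db.as_retriever(),
)
print(qa.run("How do I reset my password in the app?"))
```

The idea is that the index only has to be rebuilt when the documentation changes, so answering questions in the service desk stays fast.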
I know it's a lot of questions, but I just want to make sure I take a good and safe approach to this requirement. I'd like to understand exactly what I'm going to build and why, so that the solution is well reasoned and optimized.
Thanks so much in advance!
Best regards,
Sergio