Description
Embeddings were generated using OpenAI and we need to find a way to remove them. That might require a rethink of how we do search. Essentially, we generated embeddings so we could do semantic search across all of the texts and then hand the results to the LLM for RAG and summarisation. But there are some problems with that. First, we're completely tied to OpenAI because of the way the embeddings are generated. Second, the search is primarily oriented toward summarisation and answers, whereas a lot of people probably just want to see the actual results. So I don't know whether that means it's a better idea to move to full-text search in a SQL database or something like that, or whether there's a way to do this without OpenAI. The answer summarisation is kind of useful, but maybe it would be good if the core app didn't include it. I don't think we necessarily need a specific solution yet, but we need some sort of plan.
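For the full-text-search option mentioned above, here's a minimal sketch of what it could look like without any OpenAI dependency, using SQLite's FTS5 extension (assuming FTS5 is compiled in, which it is in most Python builds; the table and column names are hypothetical, not from this project):

```python
import sqlite3

# In-memory DB for illustration; a real app would use a file-backed DB.
conn = sqlite3.connect(":memory:")

# FTS5 virtual table: keyword search with BM25 ranking, no embeddings needed.
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
conn.executemany(
    "INSERT INTO docs (title, body) VALUES (?, ?)",
    [
        ("Embeddings", "Embeddings were generated with OpenAI for semantic search."),
        ("Full-text search", "SQLite FTS5 provides keyword search with BM25 ranking."),
    ],
)

# MATCH runs the full-text query; ORDER BY rank sorts by BM25 relevance.
rows = conn.execute(
    "SELECT title FROM docs WHERE docs MATCH ? ORDER BY rank",
    ("search",),
).fetchall()
print(rows)
```

This returns raw matching documents directly, which addresses the point that many people just want the actual results; LLM summarisation could then be an optional layer on top rather than part of the core app.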