Proof of Concept for Local RAG Extraction. We are mainly using:
- Mistral
- ollama
- llamaindex
Start a MongoDB server instance in Docker. It acts as the unstructured data store:
docker run -p 27017:27017 --name rag-mongo -d mongo:latest

Start a ChromaDB server in a Docker container:
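With Mongo listening on 27017, raw documents can be pushed in through a small helper. A minimal sketch, assuming pymongo (`pip install pymongo`) and a hypothetical `rag` database with a `documents` collection — both names are illustrative, not fixed by this setup:

```python
def store_raw_document(client, text, source):
    # Insert one unstructured document into the (hypothetical)
    # rag.documents collection and return its generated _id.
    return client["rag"]["documents"].insert_one(
        {"text": text, "source": source}
    ).inserted_id

# Usage against the container started above (requires pymongo):
#   from pymongo import MongoClient
#   client = MongoClient("mongodb://localhost:27017")
#   store_raw_document(client, "some raw text", "notes.txt")
```

Taking the client as a parameter keeps the helper easy to swap out or test without a live server.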
docker run -p 8000:8000 --name rag-chromadb -d chromadb/chroma:latest

Install ollama. On Linux:
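ChromaDB then holds the vector side of the store. A minimal indexing sketch, assuming the chromadb Python client (`pip install chromadb`) and a hypothetical collection name `rag-docs`:

```python
def index_chunks(client, collection_name, chunks):
    # Write text chunks into a Chroma collection; with no embedding
    # function supplied, Chroma falls back to its default one.
    collection = client.get_or_create_collection(collection_name)
    collection.add(
        ids=[f"chunk-{i}" for i in range(len(chunks))],
        documents=list(chunks),
    )
    return collection

# Usage against the container started above (requires chromadb):
#   import chromadb
#   client = chromadb.HttpClient(host="localhost", port=8000)
#   index_chunks(client, "rag-docs", ["first chunk", "second chunk"])
```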
curl https://ollama.ai/install.sh | sh

We will be using ollama to serve our local LLM for this POC. Keep this terminal open:
ollama serve

To interact with a model (and download it implicitly), you can also use the run command for a specific model:
ollama run dolphin-mixtral:latest
ollama run mixtral:latest
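Besides the interactive CLI, `ollama serve` also exposes a REST API on port 11434, which is what a pipeline would call programmatically. A stdlib-only sketch of a non-streaming request to its `/api/generate` endpoint, using one of the model names pulled above:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default ollama API port

def build_payload(model, prompt):
    # Non-streaming generation request body for ollama's /api/generate.
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model, prompt):
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires `ollama serve` running and the model pulled):
#   print(generate("mixtral:latest", "Summarize what RAG is in one sentence."))
```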
Note that some of these models are