Ollama, an application based on llama.cpp, now allows users to run any of the 45,000+ GGUF models from Hugging Face directly on their local machines, simplifying the process of interacting with large language models for AI enthusiasts and developers alike.
This repository demonstrates how to integrate LangChain with Ollama to interact with Hugging Face GGUF models locally. The examples show how to build simple yet powerful applications on top of locally hosted language models. Here is the link to an article I wrote on this topic.
- Python 3.8+
- Ollama installed and running on your system
- A Hugging Face GGUF model downloaded via Ollama
- langchain-ollama
- langchain
- Clone this repository:

```shell
git clone https://github.com/yourusername/langchain-ollama-examples
cd langchain-ollama-examples
```

- Install required packages:

```shell
pip install -r requirements.txt
```

- First, ensure Ollama is running and you have downloaded your desired model:

```shell
ollama run hf.co/username/model-repository
```

- Run the examples:

```shell
python main.py
python main_with_memory.py
```