- Download Ollama from https://ollama.com/download/linux:
curl -fsSL https://ollama.com/install.sh | sh
- Run the LLM of your choice from https://ollama.com/search. We use deepseek-r1:32b as an example:
ollama run deepseek-r1:32b
- This will download the LLM and start running it. The model is saved in ~/.ollama/models; use du -h to check the size of that directory.
- You should be able to chat with your LLM now. 😃 Press Ctrl + D to exit.
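A small sketch of the disk-usage check above, assuming the default Linux install location ~/.ollama/models:

```shell
# Check how much disk the pulled models use
# (~/.ollama/models is Ollama's default model directory on Linux).
MODEL_DIR="${HOME}/.ollama/models"
if [ -d "$MODEL_DIR" ]; then
  du -sh "$MODEL_DIR"
else
  echo "no models downloaded yet"
fi
```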
- By default, Ollama only serves on (listens on) localhost:11434. RAGFlow runs in a Docker container, which is treated as a different network, so update Ollama's config to serve on 0.0.0.0:11434:
sudo nano /etc/systemd/system/ollama.service
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
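Instead of editing the installed unit file directly, the same setting can live in a systemd drop-in, which survives Ollama upgrades; a sketch, following the standard systemd drop-in layout. Either way, the change only takes effect after a daemon reload and restart:

```shell
# Sketch: the same OLLAMA_HOST setting as a systemd drop-in
# (a drop-in survives package upgrades better than editing the unit itself).
sudo mkdir -p /etc/systemd/system/ollama.service.d
printf '[Service]\nEnvironment="OLLAMA_HOST=0.0.0.0:11434"\n' \
  | sudo tee /etc/systemd/system/ollama.service.d/override.conf
# Whichever way you edit the config, apply it with:
sudo systemctl daemon-reload
sudo systemctl restart ollama
```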
- Update your package information:
sudo apt update
- Install packages to allow apt to use HTTPS repositories:
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
- Add Docker's official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
- Add the Docker repository:
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
- Update the package database with the Docker packages:
sudo apt update
- Install Docker Engine, CLI, containerd, Buildx plugin, and Compose plugin:
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
- Add your user to the docker group to run Docker without sudo:
sudo usermod -aG docker $USER
sudo shutdown now
Then start WSL again.
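After the restart, a quick check that the group change took effect (with `newgrp` as a standard alternative if you would rather not restart at all):

```shell
# Check that your user is now in the docker group:
groups
# If you'd rather not restart, `newgrp docker` starts a subshell in which the
# new group membership is already active.
```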
- Start the Docker service:
sudo service docker start
- Verify Docker is installed correctly:
docker run hello-world
This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits.
- Set the maximum number of memory map areas a process can have (this might be optional). In C:\Users\user_name\.wslconfig add:
[wsl2]
kernelCommandLine = "sysctl.vm.max_map_count=262144"
sudo shutdown now
Then start WSL again.
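After restarting WSL, you can confirm the setting took effect; Elasticsearch, which RAGFlow runs as a container, refuses to start when vm.max_map_count is below 262144:

```shell
# Current value; should print 262144 (or higher) after the restart:
cat /proc/sys/vm/max_map_count
# To raise it for the current session only, without restarting WSL:
# sudo sysctl -w vm.max_map_count=262144
```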
- Clone the RAGFlow repo:
git clone https://github.com/infiniflow/ragflow.git
- Select the image with embedding models (the non-slim version). In ragflow/docker/.env, comment out the slim image and use the full one:
cd ragflow/docker
nano .env
# RAGFLOW_IMAGE=infiniflow/ragflow:v0.18.0-slim
RAGFLOW_IMAGE=infiniflow/ragflow:v0.18.0
- Start up the server using the pre-built Docker images:
# Use CPU for embedding and DeepDoc tasks:
docker compose -f docker-compose.yml up -d
# To use GPU to accelerate embedding and DeepDoc tasks:
# docker compose -f docker-compose-gpu.yml up -d
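To watch the server come up, you can follow the log of the ragflow-server container (the container name RAGFlow's compose file uses); the web UI becomes reachable once the server reports it is running:

```shell
# List the running containers and follow the RAGFlow server log:
docker ps
docker logs -f ragflow-server
```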
- Video demo coming soon!
- If the containers are not shut down properly, you may see errors the next time you start them. Stop them cleanly with:
docker compose down