llms-on-hpc

This document reproduces the commands from the LLMs on HPC workshop presentation. It has been updated to assume you are running with your regular account rather than the course account provided during the workshop.

Link to the full presentation: LLMs on HPC

Setup

Note

You should not set OLLAMA_MODELS to the shared model directory, as we did in the workshop. The shared directory is only accessible using course accounts.

Copy the Jupyter notebook to your home directory. The copy command below only works on Bouchet. If you cloned this repo (ycrc/llms-on-hpc), you can skip the next two commands.

mkdir ~/ycrc_llm_workshop
cp /apps/data/training/hpc_llm/ollama.ipynb ~/ycrc_llm_workshop

Running Ollama

salloc -p devel --mem=10G

Note

You don't need to set OLLAMA_HOST as we did in the workshop. Our ollama module now automatically picks an open port.

module load ollama

ollama serve

ollama list

ollama run llama3.1:8b

Enter the prompt: "Why is the sky blue?"

The model will not respond in a reasonable time: this partition has no GPU, and CPU-only inference is very slow. Stop the current request, exit the model, and then exit the compute node.

Ctrl-C

/bye

exit

Running Ollama - with a GPU

salloc -p gpu_devel --gpus=1

nvidia-smi

module load ollama

ollama serve

ollama list

ollama run llama3.1:8b

Enter the prompt: "Why is the sky blue?"

This time you should receive a response. Exit the model:

/bye

Now run a different model:

ollama run llama3.3:70b

Enter the prompt: "Why is the sky blue?"

Again the model will not respond, because a single GPU does not have enough memory for the 70b model.
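A back-of-envelope estimate shows why. The sketch below assumes roughly 4-bit (Q4) quantization, i.e. about 0.5 bytes per parameter, and ignores the additional memory needed for the KV cache and activations; the exact numbers depend on the quantization Ollama uses for each model.

```python
# Rough VRAM estimate for quantized model weights.
# Assumption: ~0.5 bytes per parameter (4-bit quantization),
# ignoring KV-cache and activation overhead.
def approx_vram_gib(n_params_billion: float, bytes_per_param: float = 0.5) -> float:
    """Return the approximate weight footprint in GiB."""
    return n_params_billion * 1e9 * bytes_per_param / 2**30

print(f"llama3.1:8b  ~ {approx_vram_gib(8):.1f} GiB")   # fits comfortably on one GPU
print(f"llama3.3:70b ~ {approx_vram_gib(70):.1f} GiB")  # exceeds the VRAM of most single GPUs
```

Even at 4-bit quantization the 70b weights alone need over 30 GiB, before counting the KV cache, so they overflow the memory of most single GPUs.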

Stop the current request, exit the model, and then exit the compute node.

Ctrl-C

/bye

exit

Ollama in Jupyter/Python

Create an environment to run the Jupyter notebook.

salloc -p devel --mem=20G

conda create --name ollama python jupyter jupyterlab

conda activate ollama

pip install ollama

ycrc_conda_env.sh update

exit
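Inside the notebook, a minimal sketch of talking to Ollama through the `ollama` Python client (installed above with `pip install ollama`) might look like the following. It assumes an `ollama serve` process is running on the same node and that the `llama3.1:8b` model has already been pulled.

```python
# Minimal single-turn chat call via the `ollama` Python client.

def make_messages(prompt: str) -> list:
    """Build the single-turn message list that ollama.chat() expects."""
    return [{"role": "user", "content": prompt}]

if __name__ == "__main__":
    import ollama  # requires a running `ollama serve` on this node

    response = ollama.chat(
        model="llama3.1:8b",
        messages=make_messages("Why is the sky blue?"),
    )
    print(response["message"]["content"])
```

The client talks to whatever server address the module's environment points it at, so no extra configuration should be needed on the same node.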

Reproducibility and Modelfiles

vim Modelfile

# Enter settings into the model file
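For example, a Modelfile aimed at reproducible output might look like the sketch below; the base model name and parameter values are illustrative, not the workshop's exact settings.

```
FROM llama3.1:8b
PARAMETER temperature 0
PARAMETER seed 42
SYSTEM "You are a concise assistant."
```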

Note

If you are using the shared model directory, you need to append a unique identifier to your model name (below we use your NetID via $USER). You don't need to do this if you are not using the shared directory.

ollama create my_model_$USER -f Modelfile
