Commit 05bde78

folder name change for llama3 (meta-llama#469)

2 parents: 135052a + c68410c
File tree

2 files changed: +2 additions, -2 deletions


recipes/quickstart/Running_Llama2_Anywhere/Running_Llama_on_Mac_Windows_Linux.ipynb renamed to recipes/quickstart/Running_Llama3_Anywhere/Running_Llama_on_Mac_Windows_Linux.ipynb

Lines changed: 2 additions & 2 deletions
@@ -40,9 +40,9 @@
  "\n",
  "Run `ollama pull llama3:70b` to download the Llama 3 70b chat model, also in the 4-bit quantized format with size 39GB.\n",
  "\n",
- "Then you can run `ollama run llama3` and ask Llama 3 questions such as \"who wrote the book godfather?\" or \"who wrote the book godfather? answer in one sentence.\" You can also try `ollama run llama3:70b`, but the inference speed will most likely be too slow - for example, on an Apple M1 Pro with 32GB RAM, it takes over 10 seconds to generate one token (vs over 10 tokens per second with Llama 3 7b chat).\n",
+ "Then you can run `ollama run llama3` and ask Llama 3 questions such as \"who wrote the book godfather?\" or \"who wrote the book godfather? answer in one sentence.\" You can also try `ollama run llama3:70b`, but the inference speed will most likely be too slow - for example, on an Apple M1 Pro with 32GB RAM, it takes over 10 seconds to generate one token using Llama 3 70b chat (vs over 10 tokens per second with Llama 3 8b chat).\n",
  "\n",
- "You can also run the following command to test Llama 3 (7b chat):\n",
+ "You can also run the following command to test Llama 3 8b chat:\n",
  "```\n",
  " curl http://localhost:11434/api/chat -d '{\n",
  " \"model\": \"llama3\",\n",
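
The hunk above cuts off mid-command, showing only the first lines of the curl request's JSON body. As a sketch of what a complete request could look like (the `messages` content and the `stream` flag here are illustrative additions, not part of the diff), the body can be built and sanity-checked before sending:

```shell
# Hypothetical complete request body for Ollama's /api/chat endpoint;
# the diff shows only its opening lines, so the rest is an assumption
# based on Ollama's chat API shape.
BODY='{
  "model": "llama3",
  "messages": [{ "role": "user", "content": "who wrote the book godfather? answer in one sentence." }],
  "stream": false
}'

# Confirm the body is well-formed JSON before sending it anywhere.
echo "$BODY" | python3 -m json.tool > /dev/null && echo "valid JSON"

# With a local Ollama server running (and llama3 already pulled), send it:
# curl http://localhost:11434/api/chat -d "$BODY"
```

Setting `"stream": false` asks the server for a single JSON response rather than a stream of per-token chunks, which is easier to inspect in a quick smoke test.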
