
Commit 23afbd4

some typo fixes; codellama 70; tokens generated; colab link

1 parent aab327c

File tree: 1 file changed (+12, -7 lines)

recipes/quickstart/Prompt_Engineering_with_Llama_3.ipynb (12 additions, 7 deletions)
@@ -5,6 +5,8 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
+"<a href=\"https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/quickstart/Prompt_Engineering_with_Llama_3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
+"\n",
 "# Prompt Engineering with Llama 3\n",
 "\n",
 "Prompt engineering is using natural language to produce a desired response from a large language model (LLM).\n",
@@ -45,7 +47,7 @@
 "\n",
 "#### Llama 3\n",
 "1. `llama-3-8b` - base pretrained 8 billion parameter model\n",
-"1. `llama-3-70b` - base pretrained 8 billion parameter model\n",
+"1. `llama-3-70b` - base pretrained 70 billion parameter model\n",
 "1. `llama-3-8b-instruct` - instruction fine-tuned 8 billion parameter model\n",
 "1. `llama-3-70b-instruct` - instruction fine-tuned 70 billion parameter model (flagship)\n",
 "\n",
@@ -75,12 +77,15 @@
 "1. `codellama-7b` - code fine-tuned 7 billion parameter model\n",
 "1. `codellama-13b` - code fine-tuned 13 billion parameter model\n",
 "1. `codellama-34b` - code fine-tuned 34 billion parameter model\n",
+"1. `codellama-70b` - code fine-tuned 70 billion parameter model\n",
 "1. `codellama-7b-instruct` - code & instruct fine-tuned 7 billion parameter model\n",
 "2. `codellama-13b-instruct` - code & instruct fine-tuned 13 billion parameter model\n",
 "3. `codellama-34b-instruct` - code & instruct fine-tuned 34 billion parameter model\n",
+"3. `codellama-70b-instruct` - code & instruct fine-tuned 70 billion parameter model\n",
 "1. `codellama-7b-python` - Python fine-tuned 7 billion parameter model\n",
 "2. `codellama-13b-python` - Python fine-tuned 13 billion parameter model\n",
-"3. `codellama-34b-python` - Python fine-tuned 34 billion parameter model"
+"3. `codellama-34b-python` - Python fine-tuned 34 billion parameter model\n",
+"3. `codellama-70b-python` - Python fine-tuned 70 billion parameter model"
 ]
 },
 {
@@ -124,11 +129,11 @@
 "\n",
 "> Our destiny is written in the stars.\n",
 "\n",
-"...is tokenized into `[\"Our\", \"destiny\", \"is\", \"written\", \"in\", \"the\", \"stars\", \".\"]` for Llama 3.\n",
+"...is tokenized into `[\"Our\", \" destiny\", \" is\", \" written\", \" in\", \" the\", \" stars\", \".\"]` for Llama 3. See [this](https://tiktokenizer.vercel.app/?model=meta-llama%2FMeta-Llama-3-8B) for an interactive tokenizer tool.\n",
 "\n",
 "Tokens matter most when you consider API pricing and internal behavior (ex. hyperparameters).\n",
 "\n",
-"Each model has a maximum context length that your prompt cannot exceed. That's 8K tokens for Llama 3 and 100K for Code Llama. \n"
+"Each model has a maximum context length that your prompt cannot exceed. That's 8K tokens for Llama 3, 4K for Llama 2, and 100K for Code Llama. \n"
 ]
 },
 {
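The tokenization hunk above adds the leading spaces that BPE tokenizers attach to word pieces, and the context-length sentence gains the Llama 2 figure. As a loose illustration only — a crude regex split, not Llama 3's actual tokenizer — the sketch below shows why tokens carry leading spaces, plus a hypothetical helper that checks a prompt budget against the context windows quoted in the diff:

```python
import re

# Context windows quoted in the changed cell (tokens).
CONTEXT_WINDOW = {"llama-3": 8192, "llama-2": 4096, "code-llama": 100_000}

def rough_tokenize(text):
    # Crude split that keeps each piece's leading space attached, mimicking
    # how BPE tokenizers represent pieces like " destiny". This is NOT the
    # real Llama 3 tokenizer, just an illustration of the notation.
    return re.findall(r" ?\w+|[^\w\s]", text)

def fits(num_prompt_tokens, max_new_tokens, model="llama-3"):
    # The prompt plus the tokens to be generated must fit the context window.
    return num_prompt_tokens + max_new_tokens <= CONTEXT_WINDOW[model]

print(rough_tokenize("Our destiny is written in the stars."))
# → ['Our', ' destiny', ' is', ' written', ' in', ' the', ' stars', '.']
```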
@@ -164,7 +169,7 @@
 "from groq import Groq\n",
 "\n",
 "# Get a free API key from https://console.groq.com/keys\n",
-"# os.environ[\"GROQ_API_KEY\"] = \"YOUR_KEY_HERE\"\n",
+"os.environ[\"GROQ_API_KEY\"] = \"YOUR_GROQ_API_KEY\"\n",
 "\n",
 "LLAMA3_70B_INSTRUCT = \"llama3-70b-8192\"\n",
 "LLAMA3_8B_INSTRUCT = \"llama3-8b-8192\"\n",
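The hunk above uncomments the API-key line and defines the Groq model names. A minimal sketch of how those names might be used with the Groq SDK's OpenAI-style chat API (`build_messages` and `complete` are hypothetical helpers, not from the notebook; the `groq` package is assumed installed):

```python
import os

# Model names from the hunk above.
LLAMA3_70B_INSTRUCT = "llama3-70b-8192"
LLAMA3_8B_INSTRUCT = "llama3-8b-8192"

def build_messages(prompt, system=None):
    """Assemble the chat message list the OpenAI-style Groq API expects."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return messages

def complete(prompt, model=LLAMA3_70B_INSTRUCT):
    # Imported here so build_messages stays usable without the SDK installed.
    from groq import Groq
    client = Groq(api_key=os.environ["GROQ_API_KEY"])
    resp = client.chat.completions.create(model=model, messages=build_messages(prompt))
    return resp.choices[0].message.content
```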
@@ -699,7 +704,7 @@
 "source": [
 "### Limiting Extraneous Tokens\n",
 "\n",
-"A common struggle is getting output without extraneous tokens (ex. \"Sure! Here's more information on...\").\n",
+"A common struggle with Llama 2 is getting output without extraneous tokens (ex. \"Sure! Here's more information on...\"), even if explicit instructions are given to Llama 2 to be concise and no preamble. Llama 3 can better follow instructions.\n",
 "\n",
 "Check out this improvement that combines a role, rules and restrictions, explicit instructions, and an example:"
 ]
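The notebook's improved prompt cell isn't shown in this diff; the wording below is a hypothetical sketch of the pattern the changed text describes — a role, rules and restrictions, explicit instructions, and an example:

```python
# Hypothetical prompt illustrating the role + rules + instructions + example
# pattern; this is NOT the notebook's actual cell.
system_prompt = (
    "You are a robot that only outputs JSON.\n"           # role
    "Rules: reply with valid JSON only; no preamble, "    # rules and
    "no explanations, no closing remarks.\n"              # restrictions
    "Answer the user's question in a single JSON object "  # explicit
    "with one key, \"answer\".\n"                          # instructions
    "Example: for 'capital of France' reply {\"answer\": \"Paris\"}."  # example
)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What is the capital of Germany?"},
]
```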
@@ -766,7 +771,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.12.3"
+"version": "3.10.14"
 },
 "last_base_url": "https://bento.edge.x2p.facebook.net/",
 "last_kernel_id": "161e2a7b-2d2b-4995-87f3-d1539860ecac",
