|
49 | 49 | }, |
50 | 50 | { |
51 | 51 | "cell_type": "code", |
52 | | - "execution_count": 1, |
| 52 | + "execution_count": null, |
53 | 53 | "id": "8dcf656b", |
54 | 54 | "metadata": {}, |
55 | 55 | "outputs": [], |
|
64 | 64 | "source": [ |
65 | 65 | "### String PromptTemplates\n", |
66 | 66 | "\n", |
67 | | - "The [String PromptTemplates](https://python.langchain.com/v0.2/docs/concepts/#string-prompttemplates) is used to format a string input. By default, the template takes Python `f-string` format. There are currently 2 choices of `template_format` available: `f-string` and `jinja2`. Later we will see the use of `jinja2` format. In the example below, we will use the `f-string` format." |
| 67 | + "The [String PromptTemplates](https://python.langchain.com/v0.2/docs/concepts/#string-prompttemplates) are used to format a string input. By default, the templates take Python's `f-string` format. There are currently two choices of `template_format` available: `f-string` and `jinja2`. Later, we will see the use of the `jinja2` format. In the example below, we will use the `f-string` format." |
68 | 68 | ] |
69 | 69 | }, |
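For instance, a minimal sketch of the `f-string` flavour (the `topic` variable below is illustrative, not from the notebook):

```python
# A minimal sketch of a String PromptTemplate using the default f-string format.
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Tell me a short fact about {topic}.")
print(prompt.format(topic="the Sun"))  # -> "Tell me a short fact about the Sun."
```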
70 | 70 | { |
|
91 | 91 | }, |
92 | 92 | { |
93 | 93 | "cell_type": "code", |
94 | | - "execution_count": 3, |
| 94 | + "execution_count": null, |
95 | 95 | "id": "13cf3124", |
96 | 96 | "metadata": {}, |
97 | 97 | "outputs": [], |
|
124 | 124 | }, |
125 | 125 | { |
126 | 126 | "cell_type": "code", |
127 | | - "execution_count": 5, |
| 127 | + "execution_count": null, |
128 | 128 | "id": "9", |
129 | 129 | "metadata": {}, |
130 | 130 | "outputs": [], |
|
163 | 163 | }, |
164 | 164 | { |
165 | 165 | "cell_type": "code", |
166 | | - "execution_count": 8, |
| 166 | + "execution_count": null, |
167 | 167 | "id": "12", |
168 | 168 | "metadata": {}, |
169 | 169 | "outputs": [], |
|
192 | 192 | "source": [ |
193 | 193 | "#### Your turn 😎\n", |
194 | 194 | "\n", |
195 | | - "Create a `StringPromptTemplate` that outputs some text generation prompt, for example, \"Sun is part of galaxy ...\".\n", |
| 195 | + "Create a `String PromptTemplate` that outputs some text generation prompt, for example, \"Sun is part of galaxy ...\".\n", |
196 | 196 | "\n", |
197 | 197 | "Feel free to experiment with the built-in [Python `f-string`](https://docs.python.org/3.11/tutorial/inputoutput.html#formatted-string-literals) for the `prompt` input argument to the model." |
198 | 198 | ] |
199 | 199 | }, |
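One possible way to approach this exercise (a sketch only; the template text and variable names below are illustrative, not a prescribed answer):

```python
# Sketch of one possible answer; variable names are hypothetical.
from langchain_core.prompts import PromptTemplate

template = PromptTemplate.from_template("{subject} is part of galaxy {galaxy} ...")
print(template.format(subject="Sun", galaxy="Milky Way"))
```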
200 | 200 | { |
201 | 201 | "cell_type": "code", |
202 | | - "execution_count": 10, |
| 202 | + "execution_count": null, |
203 | 203 | "id": "b0cdc634", |
204 | 204 | "metadata": {}, |
205 | 205 | "outputs": [], |
|
220 | 220 | "id": "8e67e77a", |
221 | 221 | "metadata": {}, |
222 | 222 | "source": [ |
223 | | - "LangChain have implemented a [`Runnable`](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol that allows us to create custom \"chains\".\n", |
| 223 | + "LangChain has implemented a [`Runnable`](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol that allows us to create custom \"chains\".\n", |
224 | 224 | "This protocol has a standard interface for defining and invoking various LLMs, PromptTemplates, and other components, enabling reusability.\n", |
225 | 225 | "For more details, go to LangChain's [Runnable documentation](https://python.langchain.com/v0.2/docs/concepts/#runnable-interface).\n", |
226 | 226 | "\n", |
227 | 227 | "```{note}\n", |
228 | | - "In this tutorial, you will see the use of `.invoke` method on various LangChain's object.\n", |
| 228 | + "In this tutorial, you will see the use of the `.invoke` method on various LangChain objects.\n", |
229 | 229 | "This is essentially using that standard interface for the `Runnable` protocol.\n", |
230 | 230 | "```" |
231 | 231 | ] |
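As a rough illustration of the `Runnable` interface, here is a minimal sketch using `RunnableLambda` (not part of this tutorial; shown purely to demonstrate `.invoke` and the `|` composition operator):

```python
# Any Runnable exposes .invoke, and Runnables compose into chains with |.
from langchain_core.runnables import RunnableLambda

add_one = RunnableLambda(lambda x: x + 1)
double = RunnableLambda(lambda x: x * 2)

chain = add_one | double  # a two-step chain, itself a Runnable
print(chain.invoke(3))    # -> 8
```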
|
240 | 240 | }, |
241 | 241 | { |
242 | 242 | "cell_type": "code", |
243 | | - "execution_count": 11, |
| 243 | + "execution_count": null, |
244 | 244 | "id": "16", |
245 | 245 | "metadata": {}, |
246 | 246 | "outputs": [], |
|
250 | 250 | }, |
251 | 251 | { |
252 | 252 | "cell_type": "code", |
253 | | - "execution_count": 12, |
| 253 | + "execution_count": null, |
254 | 254 | "id": "17", |
255 | 255 | "metadata": {}, |
256 | 256 | "outputs": [], |
|
290 | 290 | }, |
291 | 291 | { |
292 | 292 | "cell_type": "code", |
293 | | - "execution_count": 14, |
| 293 | + "execution_count": null, |
294 | 294 | "id": "864d5266", |
295 | 295 | "metadata": {}, |
296 | 296 | "outputs": [], |
|
313 | 313 | "id": "8b730d9e", |
314 | 314 | "metadata": {}, |
315 | 315 | "source": [ |
316 | | - "If you'd like to access the base object `Llama` object from the `llama-cpp-python` package, you can access it via the `.client` attribute of the `LlamaCpp` object." |
| 316 | + "If you'd like to access the base `Llama` object from the `llama-cpp-python` package, you can access it via the `.client` attribute of the `LlamaCpp` object." |
317 | 317 | ] |
318 | 318 | }, |
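For example (a hedged sketch; the model path is hypothetical and the setup assumes the `LlamaCpp` instance created earlier in the notebook):

```python
# Reach the underlying llama_cpp.Llama object via the .client attribute.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(model_path="path/to/model.gguf")  # assumed setup from earlier cells
base_llama = llm.client                          # the base Llama object
print(type(base_llama))                          # <class 'llama_cpp.Llama'>
```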
319 | 319 | { |
|
338 | 338 | }, |
339 | 339 | { |
340 | 340 | "cell_type": "code", |
341 | | - "execution_count": 17, |
| 341 | + "execution_count": null, |
342 | 342 | "id": "18", |
343 | 343 | "metadata": {}, |
344 | 344 | "outputs": [], |
|
400 | 400 | "metadata": {}, |
401 | 401 | "source": [ |
402 | 402 | "As we can see above, the template reads as follows:\n", |
403 | | - "- `eos_token` is a string that is added at the top of the resulting string after prompt is formatted.\n", |
| 403 | + "- `eos_token` is a string that is added at the top of the resulting string after the prompt is formatted.\n", |
404 | 404 | "You can also see that `eos_token` is used to append `content` string values from an `assistant` `role`.\n", |
405 | 405 | "You can find this value by going to the model's [`tokenizer_config.json`](https://huggingface.co/allenai/OLMo-7B-Instruct-hf/blob/main/tokenizer_config.json#L233) file and looking for the `eos_token` key. *Unfortunately, this is currently the only way to get this information; see https://github.com/ggerganov/llama.cpp/issues/5040 for more details.* In our case, the `eos_token` is `<|endoftext|>`.\n", |
406 | | - "- `messages` is a list of dictionary that is iterated over. As you can see that this dictionary should contain a `role` and `content` key.\n", |
407 | | - "- `add_generation_prompt` is a boolean that is used to determine whether to add a generation prompt or not. In this case, when it's the last message and `add_generation_prompt` is `True`, it will add `<|assistant|>` string to the end of the prompt." |
| 406 | + "- `messages` is a list of dictionaries that is iterated over. As you can see, these dictionaries should contain `role` and `content` keys.\n", |
| 407 | + "- `add_generation_prompt` is a boolean that is used to determine whether to add a generation prompt or not. In this case, when it's the last message and `add_generation_prompt` is `True`, it will add the `<|assistant|>` string to the end of the prompt." |
408 | 408 | ] |
409 | 409 | }, |
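To make the three template pieces concrete, here is a hedged sketch that renders a simplified, OLMo-style chat template directly with `jinja2` (the template string is an illustrative approximation, not the exact one from `tokenizer_config.json`):

```python
from jinja2 import Template

# Simplified chat template: prepend eos_token, iterate over messages,
# append eos_token after assistant turns, and optionally add the
# generation prompt "<|assistant|>" at the end.
chat_template = Template(
    "{{ eos_token }}"
    "{% for m in messages %}<|{{ m['role'] }}|>\n{{ m['content'] }}"
    "{% if m['role'] == 'assistant' %}{{ eos_token }}{% endif %}\n{% endfor %}"
    "{% if add_generation_prompt %}<|assistant|>\n{% endif %}"
)

messages = [{"role": "user", "content": "What are stars and moon?"}]
print(chat_template.render(eos_token="<|endoftext|>",
                           messages=messages,
                           add_generation_prompt=True))
```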
410 | 410 | { |
411 | 411 | "cell_type": "markdown", |
412 | 412 | "id": "f1ad5f5c", |
413 | 413 | "metadata": {}, |
414 | 414 | "source": [ |
415 | | - "Now that we know what the template expects we can create the final prompt string by passing in the expected input variables, this time, instead of using the `.format` method, let's see what happens if we use the `.invoke` method on the `PromptTemplate` object." |
| 415 | + "Now that we know what the template expects, we can create the final prompt string by passing in the expected input variables. This time, instead of using the `.format` method, let's see what happens if we use the `.invoke` method on the `PromptTemplate` object." |
416 | 416 | ] |
417 | 417 | }, |
418 | 418 | { |
|
464 | 464 | }, |
465 | 465 | { |
466 | 466 | "cell_type": "code", |
467 | | - "execution_count": 21, |
| 467 | + "execution_count": null, |
468 | 468 | "id": "4caf24cf", |
469 | 469 | "metadata": {}, |
470 | 470 | "outputs": [], |
|
488 | 488 | "id": "6d33e0d4", |
489 | 489 | "metadata": {}, |
490 | 490 | "source": [ |
491 | | - "You can see below that we get [`StringPromptValue`](https://api.python.langchain.com/en/latest/prompt_values/langchain_core.prompt_values.StringPromptValue.html) object this time as the output rather than pure string. But we can still get the string value by calling the `.to_string` method on the `StringPromptValue` object." |
| 491 | + "You can see below that we get a [`StringPromptValue`](https://api.python.langchain.com/en/latest/prompt_values/langchain_core.prompt_values.StringPromptValue.html) object this time as the output rather than a pure string. But we can still get the string value by calling the `.to_string` method on the `StringPromptValue` object." |
492 | 492 | ] |
493 | 493 | }, |
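A sketch of that round trip (assuming `prompt` is the chat `PromptTemplate` built above, with the same input variables):

```python
# .invoke returns a StringPromptValue rather than a plain string.
prompt_value = prompt.invoke({
    "eos_token": "<|endoftext|>",
    "messages": [{"role": "user", "content": "What are stars and moon?"}],
    "add_generation_prompt": True,
})
print(type(prompt_value))        # langchain_core.prompt_values.StringPromptValue
print(prompt_value.to_string())  # the fully formatted prompt string
```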
494 | 494 | { |
|
560 | 560 | "STEP 2: Prompt Template reads the variables to form the prompt text as output - \"What are stars and moon?\" \n", |
561 | 561 | "STEP 3: The prompt is given as input to the LLM model. \n", |
562 | 562 | "STEP 4: LLM Model produces output. \n", |
563 | | - "STEP 5: The output goes through StrOutputParser that parses it into string and gives the result. " |
| 563 | + "STEP 5: The output goes through the StrOutputParser, which parses it into a string and returns the result. " |
564 | 564 | ] |
565 | 565 | }, |
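Those five steps map onto a chain like the following hedged sketch (it assumes `llm` is the `LlamaCpp` model loaded earlier; the variable names are illustrative):

```python
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = PromptTemplate.from_template("What are {object_one} and {object_two}?")
chain = prompt | llm | StrOutputParser()   # STEPs 2-5 wired together

# STEP 1: pass the variables; STEP 5 returns a plain string.
result = chain.invoke({"object_one": "stars", "object_two": "moon"})
print(result)
```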
566 | 566 | { |
|
573 | 573 | }, |
574 | 574 | { |
575 | 575 | "cell_type": "code", |
576 | | - "execution_count": 23, |
| 576 | + "execution_count": null, |
577 | 577 | "id": "25", |
578 | 578 | "metadata": {}, |
579 | 579 | "outputs": [], |
|
667 | 667 | }, |
668 | 668 | { |
669 | 669 | "cell_type": "code", |
670 | | - "execution_count": 28, |
| 670 | + "execution_count": null, |
671 | 671 | "id": "28", |
672 | 672 | "metadata": {}, |
673 | 673 | "outputs": [], |
|
682 | 682 | }, |
683 | 683 | { |
684 | 684 | "cell_type": "code", |
685 | | - "execution_count": 29, |
| 685 | + "execution_count": null, |
686 | 686 | "id": "29", |
687 | 687 | "metadata": {}, |
688 | 688 | "outputs": [], |
|
705 | 705 | }, |
706 | 706 | { |
707 | 707 | "cell_type": "code", |
708 | | - "execution_count": 30, |
| 708 | + "execution_count": null, |
709 | 709 | "id": "31", |
710 | 710 | "metadata": {}, |
711 | 711 | "outputs": [], |
|
748 | 748 | "source": [ |
749 | 749 | "#### Your turn 😎\n", |
750 | 750 | "\n", |
751 | | - "Try different messages value(s) and see how the output changes. But remember to follow the template structure.\n", |
| 751 | + "Try different message values and see how the output changes. But remember to follow the template structure.\n", |
752 | 752 | "The dictionary keys must contain `role` and `content` and the allowed `role` values are only `user` and `assistant`." |
753 | 753 | ] |
754 | 754 | }, |
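For instance, one variation that respects the template structure (the messages are made up, and the call assumes the `llm_chain` defined in the earlier cells):

```python
# Hypothetical multi-turn input; roles restricted to "user" and "assistant".
messages = [
    {"role": "user", "content": "Name one planet in the solar system."},
    {"role": "assistant", "content": "Mars."},
    {"role": "user", "content": "Why does it look red?"},
]
# llm_chain.invoke({"messages": messages})  # uncomment once llm_chain is defined
```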
|
761 | 761 | "source": [ |
762 | 762 | "# Write your llm_chain.invoke code here; feel free to also create your own template and try partial_variables" |
763 | 763 | ] |
| 764 | + }, |
| 765 | + { |
| 766 | + "cell_type": "code", |
| 767 | + "execution_count": null, |
| 768 | + "id": "daade3a0", |
| 769 | + "metadata": {}, |
| 770 | + "outputs": [], |
| 771 | + "source": [] |
| 772 | + }, |
| 773 | + { |
| 774 | + "cell_type": "code", |
| 775 | + "execution_count": null, |
| 776 | + "id": "e20c605d-ecd3-400a-9ee7-cd1c9fa9d486", |
| 777 | + "metadata": {}, |
| 778 | + "outputs": [], |
| 779 | + "source": [] |
764 | 780 | } |
765 | 781 | ], |
766 | 782 | "metadata": { |
|
779 | 795 | "name": "python", |
780 | 796 | "nbconvert_exporter": "python", |
781 | 797 | "pygments_lexer": "ipython3", |
782 | | - "version": "3.11.9" |
| 798 | + "version": "3.11.11" |
783 | 799 | } |
784 | 800 | }, |
785 | 801 | "nbformat": 4, |
|