Commit 739a2f5

final cleanup before pushing to blueprint
1 parent ac6459b commit 739a2f5

3 files changed: 29 additions & 14 deletions

docs/class1/module1/lab3.rst

Lines changed: 17 additions & 9 deletions
@@ -186,12 +186,10 @@ First up, Calvin & Hobbes
 
 docker exec -it ollama ollama run calvin-hobbes
 
-.. code-block:: bash
-
->>> tell me about snowmen
+Once at the >>> prompt, I asked the model to "tell me about snowmen"
 
-Your responses will vary, LLMs are not digests regurgitating data. If it stops short, try prompting it again.
-My response:
+Your responses will vary as LLMs are not digests regurgitating data. If it stops short, try prompting it again.
+The response I got is below:
 
 
 .. code-block:: bash
@@ -211,17 +209,21 @@ My response:
 cooler than your boring old snowman!\nHobbes: *sigh* Just don't burn down the neighborhood
 with your reckless plans.
 
-Next up, Phineas & Ferb
+Make sure to exit the model.
 
 .. code-block:: bash
 
-docker exec -it ollama ollama run phineas-ferb
+/bye
+
+Next up, Phineas & Ferb
 
 .. code-block:: bash
 
->>> tell me about snowmen
+docker exec -it ollama ollama run phineas-ferb
+
+Once at the >>> prompt, I repeated "tell me about snowmen" for comparison.
 
-And my response. Notice the tone and style is different than the calvin-hobbes model.
+The response is below. Notice the tone and style is different than the calvin-hobbes model.
 
 
 .. code-block:: bash
@@ -264,6 +266,12 @@ And my response. Notice the tone and style is different than the calvin-hobbes m
 
 (They continue to engage in an epic snowball battle, laughing and having fun together)
 
+Make sure to exit the model.
+
+.. code-block:: bash
+
+/bye
+
 .. note::
 
 I tried at first to build these custom models based on tinyllama, but that model wasn't
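
For readers following the lab from this diff, the full interactive exchange now reads roughly as below. This is only a sketch assembled from the commands shown in the hunks above; the model's reply text will differ on every run.

.. code-block:: bash

   # attach to the running ollama container and load the custom model
   docker exec -it ollama ollama run calvin-hobbes

   # at the interactive prompt, ask the question, then exit cleanly
   >>> tell me about snowmen
   >>> /bye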

docs/class1/module3/lab2.rst

Lines changed: 10 additions & 4 deletions
@@ -28,7 +28,8 @@ Steps
 
 .. image:: images/00_n8n_Interface.png
 
-#. Enter account details and click **Next**.
+#. Enter account details and click **Next**. Make sure to use a real email address to get your license key, and
+   check your junk folder if you don't get it quickly.
 
 .. image:: images/01_owner_account.png
 
@@ -63,15 +64,15 @@ Steps
 .. image:: images/08_back_to_canvas.png
 
 #. Create your agent! Click the + icon that is attached to your trigger and select AI > AI Agent from the side menu. When your agent definition screen comes up,
-take a look around, but change nothing and head back to your canvas:
+   take a look around, but change nothing and head back to your canvas:
 
 .. image:: images/09_click_ai.png
 .. image:: images/10_select_agent.png
 .. image:: images/11_agent_define.png
 
 #. Enable your agent with Ollama. Click the add chat model button, then select Ollama Chat Model from the side menu. When the Ollama definition screen comes up,
-click in the box and create new credentials. All you need to do for Ollama credentials is to set the actual IP of your host machine by replacing ``localhost`` with
-``10.1.1.5`` Once complete, close out of the definition screen:
+   click in the box and create new credentials. All you need to do for Ollama credentials is to set the actual IP of your host machine by replacing ``localhost`` with
+   ``10.1.1.5``. Once complete, close out of the definition screen:
 
 .. image:: images/12_connect_model.png
 .. image:: images/13_create_cred.png
@@ -84,4 +85,9 @@ click in the box and create new credentials. All you need to do for Ollama crede
 .. image:: images/16_type_hello.png
 .. image:: images/17_mic_drop.png
 
+.. note::
+
+   This hello world is a drop-in replacement for what we did in Open WebUI--a graphical front-end for chatting with an
+   LLM. But there is so much more to both tools to discover!
+
 #. Homework: Now that you've done the lab, explore the memory and tool options in your agent. The memory allows you to insert your chat data in any of a number of databases. The tools are connectors to various other resources and web utilities like ticketing services and chats. Check out the various triggers besides chat interface, as well. There are incredible ways to trigger these flows, too. Please imagine the possibilities for automating a million things in your workday with simple agentic flows.
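
Since the only credential change here is pointing the Ollama Chat Model at the host IP instead of ``localhost``, a quick reachability check can save debugging time. The sketch below assumes Ollama is listening on its default port, 11434; the lab text itself does not mention the port.

.. code-block:: bash

   # list the models Ollama is serving at the host address used in the n8n credential
   curl http://10.1.1.5:11434/api/tags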

docs/class1/module4/lab1.rst

Lines changed: 2 additions & 1 deletion
@@ -42,7 +42,8 @@ The output should resemble this:
 ✔ Container aigw-processors-demo Started 0.5s
 ✔ Container aigw Started
 
-It might take a couple minutes for the containers to fully load. You can check status with **docker ps**
+It might take a couple minutes for the containers to fully load. You can check status with **docker ps**. If
+one of the containers never gets there, run **docker compose restart**.
 
 .. code-block:: bash
 
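
If a container does stall, the check-and-restart sequence suggested in the added lines looks roughly like this. It is a sketch that assumes you run it from the directory holding the lab's compose file.

.. code-block:: bash

   # confirm both containers reached a running state
   docker ps --filter "name=aigw"

   # if one is stuck, restart the whole compose project
   docker compose restart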
