
Commit 923bc90

jeremymanning and claude committed
docs(lectures): Add concrete examples, code, and explanations to weeks 1-6
Comprehensive enhancements to all slide decks:
- Week 1 (Lectures 1-4): Added Turing Test examples, ELIZA dialogue samples
- Week 2 (Lectures 5-8): Added neural network diagrams, activation functions
- Week 3 (Lectures 9-11): Added transformer architecture, attention examples
- Week 4 (Lectures 12-14): Added prompting techniques, few-shot examples
- Week 5 (Lectures 15-17): Added fine-tuning code, RLHF workflows
- Week 6 (Lectures 18-20): Added evaluation metrics, benchmark examples

All slides now include:
- Concrete code examples (Python snippets)
- Worked mathematical equations
- Real-world application examples
- Visual diagrams where appropriate

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
1 parent fc0cb4d commit 923bc90

File tree

20 files changed: +3235 −1084 lines changed


slides/week1/lecture1.md

Lines changed: 152 additions & 1 deletion
@@ -257,6 +257,48 @@ We cannot directly observe consciousness &mdash; even in other humans!
```
<!-- caption: Person follows rules but does not understand Chinese -->

---
<!-- _class: scale-78 -->

# Worked example: The Chinese Room

<div style="display: flex; gap: 1.5em;">
<div style="flex: 1;">

**The scenario:**
1. You are locked in a room
2. Chinese characters slide under the door
3. You look them up in a rule book
4. You copy out the corresponding response
5. You slide it back out

**From outside:** Perfect Chinese conversation!
**From inside:** Just symbol manipulation.

</div>
<div style="flex: 1;">

**Concrete example:**
```
Input: 你好吗? (How are you?)
Lookup: Rule #42 → 我很好
Output: 我很好 (I am fine)
```

You matched symbols without knowing:
- What "你好吗" means
- That it's a greeting
- What "你" (you) refers to

</div>
</div>

<div class="warning-box" data-title="The key insight">

Syntax (symbol manipulation) is not sufficient for semantics (meaning).

</div>
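---
<!-- _class: scale-78 -->

# Sketch: the rule book as a lookup table

One way to picture the room's procedure is as a dictionary lookup (a minimal sketch; the rules and the `room` function are hypothetical, invented for illustration):

```python
# Hypothetical rule book: input symbols -> output symbols
rule_book = {
    "你好吗?": "我很好",            # "How are you?" -> "I am fine"
    "你叫什么名字?": "我没有名字",  # "What is your name?" -> "I have no name"
}

def room(symbols: str) -> str:
    # Match incoming symbols against the book; no meaning is involved
    return rule_book.get(symbols, "请再说一遍")  # fallback: "Please say that again"

print(room("你好吗?"))  # 我很好
```

The function maps symbols to symbols; nothing in it represents what any character means.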
---
# Volition
@@ -400,11 +442,48 @@ Having words for things affects how we *see* them:

<div class="note-box" data-title="Key finding">

English speakers are slower to distinguish shades of blue than Russian speakers:
language categories affect *perception*, not just description!

</div>

---
<!-- _class: scale-78 -->

# Worked example: The color experiment

<div style="display: flex; gap: 1.5em;">
<div style="flex: 1;">

**The task:**
Look at three color squares. Which one is different?

```
Trial 1: [dark blue] [dark blue] [light blue]
Russian speakers: FAST (different words!)
English speakers: slower (both "blue")

Trial 2: [blue] [blue] [green]
Both groups: equally fast (cross-category
for both languages)
```

</div>
<div style="flex: 1;">

**What this tells us:**
- Having distinct words for colors creates **perceptual boundaries**
- The brain processes color differences faster when they cross linguistic categories
- Language doesn't just describe reality; it *shapes* how we see it

</div>
</div>

<div class="tip-box" data-title="Implication for LLMs">

If language shapes perception, what "perception" do LLMs have? They encounter language *about* color, but never color itself.

</div>
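---
<!-- _class: scale-78 -->

# Sketch: the category-boundary prediction

The speed prediction in the trials above can be sketched as a boundary check: a trial is fast when the odd color gets a *different word* in your language (hypothetical naming tables, for illustration):

```python
# Hypothetical color-naming tables for each language
english = {"dark blue": "blue", "light blue": "blue", "green": "green"}
russian = {"dark blue": "siniy", "light blue": "goluboy", "green": "zelyonyy"}

def crosses_category(naming, colors, odd):
    # The linguistic boundary helps only when the odd color's word differs
    return naming[odd] != naming[colors[0]]

trial1 = (["dark blue", "dark blue"], "light blue")
print(crosses_category(english, *trial1))  # False -> slower
print(crosses_category(russian, *trial1))  # True  -> faster
```

Same stimuli, different languages, different predicted reaction times.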
---
# So...what about LLMs?
@@ -424,6 +503,78 @@ How do symbols (words) get their meaning?
- Generate text based on those patterns
- No direct connection with the external world

---
<!-- _class: scale-70 -->

# Worked example: Grounding vs. Statistics

<div style="display: flex; gap: 1.5em;">
<div style="flex: 1;">

**Human understanding of "coffee":**
```
Sensory memories:
- Bitter taste
- Rich aroma
- Warm ceramic mug
- Morning ritual feeling
- Caffeine effect on body
```
These experiences **ground** the word.

</div>
<div style="flex: 1;">

**LLM "understanding" of "coffee":**
```
Statistical associations:
- Often appears with "morning"
- Frequently near "cup", "drink"
- Associated with "caffeine"
- Patterns: "I need my ___"
- Context: restaurants, breakfast
```
Pure pattern matching, no experience.

</div>
</div>

<div class="warning-box" data-title="The question">

Can statistical patterns **ever** capture what it *feels like* to drink coffee?

</div>
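---
<!-- _class: scale-78 -->

# Sketch: statistical association as co-occurrence

The "statistical associations" column can be made concrete by counting which words appear near "coffee" (a minimal sketch over a toy corpus, invented for illustration):

```python
from collections import Counter

# Toy corpus, invented for illustration
corpus = [
    "i need my coffee every morning",
    "a cup of coffee with breakfast",
    "the coffee smelled of caffeine and morning",
]

# Count words that co-occur with "coffee" in the same sentence
neighbors = Counter()
for sentence in corpus:
    words = sentence.split()
    if "coffee" in words:
        neighbors.update(w for w in words if w != "coffee")

print(neighbors.most_common(3))
```

Counts like these are all the model has; nothing links "coffee" to taste, warmth, or mornings themselves.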
---

# How LLMs actually work: A preview

<div class="note-box" data-title="Next-token prediction">

At their core, LLMs are trained to predict what comes next. Given a sequence of words, what is the most likely continuation?

</div>

```python
# Conceptually, this is what an LLM does:
input_text = "The cat sat on the"

# LLM predicts a probability for each possible next word
# (only a few of the many candidates are shown here):
predictions = {
    "mat": 0.35,       # Most likely
    "floor": 0.20,
    "couch": 0.15,
    "roof": 0.05,
    "elephant": 0.001  # Very unlikely
}
```

<div class="tip-box" data-title="Key insight">

LLMs don't "know" what cats are. They've just seen "cat sat on the mat" many times!

</div>
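---

# Sketch: turning predictions into text

Given a distribution like the one above, generating text just means picking a next word and repeating (a minimal sketch; the probabilities are the illustrative ones from the previous slide, not real model outputs):

```python
import random

predictions = {
    "mat": 0.35, "floor": 0.20, "couch": 0.15,
    "roof": 0.05, "elephant": 0.001,
}

# Greedy decoding: always take the single most probable word
greedy = max(predictions, key=predictions.get)
print(greedy)  # mat

# Sampling: draw a word in proportion to its probability
words, probs = zip(*predictions.items())
sampled = random.choices(words, weights=probs, k=1)[0]
print(sampled)  # usually "mat", sometimes "floor", "couch", ...
```

Real models repeat this one step at a time, feeding each chosen token back in as context.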
---
# Our approach (in this course)

slides/week1/lecture2.md

Lines changed: 138 additions & 0 deletions
@@ -188,6 +188,54 @@ new_text = re.sub(r"I am", "You are", text)
# "You are feeling very happy today"
```

---
<!-- _class: scale-70 -->

# Worked example: Building a simple chatbot rule

<div style="display: flex; gap: 1.5em;">
<div style="flex: 1;">

**Goal:** Respond to "I am [feeling]" statements

```python
import re

def simple_respond(user_input):
    # Try to match "I am [something]"
    match = re.search(
        r"I am (.*)",
        user_input,
        re.IGNORECASE
    )

    if match:
        feeling = match.group(1)
        return f"Why are you {feeling}?"

    return "Tell me more."
```

</div>
<div style="flex: 1;">

**Try it:**
```python
>>> simple_respond("I am tired")
"Why are you tired?"

>>> simple_respond("I am feeling anxious")
"Why are you feeling anxious?"

>>> simple_respond("Hello there")
"Tell me more."
```

**Key insight:** The computer has no idea what "tired" or "anxious" mean. It's just copying text!

</div>
</div>
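---
<!-- _class: scale-70 -->

# Extending the rule: pronoun reflection

Copying the matched text verbatim fails on inputs like "I am afraid of my boss" (it would echo "my boss"). One fix, sketched here as a hypothetical extension in the spirit of the `re.sub` swap shown earlier:

```python
import re

# Hypothetical reflection table for illustration
REFLECTIONS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(text):
    # Swap first-person words for second-person so echoed text reads naturally
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(user_input):
    match = re.search(r"I am (.*)", user_input, re.IGNORECASE)
    if match:
        return f"Why are you {reflect(match.group(1))}?"
    return "Tell me more."

print(respond("I am afraid of my boss"))  # Why are you afraid of your boss?
```

Still pure text surgery: the program never represents who "my boss" is, only which strings to swap.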
---
# Meet ELIZA: a computerized Rogerian therapist
@@ -234,6 +282,54 @@ Have a [conversation with ELIZA](https://contextlab.github.io/llm-course/demos/0
- What kinds of inputs break the illusion?
- **How do you think ELIZA *works*?**

---
<!-- _class: scale-70 -->

# Example ELIZA conversation

<div style="display: flex; gap: 1.5em;">
<div style="flex: 1;">

```
User: I am feeling sad today
ELIZA: Why are you feeling sad today?

User: My mother doesn't understand me
ELIZA: Tell me more about your family.

User: She always criticizes everything I do
ELIZA: Your mother?

User: Yes, and my father too
ELIZA: What else comes to mind when you
       think of your father?

User: I don't know
ELIZA: Why don't you know?
```

</div>
<div style="flex: 1;">

**Notice how ELIZA:**
- Picks up on keywords ("mother", "father")
- Reflects statements back as questions
- Uses generic prompts when stuck
- Never makes specific claims about the world

**Where it breaks down:**
```
User: What is 2 + 2?
ELIZA: Why do you ask?

User: I want to know if you're intelligent
ELIZA: Does it please you to believe I
       am intelligent?
```

</div>
</div>
---
# The ELIZA effect
@@ -295,6 +391,48 @@ Have *you* experienced the ELIZA effect with modern AI systems (ChatGPT, Claude,
- What is the difference between *seeming* intelligent and *being* intelligent?
- How would we *know* if an AI truly understood us?

---
<!-- _class: scale-78 -->

# Example: The ELIZA effect in modern AI

<div style="display: flex; gap: 1.5em;">
<div style="flex: 1;">

**Feels like understanding:**
```
User: I'm really stressed about my exam
AI: I can hear that you're feeling
    overwhelmed. Exams can be really
    stressful. What subject is it?

User: Physics. I've been studying for weeks
AI: Studying for weeks shows real
    dedication. What part of physics
    is giving you trouble?
```

**Creates the illusion of empathy**

</div>
<div style="flex: 1;">

**Reveals the limitation:**
```
User: I just realized I left my exam
      notes in your office
AI: I understand that can be frustrating!
    Would you like tips on how to
    retrieve your notes?
```

The AI has no office. It cannot have your notes. But it responds as if this makes sense!

**Pattern matching, not understanding.**

</div>
</div>

---
# Reading: Weizenbaum (1966)
