Commit 1a23e70 (parent 8480d6d)

embraceai meetup

2 files changed: 9 additions, 2 deletions
@@ -1,7 +1,14 @@
 ---
-title: "[UPCOMING] Embrace:AI // 2025.06 - Reasoning LLMs & Multimodal Architecture"
+title: "Embrace:AI // 2025.06 - Reasoning LLMs & Multimodal Architecture"
 date: 2025-11-06
 author: "ZanSara"
 featuredImage: "/talks/2025-11-06-embrace-ai-meetup-reasoning-models.png"
-externalLink: "https://www.meetup.com/embrace-ai/events/311629934/"
 ---
+
+[Announcement](https://www.meetup.com/embrace-ai/events/311629934/), [slides](https://drive.google.com/file/d/1QoUVlA915-7UJqu9DkxKQUc3deJhsB8t/view?usp=sharing).
+All resources can also be found in
+[my archive](https://docs.google.com/presentation/d/1RzJOwSwaLcNFkkPuvpR9e9pRzf0mnz23l8H29X9UDSA/edit?usp=drive_link).
+
+---
+
+At [Embrace.ai's November Meetup](https://www.meetup.com/embrace-ai/), part of the [Lisbon AI Week](https://lisbonaiweek.com/), I talked about reasoning models: what they are, what they aren't, how they work, and when to use them. Is GPT-5 AGI? Is it an AI agent? Or is it just a glorified chain-of-thought prompt under the hood? To answer these questions, I classified LLMs into a small "taxonomy" based on the post-training steps they go through, highlighting that reasoning models differ qualitatively from their predecessors, just as instruction-tuned models were not simply text-completion models with a better prompt. I also covered the effect of increasing a model's reasoning effort, clarifying when it's useful and when it can lead to overthinking.
Binary file changed (374 KB).