
Commit 7892ec9

Qwen3.5 from scratch (#969)
* Qwen3.5 from scratch
* update
* update
1 parent 4612d20 commit 7892ec9

File tree

9 files changed: +4317 additions, -57 deletions


README.md

Lines changed: 1 addition & 0 deletions
@@ -188,6 +188,7 @@ Several folders contain optional materials as a bonus for interested readers:
   - [Gemma 3 From Scratch](ch05/12_gemma3/)
   - [Olmo 3 From Scratch](ch05/13_olmo3/)
   - [Tiny Aya From Scratch](ch05/15_tiny-aya/)
+  - [Qwen3.5 From Scratch](ch05/16_qwen3.5/)
   - [Chapter 5 with other LLMs as Drop-In Replacement (e.g., Llama 3, Qwen 3)](ch05/14_ch05_with_other_llms/)
 - **Chapter 6: Finetuning for classification**
   - [Additional Experiments Finetuning Different Layers and Using Larger Models](ch06/02_bonus_additional-experiments)

ch05/11_qwen3/standalone-qwen3-plus-kvcache.ipynb

Lines changed: 32 additions & 20 deletions
@@ -82,9 +82,9 @@
     "name": "stdout",
     "output_type": "stream",
     "text": [
-     "huggingface_hub version: 0.35.3\n",
-     "tokenizers version: 0.22.1\n",
-     "torch version: 2.8.0\n"
+     "huggingface_hub version: 1.5.0\n",
+     "tokenizers version: 0.22.2\n",
+     "torch version: 2.8.0+cu128\n"
     ]
    }
   ],
@@ -659,16 +659,16 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 14,
+  "execution_count": null,
   "id": "adf0a6b7-b688-42c9-966e-c223d34db99f",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
-      "tensor([[[-0.2256, -0.0164, -0.7070,  ...,  0.4414,  0.1245,  1.0703],\n",
-      "         [-0.6602,  0.5352, -0.0718,  ..., -0.0737,  0.5391,  0.3086],\n",
-      "         [-0.4785, -0.1562,  0.1045,  ..., -0.2324,  0.2354,  0.6328]]],\n",
+      "tensor([[[-0.2334, -0.0134, -0.7031,  ...,  0.4316,  0.1177,  1.0703],\n",
+      "         [-0.6641,  0.5352, -0.0752,  ..., -0.0698,  0.5430,  0.3203],\n",
+      "         [-0.4785, -0.1748,  0.1074,  ..., -0.2354,  0.2354,  0.6289]]],\n",
       "       dtype=torch.bfloat16, grad_fn=<UnsafeViewBackward0>)"
      ]
     },
@@ -922,16 +922,7 @@
   "id": "699cb1b8-a67d-49fb-80a6-0dad9d81f392",
   "outputId": "55b2f28c-142f-4698-9d23-d27456d3ed6d"
  },
- "outputs": [
-  {
-   "name": "stderr",
-   "output_type": "stream",
-   "text": [
-    "/Users/sebastian/Developer/LLMs-from-scratch/.venv/lib/python3.13/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
-    "  from .autonotebook import tqdm as notebook_tqdm\n"
-   ]
-  }
- ],
+ "outputs": [],
  "source": [
   "import json\n",
   "import os\n",
@@ -1182,29 +1173,50 @@
    "<think>\n",
    "Okay, the user wants a short introduction to large language models. Let me start by recalling what I know. Large language models are AI systems that can understand and generate human language. They're trained on massive datasets, so they can learn complex patterns and nuances.\n",
    "\n",
-   "I should mention their ability to understand and generate text, not just specific tasks. Maybe include examples like chatbots or content generation. Also, emphasize their adaptability and efficiency. Oh, and maybe touch on their applications in various fields. Let me check if I'm covering all key points without being too technical. Keep it concise, around a sentence or two. Make sure it's clear and easy to understand.\n",
+   "I should mention their ability to understand and generate text, not just specific tasks. Maybe include examples like chatbots or language assistants. Also, emphasize their adaptability and versatility. Oh, and maybe touch on their applications in various fields. Let me check if I'm covering all key points without being too technical. Keep it concise, around a sentence or two. Make sure it's clear and easy to understand.\n",
    "</think>\n",
    "\n",
-   "Large language models (LLMs) are AI systems designed to understand and generate human language, enabling tasks like text generation, translation, and content creation. They are trained on vast datasets, allowing them to learn complex patterns and nuances, making them versatile for a wide range of applications."
+   "Large language models (LLMs) are AI systems designed to understand and generate human language, enabling tasks like text generation, translation, and answering questions. They are trained on vast datasets, allowing them to learn complex patterns and nuances, making them versatile for applications in various domains.\n",
+   "\n",
+   "Generation speed: 48.46 tokens/sec\n",
+   "GPU memory used: 1.50 GB\n"
   ]
  }
 ],
 "source": [
+ "import time\n",
+ "\n",
  "input_token_ids_tensor = torch.tensor(input_token_ids, device=device).unsqueeze(0)\n",
  "\n",
+ "if torch.cuda.is_available():\n",
+ "    torch.cuda.reset_peak_memory_stats()\n",
+ "\n",
+ "start_time = time.perf_counter()\n",
+ "generated_tokens = 0\n",
  "\n",
  "for token in generate_text_basic_stream(\n",
  "    model=model,\n",
  "    token_ids=input_token_ids_tensor,\n",
  "    max_new_tokens=500,\n",
  "    eos_token_id=tokenizer.eos_token_id\n",
  "):\n",
+ "    generated_tokens += 1\n",
  "    token_id = token.squeeze(0).tolist()\n",
  "    print(\n",
  "        tokenizer.decode(token_id),\n",
  "        end=\"\",\n",
  "        flush=True\n",
- "    )"
+ "    )\n",
+ "\n",
+ "elapsed = time.perf_counter() - start_time\n",
+ "tokens_per_sec = generated_tokens / elapsed if elapsed > 0 else 0.0\n",
+ "print(f\"\\n\\nGeneration speed: {tokens_per_sec:.2f} tokens/sec\")\n",
+ "\n",
+ "if torch.cuda.is_available():\n",
+ "    def calc_gpu_gb(x):\n",
+ "        return f\"{x / 1024 / 1024 / 1024:.2f} GB\"\n",
+ "\n",
+ "    print(f\"GPU memory used: {calc_gpu_gb(torch.cuda.max_memory_allocated())}\")\n"
 ]
},
{
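The notebook change above wraps a streaming token generator with a throughput timer and a peak-GPU-memory readout. As a minimal, model-free sketch of the same pattern (the `fake_token_stream` generator here is a hypothetical stand-in for the notebook's `generate_text_basic_stream`, and no GPU is assumed):

```python
import time

def fake_token_stream(n_tokens):
    # Hypothetical stand-in for generate_text_basic_stream:
    # yields one token at a time, like a streaming decoder.
    for i in range(n_tokens):
        yield i

# Count tokens as they stream out and time the whole loop,
# mirroring the measurement added in the diff above.
generated_tokens = 0
start_time = time.perf_counter()
for token in fake_token_stream(500):
    generated_tokens += 1
elapsed = time.perf_counter() - start_time
tokens_per_sec = generated_tokens / elapsed if elapsed > 0 else 0.0
print(f"Generation speed: {tokens_per_sec:.2f} tokens/sec")

# Byte-to-GB formatting used for the GPU-memory readout
# (applied to torch.cuda.max_memory_allocated() in the notebook).
def calc_gpu_gb(x):
    return f"{x / 1024 / 1024 / 1024:.2f} GB"

print(calc_gpu_gb(1_610_612_736))  # 1.5 * 1024**3 bytes -> "1.50 GB"
```

Calling `torch.cuda.reset_peak_memory_stats()` before the loop, as the diff does, ensures `max_memory_allocated()` reports the peak for this generation run rather than for everything since process start.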
