From f9a6451515ab579ff837ade5a511da65bffff699 Mon Sep 17 00:00:00 2001
From: Max Jones <14077947+maxrjones@users.noreply.github.com>
Date: Mon, 6 Oct 2025 15:11:36 -0400
Subject: [PATCH] Fix cut-off sentence in GPU blog post

---
 src/posts/gpu-pipeline/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/posts/gpu-pipeline/index.md b/src/posts/gpu-pipeline/index.md
index d160cac67..2df9b0ee7 100644
--- a/src/posts/gpu-pipeline/index.md
+++ b/src/posts/gpu-pipeline/index.md
@@ -101,7 +101,7 @@ PyTorch’s `DataLoader` includes options like `num_workers`, `pin_memory`, and
 
 ## Hackathon: Strategies Explored!
 
-During the hackathon, we tested the following strategies to improve the data loading performance. In the end, we were able to achieve
+During the hackathon, we tested the following strategies to improve the data loading performance. In the end, we were able to achieve at least ~17x improvement on 1 GPU in training throughput by optimizing data loading and preprocessing steps.
 
 ### Step 1: Optimized Chunking & Compression