
Commit b0be6db

Update 20241104-llm-engineer-s-handbook.md
1 parent 6ea2ce1 commit b0be6db

File tree

1 file changed: +3 −2 lines

_books/20241104-llm-engineer-s-handbook.md

Lines changed: 3 additions & 2 deletions
@@ -1,8 +1,9 @@
 ---
 authors:
 - pauliusztin
+- maximelabonne
 cover: images/books/20241104-llm-engineer-s-handbook/cover.jpg
-description: Book of the Week. LLM Engineer's Handbook by Paul Iusztin
+description: Book of the Week. LLM Engineer's Handbook by Paul Iusztin and Maxime Labonne
 end: 2024-11-08 23:59:59
 image: images/books/20241104-llm-engineer-s-handbook/preview.jpg
 links:
@@ -16,4 +17,4 @@ start: 2024-11-04 00:00:00
 title: LLM Engineer's Handbook
 ---
 
-Artificial intelligence has undergone rapid advancements, and Large Language Models (LLMs) are at the forefront of this revolution. This LLM book offers insights into designing, training, and deploying LLMs in real-world scenarios by leveraging MLOps best practices. The guide walks you through building an LLM-powered twin that’s cost-effective, scalable, and modular. It moves beyond isolated Jupyter notebooks, focusing on how to build production-grade end-to-end LLM systems. Throughout this book, you will learn data engineering, supervised fine-tuning, and deployment. The hands-on approach to building the LLM Twin use case will help you implement MLOps components in your own projects. You will also explore cutting-edge advancements in the field, including inference optimization, preference alignment, and real-time data processing, making this a vital resource for those looking to apply LLMs in their projects. By the end of this book, you will be proficient in deploying LLMs that solve practical problems while maintaining low-latency and high-availability inference capabilities. Whether you are new to artificial intelligence or an experienced practitioner, this book delivers guidance and practical techniques that will deepen your understanding of LLMs and sharpen your ability to implement them effectively.
+Artificial intelligence has undergone rapid advancements, and Large Language Models (LLMs) are at the forefront of this revolution. This LLM book offers insights into designing, training, and deploying LLMs in real-world scenarios by leveraging MLOps best practices. The guide walks you through building an LLM-powered twin that’s cost-effective, scalable, and modular. It moves beyond isolated Jupyter notebooks, focusing on how to build production-grade end-to-end LLM systems. Throughout this book, you will learn data engineering, supervised fine-tuning, and deployment. The hands-on approach to building the LLM Twin use case will help you implement MLOps components in your own projects. You will also explore cutting-edge advancements in the field, including inference optimization, preference alignment, and real-time data processing, making this a vital resource for those looking to apply LLMs in their projects. By the end of this book, you will be proficient in deploying LLMs that solve practical problems while maintaining low-latency and high-availability inference capabilities. Whether you are new to artificial intelligence or an experienced practitioner, this book delivers guidance and practical techniques that will deepen your understanding of LLMs and sharpen your ability to implement them effectively.
