PacktPublishing/DeepSeek-in-Practice

DeepSeek in Practice, First Edition

This is the code repository for DeepSeek in Practice, First Edition, published by Packt.

From basics to fine-tuning, distillation, agent design, and prompt engineering of open-source LLMs

Andy Peng, Alex Strick van Linschoten, Duarte O. Carmo


About the book

Learn how to build, fine-tune, and deploy AI systems using DeepSeek, one of the most influential open-source large language models available today. This book guides you through real-world DeepSeek applications, from understanding its core architecture and training foundations to developing reasoning agents and deploying production-ready systems. Starting with a concise synthesis of DeepSeek's research, breakthroughs, and open-source philosophy, you'll progress to hands-on projects including prompt engineering, workflow design, and rationale distillation.

Through detailed case studies, ranging from document understanding to legal clause analysis, you'll see how to use DeepSeek in high-value GenAI scenarios. You'll also learn to build sophisticated agent workflows and prepare data for fine-tuning. By the end of the book, you'll have the skills to integrate DeepSeek into local deployments, cloud CI/CD pipelines, and custom LLMOps environments. Written by experts with deep knowledge of open-source LLMs and deployment ecosystems, this book is your comprehensive guide to DeepSeek's capabilities and implementation.

Key Learnings

  • Discover DeepSeek's unique traits in the LLM landscape
  • Compare DeepSeek's multimodal features with leading models
  • Consume DeepSeek via the official API, Ollama, and llama.cpp
  • Use DeepSeek for coding, document understanding, and creative ideation
  • Integrate DeepSeek with third-party platforms like OpenRouter and Cloudflare
  • Distill and deploy DeepSeek models into production environments
  • Identify when and where to use DeepSeek
  • Understand DeepSeek's open philosophy
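
As a taste of the API bullet above, here is a minimal sketch of calling DeepSeek's chat-completion endpoint, which follows the OpenAI request format. The endpoint URL, the `deepseek-chat` model name, and the `DEEPSEEK_API_KEY` environment variable reflect DeepSeek's public API documentation at the time of writing; `build_request` and `ask_deepseek` are illustrative helper names, not part of any SDK. Chapter 3 covers prompting in depth.

```python
import json
import os
import urllib.request

# OpenAI-compatible chat-completion endpoint (see DeepSeek's API docs).
API_URL = "https://api.deepseek.com/chat/completions"

def build_request(prompt: str, model: str = "deepseek-chat") -> dict:
    """Assemble an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ask_deepseek(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply text."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        API_URL,
        data=data,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage (requires a valid API key in DEEPSEEK_API_KEY):
#   print(ask_deepseek("Explain mixture-of-experts in one sentence."))
```

Because the request format is OpenAI-compatible, the same payload also works with the official OpenAI Python SDK by pointing `base_url` at `https://api.deepseek.com`.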

Chapters

Chapter 1: What is DeepSeek?
Chapter 2: Deep Dive into DeepSeek
Chapter 3: Prompting DeepSeek
Chapter 4: Using DeepSeek: Case Studies
Chapter 5: Building with DeepSeek
  • 01-initial-prototype.ipynb (Colab | Kaggle | Gradient | Studio Lab)
  • 07-aws-deployment.ipynb (Colab | Kaggle | Gradient | Studio Lab)
Chapter 6: Agents with DeepSeek
  • 01-evaluator-optimizer.ipynb (Colab | Kaggle | Gradient | Studio Lab)
  • 02-orchestrator-worker.ipynb (Colab | Kaggle | Gradient | Studio Lab)
  • 03-tool-calling-agent.ipynb (Colab | Kaggle | Gradient | Studio Lab)
Chapter 7: DeepSeek-Driven Fine-Tuning of Gemma 3 for Legal Reasoning
Chapter 8: Deploying DeepSeek Models

Requirements for this book

To get the most out of this book, ensure you have the following:

Software Requirements

  • Hands-on experience with Python, working with APIs, and tools such as Ollama or llama.cpp
  • Familiarity with tools and platforms like uv and Docker
  • A solid understanding of machine learning concepts

Hardware Requirements

  • A computer capable of running local LLMs using tools like Ollama or llama.cpp

  • At least 8–16 GB of RAM, depending on the model sizes you plan to run

  • Adequate CPU/GPU resources for development, testing, and experimentation
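
To verify that a local setup like the one described above is working, here is a minimal sketch of querying a model through Ollama's local REST API. The default endpoint (`http://localhost:11434/api/generate`) and the non-streaming payload shape come from Ollama's API documentation; the `deepseek-r1:8b` model tag is one example of a DeepSeek model available through Ollama, and `generate_local` is an illustrative helper name.

```python
import json
import urllib.request

# Default endpoint for a locally running Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(prompt: str, model: str = "deepseek-r1:8b") -> dict:
    """Non-streaming payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate_local(prompt: str, model: str = "deepseek-r1:8b") -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    data = json.dumps(build_generate_payload(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Usage (requires `ollama pull deepseek-r1:8b` and a running Ollama server):
#   print(generate_local("Summarize mixture-of-experts in one sentence."))
```

Smaller distilled model tags fit comfortably in the 8–16 GB RAM range mentioned above; larger variants need proportionally more memory.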

Get to Know the Authors

Andy Peng is a Senior Engineer at Amazon, leading both 0-to-1 innovation and 10x scaling across AWS Bedrock and SageMaker. He specializes in large language model inference optimization and evaluation for models like DeepSeek, Qwen, and Claude. His work spans Amazon S3, AWS Fargate, App Runner, Alexa Health & Wellness, and fintech. A NeurIPS 2025 Chair and program committee member for ICML, ICLR, KDD, and NeurIPS, he contributes to CNCF and the Linux Foundation, mentors at the University of Washington, and serves as Resident Expert at the AI2 Incubator.

Alex Strick van Linschoten is a Machine Learning Engineer at ZenML. He led the development of the LLMOps Database, a comprehensive collection of over 800 case studies examining LLMOps and GenAI implementations in production environments. His work focuses on bridging the gap between machine learning research and production deployment, particularly within the LLMOps space. He transitioned to software engineering after earning a PhD in History and spending 15 years living and working as a historian and researcher in Afghanistan. He has authored, edited, and translated several books based on his historical research and is currently based in Delft, the Netherlands.

Duarte O. Carmo is a technologist from Lisbon, Portugal, now based in Copenhagen, Denmark. For the past decade, he's worked at the intersection of machine learning, artificial intelligence, software, data, and people. He has helped solve problems for both global corporations and small startups across industries such as healthcare, finance, agriculture, and advertising. His approach to solving tough problems always starts with the same thing: people. For the past five years, he's been running his one-man consulting company, working with clients of all sizes and across industries. He's also a regular speaker in the Python and machine learning communities and an active writer.
