Large Language Model Recipes | Bharath Kumar Bolla, Kalpa Subbaiah, Sashi Kiran Kaata

Large Language Model Recipes: A Hands-On Guide to Fine-Tuning, Optimization, Deployment, and Real-World Applications

PRE-ORDER NOW:
Preorder. This item will be available on June 13, 2026.
FREE Shipping for Club Members

Overview

Large Language Model Recipes is a comprehensive, practical guide designed to help developers, data scientists, and AI engineers navigate the rapidly evolving landscape of Large Language Models (LLMs). Moving beyond theory, this book provides a hands-on, recipe-based approach to mastering the entire LLM lifecycle, from selecting the right open-source model to fine-tuning it on custom data and deploying it for production at scale.

Starting with the fundamentals of setting up a robust development environment, the book guides you through the critical decisions of model selection (Llama, Mistral, Falcon) and data preparation. It offers deep dives into advanced training techniques, including full fine-tuning, instruction tuning, and parameter-efficient methods like LoRA and QLoRA that make training accessible on consumer hardware.

The book doesn't stop at training. It tackles the crucial "last mile" of AI development: deployment and optimization. You will learn how to shrink models with quantization, serve them with high-throughput engines like vLLM and TGI, and evaluate their performance using industry-standard benchmarks. Finally, it explores cutting-edge frontiers, including Retrieval-Augmented Generation (RAG) for grounding models in real-time data, building multimodal vision-language applications, and designing autonomous AI agents.

Whether you are building a specialized chatbot, a code assistant, or a complex reasoning agent, this book provides the tested recipes and code you need to develop efficient, scalable, and robust AI solutions today.

What you will learn:

  • Design production-ready LLM systems using the Feature/Training/Inference (FTI) framework
  • Apply advanced fine-tuning methods, including LoRA and QLoRA, for efficient model adaptation
  • Build and optimize RAG pipelines with effective retrieval strategies and vector databases
  • Deploy optimized LLMs using quantization techniques and scalable inference frameworks
  • Develop multimodal and agentic AI applications with vision-language models and autonomous agents
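As a taste of the parameter-efficient methods the book covers, here is a minimal NumPy sketch of the core LoRA idea (illustrative only; all names and dimensions here are assumptions, not code from the book): instead of updating a full weight matrix W, you train two small low-rank factors A and B and add their product to the frozen W.

```python
import numpy as np

# Illustrative LoRA sketch: the frozen weight W stays fixed while only
# the low-rank factors A (r x d_in) and B (d_out x r) are trained.
d_in, d_out, r, alpha = 4096, 4096, 8, 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # zero-initialized, so the model starts unchanged

def lora_forward(x):
    """Forward pass with the low-rank update: W x + (alpha/r) * B A x."""
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size
lora_params = A.size + B.size
print(f"full fine-tuning params: {full_params:,}")
print(f"LoRA trainable params:   {lora_params:,} ({100 * lora_params / full_params:.2f}%)")
```

With these (hypothetical) dimensions, LoRA trains roughly 0.4% of the parameters that full fine-tuning would touch, which is what makes training feasible on consumer hardware.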
Who this book is for:

This book is ideal for software developers, machine learning engineers, data scientists, and technical researchers who want to move beyond using API endpoints and start
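The quantization recipes mentioned above boil down to arithmetic like the following NumPy sketch of symmetric int8 quantization (an illustrative example, not code from the book): map the largest absolute weight to 127, store 8-bit integers, and dequantize on the fly.

```python
import numpy as np

# Illustrative sketch: symmetric int8 quantization of a weight tensor,
# the basic arithmetic behind shrinking models before deployment.
rng = np.random.default_rng(42)
w = rng.standard_normal(1024).astype(np.float32)

scale = np.abs(w).max() / 127.0                           # map max |weight| to 127
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_hat = q.astype(np.float32) * scale                      # dequantized approximation

print("bytes fp32:", w.nbytes, "-> bytes int8:", q.nbytes)
print("max abs error:", float(np.abs(w - w_hat).max()))
```

This cuts storage 4x (fp32 to int8) at the cost of a rounding error bounded by half the scale step; production schemes the book discusses add refinements such as per-channel scales.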

Details

  • ISBN-13: 9798868826061
  • Publisher: Apress
  • Publish Date: June 2026
  • Page Count: 219
