
Introduction to Deep Learning : Neural Networks, Large Language Models and Agentic AI

PRE-ORDER NOW: Ship to Me

Preorder. This item will be available on August 5, 2026.
FREE Shipping for Club Members.

Overview

This textbook introduces deep learning in a style that is accessible, rigorous, and grounded in working code. It walks through the most widely used algorithms and architectures step by step, with mathematical derivations kept intuitive and Python examples woven through every chapter.

The second edition keeps everything from the first, including convolutional networks, LSTMs, Word2vec, RBMs, DBNs, neural Turing machines, memory networks, and autoencoders. It then covers the systems that have reshaped the field since: generative adversarial networks, the transformer architecture and its attention mechanism, the full training pipeline behind modern large language models (LLMs), prompt engineering with real-life guardrail scenarios, parameter-efficient fine-tuning with LoRA, retrieval-augmented generation with vector databases, knowledge graphs, and agentic AI systems illustrated through an industrial case study.

Topics and features:

  • Introduces the fundamentals of machine learning and the mathematical and computational prerequisites for deep learning
  • Discusses feed-forward neural networks, convolutional networks, and recurrent architectures, and explores modifications applicable to any neural network
  • Covers the transformer architecture from first principles, including self-attention, multi-head attention, positional encoding, and a minimal annotated implementation
  • Reviews open research problems, from hallucinations and quadratic scaling to alignment faking and the interpretability of model internals
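To give a flavor of the "first principles" treatment the topics above describe, here is a minimal sketch of scaled dot-product self-attention in NumPy. This is an illustrative example, not the book's own implementation; the weight matrices and dimensions are arbitrary assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence X
    of shape (seq_len, d_model); Wq, Wk, Wv are learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (seq_len, seq_len) similarity logits
    weights = softmax(scores, axis=-1)   # each row is a distribution over tokens
    return weights @ V                   # weighted mixture of value vectors

# Hypothetical toy input: 4 tokens, model width 8
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one contextualized vector per token
```

Multi-head attention, as covered in the book, runs several such attention maps in parallel on lower-dimensional projections and concatenates the results.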
This proven, fully revised textbook is written for graduate and advanced undergraduate students of computer science, cognitive science, and mathematics. It should prove equally valuable for readers in linguistics, logic, philosophy, and psychology.

Sandro Skansi is an Associate Professor at the University of Zagreb, Croatia, where he teaches logic, political philosophy, artificial intelligence, and cognitive science. Kristina Sekrst is a research associate at the University of Zagreb and a principal engineer at Preamble AI.

This item is Non-Returnable

Details

  • ISBN-13: 9783032254597
  • ISBN-10: 3032254590
  • Publisher: Springer
  • Publish Date: August 2026
  • Page Count: 105
