Optimization Algorithms for Distributed Machine Learning
by Gauri Joshi

Ship to Me
In Stock.
FREE Shipping for Club Members

Overview

This book discusses state-of-the-art stochastic optimization algorithms for distributed machine learning and analyzes their convergence speed. It first introduces stochastic gradient descent (SGD) and its distributed version, synchronous SGD, in which the task of computing gradients is divided across several worker nodes. The author then discusses several algorithms that improve the scalability and communication efficiency of synchronous SGD, such as asynchronous SGD, local-update SGD, quantized and sparsified SGD, and decentralized SGD. For each of these algorithms, the book analyzes its error-versus-iterations convergence and the runtime spent per iteration. The author shows that each of these strategies for reducing communication or synchronization delays entails a fundamental trade-off between error and runtime.
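
The synchronous SGD setup the overview describes can be made concrete with a short sketch. The following is a minimal NumPy illustration, not taken from the book: K simulated workers each compute a stochastic gradient on their own data shard, and a server averages the K gradients before taking a single step. The least-squares objective, worker count, batch size, and step size are all illustrative assumptions.

# Minimal sketch of synchronous SGD (illustrative, not from the book).
# K simulated workers each compute a stochastic gradient on their own
# data shard; the server averages the K gradients and takes one step.
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 1200, 10, 4                      # samples, features, workers
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

shards = np.array_split(np.arange(n), K)   # one data shard per worker

def worker_gradient(w, shard, batch=32):
    # Stochastic gradient of the least-squares loss on one worker's shard.
    idx = rng.choice(shard, size=batch, replace=False)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / batch

w = np.zeros(d)
lr = 0.05
for it in range(200):
    grads = [worker_gradient(w, s) for s in shards]  # parallel in practice
    w -= lr * np.mean(grads, axis=0)                 # synchronous averaging

print("final parameter error:", np.linalg.norm(w - w_true))

In a real deployment the per-worker gradients would be computed in parallel and aggregated over a network; that averaging step is where the synchronization and communication delays analyzed in the book arise.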

This item is Non-Returnable

Details

  • ISBN-13: 9783031190698
  • ISBN-10: 3031190696
  • Publisher: Springer
  • Publish Date: November 2023
  • Dimensions: 9.61 x 6.69 x 0.31 inches
  • Shipping Weight: 0.53 pounds
  • Page Count: 127
