
Distributed Machine Learning with Python: Accelerating model training and serving with distributed systems


Book Information

Title: Distributed Machine Learning with Python: Accelerating model training and serving with distributed systems
Author: G. Wang
Year of Publication: 2022
Publisher: Packt Publishing
Language: English
ISBN: 9781801815697
ARI Id: 1673539480965


Find on: WorldCat, OpenLibrary, Internet Archive



Chapters/Headings
Section 1 – Data Parallelism
Chapter 1: Splitting Input Data
Single-node training is too slow
The mismatch between data loading bandwidth and model training bandwidth
Single-node training time on popular datasets
Accelerating the training process with data parallelism
Data parallelism – the high-level bits
Stochastic gradient descent
Model synchronization
Hyperparameter tuning
Global batch size
Learning rate adjustment
Model synchronization schemes
Summary
Chapter 2: Parameter Server and All-Reduce
Technical requirements
Parameter server architecture
Communication bottleneck in the parameter server architecture
Sharding the model among parameter servers
Implementing the parameter server