Foundations of Deep Learning

$446.00
Available for pre-order now

Description

Deep learning has significantly reshaped a variety of technologies, such as image processing, natural language processing, and audio processing. The excellent generalizability of deep learning hangs like a “cloud” over conventional complexity-based learning theory: the over-parameterization of deep learning renders almost all existing tools vacuous. This gap considerably undermines confidence in deploying deep learning in security-critical areas, including autonomous vehicles and medical diagnosis, where small algorithmic mistakes can lead to fatal disasters. This book seeks to explain this excellent generalizability, covering generalization analysis via size-independent complexity measures, the role of optimization in understanding generalizability, and the relationship between generalizability and ethical and security issues.

Efforts to understand this generalizability follow two major paths: (1) developing size-independent complexity measures, which evaluate the “effective” complexity of the hypotheses that can actually be learned, rather than of the whole hypothesis space; and (2) modelling the hypothesis learned by stochastic gradient methods, the dominant optimizers in deep learning, via stochastic differential equations and the geometry of the associated loss functions. Related works discover that over-parameterization surprisingly brings many good properties to the loss functions.

Rising concerns about deep learning centre on ethical and security issues, including privacy preservation and adversarial robustness. Related works also reveal an interplay between these issues and generalizability: good generalizability usually implies good privacy preservation, while more robust algorithms may generalize worse.

We expect readers to gain a big picture of the current knowledge in deep learning theory, understand how this theory can guide the design of new algorithms, and identify future research directions. Readers need knowledge of calculus, linear algebra, probability, statistics, and statistical learning theory.
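As an illustrative sketch of the second path (not quoted from the book, and using notation of our own: loss L(θ), learning rate η, minibatch gradient noise covariance Σ(θ), and a standard Wiener process W_t), this line of work commonly approximates stochastic gradient descent by the stochastic differential equation

\[
  \mathrm{d}\theta_t \;=\; -\nabla L(\theta_t)\,\mathrm{d}t \;+\; \sqrt{\eta}\,\Sigma(\theta_t)^{1/2}\,\mathrm{d}W_t ,
\]

whose stationary behaviour, together with the geometry (e.g. flatness) of L around the minima it selects, is then analysed to explain generalization.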
Release date (NZ): October 12th, 2024
Audience: Professional & Vocational
Edition: 1st ed. 2024
Illustrations: XX, 280 p.
Pages: 280
ISBN-13: 9789811682322
Product ID: 35835572

