Synopsis of `Neural Network Learning: Theoretical Foundations'

This book describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Research on pattern classification with binary-output networks is surveyed, including a discussion of the relevance of the Vapnik-Chervonenkis dimension and estimates of this dimension for several neural network models. A model of classification by real-output networks is developed, and the usefulness of classification with a "large margin" is demonstrated. The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification and in real prediction. They also discuss the computational complexity of neural network learning, describing a variety of hardness results and outlining two efficient constructive learning algorithms. The book is self-contained and is intended to be accessible to researchers and graduate students in computer science, engineering, and mathematics.

Some reviews:

"Anthony and Bartlett have given us the most thorough treatment of the statistical analysis of neural network learning available to date. They have presented a complete picture of how the proofs are derived... The book is therefore an invaluable reference for the learning theorist, at the same time providing the first full treatment of the data-dependent analysis that has brought learning theory significantly closer to the practitioner." AI Magazine.

"The book is a useful and readable monograph. For beginners it is a nice introduction to the subject, for experts a valuable reference." Zentralblatt MATH.

Further Information on `Neural Network Learning: Theoretical Foundations'