Computational Learning Theory: An Introduction
Martin Anthony and Norman Biggs
Table of Contents
Notation
Ch. 1. Concepts, Hypotheses, Learning Algorithms
  1.1. Introduction
  1.2. Concepts
  1.3. Training and Learning
  1.4. Learning by Construction
  1.5. Learning by Enumeration
  Further Remarks
  Exercises
Ch. 2. Boolean Formulae and Representations
  2.1. Monomial Concepts
  2.2. A Learning Algorithm for Monomials
  2.3. The Standard Notation for Boolean Functions
  2.4. Learning Disjunctions of Small Monomials
  2.5. Representations of Hypothesis Spaces
  Further Remarks
  Exercises
Ch. 3. Probabilistic Learning
  3.1. An Algorithm for Learning Rays
  3.2. Probably Approximately Correct Learning
  3.3. Illustration--Learning Rays is Pac
  3.4. Exact Learning
  Further Remarks
  Exercises
Ch. 4. Consistent Algorithms and Learnability
  4.1. Potential Learnability
  4.2. The Finite Case
  4.3. Decision Lists
  4.4. A Consistent Algorithm for Decision Lists
  Further Remarks
  Exercises
Ch. 5. Efficient Learning--I
  5.1. Outline of Complexity Theory
  5.2. Running Time of Learning Algorithms
  5.3. An Approach to the Efficiency of Pac Learning
  5.4. The Consistency Problem
  5.5. A Hardness Result
  Further Remarks
  Exercises
Ch. 6. Efficient Learning--II
  6.1. Efficiency in Terms of Confidence and Accuracy
  6.2. Pac Learning and the Consistency Problem
  6.3. The Size of a Representation
  6.4. Finding the Smallest Consistent Hypothesis
  6.5. Occam Algorithms
  6.6. Examples of Occam Algorithms
  6.7. Epac Learning
  Further Remarks
  Exercises
Ch. 7. The VC Dimension
  7.1. Motivation
  7.2. The Growth Function
  7.3. The VC Dimension
  7.4. The VC Dimension of the Real Perceptron
  7.5. Sauer's Lemma
  Further Remarks
  Exercises
Ch. 8. Learning and the VC Dimension
  8.1. Introduction
  8.2. VC Dimension and Potential Learnability
  8.3. Proof of the Fundamental Theorem
  8.4. Sample Complexity of Consistent Algorithms
  8.5. Lower Bounds on Sample Complexity
  8.6. Comparison of Sample Complexity Bounds
  Further Remarks
  Exercises
Ch. 9. VC Dimension and Efficient Learning
  9.1. Graded Real Hypothesis Spaces
  9.2. Efficient Learning of Graded Spaces
  9.3. VC Dimension and Boolean Spaces
  9.4. Optimal Sample Complexity for Boolean Spaces
  9.5. Efficiency With Respect to Representations
  9.6. Dimension-based Occam Algorithms
  9.7. Epac Learning Again
  Further Remarks
  Exercises
Ch. 10. Linear Threshold Networks
  10.1. The Boolean Perceptron
  10.2. An Incremental Algorithm
  10.3. A Finiteness Result
  10.4. Finding a Consistent Hypothesis
  10.5. Feedforward Neural Networks
  10.6. VC Dimension of Feedforward Networks
  10.7. Hardness Results for Neural Networks
  Further Remarks
  Exercises
References
Index