An Introduction to Machine Learning
Hardcover | English | 2017 | 2nd edition | ISBN 9783319639123

Summary
This textbook presents fundamental machine learning concepts in an easy-to-understand manner by providing practical advice, using straightforward examples, and offering engaging discussions of relevant applications. The main topics include Bayesian classifiers, nearest-neighbor classifiers, linear and polynomial classifiers, decision trees, neural networks, and support vector machines. Later chapters show how to combine these simple tools by way of "boosting," how to exploit them in more complicated domains, and how to deal with a variety of advanced practical issues. One chapter is dedicated to the popular genetic algorithm.
This revised edition contains three entirely new chapters on critical topics regarding the pragmatic application of machine learning in industry. The chapters examine multi-label domains, unsupervised learning and its use in deep learning, and logical approaches to induction. Numerous chapters have been expanded, and the presentation of the material has been enhanced. The book contains many new exercises, numerous solved examples, thought-provoking experiments, and computer assignments for independent work.
Table of Contents
1.2 Minor Digression: Hill-Climbing Search .......... 5
1.3 Hill Climbing in Machine Learning .......... 9
1.4 The Induced Classifier's Performance .......... 12
1.5 Some Difficulties with Available Data .......... 14
1.6 Summary and Historical Remarks .......... 18
1.7 Solidify Your Knowledge .......... 19

2 Probabilities: Bayesian Classifiers 22
2.1 The Single-Attribute Case .......... 22
2.2 Vectors of Discrete Attributes .......... 27
2.3 Probabilities of Rare Events: Exploiting the Expert's Intuition .......... 29
2.4 How to Handle Continuous Attributes .......... 35
2.5 Gaussian "Bell" Function: A Standard pdf .......... 38
2.6 Approximating PDFs with Sets of Gaussians .......... 40
2.7 Summary and Historical Remarks .......... 43
2.8 Solidify Your Knowledge .......... 46

3 Similarities: Nearest-Neighbor Classifiers 49
3.1 The k-Nearest-Neighbor Rule .......... 49
3.2 Measuring Similarity .......... 52
3.3 Irrelevant Attributes and Scaling Problems .......... 56
3.4 Performance Considerations .......... 60
3.5 Weighted Nearest Neighbors .......... 63
3.6 Removing Dangerous Examples .......... 65
3.7 Removing Redundant Examples .......... 68
3.8 Summary and Historical Remarks .......... 71
3.9 Solidify Your Knowledge .......... 72

4 Inter-Class Boundaries: Linear and Polynomial Classifiers 75
4.1 The Essence .......... 75
4.2 The Additive Rule: Perceptron Learning .......... 79
4.3 The Multiplicative Rule: WINNOW .......... 85
4.4 Domains with More than Two Classes .......... 88
4.5 Polynomial Classifiers .......... 91
4.6 Specific Aspects of Polynomial Classifiers .......... 93
4.7 Numerical Domains and Support Vector Machines .......... 97
4.8 Summary and Historical Remarks .......... 100
4.9 Solidify Your Knowledge .......... 101

5 Artificial Neural Networks 105
5.1 Multilayer Perceptrons as Classifiers .......... 105
5.2 Neural Network's Error .......... 110
5.3 Backpropagation of Error .......... 111
5.4 Special Aspects of Multilayer Perceptrons .......... 117
5.5 Architectural Issues .......... 121
5.6 Radial Basis Function Networks .......... 123
5.7 Summary and Historical Remarks .......... 126
5.8 Solidify Your Knowledge .......... 128

6 Decision Trees 130
6.1 Decision Trees as Classifiers .......... 130
6.2 Induction of Decision Trees .......... 134
6.3 How Much Information Does an Attribute Convey? .......... 137
6.4 Binary Split of a Numeric Attribute .......... 142
6.5 Pruning .......... 144
6.6 Converting the Decision Tree into Rules .......... 149
6.7 Summary and Historical Remarks .......... 151
6.8 Solidify Your Knowledge .......... 153

7 Computational Learning Theory 157
7.1 PAC Learning .......... 157
7.2 Examples of PAC Learnability .......... 161
7.3 Some Practical and Theoretical Consequences .......... 164
7.4 VC-Dimension and Learnability .......... 166
7.5 Summary and Historical Remarks .......... 169
7.6 Exercises and Thought Experiments .......... 170

8 A Few Instructive Applications 173
8.1 Character Recognition .......... 173
8.2 Oil-Spill Recognition .......... 177
8.3 Sleep Classification .......... 181
8.4 Brain-Computer Interface .......... 185
8.5 Medical Diagnosis .......... 189
8.6 Text Classification .......... 192
8.7 Summary and Historical Remarks .......... 194
8.8 Exercises and Thought Experiments .......... 195

9 Induction of Voting Assemblies 198
9.1 Bagging .......... 198
9.2 Schapire's Boosting .......... 201
9.3 Adaboost: Practical Version of Boosting .......... 205
9.4 Variations on the Boosting Theme .......... 210
9.5 Cost-Saving Benefits of the Approach .......... 213
9.6 Summary and Historical Remarks .......... 215
9.7 Solidify Your Knowledge .......... 216

10 Some Practical Aspects to Know About 219
10.1 A Learner's Bias .......... 219
10.2 Imbalanced Training Sets .......... 223
10.3 Context-Dependent Domains .......... 228
10.4 Unknown Attribute Values .......... 231
10.5 Attribute Selection .......... 234
10.6 Miscellaneous .......... 237
10.7 Summary and Historical Remarks .......... 238
10.8 Solidify Your Knowledge .......... 240

11 Performance Evaluation 243
11.1 Basic Performance Criteria .......... 243
11.2 Precision and Recall .......... 247
11.3 Other Ways to Measure Performance .......... 252
11.4 Learning Curves and Computational Costs .......... 255
11.5 Methodologies of Experimental Evaluation .......... 258
11.6 Summary and Historical Remarks .......... 261
11.7 Solidify Your Knowledge .......... 263

12 Statistical Significance 266
12.1 Sampling a Population .......... 266
12.2 Benefiting from the Normal Distribution .......... 271
12.3 Confidence Intervals .......... 275
12.4 Statistical Evaluation of a Classifier .......... 277
12.5 Another Kind of Statistical Evaluation .......... 280
12.6 Comparing Machine-Learning Techniques .......... 281
12.7 Summary and Historical Remarks .......... 284
12.8 Solidify Your Knowledge .......... 285

13 Induction in Multi-Label Domains 287
13.1 Classical Machine Learning in Multi-Label Domains .......... 287
13.2 Treating Each Class Separately: Binary Relevance .......... 290
13.3 Classifier Chains .......... 293
13.4 Another Possibility: Stacking .......... 296
13.5 A Note on Hierarchically Ordered Classes .......... 298
13.6 Aggregating the Classes .......... 301
13.7 Criteria for Performance Evaluation .......... 304
13.8 Summary and Historical Remarks .......... 307
13.9 Solidify Your Knowledge .......... 308

14 Unsupervised Learning 311
14.1 Cluster Analysis .......... 311
14.2 A Simple Algorithm: k-Means .......... 315
14.3 More Advanced Versions of k-Means .......... 321
14.4 Hierarchical Aggregation .......... 323
14.5 Self-Organizing Feature Maps: Introduction .......... 326
14.6 Some Important Details .......... 329
14.7 Why Feature Maps? .......... 332
14.8 Summary and Historical Remarks .......... 334
14.9 Solidify Your Knowledge .......... 335

15 Classifiers in the Form of Rulesets 338
15.1 A Class Described By Rules .......... 338
15.2 Inducing Rulesets by Sequential Covering .......... 341
15.3 Predicates and Recursion .......... 344
15.4 More Advanced Search Operators .......... 347
15.5 Summary and Historical Remarks .......... 349
15.6 Solidify Your Knowledge .......... 350

16 The Genetic Algorithm 352
16.1 The Baseline Genetic Algorithm .......... 352
16.2 Implementing the Individual Modules .......... 355
16.3 Why it Works .......... 359
16.4 The Danger of Premature Degeneration .......... 362
16.5 Other Genetic Operators .......... 364
16.6 Some Advanced Versions .......... 367
16.7 Selections in k-NN Classifiers .......... 370
16.8 Summary and Historical Remarks .......... 373
16.9 Solidify Your Knowledge .......... 374

17 Reinforcement Learning 376
17.1 How to Choose the Most Rewarding Action .......... 376
17.2 States and Actions in a Game .......... 379
17.3 The SARSA Approach .......... 383
17.4 Summary and Historical Remarks .......... 384
17.5 Solidify Your Knowledge .......... 384

Index 395