Generalization Performance of Bayes Optimal Classification Algorithm for Learning a Perceptron
Abstract: The generalization error of the Bayes optimal classification algorithm when learning a perceptron from noise-free random training examples is calculated exactly using methods of statistical mechanics. It is shown that if an assumption of replica symmetry is made then, in the thermodynamic limit, the error of the Bayes optimal algorithm is less than the error of a canonical stochastic learning algorithm, by a factor approaching √2 as the ratio of the number of training examples to perceptron weights grows. In addition, it is shown that approximations to the generalization error of the Bayes optimal algorithm can be achieved by learning algorithms that use a two-layer neural net to learn a perceptron.
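The sketch below is not from the paper; it is a rough numerical illustration, under assumed settings, of the comparison described in the abstract. It takes the "canonical stochastic learning algorithm" to be Gibbs learning (predicting with a single perceptron drawn from the version space of weight vectors consistent with the training data) and contrasts it with the Bayes optimal rule, approximated here by a majority vote over many version-space samples. The dimension N, the load alpha, the Metropolis-style sampler, and all step/sample sizes are arbitrary choices for this sketch, not parameters from the paper.

```python
# Rough illustration (not the paper's calculation): Gibbs learning vs. an
# approximate Bayes optimal (majority vote) classifier for a teacher perceptron
# learned from noise-free random examples. All sizes and the sampler are
# assumptions made for this sketch.
import numpy as np

rng = np.random.default_rng(0)

N = 30            # input dimension (number of perceptron weights)
alpha = 3.0       # ratio of training examples to weights
P = int(alpha * N)

# Teacher perceptron and a noise-free random training set.
w_star = rng.standard_normal(N)
w_star /= np.linalg.norm(w_star)
X = rng.standard_normal((P, N))
y = np.sign(X @ w_star)

def consistent(w):
    """True if perceptron w classifies every training example correctly."""
    return np.all(np.sign(X @ w) == y)

def sample_version_space(n_samples, burn_in=2000, thin=50, step=0.05):
    """Approximately sample consistent unit weight vectors (the version space)
    with a simple random walk: perturb, renormalize, accept only if still
    consistent. Started at the teacher, which is always in the version space."""
    w = w_star.copy()
    samples = []
    for t in range(burn_in + n_samples * thin):
        proposal = w + step * rng.standard_normal(N)
        proposal /= np.linalg.norm(proposal)
        if consistent(proposal):
            w = proposal
        if t >= burn_in and (t - burn_in) % thin == 0:
            samples.append(w.copy())
    return np.array(samples)

samples = sample_version_space(n_samples=201)   # odd count avoids vote ties

# For spherically symmetric inputs, a single perceptron w disagrees with the
# teacher on a random input with probability arccos(w . w_star) / pi.
overlaps = np.clip(samples @ w_star, -1.0, 1.0)
gibbs_error = np.mean(np.arccos(overlaps) / np.pi)

# Approximate Bayes optimal prediction: majority vote of the sampled
# perceptrons, evaluated on fresh random test inputs.
X_test = rng.standard_normal((20000, N))
y_test = np.sign(X_test @ w_star)
votes = np.sign(X_test @ samples.T)             # one column per sampled perceptron
bayes_pred = np.sign(votes.sum(axis=1))
bayes_error = np.mean(bayes_pred != y_test)

print(f"Gibbs (single consistent perceptron) error ~ {gibbs_error:.3f}")
print(f"Bayes optimal (majority vote) error        ~ {bayes_error:.3f}")
print(f"ratio Gibbs/Bayes ~ {gibbs_error / bayes_error:.2f}"
      f"  (abstract: factor approaches sqrt(2) ~ 1.41 as alpha grows)")
```

At small N and moderate alpha the measured ratio will only loosely track the √2 factor quoted in the abstract, which is a thermodynamic-limit result; the script is meant only to make the Gibbs-versus-Bayes comparison concrete.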