In machine learning, we assume that a learner's predictions approximate the correct outputs, including on samples that never appeared during training. Since those samples are unknown, the outputs could in principle be anything; without additional assumptions, the task is unsolvable. These necessary assumptions about the target function are called the *inductive bias*.
An inductive bias is somewhat like a prior, but with one key difference: the inductive bias is fixed and is not updated during learning, whereas a prior is continually updated as data are observed, as sketched below.
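To make the contrast concrete, here is a minimal sketch (assuming SciPy is available; the coin-flip counts are invented for illustration) of a Beta prior being updated by observed data, something an inductive bias does not undergo.

```python
# A prior is revised by data; an inductive bias is not.
from scipy import stats

# Prior belief about a coin's probability of heads: Beta(2, 2), centred at 0.5.
a, b = 2, 2
print(stats.beta(a, b).mean())   # 0.5 before seeing any data

# Observe 8 heads out of 10 flips: conjugate update gives the posterior Beta(10, 4).
heads, tails = 8, 2
a, b = a + heads, b + tails
print(stats.beta(a, b).mean())   # ~0.71 after seeing the data

# By contrast, the inductive bias of, say, linear regression -- "the relationship
# between x and y is linear" -- is baked into the hypothesis space and is never
# revised by training; only the coefficients within that space are fitted.
```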
Algorithm | Inductive Bias
---|---
Linear Regression | The relationship between the attributes x and the output y is linear. The goal is to minimize the sum of squared errors.
Single-Unit Perceptron | Each input votes independently toward the final classification; the model cannot capture interactions between inputs.
Neural Networks with Backpropagation | Smooth interpolation between data points.
K-Nearest Neighbors | The classification of an instance x will be most similar to the classification of other instances that are nearby in Euclidean distance.
Support Vector Machines | Distinct classes tend to be separated by wide margins.
Naive Bayes | Each input depends only on the output class or label; the inputs are conditionally independent of each other given that class.
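As an illustration of how these biases play out in practice, here is a minimal sketch (assuming NumPy and scikit-learn are available; the toy data and model settings are invented for the example) that fits linear regression and k-nearest neighbors to the same data and asks both to predict far outside the training range. The two models disagree purely because of their different inductive biases.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

# Toy data: y = 2x plus a little noise, with x in [0, 5].
rng = np.random.default_rng(0)
X_train = np.linspace(0, 5, 20).reshape(-1, 1)
y_train = 2 * X_train.ravel() + rng.normal(0, 0.1, 20)

# Linear regression: bias = "the relationship is linear", so it extrapolates the line.
lin = LinearRegression().fit(X_train, y_train)

# k-NN: bias = "nearby points have similar outputs", so outside the training
# range it just averages the outputs of the nearest training points.
knn = KNeighborsRegressor(n_neighbors=3).fit(X_train, y_train)

X_test = np.array([[10.0]])    # far outside the training interval
print(lin.predict(X_test))     # ~20: follows the linear trend
print(knn.predict(X_test))     # ~9.5: stuck near the largest training outputs
```

Neither prediction is "wrong" in isolation; which one generalizes better depends entirely on whether its inductive bias matches the true target function.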