1. Background:
SVMs were first proposed by Vladimir N. Vapnik and Alexey Ya. Chervonenkis in 1963.
The current (soft-margin) version was proposed by Corinna Cortes and Vapnik in 1993 and published in 1995.
Until the rise of deep learning (around 2012), the SVM was widely regarded as the most successful and best-performing machine-learning algorithm of the preceding decade or two.
2. The general machine-learning pipeline:
training set => extract feature vectors => apply a learning algorithm (a classifier such as a decision tree or KNN) => result
3. Introduction:
3.1 Example:
Given two classes, which separating line is best?
3.2 The SVM looks for the separating hyperplane that maximizes the margin between the two classes.
How many candidate separating hyperplanes are there in total? Infinitely many.
How do we choose the one with the largest margin (the Max Margin Hyperplane)?
The distance from the hyperplane to the nearest point on one side equals its distance to the nearest point on the other side, and the two margin hyperplanes on either side are parallel to it.
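This symmetry can be checked numerically: for a hyperplane w·x + b = 0, the perpendicular distance from a point x is |w·x + b| / ||w||. A minimal sketch with an assumed 2-D hyperplane and one nearest point placed on each margin:

```python
import numpy as np

# Hypothetical hyperplane w·x + b = 0 in 2-D, with w = (1, 1), b = -3.
w = np.array([1.0, 1.0])
b = -3.0

def distance_to_hyperplane(x, w, b):
    """Perpendicular distance from point x to the hyperplane w·x + b = 0."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

# The nearest point on each side, placed symmetrically on the two margins.
x_pos = np.array([2.0, 2.0])   # w·x + b = +1
x_neg = np.array([1.0, 1.0])   # w·x + b = -1

d_pos = distance_to_hyperplane(x_pos, w, b)
d_neg = distance_to_hyperplane(x_neg, w, b)
print(d_pos, d_neg)  # equal: each is 1/sqrt(2), half the total margin width
```

The two parallel margin hyperplanes are w·x + b = +1 and w·x + b = -1, so the full margin width is 2/||w||, which is what the SVM maximizes.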
3. Linearly separable vs. linearly inseparable cases
3.1 The linearly separable case
3.1.1 Definition and formulation
A separating hyperplane can be defined as W · X + b = 0, where:
W: weight vector
X: training instance
b: bias
The points that lie on the two hyperplanes bounding the margin are called "support vectors".
3.1.2 How does the SVM find the maximum-margin hyperplane (MMH)?
Through some mathematical derivation, the task becomes a constrained convex quadratic optimization problem.
Using the Karush-Kuhn-Tucker (KKT) conditions and the Lagrangian formulation, the MMH can be expressed as the following "decision boundary": d(X^T) = Σ_{i=1..l} y_i α_i X_i · X^T + b_0, where l is the number of support vectors, y_i is the class label of support vector X_i, α_i and b_0 are numeric parameters determined by the optimization, and X^T is a test instance.
3.1.3 For any test instance to be classified, plug it into the formula above; whether the resulting sign is positive or negative decides which class the instance belongs to.
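A brief sketch of this sign rule using scikit-learn (the tiny training set is assumed for illustration): `decision_function` returns W·X + b for a test instance, and its sign matches the predicted class.

```python
import numpy as np
from sklearn import svm

# Tiny linearly separable training set (assumed for illustration).
X = np.array([[2, 0], [1, 1], [2, 3], [3, 3]])
y = np.array([0, 0, 1, 1])

clf = svm.SVC(kernel='linear').fit(X, y)

# decision_function gives W · X + b for a test instance; its sign is the class.
x_test = np.array([[4, 4]])
score = clf.decision_function(x_test)[0]
pred = clf.predict(x_test)[0]
print(score > 0, pred)  # a positive value corresponds to class 1
```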
3.1.4 Properties
The complexity of a trained model is determined by the number of support vectors, not by the dimensionality of the data, so SVMs are relatively resistant to overfitting.
A trained SVM depends entirely on its support vectors: even if every non-support-vector point were removed from the training set and training were repeated, exactly the same model would result.
An SVM that ends up with relatively few support vectors tends to generalize well.
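The support-vector dependence can be demonstrated directly: retraining on the support vectors alone reproduces, up to solver tolerance, the same hyperplane. A sketch on assumed toy data:

```python
import numpy as np
from sklearn import svm

# Two well-separated Gaussian clusters (assumed toy data).
rng = np.random.RandomState(0)
X = np.r_[rng.randn(20, 2) - [2, 2], rng.randn(20, 2) + [2, 2]]
y = np.array([0] * 20 + [1] * 20)

clf = svm.SVC(kernel='linear').fit(X, y)

# Drop every non-support-vector point and retrain on the support vectors alone.
sv = clf.support_            # indices of the support vectors
clf_sv = svm.SVC(kernel='linear').fit(X[sv], y[sv])

# The two hyperplanes agree up to the solver's numerical tolerance.
print(clf.coef_)
print(clf_sv.coef_)
```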
3.2 The linearly inseparable case
The vectors corresponding to the data set cannot be separated by any single hyperplane.
3.2.1 Two steps to solve it:
Use a non-linear mapping to transform the vectors of the original data set into a higher-dimensional space.
Find a linear separating hyperplane in that higher-dimensional space, then proceed as in the linearly separable case.
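A minimal sketch of these two steps, assuming a toy "ring" dataset that no line in 2-D can separate: the hand-written map φ(x1, x2) = (x1, x2, x1² + x2²) lifts the points into 3-D, where an ordinary linear SVM separates them.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Step 0: build a ring dataset (assumed toy data: inner radius 1, outer radius 3).
rng = np.random.RandomState(0)
theta = rng.uniform(0, 2 * np.pi, 40)
r = np.r_[np.ones(20), 3 * np.ones(20)]
X = np.c_[r * np.cos(theta), r * np.sin(theta)]
y = np.r_[np.zeros(20), np.ones(20)]

# Step 1: non-linear map into a higher-dimensional space.
def phi(X):
    # (x1, x2) -> (x1, x2, x1^2 + x2^2): the new third coordinate is the
    # squared radius, which differs sharply between the two rings.
    return np.c_[X, (X ** 2).sum(axis=1)]

# Step 2: fit an ordinary linear SVM in the lifted space.
clf = LinearSVC(C=10.0, max_iter=10000).fit(phi(X), y)
print(clf.score(phi(X), y))  # 1.0: the lifted data is linearly separable
```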
3.2.2 Kernel methods
3.2.2.1 Motivation
When the linear SVM is converted into an optimization problem, every quantity to be computed appears as a dot product (inner product) of the non-linear mapping function that lifts the training vectors into the high-dimensional space. Computing those dot products directly is very expensive, so a kernel function is used in place of the dot product of the mapped vectors.
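The trick can be verified numerically for the degree-2 polynomial kernel K(x, y) = (x·y + 1)², whose explicit feature map φ on 2-D inputs is known in closed form: the kernel value equals the dot product of the mapped vectors without ever computing φ.

```python
import numpy as np

# Explicit feature map for the degree-2 polynomial kernel on 2-D inputs
# (a standard expansion, written out here for illustration).
def phi(x):
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, x2 ** 2,
                     np.sqrt(2) * x1 * x2])

def K(x, y):
    # The kernel works in the original low-dimensional space.
    return (np.dot(x, y) + 1) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])

# Both quantities equal (1*3 + 2*4 + 1)^2 = 144.
print(K(x, y), np.dot(phi(x), phi(y)))
```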
3.2.2.2 Common kernel functions
- Polynomial kernel of degree h: K(X, Y) = (X · Y + 1)^h
- Gaussian radial basis function (RBF) kernel: K(X, Y) = e^(-||X - Y||^2 / (2σ^2))
- Sigmoid kernel: K(X, Y) = tanh(κ X · Y - δ)
How do you choose which kernel to use?
From prior knowledge: for image classification, for example, the RBF kernel is the usual choice, while for text classification it typically is not.
Try several kernels and pick the one that gives the best accuracy.
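A sketch of the try-and-compare approach, assuming a synthetic ring-shaped dataset on which the RBF kernel should dominate:

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Ring-shaped data: one class inside the other, not linearly separable.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# Try each kernel and compare 5-fold cross-validated accuracy.
scores = {k: cross_val_score(SVC(kernel=k), X, y, cv=5).mean()
          for k in ('linear', 'poly', 'rbf')}
print(scores)  # 'rbf' scores far higher than 'linear' on this data
```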
4. Extending SVMs to multi-class classification
For each class, train a binary classifier separating that class from all the others (one-vs-rest).
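A sketch of one-vs-rest using scikit-learn's `OneVsRestClassifier` wrapper (the iris data set is just an assumed example); it fits one binary classifier per class:

```python
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)  # 3 classes

# One binary "this class vs. all the others" SVM per class.
ovr = OneVsRestClassifier(LinearSVC(max_iter=10000)).fit(X, y)
print(len(ovr.estimators_))   # 3 binary classifiers for 3 classes
print(ovr.score(X, y))
```

Note that `SVC` also accepts multi-class labels directly (internally it uses one-vs-one), so the explicit wrapper is only needed when the one-vs-rest scheme itself is wanted.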
5. SVM in practice
5.1 A simple sklearn example
from sklearn import svm
x = [[2, 0], [1, 1], [2, 3]]
y = [0, 0, 1]
clf = svm.SVC(kernel='linear')
clf.fit(x, y)
print(clf)
# get support vectors
print(clf.support_vectors_)
# get indices of support vectors
print(clf.support_)
# get number of support vectors for each class
print(clf.n_support_)
# predict the point (2, 0)
print(clf.predict([[2, 0]]))
Output:
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape='ovr', degree=3, gamma='auto', kernel='linear',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.001, verbose=False)
[[ 1. 1.]
[ 2. 3.]]
[1 2]
[1 1]
[0]
5.2 Plotting the decision boundary with sklearn
import numpy as np
import pylab as pl
from sklearn import svm
# we create 40 separable points
X = np.r_[np.random.randn(20, 2) - [2, 2], np.random.randn(20, 2) + [2, 2]]
Y = [0]*20 + [1]*20  # the first 20 points are class 0, the last 20 are class 1
# fit the model
clf = svm.SVC(kernel='linear')
clf.fit(X, Y)
# get the separating hyperplane
w = clf.coef_[0]
a = -w[0]/w[1]
xx = np.linspace(-5, 5)
yy = a*xx - (clf.intercept_[0])/w[1]
# plot the parallels to the separating hyperplane that pass through the support vectors
b = clf.support_vectors_[0]  # the first support vector
yy_down = a*xx + (b[1] - a*b[0])
b = clf.support_vectors_[-1]  # the last support vector
yy_up = a*xx + (b[1] - a*b[0])
print("w: ", w)
print("a: ", a)
# print("xx: ", xx)
# print("yy: ", yy)
print("support_vectors_: ", clf.support_vectors_)
print("clf.coef_: ", clf.coef_)
# switching from the generic n-dimensional parameterization of the hyperplane to the 2D equation
# of a line y = a*x + b: the generic w_0*x + w_1*y + intercept = 0 can be rewritten as y = -(w_0/w_1)*x - intercept/w_1
# plot the line, the points, and the nearest vectors to the plane
pl.plot(xx, yy, 'k-')
pl.plot(xx, yy_down, 'k--')
pl.plot(xx, yy_up, 'k--')
pl.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
           s=80, facecolors='none')
pl.scatter(X[:, 0], X[:, 1], c=Y, cmap=pl.cm.Paired)
pl.axis('tight')
pl.show()
Output: a scatter plot of the two classes, the separating line (solid), and the two margin lines through the support vectors (dashed).
5.3 Face recognition with an SVM
from __future__ import print_function
from time import time
import logging
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.datasets import fetch_lfw_people
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.decomposition import PCA
from sklearn.svm import SVC
print(__doc__)
# Display progress logs on stdout
logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')
###############################################################################
# Download the data, if not already on disk and load it as numpy arrays
lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
# introspect the images arrays to find the shapes (for plotting)
n_samples, h, w = lfw_people.images.shape
# for machine learning we use the data directly (relative pixel
# position info is ignored by this model)
X = lfw_people.data
n_features = X.shape[1]
# the label to predict is the id of the person
y = lfw_people.target
target_names = lfw_people.target_names
n_classes = target_names.shape[0]
print("Total dataset size:")
print("n_samples: %d" % n_samples)
print("n_features: %d" % n_features)
print("n_classes: %d" % n_classes)
###############################################################################
# Split into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25)
###############################################################################
# Compute a PCA (eigenfaces) on the face dataset (treated as unlabeled
# dataset): unsupervised feature extraction / dimensionality reduction
n_components = 150
print("Extracting the top %d eigenfaces from %d faces"
      % (n_components, X_train.shape[0]))
t0 = time()
pca = PCA(n_components=n_components, whiten=True, svd_solver='randomized').fit(X_train)  # reduce the high-dimensional features to n_components dimensions
print("done in %0.3fs" % (time() - t0))
eigenfaces = pca.components_.reshape((n_components, h, w))
print("Projecting the input data on the eigenfaces orthonormal basis")
t0 = time()
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
print("done in %0.3fs" % (time() - t0))
###############################################################################
# Train a SVM classification model
print("Fitting the classifier to the training set")
t0 = time()
param_grid = {'C': [1e3, 5e3, 1e4, 5e4, 1e5],
              'gamma': [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1], }
clf = GridSearchCV(SVC(kernel='rbf', class_weight='balanced'), param_grid)
clf = clf.fit(X_train_pca, y_train)
print("done in %0.3fs" % (time() - t0))
print("Best estimator found by grid search:")
print(clf.best_estimator_)
###############################################################################
# Quantitative evaluation of the model quality on the test set
print("Predicting people's names on the test set")
t0 = time()
y_pred = clf.predict(X_test_pca)
print("done in %0.3fs" % (time() - t0))
print(classification_report(y_test, y_pred, target_names=target_names))
print(confusion_matrix(y_test, y_pred, labels=range(n_classes)))
###############################################################################
# Qualitative evaluation of the predictions using matplotlib
def plot_gallery(images, titles, h, w, n_row=3, n_col=4):
    """Helper function to plot a gallery of portraits"""
    plt.figure(figsize=(1.8 * n_col, 2.4 * n_row))
    plt.subplots_adjust(bottom=0, left=.01, right=.99, top=.90, hspace=.35)
    for i in range(n_row * n_col):
        plt.subplot(n_row, n_col, i + 1)
        plt.imshow(images[i].reshape((h, w)), cmap=plt.cm.gray)
        plt.title(titles[i], size=12)
        plt.xticks(())
        plt.yticks(())
# plot the result of the prediction on a portion of the test set
def title(y_pred, y_test, target_names, i):
    pred_name = target_names[y_pred[i]].rsplit(' ', 1)[-1]
    true_name = target_names[y_test[i]].rsplit(' ', 1)[-1]
    return 'predicted: %s\ntrue: %s' % (pred_name, true_name)

prediction_titles = [title(y_pred, y_test, target_names, i)
                     for i in range(y_pred.shape[0])]
plot_gallery(X_test, prediction_titles, h, w)
# plot the gallery of the most significative eigenfaces
eigenface_titles = ["eigenface %d" % i for i in range(eigenfaces.shape[0])]
plot_gallery(eigenfaces, eigenface_titles, h, w)
plt.show()
Note: this article is a set of study notes from the Maizi Academy (麥子學(xué)院) machine-learning course.
Related links:
http://blog.pluskid.org/?p=632
http://blog.csdn.net/v_july_v/article/details/7624837