1. The concept of manifold learning
Manifold learning, first introduced in the journal Science in 2000, has since become a research hotspot in information science. It is of great significance both in theory and in applications.
The assumption is that the data are sampled, roughly uniformly, from a low-dimensional manifold embedded in a high-dimensional Euclidean space. Manifold learning then recovers the low-dimensional manifold structure from the high-dimensional samples: it finds the low-dimensional manifold hidden in the high-dimensional space, together with the corresponding embedding mapping, so as to achieve dimensionality reduction or data visualization. In other words, it looks for the essence behind the observed phenomena, the intrinsic regularity that generated the data.
Put simply, manifold learning can be used to reduce the dimensionality of high-dimensional data. If we reduce to 2 or 3 dimensions, the original data can be visualized, giving an intuitive picture of its distribution and revealing patterns that may be present.
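To make this concrete, here is a minimal sketch (my addition, not part of the original article) that samples points from a 2-D manifold, an S-shaped surface sitting in 3-D space, and uses Isomap to recover a flat 2-D parameterization; it assumes a recent scikit-learn with sklearn.datasets.make_s_curve and sklearn.manifold.Isomap available:
# A minimal sketch: sample a 2-D manifold embedded in 3-D and unroll it with Isomap
from sklearn import datasets, manifold
import matplotlib.pyplot as plt

X3d, t = datasets.make_s_curve(n_samples=1000, random_state=0)  # X3d: (1000, 3); t: intrinsic coordinate along the surface
X2d = manifold.Isomap(n_neighbors=10, n_components=2).fit_transform(X3d)
plt.scatter(X2d[:, 0], X2d[:, 1], c=t, cmap=plt.cm.Spectral, s=5)  # color by intrinsic coordinate
plt.title("S-curve unrolled by Isomap")
plt.show()
If the embedding works, points that are close along the surface stay close in the 2-D plot, and the color gradient varies smoothly along one axis.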
2. Categories of manifold learning methods
Manifold learning methods can be divided into two kinds, linear and nonlinear.
Linear methods include the familiar Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA).
Nonlinear methods include isometric mapping (Isomap), Laplacian eigenmaps (LE), locally linear embedding (LLE), multidimensional scaling (MDS), local tangent space alignment (LTSA), and the t-distributed stochastic neighbor embedding algorithm (t-SNE).
What follows is a small experiment: dimensionality reduction and visualization of a handwritten digits dataset (sklearn's 8*8 digits, a smaller cousin of MNIST) using more than ten algorithms, all of which are already integrated in sklearn; matplotlib is used for plotting.
- Loading the data
# coding:utf-8
from time import time
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.axes3d import Axes3D
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as lda
from sklearn import (manifold, datasets, decomposition, ensemble, random_projection)
# Load the digits dataset from sklearn's datasets module, keeping 5 classes (digits 0-4)
digits = datasets.load_digits(n_class=5)
X = digits.data
y = digits.target
# (901, 64): 901 samples in total; each image is 8*8, i.e. 64 dimensions when flattened
print(X.shape)
# Tile the first 400 samples into a 20*20 grid; each image occupies a 10*10 cell (8*8 pixels plus margin)
n_img_per_row = 20
img = np.zeros((10 * n_img_per_row, 10 * n_img_per_row))
for i in range(n_img_per_row):
    ix = 10 * i + 1
    for j in range(n_img_per_row):
        iy = 10 * j + 1
        img[ix:ix + 8, iy:iy + 8] = X[i * n_img_per_row + j].reshape((8, 8))
plt.imshow(img, cmap=plt.cm.binary)
plt.title('A selection from the 64-dimensional digits dataset')
plt.show()
Running the code shows that X has shape (901, 64), i.e. 901 samples. The figure below shows a selection of them:
- Dimensionality reduction
plot_embedding_2d() visualizes the first two dimensions of an embedding, and plot_embedding_3d() visualizes three dimensions.
n_neighbors = 30

# 2-D visualization
def plot_embedding_2d(X, title=None):
    # Rescale each coordinate to the [0, 1) interval
    x_min, x_max = np.min(X, axis=0), np.max(X, axis=0)
    X = (X - x_min) / (x_max - x_min)
    # The embedded coordinates are (X[i, 0], X[i, 1]); draw the digit label at that position
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    for i in range(X.shape[0]):
        ax.text(X[i, 0], X[i, 1], str(digits.target[i]),
                color=plt.cm.Set1(y[i] / 10.),
                fontdict={'weight': 'bold', 'size': 9})
    if title is not None:
        plt.title(title)

# 3-D visualization
def plot_embedding_3d(X, title=None):
    # Rescale each coordinate to the [0, 1) interval
    x_min, x_max = np.min(X, axis=0), np.max(X, axis=0)
    X = (X - x_min) / (x_max - x_min)
    # The embedded coordinates are (X[i, 0], X[i, 1], X[i, 2]); draw the digit label at that position
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1, projection='3d')
    for i in range(X.shape[0]):
        ax.text(X[i, 0], X[i, 1], X[i, 2], str(digits.target[i]),
                color=plt.cm.Set1(y[i] / 10.),
                fontdict={'weight': 'bold', 'size': 9})
    if title is not None:
        plt.title(title)
- Running the algorithms
Random projection, from 64 dimensions down to 2:
# Random projection with n_components=2: from 64 dimensions down to 2
print("Computing random projection")
rp = random_projection.SparseRandomProjection(n_components=2, random_state=42)
X_projected = rp.fit_transform(X)
plot_embedding_2d(X_projected, "Random Projection")
Principal component analysis (PCA), from 64 dimensions down to 2 and 3:
# PCA from 64 dimensions down to 2 and 3
# (TruncatedSVD computes a PCA-like projection without mean-centering; it is fast on this data)
print("Computing PCA projection")
t0 = time()
X_pca = decomposition.TruncatedSVD(n_components=3).fit_transform(X)
plot_embedding_2d(X_pca[:, 0:2], "PCA 2D")
plot_embedding_3d(X_pca, "PCA 3D (time %.2fs)" % (time() - t0))
Linear Discriminant Analysis (LDA), from 64 dimensions down to 2 and 3. Note that LDA is supervised: unlike the other methods here, it uses the labels y.
# Linear Discriminant Analysis (LDA) from 64 dimensions down to 2 and 3
print("Computing LDA projection")
X2 = X.copy()
X2.flat[::X.shape[1] + 1] += 0.01  # perturb the diagonal to make X2 well-conditioned (invertible)
t0 = time()
X_lda = lda(n_components=3).fit_transform(X2, y)
plot_embedding_2d(X_lda[:, 0:2], "LDA 2D")
plot_embedding_3d(X_lda, "LDA 3D (time %.2fs)" % (time() - t0))
Isometric mapping (Isomap), from 64 dimensions down to 2:
# Isomap from 64 dimensions down to 2
print("Computing Isomap embedding")
t0 = time()
X_iso = manifold.Isomap(n_neighbors=n_neighbors, n_components=2).fit_transform(X)
print("Done.")
plot_embedding_2d(X_iso, "Isomap (time %.2fs)" % (time() - t0))
Locally linear embedding (LLE) and its variants, from 64 dimensions down to 2:
# Standard LLE from 64 dimensions down to 2
print("Computing LLE embedding")
clf = manifold.LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=2, method='standard')
t0 = time()
X_lle = clf.fit_transform(X)
# Done. Reconstruction error: 1.11351e-06
print("Done. Reconstruction error: %g" % clf.reconstruction_error_)
plot_embedding_2d(X_lle, "Locally Linear Embedding (time %.2fs)" % (time() - t0))

# Modified LLE from 64 dimensions down to 2
print("Computing modified LLE embedding")
clf = manifold.LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=2, method='modified')
t0 = time()
X_mlle = clf.fit_transform(X)
# Done. Reconstruction error: 0.282968
print("Done. Reconstruction error: %g" % clf.reconstruction_error_)
plot_embedding_2d(X_mlle, "Modified Locally Linear Embedding (time %.2fs)" % (time() - t0))

# Hessian LLE from 64 dimensions down to 2
print("Computing Hessian LLE embedding")
clf = manifold.LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=2, method='hessian')
t0 = time()
X_hlle = clf.fit_transform(X)
# Done. Reconstruction error: 0.158393
print("Done. Reconstruction error: %g" % clf.reconstruction_error_)
plot_embedding_2d(X_hlle, "Hessian Locally Linear Embedding (time %.2fs)" % (time() - t0))
Local tangent space alignment (LTSA), from 64 dimensions down to 2:
# LTSA from 64 dimensions down to 2 (implemented as an LLE variant in sklearn)
print("Computing LTSA embedding")
clf = manifold.LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=2, method='ltsa')
t0 = time()
X_ltsa = clf.fit_transform(X)
print("Done. Reconstruction error: %g" % clf.reconstruction_error_)
plot_embedding_2d(X_ltsa, "Local Tangent Space Alignment (time %.2fs)" % (time() - t0))
Multidimensional scaling (MDS), from 64 dimensions down to 2:
# MDS from 64 dimensions down to 2
print("Computing MDS embedding")
clf = manifold.MDS(n_components=2, n_init=1, max_iter=100)
t0 = time()
X_mds = clf.fit_transform(X)
print("Done. Stress: %f" % clf.stress_)
plot_embedding_2d(X_mds, "MDS (time %.2fs)" % (time() - t0))
Totally random trees embedding, from 64 dimensions down to 2:
# Totally random trees: hash the samples into a sparse high-dimensional leaf encoding,
# then reduce that encoding to 2 dimensions with TruncatedSVD
print("Computing Totally Random Trees embedding")
hasher = ensemble.RandomTreesEmbedding(n_estimators=200, random_state=0, max_depth=5)
t0 = time()
X_transformed = hasher.fit_transform(X)
pca = decomposition.TruncatedSVD(n_components=2)
X_reduced = pca.fit_transform(X_transformed)
plot_embedding_2d(X_reduced, "Random Trees (time %.2fs)" % (time() - t0))
Spectral embedding (sklearn's implementation of Laplacian eigenmaps), from 64 dimensions down to 2:
# Spectral embedding from 64 dimensions down to 2
print("Computing Spectral embedding")
embedder = manifold.SpectralEmbedding(n_components=2, random_state=0, eigen_solver="arpack")
t0 = time()
X_se = embedder.fit_transform(X)
plot_embedding_2d(X_se, "Spectral (time %.2fs)" % (time() - t0))
t-distributed stochastic neighbor embedding (t-SNE), from 64 dimensions down to 2 and 3:
# t-SNE from 64 dimensions down to 2 and 3
# init sets how the embedding is initialized ('random' or 'pca'); 'pca' is used here
# because it is more stable than random initialization
print("Computing t-SNE embedding")
tsne = manifold.TSNE(n_components=3, init='pca', random_state=0)
t0 = time()
X_tsne = tsne.fit_transform(X)
# After reduction, X_tsne has shape (901, 3)
print(X_tsne.shape)
plot_embedding_2d(X_tsne[:, 0:2], "t-SNE 2D")
plot_embedding_3d(X_tsne, "t-SNE 3D (time %.2fs)" % (time() - t0))
plt.show()
Summary
Across these ten-plus algorithms the results vary in quality; overall, t-SNE separates the classes best, but it is also the most computationally expensive.
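If t-SNE's cost becomes a problem on larger data, one common mitigation (my own suggestion, not part of the original experiment) is to compress the data with PCA first and run t-SNE on the compressed features. A minimal sketch, reusing X and plot_embedding_2d from above:
# Hypothetical speed-up: reduce to a few dozen dimensions with PCA before running t-SNE
from sklearn import decomposition, manifold

X_compact = decomposition.PCA(n_components=30).fit_transform(X)  # keep the 30 leading components
X_tsne2 = manifold.TSNE(n_components=2, init='pca', random_state=0).fit_transform(X_compact)
plot_embedding_2d(X_tsne2, "t-SNE 2D after PCA to 30 dims")
plt.show()
On this small 64-dimensional dataset the saving is minor, but on genuinely high-dimensional data the PCA step can cut t-SNE's running time considerably.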
References:
https://blog.csdn.net/u012162613/article/details/45920827
Manifold learning concept: https://blog.csdn.net/zhulingchen/article/details/2123129
Code: https://github.com/wepe/MachineLearning