Powerful Python Packages (5): sklearn (Machine Learning)

1. Introduction to sklearn

[image: sklearn logo]

sklearn is a machine-learning toolkit for the Python language and is arguably the go-to library for machine-learning projects today.
sklearn ships with a large number of built-in datasets that you can use to practice all kinds of machine-learning algorithms.
sklearn integrates a very comprehensive set of algorithms covering data preprocessing, feature selection, dimensionality reduction, classification/regression/clustering models, and model evaluation.

2. sklearn Data Types

The data a machine-learning model ultimately processes is numeric; it just arrives in different forms, such as matrices, text, images, video, and audio.

3. sklearn Overview

[image: modules included in sklearn]

Datasets

  • sklearn.datasets
  1. Small datasets (loaded locally): datasets.load_xxx( )
  2. Large datasets (downloaded online): datasets.fetch_xxx( )
  3. Generated datasets (constructed locally): datasets.make_xxx( )

| Dataset | Description |
| --- | --- |
| load_iris( ) | Iris dataset: 3 classes, 4 features, 150 samples |
| load_boston( ) | Boston house-price dataset: 13 features, 506 samples |
| load_digits( ) | Handwritten-digits dataset: 10 classes, 64 features, 1797 samples |
| load_breast_cancer( ) | Breast-cancer dataset: 2 classes, 30 features, 569 samples |
| load_diabetes( ) | Diabetes dataset: 10 features, 442 samples |
| load_wine( ) | Wine dataset: 3 classes, 13 features, 178 samples |
| load_files( ) | Load a custom text-classification dataset |
| load_linnerud( ) | Linnerud physical-exercise dataset: 3 features, 20 samples |
| load_sample_image( ) | Load a single sample image |
| load_svmlight_file( ) | Load data in svmlight format |
| make_blobs( ) | Generate a multi-class single-label dataset |
| make_biclusters( ) | Generate a biclustering dataset |
| make_checkerboard( ) | Generate an array with checkerboard structure for biclustering |
| make_circles( ) | Generate a 2D binary-classification dataset (concentric circles) |
| make_classification( ) | Generate a multi-class single-label dataset |
| make_friedman1( ) | Generate a dataset using polynomial and sine transforms |
| make_gaussian_quantiles( ) | Generate a Gaussian-distributed dataset |
| make_hastie_10_2( ) | Generate a 10-dimensional binary-classification dataset |
| make_low_rank_matrix( ) | Generate a low-rank matrix with bell-shaped singular values |
| make_moons( ) | Generate a 2D binary-classification dataset (interleaving half-moons) |
| make_multilabel_classification( ) | Generate a multi-class multi-label dataset |
| make_regression( ) | Generate a regression dataset |
| make_s_curve( ) | Generate an S-curve dataset |
| make_sparse_coded_signal( ) | Generate a signal as a sparse combination of dictionary elements |
| make_sparse_spd_matrix( ) | Generate a sparse symmetric positive-definite matrix |
| make_sparse_uncorrelated( ) | Generate a random regression problem with a sparse uncorrelated design |
| make_spd_matrix( ) | Generate a random symmetric positive-definite matrix |
| make_swiss_roll( ) | Generate a swiss-roll dataset |

Sample code for loading datasets:

```python
from sklearn import datasets
import matplotlib.pyplot as plt

# iris: 150 samples, 4 features, 3 classes
iris = datasets.load_iris()
features = iris.data
target = iris.target
print(features.shape, target.shape)
print(iris.feature_names)

# Boston housing (note: load_boston was removed in scikit-learn 1.2)
boston = datasets.load_boston()
boston_features = boston.data
boston_target = boston.target
print(boston_features.shape, boston_target.shape)
print(boston.feature_names)

# handwritten digits
digits = datasets.load_digits()
digits_features = digits.data
digits_target = digits.target
print(digits_features.shape, digits_target.shape)

# a single sample image
img = datasets.load_sample_image('flower.jpg')
print(img.shape)
plt.imshow(img)
plt.show()

# synthetic clustering data
data, target = datasets.make_blobs(n_samples=1000, n_features=2, centers=4, cluster_std=1)
plt.scatter(data[:, 0], data[:, 1], c=target)
plt.show()

# synthetic classification data
data, target = datasets.make_classification(n_classes=4, n_samples=1000, n_features=2,
                                            n_informative=2, n_redundant=0, n_clusters_per_class=1)
print(data.shape)
plt.scatter(data[:, 0], data[:, 1], c=target)
plt.show()

# synthetic regression data
x, y = datasets.make_regression(n_samples=10, n_features=1, n_targets=1, noise=1.5, random_state=1)
print(x.shape, y.shape)
plt.scatter(x, y)
plt.show()
```

Data Preprocessing

  • sklearn.preprocessing

| Function | Purpose |
| --- | --- |
| preprocessing.scale( ) | Standardization |
| preprocessing.MinMaxScaler( ) | Min-max scaling |
| preprocessing.StandardScaler( ) | Standardization (zero mean, unit variance) |
| preprocessing.MaxAbsScaler( ) | Scaling by maximum absolute value |
| preprocessing.RobustScaler( ) | Scaling for datasets with outliers |
| preprocessing.QuantileTransformer( ) | Transform features using quantile information |
| preprocessing.PowerTransformer( ) | Power transform mapping to a normal distribution |
| preprocessing.Normalizer( ) | Normalization (per-sample unit norm) |
| preprocessing.OrdinalEncoder( ) | Encode categorical features as integer codes |
| preprocessing.LabelEncoder( ) | Encode target labels as integer codes |
| preprocessing.MultiLabelBinarizer( ) | Multi-label binarization |
| preprocessing.OneHotEncoder( ) | One-hot encoding |
| preprocessing.KBinsDiscretizer( ) | Discretize continuous data into bins |
| preprocessing.FunctionTransformer( ) | Apply a user-defined transformation |
| preprocessing.Binarizer( ) | Feature binarization |
| preprocessing.PolynomialFeatures( ) | Create polynomial features |
| preprocessing.Imputer( ) | Impute missing values (replaced by impute.SimpleImputer in newer releases) |

Data preprocessing code


```python
import numpy as np
from sklearn import preprocessing

# standardization: transform the data to zero mean and unit variance,
# i.e. toward a standard normal distribution
x = np.array([[1, -1, 2], [2, 0, 0], [0, 1, -1]])
x_scale = preprocessing.scale(x)
print(x_scale.mean(axis=0), x_scale.std(axis=0))

std_scale = preprocessing.StandardScaler().fit(x)
x_std = std_scale.transform(x)
print(x_std.mean(axis=0), x_std.std(axis=0))

# scale the data to a given range (0 to 1)
mm_scale = preprocessing.MinMaxScaler()
x_mm = mm_scale.fit_transform(x)
print(x_mm.mean(axis=0), x_mm.std(axis=0))

# scale the data to the range -1 to 1; suitable for sparse data
mb_scale = preprocessing.MaxAbsScaler()
x_mb = mb_scale.fit_transform(x)
print(x_mb.mean(axis=0), x_mb.std(axis=0))

# suitable for data containing outliers
rob_scale = preprocessing.RobustScaler()
x_rob = rob_scale.fit_transform(x)
print(x_rob.mean(axis=0), x_rob.std(axis=0))

# normalization: scale each sample to unit norm
nor_scale = preprocessing.Normalizer()
x_nor = nor_scale.fit_transform(x)
print(x_nor.mean(axis=0), x_nor.std(axis=0))

# feature binarization: convert numeric features to boolean values
bin_scale = preprocessing.Binarizer()
x_bin = bin_scale.fit_transform(x)
print(x_bin)

# one-hot encode categorical features or labels
ohe = preprocessing.OneHotEncoder()
x1 = [[0, 0, 3], [1, 1, 0], [1, 0, 2]]
x_ohe = ohe.fit(x1).transform([[0, 1, 3]])
print(x_ohe)
```

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# create degree-2 polynomial features
x = np.arange(6).reshape(3, 2)
poly = PolynomialFeatures(2)
x_poly = poly.fit_transform(x)
print(x)
print(x_poly)
```

```python
import numpy as np
from sklearn.preprocessing import FunctionTransformer

# a user-defined feature-transformation function
transformer = FunctionTransformer(np.log1p)

x = np.array([[0, 1], [2, 3]])
x_trans = transformer.transform(x)
print(x_trans)
```

```python
import numpy as np
from sklearn import preprocessing

# discretize continuous features into bins
x = np.array([[-3, 5, 15], [0, 6, 14], [6, 3, 11]])
kbd = preprocessing.KBinsDiscretizer(n_bins=[3, 2, 2], encode='ordinal').fit(x)
x_kbd = kbd.transform(x)
print(x_kbd)
```

```python
from sklearn.preprocessing import MultiLabelBinarizer

# multi-label binarization
mlb = MultiLabelBinarizer()
x_mlb = mlb.fit_transform([(1, 2), (3, 4), (5,)])
print(x_mlb)
```

  • sklearn.svm

| Function | Purpose |
| --- | --- |
| svm.OneClassSVM( ) | Unsupervised outlier detection |
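
As a quick illustration, here is a minimal sketch of outlier detection with OneClassSVM (the toy data and the nu value are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# train on mostly "normal" points; predict() returns +1 for inliers, -1 for outliers
x = np.array([[0.0, 0.0], [0.1, -0.1], [-0.1, 0.1], [0.2, 0.0], [5.0, 5.0]])
clf = OneClassSVM(nu=0.2, kernel='rbf', gamma='auto')
clf.fit(x)
print(clf.predict([[0.05, 0.05], [4.0, 4.0]]))
```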

The preprocessing classes above share the following methods:

| Method | Purpose |
| --- | --- |
| xxx.fit( ) | Fit to the data |
| xxx.fit_transform( ) | Fit to the data, then transform it |
| xxx.get_params( ) | Get the transformer's parameters |
| xxx.inverse_transform( ) | Undo the transformation |
| xxx.set_params( ) | Set the transformer's parameters |
| xxx.transform( ) | Transform the data |
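
A quick sketch of this shared interface using StandardScaler (the values are illustrative):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

x = np.array([[1.0, -1.0], [2.0, 0.0], [0.0, 1.0]])
scaler = StandardScaler()
x_std = scaler.fit_transform(x)           # fit() and transform() in one call
print(scaler.get_params())                # the estimator's parameters
x_back = scaler.inverse_transform(x_std)  # undo the transformation
print(np.allclose(x, x_back))             # True
```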

Feature Selection

The datasets we train models on often contain many features; some are redundant and some barely correlate with the target. Carefully picking a few "good" features to train on both shortens training time and can improve model performance.

For example, suppose a dataset has four features (nose-wing length, eye-corner length, forehead width, blood type) and we use it for face recognition; we would certainly drop the blood-type feature first, because blood type is useless for the goal of recognizing faces.

  • sklearn.feature_selection

| Function | Purpose |
| --- | --- |
| feature_selection.SelectKBest( ) | Select the K highest-scoring features (score functions include chi2, f_regression, and mutual_info_regression) |
| feature_selection.VarianceThreshold( ) | Unsupervised feature selection by variance |
| feature_selection.RFE( ) | Recursive feature elimination |
| feature_selection.RFECV( ) | Recursive feature elimination with cross-validation |
| feature_selection.SelectFromModel( ) | Feature selection based on a fitted model |

Feature selection code

```python
from sklearn.datasets import load_digits
from sklearn.feature_selection import SelectKBest, chi2

# keep the 20 features with the highest chi-squared scores
digits = load_digits()
data = digits.data
target = digits.target
print(data.shape)
data_new = SelectKBest(chi2, k=20).fit_transform(data, target)
print(data_new.shape)
```

```python
from sklearn.feature_selection import VarianceThreshold

# drop features whose variance is below the threshold
x = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1], [0, 1, 0], [0, 1, 1]]
vt = VarianceThreshold(threshold=(0.8 * (1 - 0.8)))
x_new = vt.fit_transform(x)
print(x)
print(x_new)
```

```python
from sklearn.svm import LinearSVC
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectFromModel

iris = load_iris()
x, y = iris.data, iris.target

# an L1-penalized linear SVC drives some coefficients to zero;
# SelectFromModel keeps only the features with non-zero weights
lsvc = LinearSVC(C=0.01, penalty='l1', dual=False).fit(x, y)
model = SelectFromModel(lsvc, prefit=True)
x_new = model.transform(x)

print(x.shape)
print(x_new.shape)
```

```python
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.feature_selection import RFECV
from sklearn.datasets import load_iris

iris = load_iris()
x, y = iris.data, iris.target

# recursive feature elimination with cross-validation
svc = SVC(kernel='linear')
rfecv = RFECV(estimator=svc, step=1, cv=StratifiedKFold(2),
              scoring='accuracy', verbose=1, n_jobs=1).fit(x, y)
x_rfe = rfecv.transform(x)
print(x_rfe.shape)

# evaluate a classifier on the selected features
clf = SVC(gamma="auto", C=0.8)
scores = cross_val_score(clf, x_rfe, y, cv=5)
print(scores)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
```


Dimensionality Reduction

Besides feature selection, we can also shrink a dataset with many features using dimensionality-reduction algorithms. The difference between the two: feature selection picks a subset of the original features, while dimensionality reduction generates new features from the original ones.

Many people feel the urge to rank feature selection against dimensionality reduction, but such comparisons divorced from a concrete problem are of little value; remember that every algorithm has domains it excels in.

  • sklearn.decomposition

| Function | Purpose |
| --- | --- |
| decomposition.PCA( ) | Principal component analysis |
| decomposition.KernelPCA( ) | Kernel PCA |
| decomposition.IncrementalPCA( ) | Incremental PCA |
| decomposition.MiniBatchSparsePCA( ) | Mini-batch sparse PCA |
| decomposition.SparsePCA( ) | Sparse PCA |
| decomposition.FactorAnalysis( ) | Factor analysis |
| decomposition.TruncatedSVD( ) | Truncated singular value decomposition |
| decomposition.FastICA( ) | Fast algorithm for independent component analysis |
| decomposition.DictionaryLearning( ) | Dictionary learning |
| decomposition.MiniBatchDictionaryLearning( ) | Mini-batch dictionary learning |
| decomposition.dict_learning( ) | Dictionary learning for matrix factorization |
| decomposition.dict_learning_online( ) | Online dictionary learning for matrix factorization |
| decomposition.LatentDirichletAllocation( ) | Latent Dirichlet allocation with online variational Bayes |
| decomposition.NMF( ) | Non-negative matrix factorization |
| decomposition.SparseCoder( ) | Sparse coding |

Dimensionality reduction code

```python
import numpy as np
from sklearn.decomposition import PCA

# principal component analysis; 'mle' lets PCA choose the dimension by MLE
x = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
pca1 = PCA(n_components=2)
pca2 = PCA(n_components='mle')
pca1.fit(x)
pca2.fit(x)
x_new1 = pca1.transform(x)
x_new2 = pca2.transform(x)
print(x_new1.shape)
print(x_new2.shape)
```

```python
import numpy as np
from sklearn.decomposition import KernelPCA
import math

# KernelPCA is suited to non-linear dimensionality reduction:
# two concentric rings of points
x = []
y = []
N = 500

for i in range(N):
    deg = np.random.randint(0, 360)
    if np.random.randint(0, 2) % 2 == 0:
        x.append([6 * math.sin(deg), 6 * math.cos(deg)])
        y.append(1)
    else:
        x.append([15 * math.sin(deg), 15 * math.cos(deg)])
        y.append(0)

y = np.array(y)
x = np.array(x)

kpca = KernelPCA(kernel='rbf', n_components=14)
x_kpca = kpca.fit_transform(x)
print(x_kpca.shape)
```

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import IncrementalPCA
from scipy import sparse

X, _ = load_digits(return_X_y=True)

# incremental PCA: suited to datasets too large to fit in memory
transform = IncrementalPCA(n_components=7, batch_size=200)
transform.partial_fit(X[:100, :])

x_sparse = sparse.csr_matrix(X)
x_transformed = transform.fit_transform(x_sparse)
print(x_transformed.shape)
```

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.decomposition import MiniBatchSparsePCA

# mini-batch sparse PCA
x, _ = make_friedman1(n_samples=200, n_features=30, random_state=0)
transformer = MiniBatchSparsePCA(n_components=5, batch_size=50, random_state=0)
transformer.fit(x)
x_transformed = transformer.transform(x)
print(x_transformed.shape)
```

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import FactorAnalysis

# factor analysis
x, _ = load_digits(return_X_y=True)
transformer = FactorAnalysis(n_components=7, random_state=0)
x_transformed = transformer.fit_transform(x)
print(x_transformed.shape)
```

  • sklearn.manifold

| Function | Purpose |
| --- | --- |
| manifold.LocallyLinearEmbedding( ) | Locally linear embedding |
| manifold.Isomap( ) | Isomap manifold learning |
| manifold.MDS( ) | Multidimensional scaling |
| manifold.TSNE( ) | t-distributed stochastic neighbor embedding |
| manifold.SpectralEmbedding( ) | Spectral embedding for non-linear dimensionality reduction |
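
These estimators follow the usual fit/transform pattern. A minimal sketch with Isomap and TSNE on the digits data (the parameter choices are illustrative):

```python
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap, TSNE

x, _ = load_digits(return_X_y=True)

# non-linear embedding down to 2 dimensions
x_iso = Isomap(n_components=2).fit_transform(x)
print(x_iso.shape)   # (1797, 2)

# t-SNE embeds the data it is fit on (no out-of-sample transform)
x_tsne = TSNE(n_components=2, random_state=0).fit_transform(x)
print(x_tsne.shape)  # (1797, 2)
```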

Classification Models

A classification model learns from a dataset and builds up its own "understanding"; after training it can tell apart the things it has seen, much like a child learning to recognize objects.

  • sklearn.tree

| Function | Purpose |
| --- | --- |
| tree.DecisionTreeClassifier( ) | Decision tree classifier |

Decision tree classification

```python
from sklearn.datasets import load_iris
from sklearn import tree
import matplotlib.pyplot as plt

x, y = load_iris(return_X_y=True)
clf = tree.DecisionTreeClassifier()
clf = clf.fit(x, y)
tree.plot_tree(clf)
plt.show()
```

  • sklearn.ensemble

| Function | Purpose |
| --- | --- |
| ensemble.BaggingClassifier( ) | Bagging ensemble classifier |
| ensemble.AdaBoostClassifier( ) | AdaBoost ensemble classifier |
| ensemble.RandomForestClassifier( ) | Random forest classifier |
| ensemble.ExtraTreesClassifier( ) | Extremely randomized trees classifier |
| ensemble.RandomTreesEmbedding( ) | Embedding based on completely random trees |
| ensemble.GradientBoostingClassifier( ) | Gradient boosting classifier |
| ensemble.VotingClassifier( ) | Voting classifier |

BaggingClassifier

```python
# Bagging (bootstrap aggregating) with decision trees to improve classification.
# X and Y hold the features and labels of the iris dataset.

from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn import datasets

# load the iris dataset
iris = datasets.load_iris()
X = iris.data
Y = iris.target

# K-fold cross-validation splits
kfold = KFold(n_splits=9)

# a single decision tree, evaluated with cross-validation
cart = DecisionTreeClassifier(criterion='gini', max_depth=2)
cart = cart.fit(X, Y)
result = cross_val_score(cart, X, Y, cv=kfold)
print('CART result:', result.mean())

# bagging, evaluated with cross-validation
# (base_estimator was renamed to estimator in scikit-learn 1.2)
model = BaggingClassifier(base_estimator=cart, n_estimators=100)  # build 100 models
result = cross_val_score(model, X, Y, cv=kfold)
print('Result after bagging:', result.mean())
```

AdaBoostClassifier

```python
# Use sklearn's boosting classifier to improve a decision tree's accuracy.
# load_breast_cancer() loads the breast-cancer dataset; X holds the features
# (cell-nucleus measurements) and Y the labels (benign/malignant).

from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn import datasets

# load the data
dataset_all = datasets.load_breast_cancer()
X = dataset_all.data
Y = dataset_all.target

# K-fold cross-validation splits
kfold = KFold(n_splits=10)

# base decision tree
dtree = DecisionTreeClassifier(criterion='gini', max_depth=3)

# boosting, evaluated with cross-validation
model = AdaBoostClassifier(base_estimator=dtree, n_estimators=100)
result = cross_val_score(model, X, Y, cv=kfold)
print("Result after boosting:", result.mean())
```

RandomForestClassifier, ExtraTreesClassifier

```python
# Compare random forests and extremely randomized trees
# on a randomly generated dataset.

from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
import matplotlib.pyplot as plt

# make_blobs generates random clustered samples: n_samples is the number of
# samples, n_features the number of features per sample, centers the number
# of classes, and random_state the random seed
x, y = make_blobs(n_samples=1000, n_features=6, centers=50, random_state=0)
plt.scatter(x[:, 0], x[:, 1], c=y)
plt.show()

# random forest; n_estimators is the number of trees (weak learners): too few
# and the model underfits, too many and computation grows for little gain
clf = RandomForestClassifier(n_estimators=10, max_depth=None, min_samples_split=2, random_state=0)
scores = cross_val_score(clf, x, y)
print('RandomForestClassifier result:', scores.mean())

# extremely randomized trees
clf = ExtraTreesClassifier(n_estimators=10, max_depth=None, min_samples_split=2, random_state=0)
scores = cross_val_score(clf, x, y)
print('ExtraTreesClassifier result:', scores.mean())
# Extremely randomized trees usually edge out random forests here because the
# split-selection step adds extra randomness: candidate thresholds are drawn
# at random for each feature and the best one becomes the split rule, which
# further reduces the model's variance.
```

GradientBoostingClassifier

```python
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt

# make_blobs generates random clustered samples (see above)
x, y = make_blobs(n_samples=1000, n_features=6, centers=50, random_state=0)
plt.scatter(x[:, 0], x[:, 1], c=y)
plt.show()

x_train, x_test, y_train, y_test = train_test_split(x, y)

# train a gradient-boosted decision tree (GBDT) model
gbr = GradientBoostingClassifier(n_estimators=3000, max_depth=2, min_samples_split=2, learning_rate=0.1)
gbr.fit(x_train, y_train.ravel())

y_gbr = gbr.predict(x_train)
y_gbr1 = gbr.predict(x_test)
acc_train = gbr.score(x_train, y_train)
acc_test = gbr.score(x_test, y_test)
print(acc_train)
print(acc_test)
```

VotingClassifier

```python
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# VotingClassifier fits several models at once and combines their
# predictions: majority vote ('hard') or averaged probabilities ('soft')
x, y = datasets.make_moons(n_samples=500, noise=0.3, random_state=42)

plt.scatter(x[y == 0, 0], x[y == 0, 1])
plt.scatter(x[y == 1, 0], x[y == 1, 1])
plt.show()

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)

voting_hard = VotingClassifier(estimators=[
    ('log_clf', LogisticRegression()),
    ('svm_clf', SVC()),
    ('dt_clf', DecisionTreeClassifier(random_state=10)),
], voting='hard')

voting_soft = VotingClassifier(estimators=[
    ('log_clf', LogisticRegression()),
    ('svm_clf', SVC(probability=True)),  # soft voting needs predict_proba
    ('dt_clf', DecisionTreeClassifier(random_state=10)),
], voting='soft')

voting_hard.fit(x_train, y_train)
print(voting_hard.score(x_test, y_test))

voting_soft.fit(x_train, y_train)
print(voting_soft.score(x_test, y_test))
```

  • sklearn.linear_model

| Function | Purpose |
| --- | --- |
| linear_model.LogisticRegression( ) | Logistic regression |
| linear_model.Perceptron( ) | Perceptron |
| linear_model.SGDClassifier( ) | Linear classifier trained with stochastic gradient descent |
| linear_model.PassiveAggressiveClassifier( ) | Passive-aggressive (incremental learning) classifier |

LogisticRegression

```python
from sklearn import linear_model, datasets
from sklearn.model_selection import train_test_split

iris = datasets.load_iris()
x = iris.data
y = iris.target

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)

# a larger max_iter so the solver converges with a weak penalty (C=1e5)
logreg = linear_model.LogisticRegression(C=1e5, max_iter=1000)
logreg.fit(x_train, y_train)

score = logreg.score(x_test, y_test)
print(score)
```

Perceptron

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import Perceptron

x, y = load_digits(return_X_y=True)
clf = Perceptron(tol=1e-3, random_state=0)
clf.fit(x, y)
print(clf.score(x, y))
```

SGDClassifier

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

x = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
y = np.array([1, 1, 2, 2])

clf = make_pipeline(StandardScaler(), SGDClassifier(max_iter=1000, tol=1e-3))
clf.fit(x, y)
print(clf.score(x, y))
print(clf.predict([[-0.8, -1]]))
```

PassiveAggressiveClassifier

```python
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

x, y = make_classification(n_features=4, random_state=0)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)

clf = PassiveAggressiveClassifier(max_iter=1000, random_state=0, tol=1e-3)
clf.fit(x_train, y_train)
print(clf.score(x_test, y_test))
```

  • sklearn.svm

| Function | Purpose |
| --- | --- |
| svm.SVC( ) | Support vector classifier |
| svm.NuSVC( ) | Nu-support vector classifier |
| svm.LinearSVC( ) | Linear support vector classifier |

SVC

```python
from sklearn.svm import SVC

x = [[2, 0], [1, 1], [2, 3]]
y = [0, 0, 1]

clf = SVC(kernel='linear')
clf.fit(x, y)
print(clf.predict([[2, 2]]))
```

NuSVC

```python
import numpy as np
from sklearn import svm

x = np.array([[0], [1], [2], [3]])
y = np.array([0, 1, 2, 3])

clf = svm.NuSVC()
clf.fit(x, y)
print(clf.predict([[4]]))
```

LinearSVC

```python
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.svm import LinearSVC

iris = datasets.load_iris()
X = iris.data
y = iris.target

plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red')
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue')
plt.show()

# give the solver more iterations (a C this large converges slowly)
svc = LinearSVC(C=10**9, max_iter=100000)
svc.fit(X, y)
print(svc.score(X, y))
```

  • sklearn.neighbors

| Function | Purpose |
| --- | --- |
| neighbors.NearestNeighbors( ) | Unsupervised nearest-neighbor search |
| neighbors.NearestCentroid( ) | Nearest-centroid classifier |
| neighbors.KNeighborsClassifier( ) | K-nearest-neighbors classifier |
| neighbors.KDTree( ) | KD-tree for fast nearest-neighbor lookup |
| neighbors.KNeighborsTransformer( ) | Transform data into a weighted graph of its K nearest neighbors |

NearestNeighbors

```python
from sklearn.neighbors import NearestNeighbors

samples = [[0, 0, 2], [1, 0, 0], [0, 0, 1]]
neigh = NearestNeighbors(n_neighbors=2, radius=0.4)
neigh.fit(samples)

print(neigh.kneighbors([[0, 0, 1.3]], 2, return_distance=True))
print(neigh.radius_neighbors([[0, 0, 1.3]], 0.4, return_distance=False))
```

NearestCentroid

```python
import numpy as np
from sklearn.neighbors import NearestCentroid

x = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
y = np.array([1, 1, 1, 2, 2, 2])

clf = NearestCentroid()
clf.fit(x, y)
print(clf.predict([[-0.8, -1]]))
```

KNeighborsClassifier

```python
from sklearn.neighbors import KNeighborsClassifier

x, y = [[0], [1], [2], [3]], [0, 0, 1, 1]

neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(x, y)
print(neigh.predict([[1.1]]))
```

KDTree

```python
import numpy as np
from sklearn.neighbors import KDTree

rng = np.random.RandomState(0)
x = rng.random_sample((10, 3))
tree = KDTree(x, leaf_size=2)
dist, ind = tree.query(x[:1], k=3)
print(ind)
```

KNeighborsClassifier (multi-class)

```python
from sklearn.neighbors import KNeighborsClassifier

X = [[0], [1], [2], [3], [4], [5], [6], [7], [8]]
y = [0, 0, 0, 1, 1, 1, 2, 2, 2]

neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X, y)
print(neigh.predict([[1.1]]))
```

  • sklearn.discriminant_analysis

| Function | Purpose |
| --- | --- |
| discriminant_analysis.LinearDiscriminantAnalysis( ) | Linear discriminant analysis |
| discriminant_analysis.QuadraticDiscriminantAnalysis( ) | Quadratic discriminant analysis |

LDA

```python
from sklearn import datasets
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

iris = datasets.load_iris()
X = iris.data[:-5]
pre_x = iris.data[-5:]
y = iris.target[:-5]
print('first 10 raw samples:', X[:10])
clf = LDA()
clf.fit(X, y)
X_r = clf.transform(X)
pre_y = clf.predict(pre_x)
# dimensionality-reduced samples
print('first 10 transformed samples:', X_r[:10])
# predicted classes for the held-out samples
print('predict value:', pre_y)
```

QDA

```python
from sklearn import datasets
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA
from sklearn.model_selection import train_test_split

iris = datasets.load_iris()

x = iris.data
y = iris.target

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)

clf = QDA()
clf.fit(x_train, y_train)
print(clf.score(x_test, y_test))
```

  • sklearn.gaussian_process

| Function | Purpose |
| --- | --- |
| gaussian_process.GaussianProcessClassifier( ) | Gaussian process classifier |
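
A minimal GaussianProcessClassifier sketch on iris (the RBF kernel choice here is an assumption, not a recommendation):

```python
from sklearn.datasets import load_iris
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

x, y = load_iris(return_X_y=True)

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(1.0), random_state=0).fit(x, y)
print(gpc.score(x, y))           # training accuracy
print(gpc.predict_proba(x[:2]))  # per-class probabilities
```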
  • sklearn.naive_bayes

| Function | Purpose |
| --- | --- |
| naive_bayes.GaussianNB( ) | Gaussian naive Bayes |
| naive_bayes.MultinomialNB( ) | Multinomial naive Bayes |
| naive_bayes.BernoulliNB( ) | Bernoulli naive Bayes |

GaussianNB

```python
from sklearn import datasets
from sklearn.naive_bayes import GaussianNB

iris = datasets.load_iris()
clf = GaussianNB()
clf = clf.fit(iris.data, iris.target)

y_pred = clf.predict(iris.data)
print(y_pred)
```

MultinomialNB

```python
from sklearn import datasets
from sklearn.naive_bayes import MultinomialNB

iris = datasets.load_iris()
clf = MultinomialNB()
clf = clf.fit(iris.data, iris.target)
y_pred = clf.predict(iris.data)
print(y_pred)
```

BernoulliNB

```python
from sklearn import datasets
from sklearn.naive_bayes import BernoulliNB

iris = datasets.load_iris()
clf = BernoulliNB()
clf = clf.fit(iris.data, iris.target)
y_pred = clf.predict(iris.data)
print(y_pred)
```

Regression Models

  • sklearn.tree

| Function | Purpose |
| --- | --- |
| tree.DecisionTreeRegressor( ) | Decision tree regressor |
| tree.ExtraTreeRegressor( ) | Extremely randomized tree regressor |

DecisionTreeRegressor, ExtraTreeRegressor

```python
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, ExtraTreeRegressor
from sklearn.metrics import r2_score

# note: load_boston was removed in scikit-learn 1.2
boston = load_boston()
x = boston.data
y = boston.target

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)

dtr = DecisionTreeRegressor()
dtr.fit(x_train, y_train)

etr = ExtraTreeRegressor()
etr.fit(x_train, y_train)

yetr_pred = etr.predict(x_test)
ydtr_pred = dtr.predict(x_test)

# score() returns R^2, the same value r2_score computes
print(dtr.score(x_test, y_test))
print(r2_score(y_test, ydtr_pred))

print(etr.score(x_test, y_test))
print(r2_score(y_test, yetr_pred))
```

  • sklearn.ensemble

| Function | Purpose |
| --- | --- |
| ensemble.GradientBoostingRegressor( ) | Gradient boosting regressor |
| ensemble.AdaBoostRegressor( ) | AdaBoost regressor |
| ensemble.BaggingRegressor( ) | Bagging regressor |
| ensemble.ExtraTreesRegressor( ) | Extremely randomized trees regressor |
| ensemble.RandomForestRegressor( ) | Random forest regressor |

GradientBoostingRegressor

```python
from sklearn.ensemble import GradientBoostingRegressor as GBR
from sklearn.datasets import make_regression

X, y = make_regression(1000, 2, noise=10)

gbr = GBR()
gbr.fit(X, y)
gbr_preds = gbr.predict(X)
print(gbr.score(X, y))
```

AdaBoostRegressor

```python
from sklearn.ensemble import AdaBoostRegressor
from sklearn.datasets import make_regression

x, y = make_regression(n_features=4, n_informative=2, random_state=0, shuffle=False)
regr = AdaBoostRegressor(random_state=0, n_estimators=100)
regr.fit(x, y)
print(regr.predict([[0, 0, 0, 0]]))
```

BaggingRegressor

```python
from sklearn.ensemble import BaggingRegressor
from sklearn.datasets import make_regression
from sklearn.svm import SVR

x, y = make_regression(n_samples=100, n_features=4, n_informative=2, n_targets=1,
                       random_state=0, shuffle=False)
br = BaggingRegressor(base_estimator=SVR(), n_estimators=10, random_state=0).fit(x, y)
print(br.predict([[0, 0, 0, 0]]))
```

ExtraTreesRegressor

```python
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.ensemble import ExtraTreesRegressor

x, y = load_diabetes(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

etr = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(x_train, y_train)
print(etr.score(x_test, y_test))
```

RandomForestRegressor

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression

x, y = make_regression(n_features=4, n_informative=2, random_state=0, shuffle=False)

rfr = RandomForestRegressor(max_depth=2, random_state=0)
rfr.fit(x, y)
print(rfr.predict([[0, 0, 0, 0]]))
```

  • sklearn.linear_model

| Function | Purpose |
| --- | --- |
| linear_model.LinearRegression( ) | Linear regression |
| linear_model.Ridge( ) | Ridge regression |
| linear_model.Lasso( ) | L1-regularized (lasso) regression |
| linear_model.ElasticNet( ) | Elastic-net regression |
| linear_model.MultiTaskLasso( ) | Multi-task lasso |
| linear_model.MultiTaskElasticNet( ) | Multi-task elastic net |
| linear_model.Lars( ) | Least-angle regression |
| linear_model.OrthogonalMatchingPursuit( ) | Orthogonal matching pursuit |
| linear_model.BayesianRidge( ) | Bayesian ridge regression |
| linear_model.ARDRegression( ) | Bayesian ARD (automatic relevance determination) regression |
| linear_model.SGDRegressor( ) | Stochastic gradient descent regressor |
| linear_model.PassiveAggressiveRegressor( ) | Passive-aggressive (incremental learning) regressor |
| linear_model.HuberRegressor( ) | Huber regression |

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso

# ridge (L2) and lasso (L1) regression on random data
np.random.seed(0)
x = np.random.randn(10, 5)
y = np.random.randn(10)
clf1 = Ridge(alpha=1.0)
clf2 = Lasso()
clf2.fit(x, y)
clf1.fit(x, y)
print(clf1.predict(x))
print(clf2.predict(x))
```

  • sklearn.svm

| Function | Purpose |
| --- | --- |
| svm.SVR( ) | Support vector regression |
| svm.NuSVR( ) | Nu-support vector regression |
| svm.LinearSVR( ) | Linear support vector regression |
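
A minimal sketch of SVR and LinearSVR on synthetic data (the hyperparameter values are illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.svm import SVR, LinearSVR

x, y = make_regression(n_samples=200, n_features=4, noise=5.0, random_state=0)

svr = SVR(kernel='rbf', C=1.0, epsilon=0.2).fit(x, y)
print(svr.score(x, y))

lsvr = LinearSVR(C=1.0, max_iter=10000).fit(x, y)
print(lsvr.score(x, y))
```
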
  • sklearn.neighbors

| Function | Purpose |
| --- | --- |
| neighbors.KNeighborsRegressor( ) | K-nearest-neighbors regression |
| neighbors.RadiusNeighborsRegressor( ) | Radius-based nearest-neighbors regression |
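
A minimal KNeighborsRegressor sketch (the data is illustrative); the prediction averages the targets of the nearest neighbors:

```python
from sklearn.neighbors import KNeighborsRegressor

x = [[0], [1], [2], [3]]
y = [0.0, 0.0, 1.0, 1.0]

neigh = KNeighborsRegressor(n_neighbors=2)
neigh.fit(x, y)
print(neigh.predict([[1.5]]))  # mean of the two nearest targets -> [0.5]
```
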
  • sklearn.kernel_ridge

| Function | Purpose |
| --- | --- |
| kernel_ridge.KernelRidge( ) | Kernel ridge regression |
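
A minimal KernelRidge sketch (the RBF kernel and the alpha/gamma values are illustrative assumptions):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.RandomState(0)
x = 6 * rng.rand(100, 1)
y = np.sin(x).ravel() + 0.1 * rng.randn(100)

krr = KernelRidge(alpha=1.0, kernel='rbf', gamma=0.5).fit(x, y)
print(krr.score(x, y))
```
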
  • sklearn.gaussian_process

| Function | Purpose |
| --- | --- |
| gaussian_process.GaussianProcessRegressor( ) | Gaussian process regression |

GaussianProcessRegressor

```python
from sklearn.datasets import make_friedman2
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel

x, y = make_friedman2(n_samples=500, noise=0, random_state=0)

kernel = DotProduct() + WhiteKernel()
gpr = GaussianProcessRegressor(kernel=kernel, random_state=0).fit(x, y)
print(gpr.score(x, y))
```

  • sklearn.cross_decomposition

| Function | Purpose |
| --- | --- |
| cross_decomposition.PLSRegression( ) | Partial least squares regression |

```python
import pandas as pd
from sklearn import datasets
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# note: load_boston was removed in scikit-learn 1.2
boston = datasets.load_boston()

x = boston.data
y = boston.target

x_df = pd.DataFrame(x, columns=boston.feature_names)
y_df = pd.DataFrame(y)

pls = PLSRegression(n_components=2)

x_train, x_test, y_train, y_test = train_test_split(x_df, y_df, test_size=0.3, random_state=1)

pls.fit(x_train, y_train)
print(pls.predict(x_test))
```

Clustering Models

  • sklearn.cluster

| Function | Purpose |
| --- | --- |
| cluster.DBSCAN( ) | Density-based clustering |
| mixture.GaussianMixture( ) | Gaussian mixture model (lives in sklearn.mixture) |
| cluster.AffinityPropagation( ) | Affinity propagation clustering |
| cluster.AgglomerativeClustering( ) | Agglomerative (hierarchical) clustering |
| cluster.Birch( ) | BIRCH: balanced iterative reducing and clustering using hierarchies |
| cluster.KMeans( ) | K-means clustering |
| cluster.MiniBatchKMeans( ) | Mini-batch K-means clustering |
| cluster.MeanShift( ) | Mean-shift clustering |
| cluster.OPTICS( ) | Ordering points to identify the clustering structure |
| cluster.SpectralClustering( ) | Spectral clustering |
| cluster.SpectralBiclustering( ) | Spectral biclustering |
| cluster.ward_tree( ) | Ward hierarchical clustering tree |
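
A minimal sketch of KMeans and DBSCAN on blob data (the cluster count and eps are illustrative):

```python
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_blobs

x, _ = make_blobs(n_samples=500, centers=4, cluster_std=0.8, random_state=0)

km = KMeans(n_clusters=4, random_state=0).fit(x)
print(km.cluster_centers_.shape)  # (4, 2)
print(km.labels_[:10])            # cluster assignments

db = DBSCAN(eps=0.5, min_samples=5).fit(x)
print(set(db.labels_))            # cluster ids; -1 marks noise points
```
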
  • Common model methods

| Method | Purpose |
| --- | --- |
| xxx.fit( ) | Train the model |
| xxx.get_params( ) | Get the model's parameters |
| xxx.predict( ) | Predict on new inputs |
| xxx.score( ) | Score the classification/regression/clustering model |
| xxx.set_params( ) | Set the model's parameters |

Model Evaluation

  • Classification metrics

| Function | Purpose |
| --- | --- |
| metrics.accuracy_score( ) | Accuracy |
| metrics.average_precision_score( ) | Average precision (AP) |
| metrics.log_loss( ) | Log loss |
| metrics.confusion_matrix( ) | Confusion matrix |
| metrics.classification_report( ) | Report with precision, recall, and F1-score |
| metrics.roc_curve( ) | Receiver operating characteristic (ROC) curve |
| metrics.auc( ) | Area under a curve |
| metrics.roc_auc_score( ) | Area under the ROC curve (AUC) |
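
A sketch of the common classification metrics (the dataset and model here are illustrative):

```python
from sklearn import metrics
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

x, y = load_breast_cancer(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(x_train, y_train)
y_pred = clf.predict(x_test)
y_score = clf.predict_proba(x_test)[:, 1]  # probability of the positive class

print(metrics.accuracy_score(y_test, y_pred))
print(metrics.confusion_matrix(y_test, y_pred))
print(metrics.classification_report(y_test, y_pred))
print(metrics.roc_auc_score(y_test, y_score))
```
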
  • Regression metrics

| Function | Purpose |
| --- | --- |
| metrics.mean_squared_error( ) | Mean squared error |
| metrics.median_absolute_error( ) | Median absolute error |
| metrics.r2_score( ) | Coefficient of determination (R²) |

  • Clustering metrics

| Function | Purpose |
| --- | --- |
| metrics.adjusted_rand_score( ) | Adjusted Rand index |
| metrics.silhouette_score( ) | Silhouette coefficient |
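
A minimal sketch of the clustering metrics (the data and cluster count are illustrative); note that adjusted_rand_score needs ground-truth labels while silhouette_score does not:

```python
from sklearn import metrics
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

x, y_true = make_blobs(n_samples=300, centers=3, random_state=0)
y_pred = KMeans(n_clusters=3, random_state=0).fit_predict(x)

print(metrics.adjusted_rand_score(y_true, y_pred))  # compares against true labels
print(metrics.silhouette_score(x, y_pred))          # internal, label-free measure
```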

Model Optimization

| Function | Purpose |
| --- | --- |
| model_selection.cross_val_score( ) | Cross-validation score |
| model_selection.LeaveOneOut( ) | Leave-one-out cross-validation |
| model_selection.LeavePOut( ) | Leave-P-out cross-validation |
| model_selection.GridSearchCV( ) | Grid search over parameters |
| model_selection.RandomizedSearchCV( ) | Randomized parameter search |
| model_selection.validation_curve( ) | Validation curve |
| model_selection.learning_curve( ) | Learning curve |
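
A short GridSearchCV sketch on iris (the SVC parameter grid is an illustrative assumption):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

x, y = load_iris(return_X_y=True)

param_grid = {'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']}
search = GridSearchCV(SVC(), param_grid, cv=5).fit(x, y)
print(search.best_params_, search.best_score_)

# the tuned model can then be re-checked with plain cross-validation
print(cross_val_score(search.best_estimator_, x, y, cv=5).mean())
```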

Closing Remarks

Each of the classification/regression/clustering algorithms touched on in this article will be explained in detail on my public account [人类之奴]; everyone is welcome to learn and discuss together.

This article took a full week to write; I hope it helps clear the way on your learning journey!

More and even better articles are on the way!

