Study Notes on Introduction to Machine Learning with Python (2): Classifying the Iris Dataset with KNN
1 Standard imports
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import mglearn
2 Loading the dataset
from sklearn.datasets import load_iris
iris_dataset = load_iris()
# load_iris returns a Bunch object, which is similar to a dictionary
print("keys of iris_dataset: \n{}".format(iris_dataset.keys()))
keys of iris_dataset:
dict_keys(['data', 'target', 'target_names', 'DESCR', 'feature_names', 'filename'])
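Since a Bunch works like a dictionary that also allows attribute access, the values can be read either way; a quick check using the iris_dataset loaded above:
# dict-style and attribute-style access return the same underlying object
print(iris_dataset['data'] is iris_dataset.data)  # True
print(iris_dataset.DESCR[:100])  # beginning of the dataset description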
print('target names:{}'.format(iris_dataset['target_names']))# the flower species to be predicted
print('feature names:{}'.format(iris_dataset['feature_names']))# the feature names
target names:['setosa' 'versicolor' 'virginica']
feature names:['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
print("type of data :{}".format(type(iris_dataset['data'])))# 數(shù)據(jù)的類型
print("shape of data :{}".format(iris_dataset['data'].shape))# 數(shù)據(jù)形狀,150條記錄榕酒,每條記錄4個(gè)特征值岔帽,(樣本數(shù),特征數(shù))
type of data :<class 'numpy.ndarray'>
shape of data :(150, 4)
print("first five rows of data:\n{}".format(iris_dataset['data'][:5]))
first five rows of data:
[[5.1 3.5 1.4 0.2]
[4.9 3. 1.4 0.2]
[4.7 3.2 1.3 0.2]
[4.6 3.1 1.5 0.2]
[5. 3.6 1.4 0.2]]
print("type of target:{}".format(type(iris_dataset['target'])))
print("shape of target:{}".format(iris_dataset['target'].shape))
type of target:<class 'numpy.ndarray'>
shape of target:(150,)
print("target:\n{}".format(iris_dataset['target']))
target:
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2]
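The integers 0, 1 and 2 encode the species in the same order as target_names; a small sketch (NumPy fancy indexing) mapping them back to names:
# map the integer labels back to species names
species = iris_dataset['target_names'][iris_dataset['target']]
print(species[:5])  # the first five samples are all 'setosa'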
3 Splitting the data into training and test sets
# train_test_split shuffles the dataset and then splits it: 75% of the samples become the training set and the remaining 25% the test set
from sklearn.model_selection import train_test_split
# the random_state parameter fixes the seed of the random number generator, so the split is reproducible
X_train, X_test, y_train, y_test = train_test_split(iris_dataset['data'],iris_dataset['target'],random_state=0)
# the training set has 112 samples, 75% of 150
print("X_train shape :{}".format(X_train.shape))
print("y_train shape :{}".format(y_train.shape))
X_train shape :(112, 4)
y_train shape :(112,)
# the test set has 38 samples, the remaining 25% of 150
print("X_test shape :{}".format(X_test.shape))
print("y_test shape :{}".format(y_test.shape))
X_test shape :(38, 4)
y_test shape :(38,)
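Because random_state=0 fixes the shuffle seed, repeating the call produces exactly the same partition; a small sketch verifying this with the names defined above:
# splitting again with the same random_state yields an identical split
X_train2, X_test2, y_train2, y_test2 = train_test_split(
    iris_dataset['data'], iris_dataset['target'], random_state=0)
print(np.array_equal(X_train, X_train2))  # True
print(np.array_equal(y_test, y_test2))    # True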
4 Building a DataFrame and visualizing the data
Inspect the data before modeling:
- see whether the task could be solved easily without machine learning
- check whether the information you need is actually contained in the data
- inspecting the data is also a good way to find outliers and other unusual values
- one of the best ways to inspect data is to visualize it
- a scatter plot can visualize two features at a time; with more features, a scatter matrix (pair plot) shows all pairwise combinations
# create a DataFrame from the data in X_train
# label the columns using the strings in iris_dataset.feature_names
iris_dataframe = pd.DataFrame(X_train,columns=iris_dataset.feature_names)
# pd.plotting.scatter_matrix draws a scatter matrix; the plot shows that the three species can be separated fairly well using the four features
grr = pd.plotting.scatter_matrix(
    iris_dataframe,        # the DataFrame to plot
    c=y_train,             # point colors follow the class labels; the actual colors come from cmap below (forwarded by scatter_matrix to scatter)
    figsize=(15,15),       # figure size (width, height)
    marker='o',            # marker style: '.' is a small dot, 'o' a larger dot, '^' an upward triangle, 'v' a downward triangle
    hist_kwds={'bins':20}, # keyword arguments passed to the hist function; bins sets the number of bars in the diagonal histograms
    s=60,                  # size of each point (forwarded by scatter_matrix to scatter)
    alpha=.8,              # point transparency
    cmap=mglearn.cm3       # colormap used to turn the labels in c into colors (forwarded by scatter_matrix to scatter)
)
[Figure: scatter matrix of the four iris features from X_train, points colored by species]
5 Classification with the k-NN algorithm
The k-NN algorithm:
- k-nearest neighbors classifier
- training consists only of storing the training set
- to predict a new data point, the algorithm finds the training point closest to it and assigns that point's label to the new point
- k means that instead of the single nearest neighbor, any fixed number k of neighbors (e.g. 3 or 5) can be considered, and the prediction is the class most common among them (see the sketch below)
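To make the idea concrete, here is a minimal from-scratch sketch (Euclidean distance plus majority vote); knn_predict is a hypothetical helper for illustration only, not the scikit-learn implementation used below:
def knn_predict(X_train, y_train, x_new, k=3):
    # distance from x_new to every training point
    dists = np.sqrt(((X_train - x_new) ** 2).sum(axis=1))
    # indices of the k closest training points
    nearest = np.argsort(dists)[:k]
    # predict the most common label among those k neighbors
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]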
5.1 Training
from sklearn.neighbors import KNeighborsClassifier
# n_neighbors=1 sets k to 1, so only the single nearest neighbor is considered
knn = KNeighborsClassifier(n_neighbors=1)
# call the knn object's fit method to train it, passing X_train (training data) and y_train (training labels)
knn.fit(X_train,y_train)
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
metric_params=None, n_jobs=None, n_neighbors=1, p=2,
weights='uniform')
5.2 Prediction
X_new = np.array([[5,2.9,1,0.2]])# must be a 2D array to be passed to the predict method
print("X_new.shape:{}".format(X_new.shape))
X_new.shape:(1, 4)
prediction = knn.predict(X_new)
print("Prediction:{}".format(prediction))
print("Predicted target name:{}".format(iris_dataset["target_names"][prediction]))
Prediction:[0]
Predicted target name:['setosa']
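With k=1 the prediction is simply the label of the single closest training sample; the classifier's kneighbors method shows which sample that was (a quick check reusing the objects above):
# kneighbors returns the distances to and the indices of the nearest training samples
distances, indices = knn.kneighbors(X_new)
print("closest training sample:", X_train[indices[0][0]])
print("its label:", iris_dataset['target_names'][y_train[indices[0][0]]])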
6 Evaluating the model
6.1 Method 1: compare the predictions with the true labels
y_pred = knn.predict(X_test)
print("Test set predictions\n{}".format(y_pred))
Test set predictions
[2 1 0 2 0 2 0 1 1 1 2 1 1 1 1 0 1 1 0 0 2 1 0 0 2 0 0 1 1 0 2 1 0 2 2 1 0
2]
print("Test set score:{:.2f}".format(np.mean(y_pred==y_test)))
Test set score:0.97
6.2 Method 2: call knn.score directly
print("Test set score:{:.2f}".format(knn.score(X_test,y_test)))
Test set score:0.97
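The score depends on the choice of n_neighbors; a small sketch comparing a few values of k on the same split (the particular values are an arbitrary illustration):
# compare test accuracy for several values of k
for k in [1, 3, 5, 10]:
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print("k={}: test score {:.2f}".format(k, clf.score(X_test, y_test)))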
7 Summary
fit, predict and score are the most commonly used methods in the interface of scikit-learn supervised learning models.
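Put together, the whole workflow in this note needs only these calls; a minimal end-to-end recap with the same data and parameters as above:
# load, split, fit, predict and score in a few lines
X_train, X_test, y_train, y_test = train_test_split(
    iris_dataset['data'], iris_dataset['target'], random_state=0)
knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
print("Prediction:", knn.predict(np.array([[5, 2.9, 1, 0.2]])))
print("Test set score: {:.2f}".format(knn.score(X_test, y_test)))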