I. Data Processing
The data come from https://github.com/wuyimengmaths/data. We do some simple data processing and then fit a decision tree model for prediction.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['font.size'] = 16
from sklearn.model_selection import train_test_split, cross_val_score, cross_validate
from sklearn.tree import DecisionTreeClassifier
df = pd.read_csv(r'C:\Users\PC\cpsc330\lectures\data/students.csv') ## path to your downloaded copy of the file
##df.head() — preview the first few rows
df.columns = ["meat", "grade", "cilantro"] ## rename the columns
##df.describe() — summary statistics for the numeric columns
scatter = plt.scatter(df["meat"], df["grade"], c=df["cilantro"]=="Yes", cmap=plt.cm.coolwarm);
plt.xlabel("Meat consumption (% days)");
plt.ylabel("Expected grade (%)");
plt.legend(scatter.legend_elements()[0], ["No", "Yes"]);
##df["cilantro"].value_counts() — counts of cilantro eaters vs. non-eaters
X = df[["meat", "grade"]] ## features; X.head() to inspect
y = df["cilantro"] ## labels; y.head() to inspect
modelfirst = DecisionTreeClassifier(max_depth=None) ## unrestricted depth
modelfirst.fit(X, y)
scorefirst = modelfirst.score(X, y) ## accuracy on the training data itself
predictionfirst = modelfirst.predict([[50, 50]]) ## predict for meat=50, grade=50
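The fit/score/predict steps above can be reproduced without `students.csv`; the tiny DataFrame below uses invented values purely for illustration:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for the students data: two numeric features and a Yes/No label.
toy = pd.DataFrame({
    "meat":     [10, 80, 35, 60, 90, 20],
    "grade":    [85, 70, 90, 60, 55, 95],
    "cilantro": ["Yes", "No", "Yes", "No", "No", "Yes"],
})
X_toy = toy[["meat", "grade"]]
y_toy = toy["cilantro"]

model = DecisionTreeClassifier(max_depth=None).fit(X_toy, y_toy)

# An unrestricted tree can memorize a small, consistent dataset: score is 1.0.
train_score = model.score(X_toy, y_toy)

# Predicting with a DataFrame keeps the feature names consistent with fit time,
# which avoids the feature-name warning newer scikit-learn versions emit for
# plain arrays.
pred = model.predict(pd.DataFrame({"meat": [50], "grade": [50]}))
print(train_score, pred[0])
```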
II. Some Ways to Improve
1. Deduplicate the data
#sort_values — sort the rows (here by the cilantro label)
#drop_duplicates — drop rows whose meat and grade values are both identical
#reset_index — rebuild a clean 0..n-1 index, since the row index is often scrambled after such operations
df_nodup = df.sort_values(by="cilantro").drop_duplicates(subset=df.columns[:-1]).reset_index(drop=True)
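The deduplication chain can be checked on a tiny hand-made frame (the values below are invented for illustration):

```python
import pandas as pd

# Two rows share the same (meat, grade) but disagree on cilantro; sorting by
# cilantro first makes the kept row deterministic ("No" sorts before "Yes",
# and drop_duplicates keeps the first occurrence).
demo = pd.DataFrame({
    "meat":     [50, 50, 30],
    "grade":    [80, 80, 90],
    "cilantro": ["Yes", "No", "Yes"],
})
demo_nodup = (
    demo.sort_values(by="cilantro")
        .drop_duplicates(subset=demo.columns[:-1])  # dedupe on meat + grade only
        .reset_index(drop=True)                     # clean 0..n-1 index
)
print(demo_nodup)
```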
2. How the maximum-depth parameter affects the score
X_nodup = df_nodup[["meat", "grade"]] ## features after deduplication
y_nodup = df_nodup["cilantro"] ## labels after deduplication
max_depths = np.arange(1, 18)
scores = []
## sweep the max_depth parameter of the decision tree
for max_depth in max_depths:
    score = DecisionTreeClassifier(max_depth=max_depth).fit(X_nodup, y_nodup).score(X_nodup, y_nodup)
    scores.append(score)
plt.plot(max_depths, scores);
plt.xlabel("max depth");
plt.ylabel("accuracy score");
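The same sweep can be run on synthetic data without the CSV; the data below are random and purely illustrative. Training accuracy never decreases as depth grows, which foreshadows the memorization discussed in the next sections:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X_syn = rng.rand(100, 2)                  # 100 distinct 2-D points
y_syn = rng.choice(["Yes", "No"], 100)    # random labels: nothing to generalize

# Training accuracy for each depth; deeper trees refine the same greedy splits,
# so the training score climbs toward 1.0 as depth increases.
scores_syn = [
    DecisionTreeClassifier(max_depth=d, random_state=0)
    .fit(X_syn, y_syn)
    .score(X_syn, y_syn)
    for d in range(1, 18)
]
print(scores_syn[0], scores_syn[-1])
```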
3. Split the dataset with train_test_split
df_train, df_test = train_test_split(df_nodup, random_state=123)
As the figure shows, the deduplicated dataset is randomly split into a training set and a test set.
[Figure: random split of the data into a training set and a test set]
Then split each part into features and labels, and train and evaluate:
X_train = df_train[["meat", "grade"]] ## training-set features
y_train = df_train["cilantro"] ## training-set labels (also called the target)
X_test = df_test[["meat", "grade"]] ## test-set features
y_test = df_test["cilantro"] ## test-set labels
model = DecisionTreeClassifier() ## build the decision tree model
model.fit(X_train, y_train) ## train the model on the training set
score_train = model.score(X_train, y_train) ## 1.0
score_test = model.score(X_test, y_test) ## 0.5
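The split-then-evaluate pattern can be sketched end-to-end on synthetic data (all names and values below are illustrative, not from the students dataset):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(123)
df_demo = pd.DataFrame({
    "meat":  rng.uniform(0, 100, 40),
    "grade": rng.uniform(50, 100, 40),
})
# A learnable rule, so the model has something real to pick up.
df_demo["cilantro"] = np.where(df_demo["meat"] > 50, "Yes", "No")

# Default split is 75% train / 25% test; random_state makes it reproducible.
train, test = train_test_split(df_demo, random_state=123)

model = DecisionTreeClassifier(random_state=0).fit(
    train[["meat", "grade"]], train["cilantro"]
)
test_score = model.score(test[["meat", "grade"]], test["cilantro"])
print(len(train), len(test), test_score)
```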
4. Overfitting
When the score on the test data is lower than the score on the training data, the model is overfitting. One way to address this is to reduce the decision tree's maximum depth.
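A quick way to see why depth matters: fit an unrestricted tree and a depth-limited tree on pure-noise labels (synthetic data, for illustration only):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X_noise = np.arange(100).reshape(-1, 1)                 # 100 distinct feature values
y_noise = np.random.RandomState(0).choice([0, 1], 100)  # labels are pure noise

deep = DecisionTreeClassifier(max_depth=None).fit(X_noise, y_noise)
shallow = DecisionTreeClassifier(max_depth=1).fit(X_noise, y_noise)

# The unrestricted tree memorizes the noise perfectly (training score 1.0),
# yet nothing it learned can transfer to new data. The depth-1 tree cannot
# memorize, so its training score stays near the base rate.
deep_score = deep.score(X_noise, y_noise)
shallow_score = shallow.score(X_noise, y_noise)
print(deep_score, shallow_score)
```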
5. Evaluation with cross-validation
Code for evaluating the model with cross-validation:
model_treeone = DecisionTreeClassifier(max_depth=1)
cv_score = cross_val_score(model_treeone, X_train, y_train, cv=4) ## 4-fold cross-validation
cv10_score = cross_val_score(model_treeone, X_train, y_train, cv=10) ## 10-fold cross-validation
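cross_val_score returns one held-out accuracy per fold, usually summarized by the mean. A self-contained sketch (synthetic data; names are illustrative):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X_cv = rng.rand(40, 2)
y_cv = np.where(X_cv[:, 0] > 0.5, "Yes", "No")  # a learnable rule

stump = DecisionTreeClassifier(max_depth=1, random_state=0)

# cv=4 -> four folds: the model is fit four times, each time scored on the
# quarter of the data it did not see during that fit.
fold_scores = cross_val_score(stump, X_cv, y_cv, cv=4)
print(len(fold_scores), fold_scores.mean())
```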