Chapter 2: Data Acquisition
Means of data acquisition:
- Data warehouses
- Monitoring and web scraping
- Form input, event tracking, logs
- Computation
Data warehouses
All business data is aggregated and processed into a data warehouse (DW):
- a record of all facts
- curated subsets of dimensions and data (data marts, DM)
Database vs. data warehouse
- A database stores data by business function; a warehouse stores data by subject
  (subject: a complete and consistent description of the analysis object at a higher level of abstraction)
- Databases serve applications (OLTP); warehouses serve analysis (OLAP)
- Databases are normalized; warehouses may be redundant, change more over time, and hold far larger volumes of data
Monitoring and web scraping
Common tools:
urllib, requests, scrapy
PhantomJS, BeautifulSoup, XPath
Form input, event tracking, logs
- Information filled in by users
- Tracking points in an app or web page (recording points for specific flows)
- Operation logs
Computation
Derive new data by computing on existing data
Data-learning websites
- Competition sites (Kaggle, Tianchi)
- Dataset sites (ImageNet / Open Images)
- Official statistics (statistics bureaus, government agencies, company financial reports, etc.)
Chapter 3: Exploratory Data Analysis (Single Factor & Comparison) and Visualization
Theoretical groundwork
- Central tendency: mean, median and quantiles, mode
- Dispersion: standard deviation, variance
- Distribution: skewness and kurtosis, the normal distribution and the three major sampling distributions
  Skewness coefficient: measures how the data deviate from the mean (asymmetry)
  Kurtosis coefficient: measures how concentrated (peaked) the distribution is
- Sampling theory: sampling error, sampling precision
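As a quick check of the two coefficients: a standard normal sample should have skewness and (excess) kurtosis both near zero. A minimal sketch with scipy.stats:

```python
import scipy.stats as ss

# large standard normal sample; the fixed seed makes the run reproducible
sample = ss.norm.rvs(size=100000, random_state=0)
print(ss.skew(sample))      # near 0: the distribution is symmetric
print(ss.kurtosis(sample))  # near 0: scipy's Fisher definition subtracts 3
```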
Data measurement levels
- Nominal (categorical): classification by discrete, unordered attributes, e.g. ethnicity
- Ordinal: values can be ranked, but differences cannot be measured, e.g. income as low/medium/high
- Interval: values can be ranked and differences measured, but there is no absolute zero, e.g. temperature
- Ratio: values can be ranked, differences measured, and there is an absolute zero, e.g. height
Single-attribute analysis
- Outlier analysis: discrete outliers, continuous outliers, common-sense outliers
- Comparative analysis: absolute vs. relative numbers; comparison across time, space, and dimensions
- Structural analysis: the distribution and patterns of each component part
- Distribution analysis: explicit analysis of the frequency distribution of the data
Example: analyzing HR.csv
#analysis of satisfaction_level
import numpy as np
import pandas as pd
df = pd.read_csv("./data/HR.csv")
#extract the column
sl_s = df["satisfaction_level"]
#first check for missing values (NaN)
print(sl_s[sl_s.isnull()])
#inspect the full rows that contain NaN
print(df[df['satisfaction_level'].isnull()])
#drop the anomalous values
sl_s = sl_s.dropna()
sl_s.mean() #mean
sl_s.std() #standard deviation
sl_s.quantile(q=0.25) #lower quartile
sl_s.skew() #skewness
sl_s.kurt() #kurtosis
#discretized distribution
print(np.histogram(sl_s.values,bins=np.arange(0.0,1.1,0.1)))
#output: (array([ 195, 1214, 532, 974, 1668, 2146, 1973, 2074, 2220, 2004]), array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ]))
#analysis of number_project
#static structural analysis
import pandas as pd
df = pd.read_csv("./data/HR.csv")
#extract the column
np_s = df["number_project"]
print(np_s.describe())
print("skewness\t",np_s.skew(),"kurtosis\t",np_s.kurt())
#count occurrences of each value
print(np_s.value_counts())
#composition as proportions
print(np_s.value_counts(normalize=True))
#sorted by index
print(np_s.value_counts(normalize=True).sort_index())
#simple comparative analysis
import pandas as pd
df = pd.read_csv("./data/HR.csv")
#first remove anomalous values
#drop missing values: axis=0 means rows, 1 means columns; how="any" drops a row if any field is null
df = df.dropna(axis=0,how="any")
#combine the conditions with & rather than chained indexing
df = df[(df["last_evaluation"]<=1) & (df["salary"]!="nme") & (df["department"]!="sale")]
#simple comparison grouped by department
print(df.groupby("department").mean())
#pull out a single column for analysis
print(df.loc[:,["last_evaluation","department"]].groupby("department").mean())
#user-defined comparison function: compute the range (max - min)
print(df.loc[:,["average_monthly_hours","department"]].groupby("department")["average_monthly_hours"].apply(lambda x:x.max()-x.min()))
Visualization
Visualization tools: matplotlib, seaborn, plotly
#basic plotting
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
df = pd.read_csv("./data/HR.csv")
print(df["salary"].value_counts())
#change the style with seaborn
sns.set_style(style="darkgrid")
sns.set_palette("summer")
#plot directly with seaborn; hue adds a second grouping level
sns.countplot(x="salary",hue="department",data=df)
#or build the bar chart manually with matplotlib
plt.title("SALARY")
plt.xlabel("salary")
plt.ylabel("Number")
plt.xticks(np.arange(len(df["salary"].value_counts())),df["salary"].value_counts().index)
plt.axis([-0.5,4,0,8000]) #set the display range
plt.bar(np.arange(len(df["salary"].value_counts())),df["salary"].value_counts(),width=0.5)
#annotate each bar with its value
for x,y in zip(np.arange(len(df["salary"].value_counts())),df["salary"].value_counts()):
    plt.text(x,y,y,ha="center",va="bottom")
plt.show()
#histograms
f = plt.figure()
f.add_subplot(1,3,1)
sns.distplot(df["satisfaction_level"],bins=10)
f.add_subplot(1,3,2)
sns.distplot(df["last_evaluation"],bins=10)
f.add_subplot(1,3,3)
sns.distplot(df["average_monthly_hours"],bins=10)
plt.show()
#point (line) plot
# sub_df = df.groupby("time_spend_company").mean()
# sns.pointplot(sub_df.index,sub_df["left"])
sns.pointplot(x="time_spend_company",y="left",data=df)
#pie chart
lbs = df["department"].value_counts().index
explodes = [0.1 if i == "sales" else 0 for i in lbs]
plt.pie(df["department"].value_counts(normalize=True),labels=lbs,explode=explodes)
plt.show()
Chapter 4: Exploratory Data Analysis (Multi-Factor & Composite Analysis)
Theoretical groundwork
- Hypothesis tests and analysis of variance
- Correlation coefficients: Pearson, Spearman
- Regression: linear regression
- Principal component analysis (PCA) and singular value decomposition (SVD)
Steps of principal component analysis (PCA)
- Compute the covariance matrix of the features
- Compute the eigenvalues and eigenvectors of the covariance matrix
- Sort the eigenvalues in descending order and keep the largest k
- Project the sample points onto the selected eigenvectors
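The four steps above can be sketched directly in NumPy (a minimal illustration; sklearn's PCA, used later in this chapter, computes the same result via SVD, up to sign):

```python
import numpy as np

def pca(X, k=1):
    """PCA from scratch, following the four steps above."""
    Xc = X - X.mean(axis=0)             # center the data first
    cov = np.cov(Xc, rowvar=False)      # 1) covariance matrix of the features
    vals, vecs = np.linalg.eigh(cov)    # 2) eigenvalues/eigenvectors (eigh: symmetric matrix)
    order = np.argsort(vals)[::-1][:k]  # 3) sort descending, keep the k largest
    return Xc @ vecs[:, order]          # 4) project the samples onto those eigenvectors

# the same ten 2-D points used in the sklearn PCA example in this chapter
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0],
              [2.3, 2.7], [2.0, 1.6], [1.0, 1.1], [1.5, 1.6], [1.1, 0.9]])
print(pca(X, k=1))  # one column: the data projected onto the first principal axis
```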
Coding it up
import numpy as np
import scipy.stats as ss
#generate a standard normal sample
norm_dist = ss.norm.rvs(size=20)
print(norm_dist)
#test for normality
print(ss.normaltest(norm_dist))
#chi-squared test; the result is (test statistic, p-value, degrees of freedom, expected frequencies)
print(ss.chi2_contingency([[15,95],[85,5]]))
#independent two-sample t-test; the result is (test statistic, p-value)
print(ss.ttest_ind(ss.norm.rvs(size=10),ss.norm.rvs(size=20)))
#one-way ANOVA (F-test)
print(ss.f_oneway([49,50,39,40,43],[28,32,30,26,34],[38,40,45,42,48]))
#a Q-Q plot compares an observed distribution against a known one: the x-axis holds the
#theoretical (e.g. normal) quantiles, the y-axis the observed values; if the scatter falls
#on the diagonal, the observed data follow that distribution
from statsmodels.graphics.api import qqplot
from matplotlib import pyplot as plt
qqplot(ss.norm.rvs(size=100)) #by default compares against the normal distribution
plt.show()
#correlation coefficients
import pandas as pd
s1 = pd.Series([0.1,0.2,1.1,2.4,1.3,0.3,0.5])
s2 = pd.Series([0.5,0.4,1.2,2.5,1.1,0.7,0.1])
#compute the correlation directly (Pearson by default)
print(s1.corr(s2))
#or specify the method
print(s1.corr(s2,method="spearman"))
#a DataFrame computes correlations column-wise, so transpose the stacked series first
df = pd.DataFrame(np.array([s1,s2]).T)
print(df.corr())
print(df.corr(method="spearman"))
#regression
x = np.arange(10).astype(float).reshape([10,1])
y = x*3 + 4 + np.random.random((10,1))
from sklearn.linear_model import LinearRegression
reg = LinearRegression() #build the linear regression
res = reg.fit(x,y) #fit
y_pred = reg.predict(x)
print("predictions:\n",y_pred,"\ncoefficients:",res.coef_,"intercept:",res.intercept_)
#principal component analysis (PCA)
data = np.array([np.array([2.5,0.5,2.2,1.9,3.1,2.3,2,1,1.5,1.1]),np.array([2.4,0.7,2.9,2.2,3,2.7,1.6,1.1,1.6,0.9])]).T
print(data)
#note: sklearn's PCA is implemented via singular value decomposition
from sklearn.decomposition import PCA
lower_dim = PCA(n_components=1)
lower_dim.fit(data)
print("explained variance ratio:",lower_dim.explained_variance_ratio_,"\ntransformed:",lower_dim.fit_transform(data))
Composite analysis
- Cross analysis
- Factor analysis
- Grouping and drill-down
- Clustering, correlation, regression (covered later)
Cross analysis: besides analyzing single attributes vertically or horizontally, cross analysis examines the relationships between attributes
#Example: is there a significant difference in attrition rate between departments?
#Use independent t-tests: compute the t statistic and p-value for every pair of
#departments to compare their attrition distributions
import pandas as pd
import numpy as np
import scipy.stats as ss
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv("./data/HR.csv")
dp_indices = df.groupby(by="department").indices #group by department; indices gives row positions
sales_values = df["left"].iloc[dp_indices["sales"]].values
technical_values = df["left"].iloc[dp_indices["technical"]].values
print(ss.ttest_ind(sales_values,technical_values)) #prints the t statistic and p-value
#pairwise p-values as a heat map
dp_keys = list(dp_indices.keys()) #the group keys
dp_t_mat = np.zeros([len(dp_keys),len(dp_keys)]) #initialize the matrix
for i in range(len(dp_keys)):
    for j in range(len(dp_keys)):
        p_value = ss.ttest_ind(df["left"].iloc[dp_indices[dp_keys[i]]].values,\
                               df["left"].iloc[dp_indices[dp_keys[j]]].values)[1]
        if p_value<0.05:
            dp_t_mat[i][j]=-1
        else:
            dp_t_mat[i][j] = p_value
print(dp_t_mat)
sns.heatmap(dp_t_mat,xticklabels=dp_keys,yticklabels=dp_keys)
plt.show()
#pivot table
piv_tb = pd.pivot_table(df,values="left",index=["promotion_last_5years","salary"],columns=["Work_accident"],aggfunc=np.mean)
sns.heatmap(piv_tb,vmin=0,vmax=1,cmap=sns.color_palette("Reds"))
plt.show()
Grouping and drill-down
Drill-down changes the dimensional level of the data, i.e. the granularity of the analysis; it can go downward (more detail) or upward (more aggregation).
Continuous attributes must be discretized before grouping.
Grouping continuous values:
- Split points (first difference), knee points (second difference)
- Clustering
- Impurity (Gini coefficient)
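A minimal sketch of the split-point / knee-point idea using first and second differences (np.diff); the sorted values below are made up for illustration:

```python
import numpy as np

values = np.array([1, 2, 3, 10, 11, 12, 30, 31, 32], dtype=float)  # hypothetical sorted values

d1 = np.diff(values)       # first difference: large gaps suggest split points
d2 = np.diff(values, n=2)  # second difference: large changes suggest knee points
print(d1)  # the gaps of 7 and 18 between the three clusters stand out
print(d2)
# place splits after the positions with the largest first differences
split_after = sorted(np.argsort(d1)[::-1][:2])
print(split_after)  # [2, 5]: boundaries between [1..3], [10..12], [30..32]
```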
#Example: grouping analysis
import pandas as pd
import scipy.stats as ss
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context(font_scale=1.5)
df = pd.read_csv("./data/HR.csv")
#discrete values:
#set hue to drill down by department
# sns.barplot(x="salary",y="left",hue="department",data=df)
# plt.show()
#continuous values
sl_s = df["satisfaction_level"]
sns.barplot(x=list(range(len(sl_s))),y=sl_s.sort_values())
plt.show()
Correlation analysis
#Example: correlation analysis
import pandas as pd
import numpy as np
import scipy.stats as ss
import matplotlib.pyplot as plt
import seaborn as sns
import math
sns.set_context(font_scale=1.5)
df = pd.read_csv("./data/HR.csv")
#compute the correlation matrix and draw it as a heat map (non-numeric columns are dropped automatically)
sns.heatmap(df.corr(),vmax=1,vmin=-1,cmap=sns.color_palette("RdBu",n_colors=128))
plt.show()
#for discrete attributes:
#Gini coefficient
def getGini(a1, a2):
    assert (len(a1) == len(a2))
    d = dict()
    for i in list(range(len(a1))):
        d[a1[i]] = d.get(a1[i], []) + [a2[i]]
    return 1 - sum([getProbSS(d[k]) * len(d[k]) / float(len(a1)) for k in d])
#sum of squared probabilities
def getProbSS(s):
    if not isinstance(s, pd.core.series.Series):
        s = pd.Series(s)
    prt_ary = np.array(s.groupby(s).count().values / float(len(s)))
    return sum(prt_ary ** 2)
#entropy
def getEntropy(s):
    if not isinstance(s, pd.core.series.Series):
        s = pd.Series(s)
    prt_ary = np.array(s.groupby(s).count().values / float(len(s)))
    return -(np.log2(prt_ary) * prt_ary).sum()
#conditional entropy
def getCondEntropy(a1, a2):
    assert (len(a1) == len(a2))
    d = dict()
    for i in list(range(len(a1))):
        d[a1[i]] = d.get(a1[i], []) + [a2[i]]
    return sum([getEntropy(d[k]) * len(d[k]) / float(len(a1)) for k in d])
#entropy (information) gain
def getEntropyGain(a1, a2):
    return getEntropy(a2) - getCondEntropy(a1, a2)
#entropy gain ratio
def getEntropyGainRatio(a1, a2):
    return getEntropyGain(a1, a2) / getEntropy(a2)
#correlation between discrete attributes (entropy gain normalized by both entropies)
def getDiscreteRelation(a1, a2):
    return getEntropyGain(a1, a2) / math.sqrt(getEntropy(a1) * getEntropy(a2))
#measuring the relation between two discrete series
s1 = pd.Series(["X1", "X1", "X2", "X2", "X2", "X2"])
s2 = pd.Series(["Y1", "Y1", "Y1", "Y2", "Y2", "Y2"])
print(getEntropy(s1))
print(getEntropy(s2))
print(getCondEntropy(s1, s2))
print(getCondEntropy(s2, s1))
print(getEntropyGain(s1, s2))
print(getEntropyGain(s2, s1))
print(getEntropyGainRatio(s1, s2))
print(getEntropyGainRatio(s2, s1))
print(getDiscreteRelation(s1, s2))
print(getDiscreteRelation(s2, s1))
Factor analysis
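Factor analysis reduces many observed variables to a few latent factors. A minimal sketch using sklearn's FactorAnalysis; the iris dataset and the two-factor choice are illustrative assumptions:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import FactorAnalysis

X = load_iris().data                 # 150 samples, 4 observed features
fa = FactorAnalysis(n_components=2)  # assume two latent factors
scores = fa.fit_transform(X)         # factor scores, one row per sample
print(scores.shape)                  # (150, 2)
print(fa.components_)                # factor loadings, shape (2, 4)
```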
Chapter 5: Preprocessing Theory
Data cleaning
Sampling
- Samples must be representative
- Class proportions should be balanced; handle imbalance when it occurs
- Consider using the full data when feasible
Handling outliers (and missing values)
- Identify outliers and duplicates
  pandas: isnull() / duplicated()
- Drop them directly (including duplicates)
  pandas: drop() / dropna() / drop_duplicates()
- Treat the anomaly as a new category, replacing the original value
  pandas: fillna()
- Replace with a central value
  pandas: fillna()
- Replace with a boundary value
  pandas: fillna()
- Interpolate
  pandas: interpolate() (Series)
df = pd.DataFrame({"A": ["a0", "a1", "a1", "a2", "a3", "a4"], "B": ["b0", "b1", "b2", "b2", "b3", None],
                   "C": [1, 2, None, 3, 4, 5], "D": [0.1, 10.2, 11.4, 8.9, 9.1, 12], "E": [10, 19, 32, 25, 8, None],
                   "F": ["f0", "f1", "g2", "f3", "f4", "f5"]})
df.isnull() #flag missing values
df.dropna(subset=["B","C"]) #drop rows with missing values in the given columns
df.duplicated(["A"],keep="first") #flag duplicates; keep controls which copy is kept
df.drop_duplicates(["A","B"],keep="first",inplace=False) #rows where A and B are both duplicated
df["B"].fillna("b*") #replace missing values with a new label
df["E"].fillna(df["E"].mean()) #replace with the mean
df["E"].interpolate(method="spline",order=3) #spline interpolation
pd.Series([1, None, 4, 10, 8]).interpolate()
#quartile (IQR) filtering, conditions combined with &
df[(df["D"] < df["D"].quantile(0.75) + 1.5 * (df["D"].quantile(0.75) - df["D"].quantile(0.25))) & (df["D"] > df["D"].quantile(0.25) - 1.5 * (df["D"].quantile(0.75) - df["D"].quantile(0.25)))]
df[[True if item.startswith("f") else False for item in list(df["F"].values)]]
Feature preprocessing
Labeling (choosing the label/target)
- Feature selection
- Feature transformation
- Feature dimensionality reduction
- Feature derivation
1. Feature selection: remove features that are irrelevant to the label or redundant
Approaches: filter, wrapper, embedded
#Example: feature selection
import pandas as pd
import numpy as np
import scipy.stats as ss
from sklearn.feature_selection import SelectKBest,RFE,SelectFromModel
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
def main():
    df=pd.DataFrame({"A":ss.norm.rvs(size=10),"B":ss.norm.rvs(size=10),\
                     "C":ss.norm.rvs(size=10),"D":np.random.randint(low=0,high=2,size=10)})
    X=df.loc[:,["A","B","C"]] #features
    Y=df.loc[:,"D"] #label
    print("X",X)
    print("Y",Y)
    #filter
    skb=SelectKBest(k=2)
    skb.fit(X.values,Y.values)
    print(skb.transform(X.values))
    #wrapper
    rfe=RFE(estimator=SVR(kernel="linear"),n_features_to_select=2,step=1)
    print(rfe.fit_transform(X,Y))
    #embedded
    sfm=SelectFromModel(estimator=DecisionTreeRegressor(),threshold=0.01)
    print(sfm.fit_transform(X,Y))
if __name__=="__main__":
    main()
2. Feature transformation: log/exponential transforms, discretization, smoothing, normalization, numericalization, regularization
Discretization: split a continuous variable into bins
Reasons: overcoming data defects, algorithms that require discrete input, mapping non-linear data
Methods: equal frequency, equal width, optimization against the dependent variable
Discretization (binning):
- depth: the number of values per bin
- width: the value range per bin
#binning
import pandas as pd
lst = [6, 8, 10, 15, 16, 24, 25, 40, 67]
#equal-depth (equal-frequency) binning
x = pd.qcut(lst,q=3,labels=["low","medium","high"])
print(x)
#equal-width binning
x = pd.cut(lst,bins=3,labels=["low","medium","high"])
print(x)
Normalization and standardization
import numpy as np
from sklearn.preprocessing import MinMaxScaler,StandardScaler
print(MinMaxScaler().fit_transform(np.array([1, 4, 10, 15, 21]).reshape(-1, 1)))
print(StandardScaler().fit_transform(np.array([1, 1, 1, 1, 0, 0, 0, 0]).reshape(-1, 1)))
print(StandardScaler().fit_transform(np.array([1, 0, 0, 0, 0, 0, 0, 0]).reshape(-1, 1)))
Numericalization
Nominal: one-hot encoding
Ordinal: label encoding
import numpy as np
#label encoding and one-hot encoding
from sklearn.preprocessing import LabelEncoder,OneHotEncoder
#LabelEncoder expects 1-D input
print(LabelEncoder().fit_transform(np.array(["Down","Down","Up","Down","Up"])))
print(LabelEncoder().fit_transform(np.array(["Low","Medium","Low","High","Medium"])))
lb_encoder=LabelEncoder()
lb_trans_f=lb_encoder.fit_transform(np.array(["Red","Yellow","Blue","Green"]))
print(lb_trans_f)
oht_encoder=OneHotEncoder().fit(lb_trans_f.reshape(-1,1))
print(oht_encoder.transform(lb_encoder.transform(np.array(["Red","Blue"])).reshape(-1,1)).toarray())
Regularization (vector normalization)
1. applied directly to a feature column
2. applied to each object's feature vector (the rows of the feature matrix)
3. applied to model parameters (most common in regression models)
import numpy as np
#normalization
from sklearn.preprocessing import Normalizer
print(Normalizer(norm="l1").fit_transform(np.array([[1, 1, 3, -1, 2]])))
print(Normalizer(norm="l2").fit_transform(np.array([[1, 1, 3, -1, 2]])))
3. Feature dimensionality reduction
- Linear reduction: PCA, singular value decomposition
- LDA
LDA: linear discriminant analysis
Core idea: after projection, distances within the same class should be as small as possible, and distances between different classes as large as possible
import numpy as np
#LDA dimensionality reduction
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
y = np.array([0, 0, 0, 1, 1, 1])
clf = LinearDiscriminantAnalysis()
print(clf.fit_transform(X,y))
#LDA can also act as a classifier
clf.fit(X, y)
print(clf.predict([[-0.8, -1]]))
4. Feature derivation
- Arithmetic combinations (add, subtract, multiply, divide)
- Derivatives and higher-order derivatives
- Manually designed features
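A minimal sketch of an arithmetic derived feature; the toy frame reuses two HR.csv column names, and the derived column name is an illustrative assumption:

```python
import pandas as pd

# toy frame with the same column names as HR.csv
df = pd.DataFrame({"average_monthly_hours": [160, 240, 200],
                   "number_project": [2, 6, 4]})
# derived feature: average hours per project, i.e. a division of two existing features
df["hours_per_project"] = df["average_monthly_hours"] / df["number_project"]
print(df)  # hours_per_project: 80.0, 40.0, 50.0
```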
Example: feature processing on the HR table
Chapter 6: Mining and Modeling
- Training set: used to train and fit the model
- Test set: measures the model's generalization, i.e. its ability to predict on unseen data
- Validation set: once several models have been trained, used to tune or compare them
K-fold cross-validation: split the data into K parts; each part takes one turn as the test set while the rest form the training set
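The K-fold scheme can be sketched with sklearn's KFold (the toy arrays below are illustrative):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # toy feature matrix: 10 samples, 2 features

kf = KFold(n_splits=5)            # K = 5: five rotations
for train_idx, test_idx in kf.split(X):
    # each fold holds out 2 of the 10 samples as the test set, the other 8 train
    print("train:", train_idx, "test:", test_idx)
```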
#split into training, test, and validation sets with train_test_split
from sklearn.model_selection import train_test_split
f_v=features.values
f_names=features.columns.values
l_v=label.values
#validation set: 20%
X_tt,X_validation,Y_tt,Y_validation=train_test_split(f_v,l_v,test_size=0.2)
#training set: 60%, test set: 20% (0.25 of the remaining 80%)
X_train,X_test,Y_train,Y_test=train_test_split(X_tt,Y_tt,test_size=0.25)
KNN works as follows:
1) Compute distances: given a test object, compute its distance to every object in the training set
2) Find neighbors: take the k nearest training objects as the test object's neighbors
3) Classify: assign the test object to the majority class among those k neighbors
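The three steps can be sketched from scratch with NumPy (the toy data are an illustrative assumption):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify one point by the three steps above."""
    dists = np.linalg.norm(X_train - x, axis=1)  # 1) distance to every training object
    neighbors = np.argsort(dists)[:k]            # 2) indices of the k nearest neighbors
    votes = Counter(y_train[neighbors])          # 3) majority vote among them
    return votes.most_common(1)[0][0]

X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.5, 0.5])))  # → 0
print(knn_predict(X_train, y_train, np.array([5.5, 5.5])))  # → 1
```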
#KNN demonstration
import pandas as pd
from sklearn.preprocessing import MinMaxScaler,StandardScaler
from sklearn.preprocessing import LabelEncoder,OneHotEncoder
from sklearn.decomposition import PCA
#sl:satisfaction_level---False:MinMaxScaler;True:StandardScaler
#le:last_evaluation---False:MinMaxScaler;True:StandardScaler
#npr:number_project---False:MinMaxScaler;True:StandardScaler
#amh:average_monthly_hours--False:MinMaxScaler;True:StandardScaler
#tsc:time_spend_company--False:MinMaxScaler;True:StandardScaler
#wa:Work_accident--False:MinMaxScaler;True:StandardScaler
#pl5:promotion_last_5years--False:MinMaxScaler;True:StandardScaler
#dp:department--False:LabelEncoding;True:OneHotEncoding
#slr:salary--False:LabelEncoding;True:OneHotEncoding
def hr_preprocessing(sl=False,le=False,npr=False,amh=False,tsc=False,wa=False,pl5=False,dp=False,slr=False,lower_d=False,ld_n=1):
    df=pd.read_csv("./data/HR.csv")
    #1. clean the data
    df=df.dropna(subset=["satisfaction_level","last_evaluation"])
    df=df[(df["satisfaction_level"]<=1) & (df["salary"]!="nme")]
    #2. extract the label
    label = df["left"]
    df = df.drop("left", axis=1)
    #3. feature selection
    #4. feature processing
    scaler_lst=[sl,le,npr,amh,tsc,wa,pl5]
    column_lst=["satisfaction_level","last_evaluation","number_project",\
                "average_monthly_hours","time_spend_company","Work_accident",\
                "promotion_last_5years"]
    for i in range(len(scaler_lst)):
        if not scaler_lst[i]:
            df[column_lst[i]]=\
                MinMaxScaler().fit_transform(df[column_lst[i]].values.reshape(-1,1)).reshape(1,-1)[0]
        else:
            df[column_lst[i]]=\
                StandardScaler().fit_transform(df[column_lst[i]].values.reshape(-1,1)).reshape(1,-1)[0]
    scaler_lst=[slr,dp]
    column_lst=["salary","department"]
    for i in range(len(scaler_lst)):
        if not scaler_lst[i]:
            if column_lst[i]=="salary":
                df[column_lst[i]]=[map_salary(s) for s in df["salary"].values]
            else:
                df[column_lst[i]]=LabelEncoder().fit_transform(df[column_lst[i]])
            df[column_lst[i]]=MinMaxScaler().fit_transform(df[column_lst[i]].values.reshape(-1,1)).reshape(1,-1)[0]
        else:
            df=pd.get_dummies(df,columns=[column_lst[i]])
    if lower_d:
        return PCA(n_components=ld_n).fit_transform(df.values),label
    return df,label
d=dict([("low",0),("medium",1),("high",2)])
def map_salary(s):
    return d.get(s,0)
def hr_modeling(features,label):
    #split into training, test, and validation sets
    from sklearn.model_selection import train_test_split
    f_v=features.values
    l_v=label.values
    #validation set: 20%
    X_tt,X_validation,Y_tt,Y_validation=train_test_split(f_v,l_v,test_size=0.2)
    #training set: 60%, test set: 20%
    X_train,X_test,Y_train,Y_test=train_test_split(X_tt,Y_tt,test_size=0.25)
    #KNN
    from sklearn.neighbors import KNeighborsClassifier
    knn_clf = KNeighborsClassifier(n_neighbors=3) #k=3 neighbors
    knn_clf.fit(X_train,Y_train) #fit
    Y_pred = knn_clf.predict(X_validation) #validate
    #metrics: accuracy, recall, F1 score
    from sklearn.metrics import accuracy_score, recall_score, f1_score
    print("ACC:",accuracy_score(Y_validation,Y_pred))
    print("REC:",recall_score(Y_validation,Y_pred))
    print("F-Score",f1_score(Y_validation,Y_pred))
    #model persistence (sklearn.externals.joblib in older versions; now the joblib package)
    import joblib
    joblib.dump(knn_clf,"knn_clf") #save
    knn_clf = joblib.load("knn_clf") #load
    Y_pred = knn_clf.predict(X_validation)
    print("ACC:", accuracy_score(Y_validation, Y_pred))
    print("REC:", recall_score(Y_validation, Y_pred))
    print("F-Score", f1_score(Y_validation, Y_pred))
def main():
    features,label=hr_preprocessing()
    hr_modeling(features,label)
if __name__=="__main__":
    main()
#to simplify invocation, manage the models in a models[] list
def hr_modeling(features,label):
    #split into training, test, and validation sets
    from sklearn.model_selection import train_test_split
    f_v=features.values
    l_v=label.values
    #validation set: 20%
    X_tt,X_validation,Y_tt,Y_validation=train_test_split(f_v,l_v,test_size=0.2)
    #training set: 60%, test set: 20%
    X_train,X_test,Y_train,Y_test=train_test_split(X_tt,Y_tt,test_size=0.25)
    #metrics: accuracy, recall, F1 score
    from sklearn.metrics import accuracy_score, recall_score, f1_score
    #KNN
    from sklearn.neighbors import KNeighborsClassifier
    #keep the models in a list so they can be handled uniformly
    models=[]
    models.append(("KNN",KNeighborsClassifier(n_neighbors=3)))
    for clf_name,clf in models:
        clf.fit(X_train,Y_train)
        #xy_lst[i]: 0 = training set, 1 = validation set, 2 = test set
        xy_lst=[(X_train,Y_train),(X_validation,Y_validation),(X_test,Y_test)]
        for i in range(len(xy_lst)):
            X_part=xy_lst[i][0]
            Y_part=xy_lst[i][1]
            Y_pred=clf.predict(X_part)
            print(i)
            print(clf_name,"-ACC:",accuracy_score(Y_part,Y_pred))
            print(clf_name,"-REC:",recall_score(Y_part,Y_pred))
            print(clf_name,"-F1:",f1_score(Y_part,Y_pred))
Decision-tree split criteria
- Information gain: ID3
- Information gain ratio: C4.5
- Gini coefficient: CART
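A minimal sketch contrasting the split criteria with sklearn's DecisionTreeClassifier; note sklearn implements CART-style trees with either the Gini or the entropy criterion (full ID3/C4.5 are not provided), and the toy data are an illustrative assumption:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# same toy data as the LDA example in Chapter 5
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
y = np.array([0, 0, 0, 1, 1, 1])

for criterion in ("gini", "entropy"):  # Gini impurity vs. information-gain splits
    clf = DecisionTreeClassifier(criterion=criterion).fit(X, y)
    print(criterion, clf.predict([[-0.8, -1]]))  # both predict class 0 here
```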