The Titanic disaster is an introductory Kaggle competition; this post records my notes on it.
1. Installing Jupyter Notebook
Compared with PyCharm, which I used previously, Jupyter Notebook's advantage is that code lives in separate cells that can be executed independently. That means you can test one specific code block while working on a project, without rerunning everything from the top.
Installation tutorial: https://blog.csdn.net/dream_an/article/details/50464940
Usage tutorial: https://blog.csdn.net/lee_j_r/article/details/52791228
2. The Titanic Competition
The overall workflow is as follows; I will explain each step and attach the corresponding code:
1. Data preparation and understanding: download the data and learn what each attribute means
2. Data cleaning: i.e. data preprocessing; fill in missing values and convert column types the model cannot handle into usable int types
3. Feature engineering: derive new features and run feature selection to improve model accuracy
4. Baseline models: run a few basic models. I chose linear regression, logistic regression, and random forest, and checked their accuracy with cross-validation
5. Model blending: I blended the random forest and logistic regression models
2.1 Data Preparation and Understanding
First, download the data from Kaggle: https://www.kaggle.com/c/titanic/data
The attributes mean the following:
PassengerId: passenger ID
Survived: whether the passenger survived
Pclass: ticket class (1 = highest)
Name: passenger name
Sex: sex
Age: age
SibSp: number of siblings/spouses aboard
Parch: number of parents/children aboard
Ticket: ticket number
Fare: ticket fare
Cabin: cabin number
Embarked: port of embarkation
Next, load the data and take a rough statistical look at each attribute:
"""查看數(shù)據(jù)"""
import pandas as pd
titanic = pd.read_csv('train.csv')
# titanic.head(3)
print(titanic.describe())
print(titanic.info())
The output is:
PassengerId Survived Pclass Age SibSp \
count 891.000000 891.000000 891.000000 714.000000 891.000000
mean 446.000000 0.383838 2.308642 29.699118 0.523008
std 257.353842 0.486592 0.836071 14.526497 1.102743
min 1.000000 0.000000 1.000000 0.420000 0.000000
25% 223.500000 0.000000 2.000000 20.125000 0.000000
50% 446.000000 0.000000 3.000000 28.000000 0.000000
75% 668.500000 1.000000 3.000000 38.000000 1.000000
max 891.000000 1.000000 3.000000 80.000000 8.000000
Parch Fare
count 891.000000 891.000000
mean 0.381594 32.204208
std 0.806057 49.693429
min 0.000000 0.000000
25% 0.000000 7.910400
50% 0.000000 14.454200
75% 0.000000 31.000000
max 6.000000 512.329200
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId 891 non-null int64
Survived 891 non-null int64
Pclass 891 non-null int64
Name 891 non-null object
Sex 891 non-null object
Age 714 non-null float64
SibSp 891 non-null int64
Parch 891 non-null int64
Ticket 891 non-null object
Fare 891 non-null float64
Cabin 204 non-null object
Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.6+ KB
None
A first look shows that the Age column's count is only 714, so its missing values need to be filled, and that Sex, Embarked, Name, and other columns are object-typed and must be converted into types a model can work with. This is all preprocessing, and every project needs it in some form.
2.2 Data Cleaning
Data cleaning means data preprocessing: filling missing values and converting column types the model cannot handle into usable int types.
Missing values are usually filled with fillna(), in two cases:
1. For numeric data, replace with the mean or the median
2. For categorical data, replace with the most common category (see the sketch below)
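As a generic pattern, this looks like the following minimal sketch, where df, num_col, and cat_col are hypothetical names; the dataset-specific code comes right after:
# Numeric column: fill with the median (or mean)
df['num_col'] = df['num_col'].fillna(df['num_col'].median())
# Categorical column: fill with the most frequent value; mode()[0] picks it
df['cat_col'] = df['cat_col'].fillna(df['cat_col'].mode()[0])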
Applied to this dataset:
The Age column has missing values; fill them with the median (or the mean).
"""數(shù)據(jù)預(yù)處理"""
# 用中值填補(bǔ)缺失值
titanic['Age'] = titanic['Age'].fillna(titanic['Age'].median())
The Sex column's values male/female cannot be processed directly; map them to 0 and 1 respectively:
print(titanic['Sex'].unique())
# titanic.loc[0] selects the row labeled 0
# titanic.loc[0, 'PassengerId'] selects the value at row 0, column 'PassengerId'
titanic.loc[titanic['Sex'] == 'male', 'Sex'] = 0
titanic.loc[titanic['Sex'] == 'female', 'Sex'] = 1
The Embarked column has missing values; fill them with the most common category, 'S', then convert the values to ints:
print(titanic['Embarked'].describe())
print(titanic['Embarked'].unique())
titanic['Embarked'] = titanic['Embarked'].fillna('S')
titanic.loc[titanic['Embarked'] == 'S', 'Embarked'] = 0
titanic.loc[titanic['Embarked'] == 'C', 'Embarked'] = 1
titanic.loc[titanic['Embarked'] == 'Q', 'Embarked'] = 2
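As an aside, mapping S/C/Q onto 0/1/2 imposes an ordering the categories don't actually have. A common alternative is one-hot encoding with pd.get_dummies, applied to the original string values rather than after the .loc mapping above; this is only a sketch, and the rest of this post sticks with the integer mapping:
# One-hot encode Embarked into Embarked_S / Embarked_C / Embarked_Q columns
embarked_dummies = pd.get_dummies(titanic['Embarked'], prefix='Embarked')
titanic_onehot = pd.concat([titanic, embarked_dummies], axis=1)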
2.3 Feature Engineering
"Data determines the upper bound of machine learning; algorithms merely approach that bound." The data here means data produced by feature engineering. Feature engineering is the process of turning raw data into a model's training data; its goal is to obtain better features so the model can get closer to that upper bound. For background, see https://www.cnblogs.com/wxquare/p/5484636.html
In this case, deriving new features and then selecting among them can improve model accuracy.
The three derived features are: FamilySize, the sum of SibSp and Parch, to test whether larger families had better odds of being rescued; NameLength, the length of the name, since longer Western names tended to signal higher status; and Title, extracted from Name, e.g. Mr, Mrs, or Dr, which indicates sex and occupation.
re.search() scans the whole string and returns the first successful match, or None if there is no match.
On regular expressions: https://www.jb51.net/article/15707.htm
# Derive new features
titanic['FamilySize'] = titanic['SibSp'] + titanic['Parch']
titanic['NameLength'] = titanic['Name'].apply(lambda x: len(x))
import re
import pandas as pd
def get_title(name):
    # A title is the word followed by a period, e.g. 'Mr.' in 'Braund, Mr. Owen Harris'
    title_search = re.search(r' ([A-Za-z]+)\.', name)
    if title_search:
        return title_search.group(1)
    return ''
titles = titanic['Name'].apply(get_title)
# Map each title string to a numeric code
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Dr": 5, "Rev": 6, "Major": 7, "Col": 8, "Mlle": 9,
                 "Mme": 10, "Don": 11, "Lady": 12, "Countess": 13, "Jonkheer": 14, "Sir": 15, "Capt": 16, "Ms": 17
                 }
for k, v in title_mapping.items():
    titles[titles == k] = v
print(pd.value_counts(titles))
titanic['Title'] = titles
On SelectKBest(): https://blog.csdn.net/sunshunli/article/details/82051138
# Feature selection
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
import matplotlib.pyplot as plt
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked",
              "FamilySize", "NameLength", "Title"]
# f_classif scores each feature with the ANOVA F-value (between-group mean
# square / within-group mean square); keep the 5 highest-scoring features
selector = SelectKBest(f_classif, k=5)
selector.fit(titanic[predictors], titanic['Survived'])
# Convert p-values to scores (higher = more informative)
scores = -np.log10(selector.pvalues_)
plt.bar(range(len(predictors)), scores)
plt.xticks(range(len(predictors)), predictors, rotation='vertical')
plt.show()
The chart shows that Pclass, Sex, Fare, NameLength, and Title are the five most important features.
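Instead of eyeballing the bar chart, you can also read the selected features off the fitted selector with get_support(); this is a small convenience sketch, not part of the original workflow:
# Indices of the k=5 features that SelectKBest kept
selected = selector.get_support(indices=True)
print([predictors[i] for i in selected])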
2.4 Baseline Models
Linear regression is the most basic machine-learning algorithm; logistic regression needs less code here than linear regression; and random forest is an ensemble method that is less prone to overfitting, so I chose these three. For cross-validation, the samples are split into 3 folds: in each round, 2 folds are used for training and the remaining 1 for testing. That gives 3 training runs and 3 results, which are averaged into the final score.
2.4.1 Linear Regression
"""線性回歸"""
from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import KFold
from sklearn import metrics
# 選擇特征
predictors = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
# 導(dǎo)入線性回歸
alg = LinearRegression()
# 將樣本分為3份進(jìn)行交叉驗(yàn)證
kf = KFold(titanic.shape[0], n_folds=3, random_state=1)
predictions = []
for train_index, test_index in kf:
# 用于訓(xùn)練的特征數(shù)據(jù)
train_predictors = titanic[predictors].iloc[train_index, :]
# 特征數(shù)據(jù)的label(即是否獲救)
train_target = titanic['Survived'].iloc[train_index] # train_target = titanic['Survived'][train_index]
# 訓(xùn)練線性回歸模型
alg.fit(train_predictors, train_target) test_predictions = alg.predict(titanic[predictors].iloc[test_index, :])
predictions.append(test_predictions)
# 線性回歸得到的結(jié)果是在[0,1]膘魄,轉(zhuǎn)化為類別
import numpy as np
predictions = np.concatenate(predictions, axis=0)# predictions = np.hstack(predictions)
predictions[predictions > .5] = 1
predictions[predictions <= .5] = 0
# predictions = np.where(predictions > .5, 1, 0)
# 線性模型準(zhǔn)確率
accuracy = sum(predictions == titanic['Survived']) / len(predictions)
print(accuracy)
The linear-regression accuracy is:
0.783389450056
That accuracy is a bit low, so let's try a logistic-regression model.
2.4.2 Logistic Regression
Function: cross_val_score(model_name, X, y, cv=k)
Parameters: 1. the model, e.g. LogisticRegression(); 2. the feature matrix X; 3. the labels y; 4. the number of folds k for cross-validation.
Purpose: measure how stable a model is on a given dataset; it outputs k accuracy scores.
"""邏輯回歸"""
from sklearn import cross_validation
from sklearn.linear_model import LogisticRegression
alg = LogisticRegression(random_state=1)
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic['Survived'], cv=3)
print(scores)
print(scores.mean())
The logistic-regression accuracy is:
[ 0.78451178 0.78787879 0.79124579]
0.787878787879
Accuracy improved a little.
2.4.3 Random Forest
The "random" in random forest shows up in two places: 1. the samples for each tree are drawn randomly, with replacement; 2. the features considered at each split are chosen randomly, so not every feature is necessarily used. The "forest" means many decision trees are built.
Function: RandomForestClassifier()
Parameter notes: random_state=1 makes repeated runs produce the same random values (without it, two runs would differ); n_estimators=50 builds 50 decision trees; min_samples_split=4 means a node must contain at least 4 samples before it can be split further; min_samples_leaf=2 means every leaf must contain at least 2 samples.
"""隨機(jī)森林"""
from sklearn import cross_validation
from sklearn.ensemble import RandomForestClassifier
alg = RandomForestClassifier(random_state=1, n_estimators=50, min_samples_split=4, min_samples_leaf=2)
kf = cross_validation.KFold(titanic.shape[0], n_folds=3, random_state=1)
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic['Survived'], cv=kf)
print(scores.mean())
The random-forest accuracy is:
0.81593714927
Accuracy is now above 81%. You can keep tuning the random forest's parameters to see whether accuracy improves further, though higher is not always better: pushing too hard can overfit.
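One way to explore the parameter space is grid search. Below is a minimal sketch using scikit-learn's GridSearchCV; the grid values are my own illustrative choices, not from the original experiments:
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
param_grid = {
    'n_estimators': [20, 50, 100],
    'min_samples_split': [2, 4, 8],
    'min_samples_leaf': [1, 2, 4],
}
# 3-fold cross-validation over every parameter combination, scored by accuracy
grid = GridSearchCV(RandomForestClassifier(random_state=1), param_grid, cv=3)
grid.fit(titanic[predictors], titanic['Survived'])
print(grid.best_params_)
print(grid.best_score_)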
2.5 Model Blending
Blending is a common competition technique: combine several models, take each model's predictions, assign each model a weight, and average into a final result. The baselines above showed logistic regression and random forest doing best, so I blended those two. If one model is clearly better, it can be given a higher weight (here the random forest's weight is 2 and logistic regression's is 1).
On predict_proba(): https://blog.csdn.net/m0_37870649/article/details/79549142
# Model blending
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
algorithms = [
    [RandomForestClassifier(random_state=1, n_estimators=20, min_samples_split=4, min_samples_leaf=2),
     ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked', 'FamilySize', 'NameLength', 'Title']],
    [LogisticRegression(random_state=1),
     ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked', 'FamilySize', 'NameLength', 'Title']]
]
kf = KFold(n_splits=11)
predictions = []
for train, test in kf.split(titanic):
    train_target = titanic['Survived'].iloc[train]
    full_test_prediction = []
    for alg, predictors in algorithms:
        alg.fit(titanic[predictors].iloc[train, :], train_target)
        # Probability of the positive class (Survived = 1)
        test_prediction = alg.predict_proba(titanic[predictors].iloc[test, :].astype(float))[:, 1]
        full_test_prediction.append(test_prediction)
    # Weighted average: random forest gets weight 2, logistic regression weight 1
    test_predictions = (full_test_prediction[0] * 2 + full_test_prediction[1]) / 3
    test_predictions[test_predictions > .5] = 1
    test_predictions[test_predictions <= .5] = 0
    predictions.append(test_predictions)
predictions = np.concatenate(predictions, axis=0)
accuracy = sum(predictions == titanic['Survived']) / len(predictions)
print(accuracy)
The blended-model accuracy is:
0.832772166105
Summary: for a typical, simple machine-learning problem, start with data exploration to understand the attributes; clean the data, i.e. preprocess the raw data into a complete, usable form; do feature engineering to obtain the important features; then run baseline models to see how they perform. For more complex data you can try blending models and ensemble learning; it depends on the specific case.
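The post stops at cross-validated accuracy. To actually submit on Kaggle you would repeat the same preprocessing on test.csv and write a PassengerId/Survived CSV. A hedged sketch, continuing from the cells above; titanic_test and submission.csv are my own names, and test.csv is assumed to sit alongside train.csv:
# Apply the same preprocessing to the test set (its Fare column also has a
# missing value, filled with the median here)
titanic_test = pd.read_csv('test.csv')
titanic_test['Age'] = titanic_test['Age'].fillna(titanic['Age'].median())
titanic_test['Fare'] = titanic_test['Fare'].fillna(titanic_test['Fare'].median())
titanic_test.loc[titanic_test['Sex'] == 'male', 'Sex'] = 0
titanic_test.loc[titanic_test['Sex'] == 'female', 'Sex'] = 1
titanic_test['Embarked'] = titanic_test['Embarked'].fillna('S')
titanic_test.loc[titanic_test['Embarked'] == 'S', 'Embarked'] = 0
titanic_test.loc[titanic_test['Embarked'] == 'C', 'Embarked'] = 1
titanic_test.loc[titanic_test['Embarked'] == 'Q', 'Embarked'] = 2
# Train on the full training set and predict on the test set
predictors = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
alg = RandomForestClassifier(random_state=1, n_estimators=50, min_samples_split=4, min_samples_leaf=2)
alg.fit(titanic[predictors], titanic['Survived'])
submission = pd.DataFrame({
    'PassengerId': titanic_test['PassengerId'],
    'Survived': alg.predict(titanic_test[predictors]),
})
submission.to_csv('submission.csv', index=False)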