@Author: 煉己者
Everything on this blog is shared for learning and research. If you want to repost it, please contact me first, credit the author and the source, and keep it non-commercial. Thanks!
You can also read this on my GitHub, which I updated recently; it is written as a Jupyter notebook, so it looks better there:
https://github.com/lianjizhe/kaggle_tiantic_code — if it helps you, please leave a star.
Abstract
This article walks you through Kaggle's most basic competition, Titanic: Machine Learning from Disaster. It shows the whole process with plenty of visualizations rather than one big wall of code, in the hope of giving you a real introduction.
This is the Kaggle competition I entered in February. I drew on a lot of code from more experienced people, walked through the whole pipeline end to end, and finished in the top 2%. The competition mattered a lot to me: ranking near the top gave me the confidence to keep going. I had posted this to my CSDN blog before; now I am officially publishing it here, and I will keep entering competitions.
If you ran this code today, the ranking would probably drop quite a bit, since it has been a while. Feel free to build on it and improve it.
Main Text
I. Importing packages and the datasets
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
train=pd.read_csv(r'H:\kaggle\train.csv')
test=pd.read_csv(r'H:\kaggle\test.csv')
PassengerId=test['PassengerId']
all_data = pd.concat([train, test], ignore_index = True)
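Concatenating train and test lets all of the feature engineering below be applied to both sets at once; the test rows get Survived = NaN, which is how the two sets are separated again before modeling.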
II. Data analysis
1. Overview
train.head()
- PassengerId (passenger ID)
- Survived (survival, the target)
- Pclass (passenger class, fairly important)
- Name (name; more information can be extracted from it)
- Sex (sex, fairly important)
- Age (age, fairly important)
- Parch (number of parents/children aboard)
- SibSp (number of siblings/spouses aboard)
- Ticket (ticket number)
- Fare (ticket fare)
- Cabin (cabin number)
- Embarked (port of embarkation)
[input]:
train.info()
[output]:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId 891 non-null int64
Survived 891 non-null int64
Pclass 891 non-null int64
Name 891 non-null object
Sex 891 non-null object
Age 714 non-null float64
SibSp 891 non-null int64
Parch 891 non-null int64
Ticket 891 non-null object
Fare 891 non-null float64
Cabin 204 non-null object
Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.6+ KB
From the output above we can see that several features contain missing values.
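A quick way to count them per column (a check that is not in the original notebook):
train.isnull().sum()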
2. Preliminary data analysis (statistics and plots)
- Goal: get an initial sense of how the features relate to survival, in preparation for feature engineering and model building.
[input]:
train['Survived'].value_counts()
[output]:
0 549
1 342
Name: Survived, dtype: int64
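So roughly 38% of the training passengers survived (342 of 891), a useful baseline when reading the plots below. A one-line check, not in the original:
train['Survived'].mean()  # ≈ 0.384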
1) Sex Feature: the survival rate of women is far higher than that of men
sns.barplot(x="Sex", y="Survived", data=train)
2) Pclass Feature: the higher the passenger's class, the higher the survival rate
sns.barplot(x="Pclass", y="Survived", data=train)
3) SibSp Feature: passengers with a moderate number of siblings/spouses had a higher survival rate
sns.barplot(x="SibSp", y="Survived", data=train)
4) Parch Feature: passengers with a moderate number of parents/children had a higher survival rate
sns.barplot(x="Parch", y="Survived", data=train)
5) Age Feature: survival differs sharply for young children
The density plots of the two survival outcomes show a clear difference to the left of age 15, where the non-overlapping area of the two curves is large; in the other age ranges the difference is small and plausibly random. It is therefore worth separating out this young-age region.
facet = sns.FacetGrid(train, hue="Survived",aspect=2)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train['Age'].max()))
facet.add_legend()
plt.xlabel('Age')
plt.ylabel('density')
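One way to act on that observation (a minimal sketch, not part of the original pipeline; the IsChild name is made up here) is a binary flag for the young-age region:
all_data['IsChild'] = (all_data['Age'] <= 15).astype(int)  # missing ages compare as False and become 0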
6) Embarked Feature: embarkation port vs. survival
Result: passengers who embarked at C had a higher survival rate, so this should also be kept as a model feature.
sns.countplot(x='Embarked', hue='Survived', data=train)
7) Title Feature (new): survival rates differ by title
Add a Title feature: extract each passenger's title from Name and group the titles into six categories.
all_data['Title'] = all_data['Name'].apply(lambda x:x.split(',')[1].split('.')[0].strip())
Title_Dict = {}
Title_Dict.update(dict.fromkeys(['Capt', 'Col', 'Major', 'Dr', 'Rev'], 'Officer'))
Title_Dict.update(dict.fromkeys(['Don', 'Sir', 'the Countess', 'Dona', 'Lady'], 'Royalty'))
Title_Dict.update(dict.fromkeys(['Mme', 'Ms', 'Mrs'], 'Mrs'))
Title_Dict.update(dict.fromkeys(['Mlle', 'Miss'], 'Miss'))
Title_Dict.update(dict.fromkeys(['Mr'], 'Mr'))
Title_Dict.update(dict.fromkeys(['Master','Jonkheer'], 'Master'))
all_data['Title'] = all_data['Title'].map(Title_Dict)
sns.barplot(x="Title", y="Survived", data=all_data)
8) FamilyLabel Feature (new): passengers in families of 2 to 4 had a higher survival rate
Add a FamilyLabel feature: first compute FamilySize = SibSp + Parch + 1, then bin FamilySize into three classes.
all_data['FamilySize']=all_data['SibSp']+all_data['Parch']+1
sns.barplot(x="FamilySize", y="Survived", data=all_data)
Bin FamilySize into three classes by survival rate to form the FamilyLabel feature.
def Fam_label(s):
    # Bin family size by observed survival rate: mid-sized families fare best
    if 2 <= s <= 4:
        return 2
    elif (4 < s <= 7) or (s == 1):
        return 1
    elif s > 7:
        return 0
all_data['FamilyLabel']=all_data['FamilySize'].apply(Fam_label)
sns.barplot(x="FamilyLabel", y="Survived", data=all_data)
9) Deck Feature (new): survival rates differ by deck
Add a Deck feature: fill missing Cabin values with 'Unknown', then take the first letter of Cabin as the passenger's deck.
all_data['Cabin'] = all_data['Cabin'].fillna('Unknown')
all_data['Deck']=all_data['Cabin'].str.get(0)
sns.barplot(x="Deck", y="Survived", data=all_data)
10) TicketGroup Feature (new): passengers sharing a ticket number in a group of 2 to 4 had a higher survival rate
Add a TicketGroup feature by counting how many passengers hold each ticket number.
Ticket_Count = dict(all_data['Ticket'].value_counts())
all_data['TicketGroup'] = all_data['Ticket'].apply(lambda x:Ticket_Count[x])
sns.barplot(x='TicketGroup', y='Survived', data=all_data)
Bin TicketGroup into three classes by survival rate.
def Ticket_Label(s):
    # Bin shared-ticket group size by observed survival rate
    if 2 <= s <= 4:
        return 2
    elif (4 < s <= 8) or (s == 1):
        return 1
    elif s > 8:
        return 0
all_data['TicketGroup'] = all_data['TicketGroup'].apply(Ticket_Label)
sns.barplot(x='TicketGroup', y='Survived', data=all_data)
3. Data cleaning
1) Filling missing values
Age Feature: Age has 263 missing values, a fairly large number, so we build a random forest regression model on the Sex, Title, and Pclass features to predict and fill in the missing ages.
from sklearn.ensemble import RandomForestRegressor
age_df = all_data[['Age', 'Pclass', 'Sex', 'Title']]
age_df = pd.get_dummies(age_df)
# .values replaces the long-deprecated .as_matrix() from older pandas
known_age = age_df[age_df.Age.notnull()].values
unknown_age = age_df[age_df.Age.isnull()].values
y = known_age[:, 0]   # Age is the first column: the regression target
X = known_age[:, 1:]  # the remaining dummy-encoded features
rfr = RandomForestRegressor(random_state=0, n_estimators=100, n_jobs=-1)
rfr.fit(X, y)
predictedAges = rfr.predict(unknown_age[:, 1:])
all_data.loc[(all_data.Age.isnull()), 'Age'] = predictedAges
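A quick check (not in the original) that no missing ages remain:
all_data['Age'].isnull().sum()  # expect 0 after the fill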
Embarked Feature: Embarked has 2 missing values. Both passengers missing it have Pclass 1 and Fare 80, and among Pclass 1 passengers the median Fare for Embarked C (78.2667) is the closest to 80, so we fill the missing values with 'C'.
all_data[all_data['Embarked'].isnull()]
[input]:
all_data.groupby(by=["Pclass","Embarked"]).Fare.median()
[Output]:
Pclass  Embarked
1       C           78.2667
        Q           90.0000
        S           52.0000
2       C           15.3146
        Q           12.3500
        S           15.3750
3       C            7.8958
        Q            7.7500
        S            8.0500
Name: Fare, dtype: float64
all_data['Embarked'] = all_data['Embarked'].fillna('C')
Fare Feature: Fare has 1 missing value. The passenger missing it has Embarked S and Pclass 3, so we fill it with the median Fare of passengers with Embarked S and Pclass 3.
all_data[all_data['Fare'].isnull()]
fare=all_data[(all_data['Embarked'] == "S") & (all_data['Pclass'] == 3)].Fare.median()
all_data['Fare']=all_data['Fare'].fillna(fare)
2) Group identification
Group passengers with the same surname into families, and from the groups with more than one member extract a women-and-children subset and an adult-male subset.
all_data['Surname']=all_data['Name'].apply(lambda x:x.split(',')[0].strip())
Surname_Count = dict(all_data['Surname'].value_counts())
all_data['FamilyGroup'] = all_data['Surname'].apply(lambda x:Surname_Count[x])
Female_Child_Group=all_data.loc[(all_data['FamilyGroup']>=2) & ((all_data['Age']<=12) | (all_data['Sex']=='female'))]
Male_Adult_Group=all_data.loc[(all_data['FamilyGroup']>=2) & (all_data['Age']>12) & (all_data['Sex']=='male')]
It turns out that in almost every women-and-children group the mean survival rate is either 1 or 0; that is, the women and children of a family either all survived or all perished.
Female_Child=pd.DataFrame(Female_Child_Group.groupby('Surname')['Survived'].mean().value_counts())
Female_Child.columns=['GroupCount']
Female_Child
sns.barplot(x=Female_Child.index, y=Female_Child["GroupCount"]).set_xlabel('AverageSurvived')
The mean survival rate of most adult-male groups is likewise either 1 or 0.
Male_Adult=pd.DataFrame(Male_Adult_Group.groupby('Surname')['Survived'].mean().value_counts())
Male_Adult.columns=['GroupCount']
Male_Adult
因?yàn)槠毡橐?guī)律是女性和兒童幸存率高安拟,成年男性幸存較低蛤吓,所以我們把不符合普遍規(guī)律的反常組選出來單獨(dú)處理。把女性和兒童組中幸存率為0的組設(shè)置為遇難組糠赦,把成年男性組中存活率為1的設(shè)置為幸存組会傲,推測處于遇難組的女性和兒童幸存的可能性較低,處于幸存組的成年男性幸存的可能性較高拙泽。
[Input]:
Female_Child_Group=Female_Child_Group.groupby('Surname')['Survived'].mean()
Dead_List=set(Female_Child_Group[Female_Child_Group.apply(lambda x:x==0)].index)
print(Dead_List)
Male_Adult_List=Male_Adult_Group.groupby('Surname')['Survived'].mean()
Survived_List=set(Male_Adult_List[Male_Adult_List.apply(lambda x:x==1)].index)
print(Survived_List)
[Output]:
{'Panula', 'Lefebre', 'Lobb', 'Johnston', 'Robins', 'Ilmakangas', 'Turpin', 'Arnold-Franchi', 'Lahtinen', 'Barbara', 'Goodwin', 'Oreskovic', 'Van Impe', 'Strom', 'Rosblom', 'Cacic', 'Attalah', 'Caram', 'Vander Planke', 'Palsson', 'Skoog', 'Danbom', 'Rice', 'Canavan', 'Bourke', 'Jussila', 'Olsson', 'Boulos', 'Zabour', 'Sage', 'Ford'}
{'Beane', 'Frauenthal', 'Harder', 'Nakid', 'Bishop', 'Beckwith', 'Bradley', 'Chambers', 'Cardeza', 'Daly', 'Goldenberg', 'Kimball', 'McCoy', 'Jussila', 'Frolicher-Stehli', 'Duff Gordon', 'Greenfield', 'Dick', 'Jonsson', 'Taylor'}
So that test samples from these two kinds of anomalous groups get classified correctly, we apply a punitive modification to their Age, Title, and Sex.
train = all_data.loc[all_data['Survived'].notnull()]
test = all_data.loc[all_data['Survived'].isnull()].copy()  # .copy() avoids pandas' SettingWithCopyWarning on the writes below
# Recode dead-list passengers as elderly men, the lowest-survival profile
test.loc[(test['Surname'].apply(lambda x: x in Dead_List)), 'Sex'] = 'male'
test.loc[(test['Surname'].apply(lambda x: x in Dead_List)), 'Age'] = 60
test.loc[(test['Surname'].apply(lambda x: x in Dead_List)), 'Title'] = 'Mr'
# Recode survived-list passengers as young girls, the highest-survival profile
test.loc[(test['Surname'].apply(lambda x: x in Survived_List)), 'Sex'] = 'female'
test.loc[(test['Surname'].apply(lambda x: x in Survived_List)), 'Age'] = 5
test.loc[(test['Surname'].apply(lambda x: x in Survived_List)), 'Title'] = 'Miss'
3) Feature transformation
Select the features, convert them to numeric variables, and split the data back into training and test sets.
all_data = pd.concat([train, test])
all_data = all_data[['Survived','Pclass','Sex','Age','Fare','Embarked','Title','FamilyLabel','Deck','TicketGroup']]
all_data = pd.get_dummies(all_data)
train = all_data[all_data['Survived'].notnull()]
test = all_data[all_data['Survived'].isnull()].drop('Survived', axis=1)
X = train.values[:, 1:]  # .values replaces the deprecated .as_matrix()
y = train.values[:, 0]
4. Modeling and tuning
1) Hyperparameter tuning
Use grid search to select the hyperparameters automatically. The grid search I ran actually returned n_estimators = 28, max_depth = 6, but following another Kernel and changing to n_estimators = 26, max_depth = 6 slightly improved both the cross-validation score and the Kaggle score.
[Input]:
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.feature_selection import SelectKBest
pipe = Pipeline([('select', SelectKBest(k=20)),
                 ('classify', RandomForestClassifier(random_state=10, max_features='sqrt'))])
param_test = {'classify__n_estimators': list(range(20, 50, 2)),
              'classify__max_depth': list(range(3, 60, 3))}
gsearch = GridSearchCV(estimator = pipe, param_grid = param_test, scoring='roc_auc', cv=10)
gsearch.fit(X,y)
print(gsearch.best_params_, gsearch.best_score_)
[Output]:
{'classify__max_depth': 6, 'classify__n_estimators': 42} 0.88109635084
2) Training the model
[Input]:
from sklearn.pipeline import make_pipeline
select = SelectKBest(k = 20)
clf = RandomForestClassifier(random_state=10, warm_start=True,
                             n_estimators=26,
                             max_depth=6,
                             max_features='sqrt')
pipeline = make_pipeline(select, clf)
pipeline.fit(X, y)
[Output]:
Pipeline(memory=None,
steps=[('selectkbest', SelectKBest(k=20, score_func=<function f_classif at 0x000000000C8AE048>)), ('randomforestclassifier', RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=6, max_features='sqrt', max_leaf_nodes=None,
min_impurity_decreas...estimators=26, n_jobs=1,
oob_score=False, random_state=10, verbose=0, warm_start=True))])
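With the fitted pipeline you can also list which dummy-encoded columns SelectKBest kept (an optional sanity check, not in the original; train.columns[1:] are the feature names because column 0 is Survived):
selected = pipeline.named_steps['selectkbest'].get_support()
print(train.columns[1:][selected])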
3) Cross-validation
[Input]:
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation was removed in scikit-learn 0.20
cv_score = cross_val_score(pipeline, X, y, cv=10)
print("CV Score : Mean - %.7g | Std - %.7g " % (np.mean(cv_score), np.std(cv_score)))
[Output]:
CV Score : Mean - 0.8451402 | Std - 0.03276752
5. Prediction
predictions = pipeline.predict(test)
submission = pd.DataFrame({"PassengerId": PassengerId, "Survived": predictions.astype(np.int32)})
submission.to_csv(r"h:\kaggle\submission1.csv", index=False)
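Before uploading, a quick sanity check of the file (not in the original; the Titanic test set has 418 rows, and Kaggle expects exactly the PassengerId and Survived columns):
print(submission.shape)   # expect (418, 2)
print(submission.head())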