Most people have probably heard of this competition, so I won't introduce it here. This post draws on some existing examples plus some thinking of my own; the final accuracy is around 80%, and I'll keep improving it when I have time. First, read in the data:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LogisticRegression
import re
import csv
train_df = pd.read_csv('titanic_train.csv')
train_df.head()
The data contains the following fields:
# PassengerId  passenger ID
# Survived     survival (0 = no, 1 = yes)
# Pclass       passenger class (1st/2nd/3rd)
# Name         passenger name
# Sex          sex
# Age          age
# SibSp        number of siblings/spouses aboard
# Parch        number of parents/children aboard
# Ticket       ticket number
# Fare         fare
# Cabin        cabin
# Embarked     port of embarkation
That's a lot of fields, so let's work through them one at a time.
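Before plotting anything, it's worth checking which columns have gaps; a minimal sketch on the same train_df (Age, Cabin, and Embarked are the usual missing ones in this dataset):

# Count missing values per column and get summary statistics
print(train_df.isnull().sum())
print(train_df.describe())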
Next, let's see how each feature relates to survival, starting with sex, age, and passenger class:
fig = plt.figure()
plt.scatter(train_df.Survived, train_df.Age)
plt.ylabel("Age")
plt.title("Age distribution by survival")
print(train_df.Age.count())
plt.show()
From the scatter, passengers over 65 have a low survival rate, but beyond that not much stands out; the correlation with age is weaker than expected. Let's do the same split by sex:
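The plotting code for the sex split is omitted in the post; a minimal sketch with a stacked bar chart, using the same train_df as above:

# Survival counts by sex, stacked so the proportions are visible
survived_by_sex = pd.crosstab(train_df.Sex, train_df.Survived)
survived_by_sex.plot(kind='bar', stacked=True)
plt.title("Survival by sex")
plt.show()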
Sex has a much bigger effect: the death rate for men is far higher than for women, and women's survival rate is dramatically higher. The "women and children first" line from the movie wasn't made up.
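The class comparison referenced next can be sketched the same way:

# Survival counts by passenger class
survived_by_class = pd.crosstab(train_df.Pclass, train_df.Survived)
survived_by_class.plot(kind='bar', stacked=True)
plt.title("Survival by passenger class")
plt.show()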
Sure enough, first class has the highest survival probability; money is a good thing after all. With the basics covered, it's time to clean the data. Age needs to be filled in, and since raw age didn't separate the outcomes much, maybe it can be bucketed into children, adults, and the elderly. Cabin takes all sorts of values, so it's probably best split into has-cabin vs. no-cabin. Name needs processing too: does name length matter, and does the title matter? And maybe SibSp and Parch can be merged into one feature called FamilySize.
First, fill in the missing ages and bucket them into child / adult / elderly:
# Fill in missing ages, then bucket into child / adult / elderly
fig = plt.figure()
#print(train_df.describe())
def set_missing_ages(df):
    # Feed the existing numeric features into a RandomForestRegressor
    age_df = df[['Age', 'Fare', 'Parch', 'SibSp', 'Pclass']]
    known_age = age_df[age_df.Age.notnull()].values
    unknown_age = age_df[age_df.Age.isnull()].values
    # y is the target: age
    y = known_age[:, 0]
    # X is the feature matrix
    X = known_age[:, 1:]
    # Fit the RandomForestRegressor
    rfr = RandomForestRegressor(random_state=0, n_estimators=2000, n_jobs=-1)
    rfr.fit(X, y)
    predictedAges = rfr.predict(unknown_age[:, 1:])
    #print(predictedAges)
    df.loc[(df.Age.isnull()), 'Age'] = predictedAges
    return df, rfr
def set_new_Age(df):
    # 0 = child, 1 = adult, 2 = elderly
    df['new_Age'] = 1
    df.loc[df["Age"] <= 12, 'new_Age'] = 0
    df.loc[df["Age"] >= 60, 'new_Age'] = 2
    return df
train_df, rfr = set_missing_ages(train_df)
train_df = set_new_Age(train_df)
Survived_Age0 = train_df.new_Age[train_df.Survived == 0].value_counts()
Survived_Age1 = train_df.new_Age[train_df.Survived == 1].value_counts()
df = pd.DataFrame({'Survived': Survived_Age1, 'Not survived': Survived_Age0})
df.plot(kind='bar', stacked=True)
plt.show()
As the chart shows, children have the highest survival probability, passengers over 60 almost never survived, and survival for the adults in between is middling.
Next, let's look at survival by whether a cabin is recorded.
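The Cabin code isn't shown in the post, but set_Cabin_type is called on the test set later, so here is a sketch consistent with that call. The particular definition (collapsing Cabin into a numeric has-cabin flag) is my assumption, reconstructed from how Cabin is later used as a numeric predictor:

def set_Cabin_type(df):
    # Assumed definition: replace the raw cabin string with a
    # binary flag -- 1 if a cabin is recorded, 0 if missing
    df.loc[df.Cabin.notnull(), 'Cabin'] = 1
    df.loc[df.Cabin.isnull(), 'Cabin'] = 0
    return df

train_df = set_Cabin_type(train_df)
Survived_cabin = train_df.Cabin[train_df.Survived == 1].value_counts()
Survived_nocabin = train_df.Cabin[train_df.Survived == 0].value_counts()
pd.DataFrame({'Survived': Survived_cabin, 'Not survived': Survived_nocabin}).plot(kind='bar', stacked=True)
plt.show()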
As the plot shows, passengers with a recorded cabin were much more likely to survive, so this feature matters quite a bit.
Next up is the Name feature; both its length and the title can be extracted. Here we look at the Title feature.
Sure enough, Title has a big effect; names aren't given at random. (This trick probably wouldn't transfer to Chinese names, though.) Next, convert Sex to a number and add two more features, FamilySize and NameLength, without analyzing them in detail; a sketch of that feature code follows.
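The train-side feature engineering is omitted from the post; below is a sketch that mirrors the test-set code further down. The regular expression in get_title is my reconstruction (the function is used later but never shown):

def get_title(name):
    # Pull the title ("Mr", "Mrs", ...) out of a name like
    # "Braund, Mr. Owen Harris" -- assumed implementation
    match = re.search(r' ([A-Za-z]+)\.', name)
    return match.group(1) if match else ""

# Sex as 0/1, family size, and name length, mirroring the test set below
train_df['Sex'] = train_df['Sex'].map({'female': 0, 'male': 1}).astype(int)
train_df["FamilySize"] = train_df["SibSp"] + train_df["Parch"]
train_df["NameLength"] = train_df["Name"].apply(lambda x: len(x))

# Map titles to numbers with the same dictionary used for the test set
titles = train_df["Name"].apply(get_title)
title_dict = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Dr": 5, "Rev": 6, "Major": 7, "Col": 7, "Mlle": 8,
              "Mme": 8, "Don": 9, "Lady": 10, "Countess": 10, "Jonkheer": 10, "Sir": 9, "Capt": 7, "Ms": 2}
for key, value in title_dict.items():
    titles[titles == key] = value
train_df["Title"] = titles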
Checking features one by one like this is tedious; sklearn's feature-selection utilities can quickly rank which features matter most.
from sklearn.feature_selection import SelectKBest, f_classif
print(train_df)
predictors = ["Pclass", "Cabin", "Sex", "new_Age", "SibSp", "Parch", "Fare", "FamilySize", "Title", "NameLength"]
selector = SelectKBest(f_classif, k=5)
selector.fit(train_df[predictors], train_df["Survived"])
# Convert p-values to scores: larger means more predictive
scores = -np.log10(selector.pvalues_)
plt.bar(range(len(predictors)), scores)
plt.xticks(range(len(predictors)), predictors, rotation='vertical')
plt.show()
From the scores, Pclass, Cabin, Sex, Fare, Title, and NameLength have the largest effect.
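As a sanity check before touching the test set, a quick cross-validation run gives a rough accuracy estimate on the training data; a sketch (the cv=3 split and the logistic model here are just illustrative):

from sklearn.model_selection import cross_val_score  # sklearn.cross_validation in older releases

predictors = ["Pclass", "Cabin", "Sex", "new_Age", "Fare", "FamilySize", "NameLength", "Title"]
scores = cross_val_score(LogisticRegression(random_state=1),
                         train_df[predictors].astype(float), train_df["Survived"], cv=3)
print(scores.mean())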
Next, apply the same preprocessing to the test set:
test_df = pd.read_csv('test.csv', header=0)
test_df['Sex'] = test_df['Sex'].map({'female': 0, 'male': 1}).astype(int)

# Fill missing fares with the median fare of the passenger's class
if len(test_df.Fare[test_df.Fare.isnull()]) > 0:
    median_fare = np.zeros(3)
    for f in range(0, 3):  # loop 0 to 2
        median_fare[f] = test_df[test_df.Pclass == f + 1]['Fare'].dropna().median()
    for f in range(0, 3):  # loop 0 to 2
        test_df.loc[(test_df.Fare.isnull()) & (test_df.Pclass == f + 1), 'Fare'] = median_fare[f]

test_df["FamilySize"] = test_df["SibSp"] + test_df["Parch"]
test_df["NameLength"] = test_df["Name"].apply(lambda x: len(x))

titles = test_df["Name"].apply(get_title)
#print(pd.value_counts(titles))
title_dict = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Dr": 5, "Rev": 6, "Major": 7, "Col": 7, "Mlle": 8,
              "Mme": 8, "Don": 9, "Lady": 10, "Countess": 10, "Jonkheer": 10, "Sir": 9, "Capt": 7, "Ms": 2, "Dona": 2}
for key, value in title_dict.items():
    titles[titles == key] = value
test_df["Title"] = titles

# Fill missing ages with the regressor fitted on the training set
age_predictor = test_df[['Age', 'Fare', 'Parch', 'SibSp', 'Pclass']]
null_age = age_predictor[test_df.Age.isnull()].values
X = null_age[:, 1:]
predictedAges = rfr.predict(X)
test_df.loc[(test_df.Age.isnull()), 'Age'] = predictedAges
test_df = set_new_Age(test_df)
test_df = set_Cabin_type(test_df)
Now feed the features into the models and predict:
predictors = ["Pclass", "Cabin", "Sex", "new_Age", "Fare", "FamilySize", "NameLength", "Title"]

# Blend two models: a random forest and logistic regression
algorithms = [
    RandomForestClassifier(n_estimators=100, min_samples_split=4, min_samples_leaf=2),
    LogisticRegression(random_state=1)
]
full_predictions = []
for alg in algorithms:
    alg.fit(train_df[predictors].values, train_df["Survived"])
    predictions = alg.predict_proba(test_df[predictors].values).astype(float)[:, 1]
    full_predictions.append(predictions)

# Weighted average: the random forest gets 3x the weight of logistic regression
output = (full_predictions[0] * 3 + full_predictions[1]) / 4
output[output <= 0.5] = 0
output[output > 0.5] = 1
output = output.astype(int)
print(output)

print('Predicting...')
ids = test_df['PassengerId'].values  # passenger IDs for the submission file
predictions_file = open("third.csv", "w", newline='')
open_file_object = csv.writer(predictions_file)
open_file_object.writerow(["PassengerId", "Survived"])
open_file_object.writerows(zip(ids, output))
predictions_file.close()
print('Done.')
OK, that's basically it. The final result is unremarkable: about 80% accuracy, around rank 2000. Two directions to keep working on: 1) what other features could be extracted; 2) the algorithms were used mostly off the shelf, so what parameters could be tuned, and are there better algorithms?
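On point 2, a first pass at parameter tuning could look something like the grid search below; the grid values are illustrative, not tuned:

from sklearn.model_selection import GridSearchCV  # sklearn.grid_search in older releases

# Search a small grid around the hand-picked forest parameters above
param_grid = {
    "n_estimators": [50, 100, 200],
    "min_samples_split": [2, 4, 8],
    "min_samples_leaf": [1, 2, 4],
}
grid = GridSearchCV(RandomForestClassifier(random_state=1), param_grid, cv=3)
grid.fit(train_df[predictors].astype(float), train_df["Survived"])
print(grid.best_params_, grid.best_score_)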