Hands-On 3: Boston House Price Prediction

1. Problem Description

Predict house prices from the given feature information, which spans more than 80 dimensions.

2. General Approach

Data exploration
Use visualization to look at the distribution of sale prices, the correlation between each feature and the sale price, and so on.

Data cleaning

  • Fill in missing data: drop features that have too many missing values or correlate very weakly with the prediction target, then fill the remaining missing values with the mode or similar statistics.

  • Convert to 0/1: when a feature is of object (string) type, convert it to numbers according to its actual meaning.

  • Standardize the numeric data.

  • Shuffle the data and split it into training and test sets (a minimal sketch of these cleaning steps follows this list).
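
A minimal sketch of these cleaning steps (the columns PoolQC, MSZoning, LotFrontage and LotArea are just examples from this dataset; the full pipeline is in dataCleaning.py below):

import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv('sources/train.csv')

# Drop a column with too many missing values (PoolQC is almost entirely NA)
df = df.drop(['PoolQC'], axis=1)

# Fill missing values: mode for a categorical column, mean for a numeric one
df['MSZoning'] = df['MSZoning'].fillna(df['MSZoning'].mode()[0])
df['LotFrontage'] = df['LotFrontage'].fillna(df['LotFrontage'].mean())

# Convert the remaining object (string) columns into 0/1 dummy columns
df = pd.get_dummies(df)

# Standardize a numeric column to zero mean and unit variance
df['LotArea'] = (df['LotArea'] - df['LotArea'].mean()) / df['LotArea'].std()

# Shuffle and split into training and validation sets
y = df.pop('SalePrice')
x_train, x_val, y_train, y_val = train_test_split(df, y, test_size=0.1, random_state=200)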

Model building

  • Elastic Net
    Elastic Net is very useful when several features are correlated with one another.
    ElasticNet is a linear regression model that uses both L1 and L2 priors as regularizers. This combination gives a sparse model with only a few non-zero weights, like Lasso, while keeping the regularization properties of Ridge. The l1_ratio parameter controls the convex combination (a special kind of linear combination) of the L1 and L2 terms; the objective function is written out after this list.

  • GBDT (Gradient Boosting Decision Tree), an iterative decision-tree method
    There are currently two different descriptions of GBDT, each with its own supporters, so take care to distinguish them when reading the literature. The residual version describes GBDT as a residual-fitting tree ensemble, in which each regression tree learns the residual left by the previous N-1 trees. The gradient version describes GBDT as a gradient-iteration tree ensemble solved with gradient descent, in which each regression tree fits the gradient step of the previous N-1 trees.
    For the first version see the blog post GBDT(MART) 迭代決策樹入門教程 | 簡介.
    For the second version see the blog post GBDT(Gradient Boosting Decision Tree), which covers only the theory and has no implementation.
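
For reference, the objective that ElasticNet minimizes (in the form used by the scikit-learn documentation, with \rho = l1_ratio) is

\min_w \; \frac{1}{2 n_{\text{samples}}} \lVert Xw - y \rVert_2^2 + \alpha \rho \lVert w \rVert_1 + \frac{\alpha (1 - \rho)}{2} \lVert w \rVert_2^2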

Code:

Visualization.py

#!/usr/bin/python
# -*- coding: utf-8 -*-

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns


#explore the dataset
sns.set(style="whitegrid",color_codes=True)
sns.set(font_scale=1)

houses=pd.read_csv('sources/train.csv')
print houses.head()

houses_test=pd.read_csv('sources/test.csv')
print houses_test.head()

print "_______________________________________________"
print houses.shape
print houses_test.shape


#info method provides information about dataset like
#total values in each column, null/not null, datatype, memory occupied etc
print "_______________________________________________"
print houses.info()


#Describe gives statistical information about numerical columns in the dataset
print "_______________________________________________"
print houses.describe()

#How many columns with different datatypes are there?
print "_______________________________________________"
print houses.get_dtype_counts()


# correlation in data
corr=houses.corr()["SalePrice"]
print "_______________________________________________"
print corr[np.argsort(corr, axis=0)[::-1]]


# plotting correlations
num_feat = houses.columns[houses.dtypes != object]
num_feat = num_feat[1:-1]
labels = []
values = []
for col in num_feat:
    labels.append(col)
    values.append(np.corrcoef(houses[col].values, houses.SalePrice.values)[0, 1])

ind = np.arange(len(labels))
width = 0.9
fig, ax = plt.subplots(figsize=(12, 40))
rects = ax.barh(ind, np.array(values), color='red')
ax.set_yticks(ind + ((width) / 2.))
ax.set_yticklabels(labels, rotation='horizontal')
ax.set_xlabel("Correlation coefficient")
ax.set_title("Correlation Coefficients w.r.t Sale Price");

plt.show()

# Check for multicollinearity, i.e. correlation between the explanatory variables
#Multicollinearity increases the standard errors of the coefficients.
# That means, multicollinearity makes some variables statistically insignificant
# when they should be significant.

# To avoid this we can do 3 things:
# Completely remove those variables
# Make new feature by adding them or by some other operation.
# Use PCA, which will reduce feature set to small number of non-collinear features.
correlations=houses.corr()
attrs = correlations.iloc[:-1,:-1] # all except target

threshold = 0.5  # keep only pairs whose absolute correlation exceeds this value
important_corrs = (attrs[abs(attrs) > threshold][attrs != 1.0]) \
    .unstack().dropna().to_dict()

unique_important_corrs = pd.DataFrame(
    list(set([(tuple(sorted(key)), important_corrs[key]) \
    for key in important_corrs])),
        columns=['Attribute Pair', 'Correlation'])

# sorted by absolute value
unique_important_corrs = unique_important_corrs.ix[
    abs(unique_important_corrs['Correlation']).argsort()[::-1]]

print "_______________________________________________"
print unique_important_corrs


#Heatmap
corrMatrix = houses[["SalePrice", "OverallQual", "GrLivArea", "GarageCars",
                     "GarageArea", "GarageYrBlt", "TotalBsmtSF", "1stFlrSF", "FullBath",
                     "TotRmsAbvGrd", "YearBuilt", "YearRemodAdd"]].corr()
sns.set(font_scale=1.10)
plt.figure(figsize=(10,10))
sns.heatmap(corrMatrix,vmax=.8,linewidths=0.01, square=True,annot=True,cmap='viridis',linecolor="white")
plt.title('Correlation between features')
plt.show()

dataCleaning.py

#!/usr/bin/python
# -*- coding: utf-8 -*-

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.utils import shuffle
from sklearn import ensemble, tree, linear_model
from sklearn.cross_validation import train_test_split, cross_val_score
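# note: in newer versions of scikit-learn, train_test_split and cross_val_score
# live in sklearn.model_selection instead of sklearn.cross_validation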
from sklearn.metrics import r2_score, mean_squared_error


houses=pd.read_csv('sources/train.csv')
houses_test=pd.read_csv('sources/test.csv')


# Helper functions
# Prints R2 and RMSE scores
def get_score(prediction, labels):
    print('R2: {}'.format(r2_score(prediction, labels)))
    print('RMSE: {}'.format(np.sqrt(mean_squared_error(prediction, labels))))

# Shows scores for train and validation sets
def train_test(estimator, x_trn, x_tst, y_trn, y_tst):
    prediction_train = estimator.predict(x_trn)
    # Printing estimator
    print(estimator)
    # Printing train scores
    get_score(prediction_train, y_trn)
    prediction_test = estimator.predict(x_tst)
    # Printing test scores
    print("Test")
    get_score(prediction_test, y_tst)


# checking for missing data
NAs=pd.concat([houses.isnull().sum(),houses_test.isnull().sum()],axis=1,keys=['Train','Test'])
print NAs[NAs.sum(axis=1)>0]


# splitting into features and labels and deleting variables I don't need
train_labels=houses.pop('SalePrice')
features=pd.concat([houses,houses_test],keys=['train','test'])
#get rid of features that have more than half of missing information or do not correlate to SalePrice
features.drop(
    ['Utilities', 'RoofMatl', 'MasVnrArea', 'BsmtFinSF1', 'BsmtFinSF2', 'BsmtUnfSF', 'Heating', 'LowQualFinSF',
     'BsmtFullBath', 'BsmtHalfBath', 'Functional', 'GarageYrBlt', 'GarageArea', 'GarageCond', 'WoodDeckSF',
     'OpenPorchSF', 'EnclosedPorch', '3SsnPorch', 'ScreenPorch', 'PoolArea', 'PoolQC', 'Fence', 'MiscFeature',
     'MiscVal'],
    axis=1, inplace=True)


# Filling NAs and converting features
# MSSubClass as str
features['MSSubClass'] = features['MSSubClass'].astype(str)

# MSZoning NA in pred. filling with most popular values
features['MSZoning'] = features['MSZoning'].fillna(features['MSZoning'].mode()[0])

# LotFrontage  NA in both sets. Fill with the mean
features['LotFrontage'] = features['LotFrontage'].fillna(features['LotFrontage'].mean())

# Alley  NA in all. NA means no access
features['Alley'] = features['Alley'].fillna('NOACCESS')

# Converting OverallCond to str
features.OverallCond = features.OverallCond.astype(str)

# MasVnrType NA in all. filling with most popular values
features['MasVnrType'] = features['MasVnrType'].fillna(features['MasVnrType'].mode()[0])

# BsmtQual, BsmtCond, BsmtExposure, BsmtFinType1, BsmtFinType2
# NA in all. NA means No basement
for col in ('BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2'):
    features[col] = features[col].fillna('NoBSMT')

# TotalBsmtSF  NA in pred. I suppose NA means 0
features['TotalBsmtSF'] = features['TotalBsmtSF'].fillna(0)

# Electrical NA in pred. filling with most popular values
features['Electrical'] = features['Electrical'].fillna(features['Electrical'].mode()[0])

# KitchenAbvGr to categorical
features['KitchenAbvGr'] = features['KitchenAbvGr'].astype(str)

# KitchenQual NA in pred. filling with most popular values
features['KitchenQual'] = features['KitchenQual'].fillna(features['KitchenQual'].mode()[0])

# FireplaceQu  NA in all. NA means No Fireplace
features['FireplaceQu'] = features['FireplaceQu'].fillna('NoFP')

# GarageType, GarageFinish, GarageQual  NA in all. NA means No Garage
for col in ('GarageType', 'GarageFinish', 'GarageQual'):
    features[col] = features[col].fillna('NoGRG')

# GarageCars  NA in pred. I suppose NA means 0
features['GarageCars'] = features['GarageCars'].fillna(0.0)

# SaleType NA in pred. filling with most popular values
features['SaleType'] = features['SaleType'].fillna(features['SaleType'].mode()[0])

# Year and Month to categorical
features['YrSold'] = features['YrSold'].astype(str)
features['MoSold'] = features['MoSold'].astype(str)

# Adding total sqfootage feature and removing Basement, 1st and 2nd floor features
features['TotalSF'] = features['TotalBsmtSF'] + features['1stFlrSF'] + features['2ndFlrSF']
features.drop(['TotalBsmtSF', '1stFlrSF', '2ndFlrSF'], axis=1, inplace=True)


# Log transform of SalePrice and plot of the transformed price distribution
train_labels=np.log(train_labels)
sns.distplot(train_labels)


# Standardizing numeric data
numeric_features = features.loc[:,['LotFrontage', 'LotArea', 'GrLivArea', 'TotalSF']]
numeric_features_standardized = (numeric_features - numeric_features.mean())/numeric_features.std()
ax = sns.pairplot(numeric_features_standardized)

#Converting categorical data to dummies
# Getting Dummies from Condition1 and Condition2
conditions = set([x for x in features['Condition1']] + [x for x in features['Condition2']])
dummies = pd.DataFrame(data=np.zeros((len(features.index), len(conditions))),
                       index=features.index, columns=conditions)
for i, cond in enumerate(zip(features['Condition1'], features['Condition2'])):
    dummies.ix[i, cond] = 1
features = pd.concat([features, dummies.add_prefix('Condition_')], axis=1)
features.drop(['Condition1', 'Condition2'], axis=1, inplace=True)

# Getting Dummies from Exterior1st and Exterior2nd
exteriors = set([x for x in features['Exterior1st']] + [x for x in features['Exterior2nd']])
dummies = pd.DataFrame(data=np.zeros((len(features.index), len(exteriors))),
                       index=features.index, columns=exteriors)
for i, ext in enumerate(zip(features['Exterior1st'], features['Exterior2nd'])):
    dummies.ix[i, ext] = 1
features = pd.concat([features, dummies.add_prefix('Exterior_')], axis=1)
features.drop(['Exterior1st', 'Exterior2nd', 'Exterior_nan'], axis=1, inplace=True)

# Getting Dummies from all other categorical vars
for col in features.dtypes[features.dtypes == 'object'].index:
    for_dummy = features.pop(col)
    features = pd.concat([features, pd.get_dummies(for_dummy, prefix=col)], axis=1)


# Obtaining standardized dataset
### Copying features
features_standardized = features.copy()
### Replacing numeric features by standardized values
features_standardized.update(numeric_features_standardized)


# Splitting train and test features
### Splitting features
train_features = features.loc['train'].drop('Id', axis=1).select_dtypes(include=[np.number]).values
test_features = features.loc['test'].drop('Id', axis=1).select_dtypes(include=[np.number]).values

### Splitting standardized features
train_features_st = features_standardized.loc['train'].drop('Id', axis=1).select_dtypes(include=[np.number]).values
test_features_st = features_standardized.loc['test'].drop('Id', axis=1).select_dtypes(include=[np.number]).values



#Splitting to train and validation sets
### Shuffling train sets
train_features_st, train_features, train_labels = shuffle(train_features_st, train_features, train_labels)
### Splitting
x_train, x_test, y_train, y_test = train_test_split(train_features, train_labels, test_size=0.1, random_state=200)
x_train_st, x_test_st, y_train_st, y_test_st = train_test_split(train_features_st, train_labels, test_size=0.1, random_state=200)




plt.show()

ElasticNet.py

#!/usr/bin/python
# -*- coding: utf-8 -*-

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import linear_model

from dataCleaning import  *

# Fit the standardized features with ElasticNetCV
ENSTest = linear_model.ElasticNetCV(alphas=[0.0001, 0.0005, 0.001, 0.01, 0.1, 1, 10], l1_ratio=[.01, .1, .5, .9, .99], max_iter=5000).fit(x_train_st, y_train_st)
train_test(ENSTest, x_train_st, x_test_st, y_train_st, y_test_st)  # prints R2 and RMSE (root mean squared error)

# Average R2 score and standard deviation of 5-fold cross-validation
scores = cross_val_score(ENSTest, train_features_st, train_labels, cv=5)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))

GradientBoosting.py

from dataCleaning import  *

GBest = ensemble.GradientBoostingRegressor(n_estimators=3000, learning_rate=0.05, max_depth=3, max_features='sqrt',
                                           min_samples_leaf=15, min_samples_split=10, loss='huber').fit(x_train,
                                                                                                        y_train)
train_test(GBest, x_train, x_test, y_train, y_test)

# max_features='sqrt' to reduce overfitting of the model.
# loss='huber' because it is more tolerant to outliers.
# All other hyper-parameters were chosen using GridSearchCV.

# Average R2 score and standard deviation of 5-fold cross-validation
scores = cross_val_score(GBest, train_features_st, train_labels, cv=5)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))

Ensembling.py

from GradientBoosting import GBest
from ElasticNet import ENSTest
from dataCleaning import *
from LR import clf_lr

# The final ensemble model is an average of the Gradient Boosting and Elastic Net predictions.


# Retraining models
GB_model = GBest.fit(train_features, train_labels)
ENST_model = ENSTest.fit(train_features_st, train_labels)
LR_model=clf_lr.fit(train_features_st, train_labels)

test=pd.read_csv('sources/test.csv')

## Getting our SalePrice estimate (np.exp undoes the earlier log transform of the labels)
Final_labels = (np.exp(GB_model.predict(test_features)) + np.exp(ENST_model.predict(test_features_st))) / 2
## Saving to CSV
pd.DataFrame({'Id': test.Id, 'SalePrice': Final_labels}).to_csv('result.csv', index =False)

Extensions

GBDT

How it works: http://blog.csdn.net/w28971023/article/details/8240756

The core of GBDT is that each tree learns the residual of the sum of all previous trees' conclusions; the residual is the amount that, added to the current prediction, yields the true value. For example, suppose A's true age is 18 but the first tree predicts 12, so it is off by 6 years: the residual is 6. In the second tree we set A's age to 6 and learn on that; if the second tree really can place A in the 6-year-old leaf, then summing the conclusions of the two trees gives A's true age. If the second tree instead concludes 5, A still has a residual of 1, so in the third tree A's age becomes 1, and learning continues. This is what Gradient Boosting means in GBDT; simple, isn't it?
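
A toy sketch of this residual idea (illustrative only, with made-up data; this is what each boosting step does with a learning rate of 1, not the internals of any particular library):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy data: predict "age" from one feature
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([12.0, 15.0, 18.0, 25.0])

prediction = np.zeros_like(y)
trees = []
for _ in range(3):
    residual = y - prediction              # what the previous trees still get wrong
    tree = DecisionTreeRegressor(max_depth=1).fit(X, residual)
    trees.append(tree)
    prediction += tree.predict(X)          # add this tree's correction

print(prediction)  # moves closer to y as more trees are added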

The biggest benefit of boosting is that the residual computation at every step effectively increases the weight of the instances that are still predicted incorrectly, while instances that are already predicted correctly tend toward zero. This way the later trees focus more and more on the instances that the earlier trees got wrong.

Applicability: this version of GBDT can be used for almost any regression problem, linear or non-linear, so its scope is much broader than that of logistic regression, which only handles linear problems. It can also be used for binary classification by setting a threshold: scores above the threshold are positive, the rest negative (a minimal sketch of this follows below).
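
A minimal sketch of the thresholding idea (made-up data, and 0.5 is an arbitrary threshold chosen for illustration):

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([0, 0, 0, 1, 1, 1])   # binary labels treated as regression targets

reg = GradientBoostingRegressor(n_estimators=50).fit(X, y)
scores = reg.predict(X)
labels = (scores > 0.5).astype(int)  # above the threshold -> positive class
print(labels)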

Parameter tuning: http://www.cnblogs.com/pinard/p/6143927.html
The main approach is to use GridSearchCV grid search to pin down the parameters one group at a time, as sketched below.
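
A sketch of that grid-search step (the grid below is illustrative, not the grid actually used in this post, and x_train/y_train are assumed to come from dataCleaning.py):

from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {
    'n_estimators': [1000, 3000],
    'learning_rate': [0.05, 0.1],
    'max_depth': [3, 4],
    'min_samples_leaf': [10, 15],
}
grid = GridSearchCV(GradientBoostingRegressor(loss='huber', max_features='sqrt'),
                    param_grid, cv=5, scoring='r2')
grid.fit(x_train, y_train)
print(grid.best_params_)
print(grid.best_score_)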
