Official explanation
In Python, xgboost can report feature importance via get_fscore. First, let's look at the official documentation for this method:
get_score(fmap='', importance_type='weight')
Get feature importance of each feature. Importance type can be defined as:
'weight': the number of times a feature is used to split the data across all trees.
'gain': the average gain across all splits the feature is used in.
'cover': the average coverage across all splits the feature is used in.
'total_gain': the total gain across all splits the feature is used in.
'total_cover': the total coverage across all splits the feature is used in.
The definitions alone are not very intuitive, so below we train a simple model, print these importance metrics, and then interpret them against the definitions.
Code walkthrough
First, construct a dataset of 10 examples, each with two features and a label of 0 or 1, i.e. a binary classification problem:
import numpy as np
sample_num = 10
feature_num = 2
np.random.seed(0)
data = np.random.randn(sample_num, feature_num)
np.random.seed(0)
label = np.random.randint(0, 2, sample_num)
Print data and label:
# data:
array([[ 1.76405235, 0.40015721],
[ 0.97873798, 2.2408932 ],
[ 1.86755799, -0.97727788],
[ 0.95008842, -0.15135721],
[-0.10321885, 0.4105985 ],
[ 0.14404357, 1.45427351],
[ 0.76103773, 0.12167502],
[ 0.44386323, 0.33367433],
[ 1.49407907, -0.20515826],
[ 0.3130677 , -0.85409574]])
# label:
array([0, 1, 1, 0, 1, 1, 1, 1, 1, 1])
Now train. To keep the calculations below easy to follow, set the tree depth to 3 ('max_depth': 3) and use only one tree (num_boost_round=1):
import xgboost as xgb
train_data = xgb.DMatrix(data, label=label)
params = {'max_depth': 3}
bst = xgb.train(params, train_data, num_boost_round=1)
Print the importance metrics:
for importance_type in ('weight', 'gain', 'cover', 'total_gain', 'total_cover'):
    print('%s: ' % importance_type, bst.get_score(importance_type=importance_type))
Results:
weight: {'f0': 1, 'f1': 2}
gain: {'f0': 0.265151441, 'f1': 0.375000015}
cover: {'f0': 10.0, 'f1': 4.0}
total_gain: {'f0': 0.265151441, 'f1': 0.75000003}
total_cover: {'f0': 10.0, 'f1': 8.0}
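As an aside, the get_fscore mentioned at the start is, as far as the Python API documents it, just shorthand for the weight metric; the following call should print the same counts as get_score(importance_type='weight'):

print(bst.get_fscore())  # expected: {'f0': 1, 'f1': 2}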
Plot the single tree:
xgb.to_graphviz(bst, num_trees=0)
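If graphviz is not installed, the same tree can also be inspected as plain text via the Booster's get_dump method; a minimal alternative:

# text dump of tree 0: one line per node, showing the split feature and threshold
print(bst.get_dump()[0])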
Below, with this diagram in hand, let's explain what each metric means:
weight: {'f0': 1, 'f1': 2}
The number of times a feature is used to split a node, across all trees. In this example, f0 is used to split node 1, and f1 is used to split nodes 2 and 3, so weight_f0 = 1 and weight_f1 = 2.
total_cover: {'f0': 10.0, 'f1': 8.0}
At node 1, f0 is used to split all 10 examples, and f0 is not used again in any later node, so f0's total_cover is 10.0. Here, the 5 examples with f0 >= 0.855563045 fall into the right subtree;
At node 2, f1 is used to split the 5 examples that fell into the right subtree above; of these, the 3 examples with f1 >= -0.178257734 fall into the right subtree;
At node 3, f1 is used to split those 3 examples that fell into the right subtree.
To sum up, f0 splits 10 examples at node 1, so total_cover_f0 = 10; f1 splits 5 and 3 examples at nodes 2 and 3 respectively, so total_cover_f1 = 5 + 3 = 8. total_cover is the total number of examples a feature handles (covers) over all the node splits it participates in, across all trees.
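These per-node cover counts can also be read straight from the tree dump when statistics are requested; with_stats=True is a documented option of get_dump, and each split line then carries gain=... and cover=... fields:

# dump tree 0 with per-node statistics; summing cover over a feature's
# split nodes should reproduce its total_cover
print(bst.get_dump(with_stats=True)[0])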
cover: {'f0': 10.0, 'f1': 4.0}
cover = total_cover / weight. In this example, cover_f0 = 10 / 1 = 10 and cover_f1 = 8 / 2 = 4.
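This relationship is easy to check directly from the dictionaries returned above; a minimal verification:

w = bst.get_score(importance_type='weight')
tc = bst.get_score(importance_type='total_cover')
print({f: tc[f] / w[f] for f in w})  # expected: {'f0': 10.0, 'f1': 4.0}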
total_gain: {'f0': 0.265151441, 'f1': 0.75000003}
The total gain a feature contributes over all the node splits it participates in, across all trees. If the information content before and after a split is measured as i0 and i1 (for example with entropy or Gini impurity), the gain is (i0 - i1).
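Strictly speaking, XGBoost computes split gain from gradient statistics rather than entropy or Gini impurity; for reference, the structured-score gain from the XGBoost paper (with G_L, H_L and G_R, H_R the sums of first- and second-order gradients in the left and right children, and \lambda, \gamma the regularization parameters) is:

\text{Gain} = \frac{1}{2}\left[\frac{G_L^2}{H_L+\lambda} + \frac{G_R^2}{H_R+\lambda} - \frac{(G_L+G_R)^2}{H_L+H_R+\lambda}\right] - \gamma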
gain: {'f0': 0.265151441, 'f1': 0.375000015}
gain = total_gain / weight. In this example, gain_f0 = 0.265151441 / 1 and gain_f1 = 0.75000003 / 2 = 0.375000015.
In day-to-day use, total_gain is usually the metric of choice for ranking feature importance.
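For instance, ranking features by total_gain is a one-liner over the dictionary returned by get_score:

tg = bst.get_score(importance_type='total_gain')
print(sorted(tg.items(), key=lambda kv: kv[1], reverse=True))
# expected: [('f1', 0.75000003), ('f0', 0.265151441)]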
There is another way to build an xgboost classifier, analogous to the classifiers in sklearn, which trains the model through the fit/predict-style interface:
from xgboost import XGBClassifier
cls = XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
                    colsample_bytree=1, gamma=0, learning_rate=0.07, max_delta_step=0,
                    max_depth=3, min_child_weight=1, missing=None, n_estimators=300,
                    n_jobs=1, nthread=None, objective='binary:logistic', random_state=0,
                    reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
                    silent=True, subsample=1)
# Train the model (required before reading importances below)
cls.fit(data, label)
The feature importance metrics can then be obtained as follows:
for importance_type in ('weight', 'gain', 'cover', 'total_gain', 'total_cover'):
    print('%s: ' % importance_type, cls.get_booster().get_score(importance_type=importance_type))
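The sklearn wrapper additionally exposes a feature_importances_ attribute, which returns one normalized score per feature; exactly which metric it reflects depends on the xgboost version, so get_booster().get_score() remains the unambiguous route when a specific importance type is needed:

print(cls.feature_importances_)  # numpy array of normalized per-feature scores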