Decision Tree Models from 李航's 《统计学习方法》 (Statistical Learning Methods): Python and sklearn Implementations, plus Exercises

  • 李航 (from the book):
    A decision tree is a basic method for classification and regression.
    It has a tree structure; in a classification problem it represents the process of classifying instances based on their features.
    It can be viewed as a set of if-then rules, or as a conditional probability distribution defined on the feature space and the class space.
    Its main advantages are that the model is readable and that classification is fast.
    During learning, a decision tree model is built from the training data by minimizing a loss function; during prediction, new data are classified with the learned tree.
    Decision tree learning usually involves three steps: feature selection, tree generation, and tree pruning.
    The main ideas go back to the ID3 algorithm proposed by Quinlan in 1986 and his C4.5 algorithm of 1993, as well as the CART algorithm proposed by Breiman et al. in 1984.

  • Some advantages of decision trees:
    Simple to understand and to interpret; trees can be visualized and the learned rules read off directly (see the sketch after this list).
    Little data preparation is needed. Other techniques often require normalization, dummy variables, and removal of blank values. Note, however, that this tree module does not support missing values.
    The cost of using the tree (e.g. for predicting data) is logarithmic in the number of data points used to train it.
    Both numerical and categorical variables can be handled; most other techniques are specialized to one type of variable.
    Multi-output problems can be handled.
    A white-box model is used: an observed situation is easily explained by boolean logic, whereas the results of a black-box model (e.g. an artificial neural network) can be very hard to interpret.
    The model can be validated with statistical tests, which helps account for its reliability.
    Decision trees perform reasonably well even when their assumptions are somewhat violated by the true model that generated the data.
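
As a small illustration of the visualization point above, here is a minimal sketch (my addition; it uses sklearn's bundled iris dataset rather than the data in this post) that prints a fitted tree's if-then rules:

from sklearn.datasets import load_iris
from sklearn import tree

iris = load_iris()
clf = tree.DecisionTreeClassifier(max_depth=2).fit(iris.data, iris.target)
# export_text renders the fitted tree as plain-text if-then rules
print(tree.export_text(clf, feature_names=list(iris.feature_names)))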

  • Some disadvantages of decision trees:
    Decision tree learners can create over-complex trees that do not generalize well, i.e. they overfit. Mechanisms such as pruning (not supported in sklearn at the time this was written), setting the minimum number of samples required at a leaf node, or setting the maximum tree depth help avoid this (see the sketch after this list).
    Trees can be unstable: small variations in the data may produce a completely different tree. This is mitigated by using decision trees within an ensemble.
    Learning an optimal decision tree is NP-complete under several aspects of optimality, even for simple concepts. Practical algorithms are therefore heuristic, e.g. greedy algorithms that make the locally optimal decision at each node; such algorithms cannot guarantee a globally optimal tree. Training several trees on random subsamples of the samples and features can mitigate this.
    Some concepts are hard for trees to learn because decision trees do not express them easily, e.g. XOR, parity, or multiplexer problems.
    Trees become biased if some classes dominate; it is therefore recommended to balance the dataset before training.
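
The overfitting point above maps directly onto sklearn's constructor parameters; note that newer sklearn versions also support cost-complexity pruning via the ccp_alpha parameter. A minimal sketch on toy data of my own (not from the post):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X = rng.rand(200, 5)                  # toy features, for illustration only
y = (X[:, 0] > 0.5).astype(int)

# The two anti-overfitting knobs named above: depth cap and minimum leaf size.
clf = DecisionTreeClassifier(max_depth=3, min_samples_leaf=10).fit(X, y)
print(clf.get_depth(), clf.score(X, y))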

  • Data
    The array below encodes the loan-applicant table used in the book's Chapter 5 examples: the four feature columns are (presumably) age, has-a-job, owns-a-house, and credit standing, coded as small integers, and the label marks whether the loan is approved.

import numpy as np

# Loan-applicant samples; categorical values are stored as integer codes.
data = np.array([[1,2,2,3],
                 [1,2,2,2],
                 [1,1,2,2],
                 [1,1,1,3],
                 [1,2,2,3],
                 [2,2,2,3],
                 [2,2,2,2],
                 [2,1,1,2],
                 [2,2,1,1],
                 [2,2,1,1],
                 [3,2,1,1],
                 [3,2,1,2],
                 [3,1,2,2],
                 [3,1,2,1],
                 [3,2,2,3]])
label = np.array([0,0,1,1,0,0,0,1,1,1,1,1,1,1,0])
target = [3,1,2,1]  # the new instance to classify

Python implementation of the ID3 algorithm for Example 5.1
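
The ID3 tree below splits on the feature with the largest information gain; in the book's notation, with classes C_k and the subsets D_i induced by feature A:

H(D) = -\sum_{k=1}^{K} \frac{|C_k|}{|D|} \log_2 \frac{|C_k|}{|D|}, \qquad
H(D \mid A) = \sum_{i=1}^{n} \frac{|D_i|}{|D|} H(D_i), \qquad
g(D, A) = H(D) - H(D \mid A)

In the code, __calculate_hd computes H(D), __calculate_hda computes H(D|A) for each remaining feature, and a node becomes a leaf once the best gain falls below epsilon.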

import numpy as np

class Tree(object):
    def __init__(self, node_type, Class=None, features=None):
        self.node_type = node_type       # 'leaf' or 'internal'
        self.dict = {}                   # feature value -> subtree
        self.Class = Class               # class label (leaf nodes only)
        self.feature_index = features    # index of the splitting feature

    def add_tree(self, val, tree):
        self.dict[val] = tree

    def predict(self, features):
        # Walk down the tree following the value of the splitting feature.
        if self.node_type == 'leaf':
            return self.Class
        tree = self.dict[features[self.feature_index]]
        return tree.predict(features)

class Id3_tree(object):
    def __init__(self, data, label, features, epsilon):
        self.leaf = 'leaf'
        self.internal = 'internal'
        self.epsilon = epsilon           # information-gain threshold for stopping
        self.root = self.__build(data, label, features)

    def __build(self, data, labels, features):
        # All samples share one label: return a leaf of that class.
        label_kinds = np.unique(labels)
        if len(label_kinds) == 1:
            return Tree(self.leaf, label_kinds[0])
        # Majority class, used when no feature is informative enough.
        (max_class, max_len) = max([(c, np.sum(labels == c)) for c in label_kinds],
                                   key=lambda x: x[1])
        features_num = len(features)
        if features_num == 0:
            return Tree(self.leaf, max_class)

        # Information gain g(D,A) = H(D) - H(D|A) for each remaining feature.
        Hd = self.__calculate_hd(labels)
        Hda = self.__calculate_hda(data, labels, features_num)
        Gda = np.tile(Hd, features_num) - Hda

        best_col = list(Gda).index(np.max(Gda))
        if Gda[best_col] < self.epsilon:
            return Tree(self.leaf, Class=max_class)
        # Map the column index back to the original feature index so that
        # predict() can index the full feature vector.
        best_feature = features[best_col]
        data_tmp = np.hstack((data[:, :best_col], data[:, best_col + 1:]))
        sub_features = features[:best_col] + features[best_col + 1:]
        tree = Tree(self.internal, features=best_feature)
        for feature in np.unique(data[:, best_col]):
            dx = np.where(data[:, best_col] == feature)
            sub_tree = self.__build(data_tmp[dx[0]], labels[dx[0]], sub_features)
            tree.add_tree(feature, sub_tree)
        return tree

    def __calculate_hd(self, labels):
        # Empirical entropy H(D) of the label distribution.
        label_kinds = np.unique(labels)
        Hd = 0
        for label in label_kinds:
            count = list(labels).count(label)
            p = float(count) / float(len(labels))
            Hd -= p * np.log2(p)
        return Hd

    def __calculate_hda(self, data, labels, features_num):
        # Conditional entropy H(D|A) for each feature column of the current data.
        Hda = np.zeros(features_num)
        for feature_index in range(features_num):
            for feature in np.unique(data[:, feature_index]):
                dx = np.where(data[:, feature_index] == feature)
                p = float(len(dx[0])) / float(len(labels))
                Hda[feature_index] += p * self.__calculate_hd(labels[dx])
        return Hda

id3_tree = Id3_tree(data, label, list(range(4)), 0.1)
prediction = id3_tree.root.predict(target)
print('Target belongs to class %s' % prediction)

Python implementation of the C4.5 algorithm for Example 5.1
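
C4.5 differs from ID3 only in the splitting criterion: it uses the information-gain ratio, which penalizes features with many distinct values:

g_R(D, A) = \frac{g(D, A)}{H_A(D)}, \qquad
H_A(D) = -\sum_{i=1}^{n} \frac{|D_i|}{|D|} \log_2 \frac{|D_i|}{|D|}

In the code, __calculate_hda_ha returns both H(D|A) and the split entropy H_A(D).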

import numpy as np

class Tree(object):
    def __init__(self, node_type, Class=None, features=None):
        self.node_type = node_type       # 'leaf' or 'internal'
        self.dict = {}                   # feature value -> subtree
        self.Class = Class               # class label (leaf nodes only)
        self.feature_index = features    # index of the splitting feature

    def add_tree(self, val, tree):
        self.dict[val] = tree

    def predict(self, features):
        # Walk down the tree following the value of the splitting feature.
        if self.node_type == 'leaf':
            return self.Class
        tree = self.dict[features[self.feature_index]]
        return tree.predict(features)

class C45_tree(object):
    def __init__(self, data, label, features, epsilon):
        self.leaf = 'leaf'
        self.internal = 'internal'
        self.epsilon = epsilon           # gain-ratio threshold for stopping
        self.root = self.__build(data, label, features)

    def __build(self, data, labels, features):
        label_kinds = np.unique(labels)
        if len(label_kinds) == 1:
            return Tree(self.leaf, label_kinds[0])
        # Majority class, used when no feature is informative enough.
        (max_class, max_len) = max([(c, np.sum(labels == c)) for c in label_kinds],
                                   key=lambda x: x[1])
        features_num = len(features)
        if features_num == 0:
            return Tree(self.leaf, max_class)

        # Gain ratio g_R(D,A) = g(D,A) / H_A(D); C4.5 splits on the largest.
        Hd = self.__calculate_hd(labels)
        Hda, Ha = self.__calculate_hda_ha(data, labels, features_num)
        Gda = np.tile(Hd, features_num) - Hda
        Grda = Gda / np.where(Ha > 0, Ha, 1)   # guard: a constant feature has zero gain
        best_col = list(Grda).index(np.max(Grda))
        if Grda[best_col] < self.epsilon:
            return Tree(self.leaf, Class=max_class)
        best_feature = features[best_col]      # original feature index, for predict()
        data_tmp = np.hstack((data[:, :best_col], data[:, best_col + 1:]))
        sub_features = features[:best_col] + features[best_col + 1:]
        tree = Tree(self.internal, features=best_feature)
        for feature in np.unique(data[:, best_col]):
            dx = np.where(data[:, best_col] == feature)
            sub_tree = self.__build(data_tmp[dx[0]], labels[dx[0]], sub_features)
            tree.add_tree(feature, sub_tree)
        return tree

    def __calculate_hd(self, labels):
        # Empirical entropy H(D) of the label distribution.
        label_kinds = np.unique(labels)
        Hd = 0
        for label in label_kinds:
            count = list(labels).count(label)
            p = float(count) / float(len(labels))
            Hd -= p * np.log2(p)
        return Hd

    def __calculate_hda_ha(self, data, labels, features_num):
        # H(D|A) and the split entropy H_A(D) for each feature column.
        Hda = np.zeros(features_num)
        Ha = np.zeros(features_num)
        for feature_index in range(features_num):
            for feature in np.unique(data[:, feature_index]):
                dx = np.where(data[:, feature_index] == feature)
                p = float(len(dx[0])) / float(len(labels))
                Hda[feature_index] += p * self.__calculate_hd(labels[dx])
                Ha[feature_index] -= p * np.log2(p)
        return Hda, Ha

c45_tree = C45_tree(data, label, list(range(4)), 0.1)
prediction = c45_tree.root.predict(target)
print('Target belongs to class %s' % prediction)

Python implementation of the CART algorithm for Example 5.1
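
CART grows a binary tree; for classification it picks the (feature, value) split that minimizes the Gini index:

\mathrm{Gini}(D) = \sum_{k=1}^{K} p_k (1 - p_k) = 1 - \sum_{k=1}^{K} p_k^2, \qquad
\mathrm{Gini}(D, A = a) = \frac{|D_1|}{|D|} \mathrm{Gini}(D_1) + \frac{|D_2|}{|D|} \mathrm{Gini}(D_2)

where D_1 holds the samples with A = a and D_2 the rest. In the code, __calculate_q computes the impurity and __calculate_ga the Gini index of every candidate split.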

import numpy as np

class Tree(object):
    def __init__(self, node_type, Class=None, feature_index=None, feature=None):
        self.node_type = node_type          # 'leaf' or 'internal'
        self.dict = {}                      # split value / -1 -> subtree
        self.Class = Class                  # class label (leaf nodes only)
        self.feature_index = feature_index  # original index of the split feature
        self.feature = feature              # the value tested at this binary split

    def add_tree(self, val, tree):
        self.dict[val] = tree

    def predict(self, features):
        if self.node_type == 'leaf':
            return self.Class
        # Binary split: one branch for the tested value, key -1 for the rest.
        if features[self.feature_index] == self.feature:
            tree = self.dict[self.feature]
        else:
            tree = self.dict[-1]
        return tree.predict(features)

class Cart_tree(object):
    def __init__(self, data, label, features):
        self.leaf = 'leaf'
        self.internal = 'internal'
        self.root = self.__build(data, label, features)

    def __build(self, data, labels, features):
        label_kinds = np.unique(labels)
        if len(label_kinds) == 1:
            return Tree(self.leaf, label_kinds[0])
        # Majority class, used when the features are exhausted.
        (max_class, max_len) = max([(c, np.sum(labels == c)) for c in label_kinds],
                                   key=lambda x: x[1])
        features_num = len(features)
        if features_num == 0:
            return Tree(self.leaf, max_class)

        # Pick the (feature, value) pair with the smallest Gini index.
        Ga = self.__calculate_ga(data, labels, features_num)
        ga_fea_min = min(Ga[0])
        fea_local = list(Ga[0]).index(ga_fea_min)
        ga_fea_index = 0
        for dx, i in enumerate(Ga[1:]):
            ga_fea_min_tmp = min(i)
            if ga_fea_min_tmp < ga_fea_min:
                fea_local = list(i).index(ga_fea_min_tmp)
                ga_fea_min = ga_fea_min_tmp
                ga_fea_index = dx + 1
        best_feature = features[ga_fea_index]   # original feature index, for predict()
        data_tmp = np.hstack((data[:, :ga_fea_index], data[:, ga_fea_index + 1:]))
        sub_features = features[:ga_fea_index] + features[ga_fea_index + 1:]
        feature_s = np.unique(data[:, ga_fea_index])
        tree = Tree(self.internal, feature_index=best_feature, feature=feature_s[fea_local])
        # "Yes" branch: samples whose value equals the split value.
        dx_y = np.where(data[:, ga_fea_index] == feature_s[fea_local])
        tree.add_tree(feature_s[fea_local], self.__build(data_tmp[dx_y], labels[dx_y], sub_features))
        # "No" branch, keyed by -1: all other samples (majority-class leaf if empty).
        dx_n = np.where(data[:, ga_fea_index] != feature_s[fea_local])
        if len(dx_n[0]) == 0:
            tree.add_tree(-1, Tree(self.leaf, max_class))
        else:
            tree.add_tree(-1, self.__build(data_tmp[dx_n], labels[dx_n], sub_features))
        return tree

    def __calculate_q(self, labels):
        # Gini impurity of a label set: sum_k p_k * (1 - p_k).
        label_kinds = np.unique(labels)
        q = 0
        for label in label_kinds:
            count = list(labels).count(label)
            p = float(count) / float(len(labels))
            q += p * (1 - p)
        return q

    def __calculate_ga(self, data, labels, features_num):
        # Gini index of each binary split "feature == value" vs. "feature != value".
        Ga = []
        for feature_index in range(features_num):
            feature_s = np.unique(data[:, feature_index])
            Gai = np.zeros(len(feature_s))
            for index, feature in enumerate(feature_s):
                dx_y = np.where(data[:, feature_index] == feature)
                p = float(len(dx_y[0])) / float(len(labels))
                q_y = self.__calculate_q(labels[dx_y])
                dx_n = np.where(data[:, feature_index] != feature)
                q_n = self.__calculate_q(labels[dx_n])
                Gai[index] = p * q_y + (1 - p) * q_n
            Ga.append(Gai)
        return Ga

cart_tree = Cart_tree(data, label, list(range(4)))
prediction = cart_tree.root.predict(target)
print('Target belongs to class %s' % prediction)
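
As a quick cross-check (my addition, not from the book), sklearn's tree can be fitted on the same arrays. Note that sklearn treats the integer codes as ordered numeric values, so its splits need not coincide with the hand-rolled trees:

from sklearn.tree import DecisionTreeClassifier

clf = DecisionTreeClassifier(criterion='entropy').fit(data, label)
print(clf.predict([target]))   # expected [1], agreeing with the trees above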

The sklearn example below uses the Kaggle MNIST (digit recognizer) data, with the features reduced to six dimensions by PCA.

# -*- coding: utf-8 -*-
"""
An example of classification with sklearn's DecisionTreeClassifier,
using the Kaggle handwritten-digit (MNIST) dataset.
"""
import pandas as pd
import numpy as np
from sklearn import tree
from sklearn.decomposition import PCA

# Load a CSV file and project the pixel features down to n dimensions.
# The PCA must be fitted on the training set only and then reused for the
# test set; otherwise the two sets live in different projections.
def load_data(filename, n, mode, pca=None):
    data_pd = pd.read_csv(filename)
    data = np.asarray(data_pd)
    if not mode == 'test':
        pca = PCA(n_components=n)
        dataset = pca.fit_transform(data[:, 1:])  # column 0 is the label
        return dataset, data[:, 0], pca
    else:
        dataset = pca.transform(data)             # reuse the fitted PCA
        return dataset, None, pca

def main(train_data_path, test_data_path, n_dim):
    train_data, train_label, pca = load_data(train_data_path, n_dim, 'train')
    print("Train set: " + repr(len(train_data)))
    test_data, _, _ = load_data(test_data_path, n_dim, 'test', pca=pca)
    print("Test set: " + repr(len(test_data)))
    dt = tree.DecisionTreeClassifier()
    # Fit on the training set
    dt.fit(train_data, train_label)
    # Accuracy on the training set (optimistic for an unpruned tree)
    score = dt.score(train_data, train_label)
    print(">Training accuracy = " + repr(score))
    predictions = []
    for index in range(len(test_data)):
        # Predicted class for one sample
        result = dt.predict([test_data[index]])
        # Class-probability array for the same sample
        predict2 = dt.predict_proba([test_data[index]])
        predictions.append([index + 1, result[0]])
        print(">Index : %s, predicted = %s   p%s" % (index + 1, result[0], predict2))
    columns = ['ImageId', 'Label']
    save_file = pd.DataFrame(columns=columns, data=predictions)
    save_file.to_csv('m.csv', index=False, encoding="utf-8")

if __name__ == "__main__":
    train_data_path = 'train.csv'
    test_data_path = 'test.csv'   # Kaggle's unlabeled test file
    n_dim = 6
    main(train_data_path, test_data_path, n_dim)
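
One caveat about the script above: it reports accuracy on the very data the tree was grown on, which for an unpruned tree is close to 1.0 and says little about generalization. A minimal sketch of a fairer estimate, assuming the same train_data and train_label arrays as above:

from sklearn import tree
from sklearn.model_selection import cross_val_score

def cv_accuracy(train_data, train_label):
    # 5-fold cross-validation instead of scoring on the training data itself.
    scores = cross_val_score(tree.DecisionTreeClassifier(), train_data, train_label, cv=5)
    return scores.mean(), scores.std()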

Exercises
