2 Random Forest

  • Building a decision tree is a recursive process. Ideally every record ends up perfectly classified, i.e. every leaf node of the finished tree holds a single, unambiguous class; in practice this condition is rarely met, so the recursion can be hard to stop. Even when construction does finish, the tree often has far too many nodes, which leads to overfitting. Real implementations therefore set stopping conditions and halt construction as soon as one is reached. That alone does not fully solve the problem: the telltale symptom of overfitting is a tree whose error rate is very low on the training data but very high on the test data. Its common causes are (1) noise in the training data and (2) training data that is not representative. Because an overfit tree typically has too many nodes, a finished tree is often pruned back (prune tree), but pruning still does not address the root cause; the random forest algorithm was introduced precisely because it handles overfitting well. A minimal sketch of such a stopping rule follows.
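    As a minimal sketch of such a stopping rule (the helper name should_stop is hypothetical, not taken from the code below), pre-pruning simply refuses to grow a branch once it is deep enough or a node holds too few records:

def should_stop(depth, max_depth, group, min_size):
    # stop growing this branch when the depth cap is reached,
    # or when the node holds too few records to split further
    return depth >= max_depth or len(group) <= min_size

    The implementation below applies exactly these two checks, max_depth and min_size, inside its split() function.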
    Below is the complete random forest code, together with the dataset. Some parts are not obvious at first reading, so it helps to lay out the general steps of the algorithm, get the overall picture first, and only then study the details:
    (1) draw a bootstrap sample from the training set, i.e. sample rows with replacement;
    (2) grow a decision tree on that sample, considering only a small random subset of the features at each split;
    (3) repeat steps (1)-(2) to build n_trees independent trees;
    (4) to classify a new row, let every tree vote and take the majority class (bagging).
# Random Forest Algorithm on Sonar Dataset
from random import seed
from random import randrange
from csv import reader
from math import sqrt

# Load a CSV file
def load_csv(filename):
    dataset = list()
    with open(filename, 'r') as file:
        csv_reader = reader(file)
        for row in csv_reader:
            # skip blank lines
            if not row:
                continue
            dataset.append(row)
    return dataset

# Convert string column to float
def str_column_to_float(dataset, column):
    for row in dataset:
        row[column] = float(row[column].strip())

# Convert string column to integer
def str_column_to_int(dataset, column):
    class_values = [row[column] for row in dataset]
    # sort the labels so the mapping is reproducible across runs
    unique = sorted(set(class_values))
    lookup = dict()
    for i, value in enumerate(unique):
        lookup[value] = i
    for row in dataset:
        row[column] = lookup[row[column]]
    return lookup
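
# A toy illustration (not part of the main program): the sonar class column holds
# the strings 'M' and 'R'; str_column_to_int replaces them in place and returns
# the label-to-integer mapping it used.
rows = [['0.02', 'R'], ['0.45', 'M'], ['0.11', 'R']]
print(str_column_to_int(rows, 1))  # {'M': 0, 'R': 1} given the sorted mapping
print(rows)                        # [['0.02', 1], ['0.45', 0], ['0.11', 1]]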

# Split a dataset into k folds
def cross_validation_split(dataset, n_folds):
    dataset_split = list()
    dataset_copy = list(dataset)
    # integer division: any leftover rows (len(dataset) % n_folds) are simply not used
    fold_size = int(len(dataset) / n_folds)
    for i in range(n_folds):
        fold = list()
        while len(fold) < fold_size:
            index = randrange(len(dataset_copy))
            fold.append(dataset_copy.pop(index))
        dataset_split.append(fold)
    return dataset_split
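
# A quick sanity check (toy data, not part of the main program): 10 rows split
# into 3 folds gives 3 rows per fold, and the leftover row is simply unused.
toy = [[i] for i in range(10)]
print([len(fold) for fold in cross_validation_split(toy, 3)])  # [3, 3, 3]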

# Calculate accuracy percentage
def accuracy_metric(actual, predicted):
    correct = 0
    for i in range(len(actual)):
        if actual[i] == predicted[i]:
            correct += 1
    return correct / float(len(actual)) * 100.0

# Evaluate an algorithm using a cross validation split
def evaluate_algorithm(dataset, algorithm, n_folds, *args):
    folds = cross_validation_split(dataset, n_folds)
    scores = list()
    for fold in folds:
        train_set = list(folds)
        train_set.remove(fold)
        # flatten the remaining folds into a single training list
        train_set = sum(train_set, [])
        test_set = list()
        for row in fold:
            row_copy = list(row)
            test_set.append(row_copy)
            row_copy[-1] = None  # hide the true label from the algorithm
        predicted = algorithm(train_set, test_set, *args)
        actual = [row[-1] for row in fold]
        accuracy = accuracy_metric(actual, predicted)
        scores.append(accuracy)
    return scores

# Split a dataset based on an attribute and an attribute value
def test_split(index, value, dataset):
    left, right = list(), list()
    for row in dataset:
        if row[index] < value:
            left.append(row)
        else:
            right.append(row)
    return left, right

# Calculate the Gini index for a split dataset
def gini_index(groups, classes):
    # count all samples at split point
    n_instances = float(sum([len(group) for group in groups]))
    # sum weighted Gini index for each group
    gini = 0.0
    for group in groups:
        size = float(len(group))
        # avoid divide by zero
        if size == 0:
            continue
        score = 0.0
        # score the group based on the score for each class
        for class_val in classes:
            p = [row[-1] for row in group].count(class_val) / size
            score += p * p
        # weight the group score by its relative size
        gini += (1.0 - score) * (size / n_instances)
    return gini
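
# Worked example (toy groups, not part of the main program): each row is
# [feature, class]. A perfect split scores 0.0; a split that leaves every
# group half-and-half scores 0.5, the worst case for two classes.
print(gini_index([[[1, 0], [1, 0]], [[1, 1], [1, 1]]], [0, 1]))  # 0.0
print(gini_index([[[1, 0], [1, 1]], [[1, 0], [1, 1]]], [0, 1]))  # 0.5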

# Select the best split point for a dataset
def get_split(dataset, n_features):
    class_values = list(set(row[-1] for row in dataset))
    # sentinel values; any real candidate split replaces them (gini never exceeds 1.0)
    b_index, b_value, b_score, b_groups = None, None, float('inf'), None
    features = list()
    while len(features) < n_features:
        index = randrange(len(dataset[0])-1)
        if index not in features:
            features.append(index)
    for index in features:
        for row in dataset:
            groups = test_split(index, row[index], dataset)
            gini = gini_index(groups, class_values)
            if gini < b_score:
                b_index, b_value, b_score, b_groups = index, row[index], gini, groups
    return {'index':b_index, 'value':b_value, 'groups':b_groups}
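
# Toy call (not part of the main program): with a single feature the random
# feature choice is forced, and the exhaustive scan over every row's value
# finds the threshold 7.4 that separates the two classes cleanly.
toy = [[2.7, 0], [1.4, 0], [3.3, 0], [7.4, 1], [9.0, 1], [7.6, 1]]
node = get_split(toy, 1)
print(node['index'], node['value'])  # 0 7.4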

# Create a terminal node value
def to_terminal(group):
    outcomes = [row[-1] for row in group]
    return max(set(outcomes), key=outcomes.count)

# Create child splits for a node or make terminal
def split(node, max_depth, min_size, n_features, depth):
    left, right = node['groups']
    del(node['groups'])
    # check for a no split
    if not left or not right:
        node['left'] = node['right'] = to_terminal(left + right)
        return
    # check for max depth
    if depth >= max_depth:
        node['left'], node['right'] = to_terminal(left), to_terminal(right)
        return
    # process left child
    if len(left) <= min_size:
        node['left'] = to_terminal(left)
    else:
        node['left'] = get_split(left, n_features)
        split(node['left'], max_depth, min_size, n_features, depth+1)
    # process right child
    if len(right) <= min_size:
        node['right'] = to_terminal(right)
    else:
        node['right'] = get_split(right, n_features)
        split(node['right'], max_depth, min_size, n_features, depth+1)

# Build a decision tree
def build_tree(train, max_depth, min_size, n_features):
    root = get_split(train, n_features)
    split(root, max_depth, min_size, n_features, 1)
    return root

# Make a prediction with a decision tree
def predict(node, row):
    if row[node['index']] < node['value']:
        if isinstance(node['left'], dict):
            return predict(node['left'], row)
        else:
            return node['left']
    else:
        if isinstance(node['right'], dict):
            return predict(node['right'], row)
        else:
            return node['right']
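
# Putting build_tree and predict together on the same toy rows (not part of
# the main program); test rows carry None where the label would be, mirroring
# what evaluate_algorithm does.
toy = [[2.7, 0], [1.4, 0], [3.3, 0], [7.4, 1], [9.0, 1], [7.6, 1]]
tree = build_tree(toy, 2, 1, 1)    # max_depth=2, min_size=1, n_features=1
print(predict(tree, [1.0, None]))  # 0 -- falls left of the 7.4 cut
print(predict(tree, [8.0, None]))  # 1 -- falls right of it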

# Create a random subsample from the dataset with replacement
def subsample(dataset, ratio):
    sample = list()
    n_sample = round(len(dataset) * ratio)
    while len(sample) < n_sample:
        index = randrange(len(dataset))
        sample.append(dataset[index])
    return sample
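
# Because sampling is with replacement, each bootstrap sample repeats some rows
# and misses others; on average about 63% (1 - 1/e) of the original rows appear
# at least once. A quick check, not part of the main program:
seed(1)
rows = [[i] for i in range(1000)]
boot = subsample(rows, 1.0)
print(len(set(r[0] for r in boot)) / 1000.0)  # roughly 0.63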

# Make a prediction with a list of bagged trees
def bagging_predict(trees, row):
    predictions = [predict(tree, row) for tree in trees]
    # majority vote; ties are broken arbitrarily by the set iteration order
    return max(set(predictions), key=predictions.count)

# Random Forest Algorithm
def random_forest(train, test, max_depth, min_size, sample_size, n_trees, n_features):
    trees = list()
    for i in range(n_trees):
        sample = subsample(train, sample_size)
        tree = build_tree(sample, max_depth, min_size, n_features)
        trees.append(tree)
    predictions = [bagging_predict(trees, row) for row in test]
    return predictions



'''Dataset: https://archive.ics.uci.edu/ml/datasets/Connectionist+Bench+(Sonar,+Mines+vs.+Rocks)'''
# Test the random forest algorithm
seed(2)
# load and prepare data
# adjust this path to wherever the sonar CSV is saved locally
filename = '/Users/liuchaoqun01/Desktop/sonar.all-data.csv'
dataset = load_csv(filename)
# convert string attributes to integers
for i in range(0, len(dataset[0])-1):
    str_column_to_float(dataset, i)
# convert class column to integers
str_column_to_int(dataset, len(dataset[0])-1)
# evaluate algorithm
n_folds = 5
max_depth = 10
min_size = 1
sample_size = 1.0
# common heuristic: consider sqrt(total features) candidate features per split
n_features = int(sqrt(len(dataset[0])-1))
for n_trees in [1, 5, 10]:
    scores = evaluate_algorithm(dataset, random_forest, n_folds, max_depth, min_size, sample_size, n_trees, n_features)
    print('Trees: %d' % n_trees)
    print('Scores: %s' % scores)
    print('Mean Accuracy: %.3f%%' % (sum(scores)/float(len(scores))))
最后編輯于
?著作權(quán)歸作者所有,轉(zhuǎn)載或內(nèi)容合作請聯(lián)系作者
  • 序言:七十年代末,一起剝皮案震驚了整個濱河市净响,隨后出現(xiàn)的幾起案子少欺,更是在濱河造成了極大的恐慌,老刑警劉巖馋贤,帶你破解...
    沈念sama閱讀 217,084評論 6 503
  • 序言:濱河連續(xù)發(fā)生了三起死亡事件赞别,死亡現(xiàn)場離奇詭異,居然都是意外死亡配乓,警方通過查閱死者的電腦和手機仿滔,發(fā)現(xiàn)死者居然都...
    沈念sama閱讀 92,623評論 3 392
  • 文/潘曉璐 我一進店門,熙熙樓的掌柜王于貴愁眉苦臉地迎上來犹芹,“玉大人崎页,你說我怎么就攤上這事⊙。” “怎么了飒焦?”我有些...
    開封第一講書人閱讀 163,450評論 0 353
  • 文/不壞的土叔 我叫張陵,是天一觀的道長屿笼。 經(jīng)常有香客問我牺荠,道長翁巍,這世上最難降的妖魔是什么? 我笑而不...
    開封第一講書人閱讀 58,322評論 1 293
  • 正文 為了忘掉前任休雌,我火速辦了婚禮灶壶,結(jié)果婚禮上,老公的妹妹穿的比我還像新娘杈曲。我一直安慰自己例朱,他們只是感情好,可當(dāng)我...
    茶點故事閱讀 67,370評論 6 390
  • 文/花漫 我一把揭開白布鱼蝉。 她就那樣靜靜地躺著,像睡著了一般箫荡。 火紅的嫁衣襯著肌膚如雪魁亦。 梳的紋絲不亂的頭發(fā)上,一...
    開封第一講書人閱讀 51,274評論 1 300
  • 那天羔挡,我揣著相機與錄音洁奈,去河邊找鬼。 笑死绞灼,一個胖子當(dāng)著我的面吹牛利术,可吹牛的內(nèi)容都是我干的。 我是一名探鬼主播低矮,決...
    沈念sama閱讀 40,126評論 3 418
  • 文/蒼蘭香墨 我猛地睜開眼印叁,長吁一口氣:“原來是場噩夢啊……” “哼!你這毒婦竟也來了军掂?” 一聲冷哼從身側(cè)響起轮蜕,我...
    開封第一講書人閱讀 38,980評論 0 275
  • 序言:老撾萬榮一對情侶失蹤,失蹤者是張志新(化名)和其女友劉穎蝗锥,沒想到半個月后跃洛,有當(dāng)?shù)厝嗽跇淞掷锇l(fā)現(xiàn)了一具尸體,經(jīng)...
    沈念sama閱讀 45,414評論 1 313
  • 正文 獨居荒郊野嶺守林人離奇死亡终议,尸身上長有42處帶血的膿包…… 初始之章·張勛 以下內(nèi)容為張勛視角 年9月15日...
    茶點故事閱讀 37,599評論 3 334
  • 正文 我和宋清朗相戀三年汇竭,在試婚紗的時候發(fā)現(xiàn)自己被綠了。 大學(xué)時的朋友給我發(fā)了我未婚夫和他白月光在一起吃飯的照片穴张。...
    茶點故事閱讀 39,773評論 1 348
  • 序言:一個原本活蹦亂跳的男人離奇死亡细燎,死狀恐怖,靈堂內(nèi)的尸體忽然破棺而出陆馁,到底是詐尸還是另有隱情找颓,我是刑警寧澤,帶...
    沈念sama閱讀 35,470評論 5 344
  • 正文 年R本政府宣布叮贩,位于F島的核電站击狮,受9級特大地震影響佛析,放射性物質(zhì)發(fā)生泄漏。R本人自食惡果不足惜彪蓬,卻給世界環(huán)境...
    茶點故事閱讀 41,080評論 3 327
  • 文/蒙蒙 一寸莫、第九天 我趴在偏房一處隱蔽的房頂上張望。 院中可真熱鬧档冬,春花似錦膘茎、人聲如沸。這莊子的主人今日做“春日...
    開封第一講書人閱讀 31,713評論 0 22
  • 文/蒼蘭香墨 我抬頭看了看天上的太陽。三九已至盐数,卻和暖如春棒拂,著一層夾襖步出監(jiān)牢的瞬間,已是汗流浹背玫氢。 一陣腳步聲響...
    開封第一講書人閱讀 32,852評論 1 269
  • 我被黑心中介騙來泰國打工帚屉, 沒想到剛下飛機就差點兒被人妖公主榨干…… 1. 我叫王不留,地道東北人漾峡。 一個月前我還...
    沈念sama閱讀 47,865評論 2 370
  • 正文 我出身青樓攻旦,卻偏偏與公主長得像,于是被迫代替她去往敵國和親生逸。 傳聞我的和親對象是個殘疾皇子牢屋,可洞房花燭夜當(dāng)晚...
    茶點故事閱讀 44,689評論 2 354

推薦閱讀更多精彩內(nèi)容