Mainstream Machine Learning Model Template Code (reposted) [xgb, lgb, Keras, LR]

ref: http://m.blog.csdn.net/leyounger/article/details/78667538

Preprocess

# A generic preprocessing template

import pandas as pd
import numpy as np
import scipy as sp

# File reading
def read_csv_file(f, logging=False):
    print("========== Reading data ==========")
    data = pd.read_csv(f)
    if logging:
        print(data.head(5))
        print(f, "contains the following columns:")
        print(data.columns.values)
        print(data.describe())
        print(data.info())
    return data
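
A usage sketch of the helper above (the file paths are hypothetical placeholders):

# Hypothetical usage; point the paths at your own data files.
df_train = read_csv_file("./data/train.csv", logging=True)
df_test  = read_csv_file("./data/test.csv")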

LR

# A generic LogisticRegression template

import pandas as pd
import numpy as np
from scipy import sparse
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# 1. load data (placeholders: in practice fill these with real data, e.g. via read_csv_file above)
df_train = pd.DataFrame()
df_test  = pd.DataFrame()
y_train = df_train['label'].values

# 2. process data
ss = StandardScaler()


# 3. feature engineering/encoding
# 3.1 For categorical features
enc = OneHotEncoder()
feats = ["creativeID", "adID", "campaignID"]
for i, feat in enumerate(feats):
    x_train = enc.fit_transform(df_train[feat].values.reshape(-1, 1))
    # fit on train only, then reuse the fitted encoder on test so both sides share the same columns
    x_test = enc.transform(df_test[feat].values.reshape(-1, 1))
    if i == 0:
        X_train, X_test = x_train, x_test
    else:
        X_train, X_test = sparse.hstack((X_train, x_train)), sparse.hstack((X_test, x_test))

# 3.2 For numerical features
# StandardScaler expects 2-D input; otherwise reshape(-1, len(feats)) is required
feats = ["price", "age"]
x_train = ss.fit_transform(df_train[feats].values)
x_test  = ss.transform(df_test[feats].values)  # transform only: reuse the statistics fitted on train
X_train, X_test = sparse.hstack((X_train, x_train)), sparse.hstack((X_test, x_test))

# 4. model training
lr = LogisticRegression()
lr.fit(X_train, y_train)
proba_test = lr.predict_proba(X_test)[:, 1]
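
The template stops at predicted probabilities. A quick sanity check on a held-out split might look like this (a minimal sketch, assuming the X_train/y_train built above; sparse.hstack returns a COO matrix, so convert to CSR before splitting):

# Minimal validation sketch for the LR pipeline above.
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss, roc_auc_score

X_tr, X_val, y_tr, y_val = train_test_split(
    X_train.tocsr(), y_train, test_size=0.2, random_state=0, stratify=y_train)
lr_val = LogisticRegression()
lr_val.fit(X_tr, y_tr)
val_proba = lr_val.predict_proba(X_val)[:, 1]
print("val logloss:", log_loss(y_val, val_proba))
print("val auc:", roc_auc_score(y_val, val_proba))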

LightGBM

Binary classification

import lightgbm as lgb
import pandas as pd
import numpy as np
import pickle
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

print("Loading Data ... ")

# Load data (load_data() is a user-supplied helper)
train_x, train_y, test_x = load_data()

# Split off a validation set with sklearn.model_selection.train_test_split;
# here 5% is held out, adjust test_size as needed
X, val_X, y, val_y = train_test_split(
    train_x,
    train_y,
    test_size=0.05,
    random_state=1,
    stratify=train_y  # keep the class ratio of the split consistent with the original data
)

X_train = X
y_train = y
X_test = val_X
y_test = val_y


# create dataset for lightgbm
lgb_train = lgb.Dataset(X_train, y_train)
lgb_eval = lgb.Dataset(X_test, y_test, reference=lgb_train)
# specify your configurations as a dict
params = {
    'boosting_type': 'gbdt',
    'objective': 'binary',
    'metric': {'binary_logloss', 'auc'},
    'num_leaves': 5,
    'max_depth': 6,
    'min_data_in_leaf': 450,
    'learning_rate': 0.1,
    'feature_fraction': 0.9,
    'bagging_fraction': 0.95,
    'bagging_freq': 5,
    'lambda_l1': 1,
    'lambda_l2': 0.001,  # larger values mean stronger L2 regularization
    'min_gain_to_split': 0.2,
    'verbose': 5,
    'is_unbalance': True
}

# train
print('Start training...')
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=10000,
                valid_sets=lgb_eval,
                early_stopping_rounds=500)  # in LightGBM >= 4.0 pass callbacks=[lgb.early_stopping(500)] instead

print('Start predicting...')

preds = gbm.predict(test_x, num_iteration=gbm.best_iteration)  # returns probabilities

# Export results: threshold the probabilities into hard labels
threshold = 0.5
results = [1 if pred > threshold else 0 for pred in preds]

# Export feature importance
importance = gbm.feature_importance()
names = gbm.feature_name()
with open('./feature_importance.txt', 'w+') as file:
    for index, im in enumerate(importance):
        string = names[index] + ', ' + str(im) + '\n'
        file.write(string)
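
pickle is imported above but never used; one way to actually persist the booster (a sketch, the file names are placeholders):

# Persist the trained booster, via LightGBM's native format or pickle.
gbm.save_model('./lgb_model.txt')           # native text format
with open('./lgb_model.pkl', 'wb') as fout:
    pickle.dump(gbm, fout)                  # or pickle the Booster object

# reload later from the native format
bst = lgb.Booster(model_file='./lgb_model.txt')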

Multiclass classification

import lightgbm as lgb
import pandas as pd
import numpy as np
import pickle
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

print("Loading Data ... ")

# Load data (load_data() is a user-supplied helper)
train_x, train_y, test_x = load_data()

# Split off a validation set with sklearn.model_selection.train_test_split;
# here 5% is held out, adjust test_size as needed
X, val_X, y, val_y = train_test_split(
    train_x,
    train_y,
    test_size=0.05,
    random_state=1,
    stratify=train_y  # keep the class ratio of the split consistent with the original data
)

X_train = X
y_train = y
X_test = val_X
y_test = val_y


# create dataset for lightgbm
lgb_train = lgb.Dataset(X_train, y_train)
lgb_eval = lgb.Dataset(X_test, y_test, reference=lgb_train)
# specify your configurations as a dict
params = {
    'boosting_type': 'gbdt',
    'objective': 'multiclass',
    'num_class': 9,
    'metric': 'multi_error',
    'num_leaves': 300,
    'min_data_in_leaf': 100,
    'learning_rate': 0.01,
    'feature_fraction': 0.8,
    'bagging_fraction': 0.8,
    'bagging_freq': 5,
    'lambda_l1': 0.4,
    'lambda_l2': 0.5,
    'min_gain_to_split': 0.2,
    'verbose': 5,
    'is_unbalance': True
}

# train
print('Start training...')
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=10000,
                valid_sets=lgb_eval,
                early_stopping_rounds=500)

print('Start predicting...')

preds = gbm.predict(test_x, num_iteration=gbm.best_iteration)  # shape (n_samples, num_class): per-class probabilities

# Export results: pick the highest-probability class per row
results = np.argmax(preds, axis=1)

# Export feature importance
importance = gbm.feature_importance()
names = gbm.feature_name()
with open('./feature_importance.txt', 'w+') as file:
    for index, im in enumerate(importance):
        string = names[index] + ', ' + str(im) + '\n'
        file.write(string)
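
To check the validation error that early stopping tracked, the held-out split from above can be scored directly (a minimal sketch assuming the X_test/y_test built earlier):

# Multiclass validation accuracy on the held-out split.
val_preds = gbm.predict(X_test, num_iteration=gbm.best_iteration)
val_labels = np.argmax(val_preds, axis=1)
print("val accuracy:", np.mean(val_labels == np.asarray(y_test)))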

XGB

Binary classification

import numpy as np
import pandas as pd
import xgboost as xgb
import time
from sklearn.model_selection import StratifiedKFold


from sklearn.model_selection import train_test_split
train_x, train_y, test_x = load_data()  # load_data() is a user-supplied helper

# Feature construction


# Split off a validation set with train_test_split; here 1% is held out, adjust test_size as needed
X, val_X, y, val_y = train_test_split(
    train_x,
    train_y,
    test_size=0.01,
    random_state=1,
    stratify=train_y
)

# Build the xgb DMatrix objects
xgb_val = xgb.DMatrix(val_X, label=val_y)
xgb_train = xgb.DMatrix(X, label=y)
xgb_test = xgb.DMatrix(test_x)

# xgboost model #####################

params = {
    'booster': 'gbtree',
    # 'objective': 'multi:softmax',   # multiclass: outputs predicted labels
    # 'objective': 'multi:softprob',  # multiclass: outputs per-class probabilities
    'objective': 'binary:logistic',
    'eval_metric': 'logloss',
    # 'num_class': 9,  # number of classes; required together with multi:softmax / multi:softprob
    'gamma': 0.1,  # minimum loss reduction required to split; larger is more conservative, 0.1-0.2 is typical
    'max_depth': 8,  # tree depth; deeper trees overfit more easily
    'alpha': 0,   # L1 regularization term on weights
    'lambda': 10,  # L2 regularization term on weights; larger values make the model harder to overfit
    'subsample': 0.7,  # row subsampling ratio for training instances
    'colsample_bytree': 0.5,  # column subsampling ratio when building each tree
    'min_child_weight': 3,
    # Default is 1: the minimum sum of instance hessians (h) required in a leaf.
    # For an imbalanced 0-1 problem where h is around 0.01, min_child_weight = 1
    # means a leaf must hold at least ~100 samples. This parameter strongly
    # affects the result: the smaller it is, the easier it is to overfit.
    'silent': 0,  # 1 suppresses run-time messages; 0 keeps them (recommended)
    'eta': 0.03,  # learning rate
    'seed': 1000,
    'nthread': -1,  # number of CPU threads
    'missing': 1,
    'scale_pos_weight': (np.sum(y==0)/np.sum(y==1))  # handles class imbalance; typically sum(negative cases) / sum(positive cases)
    # 'eval_metric': 'auc'
}
plst = list(params.items())
num_rounds = 2000  # number of boosting rounds
watchlist = [(xgb_train, 'train'), (xgb_val, 'val')]

# Cross-validation (note: the explicit folds argument overrides nfold)
result = xgb.cv(plst, xgb_train, num_boost_round=200, nfold=4, early_stopping_rounds=200, verbose_eval=True, folds=StratifiedKFold(n_splits=4).split(X, y))
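# xgb.cv returns a DataFrame with one row per boosting round; with early
# stopping it is truncated at the best round, so (as an optional refinement,
# not in the original) the fixed num_rounds below could become:
# num_rounds = len(result)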

# Train the model and save it
# With a large num_rounds, early_stopping_rounds halts training once the
# validation metric has not improved for that many consecutive rounds
model = xgb.train(plst, xgb_train, num_rounds, watchlist, early_stopping_rounds=200)
model.save_model('../data/model/xgb.model')  # persist the trained model

preds = model.predict(xgb_test)

# Export results: threshold the probabilities into hard labels
threshold = 0.5
results = [1 if pred > threshold else 0 for pred in preds]
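
The saved model can be reloaded later for inference (a sketch; the path mirrors the save_model call above):

# Reload the saved booster and predict again.
bst = xgb.Booster()
bst.load_model('../data/model/xgb.model')
preds_reloaded = bst.predict(xgb_test)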

Keras

Binary classification

import numpy as np
import pandas as pd
import time
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt

from keras.models import Sequential
from keras.layers import Dropout
from keras.layers import Dense, Activation
from keras.utils.np_utils import to_categorical

# Project-specific helpers (the template below assumes one of these is bound to load_data)
from model.util import load_data as load_data_1
from model.util_combine_train_test import load_data as load_data_2
from sklearn.preprocessing import StandardScaler  # for feature standardization
from sklearn.preprocessing import Imputer  # in sklearn >= 0.22 use sklearn.impute.SimpleImputer instead

print("Loading Data ... ")
# Load data (load_data() is a user-supplied helper; see the aliases imported above)
train_x, train_y, test_x = load_data()

# Feature construction
X_train = train_x.values
X_test  = test_x.values
y = train_y

imp = Imputer(missing_values='NaN', strategy='mean', axis=0)
X_train = imp.fit_transform(X_train)

sc = StandardScaler()
sc.fit(X_train)
X_train = sc.transform(X_train)
X_test  = sc.transform(X_test)


model = Sequential()
model.add(Dense(256, input_shape=(X_train.shape[1],)))
model.add(Activation('tanh'))
model.add(Dropout(0.3))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.3))
model.add(Dense(512))
model.add(Activation('tanh'))
model.add(Dropout(0.3))
model.add(Dense(256))
model.add(Activation('linear'))
model.add(Dense(1))  # must match the output dimension: one unit for binary classification
model.add(Activation('sigmoid'))

# For a binary classification problem
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

epochs = 100
model.fit(X_train, y, epochs=epochs, batch_size=2000, validation_split=0.1, shuffle=True)

# Export results: batch-predict instead of looping one sample at a time
threshold = 0.5
prediction_probs = model.predict(X_test)
predictions = (prediction_probs[:, 0] > threshold).astype(int)
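
A fixed 100-epoch budget can overfit; Keras's EarlyStopping callback is the usual guard (a sketch, the monitor/patience values are illustrative):

# Optional: stop training once the validation loss stops improving.
from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)
model.fit(X_train, y, epochs=epochs, batch_size=2000,
          validation_split=0.1, shuffle=True, callbacks=[early_stop])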

Multiclass classification

import numpy as np
import pandas as pd
import time
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt

from keras.models import Sequential
from keras.layers import Dropout
from keras.layers import Dense, Activation
from keras.utils.np_utils import to_categorical

# Project-specific helpers (the template below assumes one of these is bound to load_data)
from model.util import load_data as load_data_1
from model.util_combine_train_test import load_data as load_data_2
from sklearn.preprocessing import StandardScaler  # for feature standardization
from sklearn.preprocessing import Imputer  # in sklearn >= 0.22 use sklearn.impute.SimpleImputer instead

print("Loading Data ... ")
# Load data (load_data() is a user-supplied helper; see the aliases imported above)
train_x, train_y, test_x = load_data()

# Feature construction
X_train = train_x.values
X_test  = test_x.values
y = train_y

# Feature processing
sc = StandardScaler()
sc.fit(X_train)
X_train = sc.transform(X_train)
X_test  = sc.transform(X_test)
y = to_categorical(y)  # important: one-hot encode the multiclass labels


model = Sequential()
model.add(Dense(256, input_shape=(X_train.shape[1],)))
model.add(Activation('tanh'))
model.add(Dropout(0.3))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.3))
model.add(Dense(512))
model.add(Activation('tanh'))
model.add(Dropout(0.3))
model.add(Dense(256))
model.add(Activation('linear'))
model.add(Dense(9))  # must match the output dimension: one unit per class (9 here)
model.add(Activation('softmax'))

# For a multi-class classification problem
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

epochs = 200
model.fit(X_train, y, epochs=epochs, batch_size=200, validation_split=0.1, shuffle=True)

# Export results: batch-predict and take the argmax per row
prediction_probs = model.predict(X_test)
predictions = np.argmax(prediction_probs, axis=1)
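
To reuse the trained network later, the whole model can be saved and restored (a sketch; the file name is a placeholder):

# Persist and reload the model (architecture + weights + optimizer state).
model.save('./keras_multiclass.h5')

from keras.models import load_model
restored = load_model('./keras_multiclass.h5')
assert np.allclose(model.predict(X_test[:10]), restored.predict(X_test[:10]))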