Predicting User Ad Clicks (A First Attempt)

The data comes from Alibaba Cloud's Tianchi platform (https://tianchi.aliyun.com/dataset/dataDetail?dataId=56). It contains four tables: the user behavior log behavior_log (bl for short), the raw sample skeleton raw_sample (rs), the ad feature table ad_feature (af), and the user profile table user_profile (up).

Below I only try a quick random forest, so rows with missing values are simply dropped. The final accuracy comes out at 93.95%, which looks good on its face, though clicks are rare in this data, so accuracy mostly reflects the majority (non-click) class; more on this below.

The code is as follows:

import pandas as pd
import numpy as np
import datetime
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')  # suppress warnings

up = pd.read_csv(r'E:\datafile\ad_clik\user_profile.csv')
af = pd.read_csv(r'E:\datafile\ad_clik\ad_feature.csv')
# raw_sample is very large, so read it lazily in chunks
rs = pd.read_csv(r'E:\datafile\ad_clik\raw_sample.csv', iterator=True, chunksize=10000000, header=0)
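Note that rs here is a lazy TextFileReader rather than a DataFrame. If you want to peek at a few rows before the main loop, a small sketch (get_chunk consumes rows from the iterator, so re-open the file afterwards if you use it):

peek = rs.get_chunk(5)
print(peek.columns.tolist())  # expected: user, time_stamp, adgroup_id, pid, nonclk, clk
rs = pd.read_csv(r'E:\datafile\ad_clik\raw_sample.csv', iterator=True, chunksize=10000000, header=0)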

# Keep only users with no missing profile fields: 395,932 rows remain
# (665,836 userids have at least one missing value).
# The three lists below were not shown in the original post; they are
# reconstructed from their names. Note the trailing space in
# 'new_user_class_level ' — a quirk of the raw file.
user_class_null = up[up['new_user_class_level '].isnull() & up['pvalue_level'].notnull()]['userid'].tolist()
user_pvalue_null = up[up['pvalue_level'].isnull() & up['new_user_class_level '].notnull()]['userid'].tolist()
user_class_pvalue_null = up[up['pvalue_level'].isnull() & up['new_user_class_level '].isnull()]['userid'].tolist()

u = []
u.extend(user_class_null)
u.extend(user_pvalue_null)
u.extend(user_class_pvalue_null)
complete_up = up[~up['userid'].isin(u)]
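To double-check the counts quoted above, a quick sanity check:

print(up.shape[0])  # total users
print(up[['pvalue_level', 'new_user_class_level ']].isnull().sum())  # missing per column
print(complete_up.shape[0])  # fully non-null users; should print 395932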

# Distribution of each user feature (the ad-attribute distributions are omitted here)
vector = ['cms_segid','cms_group_id','final_gender_code','age_level','pvalue_level','shopping_level','occupation','new_user_class_level ']
%matplotlib inline

for i in vector:
    y = complete_up[i].value_counts().reset_index()
    y.columns = [i, 'person_count']
    y = y.sort_values(by=i, ascending=True)  # sort_values returns a copy, so assign it back
    x = y[i].tolist()
    cou = y['person_count'].tolist()
    plt.figure(figsize=(15, 8))
    plt.bar(x, cou)
    plt.title(i)
    plt.show()


# Build the training set: the first seven days of impressions
t1 = '2017-05-06 00:00:00'
t2 = '2017-05-12 23:59:59'
f = '%Y-%m-%d %H:%M:%S'
startTime = datetime.datetime.strptime(t1, f).timestamp()  # 1494000000.0
endTime = datetime.datetime.strptime(t2, f).timestamp()    # 1494604799.0
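One caveat: strptime(...).timestamp() interprets the string in the machine's local timezone, and the values in the comments above assume UTC+8 (which the dataset's timestamps match). A timezone-explicit sketch that yields the same numbers on any machine:

cst = datetime.timezone(datetime.timedelta(hours=8))  # China Standard Time
startTime = datetime.datetime.strptime(t1, f).replace(tzinfo=cst).timestamp()
endTime = datetime.datetime.strptime(t2, f).replace(tzinfo=cst).timestamp()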

# Keep only userids present in complete_up
u = complete_up['userid'].tolist()
# Drop adgroup_ids whose brand is missing in af
a = af[af['brand'].isnull()]['adgroup_id'].tolist()
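One more caveat before the loop: since it appends with mode='a', re-running it would stack duplicate rows into the output file, so it is safer to remove any stale file first:

import os
out_path = 'E:\\datafile\\rs\\rs_train_complete.csv'
if os.path.exists(out_path):
    os.remove(out_path)  # avoid appending duplicates on re-runs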

count = 0
for chunk in rs:
    # Keep the 2017-05-06 .. 2017-05-12 window
    chunk.drop(index=chunk[chunk.time_stamp < startTime].index, inplace=True)
    chunk.drop(index=chunk[chunk.time_stamp > endTime].index, inplace=True)
    # Drop ads with a missing brand and users outside complete_up
    chunk.drop(index=chunk[chunk['adgroup_id'].isin(a)].index, inplace=True)
    chunk.drop(index=chunk[~chunk['user'].isin(u)].index, inplace=True)
    # Convert the unix timestamp to a datetime column
    dates = [datetime.datetime.fromtimestamp(d) for d in chunk.time_stamp]
    chunk.insert(loc=3, column='datetimes', value=dates)
    del chunk['time_stamp']
    # header=False: append without repeating the column names
    chunk.to_csv('E:\\datafile\\rs\\rs_train_complete.csv', mode='a', index=False, header=False)
    count += 1
    print(count, end='-')
print('ok')

# Join up and af onto the filtered samples
rs_train = pd.read_csv('E:\\datafile\\rs\\rs_train_complete.csv', header=None,
                       names=['userid','adgroup_id','datetimes','pid','nonclk','clk'])
df = pd.merge(rs_train, up, how='left', on='userid')
df = pd.merge(df, af, how='left', on='adgroup_id')
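Both joins are many-to-one lookups, so assuming up has one row per userid and af one row per adgroup_id (as in the Tianchi tables), a quick length check confirms the merges did not duplicate rows:

print(len(rs_train), len(df))  # the two counts should match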

### Some ad attributes (cate_id, customer, campaign_id, brand) take very many distinct values, so convert them by bucketing on click count and click-through rate

# For a given attribute, compute per-value click count and CTR, then cut both
# into 10 levels. The q values below were tuned per attribute so that, after
# duplicate bin edges are dropped, exactly 10 bins remain (otherwise qcut
# raises because the label count no longer matches the bin count).
def make_clk_bins(frame, key, prefix, clk_q, ratio_q):
    clk_y = frame[key][frame['clk'] == 1].value_counts().reset_index()
    clk_y.columns = [key, 'clk']
    clk_sum = frame[key].value_counts().reset_index()
    clk_sum.columns = [key, 'counts']
    bins = pd.merge(clk_y, clk_sum, how='outer', on=key).fillna(0)
    bins['clk_ratio'] = (bins['clk'] / bins['counts']).round(4)
    bins[prefix + '_clk_bins'] = pd.qcut(bins['clk'], clk_q, duplicates='drop',
                                         labels=[1,2,3,4,5,6,7,8,9,10]).astype(int)
    bins[prefix + '_clk_ratio_bins'] = pd.qcut(bins['clk_ratio'], ratio_q, duplicates='drop',
                                               labels=[1,2,3,4,5,6,7,8,9,10]).astype(int)
    bins.drop(['clk', 'counts', 'clk_ratio'], axis=1, inplace=True)
    return bins

cate = make_clk_bins(df, 'cate_id', 'cate', 16, 14)
cust = make_clk_bins(df, 'customer', 'cust', 65, 26)
camp = make_clk_bins(df, 'campaign_id', 'camp', 100, 30)
brand = make_clk_bins(df, 'brand', 'brand', 40, 22)
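The q values above are brittle: they depend on the exact distribution of this sample, and pd.qcut raises if the post-drop bin count does not match the 10 labels. A quick check that each bucketed column really ended up with 10 levels:

for frame, col in [(cate, 'cate_clk_bins'), (cust, 'cust_clk_bins'),
                   (camp, 'camp_clk_bins'), (brand, 'brand_clk_bins')]:
    print(col, frame[col].nunique())  # each should print 10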

### Correlation analysis

# First merge the bucketed features back onto the samples (this has to happen
# before the chi-squared test below, which reads from t).
t = pd.merge(df, cate, how='left', on='cate_id')
t = pd.merge(t, cust, how='left', on='customer')
t = pd.merge(t, camp, how='left', on='campaign_id')
t = pd.merge(t, brand, how='left', on='brand')

from sklearn.feature_selection import chi2, SelectKBest
feat_cols = ['cms_segid','cms_group_id','final_gender_code','age_level','pvalue_level','shopping_level','occupation','new_user_class_level ',
             'cate_clk_bins','cate_clk_ratio_bins','cust_clk_bins','cust_clk_ratio_bins','camp_clk_bins','camp_clk_ratio_bins','brand_clk_bins',
             'brand_clk_ratio_bins']
X = t[feat_cols].values
print(X.shape)
y = t['clk'].tolist()

# selector = SelectKBest(chi2, k='all')
# selector.fit(X, y)
# scores = selector.scores_
# scores   # features 4-8 score noticeably lower
# pvalues = selector.pvalues_
# pvalues  # all p-values are below 0.05
selector = SelectKBest(chi2, k=11)
v = selector.fit(X, y).get_support(indices=True)
print(v)
scores = selector.scores_
print(scores)  # features 4-8 (age_level through new_user_class_level) show weak correlation
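To read the scores against names rather than positions, one can zip them with feat_cols (a small sketch):

for name, s in sorted(zip(feat_cols, scores), key=lambda p: -p[1]):
    print(name, round(s, 1))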

### A separate check in SPSS also confirmed that price correlates with clk

### Prediction

# t was already assembled in the correlation step above; drop the columns the
# model will not use.
todrop = ['userid','adgroup_id','datetimes','nonclk','cate_id','campaign_id','customer','brand']
t.drop(todrop, axis=1, inplace=True)

## First, cross-validate on the training set

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# pid is a string position id, so it must be encoded numerically before
# sklearn can use it (this line was missing from the original post;
# sorted-category codes are deterministic, so train and test stay consistent)
t['pid'] = t['pid'].astype('category').cat.codes

# Use the 11 most correlated categorical features, plus pid and price
rf_x = t[['cms_segid','cms_group_id','final_gender_code','cate_clk_bins','cate_clk_ratio_bins','cust_clk_bins','cust_clk_ratio_bins',
          'camp_clk_bins','camp_clk_ratio_bins','brand_clk_bins','brand_clk_ratio_bins','pid','price']].values
rf_y = t['clk'].tolist()
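Before trusting accuracy here, it is worth checking the class balance: if the click rate is around 5-6%, a model that always predicts 0 would already score in the same range as the 93.95% reported above. A quick check:

print('click rate:', np.mean(rf_y))
print('all-zeros baseline accuracy:', 1 - np.mean(rf_y))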

train_X, test_X, train_y, test_y = train_test_split(rf_x, rf_y, test_size=1/5)
clf1 = RandomForestClassifier(n_estimators=10, max_depth=None, min_samples_split=2, random_state=0)
scores = cross_val_score(clf1, train_X, train_y, scoring='accuracy', cv=5)
clf1.fit(train_X, train_y)
y_pred = clf1.predict(test_X)
print(scores.mean())  # 0.9394756096191237
print(scores.std())   # 0.00022112156289561522

test = pd.DataFrame([y_pred, test_y], index=['y_pred', 'test_y']).T
print(test[(y_pred != test_y) & (y_pred == 1)]['y_pred'].size)  # false positives
print(test[(y_pred != test_y) & (y_pred == 0)]['y_pred'].size)  # false negatives
print('accuracy:', test[(y_pred == test_y)]['y_pred'].size / test['y_pred'].size)  # 0.9395123965128687
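Given the imbalance, the two error counts above are more telling than accuracy. A minimal evaluation sketch (using sklearn.metrics on the same held-out split) that adds a confusion matrix and ROC AUC:

from sklearn.metrics import confusion_matrix, roc_auc_score
print(confusion_matrix(test_y, y_pred))     # rows: true 0/1, columns: predicted 0/1
proba = clf1.predict_proba(test_X)[:, 1]    # predicted click probability
print('ROC AUC:', roc_auc_score(test_y, proba))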

## Then predict on the held-out day (2017-05-13)

rs_test = pd.DataFrame()
# Time window to extract
t1 = '2017-05-13 00:00:00'
t2 = '2017-05-13 23:59:59'
f = '%Y-%m-%d %H:%M:%S'
startTime = datetime.datetime.strptime(t1, f).timestamp()  # 1494604800.0
endTime = datetime.datetime.strptime(t2, f).timestamp()    # 1494691199.0
# Keep only userids present in complete_up
u = complete_up['userid'].tolist()
# Drop adgroup_ids whose brand is missing in af
a = af[af['brand'].isnull()]['adgroup_id'].tolist()
# The chunk iterator from before is exhausted, so re-open raw_sample
rs = pd.read_csv(r'E:\datafile\ad_clik\raw_sample.csv', iterator=True, chunksize=10000000, header=0)
count = 0

for chunk in rs:
    chunk.drop(index=chunk[chunk.time_stamp < startTime].index, inplace=True)
    chunk.drop(index=chunk[chunk.time_stamp > endTime].index, inplace=True)
    chunk.drop(index=chunk[chunk['adgroup_id'].isin(a)].index, inplace=True)
    chunk.drop(index=chunk[~chunk['user'].isin(u)].index, inplace=True)
    dates = [datetime.datetime.fromtimestamp(d) for d in chunk.time_stamp]
    chunk.insert(loc=3, column='datetimes', value=dates)
    del chunk['time_stamp']
    rs_test = pd.concat([rs_test, chunk])
    count += 1
    print(count, end='-')
print('ok')

rs_test.columns = ['userid','adgroup_id','datetimes','pid','nonclk','clk']
temp = pd.merge(rs_test, up, how='left', on='userid')
rf_test = pd.merge(temp, af, how='left', on='adgroup_id')
temp = pd.merge(rf_test, cate, how='left', on='cate_id')
temp = pd.merge(temp, cust, how='left', on='customer')
temp = pd.merge(temp, camp, how='left', on='campaign_id')
rf_test = pd.merge(temp, brand, how='left', on='brand')
# Encode pid the same way as in training (see the note there)
rf_test['pid'] = rf_test['pid'].astype('category').cat.codes
todrop = ['userid','adgroup_id','datetimes','nonclk','cate_id','campaign_id','customer','brand','age_level','pvalue_level',
          'shopping_level','occupation','new_user_class_level ']
rf_test.drop(todrop, axis=1, inplace=True)

# Drop rows with missing values (ads whose attribute never appeared in the
# training window get no bucket from the left joins above)
RF_test = rf_test.dropna()
# Inspect the unmatched rows: 18,451 of them
test_null = rf_test[rf_test.isnull().any(axis=1)]
test_null.describe()
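Dropping the unmatched rows is the simplest option. An alternative sketch (not what I did here, and the neutral level 5 is a made-up choice) would keep those ads by filling their bucket features with a middle value:

bin_cols = [c for c in rf_test.columns if c.endswith('_bins')]
rf_test_filled = rf_test.fillna({c: 5 for c in bin_cols})  # hypothetical neutral bucket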

test_x = RF_test[['cms_segid','cms_group_id','final_gender_code','cate_clk_bins','cate_clk_ratio_bins','cust_clk_bins','cust_clk_ratio_bins',
                  'camp_clk_bins','camp_clk_ratio_bins','brand_clk_bins','brand_clk_ratio_bins','pid','price']].values
test_y = RF_test['clk'].tolist()
y_pred = clf1.predict(test_x)
test = pd.DataFrame([y_pred, test_y], index=['y_pred', 'test_y']).T
print(test[(y_pred != test_y) & (y_pred == 1)]['y_pred'].size)  # false positives
print(test[(y_pred != test_y) & (y_pred == 0)]['y_pred'].size)  # false negatives
print('accuracy:', test[(y_pred == test_y)]['y_pred'].size / test['y_pred'].size)
