Preface: machine learning engineers spend half their time on data cleaning, feature selection, dimensionality reduction, and other data processing. Using a spam-filtering system as an example, this post walks through the important work that comes before building a machine learning model.
-
Collecting the data
Different projects have different data sources; this was covered earlier.
-
Inspecting the data
The training data this time consists of more than sixty thousand emails together with their labels, as shown in the figure below:
From the data we can work out the following:
Task
- Supervised or unsupervised learning? Binary or multi-class classification? Text classification or structured-data classification? Short-text or long-text classification?
Answer: labels are provided, so this is supervised learning; binary classification; long-text classification.
Data
- How is a sample defined? Which data serve as features? How should the training and test sets be split?
Answer: candidate features are sender address, recipient address, send time, email content, and email length.
How do we pick suitable features from the list above?
Answer: through statistical inspection.
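A minimal sketch of what that statistical inspection can look like (the column names and toy values here are illustrative, not from the real corpus): compare the spam ratio across the values of a candidate feature; if the ratio is roughly flat everywhere, the feature carries little signal.

```python
import pandas as pd

# Toy frame standing in for the parsed mail corpus (values are made up)
df = pd.DataFrame({
    "from_domain": ["163.com", "163.com", "tsinghua.edu.cn", "tsinghua.edu.cn"],
    "label":       [1, 1, 0, 0],  # 1 = spam, 0 = ham
})

# Spam ratio per feature value: a strong spread suggests a useful feature,
# a flat ratio (close to the overall spam rate) suggests dropping it
spam_rate = df.groupby("from_domain")["label"].mean()
print(spam_rate)
```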
Model
- Choose a suitable model; optimize it for the specific task; tune it; consider multi-model fusion.
-
Data preprocessing
- Extract the features above into a CSV file
1. Convert the labels to numbers
The code:
import sys
import os
import time

'''
Turn the sixty-thousand-plus mails into one line each and attach the labels,
which are already annotated in the index file.
'''

# 1. Build the label dictionary
def label_dict(label_path):
    type_dict = {"spam": "1", "ham": "0"}
    content = open(label_path)
    index_dict = {}
    # try/finally guarantees the file handle is closed even on errors
    try:
        for line in content:
            arr = line.split(" ")
            if len(arr) == 2:
                key, value = arr
                value = value.replace("../data", '').replace("\n", '')
                index_dict[value] = type_dict[key.lower()]
    finally:
        content.close()
    return index_dict

a = label_dict("./full/index")
print(a)
Output:
'/028/239': '0', '/028/240': '0', '/028/241': '1', '/028/242': '1', '/028/243': '1', '/028/244': '1', '/028/245': '1', '/028/2
2. Extract the features; first define feature extraction for a single file:
def feature_dict(email_path):
    email_content = open(email_path, 'r', encoding="gb2312", errors="ignore")
    content_dict = {}
    try:
        is_content = False
        for line in email_content:
            line = line.strip()  # strip leading/trailing whitespace
            if line.startswith("From:"):
                content_dict["from"] = line[5:]
            elif line.startswith("To"):
                content_dict["to"] = line[3:]
            elif line.startswith("Date"):
                content_dict["date"] = line[5:]
            elif not line:
                is_content = True  # the first blank line marks the start of the body
            if is_content:
                if "content" in content_dict:
                    content_dict['content'] += line
                else:
                    content_dict['content'] = line
    finally:
        email_content.close()
    return content_dict
Output:
{'from': ' "yan"<(8月27-28,上海)培訓(xùn)課程>', 'to': ' lu@ccert.edu.cn', 'date': ' Tue, 30 Aug 2005 10:08:15 +0800', 'content': '非財務(wù)糾淼牟莆窆芾-(沙盤模擬
3. Convert the dictionary above into a comma-separated line of text:
def dict_to_text(email_path):
    content_dict = feature_dict(email_path)
    # commas inside fields would break the CSV, so strip or replace them
    result_str = content_dict.get('from', 'unknown').replace(',', '').strip() + ","
    result_str += content_dict.get('to', 'unknown').replace(',', '').strip() + ","
    result_str += content_dict.get('date', 'unknown').replace(',', '').strip() + ","
    result_str += content_dict.get('content', 'unknown').replace(',', ' ').strip()
    return result_str
Output:
"yan"<(8月27-28上海)培訓(xùn)課程>,lu@ccert.edu.cn,Tue 30 Aug 2005 10:08:15 +0800,非財務(wù)糾淼牟莆窆芾-(沙盤模擬)------如何運用財務(wù)岳硤岣吖芾砑
4. Extract the features for every file and write them all into one file, using two nested for loops:
start = time.time()
index_dict = label_dict("./full/index")
list0 = os.listdir('./data')  # names of the data subfolders
for l1 in list0:  # write each folder's files into its own intermediate file
    l1_path = './data/' + l1
    print('Processing folder ' + l1_path)
    list1 = os.listdir(l1_path)
    write_file_path = './process/process01_' + l1
    with open(write_file_path, "w", encoding='utf-8') as writer:
        for l2 in list1:
            l2_path = l1_path + "/" + l2  # full path of the file to process
            index_key = "/" + l1 + "/" + l2
            if index_key in index_dict:
                content_str = dict_to_text(l2_path)
                content_str += "," + index_dict[index_key] + "\n"
                writer.writelines(content_str)
# merge the intermediate files into a single result file
with open('./result_process01', "w", encoding='utf-8') as writer:
    for l1 in list0:
        file_path = './process/process01_' + l1
        print("Merging file: " + file_path)
        with open(file_path, encoding='utf-8') as file:
            for line in file:
                writer.writelines(line)
end = time.time()
print('Total processing time: %.2fs' % (end - start))
The resulting file looks like this:
-
Data analysis
Check how each feature correlates with the label.
1. Does the sender/recipient address affect the label?
import re
import pandas as pd

df = pd.read_csv('./result_process01', sep=',', header=None,
                 names=['from', 'to', 'date', 'content', 'label'])

def 獲取郵件收發(fā)地址(strl):  # extract the domain of a sender/recipient address
    it = re.findall(r"@([A-Za-z0-9]*\.[A-Za-z0-9\.]+)", str(strl))  # regex match
    if len(it) > 0:
        result = it[0]
    else:
        result = 'unknown'
    return result

df['from_address'] = pd.Series(map(lambda s: 獲取郵件收發(fā)地址(s), df['from']))  # map over the column
df['to_address'] = pd.Series(map(lambda s: 獲取郵件收發(fā)地址(s), df['to']))
# analysis: how many distinct domains are there, and how frequent is each?
print(df['from_address'].unique().shape)
print(df['from_address'].value_counts().head(5))
from_address_df = df.from_address.value_counts().to_frame()  # structured output with an index
print(from_address_df.head(5))
Result:
(3567,)
163.com 7500
mail.tsinghua.edu.cn 6498
126.com 5822
tom.com 4075
mails.tsinghua.edu.cn 3205
From these counts, the address domain shows no obvious influence on whether a mail is spam.
The send time likewise shows no influence.
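The training code further below references date-derived columns such as has_date that are not built in the snippets shown here. A sketch of how such a flag could be parsed from the date string (the helper name and the exact regex are assumptions, not the post's actual code):

```python
import re

def extract_date_features(date_str):
    """Return (has_date, weekday, hour) parsed from a header like
    'Tue, 30 Aug 2005 10:08:15 +0800'; (0, 'unknown', -1) if parsing fails."""
    m = re.search(r"([A-Za-z]+),?\s+\d{1,2}\s+[A-Za-z]+\s+\d{4}\s+(\d{1,2}):\d{2}",
                  str(date_str))
    if not m:
        return 0, "unknown", -1
    return 1, m.group(1), int(m.group(2))

print(extract_date_features("Tue, 30 Aug 2005 10:08:15 +0800"))  # (1, 'Tue', 10)
print(extract_date_features("not a date"))                       # (0, 'unknown', -1)
```

Applied over the date column, the first element gives a has_date style 0/1 flag, and the weekday/hour parts could feed columns like date_week and date_hour.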
2. Tokenize the content
import jieba

print('=' * 30 + 'Starting tokenization; this takes around five minutes...' + '=' * 20)
df['content'] = df['content'].astype('str')  # cast the column to str
df['jieba_cut_content'] = list(map(lambda st: " ".join(jieba.cut(st)), df['content']))
print(df["jieba_cut_content"].head(4))
3. Does email length affect whether a mail is spam?
import bisect

def 郵件長度統(tǒng)計(lg):
    # bucket an email length into 15 bins (0-14); bounds[i] is the upper
    # edge of bin i, and anything above 50000 falls into bin 14
    bounds = [10, 100, 500, 1000, 1500, 2000, 2500, 3000,
              4000, 5000, 10000, 20000, 30000, 50000]
    return bisect.bisect_left(bounds, lg)
import matplotlib.pyplot as plt

df['content_length'] = pd.Series(map(lambda st: len(st), df['content']))
df['content_length_type'] = pd.Series(map(lambda st: 郵件長度統(tǒng)計(st), df['content_length']))
df2 = df.groupby(['content_length_type', 'label'])['label'].agg(['count']).reset_index()  # add a count column for later use
df3 = df2[df2.label == 1][['content_length_type', 'count']].rename(columns={'count': 'c1'})
df4 = df2[df2.label == 0][['content_length_type', 'count']].rename(columns={'count': 'c2'})
df5 = pd.merge(df3, df4)  # note the difference between pandas merge and concat
df5['c1_rate'] = df5.apply(lambda r: r['c1'] / (r['c1'] + r['c2']), axis=1)
df5['c2_rate'] = df5.apply(lambda r: r['c2'] / (r['c1'] + r['c2']), axis=1)
# plot the two ratios to decide how to encode this signal
plt.plot(df5['content_length_type'], df5['c1_rate'], label=u'spam ratio')
plt.plot(df5['content_length_type'], df5['c2_rate'], label=u'ham ratio')
plt.grid(True)
plt.legend(loc=0)  # add a legend
plt.show()
The plot shows that email length does have some influence on the label.
Write a fitting function for it:
import numpy as np

def process_content_sema(x):
    if x > 10000:
        return 0.5 / np.exp(np.log10(x) - np.log10(500)) \
               + np.log(abs(x - 500) + 1) - np.log(abs(x - 10000)) + 1
    else:
        return 0.5 / np.exp(np.log10(x) - np.log10(500)) + np.log(abs(x - 500) + 1)
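A quick sanity check of the curve (the function is restated so the snippet runs on its own): at x = 500 both correction terms vanish, so the value is exactly 0.5, and the transform stays finite across the whole length range.

```python
import numpy as np

def process_content_sema(x):
    if x > 10000:
        return 0.5 / np.exp(np.log10(x) - np.log10(500)) \
               + np.log(abs(x - 500) + 1) - np.log(abs(x - 10000)) + 1
    return 0.5 / np.exp(np.log10(x) - np.log10(500)) + np.log(abs(x - 500) + 1)

print(process_content_sema(500))  # exactly 0.5
for length in (10, 1000, 20000):
    print(length, round(float(process_content_sema(length)), 3))
```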
4. Feature selection
Drop the useless features and save the useful ones.
df['content_length_sema'] = list(map(lambda st: process_content_sema(st), df['content_length']))
print(df.dtypes)  # shows each column's name and dtype
# date_week, date_hour and date_time_quantum were derived from the date
# field in analysis not shown above
df.drop(['from', 'to', 'date', 'from_address', 'to_address',
         'date_week', 'date_hour', 'date_time_quantum', 'content',
         'content_length', 'content_length_type'], axis=1, inplace=True)
print(df.info())
print(df.head(10))
df.to_csv('./result_process02', encoding='utf-8', index=False)
df.to_csv('./result_process02.csv', encoding='utf-8', index=False)
The result:
-
Model training
We choose a naive Bayes classifier because it is fast and works well on text.
Recall is used as the main evaluation metric (precision and F1 are also reported).
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer  # turn text into term-count / TF-IDF vectors
from sklearn.decomposition import TruncatedSVD
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, precision_score, recall_score

df = pd.read_csv('./result_process02.csv', sep=',')
df.dropna(axis=0, how='any', inplace=True)  # drop any row containing NaN

# has_date is a 0/1 flag derived from the date field during analysis
x_train, x_test, y_train, y_test = train_test_split(df[['has_date', 'jieba_cut_content']],
                                                    df['label'], test_size=0.2, random_state=0)
# print("Training set size: %d" % x_train.shape[0])
# print("Test set size: %d" % x_test.shape[0])
#================================================================================================
print('=' * 30 + 'Computing TF-IDF weights' + '=' * 30)
transformer = TfidfVectorizer(norm='l2', use_idf=True)  # use inverse document frequency weighting
svd = TruncatedSVD(n_components=20)
jieba_cut_content = list(x_train['jieba_cut_content'].astype('str'))
transformer_model = transformer.fit(jieba_cut_content)
df1 = transformer_model.transform(jieba_cut_content)

print('=' * 30 + 'Running SVD dimensionality reduction' + '=' * 30)
svd_model = svd.fit(df1)
df2 = svd_model.transform(df1)
data = pd.DataFrame(df2)

print('=' * 30 + 'Rebuilding the feature matrix' + '=' * 30)
data['has_date'] = list(x_train['has_date'])
# data['content_length_sema'] = list(x_train['content_length_sema'])

print('=' * 30 + 'Fitting the Bernoulli naive Bayes model' + '=' * 30)
nb = BernoulliNB(alpha=1.0, binarize=0.0005)  # binarize is the 0/1 threshold
model = nb.fit(data, y_train)
#================================================================================
print('=' * 30 + 'Building the test set' + '=' * 30)
jieba_cut_content_test = list(x_test['jieba_cut_content'].astype('str'))
data_test = pd.DataFrame(svd_model.transform(transformer_model.transform(jieba_cut_content_test)))
data_test['has_date'] = list(x_test['has_date'])
# data_test['content_length_sema'] = list(x_test['content_length_sema'])

print('=' * 30 + 'Predicting on the test set' + '=' * 30)
y_predict = model.predict(data_test)
precision = precision_score(y_test, y_predict)
recall = recall_score(y_test, y_predict)
f1mean = f1_score(y_test, y_predict)
print('Precision: %0.5f' % precision)
print('Recall: %0.5f' % recall)
print('F1 score: %0.5f' % f1mean)
Result:
Precision: 0.94549
Recall: 0.98925
F1 score: 0.96688
The full code and notes are on GitHub: https://github.com/dctongsheng/Spam-filtering-projects001