Task requirements:
- Basic text processing: Chinese/English string handling (removing irrelevant characters, removing stop words); word segmentation (jieba); word and character frequency statistics.
- Language models: unigram, bigram, and trigram frequency statistics.
- Introduction to and usage of jieba word segmentation.
- Chinese/English string processing (removing irrelevant characters, removing stop words)
- Remove irrelevant characters by keeping only the relevant ones
for text in data['text']:
    for uchar in text:
        # keep Chinese characters
        if uchar >= u'\u4e00' and uchar <= u'\u9fa5':
            continue
        # keep digits
        if uchar >= u'\u0030' and uchar <= u'\u0039':
            continue
        # keep English letters
        if (uchar >= u'\u0041' and uchar <= u'\u005a') or (uchar >= u'\u0061' and uchar <= u'\u007a'):
            continue
        else:
            # drop every other character
            text = text.replace(uchar, '')
    content.append(text)
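As an aside, the same "keep only relevant characters" filter can be written more compactly with a regular expression. This is a minimal sketch, not part of the original code; the helper name keep_relevant is hypothetical:
import re

def keep_relevant(text):
    # keep CJK ideographs (\u4e00-\u9fa5), digits, and ASCII letters; drop everything else
    return re.sub(r'[^\u4e00-\u9fa50-9A-Za-z]', '', text)

print(keep_relevant('Hello, 世界! 2019年。'))  # -> 'Hello世界2019年'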
- jieba word segmentation
text_jieba = jieba.cut(text, cut_all=False)
The cut_all parameter controls whether full mode is used (False, the default, is precise mode).
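A quick illustration of the two modes, as a minimal sketch using jieba's well-known example sentence (the exact output depends on the jieba version and dictionary):
import jieba

sentence = '我來到北京清華大學(xué)'
# full mode: enumerates all dictionary words found anywhere in the text
print(jieba.lcut(sentence, cut_all=True))
# precise mode (default): a single most-likely segmentation
print(jieba.lcut(sentence, cut_all=False))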
- Removing stop words
Using a Chinese stop word list:
for word in text_jieba:
    if word not in stop_words:
        text.append(word)
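One small design note: stop_words is loaded as a plain list, so every `word not in stop_words` test is a linear scan. Converting it to a set (a minor change, not in the original code) makes each lookup constant-time, which helps on larger corpora:
stop_words = set(get_stopwordslist(STOPWORDS_PATH))  # set membership tests are O(1)
filtered = [word for word in text_jieba if word not in stop_words]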
- Word and character frequency statistics
def get_wordsCounter(data):
    all_content = []
    # gather every token list into one flat list
    for content in data:
        all_content.extend(content)
    # count token frequencies
    counter = Counter(all_content)
    count_pairs = counter.most_common(VOCAB_SIZE - 1)
    words_counter = pd.DataFrame([i[0] for i in count_pairs], columns=['words'])
    words_counter['counter'] = [i[1] for i in count_pairs]
    return words_counter
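For example, on a tiny toy input (hypothetical data, only to show the shape of the result, assuming VOCAB_SIZE, pandas and Counter from the complete code below are in scope) the function returns a two-column DataFrame sorted by frequency:
toy = [['北京', '大學(xué)', '北京'], ['大學(xué)', '歷史']]
print(get_wordsCounter(toy))
# -> words: 北京 (counter 2), 大學(xué) (2), 歷史 (1)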
- Language models
A statistical language model is a probability distribution over word sequences: for a given sequence of length m, it assigns a probability to the whole sequence. In other words, we look for a probability distribution that can express how likely any sentence or sequence is.
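Concretely, by the chain rule the probability of a sequence factorizes as P(w1, w2, …, wm) = P(w1) · P(w2 | w1) · … · P(wm | w1, …, wm-1); the n-gram models below differ only in how much of that history each conditional probability is allowed to keep.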
- unigram: the unigram model is a context-independent model. It only considers the probability of the current word itself, ignoring the word's context, so the probability of a sentence is simply the product of the probabilities of its words: P(w1, …, wm) = P(w1) · P(w2) · … · P(wm).
- n-gram: a statistical language model in which the probability distribution of a word depends on its context; the probability of the current word is assumed to depend only on the preceding n-1 words.
- bigram: when n = 2 this is the bigram model; the current word depends only on the single word before it, so the probability is computed as P(w1, …, wm) = P(w1) · P(w2 | w1) · … · P(wm | wm-1).
- trigram: when n = 3 this is the trigram model; likewise, the current word depends only on the two words before it: P(w1, …, wm) = P(w1) · P(w2 | w1) · P(w3 | w1, w2) · … · P(wm | wm-2, wm-1). (A small counting sketch for unigram/bigram/trigram frequencies follows below.)
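The task also asks for unigram/bigram/trigram frequency statistics, which the complete script does not cover. This is a minimal sketch, not part of the original code; the helper name count_ngrams is hypothetical, and it expects tokenized texts such as the content list produced by pre_data() in the complete code below:
from collections import Counter

def count_ngrams(tokenized_texts, n):
    '''Count n-gram frequencies over a list of token lists.'''
    counter = Counter()
    for tokens in tokenized_texts:
        # slide a window of length n over each token list
        for i in range(len(tokens) - n + 1):
            counter[tuple(tokens[i:i + n])] += 1
    return counter

# e.g. applied to the segmented corpus produced by pre_data() below:
# unigrams = count_ngrams(content, 1)
# bigrams  = count_ngrams(content, 2)
# trigrams = count_ngrams(content, 3)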
- Complete code
# -*- coding: utf-8 -*-
"""
Created on Mon May 13 13:49:10 2019
@author: pc
"""
import pandas as pd
import jieba
from collections import Counter

TRAIN_PATH = 'E:/task2/cnews.train.txt'
STOPWORDS_PATH = 'E:/task2/ChineseStopWords.txt'
VOCAB_SIZE = 5000

def read_file(file_name):
    '''
    Read the data file into a DataFrame with text and label columns.
    '''
    file_path = {'train': TRAIN_PATH}
    contents = []
    labels = []
    with open(file_path[file_name], 'r', encoding='utf-8') as f:
        for line in f:
            try:
                labels.append(line.strip().split('\t')[0])
                contents.append(line.strip().split('\t')[1])
            except IndexError:
                # skip malformed lines without a tab-separated label/text pair
                pass
    data = pd.DataFrame()
    data['text'] = contents
    data['label'] = labels
    return data

def get_stopwordslist(path):
    stopwords = [line.strip() for line in open(path, 'r', encoding='utf-8').readlines()]
    return stopwords

def pre_data(data):
    content = []
    stop_words = get_stopwordslist(STOPWORDS_PATH)
    for text in data['text']:
        for uchar in text:
            # keep Chinese characters
            if uchar >= u'\u4e00' and uchar <= u'\u9fa5':
                continue
            # keep digits
            if uchar >= u'\u0030' and uchar <= u'\u0039':
                continue
            # keep English letters
            if (uchar >= u'\u0041' and uchar <= u'\u005a') or (uchar >= u'\u0061' and uchar <= u'\u007a'):
                continue
            else:
                # drop every other character
                text = text.replace(uchar, '')
        # jieba segmentation (precise mode)
        text_jieba = jieba.cut(text, cut_all=False)
        # remove stop words
        text = []
        for word in text_jieba:
            if word not in stop_words:
                text.append(word)
        content.append(text)
    return content

def get_wordsCounter(data):
    '''
    Word / character frequency statistics.
    '''
    all_content = []
    # gather every token list into one flat list
    for content in data:
        all_content.extend(content)
    # count token frequencies
    counter = Counter(all_content)
    count_pairs = counter.most_common(VOCAB_SIZE - 1)
    words_counter = pd.DataFrame([i[0] for i in count_pairs], columns=['words'])
    words_counter['counter'] = [i[1] for i in count_pairs]
    return words_counter

train = read_file('train')
train = train.iloc[:100]        # use only the first 100 rows for a quick test
content = pre_data(train)
counter_words = get_wordsCounter(content)