Implementing a relatively simple "keyword extraction" summarization algorithm
Idea: the sentence that contains the most keywords is the most important sentence. We sort the sentences by how many keywords they contain, take the top n, and combine them into our summary.
Steps:
- Score how important each word that appears in the article is
- Compute each sentence's total score from the importance of the words it contains
- Sort the sentences of the article by their total scores
- Take the top n sentences as the summary
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.corpus import stopwords
from collections import defaultdict
from string import punctuation
from heapq import nlargest
"""
nltk.tokenize 是NLTK提供的分詞工具包烧颖。所謂的分詞 ?tokenize? 實(shí)際就是把段落分成句子弱左,把句子分成一個(gè)個(gè)單詞的過程。我們導(dǎo)入的 sent_tokenize() 函數(shù)對(duì)應(yīng)的是分段為句倒信。 word_tokenize()函數(shù)對(duì)應(yīng)的是分句為詞科贬。
stopwords 是一個(gè)列表,包含了英文中那些頻繁出現(xiàn)的詞鳖悠,如am, is, are榜掌。
defaultdict 是一個(gè)帶有默認(rèn)值的字典容器。
puctuation 是一個(gè)列表乘综,包含了英文中的標(biāo)點(diǎn)和符號(hào)憎账。
nlargest() 函數(shù)可以很快地求出一個(gè)容器中最大的n個(gè)數(shù)字。
"""
stopwords = set(stopwords.words('english') + list(punctuation))
# stopwords contains the high-frequency words we run into all the time, such as do, I, am, is, are; such words should not be counted as keywords. Likewise, punctuation marks (punctuation) cannot be counted as keywords.
max_cut = 0.9
min_cut = 0.1
# Cut off words whose importance in the text is too high or too low. Just as the highest and lowest scores are dropped in a diving competition, we drop the words with extreme importance to improve the algorithm.
def compute_frequencies(word_sent):
    """
    Compute how often each word occurs.
    :param word_sent: a list of already tokenized sentences
    :return: a dict freq where freq[w] is the (normalized) frequency of w
    """
    freq = defaultdict(int)  # unlike a plain dict, defaultdict takes a default factory; with int the default value is 0
    # count how many times each word occurs
    for s in word_sent:
        for word in s:
            if word not in stopwords:
                freq[word] += 1
    # find the highest count m
    m = float(max(freq.values()))
    # divide every count by m, then drop words that are too frequent or too rare
    for w in list(freq.keys()):
        freq[w] = freq[w] / m
        if freq[w] >= max_cut or freq[w] <= min_cut:
            del freq[w]
    # what is returned is {key: word, value: importance}
    return freq
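# A small sanity check of compute_frequencies with made-up tokens (not from news.txt):
# "translation" appears twice, so its normalized score is 1.0 and max_cut removes it,
# while every other word scores 0.5 and is kept.
_toy_sents = [['facebook', 'builds', 'translation'], ['google', 'ships', 'translation']]
print(compute_frequencies(_toy_sents))  # facebook/builds/google/ships -> 0.5; 'translation' dropped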
def summarize(text, n):
    """
    The main summarization function.
    text is the input text.
    n is the number of sentences in the summary.
    Returns a list containing the summary sentences.
    """
    # first split the text into sentences
    sents = sent_tokenize(text)
    assert n <= len(sents)
    # then tokenize each sentence into words
    word_sent = [word_tokenize(s.lower()) for s in sents]
    # freq is a dict mapping each word to its frequency/importance
    freq = compute_frequencies(word_sent)
    # ranking maps each sentence (by index) to its importance
    ranking = defaultdict(int)
    for i, word in enumerate(word_sent):
        for w in word:
            if w in freq:
                ranking[i] += freq[w]
    sents_idx = rank(ranking, n)
    return [sents[j] for j in sents_idx]
"""
考慮到句子比較多的情況
用遍歷的方式找最大的n個(gè)數(shù)比較慢
我們這里調(diào)用heapq中的函數(shù)
創(chuàng)建一個(gè)最小堆來完成這個(gè)功能
返回的是最小的n個(gè)數(shù)所在的位置
"""
def rank(ranking, n):
    return nlargest(n, ranking, key=ranking.get)
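# Illustration of rank() with a made-up ranking dict (sentence indices and scores are invented):
# nlargest() iterates over the dict's keys and compares them by ranking.get, i.e. by score,
# so it returns the indices of the n highest-scoring sentences in descending order.
_toy_ranking = {0: 1.2, 1: 0.4, 2: 2.5}
print(rank(_toy_ranking, 2))  # expected: [2, 0]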
# run the program
if __name__ == '__main__':
    with open("news.txt", "r") as myfile:
        text = myfile.read().replace('\n', '')
    res = summarize(text, 2)
    for i in range(len(res)):
        print("* " + res[i])
Analyzing this article: "Facebook rolls out AI translation, and Google feels the pressure"
We get:
- Rather than analyze a sentence sequentially, one piece at a time, a convolutional neural network can analyze many different pieces at once, before organizing those pieces into a logical hierarchy.Even if the system is only marginally more accurate than systems like the one Google rolled out in the fall, the company says its technique is more efficient that other neural network-based methods.Others may help push the technique forward as well.
- This past fall, Google unveiled a new translation system driven entirely by neural networks that topped existing models, and many other companies and researchers are pushing in the same direction, most notably Microsoft and Chinese web giant Baidu.But Facebook is taking a slightly different tack from most of the other big players.
Question:
With n = 2, why do we get these two big blocks? Shouldn't it be two sentences?
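This is almost certainly caused by text = myfile.read().replace('\n',''): deleting the newlines glues the last word of one line to the first word of the next, which is why the output contains strings like "hierarchy.Even", "methods.Others" and "Baidu.But". NLTK's sent_tokenize only splits on sentence-ending punctuation followed by whitespace, so several sentences get treated as one long "sentence". A minimal fix (assuming news.txt simply uses line breaks between lines or paragraphs) is to replace each newline with a space instead:

with open("news.txt", "r") as myfile:
    text = myfile.read().replace('\n', ' ')
res = summarize(text, 2)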