Foreword
- Attitude determines altitude! Make excellence a habit!
- There is nothing in this world that one round of overtime cannot solve; if there is, work two! (--- 茂強(qiáng))
word2vec
word2vec is famous enough that it needs no further introduction here; if you are not familiar with it, Baidu or Google is your friend. Below we walk through several implementations.
Preparing the corpus
python-gensim
An almost embarrassingly simple approach: you can practically get the whole job done in one line of code.
from gensim.models import word2vec

sentences = word2vec.Text8Corpus("C:/traindataw2v.txt")  # load the corpus
model = word2vec.Word2Vec(sentences, size=200)  # train the model (CBOW by default); window defaults to 5

# get the word vector for "學(xué)習(xí)"
print("學(xué)習(xí):", model["學(xué)習(xí)"])

# compute the similarity / relatedness of two words
y1 = model.similarity("不錯(cuò)", "好")

# list the words most related to a given word
y2 = model.most_similar("書", topn=20)  # the 20 most related words

# solve an analogy: 書 is to 不錯(cuò) as 質(zhì)量 is to ?
print("書-不錯(cuò)，質(zhì)量-")
y3 = model.most_similar(['質(zhì)量', '不錯(cuò)'], ['書'], topn=3)

# find the word that does not belong
y4 = model.doesnt_match("書 書籍 教材 很".split())

# save the model for later reuse
model.save("db.model")

# the matching way to load it back
model = word2vec.Word2Vec.load("db.model")
That covers the gensim approach.
Next, let's take a look at the parameters. The defaults are as follows:
sentences=None
size=100
alpha=0.025
window=5
min_count=5
max_vocab_size=None
sample=1e-3
seed=1
workers=3
min_alpha=0.0001
sg=0
hs=0
negative=5
cbow_mean=1
hashfxn=hash
iter=5
null_word=0
trim_rule=None
sorted_vocab=1
batch_words=MAX_WORDS_IN_BATCH
Surprised that there are so many parameters? Most of them are rarely touched in everyday use, but the quality of a trained model is closely tied to its parameters, and they are exposed for a reason. Let's go through what each one means.
sentences: an iterable of tokenized sentences, one sentence per line; keep individual sentences from getting too long. Simply put, it is the tokenized corpus loaded above.
sg: the training algorithm; 0 (default) selects CBOW, 1 selects skip-gram
size: the dimensionality of the trained word vectors
window: the maximum distance within a sentence between the current word and the predicted word
alpha: the learning rate, which controls the step size of the gradient-descent updates
seed: seed for the random number generator, used when initializing the word vectors
min_count: vocabulary pruning; words occurring fewer than min_count times are discarded
max_vocab_size: RAM limit while building the vocabulary; if the number of unique words exceeds it, the least frequent ones are pruned. None means no limit
sample: threshold for randomly downsampling high-frequency words; the default is 1e-3, and the useful range is (0, 1e-5)
workers: number of worker threads used to train the model; the more cores the machine has, the faster training runs
hs: if 1, hierarchical softmax is used. Hierarchical softmax is an optimization of the output layer: instead of computing probabilities with a full softmax, it computes them over a Huffman tree. If 0 (default), negative sampling is used instead
negative: if greater than 0, negative sampling is used and the value is the number of "noise words" to draw, usually set between 5 and 20, default 5; setting it to 0 disables negative sampling
cbow_mean: when using CBOW, 0 uses the sum of the context word vectors, while 1 (default) uses their mean
hashfxn: hash function used to initialize the weights; defaults to Python's built-in hash function
iter: number of iterations (epochs) over the corpus, default 5
trim_rule: the rule for trimming the vocabulary, i.e. which words to keep and which to discard. It can be None (min_count is used) or a callable that accepts (word, count, min_count) and returns utils.RULE_DISCARD, utils.RULE_KEEP, or utils.RULE_DEFAULT. The rule is only applied while building the vocabulary and does not become part of the model
sorted_vocab: if 1 (default), sort the vocabulary by descending frequency before assigning word indices
batch_words: number of words per batch passed to the worker threads, default 10000; larger batches are truncated to this size
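To see how these fit together, here is a sketch of training a skip-gram model with negative sampling using explicit parameters. The file path is the same placeholder used above, and the parameter values are examples rather than recommendations.

from gensim.models import word2vec

sentences = word2vec.LineSentence("C:/traindataw2v.txt")  # one tokenized sentence per line
model = word2vec.Word2Vec(
    sentences,
    sg=1,           # skip-gram instead of the default CBOW
    size=200,       # 200-dimensional vectors
    window=5,       # context window of 5
    min_count=5,    # drop words seen fewer than 5 times
    hs=0,           # no hierarchical softmax ...
    negative=5,     # ... use negative sampling with 5 noise words instead
    workers=4,      # 4 training threads
    iter=5,         # 5 passes over the corpus
    seed=1)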
python-tensorflow
The example on the official TensorFlow site implements the skip-gram approach.
Skip-gram predicts the surrounding context given the input word, while CBOW predicts the input word given its context.
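To make the difference concrete, here is a tiny, purely illustrative sketch (a hypothetical five-token sentence, not taken from the corpus) of the training pairs each scheme would form from one window:

# Toy illustration (hypothetical sentence) of how skip-gram and CBOW form training pairs
sentence = ["我", "喜歡", "讀", "好", "書"]
window = 1
center = 2  # the center word "讀"

context = sentence[center - window:center] + sentence[center + 1:center + 1 + window]
# skip-gram: (input, label) pairs that predict each context word from the center word
skipgram_pairs = [(sentence[center], c) for c in context]   # [("讀", "喜歡"), ("讀", "好")]
# CBOW: the whole context predicts the center word
cbow_pair = (context, sentence[center])                      # (["喜歡", "好"], "讀")
print(skipgram_pairs, cbow_pair)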
First, the data is still the same corpus as above.
- Read the data
import collections
import math
import random
import re

import numpy as np
import tensorflow as tf

words = []
with open("c:/traindatav.txt", "r", encoding="utf-8") as f:
    for line in f.readlines():
        text = line.split(" => ")
        if len(text) == 2:
            label = text[0].strip()  # the class label before " => " is not used here
            listsentence = [w for w in text[1].split(" ")
                            if re.match("[\u4e00-\u9fa5]+", w) and len(w) >= 2]
            words.extend(listsentence)
words holds the tokens, appended in the order they appear in the corpus.
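Judging from the parsing code above, each line of the input file is assumed to consist of a label, the separator " => ", and a whitespace-tokenized sentence, for example (a made-up line):

正面 => 這 本 書 質(zhì)量 不錯(cuò) 內(nèi)容 很 好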
- Build the vocabulary
vocabulary_size = 10000

def build_dataset(words):
    count = [['UNK', -1]]
    count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
    dictionary = dict()
    for word, _ in count:
        dictionary[word] = len(dictionary)
    data = list()
    unk_count = 0
    for word in words:
        if word in dictionary:
            index = dictionary[word]
        else:
            index = 0  # dictionary['UNK']
            unk_count += 1
        data.append(index)
    count[0][1] = unk_count
    reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    return data, count, dictionary, reverse_dictionary

data, count, dictionary, reverse_dictionary = build_dataset(words)
vocabulary_size determines how many distinct words the vocabulary keeps; everything else is mapped to UNK.
Words are selected here purely by frequency. You can of course improve on that if you have better ideas, for example trimming both ends so that words with very high frequency are dropped along with the very rare ones. Here I keep the 10,000 most frequent words for training.
dictionary maps each word to an index.
data holds, for every token in words, the index of that word looked up in dictionary.
count holds the word-frequency statistics.
reverse_dictionary is simply dictionary inverted, mapping indices back to words. A toy example of all four structures follows below.
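A toy sketch (hypothetical tokens, with vocabulary_size shrunk to 3 for readability) of what the four structures hold:

# Toy illustration (hypothetical tokens, vocabulary_size shrunk to 3)
# words              = ["書", "不錯(cuò)", "書", "質(zhì)量", "不錯(cuò)"]
# count              = [['UNK', 1], ('書', 2), ('不錯(cuò)', 2)]   # "質(zhì)量" did not make the top-3 vocabulary
# dictionary         = {'UNK': 0, '書': 1, '不錯(cuò)': 2}          # word -> index
# data               = [1, 2, 1, 0, 2]                          # words re-encoded as indices, UNK -> 0
# reverse_dictionary = {0: 'UNK', 1: '書', 2: '不錯(cuò)'}           # index -> word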
- Declare the parameters
batch_size = 128
embedding_size = 128  # Dimension of the embedding vector.
skip_window = 1       # How many words to consider left and right.
num_skips = 2         # How many times to reuse an input to generate a label.

# We pick a random validation set to sample nearest neighbors. Here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16       # Random set of words to evaluate similarity on.
valid_window = 100    # Only pick dev samples in the head of the distribution.
valid_examples = np.random.choice(valid_window, valid_size, replace=False)
num_sampled = 64      # Number of negative examples to sample.
- Build the skip-gram batch-generation function
data_index = 0

def generate_batch(batch_size, num_skips, skip_window):
    global data_index
    assert batch_size % num_skips == 0
    assert num_skips <= 2 * skip_window
    batch = np.ndarray(shape=(batch_size), dtype=np.int32)
    labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
    span = 2 * skip_window + 1  # [ skip_window target skip_window ]
    buffer = collections.deque(maxlen=span)
    for _ in range(span):
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    for i in range(batch_size // num_skips):
        target = skip_window  # target label at the center of the buffer
        targets_to_avoid = [skip_window]
        for j in range(num_skips):
            while target in targets_to_avoid:
                target = random.randint(0, span - 1)
            targets_to_avoid.append(target)
            batch[i * num_skips + j] = buffer[skip_window]
            labels[i * num_skips + j, 0] = buffer[target]
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    return batch, labels
Here batch = np.ndarray(shape=(batch_size), dtype=np.int32) creates a 128-element vector and labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32) creates a 128x1 matrix. buffer holds the indices of the words in the currently selected context window, fed from data; data_index is a global cursor into data (and hence into words), which lets each call slide the context window smoothly forward.
The parameter skip_window is how many words we take on each side (left or right) of the current input word, and num_skips is how many distinct words from the whole window are used as output words for that input word.
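As a quick sanity check, and assuming data and reverse_dictionary from the previous steps are already in scope, you can print a tiny batch and look at the (center word -> context word) pairs it produces:

# Peek at a small batch of skip-gram training pairs
batch, labels = generate_batch(batch_size=8, num_skips=2, skip_window=1)
for i in range(8):
    print(reverse_dictionary[batch[i]], "->", reverse_dictionary[labels[i, 0]])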
- Build the computation graph
graph = tf.Graph()

with graph.as_default():
    # Input data.
    train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

    # Ops and variables pinned to the CPU because of missing GPU implementation
    with tf.device('/cpu:0'):
        # Look up embeddings for inputs.
        embeddings = tf.Variable(
            tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
        embed = tf.nn.embedding_lookup(embeddings, train_inputs)

        # Construct the variables for the NCE loss
        nce_weights = tf.Variable(
            tf.truncated_normal([vocabulary_size, embedding_size],
                                stddev=1.0 / math.sqrt(embedding_size)))
        nce_biases = tf.Variable(tf.zeros([vocabulary_size]))

    # Compute the average NCE loss for the batch.
    # tf.nce_loss automatically draws a new sample of the negative labels each
    # time we evaluate the loss.
    loss = tf.reduce_mean(
        tf.nn.nce_loss(weights=nce_weights,
                       biases=nce_biases,
                       inputs=embed,
                       labels=train_labels,
                       num_sampled=num_sampled,
                       num_classes=vocabulary_size))

    # Construct the SGD optimizer using a learning rate of 1.0.
    optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)

    # Compute the cosine similarity between minibatch examples and all embeddings.
    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
    normalized_embeddings = embeddings / norm
    valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
    similarity = tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True)

    # Add variable initializer.
    init = tf.global_variables_initializer()
First come the data placeholders, train_inputs [128] and train_labels [128x1], followed by valid_dataset, which holds a handful of relatively frequent valid words so that their nearest neighbours can be computed once training is done.
embeddings is the [10000x128] word-vector matrix and embed is the slice of it corresponding to the current training batch. nce_weights is the weight matrix for the NCE loss, initialized with tf.truncated_normal() (a truncated normal distribution), and nce_biases is the bias term. loss is the loss (objective) function, optimizer is stochastic gradient descent with a step size of 1.0 used to minimize it, and similarity computes, from embeddings, the similarity of the words stored in valid_dataset; see the sketch below.
A rough diagram of the network is shown in the figure (also borrowed); understanding the principle is what matters.
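For reference, the similarity op above is plain cosine similarity; a minimal numpy sketch of the same computation (with a made-up small embedding matrix) looks like this:

import numpy as np

# Cosine similarity the same way the graph computes it: L2-normalize the rows,
# then a matrix product gives the cosine between every validation word and every word.
embeddings_np = np.random.uniform(-1.0, 1.0, size=(10, 4))   # hypothetical 10-word, 4-dim embedding
norm = np.sqrt(np.sum(np.square(embeddings_np), axis=1, keepdims=True))
normalized = embeddings_np / norm
valid_ids = [0, 3]                                           # hypothetical validation word ids
sim = normalized[valid_ids] @ normalized.T                   # shape (2, 10), entries in [-1, 1]
nearest = (-sim[0]).argsort()[1:4]                           # top-3 neighbours of word 0 (skip itself)
print(nearest)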
- Run the training loop
num_steps = 100001

with tf.Session(graph=graph) as session:
    # We must initialize all variables before we use them.
    init.run()
    print("Initialized")

    average_loss = 0
    for step in range(num_steps):
        batch_inputs, batch_labels = generate_batch(batch_size, num_skips, skip_window)
        feed_dict = {train_inputs: batch_inputs, train_labels: batch_labels}

        # We perform one update step by evaluating the optimizer op (including it
        # in the list of returned values for session.run())
        _, loss_val = session.run([optimizer, loss], feed_dict=feed_dict)
        average_loss += loss_val

        if step % 2000 == 0:
            if step > 0:
                average_loss /= 2000
            # The average loss is an estimate of the loss over the last 2000 batches.
            print("Average loss at step ", step, ": ", average_loss)
            average_loss = 0

        # Note that this is expensive (~20% slowdown if computed every 500 steps)
        if step % 10000 == 0:
            sim = similarity.eval()
            for i in range(valid_size):
                valid_word = reverse_dictionary[valid_examples[i]]
                top_k = 8  # number of nearest neighbors
                nearest = (-sim[i, :]).argsort()[1:top_k + 1]
                log_str = "Nearest to %s:" % valid_word
                for k in range(top_k):
                    close_word = reverse_dictionary[nearest[k]]
                    log_str = "%s %s," % (log_str, close_word)
                print(log_str)

    final_embeddings = normalized_embeddings.eval()
The training itself is straightforward: each iteration calls generate_batch to produce batch_inputs and batch_labels, which are fed into the graph, and then one optimization step runs. Every 2000 steps the average loss is printed, every 10000 steps the top-k = 8 nearest neighbours of the words in valid_dataset are computed, and at the end final_embeddings holds the normalized word vectors.
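Once final_embeddings is available it is just a numpy array, so you can query it directly. A minimal sketch follows; the helper nearest_words and the output path are my own examples, and it assumes the queried word is actually in the vocabulary.

# Query nearest neighbours of one word from the trained, normalized embeddings,
# and dump all vectors to a text file (word followed by its vector components).
def nearest_words(word, top_k=8):
    vec = final_embeddings[dictionary[word]]
    sim = final_embeddings @ vec            # cosine similarity, since rows are L2-normalized
    best = (-sim).argsort()[1:top_k + 1]    # skip the word itself
    return [reverse_dictionary[i] for i in best]

print(nearest_words("書"))

with open("c:/resultw2v_tf.txt", "w", encoding="utf-8") as out:
    for idx in range(vocabulary_size):
        out.write(reverse_dictionary[idx] + " " +
                  " ".join(str(x) for x in final_embeddings[idx]) + "\n")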
- Finally, the visualization
def plot_with_labels(low_dim_embs, labels, filename='tsne.png'):
    assert low_dim_embs.shape[0] >= len(labels), "More labels than embeddings"
    plt.figure(figsize=(18, 18))  # in inches
    for i, label in enumerate(labels):
        x, y = low_dim_embs[i, :]
        plt.scatter(x, y)
        plt.annotate(label,
                     xy=(x, y),
                     xytext=(5, 2),
                     textcoords='offset points',
                     ha='right',
                     va='bottom')
    plt.savefig(filename)

try:
    from sklearn.manifold import TSNE
    import matplotlib.pyplot as plt

    tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
    plot_only = 500
    low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only, :])
    labels = [reverse_dictionary[i] for i in range(plot_only)]
    plot_with_labels(low_dim_embs, labels)
except ImportError:
    print("Please install sklearn, matplotlib, and scipy to visualize embeddings.")
The visualization uses t-SNE, which I won't go into here; if you want the details, see the separate write-up on dimensionality reduction (數(shù)據(jù)降維). Nothing more to add on this part.
The Spark implementation of word2vec
For Spark I will go straight to the code. It is very simple, and the official site has a very detailed tutorial; personally I think Spark's API could hardly be more user-friendly. Spark is also heading toward deep learning and real-time streaming, which I see as its mainstream path going forward, so I am waiting for an equally friendly deep-learning API to arrive.
Enough talk; here is the code.
import java.io.{File, PrintWriter}

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.feature.{Word2Vec, Word2VecModel}

object WordToVec {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordToVec")
      .setMaster("local")
    val sc = new SparkContext(conf)
    val stopwords = Array("的", "是", "你", "我", "他", "她", "它", "和", "了", "而", "有", "人", "被", "做", "對(duì)", "與")  // stop words
    val input = sc.textFile("c:/traindataw2v.txt")
      .map(line => line.split(" "))
      .map(_.filter(_.matches("[\u4E00-\u9FA5]+")).toSeq)  // keep Chinese tokens only
      .map(_.filter(!stopwords.contains(_)))
      .map(_.filter(_.length >= 2))                        // tokens must be at least 2 characters long
    val word2vec = new Word2Vec()
      .setMinCount(2)       // a word needs frequency >= 2 to enter the vocabulary
      .setWindowSize(5)     // context window of 5
      .setVectorSize(50)    // 50-dimensional word vectors
      .setNumIterations(25) // 25 training iterations
      .setNumPartitions(3)  // 3 data partitions
      .setSeed(12345)       // seed for the random number generator
    val model = word2vec.fit(input)
    // model.save(sc, "D:/word2vecTmal")
    // val model = Word2VecModel.load(sc, "D:/word2vecTmal")
    val word = model.getVectors.keySet
    val writer = new PrintWriter(new File("c:/resultw2v.txt"))
    model.getVectors.foreach(kv => {
      writer.write(kv._1 + " => " + kv._2.mkString(" ") + "\n")
    })
    writer.close()
    val synonyms = model.findSynonyms("很好", 5)  // the 5 words most similar to "很好"
    for ((synonym, cosineSimilarity) <- synonyms) {
      println(s"$synonym $cosineSimilarity")
    }
    sc.stop()
  }
}
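Since each line of the file written above has the form word => v1 v2 ... v50, the vectors are easy to load elsewhere; here is a small illustrative Python sketch for reading them back into a dict of numpy arrays:

import numpy as np

# Load the vectors written by the Spark job: each line is "word => v1 v2 ... v50"
vectors = {}
with open("c:/resultw2v.txt", "r", encoding="utf-8") as f:
    for line in f:
        word, _, values = line.strip().partition(" => ")
        if values:
            vectors[word] = np.array([float(v) for v in values.split(" ")])

print(len(vectors), vectors.get("很好"))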
Summary
My personal advice for training word2vec: on a single machine, the first approach (gensim) is the best choice; on a cluster, or when the data volume is large, use the distributed Spark training. Both tend to give more reliable results than the official TensorFlow example, which follows directly from how that example is implemented.
That is enough from me. Go try it yourselves; after all, my word is not what counts, and practice is the best teacher. More algorithm write-ups will follow, so stay tuned: all substance, no padding.