TensorFlow Study Notes, Part 2: Classifying Movie Reviews

Preface


This tutorial is based on the official TensorFlow tutorials at tensorflow.org. The site and most of its learning resources are blocked in mainland China, so you may need to arrange your own access. This article records my own process of learning machine learning; if it helps you in any way, I am honored. It is a personal translation and my proficiency is limited, so please bear with me. The tutorial uses the Anaconda distribution of Python with Python 3 code, and the editor is PyCharm.

Original GitHub link

Original Colab link (blocked in mainland China)

This article builds a neural network model that classifies a movie review as negative or positive based on its text. This is a typical binary classification problem, an important and widely applicable kind of machine learning problem.

We will use the IMDB dataset from the Internet Movie Database. It contains the text of 50,000 movie reviews, which we split into 25,000 reviews for training and 25,000 reviews for testing; the training and test sets each contain an equal number of negative and positive reviews.

This tutorial uses tf.keras, TensorFlow's high-level API for building and training models. You can import it with the following Python code:

import tensorflow as tf
from tensorflow import keras

import numpy as np

print(tf.__version__)
1.10.0

Download the IMDB dataset


The IMDB dataset now ships with TensorFlow. It has already been preprocessed so that each review is a sequence of integers, where each integer represents a specific word in a dictionary.
You can download the IMDB dataset with the code below; if it has already been downloaded, the same call simply loads it from disk:

imdb = keras.datasets.imdb

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
17465344/17464789 [==============================] - 0s 0us/step

The argument num_words=10000 keeps only the 10,000 most frequently occurring words in the training data; rarer words are discarded to keep the size of the data manageable.
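
As a quick sanity check (a sketch, not part of the original tutorial), we can confirm that no word index in the loaded data exceeds this cap:

# Every index should be below num_words (10000); rarer words were
# replaced by the out-of-vocabulary token during loading.
print(max(max(sequence) for sequence in train_data))  # expected: no index above 9999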

Explore the data


Let's take a moment to understand the format of the data. The dataset comes preprocessed: each movie review is a long list of integers, each representing a word's position in a dictionary. Each label is the integer 0 or 1, where 0 means the review is negative and 1 means it is positive.

print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))

This checks the size of the training set:

Training entries: 25000, labels: 25000
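
We can also confirm that the labels are balanced between negative and positive reviews (a quick sketch, not part of the original tutorial):

print(np.bincount(train_labels))  # expected: roughly [12500 12500]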

Each review has been converted into a long sequence of integers; the first training example looks like this:

print(train_data[0])
[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]

Movie reviews are generally of different lengths, but the inputs to the neural network must all be the same length; we will address this shortly.

print(len(train_data[0]), len(train_data[1]))
218 189
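
To get a broader picture of the length distribution than just the first two reviews, a small sketch (not part of the original tutorial):

lengths = [len(review) for review in train_data]
print(np.mean(lengths), np.max(lengths))  # average and maximum review length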

Convert the integers back to words

It can be useful to convert the integers back into words. Here we write a helper function that maps integers back to the words in the dictionary:

# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()

# The first indices are reserved
word_index = {k:(v+3) for k,v in word_index.items()} 
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2  # unknown
word_index["<UNUSED>"] = 3

reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])

def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb_word_index.json
1646592/1641221 [==============================] - 0s 0us/step

Now we can try decoding a movie review:

print(decode_review(train_data[0]))
<START> this film was just brilliant casting location scenery story direction everyone's really suited the part they played and you could just imagine being there robert <UNK> is an amazing actor and now the same being director <UNK> father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for <UNK> and would recommend it to everyone to watch and the fly fishing was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also <UNK> to the two little boy's that played the <UNK> of norman and paul they were just brilliant children are often left out of the <UNK> list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all
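
Going in the other direction, from raw text to the integer format the model expects, can be sketched with a hypothetical helper like the following (it ignores punctuation and is not part of the original tutorial):

def encode_review(text):
    # Map each lower-cased word to its integer index, using the <UNK>
    # token (2) for unknown words, and prepend the <START> token (1).
    # (For this model, indices >= 10000 would also have to be mapped to 2.)
    return [1] + [word_index.get(word, 2) for word in text.lower().split()]

print(encode_review("this film was just brilliant"))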

Prepare the data


The review texts must be converted to tensors before they can be fed into the neural network. There are several ways to do this:

  • One-hot-encode the arrays, converting them into vectors of 0s and 1s. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then make this the first layer of the network, a fully connected (Dense) layer that can handle floating-point vector data. This approach is memory intensive, however, requiring a matrix of size num_words * num_reviews (a sketch of this option appears after the list).
  • Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape num_examples * max_length. We can use an embedding layer capable of handling this shape as the first layer of the network.
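
For comparison, a minimal sketch of the first option (multi-hot encoding); this is not the approach used in the rest of this article:

def multi_hot_encode(sequences, dimension=10000):
    # Start with an all-zero matrix of shape (len(sequences), dimension)
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.0  # set the indices that appear in this review to 1
    return results

# e.g. multi_hot_encode(train_data) would have shape (25000, 10000)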

This article uses the second approach.

Since the movie reviews must all be the same length, we will use the pad_sequences function to standardize their lengths:

train_data = keras.preprocessing.sequence.pad_sequences(train_data,
                                                        value=word_index["<PAD>"],
                                                        padding='post',
                                                        maxlen=256)

test_data = keras.preprocessing.sequence.pad_sequences(test_data,
                                                       value=word_index["<PAD>"],
                                                       padding='post',
                                                       maxlen=256)

Let's look at the lengths now:

print(len(train_data[0]), len(train_data[1]))
256 256
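
Equivalently, after padding the data is a regular two-dimensional array (a quick sketch):

print(train_data.shape)  # expected: (25000, 256)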

Then let's look at the content of the first (now padded) review:

print(train_data[0])
[   1   14   22   16   43  530  973 1622 1385   65  458 4468   66 3941
    4  173   36  256    5   25  100   43  838  112   50  670    2    9
   35  480  284    5  150    4  172  112  167    2  336  385   39    4
  172 4536 1111   17  546   38   13  447    4  192   50   16    6  147
 2025   19   14   22    4 1920 4613  469    4   22   71   87   12   16
   43  530   38   76   15   13 1247    4   22   17  515   17   12   16
  626   18    2    5   62  386   12    8  316    8  106    5    4 2223
 5244   16  480   66 3785   33    4  130   12   16   38  619    5   25
  124   51   36  135   48   25 1415   33    6   22   12  215   28   77
   52    5   14  407   16   82    2    8    4  107  117 5952   15  256
    4    2    7 3766    5  723   36   71   43  530  476   26  400  317
   46    7    4    2 1029   13  104   88    4  381   15  297   98   32
 2071   56   26  141    6  194 7486   18    4  226   22   21  134  476
   26  480    5  144   30 5535   18   51   36   28  224   92   25  104
    4  226   65   16   38 1334   88   12   16  283    5   16 4472  113
  103   32   15   16 5345   19  178   32    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0    0    0    0    0    0    0    0    0    0    0
    0    0    0    0]

Build the model


A neural network is built by stacking layers, which requires two main architectural decisions:

  • How many layers to use in the model
  • How many hidden units to use for each layer

In this example, the input data consists of arrays of word indices, and the labels to predict are either 0 or 1. We can build a model for this problem as follows:

# input shape is the vocabulary count used for the movie reviews (10,000 words)
vocab_size = 10000

model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))

model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (None, None, 16)          160000    
_________________________________________________________________
global_average_pooling1d (Gl (None, 16)                0         
_________________________________________________________________
dense (Dense)                (None, 16)                272       
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 17        
=================================================================
Total params: 160,289
Trainable params: 160,289
Non-trainable params: 0
_________________________________________________________________

The layers are stacked as follows:
1. The first layer is an Embedding layer. This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word index. These vectors are learned as the model trains. The vectors add a dimension to the output array, giving a resulting shape of (batch, sequence, embedding). For more details, see the Keras documentation on embedding layers.

2. Next, a GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle variable-length input in the simplest possible way.

3. A fully connected (Dense) layer with 16 hidden units.

4. The last layer is also a Dense layer and serves as the output layer; it uses a sigmoid activation and outputs a float between 0 and 1 representing the model's confidence.
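
To make step 2 concrete, here is a small NumPy sketch (not part of the original tutorial) of what averaging over the sequence dimension does to the embedding output shape:

# Toy array with shape (batch, sequence, embedding) = (2, 4, 16)
embedded = np.random.rand(2, 4, 16)
# Averaging over the sequence axis collapses (batch, sequence, embedding)
# into (batch, embedding), which is what GlobalAveragePooling1D does.
pooled = embedded.mean(axis=1)
print(pooled.shape)  # (2, 16)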

Configure the model with the optimizer and loss function used in this tutorial:

model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='binary_crossentropy',
              metrics=['accuracy'])

Create a validation set


While training, we want to keep checking the accuracy of the model on data it hasn't seen before. We create a validation set by setting apart 10,000 examples from the original training data. (Why not use the test data? Because our goal is to develop and tune the model using only the training data, and then use the test data just once to evaluate the final model.)

x_val = train_data[:10000]
partial_x_train = train_data[10000:]

y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]

Train the model


history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=40,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    verbose=1)

Train the model on the 15,000 training samples for 40 epochs with a batch_size of 512.
While training, the model also monitors its loss and accuracy on the 10,000 validation samples.

Train on 15000 samples, validate on 10000 samples
Epoch 1/40
15000/15000 [==============================] - 1s 43us/step - loss: 0.6951 - acc: 0.5043 - val_loss: 0.6929 - val_acc: 0.5117
Epoch 2/40
15000/15000 [==============================] - 0s 27us/step - loss: 0.6912 - acc: 0.5311 - val_loss: 0.6903 - val_acc: 0.5281
Epoch 3/40
15000/15000 [==============================] - 0s 28us/step - loss: 0.6893 - acc: 0.5553 - val_loss: 0.6888 - val_acc: 0.5674
Epoch 4/40
15000/15000 [==============================] - 0s 27us/step - loss: 0.6870 - acc: 0.5961 - val_loss: 0.6866 - val_acc: 0.5853
Epoch 5/40
15000/15000 [==============================] - 0s 27us/step - loss: 0.6841 - acc: 0.6161 - val_loss: 0.6831 - val_acc: 0.6584
Epoch 6/40
15000/15000 [==============================] - 0s 29us/step - loss: 0.6802 - acc: 0.6869 - val_loss: 0.6789 - val_acc: 0.6999
Epoch 7/40
15000/15000 [==============================] - 0s 28us/step - loss: 0.6746 - acc: 0.7159 - val_loss: 0.6735 - val_acc: 0.7093
Epoch 8/40
15000/15000 [==============================] - 0s 28us/step - loss: 0.6670 - acc: 0.7367 - val_loss: 0.6654 - val_acc: 0.7395
Epoch 9/40
15000/15000 [==============================] - 0s 26us/step - loss: 0.6569 - acc: 0.7586 - val_loss: 0.6546 - val_acc: 0.7523
Epoch 10/40
15000/15000 [==============================] - 0s 26us/step - loss: 0.6436 - acc: 0.7728 - val_loss: 0.6408 - val_acc: 0.7585
Epoch 11/40
15000/15000 [==============================] - 0s 26us/step - loss: 0.6274 - acc: 0.7625 - val_loss: 0.6245 - val_acc: 0.7662
Epoch 12/40
15000/15000 [==============================] - 0s 26us/step - loss: 0.6074 - acc: 0.7823 - val_loss: 0.6051 - val_acc: 0.7710
Epoch 13/40
15000/15000 [==============================] - 0s 26us/step - loss: 0.5840 - acc: 0.7901 - val_loss: 0.5840 - val_acc: 0.7785
Epoch 14/40
15000/15000 [==============================] - 0s 26us/step - loss: 0.5583 - acc: 0.8007 - val_loss: 0.5589 - val_acc: 0.7902
Epoch 15/40
15000/15000 [==============================] - 0s 27us/step - loss: 0.5305 - acc: 0.8105 - val_loss: 0.5331 - val_acc: 0.7982
Epoch 16/40
15000/15000 [==============================] - 0s 26us/step - loss: 0.5029 - acc: 0.8191 - val_loss: 0.5087 - val_acc: 0.8046
Epoch 17/40
15000/15000 [==============================] - 0s 27us/step - loss: 0.4750 - acc: 0.8329 - val_loss: 0.4848 - val_acc: 0.8184
Epoch 18/40
15000/15000 [==============================] - 0s 26us/step - loss: 0.4487 - acc: 0.8433 - val_loss: 0.4618 - val_acc: 0.8260
Epoch 19/40
15000/15000 [==============================] - 0s 27us/step - loss: 0.4241 - acc: 0.8540 - val_loss: 0.4409 - val_acc: 0.8339
Epoch 20/40
15000/15000 [==============================] - 0s 27us/step - loss: 0.4015 - acc: 0.8639 - val_loss: 0.4221 - val_acc: 0.8411
Epoch 21/40
15000/15000 [==============================] - 0s 26us/step - loss: 0.3806 - acc: 0.8711 - val_loss: 0.4051 - val_acc: 0.8465
Epoch 22/40
15000/15000 [==============================] - 0s 26us/step - loss: 0.3619 - acc: 0.8765 - val_loss: 0.3903 - val_acc: 0.8513
Epoch 23/40
15000/15000 [==============================] - 0s 26us/step - loss: 0.3454 - acc: 0.8809 - val_loss: 0.3776 - val_acc: 0.8564
Epoch 24/40
15000/15000 [==============================] - 0s 26us/step - loss: 0.3302 - acc: 0.8859 - val_loss: 0.3663 - val_acc: 0.8595
Epoch 25/40
15000/15000 [==============================] - 0s 26us/step - loss: 0.3169 - acc: 0.8899 - val_loss: 0.3566 - val_acc: 0.8622
Epoch 26/40
15000/15000 [==============================] - 0s 27us/step - loss: 0.3048 - acc: 0.8931 - val_loss: 0.3481 - val_acc: 0.8650
Epoch 27/40
15000/15000 [==============================] - 0s 27us/step - loss: 0.2941 - acc: 0.8965 - val_loss: 0.3407 - val_acc: 0.8680
Epoch 28/40
15000/15000 [==============================] - 0s 27us/step - loss: 0.2839 - acc: 0.8991 - val_loss: 0.3341 - val_acc: 0.8701
Epoch 29/40
15000/15000 [==============================] - 0s 26us/step - loss: 0.2748 - acc: 0.9022 - val_loss: 0.3286 - val_acc: 0.8719
Epoch 30/40
15000/15000 [==============================] - 0s 27us/step - loss: 0.2669 - acc: 0.9043 - val_loss: 0.3235 - val_acc: 0.8720
Epoch 31/40
15000/15000 [==============================] - 0s 26us/step - loss: 0.2585 - acc: 0.9082 - val_loss: 0.3192 - val_acc: 0.8753
Epoch 32/40
15000/15000 [==============================] - 0s 27us/step - loss: 0.2518 - acc: 0.9101 - val_loss: 0.3154 - val_acc: 0.8755
Epoch 33/40
15000/15000 [==============================] - 0s 26us/step - loss: 0.2443 - acc: 0.9119 - val_loss: 0.3121 - val_acc: 0.8754
Epoch 34/40
15000/15000 [==============================] - 0s 26us/step - loss: 0.2378 - acc: 0.9154 - val_loss: 0.3089 - val_acc: 0.8757
Epoch 35/40
15000/15000 [==============================] - 0s 27us/step - loss: 0.2320 - acc: 0.9161 - val_loss: 0.3060 - val_acc: 0.8769
Epoch 36/40
15000/15000 [==============================] - 0s 27us/step - loss: 0.2257 - acc: 0.9195 - val_loss: 0.3038 - val_acc: 0.8774
Epoch 37/40
15000/15000 [==============================] - 0s 27us/step - loss: 0.2203 - acc: 0.9214 - val_loss: 0.3019 - val_acc: 0.8778
Epoch 38/40
15000/15000 [==============================] - 0s 28us/step - loss: 0.2150 - acc: 0.9232 - val_loss: 0.2993 - val_acc: 0.8786
Epoch 39/40
15000/15000 [==============================] - 0s 27us/step - loss: 0.2096 - acc: 0.9257 - val_loss: 0.2977 - val_acc: 0.8792
Epoch 40/40
15000/15000 [==============================] - 0s 27us/step - loss: 0.2047 - acc: 0.9275 - val_loss: 0.2959 - val_acc: 0.8803

Evaluate the model


Let's see how the trained model performs:

results = model.evaluate(test_data, test_labels)

print(results)
25000/25000 [==============================] - 0s 13us/step
[0.3104253210735321, 0.87236]

This fairly simple approach already reaches an accuracy of about 87%; with more advanced approaches, the model could get closer to 95%.
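
Beyond the aggregate metrics, we can also inspect individual predictions (a quick sketch, not part of the original tutorial):

predictions = model.predict(test_data)
print(predictions[0])                                 # sigmoid output between 0 and 1
print(int(predictions[0][0] > 0.5), test_labels[0])   # predicted class vs. true label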

Plot a graph to check for overfitting

model.fit() returns a History object that records what happened during training. Its history attribute is a dictionary, and we can print its keys to inspect the structure:

history_dict = history.history
history_dict.keys()
dict_keys(['loss', 'val_loss', 'acc', 'val_acc'])

We can use the plotting code below to visualize the overfitting more intuitively:

import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1, len(acc) + 1)

# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

plt.show()
plt.clf()   # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.show()
[Figure: training and validation loss]
[Figure: training and validation accuracy]

In the plots, the dots represent the training loss and accuracy, and the solid lines represent the validation loss and accuracy.
Notice that the training loss decreases with each epoch and the training accuracy increases with each epoch. This is expected when using gradient descent optimization.

The validation accuracy and training accuracy start to diverge after about 20 epochs. This is an example of overfitting: the model performs better on the training data than on data it has never seen before. Beyond this point, the model over-optimizes and learns representations that no longer generalize to the test data.

For this particular case, we could prevent overfitting simply by stopping training after about twenty epochs. In a later tutorial, you will see how to do this automatically with a callback.
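
As a preview, and as a sketch only (the callback-based approach belongs to a later tutorial), stopping automatically when the validation loss stops improving could look like this, assuming the same model and data as above:

# Stop training once val_loss has not improved for 2 consecutive epochs.
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=2)

model.fit(partial_x_train,
          partial_y_train,
          epochs=40,
          batch_size=512,
          validation_data=(x_val, y_val),
          callbacks=[early_stop],
          verbose=1)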

#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.