2023-04-10 test

from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model
from keras.optimizers import Adam

# SEQ_LENGTH, MAX_NB_WORDS, EMBEDDING_DIM and EMBEDDING_DIM_2 are module-level
# constants; the logs below imply SEQ_LENGTH = 10, EMBEDDING_DIM = 800 and
# EMBEDDING_DIM_2 = 1600 for the first two runs.
def model_lstm(X_train, X_val, y_train, y_val, word_to_index):
    input_shape = (SEQ_LENGTH,)
    x_train_in = Input(input_shape, dtype='int32', name="x_train")

    # word_to_index holds the index mapping for the whole vocabulary, so the
    # effective vocabulary size is capped at MAX_NB_WORDS.
    nb_words = min(MAX_NB_WORDS, len(word_to_index))
    embedding_layer = Embedding(nb_words, EMBEDDING_DIM, input_length=SEQ_LENGTH)(x_train_in)
    print("embedding layer is::", embedding_layer)
    print("build model.....")

    # return_sequences=True makes LSTM_1 return the full output sequence, which
    # the stacked LSTM_2 needs as input; if the next layer were not an LSTM it
    # could be omitted.
    lstm_1 = LSTM(EMBEDDING_DIM, name="LSTM_1", return_sequences=True)(embedding_layer)
    lstm_2 = LSTM(EMBEDDING_DIM_2, name="LSTM_2")(lstm_1)
    dense = Dense(nb_words, activation="softmax", name="Dense_1")(lstm_2)

    model = Model(inputs=x_train_in, outputs=dense)
    # summary() prints itself and returns None; wrapping it in print() is what
    # produced the stray "None" lines in the logs below.
    model.summary()

    # `lr` is deprecated in current Keras; `learning_rate` avoids the
    # UserWarning visible in the logs below.
    adam = Adam(learning_rate=0.0001, beta_1=0.9, beta_2=0.99, epsilon=1e-08)
    model.compile(loss='categorical_crossentropy',
                  optimizer=adam,
                  metrics=['accuracy'])
    return model
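
Neither the data preparation nor the training call is shown, but their shape can be inferred: the input layer takes SEQ_LENGTH token ids, the softmax has nb_words outputs, categorical_crossentropy implies one-hot targets, and the "val_loss improved ... saving model to ..." lines point to a ModelCheckpoint callback that keeps only the best epoch. A minimal sketch under those assumptions (token_ids, n_val and the batch size are hypothetical; only the checkpoint filename is copied from the log):

import numpy as np
from keras.callbacks import ModelCheckpoint
from keras.utils import to_categorical

# Slide a SEQ_LENGTH window over the tokenized corpus: each window of ids is
# one sample, and the token that follows it is the (one-hot) target.
X, y = [], []
for i in range(len(token_ids) - SEQ_LENGTH):
    X.append(token_ids[i:i + SEQ_LENGTH])
    y.append(token_ids[i + SEQ_LENGTH])
X = np.asarray(X, dtype="int32")
y = to_categorical(y, num_classes=nb_words)
X_train, X_val = X[:-n_val], X[-n_val:]   # n_val: size of the validation split
y_train, y_val = y[:-n_val], y[-n_val:]

model = model_lstm(X_train, X_val, y_train, y_val, word_to_index)

# Save only the epochs where val_loss improves, matching the log output.
checkpoint = ModelCheckpoint(
    r".\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5",
    monitor="val_loss", save_best_only=True, verbose=1)

print("Train....")
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=50,
          batch_size=128,   # assumption; the log only shows 732 steps per epoch
          callbacks=[checkpoint])

The `period` warning in the log suggests the original script also passed the deprecated `period` argument to ModelCheckpoint instead of `save_freq`.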
embedding layer is:: KerasTensor(type_spec=TensorSpec(shape=(None, 10, 800), dtype=tf.float32, name=None), name='embedding/embedding_lookup/Identity_1:0', description="created by layer 'embedding'")
build model.....
Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 x_train (InputLayer)        [(None, 10)]              0         
                                                                 
 embedding (Embedding)       (None, 10, 800)           1948800   
                                                                 
 LSTM_1 (LSTM)               (None, 10, 800)           5123200   
                                                                 
 LSTM_2 (LSTM)               (None, 1600)              15366400  
                                                                 
 Dense_1 (Dense)             (None, 2436)              3900036   
                                                                 
=================================================================
Total params: 26,338,436
Trainable params: 26,338,436
Non-trainable params: 0
_________________________________________________________________
None
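
Sanity check on the summary: with nb_words = 2436, EMBEDDING_DIM = 800 and EMBEDDING_DIM_2 = 1600, the parameter counts match exactly. Embedding: 2436 × 800 = 1,948,800. A Keras LSTM has 4 × units × (input_dim + units + 1) weights, so LSTM_1: 4 × 800 × (800 + 800 + 1) = 5,123,200 and LSTM_2: 4 × 1600 × (800 + 1600 + 1) = 15,366,400. Dense: (1600 + 1) × 2436 = 3,900,036. Total: 26,338,436.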
F:\anaconda3\lib\site-packages\keras\optimizers\optimizer_v2\adam.py:110: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
WARNING:tensorflow:`period` argument is deprecated. Please use `save_freq` to specify the frequency in number of batches seen.
Train....
Epoch 1/50
732/732 [==============================] - ETA: 0s - loss: 6.2225 - accuracy: 0.1076
Epoch 1: val_loss improved from inf to 6.08880, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 499s 667ms/step - loss: 6.2225 - accuracy: 0.1076 - val_loss: 6.0888 - val_accuracy: 0.1149
Epoch 2/50
732/732 [==============================] - ETA: 0s - loss: 5.9625 - accuracy: 0.1208
Epoch 2: val_loss improved from 6.08880 to 6.01894, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 514s 702ms/step - loss: 5.9625 - accuracy: 0.1208 - val_loss: 6.0189 - val_accuracy: 0.1165
Epoch 3/50
732/732 [==============================] - ETA: 0s - loss: 5.7897 - accuracy: 0.1296
Epoch 3: val_loss improved from 6.01894 to 5.87968, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 503s 687ms/step - loss: 5.7897 - accuracy: 0.1296 - val_loss: 5.8797 - val_accuracy: 0.1231
Epoch 4/50
732/732 [==============================] - ETA: 0s - loss: 5.5915 - accuracy: 0.1372
Epoch 4: val_loss improved from 5.87968 to 5.74615, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 496s 677ms/step - loss: 5.5915 - accuracy: 0.1372 - val_loss: 5.7462 - val_accuracy: 0.1287
Epoch 5/50
732/732 [==============================] - ETA: 0s - loss: 5.3582 - accuracy: 0.1474
Epoch 5: val_loss improved from 5.74615 to 5.60099, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 508s 694ms/step - loss: 5.3582 - accuracy: 0.1474 - val_loss: 5.6010 - val_accuracy: 0.1401
Epoch 6/50
732/732 [==============================] - ETA: 0s - loss: 5.0717 - accuracy: 0.1608
Epoch 6: val_loss improved from 5.60099 to 5.47120, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 480s 656ms/step - loss: 5.0717 - accuracy: 0.1608 - val_loss: 5.4712 - val_accuracy: 0.1444
Epoch 7/50
732/732 [==============================] - ETA: 0s - loss: 4.7379 - accuracy: 0.1846
Epoch 7: val_loss improved from 5.47120 to 5.28951, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 513s 701ms/step - loss: 4.7379 - accuracy: 0.1846 - val_loss: 5.2895 - val_accuracy: 0.1627
Epoch 8/50
732/732 [==============================] - ETA: 0s - loss: 4.3856 - accuracy: 0.2226
Epoch 8: val_loss improved from 5.28951 to 5.10057, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 490s 669ms/step - loss: 4.3856 - accuracy: 0.2226 - val_loss: 5.1006 - val_accuracy: 0.1844
Epoch 9/50
732/732 [==============================] - ETA: 0s - loss: 4.0185 - accuracy: 0.2718
Epoch 9: val_loss improved from 5.10057 to 4.93234, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 523s 714ms/step - loss: 4.0185 - accuracy: 0.2718 - val_loss: 4.9323 - val_accuracy: 0.2067
Epoch 10/50
732/732 [==============================] - ETA: 0s - loss: 3.6519 - accuracy: 0.3300
Epoch 10: val_loss improved from 4.93234 to 4.76022, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 492s 672ms/step - loss: 3.6519 - accuracy: 0.3300 - val_loss: 4.7602 - val_accuracy: 0.2352
Epoch 11/50
732/732 [==============================] - ETA: 0s - loss: 3.2911 - accuracy: 0.3934
Epoch 11: val_loss improved from 4.76022 to 4.54920, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 522s 714ms/step - loss: 3.2911 - accuracy: 0.3934 - val_loss: 4.5492 - val_accuracy: 0.2758
Epoch 12/50
732/732 [==============================] - ETA: 0s - loss: 2.9327 - accuracy: 0.4617
Epoch 12: val_loss improved from 4.54920 to 4.35707, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 497s 679ms/step - loss: 2.9327 - accuracy: 0.4617 - val_loss: 4.3571 - val_accuracy: 0.3135
Epoch 13/50
732/732 [==============================] - ETA: 0s - loss: 2.5890 - accuracy: 0.5304
Epoch 13: val_loss improved from 4.35707 to 4.19432, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 520s 711ms/step - loss: 2.5890 - accuracy: 0.5304 - val_loss: 4.1943 - val_accuracy: 0.3563
Epoch 14/50
732/732 [==============================] - ETA: 0s - loss: 2.2645 - accuracy: 0.5939
Epoch 14: val_loss improved from 4.19432 to 4.01978, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 496s 678ms/step - loss: 2.2645 - accuracy: 0.5939 - val_loss: 4.0198 - val_accuracy: 0.3830
Epoch 15/50
732/732 [==============================] - ETA: 0s - loss: 1.9605 - accuracy: 0.6559
Epoch 15: val_loss improved from 4.01978 to 3.85684, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 515s 703ms/step - loss: 1.9605 - accuracy: 0.6559 - val_loss: 3.8568 - val_accuracy: 0.4217
Epoch 16/50
732/732 [==============================] - ETA: 0s - loss: 1.6771 - accuracy: 0.7090
Epoch 16: val_loss improved from 3.85684 to 3.67159, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 506s 692ms/step - loss: 1.6771 - accuracy: 0.7090 - val_loss: 3.6716 - val_accuracy: 0.4705
Epoch 17/50
732/732 [==============================] - ETA: 0s - loss: 1.4196 - accuracy: 0.7565
Epoch 17: val_loss improved from 3.67159 to 3.54158, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 516s 706ms/step - loss: 1.4196 - accuracy: 0.7565 - val_loss: 3.5416 - val_accuracy: 0.4999
Epoch 18/50
732/732 [==============================] - ETA: 0s - loss: 1.1845 - accuracy: 0.8012
Epoch 18: val_loss improved from 3.54158 to 3.41239, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 513s 700ms/step - loss: 1.1845 - accuracy: 0.8012 - val_loss: 3.4124 - val_accuracy: 0.5406
Epoch 19/50
732/732 [==============================] - ETA: 0s - loss: 0.9755 - accuracy: 0.8447
Epoch 19: val_loss improved from 3.41239 to 3.30893, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 512s 700ms/step - loss: 0.9755 - accuracy: 0.8447 - val_loss: 3.3089 - val_accuracy: 0.5650
Epoch 20/50
732/732 [==============================] - ETA: 0s - loss: 0.7897 - accuracy: 0.8836
Epoch 20: val_loss improved from 3.30893 to 3.19777, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 493s 674ms/step - loss: 0.7897 - accuracy: 0.8836 - val_loss: 3.1978 - val_accuracy: 0.5944
Epoch 21/50
732/732 [==============================] - ETA: 0s - loss: 0.6258 - accuracy: 0.9167
Epoch 21: val_loss improved from 3.19777 to 3.11253, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 514s 702ms/step - loss: 0.6258 - accuracy: 0.9167 - val_loss: 3.1125 - val_accuracy: 0.6162
Epoch 22/50
732/732 [==============================] - ETA: 0s - loss: 0.4878 - accuracy: 0.9439
Epoch 22: val_loss improved from 3.11253 to 3.04473, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 492s 673ms/step - loss: 0.4878 - accuracy: 0.9439 - val_loss: 3.0447 - val_accuracy: 0.6273
Epoch 23/50
732/732 [==============================] - ETA: 0s - loss: 0.3731 - accuracy: 0.9611
Epoch 23: val_loss improved from 3.04473 to 3.02193, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 520s 711ms/step - loss: 0.3731 - accuracy: 0.9611 - val_loss: 3.0219 - val_accuracy: 0.6311
Epoch 24/50
732/732 [==============================] - ETA: 0s - loss: 0.2829 - accuracy: 0.9724
Epoch 24: val_loss improved from 3.02193 to 3.00552, saving model to .\model1_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
732/732 [==============================] - 507s 693ms/step - loss: 0.2829 - accuracy: 0.9724 - val_loss: 3.0055 - val_accuracy: 0.6339
Epoch 25/50
732/732 [==============================] - ETA: 0s - loss: 0.2212 - accuracy: 0.9760
Epoch 25: val_loss did not improve from 3.00552
732/732 [==============================] - 515s 704ms/step - loss: 0.2212 - accuracy: 0.9760 - val_loss: 3.0245 - val_accuracy: 0.6371
Epoch 26/50
732/732 [==============================] - ETA: 0s - loss: 0.1743 - accuracy: 0.9784
Epoch 26: val_loss did not improve from 3.00552
732/732 [==============================] - 486s 664ms/step - loss: 0.1743 - accuracy: 0.9784 - val_loss: 3.0246 - val_accuracy: 0.6368
Epoch 27/50
732/732 [==============================] - ETA: 0s - loss: 0.1448 - accuracy: 0.9799
Epoch 27: val_loss did not improve from 3.00552
732/732 [==============================] - 513s 701ms/step - loss: 0.1448 - accuracy: 0.9799 - val_loss: 3.0409 - val_accuracy: 0.6376
Epoch 28/50
732/732 [==============================] - ETA: 0s - loss: 0.1268 - accuracy: 0.9800
Epoch 28: val_loss did not improve from 3.00552
732/732 [==============================] - 494s 675ms/step - loss: 0.1268 - accuracy: 0.9800 - val_loss: 3.0424 - val_accuracy: 0.6390
Epoch 29/50
732/732 [==============================] - ETA: 0s - loss: 0.1097 - accuracy: 0.9805
Epoch 29: val_loss did not improve from 3.00552
732/732 [==============================] - 511s 698ms/step - loss: 0.1097 - accuracy: 0.9805 - val_loss: 3.1073 - val_accuracy: 0.6362
Epoch 30/50
732/732 [==============================] - ETA: 0s - loss: 0.1002 - accuracy: 0.9814
Epoch 30: val_loss did not improve from 3.00552
732/732 [==============================] - 489s 667ms/step - loss: 0.1002 - accuracy: 0.9814 - val_loss: 3.0818 - val_accuracy: 0.6372
Epoch 31/50
732/732 [==============================] - ETA: 0s - loss: 0.0904 - accuracy: 0.9823
Epoch 31: val_loss did not improve from 3.00552
732/732 [==============================] - 490s 670ms/step - loss: 0.0904 - accuracy: 0.9823 - val_loss: 3.1299 - val_accuracy: 0.6391
Epoch 32/50
732/732 [==============================] - ETA: 0s - loss: 0.0847 - accuracy: 0.9826
Epoch 32: val_loss did not improve from 3.00552
732/732 [==============================] - 486s 664ms/step - loss: 0.0847 - accuracy: 0.9826 - val_loss: 3.1279 - val_accuracy: 0.6388
Epoch 33/50
732/732 [==============================] - ETA: 0s - loss: 0.0783 - accuracy: 0.9830
Epoch 33: val_loss did not improve from 3.00552
732/732 [==============================] - 490s 669ms/step - loss: 0.0783 - accuracy: 0.9830 - val_loss: 3.1476 - val_accuracy: 0.6406
Epoch 34/50
732/732 [==============================] - ETA: 0s - loss: 0.0755 - accuracy: 0.9833
Epoch 34: val_loss did not improve from 3.00552
732/732 [==============================] - 496s 677ms/step - loss: 0.0755 - accuracy: 0.9833 - val_loss: 3.1682 - val_accuracy: 0.6392
Epoch 35/50
732/732 [==============================] - ETA: 0s - loss: 0.0707 - accuracy: 0.9835
Epoch 35: val_loss did not improve from 3.00552
732/732 [==============================] - 501s 685ms/step - loss: 0.0707 - accuracy: 0.9835 - val_loss: 3.1914 - val_accuracy: 0.6402
Epoch 36/50
732/732 [==============================] - ETA: 0s - loss: 0.0672 - accuracy: 0.9846
Epoch 36: val_loss did not improve from 3.00552
732/732 [==============================] - 515s 704ms/step - loss: 0.0672 - accuracy: 0.9846 - val_loss: 3.1692 - val_accuracy: 0.6415
Epoch 37/50
732/732 [==============================] - ETA: 0s - loss: 0.0636 - accuracy: 0.9845
Epoch 37: val_loss did not improve from 3.00552
732/732 [==============================] - 503s 687ms/step - loss: 0.0636 - accuracy: 0.9845 - val_loss: 3.1902 - val_accuracy: 0.6396
Epoch 38/50
732/732 [==============================] - ETA: 0s - loss: 0.0608 - accuracy: 0.9848
Epoch 38: val_loss did not improve from 3.00552
732/732 [==============================] - 500s 684ms/step - loss: 0.0608 - accuracy: 0.9848 - val_loss: 3.2331 - val_accuracy: 0.6398
Epoch 39/50
732/732 [==============================] - ETA: 0s - loss: 0.0594 - accuracy: 0.9850
Epoch 39: val_loss did not improve from 3.00552
732/732 [==============================] - 509s 695ms/step - loss: 0.0594 - accuracy: 0.9850 - val_loss: 3.2590 - val_accuracy: 0.6411
Epoch 40/50
732/732 [==============================] - ETA: 0s - loss: 0.0577 - accuracy: 0.9855
Epoch 40: val_loss did not improve from 3.00552
732/732 [==============================] - 499s 682ms/step - loss: 0.0577 - accuracy: 0.9855 - val_loss: 3.2430 - val_accuracy: 0.6405
Epoch 41/50
732/732 [==============================] - ETA: 0s - loss: 0.0567 - accuracy: 0.9852
Epoch 41: val_loss did not improve from 3.00552
732/732 [==============================] - 484s 661ms/step - loss: 0.0567 - accuracy: 0.9852 - val_loss: 3.2784 - val_accuracy: 0.6393
Epoch 42/50
732/732 [==============================] - ETA: 0s - loss: 0.0555 - accuracy: 0.9849
Epoch 42: val_loss did not improve from 3.00552
732/732 [==============================] - 496s 677ms/step - loss: 0.0555 - accuracy: 0.9849 - val_loss: 3.2869 - val_accuracy: 0.6407
Epoch 43/50
732/732 [==============================] - ETA: 0s - loss: 0.0535 - accuracy: 0.9856
Epoch 43: val_loss did not improve from 3.00552
732/732 [==============================] - 501s 685ms/step - loss: 0.0535 - accuracy: 0.9856 - val_loss: 3.2955 - val_accuracy: 0.6413
Epoch 44/50
732/732 [==============================] - ETA: 0s - loss: 0.0519 - accuracy: 0.9854
Epoch 44: val_loss did not improve from 3.00552
732/732 [==============================] - 510s 697ms/step - loss: 0.0519 - accuracy: 0.9854 - val_loss: 3.3141 - val_accuracy: 0.6429
Epoch 45/50
732/732 [==============================] - ETA: 0s - loss: 0.0505 - accuracy: 0.9854
Epoch 45: val_loss did not improve from 3.00552
732/732 [==============================] - 498s 680ms/step - loss: 0.0505 - accuracy: 0.9854 - val_loss: 3.3240 - val_accuracy: 0.6395
Epoch 46/50
732/732 [==============================] - ETA: 0s - loss: 0.0505 - accuracy: 0.9855
Epoch 46: val_loss did not improve from 3.00552
732/732 [==============================] - 505s 690ms/step - loss: 0.0505 - accuracy: 0.9855 - val_loss: 3.3422 - val_accuracy: 0.6393
Epoch 47/50
732/732 [==============================] - ETA: 0s - loss: 0.0486 - accuracy: 0.9859
Epoch 47: val_loss did not improve from 3.00552
732/732 [==============================] - 500s 684ms/step - loss: 0.0486 - accuracy: 0.9859 - val_loss: 3.3550 - val_accuracy: 0.6389
Epoch 48/50
732/732 [==============================] - ETA: 0s - loss: 0.0475 - accuracy: 0.9864
Epoch 48: val_loss did not improve from 3.00552
732/732 [==============================] - 491s 671ms/step - loss: 0.0475 - accuracy: 0.9864 - val_loss: 3.3685 - val_accuracy: 0.6409
Epoch 49/50
732/732 [==============================] - ETA: 0s - loss: 0.0466 - accuracy: 0.9863
Epoch 49: val_loss did not improve from 3.00552
732/732 [==============================] - 502s 686ms/step - loss: 0.0466 - accuracy: 0.9863 - val_loss: 3.3602 - val_accuracy: 0.6406
Epoch 50/50
732/732 [==============================] - ETA: 0s - loss: 0.0459 - accuracy: 0.9867
Epoch 50: val_loss did not improve from 3.00552
732/732 [==============================] - 520s 710ms/step - loss: 0.0459 - accuracy: 0.9867 - val_loss: 3.4052 - val_accuracy: 0.6399
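
From epoch 25 on, val_loss never improves on 3.00552 while training accuracy climbs past 0.98: the model is memorizing the training lyrics, and the last 26 epochs only burn time since just the best checkpoint is kept. An EarlyStopping callback would cut the run short automatically; a minimal sketch (the patience value is an arbitrary choice):

from keras.callbacks import EarlyStopping

# Stop once val_loss has failed to improve for 5 consecutive epochs and
# roll back to the best weights seen so far.
early_stop = EarlyStopping(monitor="val_loss", patience=5,
                           restore_best_weights=True)
# passed alongside the checkpoint: callbacks=[checkpoint, early_stop]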

2023-04-16 10:02:50.415089: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-16 10:02:50.532383: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
癡情
如如任一點一眼
你是沒有那美單
等有你還格我用
我的秋分分是我 出別怎象
如結(jié)向 什一然 有人把雨落
讓是人著你
和你真開的表表
那個人的歌不要最當(dāng)早還穿解
o代一住真把橋開
人受明葉曲煙時
愛解輕同的又人太許
o清清
相槍喚醒言的前里
我想不獨進進關(guān)海提在高子
受留著一失唱以日繁在星些“間
飛的讓那國在落
是些如些名感甘
我在紫眼 落個此此的是無在
這生的流候是遠果
如想想流一年非
你是還
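
The samples in this log are produced by iterated next-token prediction: encode the seed, feed the last SEQ_LENGTH ids through the model, pick a token from the softmax output, append it, repeat. The generation script itself is not included, so the following is only a sketch of that loop; index_to_word is the assumed inverse of word_to_index, and sampling from the distribution rather than taking the argmax is an assumption that would explain why the same seed (喜歡) yields different lyrics on different runs.

import numpy as np
from keras.preprocessing.sequence import pad_sequences

def generate_lyrics(model, seed, word_to_index, index_to_word, length=200):
    # Hypothetical reconstruction of the sampling loop behind these samples.
    tokens = [word_to_index[ch] for ch in seed if ch in word_to_index]
    result = list(seed)
    for _ in range(length):
        x = pad_sequences([tokens], maxlen=SEQ_LENGTH)   # keep the last SEQ_LENGTH ids
        probs = model.predict(x, verbose=0)[0]
        probs = probs / probs.sum()                      # renormalize float32 softmax
        next_id = int(np.random.choice(len(probs), p=probs))  # sample, not argmax
        tokens.append(next_id)
        result.append(index_to_word[next_id])
    return "".join(result)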

2023-04-16 10:10:40.953140: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-16 10:10:40.955064: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
喜歡
她一如為誰似對年雪
唱明成風(fēng)
我深了不走上
你在哪的急
虹過無溫道  如進風(fēng)物一生
想一難等 一個難難好分
我沒是象樂
你在香錯的”我
幻照當(dāng)說了 新亮送
你到曾愿
我還幾笑
心走下義其憶
哪色是人心是事淚已個
你說你想誰我有臉為我
春非你是如今變 為怎拍還你
我過后的很解 我在再 一苦平高里出結(jié)陪我的化角
飄樂就誰  我看說你
紅個下下 我在曾領(lǐng)你你
深留去開話
再南哪氣的時命這多非在剛剛出的晃光
你走人暖終
他裝著在深我
日輕花步們的多愁
以一世慢慢就就多曾曾要要現(xiàn)的樣
看果感地 人我再自結(jié)
心 第你留乘后決視嘆少你著傷嘴
如想有多會那起
我還一回話
 我說會你一點 提下 也什么回過

2023-04-16 10:17:08.619933: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-16 10:17:08.792872: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
喜歡
一一如夢之難意
是愛的夢 聽是是關(guān)非
和你走沒說 我沒人 我得經(jīng)認微過
每正我 愛的一巧 對就我一絕用在出上氣頭
你的神盡像如笑 如經(jīng)是一夜一切
你的白明已喜轉(zhuǎn)如如在了你的白與夢
是個年里我實的碎越頭
在如我們都你的眼a
有不了好 我沒沒象得
看續(xù)里我回走的熱笑
隨一起不面我愛擺
以果回輕 認場可能一b人
所結(jié)古 這人自定
你 愛喜你不了然世風(fēng)
我尋愛歡的世過
公名界事是別邊為飛兒
那場時上的多邊
是湖小歌泣
你若未緣快
靠頭獨譜
漫放起燭
天干萬清色
 11音失 深 無像我著落
我的頭隨
一些大流 天心的眉途
在碎眼 它鋪忽
你尋夢   世日還不買
我想意在把然角邊月的真一角一黃得
飛

Process finished with exit code 0



2023-04-16 10:22:19.223633: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-16 10:22:19.258500: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
喜歡
晨言一城晚片
高怎頭的泛夢
南h修五里不花不
又上踏言烈深燈
如持著三月聲裙
上生靜畢不緊一家住是悠久的泛爍
爍胖太光破  玄叠?花更夕對
有什的紅空氣線
否品見就越里模理
曾心的顆古
盛明 街心道
又聊年執(zhí)你
你海化觀天
站幻算一缺白過的向只
信穿你的如次悄里
最為我了心的雨樣
表立著記明該年
泛影了馬出界記
看有我還說著抱音
你唱在回樓
情啟溫緣 齊具一緣
還天登要心電
熱腦片r舞甲亂
我持量的故影里扣的潔很打起
碑情意 絢月如暗的重形燈雄微
飛書的微孤漫再在緊車的美
你里美方多本余沫
我上白和信經(jīng)窗三想你 看走而潮心深
還有是球塵人給讓有氣更提
像那感或信歡的象道
電~開院橫物
分越三

2023-04-16 10:26:38.504694: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-16 10:26:38.506895: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
喜歡著靜女
她忙o著失到然然眼偷是風(fēng)念里而過起
我們著過樂始耳躲的詩么聽
每笑一喉熱冶湖命的歌
在頭沉淡忽關(guān)動藏
吹要人純滾身冬盛
倒停上帶著都我想任浪心
努實正到看盡流亮
感笑于注了與淚著
盡一底人他法等陰窗
少嘆要一近后后
我這實戴你言經(jīng)有我的雨
 請我還心們的滿單B和那常微常的墻
我領(lǐng)完完著很待字
最州的歌無薯堤了已穿手一照任熱頭的獎
當(dāng)是發(fā)淺來流里于孤旁
你還化頭的是算雨情
我回明繞里們諾么邊
花從好非了 這情來哭和和花壞
拍關(guān)經(jīng)深子你
別是我的飛念
你只我好憶我的期定
愿愛在奏面毯
待偶九種失跑 紙飲陰斜北慣的貴
我愛該人 別為用身
自己還命 你聲臺乏緣宙
功去對陽淡續(xù)門 穿怕時若就

Process finished with exit code 0

 

  • The generated text still contains stray symbols and digits (e.g. "o", "11", "~", quotation marks) that survive in the dataset, so the dataset was reprocessed and the model retrained; a cleaning sketch follows.
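
The cleaning code is not shown; the following is a sketch of the kind of filter the note above implies (the character class is an assumption, not the author's exact rule). After cleaning, the vocabulary shrinks from 2436 to 2359 tokens, which shows up in the Dense layer of the next model summary.

import re

def clean_corpus(text):
    # Keep CJK characters, spaces and line breaks; drop Latin letters,
    # digits and punctuation that leaked into the first model's output.
    return re.sub(r"[^\u4e00-\u9fff\n ]", "", text)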
2023-04-16 12:48:52.373053: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-16 12:48:52.566464: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2023-04-16 12:49:04.313108: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
embedding layer is:: KerasTensor(type_spec=TensorSpec(shape=(None, 10, 800), dtype=tf.float32, name=None), name='embedding/embedding_lookup/Identity_1:0', description="created by layer 'embedding'")
build model.....
Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 x_train (InputLayer)        [(None, 10)]              0         
                                                                 
 embedding (Embedding)       (None, 10, 800)           1887200   
                                                                 
 LSTM_1 (LSTM)               (None, 10, 800)           5123200   
                                                                 
 LSTM_2 (LSTM)               (None, 1600)              15366400  
                                                                 
 Dense_1 (Dense)             (None, 2359)              3776759   
                                                                 
=================================================================
Total params: 26,153,559
Trainable params: 26,153,559
Non-trainable params: 0
_________________________________________________________________
None
F:\anaconda3\lib\site-packages\keras\optimizers\optimizer_v2\adam.py:110: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Train....
WARNING:tensorflow:`period` argument is deprecated. Please use `save_freq` to specify the frequency in number of batches seen.
Epoch 1/50
724/724 [==============================] - ETA: 0s - loss: 6.1890 - accuracy: 0.1090
Epoch 1: val_loss improved from inf to 6.08992, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 553s 746ms/step - loss: 6.1890 - accuracy: 0.1090 - val_loss: 6.0899 - val_accuracy: 0.1099
Epoch 2/50
724/724 [==============================] - ETA: 0s - loss: 5.9348 - accuracy: 0.1207
Epoch 2: val_loss improved from 6.08992 to 5.99014, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 536s 741ms/step - loss: 5.9348 - accuracy: 0.1207 - val_loss: 5.9901 - val_accuracy: 0.1200
Epoch 3/50
724/724 [==============================] - ETA: 0s - loss: 5.7773 - accuracy: 0.1285
Epoch 3: val_loss improved from 5.99014 to 5.89635, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 733s 1s/step - loss: 5.7773 - accuracy: 0.1285 - val_loss: 5.8964 - val_accuracy: 0.1264
Epoch 4/50
724/724 [==============================] - ETA: 0s - loss: 5.5887 - accuracy: 0.1364
Epoch 4: val_loss improved from 5.89635 to 5.77642, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 547s 756ms/step - loss: 5.5887 - accuracy: 0.1364 - val_loss: 5.7764 - val_accuracy: 0.1314
Epoch 5/50
724/724 [==============================] - ETA: 0s - loss: 5.3723 - accuracy: 0.1444
Epoch 5: val_loss improved from 5.77642 to 5.68260, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 543s 750ms/step - loss: 5.3723 - accuracy: 0.1444 - val_loss: 5.6826 - val_accuracy: 0.1362
Epoch 6/50
724/724 [==============================] - ETA: 0s - loss: 5.0972 - accuracy: 0.1560
Epoch 6: val_loss improved from 5.68260 to 5.55087, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 632s 873ms/step - loss: 5.0972 - accuracy: 0.1560 - val_loss: 5.5509 - val_accuracy: 0.1433
Epoch 7/50
724/724 [==============================] - ETA: 0s - loss: 4.7730 - accuracy: 0.1783
Epoch 7: val_loss improved from 5.55087 to 5.38638, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 670s 904ms/step - loss: 4.7730 - accuracy: 0.1783 - val_loss: 5.3864 - val_accuracy: 0.1570
Epoch 8/50
724/724 [==============================] - ETA: 0s - loss: 4.4211 - accuracy: 0.2154
Epoch 8: val_loss improved from 5.38638 to 5.23084, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 716s 990ms/step - loss: 4.4211 - accuracy: 0.2154 - val_loss: 5.2308 - val_accuracy: 0.1764
Epoch 9/50
724/724 [==============================] - ETA: 0s - loss: 4.0669 - accuracy: 0.2610
Epoch 9: val_loss improved from 5.23084 to 5.04806, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 544s 751ms/step - loss: 4.0669 - accuracy: 0.2610 - val_loss: 5.0481 - val_accuracy: 0.2017
Epoch 10/50
724/724 [==============================] - ETA: 0s - loss: 3.7084 - accuracy: 0.3184
Epoch 10: val_loss improved from 5.04806 to 4.87938, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 504s 696ms/step - loss: 3.7084 - accuracy: 0.3184 - val_loss: 4.8794 - val_accuracy: 0.2295
Epoch 11/50
724/724 [==============================] - ETA: 0s - loss: 3.3505 - accuracy: 0.3786
Epoch 11: val_loss improved from 4.87938 to 4.70261, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 501s 692ms/step - loss: 3.3505 - accuracy: 0.3786 - val_loss: 4.7026 - val_accuracy: 0.2582
Epoch 12/50
724/724 [==============================] - ETA: 0s - loss: 3.0024 - accuracy: 0.4459
Epoch 12: val_loss improved from 4.70261 to 4.52160, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 474s 654ms/step - loss: 3.0024 - accuracy: 0.4459 - val_loss: 4.5216 - val_accuracy: 0.2924
Epoch 13/50
724/724 [==============================] - ETA: 0s - loss: 2.6653 - accuracy: 0.5139
Epoch 13: val_loss improved from 4.52160 to 4.35553, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 505s 697ms/step - loss: 2.6653 - accuracy: 0.5139 - val_loss: 4.3555 - val_accuracy: 0.3275
Epoch 14/50
724/724 [==============================] - ETA: 0s - loss: 2.3434 - accuracy: 0.5783
Epoch 14: val_loss improved from 4.35553 to 4.18917, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 472s 652ms/step - loss: 2.3434 - accuracy: 0.5783 - val_loss: 4.1892 - val_accuracy: 0.3682
Epoch 15/50
724/724 [==============================] - ETA: 0s - loss: 2.0391 - accuracy: 0.6353
Epoch 15: val_loss improved from 4.18917 to 4.04068, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 510s 704ms/step - loss: 2.0391 - accuracy: 0.6353 - val_loss: 4.0407 - val_accuracy: 0.4064
Epoch 16/50
724/724 [==============================] - ETA: 0s - loss: 1.7560 - accuracy: 0.6924
Epoch 16: val_loss improved from 4.04068 to 3.87016, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 480s 664ms/step - loss: 1.7560 - accuracy: 0.6924 - val_loss: 3.8702 - val_accuracy: 0.4477
Epoch 17/50
724/724 [==============================] - ETA: 0s - loss: 1.4943 - accuracy: 0.7416
Epoch 17: val_loss improved from 3.87016 to 3.74787, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 513s 709ms/step - loss: 1.4943 - accuracy: 0.7416 - val_loss: 3.7479 - val_accuracy: 0.4739
Epoch 18/50
724/724 [==============================] - ETA: 0s - loss: 1.2571 - accuracy: 0.7888
Epoch 18: val_loss improved from 3.74787 to 3.61463, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 492s 679ms/step - loss: 1.2571 - accuracy: 0.7888 - val_loss: 3.6146 - val_accuracy: 0.5113
Epoch 19/50
724/724 [==============================] - ETA: 0s - loss: 1.0431 - accuracy: 0.8310
Epoch 19: val_loss improved from 3.61463 to 3.48974, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 497s 687ms/step - loss: 1.0431 - accuracy: 0.8310 - val_loss: 3.4897 - val_accuracy: 0.5410
Epoch 20/50
724/724 [==============================] - ETA: 0s - loss: 0.8561 - accuracy: 0.8702
Epoch 20: val_loss improved from 3.48974 to 3.39216, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 492s 679ms/step - loss: 0.8561 - accuracy: 0.8702 - val_loss: 3.3922 - val_accuracy: 0.5707
Epoch 21/50
724/724 [==============================] - ETA: 0s - loss: 0.6875 - accuracy: 0.9031
Epoch 21: val_loss improved from 3.39216 to 3.32759, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 488s 673ms/step - loss: 0.6875 - accuracy: 0.9031 - val_loss: 3.3276 - val_accuracy: 0.5906
Epoch 22/50
724/724 [==============================] - ETA: 0s - loss: 0.5420 - accuracy: 0.9328
Epoch 22: val_loss improved from 3.32759 to 3.27522, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 486s 672ms/step - loss: 0.5420 - accuracy: 0.9328 - val_loss: 3.2752 - val_accuracy: 0.6037
Epoch 23/50
724/724 [==============================] - ETA: 0s - loss: 0.4179 - accuracy: 0.9533
Epoch 23: val_loss improved from 3.27522 to 3.21136, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 505s 698ms/step - loss: 0.4179 - accuracy: 0.9533 - val_loss: 3.2114 - val_accuracy: 0.6143
Epoch 24/50
724/724 [==============================] - ETA: 0s - loss: 0.3216 - accuracy: 0.9670
Epoch 24: val_loss improved from 3.21136 to 3.19814, saving model to .\model2_epoch50_2lstm_1dense_seq50_phrase_based_best.h5
724/724 [==============================] - 495s 684ms/step - loss: 0.3216 - accuracy: 0.9670 - val_loss: 3.1981 - val_accuracy: 0.6167
Epoch 25/50
724/724 [==============================] - ETA: 0s - loss: 0.2464 - accuracy: 0.9747
Epoch 25: val_loss did not improve from 3.19814
724/724 [==============================] - 490s 676ms/step - loss: 0.2464 - accuracy: 0.9747 - val_loss: 3.2126 - val_accuracy: 0.6171
Epoch 26/50
724/724 [==============================] - ETA: 0s - loss: 0.1926 - accuracy: 0.9781
Epoch 26: val_loss did not improve from 3.19814
724/724 [==============================] - 489s 676ms/step - loss: 0.1926 - accuracy: 0.9781 - val_loss: 3.2302 - val_accuracy: 0.6161
Epoch 27/50
724/724 [==============================] - ETA: 0s - loss: 0.1584 - accuracy: 0.9790
Epoch 27: val_loss did not improve from 3.19814
724/724 [==============================] - 481s 664ms/step - loss: 0.1584 - accuracy: 0.9790 - val_loss: 3.2447 - val_accuracy: 0.6183
Epoch 28/50
724/724 [==============================] - ETA: 0s - loss: 0.1363 - accuracy: 0.9794
Epoch 28: val_loss did not improve from 3.19814
724/724 [==============================] - 491s 678ms/step - loss: 0.1363 - accuracy: 0.9794 - val_loss: 3.2659 - val_accuracy: 0.6182
Epoch 29/50
724/724 [==============================] - ETA: 0s - loss: 0.1170 - accuracy: 0.9807
Epoch 29: val_loss did not improve from 3.19814
724/724 [==============================] - 479s 662ms/step - loss: 0.1170 - accuracy: 0.9807 - val_loss: 3.2872 - val_accuracy: 0.6178
Epoch 30/50
724/724 [==============================] - ETA: 0s - loss: 0.1066 - accuracy: 0.9807
Epoch 30: val_loss did not improve from 3.19814
724/724 [==============================] - 489s 676ms/step - loss: 0.1066 - accuracy: 0.9807 - val_loss: 3.3080 - val_accuracy: 0.6219
Epoch 31/50
724/724 [==============================] - ETA: 0s - loss: 0.0951 - accuracy: 0.9817
Epoch 31: val_loss did not improve from 3.19814
724/724 [==============================] - 486s 672ms/step - loss: 0.0951 - accuracy: 0.9817 - val_loss: 3.3242 - val_accuracy: 0.6189
Epoch 32/50
724/724 [==============================] - ETA: 0s - loss: 0.0885 - accuracy: 0.9817
Epoch 32: val_loss did not improve from 3.19814
724/724 [==============================] - 497s 687ms/step - loss: 0.0885 - accuracy: 0.9817 - val_loss: 3.3471 - val_accuracy: 0.6206
Epoch 33/50
724/724 [==============================] - ETA: 0s - loss: 0.0846 - accuracy: 0.9820
Epoch 33: val_loss did not improve from 3.19814
724/724 [==============================] - 491s 678ms/step - loss: 0.0846 - accuracy: 0.9820 - val_loss: 3.3512 - val_accuracy: 0.6194
Epoch 34/50
724/724 [==============================] - ETA: 0s - loss: 0.0782 - accuracy: 0.9827
Epoch 34: val_loss did not improve from 3.19814
724/724 [==============================] - 494s 683ms/step - loss: 0.0782 - accuracy: 0.9827 - val_loss: 3.4060 - val_accuracy: 0.6209
Epoch 35/50
724/724 [==============================] - ETA: 0s - loss: 0.0730 - accuracy: 0.9838
Epoch 35: val_loss did not improve from 3.19814
724/724 [==============================] - 479s 662ms/step - loss: 0.0730 - accuracy: 0.9838 - val_loss: 3.3856 - val_accuracy: 0.6207
Epoch 36/50
724/724 [==============================] - ETA: 0s - loss: 0.0698 - accuracy: 0.9838
Epoch 36: val_loss did not improve from 3.19814
724/724 [==============================] - 491s 679ms/step - loss: 0.0698 - accuracy: 0.9838 - val_loss: 3.4165 - val_accuracy: 0.6238
Epoch 37/50
724/724 [==============================] - ETA: 0s - loss: 0.0682 - accuracy: 0.9838
Epoch 37: val_loss did not improve from 3.19814
724/724 [==============================] - 491s 678ms/step - loss: 0.0682 - accuracy: 0.9838 - val_loss: 3.4286 - val_accuracy: 0.6227
Epoch 38/50
724/724 [==============================] - ETA: 0s - loss: 0.0627 - accuracy: 0.9848
Epoch 38: val_loss did not improve from 3.19814
724/724 [==============================] - 489s 675ms/step - loss: 0.0627 - accuracy: 0.9848 - val_loss: 3.4581 - val_accuracy: 0.6219
Epoch 39/50
724/724 [==============================] - ETA: 0s - loss: 0.0613 - accuracy: 0.9847
Epoch 39: val_loss did not improve from 3.19814
724/724 [==============================] - 491s 678ms/step - loss: 0.0613 - accuracy: 0.9847 - val_loss: 3.4872 - val_accuracy: 0.6225
Epoch 40/50
724/724 [==============================] - ETA: 0s - loss: 0.0581 - accuracy: 0.9857
Epoch 40: val_loss did not improve from 3.19814
724/724 [==============================] - 498s 688ms/step - loss: 0.0581 - accuracy: 0.9857 - val_loss: 3.4986 - val_accuracy: 0.6199
Epoch 41/50
724/724 [==============================] - ETA: 0s - loss: 0.0574 - accuracy: 0.9854
Epoch 41: val_loss did not improve from 3.19814
724/724 [==============================] - 476s 658ms/step - loss: 0.0574 - accuracy: 0.9854 - val_loss: 3.4774 - val_accuracy: 0.6212
Epoch 42/50
724/724 [==============================] - ETA: 0s - loss: 0.0561 - accuracy: 0.9851
Epoch 42: val_loss did not improve from 3.19814
724/724 [==============================] - 479s 661ms/step - loss: 0.0561 - accuracy: 0.9851 - val_loss: 3.5298 - val_accuracy: 0.6215
Epoch 43/50
724/724 [==============================] - ETA: 0s - loss: 0.0521 - accuracy: 0.9864
Epoch 43: val_loss did not improve from 3.19814
724/724 [==============================] - 490s 677ms/step - loss: 0.0521 - accuracy: 0.9864 - val_loss: 3.5399 - val_accuracy: 0.6225
Epoch 44/50
724/724 [==============================] - ETA: 0s - loss: 0.0538 - accuracy: 0.9855
Epoch 44: val_loss did not improve from 3.19814
724/724 [==============================] - 479s 661ms/step - loss: 0.0538 - accuracy: 0.9855 - val_loss: 3.5258 - val_accuracy: 0.6219
Epoch 45/50
724/724 [==============================] - ETA: 0s - loss: 0.0511 - accuracy: 0.9859
Epoch 45: val_loss did not improve from 3.19814
724/724 [==============================] - 490s 677ms/step - loss: 0.0511 - accuracy: 0.9859 - val_loss: 3.5453 - val_accuracy: 0.6241
Epoch 46/50
724/724 [==============================] - ETA: 0s - loss: 0.0498 - accuracy: 0.9862
Epoch 46: val_loss did not improve from 3.19814
724/724 [==============================] - 479s 662ms/step - loss: 0.0498 - accuracy: 0.9862 - val_loss: 3.5661 - val_accuracy: 0.6234
Epoch 47/50
724/724 [==============================] - ETA: 0s - loss: 0.0489 - accuracy: 0.9866
Epoch 47: val_loss did not improve from 3.19814
724/724 [==============================] - 507s 700ms/step - loss: 0.0489 - accuracy: 0.9866 - val_loss: 3.5884 - val_accuracy: 0.6232
Epoch 48/50
724/724 [==============================] - ETA: 0s - loss: 0.0477 - accuracy: 0.9864
Epoch 48: val_loss did not improve from 3.19814
724/724 [==============================] - 508s 702ms/step - loss: 0.0477 - accuracy: 0.9864 - val_loss: 3.5940 - val_accuracy: 0.6211
Epoch 49/50
724/724 [==============================] - ETA: 0s - loss: 0.0452 - accuracy: 0.9870
Epoch 49: val_loss did not improve from 3.19814
724/724 [==============================] - 514s 710ms/step - loss: 0.0452 - accuracy: 0.9870 - val_loss: 3.6373 - val_accuracy: 0.6241
Epoch 50/50
724/724 [==============================] - ETA: 0s - loss: 0.0458 - accuracy: 0.9870
Epoch 50: val_loss did not improve from 3.19814
724/724 [==============================] - 511s 706ms/step - loss: 0.0458 - accuracy: 0.9870 - val_loss: 3.6021 - val_accuracy: 0.6230
2023-04-16 20:17:42.808287: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-16 20:17:42.810103: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
喜歡離馬的孤
她我 機唱期對沒候
我情獨地
只懷會愛 你還笑話
我最萬問它命水大后
當(dāng)光天你我前
天用我化化
這笑轉(zhuǎn)最
你是帶么愛的心成
我們的地傳卻生生的乎
當(dāng)奈知男何訴會希如若亂下的錯
他就以眼深作 生俗堅了
都是世淚著你的時座
好流談 獨始抓了擁海
遙絕江書白不海小夜
老寫墻的不好力念
當(dāng)遠在天遠
你的心肆
蝴追心著領(lǐng)流
和笑腳脫了鮮去
你鶴戒變到流了見誰
擺她的定面 機散無以波聽
每些人的吃世酒 今們沒人心狂
等顛成人
外就動當(dāng) 已袋寂漪在心
堆古迷最散都看會灑必下
我打想陪我們在想行

2023-04-16 20:20:01.516025: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-16 20:20:01.517944: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
喜歡著你的騰影
靈風(fēng)著這窗的恤號
我輕時的深涌
假說 人 此如隱住當(dāng)樣
你的時就雨來年 數(shù)盡第絕散場手 水倚宣了然沙夢
有雖天塵完曳 迷解你
這一夢燈歌的夏座流早飛勺
最完了有經(jīng)行 出若每告來
神關(guān)氣中白又天色
心星如如經(jīng)你走素
沒為命頭有有個經(jīng)有有一常生有有感的大花
我的人翼你不想全天
一色去風(fēng)風(fēng)風(fēng)
三開角天墜時一天
生指已定氣
當(dāng)是和你干而的純難
 離離方和 深不愛
雨門
可飛終嗎
 記所以就不折變
讓時我 只想干花約發(fā)
我想你還很己解
等情唱一個歌隔傷星美
江個紅影雨 夢塵你斜晨在轉(zhuǎn)了流手

Process finished with exit code 0
2023-04-16 21:27:07.969038: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-16 21:27:07.990591: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
輸入任意作為生成歌詞的起始:喜歡
喜歡出眼的孤晚
你星合帶我我的一到
隨當(dāng)我的笑
紅開月片帶
生十放放雨線 揮了了得干
這將附拿為你年
迷得的行的初惑
當(dāng)不紫的人 如人無遺情
冷繁我的一大喜畔 認雪里你著我
深轉(zhuǎn)的夢
我眼可在愛下
你們最去里耳深新的南遠
暖有人回很困味
溫開搖載我的冷淚
他在不筋慢上 埋身入到那疼么全
你的多失是多對一多的火難
相必就想生只都完不
看能說也太容中
是有一人 許么眼始才活
是要人失初維玫就就今在要就全音
寫不可回在不頭最能頭可走的失里
我是聽的夢的喜單來都有眼鎖了她
擺脫衙了夜窗都了孤心
仿她天能

F:\anaconda3\python.exe "E:/PyCharm Community Edition 2021.2.3/train_lstm/train_lstm_word_based.py"
2023-04-16 23:00:00.611644: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-16 23:00:00.773888: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
2023-04-16 23:00:02.191776: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
embedding layer is:: KerasTensor(type_spec=TensorSpec(shape=(None, 6, 800), dtype=tf.float32, name=None), name='embedding/embedding_lookup/Identity_1:0', description="created by layer 'embedding'")
build model.....
Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 x_train (InputLayer)        [(None, 6)]               0         
                                                                 
 embedding (Embedding)       (None, 6, 800)            1865600   
                                                                 
 LSTM_1 (LSTM)               (None, 6, 800)            5123200   
                                                                 
 LSTM_2 (LSTM)               (None, 1600)              15366400  
                                                                 
 Dense_1 (Dense)             (None, 2332)              3733532   
                                                                 
=================================================================
Total params: 26,088,732
Trainable params: 26,088,732
Non-trainable params: 0
_________________________________________________________________
None
F:\anaconda3\lib\site-packages\keras\optimizers\optimizer_v2\adam.py:110: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Train....
WARNING:tensorflow:`period` argument is deprecated. Please use `save_freq` to specify the frequency in number of batches seen.
Epoch 1/30
691/691 [==============================] - ETA: 0s - loss: 6.2159 - accuracy: 0.1085
Epoch 1: val_loss improved from inf to 6.04871, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 329s 462ms/step - loss: 6.2159 - accuracy: 0.1085 - val_loss: 6.0487 - val_accuracy: 0.1160
Epoch 2/30
691/691 [==============================] - ETA: 0s - loss: 5.9450 - accuracy: 0.1200
Epoch 2: val_loss improved from 6.04871 to 5.99066, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 329s 476ms/step - loss: 5.9450 - accuracy: 0.1200 - val_loss: 5.9907 - val_accuracy: 0.1220
Epoch 3/30
691/691 [==============================] - ETA: 0s - loss: 5.8189 - accuracy: 0.1268
Epoch 3: val_loss improved from 5.99066 to 5.89026, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 306s 443ms/step - loss: 5.8189 - accuracy: 0.1268 - val_loss: 5.8903 - val_accuracy: 0.1257
Epoch 4/30
691/691 [==============================] - ETA: 0s - loss: 5.6723 - accuracy: 0.1319
Epoch 4: val_loss improved from 5.89026 to 5.79656, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 316s 457ms/step - loss: 5.6723 - accuracy: 0.1319 - val_loss: 5.7966 - val_accuracy: 0.1293
Epoch 5/30
691/691 [==============================] - ETA: 0s - loss: 5.4981 - accuracy: 0.1396
Epoch 5: val_loss improved from 5.79656 to 5.67159, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 327s 473ms/step - loss: 5.4981 - accuracy: 0.1396 - val_loss: 5.6716 - val_accuracy: 0.1346
Epoch 6/30
691/691 [==============================] - ETA: 0s - loss: 5.2895 - accuracy: 0.1474
Epoch 6: val_loss improved from 5.67159 to 5.55534, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 315s 456ms/step - loss: 5.2895 - accuracy: 0.1474 - val_loss: 5.5553 - val_accuracy: 0.1371
Epoch 7/30
691/691 [==============================] - ETA: 0s - loss: 5.0373 - accuracy: 0.1606
Epoch 7: val_loss improved from 5.55534 to 5.43768, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 323s 468ms/step - loss: 5.0373 - accuracy: 0.1606 - val_loss: 5.4377 - val_accuracy: 0.1431
Epoch 8/30
691/691 [==============================] - ETA: 0s - loss: 4.7511 - accuracy: 0.1786
Epoch 8: val_loss improved from 5.43768 to 5.28905, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 312s 452ms/step - loss: 4.7511 - accuracy: 0.1786 - val_loss: 5.2891 - val_accuracy: 0.1590
Epoch 9/30
691/691 [==============================] - ETA: 0s - loss: 4.4522 - accuracy: 0.2075
Epoch 9: val_loss improved from 5.28905 to 5.12466, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 325s 470ms/step - loss: 4.4522 - accuracy: 0.2075 - val_loss: 5.1247 - val_accuracy: 0.1775
Epoch 10/30
691/691 [==============================] - ETA: 0s - loss: 4.1476 - accuracy: 0.2482
Epoch 10: val_loss improved from 5.12466 to 5.00481, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 310s 449ms/step - loss: 4.1476 - accuracy: 0.2482 - val_loss: 5.0048 - val_accuracy: 0.1955
Epoch 11/30
691/691 [==============================] - ETA: 0s - loss: 3.8423 - accuracy: 0.2941
Epoch 11: val_loss improved from 5.00481 to 4.81261, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 304s 439ms/step - loss: 3.8423 - accuracy: 0.2941 - val_loss: 4.8126 - val_accuracy: 0.2254
Epoch 12/30
691/691 [==============================] - ETA: 0s - loss: 3.5432 - accuracy: 0.3444
Epoch 12: val_loss improved from 4.81261 to 4.67752, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 318s 460ms/step - loss: 3.5432 - accuracy: 0.3444 - val_loss: 4.6775 - val_accuracy: 0.2536
Epoch 13/30
691/691 [==============================] - ETA: 0s - loss: 3.2465 - accuracy: 0.3979
Epoch 13: val_loss improved from 4.67752 to 4.53187, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 320s 463ms/step - loss: 3.2465 - accuracy: 0.3979 - val_loss: 4.5319 - val_accuracy: 0.2777
Epoch 14/30
691/691 [==============================] - ETA: 0s - loss: 2.9621 - accuracy: 0.4513
Epoch 14: val_loss improved from 4.53187 to 4.38059, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 305s 441ms/step - loss: 2.9621 - accuracy: 0.4513 - val_loss: 4.3806 - val_accuracy: 0.3122
Epoch 15/30
691/691 [==============================] - ETA: 0s - loss: 2.6872 - accuracy: 0.5041
Epoch 15: val_loss improved from 4.38059 to 4.21560, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 315s 456ms/step - loss: 2.6872 - accuracy: 0.5041 - val_loss: 4.2156 - val_accuracy: 0.3410
Epoch 16/30
691/691 [==============================] - ETA: 0s - loss: 2.4248 - accuracy: 0.5576
Epoch 16: val_loss improved from 4.21560 to 4.08963, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 303s 438ms/step - loss: 2.4248 - accuracy: 0.5576 - val_loss: 4.0896 - val_accuracy: 0.3705
Epoch 17/30
691/691 [==============================] - ETA: 0s - loss: 2.1754 - accuracy: 0.6073
Epoch 17: val_loss improved from 4.08963 to 3.93891, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 303s 439ms/step - loss: 2.1754 - accuracy: 0.6073 - val_loss: 3.9389 - val_accuracy: 0.4014
Epoch 18/30
691/691 [==============================] - ETA: 0s - loss: 1.9425 - accuracy: 0.6544 
Epoch 18: val_loss improved from 3.93891 to 3.82455, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 30438s 44s/step - loss: 1.9425 - accuracy: 0.6544 - val_loss: 3.8245 - val_accuracy: 0.4293
Epoch 19/30
691/691 [==============================] - ETA: 0s - loss: 1.7218 - accuracy: 0.6966
Epoch 19: val_loss improved from 3.82455 to 3.68174, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 340s 492ms/step - loss: 1.7218 - accuracy: 0.6966 - val_loss: 3.6817 - val_accuracy: 0.4658
Epoch 20/30
691/691 [==============================] - ETA: 0s - loss: 1.5157 - accuracy: 0.7379
Epoch 20: val_loss improved from 3.68174 to 3.56363, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 322s 466ms/step - loss: 1.5157 - accuracy: 0.7379 - val_loss: 3.5636 - val_accuracy: 0.4883
Epoch 21/30
691/691 [==============================] - ETA: 0s - loss: 1.3256 - accuracy: 0.7750
Epoch 21: val_loss improved from 3.56363 to 3.46436, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 337s 487ms/step - loss: 1.3256 - accuracy: 0.7750 - val_loss: 3.4644 - val_accuracy: 0.5176
Epoch 22/30
691/691 [==============================] - ETA: 0s - loss: 1.1541 - accuracy: 0.8065
Epoch 22: val_loss improved from 3.46436 to 3.36404, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 1441s 2s/step - loss: 1.1541 - accuracy: 0.8065 - val_loss: 3.3640 - val_accuracy: 0.5449
Epoch 23/30
691/691 [==============================] - ETA: 0s - loss: 0.9930 - accuracy: 0.8374
Epoch 23: val_loss improved from 3.36404 to 3.26704, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 317s 459ms/step - loss: 0.9930 - accuracy: 0.8374 - val_loss: 3.2670 - val_accuracy: 0.5724
Epoch 24/30
691/691 [==============================] - ETA: 0s - loss: 0.8485 - accuracy: 0.8685
Epoch 24: val_loss improved from 3.26704 to 3.17621, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 332s 480ms/step - loss: 0.8485 - accuracy: 0.8685 - val_loss: 3.1762 - val_accuracy: 0.5924
Epoch 25/30
691/691 [==============================] - ETA: 0s - loss: 0.7176 - accuracy: 0.8927
Epoch 25: val_loss improved from 3.17621 to 3.11386, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 540s 782ms/step - loss: 0.7176 - accuracy: 0.8927 - val_loss: 3.1139 - val_accuracy: 0.6126
Epoch 26/30
691/691 [==============================] - ETA: 0s - loss: 0.6013 - accuracy: 0.9182
Epoch 26: val_loss improved from 3.11386 to 3.04579, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 341s 493ms/step - loss: 0.6013 - accuracy: 0.9182 - val_loss: 3.0458 - val_accuracy: 0.6273
Epoch 27/30
691/691 [==============================] - ETA: 0s - loss: 0.4965 - accuracy: 0.9367
Epoch 27: val_loss improved from 3.04579 to 2.98705, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 323s 467ms/step - loss: 0.4965 - accuracy: 0.9367 - val_loss: 2.9871 - val_accuracy: 0.6416
Epoch 28/30
691/691 [==============================] - ETA: 0s - loss: 0.4084 - accuracy: 0.9527
Epoch 28: val_loss improved from 2.98705 to 2.96121, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 332s 480ms/step - loss: 0.4084 - accuracy: 0.9527 - val_loss: 2.9612 - val_accuracy: 0.6478
Epoch 29/30
691/691 [==============================] - ETA: 0s - loss: 0.3306 - accuracy: 0.9636 
Epoch 29: val_loss improved from 2.96121 to 2.95598, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 8987s 13s/step - loss: 0.3306 - accuracy: 0.9636 - val_loss: 2.9560 - val_accuracy: 0.6529
Epoch 30/30
691/691 [==============================] - ETA: 0s - loss: 0.2683 - accuracy: 0.9710
Epoch 30: val_loss improved from 2.95598 to 2.93468, saving model to .\model_epoch30_2lstm_1dense_seq6_phrase_based_best.h5
691/691 [==============================] - 327s 473ms/step - loss: 0.2683 - accuracy: 0.9710 - val_loss: 2.9347 - val_accuracy: 0.6548

Process finished with exit code 0


2023-04-17 12:57:24.666280: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-17 12:57:24.668160: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
輸入任意作為生成歌詞的起始:喜歡
喜歡回念的回
可怕萬留殘雨
緬縛要點年
我情始出許聽錯對那滿松兒
腦流補的食
科雨手
外拍了去浮手
你說輕了了了什
你當(dāng)不落有內(nèi)
所用會在秒發(fā)上長地影
三個人字氣去
錯領(lǐng)少還多不以 你常息臉
如今這在你的霜 瞄怎楚
你的不味懸的半雨
表旋無才全少 歸頭步
那又你這夠燃
老意界
也知情生去沒然銀逃容
讓關(guān)林去十始傘的腳指
書情這變釋 擺誰得黃 了些美
有零場里紛約沉默
消真已年到?jīng)]幻意
追書你著能不痛走
完聲也電間
又南間酒夜  那愛我的保念
點課張的歌途
只你的喜和每十夠都文心


2023-04-17 18:16:37.570332: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-17 18:16:37.614413: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
烏鴉停在海岸荒露里節(jié)寒山
對把演張風(fēng)短
終不不道沿途風(fēng)復(fù)們
我們這經(jīng)許此旋遲睡
孤愛的臨落
月風(fēng)造冷 局愛的信情
算夢成是口 難刻容過放座
愛搖 漸起們年來春雪 幸片的了十得讓還是不么
家國困上悲化
通上為你間沒有是夠聽
我抱相首殘喚我搖道的揪
可懷搞自音各在在陪你的唱白
又來的風(fēng)想
視時上了活在還情心的
那準城向的衙品馳
我著你的風(fēng)景
點喝眉指染敵提深學(xué)歲快非著副脆弱
分真春滿江音
沒別么何好
你又絕孩開
我輕聽盡首也起 就來像我的備
和妹月到環(huán)店
窗色了風(fēng)中
掀角一樣都會束會
番恍太路情自難

