(8) Sequence to Sequence — 1

There are already plenty of tutorials on this topic online, so this series pairs the theory with code; for the complete code please see my GitHub. Everything is based on Python 3.6 and TensorFlow 1.4. The data are synthetic: the task is simply to feed the encoder an arbitrary sequence and have the decoder output that same sequence, which keeps things simple and serves as a basic implementation of the idea.
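The helpers module imported below is not listed in this post (it lives in the GitHub repo). Based on how it is called later, a minimal sketch of the two functions it has to provide might look like the following; the names and signatures are inferred from the usage, not copied from the repo.

import random
import numpy as np

PAD = 0  # padding id, matching the PAD constant used below

def random_sequences(length_from, length_to, vocab_lower, vocab_upper, batch_size):
    # Endless generator: every next() call yields one minibatch of random integer sequences
    # with lengths in [length_from, length_to] and values in [vocab_lower, vocab_upper).
    while True:
        yield [
            np.random.randint(vocab_lower, vocab_upper,
                              size=random.randint(length_from, length_to)).tolist()
            for _ in range(batch_size)
        ]

def batch(inputs, max_sequence_length=None):
    # Pad the sequences with PAD up to the longest one and return a time-major
    # array of shape [max_time, batch_size] plus the list of true lengths.
    sequence_lengths = [len(seq) for seq in inputs]
    if max_sequence_length is None:
        max_sequence_length = max(sequence_lengths)
    inputs_batch_major = np.zeros([len(inputs), max_sequence_length], dtype=np.int32)  # PAD == 0
    for i, seq in enumerate(inputs):
        inputs_batch_major[i, :len(seq)] = seq
    return inputs_batch_major.swapaxes(0, 1), sequence_lengths

Because batch returns a time-major array, the feed dictionary defined later transposes it back with .T to match the batch-major placeholders.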

A Seq2seq implementation based on TensorFlow 1.4

import helpers
import tensorflow as tf

from tensorflow.contrib import seq2seq,rnn

tf.__version__
'1.4.0'
tf.reset_default_graph()
sess = tf.InteractiveSession()
PAD = 0
EOS = 1


vocab_size = 10
input_embedding_size = 20
encoder_hidden_units = 25

decoder_hidden_units = encoder_hidden_units

import helpers as data_helpers
batch_size = 10

# A generator that yields one minibatch of random sequences at a time

batches = data_helpers.random_sequences(length_from=3, length_to=8,
                                   vocab_lower=2, vocab_upper=10,
                                   batch_size=batch_size)

print('Generated %d sequences of varying length (min 3, max 8); the first ten are:' % batch_size)
for seq in next(batches)[:min(batch_size, 10)]:
    print(seq)
Generated 10 sequences of varying length (min 3, max 8); the first ten are:
[9, 4, 4, 6]
[4, 3, 3, 2]
[5, 7, 4, 4]
[5, 6, 6, 4, 6, 7, 3]
[6, 7, 5, 2, 8, 6, 8]
[5, 6, 9, 4, 6, 9, 6, 9]
[3, 5, 2, 2, 9]
[5, 6, 5, 8, 9, 8]
[6, 8, 2, 4, 3]
[9, 6, 8, 3, 5, 2]

1. Implementing a seq2seq model with the seq2seq library

tf.reset_default_graph()
sess = tf.InteractiveSession()
mode = tf.contrib.learn.ModeKeys.TRAIN

1. Placeholders for the data in the computation graph

with tf.name_scope('minibatch'):
    encoder_inputs = tf.placeholder(tf.int32, [None, None], name='encoder_inputs')
    
    encoder_inputs_length = tf.placeholder(tf.int32, [None], name='encoder_inputs_length')
    
    decoder_targets = tf.placeholder(tf.int32, [None, None], name='decoder_targets')
    
    decoder_inputs = tf.placeholder(shape=(None, None),dtype=tf.int32,name='decoder_inputs')
    
    # decoder_inputs_length is the same as decoder_targets_length
    decoder_inputs_length = tf.placeholder(shape=(None,),
                                            dtype=tf.int32,
                                            name='decoder_inputs_length')

2. Define the LSTM cell (a single-layer LSTM is used here)

def _create_rnn_cell():
    def single_rnn_cell(encoder_hidden_units):
        # Create a single cell. Note that each cell must be built inside a function like
        # single_rnn_cell; reusing one cell object directly in the MultiRNNCell list
        # makes the final model fail.
        single_cell = rnn.LSTMCell(encoder_hidden_units)
        # Add dropout
        single_cell = rnn.DropoutWrapper(single_cell, output_keep_prob=0.5)
        return single_cell
    # Every element of the list is a fresh call to single_rnn_cell, e.g. for multiple layers:
    # cell = rnn.MultiRNNCell([single_rnn_cell() for _ in range(self.num_layers)])
    cell = rnn.MultiRNNCell([single_rnn_cell(encoder_hidden_units) for _ in range(1)])
    return cell

dynamic_rnn requires decoder_input to be provided

(figure: seq2seq2014.png)
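Concretely, training uses teacher forcing: the decoder input is the target sequence shifted right by one position and prefixed with EOS, while the target itself ends with EOS (this is exactly what next_feed builds further down). A tiny made-up example, not taken from the post's output:

EOS = 1  # as defined above

sequence       = [5, 6, 7]
decoder_input  = [EOS] + sequence   # [1, 5, 6, 7] -- fed to the decoder step by step
decoder_target = sequence + [EOS]   # [5, 6, 7, 1] -- what the logits are scored against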

3. Define the encoder

with tf.variable_scope('encoder'):
    # Create the LSTM cell
    encoder_cell = _create_rnn_cell()
    # 構(gòu)建embedding矩陣,encoder和decoder公用該詞向量矩陣
    embedding = tf.get_variable('embedding', [vocab_size,input_embedding_size])
    encoder_inputs_embedded = tf.nn.embedding_lookup(embedding,encoder_inputs)
    # Encode the inputs into hidden vectors with dynamic_rnn.
    # encoder_outputs is used for attention, shape batch_size * encoder_inputs_length * rnn_size
    # encoder_state is used to initialize the decoder, shape batch_size * rnn_size
    encoder_outputs, encoder_state = tf.nn.dynamic_rnn(encoder_cell, encoder_inputs_embedded,
                                                       sequence_length=encoder_inputs_length,
                                                       dtype=tf.float32)
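Because the cell is a MultiRNNCell of LSTMCells, encoder_state is a tuple holding one LSTMStateTuple(c, h) per layer (a single layer of size 25 here). The decoder below can take it directly as its initial state, since the decoder cell is built by the same _create_rnn_cell function and therefore has the same structure. An optional sanity check, not part of the original notebook:

# Optional: inspect the structure of the final encoder state.
print(encoder_state)
# Expect something like: (LSTMStateTuple(c=<tf.Tensor ... shape=(?, 25) ...>, h=<tf.Tensor ... shape=(?, 25) ...>),)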

4. Define the decoder (without attention for now)

with tf.variable_scope('decoder'):
    decoder_cell = _create_rnn_cell()
    
    # Initialize the decoder state with the encoder's final state
    decoder_initial_state = encoder_state
    
    # Define the output projection layer (hidden state -> vocab logits)
    output_layer = tf.layers.Dense(vocab_size,kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
    
    decoder_inputs_embedded = tf.nn.embedding_lookup(embedding, decoder_inputs)
    
    # 訓(xùn)練階段,使用TrainingHelper+BasicDecoder的組合姑荷,這一般是固定的盒延,當(dāng)然也可以自己定義Helper類,實現(xiàn)自己的功能
    training_helper = seq2seq.TrainingHelper(inputs=decoder_inputs_embedded,
                                                        sequence_length=decoder_inputs_length,
                                                        time_major=False, name='training_helper')
    training_decoder = seq2seq.BasicDecoder(cell=decoder_cell, helper=training_helper,
                                                       initial_state=decoder_initial_state,
                                                       output_layer=output_layer)
    
    # 調(diào)用dynamic_decode進行解碼鼠冕,decoder_outputs是一個namedtuple添寺,里面包含兩項(rnn_outputs, sample_id)
    # rnn_output: [batch_size, decoder_targets_length, vocab_size],保存decode每個時刻每個單詞的概率懈费,可以用來計算loss
    # sample_id: [batch_size], tf.int32计露,保存最終的編碼結(jié)果≡饕遥可以表示最后的答案
    max_target_sequence_length = tf.reduce_max(decoder_inputs_length, name='max_target_len')
    decoder_outputs, _, _ = seq2seq.dynamic_decode(decoder=training_decoder,
                                                          impute_finished=True,
                                                          maximum_iterations=max_target_sequence_length)
    
    # Make a tensor identical to decoder_outputs.rnn_output and call it decoder_logits_train
    decoder_logits_train = tf.identity(decoder_outputs.rnn_output)
    sample_id = decoder_outputs.sample_id
    #decoder_predict_train = tf.argmax(decoder_logits_train, axis=-1,name='decoder_pred_train')
    #decoder_predict_decode = tf.expand_dims(decoder_outputs.sample_id, -1)
    
    # Take the maximum of the target sequence lengths and build a mask of that length.
    # A small sequence_mask example shows what it does:
    #  tf.sequence_mask([1, 3, 2], 5)
    #  [[True, False, False, False, False],
    #   [True, True, True, False, False],
    #   [True, True, False, False, False]]
    # max_target_sequence_length was already computed above for dynamic_decode, so it is reused here.
    mask = tf.sequence_mask(decoder_inputs_length, max_target_sequence_length, dtype=tf.float32, name='masks')
    print('\t%s' % repr(decoder_logits_train))
    print('\t%s' % repr(decoder_targets))
    print('\t%s' % repr(sample_id))
    loss = seq2seq.sequence_loss(logits=decoder_logits_train, targets=decoder_targets, weights=mask)

<tf.Tensor 'decoder/Identity:0' shape=(?, ?, 10) dtype=float32>
<tf.Tensor 'minibatch/decoder_targets:0' shape=(?, ?) dtype=int32>
<tf.Tensor 'decoder/decoder/transpose_1:0' shape=(?, ?) dtype=int32>
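To see what the mask does inside sequence_loss, here is a small numeric sketch with made-up numbers in plain NumPy: the per-step cross-entropy at padded positions is zeroed out and the remaining values are averaged over the real tokens only.

import numpy as np

# Made-up per-step cross-entropy for a batch of 2 sequences, max target length 4.
per_step_xent = np.array([[2.3, 1.9, 0.7, 0.7],
                          [2.1, 0.5, 0.6, 0.6]])
lengths = np.array([3, 2])                                    # true target lengths
mask = (np.arange(4)[None, :] < lengths[:, None]).astype(np.float32)
# mask == [[1, 1, 1, 0],
#          [1, 1, 0, 0]]
loss = (per_step_xent * mask).sum() / mask.sum()              # average over real tokens only
print(loss)                                                   # (2.3 + 1.9 + 0.7 + 2.1 + 0.5) / 5 = 1.5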
train_op = tf.train.AdamOptimizer(learning_rate = 0.001).minimize(loss)
sess.run(tf.global_variables_initializer())
def next_feed():
    batch = next(batches)
    
    encoder_inputs_, encoder_inputs_length_ = data_helpers.batch(batch)
    decoder_targets_, decoder_targets_length_ = data_helpers.batch(
        [(sequence) + [EOS] for sequence in batch]
    )
    decoder_inputs_, decoder_inputs_length_ = data_helpers.batch(
        [[EOS] + (sequence) for sequence in batch]
    )
    
    # In the feed dict, the keys can be Tensors (the placeholders defined above)
    return {
        encoder_inputs: encoder_inputs_.T,
        decoder_inputs: decoder_inputs_.T,
        decoder_targets: decoder_targets_.T,
        encoder_inputs_length: encoder_inputs_length_,
        decoder_inputs_length: decoder_inputs_length_
    }
x = next_feed()
print('encoder_inputs:')
print(x[encoder_inputs][0,:])
print('encoder_inputs_length:')
print(x[encoder_inputs_length][0])
print('decoder_inputs:')
print(x[decoder_inputs][0,:])
print('decoder_inputs_length:')
print(x[decoder_inputs_length][0])
print('decoder_targets:')
print(x[decoder_targets][0,:])
encoder_inputs:
[9 4 3 3 2 6 0 0]
encoder_inputs_length:
6
decoder_inputs:
[1 9 4 3 3 2 6 0 0]
decoder_inputs_length:
7
decoder_targets:
[9 4 3 3 2 6 1 0 0]
loss_track = []
max_batches = 3001
batches_in_epoch = 100

try:
    # Training loop: each iteration draws a fresh random minibatch
    for batch in range(max_batches):
        fd = next_feed()
        _, l = sess.run([train_op, loss], fd)
        loss_track.append(l)
        
        if batch == 0 or batch % batches_in_epoch == 0:
            print('batch {}'.format(batch))
            print('  minibatch loss: {}'.format(sess.run(loss, fd)))
            predict_ = sess.run(decoder_outputs.sample_id, fd)
            for i, (inp, pred) in enumerate(zip(fd[encoder_inputs], predict_)):
                print('  sample {}:'.format(i + 1))
                print('    input     > {}'.format(inp))
                print('    predicted > {}'.format(pred))
                if i >= 2:
                    break
            print()
        
except KeyboardInterrupt:
    print('training interrupted')
batch 0
  minibatch loss: 2.2938551902770996
  sample 1:
    input     > [8 5 3 9 3 5 0 0]
    predicted > [4 4 4 4 4 1 4 0 0]
  sample 2:
    input     > [9 5 8 4 4 6 4 7]
    predicted > [9 3 4 4 4 9 9 4 9]
  sample 3:
    input     > [6 6 5 7 6 8 0 0]
    predicted > [1 4 4 3 3 3 4 0 0]

batch 100
  minibatch loss: 2.1440541744232178
  sample 1:
    input     > [5 5 3 7 2 5 0 0]
    predicted > [7 5 5 5 5 7 1 0 0]
  sample 2:
    input     > [3 2 7 2 4 9 6 8]
    predicted > [2 2 2 2 2 1 1 1 1]
  sample 3:
    input     > [6 8 6 2 0 0 0 0]
    predicted > [2 9 2 1 1 0 0 0 0]

batch 200
  minibatch loss: 1.7902907133102417
  sample 1:
    input     > [8 5 6 9 6 6 7 0]
    predicted > [7 5 7 9 5 7 5 1 0]
  sample 2:
    input     > [5 3 4 0 0 0 0 0]
    predicted > [5 3 1 1 0 0 0 0 0]
  sample 3:
    input     > [8 9 3 6 6 4 6 2]
    predicted > [6 9 8 4 4 4 2 1 1]

batch 300
  minibatch loss: 1.6711502075195312
  sample 1:
    input     > [6 5 6 5 7 0 0 0]
    predicted > [7 7 7 7 5 1 0 0 0]
  sample 2:
    input     > [7 8 6 9 7 2 7 0]
    predicted > [5 7 7 5 7 7 7 1 0]
  sample 3:
    input     > [7 3 8 2 2 0 0 0]
    predicted > [2 2 2 2 1 1 0 0 0]

batch 400
  minibatch loss: 1.4671175479888916
  sample 1:
    input     > [3 4 8 8 9 0 0 0]
    predicted > [4 8 8 4 2 1 0 0 0]
  sample 2:
    input     > [5 2 6 2 5 4 3 8]
    predicted > [8 8 2 6 2 9 8 8 1]
  sample 3:
    input     > [2 9 6 0 0 0 0 0]
    predicted > [8 6 5 1 0 0 0 0 0]

batch 500
  minibatch loss: 1.3590279817581177
  sample 1:
    input     > [3 2 2 3 8 8 5 5]
    predicted > [8 8 8 9 5 5 5 1 1]
  sample 2:
    input     > [8 4 6 3 8 2 0 0]
    predicted > [4 4 2 8 2 4 1 0 0]
  sample 3:
    input     > [2 2 6 3 9 9 0 0]
    predicted > [4 2 2 9 6 6 1 0 0]

batch 600
  minibatch loss: 1.292779564857483
  sample 1:
    input     > [7 9 6 5 0 0 0 0]
    predicted > [5 9 5 5 1 0 0 0 0]
  sample 2:
    input     > [5 9 3 0 0 0 0 0]
    predicted > [5 9 3 1 0 0 0 0 0]
  sample 3:
    input     > [3 3 8 5 6 3 0 0]
    predicted > [3 3 3 3 9 3 1 0 0]

batch 700
  minibatch loss: 1.2727009057998657
  sample 1:
    input     > [4 4 7 7 8 6 5 7]
    predicted > [3 7 7 7 9 7 5 1 1]
  sample 2:
    input     > [5 4 2 2 7 7 0 0]
    predicted > [2 2 8 7 7 7 1 0 0]
  sample 3:
    input     > [7 3 9 7 8 0 0 0]
    predicted > [3 7 5 7 8 1 0 0 0]

batch 800
  minibatch loss: 1.1580817699432373
  sample 1:
    input     > [8 3 2 7 8 5 7 0]
    predicted > [4 3 7 7 7 7 7 1 0]
  sample 2:
    input     > [2 8 7 6 7 2 0 0]
    predicted > [2 2 7 7 7 2 1 0 0]
  sample 3:
    input     > [8 7 8 4 3 2 5 8]
    predicted > [8 7 4 3 5 5 5 8 1]

batch 900
  minibatch loss: 1.1622250080108643
  sample 1:
    input     > [6 8 2 5 5 0 0 0]
    predicted > [8 8 5 5 5 1 0 0 0]
  sample 2:
    input     > [5 9 4 5 7 0 0 0]
    predicted > [5 6 7 7 7 1 0 0 0]
  sample 3:
    input     > [6 2 3 4 9 5 3 9]
    predicted > [4 3 3 4 9 3 9 6 1]

batch 1000
  minibatch loss: 1.2378357648849487
  sample 1:
    input     > [4 3 2 3 8 7 4 8]
    predicted > [3 4 8 3 2 4 2 2 1]
  sample 2:
    input     > [5 6 5 4 5 8 5 6]
    predicted > [5 5 5 5 5 5 6 6 1]
  sample 3:
    input     > [3 8 4 3 4 3 6 0]
    predicted > [2 4 4 4 4 3 6 1 0]

batch 1100
  minibatch loss: 1.1085090637207031
  sample 1:
    input     > [4 7 2 0 0 0 0 0]
    predicted > [4 2 8 1 0 0 0 0 0]
  sample 2:
    input     > [6 2 3 5 7 7 2 4]
    predicted > [6 7 7 7 7 7 2 4 1]
  sample 3:
    input     > [9 7 7 3 5 2 4 0]
    predicted > [7 7 7 3 5 8 3 1 0]

batch 1200
  minibatch loss: 1.1771703958511353
  sample 1:
    input     > [8 2 7 8 9 7 0 0]
    predicted > [8 8 5 8 7 7 1 0 0]
  sample 2:
    input     > [8 8 4 7 2 8 0 0]
    predicted > [8 8 2 2 2 2 1 0 0]
  sample 3:
    input     > [2 9 7 9 4 9 3 2]
    predicted > [9 9 7 8 4 8 3 2 1]

batch 1300
  minibatch loss: 0.9447832107543945
  sample 1:
    input     > [4 3 2 3 9 6 0 0]
    predicted > [4 3 4 6 9 9 1 0 0]
  sample 2:
    input     > [5 9 4 0 0 0 0 0]
    predicted > [5 6 4 1 0 0 0 0 0]
  sample 3:
    input     > [8 8 8 2 7 8 0 0]
    predicted > [8 8 8 2 5 8 1 0 0]

batch 1400
  minibatch loss: 1.0269840955734253
  sample 1:
    input     > [5 6 3 5 7 5 6 4]
    predicted > [2 6 5 5 7 6 6 4 1]
  sample 2:
    input     > [2 6 2 4 2 6 0 0]
    predicted > [2 4 2 6 6 6 1 0 0]
  sample 3:
    input     > [2 3 8 4 0 0 0 0]
    predicted > [4 3 8 4 1 0 0 0 0]

batch 1500
  minibatch loss: 0.8967496752738953
  sample 1:
    input     > [7 7 8 6 4 7 0 0]
    predicted > [7 7 2 4 4 7 1 0 0]
  sample 2:
    input     > [7 8 4 6 0 0 0 0]
    predicted > [7 4 4 6 1 0 0 0 0]
  sample 3:
    input     > [6 7 5 6 8 7 7 6]
    predicted > [7 7 5 6 7 7 7 1 1]

batch 1600
  minibatch loss: 0.9586960077285767
  sample 1:
    input     > [6 5 8 3 2 4 9 0]
    predicted > [5 5 8 4 2 4 5 1 0]
  sample 2:
    input     > [4 9 6 9 0 0 0 0]
    predicted > [3 9 9 9 1 0 0 0 0]
  sample 3:
    input     > [7 7 9 9 5 2 0 0]
    predicted > [7 5 9 5 5 2 1 0 0]

batch 1700
  minibatch loss: 1.0395662784576416
  sample 1:
    input     > [5 7 4 5 0 0 0 0]
    predicted > [5 7 4 7 1 0 0 0 0]
  sample 2:
    input     > [3 3 2 8 0 0 0 0]
    predicted > [3 4 2 8 1 0 0 0 0]
  sample 3:
    input     > [6 8 2 7 8 5 0 0]
    predicted > [8 8 2 7 8 7 1 0 0]

batch 1800
  minibatch loss: 0.9203397035598755
  sample 1:
    input     > [4 5 4 2 5 8 0 0]
    predicted > [4 5 4 5 5 1 1 0 0]
  sample 2:
    input     > [2 7 4 8 8 4 0 0]
    predicted > [7 7 4 8 4 4 1 0 0]
  sample 3:
    input     > [6 6 4 0 0 0 0 0]
    predicted > [6 6 4 1 0 0 0 0 0]

batch 1900
  minibatch loss: 0.7155815362930298
  sample 1:
    input     > [6 5 2 2 9 7 9 0]
    predicted > [6 2 2 8 9 7 9 1 0]
  sample 2:
    input     > [5 6 2 9 9 4 8 0]
    predicted > [5 9 9 6 9 4 8 1 0]
  sample 3:
    input     > [6 8 2 9 0 0 0 0]
    predicted > [2 8 2 9 1 0 0 0 0]

batch 2000
  minibatch loss: 0.7423955202102661
  sample 1:
    input     > [3 5 2 9 8 5 3 2]
    predicted > [5 5 2 3 5 2 3 2 1]
  sample 2:
    input     > [8 5 5 9 6 0 0 0]
    predicted > [5 9 5 6 6 1 0 0 0]
  sample 3:
    input     > [6 8 8 0 0 0 0 0]
    predicted > [8 8 8 1 0 0 0 0 0]

batch 2100
  minibatch loss: 0.8510919213294983
  sample 1:
    input     > [7 7 9 0 0 0 0 0]
    predicted > [7 7 9 1 0 0 0 0 0]
  sample 2:
    input     > [4 2 9 2 5 6 2 6]
    predicted > [2 2 9 2 6 6 6 6 1]
  sample 3:
    input     > [4 6 8 2 5 5 0 0]
    predicted > [6 9 8 2 5 5 1 0 0]

batch 2200
  minibatch loss: 0.6667694449424744
  sample 1:
    input     > [9 8 8 4 0 0 0 0]
    predicted > [8 8 8 4 1 0 0 0 0]
  sample 2:
    input     > [5 8 7 0 0 0 0 0]
    predicted > [2 8 1 1 0 0 0 0 0]
  sample 3:
    input     > [9 3 4 0 0 0 0 0]
    predicted > [3 3 4 1 0 0 0 0 0]

batch 2300
  minibatch loss: 0.7337868809700012
  sample 1:
    input     > [2 4 7 6 6 9 0 0]
    predicted > [2 6 6 6 6 9 1 0 0]
  sample 2:
    input     > [3 5 2 8 0 0 0 0]
    predicted > [3 5 2 8 1 0 0 0 0]
  sample 3:
    input     > [5 5 8 4 8 9 4 3]
    predicted > [3 5 8 4 4 3 3 3 1]

batch 2400
  minibatch loss: 0.8720135688781738
  sample 1:
    input     > [8 7 5 7 2 7 2 0]
    predicted > [2 5 5 7 2 7 8 1 0]
  sample 2:
    input     > [7 7 9 4 3 6 8 0]
    predicted > [7 7 9 4 3 9 7 1 0]
  sample 3:
    input     > [8 6 3 2 6 0 0 0]
    predicted > [2 6 2 2 6 1 0 0 0]

batch 2500
  minibatch loss: 0.6776264309883118
  sample 1:
    input     > [7 7 8 8 8 3 2 0]
    predicted > [7 7 8 8 8 3 1 1 0]
  sample 2:
    input     > [6 7 7 9 3 7 9 8]
    predicted > [7 7 7 3 9 7 9 8 1]
  sample 3:
    input     > [8 6 6 7 0 0 0 0]
    predicted > [6 6 6 7 1 0 0 0 0]

batch 2600
  minibatch loss: 0.7246588468551636
  sample 1:
    input     > [3 6 7 0 0 0 0 0]
    predicted > [6 6 7 1 0 0 0 0 0]
  sample 2:
    input     > [9 6 8 4 6 6 8 0]
    predicted > [6 6 8 6 6 6 8 1 0]
  sample 3:
    input     > [6 5 9 6 9 2 7 0]
    predicted > [6 9 9 4 6 2 6 1 0]

batch 2700
  minibatch loss: 0.6910533308982849
  sample 1:
    input     > [3 7 4 0 0 0 0 0]
    predicted > [3 7 4 1 0 0 0 0 0]
  sample 2:
    input     > [2 6 9 9 7 3 2 5]
    predicted > [6 6 9 3 3 3 2 5 1]
  sample 3:
    input     > [9 6 5 0 0 0 0 0]
    predicted > [9 6 5 1 0 0 0 0 0]

batch 2800
  minibatch loss: 0.6767545342445374
  sample 1:
    input     > [9 8 5 0 0 0 0]
    predicted > [9 8 5 1 0 0 0 0]
  sample 2:
    input     > [2 6 6 4 9 8 2]
    predicted > [2 6 6 4 8 8 2 9]
  sample 3:
    input     > [3 8 7 0 0 0 0]
    predicted > [3 8 7 1 0 0 0 0]

batch 2900
  minibatch loss: 0.6852056980133057
  sample 1:
    input     > [6 4 7 0 0 0 0 0]
    predicted > [6 4 7 1 0 0 0 0 0]
  sample 2:
    input     > [9 3 9 9 0 0 0 0]
    predicted > [3 9 9 9 1 0 0 0 0]
  sample 3:
    input     > [3 5 8 0 0 0 0 0]
    predicted > [3 5 8 1 0 0 0 0 0]

batch 3000
  minibatch loss: 0.6660669445991516
  sample 1:
    input     > [7 2 6 9 5 2 8 7]
    predicted > [7 2 9 5 5 2 7 5 1]
  sample 2:
    input     > [6 9 9 3 2 0 0 0]
    predicted > [9 9 9 3 5 1 0 0 0]
  sample 3:
    input     > [8 4 6 6 0 0 0 0]
    predicted > [8 4 6 6 1 0 0 0 0]
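The training loop collects every minibatch loss in loss_track but never plots it. Assuming matplotlib is available, a minimal sketch to look at the training curve:

import matplotlib.pyplot as plt

# Visualize the per-minibatch sequence loss collected during training.
plt.plot(loss_track)
plt.xlabel('minibatch')
plt.ylabel('sequence loss')
plt.title('loss over {} minibatches'.format(len(loss_track)))
plt.show()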