TensorFlow Face Recognition (With Your Own Dataset)

You can download a packaged archive from the cloud drive, including the API and the data.
Delete the object_detection folder under the original directory: the __init__.py files inside it could not be uploaded to Baidu cloud drive (every attempt failed), so after downloading, delete the contents of object_detection/object_detection/ and extract object_detection.zip into object_detection.
Link: https://pan.baidu.com/s/1BkMpGOF1cVjJl2Hpip-Hpg
Extraction code: 9stc

First, download the images

The crawler code from user ACLJW is simple and efficient.


pachong.py

# @File    : pachong.py

import requests
import re
import os
from pypinyin import pinyin, lazy_pinyin
def getHTMLText(url):
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except:
        return None  # main() catches the TypeError this causes downstream

def getPageUrls(text, name):
    # match links to album pages whose thumbnail alt text starts with the name
    re_pageUrl = r'href="(.+)">\s*<img src="(.+)" alt="' + name
    return re.findall(re_pageUrl, text)

def downPictures(text, root, name, L):
    pageUrls = getPageUrls(text, name)
    titles = re.findall(r'alt="' + name + r'(.+)" ', text)
    for i in range(len(pageUrls)):
        pageUrl = pageUrls[i][0]
        path = root + titles[i] + "/"
        if not os.path.exists(path):
            os.mkdir(path)
        if not os.listdir(path):
            pageText = getHTMLText(pageUrl)
            # <em>N</em> on the album page holds the total number of pictures
            totalPics = int(re.findall(r'<em>(.+)</em>', pageText)[0])
            # the link labeled "下載圖片" ("download picture") points at the full-size file
            downUrl = re.findall(r'href="(.+?)" class="">下載圖片', pageText)[0]
            cnt = 1
            while cnt <= totalPics:
                L += 1
                picPath = path + "%s.jpg" % str(L)
                r = requests.get(downUrl)
                with open(picPath, 'wb') as f:
                    f.write(r.content)
                print('{} - picture {} downloaded\n'.format(titles[i], L))
                cnt += 1
                # follow the "下一張" ("next picture") link to the next page
                nextPageUrl = re.findall(r'href="(.+?)">下一張', pageText)[0]
                pageText = getHTMLText(nextPageUrl)
                downUrl = re.findall(r'href="(.+?)" class="">下載圖片', pageText)[0]
    return L

def main():
    name = input("請(qǐng)輸入你喜歡的明星的名字:")  # enter the celebrity's name (in Chinese)
    nameUrl = "http://www.win4000.com/mt/" + ''.join(lazy_pinyin(name)) + ".html"
    L = 0
    try:
        text = getHTMLText(nameUrl)
        if not re.findall(r'暫無(wú)(.+)!', text):  # "暫無(wú)..." appears when the site has no albums for the name
            root = "C:/Users/yanghe/Desktop/data/" + name + "/"
            if not os.path.exists(root):
                os.mkdir(root)
            L = downPictures(text, root, name, L)
            try:
                nextPage = re.findall(r'next" href="(.+)"', text)[0]
                while nextPage:
                    nextText = getHTMLText(nextPage)
                    L = downPictures(nextText, root, name, L)
                    nextPage = re.findall(r'next" href="(.+)"', nextText)[0]
            except IndexError:
                print("All downloads finished")
    except TypeError:
        print("Sorry, there are no photos of {}".format(name))
    return

if __name__ == '__main__':
    main()


Labeling the images

1. The labeling tool is labelImg.exe, which is easy to use.
labelImg.exe keyboard shortcuts:


2. Set the classes here:

open_dir sets the directory the photos are opened from, and Ctrl+R changes the default save directory for the XML files. The goal is to produce files in the same format as the Pascal VOC2007 dataset.
Save after labeling each image — just click OK (it seems to save automatically). Super simple.

Labeling Qi Wei (戚薇)

The crawled images have quality problems: most are side profiles, and Liu Yan's are all photos of her wearing hats. Ugh! My eyes glazed over from looking through all these celebrity photo shoots.

A brief introduction to the Pascal VOC2007 dataset

For details, see: Dataset: an analysis of the Pascal VOC2007 dataset.
In Pascal VOC2007, the image 2007_000392.jpg has the following corresponding XML file (the image itself is shown below).

#2007_000392.xml
<annotation>
    <folder>VOC2012</folder>
    <filename>2007_000392.jpg</filename>                               //file name
    <source>                                                           //image source (unimportant)
        <database>The VOC2007 Database</database>
        <annotation>PASCAL VOC2007</annotation>
        <image>flickr</image>
    </source>
    <size>                                                             //image size (width, height, channels)
        <width>500</width>
        <height>332</height>
        <depth>3</depth>
    </size>
    <segmented>1</segmented>                                           //used for segmentation? (0/1 irrelevant for detection)
    <object>                                                           //a detected object
        <name>horse</name>                                             //object class
        <pose>Right</pose>                                             //viewpoint
        <truncated>0</truncated>                                       //truncated? (0 = complete)
        <difficult>0</difficult>                                       //hard to recognize? (0 = easy)
        <bndbox>                                                       //bounding box ((xmin, ymin) and (xmax, ymax) corners)
            <xmin>100</xmin>
            <ymin>96</ymin>
            <xmax>355</xmax>
            <ymax>324</ymax>
        </bndbox>
    </object>
    <object>                                                           //a second detected object
        <name>person</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>198</xmin>
            <ymin>58</ymin>
            <xmax>286</xmax>
            <ymax>197</ymax>
        </bndbox>
    </object>
</annotation>

In 2007_000392.jpg, a person is riding a horse. The XML file contains two objects (person and horse), with fields for difficulty, truncation, pose, and the (xmin, ymin)/(xmax, ymax) corner coordinates of each bounding box.

2007_000392.jpg

My classes are as follows:
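
The screenshot is omitted here, but a label_map.pbtxt consistent with the five classes used later in class_text_to_int would look roughly like this — a sketch, not the author's exact file; list your own class names, with IDs starting from 1, and save it as record/label_map.pbtxt to match the config below:

item {
  id: 1
  name: 'damimi'
}
item {
  id: 2
  name: 'fanbingbing'
}
item {
  id: 3
  name: 'liuyan'
}
item {
  id: 4
  name: 'nazha'
}
item {
  id: 5
  name: 'xiaowei'
}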


Generating CSV files from the XML files

Here, path is the directory where the XML files are saved, and data is the directory where the CSV files will be saved.

For details, see: TensorFlow Object Detection API tutorial — training, prediction, and testing with your own dataset.
Remember to set the sizes of your training and test sets here; the split used is 0.67.

# -*- coding: utf-8 -*-
import os
import glob
import pandas as pd
import xml.etree.ElementTree as ET

def xml_to_csv(path):
    xml_list = []
    # read the annotation files
    for xml_file in glob.glob(path + '/*.xml'):
        tree = ET.parse(xml_file)
        root = tree.getroot()
        for member in root.findall('object'):
            # note: if <filename> in your XML already includes .jpg, drop the + '.jpg'
            value = (root.find('filename').text + '.jpg',
                     int(root.find('size')[0].text),
                     int(root.find('size')[1].text),
                     member[0].text,
                     int(member[4][0].text),
                     int(member[4][1].text),
                     int(member[4][2].text),
                     int(member[4][3].text)
                     )
            xml_list.append(value)
    column_name = ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']

    # split the data into a training set and a validation set, typically 3:1
    train_list = xml_list[0: int(len(xml_list) * 0.67)]
    eval_list = xml_list[int(len(xml_list) * 0.67): ]  # no +1 here, or one sample is silently dropped

    # save as CSV
    train_df = pd.DataFrame(train_list, columns=column_name)
    eval_df = pd.DataFrame(eval_list, columns=column_name)
    train_df.to_csv('data/train.csv', index=None)
    eval_df.to_csv('data/eval.csv', index=None)


def main():
    path = './xml'
    xml_to_csv(path)
    print('Successfully converted xml to csv.')

main()
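
A quick sanity check on the conversion — a minimal sketch, assuming the data/ paths used above:

import pandas as pd

# peek at the generated CSVs to confirm the columns and the split sizes
print(pd.read_csv('data/train.csv').head())
print(pd.read_csv('data/eval.csv').shape)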

Generating TFRecord files from the CSV files

import os
import io
import pandas as pd
import tensorflow as tf

from PIL import Image
from object_detection.utils import dataset_util
from collections import namedtuple, OrderedDict

flags = tf.app.flags
flags.DEFINE_string('csv_input', '', 'Path to the CSV input')
flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
FLAGS = flags.FLAGS


# map class names to integer IDs
def class_text_to_int(row_label):
    if row_label == 'damimi':
        return 1
    elif row_label == 'fanbingbing':
        return 2
    elif row_label == 'liuyan':
        return 3
    elif row_label == 'nazha':
        return 4
    elif row_label == 'xiaowei':
        return 5
    else:
        print('NONE: ' + row_label)
        return None


def split(df, group):
    data = namedtuple('data', ['filename', 'object'])
    gb = df.groupby(group)
    return [data(filename, gb.get_group(x)) for filename, x in zip(gb.groups.keys(), gb.groups)]


def create_tf_example(group, path):
    print(os.path.join(path, '{}'.format(group.filename)))
    with tf.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid:
        encoded_jpg = fid.read()
    encoded_jpg_io = io.BytesIO(encoded_jpg)
    image = Image.open(encoded_jpg_io)
    width, height = image.size

    # the CSV filename already ends in .jpg, so don't append the extension again
    filename = group.filename.encode('utf8')
    image_format = b'jpg'
    xmins = []
    xmaxs = []
    ymins = []
    ymaxs = []
    classes_text = []
    classes = []

    for index, row in group.object.iterrows():
        xmins.append(row['xmin'] / width)
        xmaxs.append(row['xmax'] / width)
        ymins.append(row['ymin'] / height)
        ymaxs.append(row['ymax'] / height)
        classes_text.append(row['class'].encode('utf8'))
        classes.append(class_text_to_int(row['class']))

    tf_example = tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/filename': dataset_util.bytes_feature(filename),
        'image/source_id': dataset_util.bytes_feature(filename),
        'image/encoded': dataset_util.bytes_feature(encoded_jpg),
        'image/format': dataset_util.bytes_feature(image_format),
        'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
        'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
        'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
        'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
        'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
        'image/object/class/label': dataset_util.int64_list_feature(classes),
    }))
    return tf_example


def main(csv_input, output_path, imgPath):
    writer = tf.python_io.TFRecordWriter(output_path)
    path = imgPath
    examples = pd.read_csv(csv_input)
    grouped = split(examples, 'filename')
    for group in grouped:
        tf_example = create_tf_example(group, path)
        writer.write(tf_example.SerializeToString())

    writer.close()
    print('Successfully created the TFRecords: {}'.format(output_path))


if __name__ == '__main__':

    imgPath = './xml'  # your image directory (here the images sit alongside the XML files)

    # generate the train.record file
    output_path = 'data/train.record'  # where to save the record
    csv_input = 'data/train.csv'       # path to your CSV file
    main(csv_input, output_path, imgPath)

    # generate the validation file eval.record
    output_path = 'data/eval.record'
    csv_input = 'data/eval.csv'
    main(csv_input, output_path, imgPath)


Just set the image directory and the save paths for the record files.
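
To sanity-check a generated file, you can count the serialized examples in it — a minimal sketch using tf.python_io.tf_record_iterator, the TF 1.x reader from the same API family the script above uses, and assuming the data/train.record path above:

import tensorflow as tf

# iterate over the serialized examples in the record and count them
count = sum(1 for _ in tf.python_io.tf_record_iterator('data/train.record'))
print('data/train.record contains {} examples'.format(count))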

Modifying the ssd_inception_v2_coco.config file

Just change the directories, the number of training steps, the record files, the label file, and so on:
num_classes: 5
num_steps: 10000
batch_size: 20
fine_tune_checkpoint: "ssd_inception_v2_coco_2018_01_28/model.ckpt"
under train_input_reader: {
  input_path: "record/train.record"
  label_map_path: "record/label_map.pbtxt" }
under eval_input_reader: {
  input_path: "record/val.record"
  label_map_path: "record/label_map.pbtxt" }

# SSD with Inception v2 configuration for MSCOCO Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.

model {
  ssd {
    num_classes: 5
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.95
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.3333
        reduce_boxes_in_lowest_layer: true
      }
    }
    image_resizer {
      fixed_shape_resizer {
        height: 300
        width: 300
      }
    }
    box_predictor {
      convolutional_box_predictor {
        min_depth: 0
        max_depth: 0
        num_layers_before_predictor: 0
        use_dropout: false
        dropout_keep_probability: 0.8
        kernel_size: 3
        box_code_size: 4
        apply_sigmoid_to_scores: false
        conv_hyperparams {
          activation: RELU_6,
          regularizer {
            l2_regularizer {
              weight: 0.00004
            }
          }
          initializer {
            truncated_normal_initializer {
              stddev: 0.03
              mean: 0.0
            }
          }
        }
      }
    }
    feature_extractor {
      type: 'ssd_inception_v2'
      min_depth: 16
      depth_multiplier: 1.0
      conv_hyperparams {
        activation: RELU_6,
        regularizer {
          l2_regularizer {
            weight: 0.00004
          }
        }
        initializer {
          truncated_normal_initializer {
            stddev: 0.03
            mean: 0.0
          }
        }
        batch_norm {
          train: true,
          scale: true,
          center: true,
          decay: 0.9997,
          epsilon: 0.001,
        }
      }
    }
    loss {
      classification_loss {
        weighted_sigmoid {
          anchorwise_output: true
        }
      }
      localization_loss {
        weighted_smooth_l1 {
          anchorwise_output: true
        }
      }
      hard_example_miner {
        num_hard_examples: 3000
        iou_threshold: 0.99
        loss_type: CLASSIFICATION
        max_negatives_per_positive: 3
        min_negatives_per_image: 0
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    normalize_loss_by_num_matches: true
    post_processing {
      batch_non_max_suppression {
        score_threshold: 1e-8
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SIGMOID
    }
  }
}

train_config: {
  batch_size: 20
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.004
          decay_steps: 10000
          decay_factor: 0.95
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "ssd_inception_v2_coco_2018_01_28/model.ckpt"
  from_detection_checkpoint: true
  # Note: The line below caps training at num_steps (the stock template uses
  # 200K steps, which the TF authors found sufficient for the pets dataset).
  # Remove it to train indefinitely.
  num_steps: 1000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
}

train_input_reader: {
  tf_record_input_reader {
    input_path: "record/train.record"
  }
  label_map_path: "record/label_map.pbtxt"
}

eval_config: {
  num_examples: 4952
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  max_evals: 10
}

eval_input_reader: {
  tf_record_input_reader {
    input_path: "record/val.record"
  }
  label_map_path: "record/label_map.pbtxt"
  shuffle: false
  num_readers: 1
  num_epochs: 1
}

Training

Here is my directory layout:



Run the following in cmd. Training on CPU overnight only reached step 666, so the accuracy isn't great.

python train.py \
--logtostderr  \
--train_dir=train \
--pipeline_config_path=ssd_inception_v2_coco.config
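
While it runs, you can optionally watch the loss curves by pointing TensorBoard at the same directory — standard TensorBoard usage, with the directory matching --train_dir above:

tensorboard --logdir=train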

Generating the .pb file

After 666 training steps, I got up in the morning and just pressed Ctrl+C to stop it. If you want to resume training from step 666, simply run the command above again; it picks up the checkpoint files under train.

python export_inference_graph.py \
--pipeline_config_path ssd_inception_v2_coco.config \
--trained_checkpoint_prefix "pb/model.ckpt-666" \
--output_directory pb

An error occurred while generating the .pb file:

ValueError: Protocol message RewriterConfig has no "layout_optimizer" field

In the object_detection/exporter.py file, change layout_optimizer on line 72 to optimize_tensor_layout, and the problem is solved.
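
For reference, the edit is just a field-name swap where the RewriterConfig is built — a sketch assuming the exporter.py of that era; the exact line may differ in your checkout, and '...' stands for whatever value your file already passes:

# before (exporter.py, around line 72):
#   rewrite_options = rewriter_config_pb2.RewriterConfig(layout_optimizer=...)
# after (for older TF builds whose RewriterConfig lacks a layout_optimizer field):
#   rewrite_options = rewriter_config_pb2.RewriterConfig(optimize_tensor_layout=...)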

Testing

test_image.py

import matplotlib.pyplot as plt
import numpy as np
import os 
import tensorflow as tf
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
from PIL import Image


def test():
    # reset the default graph
    tf.reset_default_graph()
    '''
    Load the model and the dataset label map, and collect the image files to test
    '''
    # path to the frozen model (contains the graph structure and the weights)
    PATH_TO_CKPT = 'pb/frozen_inference_graph.pb'
    
    # directory containing the test images
    PATH_TO_TEST_IMAGES_DIR = './test_images'
    
    TEST_IMAGE_PATHS = [os.path.join(PATH_TO_TEST_IMAGES_DIR,'{}.jpg'.format(i)) for i in range(1,11) ]
    
    # label file for the dataset; label_map.pbtxt maps indices to class names
    PATH_TO_LABELS = "record/label_map.pbtxt"
    
    NUM_CLASSES = 5
     
    # define a new GraphDef
    output_graph_def = tf.GraphDef()
    
    with tf.gfile.GFile(PATH_TO_CKPT,'rb') as fid:
        # read the *.pb file into serialized_graph
        serialized_graph = fid.read()
        # restore the contents of serialized_graph into the GraphDef
        output_graph_def.ParseFromString(serialized_graph)
        #print(output_graph_def)
        # import output_graph_def into the current default graph (load the model)
        tf.import_graph_def(output_graph_def,name='')
        
    print('Model loaded')    
    
    # load the label map file for our dataset
    label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
    categories = label_map_util.convert_label_map_to_categories(label_map,max_num_classes = NUM_CLASSES,use_display_name = True)
    category_index = label_map_util.create_category_index(categories)
    
    
    '''
    Define the session
    '''
    def load_image_into_numpy_array(image):
        '''
        Convert the image into an ndarray
        '''
        im_width,im_height = image.size
        return np.array(image.getdata()).reshape((im_height,im_width,3)).astype(np.uint8)
    
    # size of the output figure
    IMAGE_SIZE = (12,8)
    
    # use the default graph, into which the model has already been loaded
    detection_graph = tf.get_default_graph()
    
    with tf.Session(graph=detection_graph) as sess:
        for image_path in TEST_IMAGE_PATHS:
            image = Image.open(image_path)
            # convert the image to a numpy array
            image_np = load_image_into_numpy_array(image)
            
            '''
            Define the tensors, run the graph, and visualize
            '''
            # add a batch dimension; the network expects images shaped [1, ?, ?, 3]
            image_np_expanded = np.expand_dims(image_np,axis = 0)
            
            '''
            Fetch the tensors from the model
            '''
            image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
                        
            # boxes holds the detected bounding boxes
            boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
            
            # each score gives how closely the detected object matches its label; it is drawn after the class name
            scores = detection_graph.get_tensor_by_name('detection_scores:0')
            classes = detection_graph.get_tensor_by_name('detection_classes:0')
            num_detections = detection_graph.get_tensor_by_name('num_detections:0')
            
            # run detection
            boxes,scores,classes,num_detections = sess.run([boxes,scores,classes,num_detections],
                                                           feed_dict={image_tensor:image_np_expanded})
            
            # visualize the results
            vis_util.visualize_boxes_and_labels_on_image_array(
                    image_np,
                    np.squeeze(boxes),
                    np.squeeze(classes).astype(np.int32),
                    np.squeeze(scores),
                    category_index,
                    use_normalized_coordinates=True,
                    line_thickness=8)
            plt.figure(figsize=IMAGE_SIZE)
            print(type(image_np))
            print(image_np.shape)
            image_np = np.array(image_np,dtype=np.uint8)            
            plt.imshow(image_np)
            plt.show()
    
    
                
if __name__ == '__main__':
    test()

Here are the results:


Fan Bingbing (范冰冰)



This is 'damimi'




Recognition confidence for the others isn't very high; it's probably related to the training images.