YOLOv3: An Incremental Improvement

Abstract

We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that’s pretty swell. It’s a little bigger than last time but more accurate. It’s still fast though, don’t worry. At 320 × 320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 AP50 in 51 ms on a Titan X, compared to 57.5 AP50 in 198 ms by RetinaNet, similar performance but 3.8× faster. As always, all the code is online at https://pjreddie.com/yolo/.

1. Introduction

Sometimes you just kinda phone it in for a year, you know? I didn’t do a whole lot of research this year. Spent a lot of time on Twitter. Played around with GANs a little. I had a little momentum left over from last year [10] [1]; I managed to make some improvements to YOLO. But, honestly, nothing like super interesting, just a bunch of small changes that make it better. I also helped out with other people’s research a little.

Actually, that’s what brings us here today. We have a camera-ready deadline and we need to cite some of the random updates I made to YOLO but we don’t have a source. So get ready for a TECH REPORT!

The great thing about tech reports is that they don’t need intros, y’all know why we’re here. So the end of this introduction will signpost for the rest of the paper. First we’ll tell you what the deal is with YOLOv3. Then we’ll tell you how we do. We’ll also tell you about some things we tried that didn’t work. Finally we’ll contemplate what this all means.

2. The Deal

So here’s the deal with YOLOv3: We mostly took good ideas from other people. We also trained a new classifier network that’s better than the other ones. We’ll just take you through the whole system from scratch so you can understand it all.

2.1 Bounding Box Prediction

Following YOLO9000 our system predicts bounding boxes using dimension clusters as anchor boxes [13]. The network predicts 4 coordinates for each bounding box, $t_x$, $t_y$, $t_w$, $t_h$. If the cell is offset from the top left corner of the image by $(c_x, c_y)$ and the bounding box prior has width and height $p_w$, $p_h$, then the predictions correspond to:

$$b_x = \sigma(t_x) + c_x$$

$$b_y = \sigma(t_y) + c_y$$

$$b_w = p_w e^{t_w}$$

$$b_h = p_h e^{t_h}$$
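
For concreteness, here is a minimal numpy sketch of that decode step (our own illustration, not the Darknet source); `tx, ty, tw, th` are the raw outputs for one box, and `cx, cy, pw, ph` are the cell offset and prior dimensions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Turn raw network outputs (t*) into box center/size (b*) per the equations above."""
    bx = sigmoid(tx) + cx      # center x, in grid-cell units
    by = sigmoid(ty) + cy      # center y, in grid-cell units
    bw = pw * np.exp(tw)       # width, scaled from the prior
    bh = ph * np.exp(th)       # height, scaled from the prior
    return bx, by, bw, bh
```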

During training we use sum of squared error loss. If the ground truth for some coordinate prediction is $\hat{t}_*$ our gradient is the ground truth value (computed from the ground truth box) minus our prediction: $\hat{t}_* - t_*$. This ground truth value can be easily computed by inverting the equations above.
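
Continuing the numpy sketch above, the targets $\hat{t}_*$ can be obtained by inverting those equations; `encode_box` below is a hypothetical helper that assumes the ground-truth center and size are already expressed in the same grid/prior units:

```python
def encode_box(bx, by, bw, bh, cx, cy, pw, ph, eps=1e-9):
    """Invert the decode equations to get the ground-truth targets t-hat."""
    def logit(p):  # inverse of the sigmoid used for the center offsets
        p = np.clip(p, eps, 1.0 - eps)
        return np.log(p / (1.0 - p))
    tx_hat = logit(bx - cx)
    ty_hat = logit(by - cy)
    tw_hat = np.log(bw / pw)
    th_hat = np.log(bh / ph)
    return tx_hat, ty_hat, tw_hat, th_hat
```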

YOLOv3 predicts an objectness score for each bounding box using logistic regression. This should be 1 if the bounding box prior overlaps a ground truth object by more than any other bounding box prior. If the bounding box prior is not the best but does overlap a ground truth object by more than some threshold we ignore the prediction, following [15]. We use the threshold of .5. Unlike [15] our system only assigns one bounding box prior for each ground truth object. If a bounding box prior is not assigned to a ground truth object it incurs no loss for coordinate or class predictions, only objectness.
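
A simplified sketch of that assignment rule (the helper name and the IOU-matrix layout are ours, not the paper's):

```python
import numpy as np

def assign_priors(ious, ignore_thresh=0.5):
    """ious: (num_priors, num_gt) IOU matrix between every prior and every ground-truth box.
    Returns one label per prior: 1 = positive (best prior for some GT), -1 = ignored, 0 = negative."""
    labels = np.zeros(ious.shape[0], dtype=int)
    # priors that overlap some ground truth above the threshold, but are not the best, are ignored
    labels[ious.max(axis=1) > ignore_thresh] = -1
    # the single best prior for each ground-truth object is the positive example
    labels[ious.argmax(axis=0)] = 1
    return labels
```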

2.2 Class Prediction

Each box predicts the classes the bounding box may contain using multilabel classification. We do not use a softmax as we have found it is unnecessary for good performance, instead we simply use independent logistic classifiers. During training we use binary cross-entropy loss for the class predictions.
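
As a rough illustration of that choice, the class loss amounts to independent binary cross-entropy terms rather than a softmax; this is a sketch under our own naming, not the Darknet implementation:

```python
import numpy as np

def multilabel_class_loss(class_logits, class_targets, eps=1e-9):
    """Independent logistic (sigmoid) per class plus binary cross-entropy,
    instead of a softmax over classes; class_targets holds 0/1 per class."""
    p = 1.0 / (1.0 + np.exp(-class_logits))
    p = np.clip(p, eps, 1.0 - eps)
    return -np.mean(class_targets * np.log(p) + (1.0 - class_targets) * np.log(1.0 - p))
```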

This formulation helps when we move to more complex domains like the Open Images Dataset [5]. In this dataset there are many overlapping labels (i.e. Woman and Person). Using a softmax imposes the assumption that each box has exactly one class which is often not the case. A multilabel approach better models the data.

2.3 Predictions Across Scales

YOLOv3 predicts boxes at 3 different scales. Our system extracts features from those scales using a similar concept to feature pyramid networks [6]. From our base feature extractor we add several convolutional layers. The last of these predicts a 3-d tensor encoding bounding box, objectness, and class predictions. In our experiments with COCO [8] we predict 3 boxes at each scale so the tensor is N × N × [3 ∗ (4 + 1 + 80)] for the 4 bounding box offsets, 1 objectness prediction, and 80 class predictions.
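
A small sketch of slicing that output at one scale; the channels-last layout and the helper name are assumptions made purely for illustration:

```python
def split_scale_output(pred, num_anchors=3, num_classes=80):
    """pred: array of shape (N, N, num_anchors * (4 + 1 + num_classes)) for one scale.
    Reshape so the box / objectness / class slices are easy to index."""
    n = pred.shape[0]
    pred = pred.reshape(n, n, num_anchors, 4 + 1 + num_classes)
    boxes      = pred[..., 0:4]    # tx, ty, tw, th
    objectness = pred[..., 4:5]
    classes    = pred[..., 5:]
    return boxes, objectness, classes
```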

Next we take the feature map from 2 layers previous and upsample it by 2×. We also take a feature map from earlier in the network and merge it with our upsampled features using concatenation. This method allows us to get more meaningful semantic information from the upsampled features and finer-grained information from the earlier feature map. We then add a few more convolutional layers to process this combined feature map, and eventually predict a similar tensor, although now twice the size.
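
A toy PyTorch sketch of that merge step is shown below; the channel counts are made up and the real Darknet configuration differs:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MergeScale(nn.Module):
    """Upsample the coarse map 2x and concatenate it with an earlier, finer feature map."""
    def __init__(self, coarse_ch=256, fine_ch=512, out_ch=256):
        super().__init__()
        # a few extra conv layers then process the combined map
        self.conv = nn.Sequential(
            nn.Conv2d(coarse_ch + fine_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.1),
        )

    def forward(self, coarse, fine):
        up = F.interpolate(coarse, scale_factor=2, mode="nearest")
        merged = torch.cat([up, fine], dim=1)   # channel-wise concatenation
        return self.conv(merged)
```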

We perform the same design one more time to predict boxes for the final scale. Thus our predictions for the 3rd scale benefit from all the prior computation as well as finegrained features from early on in the network.

We still use k-means clustering to determine our bounding box priors. We just sort of chose 9 clusters and 3 scales arbitrarily and then divide up the clusters evenly across scales. On the COCO dataset the 9 clusters were: (10×13), (16×30), (33×23), (30×61), (62×45), (59×119), (116×90), (156×198), (373×326).
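
The clustering can be reproduced roughly as follows, using 1 − IOU between (width, height) pairs as the distance, in the spirit of the dimension clusters from YOLOv2; the helper names are ours:

```python
import numpy as np

def wh_iou(wh, centroids):
    """IOU between (w, h) pairs and centroids, assuming boxes share the same corner."""
    inter = np.minimum(wh[:, None, 0], centroids[None, :, 0]) * \
            np.minimum(wh[:, None, 1], centroids[None, :, 1])
    union = wh[:, None, 0] * wh[:, None, 1] + centroids[None, :, 0] * centroids[None, :, 1] - inter
    return inter / union

def kmeans_anchors(wh, k=9, iters=100, seed=0):
    """wh: (num_boxes, 2) ground-truth widths/heights; returns k anchors sorted by area."""
    rng = np.random.default_rng(seed)
    centroids = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        assign = wh_iou(wh, centroids).argmax(axis=1)   # nearest centroid = highest IOU
        centroids = np.array([wh[assign == i].mean(axis=0) if np.any(assign == i) else centroids[i]
                              for i in range(k)])
    return centroids[np.argsort(centroids.prod(axis=1))]
```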

2.4 Feature Extractor

We use a new network for performing feature extraction. Our new network is a hybrid approach between the network used in YOLOv2, Darknet-19, and that newfangled residual network stuff. Our network uses successive 3 × 3 and 1 × 1 convolutional layers but now has some shortcut connections as well and is significantly larger. It has 53 convolutional layers so we call it.... wait for it..... Darknet-53!
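
A rough PyTorch sketch of the repeated building block (a 1 × 1 reduction, a 3 × 3 convolution, and a shortcut); this is our own paraphrase for illustration, not the official Darknet cfg:

```python
import torch.nn as nn

class DarknetBlock(nn.Module):
    """1x1 bottleneck + 3x3 convolution with a residual shortcut."""
    def __init__(self, channels):
        super().__init__()
        mid = channels // 2
        self.block = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.LeakyReLU(0.1),
            nn.Conv2d(mid, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.1),
        )

    def forward(self, x):
        return x + self.block(x)   # shortcut connection
```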

This new network is much more powerful than Darknet-19 but still more efficient than ResNet-101 or ResNet-152. Here are some ImageNet results:

Each network is trained with identical settings and tested at 256×256, single crop accuracy. Run times are measured on a Titan X at 256 × 256. Thus Darknet-53 performs on par with state-of-the-art classifiers but with fewer floating point operations and more speed. Darknet-53 is better than ResNet-101 and 1.5× faster. Darknet-53 has similar performance to ResNet-152 and is 2× faster.

Darknet-53 also achieves the highest measured floating point operations per second. This means the network structure better utilizes the GPU, making it more efficient to evaluate and thus faster. That’s mostly because ResNets have just way too many layers and aren’t very efficient.

2.5 Training

We still train on full images with no hard negative mining or any of that stuff. We use multi-scale training, lots of data augmentation, batch normalization, all the standard stuff. We use the Darknet neural network framework for training and testing [12].
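
A hypothetical sketch of the multi-scale schedule; the resampling interval and the 320–608 range of multiples of 32 follow the YOLOv2 convention and are assumptions here, not details from this report:

```python
import random

def input_size_schedule(num_steps, every=10, sizes=tuple(range(320, 609, 32))):
    """Yield a square input resolution (a multiple of 32) for every training step,
    re-sampling it every `every` steps in the spirit of multi-scale training."""
    size = 416
    for step in range(num_steps):
        if step % every == 0:
            size = random.choice(sizes)
        yield size
```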

我們?nèi)匀辉谡鶊D上訓(xùn)練沒(méi)有難分負(fù)樣本挖掘和任何其他策略溃论。我們使用多尺寸訓(xùn)練屎蜓,許多數(shù)據(jù)增強(qiáng),塊歸一化钥勋,和所以基本的東西炬转。我們使用Darknet神經(jīng)網(wǎng)絡(luò)框架來(lái)訓(xùn)練和測(cè)試。

3. How We Do

YOLOv3 is pretty good! See table 3. In terms of COCO's weird average mean AP metric it is on par with the SSD variants but is 3× faster. It is still quite a bit behind other models like RetinaNet in this metric though.

However, when we look at the “old” detection metric of mAP at IOU= .5 (or AP50 in the chart) YOLOv3 is very strong. It is almost on par with RetinaNet and far above the SSD variants. This indicates that YOLOv3 is a very strong detector that excels at producing decent boxes for objects. However, performance drops significantly as the IOU threshold increases indicating YOLOv3 struggles to get the boxes perfectly aligned with the object.

In the past YOLO struggled with small objects. However, now we see a reversal in that trend. With the new multi-scale predictions we see YOLOv3 has relatively high APS performance. However, it has comparatively worse performance on medium and larger size objects. More investigation is needed to get to the bottom of this.

When we plot accuracy vs speed on the AP50 metric (see figure 3) we see YOLOv3 has significant benefits over other detection systems. Namely, it’s faster and better.

4. Things We Tried That Didn't Work

We tried lots of stuff while we were working on YOLOv3. A lot of it didn’t work. Here’s the stuff we can remember.

Anchor box x, y offset predictions. We tried using the normal anchor box prediction mechanism where you predict the x, y offset as a multiple of the box width or height using a linear activation. We found this formulation decreased model stability and didn’t work very well.

Linear x, y predictions instead of logistic. We tried using a linear activation to directly predict the x, y offset instead of the logistic activation. This led to a couple point drop in mAP.

Focal loss. We tried using focal loss. It dropped our mAP about 2 points. YOLOv3 may already be robust to the problem focal loss is trying to solve because it has separate objectness predictions and conditional class predictions. Thus for most examples there is no loss from the class predictions? Or something? We aren’t totally sure.
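
For reference, the (α-free) focal loss from RetinaNet down-weights well-classified examples, with $p_t$ the predicted probability of the true class and $\gamma$ the focusing parameter:

$$\mathrm{FL}(p_t) = -(1 - p_t)^{\gamma}\,\log(p_t)$$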

Dual IOU thresholds and truth assignment. Faster R-CNN uses two IOU thresholds during training. If a prediction overlaps the ground truth by .7 it is a positive example, by [.3−.7] it is ignored, and if it overlaps all ground truth objects by less than .3 it is a negative example. We tried a similar strategy but couldn't get good results.
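
A sketch of that Faster R-CNN-style rule, mirroring the `assign_priors` helper above (the thresholds shown and the helper name are ours):

```python
import numpy as np

def assign_dual_threshold(ious, pos_thresh=0.7, neg_thresh=0.3):
    """ious: (num_priors, num_gt). Returns 1 = positive, -1 = ignored, 0 = negative."""
    best = ious.max(axis=1)                 # best overlap of each prior with any ground truth
    labels = np.zeros(len(best), dtype=int)
    labels[best >= pos_thresh] = 1
    labels[(best >= neg_thresh) & (best < pos_thresh)] = -1
    return labels
```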

We quite like our current formulation, it seems to be at a local optima at least. It is possible that some of these techniques could eventually produce good results, perhaps they just need some tuning to stabilize the training.

5. What This All Means

YOLOv3 is a good detector. It’s fast, it’s accurate. It’s not as great on the COCO average AP between .5 and .95 IOU metric. But it’s very good on the old detection metric of .5 IOU.

Why did we switch metrics anyway? The original COCO paper just has this cryptic sentence: “A full discussion of evaluation metrics will be added once the evaluation server is complete”. Russakovsky et al. report that humans have a hard time distinguishing an IOU of .3 from .5! “Training humans to visually inspect a bounding box with IOU of 0.3 and distinguish it from one with IOU 0.5 is surprisingly difficult.” [16] If humans have a hard time telling the difference, how much does it matter?

But maybe a better question is: “What are we going to do with these detectors now that we have them?” A lot of the people doing this research are at Google and Facebook. I guess at least we know the technology is in good hands and definitely won’t be used to harvest your personal information and sell it to.... wait, you’re saying that’s exactly what it will be used for?? Oh.

Well the other people heavily funding vision research are the military and they’ve never done anything horrible like killing lots of people with new technology oh wait.....

I have a lot of hope that most of the people using computer vision are just doing happy, good stuff with it, like counting the number of zebras in a national park [11], or tracking their cat as it wanders around their house [17]. But computer vision is already being put to questionable use and as researchers we have a responsibility to at least consider the harm our work might be doing and think of ways to mitigate it. We owe the world that much. In closing, do not @ me. (Because I finally quit Twitter).
