前沿技術(shù)翻譯系列——1. 如何區(qū)分人工智能效扫,機器學(xué)習(xí)和深度學(xué)習(xí)展懈?

前沿技術(shù)翻譯文章

原文地址:https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/

參考譯文:一張圖看懂AI、機器學(xué)習(xí)和深度學(xué)習(xí)的區(qū)別

這不僅是自己學(xué)習(xí)前沿技術(shù)的記錄幌陕,也是提升英語閱讀的記錄诵姜,同時關(guān)注對比百度翻譯、搜狗翻譯搏熄、谷歌翻譯三者的區(qū)別棚唆,剛好和標(biāo)題中的三者的區(qū)別有異曲同工之妙。

What’s the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?

如何區(qū)分人工智能、機器學習和深度學習?

This is the first of a multi-part series explaining the fundamentals of deep learning by long-time tech journalist Michael Copeland.

這是描述深度學(xué)習(xí)的基礎(chǔ)系列的第一部分。本系列由資深科技記者邁克爾·科普蘭(Michael Copeland)編寫止后。

Artificial intelligence is the future. Artificial intelligence is science fiction. Artificial intelligence is already part of our everyday lives. All those statements are true, it just depends on what flavor of AI you are referring to.

人工智能是未來;人工智能是科幻小說;人工智能已經是我們日常生活的一部分。這些說法都是正確的,關鍵在于你所談論的是哪一種AI。

For example, when Google DeepMind’s AlphaGo program defeated South Korean Master Lee Se-dol in the board game Go earlier this year, the terms AI, machine learning, and deep learning were used in the media to describe how DeepMind won. And all three are part of the reason why AlphaGo trounced Lee Se-Dol. But they are not the same things.

例如,今年早些時候,Google DeepMind開發(fā)的AlphaGo程序在圍棋比賽中擊敗了韓國圍棋大師李世石。媒體在報導AlphaGo獲勝時,使用了人工智能、機器學習和深度學習這些術語。AlphaGo能夠擊敗李世石,這三者都立下了汗馬功勞,但它們并不是同一回事。

The easiest way to think of their relationship is to visualize them as concentric circles with AI — the idea that came first — the largest, then machine learning — which blossomed later, and finally deep learning — which is driving today’s AI explosion — fitting inside both.

要理解三者的關系,最直觀的方式是把它們想象成同心圓:人工智能(AI)的概念最早出現,范圍也最大;隨后是蓬勃發(fā)展的機器學習;最后是正在推動今天AI爆發(fā)的深度學習,它同時包含在前兩個圓之內。

From Bust to Boom

從衰敗到繁榮

AI has been part of our imaginations and simmering in research labs since a handful of computer scientists rallied around the term at the Dartmouth Conferences in 1956 and birthed the field of AI. In the decades since, AI has alternately been heralded as the key to our civilization’s brightest future, and tossed on technology’s trash heap as a harebrained notion of over-reaching propellerheads. Frankly, until 2012, it was a bit of both.

自從1956年達特茅斯會議上幾位計算機科學家圍繞這一術語集結、開創(chuàng)人工智能(AI)領域以來,AI一直存在于我們的想象之中,并在實驗室里持續(xù)醞釀。在此后的幾十年里,人們對AI的評價不斷搖擺:它時而被譽為通往我們文明最光明未來的關鍵,時而被視為好高騖遠者的輕率空想,扔進技術的垃圾堆。坦率地說,直到2012年,這兩種看法兼而有之。

Over the past few years AI has exploded, and especially since 2015. Much of that has to do with the wide availability of GPUs that make parallel processing ever faster, cheaper, and more powerful. It also has to do with the simultaneous one-two punch of practically infinite storage and a flood of data of every stripe (that whole Big Data movement) – images, text, transactions, mapping data, you name it.

在過去的幾年里,人工智能出現了爆炸式增長,尤其是2015年以來更是迅猛發(fā)展。這在很大程度上歸功于GPU的廣泛普及,它讓并行計算變得更快、更便宜、更強大。同時也得益于近乎無限的存儲容量與各種類型數據的井噴(即整個大數據運動):圖像、文本、交易、地圖數據等等,凡是你能想到的都有。

Let’s walk through how computer scientists have moved from something of a bust — until 2012 — to a boom that has unleashed applications used by hundreds of millions of people every day.

讓我們來看看計算機科學(xué)家們是如何將人工智能(AI)從蕭條(直到2012年)變?yōu)榉睒s婿奔,這一熱潮使得每天有成千上萬的人使用包含人工智能(AI)的應(yīng)用。

Artificial Intelligence — Human Intelligence Exhibited by Machines

人工智能——讓機器展示出人類智能

King me: computer programs that played checkers were among the earliest examples of artificial intelligence, stirring an early wave of excitement in the 1950s.

成王(King me):下跳棋的電腦程序是人工智能最早的例子之一,在20世紀50年代掀起了第一波AI熱潮。

Back in that summer of ’56 conference the dream of those AI pioneers was to construct complex machines — enabled by emerging computers — that possessed the same characteristics of human intelligence. This is the concept we think of as “General AI” — fabulous machines that have all our senses (maybe even more), all our reason, and think just like we do. You’ve seen these machines endlessly in movies as friend — C-3PO — and foe — The Terminator. General AI machines have remained in the movies and science fiction novels for good reason; we can’t pull it off, at least not yet.

回到1956年的達(dá)特茅斯會議,這些AI先驅(qū)的夢想就是構(gòu)建具有與人類智慧相同特征的由當(dāng)時新興計算機構(gòu)成的復(fù)雜機器如叼。這個概念就是我們所說的“強人工智能(General AI)”冰木,這是一個神話般的機器,具有我們所有的感覺(甚至更多)笼恰,我們所有的理智踊沸,像我們一樣想。你已經(jīng)在電影中看到了這些機器社证,例如C-3PO (禮儀機器人)逼龟、敵人終結(jié)者等待。強人工智能(General AI)仍然存在于電影和科幻小說中追葡,這是有原因的腺律。我們不能實現(xiàn)強人工智能(General AI)奕短,至少現(xiàn)在還不能。

What we can do falls into the concept of “Narrow AI.” Technologies that are able to perform specific tasks as well as, or better than, we humans can. Examples of narrow AI are things such as image classification on a service like Pinterest and face recognition on Facebook.

我們目前能做到的屬于“弱人工智能(Narrow AI)”的范疇,即在特定任務上表現得與人類相當、甚至更好的技術。例如Pinterest上的圖像分類和Facebook上的人臉識別,都是弱人工智能的應用實例。

Those are examples of Narrow AI in practice. These technologies exhibit some facets of human intelligence. But how? Where does that intelligence come from? That gets us to the next circle, Machine Learning.

這些都是弱人工智能的實際應用。這些技術展現出了人類智能的某些方面。但它們是如何實現的?這些智能來自哪里?帶著這些問題,我們來到下一個圓圈——機器學習。

Machine Learning — An Approach to Achieve Artificial Intelligence

機器學(xué)習(xí)——一種實現(xiàn)人工智能的方法

Spam free diet: machine learning, a subset of AI (Artificial Intelligence) helps keep your inbox (relatively) free of spam.

遠離垃圾郵件:機器學習是人工智能(AI)的一個子集,它幫助你的收件箱(相對)免受垃圾郵件的侵擾。

Machine Learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is “trained” using large amounts of data and algorithms that give it the ability to learn how to perform the task.

機器學(xué)習(xí)最基本的方法是使用算法來解析處理數(shù)據(jù),從中學(xué)習(xí)频鉴,然后對世界中的某些事物做出決定或預(yù)測栓辜。因此,與其用特定的指令集編寫軟件程序來完成特定的任務(wù)垛孔,還不如使用大量的數(shù)據(jù)和算法“訓(xùn)練”機器藕甩,讓它能夠?qū)W習(xí)如何執(zhí)行任務(wù)。

Machine learning came directly from minds of the early AI crowd, and the algorithmic approaches over the years included decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks among others. As we know, none achieved the ultimate goal of General AI, and even Narrow AI was mostly out of reach with early machine learning approaches.

機器學(xué)習(xí)的概念直接來源于早期的早期人工智能研究群體的思想周荐。這些年來狭莱,算法的方法包括決策樹學(xué)習(xí)、歸納邏輯編程概作、聚類腋妙、強化學(xué)習(xí)、貝葉斯網(wǎng)絡(luò)等讯榕。 我們知道骤素,沒有一個人實現(xiàn)了“強人工智能(General AI)”的最終目標(biāo),甚至是通過早期的機器學(xué)習(xí)方法愚屁,甚至是“弱人工智能(Narrow AI)”都無法實現(xiàn)济竹。

As it turned out, one of the very best application areas for machine learning for many years was computer vision, though it still required a great deal of hand-coding to get the job done. People would go in and write hand-coded classifiers like edge detection filters so the program could identify where an object started and stopped; shape detection to determine if it had eight sides; a classifier to recognize the letters “S-T-O-P.” From all those hand-coded classifiers they would develop algorithms to make sense of the image and “l(fā)earn” to determine whether it was a stop sign.

事實證明,多年來機器學習最好的應用領域之一是計算機視覺,盡管它仍然需要大量手工編碼才能完成任務。研究人員會手工編寫分類器,比如用邊緣檢測濾波器讓程序識別物體的起止邊界;用形狀檢測判斷它是否有八條邊;再用一個分類器識別“S-T-O-P”這幾個字母。基于所有這些手工編寫的分類器,他們開發(fā)算法來理解圖像,并“學習”判斷它是不是停止標志。
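原文提到的“邊緣檢測濾波器”這類手工編寫的組件,可以用下面的草圖示意(假設性示例,非原文代碼):一個Sobel水平方向的3×3卷積核,在人為構造的小圖上檢測垂直邊緣。

```python
# 假設性示意(非原文代碼):一個手工編寫的邊緣檢測濾波器——
# Sobel 水平方向卷積核,對左暗右亮的小圖產生強烈響應。

SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve3x3(image, kernel):
    """對二維網格做 3x3 的有效(valid)互相關運算。"""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            acc = 0
            for ki in range(3):
                for kj in range(3):
                    acc += kernel[ki][kj] * image[i + ki][j + kj]
            row.append(acc)
        out.append(row)
    return out

# 左暗右亮的 4x4 小圖:中間有一條垂直邊緣。
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
print(convolve3x3(image, SOBEL_X))
```

正如正文所說,這類濾波器的每一個判斷規(guī)則都要由人預先設計好,這正是后文深度學習所要擺脫的限制。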

Good, but not mind-bendingly great. Especially on a foggy day when the sign isn’t perfectly visible, or a tree obscures part of it. There’s a reason computer vision and image detection didn’t come close to rivaling humans until very recently, it was too brittle and too prone to error.

這種方法可以用,但遠談不上出色。尤其在大霧天標志不完全可見,或者被樹木遮住一部分時,它的識別能力就會大打折扣。直到最近,計算機視覺和圖像檢測的能力才接近人類,原因就在于此前它太脆弱、太容易出錯。

Time, and the right learning algorithms made all the difference.

時間,以及正確的學習算法,讓一切都變得不同。

Deep Learning — A Technique for Implementing Machine Learning

深度學(xué)習(xí)——一種實現(xiàn)機器學(xué)習(xí)的技術(shù)

Herding cats: Picking images of cats out of YouTube videos was one of the first breakthrough demonstrations of deep learning.

馴貓(Herding cats):從YouTube視頻中挑出貓的圖像,是深度學習最早的突破性演示之一。

Another algorithmic approach from the early machine-learning crowd, Artificial Neural Networks, came and mostly went over the decades. Neural Networks are inspired by our understanding of the biology of our brains – all those interconnections between the neurons. But, unlike a biological brain where any neuron can connect to any other neuron within a certain physical distance, these artificial neural networks have discrete layers, connections, and directions of data propagation.

“人工神經(jīng)網(wǎng)絡(luò)(Artificial Neural Networks)”是另外一種算法方法钻弄,它也是早期機器學(xué)習(xí)專家提出的,存在已經(jīng)幾十年了者吁。神經(jīng)網(wǎng)絡(luò)的靈感來自于我們對大腦生物學(xué)構(gòu)造的理解——所有這些神經(jīng)元之間的相互聯(lián)系窘俺。但是,不同之處在于任何神經(jīng)元可以在一定的物理距離內(nèi)連接到其他神經(jīng)元的生物大腦复凳,而人工神經(jīng)網(wǎng)絡(luò)具有離散的層瘤泪、連接和數(shù)據(jù)傳播方向。

You might, for example, take an image, chop it up into a bunch of tiles that are inputted into the first layer of the neural network. In the first layer, individual neurons pass the data to a second layer. The second layer of neurons does its task, and so on, until the final layer and the final output is produced.

舉個例子,你可以把一張圖片切分成許多小塊,輸入到神經網絡的第一層。第一層的各個神經元把數據傳遞給第二層;第二層的神經元完成自己的任務,依此類推,直到最后一層產生最終輸出。
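上面逐層傳遞數據的過程,可以用一個極簡的前向傳播草圖示意(網絡結構與權重均為虛構的假設,非原文內容):

```python
import math

# 假設性示意(非原文代碼):數據逐層前向傳播,
# 每一層先做加權求和,再經過一個非線性函數(這里用 tanh)。

def layer(inputs, weights):
    """一個全連接層:weights 是各神經元的權重向量組成的列表。"""
    return [math.tanh(sum(w * x for w, x in zip(neuron, inputs)))
            for neuron in weights]

def forward(inputs, all_weights):
    """把上一層的輸出作為下一層的輸入,直到最后一層產生最終輸出。"""
    for weights in all_weights:
        inputs = layer(inputs, weights)
    return inputs

# 虛構的 3 -> 2 -> 1 小網絡,權重為隨意假設的數值。
net = [
    [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]],  # 第一層:兩個神經元
    [[1.0, -1.0]],                          # 第二層:一個神經元
]
print(forward([1.0, 2.0, 3.0], net))
```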

Each neuron assigns a weighting to its input — how correct or incorrect it is relative to the task being performed. The final output is then determined by the total of those weightings. So think of our stop sign example. Attributes of a stop sign image are chopped up and “examined” by the neurons — its octagonal shape, its fire-engine red color, its distinctive letters, its traffic-sign size, and its motion or lack thereof. The neural network’s task is to conclude whether this is a stop sign or not. It comes up with a “probability vector,” really a highly educated guess, based on the weighting. In our example the system might be 86% confident the image is a stop sign, 7% confident it’s a speed limit sign, and 5% it’s a kite stuck in a tree, and so on — and the network architecture then tells the neural network whether it is right or not.

每個神經(jīng)元都將一個權(quán)重分配給它的輸入儒喊,確定它與所執(zhí)行任務(wù)的關(guān)系镣奋,對應(yīng)正確與不正確的程度。最后的輸出結(jié)果由這些權(quán)重的總和決定怀愧。以之前我們的停止標(biāo)志為例子說明侨颈,我們將停止符號圖像的屬性切割余赢,通過神經(jīng)元來進(jìn)行“檢測”:它的八角形狀,它的消防車類型的紅色哈垢,它獨特的字母妻柒,它的交通標(biāo)志大小,以及它的手勢耘分。神經(jīng)網(wǎng)絡(luò)的任務(wù)是判斷這個圖像是否是一個停止標(biāo)志举塔。它提出了一個“概率向量”,一個基于權(quán)重的據(jù)理推測求泰。在我們的例子中央渣,神經(jīng)網(wǎng)絡(luò)系統(tǒng)可能有86%的概率確定圖像是一個停止標(biāo)志,7%的概率確認(rèn)它是一個限速標(biāo)志渴频,5%的概率確認(rèn)是一個被困在樹上的風(fēng)箏芽丹,等等。網(wǎng)絡(luò)架構(gòu)然后告訴神經(jīng)網(wǎng)絡(luò)結(jié)果是否正確枉氮。

Even this example is getting ahead of itself, because until recently neural networks were all but shunned by the AI research community. They had been around since the earliest days of AI, and had produced very little in the way of “intelligence.” The problem was even the most basic neural networks were very computationally intensive, it just wasn’t a practical approach. Still, a small heretical research group led by Geoffrey Hinton at the University of Toronto kept at it, finally parallelizing the algorithms for supercomputers to run and proving the concept, but it wasn’t until GPUs were deployed in the effort that the promise was realized.

即使這個例子也有些超前,因為直到不久前,神經網絡還幾乎被AI研究界所拋棄。它從AI誕生之初就已存在,卻幾乎沒有產生過什么“智能”。問題在于,即使最基本的神經網絡也是計算密集型的,這在當時并不是一種實用的方法。盡管如此,以多倫多大學Geoffrey Hinton為首的一個非主流的小研究團隊仍堅持不懈,最終把算法并行化到超級計算機上運行,證明了這一概念;但直到GPU被投入使用,這一承諾才真正兌現。

If we go back again to our stop sign example, chances are very good that as the network is getting tuned or “trained” it’s coming up with wrong answers — a lot. What it needs is training. It needs to see hundreds of thousands, even millions of images, until the weightings of the neuron inputs are tuned so precisely that it gets the answer right practically every time — fog or no fog, sun or rain. It’s at that point that the neural network has taught itself what a stop sign looks like; or your mother’s face in the case of Facebook; or a cat, which is what Andrew Ng did in 2012 at Google.

再回到停止標志的例子。在網絡被調整或“訓練”的過程中,它很可能經常得出錯誤答案。它需要的是訓練:要看過數十萬張、甚至數百萬張圖像,直到神經元輸入的權重被調得足夠精確,使它幾乎每次都能給出正確答案——不管有霧沒霧、晴天雨天。到那時,神經網絡就自己學會了停止標志長什么樣;或者在Facebook的場景中認出你母親的臉;或者認出一只貓——這正是吳恩達(Andrew Ng)2012年在谷歌所做的事情。

Ng’s breakthrough was to take these neural networks, and essentially make them huge, increase the layers and the neurons, and then run massive amounts of data through the system to train it. In Ng’s case it was images from 10 million YouTube videos. Ng put the “deep” in deep learning, which describes all the layers in these neural networks.

吳恩達(dá)的突破之處在于:利用這些神經(jīng)網(wǎng)絡(luò)贷屎,本質(zhì)上使它們變得巨大,增加了層和神經(jīng)元艘虎,然后通過系統(tǒng)運行大量的數(shù)據(jù)來訓(xùn)練它豫尽。在吳恩達(dá)(Andrew Ng)的案例中,它是通過分析來自1000萬段YouTube視頻中的圖片顷帖。吳恩達(dá)(Andrew Ng)真正實現(xiàn)了深度學(xué)習(xí)的“深度”,使得其能夠描述神經(jīng)網(wǎng)絡(luò)中的所有的層次信息渤滞。

Today, image recognition by machines trained via deep learning in some scenarios is better than humans, and that ranges from cats to identifying indicators for cancer in blood and tumors in MRI scans. Google’s AlphaGo learned the game, and trained for its Go match — it tuned its neural network — by playing against itself over and over and over.

今天,在某些場景下,經深度學習訓練的機器在圖像識別上已勝過人類:從識別貓,到識別血液中的癌癥指標,再到在MRI掃描中識別腫瘤。谷歌的AlphaGo學會了圍棋,并通過一遍又一遍地與自己對弈來為比賽訓練、調整自己的神經網絡。

Thanks to Deep Learning, AI Has a Bright Future

感謝深度學(xué)習(xí)停做,人工智能有一個光明的未來

Deep Learning has enabled many practical applications of Machine Learning and by extension the overall field of AI. Deep Learning breaks down tasks in ways that makes all kinds of machine assists seem possible, even likely. Driverless cars, better preventive healthcare, even better movie recommendations, are all here today or on the horizon. AI is the present and the future. With Deep Learning’s help, AI may even get to that science fiction state we’ve so long imagined. You have a C-3PO, I’ll take it. You can keep your Terminator.

深入學(xué)習(xí)已經(jīng)使機器學(xué)習(xí)有許多實際的應(yīng)用同時擴展了人工智能(AI)的整體領(lǐng)域。深度學(xué)習(xí)以各種方式分解任務(wù),這使得所有的機器輔助解決問題成為可能。無人駕駛汽車莹痢,更好的預(yù)防保健件蚕,甚至更好的電影推薦,都在今天或即將到來跳纳。AI既是現(xiàn)在,也是未來。在深度學(xué)習(xí)的幫助下河咽,人工智能甚至可能達(dá)到我們長久以來所設(shè)想的科幻小說描述的水平。也許未來你會擁有自己的C-3PO赋元,我將會拿走它忘蟹,;你也可以保有自己的終結(jié)者们陆。

To learn more about where deep learning is going next, listen to our in-depth interview with AI pioneer Andrew Ng on the NVIDIA AI Podcast.

想進一步了解深度學習的下一步走向,請在英偉達(NVIDIA)AI播客中收聽我們對人工智能先驅吳恩達(Andrew Ng)的深度訪談。

寫在最后

深度學(xué)習(xí)會在未來的各個新興領(lǐng)域以及傳統(tǒng)行業(yè)廣泛實踐應(yīng)用坪仇。社會不斷發(fā)展以及效率不斷提升離不開對于各行各業(yè)的數(shù)據(jù)分析數(shù)據(jù)中蘊含的價值杂腰。

All in AI!

The road of learning English is bumpy; let us join hands and advance together!

自動翻譯也是深度學(xué)習(xí)的應(yīng)用領(lǐng)域之一。

PS:自己對在線自動翻譯的排名:Google >= 有道 >> 百度

?著作權(quán)歸作者所有,轉(zhuǎn)載或內(nèi)容合作請聯(lián)系作者