

Jason Yosinski

Machine Learning · Deep Learning

Hello there! I'm Jason, a Ph.D. candidate in Computer Science at Cornell. My research focuses on training and understanding neural networks for computer vision and robotics. I work with Hod Lipson and the Cornell Creative Machines Lab, and sometimes as a visiting student with Yoshua Bengio and the LISA Lab at U. Montreal. My work is supported by a NASA Space Technology Research Fellowship. This summer of 2015 I'm in London working at Google DeepMind.

Jason Yosinski: http://yosinski.com/


Understanding Neural Networks Through Deep Visualization

ICML DL Workshop paper | video | code and more info →

Recent years have produced great advances in training large, deep neural networks (DNNs), including notable successes in training convolutional neural networks (convnets) to recognize natural images. However, our understanding of how these models work, especially what computations they perform at intermediate layers, has lagged behind. Here we introduce two tools for better visualizing and interpreting neural nets. The first is a set of new regularization methods for finding preferred activations using optimization, which leads to clearer and more interpretable images than had been found before. The second tool is an interactive toolbox that visualizes the activations produced on each layer of a trained convnet. You can input image files or read video from your webcam, which we've found fun and informative. Both tools are open source. Read more →
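
As a rough illustration of the optimization-based visualization idea, here is a minimal sketch of activation maximization with two simple regularizers (L2 decay and periodic Gaussian blur). It assumes a pretrained torchvision AlexNet; the model choice, class id, learning rate, and regularizer weights are all illustrative, input normalization is omitted for brevity, and the paper's actual regularizer set is richer than this.

```python
# Sketch: regularized gradient ascent on the input image to maximize one class score.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad = False            # only the input image is optimized

x = torch.zeros(1, 3, 224, 224, requires_grad=True)
target_class = 130                     # ImageNet "flamingo"; any class id works

optimizer = torch.optim.SGD([x], lr=1.0)
for step in range(200):
    optimizer.zero_grad()
    score = model(x)[0, target_class]        # pre-softmax class score
    loss = -score + 1e-4 * (x ** 2).sum()    # maximize score; L2 decay keeps pixels small
    loss.backward()
    optimizer.step()
    if step % 10 == 0:                 # occasional blur suppresses high-frequency noise
        with torch.no_grad():
            x.copy_(TF.gaussian_blur(x, kernel_size=3))
```

Without the regularization terms, the optimized image tends toward uninterpretable high-frequency patterns; the decay and blur steps are what push it toward something a human can read.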

Deep Neural Networks are Easily Fooled

CVPR paper | code | more →

Deep neural networks (DNNs) have recently been doing very well at visual classification problems (e.g. recognizing that one image is of a lion and another image is of a school bus). A recent study by Szegedy et al. showed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a network to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). We show methods of producing fooling images both with and without the class gradient in pixel space. The results shed light on interesting differences between human vision and state-of-the-art DNNs. Read more →
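
Here is a minimal sketch of the gradient-based variant: starting from noise, ascend the class-confidence gradient in pixel space until the network is confident. It assumes a pretrained torchvision AlexNet; the class id, step count, and learning rate are illustrative, and the gradient-free variant in the paper works differently.

```python
# Sketch: optimize an image of pure noise until a pretrained net labels it confidently.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad = False

target_class = 291                                   # ImageNet "lion"
x = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from uniform noise

optimizer = torch.optim.Adam([x], lr=0.05)
for _ in range(300):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), torch.tensor([target_class]))
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        x.clamp_(0.0, 1.0)             # keep pixels in a valid image range

confidence = F.softmax(model(x), dim=1)[0, target_class].item()
print(f"confidence in target class: {confidence:.4f}")
```

The resulting image still looks like static to a human, yet the network's confidence in the target class typically climbs close to 1.0.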

How Transferable are Features in Deep Neural Networks?

NIPS paper | code | more →

Many deep neural networks trained on natural images exhibit a curious phenomenon: they all learn roughly the same Gabor filters and color blobs on the first layer. These features seem to be generic (useful for many datasets and tasks) as opposed to specific (useful for only one dataset and task). By the last layer, features must be task specific, which prompts the question: how do features transition from general to specific throughout the network? In this paper, presented at NIPS 2014, we show the manner in which features transition from general to specific, and also uncover a few other interesting results along the way. Read more
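
As a minimal sketch of the frozen-transfer setup studied in experiments like these: copy the first n layers from a network trained on a base task, freeze them, and train the remaining (reinitialized) layers on the target task. This assumes a torchvision AlexNet as a stand-in for the paper's own AlexNet-style network; the layer split and class count are illustrative.

```python
# Sketch: transfer the first few conv layers, freeze them, retrain the rest.
import torch
import torchvision.models as models

base = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)   # "base task" net
model = models.alexnet(num_classes=10)                          # fresh net for new task

# In torchvision's AlexNet, features[:8] holds conv1-conv3 with their ReLU/pool
# layers; treat these as the "general" layers to transfer.
model.features[:8].load_state_dict(base.features[:8].state_dict())
for p in model.features[:8].parameters():
    p.requires_grad = False            # frozen: transferred layers are not fine-tuned

# Only the unfrozen, reinitialized parameters are trained on the new task.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-3, momentum=0.9)
```

Varying where the split falls, and whether the copied layers are frozen or fine-tuned, is exactly the kind of experiment the paper uses to measure how general each layer's features are.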

Generative Stochastic Networks

First arXiv paper | ICML paper | Latest arXiv paper

Unsupervised learning of models for probability distributions can be difficult due to intractable partition functions. We introduce a general family of models called Generative Stochastic Networks (GSNs) as an alternative to maximum likelihood. Briefly, we show how to learn the transition operator of a Markov chain whose stationary distribution estimates the data distribution. Because this transition distribution is a conditional distribution, it's often much easier to learn than the data distribution itself. Intuitively, this works by pushing the complexity that normally lives in the partition function into the “function approximation” part of the transition operator, which can be learned via simple backprop. We validate the theory by showing several successful experiments on two image datasets and with a particular architecture that mimics the Deep Boltzmann Machine but without the need for layerwise pretraining.
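
A minimal sketch of the core idea, using a simple denoising autoencoder as the learned transition operator: training teaches it to undo a fixed corruption, and sampling alternates corruption and reconstruction so the chain's stationary distribution approximates the data distribution. The architecture, noise level, and [0, 1]-scaled inputs (e.g. flattened MNIST digits) are illustrative assumptions, far simpler than the paper's Deep-Boltzmann-style network.

```python
# Sketch: a denoising autoencoder as a learned Markov-chain transition operator.
import torch
import torch.nn as nn
import torch.nn.functional as F

D = 784  # e.g. flattened 28x28 images, pixels scaled to [0, 1]

denoiser = nn.Sequential(nn.Linear(D, 256), nn.ReLU(),
                         nn.Linear(256, D), nn.Sigmoid())
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

def train_step(x):
    """Learn the transition operator: reconstruct x from a corrupted copy."""
    x_noisy = x + 0.3 * torch.randn_like(x)        # fixed corruption process
    loss = F.binary_cross_entropy(denoiser(x_noisy), x)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

@torch.no_grad()
def sample(steps=100):
    """Run the chain: corrupt, denoise, repeat. Its stationary distribution
    estimates the data distribution."""
    x = torch.rand(1, D)
    for _ in range(steps):
        x = denoiser(x + 0.3 * torch.randn_like(x))
    return x
```

The point of the construction is visible here: the denoiser only ever learns a conditional distribution (clean given corrupted), so no partition function appears anywhere in training.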

EndlessForms.com

Watch the two minute intro video. Users on EndlessForms.com collaborate to produce interesting crowdsourced designs. Since launch, over 4,000,000 shapes have been seen and evaluated by human eyes. This volume of user input has produced some really cool shapes. EndlessForms has received some favorable press. Evolve your own shape →
