If you follow CVPR, you probably know that this year's CVPR features a Workshop on Autonomous Driving (WAD), along with a semantic segmentation competition hosted on Kaggle. Both are valuable resources for study and practice, so over the next stretch of time I will introduce the latest autonomous-driving papers from CVPR 2018 and, in parallel, experiment with the Kaggle dataset (my abilities are limited, but I'll do my best).
All of this year's accepted autonomous-driving papers are listed on the WAD website, which is a real treasure trove. Below are the paper titles and authors I copied from it; if any catch your interest, download them and read them carefully.
Papers
The ApolloScape Dataset for Autonomous Driving
Xinyu Huang*, Baidu; Xinjing Cheng, Baidu; Qichuan Geng, Baidu; Binbin Cao, Baidu; Dingfu Zhou, Baidu; Peng Wang, Baidu USA LLC; Yuanqing Lin, Baidu; Yang Ruigang, Baidu
Scene Understanding Networks for Autonomous Driving based on Around View Monitoring System
Jeongyeol Baek*, LG Electronics; Ioana Veronica Chelu, Arnia; Livia Iordache, Arnia; Vlad Paunescu, Arnia; HyunJoo Ryu, LG Electronics; Alexandru Ghiuta, Arnia; Andrei Petreanu, Arnia; Yunsung Soh, LG Electronics; Andrei Leica, Arnia; ByeongMoon Jeon, LG Electronics
Jonathan Tremblay*, Nvidia; Aayush Prakash, Nvidia; David Acuna, Nvidia; Mark Brophy, Nvidia; Varun Jampani, Nvidia Research; Cem Anil, Nvidia; Thang To, Nvidia; Eric Cameracci, Nvidia; Shaad Boonchoon, Nvidia; Stan Birchfield, NVIDIA
On the iterative refinement of densely connected representation levels for semantic segmentation
Arantxa Casanova*, MILA; Guillem Cucurull, Computer Vision Center; Michal Drozdzal, Facebook; Adriana Romero, FAIR; Yoshua Bengio, Universite de Montreal
Minimizing Supervision for Free-space Segmentation
Satoshi Tsutsui, Indiana University; Tommi Kerola*, Preferred Networks, Inc.; Shunta Saito, Preferred Networks, Inc.; David Crandall, Indiana University
Error Correction for Dense Semantic Image Labeling
Yu-Hui Huang*, KU Leuven; Xu Jia, KU Leuven; Stamatios Georgoulis, ETH Zurich; Tinne Tuytelaars, K.U. Leuven; Luc Van Gool, ETH Zurich
Nikolai Smolyanskiy, NVIDIA; Alexey Kamenev, NVIDIA; Stan Birchfield*, NVIDIA
Accurate Deep Direct Geo-Localization from Ground Imagery and Phone-Grade GPS
Shaohui Sun*, Lyft; Ramesh Sarukkai, Lyft; Jack Kwok, Lyft; Vinay Shet, Lyft
Efficient and Safe Vehicle Navigation Based on Driver Behavior Classification
Chor Hei Ernest Cheung*, The University of North Carolina at Chapel Hill; Aniket Bera, The University of North Carolina at Chapel Hill; Dinesh Manocha, University of North Carolina
Detection of Distracted Driver using Convolutional Neural Network
Bhakti Baheti*, SGGSIE&T, Nanded, MH; Suhas Gajre, S.G.G.S. Nanded; Sanjay Talbar, SGGSIET Nanded
Classifying Group Emotions for Socially-Aware Autonomous Vehicle Navigation
Aniket Bera*, The University of North Carolina at Chapel Hill; Tanmay Randhavane, The University of North Carolina at Chapel Hill; Emily Kubin, The University of North Carolina at Chapel Hill; Austin Wang, The University of North Carolina at Chapel Hill; Kurt Gray, The University of North Carolina at Chapel Hill; Dinesh Manocha, University of North Carolina
AutonoVi-Sim: Autonomous Vehicle Simulation Platform with Weather, Sensing, and Traffic Control
Andrew Best*, UNC Chapel Hill; Sahil Narang, UNC Chapel Hill; Lucas Pasqualin, University of Central Florida; Daniel Barber, University of Central Florida; Dinesh Manocha, University of North Carolina
Learning Hierarchical Models for Class-Specific Reconstruction from Natural Data
Arun CS Kumar*, University of Georgia; Suchendra Bhandarkar, University of Georgia; Mukta Prasad, Trinity College, Dublin
Subset Replay based Continual Learning for Scalable Improvement of Autonomous Systems
Pratik Brahma*, Volkswagen Electronics Research Lab; Adrienne Othon, Volkswagen Electronics Research Lab
Reading papers alone is not enough; practice matters just as much, or you end up like Wang Yuyan in Demi-Gods and Semi-Devils, who knows every martial-arts manual but cannot throw a single punch. Kaggle needs no introduction from me: the WAD competition and its data are described on the official page, and there are plenty of kernels and discussions there worth learning from, making it a great platform for hands-on practice.
Enough chatter; let's roll up our sleeves and start reading (the data is still downloading, 95.6 GB in all, and I'm not sure my Hasee laptop can handle that much).
Paper 1: The ApolloScape Dataset for Autonomous Driving
As the title suggests, this paper introduces a dataset (presumably the one used in the Kaggle competition). I will only highlight the key points; read the paper itself if you want the details. I also want to salute Baidu and the other companies and labs that generously open their datasets to the community.
Compared with other related datasets, the ApolloScape dataset has five main features (a small label-loading sketch follows the list):
1. It contains 143,906 frames of driving imagery, split into easy, moderate, and hard subsets according to scene complexity (how many vehicles and pedestrians appear in the image).
2. It is the first outdoor dataset to provide pixel-level RGB-D imagery.
3. Lane markings are annotated in fine detail.
4. The authors designed an efficient joint 2D/3D annotation pipeline that cuts labeling time by about 70%; because the dataset also includes 3D point clouds, it is the first open street-scene dataset with 3D annotations.
5. Video frames are annotated at the instance level, so moving objects can be modeled in space and time for tasks such as prediction, trajectory tracking, and behavior analysis.
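As promised above, here is a minimal Python sketch for inspecting one of the instance-level label images from the Kaggle WAD release, just to get a feel for features 2 and 5. The file name and the label encoding (class id = pixel value // 1000, instance id = pixel value % 1000) are my assumptions based on the competition's data description, not something stated in the paper, so treat this as a sketch rather than official loading code.

```python
# Minimal sketch (my own, not from the paper): inspect one instance-level
# label image from the Kaggle WAD / ApolloScape release.
# Assumed encoding: labels are 16-bit PNGs where pixel_value // 1000 is the
# semantic class id and pixel_value % 1000 is the instance id within that class.
# The file path below is a placeholder, not a real file from the release.
import numpy as np
from PIL import Image

label_path = "train_label/example_frame_instanceIds.png"  # hypothetical path

label = np.array(Image.open(label_path))   # H x W array of integer label values
class_map = label // 1000                  # per-pixel semantic class id
instance_map = label % 1000                # per-pixel instance id

print("classes present:", np.unique(class_map))
for cls in np.unique(class_map):
    n_instances = len(np.unique(instance_map[class_map == cls]))
    print(f"class {cls}: {n_instances} instance(s)")
```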
The dataset is available from the official download page, which will keep being updated with new data; the page also offers a more detailed introduction to the dataset in Chinese, as well as other events that Baidu Apollo takes part in, such as IV 2018.
Let me also plug Baidu's Apollo open platform; I have not studied it closely yet, but it looks quite impressive.
The first paper is fairly simple, so I will leave it there. By coincidence, today I also came across an article analyzing the state of autonomous-driving companies, which should be very useful for anyone job-hunting in this field; after all, research doesn't pay the bills by itself.
Roughly speaking, the window for new autonomous-driving startups in China has already closed, the startups that broke through are each racing down their own track, and giants such as BAT and Huawei are all piling in, so the field still has plenty of room to grow (cue a final sigh: sob, I still can't find a job).
Finally, best wishes, and may we all keep improving together.