2019-08-09 NVIDIA DALI

An introduction to the NVIDIA Data Loading Library

The NVIDIA Data Loading Library (DALI) is a portable, open source library for decoding and augmenting images and videos to accelerate deep learning applications. DALI reduces latency and training time and mitigates bottlenecks by overlapping training and pre-processing. It provides a drop-in replacement for the built-in data loaders and data iterators in popular deep learning frameworks, for easy integration or retargeting to different frameworks.

Training neural networks with images requires developers to first normalize those images. Moreover, images are often compressed to save storage. Developers have therefore built multi-stage data processing pipelines that include loading, decoding, cropping, resizing, and many other augmentation operators. These data processing pipelines, which are currently executed on the CPU, have become a bottleneck, limiting overall throughput.
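Such a multi-stage pipeline is essentially a chain of transformations applied to every sample. The following is a minimal pure-Python sketch of that idea; the stage names and the 256x256 "image" are toy stand-ins, not DALI code:

```python
# A toy multi-stage preprocessing pipeline: each stage is a plain
# function, and the pipeline applies the stages in order to every sample.

def load(path):
    # Stand-in for reading compressed bytes from disk.
    return {"path": path, "bytes": b"\xff\xd8"}

def decode(sample):
    # Stand-in for JPEG decoding: pretend we now have a 256x256 image.
    sample["image"] = [[0] * 256 for _ in range(256)]
    return sample

def crop(sample, size=224):
    # Keep only the top-left size x size region.
    sample["image"] = [row[:size] for row in sample["image"][:size]]
    return sample

def normalize(sample):
    # Stand-in for scaling pixel values to [0, 1].
    sample["image"] = [[px / 255.0 for px in row] for row in sample["image"]]
    return sample

def pipeline(paths, stages):
    # Run every sample through every stage, in order.
    for p in paths:
        sample = p
        for stage in stages:
            sample = stage(sample)
        yield sample

stages = [load, decode, crop, normalize]
batch = list(pipeline(["img0.jpg", "img1.jpg"], stages))
print(len(batch), len(batch[0]["image"]), len(batch[0]["image"][0]))
# -> 2 224 224
```

Every stage here runs serially on the CPU, which is exactly the bottleneck the article describes: as GPUs get faster, this chain cannot keep up.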

DALI is a high performance alternative to built-in data loaders and data iterators. Developers can now run their data processing pipelines on the GPU, reducing the total time it takes to train a neural network. Data processing pipelines implemented using DALI are portable because they can easily be retargeted to TensorFlow, PyTorch and MXNet.

Key features of DALI

Easy-to-use Python API
Transparently scales across multiple GPUs
Accelerates image classification (ResNet-50) and object detection (SSD) workloads
Flexible graphs let developers create custom pipelines
Supports multiple data formats - LMDB, RecordIO, TFRecord, COCO, JPEG, H.264 and HEVC
Developers can add custom image and video processing operators


DALI's goal: eliminating the CPU bottleneck

Training deep learning models with vast amounts of data is necessary to achieve accurate results. Data in the wild, or even prepared datasets, is usually not in a form that can be directly fed into a neural network. This is where NVIDIA DALI data preprocessing comes into play.

There are various reasons for that:

(1) Different storage formats
(2) Compression
(3) Data format and size may be incompatible
(4) Limited amount of high quality data

Addressing the above issues requires that your training pipeline provide extensive data preprocessing capabilities, such as loading, decoding, decompression, data augmentation, format conversion, and resizing. You may have used the native implementations in existing machine learning frameworks, such as TensorFlow, PyTorch, MXNet, and others, for these preprocessing steps. However, this creates portability issues due to framework-specific data formats, the set of available transformations, and their implementations. Training in a truly portable fashion requires a data pipeline whose augmentations work the same way across frameworks.

Data preprocessing for deep learning workloads has garnered little attention until recently, eclipsed by the tremendous computational resources required for training complex models. As such, preprocessing tasks typically ran on the CPU due to simplicity, flexibility, and the availability of libraries such as OpenCV or Pillow.

Recent advances introduced in the NVIDIA Volta and Turing GPU architectures have significantly increased GPU throughput in deep learning tasks. In particular, half-precision arithmetic and Tensor Cores accelerate certain types of FP16 matrix calculations useful for training DNNs. Dense multi-GPU systems like NVIDIA's DGX-1 and DGX-2 can train a model much faster than the data processing framework can supply data, leaving the GPUs starved for input.

Today's DL applications include complex, multi-stage data processing pipelines consisting of many serial operations. Relying on the CPU to handle these pipelines limits your performance and scalability.

DALI key features

DALI offers a simple Python interface where you can implement a data processing pipeline in a few steps:

  1. Select operators from the extensive list of supported operators
  2. Define the operation flow as a symbolic graph in an imperative way (as in most current deep learning frameworks)
  3. Build an operation pipeline
  4. Run the graph on demand
  5. Integrate with your target deep learning framework through a dedicated plugin
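The five steps above can be sketched with a toy mock of the define/build/run flow. The `Op`, `build`, and `run` names below are hypothetical illustrations of the pattern, not DALI's actual API:

```python
# A schematic mock of the define/build/run flow described above.
# Operators are recorded into a graph when called (step 2), the graph
# is "built" into a frozen plan (step 3), and the plan is executed on
# demand for each batch (step 4). This is NOT DALI's real API.

class Op:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def __call__(self, graph):
        graph.append(self)          # an imperative call records a graph node
        return graph

def build(graph):
    # "Compiling" the graph: here it just freezes the operator order.
    return tuple(graph)

def run(plan, batch):
    # Apply each operator to every sample in the batch, in order.
    for op in plan:
        batch = [op.fn(x) for x in batch]
    return batch

# Step 1: select operators (toy stand-ins for decode/normalize).
decode = Op("decode", lambda x: x * 2)
normalize = Op("normalize", lambda x: x / 10.0)

# Step 2: define the operation flow imperatively.
graph = []
decode(graph)
normalize(graph)

# Step 3: build the pipeline; Step 4: run the graph on demand.
plan = build(graph)
print(run(plan, [5, 10]))   # -> [1.0, 2.0]
```

The point of the define-then-build split is that the framework sees the whole graph before execution, which is what lets DALI schedule operators on the CPU or GPU and overlap them with training.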

Let us now dive into the inner workings of DALI, followed by how to use it.

How DALI works

DALI defines a data pre-processing pipeline as a dataflow graph, with each node representing a data processing operator. DALI has three types of operators:

1. CPU: accepts and produces data on the CPU
2. Mixed: accepts data on the CPU and produces output on the GPU
3. GPU: accepts and produces data on the GPU

Although DALI is developed mostly with GPUs in mind, it also provides a variety of CPU operator variants. This makes it possible to utilize spare CPU cycles in use cases where the CPU/GPU ratio is high, or where the network completely consumes the available GPU cycles. You should experiment with CPU/GPU operator placement to find the sweet spot.

For performance reasons, DALI only transfers data in the direction CPU → Mixed → GPU, as shown in figure 3.
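This one-way constraint can be expressed as a simple check over operator placements. The ranking below is an illustrative model of the rule, not part of DALI:

```python
# Sanity-check that a pipeline's operator placements only move data
# "forward": CPU -> Mixed -> GPU, never back toward the CPU. The
# numeric ranking is a toy model of the constraint in the text.

RANK = {"cpu": 0, "mixed": 1, "gpu": 2}

def valid_placement(op_devices):
    """True if each operator's device never steps back toward the CPU."""
    ranks = [RANK[d] for d in op_devices]
    return all(a <= b for a, b in zip(ranks, ranks[1:]))

print(valid_placement(["cpu", "cpu", "mixed", "gpu"]))   # -> True
print(valid_placement(["cpu", "gpu", "cpu"]))            # -> False
```

The second pipeline is rejected because moving decoded data back from GPU memory to the CPU would add a costly device-to-host copy, which is exactly what the one-way rule avoids.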

Figure 3: DALI example pipeline

Existing frameworks offer prefetching, which prepares the data batches needed next before they are requested. DALI prefetches transparently, and lets you set the prefetch queue length flexibly when the pipeline is constructed, as shown in figure 4. This makes it straightforward to hide high batch-to-batch variation in processing time.
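Prefetching with a bounded queue can be illustrated in a few lines of standard-library Python. This is a toy stand-in for DALI's transparent prefetching, with the queue's `maxsize` playing the role of the configurable prefetch queue length:

```python
# A minimal sketch of prefetching: a background thread prepares batches
# ahead of time so the "training" loop never waits as long as
# preprocessing keeps up. maxsize bounds how far ahead we prefetch.
import queue
import threading

def prefetcher(batches, q):
    for b in batches:
        q.put(b)          # blocks when the prefetch queue is full
    q.put(None)           # sentinel: no more data

def train(q):
    seen = []
    while True:
        batch = q.get()   # returns immediately if a batch is ready
        if batch is None:
            break
        seen.append(sum(batch))   # stand-in for a training step
    return seen

q = queue.Queue(maxsize=2)        # prefetch queue length = 2
t = threading.Thread(target=prefetcher, args=([[1, 2], [3, 4]], q))
t.start()
result = train(q)
t.join()
print(result)   # -> [3, 7]
```

Because the producer runs ahead of the consumer, a slow batch on the producer side is absorbed by the queue instead of stalling the training step, which is how a longer prefetch queue hides batch-to-batch variation.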

Figure 4: How data processing overlaps with training

DALI performance

NVIDIA showcases DALI in its implementations of SSD and ResNet-50, since it was one of the contributing factors to its MLPerf benchmark success.

Figure 6 compares performance with and without DALI for the ResNet-50 (RN50) network running on different GPU configurations:

Note that as the CPU core/GPU ratio becomes smaller (the DGX-1V has 5 CPU cores per GPU, while the DGX-2 has only 3), the performance improvement from DALI gets better.