Performance comparison of LSTM with and without cuDNN (v5) in Chainer
Mar 15, 2017

We compare the performance of an LSTM network with and without cuDNN in Chainer. The NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations of standard routines such as LSTMs and CNNs.
In Chainer, the LSTM implementation (NStepLSTM) is configurable to run with or without cuDNN, and in this article we compare the performance of both settings. The NStepLSTM implementation can be found here:
https://github.com/pfnet/chainer/blob/master/chainer/functions/connection/n_step_lstm.py
A related post on NVIDIA’s official blog is available here.
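The following is a minimal sketch of how the two settings can be constructed. It assumes the Chainer v1.x API used in these experiments, where NStepLSTM takes a use_cudnn flag at construction time; in Chainer v2 and later this is instead controlled through the use_cudnn configuration key.

import chainer.links as L

n_layers, in_size, hidden_size, dropout = 1, 128, 128, 0.0

# With cuDNN: the cuDNN RNN kernels are used when the link runs on the GPU.
lstm_with_cudnn = L.NStepLSTM(n_layers, in_size, hidden_size, dropout, use_cudnn=True)

# Without cuDNN: falls back to the plain CuPy/NumPy implementation.
lstm_without_cudnn = L.NStepLSTM(n_layers, in_size, hidden_size, dropout, use_cudnn=False)

# The model has to live on the GPU for cuDNN to matter at all:
# lstm_with_cudnn.to_gpu(0)
#
# In Chainer v2 and later, the constructor flag was replaced by a config key:
# with chainer.using_config('use_cudnn', 'always'): ...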
SUMMARY:
Through our experiments, we found the following: we should use cuDNN
if the model is large,
if the sequence data is long, or
if the sequence data has variable lengths.

We conducted experiments from the following viewpoints:
The effect of mini-batch size
The effect of the number of LSTM layers and sequence length of data
The effect of the number of LSTM layers and random sequence length of data
The effect of the number of LSTM layers and the input unit size
The effect of the number of LSTM layers and the hidden unit size
The effect of the number of LSTM layers and dropout rate
When will the differences be large? (setting with/without cuDNN)

EXPERIMENTAL RESULTS:
The effect of mini-batch size. Parameters: batchsize = {127, 128, 200, 255, 256}. In all results, batchsize 128 is faster than 127, and batchsize 256 is faster than 255, even though the smaller batch sizes process less data per iteration. Using batchsize = 2^n gives the best performance. Note that the number of iterations per epoch is the same (39 iterations) for batchsize = {255, 256}; a quick check of this appears below the figure.

Comparing the settings with and without cuDNN, cuDNN is about 2 to 2.8 times faster in forward time and about 1.6 times faster in backward time.


The effect of mini-batch size
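The identical iteration count follows directly from the data size; a quick check (assuming the trailing incomplete mini-batch is dropped, which matches the 39 iterations reported above):

data_size = 10000
print(data_size // 255, data_size // 256)  # -> 39 39 (same number of iterations per epoch)
print(data_size // 127, data_size // 128)  # -> 78 78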

The effect of the layer size of LSTM and the sequence length of data. Parameters: length = {5, 25, 50}, layer = {1, 2, 3}. As the length of the data and the number of layers increase, the performance benefit from the cuDNN implementation increases.

When we use a large LSTM setting (layer=3, length=50), cuDNN is about 7 times faster in forward time, and 4 times faster in backward time.
When we use a small LSTM setting (layer=1, length=5), cuDNN is about 1.5 times faster in forward time and 1.04 times faster in backward time.
The effect of the layer size of LSTM and sequence length of data
The effect of the layer size of LSTM and random sequence length of data. Parameters: layer = {1, 2, 3}, random = {True, False}. In this setting, we compare fixed versus variable sequence lengths (the maximum length is 25). When we use cuDNN, the performance impact of variable sequence lengths is small. (A sketch of how variable-length input is passed to NStepLSTM follows below the figure.)

The effect of the layer size of LSTM and random sequence length of data
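Below is a small sketch, with hypothetical shapes, of how variable-length sequences are passed to NStepLSTM: the input is simply a Python list of arrays, one per sample, whose first dimensions may differ. The random=True setting corresponds to drawing each length at random up to the maximum of 25.

import numpy as np

in_size, max_length, batchsize = 128, 25, 4
rng = np.random.RandomState(0)

# Draw a random length for each sample, then build one array per sequence.
lengths = sorted(rng.randint(1, max_length + 1, size=batchsize), reverse=True)
xs = [rng.randn(int(l), in_size).astype(np.float32) for l in lengths]

# hy, cy, ys = lstm(hx, cx, xs)   # lstm: an NStepLSTM link as constructed earlier
# ys is a list of outputs with the same per-sample lengths as xs. Some Chainer
# versions expect xs sorted by decreasing length, which is why the lengths are
# sorted above; with use_cudnn=True, the conversion to the layout expected by
# the cuDNN RNN kernels is handled internally.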

The effect of the layer size of LSTM and the input unit size. Parameters: layer = {1, 2, 3}, input = {128, 256}. As the number of layers increases, the difference between the cuDNN and no-cuDNN settings grows (about 5 times faster in forward time and about 2.7 times faster in backward time).

The effect of the layer size of LSTM and the input unit size

The effect of the layer size of LSTM and the hidden unit size. Parameters: layer = {1, 2, 3}, hidden = {128, 256}. As the number of layers increases, the difference between cuDNN and no-cuDNN grows, the same trend as in the layer size and input size experiment. However, as the hidden unit size increases, the difference between cuDNN and no-cuDNN becomes smaller.

The effect of the layer size of LSTM and the hidden unit size
The effect of the layer size of LSTM and dropout rate. Parameters: layer = {1, 2, 3}, dropout = {0.0, 0.5}. In the setting with cuDNN, using dropout (rate = 0.5) makes training slightly slower, but the difference is very small.

The effect of the layer size of LSTM and dropout rate

When will the differences be large? (Setting with/without cuDNN.)

When the batch size is small (batchsize = 128), the sequence length is long (length = 50), and the number of layers is large (layer = 3), the difference is large: cuDNN is much faster than the no-cuDNN setting.
With the largest LSTM setting in our experiments, the performance benefit of cuDNN is greatest: 7.8 times faster in forward time and 4.0 times faster in backward time.
EXPERIMENTAL ENVIRONMENT
GPU: GeForce GTX 970
Chainer (v1.21)
cuDNN v5.1 (CUDA v8)

EXPERIMENTAL SETTING
Data: Random artificial sequence data (data size: 10,000)
Training epochs: 10
We compare the average time per epoch.
Measured times:
forward time (for training data)
forward time (for test data)
backward time
Default experiment setting:
batchsize: 128
sequence length: 25
random length: 0 (fixed length)
layer size: 1
input unit size: 128
hidden unit size: 128

The code for our experiments:
https://github.com/aonotas/test-chainer-performance
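For reference, below is a rough sketch of how the per-epoch forward and backward times could be measured; the model, optimizer, batches, and xp (NumPy or CuPy module) objects are placeholders, and the actual benchmark code is in the repository linked above. GPU kernels run asynchronously, so the device is synchronized before each clock read.

import time
import numpy as np

def time_epoch(model, optimizer, batches, xp):
    forward_time = 0.0
    backward_time = 0.0
    for xs, ts in batches:                       # one pass over the training data
        start = time.time()
        loss = model(xs, ts)                     # forward pass
        if xp is not np:
            xp.cuda.Stream.null.synchronize()    # wait for the GPU to finish
        forward_time += time.time() - start

        start = time.time()
        model.cleargrads()
        loss.backward()                          # backward pass
        if xp is not np:
            xp.cuda.Stream.null.synchronize()
        backward_time += time.time() - start

        optimizer.update()                       # parameter update (not timed)
    return forward_time, backward_time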
