PyTorch Tips

Contents

  • Should I manually set model mode to train() or eval()?
  • Dropout behaves differently in train and test mode?
  • BatchNorm behaves differently in train and test mode?
  • Set accessible GPUs your code can run on
  • Get one batch from DataLoader
  • What does PyTorch detach() do?

Should I manually set model mode to train() or eval()?

  • By default, all modules in PyTorch are initialized in train mode (self.training = True). You can put the model in train mode by explicitly calling model.train(), but it is optional.
  • Be aware that some layers (such as BatchNorm and Dropout) behave differently during training and evaluation, so setting the mode matters.
  • As a rule of thumb, state your intent explicitly and call model.train() and model.eval() where appropriate, as in the sketch below.
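
A minimal sketch of switching modes (the layers here are chosen only for illustration):

import torch.nn as nn

# a toy model; modules start in train mode by default
model = nn.Sequential(nn.Linear(10, 10), nn.Dropout(p=0.5), nn.BatchNorm1d(10))
print(model.training)  # True

model.eval()           # switches Dropout / BatchNorm to their inference behaviour
print(model.training)  # False

model.train()          # switch back before resuming training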

Dropout behaves differently in train and test mode?

The Dropout layer is defined in the torch.nn module and is used during training to reduce the chance of overfitting. However, when we apply the trained model, we want to use its full capacity, i.e. all neurons (no element is masked), to obtain higher accuracy.

  • During training, Dropout randomly zeroes some of the elements of the input tensor with a pre-defined probability p, using samples from a Bernoulli distribution. The elements to zero are re-sampled on every forward call.
  • During training, the outputs are scaled by a factor of 1/(1 - p).
  • During evaluation, the module simply computes an identity function, as the sketch below illustrates.
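
A minimal sketch of the two behaviours (p = 0.5 and the input shape are arbitrary):

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

drop.train()
print(drop(x))  # roughly half the elements zeroed, survivors scaled by 1/(1-p) = 2
drop.eval()
print(drop(x))  # identity: all ones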

BatchNorm behaves differently in train and test mode?

According to the torch.nn.BatchNorm2d documentation in the PyTorch docs:

  • By default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1.
  • If track_running_stats is set to False, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.

Let's have a look at the BatchNorm2d module:

class torch.nn.BatchNorm2d(
    num_features,
    eps=1e-05,
    momentum=0.1,
    affine=True,
    track_running_stats=True)

By default track_running_stats is True, so BatchNorm2d keeps running estimates of its computed mean and variance, and those running statistics are used for normalization during evaluation. The sketch below makes the difference visible.
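
A minimal sketch (the channel count and input shape are arbitrary):

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3)     # track_running_stats=True by default
x = torch.randn(4, 3, 8, 8)

bn.train()
_ = bn(x)                  # normalizes with batch statistics and updates the running estimates
print(bn.running_mean)     # no longer all zeros

bn.eval()
_ = bn(x)                  # normalizes with running_mean / running_var instead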

Set accessible GPUs your code can run on

It is common for one machine to have two or more GPU cards installed and for a group of people to share the limited resource. For example, your machine has two 1080Ti cards and your colleague is running his code on the first GPU, indexed as gpu:0. He has almost used up that GPU's memory, so you cannot launch your code on the same device: it would throw an out-of-memory error.

However, you will run into exactly that out-of-memory error if you run your code without any specific setting, say a plain model.cuda(). That is due to the default behaviour: PyTorch always uses the first device (index=0) unless told otherwise.
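
You can confirm the default with a quick check, sketched here under the assumption that a GPU is available:

import torch

if torch.cuda.is_available():
    print(torch.cuda.current_device())  # 0 -- the default device index
    print(torch.cuda.device_count())    # how many GPUs PyTorch can see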

So how can we get around this problem? There are two solutions.
Solution One: explicitly change the device.

x = torch.Tensor([1,2,3]).cuda() # or
x = torch.tensor([1,2,3], device=torch.device("cuda")) # or
x = torch.Tensor([1,2,3]).cuda(torch.device("cuda")) # or
x = torch.Tensor([1,2,3]).to(device=torch.device("cuda"))
# x.device is device(type="cuda", index=0), the default one in the context

with torch.cuda.device(1):
    x = torch.Tensor([1,2,3]).cuda() # or
    x = torch.tensor([1,2,3], device=torch.device("cuda")) # or
    x = torch.Tensor([1,2,3]).cuda(torch.device("cuda")) # or
    x = torch.Tensor([1,2,3]).to(device=torch.device("cuda"))
    # x.device is device(type="cuda", index=1), the default one in the context

    x = torch.tensor([1,2,3], device=torch.device("cuda:0")) # or
    x = torch.Tensor([1,2,3]).cuda(torch.device("cuda:0")) # or
    x = torch.Tensor([1,2,3]).to(device=torch.device("cuda:0"))
    # x.device is device(type="cuda", index=0), regardless the context

Note that the device context sets the default device to use, but you can step outside it by explicitly naming another device, e.g. cuda:0 in the block above.
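
A related knob, sketched here assuming the machine has at least two GPUs, is torch.cuda.set_device, which changes the default device for the rest of the process:

import torch

torch.cuda.set_device(1)            # make cuda:1 the current/default device
x = torch.Tensor([1, 2, 3]).cuda()  # no index given, so the tensor lands on cuda:1
print(x.device)                     # device(type='cuda', index=1)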

Solution Two: use the CUDA_DEVICE_ORDER & CUDA_VISIBLE_DEVICES environment variables.
See the CUDA_DEVICE_ORDER and CUDA_VISIBLE_DEVICES environment variable documentation for more information.

import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"   
os.environ["CUDA_VISIBLE_DEVICES"]="1"

x = torch.Tensor([1,2,3]).cuda() # or
x = torch.tensor([1,2,3], device=torch.device("cuda")) # or
x = torch.Tensor([1,2,3]).cuda(torch.device("cuda")) # or
x = torch.Tensor([1,2,3]).to(device=torch.device("cuda"))
# x.device is device(type="cuda", index=0): only physical GPU 1 is visible, and it is renumbered as cuda:0

Why is CUDA_VISIBLE_DEVICES not working in PyTorch code?

Even when strictly following the instructions above, you may sometimes run into a situation in which the CUDA_VISIBLE_DEVICES env variable does not work as expected. Say we have 4 GPUs installed on a machine and want to run our code on the 3rd GPU by setting CUDA_VISIBLE_DEVICES=2:

import os
...
...
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="2"
...
...

However, the code runs on the 1st GPU all the time. The strange thing is that everything works well when CUDA_DEVICE_ORDER and CUDA_VISIBLE_DEVICES are set ahead of launching the script, e.g.

CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_VISIBLE_DEVICES=2 python code.py

If this is your situation, check and make sure os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" and os.environ["CUDA_VISIBLE_DEVICES"] = "2" are set before you call torch.cuda.is_available(), torch.Tensor.cuda(), or any other CUDA-related PyTorch function.

Never call CUDA-related functions before CUDA_DEVICE_ORDER and CUDA_VISIBLE_DEVICES have been set.
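
A minimal sketch of the safe ordering (GPU index 2 is just the example from above):

import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"   # set env vars before anything touches CUDA
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

import torch
print(torch.cuda.device_count())    # 1 -- only physical GPU 2 is visible
x = torch.Tensor([1, 2, 3]).cuda()  # lands on physical GPU 2, reported as cuda:0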

Get one batch from DataLoader

We usually construct a data loader and then enumerate it to retrieve data one batch after another.

for step, item in enumerate(dataloader):
    pass  # consume the batch here

What if we want to get only one batch of data out of the data loader? DataLoader does not support indexing, so dataloader[0] fails to pull a batch. Instead, we can do the following:

dataloaderI = iter(dataloader)
item = next(dataloaderI)

That's it.
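
For completeness, here is a self-contained sketch; the TensorDataset and its shapes are made up purely for illustration:

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))
dataloader = DataLoader(dataset, batch_size=8, shuffle=True)

features, labels = next(iter(dataloader))  # pull exactly one batch
print(features.shape)  # torch.Size([8, 3])
print(labels.shape)    # torch.Size([8])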

What does PyTorch detach() do?
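
In short, detach() returns a new tensor that shares the same data but is cut out of the autograd graph, so gradients never flow back through it. A minimal sketch:

import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x * 2
z = y.detach()          # same values as y, but detached from the computation graph

print(z.requires_grad)  # False -- gradients will never flow back through z

y.sum().backward()      # y itself still participates in autograd
print(x.grad)           # tensor([2., 2., 2.])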

Note: DataLoader automatically converts data of type numpy.ndarray to torch.Tensor.

DistributedDataParallel vs DataParallel
