RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 2.44 GiB already allocated; 0 bytes free; 2.45 GiB reserved in total by PyTorch)

This problem appeared while debugging a handwritten-digit-recognition program. After porting the CPU training code to the GPU, the model could still train (see the previous post, "Mnist handwritten digit recognition: CPU training vs. GPU training"), but the run then failed with the error above. The solution process, after some research, is documented below.

一儒拂、調(diào)整前代碼&調(diào)整后代碼


1. Before

import torch
from torchvision import datasets, transforms
import torch.nn as nn
import torch.optim as optim
from datetime import datetime

# added: select the GPU when available, otherwise fall back to the CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# added
class Config:
    batch_size = 64
    epoch = 10
    momentum = 0.9
    alpha = 1e-3

    print_per_step = 100


class LeNet(nn.Module):

    def __init__(self):
        super(LeNet, self).__init__()
        # first conv block: 3x3 convolution
        self.conv1 = nn.Sequential(
            nn.Conv2d(1, 32, 3, 1, 2),  # in_channels=1, out_channels=32, kernel_size=3, stride=1, padding=2
            nn.ReLU(),
            nn.MaxPool2d(2, 2)
        )

        self.conv2 = nn.Sequential(
            nn.Conv2d(32, 64, 5),
            nn.ReLU(),
            nn.MaxPool2d(2, 2)  # 2x2 max pooling
        )

        self.fc1 = nn.Sequential(
            nn.Linear(64 * 5 * 5, 128),
            nn.BatchNorm1d(128),
            nn.ReLU()
        )

        self.fc2 = nn.Sequential(
            nn.Linear(128, 64),
            nn.BatchNorm1d(64),  # speeds up convergence (note: batch norm usually goes after the linear layer, before the activation)
            nn.ReLU()
        )

        self.fc3 = nn.Linear(64, 10)


    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size()[0], -1)
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.fc3(x)
        return x


class TrainProcess:

    def __init__(self):
        self.train, self.test = self.load_data()
        # modified: move the model onto the selected device
        self.net = LeNet().to(device)
        # modified
        self.criterion = nn.CrossEntropyLoss()  # loss function
        self.optimizer = optim.SGD(self.net.parameters(), lr=Config.alpha, momentum=Config.momentum)

    @staticmethod
    def load_data():
        print("Loading Data......")
        """加載MNIST數(shù)據(jù)集侣灶,本地?cái)?shù)據(jù)不存在會(huì)自動(dòng)下載"""
        train_data = datasets.MNIST(root='./data/',
                                    train=True,
                                    transform=transforms.ToTensor(),
                                    download=True)

        test_data = datasets.MNIST(root='./data/',
                                   train=False,
                                   transform=transforms.ToTensor())

        # wrap the datasets in iterators
        # shuffle: whether to reshuffle the samples each epoch
        train_loader = torch.utils.data.DataLoader(dataset=train_data,
                                                   batch_size=Config.batch_size,
                                                   shuffle=True)

        test_loader = torch.utils.data.DataLoader(dataset=test_data,
                                                  batch_size=Config.batch_size,
                                                  shuffle=False)
        return train_loader, test_loader

    def train_step(self):
        steps = 0
        start_time = datetime.now()

        print("Training & Evaluating......")
        for epoch in range(Config.epoch):
            print("Epoch {:3}".format(epoch + 1))

            for data, label in self.train:
                # modified: move the batch onto the same device as the model
                data, label = data.to(device), label.to(device)
                # modified
                self.optimizer.zero_grad()  # reset accumulated gradients
                outputs = self.net(data)  # forward pass
                loss = self.criterion(outputs, label)  # compute the loss
                loss.backward()  # backpropagation
                self.optimizer.step()  # one parameter-update step

                # print progress every print_per_step (100) steps
                if steps % Config.print_per_step == 0:
                    _, predicted = torch.max(outputs, 1)
                    correct = int(sum(predicted == label))
                    accuracy = correct / Config.batch_size  # batch accuracy (assumes a full batch)
                    end_time = datetime.now()
                    time_diff = (end_time - start_time).seconds
                    time_usage = '{:3}m{:3}s'.format(int(time_diff / 60), time_diff % 60)
                    msg = "Step {:5}, Loss:{:6.2f}, Accuracy:{:8.2%}, Time usage:{:9}."
                    print(msg.format(steps, loss, accuracy, time_usage))

                steps += 1

        test_loss = 0.
        test_correct = 0
        for data, label in self.test:
            # modified: move the test batch onto the device
            data, label = data.to(device), label.to(device)
            # modified
            outputs = self.net(data)
            loss = self.criterion(outputs, label)
            test_loss += loss * Config.batch_size  # NOTE: loss still carries its autograd graph here; accumulating it keeps every batch's graph alive on the GPU
            _, predicted = torch.max(outputs, 1)
            correct = int(sum(predicted == label))
            test_correct += correct

        accuracy = test_correct / len(self.test.dataset)
        loss = test_loss / len(self.test.dataset)
        print("Test Loss: {:5.2f}, Accuracy: {:6.2%}".format(loss, accuracy))

        end_time = datetime.now()
        time_diff = (end_time - start_time).seconds
        print("Time Usage: {:5.2f} mins.".format(time_diff / 60.))


if __name__ == "__main__":
    p = TrainProcess()
    p.train_step()

Error:

RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 2.44 GiB already allocated; 0 bytes free; 2.45 GiB reserved in total by PyTorch)

2. After

import torch
from torchvision import datasets, transforms
import torch.nn as nn
import torch.optim as optim
from datetime import datetime

# added: select the GPU when available, otherwise fall back to the CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# added

class Config:
    batch_size = 64
    epoch = 10
    momentum = 0.9
    alpha = 1e-3

    print_per_step = 100


class LeNet(nn.Module):

    def __init__(self):
        super(LeNet, self).__init__()
        # first conv block: 3x3 convolution
        self.conv1 = nn.Sequential(
            nn.Conv2d(1, 32, 3, 1, 2),  # in_channels=1, out_channels=32, kernel_size=3, stride=1, padding=2
            nn.ReLU(),
            nn.MaxPool2d(2, 2)
        )

        self.conv2 = nn.Sequential(
            nn.Conv2d(32, 64, 5),
            nn.ReLU(),
            nn.MaxPool2d(2, 2)  # 2x2 max pooling
        )

        self.fc1 = nn.Sequential(
            nn.Linear(64 * 5 * 5, 128),
            nn.BatchNorm1d(128),
            nn.ReLU()
        )

        self.fc2 = nn.Sequential(
            nn.Linear(128, 64),
            nn.BatchNorm1d(64),  # speeds up convergence (note: batch norm usually goes after the linear layer, before the activation)
            nn.ReLU()
        )

        self.fc3 = nn.Linear(64, 10)


    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size()[0], -1)
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.fc3(x)
        return x


class TrainProcess:

    def __init__(self):
        self.train, self.test = self.load_data()
        # modified: move the model onto the selected device
        self.net = LeNet().to(device)
        # modified
        self.criterion = nn.CrossEntropyLoss()  # loss function
        self.optimizer = optim.SGD(self.net.parameters(), lr=Config.alpha, momentum=Config.momentum)

    @staticmethod
    def load_data():
        print("Loading Data......")
        """加載MNIST數(shù)據(jù)集,本地?cái)?shù)據(jù)不存在會(huì)自動(dòng)下載"""
        train_data = datasets.MNIST(root='./data/',
                                    train=True,
                                    transform=transforms.ToTensor(),
                                    download=True)

        test_data = datasets.MNIST(root='./data/',
                                   train=False,
                                   transform=transforms.ToTensor())

        # wrap the datasets in iterators
        # shuffle: whether to reshuffle the samples each epoch
        train_loader = torch.utils.data.DataLoader(dataset=train_data,
                                                   batch_size=Config.batch_size,
                                                   shuffle=True)

        test_loader = torch.utils.data.DataLoader(dataset=test_data,
                                                  batch_size=Config.batch_size,
                                                  shuffle=False)
        return train_loader, test_loader

    def train_step(self):
        steps = 0
        start_time = datetime.now()

        print("Training & Evaluating......")
        for epoch in range(Config.epoch):
            print("Epoch {:3}".format(epoch + 1))

            for data, label in self.train:
                # modified: move the batch onto the same device as the model
                data, label = data.to(device), label.to(device)
                # modified
                self.optimizer.zero_grad()  # reset accumulated gradients
                outputs = self.net(data)  # forward pass
                loss = self.criterion(outputs, label)  # compute the loss
                loss.backward()  # backpropagation
                self.optimizer.step()  # one parameter-update step

                # print progress every print_per_step (100) steps
                if steps % Config.print_per_step == 0:
                    _, predicted = torch.max(outputs, 1)
                    correct = int(sum(predicted == label))
                    accuracy = correct / Config.batch_size  # batch accuracy (assumes a full batch)
                    end_time = datetime.now()
                    time_diff = (end_time - start_time).seconds
                    time_usage = '{:3}m{:3}s'.format(int(time_diff / 60), time_diff % 60)
                    msg = "Step {:5}, Loss:{:6.2f}, Accuracy:{:8.2%}, Time usage:{:9}."
                    print(msg.format(steps, loss, accuracy, time_usage))

                steps += 1

        test_loss = 0.
        test_correct = 0
        for data, label in self.test:
            # modified: evaluate without building an autograd graph
            with torch.no_grad():
                data, label = data.to(device), label.to(device)
                outputs = self.net(data)
                loss = self.criterion(outputs, label)
                test_loss += loss * Config.batch_size  # safe under no_grad: loss carries no graph
                _, predicted = torch.max(outputs, 1)
                correct = int(sum(predicted == label))
                test_correct += correct

        accuracy = test_correct / len(self.test.dataset)
        loss = test_loss / len(self.test.dataset)
        print("Test Loss: {:5.2f}, Accuracy: {:6.2%}".format(loss, accuracy))

        end_time = datetime.now()
        time_diff = (end_time - start_time).seconds
        print("Time Usage: {:5.2f} mins.".format(time_diff / 60.))


if __name__ == "__main__":
    print(device)  # confirm whether training runs on the GPU or the CPU
    p = TrainProcess()
    p.train_step()

Result: the program now prints the device in use and trains to completion.
With this change in place, the error no longer occurs.
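
Why the change works: without torch.no_grad(), every forward pass during evaluation builds an autograd graph, and the accumulation test_loss += loss * Config.batch_size keeps a tensor reference to all of those graphs, so GPU memory grows with every test batch until the allocator gives up. An equivalent fix, shown here as a minimal sketch rather than the code used above, is to accumulate plain Python floats with loss.item():

import torch

def evaluate(net, loader, criterion, device):
    """Evaluation loop that accumulates Python floats, so no graph reference survives a batch."""
    net.eval()  # put BatchNorm layers into inference mode
    total_loss, total_correct = 0.0, 0
    for data, label in loader:
        data, label = data.to(device), label.to(device)
        outputs = net(data)
        loss = criterion(outputs, label)
        total_loss += loss.item() * data.size(0)  # .item() copies the scalar to the CPU, dropping the graph
        total_correct += (outputs.argmax(dim=1) == label).sum().item()
    return total_loss / len(loader.dataset), total_correct / len(loader.dataset)

With this variant a graph is still built for each batch, but it is freed as soon as the batch ends, so memory no longer accumulates across the test set.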

二咏雌、解決方法


Method 1: Reduce batch_size

Most solutions found online suggest reducing batch_size; in my case, however, adjusting it did not resolve the problem. A sketch of the change follows.
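
For completeness, this only requires changing the Config class from the listings above; the usual approach is to halve the value until training fits (a sketch — the right value depends on the GPU):

class Config:
    batch_size = 32   # halved from 64; keep halving (16, 8, ...) until training fits in GPU memory
    epoch = 10
    momentum = 0.9
    alpha = 1e-3
    print_per_step = 100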

Method 2: Skip gradient computation

Wrap the evaluation code in with torch.no_grad():.
A related post by another blogger: pytorch運行錯誤:CUDA out of memory.
Note: Method 2 is what solved the problem in this article; see the sketch below.
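
torch.no_grad() can also be used as a function decorator, which is convenient when evaluation lives in its own function. A minimal sketch (predict is a hypothetical helper, not part of the code above):

import torch

@torch.no_grad()  # the whole function body runs without building autograd graphs
def predict(net, data, device):
    net.eval()  # switch BatchNorm layers to their running statistics
    return net(data.to(device)).argmax(dim=1)

Note that net.eval() and no_grad() are independent: the first changes layer behavior (BatchNorm, Dropout), the second only disables graph construction.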

Method 3: Free cached GPU memory

Add the following before the failing line to release memory that PyTorch has cached but is no longer using:

if hasattr(torch.cuda, 'empty_cache'):  # guard for very old PyTorch versions without this API
    torch.cuda.empty_cache()  # return unused cached blocks to the GPU driver
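
To check whether releasing the cache actually helps, the allocator's counters can be printed before and after. A minimal sketch (torch.cuda.memory_reserved was called memory_cached in very old PyTorch releases):

import torch

if torch.cuda.is_available():
    mib = 1024 ** 2
    print("allocated: {:.1f} MiB".format(torch.cuda.memory_allocated() / mib))  # memory held by live tensors
    print("reserved:  {:.1f} MiB".format(torch.cuda.memory_reserved() / mib))   # memory cached by the allocator
    torch.cuda.empty_cache()  # give unused cached blocks back to the driver
    print("reserved after empty_cache: {:.1f} MiB".format(torch.cuda.memory_reserved() / mib))

Keep in mind that empty_cache() cannot free tensors that are still referenced; if the OOM comes from live tensors, as in the "before" code here, it will not help on its own.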

Reference: 解決:RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB
