Installing PyTorch
Goal: know how to install PyTorch.
PyTorch is a deep learning framework released by Facebook. Its ease of use and friendly API have made it popular with a wide range of users.
Installation instructions: https://pytorch.org/get-started/locally/
Note: in code, the package is always imported as torch.
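The exact install command depends on your OS, package manager (pip or conda), and CUDA version, and is generated by the selector on that page; a typical pip install is simply:

pip3 install torch

After installing, a quick sanity check (the version string shown is only an example; yours will differ):

>>> import torch
>>> torch.__version__
'2.1.0'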
Know what a tensor is and what a tensor is in PyTorch
Know how to create tensors in PyTorch
Know the important attributes of a PyTorch tensor
Know how to modify a PyTorch tensor
Know CUDA tensors in PyTorch
Master the common math operations on PyTorch tensors
"Tensor" is an umbrella term that covers several kinds of objects:
0th-order tensor: scalar (constant), 0-D Tensor
1st-order tensor: vector, 1-D Tensor
2nd-order tensor: matrix, 2-D Tensor
3rd-order tensor
...
Nth-order tensor
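A quick illustration of these orders (a minimal sketch; tensor.dim(), covered below, returns the order):

In [1]: torch.tensor(3.14).dim()        # 0-D: scalar
Out[1]: 0
In [2]: torch.tensor([1., 2.]).dim()    # 1-D: vector
Out[2]: 1
In [3]: torch.ones(2, 3).dim()          # 2-D: matrix
Out[3]: 2
In [4]: torch.ones(2, 3, 4).dim()       # 3-D
Out[4]: 3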
Creating tensors from existing data
From a Python list:
torch.tensor([[1., -1.], [1., -1.]])
tensor([[ 1.0000, -1.0000],
        [ 1.0000, -1.0000]])
From a numpy array (assumes import numpy as np):
torch.tensor(np.array([[1, 2, 3], [4, 5, 6]]))
tensor([[1, 2, 3],
        [4, 5, 6]])
Creating tensors with fixed values
torch.ones([3, 4]): a 3x4 tensor filled with ones
torch.zeros([3, 4]): a 3x4 tensor filled with zeros
torch.ones_like(tensor) / torch.zeros_like(tensor): a tensor of ones/zeros with the same shape and dtype as tensor
torch.empty(3, 4): a 3x4 uninitialized tensor; its memory is left as-is, so it holds arbitrary values (fill it manually with the in-place method tensor.fill_); see the example after this list
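For instance (the contents of torch.empty vary from run to run, since the memory is uninitialized):

In [5]: torch.zeros([3, 4])
Out[5]:
tensor([[0., 0., 0., 0.],
        [0., 0., 0., 0.],
        [0., 0., 0., 0.]])
In [6]: x = torch.empty(3, 4)
In [7]: x.fill_(5)    # in-place fill; torch.full([3, 4], 5.) builds such a tensor directly
Out[7]:
tensor([[5., 5., 5., 5.],
        [5., 5., 5., 5.],
        [5., 5., 5., 5.]])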
Creating sequence tensors over a range
torch.arange(start, end, step): values from start up to (but not including) end, with step size step
torch.linspace(start, end, number_steps): number_steps evenly spaced values from start to end
torch.logspace(start, end, number_steps, base=10): number_steps values spaced evenly on a log scale between $base^{start}$ and $base^{end}$; an example of all three follows
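A minimal sketch of the three:

In [8]: torch.arange(0, 10, 2)
Out[8]: tensor([0, 2, 4, 6, 8])
In [9]: torch.linspace(0, 10, 5)
Out[9]: tensor([ 0.0000,  2.5000,  5.0000,  7.5000, 10.0000])
In [10]: torch.logspace(0, 2, 3)    # from 10^0 to 10^2
Out[10]: tensor([  1.,  10., 100.])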
Creating random tensors
torch.rand([3, 4]): a 3x4 tensor of random values drawn uniformly from [0, 1)
>>> torch.rand(2, 3)
tensor([[0.8237, 0.5781, 0.6879],
        [0.3816, 0.7249, 0.0998]])
torch.randint(low=0, high=10, size=[3, 4]): a 3x4 tensor of random integers drawn from [low, high)
>>> torch.randint(3, 10, (2, 2))
tensor([[4, 5],
        [6, 7]])
torch.randn([3, 4]): a 3x4 tensor of random values drawn from the standard normal distribution (mean 0, variance 1); see the example below
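For example (random output, so your values will differ):

>>> torch.randn(2, 3)
tensor([[ 0.2673, -1.1220,  0.5930],
        [-0.8416,  0.1533,  1.0257]])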
Getting data out of a tensor
tensor.item(): returns the value as a plain Python number when the tensor holds exactly one element
In [10]: a = torch.tensor(np.arange(1))
In [11]: a
Out[11]: tensor([0])
In [12]: a.item()
Out[12]: 0
Converting to a numpy array with tensor.numpy() (z below is a 3x1 float32 tensor from an earlier computation):
In [55]: z.numpy()
Out[55]:
array([[-2.5871205],
       [ 7.3690367],
       [-2.4918075]], dtype=float32)
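The reverse direction is torch.from_numpy(ndarray). Note that for CPU tensors both conversions share memory with the numpy array rather than copying it, so an in-place change on one side shows up on the other:

In [56]: n = np.zeros(3)
In [57]: t = torch.from_numpy(n)    # t shares n's memory
In [58]: t.add_(1)                  # in-place add on the tensor...
Out[58]: tensor([1., 1., 1.], dtype=torch.float64)
In [59]: n                          # ...is visible in the numpy array
Out[59]: array([1., 1., 1.])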
Getting the shape: tensor.size() or tensor.shape
In [72]: x
Out[72]:
tensor([[ 1,  2],
        [ 3,  4],
        [ 5, 10]], dtype=torch.int32)
In [73]: x.size()
Out[73]: torch.Size([3, 2])
Getting the data type: tensor.dtype
In [80]: x.dtype
Out[80]: torch.int32
Getting the order (number of dimensions): tensor.dim()
In [77]: x.dim()
Out[77]: 2
Changing the shape:
tensor.view((3, 4)): similar to numpy's reshape
In [76]: x.view(2, 3)
Out[76]:
tensor([[ 1,  2,  3],
        [ 4,  5, 10]], dtype=torch.int32)
tensor.t() or tensor.transpose(dim0, dim1): transpose
In [79]: x.t()
Out[79]:
tensor([[ 1,  3,  5],
        [ 2,  4, 10]], dtype=torch.int32)
transpose also works on higher-order tensors; here b2 = b1.transpose(1, 2) swaps dimensions 1 and 2 of the [4, 2, 3] tensor b1:
In [61]: b1
Out[61]:
tensor([[[1., 2., 3.],
         [4., 5., 6.]],

        [[2., 2., 3.],
         [4., 5., 6.]],

        [[3., 2., 3.],
         [4., 5., 6.]],

        [[4., 2., 3.],
         [4., 5., 6.]]])
In [62]: b1.size()
Out[62]: torch.Size([4, 2, 3])
In [63]: b2 = b1.transpose(1, 2)
In [65]: b2.size()
Out[65]: torch.Size([4, 3, 2])
tensor.unsqueeze(dim) / tensor.squeeze(): add or remove dimensions of length 1
# tensor.squeeze() removes all dimensions of length 1 by default;
# you can also pass a dimension index to remove one specific dimension
In [82]: a
Out[82]:
tensor([[[1],
         [2],
         [3]]])
In [83]: a.size()
Out[83]: torch.Size([1, 3, 1])
In [84]: a.squeeze()
Out[84]: tensor([1, 2, 3])
In [85]: a.squeeze(0)
Out[85]:
tensor([[1],
        [2],
        [3]])
In [86]: a.squeeze(2)
Out[86]: tensor([[1, 2, 3]])
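unsqueeze goes the other way, inserting a new length-1 dimension at the given index:

In [87]: b = torch.tensor([1, 2, 3])
In [88]: b.unsqueeze(0).size()
Out[88]: torch.Size([1, 3])
In [89]: b.unsqueeze(1).size()
Out[89]: torch.Size([3, 1])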
Specifying or changing the data type
Specify the dtype when creating the tensor:
In [88]: torch.ones([2, 3], dtype=torch.float32)
Out[88]:
tensor([[1., 1., 1.],
        [1., 1., 1.]])
Change the dtype of an existing tensor (both calls return a new tensor; a itself is unchanged):
In [17]: a
Out[17]: tensor([1, 2], dtype=torch.int32)
In [18]: a.type(torch.float)
Out[18]: tensor([1., 2.])
In [19]: a.double()
Out[19]: tensor([1., 2.], dtype=torch.float64)
Tensor slicing
In [101]: x
Out[101]:
tensor([[1.6437, 1.9439, 1.5393],
        [1.3491, 1.9575, 1.0552],
        [1.5106, 1.0123, 1.0961],
        [1.4382, 1.5939, 1.5012],
        [1.5267, 1.4858, 1.4007]])
In [102]: x[:, 1]
Out[102]: tensor([1.9439, 1.9575, 1.0123, 1.5939, 1.4858])
Assigning to a slice
In [12]: x[:, 1]
Out[12]: tensor([1.9439, 1.9575, 1.0123, 1.5939, 1.4858])
In [13]: x[:, 1] = 1
In [14]: x[:, 1]
Out[14]: tensor([1., 1., 1., 1., 1.])
Note: a slice is a view, and its data is generally not contiguous in memory
In [87]: a = torch.randn(2, 3, 4)
In [88]: a
Out[88]:
tensor([[[ 0.6204,  0.9294,  0.6449, -2.0183],
         [-1.1809,  0.4071, -1.0827,  1.7154],
         [ 0.0431,  0.6646,  2.0386,  0.0777]],

        [[ 0.0052, -0.1531, -0.7470, -0.8283],
         [-0.1547,  0.3123, -0.6279, -0.0132],
         [-0.0527, -1.2305,  0.7089, -0.4231]]])
In [89]: a[:, :1, :2]
Out[89]:
tensor([[[ 0.6204,  0.9294]],

        [[ 0.0052, -0.1531]]])
In [90]: a[:, :1, :2].view(1, 4)
# raises RuntimeError: view requires the tensor's memory to be contiguous
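Copying the slice into contiguous storage first, or calling reshape (which makes a copy when needed), avoids the error:

In [91]: a[:, :1, :2].contiguous().view(1, 4)
Out[91]: tensor([[ 0.6204,  0.9294,  0.0052, -0.1531]])
In [92]: a[:, :1, :2].reshape(1, 4)
Out[92]: tensor([[ 0.6204,  0.9294,  0.0052, -0.1531]])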
What is CUDA?
CUDA (Compute Unified Device Architecture) is a general-purpose parallel computing platform released by NVIDIA. It lets GPUs solve complex general computational problems; without CUDA, a GPU (graphics card) could only be used for graphics rendering.
What does it take for PyTorch to use CUDA (i.e., to run deep learning computations on the GPU)?
1. The machine has an NVIDIA GPU.
2. A matching GPU driver is installed.
3. A CUDA toolkit compatible with that GPU is installed.
4. The GPU build of PyTorch is installed in the Python environment.
How do you check whether the current PyTorch installation can use CUDA?
The torch.cuda module adds support for CUDA tensors, letting you operate on tensors with the same methods on both CPU and GPU.
torch.cuda.is_available()
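For example (sample output from a machine with one GPU; on a CPU-only install, is_available() returns False and the other calls are not meaningful):

>>> torch.cuda.is_available()
True
>>> torch.cuda.device_count()       # number of visible GPUs
1
>>> torch.cuda.get_device_name(0)   # name of GPU 0
'NVIDIA GeForce RTX 3090'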
How to convert a CPU tensor into a CUDA tensor
The .to method moves a tensor to another device (for example, from CPU to GPU):
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
y = torch.ones_like(x, device=device)  # create a tensor directly on the GPU
x = x.to(device)                       # move x onto the GPU
z = x + y
print(z)
print(z.to("cpu", torch.double))       # .to can also change the dtype at the same time
>> tensor([1.9806], device='cuda:0')
>> tensor([1.9806], dtype=torch.float64)
Common math operations on tensors
tensor.add, tensor.sub, tensor.abs, tensor.mm (element-wise add, subtract, absolute value, and matrix multiplication)
In [204]: a = torch.tensor([1, 2, 3])
In [205]: b = torch.tensor(1)
In [206]: a.add(b)
Out[206]: tensor([2, 3, 4])
In [207]: a.sub(b)
Out[207]: tensor([0, 1, 2])
In [212]: c = torch.randn((3,))
In [213]: c
Out[213]: tensor([ 0.5161, -0.1732,  1.0162])
In [214]: c.abs()
Out[214]: tensor([0.5161, 0.1732, 1.0162])
In [215]: c
Out[215]: tensor([ 0.5161, -0.1732,  1.0162])
In [254]: a = torch.randn([3, 4])
In [255]: b = torch.randn([4, 5])
In [256]: a.mm(b)    # matrix multiplication: (3,4) x (4,5) -> (3,5)
Out[256]:
tensor([[ 0.6888,  0.4304, -0.5489,  0.3615, -1.1690],
        [ 1.0890, -1.0391, -0.3717, -0.4045,  3.4404],
        [ 0.9885,  0.1720, -0.2117, -0.1694, -0.5460]])
Note: element-wise math between tensors follows the same broadcasting rules as numpy.
In [145]: a = torch.tensor([[1, 2], [3, 4]])
In [146]: b = torch.tensor([1, 2])
In [147]: a + b
Out[147]:
tensor([[2, 4],
        [4, 6]])
In [148]: c = torch.tensor([[1], [2]])
In [149]: a + c
Out[149]:
tensor([[2, 3],
        [5, 6]])
Simple element-wise functions: torch.exp, torch.sin, torch.cos
In [109]: torch.exp(torch.tensor([0, np.log(2)]))
Out[109]: tensor([1., 2.])
In [110]: torch.tensor([0, np.log(2)]).exp()
Out[110]: tensor([1., 2.])
In [111]: torch.sin(torch.tensor([0, np.pi]))
Out[111]: tensor([ 0.0000e+00, -8.7423e-08])
In [112]: torch.cos(torch.tensor([0, np.pi]))
Out[112]: tensor([ 1., -1.])
(sin(pi) comes out as -8.7423e-08 rather than exactly 0 because np.pi is only a floating-point approximation of pi.)
In-place operations: tensor.add_, tensor.sub_, tensor.abs_ (the trailing underscore marks a method that modifies the tensor itself)
In [224]: a
Out[224]: tensor([1, 2, 3])
In [225]: b
Out[225]: tensor(1)
In [226]: a.add(b)     # out-of-place: returns a new tensor...
Out[226]: tensor([2, 3, 4])
In [227]: a            # ...and a is unchanged
Out[227]: tensor([1, 2, 3])
In [228]: a.add_(b)    # in-place: modifies a
Out[228]: tensor([2, 3, 4])
In [229]: a
Out[229]: tensor([2, 3, 4])
In [236]: c.abs()
Out[236]: tensor([0.5161, 0.1732, 1.0162])
In [237]: c
Out[237]: tensor([ 0.5161, -0.1732,  1.0162])
In [238]: c.abs_()
Out[238]: tensor([0.5161, 0.1732, 1.0162])
In [239]: c
Out[239]: tensor([0.5161, 0.1732, 1.0162])
In [240]: c.zero_()
Out[240]: tensor([0., 0., 0.])
In [241]: c
Out[241]: tensor([0., 0., 0.])
Statistical operations: tensor.max, tensor.min, tensor.mean, tensor.median, tensor.argmax
In [242]: a
Out[242]: tensor([ 0.5161, -0.1732,  1.0162])
In [243]: a.max()
Out[243]: tensor(1.0162)
In [246]: a
Out[246]:
tensor([[ 0.3337, -0.5011, -1.4319, -0.6633],
        [ 0.6620,  1.3154, -0.9129,  0.4685],
        [ 0.3203, -1.6496,  1.1967, -0.3174]])
In [247]: a.max()
Out[247]: tensor(1.3154)
In [248]: a.max(dim=0)      # per-column max: values and their row indices
Out[248]:
torch.return_types.max(
values=tensor([0.6620, 1.3154, 1.1967, 0.4685]),
indices=tensor([1, 1, 2, 1]))
In [249]: a.max(dim=0)[0]
Out[249]: tensor([0.6620, 1.3154, 1.1967, 0.4685])
In [250]: a.max(dim=0)[1]
Out[250]: tensor([1, 1, 2, 1])
In [251]: a.argmax()        # index into the flattened tensor
Out[251]: tensor(5)
In [252]: a.argmax(dim=0)
Out[252]: tensor([1, 1, 2, 1])
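min, mean, and median work the same way (shown here on the 3x4 tensor a above; like max, they also accept a dim argument):

In [253]: a.min()
Out[253]: tensor(-1.6496)
In [254]: a.mean()
Out[254]: tensor(-0.0983)
In [255]: a.median()
Out[255]: tensor(-0.3174)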
As the examples above show, tensor operations in torch work almost exactly like their numpy counterparts.
For more tensor operations, see https://pytorch.org/docs/stable/tensors.html