Professional CUDA C Programming
Code download: http://www.wrox.com/WileyCDA/WroxTitle/Professional-CUDA-C-Programming.productCd-1118739329,descCd-DOWNLOAD.html
Memory Management
Memory management in CUDA programming is similar to that in C, with the added responsibility that the programmer explicitly manages memory and the movement of data between host and device:
- Allocating and freeing device memory
- Transferring data between the host and the device
Memory Allocation and Deallocation
// Allocate device memory:
cudaError_t cudaMalloc(void **devPtr, size_t count);
// Initialize device memory:
cudaError_t cudaMemset(void *devPtr, int value, size_t count);
// Free device memory:
cudaError_t cudaFree(void *devPtr);
Device memory allocation is a very expensive operation, so device memory should be reused whenever possible rather than freed and reallocated.
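Since the allocation cost dominates when it is paid repeatedly, a common pattern is to allocate a buffer once and reuse it across iterations. Below is a minimal sketch of that pattern; the step kernel and the iteration count are hypothetical placeholders, not code from the book:

#include <cuda_runtime.h>

// Hypothetical kernel standing in for any repeated computation.
__global__ void step(float *buf, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] += 1.0f;
}

int main(void)
{
    const int n = 1 << 20;
    float *d_buf;

    // Allocate and initialize once, outside the loop...
    cudaMalloc((void **)&d_buf, n * sizeof(float));
    cudaMemset(d_buf, 0, n * sizeof(float));

    // ...and reuse the same buffer every iteration instead of calling
    // cudaMalloc/cudaFree inside the loop.
    for (int iter = 0; iter < 100; iter++)
        step<<<(n + 255) / 256, 256>>>(d_buf, n);

    cudaDeviceSynchronize();
    cudaFree(d_buf);   // free once at the end
    return 0;
}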
Memory Transfer
cudaError_t cudaMemcpy(void *dst, const void *src, size_t count,
                       enum cudaMemcpyKind kind);
// In most cases cudaMemcpy is synchronous: the host blocks until the copy completes.
Example:
#include "../common/common.h"
#include <cuda_runtime.h>
#include <stdio.h>
/*
 * An example of using CUDA's memory copy API to transfer data to and from the
 * device. In this case, cudaMalloc is used to allocate memory on the GPU and
 * cudaMemcpy is used to transfer the contents of host memory to an array
 * allocated using cudaMalloc.
 */
int main(int argc, char **argv)
{
    // set up device
    int dev = 0;
    CHECK(cudaSetDevice(dev));

    // memory size
    unsigned int isize = 1 << 22;
    unsigned int nbytes = isize * sizeof(float);

    // get device information
    cudaDeviceProp deviceProp;
    CHECK(cudaGetDeviceProperties(&deviceProp, dev));
    printf("%s starting at ", argv[0]);
    printf("device %d: %s memory size %d nbyte %5.2fMB\n", dev,
           deviceProp.name, isize, nbytes / (1024.0f * 1024.0f));

    // allocate the host memory
    float *h_a = (float *)malloc(nbytes);

    // allocate the device memory
    float *d_a;
    CHECK(cudaMalloc((float **)&d_a, nbytes));

    // initialize the host memory
    for (unsigned int i = 0; i < isize; i++) h_a[i] = 0.5f;

    // transfer data from the host to the device
    CHECK(cudaMemcpy(d_a, h_a, nbytes, cudaMemcpyHostToDevice));

    // transfer data from the device to the host
    CHECK(cudaMemcpy(h_a, d_a, nbytes, cudaMemcpyDeviceToHost));

    // free memory
    CHECK(cudaFree(d_a));
    free(h_a);

    // reset device
    CHECK(cudaDeviceReset());
    return EXIT_SUCCESS;
}
Compile and run:
hym@hym-ThinkPad-Edge-E440:~/CodeSamples/chapter04$ nvcc -O3 memTransfer.cu -o memTransfer
hym@hym-ThinkPad-Edge-E440:~/CodeSamples/chapter04$ nvprof ./memTransfer
==8038== NVPROF is profiling process 8038, command: ./memTransfer
./memTransfer starting at device 0: GeForce GT 740M memory size 4194304 nbyte 16.00MB
==8038== Profiling application: ./memTransfer
==8038== Profiling result:
Time(%) Time Calls Avg Min Max Name
50.53% 2.7607ms 1 2.7607ms 2.7607ms 2.7607ms [CUDA memcpy HtoD]
49.47% 2.7030ms 1 2.7030ms 2.7030ms 2.7030ms [CUDA memcpy DtoH]
Transfers between the CPU and the GPU are comparatively slow (on the order of 8 GB/s over PCIe), while transfers between the GPU and its on-board memory are much faster. When programming, therefore, always look for ways to reduce data movement between CPU and GPU.
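You can estimate the effective transfer bandwidth on your own hardware by timing a copy with CUDA events. A rough sketch follows; the 64 MB buffer size is an arbitrary choice for this illustration:

#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t nbytes = 64 * 1024 * 1024;   // 64 MB test buffer
    float *h_a = (float *)malloc(nbytes);
    float *d_a;
    cudaMalloc((void **)&d_a, nbytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Time a single host-to-device copy.
    cudaEventRecord(start);
    cudaMemcpy(d_a, h_a, nbytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("HtoD bandwidth: %.2f GB/s\n",
           (nbytes / (1024.0 * 1024.0 * 1024.0)) / (ms / 1000.0));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_a);
    free(h_a);
    return 0;
}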
Pinned Memory
Why is a virtual memory address space needed?
Suppose a process needs 4 MB of memory but only 1 MB of physical memory is available. If the process used physical addresses directly, it could not run at all for lack of memory. With virtual addressing, the process can page in just the data it needs at any given moment and still execute.
Host memory is managed in pages, with a mapping from virtual to physical memory. To copy a host variable to the device, the driver would need the variable's physical address, but the host's virtual-to-physical mapping changes over time (the OS may move or swap pages out), so the device cannot safely access pageable host memory. For this reason, when data in pageable host memory is sent to the device, the CUDA driver first allocates a temporary page-locked (pinned) host buffer, copies the host data into it, and the GPU then reads the data from that pinned memory.
We can also allocate pinned memory explicitly, as follows:
cudaError_t cudaMallocHost(void **devPtr, size_t count);
Because pinned memory can be accessed directly by the device (this does not mean PCIe is bypassed; rather, the extra staging copy from pageable to pinned memory is eliminated), it offers considerably higher transfer bandwidth than pageable memory. The trade-off is that pinning pages reduces the amount of pageable memory available to the system, which can degrade overall virtual-memory performance.
cudaError_t status = cudaMallocHost((void **)&h_aPinned, bytes);

if (status != cudaSuccess)
{
    fprintf(stderr, "Error returned from pinned host memory allocation\n");
    exit(1);
}

// Free pinned memory:
cudaError_t cudaFreeHost(void *ptr);
Example:
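The original post shows only the build command and profiler output for pinMemTransfer.cu, not its source. A minimal sketch of what such a program looks like, mirroring memTransfer.cu above but with pinned host memory (details may differ from the book's actual listing):

#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    unsigned int isize  = 1 << 22;
    unsigned int nbytes = isize * sizeof(float);

    // Allocate pinned (page-locked) host memory instead of malloc.
    float *h_a;
    cudaMallocHost((void **)&h_a, nbytes);

    float *d_a;
    cudaMalloc((void **)&d_a, nbytes);

    for (unsigned int i = 0; i < isize; i++) h_a[i] = 0.5f;

    // The DMA engine reads pinned memory directly, so no intermediate
    // staging copy is needed.
    cudaMemcpy(d_a, h_a, nbytes, cudaMemcpyHostToDevice);
    cudaMemcpy(h_a, d_a, nbytes, cudaMemcpyDeviceToHost);

    cudaFree(d_a);
    cudaFreeHost(h_a);   // pinned memory must be freed with cudaFreeHost
    cudaDeviceReset();
    return 0;
}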
hym@hym-ThinkPad-Edge-E440:~/CodeSamples/chapter04$ nvcc -O3 pinMemTransfer.cu -o pinMemTransfer
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
hym@hym-ThinkPad-Edge-E440:~/CodeSamples/chapter04$ nvprof ./pinMemTransfer
==9488== NVPROF is profiling process 9488, command: ./pinMemTransfer
./pinMemTransfer starting at device 0: GeForce GT 740M memory size 4194304 nbyte 16.00MB canMap 1
==9488== Profiling application: ./pinMemTransfer
==9488== Profiling result:
Time(%) Time Calls Avg Min Max Name
50.71% 2.5983ms 1 2.5983ms 2.5983ms 2.5983ms [CUDA memcpy HtoD]
49.29% 2.5255ms 1 2.5255ms 2.5255ms 2.5255ms [CUDA memcpy DtoH]
Pinned memory is more expensive to allocate and free than pageable memory, but it performs well for large transfers; how much it helps also depends on the device's compute capability (CC). Consolidating many small transfers into one large transfer, combined with pinned memory, greatly reduces transfer overhead. On some GPUs, data transfers can also be overlapped with kernel execution.
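To illustrate the consolidation advice, the sketch below packs several small host arrays into one pinned staging buffer and issues a single large copy; the array count and sizes here are made up for the example:

#include <cuda_runtime.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const int    narrays = 8;
    const size_t each    = 256 * 1024;          // 256 KB per small array
    const size_t total   = narrays * each;

    // One pinned staging buffer holds all the small arrays.
    char *h_staging;
    cudaMallocHost((void **)&h_staging, total);

    // srcs[] stands in for the application's small host arrays.
    char *srcs[narrays];
    for (int i = 0; i < narrays; i++)
    {
        srcs[i] = (char *)malloc(each);
        memset(srcs[i], i, each);
        memcpy(h_staging + i * each, srcs[i], each);   // pack
    }

    // One large transfer instead of narrays small ones.
    char *d_buf;
    cudaMalloc((void **)&d_buf, total);
    cudaMemcpy(d_buf, h_staging, total, cudaMemcpyHostToDevice);

    cudaFree(d_buf);
    cudaFreeHost(h_staging);
    for (int i = 0; i < narrays; i++) free(srcs[i]);
    return 0;
}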
Zero-Copy Memory
Normally the host cannot directly access device variables, and the device cannot directly access host variables. Zero-copy memory is the exception: both the host and the device can access it.
Zero-copy memory offers the following advantages:
- Leveraging host memory when device memory is insufficient
- Avoiding explicit data transfers between the host and device
- Improving PCIe transfer rates
One caveat: memory accesses must be synchronized between the host and the device. If both modify the same zero-copy data at the same time, the results are unpredictable.
Zero-copy memory is, in essence, pinned memory that has been mapped into the device's address space.
cudaError_t cudaHostAlloc(void **pHost, size_t count, unsigned int flags);
When cudaHostAllocDefault is passed, cudaHostAlloc is equivalent to cudaMallocHost. cudaHostAllocPortable makes the pinned allocation valid in all CUDA contexts, not just the context (or thread) that performed the allocation. cudaHostAllocWriteCombined is for special system configurations: write-combined memory transfers faster over PCIe, but the host reads it very inefficiently, so this flag is typically used for buffers that the host writes and the device reads. The most commonly used flag is cudaHostAllocMapped, which returns standard zero-copy memory. The device-side address can then be obtained with the following API:
cudaError_t cudaHostGetDevicePointer(void **pDevice, void *pHost, unsigned int flags);
// Note: flags must currently be set to 0
Using zero-copy memory in place of device memory for data that is read or written frequently is unwise. The root cause is still the low transfer bandwidth between CPU and GPU; under frequent access, zero-copy can perform considerably worse than global memory.
The following code shows how zero-copy performs, comparing a vector addition using device memory against one using zero-copy memory:
#include "../common/common.h"
#include <cuda_runtime.h>
#include <stdio.h>
/*
 * This example demonstrates the use of zero-copy memory to remove the need to
 * explicitly issue a memcpy operation between the host and device. By mapping
 * host, page-locked memory into the device's address space, the address can
 * directly reference a host array and transfer its contents over the PCIe bus.
 *
 * This example compares performing a vector addition with and without
 * zero-copy memory.
 */
void checkResult(float *hostRef, float *gpuRef, const int N)
{
    double epsilon = 1.0E-8;

    for (int i = 0; i < N; i++)
    {
        // use fabs: abs() would truncate the double difference to an int
        if (fabs(hostRef[i] - gpuRef[i]) > epsilon)
        {
            printf("Arrays do not match!\n");
            printf("host %5.2f gpu %5.2f at current %d\n", hostRef[i],
                   gpuRef[i], i);
            break;
        }
    }

    return;
}
void initialData(float *ip, int size)
{
    for (int i = 0; i < size; i++)
    {
        ip[i] = (float)(rand() & 0xFF) / 10.0f;
    }

    return;
}

void sumArraysOnHost(float *A, float *B, float *C, const int N)
{
    for (int idx = 0; idx < N; idx++)
    {
        C[idx] = A[idx] + B[idx];
    }
}

__global__ void sumArrays(float *A, float *B, float *C, const int N)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    if (i < N) C[i] = A[i] + B[i];
}

__global__ void sumArraysZeroCopy(float *A, float *B, float *C, const int N)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    if (i < N) C[i] = A[i] + B[i];
}
int main(int argc, char **argv)
{
    // set up device
    int dev = 0;
    CHECK(cudaSetDevice(dev));

    // get device properties
    cudaDeviceProp deviceProp;
    CHECK(cudaGetDeviceProperties(&deviceProp, dev));

    // check if support mapped memory
    if (!deviceProp.canMapHostMemory)
    {
        printf("Device %d does not support mapping CPU host memory!\n", dev);
        CHECK(cudaDeviceReset());
        exit(EXIT_SUCCESS);
    }

    printf("Using Device %d: %s ", dev, deviceProp.name);

    // set up data size of vectors
    int ipower = 10;

    if (argc > 1) ipower = atoi(argv[1]);

    int nElem = 1 << ipower;
    size_t nBytes = nElem * sizeof(float);

    if (ipower < 18)
    {
        printf("Vector size %d power %d nbytes %3.0f KB\n", nElem, ipower,
               (float)nBytes / (1024.0f));
    }
    else
    {
        printf("Vector size %d power %d nbytes %3.0f MB\n", nElem, ipower,
               (float)nBytes / (1024.0f * 1024.0f));
    }

    // part 1: using device memory
    // malloc host memory
    float *h_A, *h_B, *hostRef, *gpuRef;
    h_A     = (float *)malloc(nBytes);
    h_B     = (float *)malloc(nBytes);
    hostRef = (float *)malloc(nBytes);
    gpuRef  = (float *)malloc(nBytes);

    // initialize data at host side
    initialData(h_A, nElem);
    initialData(h_B, nElem);
    memset(hostRef, 0, nBytes);
    memset(gpuRef,  0, nBytes);

    // add vector at host side for result checks
    sumArraysOnHost(h_A, h_B, hostRef, nElem);

    // malloc device global memory
    float *d_A, *d_B, *d_C;
    CHECK(cudaMalloc((float **)&d_A, nBytes));
    CHECK(cudaMalloc((float **)&d_B, nBytes));
    CHECK(cudaMalloc((float **)&d_C, nBytes));

    // transfer data from host to device
    CHECK(cudaMemcpy(d_A, h_A, nBytes, cudaMemcpyHostToDevice));
    CHECK(cudaMemcpy(d_B, h_B, nBytes, cudaMemcpyHostToDevice));

    // set up execution configuration
    int iLen = 512;
    dim3 block (iLen);
    dim3 grid  ((nElem + block.x - 1) / block.x);

    sumArrays<<<grid, block>>>(d_A, d_B, d_C, nElem);

    // copy kernel result back to host side
    CHECK(cudaMemcpy(gpuRef, d_C, nBytes, cudaMemcpyDeviceToHost));

    // check device results
    checkResult(hostRef, gpuRef, nElem);

    // free device global memory
    CHECK(cudaFree(d_A));
    CHECK(cudaFree(d_B));

    // free host memory
    free(h_A);
    free(h_B);

    // part 2: using zerocopy memory for array A and B
    // allocate zerocpy memory
    CHECK(cudaHostAlloc((void **)&h_A, nBytes, cudaHostAllocMapped));
    CHECK(cudaHostAlloc((void **)&h_B, nBytes, cudaHostAllocMapped));

    // initialize data at host side
    initialData(h_A, nElem);
    initialData(h_B, nElem);
    memset(hostRef, 0, nBytes);
    memset(gpuRef,  0, nBytes);

    // pass the pointer to device
    CHECK(cudaHostGetDevicePointer((void **)&d_A, (void *)h_A, 0));
    CHECK(cudaHostGetDevicePointer((void **)&d_B, (void *)h_B, 0));

    // add at host side for result checks
    sumArraysOnHost(h_A, h_B, hostRef, nElem);

    // execute kernel with zero copy memory
    sumArraysZeroCopy<<<grid, block>>>(d_A, d_B, d_C, nElem);

    // copy kernel result back to host side
    CHECK(cudaMemcpy(gpuRef, d_C, nBytes, cudaMemcpyDeviceToHost));

    // check device results
    checkResult(hostRef, gpuRef, nElem);

    // free memory
    CHECK(cudaFree(d_C));
    CHECK(cudaFreeHost(h_A));
    CHECK(cudaFreeHost(h_B));
    free(hostRef);
    free(gpuRef);

    // reset device
    CHECK(cudaDeviceReset());
    return EXIT_SUCCESS;
}
Compile and run:
hym@hym-ThinkPad-Edge-E440:~/CodeSamples/chapter04$ nvcc -O3 sumArrayZerocpy.cu -o sumZerocpy
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
hym@hym-ThinkPad-Edge-E440:~/CodeSamples/chapter04$ nvprof ./sumZerocpy
==11871== NVPROF is profiling process 11871, command: ./sumZerocpy
Using Device 0: GeForce GT 740M Vector size 1024 power 10 nbytes 4 KB
==11871== Profiling application: ./sumZerocpy
==11871== Profiling result:
Time(%) Time Calls Avg Min Max Name
36.62% 5.3440us 2 2.6720us 2.6560us 2.6880us [CUDA memcpy DtoH]
32.46% 4.7360us 1 4.7360us 4.7360us 4.7360us sumArraysZeroCopy(float*, float*, float*, int)
17.76% 2.5920us 2 1.2960us 1.2800us 1.3120us [CUDA memcpy HtoD]
13.16% 1.9200us 1 1.9200us 1.9200us 1.9200us sumArrays(float*, float*, float*, int)
The vector size can be passed as a power of two on the command line:
$ ./sumZerocpy <size-log-2>
Therefore, for a small amount of memory shared between host and device, zero-copy is a good choice: it simplifies programming while offering reasonable performance.
Heterogeneous architectures come in two kinds: integrated and discrete. In an integrated system, the CPU and GPU are on the same die and share physical memory; zero-copy memory is well suited here. In a discrete system, the CPU and GPU are on separate chips connected by the PCIe bus, and only specific scenarios suit zero-copy. Either way, do not overuse zero-copy, because reads of zero-copy memory by device threads are very slow.
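Whether a device is integrated can be queried at runtime, so this choice can be made programmatically. A small sketch based on the integrated and canMapHostMemory fields of cudaDeviceProp:

#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    // integrated != 0 means the GPU shares physical memory with the CPU,
    // which is the case where zero-copy memory is most attractive.
    if (prop.integrated && prop.canMapHostMemory)
        printf("Integrated device: zero-copy is a good default.\n");
    else if (prop.canMapHostMemory)
        printf("Discrete device: use zero-copy only for small, "
               "infrequently accessed shared data.\n");
    else
        printf("Device cannot map host memory.\n");

    return 0;
}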
Unified Virtual Addressing
Devices of compute capability 2.0 and above support a feature called Unified Virtual Addressing (UVA). Introduced in CUDA 4.0, it is supported on 64-bit Linux systems. With UVA, the CPU and GPU share a single continuous virtual address space.
Before UVA, we had to manage separate pointers for host memory and device memory. With UVA, the physical location a pointer refers to is transparent to application code; we simply see one continuous address space.
As a result, the address of pinned memory allocated with cudaHostAlloc is valid for both the device and the host, and can be used directly inside a kernel. Recall the zero-copy workflow used earlier:
1. Allocate pinned memory that is mapped into the device address space.
2. Obtain the device-side mapped address from the host address.
3. Use that mapped address in the kernel.
With UVA, step 2 is unnecessary; the same address works everywhere, as the following code shows:
// allocate zero-copy memory at the host side
cudaHostAlloc((void **)&h_A, nBytes, cudaHostAllocMapped);
cudaHostAlloc((void **)&h_B, nBytes, cudaHostAllocMapped);
// initialize data at the host side
initialData(h_A, nElem);
initialData(h_B, nElem);
// invoke the kernel with zero-copy memory
sumArraysZeroCopy<<<grid, block>>>(h_A, h_B, d_C, nElem);
Compile and run:
hym@hym-ThinkPad-Edge-E440:~/CodeSamples/Solutions/chapter04$ nvcc -O3 sumArrayZerocpyUVA.cu -o sumArrayZerocpyUVA
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
hym@hym-ThinkPad-Edge-E440:~/CodeSamples/Solutions/chapter04$ nvprof ./sumArrayZerocpyUVA
==16987== NVPROF is profiling process 16987, command: ./sumArrayZerocpyUVA
Using Device 0: GeForce GT 740M Vector size 16777216 power 24 nbytes 64 MB
sumArrays, elapsed = 0.015717 s
sumArraysZeroCopy, elapsed = 0.020800 s
sumArraysZeroCopy w/ UVA, elapsed = 0.020872 s
==16987== Profiling application: ./sumArrayZerocpyUVA
==16987== Profiling result:
Time(%) Time Calls Avg Min Max Name
29.82% 33.489ms 3 11.163ms 11.107ms 11.256ms [CUDA memcpy DtoH]
19.39% 21.775ms 2 10.887ms 10.847ms 10.927ms [CUDA memcpy HtoD]
18.50% 20.778ms 1 20.778ms 20.778ms 20.778ms sumArraysZeroCopyWithUVA(float*, float*, float*, int)
18.46% 20.733ms 1 20.733ms 20.733ms 20.733ms sumArraysZeroCopy(float*, float*, float*, int)
13.84% 15.545ms 1 15.545ms 15.545ms 15.545ms sumArrays(float*, float*, float*, int)
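Under UVA the runtime can also report where any pointer resides. The sketch below uses cudaPointerGetAttributes; note that the memoryType field used here is the pre-CUDA-10 name (newer toolkits replace it with type):

#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    float *h_p, *d_p;
    cudaHostAlloc((void **)&h_p, 1024, cudaHostAllocMapped);
    cudaMalloc((void **)&d_p, 1024);

    cudaPointerAttributes attr;

    // The runtime can classify any pointer in the unified address space.
    cudaPointerGetAttributes(&attr, h_p);
    printf("h_p is %s memory\n",
           attr.memoryType == cudaMemoryTypeHost ? "host" : "device");

    cudaPointerGetAttributes(&attr, d_p);
    printf("d_p is %s memory\n",
           attr.memoryType == cudaMemoryTypeHost ? "host" : "device");

    cudaFree(d_p);
    cudaFreeHost(h_p);
    return 0;
}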
Unified Memory
CUDA 6.0 introduced a new feature, Unified Memory, to simplify memory management in CUDA.
Unified Memory depends on UVA, but they are entirely different technologies. UVA provides a single virtual address space for all CPUs and GPUs in the system, but it does not automatically migrate data from one physical location to another; that automatic migration is exactly what Unified Memory adds.
Unified Memory offers a "single pointer to data" model that is conceptually similar to zero-copy memory. However, zero-copy memory is allocated in host memory, so kernel performance generally suffers from high-latency accesses to it over the PCIe bus. Unified Memory, by contrast, decouples the memory space from the execution space: the runtime migrates data transparently and on demand to whichever processor (host or device) is using it, so accesses become local and both locality and performance improve.
A conventional CUDA program with explicit device memory management:
__global__ void AplusB(int *ret, int a, int b)
{
    ret[threadIdx.x] = a + b + threadIdx.x;
}

int main()
{
    int *ret;

    //**************************************
    cudaMalloc(&ret, 1000 * sizeof(int));
    AplusB<<<1, 1000>>>(ret, 10, 100);
    //**************************************

    int *host_ret = (int *)malloc(1000 * sizeof(int));
    cudaMemcpy(host_ret, ret, 1000 * sizeof(int), cudaMemcpyDefault);

    for (int i = 0; i < 1000; i++)
        printf("%d: A + B = %d\n", i, host_ret[i]);

    free(host_ret);
    cudaFree(ret);
    return 0;
}
The same program using Unified Memory:
#include "../common/common.h"
#include <cuda_runtime.h>
#include <stdio.h>
__global__ void AplusB(int *ret, int a, int b)
{
    ret[threadIdx.x] = a + b + threadIdx.x;
}

int main()
{
    int *ret;

    //***********************************************
    CHECK(cudaMallocManaged(&ret, 1000 * sizeof(int)));
    AplusB<<<1, 1000>>>(ret, 10, 100);
    //***********************************************

    CHECK(cudaDeviceSynchronize());

    for (int i = 0; i < 1000; i++)
        printf("%d: A + B = %d\n", i, ret[i]);

    cudaFree(ret);
    return 0;
}
Comparing the two listings, the unified-memory version is clearly simpler: a single cudaMallocManaged() call creates one block of memory that can be used both inside the kernel and in main, achieving the goal of unified addressing.
Note that main calls a synchronization function after launching the kernel. A moment's thought shows why: since this memory is accessible to both the kernel and main, synchronization is needed to avoid conflicting accesses; cudaDeviceSynchronize() ensures the kernel has finished writing the values before main reads them.
Note: Unified Memory requires compute capability 3.0 or higher and a 64-bit operating system.
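Rather than hard-coding a compute-capability threshold, support can be checked at runtime via the managedMemory field of cudaDeviceProp; a small sketch:

#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    // managedMemory is nonzero when the device and OS support
    // cudaMallocManaged allocations.
    if (prop.managedMemory)
        printf("Device %s supports Unified Memory.\n", prop.name);
    else
        printf("Device %s does not support Unified Memory.\n", prop.name);

    return 0;
}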
http://blog.csdn.net/tom1027/article/details/44856875