Last time I used AVFoundation's AVCaptureVideoPreviewLayer to display the data captured by the camera. In the general case, though, the data may come from the network or from a file, and then AVCaptureVideoPreviewLayer is no longer usable.
First, a little background on video coding. A video is like a film: a sequence of pictures played back to back. Storing every picture in full would waste a lot of space, so an encoder keeps one key picture (an I frame) and, for each following picture, stores only the parts that differ from the previous one (P frames). When the scene changes enough, the process starts over from a new key picture. What gets stored is therefore a sequence like IPPPPPIPPPPP, and each repeating segment is called a GOP. Some codecs also use B frames, which are predicted from both earlier and later frames. Video coding is a deep topic; I recommend this article: http://www.skywind.me/blog/archives/1609 (may require a proxy to access from mainland China).
The decode-and-play flow is comparatively simple:
- receive a GOP worth of data and hand it to the decoder
- the decoder reconstructs the frames one by one and hands them to the front end
- the front end draws the frames on screen
One thing worth pointing out: the frames in a video are in a YUV pixel format, which plays the same role as RGB. YUV is used because it takes noticeably less space.
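To see why, here is a rough back-of-the-envelope comparison for a single 1920x1080 frame (the numbers are mine, purely for illustration):

```objc
// Approximate storage for one 1920x1080 frame (illustrative only).
size_t w = 1920, h = 1080;
size_t rgbBytes  = w * h * 3;       // RGB, 3 bytes per pixel: about 6.2 MB
size_t nv12Bytes = w * h * 3 / 2;   // YUV420 (e.g. NV12): a full-size Y plane plus
                                    // quarter-size Cb and Cr: about 3.1 MB, half of RGB
```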
As mentioned in the previous article, AVCaptureSession can send its output to different destinations: a file, an AVCaptureVideoPreviewLayer, or raw frames via AVCaptureVideoDataOutput.
```objc
//-- Create the output for the capture session.
AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
[dataOutput setAlwaysDiscardsLateVideoFrames:YES]; // Probably want to set this to NO when recording

//-- Set to YUV420.
[dataOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
                                                         forKey:(id)kCVPixelBufferPixelFormatTypeKey]]; // Necessary for manual preview

// Set dispatch to be on the main thread so OpenGL can do things with the data
[dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[_captureSession addOutput:dataOutput];
```
The dataOutput needs kCVPixelBufferPixelFormatTypeKey to be specified. The format used here is YUV, which is also what an H.264 decoder typically produces.
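Incidentally, you can double-check which format actually arrives by asking each buffer inside the delegate callback shown below; a minimal sketch:

```objc
// Inside the sample buffer callback: verify the delivered pixel format.
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);
if (format != kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
    NSLog(@"unexpected pixel format: %u", (unsigned)format);
}
```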
Once this is set up, every frame the camera captures is delivered to the following delegate method:
```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
```
The sampleBuffer is the data object we are after. What we have here is an uncompressed image, and that image is stored in a CVPixelBuffer.
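Since the format we requested is bi-planar, the CVPixelBuffer holds two planes: a full-resolution Y (luma) plane and a half-resolution plane with Cb and Cr bytes interleaved. A quick sketch of how to inspect that layout inside the callback, for illustration only (the real conversion follows below):

```objc
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
size_t planes  = CVPixelBufferGetPlaneCount(imageBuffer);        // 2 for 420YpCbCr8BiPlanar
size_t yWidth  = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);   // luma plane, full resolution
size_t yHeight = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
size_t cWidth  = CVPixelBufferGetWidthOfPlane(imageBuffer, 1);   // chroma plane, half resolution,
size_t cHeight = CVPixelBufferGetHeightOfPlane(imageBuffer, 1);  // Cb and Cr interleaved
NSLog(@"%zu planes, Y %zux%zu, CbCr %zux%zu", planes, yWidth, yHeight, cWidth, cHeight);
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
```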
There are two ways to render it:

- convert the CVPixelBuffer to a CGImage and hand it to an NSImageView or UIImageView
- hand it to OpenGL for rendering

This time we go with option 1.
```objc
/*
 Based on http://stackoverflow.com/questions/8838481/kcvpixelformattype-420ypcbcr8biplanarfullrange-frame-to-uiimage-conversion
 */
#define clamp(a) ((a) > 255 ? 255 : ((a) < 0 ? 0 : (a)))

- (NSImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    size_t width  = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Plane 0 is the full-resolution Y (luma) plane, plane 1 the half-resolution
    // interleaved CbCr (chroma) plane.
    uint8_t *yBuffer    = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t   yPitch     = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
    uint8_t *cbCrBuffer = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
    size_t   cbCrPitch  = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);

    int bytesPerPixel = 4;
    uint8_t *rgbBuffer = malloc(width * height * bytesPerPixel);

    for (int y = 0; y < height; y++) {
        uint8_t *rgbBufferLine  = &rgbBuffer[y * width * bytesPerPixel];
        uint8_t *yBufferLine    = &yBuffer[y * yPitch];
        uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch]; // one chroma row serves two luma rows

        for (int x = 0; x < width; x++) {
            int16_t luma = yBufferLine[x];
            int16_t cb   = cbCrBufferLine[x & ~1] - 128; // even byte of each pair is Cb
            int16_t cr   = cbCrBufferLine[x | 1]  - 128; // odd byte of each pair is Cr

            uint8_t *rgbOutput = &rgbBufferLine[x * bytesPerPixel];

            int16_t r = (int16_t)roundf( luma + cr * 1.4 );
            int16_t g = (int16_t)roundf( luma + cb * -0.343 + cr * -0.711 );
            int16_t b = (int16_t)roundf( luma + cb * 1.765 );

            // Byte layout matches kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast below.
            rgbOutput[0] = 0xff;
            rgbOutput[1] = clamp(b);
            rgbOutput[2] = clamp(g);
            rgbOutput[3] = clamp(r);
        }
    }

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgbBuffer, width, height, 8, width * bytesPerPixel,
                                                 colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // On iOS this would be: UIImage *image = [UIImage imageWithCGImage:quartzImage];
    NSImage *image = [[NSImage alloc] initWithCGImage:quartzImage size:NSZeroSize];

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(quartzImage);
    free(rgbBuffer);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;
}
```
```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // The delegate queue was set to the main queue above, so the conversion
    // itself already runs on the main thread here.
    NSImage *nsImage = [self imageFromSampleBuffer:sampleBuffer];
    [self.cameraView performSelectorOnMainThread:@selector(setImage:) withObject:nsImage waitUntilDone:YES];
}
```
The conversion is fairly tedious, but the basic idea is simple: pull the YUV data out of the pixel buffer, turn it into RGB with a bit of per-pixel arithmetic, and then draw the resulting bitmap from memory.
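As a quick sanity check of those formulas (the sample values are made up): for a pixel with Y = 150, Cb = 110 and Cr = 160, we get cb = -18 and cr = 32 after the 128 offset, so R ≈ 150 + 32 × 1.4 ≈ 195, G ≈ 150 + 18 × 0.343 - 32 × 0.711 ≈ 133, and B ≈ 150 - 18 × 1.765 ≈ 118, a light, slightly warm color, which is what you would expect when Cr is above 128.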
Project code: https://github.com/annidy/AVCapturePreview2
Refreshing an image view with a new image on every frame is very CPU-intensive. Running the whole pipeline, CPU usage sits above 60% while the frame rate is only around 13 fps, and roughly 50% of the time goes into setImage. With AVCaptureVideoPreviewLayer the total CPU usage stays below 5%.
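One obvious mitigation, which the demo project does not do, is to keep the expensive per-pixel conversion off the main thread and only touch the view on the main queue. A minimal sketch, assuming the delegate is registered on a hypothetical background serial queue instead of dispatch_get_main_queue():

```objc
// Sketch only: assumes setSampleBufferDelegate:queue: was given a background serial queue,
// e.g. dispatch_queue_create("camera.convert", DISPATCH_QUEUE_SERIAL), instead of the main queue.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Heavy YUV -> RGB work now happens off the main thread.
    NSImage *nsImage = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.cameraView setImage:nsImage]; // only the UI update runs on the main thread
    });
}
```

This only moves the work off the main thread; the per-pixel conversion itself stays just as expensive, so the OpenGL path (or simply letting AVCaptureVideoPreviewLayer do the job) remains the more realistic option.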
Conclusion: rendering through an image view works in principle, but it comes with serious performance problems.