For efficient UI refreshing, OpenGL/hardware acceleration is a must. While studying OpenGL recently, my brain was thoroughly melted by its APIs, shaders, GLSL, and the rest. Not that OpenGL is to blame: it was designed for 3D animation in the first place, so of course the entry bar starts high.
There are quite a few open-source implementations of OpenGL ES 2.0 video rendering on the web, and Apple's own GLCameraRipple sample is excellent, but desktop OpenGL implementations are rare. Rendering into OpenGL, however, is something Core Image can do as well.
One of Apple's demos contains a class called VideoCIView, which essentially implements drawing a CIImage into an NSOpenGLView.
VideoCIView inherits from NSOpenGLView, mainly to get the callback when the display area changes. One more caveat: an NSOpenGLView cannot have subviews.
```objc
+ (NSOpenGLPixelFormat *)defaultPixelFormat
{
    static NSOpenGLPixelFormat *pf;

    if (pf == nil)
    {
        // Making sure the pixel format of the context does not have a
        // recovery renderer is important. Otherwise Core Image may not
        // be able to create contexts that share textures with this context.
        static const NSOpenGLPixelFormatAttribute attr[] = {
            NSOpenGLPFAAccelerated,
            NSOpenGLPFANoRecovery,
            NSOpenGLPFAColorSize, 32,
#if MAC_OS_X_VERSION_MAX_ALLOWED > MAC_OS_X_VERSION_10_4
            NSOpenGLPFAAllowOfflineRenderers,
#endif
            0
        };

        pf = [[NSOpenGLPixelFormat alloc] initWithAttributes:(void *)&attr];
    }

    return pf;
}
```
Every NSOpenGLView needs an NSOpenGLPixelFormat to initialize. Passing a C array into Objective-C as the attribute list like this is fairly unusual. Attributes come in two kinds: 1. boolean flags; 2. keys followed by an integer value. This method just configures some OpenGL parameters, and most implementations look much alike.
```objc
- (void)prepareOpenGL
{
    GLint parm = 1;

    // Set the swap interval to 1 to ensure that buffer swaps occur only
    // during the vertical retrace of the monitor.
    [[self openGLContext] setValues:&parm forParameter:NSOpenGLCPSwapInterval];

    // To ensure best performance, disable everything you don't need.
    glDisable (GL_ALPHA_TEST);
    glDisable (GL_DEPTH_TEST);
    glDisable (GL_SCISSOR_TEST);
    glDisable (GL_BLEND);
    glDisable (GL_DITHER);
    glDisable (GL_CULL_FACE);
    glColorMask (GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask (GL_FALSE);
    glStencilMask (0);
    glClearColor (0.0f, 0.0f, 0.0f, 0.0f);
    glHint (GL_TRANSFORM_HINT_APPLE, GL_FASTEST);

    _needsReshape = YES;
}
```
This method is called once after OpenGL has finished initializing the current context; think of it as the viewDidLoad of the OpenGL world. Apple turns off a number of features here that aren't needed.
```objc
// Called when the user scrolls, moves, or resizes the view.
- (void)reshape
{
    // Resets the viewport on the next draw operation.
    _needsReshape = YES;
}
```
```objc
- (void)updateMatrices
{
    NSRect visibleRect = [self visibleRect];
    NSRect mappedVisibleRect = NSIntegralRect([self convertRect:visibleRect
                                                         toView:[self enclosingScrollView]]);

    [[self openGLContext] update];

    // Install an orthographic projection matrix (no perspective)
    // with the origin in the bottom left and one unit equal to one device pixel.
    glViewport (0, 0, mappedVisibleRect.size.width, mappedVisibleRect.size.height);

    glMatrixMode (GL_PROJECTION);
    glLoadIdentity ();
    glOrtho (visibleRect.origin.x,
             visibleRect.origin.x + visibleRect.size.width,
             visibleRect.origin.y,
             visibleRect.origin.y + visibleRect.size.height,
             -1, 1);

    glMatrixMode (GL_MODELVIEW);
    glLoadIdentity ();

    _needsReshape = NO;
}
```
This method is called when the enclosing window is resized (which is also the reason for not choosing NSOpenGLLayer). Note that after a resize, glViewport is not touched directly; instead, _needsReshape is set to YES. This seems to be a common OpenGL design pattern: all drawing happens inside the render function, and the render function is not reentrant (my own speculation). OpenGL drawing doesn't have to happen on the main thread. As for driving render, that's usually done with CADisplayLink; this project has the onCapture callback, so we can skip that.
```objc
- (void)render
{
    NSRect frame = [self bounds];

    [[self openGLContext] makeCurrentContext];

    if (_needsReshape)
    {
        [self updateMatrices];
        glClear (GL_COLOR_BUFFER_BIT);
    }

    CGRect imageRect = [_image extent];
    CGRect destRect = *((CGRect *)&frame);

    [[self ciContext] drawImage:_image inRect:destRect fromRect:imageRect];

    // Flush the OpenGL command stream. If the view is double-buffered,
    // replace this call with [[self openGLContext] flushBuffer].
    glFlush ();
}
```
```objc
- (CIContext *)ciContext
{
    // Allocate a Core Image rendering context using the view's OpenGL
    // context as its destination if none already exists.
    // You must do this before sending any queries to the CIContext.
    if (_context == nil)
    {
        [[self openGLContext] makeCurrentContext];

        NSOpenGLPixelFormat *pf;

        pf = [self pixelFormat];
        if (pf == nil)
            pf = [[self class] defaultPixelFormat];

        _context = [[CIContext contextWithCGLContext:CGLGetCurrentContext()
                                         pixelFormat:[pf CGLPixelFormatObj]
                                             options:nil] retain];
    }

    return _context;
}
```
The actual render does only two things:

- If the visible region changed, update the viewport and the mapping.
- Draw the CIImage through OpenGL.

Before step 2 we first need the CIContext used for Core Image drawing. The drawing itself is a single line, drawImage. Naturally, that drawing is done on the GPU, so it is very fast.
Finally, setImage is what drives render:
```objc
- (void)setImage:(CIImage *)image
{
    if (_image != image)
    {
        [_image release];
        _image = [image retain];
    }
    [self render];
}
```
The last key step is getting hold of the CIImage: obtain a CVImageBuffer from the CMSampleBuffer, then build a CIImage from the CVImageBuffer.

```objc
CVImageBufferRef videoFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *image = [CIImage imageWithCVImageBuffer:videoFrame];
```

Note that `+[CIImage imageWithCVImageBuffer:]` does not always succeed; it depends on the pixel format of the captured frames.
The whole NSOpenGLView procedure is a bit convoluted; GLKView on iOS is considerably simpler to use.
In final testing, the CIContext approach sits at roughly 7% CPU, slightly worse than AVSampleBufferDisplayLayer. The main cost is the CVImageBufferRef -> CIImage conversion, which takes considerable time; the drawing itself is quite efficient.
I'm quite fond of this rendering approach: it takes advantage of hardware acceleration, and CIImage brings plenty of filters to play with. Most importantly, the implementation is relatively simple.