Overview
GPUImage is a well-known open-source image-processing library that lets you apply GPU-accelerated filters and other effects to images, video, and the camera feed. Compared with the CoreImage framework, GPUImage exposes interfaces that let you plug in your own custom filters. Project address: https://github.com/BradLarson/GPUImage
This article reads through the source of three classes in the GPUImage framework: GPUImagePicture, GPUImageView, and GPUImageUIElement. They handle image loading, image display, and UI rendering on iOS, and almost any GPUImage workflow that filters and displays an image touches them. The article covers:
- GPUImagePicture
- GPUImageView
- GPUImageUIElement
- Demo results
GPUImagePicture
As its name suggests, GPUImagePicture is the GPUImage class for working with still images; its main job is to convert a UIImage or CGImage into a texture object. GPUImagePicture inherits from GPUImageOutput, so it can act as an output; since it does not implement the GPUImageInput protocol, it cannot accept input. It is therefore typically the source of a filter chain.
- Straight alpha vs. premultiplied alpha
When describing an RGBA color with straight alpha, the alpha value lives only in the alpha channel. For example, red at 60% opacity is written (255, 0, 0, 255 * 0.6) = (255, 0, 0, 153), where 153 (= 255 * 0.6) indicates that the color should be 60% opaque.
When describing an RGBA color with premultiplied alpha, every color channel is additionally multiplied by the alpha value: (255 * 0.6, 0 * 0.6, 0 * 0.6, 255 * 0.6) = (153, 0, 0, 153).
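To make the arithmetic concrete, here is a minimal C sketch (GPPremultiply is a hypothetical helper name, not part of GPUImage) that converts one straight-alpha pixel to premultiplied alpha:
#include <stdint.h>
// Hypothetical helper, not from GPUImage: premultiply one straight-alpha RGBA pixel.
static inline void GPPremultiply(uint8_t rgba[4])
{
    uint8_t alpha = rgba[3];
    rgba[0] = (uint8_t)((rgba[0] * alpha) / 255); // R
    rgba[1] = (uint8_t)((rgba[1] * alpha) / 255); // G
    rgba[2] = (uint8_t)((rgba[2] * alpha) / 255); // B
    // rgba[3] (alpha) is left unchanged.
}
// Example: {255, 0, 0, 153} becomes {153, 0, 0, 153}, matching the numbers above.
The removePremultiplication initializer option described below performs the inverse of this operation.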
- Initializers. There are quite a few of them because many initialization options are offered; the benefit is flexibility, making it easy to tailor loading to your needs.
// Initialize from an image URL
- (id)initWithURL:(NSURL *)url;
// Initialize from a UIImage or CGImage
- (id)initWithImage:(UIImage *)newImageSource;
- (id)initWithCGImage:(CGImageRef)newImageSource;
// Initialize from a UIImage or CGImage, specifying whether to smoothly scale the output
- (id)initWithImage:(UIImage *)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput;
- (id)initWithCGImage:(CGImageRef)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput;
// Initialize from a UIImage or CGImage, specifying whether to remove premultiplied alpha
- (id)initWithImage:(UIImage *)newImageSource removePremultiplication:(BOOL)removePremultiplication;
- (id)initWithCGImage:(CGImageRef)newImageSource removePremultiplication:(BOOL)removePremultiplication;
// Initialize from a UIImage or CGImage, specifying whether to smoothly scale the output and whether to remove premultiplied alpha
- (id)initWithImage:(UIImage *)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput removePremultiplication:(BOOL)removePremultiplication;
- (id)initWithCGImage:(CGImageRef)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput removePremultiplication:(BOOL)removePremultiplication;
Although there are many initializers, all of them are built on the one below, so it is the only one we need to read. The implementation is fairly involved, but the basic flow is:
1. Compute a suitable width and height for the image (they must not exceed the maximum texture size OpenGL ES allows).
2. If smoothlyScaleOutput is enabled, round the width and height up to the nearest powers of two (for example, a width of 1000 becomes 1024); after this adjustment the image must be redrawn.
3. If no redraw is needed, read the endianness, alpha layout, and other format information directly from the image.
4. If a redraw is needed, redraw the image with CoreGraphics.
5. Depending on the removePremultiplication option, optionally undo the premultiplied alpha.
6. Generate a texture cache object from the resulting pixel data.
7. Depending on shouldSmoothlyScaleOutput, optionally generate mipmaps.
8. Finally, release the resources.
- (id)initWithCGImage:(CGImageRef)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput removePremultiplication:(BOOL)removePremultiplication;
{
if (!(self = [super init]))
{
return nil;
}
hasProcessedImage = NO;
self.shouldSmoothlyScaleOutput = smoothlyScaleOutput;
imageUpdateSemaphore = dispatch_semaphore_create(0);
dispatch_semaphore_signal(imageUpdateSemaphore);
// TODO: Dispatch this whole thing asynchronously to move image loading off main thread
CGFloat widthOfImage = CGImageGetWidth(newImageSource);
CGFloat heightOfImage = CGImageGetHeight(newImageSource);
// If passed an empty image reference, CGContextDrawImage will fail in future versions of the SDK.
NSAssert( widthOfImage > 0 && heightOfImage > 0, @"Passed image must not be empty - it should be at least 1px tall and wide");
pixelSizeOfImage = CGSizeMake(widthOfImage, heightOfImage);
CGSize pixelSizeToUseForTexture = pixelSizeOfImage;
BOOL shouldRedrawUsingCoreGraphics = NO;
// For now, deal with images larger than the maximum texture size by resizing to be within that limit
CGSize scaledImageSizeToFitOnGPU = [GPUImageContext sizeThatFitsWithinATextureForSize:pixelSizeOfImage];
if (!CGSizeEqualToSize(scaledImageSizeToFitOnGPU, pixelSizeOfImage))
{
pixelSizeOfImage = scaledImageSizeToFitOnGPU;
pixelSizeToUseForTexture = pixelSizeOfImage;
shouldRedrawUsingCoreGraphics = YES;
}
if (self.shouldSmoothlyScaleOutput)
{
// In order to use mipmaps, you need to provide power-of-two textures, so convert to the next largest power of two and stretch to fill
CGFloat powerClosestToWidth = ceil(log2(pixelSizeOfImage.width));
CGFloat powerClosestToHeight = ceil(log2(pixelSizeOfImage.height));
pixelSizeToUseForTexture = CGSizeMake(pow(2.0, powerClosestToWidth), pow(2.0, powerClosestToHeight));
shouldRedrawUsingCoreGraphics = YES;
}
GLubyte *imageData = NULL;
CFDataRef dataFromImageDataProvider = NULL;
GLenum format = GL_BGRA;
BOOL isLittleEndian = YES;
BOOL alphaFirst = NO;
BOOL premultiplied = NO;
if (!shouldRedrawUsingCoreGraphics) {
/* Check that the memory layout is compatible with GL, as we cannot use glPixelStore to
* tell GL about the memory layout with GLES.
*/
if (CGImageGetBytesPerRow(newImageSource) != CGImageGetWidth(newImageSource) * 4 ||
CGImageGetBitsPerPixel(newImageSource) != 32 ||
CGImageGetBitsPerComponent(newImageSource) != 8)
{
shouldRedrawUsingCoreGraphics = YES;
} else {
/* Check that the bitmap pixel format is compatible with GL */
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(newImageSource);
if ((bitmapInfo & kCGBitmapFloatComponents) != 0) {
/* We don't support float components for use directly in GL */
shouldRedrawUsingCoreGraphics = YES;
} else {
CGBitmapInfo byteOrderInfo = bitmapInfo & kCGBitmapByteOrderMask;
if (byteOrderInfo == kCGBitmapByteOrder32Little) {
/* Little endian, for alpha-first we can use this bitmap directly in GL */
CGImageAlphaInfo alphaInfo = bitmapInfo & kCGBitmapAlphaInfoMask;
if (alphaInfo != kCGImageAlphaPremultipliedFirst && alphaInfo != kCGImageAlphaFirst &&
alphaInfo != kCGImageAlphaNoneSkipFirst) {
shouldRedrawUsingCoreGraphics = YES;
}
} else if (byteOrderInfo == kCGBitmapByteOrderDefault || byteOrderInfo == kCGBitmapByteOrder32Big) {
isLittleEndian = NO;
/* Big endian, for alpha-last we can use this bitmap directly in GL */
CGImageAlphaInfo alphaInfo = bitmapInfo & kCGBitmapAlphaInfoMask;
if (alphaInfo != kCGImageAlphaPremultipliedLast && alphaInfo != kCGImageAlphaLast &&
alphaInfo != kCGImageAlphaNoneSkipLast) {
shouldRedrawUsingCoreGraphics = YES;
} else {
/* Can access directly using GL_RGBA pixel format */
premultiplied = alphaInfo == kCGImageAlphaPremultipliedLast; // only alpha-last layouts reach this branch
alphaFirst = alphaInfo == kCGImageAlphaFirst || alphaInfo == kCGImageAlphaPremultipliedFirst;
format = GL_RGBA;
}
}
}
}
}
// CFAbsoluteTime elapsedTime, startTime = CFAbsoluteTimeGetCurrent();
if (shouldRedrawUsingCoreGraphics)
{
// For resized or incompatible image: redraw
imageData = (GLubyte *) calloc(1, (int)pixelSizeToUseForTexture.width * (int)pixelSizeToUseForTexture.height * 4);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData, (size_t)pixelSizeToUseForTexture.width, (size_t)pixelSizeToUseForTexture.height, 8, (size_t)pixelSizeToUseForTexture.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// CGContextSetBlendMode(imageContext, kCGBlendModeCopy); // From Technical Q&A QA1708: http://developer.apple.com/library/ios/#qa/qa1708/_index.html
CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, pixelSizeToUseForTexture.width, pixelSizeToUseForTexture.height), newImageSource);
CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);
isLittleEndian = YES;
alphaFirst = YES;
premultiplied = YES;
}
else
{
// Access the raw image bytes directly
dataFromImageDataProvider = CGDataProviderCopyData(CGImageGetDataProvider(newImageSource));
imageData = (GLubyte *)CFDataGetBytePtr(dataFromImageDataProvider);
}
if (removePremultiplication && premultiplied) {
NSUInteger totalNumberOfPixels = round(pixelSizeToUseForTexture.width * pixelSizeToUseForTexture.height);
uint32_t *pixelP = (uint32_t *)imageData;
uint32_t pixel;
CGFloat srcR, srcG, srcB, srcA;
for (NSUInteger idx=0; idx<totalNumberOfPixels; idx++, pixelP++) {
pixel = isLittleEndian ? CFSwapInt32LittleToHost(*pixelP) : CFSwapInt32BigToHost(*pixelP);
if (alphaFirst) {
srcA = (CGFloat)((pixel & 0xff000000) >> 24) / 255.0f;
}
else {
srcA = (CGFloat)(pixel & 0x000000ff) / 255.0f;
pixel >>= 8;
}
srcR = (CGFloat)((pixel & 0x00ff0000) >> 16) / 255.0f;
srcG = (CGFloat)((pixel & 0x0000ff00) >> 8) / 255.0f;
srcB = (CGFloat)(pixel & 0x000000ff) / 255.0f;
srcR /= srcA; srcG /= srcA; srcB /= srcA; // un-premultiply; assumes alpha > 0 for premultiplied pixels
pixel = (uint32_t)(srcR * 255.0) << 16;
pixel |= (uint32_t)(srcG * 255.0) << 8;
pixel |= (uint32_t)(srcB * 255.0);
if (alphaFirst) {
pixel |= (uint32_t)(srcA * 255.0) << 24;
}
else {
pixel <<= 8;
pixel |= (uint32_t)(srcA * 255.0);
}
*pixelP = isLittleEndian ? CFSwapInt32HostToLittle(pixel) : CFSwapInt32HostToBig(pixel);
}
}
// elapsedTime = (CFAbsoluteTimeGetCurrent() - startTime) * 1000.0;
// NSLog(@"Core Graphics drawing time: %f", elapsedTime);
// CGFloat currentRedTotal = 0.0f, currentGreenTotal = 0.0f, currentBlueTotal = 0.0f, currentAlphaTotal = 0.0f;
// NSUInteger totalNumberOfPixels = round(pixelSizeToUseForTexture.width * pixelSizeToUseForTexture.height);
//
// for (NSUInteger currentPixel = 0; currentPixel < totalNumberOfPixels; currentPixel++)
// {
// currentBlueTotal += (CGFloat)imageData[(currentPixel * 4)] / 255.0f;
// currentGreenTotal += (CGFloat)imageData[(currentPixel * 4) + 1] / 255.0f;
// currentRedTotal += (CGFloat)imageData[(currentPixel * 4 + 2)] / 255.0f;
// currentAlphaTotal += (CGFloat)imageData[(currentPixel * 4) + 3] / 255.0f;
// }
//
// NSLog(@"Debug, average input image red: %f, green: %f, blue: %f, alpha: %f", currentRedTotal / (CGFloat)totalNumberOfPixels, currentGreenTotal / (CGFloat)totalNumberOfPixels, currentBlueTotal / (CGFloat)totalNumberOfPixels, currentAlphaTotal / (CGFloat)totalNumberOfPixels);
runSynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext useImageProcessingContext];
outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:pixelSizeToUseForTexture onlyTexture:YES];
[outputFramebuffer disableReferenceCounting];
glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
if (self.shouldSmoothlyScaleOutput)
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
}
// no need to use self.outputTextureOptions here, since pictures need exactly this texture format and type
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 0, format, GL_UNSIGNED_BYTE, imageData);
if (self.shouldSmoothlyScaleOutput)
{
glGenerateMipmap(GL_TEXTURE_2D);
}
glBindTexture(GL_TEXTURE_2D, 0);
});
if (shouldRedrawUsingCoreGraphics)
{
free(imageData);
}
else
{
if (dataFromImageDataProvider)
{
CFRelease(dataFromImageDataProvider);
}
}
return self;
}
- Other methods. These are mainly concerned with processing the image.
// Image rendering
- (void)processImage;
- (CGSize)outputImageSize;
- (BOOL)processImageWithCompletionHandler:(void (^)(void))completion;
- (void)processImageUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withCompletionHandler:(void (^)(UIImage *processedImage))block;
// Process the image
- (void)processImage;
{
[self processImageWithCompletionHandler:nil];
}
// Output image size. Since the image may have been resized (see the initializer above), this accessor exposes the final size.
- (CGSize)outputImageSize;
{
return pixelSizeOfImage;
}
// Process the image, with an optional completion block
- (BOOL)processImageWithCompletionHandler:(void (^)(void))completion;
{
hasProcessedImage = YES;
// dispatch_semaphore_wait(imageUpdateSemaphore, DISPATCH_TIME_FOREVER);
// Try-wait: if the semaphore count is below 1, return immediately (the update is dropped rather than blocking); otherwise decrement the count and continue
if (dispatch_semaphore_wait(imageUpdateSemaphore, DISPATCH_TIME_NOW) != 0)
{
return NO;
}
// Hand the framebuffer to every target for processing
runAsynchronouslyOnVideoProcessingQueue(^{
for (id<GPUImageInput> currentTarget in targets)
{
NSInteger indexOfObject = [targets indexOfObject:currentTarget];
NSInteger textureIndexOfTarget = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
[currentTarget setCurrentlyReceivingMonochromeInput:NO];
[currentTarget setInputSize:pixelSizeOfImage atIndex:textureIndexOfTarget];
[currentTarget setInputFramebuffer:outputFramebuffer atIndex:textureIndexOfTarget];
[currentTarget newFrameReadyAtTime:kCMTimeIndefinite atIndex:textureIndexOfTarget];
}
// Done processing: increment the semaphore
dispatch_semaphore_signal(imageUpdateSemaphore);
// If a completion block was supplied, invoke it
if (completion != nil) {
completion();
}
});
return YES;
}
// Produce a UIImage from the final filter in the chain
- (void)processImageUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withCompletionHandler:(void (^)(UIImage *processedImage))block;
{
[finalFilterInChain useNextFrameForImageCapture];
[self processImageWithCompletionHandler:^{
UIImage *imageFromFilter = [finalFilterInChain imageFromCurrentFramebuffer];
block(imageFromFilter);
}];
}
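A minimal usage sketch (the image name and the sepia filter are illustrative assumptions, not from this article): run a source image through one filter and receive the processed UIImage in the completion block.
GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"sample.jpg"]];
GPUImageSepiaFilter *sepia = [[GPUImageSepiaFilter alloc] init];
[source addTarget:sepia];
[source processImageUpToFilter:sepia withCompletionHandler:^(UIImage *processedImage) {
    // The handler runs on the video processing queue; hop to the main queue before touching UIKit.
    dispatch_async(dispatch_get_main_queue(), ^{
        self.resultImageView.image = processedImage; // resultImageView: a hypothetical UIImageView
    });
}];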
GPUImageView
As its name suggests, GPUImageView is the GPUImage class for displaying images. GPUImageView implements the GPUImageInput protocol, so it can accept GPUImageFramebuffer input; it is therefore usually the terminal node of a filter chain, displaying the processed framebuffer. GPUImageView involves a fair amount of OpenGL ES, which this article will not cover in depth; if you are unfamiliar with it, you are welcome to read my OpenGL ES primer series.
- Initialization
- (id)initWithFrame:(CGRect)frame;
-(id)initWithCoder:(NSCoder *)coder;
Initialization does the following work: 1. set the OpenGL ES related properties; 2. create the shader program; 3. look up the locations of the attribute and uniform variables; 4. set the clear color; 5. create the default framebuffer and renderbuffer used to display the image; 6. adjust the vertex coordinates according to the fill mode (a sketch of this step follows the commonInit listing below).
- (id)initWithFrame:(CGRect)frame
{
if (!(self = [super initWithFrame:frame]))
{
return nil;
}
[self commonInit];
return self;
}
-(id)initWithCoder:(NSCoder *)coder
{
if (!(self = [super initWithCoder:coder]))
{
return nil;
}
[self commonInit];
return self;
}
- (void)commonInit;
{
// Set scaling to account for Retina display
if ([self respondsToSelector:@selector(setContentScaleFactor:)])
{
self.contentScaleFactor = [[UIScreen mainScreen] scale];
}
inputRotation = kGPUImageNoRotation;
self.opaque = YES;
self.hidden = NO;
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
eaglLayer.opaque = YES;
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
self.enabled = YES;
runSynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext useImageProcessingContext];
displayProgram = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:kGPUImageVertexShaderString fragmentShaderString:kGPUImagePassthroughFragmentShaderString];
if (!displayProgram.initialized)
{
[displayProgram addAttribute:@"position"];
[displayProgram addAttribute:@"inputTextureCoordinate"];
if (![displayProgram link])
{
NSString *progLog = [displayProgram programLog];
NSLog(@"Program link log: %@", progLog);
NSString *fragLog = [displayProgram fragmentShaderLog];
NSLog(@"Fragment shader compile log: %@", fragLog);
NSString *vertLog = [displayProgram vertexShaderLog];
NSLog(@"Vertex shader compile log: %@", vertLog);
displayProgram = nil;
NSAssert(NO, @"Filter shader link failed");
}
}
displayPositionAttribute = [displayProgram attributeIndex:@"position"];
displayTextureCoordinateAttribute = [displayProgram attributeIndex:@"inputTextureCoordinate"];
displayInputTextureUniform = [displayProgram uniformIndex:@"inputImageTexture"]; // This does assume a name of "inputImageTexture" for the fragment shader
[GPUImageContext setActiveShaderProgram:displayProgram];
glEnableVertexAttribArray(displayPositionAttribute);
glEnableVertexAttribArray(displayTextureCoordinateAttribute);
[self setBackgroundColorRed:0.0 green:0.0 blue:0.0 alpha:1.0];
_fillMode = kGPUImageFillModePreserveAspectRatio;
[self createDisplayFramebuffer];
});
}
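Step 6, adjusting the vertex coordinates for the fill mode, happens outside commonInit. Below is a simplified sketch of the idea, not the exact GPUImageView code, assuming an inputImageSize variable and AVFoundation's AVMakeRectWithAspectRatioInsideRect for the aspect-fit math:
#import <AVFoundation/AVFoundation.h>
// Simplified sketch: for kGPUImageFillModePreserveAspectRatio, fit the image
// inside the view and shrink the quad's normalized device coordinates to match.
CGRect insetRect = AVMakeRectWithAspectRatioInsideRect(inputImageSize, self.bounds);
GLfloat widthScaling  = insetRect.size.width  / CGRectGetWidth(self.bounds);
GLfloat heightScaling = insetRect.size.height / CGRectGetHeight(self.bounds);
GLfloat imageVertices[8] = {
    -widthScaling, -heightScaling, // bottom left
     widthScaling, -heightScaling, // bottom right
    -widthScaling,  heightScaling, // top left
     widthScaling,  heightScaling, // top right
};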
- Method list
// Set the background color
- (void)setBackgroundColorRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent alpha:(GLfloat)alphaComponent;
// This method is left unimplemented
- (void)setCurrentlyReceivingMonochromeInput:(BOOL)newValue;
Implementations. They are fairly straightforward; the ones worth reading are below.
- (void)setBackgroundColorRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent alpha:(GLfloat)alphaComponent;
{
backgroundColorRed = redComponent;
backgroundColorGreen = greenComponent;
backgroundColorBlue = blueComponent;
backgroundColorAlpha = alphaComponent;
}
- (void)setCurrentlyReceivingMonochromeInput:(BOOL)newValue;
{
}
// Return the texture coordinates for a given rotation mode
+ (const GLfloat *)textureCoordinatesForRotation:(GPUImageRotationMode)rotationMode;
{
// static const GLfloat noRotationTextureCoordinates[] = {
// 0.0f, 0.0f,
// 1.0f, 0.0f,
// 0.0f, 1.0f,
// 1.0f, 1.0f,
// };
static const GLfloat noRotationTextureCoordinates[] = {
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
};
static const GLfloat rotateRightTextureCoordinates[] = {
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
0.0f, 0.0f,
};
static const GLfloat rotateLeftTextureCoordinates[] = {
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
1.0f, 1.0f,
};
static const GLfloat verticalFlipTextureCoordinates[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
};
static const GLfloat horizontalFlipTextureCoordinates[] = {
1.0f, 1.0f,
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f,
};
static const GLfloat rotateRightVerticalFlipTextureCoordinates[] = {
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 0.0f,
0.0f, 1.0f,
};
static const GLfloat rotateRightHorizontalFlipTextureCoordinates[] = {
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 1.0f,
1.0f, 0.0f,
};
static const GLfloat rotate180TextureCoordinates[] = {
1.0f, 0.0f,
0.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
};
switch(rotationMode)
{
case kGPUImageNoRotation: return noRotationTextureCoordinates;
case kGPUImageRotateLeft: return rotateLeftTextureCoordinates;
case kGPUImageRotateRight: return rotateRightTextureCoordinates;
case kGPUImageFlipVertical: return verticalFlipTextureCoordinates;
case kGPUImageFlipHorizonal: return horizontalFlipTextureCoordinates;
case kGPUImageRotateRightFlipVertical: return rotateRightVerticalFlipTextureCoordinates;
case kGPUImageRotateRightFlipHorizontal: return rotateRightHorizontalFlipTextureCoordinates;
case kGPUImageRotate180: return rotate180TextureCoordinates;
}
}
// Implements the GPUImageInput protocol method; it performs the OpenGL drawing and presents the result on screen
- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
{
runSynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext setActiveShaderProgram:displayProgram];
[self setDisplayFramebuffer];
// Clear the screen
glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glActiveTexture(GL_TEXTURE4);
glBindTexture(GL_TEXTURE_2D, [inputFramebufferForDisplay texture]);
glUniform1i(displayInputTextureUniform, 4);
glVertexAttribPointer(displayPositionAttribute, 2, GL_FLOAT, 0, 0, imageVertices);
glVertexAttribPointer(displayTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, [GPUImageView textureCoordinatesForRotation:inputRotation]);
// Draw
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Present
[self presentFramebuffer];
[inputFramebufferForDisplay unlock];
inputFramebufferForDisplay = nil;
});
}
// Present the framebuffer on screen
- (void)presentFramebuffer;
{
glBindRenderbuffer(GL_RENDERBUFFER, displayRenderbuffer);
[[GPUImageContext sharedImageProcessingContext] presentBufferForDisplay];
}
GPUImageUIElement
Like GPUImagePicture, GPUImageUIElement can serve as the source of a filter chain. Unlike GPUImagePicture, its data comes not from an image but from the rendered contents of a UIView or CALayer, much like taking a snapshot of the view or layer. GPUImageUIElement inherits from GPUImageOutput, so it can act as an output; since it does not implement the GPUImageInput protocol, it cannot accept input.
- Initialization
- (id)initWithView:(UIView *)inputView;
- (id)initWithLayer:(CALayer *)inputLayer;
Initialization takes a UIView or CALayer; during initialization [layer renderInContext:imageContext] is called to render the layer, and the rendered bitmap is then uploaded as a texture object.
- (id)initWithView:(UIView *)inputView;
{
if (!(self = [super init]))
{
return nil;
}
view = inputView;
layer = inputView.layer;
previousLayerSizeInPixels = CGSizeZero;
[self update];
return self;
}
- (id)initWithLayer:(CALayer *)inputLayer;
{
if (!(self = [super init]))
{
return nil;
}
view = nil;
layer = inputLayer;
previousLayerSizeInPixels = CGSizeZero;
[self update];
return self;
}
- Other methods
- (CGSize)layerSizeInPixels;
- (void)update;
- (void)updateUsingCurrentTime;
- (void)updateWithTimestamp:(CMTime)frameTime;
These methods are all about snapshotting the layer into a texture object and handing it to every target for processing.
// Layer size in pixels
- (CGSize)layerSizeInPixels;
{
CGSize pointSize = layer.bounds.size;
return CGSizeMake(layer.contentsScale * pointSize.width, layer.contentsScale * pointSize.height);
}
// Update (snapshot the layer and push a new frame)
- (void)update;
{
[self updateWithTimestamp:kCMTimeIndefinite];
}
// Update using the current time
- (void)updateUsingCurrentTime;
{
if(CMTIME_IS_INVALID(time)) {
time = CMTimeMakeWithSeconds(0, 600);
actualTimeOfLastUpdate = [NSDate timeIntervalSinceReferenceDate];
} else {
NSTimeInterval now = [NSDate timeIntervalSinceReferenceDate];
NSTimeInterval diff = now - actualTimeOfLastUpdate;
time = CMTimeAdd(time, CMTimeMakeWithSeconds(diff, 600));
actualTimeOfLastUpdate = now;
}
[self updateWithTimestamp:time];
}
// Update with an explicit timestamp
- (void)updateWithTimestamp:(CMTime)frameTime;
{
[GPUImageContext useImageProcessingContext];
CGSize layerPixelSize = [self layerSizeInPixels];
GLubyte *imageData = (GLubyte *) calloc(1, (int)layerPixelSize.width * (int)layerPixelSize.height * 4);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)layerPixelSize.width, (int)layerPixelSize.height, 8, (int)layerPixelSize.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// CGContextRotateCTM(imageContext, M_PI_2);
CGContextTranslateCTM(imageContext, 0.0f, layerPixelSize.height);
CGContextScaleCTM(imageContext, layer.contentsScale, -layer.contentsScale);
// CGContextSetBlendMode(imageContext, kCGBlendModeCopy); // From Technical Q&A QA1708: http://developer.apple.com/library/ios/#qa/qa1708/_index.html
[layer renderInContext:imageContext];
CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);
// TODO: This may not work
outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:layerPixelSize textureOptions:self.outputTextureOptions onlyTexture:YES];
glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
// no need to use self.outputTextureOptions here, we always need these texture options
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)layerPixelSize.width, (int)layerPixelSize.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
free(imageData);
for (id<GPUImageInput> currentTarget in targets)
{
if (currentTarget != self.targetToIgnoreForUpdates)
{
NSInteger indexOfObject = [targets indexOfObject:currentTarget];
NSInteger textureIndexOfTarget = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
[currentTarget setInputSize:layerPixelSize atIndex:textureIndexOfTarget];
[currentTarget newFrameReadyAtTime:frameTime atIndex:textureIndexOfTarget];
}
}
}
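If the underlying UIView animates, one of the update methods must be called repeatedly. A minimal sketch, assuming hypothetical _displayLink and _element ivars (a CADisplayLink and the GPUImageUIElement), of refreshing the element once per screen refresh:
- (void)startUpdating
{
    _displayLink = [CADisplayLink displayLinkWithTarget:self
                                               selector:@selector(refreshElement)];
    [_displayLink addToRunLoop:[NSRunLoop mainRunLoop]
                       forMode:NSRunLoopCommonModes];
}
- (void)refreshElement
{
    // updateUsingCurrentTime accumulates the real elapsed time between calls,
    // so downstream targets receive monotonically increasing frame times.
    [_element updateUsingCurrentTime];
}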
Sample code
- GPUImagePicture with GPUImageView. (Result: see GPUImagePicture.png.)
@interface ViewController ()
@property (weak, nonatomic) IBOutlet GPUImageView *imageView;
@end
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
[_imageView setBackgroundColorRed:1.0 green:1.0 blue:1.0 alpha:1.0];
GPUImagePicture *picture = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"1.jpg"]];
GPUImageGrayscaleFilter *filter = [[GPUImageGrayscaleFilter alloc] init];
[picture addTarget:filter];
[filter addTarget:_imageView];
[filter useNextFrameForImageCapture];
[picture processImage];
}
- GPUImageUIElement with GPUImageView. (Result: see GPUImageUIElement.png.)
@interface SecondViewController ()
@property (weak, nonatomic) IBOutlet GPUImageView *imageView;
@property (weak, nonatomic) IBOutlet UIView *bgView;
@end
@implementation SecondViewController
- (void)viewDidLoad {
[super viewDidLoad];
[_imageView setBackgroundColorRed:1.0 green:1.0 blue:1.0 alpha:1.0];
GPUImageUIElement *element = [[GPUImageUIElement alloc] initWithView:_bgView];
GPUImageHueFilter *filter = [[GPUImageHueFilter alloc] init];
[element addTarget:filter];
[filter addTarget:_imageView];
[filter useNextFrameForImageCapture];
[element update];
}
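This example snapshots _bgView only once. In a live chain (for instance, blending UI over camera video), a common pattern is to refresh the element every time the downstream filter finishes a frame via frameProcessingCompletionBlock, which is declared on GPUImageOutput. A hedged sketch, where blendFilter and element are assumed names:
[blendFilter setFrameProcessingCompletionBlock:^(GPUImageOutput *output, CMTime frameTime) {
    // Re-snapshot the UI element with the video frame's timestamp so the
    // overlay stays in sync with incoming camera frames.
    [element updateWithTimestamp:frameTime];
}];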
Summary
GPUImagePicture, GPUImageView, and GPUImageUIElement come up constantly when processing images, snapshotting UI, and displaying results, so being familiar with them goes a long way toward understanding the GPUImage framework.
Source code (GPUImage source reading series): https://github.com/QinminiOS/GPUImage
Series articles (GPUImage source reading): http://www.reibang.com/nb/11749791