Overview
GPUImage is a well-known open-source image-processing library that lets you apply GPU-accelerated filters and other effects to images, video, and the camera. Compared with the Core Image framework, GPUImage also lets you build fully custom filters on top of the interfaces it provides. Project address: https://github.com/BradLarson/GPUImage
This article reads through the source of two important classes in the GPUImage framework, GPUImageFramebuffer and GPUImageFramebufferCache. They handle framebuffer management in GPUImage and are the foundation that every filter builds on. The source files are:
GPUImageFramebuffer
GPUImageFramebufferCache
Preparation
- Bitmap creation. When generating an image from a framebuffer, GPUImage uses the CGImage APIs to build the corresponding bitmap object. The relevant API is described below:
// The CGImage structure represents a pixel bitmap; the image can be edited by manipulating the stored pixel bits
typedef struct CGImage *CGImageRef;
/**
CGImageCreate builds and returns a CGImageRef object.
@param width image width in pixels
@param height image height in pixels
@param bitsPerComponent bits per color component, e.g. 8 for 32-bit RGBA
@param bitsPerPixel total bits per pixel
@param bytesPerRow bytes per row (note: the unit is bytes, not pixels)
@param space color space
@param bitmapInfo bitmap pixel-layout flags
@param provider data provider supplying the pixel bytes
@param decode optional decode (value-remapping) array
@param shouldInterpolate whether to apply pixel interpolation when the image is scaled
@param intent color rendering intent
@return the bitmap
*/
CGImageRef CGImageCreate(size_t width,
size_t height,
size_t bitsPerComponent,
size_t bitsPerPixel,
size_t bytesPerRow,
CGColorSpaceRef space,
CGBitmapInfo bitmapInfo,
CGDataProviderRef provider,
const CGFloat *decode,
bool shouldInterpolate,
CGColorRenderingIntent intent)
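To make the parameters concrete, here is a small, hedged sketch (not taken from GPUImage) that builds a 4x4 red RGBA bitmap from a raw byte buffer and wraps it in a CGImageRef; the helper names are made up for illustration:
#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>
#include <stdlib.h>
// Hypothetical helper: free the pixel buffer once CoreGraphics no longer needs it.
static void ReleasePixelData(void *info, const void *data, size_t size)
{
    free((void *)data);
}
// Hypothetical helper: build a 4x4 opaque red RGBA8888 bitmap and wrap it in a CGImage.
static CGImageRef CreateRedSquareImage(void)
{
    const size_t width = 4, height = 4;
    const size_t bytesPerPixel = 4;                   // RGBA, 8 bits per component
    const size_t bytesPerRow = width * bytesPerPixel; // note: bytes, not pixels
    const size_t totalBytes = bytesPerRow * height;
    uint8_t *pixels = malloc(totalBytes);
    for (size_t i = 0; i < totalBytes; i += bytesPerPixel)
    {
        pixels[i] = 255; pixels[i + 1] = 0; pixels[i + 2] = 0; pixels[i + 3] = 255; // R,G,B,A
    }
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, totalBytes, ReleasePixelData);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef image = CGImageCreate(width, height,
                                     8,            // bitsPerComponent
                                     32,           // bitsPerPixel
                                     bytesPerRow,
                                     colorSpace,
                                     kCGBitmapByteOrderDefault | kCGImageAlphaLast,
                                     provider,
                                     NULL,         // decode array
                                     NO,           // shouldInterpolate
                                     kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    return image; // caller owns the CGImageRef (Core Foundation "create" rule)
}
This mirrors the way newCGImageFromFramebufferContents (shown later) packages the bytes it reads back from a framebuffer.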
GPUImageFramebuffer
This class is not very complex. It mainly builds on OpenGL ES knowledge about framebuffers and framebuffer attachments (see OpenGL ES入門12-幀緩存). GPUImageFramebuffer manages a framebuffer and its texture attachment, and the texture attachment in turn involves a set of texture options. Its properties therefore relate to the framebuffer, the texture attachment, and the texture options.
- Properties
// Size of the framebuffer
@property(readonly) CGSize size;
// Texture options
@property(readonly) GPUTextureOptions textureOptions;
// OpenGL texture name of the texture attachment
@property(readonly) GLuint texture;
// Whether only a texture exists, with no framebuffer
@property(readonly) BOOL missingFramebuffer;
- Initializers
- (id)initWithSize:(CGSize)framebufferSize;
- (id)initWithSize:(CGSize)framebufferSize textureOptions:(GPUTextureOptions)fboTextureOptions onlyTexture:(BOOL)onlyGenerateTexture;
- (id)initWithSize:(CGSize)framebufferSize overriddenTexture:(GLuint)inputTexture;
- (id)initWithSize:(CGSize)framebufferSize;
{
// Provide default texture options
GPUTextureOptions defaultTextureOptions;
defaultTextureOptions.minFilter = GL_LINEAR;
defaultTextureOptions.magFilter = GL_LINEAR;
defaultTextureOptions.wrapS = GL_CLAMP_TO_EDGE;
defaultTextureOptions.wrapT = GL_CLAMP_TO_EDGE;
defaultTextureOptions.internalFormat = GL_RGBA;
defaultTextureOptions.format = GL_BGRA;
defaultTextureOptions.type = GL_UNSIGNED_BYTE;
// Initialize with the default texture options, forcing both the framebuffer and the texture attachment to be generated
if (!(self = [self initWithSize:framebufferSize textureOptions:defaultTextureOptions onlyTexture:NO]))
{
return nil;
}
return self;
}
- (id)initWithSize:(CGSize)framebufferSize textureOptions:(GPUTextureOptions)fboTextureOptions onlyTexture:(BOOL)onlyGenerateTexture;
{
if (!(self = [super init]))
{
return nil;
}
// Texture options
_textureOptions = fboTextureOptions;
_size = framebufferSize;
framebufferReferenceCount = 0;
referenceCountingDisabled = NO;
// Whether to generate only the texture
_missingFramebuffer = onlyGenerateTexture;
// If only a texture is requested, no framebuffer is generated
if (_missingFramebuffer)
{
runSynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext useImageProcessingContext];
[self generateTexture];
framebuffer = 0;
});
}
// Otherwise generate both the texture and the framebuffer
else
{
[self generateFramebuffer];
}
return self;
}
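The private -generateTexture and -generateFramebuffer methods called above are not reproduced in this article. As a rough, hedged sketch of what such setup typically involves (the function name, ordering, and error handling below are my own, following the standard OpenGL ES FBO recipe rather than GPUImage's exact code):
#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>
// Hypothetical helper: create a framebuffer whose color attachment is a newly
// allocated texture of the given size, configured like the default texture options.
static GLuint CreateFramebufferWithTextureAttachment(CGSize size, GLuint *outTexture)
{
    GLuint texture = 0, framebuffer = 0;
    // Texture attachment: filtering/wrapping taken from the (default) texture options.
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    // GL_BGRA_EXT corresponds to the GL_BGRA format in the default options (Apple BGRA extension).
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)size.width, (GLsizei)size.height, 0,
                 GL_BGRA_EXT, GL_UNSIGNED_BYTE, NULL); // allocate storage, no initial pixels
    // Framebuffer object with the texture bound as its color attachment.
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
    NSCAssert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE,
              @"Incomplete framebuffer generation");
    *outTexture = texture;
    return framebuffer;
}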
- Method list. The methods fall into four groups: (1) using the current framebuffer, (2) GPUImageFramebuffer reference counting, (3) generating a bitmap from the framebuffer, and (4) accessing the framebuffer's raw data.
// Usage
- (void)activateFramebuffer;
// Reference counting
- (void)lock;
- (void)unlock;
- (void)clearAllLocks;
- (void)disableReferenceCounting;
- (void)enableReferenceCounting;
// Image capture
- (CGImageRef)newCGImageFromFramebufferContents;
- (void)restoreRenderTarget;
// Raw data bytes
- (void)lockForReading;
- (void)unlockAfterReading;
- (NSUInteger)bytesPerRow;
- (GLubyte *)byteBuffer;
- (CVPixelBufferRef)pixelBuffer;
- Activation. Before a framebuffer can be used it must first be activated (i.e. bound as the current framebuffer); only then can subsequent operations act on it.
- (void)activateFramebuffer;
{
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
// The viewport has to be set when the framebuffer is activated
glViewport(0, 0, (int)_size.width, (int)_size.height);
}
- Reference counting.
When reference counting is enabled, each call to lock increments the reference count by one.
- (void)lock;
{
if (referenceCountingDisabled)
{
return;
}
framebufferReferenceCount++;
}
With reference counting enabled, unlock decrements the count; once the count drops below 1, the framebuffer calls returnFramebufferToCache: to put itself back into the GPUImageFramebufferCache for later reuse rather than being destroyed. Actual destruction happens in the private destroyFramebuffer method, which is not exposed in the header and is called from dealloc. (A usage sketch of the lock/unlock pairing follows the unlock implementation below.)
- (void)unlock;
{
if (referenceCountingDisabled)
{
return;
}
NSAssert(framebufferReferenceCount > 0, @"Tried to overrelease a framebuffer, did you forget to call -useNextFrameForImageCapture before using -imageFromCurrentFramebuffer?");
framebufferReferenceCount--;
if (framebufferReferenceCount < 1)
{
[[GPUImageContext sharedFramebufferCache] returnFramebufferToCache:self];
}
}
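To make the contract concrete, here is a hedged, hypothetical sketch of how a filter-like consumer pairs these calls (the method and variable names are invented for illustration and are not GPUImage's actual filter code):
// Hypothetical render method showing the lock/unlock pairing.
- (void)renderFromInputFramebuffer:(GPUImageFramebuffer *)inputFramebuffer
{
    [inputFramebuffer lock];   // keep the input alive while its texture is sampled

    // fetchFramebufferForSize:onlyTexture: already calls -lock once on the framebuffer it returns.
    GPUImageFramebuffer *outputFramebuffer =
        [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:inputFramebuffer.size
                                                              onlyTexture:NO];
    [outputFramebuffer activateFramebuffer];

    // ... issue the draw calls that sample inputFramebuffer.texture ...

    [inputFramebuffer unlock]; // count drops below 1 -> the input returns to the cache
    // outputFramebuffer stays locked until its consumer calls -unlock in turn.
}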
Clearing all locks and enabling/disabling reference counting look like this:
- (void)clearAllLocks;
{
framebufferReferenceCount = 0;
}
- (void)disableReferenceCounting;
{
referenceCountingDisabled = YES;
}
- (void)enableReferenceCounting;
{
referenceCountingDisabled = NO;
}
- Generating an image from the framebuffer. When reading back the image data, GPUImage uses either CVPixelBufferGetBaseAddress (if the device supports the Core Video fast texture path) or glReadPixels to fetch the framebuffer's contents, and finally creates and returns a CGImage object via CGImageCreate.
- (CGImageRef)newCGImageFromFramebufferContents;
{
// a CGImage can only be created from a 'normal' color texture
NSAssert(self.textureOptions.internalFormat == GL_RGBA, @"For conversion to a CGImage the output texture format for this filter must be GL_RGBA.");
NSAssert(self.textureOptions.type == GL_UNSIGNED_BYTE, @"For conversion to a CGImage the type of the output texture of this filter must be GL_UNSIGNED_BYTE.");
__block CGImageRef cgImageFromBytes;
// Run synchronously on the video-processing queue
runSynchronouslyOnVideoProcessingQueue(^{
// Set up the OpenGL ES context
[GPUImageContext useImageProcessingContext];
// Total image size = framebuffer width * height * bytes per pixel
NSUInteger totalBytesForImage = (int)_size.width * (int)_size.height * 4;
// It appears that the width of a texture must be padded out to be a multiple of 8 (32 bytes) if reading from it using a texture cache
GLubyte *rawImagePixels;
CGDataProviderRef dataProvider = NULL;
// Check whether Core Video fast texture upload is supported
if ([GPUImageContext supportsFastTextureUpload])
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
// Padded image width = bytes per row / bytes per pixel
NSUInteger paddedWidthOfImage = CVPixelBufferGetBytesPerRow(renderTarget) / 4.0;
// Padded image size = padded width * height * bytes per pixel
NSUInteger paddedBytesForImage = paddedWidthOfImage * (int)_size.height * 4;
// Wait for all OpenGL commands to finish executing (unlike glFlush, which does not block)
glFinish();
CFRetain(renderTarget); // I need to retain the pixel buffer here and release in the data source callback to prevent its bytes from being prematurely deallocated during a photo write operation
[self lockForReading];
rawImagePixels = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
// Create the CGDataProviderRef
dataProvider = CGDataProviderCreateWithData((__bridge_retained void*)self, rawImagePixels, paddedBytesForImage, dataProviderUnlockCallback);
[[GPUImageContext sharedFramebufferCache] addFramebufferToActiveImageCaptureList:self]; // In case the framebuffer is swapped out on the filter, need to have a strong reference to it somewhere for it to hang on while the image is in existence
#else
#endif
}
else
{
// Activate the framebuffer
[self activateFramebuffer];
rawImagePixels = (GLubyte *)malloc(totalBytesForImage);
// Read the image data back from the current framebuffer
glReadPixels(0, 0, (int)_size.width, (int)_size.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels);
// Create the CGDataProvider
dataProvider = CGDataProviderCreateWithData(NULL, rawImagePixels, totalBytesForImage, dataProviderReleaseCallback);
// Once the data has been read, the framebuffer no longer needs to be held
[self unlock]; // Don't need to keep this around anymore
}
CGColorSpaceRef defaultRGBColorSpace = CGColorSpaceCreateDeviceRGB();
if ([GPUImageContext supportsFastTextureUpload])
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
// Create the CGImage
cgImageFromBytes = CGImageCreate((int)_size.width, (int)_size.height, 8, 32, CVPixelBufferGetBytesPerRow(renderTarget), defaultRGBColorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst, dataProvider, NULL, NO, kCGRenderingIntentDefault);
#else
#endif
}
else
{
// Create the CGImage
cgImageFromBytes = CGImageCreate((int)_size.width, (int)_size.height, 8, 32, 4 * (int)_size.width, defaultRGBColorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, dataProvider, NULL, NO, kCGRenderingIntentDefault);
}
// Capture image with current device orientation
// Release the data provider and the color space
CGDataProviderRelease(dataProvider);
CGColorSpaceRelease(defaultRGBColorSpace);
});
return cgImageFromBytes;
}
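Because the method follows the Cocoa "new..." ownership rule, the caller owns the returned CGImageRef and must release it. A small, hedged usage sketch (the framebuffer variable is assumed to exist):
// Hypothetical caller: wrap the framebuffer contents in a UIImage, then balance the "new" call.
CGImageRef cgImage = [framebuffer newCGImageFromFramebufferContents]; // returned with +1 ownership
UIImage *capturedImage = [UIImage imageWithCGImage:cgImage
                                              scale:[UIScreen mainScreen].scale
                                        orientation:UIImageOrientationUp];
CGImageRelease(cgImage); // UIImage retains what it needs; release our ownership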
- Methods for accessing the framebuffer's raw data. When the device supports the Core Video fast texture path, reading the texture data goes through the methods below; see newCGImageFromFramebufferContents above for how they are used. (A combined usage sketch follows the bytesPerRow implementation.)
- (void)lockForReading
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
if ([GPUImageContext supportsFastTextureUpload])
{
if (readLockCount == 0)
{
// CVPixelBufferLockBaseAddress must be called before the pixel data is accessed from the CPU
CVPixelBufferLockBaseAddress(renderTarget, 0);
}
readLockCount++;
}
#endif
}
- (void)unlockAfterReading
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
if ([GPUImageContext supportsFastTextureUpload])
{
NSAssert(readLockCount > 0, @"Unbalanced call to -[GPUImageFramebuffer unlockAfterReading]");
readLockCount--;
if (readLockCount == 0)
{
// CVPixelBufferUnlockBaseAddress must be called once the access is finished
CVPixelBufferUnlockBaseAddress(renderTarget, 0);
}
}
#endif
}
- (NSUInteger)bytesPerRow;
{
if ([GPUImageContext supportsFastTextureUpload])
{
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
// Bytes per row of the pixel buffer
return CVPixelBufferGetBytesPerRow(renderTarget);
#else
return _size.width * 4; // TODO: do more with this on the non-texture-cache side
#endif
}
else
{
return _size.width * 4;
}
}
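Taken together, lockForReading, byteBuffer, bytesPerRow, and unlockAfterReading give the CPU direct access to the render target's pixels. Below is a hedged illustration of walking those bytes (the framebuffer variable and the transparent-pixel count are hypothetical; note that bytesPerRow may be padded wider than width * 4):
// Hypothetical caller: count fully transparent pixels in the framebuffer's raw BGRA bytes.
[framebuffer lockForReading];                          // lock the pixel buffer's base address

GLubyte *bytes = [framebuffer byteBuffer];
NSUInteger bytesPerRow = [framebuffer bytesPerRow];    // may include row padding
NSUInteger width  = (NSUInteger)framebuffer.size.width;
NSUInteger height = (NSUInteger)framebuffer.size.height;

NSUInteger transparentPixels = 0;
for (NSUInteger y = 0; y < height; y++)
{
    GLubyte *row = bytes + y * bytesPerRow;            // advance by the padded row size
    for (NSUInteger x = 0; x < width; x++)
    {
        if (row[x * 4 + 3] == 0)                       // BGRA layout: byte 3 is alpha
        {
            transparentPixels++;
        }
    }
}

[framebuffer unlockAfterReading];                      // unlock once CPU access is done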
GPUImageFramebufferCache
The core responsibility of GPUImageFramebufferCache is to manage GPUImageFramebuffer objects.
- Instance variables
@interface GPUImageFramebufferCache()
{
// NSCache *framebufferCache;
// Cache dictionary
NSMutableDictionary *framebufferCache;
// Dictionary tracking how many framebuffers of each type are cached
NSMutableDictionary *framebufferTypeCounts;
// Framebuffers currently in use for image capture
NSMutableArray *activeImageCaptureList; // Where framebuffers that may be lost by a filter, but which are still needed for a UIImage, etc., are stored
id memoryWarningObserver;
// Cache queue
dispatch_queue_t framebufferCacheQueue;
}
- Initializer. The initializer sets up the cache containers, creates the cache queue, and registers an observer for system memory warnings.
- (id)init;
{
if (!(self = [super init]))
{
return nil;
}
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
__unsafe_unretained __typeof__ (self) weakSelf = self;
// Observe system memory warnings; when one arrives, purge the cached framebuffers.
memoryWarningObserver = [[NSNotificationCenter defaultCenter] addObserverForName:UIApplicationDidReceiveMemoryWarningNotification object:nil queue:nil usingBlock:^(NSNotification *note) {
__typeof__ (self) strongSelf = weakSelf;
if (strongSelf) {
[strongSelf purgeAllUnassignedFramebuffers];
}
}];
#else
#endif
// Initialize the cache containers
// framebufferCache = [[NSCache alloc] init];
framebufferCache = [[NSMutableDictionary alloc] init];
framebufferTypeCounts = [[NSMutableDictionary alloc] init];
activeImageCaptureList = [[NSMutableArray alloc] init];
framebufferCacheQueue = dispatch_queue_create("com.sunsetlakesoftware.GPUImage.framebufferCacheQueue", GPUImageDefaultQueueAttribute());
return self;
}
- Method list. These methods cover fetching a GPUImageFramebuffer from the cache, returning a GPUImageFramebuffer to the cache, and purging the cache.
- (GPUImageFramebuffer *)fetchFramebufferForSize:(CGSize)framebufferSize textureOptions:(GPUTextureOptions)textureOptions onlyTexture:(BOOL)onlyTexture;
- (GPUImageFramebuffer *)fetchFramebufferForSize:(CGSize)framebufferSize onlyTexture:(BOOL)onlyTexture;
- (void)returnFramebufferToCache:(GPUImageFramebuffer *)framebuffer;
- (void)purgeAllUnassignedFramebuffers;
- (void)addFramebufferToActiveImageCaptureList:(GPUImageFramebuffer *)framebuffer;
- (void)removeFramebufferFromActiveImageCaptureList:(GPUImageFramebuffer *)framebuffer;
- Fetching a GPUImageFramebuffer by framebufferSize, textureOptions, and onlyTexture. If nothing matches in framebufferCache, a new framebuffer is created. "Same type" here means that two GPUImageFramebuffer instances produce the same lookupHash from - (NSString *)hashForSize:(CGSize)size textureOptions:(GPUTextureOptions)textureOptions onlyTexture:(BOOL)onlyTexture (a hedged sketch of this helper follows the method body below).
- (GPUImageFramebuffer *)fetchFramebufferForSize:(CGSize)framebufferSize textureOptions:(GPUTextureOptions)textureOptions onlyTexture:(BOOL)onlyTexture;
{
__block GPUImageFramebuffer *framebufferFromCache = nil;
// dispatch_sync(framebufferCacheQueue, ^{
runSynchronouslyOnVideoProcessingQueue(^{
// Build the lookup hash string
NSString *lookupHash = [self hashForSize:framebufferSize textureOptions:textureOptions onlyTexture:onlyTexture];
// Number of GPUImageFramebuffers of this type currently in the cache
NSNumber *numberOfMatchingTexturesInCache = [framebufferTypeCounts objectForKey:lookupHash];
NSInteger numberOfMatchingTextures = [numberOfMatchingTexturesInCache integerValue];
// Nothing of this type is cached, so create one
if ([numberOfMatchingTexturesInCache integerValue] < 1)
{
// Nothing in the cache, create a new framebuffer to use
framebufferFromCache = [[GPUImageFramebuffer alloc] initWithSize:framebufferSize textureOptions:textureOptions onlyTexture:onlyTexture];
}
else
{
// Something found, pull the old framebuffer and decrement the count
// Something is cached: try the last entry first; if that slot yields nil, fall back to the previous one, and so on.
NSInteger currentTextureID = (numberOfMatchingTextures - 1);
while ((framebufferFromCache == nil) && (currentTextureID >= 0))
{
// Build the textureHash key by appending the index to the lookup hash
NSString *textureHash = [NSString stringWithFormat:@"%@-%ld", lookupHash, (long)currentTextureID];
// Look up a GPUImageFramebuffer stored under textureHash
framebufferFromCache = [framebufferCache objectForKey:textureHash];
// Test the values in the cache first, to see if they got invalidated behind our back
if (framebufferFromCache != nil)
{
// Found one: remove it from the cache while it is in use
// Withdraw this from the cache while it's in use
[framebufferCache removeObjectForKey:textureHash];
}
currentTextureID--;
}
currentTextureID++;
// Update the count of framebuffers of this type in framebufferTypeCounts
[framebufferTypeCounts setObject:[NSNumber numberWithInteger:currentTextureID] forKey:lookupHash];
// Still nothing, so create a new one
if (framebufferFromCache == nil)
{
framebufferFromCache = [[GPUImageFramebuffer alloc] initWithSize:framebufferSize textureOptions:textureOptions onlyTexture:onlyTexture];
}
}
});
// Increment the reference count and return
[framebufferFromCache lock];
return framebufferFromCache;
}
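hashForSize:textureOptions:onlyTexture: itself is a private helper that folds the size and all texture options into one string, with a marker for texture-only framebuffers so they are never confused with full ones. A hedged sketch of that idea (the exact format string in GPUImage may differ):
// Hypothetical reconstruction of the lookup-hash helper (format may differ from GPUImage's).
- (NSString *)hashForSize:(CGSize)size textureOptions:(GPUTextureOptions)textureOptions onlyTexture:(BOOL)onlyTexture;
{
    NSString *baseHash = [NSString stringWithFormat:@"%.1fx%.1f-%d:%d:%d:%d:%d:%d:%d",
                          size.width, size.height,
                          textureOptions.minFilter, textureOptions.magFilter,
                          textureOptions.wrapS, textureOptions.wrapT,
                          textureOptions.internalFormat, textureOptions.format,
                          textureOptions.type];
    // Texture-only entries get their own namespace so they never collide with full framebuffers.
    return onlyTexture ? [baseHash stringByAppendingString:@"-NOFB"] : baseHash;
}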
- Fetching a GPUImageFramebuffer by framebufferSize and onlyTexture using the default GPUTextureOptions. Again, if nothing matches, a new framebuffer is created.
- (GPUImageFramebuffer *)fetchFramebufferForSize:(CGSize)framebufferSize onlyTexture:(BOOL)onlyTexture;
{
GPUTextureOptions defaultTextureOptions;
defaultTextureOptions.minFilter = GL_LINEAR;
defaultTextureOptions.magFilter = GL_LINEAR;
defaultTextureOptions.wrapS = GL_CLAMP_TO_EDGE;
defaultTextureOptions.wrapT = GL_CLAMP_TO_EDGE;
defaultTextureOptions.internalFormat = GL_RGBA;
defaultTextureOptions.format = GL_BGRA;
defaultTextureOptions.type = GL_UNSIGNED_BYTE;
return [self fetchFramebufferForSize:framebufferSize textureOptions:defaultTextureOptions onlyTexture:onlyTexture];
}
- Returning a framebuffer to the cache. The cache keys are built from the size, textureOptions, and onlyTexture flag: in framebufferTypeCounts the key is just the lookupHash, with no index appended, while in framebufferCache the key is the lookupHash plus an index so that framebuffers of the same type do not overwrite each other.
- (void)returnFramebufferToCache:(GPUImageFramebuffer *)framebuffer;
{
// Clear all locks
[framebuffer clearAllLocks];
// dispatch_async(framebufferCacheQueue, ^{
runAsynchronouslyOnVideoProcessingQueue(^{
CGSize framebufferSize = framebuffer.size;
GPUTextureOptions framebufferTextureOptions = framebuffer.textureOptions;
// Build the lookup hash string
NSString *lookupHash = [self hashForSize:framebufferSize textureOptions:framebufferTextureOptions onlyTexture:framebuffer.missingFramebuffer];
// Number of cached framebuffers of this type
NSNumber *numberOfMatchingTexturesInCache = [framebufferTypeCounts objectForKey:lookupHash];
NSInteger numberOfMatchingTextures = [numberOfMatchingTexturesInCache integerValue];
// For framebuffers of the same type, the framebufferCache key is the lookupHash plus an index, so identical GPUImageFramebuffers do not overwrite each other.
NSString *textureHash = [NSString stringWithFormat:@"%@-%ld", lookupHash, (long)numberOfMatchingTextures];
// [framebufferCache setObject:framebuffer forKey:textureHash cost:round(framebufferSize.width * framebufferSize.height * 4.0)];
[framebufferCache setObject:framebuffer forKey:textureHash];
// The framebufferTypeCounts key carries no index
[framebufferTypeCounts setObject:[NSNumber numberWithInteger:(numberOfMatchingTextures + 1)] forKey:lookupHash];
});
}
- Purging the cache when a memory warning arrives.
- (void)purgeAllUnassignedFramebuffers;
{
runAsynchronouslyOnVideoProcessingQueue(^{
// dispatch_async(framebufferCacheQueue, ^{
[framebufferCache removeAllObjects];
[framebufferTypeCounts removeAllObjects];
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
CVOpenGLESTextureCacheFlush([[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache], 0);
#else
#endif
});
}
- Retaining and releasing framebuffers during image capture. While a framebuffer's image data is being read, a strong reference to the GPUImageFramebuffer must be held, and it must be removed once the read is finished; see GPUImageFramebuffer's newCGImageFromFramebufferContents method for details.
- (void)addFramebufferToActiveImageCaptureList:(GPUImageFramebuffer *)framebuffer;
{
runAsynchronouslyOnVideoProcessingQueue(^{
// dispatch_async(framebufferCacheQueue, ^{
[activeImageCaptureList addObject:framebuffer];
});
}
- (void)removeFramebufferFromActiveImageCaptureList:(GPUImageFramebuffer *)framebuffer;
{
runAsynchronouslyOnVideoProcessingQueue(^{
// dispatch_async(framebufferCacheQueue, ^{
[activeImageCaptureList removeObject:framebuffer];
});
}
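Putting the two classes together, a typical render pass borrows a framebuffer from the cache, draws into it, and then unlocks it so it can be reused. The flow below is a hedged, hypothetical sketch rather than code lifted from GPUImage:
// Hypothetical end-to-end flow: fetch -> activate -> render -> unlock.
runSynchronouslyOnVideoProcessingQueue(^{
    [GPUImageContext useImageProcessingContext];

    // 1. Borrow a framebuffer of the required size; fetch returns it already locked once.
    GPUImageFramebuffer *output =
        [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:CGSizeMake(640.0, 480.0)
                                                              onlyTexture:NO];

    // 2. Bind it and render into it.
    [output activateFramebuffer];
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the filter quad here, sampling the previous stage's texture ...

    // 3. Hand output.texture to the next target in the chain, which should -lock it
    //    before use and -unlock it when finished.

    // 4. Drop this pass's reference; once the count falls below 1 the framebuffer
    //    goes back into GPUImageFramebufferCache for reuse.
    [output unlock];
});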
Summary
GPUImageFramebuffer encapsulates the OpenGL ES framebuffer and texture-attachment machinery.
GPUImageFramebufferCache manages GPUImageFramebuffer instances so that they can be reused instead of recreated.
Source code: GPUImage source reading series https://github.com/QinminiOS/GPUImage
Article series: GPUImage source reading http://www.reibang.com/nb/11749791