1. The Backing Image (Bitmap)
I had read the article "內存惡鬼drawRect" (drawRect, the memory demon) before and verified its claims myself; they do hold, but it took me quite a while to understand why.
Every UIView instance has a default backing layer, which the UIView creates and manages. It is actually this CALayer that gets displayed on screen; UIView is only a wrapper around it that implements the CALayer delegate and adds the concrete event-handling behavior.
CALayer itself is just an ordinary class and cannot render to the screen directly either, because everything you see on screen is ultimately a bitmap. The reason a CALayer's content becomes visible is its contents property. contents is declared as id, so in principle any object can be assigned to it, but only when you assign a CGImage does it display correctly on screen. contents is also called the backing image. Besides assigning a CGImage to it, we can also draw into it directly: if UIView detects that -drawRect: is implemented, it allocates a backing image for the view whose pixel size equals the view's size multiplied by contentsScale.
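As a small illustration of the contents property (a minimal sketch of my own; the asset name is hypothetical), assigning a CGImage to a plain layer is enough to make it visible:

- (void)showBackingImageExample {
    UIImage *sunsetImage = [UIImage imageNamed:@"sunset"]; // hypothetical asset name
    CALayer *layer = [CALayer layer];
    layer.frame = CGRectMake(0, 0, 200, 200);
    layer.contentsScale = [UIScreen mainScreen].scale;      // match the screen scale
    layer.contents = (__bridge id)sunsetImage.CGImage;      // the backing image
    [self.view.layer addSublayer:layer];
}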
Why is it designed this way?
My guess is that it is for display performance. The article "iOS 保持界面流暢的技巧" (Tips for Keeping the iOS UI Smooth) points out that image decoding is fairly CPU-intensive; to keep the UI smooth, an image should be decoded ahead of time, i.e. the displayed image should be created directly from a bitmap (the backing image).
When you create an image with UIImage or with the CGImageSource family of functions, the image data is not decoded right away. Only when the image is set on a UIImageView or assigned to CALayer.contents, and the CALayer is about to be committed to the GPU, is the data inside the CGImage decoded. This step happens on the main thread and cannot be avoided. To work around this mechanism, the common trick is to draw the image into a CGBitmapContext on a background thread first and then create the image directly from that bitmap. Most popular networking image libraries ship with this feature.
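A minimal sketch of that force-decode trick (my own illustration, not taken from any particular library): draw the CGImage into a bitmap context on a background queue, then build a new, already-decoded UIImage from the result. SDWebImage's decompression path shown in section 4 follows the same pattern.

// Force-decode on a background queue so the main thread does not pay the decode cost at display time.
- (void)decodeImage:(UIImage *)image completion:(void (^)(UIImage *decoded))completion {
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        CGImageRef imageRef = image.CGImage;
        size_t width = CGImageGetWidth(imageRef);
        size_t height = CGImageGetHeight(imageRef);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                                     kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(colorSpace);
        UIImage *decoded = image;
        if (context) {
            // The actual decoding happens here, off the main thread.
            CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
            CGImageRef decodedRef = CGBitmapContextCreateImage(context);
            decoded = [UIImage imageWithCGImage:decodedRef scale:image.scale orientation:image.imageOrientation];
            CGImageRelease(decodedRef);
            CGContextRelease(context);
        }
        dispatch_async(dispatch_get_main_queue(), ^{ completion(decoded); });
    });
}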
2. UIGraphicsBeginImageContext
I had also read an article on WeChat's iOS memory monitoring ("iOS微信內存監控"), which mentions this approach for handling large images:
- (UIImage *)scaleImage:(UIImage *)image newSize:(CGSize)newSize {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
However, this tends to OOM when processing high-resolution images, because -[UIImage drawInRect:] first decodes the image and then produces a bitmap at the original resolution, which is very memory-hungry. The fix is to use the lower-level ImageIO API, which avoids the intermediate full-size bitmap:
+ (UIImage *)scaleImageWithData:(NSData *)data withSize:(CGSize)size
                          scale:(CGFloat)scale
                    orientation:(UIImageOrientation)orientation {
    CGFloat maxPixelSize = MAX(size.width, size.height);
    CGImageSourceRef sourceRef = CGImageSourceCreateWithData((__bridge CFDataRef)data, nil);
    NSDictionary *options = @{(__bridge id)kCGImageSourceCreateThumbnailFromImageAlways: (__bridge id)kCFBooleanTrue,
                              (__bridge id)kCGImageSourceThumbnailMaxPixelSize: [NSNumber numberWithFloat:maxPixelSize]};
    CGImageRef imageRef = CGImageSourceCreateThumbnailAtIndex(sourceRef, 0, (__bridge CFDictionaryRef)options);
    UIImage *resultImage = [UIImage imageWithCGImage:imageRef scale:scale orientation:orientation];
    CGImageRelease(imageRef);
    CFRelease(sourceRef);
    return resultImage;
}
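For reference, a call site might look like this (a minimal sketch; the file path is a placeholder and Downsampler stands in for whatever class hosts the method):

NSData *data = [NSData dataWithContentsOfFile:@"/path/to/large-photo.jpg"]; // hypothetical path
UIImage *thumb = [Downsampler scaleImageWithData:data
                                        withSize:CGSizeMake(300, 300)
                                           scale:[UIScreen mainScreen].scale
                                     orientation:UIImageOrientationUp];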
// Or the following method (a category on UIImage that crops instead of scaling):
- (UIImage *)imageByCropToRect:(CGRect)rect {
    rect.origin.x *= self.scale;
    rect.origin.y *= self.scale;
    rect.size.width *= self.scale;
    rect.size.height *= self.scale;
    if (rect.size.width <= 0 || rect.size.height <= 0) return nil;
    CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, rect);
    UIImage *image = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return image;
}
The downside of both methods is that the resulting image is still only decoded at the moment it is actually displayed, which shifts the cost back onto the CPU at display time.
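If that decode-at-display cost matters, one option (my assumption, not taken from the original article) is to ask ImageIO to decode the thumbnail up front by adding kCGImageSourceShouldCacheImmediately to the options dictionary in scaleImageWithData: above:

NSDictionary *options = @{(__bridge id)kCGImageSourceCreateThumbnailFromImageAlways: (__bridge id)kCFBooleanTrue,
                          (__bridge id)kCGImageSourceShouldCacheImmediately: (__bridge id)kCFBooleanTrue, // decode now, not at display time
                          (__bridge id)kCGImageSourceThumbnailMaxPixelSize: @(maxPixelSize)};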
The following is excerpted from the system headers:
// UIImage context
// The following methods will only return a 8-bit per channel context in the DeviceRGB color space.
// Any new bitmap drawing code is encouraged to use UIGraphicsImageRenderer in lieu of this API.
UIKIT_EXTERN void UIGraphicsBeginImageContext(CGSize size);
UIKIT_EXTERN void UIGraphicsBeginImageContextWithOptions(CGSize size, BOOL opaque, CGFloat scale) NS_AVAILABLE_IOS(4_0);
UIKIT_EXTERN UIImage* __nullable UIGraphicsGetImageFromCurrentImageContext(void);
UIKIT_EXTERN void UIGraphicsEndImageContext(void);
/* Create a bitmap context. The context draws into a bitmap which is `width'
pixels wide and `height' pixels high. The number of components for each
pixel is specified by `space', which may also specify a destination color
profile. The number of bits for each component of a pixel is specified by
`bitsPerComponent'. The number of bytes per pixel is equal to
`(bitsPerComponent * number of components + 7)/8'. Each row of the bitmap
consists of `bytesPerRow' bytes, which must be at least `width * bytes
per pixel' bytes; in addition, `bytesPerRow' must be an integer multiple
of the number of bytes per pixel. `data', if non-NULL, points to a block
of memory at least `bytesPerRow * height' bytes. If `data' is NULL, the
data for context is allocated automatically and freed when the context is
deallocated. `bitmapInfo' specifies whether the bitmap should contain an
alpha channel and how it's to be generated, along with whether the
components are floating-point or integer. */
CG_EXTERN CGContextRef __nullable CGBitmapContextCreate(void * __nullable data,
    size_t width, size_t height, size_t bitsPerComponent, size_t bytesPerRow,
    CGColorSpaceRef cg_nullable space, uint32_t bitmapInfo)
    CG_AVAILABLE_STARTING(__MAC_10_0, __IPHONE_2_0);
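The UIKit excerpt above encourages UIGraphicsImageRenderer for new bitmap drawing code. For reference, a sketch of the earlier scaleImage: helper rewritten with it (assuming iOS 10+, where UIGraphicsImageRenderer is available) might look like this:

- (UIImage *)rendererScaleImage:(UIImage *)image newSize:(CGSize)newSize {
    UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:newSize];
    return [renderer imageWithActions:^(UIGraphicsImageRendererContext *ctx) {
        [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    }];
}

The renderer manages the context's lifetime itself, so there is no Begin/End pair to forget.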
UIGraphicsBeginImageContextWithOptions: in my tests, creating a context with a 5000 × 5000 size makes memory climb sharply, by roughly 5000 * 5000 * scale^2 * 4 bytes (about 400 MB at a scale of 2).
CGBitmapContextCreate: in the same test with a 5000 × 5000 size, memory also climbs sharply, by roughly 5000 * 5000 * bytes-per-pixel (about 100 MB for an 8-bit-per-component RGBA context; note there is no extra screen-scale factor here).
If the goal of a screenshot is to save it to the photo album rather than to display it, prefer something lower-level than UIGraphicsBeginImageContext; if it is for display, prefer UIGraphicsBeginImageContext, but make sure every UIGraphicsBeginImageContext call is paired with a UIGraphicsEndImageContext.
三络凿、截屏方案
The commonly used method looks like this:
+ (UIImage *)snapshottingWithView:(UIView *)inputView {
    UIGraphicsBeginImageContextWithOptions(inputView.frame.size, inputView.opaque, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [inputView.layer renderInContext:context];
    UIImage *targetImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return targetImage;
}
This made me curious about how other people implement screenshots of very long pages. With high-resolution content, using the approach above directly makes OOM very likely under certain conditions.
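One way to bound the memory cost (my own sketch under stated assumptions, not a scheme taken from any article) is to render the long content at a reduced scale, so the backing bitmap stays within a pixel budget:

// Render a scroll view's full content at a scale chosen so the bitmap stays under maxPixels pixels.
+ (UIImage *)snapshotLongScrollView:(UIScrollView *)scrollView maxPixels:(CGFloat)maxPixels {
    CGSize contentSize = scrollView.contentSize;
    CGFloat screenScale = [UIScreen mainScreen].scale;
    CGFloat fullPixels = contentSize.width * contentSize.height * screenScale * screenScale;
    // 0.0 means "use the screen scale"; otherwise shrink the scale so the bitmap fits the budget.
    CGFloat scale = fullPixels > maxPixels ? screenScale * sqrt(maxPixels / fullPixels) : 0.0;
    UIGraphicsBeginImageContextWithOptions(contentSize, scrollView.opaque, scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGPoint savedOffset = scrollView.contentOffset;
    CGRect savedFrame = scrollView.frame;
    // Temporarily expand the frame so the whole content is laid out, then render once.
    scrollView.contentOffset = CGPointZero;
    scrollView.frame = CGRectMake(0, 0, contentSize.width, contentSize.height);
    [scrollView.layer renderInContext:context];
    scrollView.frame = savedFrame;
    scrollView.contentOffset = savedOffset;
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}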
四怨愤、常用圖片框架SDWebImage
4.3.0
/**
* By default, images are decoded respecting their original size. On iOS, this flag will scale down the
* images to a size compatible with the constrained memory of devices.
* If `SDWebImageProgressiveDownload` flag is set the scale down is deactivated.
*/
SDWebImageScaleDownLargeImages = 1 << 12,
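Enabling the flag at the call site is straightforward; a typical usage (imageView and url are assumed to exist already) would be:

[imageView sd_setImageWithURL:url
             placeholderImage:nil
                      options:SDWebImageScaleDownLargeImages];

The framework's incremental (progressive) decoding method follows: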
- (UIImage *)incrementallyDecodedImageWithData:(NSData *)data finished:(BOOL)finished {
    if (!_imageSource) {
        _imageSource = CGImageSourceCreateIncremental(NULL);
    }
    UIImage *image;
    // The following code is from http://www.cocoaintheshell.com/2011/05/progressive-images-download-imageio/
    // Thanks to the author @Nyx0uf
    // Update the data source, we must pass ALL the data, not just the new bytes
    CGImageSourceUpdateData(_imageSource, (__bridge CFDataRef)data, finished);
    if (_width + _height == 0) {
        CFDictionaryRef properties = CGImageSourceCopyPropertiesAtIndex(_imageSource, 0, NULL);
        if (properties) {
            NSInteger orientationValue = 1;
            CFTypeRef val = CFDictionaryGetValue(properties, kCGImagePropertyPixelHeight);
            if (val) CFNumberGetValue(val, kCFNumberLongType, &_height);
            val = CFDictionaryGetValue(properties, kCGImagePropertyPixelWidth);
            if (val) CFNumberGetValue(val, kCFNumberLongType, &_width);
            val = CFDictionaryGetValue(properties, kCGImagePropertyOrientation);
            if (val) CFNumberGetValue(val, kCFNumberNSIntegerType, &orientationValue);
            CFRelease(properties);
            // Author's note: this is where I would add my own check on the image and scale _width and _height
            // down proportionally into a reasonable range; still to be verified, and a more elegant approach is welcome.
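            // A hypothetical sketch of that idea (not part of SDWebImage), kept commented out on purpose:
            // halve the target dimensions until the decoded bitmap would stay within a pixel budget.
            // static const size_t kMaxDecodedPixels = 4096 * 4096; // assumed budget
            // while (_width > 0 && _height > 0 && _width * _height > kMaxDecodedPixels) {
            //     _width /= 2;
            //     _height /= 2;
            // }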
            // When we draw to Core Graphics, we lose orientation information,
            // which means the image below born of initWithCGIImage will be
            // oriented incorrectly sometimes. (Unlike the image born of initWithData
            // in didCompleteWithError.) So save it here and pass it on later.
#if SD_UIKIT || SD_WATCH
            _orientation = [SDWebImageCoderHelper imageOrientationFromEXIFOrientation:orientationValue];
#endif
        }
    }
    if (_width + _height > 0) {
        // Create the image
        CGImageRef partialImageRef = CGImageSourceCreateImageAtIndex(_imageSource, 0, NULL);
#if SD_UIKIT || SD_WATCH
        // Workaround for iOS anamorphic image
        if (partialImageRef) {
            const size_t partialHeight = CGImageGetHeight(partialImageRef);
            CGColorSpaceRef colorSpace = SDCGColorSpaceGetDeviceRGB();
            CGContextRef bmContext = CGBitmapContextCreate(NULL, _width, _height, 8, 0, colorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
            if (bmContext) {
                CGContextDrawImage(bmContext, (CGRect){.origin.x = 0.0f, .origin.y = 0.0f, .size.width = _width, .size.height = partialHeight}, partialImageRef);
                CGImageRelease(partialImageRef);
                partialImageRef = CGBitmapContextCreateImage(bmContext);
                CGContextRelease(bmContext);
            }
            else {
                CGImageRelease(partialImageRef);
                partialImageRef = nil;
            }
        }
#endif
        if (partialImageRef) {
#if SD_UIKIT || SD_WATCH
            image = [[UIImage alloc] initWithCGImage:partialImageRef scale:1 orientation:_orientation];
#elif SD_MAC
            image = [[UIImage alloc] initWithCGImage:partialImageRef size:NSZeroSize];
#endif
            CGImageRelease(partialImageRef);
        }
    }
    if (finished) {
        if (_imageSource) {
            CFRelease(_imageSource);
            _imageSource = NULL;
        }
    }
    return image;
}
- (nullable UIImage *)sd_decompressedImageWithImage:(nullable UIImage *)image {
    if (![[self class] shouldDecodeImage:image]) {
        return image;
    }
    // autorelease the bitmap context and all vars to help system to free memory when there are memory warning.
    // on iOS7, do not forget to call [[SDImageCache sharedImageCache] clearMemory];
    @autoreleasepool{
        CGImageRef imageRef = image.CGImage;
        CGColorSpaceRef colorspaceRef = [[self class] colorSpaceForImageRef:imageRef];
        size_t width = CGImageGetWidth(imageRef);
        size_t height = CGImageGetHeight(imageRef);
        // kCGImageAlphaNone is not supported in CGBitmapContextCreate.
        // Since the original image here has no alpha info, use kCGImageAlphaNoneSkipLast
        // to create bitmap graphics contexts without alpha info.
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     width,
                                                     height,
                                                     kBitsPerComponent,
                                                     0,
                                                     colorspaceRef,
                                                     kCGBitmapByteOrderDefault|kCGImageAlphaNoneSkipLast);
        if (context == NULL) {
            return image;
        }
        // Draw the image into the context and retrieve the new bitmap image without alpha
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
        CGImageRef imageRefWithoutAlpha = CGBitmapContextCreateImage(context);
        UIImage *imageWithoutAlpha = [[UIImage alloc] initWithCGImage:imageRefWithoutAlpha scale:image.scale orientation:image.imageOrientation];
        CGContextRelease(context);
        CGImageRelease(imageRefWithoutAlpha);
        return imageWithoutAlpha;
    }
}
This decoding path never checks the image dimensions, so when the returned image is large and has a very high pixel count (for example a high-resolution photo from a digital camera, tens of megabytes per image), memory spikes and an OOM follows.
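A defensive tweak (my own sketch; the pixel budget and the helper name are assumptions, and SDWebImage's SDWebImageScaleDownLargeImages path exists for the same purpose) would be to skip eager decompression when the decoded bitmap would be too large:

// Hypothetical guard before force-decoding: bail out when the decoded bitmap would exceed a budget.
static const size_t kMaxDecompressPixels = 4096 * 4096; // assumed budget, roughly 64 MB at 4 bytes per pixel

- (nullable UIImage *)sd_safeDecompressedImageWithImage:(nullable UIImage *)image {
    if (!image.CGImage) {
        return image;
    }
    size_t pixels = CGImageGetWidth(image.CGImage) * CGImageGetHeight(image.CGImage);
    if (pixels > kMaxDecompressPixels) {
        // Too large to decompress eagerly; return it as-is (or route it to a scaled-down decode instead).
        return image;
    }
    return [self sd_decompressedImageWithImage:image];
}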