Part 4
Preface
First, let's get one question clear: why does a UIImage need to be decoded at all? Can't it be used directly?
Actually, it can be used without decoding it ourselves. If we load an image with imageNamed:
, the system by default decodes it immediately on the main thread. Decoding turns the compressed image data into a bitmap that controls can use directly.
So when a lot of imageNamed:
calls land on the main thread, the UI stutters. There are two fairly simple ways to deal with this:
- Don't use imageNamed:
to load the image; use another method instead, such as imageWithContentsOfFile:
- Decode the image ourselves, and move the decoding work onto a background thread
Those two small tips cover the quick wins. Beyond them, we need some basic knowledge about images and about how to decode them.
Image storage
First, image storage is two-dimensional, so we need a way to represent the value at a particular position in the image. Then we need to decide how those values should be quantized. Also, depending on how the image was captured, there are different ways to encode the graphics data. The most intuitive approach is to store it as bitmap data, but that becomes inefficient if you are dealing with a set of geometric shapes. A circle can be described by just three values (two coordinates and a radius); storing it as a bitmap makes the file larger while capturing only a rough approximation.
Unlike bitmaps, which store values in a raster, vector formats store instructions for drawing the image. This is clearly more efficient for simple images that can be reduced to geometric shapes, but it falls short for photographic data. Architects tend to design houses with vectors, because vector formats are not limited to drawing lines: they can also fill with gradients or patterns, so a realistic rendering of the house can be generated entirely from vectors.
The pattern unit used for a fill, however, is better stored as a bitmap, and in such cases a hybrid format may be needed. A very common example of a hybrid format is PostScript (or its now more popular derivative, PDF), which is essentially a description language for drawing images. Those formats target the printing industry, while Display PostScript, developed by NeXT and Adobe, is an instruction set for drawing on screen. PostScript can lay out type, and even bitmaps, which makes it a very flexible format.
Vector images
A major advantage of vector formats is scaling. A vector image is really a set of drawing instructions, and those instructions are usually independent of size. If you want to enlarge a circle, you simply enlarge its radius before drawing it. Bitmaps aren't so easy. At the very least, scaling by anything other than a power of two means redrawing the image, and each element simply grows into a block of color. Since we no longer know the image was a circle, we can't draw the arc precisely, and the result never looks as good as a line drawn at the right scale. This is why vector images are so useful as graphic assets across devices with different pixel densities. With bitmaps, the same icon that looked fine on a pre-Retina iPhone looks blurry when stretched to twice the size on a Retina screen, just as an iPhone-only app loses its sharpness running in 2x mode on an iPad.
Xcode 6 does support PDF assets, but the support is still incomplete: it simply rasterizes them into bitmap images at build time. The most common vector image format is SVG, and there is a library for rendering SVG files on iOS: SVGKit.
Bitmaps
Most images are handled as bitmaps, so from here on that's where we'll focus. The first question is how to represent the two dimensions. All formats use a series of consecutive rows as the unit, and within each row the pixels are stored in horizontal order. Most formats store the rows sequentially, but not all: interlaced formats, which are common, do not follow strict row order. Their advantage is that a partially loaded image can already show a reasonable preview. This mattered in the early days of the Internet; with today's transfer speeds it is no longer a priority.
The simplest way to represent a bitmap is with a binary value per pixel: each pixel is either on or off, so we can pack eight pixels into a single byte, which is very efficient. But since each bit has at most two values, we can store only two colors. Given that real-world colors number in the millions, that doesn't sound very useful. There is one case, though, where exactly this is needed: masks. For example, an image mask can be used for transparency, and in iOS, masks are applied to tab bar icons (even though the actual icons are not one-bit bitmaps).
To add more colors, there are two basic options: use a lookup table, or store real color values directly. A GIF image has a color table (or palette) that can hold up to 256 colors. The values stored in the bitmap are indexes into this lookup table, each pointing to its corresponding color. GIF files are therefore limited to 256 colors. For simple line art or solid-color graphics this is a decent solution, but photos, which need much finer color depth, end up looking unrealistic. A further improvement is the PNG file. This format can use either a preset palette or separate channels, and both support variable color depth. With channels, each pixel's color components (red, green, blue, i.e. RGB, sometimes with an added alpha value, i.e. RGBA) are specified directly.
GIF and PNG are the best choices for images with large areas of identical color, because the compression they use (mostly based on run-length encoding) reduces storage needs. This compression is lossless, meaning image quality is not affected by the compression process.
An example of a lossy image format is JPEG. When creating a JPEG you usually specify a compression parameter tied to image quality; compressing too aggressively degrades the image. JPEG is poorly suited to high-contrast images such as line art, since its compression damages quality particularly badly in such regions. If you save a screenshot containing text as a JPEG, the problem is plain to see: stray pixels appear around the characters in the resulting image. Most photos don't have this problem, which is why photos mainly use JPEG.
In summary: for scaling up and down, vector formats such as SVG are best. High-contrast line art with a limited number of colors suits GIF or PNG (PNG being the more capable of the two), while photos should use JPEG. None of these are unbreakable rules, but as a rule of thumb, following them gives the best results for a given image quality and file size.
Doing something fun
With the knowledge above you can do some fun things, such as pixelating an image or compositing images together. I won't cover the implementations here; if you're interested, search online and you'll find plenty of examples.
+ (nullable UIImage *)decodedImageWithImage:(nullable UIImage *)image
好了展蒂,言歸正傳又活,讀完上邊的內(nèi)容,我們明白了為什么要解碼圖片锰悼,那么這個(gè)方法就是解碼圖片的實(shí)現(xiàn)過程柳骄。這給我們提供了一種思路:我們有時(shí)在優(yōu)化代碼的時(shí)候,可以考慮用這個(gè)方法來處理圖像數(shù)據(jù)箕般。
static const size_t kBytesPerPixel = 4;
static const size_t kBitsPerComponent = 8;
+ (nullable UIImage *)decodedImageWithImage:(nullable UIImage *)image {
    if (![UIImage shouldDecodeImage:image]) {
        return image;
    }

    // autorelease the bitmap context and all vars to help system to free memory when there are memory warning.
    // on iOS7, do not forget to call [[SDImageCache sharedImageCache] clearMemory];
    @autoreleasepool {
        CGImageRef imageRef = image.CGImage;
        CGColorSpaceRef colorspaceRef = [UIImage colorSpaceForImageRef:imageRef];

        size_t width = CGImageGetWidth(imageRef);
        size_t height = CGImageGetHeight(imageRef);
        size_t bytesPerRow = kBytesPerPixel * width;

        // kCGImageAlphaNone is not supported in CGBitmapContextCreate.
        // Since the original image here has no alpha info, use kCGImageAlphaNoneSkipLast
        // to create bitmap graphics contexts without alpha info.
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     width,
                                                     height,
                                                     kBitsPerComponent,
                                                     bytesPerRow,
                                                     colorspaceRef,
                                                     kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast);
        if (context == NULL) {
            return image;
        }

        // Draw the image into the context and retrieve the new bitmap image without alpha
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
        CGImageRef imageRefWithoutAlpha = CGBitmapContextCreateImage(context);
        UIImage *imageWithoutAlpha = [UIImage imageWithCGImage:imageRefWithoutAlpha
                                                         scale:image.scale
                                                   orientation:image.imageOrientation];
        CGContextRelease(context);
        CGImageRelease(imageRefWithoutAlpha);
        return imageWithoutAlpha;
    }
}
Let's go through it line by line:
static const size_t kBytesPerPixel = 4;
kBytesPerPixel
is the number of bytes each pixel occupies in memory; here it is 4 bytes. (Images are displayed on iOS devices in units of pixels.)
static const size_t kBitsPerComponent = 8;
kBitsPerComponent
is the number of bits each component occupies. This is easier to see with an example: in RGBA, R (red), G (green), B (blue), and A (alpha) are 4 components, and each pixel is made up of those 4 components. With 8 bits per component, an RGBA pixel takes 8 * 4 = 32 bits.
Knowing kBitsPerComponent
and the number of components per pixel, we can compute kBytesPerPixel
with the formula: (bitsPerComponent * number of components + 7) / 8
.
Deciding whether to decode
if (![UIImage shouldDecodeImage:image]) {
return image;
}
Not every image should be decoded. Let's look at the shouldDecodeImage:
method:
+ (BOOL)shouldDecodeImage:(nullable UIImage *)image {
    // Prevent "CGBitmapContextCreateImage: invalid context 0x0" error
    if (image == nil) {
        return NO;
    }

    // do not decode animated images
    if (image.images != nil) {
        return NO;
    }

    CGImageRef imageRef = image.CGImage;
    CGImageAlphaInfo alpha = CGImageGetAlphaInfo(imageRef);
    BOOL anyAlpha = (alpha == kCGImageAlphaFirst ||
                     alpha == kCGImageAlphaLast ||
                     alpha == kCGImageAlphaPremultipliedFirst ||
                     alpha == kCGImageAlphaPremultipliedLast);
    // do not decode images with alpha
    if (anyAlpha) {
        return NO;
    }

    return YES;
}
An image is not suitable for decoding when:
- the image is nil
- it is an animated image (animated images are not suitable)
- it contains an alpha component
Getting the core data
Through CGImageRef imageRef = image.CGImage
we can access the image's underlying parameters.
- Color space
CGColorSpaceRef colorspaceRef = [UIImage colorSpaceForImageRef:imageRef];
- Width
size_t width = CGImageGetWidth(imageRef);
- Height
size_t height = CGImageGetHeight(imageRef);
- Compute the number of bytes per row
size_t bytesPerRow = kBytesPerPixel * width;
Creating a bitmap graphics context without alpha
// kCGImageAlphaNone is not supported in CGBitmapContextCreate.
// Since the original image here has no alpha info, use kCGImageAlphaNoneSkipLast
// to create bitmap graphics contexts without alpha info.
CGContextRef context = CGBitmapContextCreate(NULL,
                                             width,
                                             height,
                                             kBitsPerComponent,
                                             bytesPerRow,
                                             colorspaceRef,
                                             kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast);
if (context == NULL) {
    return image;
}
Note: the context created here has no alpha. When the UI is rendered, layers are composited pixel by pixel, which requires an RGBA blend calculation for every pixel. When a layer is opaque (its opaque property is YES), the GPU can skip the layers beneath it entirely, which saves a lot of work. That is why the bitmapInfo argument to CGBitmapContextCreate is set to ignore the alpha channel.
Drawing the image
// Draw the image into the context and retrieve the new bitmap image without alpha
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGImageRef imageRefWithoutAlpha = CGBitmapContextCreateImage(context);
UIImage *imageWithoutAlpha = [UIImage imageWithCGImage:imageRefWithoutAlpha
                                                 scale:image.scale
                                           orientation:image.imageOrientation];
CGContextRelease(context);
CGImageRelease(imageRefWithoutAlpha);
+ (nullable UIImage *)decodedAndScaledDownImageWithImage:(nullable UIImage *)image
/*
* Defines the maximum size in MB of the decoded image when the flag `SDWebImageScaleDownLargeImages` is set
* Suggested value for iPad1 and iPhone 3GS: 60.
* Suggested value for iPad2 and iPhone 4: 120.
* Suggested value for iPhone 3G and iPod 2 and earlier devices: 30.
*/
static const CGFloat kDestImageSizeMB = 60.0f;
/*
* Defines the maximum size in MB of a tile used to decode image when the flag `SDWebImageScaleDownLargeImages` is set
* Suggested value for iPad1 and iPhone 3GS: 20.
* Suggested value for iPad2 and iPhone 4: 40.
* Suggested value for iPhone 3G and iPod 2 and earlier devices: 10.
*/
static const CGFloat kSourceImageTileSizeMB = 20.0f;
static const CGFloat kBytesPerMB = 1024.0f * 1024.0f;
static const CGFloat kPixelsPerMB = kBytesPerMB / kBytesPerPixel;
static const CGFloat kDestTotalPixels = kDestImageSizeMB * kPixelsPerMB;
static const CGFloat kTileTotalPixels = kSourceImageTileSizeMB * kPixelsPerMB;
static const CGFloat kDestSeemOverlap = 2.0f; // the numbers of pixels to overlap the seems where tiles meet.
+ (nullable UIImage *)decodedAndScaledDownImageWithImage:(nullable UIImage *)image {
    if (![UIImage shouldDecodeImage:image]) {
        return image;
    }
    if (![UIImage shouldScaleDownImage:image]) {
        return [UIImage decodedImageWithImage:image];
    }

    CGContextRef destContext;
    // autorelease the bitmap context and all vars to help system to free memory when there are memory warning.
    // on iOS7, do not forget to call [[SDImageCache sharedImageCache] clearMemory];
    @autoreleasepool {
        CGImageRef sourceImageRef = image.CGImage;

        CGSize sourceResolution = CGSizeZero;
        sourceResolution.width = CGImageGetWidth(sourceImageRef);
        sourceResolution.height = CGImageGetHeight(sourceImageRef);
        float sourceTotalPixels = sourceResolution.width * sourceResolution.height;
        // Determine the scale ratio to apply to the input image
        // that results in an output image of the defined size.
        // see kDestImageSizeMB, and how it relates to destTotalPixels.
        float imageScale = kDestTotalPixels / sourceTotalPixels;
        CGSize destResolution = CGSizeZero;
        destResolution.width = (int)(sourceResolution.width * imageScale);
        destResolution.height = (int)(sourceResolution.height * imageScale);

        // current color space
        CGColorSpaceRef colorspaceRef = [UIImage colorSpaceForImageRef:sourceImageRef];

        size_t bytesPerRow = kBytesPerPixel * destResolution.width;
        // Allocate enough pixel data to hold the output image.
        void *destBitmapData = malloc(bytesPerRow * destResolution.height);
        if (destBitmapData == NULL) {
            return image;
        }

        // kCGImageAlphaNone is not supported in CGBitmapContextCreate.
        // Since the original image here has no alpha info, use kCGImageAlphaNoneSkipLast
        // to create bitmap graphics contexts without alpha info.
        destContext = CGBitmapContextCreate(destBitmapData,
                                            destResolution.width,
                                            destResolution.height,
                                            kBitsPerComponent,
                                            bytesPerRow,
                                            colorspaceRef,
                                            kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast);
        if (destContext == NULL) {
            free(destBitmapData);
            return image;
        }
        CGContextSetInterpolationQuality(destContext, kCGInterpolationHigh);

        // Now define the size of the rectangle to be used for the
        // incremental blits from the input image to the output image.
        // We use a source tile width equal to the width of the source
        // image due to the way that iOS retrieves image data from disk.
        // iOS must decode an image from disk in full-width 'bands', even
        // if the current graphics context is clipped to a subrect within that
        // band. Therefore we fully utilize all of the pixel data that results
        // from a decoding operation by anchoring our tile size to the full
        // width of the input image.
        CGRect sourceTile = CGRectZero;
        sourceTile.size.width = sourceResolution.width;
        // The source tile height is dynamic. Since we specified the size
        // of the source tile in MB, see how many rows of pixels high it
        // can be given the input image width.
        sourceTile.size.height = (int)(kTileTotalPixels / sourceTile.size.width);
        sourceTile.origin.x = 0.0f;
        // The output tile is the same proportions as the input tile, but
        // scaled to image scale.
        CGRect destTile;
        destTile.size.width = destResolution.width;
        destTile.size.height = sourceTile.size.height * imageScale;
        destTile.origin.x = 0.0f;
        // The source seam overlap is proportionate to the destination seam overlap.
        // This is the number of pixels to overlap each tile as we assemble the output image.
        float sourceSeemOverlap = (int)((kDestSeemOverlap / destResolution.height) * sourceResolution.height);
        CGImageRef sourceTileImageRef;
        // calculate the number of read/write operations required to assemble the
        // output image.
        int iterations = (int)(sourceResolution.height / sourceTile.size.height);
        // If tile height doesn't divide the image height evenly, add another iteration
        // to account for the remaining pixels.
        int remainder = (int)sourceResolution.height % (int)sourceTile.size.height;
        if (remainder) {
            iterations++;
        }
        // Add seam overlaps to the tiles, but save the original tile height for y coordinate calculations.
        float sourceTileHeightMinusOverlap = sourceTile.size.height;
        sourceTile.size.height += sourceSeemOverlap;
        destTile.size.height += kDestSeemOverlap;
        for (int y = 0; y < iterations; ++y) {
            @autoreleasepool {
                sourceTile.origin.y = y * sourceTileHeightMinusOverlap + sourceSeemOverlap;
                destTile.origin.y = destResolution.height - ((y + 1) * sourceTileHeightMinusOverlap * imageScale + kDestSeemOverlap);
                sourceTileImageRef = CGImageCreateWithImageInRect(sourceImageRef, sourceTile);
                if (y == iterations - 1 && remainder) {
                    float dify = destTile.size.height;
                    destTile.size.height = CGImageGetHeight(sourceTileImageRef) * imageScale;
                    dify -= destTile.size.height;
                    destTile.origin.y += dify;
                }
                CGContextDrawImage(destContext, destTile, sourceTileImageRef);
                CGImageRelease(sourceTileImageRef);
            }
        }

        CGImageRef destImageRef = CGBitmapContextCreateImage(destContext);
        CGContextRelease(destContext);
        if (destImageRef == NULL) {
            return image;
        }
        UIImage *destImage = [UIImage imageWithCGImage:destImageRef scale:image.scale orientation:image.imageOrientation];
        CGImageRelease(destImageRef);
        if (destImage == nil) {
            return image;
        }
        return destImage;
    }
}
......... This method really is long; just looking at it is headache-inducing. Still, we'll analyze it piece by piece, and in the process learn how to scale down an image.
Maximum size of the decoded (destination) image
static const CGFloat kDestImageSizeMB = 60.0f;
The unit is MB, set here to 60 MB. When scaling down an image, the first step is to cap the size of the decoded output; it can't be unbounded. Below are SDWebImage
's suggested values:
/*
* Defines the maximum size in MB of the decoded image when the flag `SDWebImageScaleDownLargeImages` is set
* Suggested value for iPad1 and iPhone 3GS: 60.
* Suggested value for iPad2 and iPhone 4: 120.
* Suggested value for iPhone 3G and iPod 2 and earlier devices: 30.
*/
Tile size for the source image
static const CGFloat kSourceImageTileSizeMB = 20.0f;
This tile is used to slice up the source image; the default is 20 MB.
Bytes per MB
static const CGFloat kBytesPerMB = 1024.0f * 1024.0f;
Pixels per MB
static const CGFloat kPixelsPerMB = kBytesPerMB / kBytesPerPixel;
Total pixels of the destination image
static const CGFloat kDestTotalPixels = kDestImageSizeMB * kPixelsPerMB;
Total pixels per source tile
static const CGFloat kTileTotalPixels = kSourceImageTileSizeMB * kPixelsPerMB;
Seam overlap in pixels
static const CGFloat kDestSeemOverlap = 2.0f; // the numbers of pixels to overlap the seems where tiles meet.
Now the key part: how do we shrink a very large source image down to a specified size?
The idea: define a tile of fixed size, slice the source image by that tile, and draw each tile's data onto the destination canvas; assembled together, the tiles form the destination image. A detailed explanation follows.
- Check whether the image can be decoded

if (![UIImage shouldDecodeImage:image]) {
    return image;
}
- Check whether the image should be scaled down; the rule: scale down only if the image is larger than the target size

if (![UIImage shouldScaleDownImage:image]) {
    return [UIImage decodedImageWithImage:image];
}

+ (BOOL)shouldScaleDownImage:(nonnull UIImage *)image {
    BOOL shouldScaleDown = YES;

    CGImageRef sourceImageRef = image.CGImage;
    CGSize sourceResolution = CGSizeZero;
    sourceResolution.width = CGImageGetWidth(sourceImageRef);
    sourceResolution.height = CGImageGetHeight(sourceImageRef);
    float sourceTotalPixels = sourceResolution.width * sourceResolution.height;
    float imageScale = kDestTotalPixels / sourceTotalPixels;
    if (imageScale < 1) {
        shouldScaleDown = YES;
    } else {
        shouldScaleDown = NO;
    }

    return shouldScaleDown;
}
- Get the underlying image data, sourceImageRef

CGImageRef sourceImageRef = image.CGImage;
- Get the source image's pixel dimensions, sourceResolution

CGSize sourceResolution = CGSizeZero;
sourceResolution.width = CGImageGetWidth(sourceImageRef);
sourceResolution.height = CGImageGetHeight(sourceImageRef);
- Compute the source image's total pixel count, sourceTotalPixels

float sourceTotalPixels = sourceResolution.width * sourceResolution.height;
- Compute the scale ratio, imageScale

// Determine the scale ratio to apply to the input image
// that results in an output image of the defined size.
// see kDestImageSizeMB, and how it relates to destTotalPixels.
float imageScale = kDestTotalPixels / sourceTotalPixels;
- Compute the destination dimensions, destResolution

CGSize destResolution = CGSizeZero;
destResolution.width = (int)(sourceResolution.width * imageScale);
destResolution.height = (int)(sourceResolution.height * imageScale);
- Get the current color space, colorspaceRef

// current color space
CGColorSpaceRef colorspaceRef = [UIImage colorSpaceForImageRef:sourceImageRef];

+ (CGColorSpaceRef)colorSpaceForImageRef:(CGImageRef)imageRef {
    // current
    CGColorSpaceModel imageColorSpaceModel = CGColorSpaceGetModel(CGImageGetColorSpace(imageRef));
    CGColorSpaceRef colorspaceRef = CGImageGetColorSpace(imageRef);

    BOOL unsupportedColorSpace = (imageColorSpaceModel == kCGColorSpaceModelUnknown ||
                                  imageColorSpaceModel == kCGColorSpaceModelMonochrome ||
                                  imageColorSpaceModel == kCGColorSpaceModelCMYK ||
                                  imageColorSpaceModel == kCGColorSpaceModelIndexed);
    if (unsupportedColorSpace) {
        colorspaceRef = CGColorSpaceCreateDeviceRGB();
        CFAutorelease(colorspaceRef);
    }
    return colorspaceRef;
}
- Compute and allocate the memory for the destination image, destBitmapData

size_t bytesPerRow = kBytesPerPixel * destResolution.width;
// Allocate enough pixel data to hold the output image.
void *destBitmapData = malloc(bytesPerRow * destResolution.height);
if (destBitmapData == NULL) {
    return image;
}
- Create the destination context, destContext

// kCGImageAlphaNone is not supported in CGBitmapContextCreate.
// Since the original image here has no alpha info, use kCGImageAlphaNoneSkipLast
// to create bitmap graphics contexts without alpha info.
destContext = CGBitmapContextCreate(destBitmapData,
                                    destResolution.width,
                                    destResolution.height,
                                    kBitsPerComponent,
                                    bytesPerRow,
                                    colorspaceRef,
                                    kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast);
if (destContext == NULL) {
    free(destBitmapData);
    return image;
}
- Set the interpolation quality

CGContextSetInterpolationQuality(destContext, kCGInterpolationHigh);
- Compute the first source tile, sourceTile: its width equals the source image's width, and its height is derived from the tile's pixel budget

// Now define the size of the rectangle to be used for the
// incremental blits from the input image to the output image.
// We use a source tile width equal to the width of the source
// image due to the way that iOS retrieves image data from disk.
// iOS must decode an image from disk in full-width 'bands', even
// if the current graphics context is clipped to a subrect within that
// band. Therefore we fully utilize all of the pixel data that results
// from a decoding operation by anchoring our tile size to the full
// width of the input image.
CGRect sourceTile = CGRectZero;
sourceTile.size.width = sourceResolution.width;
// The source tile height is dynamic. Since we specified the size
// of the source tile in MB, see how many rows of pixels high it
// can be given the input image width.
sourceTile.size.height = (int)(kTileTotalPixels / sourceTile.size.width);
sourceTile.origin.x = 0.0f;
- Compute the destination tile, destTile

// The output tile is the same proportions as the input tile, but
// scaled to image scale.
CGRect destTile;
destTile.size.width = destResolution.width;
destTile.size.height = sourceTile.size.height * imageScale;
destTile.origin.x = 0.0f;
- Compute the seam overlap between source tiles, sourceSeemOverlap

// The source seam overlap is proportionate to the destination seam overlap.
// This is the number of pixels to overlap each tile as we assemble the output image.
float sourceSeemOverlap = (int)((kDestSeemOverlap / destResolution.height) * sourceResolution.height);
- Compute how many tiles the source image must be split into, iterations

// calculate the number of read/write operations required to assemble the
// output image.
int iterations = (int)(sourceResolution.height / sourceTile.size.height);
// If tile height doesn't divide the image height evenly, add another iteration
// to account for the remaining pixels.
int remainder = (int)sourceResolution.height % (int)sourceTile.size.height;
if (remainder) {
    iterations++;
}
- After adding the seam overlap to each tile, read that tile's data from the source image and draw it into the corresponding destination tile

// Add seam overlaps to the tiles, but save the original tile height for y coordinate calculations.
float sourceTileHeightMinusOverlap = sourceTile.size.height;
sourceTile.size.height += sourceSeemOverlap;
destTile.size.height += kDestSeemOverlap;
for (int y = 0; y < iterations; ++y) {
    @autoreleasepool {
        sourceTile.origin.y = y * sourceTileHeightMinusOverlap + sourceSeemOverlap;
        destTile.origin.y = destResolution.height - ((y + 1) * sourceTileHeightMinusOverlap * imageScale + kDestSeemOverlap);
        sourceTileImageRef = CGImageCreateWithImageInRect(sourceImageRef, sourceTile);
        if (y == iterations - 1 && remainder) {
            float dify = destTile.size.height;
            destTile.size.height = CGImageGetHeight(sourceTileImageRef) * imageScale;
            dify -= destTile.size.height;
            destTile.origin.y += dify;
        }
        CGContextDrawImage(destContext, destTile, sourceTileImageRef);
        CGImageRelease(sourceTileImageRef);
    }
}
- Return the destination image

CGImageRef destImageRef = CGBitmapContextCreateImage(destContext);
CGContextRelease(destContext);
if (destImageRef == NULL) {
    return image;
}
UIImage *destImage = [UIImage imageWithCGImage:destImageRef scale:image.scale orientation:image.imageOrientation];
CGImageRelease(destImageRef);
if (destImage == nil) {
    return image;
}
Summary
All right, this article has gotten long, but the good news is that we've learned a lot about images. The most important parts are the fundamentals of image storage and the idea of slicing an image into tiles. One use I can think of for the latter: when loading a large piece of data, cut it into tiles and display them one at a time.
My own knowledge is limited, so if there are mistakes, I'd be grateful to have them pointed out.
I also found another article with an interesting take on this topic: 一張圖片引發(fā)的深思 ("Deep thoughts sparked by a single image").