Project repository: https://github.com/tujinqiu/KTMovieImagesTransfer
If you have a sequence of consecutive images, converting it to an MP4 before sending it over the network is a good choice, because the size is much smaller. Extracting a sequence of frames from a video is likewise a problem that comes up from time to time. I tried solving both problems with the native iOS APIs, with FFmpeg, and with OpenCV.
1伦籍、原生方法
使用原生方法主要是利用AVFoundation框架的api進(jìn)行轉(zhuǎn)換的。
1腮出、將視頻解成序列幀
- (NSError *)nativeTransferMovie:(NSString *)movie toImagesAtPath:(NSString *)imagesPath
{
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:movie] options:nil];
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    CMTime time = asset.duration;
    NSUInteger totalFrameCount = CMTimeGetSeconds(time) * kKTImagesMovieTransferFPS;
    NSMutableArray *timesArray = [NSMutableArray arrayWithCapacity:totalFrameCount];
    for (NSUInteger ii = 0; ii < totalFrameCount; ++ii) {
        CMTime timeFrame = CMTimeMake(ii, kKTImagesMovieTransferFPS);
        NSValue *timeValue = [NSValue valueWithCMTime:timeFrame];
        [timesArray addObject:timeValue];
    }
    // Request exact frame times rather than the nearest keyframes
    generator.requestedTimeToleranceBefore = kCMTimeZero;
    generator.requestedTimeToleranceAfter = kCMTimeZero;
    __block NSError *returnError = nil;
    [generator generateCGImagesAsynchronouslyForTimes:timesArray completionHandler:^(CMTime requestedTime, CGImageRef _Nullable image, CMTime actualTime, AVAssetImageGeneratorResult result, NSError * _Nullable error) {
        switch (result) {
            case AVAssetImageGeneratorFailed:
                returnError = error;
                [self sendToMainThreadError:returnError];
                break;
            case AVAssetImageGeneratorSucceeded:
            {
                // requestedTime.value is the frame index, because CMTimeMake(ii, FPS) was used above
                NSString *imageFile = [imagesPath stringByAppendingPathComponent:[NSString stringWithFormat:@"%lld.jpg", requestedTime.value]];
                NSData *data = UIImageJPEGRepresentation([UIImage imageWithCGImage:image], 1.0);
                if ([data writeToFile:imageFile atomically:YES]) {
                    dispatch_async(dispatch_get_main_queue(), ^{
                        if ([self.delegate respondsToSelector:@selector(transfer:didTransferedAtIndex:totalFrameCount:)]) {
                            [self.delegate transfer:self didTransferedAtIndex:requestedTime.value totalFrameCount:totalFrameCount];
                        }
                    });
                    NSUInteger index = requestedTime.value;
                    if (index == totalFrameCount - 1) {
                        if ([self.delegate respondsToSelector:@selector(transfer:didFinishedWithError:)]) {
                            [self.delegate transfer:self didFinishedWithError:nil];
                        }
                    }
                } else {
                    returnError = [self errorWithErrorCode:KTTransferWriteError object:imageFile];
                    [self sendToMainThreadError:returnError];
                    [generator cancelAllCGImageGeneration];
                }
            }
                break;
            default:
                break;
        }
    }];
    return returnError;
}
The extraction is done with AVAssetImageGenerator. Note that before calling generateCGImagesAsynchronouslyForTimes:completionHandler:, the total frame count is computed from the frame rate and the video duration.
2富蓄、將序列幀壓縮為視頻
- (NSError *)nativeTransferImageFiles:(NSArray<NSString *> *)imageFiles toMovie:(NSString *)movie
{
    __block NSError *returnError = nil;
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:movie] fileType:AVFileTypeQuickTimeMovie error:&returnError];
    if (returnError) {
        [self sendToMainThreadError:returnError];
        return returnError;
    }
    UIImage *firstImage = [UIImage imageWithContentsOfFile:[imageFiles firstObject]];
    if (!firstImage) {
        returnError = [self errorWithErrorCode:KTTransferReadImageError object:[imageFiles firstObject]];
        [self sendToMainThreadError:returnError];
        return returnError;
    }
    CGSize size = firstImage.size;
    // H.264 format
    NSDictionary *videoSettings = @{AVVideoCodecKey: AVVideoCodecH264,
                                    AVVideoWidthKey: [NSNumber numberWithInt:size.width],
                                    AVVideoHeightKey: [NSNumber numberWithInt:size.height]};
    AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                         outputSettings:videoSettings];
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:nil];
    dispatch_async(KTImagesMovieTransferQueue(), ^{
        [videoWriter addInput:writerInput];
        [videoWriter startWriting];
        [videoWriter startSessionAtSourceTime:kCMTimeZero];
        UIImage *tmpImage = nil;
        NSUInteger index = 0;
        while (index < imageFiles.count) {
            if (writerInput.readyForMoreMediaData) {
                CMTime presentTime = CMTimeMake(index, kKTImagesMovieTransferFPS);
                tmpImage = [UIImage imageWithContentsOfFile:[imageFiles objectAtIndex:index]];
                if (!tmpImage) {
                    returnError = [self errorWithErrorCode:KTTransferReadImageError object:[imageFiles objectAtIndex:index]];
                    [self sendToMainThreadError:returnError];
                    return;
                }
                CVPixelBufferRef buffer = [self pixelBufferFromCGImage:[tmpImage CGImage] size:size];
                if (buffer) {
                    [self appendToAdapter:adaptor pixelBuffer:buffer atTime:presentTime withInput:writerInput];
                    CFRelease(buffer);
                } else {
                    // Finish the session
                    [writerInput markAsFinished];
                    [videoWriter finishWritingWithCompletionHandler:^{
                    }];
                    returnError = [self errorWithErrorCode:KTTransferGetBufferError object:nil];
                    [self sendToMainThreadError:returnError];
                    return;
                }
                dispatch_async(dispatch_get_main_queue(), ^{
                    if ([self.delegate respondsToSelector:@selector(transfer:didTransferedAtIndex:totalFrameCount:)]) {
                        [self.delegate transfer:self didTransferedAtIndex:index totalFrameCount:imageFiles.count];
                    }
                });
                // Only advance once the frame has actually been appended;
                // otherwise spin until the input is ready again
                index++;
            }
        }
        // Finish the session
        [writerInput markAsFinished];
        [videoWriter finishWritingWithCompletionHandler:^{
            if (videoWriter.status != AVAssetWriterStatusCompleted) {
                returnError = videoWriter.error;
                [self sendToMainThreadError:returnError];
            } else {
                if ([self.delegate respondsToSelector:@selector(transfer:didFinishedWithError:)]) {
                    [self.delegate transfer:self didFinishedWithError:nil];
                }
            }
        }];
    });
    return returnError;
}
Here AVAssetWriter and AVAssetWriterInput drive the image-to-video conversion; video properties are configured by passing a settings dictionary to the AVAssetWriterInput.
2. OpenCV
1. Adding OpenCV to the project
Download the iOS framework from the official site and add it to the project.
If you hit compile errors inside the OpenCV headers, it is because Objective-C supports C by default but not C++, and OpenCV uses C++. Change the file's extension from .m to .mm so that it is compiled as Objective-C++.
If you hit the following linker error:
Undefined symbols for architecture x86_64:
"_jpeg_free_large", referenced from:
_free_pool in opencv2(jmemmgr.o)
"_jpeg_free_small", referenced from:
_free_pool in opencv2(jmemmgr.o)
_self_destruct in opencv2(jmemmgr.o)
"_jpeg_get_large", referenced from:
_alloc_large in opencv2(jmemmgr.o)
_realize_virt_arrays in opencv2(jmemmgr.o)
"_jpeg_get_small", referenced from:
_jinit_memory_mgr in opencv2(jmemmgr.o)
_alloc_small in opencv2(jmemmgr.o)
"_jpeg_mem_available", referenced from:
_realize_virt_arrays in opencv2(jmemmgr.o)
"_jpeg_mem_init", referenced from:
_jinit_memory_mgr in opencv2(jmemmgr.o)
"_jpeg_mem_term", referenced from:
_jinit_memory_mgr in opencv2(jmemmgr.o)
_self_destruct in opencv2(jmemmgr.o)
"_jpeg_open_backing_store", referenced from:
_realize_virt_arrays in opencv2(jmemmgr.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
This is caused by a missing libjpeg. First download and install libjpeg-turbo from https://sourceforge.net/projects/libjpeg-turbo/files/ . After installing, run lipo -info /opt/libjpeg-turbo/lib/libjpeg.a: it shows that the libjpeg.a at that path is a fat library covering the various processor architectures. Add that file to the project.
If you hit the following error:
Undefined symbols for architecture x86_64:
"_CMSampleBufferGetImageBuffer", referenced from:
-[CaptureDelegate captureOutput:didOutputSampleBuffer:fromConnection:] in opencv2(cap_avfoundation.o)
CvCaptureFile::retrieveFramePixelBuffer() in opencv2(cap_avfoundation.o)
"_CMSampleBufferInvalidate", referenced from:
CvCaptureFile::retrieveFramePixelBuffer() in opencv2(cap_avfoundation.o)
"_CMTimeGetSeconds", referenced from:
-[KTImagesMovieTransfer nativeTransferMovie:toImagesAtPath:] in KTImagesMovieTransfer.o
This is caused by a missing CoreMedia.framework; add it to the project.
2. Decoding a video into a frame sequence
OpenCV's approach is very concise: a while loop keeps grabbing frames and writing them to disk:
- (NSError *)opencvTransferMovie:(NSString *)movie toImagesAtPath:(NSString *)imagesPath
{
    __block NSError *returnError = nil;
    dispatch_async(KTImagesMovieTransferQueue(), ^{
        CvCapture *pCapture = cvCaptureFromFile(movie.UTF8String);
        // This function only reads the file's header metadata for the frame
        // count, so it can be wrong:
        // NSUInteger totalFrameCount = cvGetCaptureProperty(pCapture, CV_CAP_PROP_FRAME_COUNT);
        // Instead, make two passes over the file.
        NSUInteger totalFrameCount = 0;
        while (cvQueryFrame(pCapture)) {
            totalFrameCount++;
        }
        if (pCapture) {
            cvReleaseCapture(&pCapture);
        }
        pCapture = cvCaptureFromFile(movie.UTF8String);
        NSUInteger index = 0;
        IplImage *pGrabImg = NULL;
        while ((pGrabImg = cvQueryFrame(pCapture))) {
            NSString *imagePath = [imagesPath stringByAppendingPathComponent:[NSString stringWithFormat:@"%lu.jpg", index]];
            cvSaveImage(imagePath.UTF8String, pGrabImg);
            dispatch_async(dispatch_get_main_queue(), ^{
                if ([self.delegate respondsToSelector:@selector(transfer:didTransferedAtIndex:totalFrameCount:)]) {
                    [self.delegate transfer:self didTransferedAtIndex:index totalFrameCount:totalFrameCount];
                }
            });
            index++;
        }
        if (pCapture) {
            cvReleaseCapture(&pCapture);
        }
        if (index == totalFrameCount) {
            dispatch_async(dispatch_get_main_queue(), ^{
                if ([self.delegate respondsToSelector:@selector(transfer:didFinishedWithError:)]) {
                    [self.delegate transfer:self didFinishedWithError:nil];
                }
            });
        } else {
            returnError = [self errorWithErrorCode:KTTransferOpencvWrongFrameCountError object:nil];
            [self sendToMainThreadError:returnError];
        }
    });
    return returnError;
}
Note that OpenCV has a function for reading video properties, cvGetCaptureProperty, but the frame count it returns is often wrong: OpenCV only reads the file's header metadata, which may well disagree with the actual number of frames. The code above therefore makes two passes: the first pass counts the frames, and the second pass grabs each frame and saves it to disk. Another thing to watch is that the IplImage returned by cvQueryFrame must not be released by the caller; a single cvReleaseCapture(&pCapture) at the end is enough.
3艘狭、將序列幀壓縮為視頻
OpenCV圖片轉(zhuǎn)換為視頻的辦法同樣很簡(jiǎn)單
- (NSError *)opencvTransferImageFiles:(NSArray<NSString *> *)imageFiles toMovie:(NSString *)movie
{
    __block NSError *returnError = nil;
    UIImage *firstImage = [UIImage imageWithContentsOfFile:[imageFiles firstObject]];
    if (!firstImage) {
        returnError = [self errorWithErrorCode:KTTransferReadImageError object:[imageFiles firstObject]];
        [self sendToMainThreadError:returnError];
        return returnError;
    }
    CvSize size = cvSize(firstImage.size.width, firstImage.size.height);
    dispatch_async(KTImagesMovieTransferQueue(), ^{
        // OpenCV has no built-in H.264 support (it can be added by other
        // means), so DIVX in an MP4 container is used here
        CvVideoWriter *pWriter = cvCreateVideoWriter(movie.UTF8String, CV_FOURCC('D', 'I', 'V', 'X'), (double)kKTImagesMovieTransferFPS, size);
        for (NSUInteger ii = 0; ii < imageFiles.count; ++ii) {
            NSString *imageFile = [imageFiles objectAtIndex:ii];
            IplImage *pImage = cvLoadImage(imageFile.UTF8String);
            if (pImage) {
                cvWriteFrame(pWriter, pImage);
                cvReleaseImage(&pImage);
                dispatch_async(dispatch_get_main_queue(), ^{
                    if ([self.delegate respondsToSelector:@selector(transfer:didTransferedAtIndex:totalFrameCount:)]) {
                        [self.delegate transfer:self didTransferedAtIndex:ii totalFrameCount:imageFiles.count];
                    }
                });
            } else {
                returnError = [self errorWithErrorCode:KTTransferReadImageError object:imageFile];
                [self sendToMainThreadError:returnError];
                // Release the writer before bailing out so it is not leaked
                cvReleaseVideoWriter(&pWriter);
                return;
            }
        }
        cvReleaseVideoWriter(&pWriter);
        dispatch_async(dispatch_get_main_queue(), ^{
            if ([self.delegate respondsToSelector:@selector(transfer:didFinishedWithError:)]) {
                [self.delegate transfer:self didFinishedWithError:nil];
            }
        });
    });
    return returnError;
}
First create a CvVideoWriter, then write frames into it one by one. The CvVideoWriter is initialized like this:
CvVideoWriter *pWriter = cvCreateVideoWriter(movie.UTF8String, CV_FOURCC('D', 'I', 'V', 'X'), (double)kKTImagesMovieTransferFPS, size);
Compared with configuring the writerInput in the native approach, this is very similar: both set the file name, the format, and the size.
NSDictionary *videoSettings = @{AVVideoCodecKey: AVVideoCodecH264, AVVideoWidthKey: [NSNumber numberWithInt:size.width], AVVideoHeightKey: [NSNumber numberWithInt:size.height]};
AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
3傲绣、FFmpeg
1掠哥、FFmpeg的編譯和配置
FFmpeg稍顯復(fù)雜一點(diǎn),參照這個(gè)教程秃诵。
將編譯好的FFmpeg加入工程之后续搀,除了上面這個(gè)教程所說(shuō)的設(shè)置header search pathes和lib庫(kù)之外,如果遇到下面的錯(cuò)誤:
Undefined symbols for architecture arm64:
"av_register_all()", referenced from:
-[KTImagesMovieTransfer ffmpegTransferMovie:toImagesAtPath:] in KTImagesMovieTransfer.o
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
這是因?yàn)镃語(yǔ)言的函數(shù)無(wú)法在C++中調(diào)用識(shí)別的原因菠净,那么引用頭文件的時(shí)候請(qǐng)這樣引用(與在windows上用VS寫(xiě)C++遇到的問(wèn)題一樣的解決辦法):extern "C"的作用是讓括號(hào)包住的頭文件中的符號(hào)以C語(yǔ)言的方式進(jìn)行編譯禁舷。這里之所以使用.mm后綴支持C++是因?yàn)橹靶枰С諳penCV。如果把FFmpeg的轉(zhuǎn)換方法和OpenCV的分文件寫(xiě)毅往,就不存在這個(gè)問(wèn)題牵咙。
extern "C"
{
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}
To be continued...