Preface
If you're interested in AVFoundation material, you can follow my collection: 《AVFoundation》專輯.
You can also follow my Jianshu account.
Main Text
To read and write audiovisual assets, you must use the export APIs provided by the AVFoundation framework. The AVAssetExportSession class provides an interface for simple export needs, such as modifying the file format or trimming the length of an asset (see Trimming and Transcoding a Movie). For more in-depth export needs, use the AVAssetReader and AVAssetWriter classes.
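For the simple case, a minimal sketch of the AVAssetExportSession path might look like the following; the preset shown is just one of the built-in options, and this is an illustrative sketch rather than a complete export routine.
AVAsset *anAsset = <#AVAsset that you want to export#>;
// Create an export session with a built-in quality preset.
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:anAsset presetName:AVAssetExportPresetMediumQuality];
exportSession.outputURL = <#NSURL where the exported file should be written#>;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
// Export runs asynchronously; check the status in the completion handler.
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    if (exportSession.status == AVAssetExportSessionStatusCompleted)
    {
        // The exported movie is ready at outputURL.
    }
}];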
Use an AVAssetReader when you want to perform an operation on the contents of an asset. For example, you might read an asset's audio track to produce a visual representation of the waveform. To produce an asset from media such as sample buffers or still images, use an AVAssetWriter object.
Note: The asset reader and writer classes are not intended to be used for real-time processing. In fact, an asset reader cannot even be used for reading from a real-time source such as an HTTP live stream. However, if you are using an asset writer with a real-time data source, such as an AVCaptureOutput object, set the expectsMediaDataInRealTime property of the asset writer's inputs to YES. Setting this property to YES for a non-real-time data source will result in your files not being interleaved properly.
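For example, here is a minimal sketch of configuring a writer input for a capture-based source; the media type and pass-through (nil) settings are illustrative assumptions:
// This input receives samples from a real-time source such as an AVCaptureOutput.
AVAssetWriterInput *captureWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:nil];
// Required for real-time sources so the writer prioritizes keeping up with incoming data.
// Leave this NO for file-based sources to keep the output properly interleaved.
captureWriterInput.expectsMediaDataInRealTime = YES;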
Reading an Asset
Each AVAssetReader object can be associated with only a single asset at a time, but that asset may contain multiple tracks. For this reason, you must assign concrete subclasses of the AVAssetReaderOutput class to your asset reader before you begin reading, in order to configure how the media data is read. There are three concrete subclasses of the AVAssetReaderOutput base class that you can use for your asset-reading needs: AVAssetReaderTrackOutput, AVAssetReaderAudioMixOutput, and AVAssetReaderVideoCompositionOutput.
Creating the Asset Reader
All you need to initialize an AVAssetReader object is the asset that you want to read:
NSError *outError;
AVAsset *someAsset = <#AVAsset that you want to read#>;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:someAsset error:&outError];
BOOL success = (assetReader != nil);
Note: Always check whether the asset reader returned to you is nil to ensure that it was initialized successfully. Otherwise, the error parameter (outError in the previous example) will contain the relevant error information.
Setting Up the Asset Reader Outputs
After you have created your asset reader, set up at least one output to receive the media data being read. When setting up your outputs, be sure to set the alwaysCopiesSampleData property to NO; this way, you reap the benefits of improved performance. In all of the examples in this chapter, this property could and should be set to NO.
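For example, once an output has been created (as in the sections that follow), the property is set with a single line:
// Avoid copying sample data into separate buffers; safe as long as you treat
// the vended CMSampleBufferRef objects as read-only.
trackOutput.alwaysCopiesSampleData = NO;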
If you want only to read media data from one or more tracks and potentially convert that data to a different format, use the AVAssetReaderTrackOutput class, with a single track output object for each AVAssetTrack object that you want to read from your asset. To decompress an audio track to Linear PCM with an asset reader, set up your track output as follows:
AVAsset *localAsset = assetReader.asset;
// Get the audio track to read.
AVAssetTrack *audioTrack = [[localAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
// Decompression settings for Linear PCM
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the output with the audio track and decompression settings.
AVAssetReaderOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:decompressionAudioSettings];
// Add the output to the reader if possible.
if ([assetReader canAddOutput:trackOutput])
[assetReader addOutput:trackOutput];
Note: To read the media data from a specific asset track in the format in which it was stored, pass nil to the outputSettings parameter.
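A minimal sketch of such a pass-through output, reusing the audioTrack from the previous example:
// Passing nil for outputSettings vends samples in their stored format.
AVAssetReaderTrackOutput *passthroughOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:nil];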
You use the AVAssetReaderAudioMixOutput and AVAssetReaderVideoCompositionOutput classes to read media data that has been mixed or composited together using an AVAudioMix object or an AVVideoComposition object, respectively. Typically, these outputs are used when your asset reader is reading from an AVComposition object.
With a single audio mix output, you can read multiple audio tracks from your asset that have been mixed together using an AVAudioMix object. To specify how the audio tracks are mixed, assign the mix to the AVAssetReaderAudioMixOutput object after initialization. The following code shows how to create an audio mix output with all of the audio tracks from your asset, decompress the audio tracks to Linear PCM, and assign an audio mix object to the output. For details on how to configure an audio mix, see Editing.
AVAudioMix *audioMix = <#An AVAudioMix that specifies how the audio tracks from the AVAsset are mixed#>;
// Assumes that assetReader was initialized with an AVComposition object.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the audio tracks to read.
NSArray *audioTracks = [composition tracksWithMediaType:AVMediaTypeAudio];
// Get the decompression settings for Linear PCM.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the audio mix output with the audio tracks and decompression settings.
AVAssetReaderOutput *audioMixOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks audioSettings:decompressionAudioSettings];
// Associate the audio mix used to mix the audio tracks being read with the output.
audioMixOutput.audioMix = audioMix;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:audioMixOutput])
[assetReader addOutput:audioMixOutput];
Note: Passing nil for the audioSettings parameter tells the asset reader to return samples in a convenient uncompressed format. The same is true for the AVAssetReaderVideoCompositionOutput class.
The video composition output behaves in much the same way: you can read multiple video tracks from your asset that have been composited together using an AVVideoComposition object. To read the media data from multiple composited video tracks and decompress it to ARGB, set up your output as follows:
AVVideoComposition *videoComposition = <#An AVVideoComposition that specifies how the video tracks from the AVAsset are composited#>;
// Assumes assetReader was initialized with an AVComposition.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the video tracks to read.
NSArray *videoTracks = [composition tracksWithMediaType:AVMediaTypeVideo];
// Decompression settings for ARGB.
NSDictionary *decompressionVideoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB], (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary] };
// Create the video composition output with the video tracks and decompression settings.
AVAssetReaderOutput *videoCompositionOutput = [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks videoSettings:decompressionVideoSettings];
// Associate the video composition used to composite the video tracks being read with the output.
videoCompositionOutput.videoComposition = videoComposition;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:videoCompositionOutput])
[assetReader addOutput:videoCompositionOutput];
Reading the Asset's Media Data
To start reading after setting up all of the outputs you need, call the startReading method on your asset reader. Next, retrieve the media data individually from each output using the copyNextSampleBuffer method. To start up an asset reader with a single output and read all of its media samples, do the following:
// Start the asset reader up.
[self.assetReader startReading];
BOOL done = NO;
while (!done)
{
// Copy the next sample buffer from the reader output.
CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
if (sampleBuffer)
{
// Do something with sampleBuffer here.
CFRelease(sampleBuffer);
sampleBuffer = NULL;
}
else
{
// Find out why the asset reader output couldn't copy another sample buffer.
if (self.assetReader.status == AVAssetReaderStatusFailed)
{
NSError *failureError = self.assetReader.error;
// Handle the error here.
}
else
{
// The asset reader output has read all of its samples.
done = YES;
}
}
}
Writing an Asset
The AVAssetWriter class writes media data from multiple sources to a single file of a specified file format. You don't need to associate your asset writer object with a specific asset, but you must use a separate asset writer for each output file that you want to create. Because an asset writer can write media data from multiple sources, you must create an AVAssetWriterInput object for each individual track that you want to write to the output file. Each AVAssetWriterInput object expects to receive data in the form of CMSampleBufferRef objects, but if you want to append CVPixelBufferRef objects to your asset writer input, use the AVAssetWriterInputPixelBufferAdaptor class.
Creating the Asset Writer
To create an asset writer, specify the URL for the output file and the desired file type. The following code shows how to initialize an asset writer to create a QuickTime movie:
NSError *outError;
NSURL *outputURL = <#NSURL object representing the URL where you want to save the video#>;
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL
fileType:AVFileTypeQuickTimeMovie
error:&outError];
BOOL success = (assetWriter != nil);
Setting Up the Asset Writer Inputs
For your asset writer to be able to write media data, you must set up at least one asset writer input. For example, if your source of media data is already vending media samples as CMSampleBufferRef objects, just use the AVAssetWriterInput class. To set up an asset writer input that compresses audio media data to 128 kbps AAC and connect it to your asset writer, do the following:
// Configure the channel layout as stereo.
AudioChannelLayout stereoChannelLayout = {
.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
.mChannelBitmap = 0,
.mNumberChannelDescriptions = 0
};
// Convert the channel layout object to an NSData object.
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
// Get the compression settings for 128 kbps AAC.
NSDictionary *compressionAudioSettings = @{
AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
AVSampleRateKey : [NSNumber numberWithInteger:44100],
AVChannelLayoutKey : channelLayoutAsData,
AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};
// Create the asset writer input with the compression settings and specify the media type as audio.
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:compressionAudioSettings];
// Add the input to the writer if possible.
if ([assetWriter canAddInput:assetWriterInput])
[assetWriter addInput:assetWriterInput];
Note: If you want the media data to be written in the format in which it was stored, pass nil in the outputSettings parameter. Pass nil only if the asset writer was initialized with a fileType of AVFileTypeQuickTimeMovie.
Your asset writer input can optionally include some metadata or specify a different transform for a particular track, using the metadata and transform properties respectively. For an asset writer input whose data source is a video track, you can maintain the video's original transform in the output file by doing the following:
AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
assetWriterInput.transform = videoAssetTrack.preferredTransform;
Note: Set the metadata and transform properties before you begin writing with your asset writer for them to take effect.
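For example, a minimal sketch of attaching a title metadata item before writing starts; the key space, key, and value here are illustrative assumptions:
// Build a common-key title item and attach it to the input before startWriting.
AVMutableMetadataItem *titleItem = [AVMutableMetadataItem metadataItem];
titleItem.keySpace = AVMetadataKeySpaceCommon;
titleItem.key = AVMetadataCommonKeyTitle;
titleItem.value = @"My Reencoded Movie"; // Hypothetical title.
assetWriterInput.metadata = @[titleItem];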
When writing media data to the output file, you may sometimes need to allocate pixel buffers. To do so, use the AVAssetWriterInputPixelBufferAdaptor class. For greatest efficiency, instead of adding pixel buffers that were allocated using a separate pool, use the pixel buffer pool provided by the pixel buffer adaptor. The following code creates a pixel buffer object that works in the RGB domain and will use CGImage objects to create its pixel buffers.
NSDictionary *pixelBufferAttributes = @{
    (id)kCVPixelBufferCGImageCompatibilityKey : [NSNumber numberWithBool:YES],
    (id)kCVPixelBufferCGBitmapContextCompatibilityKey : [NSNumber numberWithBool:YES],
    (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32ARGB]
};
AVAssetWriterInputPixelBufferAdaptor *inputPixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterInput sourcePixelBufferAttributes:pixelBufferAttributes];
Note: All AVAssetWriterInputPixelBufferAdaptor objects must be connected to a single asset writer input. That asset writer input must accept media data of type AVMediaTypeVideo.
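As a sketch of how the adaptor's pool might be used, the following assumes image is a CGImageRef you supply, presentationTime is the CMTime for this frame, and startWriting has already been called (the pool is not available before then):
CVPixelBufferRef pixelBuffer = NULL;
// Allocate a buffer from the adaptor's pool rather than a separate pool.
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, inputPixelBufferAdaptor.pixelBufferPool, &pixelBuffer);
if (pixelBuffer != NULL)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    // Draw the CGImage into the pixel buffer through a matching bitmap context.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer), CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer), 8, CVPixelBufferGetBytesPerRow(pixelBuffer), colorSpace, (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer)), image);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    // Append the frame; the asset writer input must be ready for more media data.
    [inputPixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
    CVPixelBufferRelease(pixelBuffer);
}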
Writing Media Data
When you have configured all of the inputs needed for your asset writer, you are ready to begin writing media data. As you did with the asset reader, initiate the writing process with a call to the startWriting method. Then start a sample-writing session with a call to the startSessionAtSourceTime: method. All writing done by an asset writer has to occur within one of these sessions, and the time range of each session defines the time range of the media data included from within the source. For example, if your source is an asset reader supplying media data read from an AVAsset object and you don't want to include media data from the first half of the asset, you would do the following:
CMTime halfAssetDuration = CMTimeMultiplyByFloat64(self.asset.duration, 0.5);
[self.assetWriter startSessionAtSourceTime:halfAssetDuration];
//Implementation continues.
Normally, to conclude a writing session you must call the endSessionAtSourceTime: method. However, if your writing session goes right up to the end of your file, you can end it simply by calling the finishWriting method. To start up an asset writer with a single input and write all of its media data, do the following:
// Prepare the asset writer for writing.
[self.assetWriter startWriting];
// Start a sample-writing session.
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
while ([self.assetWriterInput isReadyForMoreMediaData])
{
// Get the next sample buffer.
CMSampleBufferRef nextSampleBuffer = [self copyNextSampleBufferToWrite];
if (nextSampleBuffer)
{
// If it exists, append the next sample buffer to the output file.
[self.assetWriterInput appendSampleBuffer:nextSampleBuffer];
CFRelease(nextSampleBuffer);
nextSampleBuffer = nil;
}
else
{
// Assume that lack of a next sample buffer means the sample buffer source is out of samples and mark the input as finished.
[self.assetWriterInput markAsFinished];
break;
}
}
}];
The copyNextSampleBufferToWrite method in the code above is simply a stub. This stub is where you would insert the logic that returns CMSampleBufferRef objects representing the media data you want to write. One possible source of sample buffers is an asset reader output.
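Once the input has been marked as finished, the output file still needs to be closed. A minimal sketch using the block-based variant finishWritingWithCompletionHandler:, assuming the writing session runs to the end of the file so no endSessionAtSourceTime: call is needed:
// Finish writing asynchronously once all inputs are marked as finished.
[self.assetWriter finishWritingWithCompletionHandler:^{
    if (self.assetWriter.status == AVAssetWriterStatusFailed)
    {
        NSError *failureError = self.assetWriter.error;
        // Handle the error here.
    }
    else
    {
        // The finished movie file is now available at the asset writer's outputURL.
    }
}];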
Reencoding Assets
You can use an asset reader and an asset writer in tandem to convert an asset from one representation to another. Using these objects, you have more control over the conversion than you do with an AVAssetExportSession object. For example, you can choose which of the tracks you want represented in the output file, specify your own output format, or modify the asset during the conversion process. The first step in this process is simply to set up your asset reader outputs and asset writer inputs as desired. After your asset reader and writer are fully configured, start them both with calls to the startReading and startWriting methods, respectively. The following code snippet shows how to use a single asset writer input to write media data supplied by a single asset reader output:
NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];
// Create a serialization queue for reading and writing.
dispatch_queue_t serializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:serializationQueue usingBlock:^{
while ([self.assetWriterInput isReadyForMoreMediaData])
{
// Get the asset reader output's next sample buffer.
CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
if (sampleBuffer != NULL)
{
// If it exists, append this sample buffer to the output file.
BOOL success = [self.assetWriterInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
// Check for errors that may have occurred when appending the new sample buffer.
if (!success && self.assetWriter.status == AVAssetWriterStatusFailed)
{
NSError *failureError = self.assetWriter.error;
//Handle the error.
}
}
else
{
// If the next sample buffer doesn't exist, find out why the asset reader output couldn't vend another one.
if (self.assetReader.status == AVAssetReaderStatusFailed)
{
NSError *failureError = self.assetReader.error;
//Handle the error here.
}
else
{
// The asset reader output must have vended all of its samples. Mark the input as finished.
[self.assetWriterInput markAsFinished];
break;
}
}
}
}];
Putting It All Together: Using an Asset Reader and Writer in Tandem to Reencode an Asset
This brief code example illustrates how to use an asset reader and writer to reencode the first video and audio tracks of an asset into a new file. It shows how to:
- Use serialization queues to handle the asynchronous nature of reading and writing audiovisual data
- Initialize an asset reader and configure two asset reader outputs, one for audio and one for video
- Initialize an asset writer and configure two asset writer inputs, one for audio and one for video
- Use an asset reader to asynchronously supply media data to an asset writer through two different output/input combinations
- Use a dispatch group to be notified of completion of the reencoding process
- Allow a user to cancel the reencoding process once it has begun
Note: To focus on the most relevant code, this example omits several aspects of a complete app. To use AVFoundation, you are expected to have enough experience with Cocoa to infer the missing pieces.
Handling the Initial Setup
Before you create your asset reader and writer and configure their outputs and inputs, you need to handle some initial setup. The first part of this setup involves creating three separate serialization queues to coordinate the reading and writing process.
NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];
// Create the main serialization queue.
self.mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];
// Create the serialization queue to use for reading and writing the audio data.
self.rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);
NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];
// Create the serialization queue to use for reading and writing the video data.
self.rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);
The main serialization queue is used to coordinate the starting and stopping of the asset reader and writer (perhaps due to cancellation), and the other two serialization queues are used to serialize the reading and writing by each output/input combination, again with potential cancellation.
Now that you have some serialization queues, load the tracks of your asset and begin the reencoding process.
self.asset = <#AVAsset that you want to reencode#>;
self.cancelled = NO;
self.outputURL = <#NSURL representing desired output URL for file generated by asset writer#>;
// Asynchronously load the tracks of the asset you want to read.
[self.asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{
// Once the tracks have finished loading, dispatch the work to the main serialization queue.
dispatch_async(self.mainSerializationQueue, ^{
// Due to asynchronous nature, check to see if user has already cancelled.
if (self.cancelled)
return;
BOOL success = YES;
NSError *localError = nil;
// Check for success of loading the assets tracks.
success = ([self.asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
if (success)
{
// If the tracks loaded successfully, make sure that no file exists at the output path for the asset writer.
NSFileManager *fm = [NSFileManager defaultManager];
NSString *localOutputPath = [self.outputURL path];
if ([fm fileExistsAtPath:localOutputPath])
success = [fm removeItemAtPath:localOutputPath error:&localError];
}
if (success)
success = [self setupAssetReaderAndAssetWriter:&localError];
if (success)
success = [self startAssetReaderAndWriter:&localError];
if (!success)
[self readingAndWritingDidFinishSuccessfully:success withError:localError];
});
}];
When the track-loading process finishes, whether successfully or not, the rest of the work is dispatched to the main serialization queue to ensure that all of it is serialized with a potential cancellation. All that's left now is to implement the cancellation process and the three custom methods called at the end of the previous code listing.
Initializing the Asset Reader and Writer
The custom setupAssetReaderAndAssetWriter: method initializes the reader and writer and configures two output/input combinations, one for an audio track and one for a video track. In this example, the audio is decompressed to Linear PCM using the asset reader and compressed back to 128 kbps AAC using the asset writer. The video is decompressed to YUV using the asset reader and compressed to H.264 using the asset writer.
- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
// Create and initialize the asset reader.
self.assetReader = [[AVAssetReader alloc] initWithAsset:self.asset error:outError];
BOOL success = (self.assetReader != nil);
if (success)
{
// If the asset reader was successfully initialized, do the same for the asset writer.
self.assetWriter = [[AVAssetWriter alloc] initWithURL:self.outputURL fileType:AVFileTypeQuickTimeMovie error:outError];
success = (self.assetWriter != nil);
}
if (success)
{
// If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
NSArray *audioTracks = [self.asset tracksWithMediaType:AVMediaTypeAudio];
if ([audioTracks count] > 0)
assetAudioTrack = [audioTracks objectAtIndex:0];
NSArray *videoTracks = [self.asset tracksWithMediaType:AVMediaTypeVideo];
if ([videoTracks count] > 0)
assetVideoTrack = [videoTracks objectAtIndex:0];
if (assetAudioTrack)
{
// If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
self.assetReaderAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings];
[self.assetReader addOutput:self.assetReaderAudioOutput];
// Then, set the compression settings to 128kbps AAC and create the asset writer input.
AudioChannelLayout stereoChannelLayout = {
.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
.mChannelBitmap = 0,
.mNumberChannelDescriptions = 0
};
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
NSDictionary *compressionAudioSettings = @{
AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
AVSampleRateKey : [NSNumber numberWithInteger:44100],
AVChannelLayoutKey : channelLayoutAsData,
AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};
self.assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings];
[self.assetWriter addInput:self.assetWriterAudioInput];
}
if (assetVideoTrack)
{
// If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
NSDictionary *decompressionVideoSettings = @{
(id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_422YpCbCr8],
(id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
};
self.assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
[self.assetReader addOutput:self.assetReaderVideoOutput];
CMFormatDescriptionRef formatDescription = NULL;
// Grab the video format descriptions from the video track and grab the first one if it exists.
NSArray *videoFormatDescriptions = [assetVideoTrack formatDescriptions];
if ([videoFormatDescriptions count] > 0)
formatDescription = (__bridge CMFormatDescriptionRef)[videoFormatDescriptions objectAtIndex:0];
CGSize trackDimensions = {
.width = 0.0,
.height = 0.0,
};
// If the video track had a format description, grab the track dimensions from there. Otherwise, grab them directly from the track itself.
if (formatDescription)
trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
else
trackDimensions = [assetVideoTrack naturalSize];
NSDictionary *compressionSettings = nil;
// If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
if (formatDescription)
{
NSDictionary *cleanAperture = nil;
NSDictionary *pixelAspectRatio = nil;
CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);
if (cleanApertureFromCMFormatDescription)
{
cleanAperture = @{
AVVideoCleanApertureWidthKey : (__bridge id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
AVVideoCleanApertureHeightKey : (__bridge id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
AVVideoCleanApertureHorizontalOffsetKey : (__bridge id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
AVVideoCleanApertureVerticalOffsetKey : (__bridge id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
};
}
CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);
if (pixelAspectRatioFromCMFormatDescription)
{
pixelAspectRatio = @{
AVVideoPixelAspectRatioHorizontalSpacingKey : (__bridge id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
AVVideoPixelAspectRatioVerticalSpacingKey : (__bridge id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
};
}
// Add whichever settings we could grab from the format description to the compression settings dictionary.
if (cleanAperture || pixelAspectRatio)
{
NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
if (cleanAperture)
[mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];
if (pixelAspectRatio)
[mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];
compressionSettings = mutableCompressionSettings;
}
}
// Create the video settings dictionary for H.264. Use a genuinely mutable dictionary so compression settings can be added below.
NSMutableDictionary *videoSettings = [NSMutableDictionary dictionaryWithDictionary:@{
AVVideoCodecKey : AVVideoCodecH264,
AVVideoWidthKey : [NSNumber numberWithDouble:trackDimensions.width],
AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
}];
// Put the compression settings into the video settings dictionary if we were able to grab them.
if (compressionSettings)
[videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];
// Create the asset writer input and add it to the asset writer.
self.assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings];
[self.assetWriter addInput:self.assetWriterVideoInput];
}
}
return success;
}
Reencoding the Asset
Provided that the asset reader and writer were successfully initialized and configured, the startAssetReaderAndWriter: method described in Handling the Initial Setup is called. This method is where the actual reading and writing of the asset takes place.
- (BOOL)startAssetReaderAndWriter:(NSError **)outError
{
BOOL success = YES;
// Attempt to start the asset reader.
success = [self.assetReader startReading];
if (!success)
*outError = [self.assetReader error];
if (success)
{
// If the reader started successfully, attempt to start the asset writer.
success = [self.assetWriter startWriting];
if (!success)
*outError = [self.assetWriter error];
}
if (success)
{
// If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
self.dispatchGroup = dispatch_group_create();
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
self.audioFinished = NO;
self.videoFinished = NO;
if (self.assetWriterAudioInput)
{
// If there is audio to reencode, enter the dispatch group before beginning the work.
dispatch_group_enter(self.dispatchGroup);
// Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
[self.assetWriterAudioInput requestMediaDataWhenReadyOnQueue:self.rwAudioSerializationQueue usingBlock:^{
// Because the block is called asynchronously, check to see whether its task is complete.
if (self.audioFinished)
return;
BOOL completedOrFailed = NO;
// If the task isn't complete yet, make sure that the input is actually ready for more media data.
while ([self.assetWriterAudioInput isReadyForMoreMediaData] && !completedOrFailed)
{
// Get the next audio sample buffer, and append it to the output file.
CMSampleBufferRef sampleBuffer = [self.assetReaderAudioOutput copyNextSampleBuffer];
if (sampleBuffer != NULL)
{
BOOL success = [self.assetWriterAudioInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
completedOrFailed = !success;
}
else
{
completedOrFailed = YES;
}
}
if (completedOrFailed)
{
// Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
BOOL oldFinished = self.audioFinished;
self.audioFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterAudioInput markAsFinished];
}
dispatch_group_leave(self.dispatchGroup);
}
}];
}
if (self.assetWriterVideoInput)
{
// If we had video to reencode, enter the dispatch group before beginning the work.
dispatch_group_enter(self.dispatchGroup);
// Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
[self.assetWriterVideoInput requestMediaDataWhenReadyOnQueue:self.rwVideoSerializationQueue usingBlock:^{
// Because the block is called asynchronously, check to see whether its task is complete.
if (self.videoFinished)
return;
BOOL completedOrFailed = NO;
// If the task isn't complete yet, make sure that the input is actually ready for more media data.
while ([self.assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
{
// Get the next video sample buffer, and append it to the output file.
CMSampleBufferRef sampleBuffer = [self.assetReaderVideoOutput copyNextSampleBuffer];
if (sampleBuffer != NULL)
{
BOOL success = [self.assetWriterVideoInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
completedOrFailed = !success;
}
else
{
completedOrFailed = YES;
}
}
if (completedOrFailed)
{
// Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
BOOL oldFinished = self.videoFinished;
self.videoFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterVideoInput markAsFinished];
}
dispatch_group_leave(self.dispatchGroup);
}
}];
}
// Set up the notification that the dispatch group will send when the audio and video work have both finished.
dispatch_group_notify(self.dispatchGroup, self.mainSerializationQueue, ^{
BOOL finalSuccess = YES;
NSError *finalError = nil;
// Check to see if the work has finished due to cancellation.
if (self.cancelled)
{
// If so, cancel the reader and writer.
[self.assetReader cancelReading];
[self.assetWriter cancelWriting];
}
else
{
// If cancellation didn't occur, first make sure that the asset reader didn't fail.
if ([self.assetReader status] == AVAssetReaderStatusFailed)
{
finalSuccess = NO;
finalError = [self.assetReader error];
}
// If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
if (finalSuccess)
{
finalSuccess = [self.assetWriter finishWriting];
if (!finalSuccess)
finalError = [self.assetWriter error];
}
}
// Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.
[self readingAndWritingDidFinishSuccessfully:finalSuccess withError:finalError];
});
}
// Return success here to indicate whether the asset reader and writer were started successfully.
return success;
}
During reencoding, the audio and video tracks are asynchronously handled on individual serialization queues to increase the overall performance of the process, but both queues are contained within the same dispatch group. By placing the work for each track within the same dispatch group, the group can send a notification when all of the work is done and the success of the reencoding process can be determined.
Handling Completion
To handle the completion of the reading and writing process, the readingAndWritingDidFinishSuccessfully: method is called, with parameters indicating whether the reencoding completed successfully. If the process didn't finish successfully, the asset reader and writer are both canceled, and any UI-related tasks are dispatched to the main queue.
- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
if (!success)
{
// If the reencoding process failed, we need to cancel the asset reader and writer.
[self.assetReader cancelReading];
[self.assetWriter cancelWriting];
dispatch_async(dispatch_get_main_queue(), ^{
// Handle any UI tasks here related to failure.
});
}
else
{
// Reencoding was successful, reset booleans.
self.cancelled = NO;
self.videoFinished = NO;
self.audioFinished = NO;
dispatch_async(dispatch_get_main_queue(), ^{
// Handle any UI tasks here related to success.
});
}
}
Handling Cancellation
Using multiple serialization queues, you can allow the user of your app to cancel the reencoding process with ease. On the main serialization queue, messages are asynchronously sent to each of the asset-reencoding serialization queues to cancel their reading and writing. When these two serialization queues complete their cancellation, the dispatch group sends a notification to the main serialization queue, where the cancelled property is set to YES. You might associate the cancel method from the following code listing with a button on your UI.
- (void)cancel
{
// Handle cancellation asynchronously, but serialize it with the main queue.
dispatch_async(self.mainSerializationQueue, ^{
// If we had audio data to reencode, we need to cancel the audio work.
if (self.assetWriterAudioInput)
{
// Handle cancellation asynchronously again, but this time serialize it with the audio queue.
dispatch_async(self.rwAudioSerializationQueue, ^{
// Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
BOOL oldFinished = self.audioFinished;
self.audioFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterAudioInput markAsFinished];
}
// Leave the dispatch group since the audio work is finished now.
dispatch_group_leave(self.dispatchGroup);
});
}
if (self.assetWriterVideoInput)
{
// Handle cancellation asynchronously again, but this time serialize it with the video queue.
dispatch_async(self.rwVideoSerializationQueue, ^{
// Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
BOOL oldFinished = self.videoFinished;
self.videoFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterVideoInput markAsFinished];
}
// Leave the dispatch group, since the video work is finished now.
dispatch_group_leave(self.dispatchGroup);
});
}
// Set the cancelled Boolean property to YES to cancel any work on the main queue as well.
self.cancelled = YES;
});
}
Asset Output Settings Assistant
The AVOutputSettingsAssistant class aids in creating output-settings dictionaries for an asset reader or writer. This makes setup much simpler, especially for high-frame-rate H.264 movies that have a number of specific presets. The following code shows an example of using the settings assistant:
AVOutputSettingsAssistant *outputSettingsAssistant = [AVOutputSettingsAssistant outputSettingsAssistantWithPreset:<some preset>];
CMFormatDescriptionRef audioFormat = [self getAudioFormat];
if (audioFormat != NULL)
[outputSettingsAssistant setSourceAudioFormat:(CMAudioFormatDescriptionRef)audioFormat];
CMFormatDescriptionRef videoFormat = [self getVideoFormat];
if (videoFormat != NULL)
[outputSettingsAssistant setSourceVideoFormat:(CMVideoFormatDescriptionRef)videoFormat];
CMTime assetMinVideoFrameDuration = [self getMinFrameDuration];
CMTime averageFrameDuration = [self getAvgFrameDuration];
[outputSettingsAssistant setSourceVideoAverageFrameDuration:averageFrameDuration];
[outputSettingsAssistant setSourceVideoMinFrameDuration:assetMinVideoFrameDuration];
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:<some URL> fileType:[outputSettingsAssistant outputFileType] error:NULL];
AVAssetWriterInput *audioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:[outputSettingsAssistant audioSettings] sourceFormatHint:audioFormat];
AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:[outputSettingsAssistant videoSettings] sourceFormatHint:videoFormat];
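The preset placeholder above is one of the AVOutputSettingsPreset constants declared by AVFoundation. As a minimal sketch, here is how an assistant might be created with the 1280x720 preset:
AVOutputSettingsAssistant *assistant = [AVOutputSettingsAssistant outputSettingsAssistantWithPreset:AVOutputSettingsPreset1280x720];
// The assistant then vends ready-made settings dictionaries for writer inputs.
NSDictionary *videoOutputSettings = [assistant videoSettings];
NSDictionary *audioOutputSettings = [assistant audioSettings];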