This article is reposted from: AVAudioFoundation(3): Audio and Video Editing | www.samirchen.com
The main content of this article comes from the AVFoundation Programming Guide.
Audio and Video Editing
Having briefly covered the AVFoundation framework above, let's take a look at the interfaces related to audio and video editing.
A composition can simply be thought of as a collection of tracks, and these tracks may come from different media assets. AVMutableComposition provides interfaces for inserting and removing tracks, as well as for adjusting their order.
The figure below shows how a new composition takes the corresponding tracks from existing assets and stitches them together into a new asset.
When processing audio, you can use the interfaces of the AVMutableAudioMix class to perform custom operations, as shown in the figure below. Currently, you can specify a maximum volume or set a volume ramp for an audio track.
As shown in the figure below, you can also use AVMutableVideoComposition to work directly with the video tracks in a composition. When processing a single video composition, you can specify its render size, scale, frame rate, and other parameters, and output the final video file. Through video composition instructions (AVMutableVideoCompositionInstruction, etc.), you can modify the composition's background color and apply layer instructions. These layer instructions (AVMutableVideoCompositionLayerInstruction, etc.) can be used to apply transforms, transform ramps, opacity, and opacity ramps to the video tracks in the composition. In addition, you can apply effects from the Core Animation framework by setting the video composition's animationTool property.
As shown in the figure below, you can use AVAssetExportSession to combine your composition with its audio mix and video composition. You simply initialize an AVAssetExportSession object and set its audioMix and videoComposition properties to your audio mix and video composition, respectively.
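To make that concrete, here is a minimal sketch of the wiring (assuming mutableComposition, mutableAudioMix, and mutableVideoComposition are built as shown in the sections below; the output URL is a placeholder):
// Create an export session for the composition and attach the audio mix and video composition.
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
exportSession.audioMix = mutableAudioMix;
exportSession.videoComposition = mutableVideoComposition;
exportSession.outputURL = <#A file URL for the exported movie#>;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    // Check exportSession.status (and exportSession.error on failure) here.
}];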
Creating a Composition
The above gave a brief overview of several audio and video editing scenarios; now let's go through the specific interfaces in detail, starting with AVMutableComposition.
When creating your own composition with AVMutableComposition, you typically use AVMutableCompositionTrack to add one or more composition tracks to it. For example, the following simple code adds one audio track and one video track to a composition:
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
// Create the video composition track.
AVMutableCompositionTrack *mutableCompositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
// Create the audio composition track.
AVMutableCompositionTrack *mutableCompositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
When adding a new track to a composition, you must supply its media type and a track ID. The main media types include audio, video, subtitles, text, and so on.
Note that every track requires a unique track ID. A convenient approach is to pass kCMPersistentTrackID_Invalid as the track ID, which makes the framework generate a unique ID for the track automatically.
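As a small, hypothetical illustration (not from the original guide), the same call works for other media types as well, and the automatically generated ID can be read back from the returned track:
// Add a subtitle track and let the framework assign a unique track ID.
AVMutableCompositionTrack *subtitleTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeSubtitle preferredTrackID:kCMPersistentTrackID_Invalid];
CMPersistentTrackID generatedTrackID = subtitleTrack.trackID; // the ID generated for this track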
Adding Audiovisual Data to a Composition
To add media data to a composition track, you need access to the AVAsset that contains the media data. You can use the AVMutableCompositionTrack interface to add multiple tracks of the same media type to the same composition track. The following example takes one video asset track from each of two AVAssets and adds both to a new composition track:
// You can retrieve AVAssets from a number of places, like the camera roll for example.
AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAsset *anotherVideoAsset = <#another AVAsset with at least one video track#>;
// Get the first video track from each asset.
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *anotherVideoAssetTrack = [[anotherVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
// Add them both to the composition.
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero,videoAssetTrack.timeRange.duration) ofTrack:videoAssetTrack atTime:kCMTimeZero error:nil];
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero,anotherVideoAssetTrack.timeRange.duration) ofTrack:anotherVideoAssetTrack atTime:videoAssetTrack.timeRange.duration error:nil];
Retrieving Compatible Composition Tracks
Where possible, use only one composition track for each media type; this optimizes resource usage. When playing media data back to back, place media data of the same type on the same composition track. You can use code like the following to find out whether the composition already contains a composition track that is compatible with the asset track you want to insert, and then use it:
AVMutableCompositionTrack *compatibleCompositionTrack = [mutableComposition mutableTrackCompatibleWithTrack:<#the AVAssetTrack you want to insert#>];
if (compatibleCompositionTrack) {
// Implementation continues.
}
Note that when you place multiple video segments on the same composition track, frames may be dropped at the transitions between segments, especially on embedded devices. For that reason, choose the number of video segments per composition track carefully.
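The lookup above might be completed along the following lines (a sketch; the fallback that creates a new track when no compatible one exists is our addition, and assetTrack is a placeholder):
AVAssetTrack *assetTrack = <#the AVAssetTrack you want to insert#>;
AVMutableCompositionTrack *compositionTrack = [mutableComposition mutableTrackCompatibleWithTrack:assetTrack];
if (!compositionTrack) {
    // No compatible track exists yet, so create a new one for this media type.
    compositionTrack = [mutableComposition addMutableTrackWithMediaType:assetTrack.mediaType preferredTrackID:kCMPersistentTrackID_Invalid];
}
// Append the asset track's full time range at the current end of the composition.
NSError *insertError = nil;
[compositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetTrack.timeRange.duration) ofTrack:assetTrack atTime:mutableComposition.duration error:&insertError];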
Setting a Volume Ramp
A single AVMutableAudioMix object can perform custom audio processing on each audio track in a composition individually.
The following code shows how to use AVMutableAudioMix to set a volume ramp on an audio track so that the audio slowly fades out. Get an AVMutableAudioMix instance with the audioMix class method; then use the audioMixInputParametersWithTrack: method of the AVMutableAudioMixInputParameters class to associate it with a specific audio track in the composition; after that, you can adjust the volume through the AVMutableAudioMix instance.
AVMutableAudioMix *mutableAudioMix = [AVMutableAudioMix audioMix];
// Create the audio mix input parameters object.
AVMutableAudioMixInputParameters *mixParameters = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:mutableCompositionAudioTrack];
// Set the volume ramp to slowly fade the audio out over the duration of the composition.
[mixParameters setVolumeRampFromStartVolume:1.f toEndVolume:0.f timeRange:CMTimeRangeMake(kCMTimeZero, mutableComposition.duration)];
// Attach the input parameters to the audio mix.
mutableAudioMix.inputParameters = @[mixParameters];
Custom Video Processing
Just as we use AVMutableAudioMix to process audio, we use AVMutableVideoComposition to process video. A single AVMutableVideoComposition instance is enough to apply custom processing to all of the video tracks in a composition, such as setting the render size, scale, and frame rate.
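As a quick sketch of those composition-wide settings (the concrete values here are only illustrative):
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.renderSize = CGSizeMake(1280, 720); // output dimensions in pixels
videoComposition.renderScale = 1.0;                  // scale applied to the render size (default is 1.0)
videoComposition.frameDuration = CMTimeMake(1, 30);  // 30 frames per second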
Let's walk through a few scenarios in turn.
Setting the Video Background Color
Every video composition must also have an array of AVVideoCompositionInstruction objects containing at least one video composition instruction. You can use AVMutableVideoCompositionInstruction to create your own video composition instructions; through these instructions you can modify the composition's background color, specify post-processing, apply layer instructions, and so on.
The following example code shows how to create a video composition instruction that sets a red background color for the entire duration of the composition:
AVMutableVideoCompositionInstruction *mutableVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mutableVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);
mutableVideoCompositionInstruction.backgroundColor = [[UIColor redColor] CGColor];
Setting an Opacity Ramp
Video composition instructions can also be used to apply video composition layer instructions. An AVMutableVideoCompositionLayerInstruction can apply transforms, transform ramps, opacity, and opacity ramps to a video track. The order of the layer instructions stored in a video composition instruction's layerInstructions property determines how the video frames from the tracks are layered and composed.
The following sample code shows how to add an opacity ramp so that the first video fades out before switching to the second video:
AVAssetTrack *firstVideoAssetTrack = <#AVAssetTrack representing the first video segment played in the composition#>;
AVAssetTrack *secondVideoAssetTrack = <#AVAssetTrack representing the second video segment played in the composition#>;
// Create the first video composition instruction.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
// Create the layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Create the opacity ramp to fade out the first video track over its entire duration.
[firstVideoLayerInstruction setOpacityRampFromStartOpacity:1.f toEndOpacity:0.f timeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration)];
// Create the second video composition instruction so that the second video track isn't transparent.
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the second video track.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
// Create the second layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Attach the first layer instruction to the first video composition instruction.
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
// Attach the second layer instruction to the second video composition instruction.
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
// Attach both of the video composition instructions to the video composition.
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
Animation Effects
You can also tap into the power of the Core Animation framework by setting the video composition's animationTool property, for example to add a watermark, a title, or an animated overlay to the video.
There are two different ways to use Core Animation with a video composition:
- Add a Core Animation layer as its own, independent composition track (a sketch follows below)
- Render animation effects with a Core Animation layer directly into the video frames
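For the first approach, AVVideoCompositionCoreAnimationTool offers videoCompositionCoreAnimationToolWithAdditionalLayer:asTrackID:. The snippet below is only a minimal sketch of that call; the layer and the track ID are placeholders of our own, not part of the original example.
CALayer *animationLayer = <#CALayer containing your animated content#>;
// Pick a track ID that is not used by any other track in the composition (assumption: 100 is free).
CMPersistentTrackID animationTrackID = 100;
mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithAdditionalLayer:animationLayer asTrackID:animationTrackID];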
The following code shows the latter approach, overlaying a watermark on the video:
CALayer *watermarkLayer = <#CALayer representing your desired watermark image#>;
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
videoLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
[parentLayer addSublayer:videoLayer];
watermarkLayer.position = CGPointMake(mutableVideoComposition.renderSize.width/2, mutableVideoComposition.renderSize.height/4);
[parentLayer addSublayer:watermarkLayer];
mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
A Complete Example
This example shows how to combine two video asset tracks and one audio asset track into a single video file. The main steps are:
- Create an AVMutableComposition object and add multiple AVMutableCompositionTrack objects to it
- Add time ranges of the AVAssetTrack objects to the corresponding composition tracks
- Check the preferredTransform property of the video asset tracks to determine the video orientation
- Apply transforms to the video with AVMutableVideoCompositionLayerInstruction objects
- Set the renderSize and frameDuration properties of the video composition
- Export the video file
- Save the video file to the camera roll
The sample code below omits some memory-management and notification-removal code.
// 1. Create the composition: create a composition and add one audio track and one video track to it.
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
// 2. Add the assets: get two video tracks and one audio track from the source assets, insert the two video tracks into the video composition track one after the other, and insert the audio track into the audio composition track.
AVAsset *firstVideoAsset = <#First AVAsset with at least one video track#>;
AVAsset *secondVideoAsset = <#Second AVAsset with at least one video track#>;
AVAsset *audioAsset = <#AVAsset with at least one audio track#>;
AVAssetTrack *firstVideoAssetTrack = [[firstVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *secondVideoAssetTrack = [[secondVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *audioAssetTrack = [[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration) ofTrack:firstVideoAssetTrack atTime:kCMTimeZero error:nil];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondVideoAssetTrack.timeRange.duration) ofTrack:secondVideoAssetTrack atTime:firstVideoAssetTrack.timeRange.duration error:nil];
[audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration)) ofTrack:audioAssetTrack atTime:kCMTimeZero error:nil];
// 3. Check the composition orientation. After adding the audio and video tracks to the composition, you must also make sure that all of its video tracks have the same orientation. By default a video track is assumed to be in landscape mode; if a video track recorded in portrait mode is added, the exported video will be oriented incorrectly. Likewise, trying to merge and export a landscape video together with a portrait video causes the export session to fail.
BOOL isFirstVideoAssetPortrait = NO;
CGAffineTransform firstTransform = firstVideoAssetTrack.preferredTransform;
// Check the first video track's preferred transform to determine if it was recorded in portrait mode.
if (firstTransform.a == 0 && firstTransform.d == 0 && (firstTransform.b == 1.0 || firstTransform.b == -1.0) && (firstTransform.c == 1.0 || firstTransform.c == -1.0)) {
isFirstVideoAssetPortrait = YES;
}
BOOL isSecondVideoAssetPortrait = NO;
CGAffineTransform secondTransform = secondVideoAssetTrack.preferredTransform;
// Check the second video track's preferred transform to determine if it was recorded in portrait mode.
if (secondTransform.a == 0 && secondTransform.d == 0 && (secondTransform.b == 1.0 || secondTransform.b == -1.0) && (secondTransform.c == 1.0 || secondTransform.c == -1.0)) {
isSecondVideoAssetPortrait = YES;
}
if ((isFirstVideoAssetPortrait && !isSecondVideoAssetPortrait) || (!isFirstVideoAssetPortrait && isSecondVideoAssetPortrait)) {
UIAlertView *incompatibleVideoOrientationAlert = [[UIAlertView alloc] initWithTitle:@"Error!" message:@"Cannot combine a video shot in portrait mode with a video shot in landscape mode." delegate:self cancelButtonTitle:@"Dismiss" otherButtonTitles:nil];
[incompatibleVideoOrientationAlert show];
return;
}
// 4. Apply the video composition layer instructions. Once you know that the orientations of the video segments to be merged are compatible, you can apply the necessary layer instructions to each segment and add those layer instructions to the video composition.
// Every `AVAssetTrack` object has a `preferredTransform` property that contains the orientation information for that asset track. This transform is applied whenever the asset track is displayed on screen. In the code below, each layer instruction's transform is set to the corresponding asset track's transform so that, even after you adjust the render size, the video in the new composition still displays correctly.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the first instruction to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the second instruction to span the duration of the second video track.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
// Create two video layer instructions, associate them with the video composition track, and set their transforms to the preferredTransform of each asset track.
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the first layer instruction to the preferred transform of the first video track.
[firstVideoLayerInstruction setTransform:firstTransform atTime:kCMTimeZero];
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the second layer instruction to the preferred transform of the second video track.
[secondVideoLayerInstruction setTransform:secondTransform atTime:firstVideoAssetTrack.timeRange.duration];
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
// 5. Set the render size and frame rate. To fully resolve the orientation issue, you also need to adjust the video composition's `renderSize` property and set a suitable `frameDuration`, e.g. 1/30 for 30 frames per second. The `renderScale` property defaults to 1.0.
CGSize naturalSizeFirst, naturalSizeSecond;
// If the first video asset was shot in portrait mode, then so was the second one if we made it here.
if (isFirstVideoAssetPortrait) {
// Invert the width and height for the video tracks to ensure that they display properly.
naturalSizeFirst = CGSizeMake(firstVideoAssetTrack.naturalSize.height, firstVideoAssetTrack.naturalSize.width);
naturalSizeSecond = CGSizeMake(secondVideoAssetTrack.naturalSize.height, secondVideoAssetTrack.naturalSize.width);
} else {
// If the videos weren't shot in portrait mode, we can just use their natural sizes.
naturalSizeFirst = firstVideoAssetTrack.naturalSize;
naturalSizeSecond = secondVideoAssetTrack.naturalSize;
}
float renderWidth, renderHeight;
// Set the renderWidth and renderHeight to the max of the two videos widths and heights.
if (naturalSizeFirst.width > naturalSizeSecond.width) {
renderWidth = naturalSizeFirst.width;
} else {
renderWidth = naturalSizeSecond.width;
}
if (naturalSizeFirst.height > naturalSizeSecond.height) {
renderHeight = naturalSizeFirst.height;
} else {
renderHeight = naturalSizeSecond.height;
}
mutableVideoComposition.renderSize = CGSizeMake(renderWidth, renderHeight);
// Set the frame duration to an appropriate value (i.e. 30 frames per second for video).
mutableVideoComposition.frameDuration = CMTimeMake(1,30);
// 6. Export the composition and save it to the camera roll. Create an `AVAssetExportSession` object and set its `outputURL` to export the video to the desired file. We can also use the `ALAssetsLibrary` interfaces to save the exported video file to the camera roll.
// Create a static date formatter so we only have to initialize it once.
static NSDateFormatter *kDateFormatter;
if (!kDateFormatter) {
kDateFormatter = [[NSDateFormatter alloc] init];
kDateFormatter.dateStyle = NSDateFormatterMediumStyle;
kDateFormatter.timeStyle = NSDateFormatterShortStyle;
}
// Create the export session with the composition and set the preset to the highest quality.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
// Set the desired output URL for the file created by the export process.
exporter.outputURL = [[[[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil] URLByAppendingPathComponent:[kDateFormatter stringFromDate:[NSDate date]]] URLByAppendingPathExtension:CFBridgingRelease(UTTypeCopyPreferredTagWithClass((CFStringRef)AVFileTypeQuickTimeMovie, kUTTagClassFilenameExtension))];
// Set the output file type to be a QuickTime movie.
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
exporter.videoComposition = mutableVideoComposition;
// Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
[exporter exportAsynchronouslyWithCompletionHandler:^{
dispatch_async(dispatch_get_main_queue(), ^{
if (exporter.status == AVAssetExportSessionStatusCompleted) {
ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
if ([assetsLibrary videoAtPathIsCompatibleWithSavedPhotosAlbum:exporter.outputURL]) {
[assetsLibrary writeVideoAtPathToSavedPhotosAlbum:exporter.outputURL completionBlock:NULL];
}
}
});
}];