GitHub (sample code): iOS AVAsset
Jianshu: iOS AVAsset
Blog: iOS AVAsset
Juejin: iOS AVAsset
Prerequisites
AVFoundation Overview
AVFoundation is a low-level framework for real-time capture and playback. Apple exposes callbacks that give you access to every frame of video data. If you only want to play a video without any custom processing, you can stay with a higher-level framework such as AVKit.
- AVAudioPlayer: play audio files
- AVAudioRecorder: record audio files
- AVAsset: a collection of one or more pieces of media data (audio tracks, video tracks)
- Reading, Writing, and Reencoding Assets: read, write, and re-encode assets
- Thumbnails: generate thumbnails with AVAssetImageGenerator
- Editing: edit captured video, for example changing the background color or opacity, speeding it up, and so on
- Still and Video Media Capture: use a capture session to capture video data from the camera and audio data from the microphone.
Representing an Asset
- AVAsset: the core class of AVFoundation. It represents time-based audiovisual data such as a movie file or a video stream. An asset is a collection of tracks: audio, video, text, closed captions, subtitles, and so on.
- AVMetadataItem: provides metadata about an asset.
- AVAssetTrack: represents a single track, such as an audio track or a video track.
AVAsset is an abstract representation of time-based audiovisual data, and its structure shapes how much of the framework works. The sample buffers that AVFoundation uses to carry time and media data come from the Core Media framework.
Representing Media
- CMSampleBuffer: represents a frame of media data
- CMSampleBufferGetPresentationTimeStamp, CMSampleBufferGetDecodeTimeStamp: get the presentation and decode timestamps
- CMFormatDescriptionRef: format information
- CMGetAttachment: get metadata attached to a sample buffer
CMSampleBufferRef sampleBuffer = <#Get a sample buffer#>;
CFDictionaryRef metadataDictionary =
CMGetAttachment(sampleBuffer, CFSTR("MetadataDictionary"), NULL);
if (metadataDictionary) {
// Do something with the metadata.
}
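The timestamp and format accessors follow the same pattern; a minimal sketch for a video sample buffer (reusing the sampleBuffer above):
CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CMTime dts = CMSampleBufferGetDecodeTimeStamp(sampleBuffer);
CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
if (formatDescription != NULL) {
// For video buffers, the format description reports the encoded dimensions.
CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription);
NSLog(@"PTS %@, DTS %@, %d x %d",
CFBridgingRelease(CMTimeCopyDescription(NULL, pts)),
CFBridgingRelease(CMTimeCopyDescription(NULL, dts)),
dimensions.width, dimensions.height);
}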
CMTime
CMTime is a C structure that represents time as a rational number: an int64_t value (the numerator) over an int32_t timescale (the denominator). All time-related code in AVFoundation uses this structure, so it is essential to understand how it behaves.
- Using CMTime
CMTime time1 = CMTimeMake(200, 2); // 200 half-seconds
CMTime time2 = CMTimeMake(400, 4); // 400 quarter-seconds
// time1 and time2 both represent 100 seconds, but using different timescales.
if (CMTimeCompare(time1, time2) == 0) {
NSLog(@"time1 and time2 are the same");
}
Float64 float64Seconds = 200.0 / 3;
CMTime time3 = CMTimeMakeWithSeconds(float64Seconds , 3); // 66.66... third-seconds
time3 = CMTimeMultiply(time3, 3);
// time3 now represents 200 seconds; next subtract time1 (100 seconds).
time3 = CMTimeSubtract(time3, time1);
CMTimeShow(time3);
if (CMTIME_COMPARE_INLINE(time2, ==, time3)) {
NSLog(@"time2 and time3 are the same");
}
- Special CMTime values
- kCMTimeZero
- kCMTimePositiveInfinity
- kCMTimeInvalid
- kCMTimeNegativeInfinity
CMTime myTime = <#Get a CMTime#>;
if (CMTIME_IS_INVALID(myTime)) {
// Perhaps treat this as an error; display a suitable alert to the user.
}
- CMTime as an object
You can convert a CMTime to and from a CFDictionary with CMTimeCopyAsDictionary and CMTimeMakeFromDictionary, and you can get a string representation of a CMTime with CMTimeCopyDescription.
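A minimal sketch of those conversions:
CMTime time = CMTimeMake(90, 30); // 3 seconds at a timescale of 30
// Convert to a CFDictionary, for example to store the time in a property list.
CFDictionaryRef timeAsDictionary = CMTimeCopyAsDictionary(time, kCFAllocatorDefault);
CMTime restoredTime = CMTimeMakeFromDictionary(timeAsDictionary);
// CMTimeCopyDescription produces a human-readable string such as "{90/30 = 3.000}".
NSString *description = CFBridgingRelease(CMTimeCopyDescription(kCFAllocatorDefault, restoredTime));
NSLog(@"Restored time: %@", description);
CFRelease(timeAsDictionary);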
- CMTimeRange represents a time range
CMTimeRange is a C structure made up of a start time and a duration.
- Comparisons
- CMTimeRangeContainsTime
- CMTimeRangeEqual
- CMTimeRangeContainsTimeRange
- CMTimeRangeGetUnion
Note that a CMTimeRange does not include its own end time, so the following expression always evaluates to false:
CMTimeRangeContainsTime(range, CMTimeRangeGetEnd(range))
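A short sketch of building ranges and using the comparison helpers above:
CMTimeRange range1 = CMTimeRangeMake(CMTimeMake(0, 1), CMTimeMake(10, 1)); // 0s-10s
CMTimeRange range2 = CMTimeRangeMake(CMTimeMake(5, 1), CMTimeMake(10, 1)); // 5s-15s
if (CMTimeRangeContainsTime(range1, CMTimeMake(3, 1))) {
NSLog(@"range1 contains the 3-second mark");
}
if (!CMTimeRangeEqual(range1, range2)) {
// The union of the two ranges spans 0s-15s.
CMTimeRange unionRange = CMTimeRangeGetUnion(range1, range2);
CMTimeRangeShow(unionRange);
}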
- Special CMTimeRange values
- kCMTimeRangeInvalid
- kCMTimeRangeZero
CMTimeRange myTimeRange = <#Get a CMTimeRange#>;
if (CMTIMERANGE_IS_EMPTY(myTimeRange)) {
// The time range is zero.
}
- Conversions
Use CMTimeRangeCopyAsDictionary and CMTimeRangeMakeFromDictionary to convert a CMTimeRange to and from a CFDictionary.
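A minimal sketch, mirroring the CMTime conversion above:
CMTimeRange range = CMTimeRangeMake(kCMTimeZero, CMTimeMake(10, 1));
CFDictionaryRef rangeAsDictionary = CMTimeRangeCopyAsDictionary(range, kCFAllocatorDefault);
CMTimeRange restoredRange = CMTimeRangeMakeFromDictionary(rangeAsDictionary);
CMTimeRangeShow(restoredRange);
CFRelease(rangeAsDictionary);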
1. Using Assets
- Definition: an asset can come from a file or from the user's photo library; think of it as an audiovisual media resource.
When you create an asset object, you cannot read all of its data immediately: a file containing audio and video may be large, and the system needs time to traverse it. Once you have the asset, you can extract still images from it, transcode it to another format, or trim it.
1.1. 創(chuàng)建Asset對象
通過URL作為一個asset對象的標識. 這個URL可以是本地文件路徑或網(wǎng)絡流
NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
AVURLAsset *anAsset = [[AVURLAsset alloc] initWithURL:url options:nil];
1.2. Initialization Options
An AVURLAsset is initialized with a URL and an options dictionary.
- One of the keys you can pass in this dictionary is AVURLAssetPreferPreciseDurationAndTimingKey.
- AVURLAssetPreferPreciseDurationAndTimingKey is a boolean that determines whether the asset should be prepared to report an exact duration and provide precise random access by time.
- Computing an exact duration requires significant processing; an approximate duration is cheaper and is good enough for playback.
- If you only want to play the asset, pass nil for the options; the key defaults to NO.
- If you want to use the asset in a composition, you need precise random access, so set the key to YES.
NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
NSDictionary *options = @{ AVURLAssetPreferPreciseDurationAndTimingKey : @YES };
AVURLAsset *anAssetToUseInAComposition = [[AVURLAsset alloc] initWithURL:url options:options];
1.3. Accessing the User's Photo Library
You can fetch video assets from the user's photo library.
- iPod library: MPMediaQuery
- Camera Roll / Photos: ALAssetsLibrary
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
// Enumerate just the photos and videos group by using ALAssetsGroupSavedPhotos.
[library enumerateGroupsWithTypes:ALAssetsGroupSavedPhotos usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
// Within the group enumeration block, filter to enumerate just videos.
[group setAssetsFilter:[ALAssetsFilter allVideos]];
// For this example, we're only interested in the first item.
[group enumerateAssetsAtIndexes:[NSIndexSet indexSetWithIndex:0]
options:0
usingBlock:^(ALAsset *alAsset, NSUInteger index, BOOL *innerStop) {
// The end of the enumeration is signaled by asset == nil.
if (alAsset) {
ALAssetRepresentation *representation = [alAsset defaultRepresentation];
NSURL *url = [representation url];
AVAsset *avAsset = [AVURLAsset URLAssetWithURL:url options:nil];
// Do something interesting with the AV asset.
}
}];
}
failureBlock: ^(NSError *error) {
// Typically you should handle an error more gracefully than this.
NSLog(@"No groups");
}];
1.4. Loading Asset Values
Initializing an asset does not mean that the information you want is ready to use; computing values such as the duration can take time, so you receive the results asynchronously in a block.
Use the AVAsynchronousKeyValueLoading protocol:
- statusOfValueForKey:error: tests whether a value has been loaded.
NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
AVURLAsset *anAsset = [[AVURLAsset alloc] initWithURL:url options:nil];
NSArray *keys = @[@"duration"];
[anAsset loadValuesAsynchronouslyForKeys:keys completionHandler:^() {
NSError *error = nil;
AVKeyValueStatus durationStatus = [anAsset statusOfValueForKey:@"duration" error:&error];
switch (durationStatus) {
case AVKeyValueStatusLoaded:
[self updateUserInterfaceForDuration];
break;
case AVKeyValueStatusFailed:
[self reportError:error forAsset:anAsset];
break;
case AVKeyValueStatusCancelled:
// Do whatever is appropriate for cancelation.
break;
}
}];
1.5. Getting Still Images from a Video
To extract still images such as thumbnails from an asset independently of playback, use an AVAssetImageGenerator object. First check (for example with tracksWithMediaType: or tracksWithMediaCharacteristic:) that the asset actually has video tracks:
AVAsset *anAsset = <#Get an asset#>;
if ([[anAsset tracksWithMediaType:AVMediaTypeVideo] count] > 0) {
AVAssetImageGenerator *imageGenerator =
[AVAssetImageGenerator assetImageGeneratorWithAsset:anAsset];
// Implementation continues...
}
1.6. Generating a Single Image
Use copyCGImageAtTime:actualTime:error: to generate an image at a specific time. AVFoundation may not be able to produce the image at exactly the time you request, so the method also reports the actual time that was used.
AVAsset *myAsset = <#An asset#>;
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:myAsset];
Float64 durationSeconds = CMTimeGetSeconds([myAsset duration]);
CMTime midpoint = CMTimeMakeWithSeconds(durationSeconds/2.0, 600);
NSError *error;
CMTime actualTime;
CGImageRef halfWayImage = [imageGenerator copyCGImageAtTime:midpoint actualTime:&actualTime error:&error];
if (halfWayImage != NULL) {
NSString *actualTimeString = (NSString *)CFBridgingRelease(CMTimeCopyDescription(NULL, actualTime));
NSString *requestedTimeString = (NSString *)CFBridgingRelease(CMTimeCopyDescription(NULL, midpoint));
NSLog(@"Got halfWayImage: Asked for %@, got %@", requestedTimeString, actualTimeString);
// Do something interesting with the image.
CGImageRelease(halfWayImage);
}
1.7. Generating a Sequence of Images
To generate a series of images, use generateCGImagesAsynchronouslyForTimes:completionHandler:, which produces images for an array of requested times.
AVAsset *myAsset = <#An asset#>;
// Assume: @property (strong) AVAssetImageGenerator *imageGenerator;
self.imageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:myAsset];
Float64 durationSeconds = CMTimeGetSeconds([myAsset duration]);
CMTime firstThird = CMTimeMakeWithSeconds(durationSeconds/3.0, 600);
CMTime secondThird = CMTimeMakeWithSeconds(durationSeconds*2.0/3.0, 600);
CMTime end = CMTimeMakeWithSeconds(durationSeconds, 600);
NSArray *times = @[[NSValue valueWithCMTime:kCMTimeZero],
[NSValue valueWithCMTime:firstThird], [NSValue valueWithCMTime:secondThird],
[NSValue valueWithCMTime:end]];
[self.imageGenerator generateCGImagesAsynchronouslyForTimes:times
completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime,
AVAssetImageGeneratorResult result, NSError *error) {
NSString *requestedTimeString = (NSString *)
CFBridgingRelease(CMTimeCopyDescription(NULL, requestedTime));
NSString *actualTimeString = (NSString *)
CFBridgingRelease(CMTimeCopyDescription(NULL, actualTime));
NSLog(@"Requested: %@; actual %@", requestedTimeString, actualTimeString);
if (result == AVAssetImageGeneratorSucceeded) {
// Do something interesting with the image.
}
if (result == AVAssetImageGeneratorFailed) {
NSLog(@"Failed with error: %@", [error localizedDescription]);
}
if (result == AVAssetImageGeneratorCancelled) {
NSLog(@"Canceled");
}
}];
You can cancel any in-flight image generation by calling cancelAllCGImageGeneration.
1.8. Trimming and Transcoding a Movie
Use an AVAssetExportSession object to transcode a movie to a different format or to trim it.
AVAsset *anAsset = <#Get an asset#>;
NSArray *compatiblePresets = [AVAssetExportSession exportPresetsCompatibleWithAsset:anAsset];
if ([compatiblePresets containsObject:AVAssetExportPresetLowQuality]) {
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc]
initWithAsset:anAsset presetName:AVAssetExportPresetLowQuality];
// Implementation continues.
}
exportSession.outputURL = <#A file URL#>;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
CMTime start = CMTimeMakeWithSeconds(1.0, 600);
CMTime duration = CMTimeMakeWithSeconds(3.0, 600);
CMTimeRange range = CMTimeRangeMake(start, duration);
exportSession.timeRange = range;
Calling exportAsynchronouslyWithCompletionHandler: creates the new file.
[exportSession exportAsynchronouslyWithCompletionHandler:^{
switch ([exportSession status]) {
case AVAssetExportSessionStatusFailed:
NSLog(@"Export failed: %@", [[exportSession error] localizedDescription]);
break;
case AVAssetExportSessionStatusCancelled:
NSLog(@"Export canceled");
break;
default:
break;
}
}];
You can cancel an export in progress with cancelExport.
An export can fail for reasons such as:
- an incoming phone call
- another application starting to play audio while your app is in the background
2. Playback
You play an asset with an AVPlayer object.
2.1 Playing Assets
- Use AVPlayer to play a single asset.
- Use AVQueuePlayer to play a number of items in sequence.
2.2 Handling Different Types of Asset
- File-based asset:
- Create an AVURLAsset.
- Create an AVPlayerItem from the asset.
- Associate the AVPlayerItem with an AVPlayer.
- Use KVO to observe changes in the item's status.
- HTTP live stream:
NSURL *url = [NSURL URLWithString:@"<#Live stream URL#>"];
// You may find a test stream at <http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8>.
self.playerItem = [AVPlayerItem playerItemWithURL:url];
[self.playerItem addObserver:self forKeyPath:@"status" options:0 context:&ItemStatusContext];
self.player = [AVPlayer playerWithPlayerItem:self.playerItem];
2.3 Playing an Item
- (IBAction)play:sender {
[player play];
}
- Changing the playback rate
- canPlayReverse: whether the item supports a rate of -1.0 (reverse playback)
- canPlaySlowReverse: rates between 0.0 and -1.0
- canPlayFastReverse: rates less than -1.0
aPlayer.rate = 0.5;
aPlayer.rate = 1; // normal playback
aPlayer.rate = 0; // pause
aPlayer.rate = -0.5; // reverse playback
- Repositioning the playhead: used to jump to a different position in the item, and to reset the playhead after playback finishes
- seekToTime: favors performance
- seekToTime:toleranceBefore:toleranceAfter: favors precision
CMTime fiveSecondsIn = CMTimeMake(5, 1);
[player seekToTime:fiveSecondsIn];
CMTime fiveSecondsIn = CMTimeMake(5, 1);
[player seekToTime:fiveSecondsIn toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
After playback finishes, reset the playhead to zero so the item can be played again:
// Register with the notification center after creating the player item.
[[NSNotificationCenter defaultCenter]
addObserver:self
selector:@selector(playerItemDidReachEnd:)
name:AVPlayerItemDidPlayToEndTimeNotification
object:<#The player item#>];
- (void)playerItemDidReachEnd:(NSNotification *)notification {
[player seekToTime:kCMTimeZero];
}
2.4. Playing Multiple Items
You can queue several items with an AVQueuePlayer; calling play plays them in sequence (a short sketch of these methods follows the code below).
- advanceToNextItem: skip to the next item
- insertItem:afterItem: insert an item
- removeItem: remove an item
- removeAllItems: remove all items
NSArray *items = <#An array of player items#>;
AVQueuePlayer *queuePlayer = [[AVQueuePlayer alloc] initWithItems:items];
AVPlayerItem *anItem = <#Get a player item#>;
if ([queuePlayer canInsertItem:anItem afterItem:nil]) {
[queuePlayer insertItem:anItem afterItem:nil];
}
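The other queue-management methods listed above work the same way; a brief sketch, continuing with the queuePlayer and anItem from the code above:
[queuePlayer play];
// Skip the item that is currently playing.
[queuePlayer advanceToNextItem];
// Remove a specific pending item, or clear the queue entirely.
[queuePlayer removeItem:anItem];
[queuePlayer removeAllItems];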
2.5. 監(jiān)聽播放
比如
- 如果用戶切換到別的APP, rate會降到0
- 播放遠程媒體時,item的loadedTimeRanges and seekableTimeRanges等更多數(shù)據(jù)的可用性發(fā)生變化
- currentItem屬性在Item通過HTTP live stream創(chuàng)建
- item的track屬性也變化如果當前正在播放HTTP live stream.
- item的stataus屬性也會隨著播放失敗的原因而變化
我們應該在主線程注冊和刪除通知
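A minimal sketch of registering and later removing such an observation on the main queue (the playerItem property and the ItemStatusContext variable are assumed to exist, as in the code in section 2.6.4):
dispatch_async(dispatch_get_main_queue(), ^{
[self.playerItem addObserver:self
forKeyPath:@"status"
options:NSKeyValueObservingOptionNew
context:&ItemStatusContext];
});
// Later, when the observation is no longer needed (also on the main queue):
dispatch_async(dispatch_get_main_queue(), ^{
[self.playerItem removeObserver:self forKeyPath:@"status" context:&ItemStatusContext];
});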
Responding to a status change
When the status of an item changes, you are told through the KVO callback:
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
change:(NSDictionary *)change context:(void *)context {
if (context == <#Player status context#>) {
AVPlayer *thePlayer = (AVPlayer *)object;
if ([thePlayer status] == AVPlayerStatusFailed) {
NSError *error = [<#The AVPlayer object#> error];
// Respond to error: for example, display an alert sheet.
return;
}
// Deal with other status change if appropriate.
}
// Deal with other change notifications if appropriate.
[super observeValueForKeyPath:keyPath ofObject:object
change:change context:context];
return;
}
- Tracking time
- addPeriodicTimeObserverForInterval:queue:usingBlock:
- addBoundaryTimeObserverForTimes:queue:usingBlock:
// Assume a property: @property (strong) id playerObserver;
Float64 durationSeconds = CMTimeGetSeconds([<#An asset#> duration]);
CMTime firstThird = CMTimeMakeWithSeconds(durationSeconds/3.0, 1);
CMTime secondThird = CMTimeMakeWithSeconds(durationSeconds*2.0/3.0, 1);
NSArray *times = @[[NSValue valueWithCMTime:firstThird], [NSValue valueWithCMTime:secondThird]];
self.playerObserver = [<#A player#> addBoundaryTimeObserverForTimes:times queue:NULL usingBlock:^{
NSString *timeDescription = (NSString *)
CFBridgingRelease(CMTimeCopyDescription(NULL, [self.player currentTime]));
NSLog(@"Passed a boundary at %@", timeDescription);
}];
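The companion method addPeriodicTimeObserverForInterval:queue:usingBlock: delivers the current time at a regular cadence, which is the usual way to drive a playback-progress UI; a sketch (the half-second interval is arbitrary):
// Assume a property: @property (strong) id periodicObserver;
CMTime interval = CMTimeMakeWithSeconds(0.5, 600);
self.periodicObserver = [self.player addPeriodicTimeObserverForInterval:interval queue:dispatch_get_main_queue() usingBlock:^(CMTime time) {
NSLog(@"Current time: %f", CMTimeGetSeconds(time));
}];
// When updates are no longer needed (for example, when the view disappears):
[self.player removeTimeObserver:self.periodicObserver];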
- Reaching the end of playback
Register for AVPlayerItemDidPlayToEndTimeNotification to be told when an item finishes playing:
[[NSNotificationCenter defaultCenter] addObserver:<#The observer, typically self#>
selector:@selector(<#The selector name#>)
name:AVPlayerItemDidPlayToEndTimeNotification
object:<#A player item#>];
2.6. Playing Video Files with AVPlayerLayer
2.6.1. Steps
- Configure an AVPlayerLayer.
- Create an AVPlayer.
- Create an AVPlayerItem based on the asset and observe its status with KVO.
- Prepare to play.
- Reset the playhead once playback finishes.
2.6.2. The Player View
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
@interface PlayerView : UIView
@property (nonatomic) AVPlayer *player;
@end
@implementation PlayerView
+ (Class)layerClass {
return [AVPlayerLayer class];
}
- (AVPlayer*)player {
return [(AVPlayerLayer *)[self layer] player];
}
- (void)setPlayer:(AVPlayer *)player {
[(AVPlayerLayer *)[self layer] setPlayer:player];
}
@end
2.6.3. A Simple View Controller
@class PlayerView;
@interface PlayerViewController : UIViewController
@property (nonatomic) AVPlayer *player;
@property (nonatomic) AVPlayerItem *playerItem;
@property (nonatomic, weak) IBOutlet PlayerView *playerView;
@property (nonatomic, weak) IBOutlet UIButton *playButton;
- (IBAction)loadAssetFromFile:sender;
- (IBAction)play:sender;
- (void)syncUI;
@end
- (void)syncUI {
if ((self.player.currentItem != nil) &&
([self.player.currentItem status] == AVPlayerItemStatusReadyToPlay)) {
self.playButton.enabled = YES;
}
else {
self.playButton.enabled = NO;
}
}
- (void)viewDidLoad {
[super viewDidLoad];
[self syncUI];
}
2.6.4. 創(chuàng)建Asset
- (IBAction)loadAssetFromFile:sender {
NSURL *fileURL = [[NSBundle mainBundle]
URLForResource:<#@"VideoFileName"#> withExtension:<#@"extension"#>];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
NSString *tracksKey = @"tracks";
[asset loadValuesAsynchronouslyForKeys:@[tracksKey] completionHandler:
^{
// The completion block goes here.
}];
}
// Define this constant for the key-value observation context.
static const NSString *ItemStatusContext;
// Completion handler block.
dispatch_async(dispatch_get_main_queue(),
^{
NSError *error;
AVKeyValueStatus status = [asset statusOfValueForKey:tracksKey error:&error];
if (status == AVKeyValueStatusLoaded) {
self.playerItem = [AVPlayerItem playerItemWithAsset:asset];
// ensure that this is done before the playerItem is associated with the player
[self.playerItem addObserver:self forKeyPath:@"status"
options:NSKeyValueObservingOptionInitial context:&ItemStatusContext];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(playerItemDidReachEnd:)
name:AVPlayerItemDidPlayToEndTimeNotification
object:self.playerItem];
self.player = [AVPlayer playerWithPlayerItem:self.playerItem];
[self.playerView setPlayer:self.player];
}
else {
// You should deal with the error appropriately.
NSLog(@"The asset's tracks were not loaded:\n%@", [error localizedDescription]);
}
});
2.6.5. Responding to Status Changes
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
change:(NSDictionary *)change context:(void *)context {
if (context == &ItemStatusContext) {
dispatch_async(dispatch_get_main_queue(),
^{
[self syncUI];
});
return;
}
[super observeValueForKeyPath:keyPath ofObject:object
change:change context:context];
return;
}
2.6.7. Playing the Item
- (IBAction)play:sender {
[self.player play];
}
// Register with the notification center after creating the player item.
[[NSNotificationCenter defaultCenter]
addObserver:self
selector:@selector(playerItemDidReachEnd:)
name:AVPlayerItemDidPlayToEndTimeNotification
object:[self.player currentItem]];
- (void)playerItemDidReachEnd:(NSNotification *)notification {
[self.player seekToTime:kCMTimeZero];
}
3. Editing
AVFoundation provides a rich set of classes for editing assets. The core of editing is the composition, which is simply a collection of tracks from one or more assets.
The AVMutableComposition class provides an interface for inserting, removing, and reordering tracks; for example, two assets can be combined into a single new asset.
With the AVMutableAudioMix class you can perform custom audio processing on the composition's audio tracks.
AVMutableVideoComposition: edits and renders the composition's video tracks
AVMutableVideoCompositionLayerInstruction: transforms, transform ramps, opacity, and opacity ramps
AVAssetExportSession: exports the combined audio and video to a file
3.1 Creating a Composition
Create an AVMutableComposition object and then add audio and video data to it through AVMutableCompositionTrack objects:
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
// Create the video composition track.
AVMutableCompositionTrack *mutableCompositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
// Create the audio composition track.
AVMutableCompositionTrack *mutableCompositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
kCMPersistentTrackID_Invalid: a unique identifier is generated automatically and associated with the track.
Media types:
- AVMediaTypeVideo
- AVMediaTypeAudio
- AVMediaTypeSubtitle
- AVMediaTypeText
3.2. Adding Audiovisual Data
// You can retrieve AVAssets from a number of places, like the camera roll for example.
AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAsset *anotherVideoAsset = <#another AVAsset with at least one video track#>;
// Get the first video track from each asset.
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *anotherVideoAssetTrack = [[anotherVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
// Add them both to the composition.
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero,videoAssetTrack.timeRange.duration) ofTrack:videoAssetTrack atTime:kCMTimeZero error:nil];
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero,anotherVideoAssetTrack.timeRange.duration) ofTrack:anotherVideoAssetTrack atTime:videoAssetTrack.timeRange.duration error:nil];
AVMutableCompositionTrack *compatibleCompositionTrack = [mutableComposition mutableTrackCompatibleWithTrack:<#the AVAssetTrack you want to insert#>];
if (compatibleCompositionTrack) {
// Implementation continues.
}
3.3. Generating a Volume Ramp
A single AVMutableAudioMix object can perform custom audio processing on all of your composition's audio tracks individually:
AVMutableAudioMix *mutableAudioMix = [AVMutableAudioMix audioMix];
// Create the audio mix input parameters object.
AVMutableAudioMixInputParameters *mixParameters = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:mutableCompositionAudioTrack];
// Set the volume ramp to slowly fade the audio out over the duration of the composition.
[mixParameters setVolumeRampFromStartVolume:1.f toEndVolume:0.f timeRange:CMTimeRangeMake(kCMTimeZero, mutableComposition.duration)];
// Attach the input parameters to the audio mix.
mutableAudioMix.inputParameters = @[mixParameters];
3.4 Performing Custom Video Processing
An AVMutableVideoComposition object performs all of the custom processing on the video tracks of your composition; it also lets you set the render size, scale, and frame duration for the composed video directly.
- Changing the background color
AVMutableVideoCompositionInstruction *mutableVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mutableVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);
mutableVideoCompositionInstruction.backgroundColor = [[UIColor redColor] CGColor];
- Applying an opacity ramp
AVAssetTrack *firstVideoAssetTrack = <#AVAssetTrack representing the first video segment played in the composition#>;
AVAssetTrack *secondVideoAssetTrack = <#AVAssetTrack representing the second video segment played in the composition#>;
// Create the first video composition instruction.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
// Create the layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Create the opacity ramp to fade out the first video track over its entire duration.
[firstVideoLayerInstruction setOpacityRampFromStartOpacity:1.f toEndOpacity:0.f timeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration)];
// Create the second video composition instruction so that the second video track isn't transparent.
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the second video track.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
// Create the second layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Attach the first layer instruction to the first video composition instruction.
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
// Attach the second layer instruction to the second video composition instruction.
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
// Attach both of the video composition instructions to the video composition.
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
- Incorporating Core Animation effects
A video composition can add the power of Core Animation through its animationTool property. With it you can watermark a video, add titles, animate overlays, and so on.
Core Animation can be used with video compositions in two ways:
- add a Core Animation layer as its own individual composition track, or
- render Core Animation effects (using a Core Animation layer) directly into the video frames of your composition.
The following code takes the second approach and adds a watermark over the video:
CALayer *watermarkLayer = <#CALayer representing your desired watermark image#>;
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
videoLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
[parentLayer addSublayer:videoLayer];
watermarkLayer.position = CGPointMake(mutableVideoComposition.renderSize.width/2, mutableVideoComposition.renderSize.height/4);
[parentLayer addSublayer:watermarkLayer];
mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
3.5 Combining Multiple Assets and Saving the Result to the Camera Roll
3.5.1. Creating the Composition
Use an AVMutableComposition to combine several assets:
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
3.5.2. Adding the Assets
AVAssetTrack *firstVideoAssetTrack = [[firstVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *secondVideoAssetTrack = [[secondVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration) ofTrack:firstVideoAssetTrack atTime:kCMTimeZero error:nil];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondVideoAssetTrack.timeRange.duration) ofTrack:secondVideoAssetTrack atTime:firstVideoAssetTrack.timeRange.duration error:nil];
[audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration)) ofTrack:[[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] atTime:kCMTimeZero error:nil];
3.5.3. Checking the Video Orientations
Once the audio and video tracks have been added to the composition, make sure the orientations of both video tracks are compatible. By default, video tracks are assumed to be in landscape orientation: if a video was shot in portrait mode it will not be oriented properly when exported, and combining a landscape video with a portrait video will likewise fail.
BOOL isFirstVideoAssetPortrait = NO;
CGAffineTransform firstTransform = firstVideoAssetTrack.preferredTransform;
// Check the first video track's preferred transform to determine if it was recorded in portrait mode.
if (firstTransform.a == 0 && firstTransform.d == 0 && (firstTransform.b == 1.0 || firstTransform.b == -1.0) && (firstTransform.c == 1.0 || firstTransform.c == -1.0)) {
isFirstVideoAssetPortrait = YES;
}
BOOL isSecondVideoAssetPortrait = NO;
CGAffineTransform secondTransform = secondVideoAssetTrack.preferredTransform;
// Check the second video track's preferred transform to determine if it was recorded in portrait mode.
if (secondTransform.a == 0 && secondTransform.d == 0 && (secondTransform.b == 1.0 || secondTransform.b == -1.0) && (secondTransform.c == 1.0 || secondTransform.c == -1.0)) {
isSecondVideoAssetPortrait = YES;
}
if ((isFirstVideoAssetPortrait && !isSecondVideoAssetPortrait) || (!isFirstVideoAssetPortrait && isSecondVideoAssetPortrait)) {
UIAlertView *incompatibleVideoOrientationAlert = [[UIAlertView alloc] initWithTitle:@"Error!" message:@"Cannot combine a video shot in portrait mode with a video shot in landscape mode." delegate:self cancelButtonTitle:@"Dismiss" otherButtonTitles:nil];
[incompatibleVideoOrientationAlert show];
return;
}
3.5.4. Applying the Video Composition Layer Instructions
Once you know the video segments have compatible orientations, you can apply the necessary layer instructions to each one:
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the first instruction to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionInstruction * secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the second instruction to span the duration of the second video track.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the first layer instruction to the preferred transform of the first video track.
[firstVideoLayerInstruction setTransform:firstTransform atTime:kCMTimeZero];
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the second layer instruction to the preferred transform of the second video track.
[secondVideoLayerInstruction setTransform:secondTransform atTime:firstVideoAssetTrack.timeRange.duration];
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
3.5.5. Setting the Render Size and Frame Duration
CGSize naturalSizeFirst, naturalSizeSecond;
// If the first video asset was shot in portrait mode, then so was the second one if we made it here.
if (isFirstVideoAssetPortrait) {
// Invert the width and height for the video tracks to ensure that they display properly.
naturalSizeFirst = CGSizeMake(firstVideoAssetTrack.naturalSize.height, firstVideoAssetTrack.naturalSize.width);
naturalSizeSecond = CGSizeMake(secondVideoAssetTrack.naturalSize.height, secondVideoAssetTrack.naturalSize.width);
}
else {
// If the videos weren't shot in portrait mode, we can just use their natural sizes.
naturalSizeFirst = firstVideoAssetTrack.naturalSize;
naturalSizeSecond = secondVideoAssetTrack.naturalSize;
}
float renderWidth, renderHeight;
// Set the renderWidth and renderHeight to the max of the two videos widths and heights.
if (naturalSizeFirst.width > naturalSizeSecond.width) {
renderWidth = naturalSizeFirst.width;
}
else {
renderWidth = naturalSizeSecond.width;
}
if (naturalSizeFirst.height > naturalSizeSecond.height) {
renderHeight = naturalSizeFirst.height;
}
else {
renderHeight = naturalSizeSecond.height;
}
mutableVideoComposition.renderSize = CGSizeMake(renderWidth, renderHeight);
// Set the frame duration to an appropriate value (i.e. 30 frames per second for video).
mutableVideoComposition.frameDuration = CMTimeMake(1,30);
3.5.6. Exporting the Composition and Saving It to the Camera Roll
// Create a static date formatter so we only have to initialize it once.
static NSDateFormatter *kDateFormatter;
if (!kDateFormatter) {
kDateFormatter = [[NSDateFormatter alloc] init];
kDateFormatter.dateStyle = NSDateFormatterMediumStyle;
kDateFormatter.timeStyle = NSDateFormatterShortStyle;
}
// Create the export session with the composition and set the preset to the highest quality.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
// Set the desired output URL for the file created by the export process.
exporter.outputURL = [[[[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil] URLByAppendingPathComponent:[kDateFormatter stringFromDate:[NSDate date]]] URLByAppendingPathExtension:CFBridgingRelease(UTTypeCopyPreferredTagWithClass((CFStringRef)AVFileTypeQuickTimeMovie, kUTTagClassFilenameExtension))];
// Set the output file type to be a QuickTime movie.
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
exporter.videoComposition = mutableVideoComposition;
// Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
[exporter exportAsynchronouslyWithCompletionHandler:^{
dispatch_async(dispatch_get_main_queue(), ^{
if (exporter.status == AVAssetExportSessionStatusCompleted) {
ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
if ([assetsLibrary videoAtPathIsCompatibleWithSavedPhotosAlbum:exporter.outputURL]) {
[assetsLibrary writeVideoAtPathToSavedPhotosAlbum:exporter.outputURL completionBlock:NULL];
}
}
});
}];
4. Exporting an AVAsset
Overview
To read and write assets you use the export APIs provided by AVFoundation. The AVAssetExportSession class offers simple export operations such as changing the file format or trimming the length of an asset; for deeper control you use an asset reader or an asset writer.
- AVAssetReader: when you want to operate on the contents of an asset, for example reading an audio track to produce a waveform
- AVAssetWriter: to produce an asset from media data such as sample buffers or still images
An asset reader and writer are not intended for real-time processing, and an asset reader cannot read an HTTP live stream. However, if you feed an asset writer from a real-time data source, set expectsMediaDataInRealTime to YES on its inputs; setting that flag for a non-real-time source will result in your files not being interleaved properly (see the sketch below).
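The flag lives on each writer input; a minimal sketch (the pass-through nil outputSettings and the live-capture scenario are illustrative):
AVAssetWriterInput *liveVideoInput =
[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:nil];
// YES only when sample buffers arrive from a real-time source such as an AVCaptureSession;
// leave it at NO (the default) when re-encoding an existing file.
liveVideoInput.expectsMediaDataInRealTime = YES;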
4.1. Reading an Asset
Each AVAssetReader object can be associated with only a single asset, but that asset may contain multiple tracks.
- Creating the Asset Reader
NSError *outError;
AVAsset *someAsset = <#AVAsset that you want to read#>;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:someAsset error:&outError];
BOOL success = (assetReader != nil);
- Setting Up the Asset Reader Outputs
After creating the asset reader, set up at least one output to receive the media data being read. When setting up your outputs, be sure to set alwaysCopiesSampleData to NO to get a performance improvement (see the one-line sketch after the code below).
If you only want to read media data from one or more tracks, possibly converting that data to a different format, use the AVAssetReaderTrackOutput class, with one track output object for each AVAssetTrack you want to read from the asset.
AVAsset *localAsset = assetReader.asset;
// Get the audio track to read.
AVAssetTrack *audioTrack = [[localAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
// Decompression settings for Linear PCM
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the output with the audio track and decompression settings.
AVAssetReaderOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:decompressionAudioSettings];
// Add the output to the reader if possible.
if ([assetReader canAddOutput:trackOutput])
[assetReader addOutput:trackOutput];
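The alwaysCopiesSampleData hint mentioned above is a single property on the output; a minimal sketch:
// Safe as long as you only read the vended sample buffers and never modify them in place.
trackOutput.alwaysCopiesSampleData = NO;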
Use the AVAssetReaderAudioMixOutput and AVAssetReaderVideoCompositionOutput classes to read media data that has been mixed or composited together using an AVAudioMix or AVVideoComposition object. Typically these outputs are used when the asset reader is reading from an AVComposition:
AVAudioMix *audioMix = <#An AVAudioMix that specifies how the audio tracks from the AVAsset are mixed#>;
// Assumes that assetReader was initialized with an AVComposition object.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the audio tracks to read.
NSArray *audioTracks = [composition tracksWithMediaType:AVMediaTypeAudio];
// Get the decompression settings for Linear PCM.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the audio mix output with the audio tracks and decompression settings.
AVAssetReaderOutput *audioMixOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks audioSettings:decompressionAudioSettings];
// Associate the audio mix used to mix the audio tracks being read with the output.
audioMixOutput.audioMix = audioMix;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:audioMixOutput])
[assetReader addOutput:audioMixOutput];
The same pattern applies to video:
AVVideoComposition *videoComposition = <#An AVVideoComposition that specifies how the video tracks from the AVAsset are composited#>;
// Assumes assetReader was initialized with an AVComposition.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the video tracks to read.
NSArray *videoTracks = [composition tracksWithMediaType:AVMediaTypeVideo];
// Decompression settings for ARGB.
NSDictionary *decompressionVideoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB], (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary] };
// Create the video composition output with the video tracks and decompression settings.
AVAssetReaderOutput *videoCompositionOutput = [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks videoSettings:decompressionVideoSettings];
// Associate the video composition used to composite the video tracks being read with the output.
videoCompositionOutput.videoComposition = videoComposition;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:videoCompositionOutput])
[assetReader addOutput:videoCompositionOutput];
- Reading the Asset's Media Data
After setting up all of the outputs you need, call startReading to begin reading. Then retrieve the media data from each output individually with copyNextSampleBuffer:
// Start the asset reader up.
[self.assetReader startReading];
BOOL done = NO;
while (!done)
{
// Copy the next sample buffer from the reader output.
CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
if (sampleBuffer)
{
// Do something with sampleBuffer here.
CFRelease(sampleBuffer);
sampleBuffer = NULL;
}
else
{
// Find out why the asset reader output couldn't copy another sample buffer.
if (self.assetReader.status == AVAssetReaderStatusFailed)
{
NSError *failureError = self.assetReader.error;
// Handle the error here.
}
else
{
// The asset reader output has read all of its samples.
done = YES;
}
}
}
4.2. Writing an Asset
AVAssetWriter writes media data from one or more sources to a single file in a specified format. An asset writer is not associated with a specific asset, but you must use a separate asset writer for each output file you create.
- Creating the Asset Writer
Specify the URL of the output file and its type:
NSError *outError;
NSURL *outputURL = <#NSURL object representing the URL where you want to save the video#>;
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL
fileType:AVFileTypeQuickTimeMovie
error:&outError];
BOOL success = (assetWriter != nil);
- Setting Up the Asset Writer Inputs
For the asset writer to be able to write media data, you must set up at least one asset writer input. The example below configures an audio input for 128 kbps stereo AAC:
// Configure the channel layout as stereo.
AudioChannelLayout stereoChannelLayout = {
.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
.mChannelBitmap = 0,
.mNumberChannelDescriptions = 0
};
// Convert the channel layout object to an NSData object.
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
// Get the compression settings for 128 kbps AAC.
NSDictionary *compressionAudioSettings = @{
AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
AVSampleRateKey : [NSNumber numberWithInteger:44100],
AVChannelLayoutKey : channelLayoutAsData,
AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};
// Create the asset writer input with the compression settings and specify the media type as audio.
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:compressionAudioSettings];
// Add the input to the writer if possible.
if ([assetWriter canAddInput:assetWriterInput])
[assetWriter addInput:assetWriterInput];
To preserve the source video's orientation in the output file, you can copy the source track's preferredTransform onto the writer input; and if you will be appending CVPixelBuffers rather than CMSampleBuffers, wrap the input in an AVAssetWriterInputPixelBufferAdaptor:
AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
assetWriterInput.transform = videoAssetTrack.preferredTransform;
NSDictionary *pixelBufferAttributes = @{
(__bridge NSString *)kCVPixelBufferCGImageCompatibilityKey : [NSNumber numberWithBool:YES],
(__bridge NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey : [NSNumber numberWithBool:YES],
(__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32ARGB]
};
AVAssetWriterInputPixelBufferAdaptor *inputPixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterInput sourcePixelBufferAttributes:pixelBufferAttributes];
- Writing Media Data
After the inputs are set up, call startWriting and then open a sample-writing session with startSessionAtSourceTime:; all media data written within the session must fall inside its time range. For example, to start the session halfway through the asset:
CMTime halfAssetDuration = CMTimeMultiplyByFloat64(self.asset.duration, 0.5);
[self.assetWriter startSessionAtSourceTime:halfAssetDuration];
//Implementation continues.
You can end the current writing session with endSessionAtSourceTime:; if the session should simply run to the end of the file, you just finish writing (a finishing sketch follows the code below). Media data is typically pulled in on a serial queue with requestMediaDataWhenReadyOnQueue:usingBlock:, as shown here:
// Prepare the asset writer for writing.
[self.assetWriter startWriting];
// Start a sample-writing session.
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
while ([self.assetWriterInput isReadyForMoreMediaData])
{
// Get the next sample buffer.
CMSampleBufferRef nextSampleBuffer = [self copyNextSampleBufferToWrite];
if (nextSampleBuffer)
{
// If it exists, append the next sample buffer to the output file.
[self.assetWriterInput appendSampleBuffer:nextSampleBuffer];
CFRelease(nextSampleBuffer);
nextSampleBuffer = nil;
}
else
{
// Assume that lack of a next sample buffer means the sample buffer source is out of samples and mark the input as finished.
[self.assetWriterInput markAsFinished];
break;
}
}
}];
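When every input has been marked as finished, close the output file. A minimal sketch using the asynchronous finishing call:
[self.assetWriter finishWritingWithCompletionHandler:^{
if (self.assetWriter.status == AVAssetWriterStatusCompleted) {
// The file at the writer's outputURL is now ready to use.
}
else {
NSError *failureError = self.assetWriter.error;
// Handle the error here.
}
}];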
4.3. Reencoding Assets
An asset reader and an asset writer can be used in tandem to convert an asset from one representation to another: the writer input pulls each sample buffer from the reader output and appends it to the output file.
NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];
// Create a serialization queue for reading and writing.
dispatch_queue_t serializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:serializationQueue usingBlock:^{
while ([self.assetWriterInput isReadyForMoreMediaData])
{
// Get the asset reader output's next sample buffer.
CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
if (sampleBuffer != NULL)
{
// If it exists, append this sample buffer to the output file.
BOOL success = [self.assetWriterInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
// Check for errors that may have occurred when appending the new sample buffer.
if (!success && self.assetWriter.status == AVAssetWriterStatusFailed)
{
NSError *failureError = self.assetWriter.error;
//Handle the error.
}
}
else
{
// If the next sample buffer doesn't exist, find out why the asset reader output couldn't vend another one.
if (self.assetReader.status == AVAssetReaderStatusFailed)
{
NSError *failureError = self.assetReader.error;
//Handle the error here.
}
else
{
// The asset reader output must have vended all of its samples. Mark the input as finished.
[self.assetWriterInput markAsFinished];
break;
}
}
}
}];
4.4 Putting It All Together: Using an Asset Reader and Writer in Tandem to Reencode an Asset
Workflow
- Use serialization queues to handle the asynchronous reading and writing of data.
- Initialize the asset reader and configure two asset reader outputs, one for audio and one for video.
- Initialize the asset writer and configure two asset writer inputs, one for audio and one for video.
- Use the asset reader to asynchronously supply media data to the asset writer through the two output/input pairs.
- Use a dispatch group to be notified when the reencoding process completes (see the sketch after this list).
- Allow the user to cancel the reencoding process once it has begun.
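A condensed sketch of how a dispatch group can tie the audio and video re-encoding loops together (the queue name and the cancelled property are illustrative, not part of the original code):
dispatch_queue_t mainSerializationQueue = dispatch_queue_create("com.example.reencode.main", NULL);
dispatch_group_t dispatchGroup = dispatch_group_create();
// Enter the group once per track being re-encoded; each requestMediaDataWhenReadyOnQueue:usingBlock:
// loop calls dispatch_group_leave after it marks its asset writer input as finished (or fails).
dispatch_group_enter(dispatchGroup); // audio
dispatch_group_enter(dispatchGroup); // video
// ... start the audio and video re-encoding loops from section 4.3 here ...
dispatch_group_notify(dispatchGroup, mainSerializationQueue, ^{
BOOL cancelled = self.cancelled; // assumed flag toggled by a Cancel button
if (cancelled) {
[self.assetReader cancelReading];
[self.assetWriter cancelWriting];
}
else {
[self.assetWriter finishWritingWithCompletionHandler:^{
// Re-encoding finished; check self.assetWriter.status for the final result.
}];
}
});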