Part One: Understanding AVFoundation
On iOS, native video processing is built on the AVFoundation framework. Most common editing features, such as trimming, audio mixing, reverse playback, and fast-forward, are implemented on top of it.
1. Audio Mixing
AVMutableComposition
For mixing, the main AVFoundation API we use is AVMutableComposition (an audio/video composition of tracks), which acts as a container.
//Create an audio/video composition (the container)
AVMutableComposition *mainComposition = [[AVMutableComposition alloc]init];
AVMutableCompositionTrack
Create the corresponding audio and video tracks ==> AVMutableCompositionTrack
//Add a video track to the mutable composition
AVMutableCompositionTrack *compositionVideoTrack = [mainComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
//Add an audio track to the mutable composition
AVMutableCompositionTrack *compositionAudioTrack = [mainComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
You can think of this as one container holding two cups, one for video and one for audio.
AVAssetTrack
AVAssetTrack represents a single track. An AVAsset object exposes its tracks grouped by media type, so you can ask it separately for its video tracks and its audio tracks.
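The asset used in the snippets below is never created in the original post; here is a minimal sketch, assuming the source video lives at some URL (the name videoAssetUrl is illustrative):
//Assumption: videoAssetUrl points at the source video file
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoAssetUrl options:nil];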
//Array of video tracks
NSArray<AVAssetTrack *> *videoTrackers = [asset tracksWithMediaType:AVMediaTypeVideo];
if (0 >= videoTrackers.count) {
    NSLog(@"Failed to get the video track");
    return;
}
//Get the first video track
AVAssetTrack *video_track = [videoTrackers objectAtIndex:0];
********************************************************
//Array of audio tracks
NSArray<AVAssetTrack *> *audioTrackers = [asset tracksWithMediaType:AVMediaTypeAudio];
if (0 >= audioTrackers.count) {
    NSLog(@"Failed to get the audio track");
    return;
}
//Get the first audio track
AVAssetTrack *audio_track = [audioTrackers objectAtIndex:0];
Pour the retrieved video track into its cup
//Video duration in seconds
float video_times = (float)asset.duration.value / (float)asset.duration.timescale;
compositionVideoTrack.preferredTransform = video_track.preferredTransform;
NSError *error = nil;
//Insert the source video track into the mutable video track
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)
                               ofTrack:video_track
                                atTime:kCMTimeZero
                                 error:&error];
if (error) {
    NSLog(@"video track error: %@", error);
    return;
}
*****************************************************
int audio_time_scale = audio_track.naturalTimeScale;
//Audio duration, expressed in the audio track's timescale
CMTime audio_duration = CMTimeMake(video_times * audio_time_scale, audio_time_scale);
//Insert the source audio track into the mutable audio track
[compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, audio_duration)
                               ofTrack:audio_track
                                atTime:kCMTimeZero
                                 error:&error];
if (error) {
    NSLog(@"audio track error: %@", error);
    return;
}
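A quick aside on the CMTime arithmetic above: the video duration is first converted to seconds and then re-expressed in the audio track's timescale. CoreMedia's CMTimeMakeWithSeconds does the same conversion in one call; a minimal equivalent using the variables already defined (not part of the original code):
//Could replace the CMTimeMake line above
CMTime audio_duration = CMTimeMakeWithSeconds(video_times, audio_time_scale);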
Regarding the method - (BOOL)insertTimeRange:(CMTimeRange)timeRange ofTrack:(AVAssetTrack *)track atTime:(CMTime)startTime error:(NSError * _Nullable * _Nullable)outError:
timeRange is the time range of the source track to copy, track is the source track you are inserting, startTime is the point in the composition at which the inserted material should start, and outError returns any error information.
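To illustrate the startTime parameter: a second clip can be appended right after the first by inserting it at the composition track's current end instead of kCMTimeZero. A minimal sketch, assuming secondAsset and secondVideoTrack were loaded the same way as above (both names are illustrative):
//Append a second clip at the current end of the composition's video track
CMTime appendAt = CMTimeRangeGetEnd(compositionVideoTrack.timeRange);
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondAsset.duration)
                               ofTrack:secondVideoTrack
                                atTime:appendAt
                                 error:&error];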
Adding a second audio track
This track can come from another video, or from an MP3 file (see the sketch after the code below).
//Add another audio track
//Load the second source asset
AVURLAsset *mixAsset = [[AVURLAsset alloc] initWithURL:mixAssetUrl options:nil];
NSArray<AVAssetTrack *> *audioTrackers_mix = [mixAsset tracksWithMediaType:AVMediaTypeAudio];
if (0 >= audioTrackers_mix.count) {
    NSLog(@"Failed to get the second audio source");
    return;
}
//Add a second audio track to the mutable composition
AVMutableCompositionTrack *mixAudioTrack = [mainComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];//second audio track
//Insert the captured audio into the second track
[mixAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, audio_duration)
                       ofTrack:[audioTrackers_mix objectAtIndex:0]
                        atTime:kCMTimeZero
                         error:&error];
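As noted above, mixAssetUrl does not have to point at another video. A minimal sketch, assuming the background music ships in the app bundle as an MP3 (the file name background_music.mp3 is illustrative):
//Assumption: background_music.mp3 is bundled with the app
NSURL *mixAssetUrl = [[NSBundle mainBundle] URLForResource:@"background_music" withExtension:@"mp3"];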
Operating on the audio and video tracks
This stage is where the tracks are processed: for example, controlling each audio track's volume during mixing, or setting the video's render size.
//Video composition (the set of video instructions)
AVMutableVideoComposition *select_videoComposition = [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:mainComposition];
AVMutableVideoComposition *first_vcn = [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:asset];
select_videoComposition.renderSize = first_vcn.renderSize;
//Audio mix
AVMutableAudioMix *videoAudioMixTools = [AVMutableAudioMix audioMix];
//Volume parameters for the first audio track
AVMutableAudioMixInputParameters *firstAudioParam = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:compositionAudioTrack];
//Set the first track's volume ramp
[firstAudioParam setVolumeRampFromStartVolume:firstStartVolume toEndVolume:firstEndVolume timeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)];
//Volume parameters for the second audio track
AVMutableAudioMixInputParameters *secondAudioParam = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:mixAudioTrack];
[secondAudioParam setVolumeRampFromStartVolume:secondStartVolume toEndVolume:secondEndVolume timeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)];
videoAudioMixTools.inputParameters = @[firstAudioParam, secondAudioParam];
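If you do not need a fade and just want each track at a fixed level, AVMutableAudioMixInputParameters also offers setVolume:atTime:. A minimal sketch with illustrative values:
//Fixed levels instead of a ramp: original audio at full volume, added music at 30%
[firstAudioParam setVolume:1.0 atTime:kCMTimeZero];
[secondAudioParam setVolume:0.3 atTime:kCMTimeZero];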
At this point you have three objects: mainComposition (the audio/video composition), select_videoComposition (the video composition instructions), and videoAudioMixTools (the audio mix).
Preview
The preview mainly uses those three objects: mainComposition, select_videoComposition, and videoAudioMixTools.
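In the preview code below, the three objects are read from a sourceVideo_ model object that is not shown earlier in the post. A minimal sketch of what such a model might declare (the class name SourceVideoModel is illustrative):
@interface SourceVideoModel : NSObject
@property (nonatomic, strong) AVMutableComposition *mainComposition;
@property (nonatomic, strong) AVMutableVideoComposition *select_videoComposition;
@property (nonatomic, strong) AVMutableAudioMix *videoAudioMixTools;
@end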
AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:sourceVideo_.mainComposition];
//Apply the video instructions and the audio mix to the player item
item.videoComposition = sourceVideo_.select_videoComposition;
[item setAudioMix:sourceVideo_.videoAudioMixTools];
AVPlayer *tmpPlayer = [AVPlayer playerWithPlayerItem:item];
self.player = tmpPlayer;
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:self.player];
playerLayer.frame = self.videoContainView.bounds;
playerLayer.videoGravity = AVLayerVideoGravityResize;
[self.view.layer addSublayer:playerLayer];
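The snippet above stops short of starting playback; once the layer is in place, kicking it off is just:
[self.player play];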
At this point you can hear the mixing in action. In the next article I will explain how to compress the processed audio and video and export it to a file.