Preface: our project needs to send videos and photos, and the product team wanted WeChat-style short-video recording: tap to take a photo, long-press to record a short video. After studying many demos and reading the AVFoundation documentation, I wrapped the AVFoundation APIs into a small video-recording component; flash, focus, filters, and other features will be added later.
The official documentation introduces AVFoundation as follows:
The AVFoundation framework combines four major technology areas that together encompass a wide range of tasks for capturing, processing, synthesizing, controlling, importing and exporting audiovisual media on Apple platforms.
The figure below shows where AVFoundation sits in the related framework stack:
This article covers only AVFoundation's audio/video recording functionality. AVFoundation contains many headers; the ones involved in audio/video capture are mainly the following:
// AVCaptureDevice represents a physical device that provides realtime input media data, such as video and audio
#import <AVFoundation/AVCaptureDevice.h>
// AVCaptureInput is an abstract class that provides an interface for connecting capture input sources to an AVCaptureSession
#import <AVFoundation/AVCaptureInput.h>
// AVCaptureOutput handles the uncompressed or compressed audio/video samples being captured; the AVCaptureAudioDataOutput and AVCaptureVideoDataOutput subclasses are typically used
#import <AVFoundation/AVCaptureOutput.h>
// AVCaptureSession is the central hub of AVFoundation's capture classes
#import <AVFoundation/AVCaptureSession.h>
// A CoreAnimation layer subclass for previewing the visual output of an AVCaptureSession
#import <AVFoundation/AVCaptureVideoPreviewLayer.h>
// AVAssetWriter provides services for writing media data to a new file
#import <AVFoundation/AVAssetWriter.h>
// Appends new media samples, or references to existing samples packaged as CMSampleBuffer objects, to a single track of an AVAssetWriter's output file
#import <AVFoundation/AVAssetWriterInput.h>
// The system-provided class for processing (compressing) video
#import <AVFoundation/AVAssetExportSession.h>
The modules can be summarized in the following diagram:
The diagram makes each module's role clear; the rest of this article walks through how to use each one.
1. AVCaptureSession
AVCaptureSession is the central hub of AVFoundation's capture classes. Usage:
- (AVCaptureSession *)session {
    if (_session == nil) {
        _session = [[AVCaptureSession alloc] init];
        // High-quality capture preset
        [_session setSessionPreset:AVCaptureSessionPresetHigh];
        if ([_session canAddInput:self.videoInput]) [_session addInput:self.videoInput];             // add the video input
        if ([_session canAddInput:self.audioInput]) [_session addInput:self.audioInput];             // add the audio input
        if ([_session canAddOutput:self.videoDataOutput]) [_session addOutput:self.videoDataOutput]; // video data output (frames only)
        if ([_session canAddOutput:self.audioDataOutput]) [_session addOutput:self.audioDataOutput]; // audio data output
        AVCaptureConnection *captureVideoConnection = [self.videoDataOutput connectionWithMediaType:AVMediaTypeVideo];
        // Mirror the front camera: its frames come in flipped, so mirroring turns the picture back around
        if (self.devicePosition == AVCaptureDevicePositionFront && captureVideoConnection.supportsVideoMirroring) {
            captureVideoConnection.videoMirrored = YES;
        }
        captureVideoConnection.videoOrientation = AVCaptureVideoOrientationPortrait;
    }
    return _session;
}
AVCaptureSessionPreset
The value of this property is the preset the session is currently using. It can be set while the session is running. The possible values are:
- AVCaptureSessionPresetPhoto: suitable for high-resolution photo-quality output
- AVCaptureSessionPresetHigh: suitable for high-quality video and audio output
- AVCaptureSessionPresetMedium: suitable for medium-quality output
- AVCaptureSessionPresetLow: suitable for low-quality output
- AVCaptureSessionPreset320x240: suitable for 320x240 video output
- AVCaptureSessionPreset352x288: CIF quality
- AVCaptureSessionPreset640x480: VGA quality
- AVCaptureSessionPreset960x540: quarter-HD (qHD) quality
- AVCaptureSessionPreset1280x720: 720p
- AVCaptureSessionPreset1920x1080: 1080p
- AVCaptureSessionPreset3840x2160: UHD/4K
- AVCaptureSessionPresetiFrame960x540: 960x540 iFrame H.264 video at about 30 Mbits/sec with AAC audio
- AVCaptureSessionPresetiFrame1280x720: 1280x720 iFrame H.264 video at about 40 Mbits/sec with AAC audio
- AVCaptureSessionPresetInputPriority: when a client sets the active format on the device, the session's sessionPreset property automatically changes to AVCaptureSessionPresetInputPriority
An AVCaptureSession needs the appropriate video/audio input and output objects attached before it can capture audio and video samples.
Call startRunning on the session to start capturing and stopRunning to stop, as in the sketch below.
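A minimal sketch (the background queue is my own convention, not from the original post, since startRunning blocks until the session has started):
// startRunning blocks while the session starts, so call it off the main thread
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    [self.session startRunning];
});
// ... later, when recording is finished ...
[self.session stopRunning];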
2. AVCaptureDeviceInput
AVCaptureDeviceInput is a subclass of AVCaptureInput that provides an interface for connecting capture input sources to an AVCaptureSession. Usage:
① Video input:
- (AVCaptureDeviceInput *)videoInput {
    if (_videoInput == nil) {
        // Grab a video capture device; the back camera by default
        AVCaptureDevice *videoCaptureDevice = [self getCameraDeviceWithPosition:AVCaptureDevicePositionBack];
        // Create the video input
        _videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:nil];
        if (!_videoInput) {
            NSLog(@"Failed to get the camera");
            return nil;
        }
    }
    return _videoInput;
}
② Audio input:
- (AVCaptureDeviceInput *)audioInput {
    if (_audioInput == nil) {
        NSError *error = nil;
        // Grab an audio capture device
        AVCaptureDevice *audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
        _audioInput = [[AVCaptureDeviceInput alloc] initWithDevice:audioCaptureDevice error:&error];
        if (error) {
            NSLog(@"Failed to get the audio input device: %@", error.localizedDescription);
        }
    }
    return _audioInput;
}
Both inputs use AVCaptureDevice, which represents a physical device providing realtime input media data such as video and audio. A video AVCaptureDevice is looked up by an AVCaptureDevicePosition, which has the following values:
- AVCaptureDevicePositionUnspecified: unspecified (defaults to the back camera)
- AVCaptureDevicePositionBack: back camera
- AVCaptureDevicePositionFront: front camera
The code to obtain the video AVCaptureDevice is as follows:
// Get the camera at the specified position
- (AVCaptureDevice *)getCameraDeviceWithPosition:(AVCaptureDevicePosition)position {
    if (@available(iOS 10.2, *)) {
        AVCaptureDeviceDiscoverySession *discoverySession = [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInDualCamera, AVCaptureDeviceTypeBuiltInTelephotoCamera, AVCaptureDeviceTypeBuiltInWideAngleCamera] mediaType:AVMediaTypeVideo position:position];
        for (AVCaptureDevice *device in discoverySession.devices) {
            if ([device position] == position) {
                return device;
            }
        }
    } else {
        NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
        for (AVCaptureDevice *device in devices) {
            if ([device position] == position) {
                return device;
            }
        }
    }
    return nil;
}
Note: switching between the front and back cameras must happen between beginConfiguration and commitConfiguration: remove the old device input, then add the new one. The code is as follows:
// Switch between the front and back cameras
- (void)switchCamera:(AVCaptureDevicePosition)devicePosition {
    // Already facing the requested direction
    if (self.devicePosition == devicePosition) {
        return;
    }
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:[self getCameraDeviceWithPosition:devicePosition] error:nil];
    if (!videoInput) {
        return;
    }
    // Open a configuration block; the changes are applied together on commit
    [self.session beginConfiguration];
    // Remove the old input
    [self.session removeInput:self.videoInput];
    // Add the new input
    if ([self.session canAddInput:videoInput]) {
        [self.session addInput:videoInput];
        self.videoInput = videoInput;
        // Record the new position so the mirroring check below sees it
        self.devicePosition = devicePosition;
    }
    // The video input changed, so the video output's connection must be reconfigured
    AVCaptureConnection *captureConnection = [self.videoDataOutput connectionWithMediaType:AVMediaTypeVideo];
    if (self.devicePosition == AVCaptureDevicePositionFront && captureConnection.supportsVideoMirroring) {
        captureConnection.videoMirrored = YES;
    }
    captureConnection.videoOrientation = AVCaptureVideoOrientationPortrait;
    // Commit the configuration
    [self.session commitConfiguration];
}
The audio AVCaptureDevice is obtained through defaultDeviceWithMediaType:, which takes an AVMediaType; the common values are AVMediaTypeVideo for video and AVMediaTypeAudio for audio.
3. AVCaptureVideoDataOutput and AVCaptureAudioDataOutput
AVCaptureVideoDataOutput and AVCaptureAudioDataOutput are subclasses of AVCaptureOutput used to capture uncompressed or compressed video and audio samples. Usage:
// Video data output
- (AVCaptureVideoDataOutput *)videoDataOutput {
    if (_videoDataOutput == nil) {
        _videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
        [_videoDataOutput setSampleBufferDelegate:self queue:dispatch_get_global_queue(0, 0)];
    }
    return _videoDataOutput;
}

// Audio data output
- (AVCaptureAudioDataOutput *)audioDataOutput {
    if (_audioDataOutput == nil) {
        _audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
        [_audioDataOutput setSampleBufferDelegate:self queue:dispatch_get_global_queue(0, 0)];
    }
    return _audioDataOutput;
}
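The video output's pixel format can also be pinned down explicitly through videoSettings. This line is optional and my own addition, not from the original post; BGRA is a common choice when frames are fed into Core Image or OpenGL:
// Optionally request BGRA frames, e.g. for Core Image or OpenGL processing
_videoDataOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };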
Set the AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate delegates; they provide two callbacks for the captured audio/video data:
#pragma mark - AVCaptureVideoDataOutputSampleBufferDelegate / AVCaptureAudioDataOutputSampleBufferDelegate: realtime audio/video output

/// Delivers each captured audio/video frame in real time
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (!sampleBuffer) {
        return;
    }
    // Forward the buffers through our own delegate so callers can process them
    if (output == self.videoDataOutput) {
        if ([self.delegate respondsToSelector:@selector(captureSession:didOutputVideoSampleBuffer:fromConnection:)]) {
            [self.delegate captureSession:self didOutputVideoSampleBuffer:sampleBuffer fromConnection:connection];
        }
    }
    if (output == self.audioDataOutput) {
        if ([self.delegate respondsToSelector:@selector(captureSession:didOutputAudioSampleBuffer:fromConnection:)]) {
            [self.delegate captureSession:self didOutputAudioSampleBuffer:sampleBuffer fromConnection:connection];
        }
    }
}

/// Reports dropped audio/video frames in real time
- (void)captureOutput:(AVCaptureOutput *)output didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection API_AVAILABLE(ios(6.0)) {
}
Once the audio and video samples are captured, they can be compressed and saved locally; both are covered below.
4. AVCaptureVideoPreviewLayer
AVCaptureVideoPreviewLayer is a CoreAnimation layer subclass for previewing the visual output of an AVCaptureSession; put simply, it shows what the camera sees in real time.
- (AVCaptureVideoPreviewLayer *)previewLayer {
    if (_previewLayer == nil) {
        _previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
        _previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;
    }
    return _previewLayer;
}
The layer is initialized with the AVCaptureSession and displays the camera's view once startRunning is called on the session.
The videoGravity property determines how the video is displayed within the layer's bounds. It has three modes:
- AVLayerVideoGravityResizeAspect: preserves the aspect ratio within the layer, so the preview may not fill the screen.
- AVLayerVideoGravityResizeAspectFill: fills the layer while preserving the aspect ratio.
- AVLayerVideoGravityResize: stretches the video to fill the layer's bounds.
The default is AVLayerVideoGravityResizeAspect. To put the preview on screen, add the layer to a view's layer tree, as in the sketch below.
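A minimal sketch (previewView is an assumed container view, not part of the original code):
// Attach the preview layer to a container view; previewView is illustrative
self.previewLayer.frame = self.previewView.bounds;
[self.previewView.layer addSublayer:self.previewLayer];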
5. CMMotionManager
CMMotionManager is the Core Motion entry point to the device's motion sensors; here it is used to monitor the device's orientation. Initialization:
- (CMMotionManager *)motionManager {
    if (!_motionManager) {
        _motionManager = [[CMMotionManager alloc] init];
    }
    return _motionManager;
}
When startRunning begins capture, start monitoring the device orientation; when stopRunning stops capture, stop monitoring. In the project this is done by calling startUpdateDeviceDirection and stopUpdateDeviceDirection; that code was omitted from the original post, but a sketch follows.
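A minimal sketch of what those two methods might look like, assuming the motionManager property above; the update interval, the axis thresholds, and the deviceOrientation property are my own illustrative choices, not the author's exact code:
// Infer orientation from raw accelerometer data; this works even when the UI orientation is locked
- (void)startUpdateDeviceDirection {
    if (![self.motionManager isAccelerometerAvailable]) return;
    self.motionManager.accelerometerUpdateInterval = 0.5;
    [self.motionManager startAccelerometerUpdatesToQueue:[NSOperationQueue mainQueue] withHandler:^(CMAccelerometerData *data, NSError *error) {
        if (error || !data) return;
        double x = data.acceleration.x;
        double y = data.acceleration.y;
        if (fabs(y) >= fabs(x)) {
            // Gravity mostly along the long axis: portrait or upside down
            self.deviceOrientation = (y < 0) ? UIDeviceOrientationPortrait : UIDeviceOrientationPortraitUpsideDown;
        } else {
            // Gravity mostly along the short axis: landscape
            self.deviceOrientation = (x < 0) ? UIDeviceOrientationLandscapeLeft : UIDeviceOrientationLandscapeRight;
        }
    }];
}

- (void)stopUpdateDeviceDirection {
    if ([self.motionManager isAccelerometerActive]) {
        [self.motionManager stopAccelerometerUpdates];
    }
}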
6. AVAssetWriter
AVAssetWriter provides services for writing media data to a new file. It is initialized with assetWriterWithURL:fileType:error: and relies on AVAssetWriterInput to append new media samples, or references to existing samples packaged as CMSampleBuffer objects, to the output file. Its key methods are listed below, and a sketch after the list shows how they fit together:
- startWriting: prepares the receiver to accept input and write output to its file.
- startSessionAtSourceTime:: starts a sample-writing session on the receiver.
- finishWritingWithCompletionHandler:: marks all unfinished inputs as finished and completes writing the output file.
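A minimal sketch of driving these calls from the sample-buffer delegate; the assetWriter property, the canWrite flag, and stopRecordingWithCompletion: are my own illustrative names, not from the original post:
// Append captured buffers; start the writer session at the first video frame's timestamp
- (void)appendSampleBuffer:(CMSampleBufferRef)sampleBuffer isVideo:(BOOL)isVideo {
    if (isVideo && !self.canWrite) {
        [self.assetWriter startWriting];
        [self.assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
        self.canWrite = YES;
    }
    if (isVideo && self.assetWriterVideoInput.readyForMoreMediaData) {
        [self.assetWriterVideoInput appendSampleBuffer:sampleBuffer];
    } else if (!isVideo && self.canWrite && self.assetWriterAudioInput.readyForMoreMediaData) {
        [self.assetWriterAudioInput appendSampleBuffer:sampleBuffer];
    }
}

// Finish writing and close the file
- (void)stopRecordingWithCompletion:(void (^)(void))completion {
    [self.assetWriterVideoInput markAsFinished];
    [self.assetWriterAudioInput markAsFinished];
    [self.assetWriter finishWritingWithCompletionHandler:^{
        if (completion) completion();
    }];
}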
7. AVAssetWriterInput
AVAssetWriterInput appends new media samples, or references to existing samples packaged as CMSampleBuffer objects, to a single track of an AVAssetWriter's output file.
Initializing the video writer input:
- (AVAssetWriterInput *)assetWriterVideoInput {
    if (!_assetWriterVideoInput) {
        // Pixel count of the written video
        NSInteger numPixels = self.videoSize.width * [UIScreen mainScreen].scale * self.videoSize.height * [UIScreen mainScreen].scale;
        // Bits per pixel
        CGFloat bitsPerPixel = 24.0;
        NSInteger bitsPerSecond = numPixels * bitsPerPixel;
        // Bit-rate and frame-rate settings
        NSDictionary *compressionProperties = @{ AVVideoAverageBitRateKey : @(bitsPerSecond),
                                                 AVVideoExpectedSourceFrameRateKey : @(30),
                                                 AVVideoMaxKeyFrameIntervalKey : @(30),
                                                 AVVideoProfileLevelKey : AVVideoProfileLevelH264BaselineAutoLevel };
        CGFloat width = self.videoSize.width * [UIScreen mainScreen].scale;
        CGFloat height = self.videoSize.height * [UIScreen mainScreen].scale;
        // Video settings
        self.videoCompressionSettings = @{ AVVideoCodecKey : AVVideoCodecH264,
                                           AVVideoWidthKey : @(width),
                                           AVVideoHeightKey : @(height),
                                           AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill,
                                           AVVideoCompressionPropertiesKey : compressionProperties };
        _assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:self.videoCompressionSettings];
        // expectsMediaDataInRealTime must be YES because the data comes live from the capture session
        _assetWriterVideoInput.expectsMediaDataInRealTime = YES;
    }
    return _assetWriterVideoInput;
}
Initializing the audio writer input:
- (AVAssetWriterInput *)assetWriterAudioInput {
    if (_assetWriterAudioInput == nil) {
        /* Notes:
         <1> AVNumberOfChannelsKey: number of channels; 1 is mono, 2 is stereo
         <2> AVSampleRateKey: sample rate, e.g. 8000/44100/96000; affects audio capture quality
         <3> AVLinearPCMBitDepthKey: bit depth (audio bit rate); 8, 16, 24, or 32
         <4> AVEncoderAudioQualityKey: quality (per the original note, requires iPhone 8 or later)
         <5> AVEncoderBitRateKey: encoder bit rate, commonly 128000
        */
        /* Also note: AAC does not support a 96000 sample rate, and the asset writer also errored when I set 8000 */
        // Audio settings
        _audioCompressionSettings = @{ AVEncoderBitRatePerChannelKey : @(28000),
                                       AVFormatIDKey : @(kAudioFormatMPEG4AAC),
                                       AVNumberOfChannelsKey : @(1),
                                       AVSampleRateKey : @(22050) };
        _assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:self.audioCompressionSettings];
        _assetWriterAudioInput.expectsMediaDataInRealTime = YES;
    }
    return _assetWriterAudioInput;
}
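With both inputs defined, the writer itself can be assembled. A minimal sketch (videoURL is an illustrative file URL; the original project wraps this step in its own classes):
// Create the writer and attach both inputs; videoURL is illustrative
NSError *error = nil;
self.assetWriter = [AVAssetWriter assetWriterWithURL:videoURL fileType:AVFileTypeMPEG4 error:&error];
if ([self.assetWriter canAddInput:self.assetWriterVideoInput]) {
    [self.assetWriter addInput:self.assetWriterVideoInput];
}
if ([self.assetWriter canAddInput:self.assetWriterAudioInput]) {
    [self.assetWriter addInput:self.assetWriterAudioInput];
}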
8. AVAssetExportSession
AVAssetExportSession is the system-provided class for compressing and exporting video.
Its main properties are:
- outputURL: the output URL
- shouldOptimizeForNetworkUse: whether to optimize the file for network playback
- outputFileType: the exported file format
A sketch of creating and configuring the session follows.
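The original post doesn't show the session being created; a minimal sketch (inputVideoURL, the preset, and outputVideoFilePath are illustrative):
// Create the export session; inputVideoURL and outputVideoFilePath are illustrative
AVURLAsset *asset = [AVURLAsset assetWithURL:inputVideoURL];
AVAssetExportSession *videoExportSession = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetMediumQuality];
videoExportSession.outputURL = [NSURL fileURLWithPath:outputVideoFilePath];
videoExportSession.shouldOptimizeForNetworkUse = YES;
videoExportSession.outputFileType = AVFileTypeMPEG4;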
After setting these properties, call the method below to compress the video; once it finishes, the compressed video can be saved from its file path. (The exportAsynchronouslyWithCompletionHandler:progress: variant shown here appears to be the project's own wrapper; the system API takes a parameterless completion block.)
// Export asynchronously
[videoExportSession exportAsynchronouslyWithCompletionHandler:^(NSError * _Nonnull error) {
    if (error) {
        NSLog(@"%@", error.localizedDescription);
    } else {
        // Grab the first frame as the cover image
        UIImage *cover = [UIImage dx_videoFirstFrameWithURL:url];
        // Save to the photo library; falls into the error path if permission is missing
        [TMCaptureTool saveVideoToPhotoLibrary:url completion:^(PHAsset * _Nonnull asset, NSString * _Nonnull errorMessage) {
            if (errorMessage) { // saving failed
                NSLog(@"%@", errorMessage);
                [weakSelf finishWithImage:cover asset:nil videoPath:outputVideoFilePath];
            } else {
                [weakSelf finishWithImage:cover asset:asset videoPath:outputVideoFilePath];
            }
        }];
    }
} progress:^(float progress) {
    // NSLog(@"export progress %f", progress);
}];
The method that saves the video is:
[[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
    PHAssetChangeRequest *request = [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:url];
    localIdentifier = request.placeholderForCreatedAsset.localIdentifier;
    request.creationDate = [NSDate date];
} completionHandler:^(BOOL success, NSError * _Nullable error) {
    TM_DISPATCH_ON_MAIN_THREAD(^{
        if (success) {
            PHAsset *asset = [[PHAsset fetchAssetsWithLocalIdentifiers:@[localIdentifier] options:nil] firstObject];
            if (completion) completion(asset, nil);
        } else if (error) {
            NSLog(@"Failed to save the video: %@", error.localizedDescription);
            if (completion) completion(nil, [NSString stringWithFormat:@"Failed to save the video: %@", error.localizedDescription]);
        }
    });
}];
That completes recording and saving the video. Only part of the code is shown here; much of the rest is wrapped in my own classes. The complete project, TMCaptureVideo, is available on GitHub for reference, and feedback is welcome!