Apple's official documentation describes AVFoundation fairly clearly; the architecture is roughly as shown below.
Capturing Video with AVFoundation
======
AVCaptureSession: manages the flow of audio and video data between inputs and outputs
AVCaptureDevice: a video or audio capture device (camera, microphone)
AVCaptureDeviceInput: an audio/video input; must be bound to an AVCaptureDevice
AVCaptureVideoPreviewLayer: the layer that displays what the AVCaptureSession captures
The overall flow: AVCaptureDevice --> AVCaptureDeviceInput --> AVCaptureSession --> AVCaptureVideoDataOutput / AVCaptureAudioDataOutput, with an AVCaptureVideoPreviewLayer attached to the session for the live preview.
The code is below.
1. Declare the required objects###
```objectivec
// serial queue for the capture callbacks
@property (nonatomic, strong) dispatch_queue_t captureQueue;
// capture session
@property (strong, nonatomic) AVCaptureSession *session;
// preview layer that displays the captured video
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
// back-camera input
@property (strong, nonatomic) AVCaptureDeviceInput *backCameraInput;
// front-camera input
@property (strong, nonatomic) AVCaptureDeviceInput *frontCameraInput;
// microphone input
@property (strong, nonatomic) AVCaptureDeviceInput *audioMicInput;
// audio recording connection
@property (strong, nonatomic) AVCaptureConnection *audioConnection;
// video recording connection
@property (strong, nonatomic) AVCaptureConnection *videoConnection;
// video data output
@property (strong, nonatomic) AVCaptureVideoDataOutput *videoOutput;
// audio data output
@property (strong, nonatomic) AVCaptureAudioDataOutput *audioOutput;
```
2. Instantiation###
```objectivec
- (void)initSession {
    _firstRun = YES;
    _paused   = YES;  // paused by default (not yet recording)
    _isFront  = YES;  // front camera by default
    // serial queue for the capture callbacks
    _captureQueue = dispatch_queue_create("com.capture", DISPATCH_QUEUE_SERIAL);
    NSError *error;
    // front-camera input (the default)
    AVCaptureDevice *frontDevice = [self cameraWithPosition:AVCaptureDevicePositionFront];
    _frontCameraInput = [[AVCaptureDeviceInput alloc] initWithDevice:frontDevice error:&error];
    if (error) {
        NSLog(@"failed to get the front camera");
    }
    // back-camera input
    AVCaptureDevice *backDevice = [self cameraWithPosition:AVCaptureDevicePositionBack];
    _backCameraInput = [[AVCaptureDeviceInput alloc] initWithDevice:backDevice error:&error];
    if (error) {
        NSLog(@"failed to get the back camera");
    }
    // microphone input
    NSError *micError;
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    _audioMicInput = [[AVCaptureDeviceInput alloc] initWithDevice:audioDevice error:&micError];
    if (micError) {
        NSLog(@"failed to get the microphone");
    }
    // video output and its pixel-format settings
    _videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    [_videoOutput setSampleBufferDelegate:self queue:self.captureQueue];
    _videoOutput.videoSettings = @{
        (id)kCVPixelBufferPixelFormatTypeKey :
            @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)
    };
    // audio output
    _audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    [_audioOutput setSampleBufferDelegate:self queue:self.captureQueue];
    _session = [[AVCaptureSession alloc] init];
    _session.sessionPreset = AVCaptureSessionPreset1280x720;
    // add the inputs
    if ([_session canAddInput:self.frontCameraInput]) {
        [_session addInput:self.frontCameraInput];
    }
    if ([_session canAddInput:self.audioMicInput]) {
        [_session addInput:self.audioMicInput];
    }
    // add the outputs
    if ([_session canAddOutput:self.audioOutput]) {
        [_session addOutput:self.audioOutput];
    }
    if ([_session canAddOutput:self.videoOutput]) {
        [_session addOutput:self.videoOutput];
    }
    // preview layer
    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
    _previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [_previewLayer setFrame:CGRectMake(0, 0, WIDTH, HEIGHT)];
    [self.showView.layer insertSublayer:_previewLayer atIndex:0];
    // audio and video connections; the video connection must be fetched
    // from the output before its orientation can be set
    _audioConnection = [self.audioOutput connectionWithMediaType:AVMediaTypeAudio];
    _videoConnection = [self.videoOutput connectionWithMediaType:AVMediaTypeVideo];
    _videoConnection.videoOrientation = AVCaptureVideoOrientationPortrait;
}
```
3. Start capturing###
```objectivec
[self.session startRunning];
```
After the steps above, the AVCaptureSession is already capturing. The class must adopt the
AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate protocols.
For every captured frame, the following delegate method is called:
```objectivec
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
```
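A minimal sketch of that callback, routing frames by comparing the connection against the two stored connections (the `appendVideoSampleBuffer:`/`appendAudioSampleBuffer:` helper names are assumptions, not from this article):

```objectivec
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    if (self.paused) {
        return; // recording is paused: drop the frame
    }
    if (connection == self.videoConnection) {
        // video frame: hand off to the video AVAssetWriterInput
        [self appendVideoSampleBuffer:sampleBuffer];
    } else if (connection == self.audioConnection) {
        // audio frame: hand off to the audio AVAssetWriterInput
        [self appendAudioSampleBuffer:sampleBuffer];
    }
}
```

Note that this method runs on `captureQueue`, not the main queue, so any UI updates from here must be dispatched back to the main queue.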
4. Writing the video###
```objectivec
// media writer
@property (nonatomic, strong) AVAssetWriter *writer;
// video writer input
@property (nonatomic, strong) AVAssetWriterInput *videoInput;
// audio writer input
@property (nonatomic, strong) AVAssetWriterInput *audioInput;
```
Writing the raw frames directly would produce an enormous file, so the video must be encoded; configure the writer input accordingly:
```objectivec
// encoder settings: H.264 plus explicit dimensions (AVAssetWriterInput
// requires width and height for video; 720x1280 matches the 1280x720
// session preset in portrait orientation)
NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                            AVVideoWidthKey  : @720,
                            AVVideoHeightKey : @1280 };
// create the video writer input
_videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                 outputSettings:settings];
```
Start writing:
```objectivec
[_writer startWriting];
```
When recording ends, finish the file and receive a completion callback:
```objectivec
[_writer finishWritingWithCompletionHandler:handler];
```
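The article does not show how the AVAssetWriter itself is created. A minimal sketch, assuming an MPEG-4 output file in the temporary directory (the path is my choice, not from the article):

```objectivec
NSError *error = nil;
NSURL *outputURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"capture.mp4"]];
_writer = [AVAssetWriter assetWriterWithURL:outputURL
                                   fileType:AVFileTypeMPEG4
                                      error:&error];
// live capture delivers frames in real time; without this flag the
// writer may stall waiting for media data
_videoInput.expectsMediaDataInRealTime = YES;
_audioInput.expectsMediaDataInRealTime = YES;
if ([_writer canAddInput:_videoInput]) {
    [_writer addInput:_videoInput];
}
if ([_writer canAddInput:_audioInput]) {
    [_writer addInput:_audioInput];
}
[_writer startWriting];
// the writing session starts at the timestamp of the first sample buffer,
// e.g. in the capture callback:
// [_writer startSessionAtSourceTime:
//     CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
```

In the capture callback, each buffer is then appended with `[_videoInput appendSampleBuffer:sampleBuffer]`, guarded by a check of `isReadyForMoreMediaData`.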
Summary###
Building the AVCaptureSession object: it needs AVCaptureDeviceInput inputs --> each AVCaptureDeviceInput wraps an AVCaptureDevice (camera, microphone, and so on).
Understanding the AVCaptureVideoDataOutputSampleBufferDelegate callback.
AVAssetWriter: flexible control over writing the captured video to a file.
Further thoughts###
1.在多次錄制完成之后,為什么偶爾會出現(xiàn)首幀黑屏現(xiàn)象抒巢?
--一般情況下贫贝,音頻采集要快于視頻采集,在寫入過程中蛉谜,第一幀為音頻幀稚晚,所有有黑屏現(xiàn)象。解決方法型诚,在- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
方法中客燕,判斷captureOutput 參數(shù),如果首幀為音頻幀狰贯,直接舍棄也搓。
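That fix can be sketched as follows (the `_startedVideo` flag is a hypothetical addition, not a name from the article):

```objectivec
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    BOOL isVideo = (captureOutput == self.videoOutput);
    if (!_startedVideo) {
        if (!isVideo) {
            return; // leading audio frame: discard to avoid a black first frame
        }
        _startedVideo = YES;
        // start the writing session at the first *video* frame's timestamp
        [_writer startSessionAtSourceTime:
            CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
    }
    // ... append the buffer to the matching AVAssetWriterInput ...
}
```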
2.在錄制完成后會調(diào)用[_session stopRunning]; 而業(yè)務需求經(jīng)常會有,視頻多次錄制需求涵紊,如何進行 開始錄制-->暫停錄制-->開啟錄制-->....傍妒?
---1.)在暫停錄制的時候,同時調(diào)用[_session stopRunning] 方法摸柄。開始的時候颤练,在重新實例化session對象,但在stopRunning時驱负,同時手機畫面也會被暫停嗦玖,體驗非常不好,而且重新實例化對性能也會有一些損耗跃脊,所以不推薦此方法
---2.)文件寫入時候做處理宇挫。[self.session startRunning];開始捕獲之后,用戶點擊暫停按鈕酪术,響應時間為器瘪,停止寫入視頻幀,點擊開始按鈕時,繼續(xù)進入視頻幀娱局,錄制完成后。
例:錄制過程為: A段--暫停2s--B段
此時會發(fā)現(xiàn)錄制完的視頻咧七,在播放完A段視頻時候衰齐,會有2s的卡界面情況,然后播放B段視頻继阻,解決方法耻涛。
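The article breaks off before giving that fix. A common approach (an assumption on my part, not stated in the article) is to keep a running CMTime offset equal to the total paused duration and shift the timestamps of every sample buffer written after a pause, so segments A and B become contiguous in the output file:

```objectivec
// Returns a copy of `sample` with all timing info shifted back by `offset`.
// The caller owns the returned buffer and must CFRelease it after appending.
- (CMSampleBufferRef)adjustTime:(CMSampleBufferRef)sample by:(CMTime)offset {
    CMItemCount count;
    // first call: query how many timing entries the buffer has
    CMSampleBufferGetSampleTimingInfoArray(sample, 0, NULL, &count);
    CMSampleTimingInfo *info = malloc(sizeof(CMSampleTimingInfo) * count);
    CMSampleBufferGetSampleTimingInfoArray(sample, count, info, &count);
    for (CMItemCount i = 0; i < count; i++) {
        // shift both presentation and decode timestamps back by the offset
        info[i].presentationTimeStamp =
            CMTimeSubtract(info[i].presentationTimeStamp, offset);
        info[i].decodeTimeStamp =
            CMTimeSubtract(info[i].decodeTimeStamp, offset);
    }
    CMSampleBufferRef adjusted = NULL;
    CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault, sample,
                                          count, info, &adjusted);
    free(info);
    return adjusted;
}
```

With this in place, pausing only needs to record when the pause began, and resuming adds the elapsed gap to the running offset before appending continues.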