At the heart of media capture is AVCaptureSession, the capture session. It manages our input devices and can connect several at once (e.g. a camera and a microphone), applies preset configuration to the capture (format, quality), lets you reconfigure the input routing on the fly, and, most importantly, controls starting and stopping capture and switching between devices. One caveat: these operations are fairly time-consuming, so call them asynchronously whenever possible.
1. Create the session

@property (nonatomic, strong) AVCaptureSession *session;

_session = [[AVCaptureSession alloc] init];
_session.sessionPreset = AVCaptureSessionPresetHigh; // preset constant that selects the capture quality
2. Add inputs and outputs to the session

1> Video

<AVCaptureVideoDataOutputSampleBufferDelegate> // video output delegate

@property (nonatomic, strong) AVCaptureScreenInput *input; // macOS-only input source: the screen
@property (nonatomic, strong) AVCaptureVideoDataOutput *videoOutput;

_input = [[AVCaptureScreenInput alloc] initWithDisplayID:CGMainDisplayID()]; // capture the main display
_input.capturesMouseClicks = YES; // also record mouse clicks
[_session addInput:_input];

_videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[_session addOutput:_videoOutput]; // add the video output to the session
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[_videoOutput setSampleBufferDelegate:self queue:queue];
[_videoOutput setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)}]; // pixel format of the output frames. A video is essentially a sequence of still frames; here each pixel of a frame is 32-bit BGRA (YUV formats are also common).
2> Audio

<AVCaptureAudioDataOutputSampleBufferDelegate> // audio output delegate

@property (nonatomic, strong) AVCaptureConnection *audioConnection;

NSError *deviceError;
AVCaptureDevice *microphoneDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio]; // audio input device
AVCaptureDeviceInput *inputMicrophoneDevice = [AVCaptureDeviceInput deviceInputWithDevice:microphoneDevice error:&deviceError];
AVCaptureAudioDataOutput *outputAudioDevice = [[AVCaptureAudioDataOutput alloc] init]; // audio data output
NSDictionary *audioSettings = @{AVFormatIDKey : @(kAudioFormatMPEG4AAC), AVSampleRateKey : @48000, AVEncoderBitRateKey : @12800, AVNumberOfChannelsKey : @1}; // output parameters: AAC format, 48000 Hz sample rate, 12800 bps encoder bit rate, 1 channel (mono)
outputAudioDevice.audioSettings = audioSettings;
[outputAudioDevice setSampleBufferDelegate:self queue:dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)];
[_session addInput:inputMicrophoneDevice]; // add the audio input to _session
[_session addOutput:outputAudioDevice]; // add the audio output to _session

// batch configuration changes on the AVCaptureSession
[_session beginConfiguration];
self.audioConnection = [outputAudioDevice connectionWithMediaType:AVMediaTypeAudio]; // keep the audio connection so the delegate can tell audio buffers from video buffers
[_session commitConfiguration];
Note that macOS does not support internal (loopback) recording: audio can only come from the microphone. The audio of, say, QQ Music cannot be captured directly; you only get what the microphone can hear. The workaround is a virtual sound-card driver; with such plug-ins the internally played audio can be captured.
3. Receive the data in the delegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (connection == self.audioConnection) {
        // audio: the buffer already holds AAC (per audioSettings), so prepend an ADTS header
        CMBlockBufferRef dataBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
        size_t length, totalLength;
        char *dataPointer;
        CMBlockBufferGetDataPointer(dataBuffer, 0, &length, &totalLength, &dataPointer);
        NSData *rawAAC = [NSData dataWithBytes:dataPointer length:totalLength];
        NSData *adtsHeader = [self adtsDataForPacketLength:totalLength];
        NSMutableData *fullData = [NSMutableData dataWithData:adtsHeader];
        [fullData appendData:rawAAC]; // fullData is the final binary payload
        CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer); // timestamp, useful when muxing/streaming
    } else {
        // video frame: hand it off to an encoder
//        [_videoEncoder encodeVideoSampleBuffer:sampleBuffer];
    }
}
最后獲得每一幀數(shù)據(jù)可以直接傳輸毅待,但是數(shù)據(jù)量很大尚卫。因此需要對數(shù)據(jù)做壓縮。視頻常見的H264尸红,265等吱涉。之后主要了解包括H264的編解碼,ATSP或者RTMP的傳輸外里。