1. The iOS Live Streaming Pipeline
The live streaming pipeline can be broken down roughly into these steps: data capture, image processing (real-time filters), video encoding, packaging, upload, the cloud (transcoding, recording, distribution), and the playback player.
Data capture: obtain real-time audio and video data from the camera and microphone;
Image processing: apply real-time filters to the captured input stream to produce the beautified video frames;
Video encoding: encoding is done in software or hardware. H.264 is the standard choice today; the newer H.265 is said to compress better, but its algorithm is considerably more complex and it is not yet widely used. Software encoding runs on the CPU and works on every system version; hardware encoding uses the device's dedicated encoding hardware (often loosely described as GPU encoding), and because Apple only opened the hardware-encoding API in iOS 8, it requires iOS 8 or later (see the VideoToolbox sketch after this list);
Packaging: for live push streaming, FLV is the commonly used format (its tag layout is sketched below);
Upload: RTMP is the protocol commonly used to push the stream;
Cloud: transcodes, distributes, and records the stream;
Player: responsible for pulling the stream, decoding, and playback.
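To make the hardware-encoding path concrete, below is a minimal sketch of creating an H.264 encoder with VideoToolbox (the API Apple opened in iOS 8). The frame dimensions, bitrate, and the didCompressH264 callback are assumptions made up for this example, not part of the original post:

#import <VideoToolbox/VideoToolbox.h>

// Called by VideoToolbox with each encoded H.264 sample buffer.
static void didCompressH264(void *refCon, void *sourceFrameRefCon,
                            OSStatus status, VTEncodeInfoFlags infoFlags,
                            CMSampleBufferRef sampleBuffer) {
    if (status != noErr || sampleBuffer == NULL) return;
    // Extract the NAL units from sampleBuffer and hand them to the packager.
}

VTCompressionSessionRef compressionSession = NULL;
OSStatus status = VTCompressionSessionCreate(kCFAllocatorDefault,
                                             720, 1280,              // width, height (assumed)
                                             kCMVideoCodecType_H264,
                                             NULL, NULL, NULL,
                                             didCompressH264, NULL,
                                             &compressionSession);
if (status == noErr) {
    // Real-time encoding with an average target bitrate (800 kbps assumed).
    VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
    VTSessionSetProperty(compressionSession, kVTCompressionPropertyKey_AverageBitRate,
                         (__bridge CFTypeRef)@(800 * 1024));
    VTCompressionSessionPrepareToEncodeFrames(compressionSession);
}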
A diagram from Tencent Cloud illustrates the flow described above.
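To make the packaging step concrete as well, here is a sketch of the 11-byte header that precedes every FLV tag; the field layout follows the FLV specification, and the struct name is our own:

#include <stdint.h>

// The 11-byte header that precedes every FLV tag body.
typedef struct {
    uint8_t type;          // tag type: 8 = audio, 9 = video, 18 = script data
    uint8_t dataSize[3];   // payload size in bytes, big-endian 24-bit
    uint8_t timestamp[3];  // lower 24 bits of the timestamp, in milliseconds
    uint8_t timestampExt;  // upper 8 bits of the timestamp
    uint8_t streamID[3];   // always 0
} FLVTagHeader;

When pushing over RTMP, the tag body becomes the RTMP message payload, while the type, size, and timestamp fields map onto the RTMP message header.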
2媒至、獲取系統(tǒng)的授權(quán)
The first step in live streaming is capturing data, both video and audio. Because of iOS's permission model, the app must first obtain authorization to access the camera and microphone (and, since iOS 10, must declare NSCameraUsageDescription and NSMicrophoneUsageDescription in its Info.plist):
Requesting camera access:
__weak typeof(self) _self = self;
AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
switch (status) {
    case AVAuthorizationStatusNotDetermined: {
        // The permission dialog has not been shown yet; request access.
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
            if (granted) {
                // The completion handler may run on an arbitrary queue;
                // hop back to the main queue before starting the session.
                dispatch_async(dispatch_get_main_queue(), ^{
                    [_self.session setRunning:YES];
                });
            }
        }];
        break;
    }
    case AVAuthorizationStatusAuthorized: {
        // Already authorized; start capturing.
        [_self.session setRunning:YES];
        break;
    }
    case AVAuthorizationStatusDenied:
    case AVAuthorizationStatusRestricted:
        // The user explicitly denied access, or the camera is restricted.
        break;
    default:
        break;
}
Requesting microphone access:
AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeAudio];
switch (status) {
    case AVAuthorizationStatusNotDetermined: {
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio completionHandler:^(BOOL granted) {
            // Start (or enable) audio capture here once access is granted.
        }];
        break;
    }
    case AVAuthorizationStatusAuthorized: {
        // Already authorized; audio capture can proceed.
        break;
    }
    case AVAuthorizationStatusDenied:
    case AVAuthorizationStatusRestricted:
        // Access denied or restricted; audio capture is unavailable.
        break;
    default:
        break;
}
3. Configuring Capture Parameters
Audio: configure the bitrate and sample rate;
Video: configure the resolution, frame rate, and bitrate (a minimal sketch of such a configuration object follows below).
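As a sketch of what such a configuration object might look like, here is a hypothetical interface; the class and property names are assumptions, chosen to match the _configuration fields that the recording code below reads:

#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

// Hypothetical capture configuration; the comments show common defaults.
@interface LiveStreamConfiguration : NSObject
@property (nonatomic, assign) NSUInteger audioSampleRate;    // e.g. 44100 Hz
@property (nonatomic, assign) NSUInteger audioBitrate;       // e.g. 96 * 1000 bps
@property (nonatomic, assign) NSUInteger numberOfChannels;   // 1 (mono) or 2 (stereo)
@property (nonatomic, copy)   NSString *avSessionPreset;     // e.g. AVCaptureSessionPreset1280x720
@property (nonatomic, assign) NSUInteger videoFrameRate;     // e.g. 24 fps
@property (nonatomic, assign) NSUInteger videoBitrate;       // e.g. 800 * 1000 bps
@property (nonatomic, assign) UIInterfaceOrientation orientation;
@end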
4. Recording Audio and Video
Recording audio:
// Serial queue for audio-capture work.
self.taskQueue = dispatch_queue_create("audioCapture.Queue", NULL);

// Configure the shared audio session; notify other apps when we deactivate.
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setActive:YES withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:nil];

// Observe route changes (e.g. headphones plugged in) and interruptions (e.g. phone calls).
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(handleRouteChange:)
                                             name:AVAudioSessionRouteChangeNotification
                                           object:session];
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(handleInterruption:)
                                             name:AVAudioSessionInterruptionNotification
                                           object:session];

NSError *error = nil;
[session setCategory:AVAudioSessionCategoryPlayAndRecord
         withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker | AVAudioSessionCategoryOptionMixWithOthers
               error:nil];
[session setMode:AVAudioSessionModeVideoRecording error:&error];
if (![session setActive:YES error:&error]) {
    [self handleAudioComponentCreationFailure];
}

// Locate the RemoteIO audio unit, which provides microphone input.
AudioComponentDescription acd;
acd.componentType = kAudioUnitType_Output;
acd.componentSubType = kAudioUnitSubType_RemoteIO;
acd.componentManufacturer = kAudioUnitManufacturer_Apple;
acd.componentFlags = 0;
acd.componentFlagsMask = 0;
self.component = AudioComponentFindNext(NULL, &acd);

OSStatus status = noErr;
status = AudioComponentInstanceNew(self.component, &_componentInstance);
if (noErr != status) {
    [self handleAudioComponentCreationFailure];
}

// Enable input on bus 1 (the microphone side) of the RemoteIO unit.
UInt32 flagOne = 1;
AudioUnitSetProperty(self.componentInstance, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input, 1, &flagOne, sizeof(flagOne));

// Describe the PCM format we want to receive: packed 16-bit signed integers.
AudioStreamBasicDescription desc = {0};
desc.mSampleRate = _configuration.audioSampleRate;
desc.mFormatID = kAudioFormatLinearPCM;
desc.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
desc.mChannelsPerFrame = (UInt32)_configuration.numberOfChannels;
desc.mFramesPerPacket = 1;
desc.mBitsPerChannel = 16;
desc.mBytesPerFrame = desc.mBitsPerChannel / 8 * desc.mChannelsPerFrame;
desc.mBytesPerPacket = desc.mBytesPerFrame * desc.mFramesPerPacket;

// Register the render callback that will receive captured samples.
AURenderCallbackStruct cb;
cb.inputProcRefCon = (__bridge void *)(self);
cb.inputProc = handleInputBuffer;
status = AudioUnitSetProperty(self.componentInstance, kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output, 1, &desc, sizeof(desc));
status = AudioUnitSetProperty(self.componentInstance, kAudioOutputUnitProperty_SetInputCallback,
                              kAudioUnitScope_Global, 1, &cb, sizeof(cb));
status = AudioUnitInitialize(self.componentInstance);
if (noErr != status) {
    [self handleAudioComponentCreationFailure];
}

[session setPreferredSampleRate:_configuration.audioSampleRate error:nil];
[session setActive:YES error:nil];
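The setup above registers handleInputBuffer as the input callback but does not show it. Below is a minimal sketch of such a callback, which pulls the captured PCM out of the RemoteIO unit with AudioUnitRender; the AudioCapture owner class and its processAudioBufferList: method are assumptions for the example:

static OSStatus handleInputBuffer(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    @autoreleasepool {
        AudioCapture *source = (__bridge AudioCapture *)inRefCon;  // hypothetical owner class
        if (!source) return -1;

        // Leave mData NULL so Core Audio manages the buffer allocation.
        AudioBuffer buffer;
        buffer.mData = NULL;
        buffer.mDataByteSize = 0;
        buffer.mNumberChannels = 1;

        AudioBufferList buffers;
        buffers.mNumberBuffers = 1;
        buffers.mBuffers[0] = buffer;

        // Pull the captured PCM samples out of the RemoteIO unit.
        OSStatus status = AudioUnitRender(source.componentInstance,
                                          ioActionFlags, inTimeStamp,
                                          inBusNumber, inNumberFrames, &buffers);
        if (status == noErr) {
            [source processAudioBufferList:&buffers];  // hypothetical: hand the PCM to the encoder
        }
        return status;
    }
}

Note that the setup code only initializes the audio unit; nothing is captured until AudioOutputUnitStart(self.componentInstance) is called (and AudioOutputUnitStop stops it).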
Recording video: use GPUImageVideoCamera from the GPUImage framework:
_videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:_configuration.avSessionPreset cameraPosition:AVCaptureDevicePositionFront];
_videoCamera.outputImageOrientation = _configuration.orientation;
_videoCamera.horizontallyMirrorFrontFacingCamera = NO;
_videoCamera.horizontallyMirrorRearFacingCamera = NO;
_videoCamera.frameRate = (int32_t)_configuration.videoFrameRate;
_gpuImageView = [[GPUImageView alloc] initWithFrame:[UIScreen mainScreen].bounds];
[_gpuImageView setFillMode:kGPUImageFillModePreserveAspectRatioAndFill];
[_gpuImageView setAutoresizingMask:UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight];
[_gpuImageView setInputRotation:kGPUImageFlipHorizonal atIndex:0];
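The camera and preview view still need to be wired together before anything renders. A minimal sketch of that wiring, optionally inserting one of GPUImage's built-in smoothing filters as the real-time filter from step 1 (the choice of GPUImageBilateralFilter is an assumption; any GPUImage filter works in its place):

// Chain: camera -> filter -> preview view.
GPUImageBilateralFilter *filter = [[GPUImageBilateralFilter alloc] init];
[_videoCamera addTarget:filter];
[filter addTarget:_gpuImageView];

// Start capturing; filtered frames now flow to the preview
// (and, in a full pipeline, on to the H.264 encoder).
[_videoCamera startCameraCapture];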
Reposted from: https://chenhu1001.github.io