If all you need is simple recording, Apple provides a higher-level, more convenient API: AVAudioRecorder. It is object-oriented and hides the implementation details. But if you want to get hold of the audio data in real time and process it yourself, you need Audio Unit Services and Audio Processing Graph Services. Below I will show how to use them to build a minimal recording demo.
AudioSession
First, we need to understand the AudioSession class. Here is Apple's description:
iOS handles audio behavior at the app, inter-app, and device levels through audio sessions
iOS controls an app's audio behavior through the audio session, across apps and down to the hardware. As I understand it, this is where you configure the most basic audio behavior, for example:
1. When the headphones are unplugged, should playback stop?
2. Should this app's audio mix with audio from other apps, or pause them instead?
3. Is the app allowed to capture microphone input?
In this article we need to record from the microphone and play back at the same time. Calling the following code makes the app prompt the user for permission to access the microphone:
NSError *error = nil;
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
Audio Processing Graph
First, a table (from Apple's documentation) of the audio unit types:
Purpose | Audio units
---|---
Effect | iPod Equalizer
Mixing | 3D Mixer
 | Multichannel Mixer
I/O | Remote I/O
 | Voice-Processing I/O
 | Generic Output
Format conversion | Format Converter
Seven units, four purposes: effects (equalization), mixing, input/output, and format conversion.
This demo uses only the Remote I/O unit, for simple recording and playback.
An audio unit cannot do its work on its own; it has to be used together with an AUGraph. The AUGraph is the manager: each unit is added to the graph as a node. In the figure below, an AUGraph manages a Mixer unit and a Remote I/O unit:
Declare a node of the Remote I/O type and add it to an AUGraph:
AUNode remoteIONode;
AudioComponentDescription componentDesc; // description of the node
componentDesc.componentType = kAudioUnitType_Output;
componentDesc.componentSubType = kAudioUnitSubType_RemoteIO;
componentDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
componentDesc.componentFlags = 0;
componentDesc.componentFlagsMask = 0;
CheckError(NewAUGraph(&auGraph),"couldn't NewAUGraph"); // create the AUGraph
CheckError(AUGraphAddNode(auGraph,&componentDesc,&remoteIONode),"couldn't add remote io node");
CheckError(AUGraphOpen(auGraph),"couldn't AUGraphOpen"); // open the AUGraph; this instantiates the units
CheckError(AUGraphNodeInfo(auGraph,remoteIONode,NULL,&remoteIOUnit),"couldn't get remote io unit from node");
Remote I/O Unit
The Remote I/O unit is one of these audio units; it is the unit tied to the hardware, with an input side and an output side corresponding to devices such as the microphone, the speaker, and headphones. Here we want to play back while recording, so we connect the input side to the output side, as shown below:
Element 0 is the output element and Element 1 is the input element; each element in turn has an input scope and an output scope. Concretely, we connect the output scope of Element 0 to the speaker, and the input scope of Element 1 to the microphone. The code:
// Bus numbers on the Remote I/O unit: element 1 is input (mic), element 0 is output (speaker).
// Output is enabled by default; input must be enabled explicitly.
static const AudioUnitElement kInputBus = 1;
static const AudioUnitElement kOutputBus = 0;

UInt32 oneFlag = 1;
CheckError(AudioUnitSetProperty(remoteIOUnit,
                                kAudioOutputUnitProperty_EnableIO,
                                kAudioUnitScope_Output,
                                kOutputBus,
                                &oneFlag,
                                sizeof(oneFlag)),"couldn't kAudioOutputUnitProperty_EnableIO with kAudioUnitScope_Output");
CheckError(AudioUnitSetProperty(remoteIOUnit,
                                kAudioOutputUnitProperty_EnableIO,
                                kAudioUnitScope_Input,
                                kInputBus,
                                &oneFlag,
                                sizeof(oneFlag)),"couldn't kAudioOutputUnitProperty_EnableIO with kAudioUnitScope_Input");
Then set the audio format for input and output:
AudioStreamBasicDescription mAudioFormat;
mAudioFormat.mSampleRate = 44100.0; // sample rate
mAudioFormat.mFormatID = kAudioFormatLinearPCM; // linear PCM samples
mAudioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
mAudioFormat.mFramesPerPacket = 1; // frames per packet
mAudioFormat.mChannelsPerFrame = 1; // 1 = mono, 2 = stereo
mAudioFormat.mBitsPerChannel = 16; // bits per sample
mAudioFormat.mBytesPerFrame = mAudioFormat.mBitsPerChannel*mAudioFormat.mChannelsPerFrame/8; // bytes per frame
mAudioFormat.mBytesPerPacket = mAudioFormat.mBytesPerFrame*mAudioFormat.mFramesPerPacket; // bytes per packet = bytes per frame x frames per packet
mAudioFormat.mReserved = 0;
UInt32 size = sizeof(mAudioFormat);
// Output scope of element 1: the format the mic data comes out in
CheckError(AudioUnitSetProperty(remoteIOUnit,
                                kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output,
                                1,
                                &mAudioFormat,
                                size),"couldn't set kAudioUnitProperty_StreamFormat with kAudioUnitScope_Output");
// Input scope of element 0: the format the speaker side expects
CheckError(AudioUnitSetProperty(remoteIOUnit,
                                kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input,
                                0,
                                &mAudioFormat,
                                size),"couldn't set kAudioUnitProperty_StreamFormat with kAudioUnitScope_Input");
We are almost done; only one step remains: setting the render callback. Each time audio from the microphone is converted to digital samples, this callback fires; you process the samples however you like, and they are then sent on to the output for playback. The callback is a static C function:
static OSStatus CallBack(
void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
RecordTool *THIS = (__bridge RecordTool *)inRefCon;
// Pull the freshly captured mic samples from the input element (bus 1) into ioData
OSStatus renderErr = AudioUnitRender(THIS->remoteIOUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);
//--------------------------------------------//
//        Process the audio data here         //
//--------------------------------------------//
// Convert to MP3 -- covered in detail in a follow-up post
// [THIS->cover convertPcmToMp3:ioData->mBuffers[0] toPath:THIS->outPath];
return renderErr;
}
ioData->mBuffers[n] // one buffer per channel for non-interleaved audio: mono uses only mBuffers[0], non-interleaved stereo also uses mBuffers[1]
ioData->mBuffers[0].mData // the PCM samples
ioData->mBuffers[0].mDataByteSize // length of the PCM data in bytes
With the callback defined, hook it up to the AUGraph:
AURenderCallbackStruct inputProc;
inputProc.inputProc = CallBack;
inputProc.inputProcRefCon = (__bridge void *)(self);
CheckError(AUGraphSetNodeInputCallback(auGraph, remoteIONode, 0, &inputProc),"Error setting io output callback");
CheckError(AUGraphInitialize(auGraph),"couldn't AUGraphInitialize" );
CheckError(AUGraphUpdate(auGraph, NULL),"couldn't AUGraphUpdate" );
// Finally, start the graph -- recording begins here
CheckError(AUGraphStart(auGraph),"couldn't AUGraphStart");
CAShow(auGraph);
Finally
For lack of time I haven't gone into converting PCM to MP3 (using LAME). I have uploaded the demo to GitHub; feel free to download it and have a look, and I'll fill in the MP3 transcoding part when I get the chance.
If you find the demo helpful, please give it a star. Many thanks!
Update
For transcoding with LAME, see the follow-up post: iOS - Using LAME to transcode PCM to MP3.
About me
I currently work as an iOS developer and build apps independently in my spare time. On the App Store now: Mini記賬 (Mini Bookkeeping).
WeChat public account: 沙拉可樂, where I share practical tips from independent development and the stories behind it.