Audio Fundamentals
The PCM format
PCM is the raw, uncompressed data stream obtained directly from a microphone recording.
Data size (bytes) = sample rate * bit depth * channels * duration (seconds) / 8
The sample rate is typically 44.1 kHz, the bit depth 8 or 16 bits, and the channel count mono or stereo.
PCM is a bare encoding format: a stream of sample values with no header information and no notion of frames. Unless you recorded the audio yourself, there is no way to recover the sample rate or other parameters from a chunk of PCM data alone.
The AAC format
As a first approximation: an AAC file may have no file header at all and can consist entirely of a sequence of frames, each made up of a frame header plus a data section. The frame header carries the sample rate, channel count, frame length, and so on, much like the MP3 format.
AAC encoding
Initializing the encoder converter
-(BOOL)createAudioConvert {
    if (m_converter != nil) {
        return TRUE;
    }
    AudioStreamBasicDescription inputFormat = {0};
    inputFormat.mSampleRate = _configuration.audioSampleRate;
    inputFormat.mFormatID = kAudioFormatLinearPCM;
    inputFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
    inputFormat.mChannelsPerFrame = (UInt32)_configuration.numberOfChannels;
    inputFormat.mFramesPerPacket = 1;
    inputFormat.mBitsPerChannel = 16;
    inputFormat.mBytesPerFrame = inputFormat.mBitsPerChannel / 8 * inputFormat.mChannelsPerFrame;
    inputFormat.mBytesPerPacket = inputFormat.mBytesPerFrame * inputFormat.mFramesPerPacket;

    AudioStreamBasicDescription outputFormat; // output audio format from here on
    memset(&outputFormat, 0, sizeof(outputFormat));
    outputFormat.mSampleRate = inputFormat.mSampleRate; // keep the sample rate unchanged
    outputFormat.mFormatID = kAudioFormatMPEG4AAC; // AAC encoding (kAudioFormatMPEG4AAC or kAudioFormatMPEG4AAC_HE_V2)
    outputFormat.mChannelsPerFrame = (UInt32)_configuration.numberOfChannels;
    outputFormat.mFramesPerPacket = 1024; // one AAC packet holds 1024 PCM frames (samples), not 1024 bytes

    const OSType subtype = kAudioFormatMPEG4AAC;
    AudioClassDescription requestedCodecs[2] = {
        {
            kAudioEncoderComponentType,
            subtype,
            kAppleSoftwareAudioCodecManufacturer
        },
        {
            kAudioEncoderComponentType,
            subtype,
            kAppleHardwareAudioCodecManufacturer
        }
    };
    OSStatus result = AudioConverterNewSpecific(&inputFormat, &outputFormat, 2, requestedCodecs, &m_converter);
    if (result != noErr) return NO;
    return YES;
}
Performing the conversion
static char *aacBuf = NULL; // must be static (or an ivar) so the NULL check below works across calls
if (!aacBuf) {
    aacBuf = malloc(inBufferList.mBuffers[0].mDataByteSize);
}
// Set up the output buffer list
AudioBufferList outBufferList;
outBufferList.mNumberBuffers = 1;
outBufferList.mBuffers[0].mNumberChannels = inBufferList.mBuffers[0].mNumberChannels;
outBufferList.mBuffers[0].mDataByteSize = inBufferList.mBuffers[0].mDataByteSize; // output buffer size
outBufferList.mBuffers[0].mData = aacBuf; // AAC output buffer
UInt32 outputDataPacketSize = 1;
if (AudioConverterFillComplexBuffer(m_converter, inputDataProc, &inBufferList, &outputDataPacketSize, &outBufferList, NULL) != noErr) {
    return;
}
AudioFrame *audioFrame = [AudioFrame new];
audioFrame.timestamp = timeStamp;
audioFrame.data = [NSData dataWithBytes:aacBuf length:outBufferList.mBuffers[0].mDataByteSize];
char exeData[2];
exeData[0] = _configuration.asc[0];
exeData[1] = _configuration.asc[1];
audioFrame.audioInfo = [NSData dataWithBytes:exeData length:2];
On iOS, opening the microphone and capturing from it is mostly done with the AVCaptureSession component, which can capture not only audio but video as well.
Here is a brief walkthrough of the code for opening the microphone and capturing audio.
First, we need a variable of type AVCaptureSession. It is the bridge between the microphone device and the data output; through it we can conveniently obtain the microphone's real-time raw data.
AVCaptureSession *m_capture;
We also define a set of methods for opening and closing the microphone. To get the data delivered, you additionally need to implement the AVCaptureAudioDataOutputSampleBufferDelegate protocol:
-(void)open;
-(void)close;
-(BOOL)isOpen;
Below we implement each of these methods to complete the capture setup:
-(void)open {
    NSError *error;
    m_capture = [[AVCaptureSession alloc] init];
    AVCaptureDevice *audioDev = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    if (audioDev == nil)
    {
        CKPrint("Couldn't create audio capture device");
        return;
    }
    // create mic device
    AVCaptureDeviceInput *audioIn = [AVCaptureDeviceInput deviceInputWithDevice:audioDev error:&error];
    if (error != nil)
    {
        CKPrint("Couldn't create audio input");
        return;
    }
    // add mic device to the capture session
    if ([m_capture canAddInput:audioIn] == NO)
    {
        CKPrint("Couldn't add audio input");
        return;
    }
    [m_capture addInput:audioIn];
    // export audio data
    AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    [audioOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
    if ([m_capture canAddOutput:audioOutput] == NO)
    {
        CKPrint("Couldn't add audio output");
        return;
    }
    [m_capture addOutput:audioOutput];
    [audioOutput connectionWithMediaType:AVMediaTypeAudio];
    [m_capture startRunning];
}
-(void)close {
    if (m_capture != nil && [m_capture isRunning])
    {
        [m_capture stopRunning];
    }
}
-(BOOL)isOpen {
    if (m_capture == nil)
    {
        return NO;
    }
    return [m_capture isRunning];
}
With these three methods in place, all the preparation for microphone capture is done; now we just wait for the data to arrive on its own. For that to happen, we still have to implement one delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    char szBuf[4096];
    int nSize = sizeof(szBuf);
#if SUPPORT_AAC_ENCODER
    if ([self encoderAAC:sampleBuffer aacData:szBuf aacLen:&nSize] == YES)
    {
        [g_pViewController sendAudioData:szBuf len:nSize channel:0];
    }
#else //#if SUPPORT_AAC_ENCODER
    AudioStreamBasicDescription outputFormat = *(CMAudioFormatDescriptionGetStreamBasicDescription(CMSampleBufferGetFormatDescription(sampleBuffer)));
    nSize = CMSampleBufferGetTotalSampleSize(sampleBuffer);
    CMBlockBufferRef databuf = CMSampleBufferGetDataBuffer(sampleBuffer);
    if (CMBlockBufferCopyDataBytes(databuf, 0, nSize, szBuf) == kCMBlockBufferNoErr)
    {
        [g_pViewController sendAudioData:szBuf len:nSize channel:outputFormat.mChannelsPerFrame];
    }
#endif
}
At this point the work is essentially done; the data we capture is raw PCM.
PCM is bulky, though, and ill-suited to network transmission, so data that has to go over the wire should be encoded first. iOS supports a number of audio encodings; here we take AAC as the example and implement a PCM-to-AAC encoding function.
Examples of PCM-to-AAC encoding on iOS are easy to find online, but most are incomplete, and many of the rest are English-only. I have tidied them up into a single function for convenience.
Before encoding, we first need to create a converter object:
AudioConverterRef m_converter;
#if SUPPORT_AAC_ENCODER
-(BOOL)createAudioConvert:(CMSampleBufferRef)sampleBuffer { // initialize a converter from the input sample's format
    if (m_converter != nil)
    {
        return TRUE;
    }
    AudioStreamBasicDescription inputFormat = *(CMAudioFormatDescriptionGetStreamBasicDescription(CMSampleBufferGetFormatDescription(sampleBuffer))); // input audio format
    AudioStreamBasicDescription outputFormat; // output audio format from here on
    memset(&outputFormat, 0, sizeof(outputFormat));
    outputFormat.mSampleRate = inputFormat.mSampleRate; // keep the sample rate unchanged
    outputFormat.mFormatID = kAudioFormatMPEG4AAC; // AAC encoding
    outputFormat.mChannelsPerFrame = 2;
    outputFormat.mFramesPerPacket = 1024; // one AAC packet holds 1024 PCM frames (samples), not 1024 bytes
    AudioClassDescription *desc = [self getAudioClassDescriptionWithType:kAudioFormatMPEG4AAC fromManufacturer:kAppleSoftwareAudioCodecManufacturer];
    if (AudioConverterNewSpecific(&inputFormat, &outputFormat, 1, desc, &m_converter) != noErr)
    {
        CKPrint(@"AudioConverterNewSpecific failed");
        return NO;
    }
    return YES;
}
-(BOOL)encoderAAC:(CMSampleBufferRef)sampleBuffer aacData:(char*)aacData aacLen:(int*)aacLen { // encode PCM to AAC
    if ([self createAudioConvert:sampleBuffer] != YES)
    {
        return NO;
    }
    CMBlockBufferRef blockBuffer = nil;
    AudioBufferList inBufferList;
    if (CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &inBufferList, sizeof(inBufferList), NULL, NULL, 0, &blockBuffer) != noErr)
    {
        CKPrint(@"CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer failed");
        return NO;
    }
    // set up the output buffer list
    AudioBufferList outBufferList;
    outBufferList.mNumberBuffers = 1;
    outBufferList.mBuffers[0].mNumberChannels = 2;
    outBufferList.mBuffers[0].mDataByteSize = *aacLen; // output buffer size
    outBufferList.mBuffers[0].mData = aacData; // AAC output buffer
    UInt32 outputDataPacketSize = 1;
    if (AudioConverterFillComplexBuffer(m_converter, inputDataProc, &inBufferList, &outputDataPacketSize, &outBufferList, NULL) != noErr)
    {
        CKPrint(@"AudioConverterFillComplexBuffer failed");
        return NO;
    }
    *aacLen = outBufferList.mBuffers[0].mDataByteSize; // size of the encoded AAC data
    CFRelease(blockBuffer);
    return YES;
}
-(AudioClassDescription*)getAudioClassDescriptionWithType:(UInt32)type fromManufacturer:(UInt32)manufacturer { // look up a matching encoder
    static AudioClassDescription audioDesc;
    UInt32 encoderSpecifier = type, size = 0;
    OSStatus status;
    memset(&audioDesc, 0, sizeof(audioDesc));
    status = AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders, sizeof(encoderSpecifier), &encoderSpecifier, &size);
    if (status)
    {
        return nil;
    }
    uint32_t count = size / sizeof(AudioClassDescription);
    AudioClassDescription descs[count];
    status = AudioFormatGetProperty(kAudioFormatProperty_Encoders, sizeof(encoderSpecifier), &encoderSpecifier, &size, descs);
    for (uint32_t i = 0; i < count; i++)
    {
        if ((type == descs[i].mSubType) && (manufacturer == descs[i].mManufacturer))
        {
            memcpy(&audioDesc, &descs[i], sizeof(audioDesc));
            break;
        }
    }
    return &audioDesc;
}
OSStatus inputDataProc(AudioConverterRef inConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData, AudioStreamPacketDescription **outDataPacketDescription, void *inUserData) {
    // During AudioConverterFillComplexBuffer, the converter calls this function
    // to pull input data, i.e. the raw PCM samples.
    AudioBufferList bufferList = *(AudioBufferList*)inUserData;
    ioData->mBuffers[0].mNumberChannels = 1;
    ioData->mBuffers[0].mData = bufferList.mBuffers[0].mData;
    ioData->mBuffers[0].mDataByteSize = bufferList.mBuffers[0].mDataByteSize;
    return noErr;
}
#endif
And that's it: one call now does the whole job. Whenever you need AAC encoding, just call encoderAAC (the complete code is above):
char szBuf[4096];
int nSize = sizeof(szBuf);
if ([self encoderAAC:sampleBuffer aacData:szBuf aacLen:&nSize] == YES)
{
    // do something
}