Extracting PCM Data from a CMSampleBufferRef
Pulse-code modulation (PCM) converts a continuously varying analog signal into a digital one, which can then be stored on physical media.
Sound is itself an analog signal in a particular frequency range (20–20,000 Hz), so it too can be digitized with this technique and saved.
PCM is the rawest format in which recorded sound is stored. A WAV file, for example, is simply a PCM stream with a header prepended. WAV is sometimes called a lossless format precisely because it stores the original PCM data (fidelity also depends on the sample rate and bit depth). The formats we know best, such as MP3 and AAC, are lossy: to save space they compress the audio as far as possible while losing as little audible quality as they can.
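To make the "header plus PCM" relationship concrete, here is a minimal sketch that wraps raw 16-bit PCM bytes in the canonical 44-byte RIFF/WAVE header. The function name pcmDataToWav and its parameters are illustrative, not a system API, and the sketch assumes a little-endian host (true on iOS), since WAV header fields are little-endian.

// Minimal sketch: prepend a 44-byte WAV header to raw 16-bit PCM.
// pcmDataToWav, sampleRate, and channels are hypothetical names.
static NSData *pcmDataToWav(NSData *pcm, uint32_t sampleRate, uint16_t channels) {
    uint32_t dataLen       = (uint32_t)pcm.length;
    uint16_t bitsPerSample = 16;
    uint32_t byteRate      = sampleRate * channels * bitsPerSample / 8;
    uint16_t blockAlign    = channels * bitsPerSample / 8;
    uint32_t riffSize      = 36 + dataLen;      // total file size minus 8
    uint32_t fmtSize       = 16;                // size of the fmt chunk body
    uint16_t audioFormat   = 1;                 // 1 = uncompressed PCM
    NSMutableData *wav = [NSMutableData dataWithCapacity:44 + dataLen];
    [wav appendBytes:"RIFF" length:4];
    [wav appendBytes:&riffSize length:4];
    [wav appendBytes:"WAVE" length:4];
    [wav appendBytes:"fmt " length:4];
    [wav appendBytes:&fmtSize length:4];
    [wav appendBytes:&audioFormat length:2];
    [wav appendBytes:&channels length:2];
    [wav appendBytes:&sampleRate length:4];
    [wav appendBytes:&byteRate length:4];
    [wav appendBytes:&blockAlign length:2];
    [wav appendBytes:&bitsPerSample length:2];
    [wav appendBytes:"data" length:4];
    [wav appendBytes:&dataLen length:4];
    [wav appendData:pcm];                       // the PCM payload itself
    return wav;
}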
Every audio encoder accepts PCM input, and recorded audio is PCM by default, so our next step is to extract the recorded PCM data.
#import <CoreMedia/CoreMedia.h> // CMSampleBuffer/CMBlockBuffer APIs and the Core Audio types

- (NSData *)convertAudioSampleBufferToPcmData:(CMSampleBufferRef)audioSample {
    // Read the stream description (sample rate, channel count, format flags, ...)
    AudioStreamBasicDescription inAudioStreamBasicDescription = *CMAudioFormatDescriptionGetStreamBasicDescription((CMAudioFormatDescriptionRef)CMSampleBufferGetFormatDescription(audioSample));
    // Get the CMBlockBufferRef that holds the raw audio bytes
    CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(audioSample);
    // Size of the PCM data in bytes
    size_t length = CMBlockBufferGetDataLength(blockBufferRef);
    // Allocate space for it
    char buffer[length];
    // Copy the bytes straight into our own buffer
    CMBlockBufferCopyDataBytes(blockBufferRef, 0, length, buffer);
    // If the samples are big-endian, swap each 16-bit sample into little-endian order
    if ((inAudioStreamBasicDescription.mFormatFlags & kAudioFormatFlagIsBigEndian) == kAudioFormatFlagIsBigEndian) {
        for (size_t i = 0; i + 1 < length; i += 2) {
            char tmp = buffer[i];
            buffer[i] = buffer[i + 1];
            buffer[i + 1] = tmp;
        }
    }
    // Channel count and sample rate, should the caller need them
    uint32_t ch = inAudioStreamBasicDescription.mChannelsPerFrame;
    uint32_t fs = inAudioStreamBasicDescription.mSampleRate;
    // buffer lives on the stack, so the bytes must be copied into the NSData
    return [NSData dataWithBytes:buffer length:length];
}
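For context, a typical call site for this conversion is the audio capture delegate. This is only a sketch: it assumes an AVCaptureAudioDataOutput is already attached to a running AVCaptureSession, and self.pcmChunks is a hypothetical NSMutableData used here just to accumulate the stream.

// Sketch of a typical call site: the AVCaptureAudioDataOutput delegate callback.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    NSData *pcm = [self convertAudioSampleBufferToPcmData:sampleBuffer];
    [self.pcmChunks appendData:pcm]; // hypothetical sink for the PCM stream
}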
Filling a CMSampleBufferRef with PCM Data
The bit depth tells us how much data one sample point occupies; with 16-bit samples, each sample point takes 2 bytes, so the amount of data needed for 200 ms is:
// number of sample points in 200 ms
NSUInteger samples = self->mSampleRate * 200 * self->mChannelsPerFrame / 1000;
// bytes of PCM data in 200 ms (2 bytes per sample point)
int len = (int)samples * 2;
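For example, at a 44,100 Hz sample rate with two channels this gives samples = 44100 × 200 × 2 / 1000 = 17,640 sample points, so len = 35,280 bytes.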
Sample code for filling a CMSampleBufferRef with PCM:
- (CMSampleBufferRef)createAudioSampleBuffer:(char *)buf withLen:(int)len withASBD:(AudioStreamBasicDescription)asbd {
    // Wrap the PCM bytes in a single-buffer AudioBufferList
    AudioBufferList audioData;
    audioData.mNumberBuffers = 1;
    char *tmp = malloc(len);
    memcpy(tmp, buf, len);
    audioData.mBuffers[0].mData = tmp;
    audioData.mBuffers[0].mNumberChannels = asbd.mChannelsPerFrame;
    audioData.mBuffers[0].mDataByteSize = len;
    CMSampleBufferRef buff = NULL;
    CMFormatDescriptionRef format = NULL;
    // Build a format description from the ASBD
    OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &asbd, 0, NULL, 0, NULL, NULL, &format);
    if (status) { // failed
        free(tmp);
        return NULL;
    }
    // For LPCM each packet is one frame, so every sample lasts 1/mSampleRate seconds,
    // and the buffer holds len / mBytesPerFrame sample frames in total
    CMItemCount numSamples = len / asbd.mBytesPerFrame;
    CMSampleTimingInfo timing = {CMTimeMake(asbd.mFramesPerPacket, (int32_t)asbd.mSampleRate), kCMTimeZero, kCMTimeInvalid};
    status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numSamples, 1, &timing, 0, NULL, &buff);
    if (status) { // failed
        CFRelease(format);
        free(tmp);
        return NULL;
    }
    // This call copies the bytes out of the AudioBufferList into the sample buffer,
    // so tmp can be freed afterwards
    status = CMSampleBufferSetDataBufferFromAudioBufferList(buff, kCFAllocatorDefault, kCFAllocatorDefault, 0, &audioData);
    free(tmp);
    CFRelease(format);
    if (status) { // failed
        CFRelease(buff);
        return NULL;
    }
    return buff;
}
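A sketch of how this might be called, assuming 44.1 kHz, 16-bit, stereo interleaved PCM; pcmBytes is a hypothetical pointer to 200 ms of raw PCM (35,280 bytes, per the calculation above):

// Describe 44.1 kHz, 16-bit, stereo interleaved linear PCM
AudioStreamBasicDescription asbd = {0};
asbd.mSampleRate       = 44100;
asbd.mFormatID         = kAudioFormatLinearPCM;
asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
asbd.mChannelsPerFrame = 2;
asbd.mBitsPerChannel   = 16;
asbd.mBytesPerFrame    = asbd.mChannelsPerFrame * asbd.mBitsPerChannel / 8;
asbd.mFramesPerPacket  = 1; // uncompressed PCM: one frame per packet
asbd.mBytesPerPacket   = asbd.mBytesPerFrame;
CMSampleBufferRef sampleBuffer = [self createAudioSampleBuffer:pcmBytes withLen:35280 withASBD:asbd];
// The buffer is returned with a +1 retain; CFRelease it when you are done
// (for example, after handing it to an AVAssetWriterInput).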