It's been a while since I updated this series; work has kept me busy. Let's first recap the previous article, MediaCodec AAC encoding and decoding (file format conversion), which covered the use of MediaExtractor, decoding and encoding audio files with MediaCodec, and an introduction to ADTS and how to wrap raw AAC frames with it. Building on that, this article walks through capturing audio on an Android device, encoding it to AAC with MediaCodec, and finally writing the result to a file. The part to focus on here is how the audio is captured.
The project code is on GitHub; this article corresponds to version v1.7. Be sure to check out the matching version when following along.
Audio capture revolves around a single class, AudioRecord, so let's introduce it first.
AudioRecord
As usual, let's start with the official description. The AudioRecord class manages the audio resources for Java applications to record audio from the platform's audio input hardware. This is done by "pulling" (reading) the data from the AudioRecord object.
The application is responsible for polling the AudioRecord object in time, using one of read(byte[], int, int), read(short[], int, int) or read(ByteBuffer, int). Which overload to use simply depends on whichever audio data format is most convenient for you.
When an AudioRecord object is created, it initializes and attaches to an internal audio buffer that it fills with new audio data. The size of this buffer, specified at construction time, determines how long an AudioRecord can record before over-running data that has not been read yet. Data should be read from the hardware in chunks smaller than the total recording buffer size.
Using AudioRecord breaks down into the following steps.
Step 1: Create the AudioRecord
An AudioRecord is created directly with new; let's look at the constructor:
//---------------------------------------------------------
// Constructor, Finalize
//--------------------
/**
* Class constructor.
* Though some invalid parameters will result in an {@link IllegalArgumentException} exception,
* other errors do not. Thus you should call {@link #getState()} immediately after construction
* to confirm that the object is usable.
* @param audioSource the recording source.
* See {@link MediaRecorder.AudioSource} for the recording source definitions.
* @param sampleRateInHz the sample rate expressed in Hertz. 44100Hz is currently the only
* rate that is guaranteed to work on all devices, but other rates such as 22050,
* 16000, and 11025 may work on some devices.
* {@link AudioFormat#SAMPLE_RATE_UNSPECIFIED} means to use a route-dependent value
* which is usually the sample rate of the source.
* {@link #getSampleRate()} can be used to retrieve the actual sample rate chosen.
* @param channelConfig describes the configuration of the audio channels.
* See {@link AudioFormat#CHANNEL_IN_MONO} and
* {@link AudioFormat#CHANNEL_IN_STEREO}. {@link AudioFormat#CHANNEL_IN_MONO} is guaranteed
* to work on all devices.
* @param audioFormat the format in which the audio data is to be returned.
* See {@link AudioFormat#ENCODING_PCM_8BIT}, {@link AudioFormat#ENCODING_PCM_16BIT},
* and {@link AudioFormat#ENCODING_PCM_FLOAT}.
* @param bufferSizeInBytes the total size (in bytes) of the buffer where audio data is written
* to during the recording. New audio data can be read from this buffer in smaller chunks
* than this size. See {@link #getMinBufferSize(int, int, int)} to determine the minimum
* required buffer size for the successful creation of an AudioRecord instance. Using values
* smaller than getMinBufferSize() will result in an initialization failure.
* @throws java.lang.IllegalArgumentException
*/
public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat,
int bufferSizeInBytes)
throws IllegalArgumentException {
this((new AudioAttributes.Builder())
.setInternalCapturePreset(audioSource)
.build(),
(new AudioFormat.Builder())
.setChannelMask(getChannelMaskFromLegacyConfig(channelConfig,
true/*allow legacy configurations*/))
.setEncoding(audioFormat)
.setSampleRate(sampleRateInHz)
.build(),
bufferSizeInBytes,
AudioManager.AUDIO_SESSION_ID_GENERATE);
}
The Javadoc here is fairly clear: some invalid parameters throw an IllegalArgumentException, but other errors do not, so you should call getState() right after construction to confirm the object is usable. The parameters are:
- audioSource — the recording source.
- sampleRateInHz — the sample rate in Hz. 44100 Hz is currently the only rate guaranteed to work on all devices; other rates such as 22050, 16000 and 11025 may work on some devices.
- channelConfig — describes the configuration of the audio channels.
- audioFormat — the format in which the audio data is returned; see ENCODING_PCM_16BIT and ENCODING_PCM_8BIT.
- bufferSizeInBytes — the hardest to understand yet most important parameter. It sets the size of AudioRecord's internal audio buffer, which must not be smaller than one audio frame. The size of one frame is:

int size = sample rate x sample size in bytes x frame duration in seconds x channel count

The frame duration is usually somewhere between 2.5 ms and 120 ms and is decided by the vendor or the application. Intuitively, the shorter the frame, the lower the latency, but also the more fragmented the data. AudioRecord provides getMinBufferSize(int, int, int) to help you determine a workable bufferSizeInBytes; passing a value smaller than getMinBufferSize() will cause initialization to fail.
As mentioned above, after construction you should call getState() and check that it returns AudioRecord.STATE_INITIALIZED before using the object, as in the sketch below.
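A minimal creation sketch putting these pieces together (the 20 ms frame duration and the variable names are illustrative assumptions, not values from the project):

// Hypothetical setup: 44100 Hz, mono, 16-bit PCM.
// One 20 ms frame would be 44100 x 2 bytes x 0.02 s x 1 channel = 1764 bytes.
int sampleRate = 44100;
int channelConfig = AudioFormat.CHANNEL_IN_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;

// Let the framework report the minimum workable buffer size for this configuration.
int minBufferSize = AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat);

AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, channelConfig, audioFormat, minBufferSize * 2);

// getState() must be checked: not every failure throws from the constructor.
if (record.getState() != AudioRecord.STATE_INITIALIZED) {
    record.release();
    throw new IllegalStateException("AudioRecord initialization failed");
}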
Step 2: Start capturing
This step is simple: just call startRecording().
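One extra check worth making, assuming the record object from the sketch above: on some devices startRecording() can fail without throwing (for example when the RECORD_AUDIO permission has not been granted), so it pays to verify the state afterwards.

record.startRecording();
// Confirm the state transition actually happened before relying on read().
if (record.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
    throw new IllegalStateException("startRecording() did not take effect");
}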
Step 3: Read the data
The captured audio is read with the read() methods; here is the byte[] overload:
//---------------------------------------------------------
// Audio data supply
//--------------------
/**
* Reads audio data from the audio hardware for recording into a byte array.
* The format specified in the AudioRecord constructor should be
* {@link AudioFormat#ENCODING_PCM_8BIT} to correspond to the data in the array.
* @param audioData the array to which the recorded audio data is written.
* @param offsetInBytes index in audioData from which the data is written expressed in bytes.
* @param sizeInBytes the number of requested bytes.
* @return zero or the positive number of bytes that were read, or one of the following
* error codes. The number of bytes will not exceed sizeInBytes.
* <ul>
* <li>{@link #ERROR_INVALID_OPERATION} if the object isn't properly initialized</li>
* <li>{@link #ERROR_BAD_VALUE} if the parameters don't resolve to valid data and indexes</li>
* <li>{@link #ERROR_DEAD_OBJECT} if the object is not valid anymore and
* needs to be recreated. The dead object error code is not returned if some data was
* successfully transferred. In this case, the error is returned at the next read()</li>
* <li>{@link #ERROR} in case of other error</li>
* </ul>
*/
public int read(@NonNull byte[] audioData, int offsetInBytes, int sizeInBytes) {
return read(audioData, offsetInBytes, sizeInBytes, READ_BLOCKING);
}
This reads audio data recorded from the hardware into a byte array and returns the number of bytes read, which will never exceed sizeInBytes. A negative return value indicates an error: ERROR_INVALID_OPERATION if the object was not properly initialized, or ERROR_BAD_VALUE if the parameters do not resolve to valid data and indexes.
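A minimal blocking read loop with error handling could look like the following; the isRecording flag and the handlePcm() consumer are hypothetical names used for illustration:

byte[] buffer = new byte[minBufferSize];
while (isRecording) {
    int read = record.read(buffer, 0, buffer.length);
    if (read == AudioRecord.ERROR_INVALID_OPERATION || read == AudioRecord.ERROR_BAD_VALUE) {
        break;                    // unrecoverable parameter/state error
    }
    if (read > 0) {
        handlePcm(buffer, read);  // hand the PCM chunk to whatever consumes it
    }
}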
Final step: Release resources
Just call release(). Once release() has been called the object can no longer be used, and the reference should be set to null afterwards.
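A typical teardown, again assuming the record reference and isRecording flag from the sketches above:

isRecording = false;   // let the read loop exit first
record.stop();
record.release();
record = null;         // drop the reference, as the docs recommend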
Putting it into practice
With AudioRecord covered, we now have all the pieces needed to capture audio on an Android device, encode it, and write it out to a file. Time to see how that looks in code.
Open AudioRecordActivity; again, we'll go through it step by step.
Initialization
Initialization involves two things: creating the AudioRecord and creating the MediaCodec.
initAudioDevice();
try {
    mAudioEncoder = initAudioEncoder();
} catch (IOException e) {
    e.printStackTrace();
    throw new RuntimeException("audio encoder init fail");
}
Let's start with initAudioDevice(), where the AudioRecord is created:
private void initAudioDevice() {
    int[] sampleRates = {44100, 22050, 16000, 11025};
    for (int sampleRate : sampleRates) {
        // PCM encoding
        int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
        // stereo
        int channelConfig = AudioFormat.CHANNEL_CONFIGURATION_STEREO;
        int buffsize = 2 * AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat);
        mAudioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate, channelConfig,
                audioFormat, buffsize);
        // The captured chunks must also fit into the encoder's input buffer (MAX_BUFFER_SIZE).
        if (mAudioRecord.getState() == AudioRecord.STATE_INITIALIZED && buffsize <= MAX_BUFFER_SIZE) {
            mAudioSampleRate = sampleRate;
            mAudioChanelCount = channelConfig == AudioFormat.CHANNEL_CONFIGURATION_STEREO ? 2 : 1;
            mAudioBuffer = new byte[Math.min(4096, buffsize)];
            mSampleRateType = ADTSUtils.getSampleRateType(sampleRate);
            LogUtils.w("encoder params: " + mAudioSampleRate + " " + mSampleRateType + " " + mAudioChanelCount);
            break;  // keep the first sample rate that works
        }
    }
}
The logic here matches what was described above for AudioRecord; we simply loop over candidate sample rates until one initializes successfully.
The buffer size is set to:
int buffsize = 2 * AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat);
Note the key check:
if (mAudioRecord.getState() == AudioRecord.STATE_INITIALIZED && buffsize <= MAX_BUFFER_SIZE)
Why require buffsize <= MAX_BUFFER_SIZE? MAX_BUFFER_SIZE is the value we pass to the encoder when initializing it; if a captured chunk were larger than the encoder's input buffer, the app would crash.
The rest of the logic is routine.
Next, the encoder initialization:
/**
 * Initialize the AAC encoder.
 * @return the configured encoder
 * @throws IOException
 */
private MediaCodec initAudioEncoder() throws IOException {
    MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
    MediaFormat format = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC,
            mAudioSampleRate, mAudioChanelCount);
    format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, MAX_BUFFER_SIZE);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 1000 * 30);
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    return encoder;
}
Note this line:
format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, MAX_BUFFER_SIZE);
This is the guard mentioned earlier: it sets the encoder's maximum input buffer size, so that no captured chunk can ever be bigger than what the encoder accepts.
Start capturing and encoding
public void btnStart(View view) {
    initAudioDevice();
    try {
        mAudioEncoder = initAudioEncoder();
    } catch (IOException e) {
        e.printStackTrace();
        throw new RuntimeException("audio encoder init fail");
    }
    // recording thread
    mRecordThread = new Thread(fetchAudioRunnable());
    try {
        mAudioBos = new BufferedOutputStream(new FileOutputStream(new File(FileUtil.getMainDir(), "record.aac")), 200 * 1024);
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    }
    presentationTimeUs = new Date().getTime() * 1000;
    mAudioRecord.startRecording();
    queue = new ArrayBlockingQueue<byte[]>(10);
    isRecord = true;
    if (mAudioEncoder != null) {
        mAudioEncoder.start();
        encodeInputBuffers = mAudioEncoder.getInputBuffers();
        encodeOutputBuffers = mAudioEncoder.getOutputBuffers();
        mAudioEncodeBufferInfo = new MediaCodec.BufferInfo();
        mEncodeThread = new Thread(new EncodeRunnable());
        mEncodeThread.start();
    }
    mRecordThread.start();
}
The logic: once initialization is done, start a thread to record, start the encoder, and start another thread to run the encoding loop.
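EncodeRunnable itself is not shown in this excerpt; a rough sketch of what such a runnable could look like, assuming it simply drains the PCM queue and then cleans up, is:

private class EncodeRunnable implements Runnable {
    @Override
    public void run() {
        // Keep encoding while recording, then drain whatever is still queued.
        while (isRecord || !queue.isEmpty()) {
            encodePCM();
        }
        releaseResources();  // hypothetical cleanup (stop the codec, close the stream)
    }
}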
Now let's look at the capture thread:
/**
 * Capture audio data from the device.
 */
private void fetchPcmFromDevice() {
    LogUtils.w("recording thread started");
    while (isRecord && mAudioRecord != null && !Thread.interrupted()) {
        int size = mAudioRecord.read(mAudioBuffer, 0, mAudioBuffer.length);
        if (size < 0) {
            LogUtils.w("audio ignore, no data to read");
            break;
        }
        if (isRecord) {
            byte[] audio = new byte[size];
            System.arraycopy(mAudioBuffer, 0, audio, 0, size);
            LogUtils.v("captured " + audio.length + " bytes");
            putPCMData(audio);
        }
    }
}
It simply calls read() in a loop and pushes every captured chunk onto a PCM queue; from there the flow is the same as in the previous article.
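putPCMData() and getPCMData() are not shown in this excerpt; a plausible sketch, assuming they are thin wrappers around the ArrayBlockingQueue<byte[]> created in btnStart() (requires java.util.concurrent.TimeUnit), looks like this:

private void putPCMData(byte[] pcmChunk) {
    try {
        queue.put(pcmChunk);  // blocks when the queue is full
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}

private byte[] getPCMData() {
    try {
        // Returns null when nothing arrives in time, which encodePCM() treats as "skip this round".
        return queue.poll(100, TimeUnit.MILLISECONDS);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return null;
    }
}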
Next, the encoding logic:
/**
 * Encode PCM data into MediaFormat.MIMETYPE_AUDIO_AAC and write it to the output file.
 */
private void encodePCM() {
    int inputIndex;
    ByteBuffer inputBuffer;
    int outputIndex;
    ByteBuffer outputBuffer;
    byte[] chunkAudio;
    int outBitSize;
    int outPacketSize;
    byte[] chunkPCM;
    chunkPCM = getPCMData();  // fetch a PCM chunk queued by the capture thread (queue helpers sketched above)
    if (chunkPCM == null) {
        return;
    }
    inputIndex = mAudioEncoder.dequeueInputBuffer(-1);  // same pattern as the decoder in the previous article
    if (inputIndex >= 0) {
        inputBuffer = encodeInputBuffers[inputIndex];
        inputBuffer.clear();
        inputBuffer.limit(chunkPCM.length);
        inputBuffer.put(chunkPCM);  // fill the input buffer with PCM data
        long pts = new Date().getTime() * 1000 - presentationTimeUs;
        LogUtils.d("start encoding");
        mAudioEncoder.queueInputBuffer(inputIndex, 0, chunkPCM.length, pts, 0);  // hand it to the encoder
    }
    outputIndex = mAudioEncoder.dequeueOutputBuffer(mAudioEncodeBufferInfo, 10000);  // same as the decoder
    while (outputIndex >= 0) {
        outBitSize = mAudioEncodeBufferInfo.size;
        outPacketSize = outBitSize + 7;  // 7 bytes for the ADTS header
        outputBuffer = encodeOutputBuffers[outputIndex];  // grab the output buffer
        outputBuffer.position(mAudioEncodeBufferInfo.offset);
        outputBuffer.limit(mAudioEncodeBufferInfo.offset + outBitSize);
        chunkAudio = new byte[outPacketSize];
        ADTSUtils.addADTStoPacket(mSampleRateType, chunkAudio, outPacketSize);  // add the ADTS header (ADTSUtils sketched below)
        outputBuffer.get(chunkAudio, 7, outBitSize);  // copy the AAC data into the byte[] right after the 7-byte header
        outputBuffer.position(mAudioEncodeBufferInfo.offset);
        try {
            LogUtils.d("received encoded data, " + chunkAudio.length + " bytes");
            mAudioBos.write(chunkAudio, 0, chunkAudio.length);  // BufferedOutputStream writes the *.aac file to storage
        } catch (IOException e) {
            e.printStackTrace();
        }
        mAudioEncoder.releaseOutputBuffer(outputIndex, false);
        outputIndex = mAudioEncoder.dequeueOutputBuffer(mAudioEncodeBufferInfo, 10000);
    }
}
The code is almost identical to the previous article, so I won't walk through it again in detail; the encoded packets are written to the output file.
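ADTSUtils.addADTStoPacket(), referenced above, writes the 7-byte ADTS header introduced in the previous article. A commonly used implementation looks roughly like the following; the AAC LC profile and 2-channel configuration are assumptions here, and the project's own helper may differ:

// sampleRateType is the ADTS sampling-frequency index (e.g. 4 for 44100 Hz),
// packetLen is the AAC frame length including the 7-byte header.
public static void addADTStoPacket(int sampleRateType, byte[] packet, int packetLen) {
    int profile = 2;               // AAC LC
    int freqIdx = sampleRateType;  // sampling frequency index
    int chanCfg = 2;               // channel configuration: stereo

    packet[0] = (byte) 0xFF;       // syncword 0xFFF ...
    packet[1] = (byte) 0xF9;       // ... plus ID/layer bits, protection absent (no CRC)
    packet[2] = (byte) (((profile - 1) << 6) + (freqIdx << 2) + (chanCfg >> 2));
    packet[3] = (byte) (((chanCfg & 3) << 6) + (packetLen >> 11));
    packet[4] = (byte) ((packetLen & 0x7FF) >> 3);
    packet[5] = (byte) (((packetLen & 7) << 5) + 0x1F);
    packet[6] = (byte) 0xFC;
}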
That completes the whole flow. The resulting record.aac can be played back with a player such as VLC.