Reposted from [yanbixing123]: "Android MultiMedia Framework Analysis - The Interaction Between NuPlayerDecoder and MediaCodec"
1. Overview
The previous article, "Android Multimedia Framework - 15: MediaCodec Analysis", examined MediaCodec in detail along with everything below it. Wrapped around MediaCodec, however, is a layer called NuPlayerDecoder, and this article looks at how the two communicate.
Conceptually, since NuPlayerDecoder wraps MediaCodec, it plays the role of an "app" relative to MediaCodec: it drives MediaCodec through the same API an application would use to get work done. Let's walk through the flow in detail.
2. How the decode loop starts
In NuPlayer::onStart(), setRenderer is called to install the renderer so that the later decode-and-render steps can use that member directly. Decoding actually starts after MediaCodec has been created and its start() has run (decoding is driven from onRequestInputBuffers()): a kWhatCodecNotify message is sent, and handleAnInputBuffer() then fills in data for decoding.
- NuPlayer.cpp
mVideoDecoder->setRenderer(mRenderer);
This leads to NuPlayer::DecoderBase::setRenderer(), which posts the kWhatSetRenderer message; handling of that message lands in onSetRenderer():
- NuPlayerDecoderBase.cpp
void NuPlayer::DecoderBase::setRenderer(const sp<Renderer> &renderer) {
    sp<AMessage> msg = new AMessage(kWhatSetRenderer, this);
    msg->setObject("renderer", renderer);
    msg->post();
}
------------
case kWhatSetRenderer:
{
    sp<RefBase> obj;
    CHECK(msg->findObject("renderer", &obj));
    onSetRenderer(static_cast<Renderer *>(obj.get()));
    break;
}
------------
void NuPlayer::Decoder::onSetRenderer(const sp<Renderer> &renderer) {
    mRenderer = renderer;
}
onSetRenderer() does nothing more than store the renderer.
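The setRenderer/onSetRenderer pair illustrates the general AMessage pattern used throughout this code: public methods never touch state directly, they post a message that the looper thread later dispatches to the matching on...() method. Below is a minimal, framework-free sketch of that deferral pattern; a plain std::function queue stands in for ALooper/AMessage, and all names are illustrative, not AOSP API.

```cpp
#include <functional>
#include <memory>
#include <queue>
#include <string>

struct Renderer { std::string name; };

class Decoder {
public:
    // Producer side: queue the state change instead of applying it inline.
    void setRenderer(std::shared_ptr<Renderer> renderer) {
        post([this, renderer] { onSetRenderer(renderer); });
    }
    // Looper side: dispatch every pending message on the handler "thread".
    void drainMessages() {
        while (!mQueue.empty()) { mQueue.front()(); mQueue.pop(); }
    }
    std::shared_ptr<Renderer> renderer() const { return mRenderer; }

private:
    void post(std::function<void()> msg) { mQueue.push(std::move(msg)); }
    // Runs only on the looper side, so no lock is needed around mRenderer.
    void onSetRenderer(std::shared_ptr<Renderer> r) { mRenderer = std::move(r); }

    std::queue<std::function<void()>> mQueue;
    std::shared_ptr<Renderer> mRenderer;
};
```

The point of the detour is thread confinement: because every mutation funnels through the message queue, all decoder state is touched from one thread only.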
handleAnInputBuffer() kicks off the data filling; the main driving logic lives in onRequestInputBuffers():
void NuPlayer::DecoderBase::onRequestInputBuffers() {
    if (mRequestInputBuffersPending) {
        return;
    }

    // doRequestBuffers() return true if we should request more data
    if (doRequestBuffers()) {
        mRequestInputBuffersPending = true;

        sp<AMessage> msg = new AMessage(kWhatRequestInputBuffers, this);
        msg->post(2 * 1000ll);
    }
}
---------------------
void NuPlayer::DecoderBase::onMessageReceived(const sp<AMessage> &msg) {
    case kWhatRequestInputBuffers:
    {
        mRequestInputBuffersPending = false;
        onRequestInputBuffers();
        break;
    }
onRequestInputBuffers() posts the kWhatRequestInputBuffers message, and the async handler for that message calls onRequestInputBuffers() again, so the cycle keeps repeating.
The only thing that can stop this cycle is the if (doRequestBuffers()) check inside onRequestInputBuffers(): the loop ends only when that call returns false.
Now look at doRequestBuffers() itself. It is essentially a while loop that returns true whenever more data is needed:
/*
 * returns true if we should request more data
 */
bool NuPlayer::Decoder::doRequestBuffers() {
    // mRenderer is only NULL if we have a legacy widevine source that
    // is not yet ready. In this case we must not fetch input.
    if (isDiscontinuityPending() || mRenderer == NULL) {
        return false;
    }
    status_t err = OK;
    while (err == OK && !mDequeuedInputBuffers.empty()) {
        size_t bufferIx = *mDequeuedInputBuffers.begin();
        sp<AMessage> msg = new AMessage();
        msg->setSize("buffer-ix", bufferIx);
        err = fetchInputData(msg);  // fetch one input buffer
        if (err != OK && err != ERROR_END_OF_STREAM) {
            // if EOS, need to queue EOS buffer
            break;
        }
        mDequeuedInputBuffers.erase(mDequeuedInputBuffers.begin());

        if (!mPendingInputMessages.empty()
                || !onInputBufferFetched(msg)) {
            mPendingInputMessages.push_back(msg);  // fetched data is parked in this pending-message queue
        }
    }

    return err == -EWOULDBLOCK
            && mSource->feedMoreTSData() == OK;
}
Inside NuPlayer::Decoder::fetchInputData(), mSource->dequeueAccessUnit() talks to GenericSource and fills the buffer with compressed data.
NuPlayer::Decoder::onInputBufferFetched(), in turn, hands the data to MediaCodec via mCodec->queueInputBuffer().
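The contract of doRequestBuffers() can be captured in a small, framework-free simulation: drain the list of dequeued input-buffer indices, tolerate EOS, stop on any other error, and reschedule only when the source reported "would block" (more data expected later). FakeSource and the Status codes below are stand-ins, not AOSP types.

```cpp
#include <cstddef>
#include <deque>
#include <list>

enum Status { OK, WOULD_BLOCK, END_OF_STREAM };

struct FakeSource {
    std::deque<Status> replies;  // scripted dequeueAccessUnit() results
    Status dequeueAccessUnit() {
        if (replies.empty()) return WOULD_BLOCK;
        Status s = replies.front();
        replies.pop_front();
        return s;
    }
};

// Returns true when the caller should schedule another request pass,
// mirroring "err == -EWOULDBLOCK && feedMoreTSData() == OK".
bool doRequestBuffers(FakeSource &source, std::list<size_t> &dequeuedInputs) {
    Status err = OK;
    while (err == OK && !dequeuedInputs.empty()) {
        err = source.dequeueAccessUnit();              // fetchInputData()
        if (err != OK && err != END_OF_STREAM) break;  // keep index on WOULD_BLOCK
        dequeuedInputs.pop_front();                    // buffer consumed (data or EOS)
    }
    return err == WOULD_BLOCK;
}
```

On WOULD_BLOCK the index is deliberately left in the list, so the rescheduled pass retries the same buffer; on EOS the loop consumes the index (the real code queues an EOS buffer) and returns false, ending the cycle.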
3. The loop logic
Picture the layering: NuPlayerDecoder is the layer wrapped around MediaCodec, and "input" and "output" here are both relative to MediaCodec. The two interact at those two ports, and MediaCodec maintains a BufferQueue internally. When a buffer becomes available at the input port, MediaCodec::onInputBufferAvailable() runs and sends a CB_INPUT_AVAILABLE message to NuPlayerDecoder, telling it that MediaCodec has a free buffer at its input port. NuPlayerDecoder responds by calling NuPlayer::Decoder::handleAnInputBuffer(). What does handling mean here? MediaCodec's job is decoding, so the Decoder's job is to pull data from the demuxer (MediaExtractor) and hand it to MediaCodec.
Once MediaCodec has processed the data (processing meaning, for example, decoding an H.264 stream into YUV frames), the data flows internally from the input port to the output port. That triggers MediaCodec::onOutputBufferAvailable(), which tells NuPlayerDecoder, via a CB_OUTPUT_AVAILABLE message, that there is a buffer at MediaCodec's output port. On receiving the message, NuPlayerDecoder calls NuPlayer::Decoder::handleAnOutputBuffer(). And what does that do?
The stage after the Decoder is the Renderer, so the next step is to hand the data to the Renderer.
This is one continuous loop; let's walk through it step by step.
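The whole two-port loop described above can be modeled in a few lines: the codec announces free input slots and filled output slots through callbacks, and the decoder wrapper reacts by feeding demuxed data in and forwarding decoded data out. Everything below is an illustrative mock; the real path goes through AMessage/ALooper and a hardware codec.

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

class MockCodec {
public:
    std::function<void(size_t)> onInputAvailable;                // CB_INPUT_AVAILABLE
    std::function<void(size_t, std::string)> onOutputAvailable;  // CB_OUTPUT_AVAILABLE

    void start() { onInputAvailable(0); }  // announce the first free input slot
    void queueInputBuffer(size_t ix, std::string accessUnit) {
        // "decode": pass-through, then report the result on the output port
        onOutputAvailable(ix, "decoded:" + accessUnit);
        onInputAvailable(ix);              // the input slot is free again
    }
};

// Drives the loop: plays the NuPlayerDecoder role for a list of access units.
std::vector<std::string> runPipeline(std::vector<std::string> accessUnits) {
    std::vector<std::string> rendered;
    MockCodec codec;
    size_t next = 0;
    codec.onOutputAvailable = [&](size_t, std::string frame) {
        rendered.push_back(std::move(frame));  // handleAnOutputBuffer -> Renderer
    };
    codec.onInputAvailable = [&](size_t ix) {  // handleAnInputBuffer
        if (next < accessUnits.size())
            codec.queueInputBuffer(ix, accessUnits[next++]);
    };
    codec.start();
    return rendered;
}
```

Note that the wrapper never polls: the codec's availability callbacks are the only thing that advances the pipeline, which is exactly the shape of the asynchronous MediaCodec mode used here.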
3.1 Setting up the async message channel
First, how does NuPlayerDecoder establish the message link with MediaCodec? NuPlayer::Decoder::onConfigure() contains the following code:
- NuPlayerDecoder.cpp
sp<AMessage> reply = new AMessage(kWhatCodecNotify, this);
mCodec->setCallback(reply);
This hands an AMessage to MediaCodec, where it is stored as mCallback. Whenever a buffer becomes available on MediaCodec's input port, onInputBufferAvailable() posts a copy of that message, which is ultimately handled by onMessageReceived() in NuPlayerDecoder.
- MediaCodec.cpp
void MediaCodec::onInputBufferAvailable() {
    int32_t index;
    while ((index = dequeuePortBuffer(kPortIndexInput)) >= 0) {
        sp<AMessage> msg = mCallback->dup();
        msg->setInt32("callbackID", CB_INPUT_AVAILABLE);
        msg->setInt32("index", index);
        msg->post();
    }
}
3.2 Feeding data into MediaCodec
MediaCodec notifies NuPlayerDecoder through MediaCodec::onInputBufferAvailable(), which sends a CB_INPUT_AVAILABLE message. In NuPlayerDecoder:
- NuPlayerDecoder.cpp
case MediaCodec::CB_INPUT_AVAILABLE:
{
    int32_t index;
    CHECK(msg->findInt32("index", &index));

    handleAnInputBuffer(index);
    break;
}
--------------
bool NuPlayer::Decoder::handleAnInputBuffer(size_t index) {
    if (isDiscontinuityPending()) {
        return false;
    }

    sp<ABuffer> buffer;
    mCodec->getInputBuffer(index, &buffer);  // first obtain an available input buffer from MediaCodec
    if (buffer == NULL) {
        handleError(UNKNOWN_ERROR);
        return false;
    }

    if (index >= mInputBuffers.size()) {
        for (size_t i = mInputBuffers.size(); i <= index; ++i) {
            mInputBuffers.add();
            mMediaBuffers.add();
            mInputBufferIsDequeued.add();
            mMediaBuffers.editItemAt(i) = NULL;
            mInputBufferIsDequeued.editItemAt(i) = false;
        }
    }
    mInputBuffers.editItemAt(index) = buffer;

    //CHECK_LT(bufferIx, mInputBuffers.size());

    if (mMediaBuffers[index] != NULL) {
        mMediaBuffers[index]->release();
        mMediaBuffers.editItemAt(index) = NULL;
    }
    mInputBufferIsDequeued.editItemAt(index) = true;

    if (!mCSDsToSubmit.isEmpty()) {
        sp<AMessage> msg = new AMessage();
        msg->setSize("buffer-ix", index);

        sp<ABuffer> buffer = mCSDsToSubmit.itemAt(0);
        ALOGI("[%s] resubmitting CSD", mComponentName.c_str());
        msg->setBuffer("buffer", buffer);
        mCSDsToSubmit.removeAt(0);
        CHECK(onInputBufferFetched(msg));
        return true;
    }

    while (!mPendingInputMessages.empty()) {
        sp<AMessage> msg = *mPendingInputMessages.begin();
        // queue the data into MediaCodec's input; just as importantly,
        // this loop handles EOS -- on EOS it breaks out
        if (!onInputBufferFetched(msg)) {
            break;
        }
        mPendingInputMessages.erase(mPendingInputMessages.begin());
    }

    if (!mInputBufferIsDequeued.editItemAt(index)) {
        return true;
    }

    mDequeuedInputBuffers.push_back(index);

    onRequestInputBuffers();
    return true;
}
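One detail worth noting in handleAnInputBuffer() is the resize block: MediaCodec can hand back an arbitrary buffer index, so the wrapper grows its parallel tracking vectors on demand before marking the slot as dequeued. A sketch of just that step, with plain std::vector standing in for android::Vector and illustrative names:

```cpp
#include <cstddef>
#include <vector>

struct InputSlots {
    std::vector<int> buffers;      // mInputBuffers (payload type elided)
    std::vector<bool> isDequeued;  // mInputBufferIsDequeued

    // Ensure every parallel vector covers `index`, then mark the slot dequeued,
    // mirroring the "if (index >= mInputBuffers.size())" block above.
    void onInputBufferAvailable(size_t index) {
        if (index >= buffers.size()) {
            buffers.resize(index + 1, 0);
            isDequeued.resize(index + 1, false);
        }
        isDequeued[index] = true;
    }
};
```

Keeping the vectors the same length is what makes a single codec index usable across all of them later (release, re-queue, EOS handling).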
The real loop is in onRequestInputBuffers(), shown again here:
void NuPlayer::DecoderBase::onRequestInputBuffers() {
    if (mRequestInputBuffersPending) {
        return;
    }

    // doRequestBuffers() return true if we should request more data
    if (doRequestBuffers()) {
        mRequestInputBuffersPending = true;

        sp<AMessage> msg = new AMessage(kWhatRequestInputBuffers, this);
        msg->post(2 * 1000ll);
    }
}
If more data is needed, doRequestBuffers() returns true and the kWhatRequestInputBuffers message is posted; that message's handler calls onRequestInputBuffers() again, so the cycle continues. The core, then, is doRequestBuffers():
bool NuPlayer::Decoder::doRequestBuffers() {
    // mRenderer is only NULL if we have a legacy widevine source that
    // is not yet ready. In this case we must not fetch input.
    if (isDiscontinuityPending() || mRenderer == NULL) {
        return false;
    }
    status_t err = OK;
    while (err == OK && !mDequeuedInputBuffers.empty()) {
        size_t bufferIx = *mDequeuedInputBuffers.begin();
        sp<AMessage> msg = new AMessage();
        msg->setSize("buffer-ix", bufferIx);
        err = fetchInputData(msg);
        if (err != OK && err != ERROR_END_OF_STREAM) {
            // if EOS, need to queue EOS buffer
            break;
        }
        mDequeuedInputBuffers.erase(mDequeuedInputBuffers.begin());

        if (!mPendingInputMessages.empty()
                || !onInputBufferFetched(msg)) {
            mPendingInputMessages.push_back(msg);
        }
    }

    return err == -EWOULDBLOCK
            && mSource->feedMoreTSData() == OK;
}
Two functions matter here: fetchInputData(msg) pulls data from the Source, and onInputBufferFetched() fills the buffer and passes the data on to MediaCodec. Let's see how onInputBufferFetched() does that:
bool NuPlayer::Decoder::onInputBufferFetched(const sp<AMessage> &msg) {
    size_t bufferIx;
    CHECK(msg->findSize("buffer-ix", &bufferIx));
    CHECK_LT(bufferIx, mInputBuffers.size());
    sp<ABuffer> codecBuffer = mInputBuffers[bufferIx];

    sp<ABuffer> buffer;
    bool hasBuffer = msg->findBuffer("buffer", &buffer);

    // handle widevine classic source - that fills an arbitrary input buffer
    MediaBuffer *mediaBuffer = NULL;
    if (hasBuffer) {
        mediaBuffer = (MediaBuffer *)(buffer->getMediaBufferBase());
        if (mediaBuffer != NULL) {
            // likely filled another buffer than we requested: adjust buffer index
            size_t ix;
            for (ix = 0; ix < mInputBuffers.size(); ix++) {
                const sp<ABuffer> &buf = mInputBuffers[ix];
                if (buf->data() == mediaBuffer->data()) {
                    // all input buffers are dequeued on start, hence the check
                    if (!mInputBufferIsDequeued[ix]) {
                        ALOGV("[%s] received MediaBuffer for #%zu instead of #%zu",
                                mComponentName.c_str(), ix, bufferIx);
                        mediaBuffer->release();
                        return false;
                    }

                    // TRICKY: need buffer for the metadata, so instead, set
                    // codecBuffer to the same (though incorrect) buffer to
                    // avoid a memcpy into the codecBuffer
                    codecBuffer = buffer;
                    codecBuffer->setRange(
                            mediaBuffer->range_offset(),
                            mediaBuffer->range_length());
                    bufferIx = ix;
                    break;
                }
            }
            CHECK(ix < mInputBuffers.size());
        }
    }

    if (buffer == NULL /* includes !hasBuffer */) {
        int32_t streamErr = ERROR_END_OF_STREAM;
        CHECK(msg->findInt32("err", &streamErr) || !hasBuffer);
        CHECK(streamErr != OK);

        // attempt to queue EOS
        status_t err = mCodec->queueInputBuffer(
                bufferIx,
                0,
                0,
                0,
                MediaCodec::BUFFER_FLAG_EOS);
        if (err == OK) {
            mInputBufferIsDequeued.editItemAt(bufferIx) = false;
        } else if (streamErr == ERROR_END_OF_STREAM) {
            streamErr = err;
            // err will not be ERROR_END_OF_STREAM
        }

        if (streamErr != ERROR_END_OF_STREAM) {
            ALOGE("Stream error for %s (err=%d), EOS %s queued",
                    mComponentName.c_str(),
                    streamErr,
                    err == OK ? "successfully" : "unsuccessfully");
            handleError(streamErr);
        }
    } else {
        sp<AMessage> extra;
        if (buffer->meta()->findMessage("extra", &extra) && extra != NULL) {
            int64_t resumeAtMediaTimeUs;
            if (extra->findInt64(
                    "resume-at-mediaTimeUs", &resumeAtMediaTimeUs)) {
                ALOGI("[%s] suppressing rendering until %lld us",
                        mComponentName.c_str(), (long long)resumeAtMediaTimeUs);
                mSkipRenderingUntilMediaTimeUs = resumeAtMediaTimeUs;
            }
        }

        int64_t timeUs = 0;
        uint32_t flags = 0;
        CHECK(buffer->meta()->findInt64("timeUs", &timeUs));

        int32_t eos, csd;
        // we do not expect SYNCFRAME for decoder
        if (buffer->meta()->findInt32("eos", &eos) && eos) {
            flags |= MediaCodec::BUFFER_FLAG_EOS;
        } else if (buffer->meta()->findInt32("csd", &csd) && csd) {
            flags |= MediaCodec::BUFFER_FLAG_CODECCONFIG;
        }

        // copy into codec buffer
        if (buffer != codecBuffer) {
            CHECK_LE(buffer->size(), codecBuffer->capacity());
            codecBuffer->setRange(0, buffer->size());
            memcpy(codecBuffer->data(), buffer->data(), buffer->size());  // the actual data copy
        }

        status_t err = mCodec->queueInputBuffer(  // hand the buffer to the codec here
                bufferIx,
                codecBuffer->offset(),
                codecBuffer->size(),
                timeUs,
                flags);
        if (err != OK) {
            if (mediaBuffer != NULL) {
                mediaBuffer->release();
            }
            ALOGE("Failed to queue input buffer for %s (err=%d)",
                    mComponentName.c_str(), err);
            handleError(err);
        } else {
            mInputBufferIsDequeued.editItemAt(bufferIx) = false;
            if (mediaBuffer != NULL) {
                CHECK(mMediaBuffers[bufferIx] == NULL);
                mMediaBuffers.editItemAt(bufferIx) = mediaBuffer;
            }
        }
    }
    return true;
}
With that, the data has been handed to MediaCodec and the pre-decode side is ready.
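Before queueInputBuffer(), onInputBufferFetched() maps access-unit metadata onto MediaCodec buffer flags: an access unit tagged "eos" becomes BUFFER_FLAG_EOS, a codec-specific-data unit becomes BUFFER_FLAG_CODECCONFIG, and a missing buffer is queued as an empty EOS buffer. The sketch below isolates that mapping; the flag values mirror MediaCodec's public constants, while the AccessUnit struct is an illustrative stand-in for the AMessage metadata.

```cpp
#include <cstdint>

enum : uint32_t {
    BUFFER_FLAG_CODECCONFIG = 2,  // MediaCodec::BUFFER_FLAG_CODECCONFIG
    BUFFER_FLAG_EOS         = 4,  // MediaCodec::BUFFER_FLAG_EOS
};

struct AccessUnit {
    bool hasBuffer;  // false => the stream ended with no payload to queue
    bool eos;        // meta "eos"
    bool csd;        // meta "csd" (codec-specific data, e.g. SPS/PPS)
};

uint32_t inputFlagsFor(const AccessUnit &au) {
    if (!au.hasBuffer) return BUFFER_FLAG_EOS;  // queue an empty EOS buffer
    uint32_t flags = 0;
    if (au.eos)      flags |= BUFFER_FLAG_EOS;
    else if (au.csd) flags |= BUFFER_FLAG_CODECCONFIG;
    return flags;
}
```

Note the else-if: a buffer is either codec config or payload, never both, matching the branch structure in the original function.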
3.3 Where the decoded data goes
When data appears on MediaCodec's output port, MediaCodec::onOutputBufferAvailable() runs and sends a CB_OUTPUT_AVAILABLE message to NuPlayerDecoder, again handled in NuPlayer::Decoder::onMessageReceived():
case MediaCodec::CB_OUTPUT_AVAILABLE:
{
    int32_t index;
    size_t offset;
    size_t size;
    int64_t timeUs;
    int32_t flags;

    CHECK(msg->findInt32("index", &index));
    CHECK(msg->findSize("offset", &offset));
    CHECK(msg->findSize("size", &size));
    CHECK(msg->findInt64("timeUs", &timeUs));  // presentation timestamp (pts)
    CHECK(msg->findInt32("flags", &flags));

    handleAnOutputBuffer(index, offset, size, timeUs, flags);
    break;
}
Continuing:
bool NuPlayer::Decoder::handleAnOutputBuffer(
        size_t index,
        size_t offset,
        size_t size,
        int64_t timeUs,
        int32_t flags) {
    // CHECK_LT(bufferIx, mOutputBuffers.size());
    sp<ABuffer> buffer;
    mCodec->getOutputBuffer(index, &buffer);

    if (index >= mOutputBuffers.size()) {
        for (size_t i = mOutputBuffers.size(); i <= index; ++i) {
            mOutputBuffers.add();
        }
    }

    mOutputBuffers.editItemAt(index) = buffer;

    buffer->setRange(offset, size);
    buffer->meta()->clear();
    buffer->meta()->setInt64("timeUs", timeUs);

    bool eos = flags & MediaCodec::BUFFER_FLAG_EOS;
    // we do not expect CODECCONFIG or SYNCFRAME for decoder

    sp<AMessage> reply = new AMessage(kWhatRenderBuffer, this);
    reply->setSize("buffer-ix", index);
    reply->setInt32("generation", mBufferGeneration);

    if (eos) {
        ALOGI("[%s] saw output EOS", mIsAudio ? "audio" : "video");

        buffer->meta()->setInt32("eos", true);
        reply->setInt32("eos", true);
    } else if (mSkipRenderingUntilMediaTimeUs >= 0) {
        if (timeUs < mSkipRenderingUntilMediaTimeUs) {
            ALOGV("[%s] dropping buffer at time %lld as requested.",
                    mComponentName.c_str(), (long long)timeUs);

            reply->post();
            return true;
        }

        mSkipRenderingUntilMediaTimeUs = -1;
    }

    mNumFramesTotal += !mIsAudio;

    // wait until 1st frame comes out to signal resume complete
    notifyResumeCompleteIfNecessary();

    if (mRenderer != NULL) {
        // send the buffer to renderer.
        mRenderer->queueBuffer(mIsAudio, buffer, reply);  // hand it to the Renderer
        if (eos && !isDiscontinuityPending()) {
            mRenderer->queueEOS(mIsAudio, ERROR_END_OF_STREAM);
        }
    }
    return true;
}
mRenderer->queueBuffer() is what hands the data to the Renderer. The Renderer uses timing to decide whether the buffer should be rendered or the frame dropped, and feeds that decision back to NuPlayerDecoder in a notify; the onRenderBuffer handler below then decides, based on that notify, whether to render the frame.
The reply prepared above is the kWhatRenderBuffer message; when the Renderer posts it, it is handled as follows:
case kWhatRenderBuffer:
{
    if (!isStaleReply(msg)) {
        onRenderBuffer(msg);
    }
    break;
}
----------------------------
void NuPlayer::Decoder::onRenderBuffer(const sp<AMessage> &msg) {
    status_t err;
    int32_t render;
    size_t bufferIx;
    int32_t eos;
    CHECK(msg->findSize("buffer-ix", &bufferIx));

    if (!mIsAudio) {
        int64_t timeUs;
        sp<ABuffer> buffer = mOutputBuffers[bufferIx];
        buffer->meta()->findInt64("timeUs", &timeUs);

        if (mCCDecoder != NULL && mCCDecoder->isSelected()) {
            mCCDecoder->display(timeUs);
        }
    }

    if (msg->findInt32("render", &render) && render) {
        int64_t timestampNs;
        CHECK(msg->findInt64("timestampNs", &timestampNs));
        err = mCodec->renderOutputBufferAndRelease(bufferIx, timestampNs);
    } else {
        mNumOutputFramesDropped += !mIsAudio;
        err = mCodec->releaseOutputBuffer(bufferIx);
    }
    if (err != OK) {
        ALOGE("failed to release output buffer for %s (err=%d)",
                mComponentName.c_str(), err);
        handleError(err);
    }
    if (msg->findInt32("eos", &eos) && eos
            && isDiscontinuityPending()) {
        finishHandleDiscontinuity(true /* flushOnTimeChange */);
    }
}
The (msg->findInt32("render", &render) && render) check reads the notify passed back from the Renderer. If the frame should be rendered, the condition is true and mCodec->renderOutputBufferAndRelease() renders the frame and then releases the buffer, returning it to MediaCodec.
If the condition is false, mCodec->releaseOutputBuffer() is called instead: the buffer is handed straight back to MediaCodec without being rendered.
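That render-or-drop fork is simple enough to isolate: the Renderer's reply says whether the frame made its deadline, and the handler either renders-and-releases or just releases. ReplyMsg and the Codec counters below are illustrative stand-ins for the AMessage reply and the MediaCodec calls.

```cpp
#include <cstddef>
#include <cstdint>

struct ReplyMsg {
    bool render;          // set by the Renderer when the frame should be shown
    int64_t timestampNs;  // only meaningful when render is true
};

struct Codec {
    int rendered = 0, released = 0;
    void renderOutputBufferAndRelease(size_t, int64_t) { ++rendered; ++released; }
    void releaseOutputBuffer(size_t) { ++released; }
};

// Returns the number of frames dropped by this call (0 or 1),
// mirroring the mNumOutputFramesDropped bookkeeping.
int onRenderBuffer(Codec &codec, const ReplyMsg &msg, size_t bufferIx) {
    if (msg.render) {
        codec.renderOutputBufferAndRelease(bufferIx, msg.timestampNs);
        return 0;
    }
    codec.releaseOutputBuffer(bufferIx);  // drop: hand the buffer straight back
    return 1;
}
```

Either way the buffer ends up back in MediaCodec's pool, which is what keeps the output side of the loop turning.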
That completes the analysis of the Decoder path.