The basic flow of audio/video playback
Before we get into the concrete implementation, let's look at the basic flow of audio/video playback:
The flow is simple: the muxed stream is demuxed into an encoded audio stream and an encoded video stream. The audio decoder then turns the audio stream into PCM data for the audio device to play, while the video decoder turns the video stream into YUV data for the display to render.
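To make the flow concrete, here is a toy sketch of the same pipeline in C++. Everything in it (Demuxer, Packet, PcmData, YuvData and the decode/play helpers) is made up purely for illustration; none of these are framework classes:
// A toy model of the pipeline above; every type here is hypothetical.
#include <queue>

struct Packet  { bool isAudio = false; /* one compressed A/V packet */ };
struct PcmData { /* decoded audio samples */ };
struct YuvData { /* decoded video pixels  */ };

struct Demuxer {
    std::queue<Packet> packets;              // the muxed A/V stream
    bool read(Packet *out) {                 // demux: hand out one packet
        if (packets.empty()) return false;
        *out = packets.front();
        packets.pop();
        return true;
    }
};

PcmData decodeAudio(const Packet &) { return {}; }   // audio decoder -> PCM
YuvData decodeVideo(const Packet &) { return {}; }   // video decoder -> YUV
void playPcm(const PcmData &) {}                     // audio device
void renderYuv(const YuvData &) {}                   // display

void playbackLoop(Demuxer &demuxer) {
    Packet pkt;
    while (demuxer.read(&pkt)) {
        if (pkt.isAudio) {
            playPcm(decodeAudio(pkt));
        } else {
            renderYuv(decodeVideo(pkt));
        }
    }
}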
StagefrightPlayer
The previous article described how MediaPlayerService creates players through MediaPlayerFactory, and one of the players it can create is StagefrightPlayer. StagefrightPlayer is really just a thin shell, though: it simply forwards every call to AwesomePlayer:
//StagefrightPlayer.h
class StagefrightPlayer : public MediaPlayerInterface {
    ...
private:
    AwesomePlayer *mPlayer;
    ...
};
//StagefrightPlayer.cpp
status_t StagefrightPlayer::pause() {
    ALOGV("pause");
    return mPlayer->pause();
}

bool StagefrightPlayer::isPlaying() {
    ALOGV("isPlaying");
    return mPlayer->isPlaying();
}

status_t StagefrightPlayer::seekTo(int msec) {
    ALOGV("seekTo %.2f secs", msec / 1E3);
    status_t err = mPlayer->seekTo((int64_t)msec * 1000);
    return err;
}
...
So let's go straight to the AwesomePlayer implementation.
The multithreaded architecture
Audio/video processing is usually expensive, so AwesomePlayer does its work on a separate thread to avoid blocking MediaPlayerService's main thread.
The architecture looks like this (the diagram is borrowed from another blog post, which is genuinely well written and worth a careful read if you're interested):
AwesomePlayer holds a TimedEventQueue internally. Every operation is wrapped in an Event and dropped onto this queue, and TimedEventQueue spins up a worker thread that keeps pulling Events off the queue and executing them.
For example, the prepare operation eventually reaches prepareAsync_l, which simply creates an Event and posts it onto the queue via postEvent:
status_t AwesomePlayer::prepareAsync_l() {
    ...
    if (!mQueueStarted) {
        mQueue.start();
        mQueueStarted = true;
    }
    ...
    mAsyncPrepareEvent = new AwesomeEvent(
            this, &AwesomePlayer::onPrepareAsyncEvent);

    mQueue.postEvent(mAsyncPrepareEvent);

    return OK;
}
AwesomeEvent derives from TimedEventQueue::Event and implements fire, which calls back into the registered member function:
struct AwesomeEvent : public TimedEventQueue::Event {
    AwesomeEvent(
            AwesomePlayer *player,
            void (AwesomePlayer::*method)())
        : mPlayer(player),
          mMethod(method) {
    }
    ...
    virtual void fire(TimedEventQueue *queue, int64_t /* now_us */) {
        (mPlayer->*mMethod)();
    }
    ...
};
TimedEventQueue::start creates a worker thread that runs TimedEventQueue::threadEntry, where an endless loop keeps taking Events out of the queue and calling their fire methods:
void TimedEventQueue::start() {
    if (mRunning) {
        return;
    }

    mStopped = false;

    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);

    pthread_create(&mThread, &attr, ThreadWrapper, this);

    pthread_attr_destroy(&attr);
    mRunning = true;
}

void *TimedEventQueue::ThreadWrapper(void *me) {
    androidSetThreadPriority(0, ANDROID_PRIORITY_FOREGROUND);

    static_cast<TimedEventQueue *>(me)->threadEntry();

    return NULL;
}

void TimedEventQueue::threadEntry() {
    ...
    for (;;) {
        ...
        event = removeEventFromQueue_l(eventID);
        if (event != NULL) {
            // Fire event with the lock NOT held.
            event->fire(this, now_us);
        }
    }
}
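That is the whole threading model: a queue of timed events plus one worker thread that fires them. Stripped of the framework details, the pattern boils down to something like the following self-contained sketch (TinyEventQueue is my own illustration, not the real TimedEventQueue):
// A minimal timed event queue: post callables with a delay, a worker thread
// pops and runs each one when its time arrives.
#include <chrono>
#include <condition_variable>
#include <functional>
#include <map>
#include <mutex>
#include <thread>

class TinyEventQueue {
public:
    void start() { mThread = std::thread([this] { threadEntry(); }); }

    void stop() {
        { std::lock_guard<std::mutex> l(mLock); mStopped = true; }
        mCond.notify_all();
        mThread.join();
    }

    // Schedule fn to run delayUs microseconds from now.
    void postEventWithDelay(std::function<void()> fn, int64_t delayUs) {
        std::lock_guard<std::mutex> l(mLock);
        mEvents.emplace(Clock::now() + std::chrono::microseconds(delayUs),
                        std::move(fn));
        mCond.notify_one();
    }

private:
    using Clock = std::chrono::steady_clock;

    void threadEntry() {
        std::unique_lock<std::mutex> l(mLock);
        while (!mStopped) {
            if (mEvents.empty()) { mCond.wait(l); continue; }
            auto when = mEvents.begin()->first;   // earliest deadline
            if (mCond.wait_until(l, when) == std::cv_status::timeout) {
                auto fn = std::move(mEvents.begin()->second);
                mEvents.erase(mEvents.begin());
                l.unlock();
                fn();          // fire the event with the lock NOT held
                l.lock();
            }
        }
    }

    std::mutex mLock;
    std::condition_variable mCond;
    std::multimap<Clock::time_point, std::function<void()>> mEvents;  // sorted by fire time
    std::thread mThread;
    bool mStopped = false;
};
AwesomePlayer's mQueue plays exactly this role, and AwesomeEvent is just the glue that turns a member-function pointer into something the queue can fire.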
Demux
Let's start with prepare. When the prepare event fires, it actually calls AwesomePlayer::beginPrepareAsync_l(), which is where the data source really gets set up, followed by the initialization of the demuxer, the video decoder, and the audio decoder:
void AwesomePlayer::onPrepareAsyncEvent() {
    Mutex::Autolock autoLock(mLock);
    beginPrepareAsync_l();
}

void AwesomePlayer::beginPrepareAsync_l() {
    ...
    status_t err = finishSetDataSource_l();
    ...
    status_t err = initVideoDecoder();
    ...
    status_t err = initAudioDecoder();
}
AwesomePlayer::finishSetDataSource_l essentially finds the right MediaExtractor for the media source. The MediaExtractor is what implements the Demux step of the basic playback flow: it splits the container into a video track and an audio track.
The code looks like this:
status_t AwesomePlayer::finishSetDataSource_l() {
    ...
    extractor = MediaExtractor::Create(
            dataSource, sniffedMIME.empty() ? NULL : sniffedMIME.c_str());
    ...
    status_t err = setDataSource_l(extractor);
    ...
}

status_t AwesomePlayer::setDataSource_l(const sp<MediaExtractor> &extractor) {
    ...
    for (size_t i = 0; i < extractor->countTracks(); ++i) {
        sp<MetaData> meta = extractor->getTrackMetaData(i);

        const char *_mime;
        CHECK(meta->findCString(kKeyMIMEType, &_mime));

        String8 mime = String8(_mime);
        ...
        if (!haveVideo && !strncasecmp(mime.string(), "video/", 6)) {
            setVideoSource(extractor->getTrack(i));
            ...
        } else if (!haveAudio && !strncasecmp(mime.string(), "audio/", 6)) {
            setAudioSource(extractor->getTrack(i));
            ...
        }
        ...
    }
    ...
}
The implementation of MediaExtractor::Create is pretty blunt: it checks the media type and creates the matching MediaExtractor, such as MPEG4Extractor, MP3Extractor, and so on:
sp<MediaExtractor> MediaExtractor::Create(
        const sp<DataSource> &source, const char *mime) {
    ...
    MediaExtractor *ret = NULL;
    if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MPEG4)
            || !strcasecmp(mime, "audio/mp4")) {
        ret = new MPEG4Extractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_MPEG)) {
        ret = new MP3Extractor(source, meta);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_NB)
            || !strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_WB)) {
        ret = new AMRExtractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_FLAC)) {
        ret = new FLACExtractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_WAV)) {
        ret = new WAVExtractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_OGG)) {
        ret = new OggExtractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MATROSKA)) {
        ret = new MatroskaExtractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MPEG2TS)) {
        ret = new MPEG2TSExtractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_WVM)) {
        // Return now. WVExtractor should not have the DrmFlag set in the block below.
        return new WVMExtractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AAC_ADTS)) {
        ret = new AACExtractor(source, meta);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MPEG2PS)) {
        ret = new MPEG2PSExtractor(source);
    }
    ...
}
The decoders
AwesomePlayer::initVideoDecoder and AwesomePlayer::initAudioDecoder then hand the decoding work to OMXCodec. OMXCodec is really just a wrapper around OpenMAX, and OpenMAX is where the concrete decoder implementations live:
status_t AwesomePlayer::initVideoDecoder(uint32_t flags) {
    ...
    mVideoSource = OMXCodec::Create(
            mClient.interface(), mVideoTrack->getFormat(),
            false, // createEncoder
            mVideoTrack,
            NULL, flags, USE_SURFACE_ALLOC ? mNativeWindow : NULL);
    ...
}

status_t AwesomePlayer::initAudioDecoder() {
    ...
    mOmxSource = OMXCodec::Create(
            mClient.interface(), mAudioTrack->getFormat(),
            false, // createEncoder
            mAudioTrack);
    ...
}
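Whichever codec OMXCodec ends up wrapping, the rest of AwesomePlayer talks to it through the MediaSource pull interface: start it, then repeatedly read() decoded buffers out of it. Roughly, the usage pattern looks like this (a simplified sketch, not a verbatim excerpt; error handling omitted):
sp<MediaSource> decoder = mVideoSource;      // what OMXCodec::Create returned
decoder->start();

for (;;) {
    MediaBuffer *buffer = NULL;
    status_t err = decoder->read(&buffer);   // blocks until a decoded frame is ready
    if (err != OK) {
        break;                               // e.g. ERROR_END_OF_STREAM
    }

    int64_t timeUs;
    buffer->meta_data()->findInt64(kKeyTime, &timeUs);  // presentation timestamp
    // ...hand the decoded buffer to the renderer / audio sink here...

    buffer->release();                       // return the buffer to the codec
}

decoder->stop();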
The playback flow
In the Java layer the app calls MediaPlayer.start, which eventually travels over IPC into MediaPlayerService and ends up in StagefrightPlayer::start. Let's dig down from there:
// From here on the code is in StagefrightPlayer.cpp
status_t StagefrightPlayer::start() {
    return mPlayer->play();
}

// From here on the code is in AwesomePlayer.cpp
status_t AwesomePlayer::play() {
    ...
    return play_l();
}

status_t AwesomePlayer::play_l() {
    ...
    createAudioPlayer_l();
    ...
    postVideoEvent_l();
    ...
    return OK;
}

void AwesomePlayer::postVideoEvent_l(int64_t delayUs) {
    ...
    mQueue.postEventWithDelay(mVideoEvent, delayUs < 0 ? 10000 : delayUs);
}
AwesomePlayer::play_l calls AwesomePlayer::createAudioPlayer_l to create an AudioPlayer, and then calls AwesomePlayer::postVideoEvent_l to drop an event onto mQueue.
Remember mVideoEvent? It is bound to AwesomePlayer::onVideoEvent, which means that once this event lands in mQueue, onVideoEvent will be called on the worker thread:
mVideoEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoEvent);
Let's keep going and see what AwesomePlayer::onVideoEvent actually does:
void AwesomePlayer::onVideoEvent() {
    ...
    status_t err = mVideoSource->read(&mVideoBuffer, &options);
    ...
    if ((mNativeWindow != NULL)
            && (mVideoRendererIsPreview || mVideoRenderer == NULL)) {
        mVideoRendererIsPreview = false;

        initRenderer_l();
    }
    ...
    if (mAudioPlayer != NULL && !(mFlags & (AUDIO_RUNNING | SEEK_PREVIEW))) {
        startAudioPlayer_l();
    }
    ...
    if (mVideoRenderer != NULL) {
        ...
        mVideoRenderer->render(mVideoBuffer);
        ...
    }
    ...
    postVideoEvent_l();
}
The most important things this method does are to create a VideoRenderer, read a decoded video frame from mVideoSource and render it, and then call AwesomePlayer::postVideoEvent_l to put another video event back onto the queue. That's how the picture keeps refreshing.
As you can see, the method also starts the audio player so the audio gets played. It actually does some audio/video synchronization work here too, but the logic is fairly long-winded, so I've left it out.
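For the curious, the idea behind that sync work is roughly this: the video frame's timestamp is compared against the audio clock, late frames are dropped and early frames are delayed. The snippet below is only an illustration in the spirit of onVideoEvent, not the actual code, and the thresholds are just typical values:
int64_t timeUs;
mVideoBuffer->meta_data()->findInt64(kKeyTime, &timeUs);   // video frame PTS

// When audio is playing, the audio position acts as the master clock.
int64_t nowUs = (mAudioPlayer != NULL)
        ? mAudioPlayer->getMediaTimeUs()
        : mSystemTimeSource.getRealTimeUs();

int64_t latenessUs = nowUs - timeUs;
if (latenessUs > 40000) {
    // More than 40ms late: drop this frame and go fetch the next one.
    mVideoBuffer->release();
    mVideoBuffer = NULL;
    postVideoEvent_l(0);
    return;
}
if (latenessUs < -10000) {
    // Frame is early: come back when it is (almost) due.
    postVideoEvent_l(10000);
    return;
}

mVideoRenderer->render(mVideoBuffer);   // on time: show it
postVideoEvent_l();                     // schedule the next frame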
VideoRenderer
Finally, let's see where the VideoRenderer comes from:
void AwesomePlayer::initRenderer_l() {
    ...
    if (USE_SURFACE_ALLOC
            && !strncmp(component, "OMX.", 4)
            && strncmp(component, "OMX.google.", 11)
            && strcmp(component, "OMX.Nvidia.mpeg2v.decode")) {
        mVideoRenderer =
            new AwesomeNativeWindowRenderer(mNativeWindow, rotationDegrees);
    } else {
        mVideoRenderer = new AwesomeLocalRenderer(mNativeWindow, meta);
    }
}
As you can see, depending on the decoder component, either an AwesomeNativeWindowRenderer or an AwesomeLocalRenderer is created on top of mNativeWindow. This mNativeWindow is where the picture ultimately gets drawn.
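Very roughly, the hardware path (AwesomeNativeWindowRenderer) just hands the gralloc buffer that the decoder already filled straight to the native window, while AwesomeLocalRenderer has to color-convert and copy the frame in software. A sketch of the native-window case (illustrative, not a verbatim excerpt):
// Inside something like AwesomeNativeWindowRenderer:
virtual void render(MediaBuffer *buffer) {
    int64_t timeUs;
    CHECK(buffer->meta_data()->findInt64(kKeyTime, &timeUs));

    // Tell the window the frame's presentation time, then queue the
    // decoder-owned graphic buffer for display -- no copy, no conversion.
    native_window_set_buffers_timestamp(mNativeWindow.get(), timeUs * 1000);
    status_t err = mNativeWindow->queueBuffer(
            mNativeWindow.get(), buffer->graphicBuffer().get(), -1);

    if (err == 0) {
        buffer->meta_data()->setInt32(kKeyRendered, 1);  // mark the frame as shown
    }
}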
Now let's see where mNativeWindow comes from:
// AwesomePlayer.cpp
status_t AwesomePlayer::setNativeWindow_l(const sp<ANativeWindow> &native) {
    mNativeWindow = native;
    ...
}

status_t AwesomePlayer::setSurfaceTexture(const sp<IGraphicBufferProducer> &bufferProducer) {
    ...
    err = setNativeWindow_l(new Surface(bufferProducer));
    ...
}
//StagefrightPlayer.cpp
status_t StagefrightPlayer::setVideoSurfaceTexture(
        const sp<IGraphicBufferProducer> &bufferProducer) {
    ALOGV("setVideoSurfaceTexture");

    return mPlayer->setSurfaceTexture(bufferProducer);
}
//MediaPlayerService.cpp
status_t MediaPlayerService::Client::setVideoSurfaceTexture(
        const sp<IGraphicBufferProducer>& bufferProducer) {
    ...
    sp<MediaPlayerBase> p = getPlayer();
    ...
    status_t err = p->setVideoSurfaceTexture(bufferProducer);
    ...
}
//MediaPlayer.cpp
status_t MediaPlayer::setVideoSurfaceTexture(
        const sp<IGraphicBufferProducer>& bufferProducer)
{
    ...
    return mPlayer->setVideoSurfaceTexture(bufferProducer);
}
//android_media_MediaPlayer.cpp
static void setVideoSurface(JNIEnv *env, jobject thiz, jobject jsurface, jboolean mediaPlayerMustBeAlive)
{
    sp<MediaPlayer> mp = getMediaPlayer(env, thiz);
    ...
    sp<Surface> surface(android_view_Surface_getSurface(env, jsurface));
    ...
    new_st = surface->getIGraphicBufferProducer();
    ...
    mp->setVideoSurfaceTexture(new_st);
}

static void android_media_MediaPlayer_setVideoSurface(JNIEnv *env, jobject thiz, jobject jsurface)
{
    setVideoSurface(env, thiz, jsurface, true /* mediaPlayerMustBeAlive */);
}
//android.media.MediaPlayer.java
public class MediaPlayer extends PlayerBase
                         implements SubtitleController.Listener
                                  , VolumeAutomation
                                  , AudioRouting
{
    ...
    private native void _setVideoSurface(Surface surface);
    ...
    public void setDisplay(SurfaceHolder sh) {
        mSurfaceHolder = sh;
        Surface surface;
        if (sh != null) {
            surface = sh.getSurface();
        } else {
            surface = null;
        }
        _setVideoSurface(surface);
        updateSurfaceScreenOn();
    }
    ...
}
So the VideoRenderer is ultimately built on the SurfaceHolder that the app passes to MediaPlayer.setDisplay, which explains how the picture ends up on the SurfaceView you specify.
The complete architecture
The overall rendering architecture looks like this: