Preface
The previous articles roughly covered reading the data; next come decoding and playback. So how does ijkplayer actually decode and play?
The Decode Thread
From the previous article we saw that the entry function of ijkplayer's audio decode thread is audio_thread(), so let's follow it into audio_thread()/ff_ffplay.c:
static int audio_thread(void *arg)
{
    // ...
    do {
        ffp_audio_statistic_l(ffp);
        if ((got_frame = decoder_decode_frame(ffp, &is->auddec, frame, NULL)) < 0)
            goto the_end;
        // ...
        while ((ret = av_buffersink_get_frame_flags(is->out_audio_filter, frame, 0)) >= 0) {
            // ...
            if (!(af = frame_queue_peek_writable(&is->sampq)))
                goto the_end;
            // ...
            av_frame_move_ref(af->frame, frame);
            frame_queue_push(&is->sampq);
            // ...
        }
    } while (ret >= 0 || ret == AVERROR(EAGAIN) || ret == AVERROR_EOF);
the_end:
    // ...
    av_frame_free(&frame);
    return ret;
}
As the code shows, the function enters a loop right away and calls decoder_decode_frame() to do the decoding; the decoded frame ends up in frame. It then calls frame_queue_peek_writable() to check whether the freshly decoded frame can be written into is->sampq. is->sampq is the queue of decoded audio frames, and the playback thread reads directly from it and plays the data out. Finally, av_frame_move_ref(af->frame, frame); moves the frame into its slot in sampq. Because of the earlier af = frame_queue_peek_writable(&is->sampq), af already points at the slot this frame should occupy, so moving the data into its frame member is all that is needed. frame_queue_push(&is->sampq); then performs a wake-up: if the audio playback thread is blocked because sampq is empty, this is what wakes it.
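For reference, the wake-up inside frame_queue_push() is just a condition-variable signal. A lightly trimmed sketch of the ffplay-style FrameQueue helper that ijkplayer inherits (the exact code may differ slightly between versions):

static void frame_queue_push(FrameQueue *f)
{
    // advance the write index, wrapping around the ring buffer
    if (++f->windex == f->max_size)
        f->windex = 0;
    SDL_LockMutex(f->mutex);
    f->size++;
    // wake any reader (e.g. the audio playback path) blocked on an empty queue
    SDL_CondSignal(f->cond);
    SDL_UnlockMutex(f->mutex);
}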
Inside decoder_decode_frame(), the actual decoding is done by invoking the decode routine of the codec that was passed in.
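For the audio case, that decode step ultimately goes through FFmpeg's decoding API. A simplified sketch of what the audio branch of decoder_decode_frame() boils down to in the FFmpeg versions ijkplayer bundled at the time (names such as d->pkt_temp follow the ffplay Decoder struct of that era; treat this as illustrative rather than the verbatim source):

switch (d->avctx->codec_type) {
case AVMEDIA_TYPE_AUDIO:
    // decode one packet into an AVFrame; got_frame != 0 means `frame` now holds PCM samples
    ret = avcodec_decode_audio4(d->avctx, frame, &got_frame, &d->pkt_temp);
    break;
// ...
}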
frame_queue_peek_writable() checks whether the sampq queue is full. If there is no free slot for our frame, it blocks the calling thread on pthread_cond_wait(); if there is room, it returns the address of the slot where the frame should be placed.
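The blocking logic is compact; here is a lightly trimmed version of the ffplay-style helper (ijkplayer's SDL_CondWait is a thin wrapper over pthread_cond_wait):

static Frame *frame_queue_peek_writable(FrameQueue *f)
{
    /* wait until we have space to put a new frame */
    SDL_LockMutex(f->mutex);
    while (f->size >= f->max_size && !f->pktq->abort_request) {
        SDL_CondWait(f->cond, f->mutex);   // blocks until a consumer frees a slot
    }
    SDL_UnlockMutex(f->mutex);

    if (f->pktq->abort_request)
        return NULL;

    // the slot at the write index is where the next frame should go
    return &f->queue[f->windex];
}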
And that is roughly it for the decode thread.
The Playback Flow
Back during initialization there was one place we skipped, namely this part of ijkmp_android_create()/ijkplayer_android.c:
IjkMediaPlayer *ijkmp_android_create(int(*msg_loop)(void*))
{
    IjkMediaPlayer *mp = ijkmp_create(msg_loop);
    // ...
    mp->ffplayer->vout = SDL_VoutAndroid_CreateForAndroidSurface();
    if (!mp->ffplayer->vout)
        goto fail;

    mp->ffplayer->pipeline = ffpipeline_create_from_android(mp->ffplayer);
    ffpipeline_set_vout(mp->ffplayer->pipeline, mp->ffplayer->vout);

    return mp;

fail:
    // ... error cleanup elided ...
    return NULL;
}
Inside ffpipeline_create_from_android() there is this line:
pipeline->func_open_audio_output = func_open_audio_output;
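This only registers a function pointer; it is invoked later when the player opens its audio output. The dispatcher on the ffplayer side looks roughly like this (a sketch of ffpipeline_open_audio_output(); the guard checks in the real source may differ):

SDL_Aout *ffpipeline_open_audio_output(IJKFF_Pipeline *pipeline, FFPlayer *ffp)
{
    // forwards to whatever the platform pipeline registered,
    // i.e. func_open_audio_output from ffpipeline_android.c on Android
    return pipeline->func_open_audio_output(pipeline, ffp);
}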
Next let's look at func_open_audio_output()/ffpipeline_android.c:
static SDL_Aout *func_open_audio_output(IJKFF_Pipeline *pipeline, FFPlayer *ffp)
{
    SDL_Aout *aout = NULL;
    if (ffp->opensles) {
        aout = SDL_AoutAndroid_CreateForOpenSLES();
    } else {
        aout = SDL_AoutAndroid_CreateForAudioTrack();
    }
    if (aout)
        SDL_AoutSetStereoVolume(aout, pipeline->opaque->left_volume, pipeline->opaque->right_volume);
    return aout;
}
As you can see, the audio output has two backends: opensles and audiotrack. Let's look at audiotrack:
SDL_Aout *SDL_AoutAndroid_CreateForAudioTrack()
{
    SDL_Aout *aout = SDL_Aout_CreateInternal(sizeof(SDL_Aout_Opaque));
    if (!aout)
        return NULL;

    SDL_Aout_Opaque *opaque = aout->opaque;
    opaque->wakeup_cond  = SDL_CreateCond();
    opaque->wakeup_mutex = SDL_CreateMutex();
    opaque->speed        = 1.0f;

    aout->opaque_class = &g_audiotrack_class;
    aout->free_l       = aout_free_l;
    aout->open_audio   = aout_open_audio;
    aout->pause_audio  = aout_pause_audio;
    aout->flush_audio  = aout_flush_audio;
    aout->set_volume   = aout_set_volume;
    aout->close_audio  = aout_close_audio;
    aout->func_get_audio_session_id = aout_get_audio_session_id;
    aout->func_set_playback_rate    = func_set_playback_rate;

    return aout;
}
We want to analyze audio playback, right? In the earlier article on the data-reading thread, we saw that stream_component_open() eventually invokes aout->open_audio() — hopefully you still remember that. Now look back at what happened during initialization:
aout->open_audio = aout_open_audio;
So the call made from stream_component_open() effectively ends up in aout_open_audio()/ijksdl_aout_android_audiotrack.c. If the player is configured to use opensles instead, the flow is much the same; trace it yourself if you are interested.
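For completeness, the call does not poke the struct field by hand at every call site; it goes through a small dispatcher in ijksdl_aout.c, roughly like this (a sketch from memory of the ijkplayer source; parameter names may differ slightly):

int SDL_AoutOpenAudio(SDL_Aout *aout, const SDL_AudioSpec *desired, SDL_AudioSpec *obtained)
{
    // dispatch to whichever backend was created earlier:
    // aout_open_audio for AudioTrack, or the OpenSL ES equivalent
    if (aout && desired && aout->open_audio)
        return aout->open_audio(aout, desired, obtained);
    return -1;
}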
aout_open_audio() then calls aout_open_audio_n()/ijksdl_aout_android_audiotrack.c, which runs:
SDL_CreateThreadEx(&opaque->_audio_tid, aout_thread, aout, "ff_aout_android");
The thread created here is the playback thread.
Next let's look at its entry function aout_thread, which internally calls aout_thread_n()/ijksdl_aout_android_audiotrack.c:
static int aout_thread_n(JNIEnv *env, SDL_Aout *aout)
{
    SDL_Aout_Opaque *opaque = aout->opaque;
    SDL_Android_AudioTrack *atrack = opaque->atrack;
    SDL_AudioCallback audio_cblk = opaque->spec.callback;
    void *userdata = opaque->spec.userdata;
    uint8_t *buffer = opaque->buffer;
    // ... (copy_size and other locals are set up in the elided code)

    if (!opaque->abort_request && !opaque->pause_on)
        SDL_Android_AudioTrack_play(env, atrack);

    while (!opaque->abort_request) {
        SDL_LockMutex(opaque->wakeup_mutex);
        if (!opaque->abort_request && opaque->pause_on) {
            SDL_Android_AudioTrack_pause(env, atrack);
            while (!opaque->abort_request && opaque->pause_on) {
                SDL_CondWaitTimeout(opaque->wakeup_cond, opaque->wakeup_mutex, 1000);
            }
            if (!opaque->abort_request && !opaque->pause_on)
                SDL_Android_AudioTrack_play(env, atrack);
        }
        if (opaque->need_flush) {
            opaque->need_flush = 0;
            SDL_Android_AudioTrack_flush(env, atrack);
        }
        if (opaque->need_set_volume) {
            opaque->need_set_volume = 0;
            SDL_Android_AudioTrack_set_volume(env, atrack, opaque->left_volume, opaque->right_volume);
        }
        if (opaque->speed_changed) {
            opaque->speed_changed = 0;
            if (J4A_GetSystemAndroidApiLevel(env) >= 23) {
                SDL_Android_AudioTrack_setSpeed(env, atrack, opaque->speed);
            }
        }
        SDL_UnlockMutex(opaque->wakeup_mutex);

        audio_cblk(userdata, buffer, copy_size);
        if (opaque->need_flush) {
            SDL_Android_AudioTrack_flush(env, atrack);
            opaque->need_flush = false;
        }

        if (opaque->need_flush) {
            opaque->need_flush = 0;
            SDL_Android_AudioTrack_flush(env, atrack);
        } else {
            int written = SDL_Android_AudioTrack_write(env, atrack, buffer, copy_size);
            if (written != copy_size) {
                ALOGW("AudioTrack: not all data copied %d/%d", (int)written, (int)copy_size);
            }
        }

        // TODO: 1 if callback return -1 or 0
    }
    // ...
}
The beginning of this function contains a number of SDL_Android_AudioTrack_set_xxx()-style calls; they mainly apply player settings such as playback speed and volume.
Next comes audio_cblk(). You may remember that in the previous part I mentioned that audio_open(), called inside stream_component_open(), contains this line:
wanted_spec.callback = sdl_audio_callback;
That line now comes into play.
The audio_cblk() invoked in aout_thread_n() above is really opaque->spec.callback, which means the call lands in sdl_audio_callback().
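How did the callback get into opaque->spec in the first place? When the output is opened, aout_open_audio()/aout_open_audio_n() stores the desired SDL_AudioSpec, which already carries the callback set in audio_open(), into the opaque struct before spawning the playback thread. A rough sketch of the relevant lines (paraphrased, not the verbatim source):

// inside aout_open_audio_n(), roughly:
opaque->spec = *desired;    // keeps spec.callback == sdl_audio_callback
// ... allocate opaque->buffer, configure the Java AudioTrack ...
SDL_CreateThreadEx(&opaque->_audio_tid, aout_thread, aout, "ff_aout_android");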
Let's continue the analysis:
static void sdl_audio_callback(void *opaque, Uint8 *stream, int len)
{
    FFPlayer *ffp = opaque;
    VideoState *is = ffp->is;
    // ...
    while (len > 0) {
        if (is->audio_buf_index >= is->audio_buf_size) {
            audio_size = audio_decode_frame(ffp);
            if (audio_size < 0) {
                /* on error, just output silence */
                // ...
            } else {
                if (is->show_mode != SHOW_MODE_VIDEO)
                    update_sample_display(is, (int16_t *)is->audio_buf, audio_size);
                is->audio_buf_size = audio_size;
            }
            is->audio_buf_index = 0;
        }
        len1 = is->audio_buf_size - is->audio_buf_index;
        if (len1 > len)
            len1 = len;
        if (!is->muted && is->audio_buf && is->audio_volume == SDL_MIX_MAXVOLUME)
            memcpy(stream, (uint8_t *)is->audio_buf + is->audio_buf_index, len1);
        else {
            // ...
        }
        len -= len1;
        stream += len1;
        is->audio_buf_index += len1;
    }
    is->audio_write_buf_size = is->audio_buf_size - is->audio_buf_index;
    /* Let's assume the audio driver that is used by SDL has two periods. */
    // ...
}
Only the relatively important code is kept; the key lines are:
audio_size = audio_decode_frame(ffp);
memcpy(stream, (uint8_t *)is->audio_buf + is->audio_buf_index, len1);
Let's keep going with audio_decode_frame(). Fortunately the author left a comment; based on it, the important code boils down to:
/**
* Decode one audio frame and return its uncompressed size.
*
* The processed audio frame is decoded, converted if required, and
* stored in is->audio_buf, with size in bytes given by the return
* value.
*/
static int audio_decode_frame(FFPlayer *ffp)
{
    VideoState *is = ffp->is;
    // ...
    if (!(af = frame_queue_peek_readable(&is->sampq)))
        return -1;
    // ...
    is->audio_buf = af->frame->data[0];
    // ...
}
As the code shows, this mainly checks whether the decoded-frame queue is->sampq is empty, mirroring the decode side (which checks whether is->sampq is full before pushing). If the queue is empty, the call blocks (remember how the decode thread signals the condition variable every time it pushes a frame into is->sampq?); otherwise it returns the first readable frame in the queue, which is then assigned to ffp->is->audio_buf.
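frame_queue_peek_readable() is the mirror image of frame_queue_peek_writable(): it waits while the queue is empty and returns the frame at the read index. A lightly trimmed version of the ffplay-style helper (exact code may vary by version):

static Frame *frame_queue_peek_readable(FrameQueue *f)
{
    /* wait until we have a readable new frame */
    SDL_LockMutex(f->mutex);
    while (f->size - f->rindex_shown <= 0 && !f->pktq->abort_request) {
        SDL_CondWait(f->cond, f->mutex);   // woken by frame_queue_push() on the decode side
    }
    SDL_UnlockMutex(f->mutex);

    if (f->pktq->abort_request)
        return NULL;

    return &f->queue[(f->rindex + f->rindex_shown) % f->max_size];
}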
Returning to sdl_audio_callback() above, the freshly assigned ffp->is->audio_buf is then copied into stream. Judging by the name, stream is a stream; where does its other end lead?
Back in aout_thread_n():
SDL_Android_AudioTrack_write(env, atrack, buffer, copy_size);
The buffer here is exactly the stream from a moment ago, and that function goes on to call:
(*env)->SetByteArrayRegion(env, atrack->byte_buffer, 0, (int)size_in_byte, (jbyte*) data);
J4AC_AudioTrack__write(env, atrack->thiz, atrack->byte_buffer, 0, (int)size_in_byte);
Here the data is first copied into a Java byte array. Why? Because that array, i.e. the audio frame, is about to be handed over to Java, and SetByteArrayRegion() is the conversion step.
Following J4AC_AudioTrack__write() further, we find:
jint J4AC_android_media_AudioTrack__write(JNIEnv *env, jobject thiz, jbyteArray audioData, jint offsetInBytes, jint sizeInBytes)
{
    return (*env)->CallIntMethod(env, thiz, class_J4AC_android_media_AudioTrack.method_write, audioData, offsetInBytes, sizeInBytes);
}
So here we are, back in Java again: this invokes the write() method of AudioTrack.java at the Java layer.
This is actually another bilibili open-source project at work: jni4android, which generates Java wrapper classes that can be called directly from C. The Java class wrapped here is AudioTrack.java, and the generated files are AudioTrack.h and AudioTrack.c. There is not much left to analyze after this; almost everyone has used AudioTrack in Java, and there are plenty of tutorials online.
That basically wraps things up: audio playback is done. Next we will analyze the video playback flow, which is similar to audio, just a bit more involved.
**If you would like to learn more about how ijkplayer works, check out the rest of the ijkplayer-on-Android series.**