MediaExtractor + MediaCodec + MediaMuxer: The C++ Edition

1. Introduction

This article is the counterpart of the previous one, "MediaExtractor+MediaCodec+MediaMuxer之Java篇" (the Java edition). Both follow the same overall approach but handle a few details differently, so read whichever one suits your situation.

2. Goal

On an Android device, decode a local or network video, re-encode it to H.264 (video/avc) and AAC (audio/mp4a-latm), and finally mux the result into a playable audio/video file.

This article was written after RTSP support had already been completed, so the code is explained in terms of what was actually achieved: RTSP protocol support, decoding and re-encoding, then packetizing into RTP and forwarding.

3. Core Technique

Without further ado, straight to the point:

/** parameters for the video encoder */
const char *OUTPUT_VIDEO_MIME_TYPE = "video/avc";                                       // H.264 Advanced Video Coding; MediaDefs::MEDIA_MIMETYPE_VIDEO_AVC
const int32_t OUTPUT_VIDEO_BIT_RATE = 512 * 1024;                                       // 512 kbps is usually enough
const int32_t OUTPUT_VIDEO_FRAME_RATE = 20;                                             // 20 fps; better to match the source
const int32_t OUTPUT_VIDEO_IFRAME_INTERVAL = 10;                                        // 10 seconds between I-frames
const int32_t OUTPUT_VIDEO_COLOR_FORMAT = OMX_COLOR_FormatYUV420SemiPlanar;             // semi-planar YUV 4:2:0 (NV12-style)

/** parameters for the audio encoder */
const char *OUTPUT_AUDIO_MIME_TYPE = "audio/mp4a-latm";                                 // Advanced Audio Coding; MediaDefs::MEDIA_MIMETYPE_AUDIO_AAC
const int32_t OUTPUT_AUDIO_BIT_RATE = 128 * 1024;                                       // 128 kbps
const int32_t OUTPUT_AUDIO_AAC_PROFILE = OMX_AUDIO_AACObjectLC;                         // LC profile; better than AACObjectHE?
/** parameters for the audio encoder taken from the input stream */
static int32_t OUTPUT_AUDIO_CHANNEL_COUNT = 1;                                          // should match the input stream
static int32_t OUTPUT_AUDIO_SAMPLE_RATE_HZ = 48000;                                     // should match the input stream
static int32_t gVideoWidth = 0;
static int32_t gVideoHeight = 0;

These are the declarations of the encoder configuration parameters; programmers who care about code style should appreciate laying them out this way.

I extended my own demuxer for the telecom IPTV RTSP stream:

    sp<RTSPMediaExtractor> extractor = new RTSPMediaExtractor;

Configure the demuxer:

    if (extractor->setDataSource(path) != OK) {
        fprintf(stderr, "unable to instantiate extractor.\n");
        extractor = NULL;
        return 1;
    }

Fetch the audio/video metadata through the extractor:

    bool haveAudio = false;
    bool haveVideo = false; 
    for (size_t i = 0; i < extractor->countTracks(); ++i) {
        sp<AMessage> decode_format;

        status_t err = extractor->getTrackFormat(i, &decode_format);
        CHECK_EQ(err, (status_t)OK);
        
        AString mime;
        CHECK(decode_format->findString("mime", &mime));
        bool isAudio = !strncasecmp(mime.c_str(), "audio/", 6);
        bool isVideo = !strncasecmp(mime.c_str(), "video/", 6);
        
        sp<AMessage> encode_format = new AMessage;
        
        if (useAudio && !haveAudio && isAudio) {
            haveAudio = true;
            
            CHECK(decode_format->findInt32("sample-rate", &OUTPUT_AUDIO_SAMPLE_RATE_HZ));
            CHECK(decode_format->findInt32("channel-count", &OUTPUT_AUDIO_CHANNEL_COUNT));
            //make encode format
            encode_format->setString("mime", OUTPUT_AUDIO_MIME_TYPE);
            encode_format->setInt32("aac-profile", OUTPUT_AUDIO_AAC_PROFILE);
            encode_format->setInt32("max-input-size", 100 * 1024);
            encode_format->setInt32("sample-rate", OUTPUT_AUDIO_SAMPLE_RATE_HZ);
            encode_format->setInt32("channel-count", OUTPUT_AUDIO_CHANNEL_COUNT);
            encode_format->setInt32("bitrate", OUTPUT_AUDIO_BIT_RATE);
            ALOGV("selecting audio track %d", i);
            err = extractor->selectTrack(i);
            CHECK_EQ(err, (status_t)OK);
            
            audioTrack = i; 
            mAudioMapCursor = mTrackIndex;
        }else if (useVideo && !haveVideo && isVideo) {
            haveVideo = true;   
        
            decode_format->setInt32("color-format",OUTPUT_VIDEO_COLOR_FORMAT);
            CHECK(decode_format->findInt32("width", &gVideoWidth));
            CHECK(decode_format->findInt32("height", &gVideoHeight));
            //make encode format
            encode_format->setString("mime", OUTPUT_VIDEO_MIME_TYPE);
            encode_format->setInt32("width", gVideoWidth);
            encode_format->setInt32("height", gVideoHeight);
            encode_format->setInt32("color-format", OUTPUT_VIDEO_COLOR_FORMAT);
            encode_format->setInt32("bitrate", OUTPUT_VIDEO_BIT_RATE);
            encode_format->setFloat("frame-rate", OUTPUT_VIDEO_FRAME_RATE);
            encode_format->setInt32("i-frame-interval", OUTPUT_VIDEO_IFRAME_INTERVAL);
            if(mVideoWidth > 0){
                encode_format->setInt32("scale-width", mVideoWidth);
            }       
            if(mVideoHeight > 0){
                encode_format->setInt32("scale-height", mVideoHeight);
            }           
            ALOGV("selecting video track %d", i);

            err = extractor->selectTrack(i);
            CHECK_EQ(err, (status_t)OK);
            videoTrack = i;
            mVideoMapCursor = mTrackIndex;
        }else {
            continue;
        }
        CodecState *state = &stateByTrack.editValueAt(stateByTrack.add(mTrackIndex++, CodecState()));
        //make decodeMediaCodec
        state->mDecodec = MediaCodec::CreateByType(
                    looper, mime.c_str(), false /* encoder */);
        CHECK(state->mDecodec != NULL);
        err = state->mDecodec->configure(
                    decode_format, NULL/*surface*/,
                    NULL /* crypto */,
                    0 /* flags */);
        CHECK_EQ(err, (status_t)OK);
        //make encodeMediaCodec 
        if(isVideo){
            state->mEncodec = MediaCodec::CreateByType(
                    looper, OUTPUT_VIDEO_MIME_TYPE, true /* encoder */);
            CHECK(state->mEncodec != NULL);
        }else if(isAudio){
            state->mEncodec = MediaCodec::CreateByType(
                    looper, OUTPUT_AUDIO_MIME_TYPE, true /* encoder */);
            CHECK(state->mEncodec != NULL);
        }
        ALOGV("%s encode_format: %s",isVideo?"video":"audio", encode_format->debugString().c_str());
        err = state->mEncodec->configure(
                encode_format, NULL,NULL /* crypto */,
                MediaCodec::CONFIGURE_FLAG_ENCODE/* flags */);
        CHECK_EQ(err, (status_t)OK);    
        //start decoder
        CHECK_EQ((status_t)OK, state->mDecodec->start());
        CHECK_EQ((status_t)OK, state->mDecodec->getInputBuffers(&state->mDecodecInBuffers));
        CHECK_EQ((status_t)OK, state->mDecodec->getOutputBuffers(&state->mDecodecOutBuffers));
        //start encoder
        CHECK_EQ((status_t)OK, state->mEncodec->start());
        CHECK_EQ((status_t)OK, state->mEncodec->getInputBuffers(&state->mEncodecInBuffers));
        CHECK_EQ((status_t)OK, state->mEncodec->getOutputBuffers(&state->mEncodecOutBuffers));      
    }

The previous article covered the Java handling; the decoder/encoder configuration here is identical, just expressed in a different programming language, so it should be easy to follow.

With the decoder and encoder configured, it is best to configure the muxer before decoding starts, so that the demuxer is not monopolized by a single stream.

    sp<TSMuxer> muxer = new TSMuxer(NULL,mFunc);
    //##################### config the muxer ####################
    while ( ((haveVideo && encoderOutputVideoFormat == NULL) || (haveAudio && encoderOutputAudioFormat == NULL)) ){
        size_t mMapCursor = -1;
        if(haveVideo && encoderOutputVideoFormat == NULL){
            mMapCursor = mVideoMapCursor;
        }
        if(haveAudio && encoderOutputAudioFormat == NULL){
            mMapCursor = mAudioMapCursor;
        }
        CodecState *state = &stateByTrack.editValueAt(mMapCursor);
        size_t index;
        size_t offset;
        size_t size;
        int64_t presentationTimeUs;
        uint32_t flags;
        bool useOriTime = false;
        status_t err = state->mEncodec->dequeueOutputBuffer(
                            &index, &offset, &size, &presentationTimeUs, &flags,kTimeout);
        if (err == OK) {
            err = state->mEncodec->releaseOutputBuffer(index);
            CHECK_EQ(err, (status_t)OK);
        }else if (err == INFO_FORMAT_CHANGED) {
            if(mMapCursor == mVideoMapCursor){
                CHECK_EQ((status_t)OK, state->mEncodec->getOutputFormat(&encoderOutputVideoFormat));
                ALOGV("%s encoder INFO_FORMAT_CHANGED: %s",mMapCursor==mVideoMapCursor?"video":"audio", encoderOutputVideoFormat->debugString().c_str());
                if (haveVideo) {
                    outputVideoTrack = muxer->addTrack(encoderOutputVideoFormat);
                    ALOGV("muxer: adding video track %d",outputVideoTrack);
                }   
            }else if(mMapCursor == mAudioMapCursor){
                CHECK_EQ((status_t)OK, state->mEncodec->getOutputFormat(&encoderOutputAudioFormat));
                ALOGV("%s encoder INFO_FORMAT_CHANGED: %s",mMapCursor==mVideoMapCursor?"video":"audio", encoderOutputAudioFormat->debugString().c_str());
                if (haveAudio) {
                    outputAudioTrack = muxer->addTrack(encoderOutputAudioFormat);
                    ALOGV("muxer: adding audio track %d",outputAudioTrack);
                }
            }
            if( ((haveVideo && encoderOutputVideoFormat != NULL) || !haveVideo) && 
                ((haveAudio && encoderOutputAudioFormat != NULL) || !haveAudio) ){
                ALOGV("muxer: starting video:%s audio:%s",haveVideo?"true":"false",haveAudio?"true":"false");
                muxer->start();
                muxing = true;
            }
        } else {
            CHECK_EQ(err, -EAGAIN);
            ALOGV("err muxer config");
        }
    }   
    //##################### config the muxer : end ####################

The flow that follows was originally designed exactly like the previous article's. After falling into a few pitfalls, it was optimized into the form below; the key optimization is letting the demuxer drive which stream (audio or video) is serviced next:

status_t err = extractor->getSampleTrackIndex(&trackIndex);
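
To make that call concrete, here is a minimal sketch of how the returned track index can drive the main loop. The variable and member names (trackIndex, videoTrack, audioTrack, stateByTrack, mVideoMapCursor, mAudioMapCursor, muxer) follow the snippets in this article, while feedDecoder and drainAndReencode are hypothetical helpers standing in for the step-by-step code shown below; treat this as an outline of the idea, not the production code:

    // Sketch: let the demuxer decide which track is serviced next, so that
    // neither stream starves the other while they share one extractor.
    while (!(videoExtractorDone && audioExtractorDone)) {
        size_t trackIndex;
        status_t err = extractor->getSampleTrackIndex(&trackIndex);
        if (err != OK) {
            break;                                   // no sample currently available
        }

        // pick the codec pair that owns the sample the extractor points at
        CodecState *state = (trackIndex == videoTrack)
                ? &stateByTrack.editValueAt(mVideoMapCursor)
                : &stateByTrack.editValueAt(mAudioMapCursor);

        feedDecoder(extractor, state, trackIndex);   // step 1 below: queue ES into this track's decoder
        drainAndReencode(muxer, state, trackIndex);  // later stages: decoder output -> encoder -> muxer
    }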

With this design optimization in place, here is a CPU usage comparison against the earlier Java version (the Java version only performs the decode/re-encode stage, while the C++ version covers the whole pipeline: fetching the media stream over RTSP, decoding and re-encoding, then packetizing into RTP and forwarding):


[Figure: CPU usage, Java decode/re-encode stage]
[Figure: CPU usage, full C++ pipeline]

The logcat output was left enabled for my own debugging, so ignore it.
Here is one more capture with debug logging turned off; the forwarded media stream plays smoothly without stuttering.


[Figure: CPU usage, full C++ pipeline, release build]

Memory usage comparison:



[Figure: memory usage, Java version]

You can see the memory keeps climbing: after roughly one minute it is already close to 200 MB, and it does not stop there. I watched it grow to nearly 500 MB, at which point the low-memory protection mechanism kicked in and killed the app.

[Figure: memory usage, Java version]
[Figure: memory usage, C++ version]

The comparison shows a clear improvement in both CPU and memory usage. There is of course still room for further optimization, which will take time to verify.

1. When a decoder input slot is ready, fetch an ES buffer from the demuxer

//#####################step 1 : read SampleData####################
while(  (trackIndex == videoTrack && (haveVideo && !videoExtractorDone)) || 
                      (trackIndex == audioTrack && (haveAudio && !audioExtractorDone))  ){
    size_t index;
    status_t err = state->mDecodec->dequeueInputBuffer(&index, kTimeout);
    if (err == OK) {
      const sp<ABuffer> &buffer = state->mDecodecInBuffers.itemAt(index);
      err = extractor->readSampleData(buffer);
      // with a live RTSP source this EOS branch is never reached, but handle it anyway
      if (err == ERROR_END_OF_STREAM) {
          ALOGV("%s signalling input EOS ",trackIndex==videoTrack?"video":"audio");
          err = state->mDecodec->queueInputBuffer(
                                    index,
                                    0 /* offset */,
                                    0 /* size */,
                                    0ll /* timeUs */,
                                    MediaCodec::BUFFER_FLAG_EOS);
          CHECK_EQ(err, (status_t)OK);
          err = extractor->getSampleTime(&timeUs);
          CHECK_EQ(err, (status_t)OK);
          if(trackIndex == videoTrack){
            videoExtractorDone = true;
          }else if(trackIndex == audioTrack){
             audioExtractorDone = true;
          }
          break;
        }
          sp<MetaData> meta;
          err = extractor->getSampleMeta(&meta);
          CHECK_EQ(err, (status_t)OK);
          uint32_t bufferFlags = 0;
          int32_t val;
          if (meta->findInt32(kKeyIsSyncFrame, &val) && val != 0) {
              // only support BUFFER_FLAG_SYNCFRAME in the flag for now.
              bufferFlags |= MediaCodec::BUFFER_FLAG_SYNCFRAME;
          }

          int64_t timeUs;
          err = extractor->getSampleTime(&timeUs);
          CHECK_EQ(err, (status_t)OK);
          ALOGV("%s decoder filling input buffer index:%d time:%lld", trackIndex==videoTrack?"video":"audio",index,timeUs);
          err = state->mDecodec->queueInputBuffer(
                                index,
                                buffer->offset(),
                                buffer->size(),
                                timeUs,
                                bufferFlags);
          CHECK_EQ(err, (status_t)OK);
    }else{
          CHECK_EQ(err, -EAGAIN);
          ALOGV("no %s decoder input buffer",trackIndex==videoTrack?"video":"audio");
          // calling advance() here would drop one sample, so just break and retry later
          break;
    }
    err = extractor->advance();
    CHECK_EQ(err, (status_t)OK);
}
//#####################step 1 : end ####################

This code handles the audio and video ES the same way: once a decoder input buffer is available, an ES sample is read from the demuxer and queued into that input buffer; after decoding, the decoder delivers an output buffer, and the next stage can transfer the YUV/PCM data into the encoder's input buffers:

                    size_t index;
                    size_t offset;
                    size_t size;
                    int64_t presentationTimeUs;
                    uint32_t flags;
                    
                    status_t err = state->mDecodec->dequeueOutputBuffer(
                            &index, &offset, &size, &presentationTimeUs, &flags,
                            kTimeout);
                    if (err == OK) {
                        ALOGV("%s decoder draining output buffer %d, time = %lld us",trackIndex==videoTrack?"video":"audio",
                              index, presentationTimeUs);
                        if (flags & MediaCodec::BUFFER_FLAG_CODECCONFIG) {
                            ALOGV("reached %s decoder BUFFER_FLAG_CODECCONFIG",trackIndex==videoTrack?"video":"audio");
                            err = state->mDecodec->releaseOutputBuffer(index);
                            CHECK_EQ(err, (status_t)OK);
                            break;
                        }
                        CodecOutInfo *info;
                        if(trackIndex == videoTrack){
                            if(mVideoInfoVector.size() >= state->mDecodecOutBuffers.size()){
                                info = &mVideoInfoVector.editValueAt(index);
                            }else{
                                info = &mVideoInfoVector.editValueAt(mVideoInfoVector.add(index, CodecOutInfo()));
                            }
                            pendingVideoDecoderOutputBufferIndex = index;

                        }else if(trackIndex == audioTrack){
                            if(mAudioInfoVector.size() >= state->mDecodecOutBuffers.size()){
                                info = &mAudioInfoVector.editValueAt(index);
                            }else{
                                info = &mAudioInfoVector.editValueAt(mAudioInfoVector.add(index, CodecOutInfo()));
                            }
                            pendingAudioDecoderOutputBufferIndex = index;

                        }
                        info->offset = offset;
                        info->size = size;
                        info->presentationTimeUs = presentationTimeUs;
                        info->flags = flags;
                        break;

Feed the prepared YUV or PCM data into the encoder:

                            err = state->mEncodec->queueInputBuffer(index,
                                            0, srcBuffer->size(), info->presentationTimeUs,
                                            info->flags);
                            CHECK_EQ(err, (status_t)OK);
                            err = state->mDecodec->releaseOutputBuffer(pendingIndex);
                            CHECK_EQ(err, (status_t)OK);
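
For context, srcBuffer above is the decoder output buffer remembered through pendingVideoDecoderOutputBufferIndex / pendingAudioDecoderOutputBufferIndex; before queueInputBuffer can be called, its contents must be copied into a free encoder input buffer. A rough sketch of that intermediate step, assuming the member names used in this article and ignoring any color-format conversion the encoder might require:

    // Sketch: move one decoded frame (YUV or PCM) from the decoder to the encoder.
    size_t index;
    status_t err = state->mEncodec->dequeueInputBuffer(&index, kTimeout);
    if (err == OK) {
        size_t pendingIndex = (trackIndex == videoTrack)
                ? pendingVideoDecoderOutputBufferIndex
                : pendingAudioDecoderOutputBufferIndex;
        const sp<ABuffer> &srcBuffer = state->mDecodecOutBuffers.itemAt(pendingIndex);
        const sp<ABuffer> &dstBuffer = state->mEncodecInBuffers.itemAt(index);

        CHECK_LE(srcBuffer->size(), dstBuffer->capacity());
        memcpy(dstBuffer->data(), srcBuffer->data(), srcBuffer->size());
        dstBuffer->setRange(0, srcBuffer->size());

        // ...followed by the queueInputBuffer / releaseOutputBuffer calls shown above
    }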

Finally, hand the encoded ES to the muxer, which either writes it to a file or forwards it (in this project it is packetized into RTP and forwarded):

                        const sp<ABuffer> &buffer = state->mEncodecOutBuffers.itemAt(index);
                        if(trackIndex == videoTrack){
                            if(presentationTimeUs >= mLastVideoSampleTime){
                                useOriTime = true;
                            }
                            if (size > 0 && outputVideoTrack != -1) {
                                if(useOriTime){
                                    mLastVideoSampleTime = presentationTimeUs;
                                    err = muxer->writeSampleData(buffer,outputVideoTrack,mLastVideoSampleTime, flags);
                                    CHECK_EQ(err, (status_t)OK);
                                }else{
                                    ALOGV("%s encoder loss one buffer.",trackIndex==videoTrack?"video":"audio");
                                }
                                
                            }
                        }else if(trackIndex == audioTrack){
                            if(presentationTimeUs >= mLastAudioSampleTime){
                                useOriTime = true;
                            }
                            if (size > 0 && outputAudioTrack != -1) {
                                if(useOriTime){
                                    mLastAudioSampleTime = presentationTimeUs;
                                    err = muxer->writeSampleData(buffer,outputAudioTrack,mLastAudioSampleTime, flags);
                                    CHECK_EQ(err, (status_t)OK);
                                }else{
                                    ALOGV("%s encoder loss one buffer.",trackIndex==videoTrack?"video":"audio");
                                }
                            }
                        }
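
As a side note, if the "save to a file" path is what you need instead of RTP forwarding, the stock android::MediaMuxer can stand in for the custom TSMuxer, since its writeSampleData call has the same shape as the one used above. A minimal sketch, assuming a stagefright version whose MediaMuxer constructor takes a file descriptor (older releases take a path instead); the output location is only an example:

    #include <fcntl.h>
    #include <unistd.h>
    #include <media/stagefright/MediaMuxer.h>

    // Sketch: write the re-encoded ES into an MP4 file instead of packetizing it into RTP.
    int fd = open("/sdcard/output.mp4", O_CREAT | O_TRUNC | O_RDWR, 0644);
    sp<MediaMuxer> fileMuxer = new MediaMuxer(fd, MediaMuxer::OUTPUT_FORMAT_MPEG_4);

    // add the tracks with the formats reported via INFO_FORMAT_CHANGED, then start
    ssize_t outVideo = fileMuxer->addTrack(encoderOutputVideoFormat);
    ssize_t outAudio = fileMuxer->addTrack(encoderOutputAudioFormat);
    fileMuxer->start();

    // inside the encoder drain loop, same call shape as the TSMuxer above:
    // fileMuxer->writeSampleData(buffer, outVideo, presentationTimeUs, flags);
    // fileMuxer->writeSampleData(buffer, outAudio, presentationTimeUs, flags);

    // once both encoders have signalled EOS:
    fileMuxer->stop();
    close(fd);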

That covers the core decode/re-encode flow. Apart from a few details that are handled differently from the Java version, the overall approach is the same.

4. Closing Remarks

This article again focuses on the re-encoding flow and provides a useful comparison with the Java implementation. The RTSP extension involves telecom-specific IPTV technology, so I am not in a position to open-source it; I hope you understand. All in all this turned into a fairly long write-up and the goal was met; I hope it helps those of you who follow this topic. Thanks for reading!
