ExoPlayer Source Code Analysis, Part 1: HLS Stream Fetching and Playback Flow

This article is based on ExoPlayer 2.13.2.

Starting from HLS, this article takes a quick look at what each ExoPlayer component does and at the full path HLS content follows from stream fetching to playback.

HLS streaming and playback steps

  1. Download and parse the m3u8 file
  2. Fetch the stream
    2.1 Load the video stream
    2.2 Load the audio stream (if present)
  3. Demux the stream
  4. Decode
    4.1 Decode video
    4.2 Decode audio
  5. Play back in sync

Listing the overall flow up front makes it possible to read the source with concrete questions and goals in mind. The steps are analyzed one by one below.

Downloading and parsing the m3u8 file

Broadly speaking, m3u8 files come in two forms:

  • A media playlist, whose content directly lists the TS segment files
  • A master playlist, which separates audio and video and lists m3u8 sub-playlists for the different bitrates
    (Strictly speaking this two-way split is not accurate; see the official HLS documentation for the precise definitions.)

The two kinds of files look like this:

#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:1
#EXTINF:10,
fileSequence1.ts
#EXTINF:10,
fileSequence2.ts
#EXTINF:10,
fileSequence3.ts
#EXTINF:10,
fileSequence4.ts
#EXTINF:10,
fileSequence5.ts

#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",LANGUAGE="eng",NAME="English",AUTOSELECT=YES,DEFAULT=YES,URI="eng/prog_index.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",LANGUAGE="fre",NAME="Français",AUTOSELECT=YES,DEFAULT=NO,URI="fre/prog_index.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",LANGUAGE="sp",NAME="Espanol",AUTOSELECT=YES,DEFAULT=NO,URI="sp/prog_index.m3u8"

#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=195023,CODECS="avc1.42e00a,mp4a.40.2",AUDIO="audio"
lo/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=591680,CODECS="avc1.42e01e,mp4a.40.2",AUDIO="audio"
hi/prog_index.m3u8
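
To make the distinction concrete, here is a small standalone sketch (plain Java, not ExoPlayer's HlsPlaylistParser) that classifies a playlist by its tags: a master playlist carries #EXT-X-STREAM-INF entries pointing at variant sub-playlists, while a media playlist carries #EXTINF entries pointing at segments.

import java.util.ArrayList;
import java.util.List;

// Illustration only: ExoPlayer's real parsing lives in HlsPlaylistParser.
final class PlaylistSketch {

  /** True if the text is a master playlist, i.e. it lists variant sub-playlists. */
  static boolean isMasterPlaylist(String m3u8Text) {
    return m3u8Text.contains("#EXT-X-STREAM-INF");
  }

  /** Collects the URI lines: TS segments for a media playlist, variant m3u8 URIs for a master playlist. */
  static List<String> collectUris(String m3u8Text) {
    List<String> uris = new ArrayList<>();
    for (String line : m3u8Text.split("\n")) {
      line = line.trim();
      if (!line.isEmpty() && !line.startsWith("#")) {
        uris.add(line);
      }
    }
    return uris;
  }
}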

When ExoPlayer builds a player it first constructs the renderers for the media type. For HLS, HlsRenderBuilder must first load the media manifest (the m3u8) in order to decide whether the audio renderer and the video renderer use the same sample source or separate ones.

private static final class AsyncRendererBuilder implements ManifestCallback<HlsPlaylist> {

  private final Context context;
  private final String userAgent;
  private final DemoPlayer player;
  private final ManifestFetcher<HlsPlaylist> playlistFetcher;

  private boolean canceled;

  public AsyncRendererBuilder(Context context, String userAgent, String url, DemoPlayer player) {
    this.context = context;
    this.userAgent = userAgent;
    this.player = player;
    HlsPlaylistParser parser = new HlsPlaylistParser();
    // ManifestFetcher downloads the m3u8 playlist from the network
    playlistFetcher = new ManifestFetcher<>(url, new DefaultUriDataSource(context, userAgent),
        parser);
  }

  public void init() {
    // Start the download
    playlistFetcher.singleLoad(player.getMainHandler().getLooper(), this);
  }

  ...

  @Override
  public void onSingleManifest(HlsPlaylist manifest) {
    if (canceled) {
      return;
    }

    Handler mainHandler = player.getMainHandler();
    LoadControl loadControl = new DefaultLoadControl(new DefaultAllocator(BUFFER_SEGMENT_SIZE));
    DefaultBandwidthMeter bandwidthMeter = new DefaultBandwidthMeter();
    PtsTimestampAdjusterProvider timestampAdjusterProvider = new PtsTimestampAdjusterProvider();

    boolean haveSubtitles = false;
    // Whether there is a separate audio rendition
    boolean haveAudios = false;
    if (manifest instanceof HlsMasterPlaylist) {
      HlsMasterPlaylist masterPlaylist = (HlsMasterPlaylist) manifest;
      haveSubtitles = !masterPlaylist.subtitles.isEmpty();
      haveAudios = !masterPlaylist.audios.isEmpty();
    }

    // Build the video/id3 renderers.
    // Loads data from the network or a local file
    DataSource dataSource = new DefaultUriDataSource(context, bandwidthMeter, userAgent);
    // HLS loads data in chunks: when one chunk finishes loading, the next chunk is loaded.
    // HlsChunkSource creates each new chunk and reads its data from the underlying dataSource
    HlsChunkSource chunkSource = new HlsChunkSource(true /* isMaster */, dataSource, manifest,
        DefaultHlsTrackSelector.newDefaultInstance(context), bandwidthMeter,
        timestampAdjusterProvider);
    // Supplies samples to the renderers
    HlsSampleSource sampleSource = new HlsSampleSource(chunkSource, loadControl,
        MAIN_BUFFER_SEGMENTS * BUFFER_SEGMENT_SIZE, mainHandler, player, DemoPlayer.TYPE_VIDEO);
    MediaCodecVideoTrackRenderer videoRenderer = new MediaCodecVideoTrackRenderer(context,
        sampleSource, MediaCodecSelector.DEFAULT, MediaCodec.VIDEO_SCALING_MODE_SCALE_TO_FIT,
        5000, mainHandler, player, 50);
    MetadataTrackRenderer<List<Id3Frame>> id3Renderer = new MetadataTrackRenderer<>(
        sampleSource, new Id3Parser(), player, mainHandler.getLooper());

    // Build the audio renderer.
    MediaCodecAudioTrackRenderer audioRenderer;
    if (haveAudios) {
      // If there is a separate audio rendition, create an independent data source to load it from the network
      DataSource audioDataSource = new DefaultUriDataSource(context, bandwidthMeter, userAgent);
      HlsChunkSource audioChunkSource = new HlsChunkSource(false /* isMaster */, audioDataSource,
          manifest, DefaultHlsTrackSelector.newAudioInstance(), bandwidthMeter,
          timestampAdjusterProvider);
      HlsSampleSource audioSampleSource = new HlsSampleSource(audioChunkSource, loadControl,
          AUDIO_BUFFER_SEGMENTS * BUFFER_SEGMENT_SIZE, mainHandler, player,
          DemoPlayer.TYPE_AUDIO);
      audioRenderer = new MediaCodecAudioTrackRenderer(
          new SampleSource[] {sampleSource, audioSampleSource}, MediaCodecSelector.DEFAULT, null,
          true, player.getMainHandler(), player, AudioCapabilities.getCapabilities(context),
          AudioManager.STREAM_MUSIC);
    } else {
      // Otherwise the audio renderer can share the video renderer's sample source
      audioRenderer = new MediaCodecAudioTrackRenderer(sampleSource,
          MediaCodecSelector.DEFAULT, null, true, player.getMainHandler(), player,
          AudioCapabilities.getCapabilities(context), AudioManager.STREAM_MUSIC);
    }

    // Build the text renderer.
    TrackRenderer textRenderer;
    if (haveSubtitles) {
      DataSource textDataSource = new DefaultUriDataSource(context, bandwidthMeter, userAgent);
      HlsChunkSource textChunkSource = new HlsChunkSource(false /* isMaster */, textDataSource,
          manifest, DefaultHlsTrackSelector.newSubtitleInstance(), bandwidthMeter,
          timestampAdjusterProvider);
      HlsSampleSource textSampleSource = new HlsSampleSource(textChunkSource, loadControl,
          TEXT_BUFFER_SEGMENTS * BUFFER_SEGMENT_SIZE, mainHandler, player, DemoPlayer.TYPE_TEXT);
      textRenderer = new TextTrackRenderer(textSampleSource, player, mainHandler.getLooper());
    } else {
      textRenderer = new Eia608TrackRenderer(sampleSource, player, mainHandler.getLooper());
    }

    TrackRenderer[] renderers = new TrackRenderer[DemoPlayer.RENDERER_COUNT];
    renderers[DemoPlayer.TYPE_VIDEO] = videoRenderer;
    renderers[DemoPlayer.TYPE_AUDIO] = audioRenderer;
    renderers[DemoPlayer.TYPE_METADATA] = id3Renderer;
    renderers[DemoPlayer.TYPE_TEXT] = textRenderer;
    player.onRenderers(renderers, bandwidthMeter);
  }

}

Summary

This step downloads and parses the m3u8 file from the network and, based on its contents, configures either the same sample source or separate ones for the video renderer and the audio renderer.

Fetching the stream

This step starts at HlsSampleSource#maybeStartLoading(); the overall call chain is:

HlsSampleSource.maybeStartLoading()
    HlsChunkSource.getChunkOperation // creates a TsChunk
        loader.startLoading(loadable, this); // loadable is the TsChunk, submitted to the loader thread
            TsChunk.load()
                HlsExtractorWrapper.read
                    TsExtractor.read
                        DefaultExtractorInput.read
                            DefaultUriDataSource.read
                                DefaultHttpDataSource.read

TsExtractor keeps a tsPacketBuffer into which network data is continuously written; read() then consumes it one TS packet at a time, as the code below shows.

TsChunk.load
@Override
public void load() throws IOException, InterruptedException {
  
  DataSpec loadDataSpec;
  boolean skipLoadedBytes;
  if (isEncrypted) {
    loadDataSpec = dataSpec;
    skipLoadedBytes = bytesLoaded != 0;
  } else {
    loadDataSpec = Util.getRemainderDataSpec(dataSpec, bytesLoaded);
    skipLoadedBytes = false;
  }

  try {
    ExtractorInput input = new DefaultExtractorInput(dataSource,
        loadDataSpec.absoluteStreamPosition, dataSource.open(loadDataSpec));
    if (skipLoadedBytes) {
      input.skipFully(bytesLoaded);
    }
    try {
      int result = Extractor.RESULT_CONTINUE;
      // Keep reading until the end of the chunk
      while (result == Extractor.RESULT_CONTINUE && !loadCanceled) {
        result = extractorWrapper.read(input);
      }
      long tsChunkEndTimeUs = extractorWrapper.getAdjustedEndTimeUs();
      if (tsChunkEndTimeUs != Long.MIN_VALUE) {
        adjustedEndTimeUs = tsChunkEndTimeUs;
      }
    } finally {
      bytesLoaded = (int) (input.getPosition() - dataSpec.absoluteStreamPosition);
    }
  } finally {
    Util.closeQuietly(dataSource);
  }
}

TsExtractor.read
@Override
public int read(ExtractorInput input, PositionHolder seekPosition)
    throws IOException, InterruptedException {
  byte[] data = tsPacketBuffer.data;
  // Move any leftover bytes in tsPacketBuffer to the front of the buffer
  if (BUFFER_SIZE - tsPacketBuffer.getPosition() < TS_PACKET_SIZE) {
    int bytesLeft = tsPacketBuffer.bytesLeft();
    if (bytesLeft > 0) {
      System.arraycopy(data, tsPacketBuffer.getPosition(), data, 0, bytesLeft);
    }
    tsPacketBuffer.reset(data, bytesLeft);
  }
  // Keep reading from the input until at least one full TS packet has been buffered
  while (tsPacketBuffer.bytesLeft() < TS_PACKET_SIZE) {
    int limit = tsPacketBuffer.limit();
    int read = input.read(data, limit, BUFFER_SIZE - limit);
    if (read == C.RESULT_END_OF_INPUT) {
      return RESULT_END_OF_INPUT;
    }
    tsPacketBuffer.setLimit(limit + read);
  }

  ...
  
  boolean payloadUnitStartIndicator = tsScratch.readBit();
  tsScratch.skipBits(1); // transport_priority
  // The 13-bit packet ID (PID); each PID identifies either a particular PSI table or a particular PES stream
  int pid = tsScratch.readBits(13);
  tsScratch.skipBits(2); // transport_scrambling_control
  boolean adaptationFieldExists = tsScratch.readBit();
  boolean payloadExists = tsScratch.readBit();

  ...
  // Discontinuity checks, etc.
  // Skip the adaptation field.

  // Read the payload.
  if (payloadExists) {
    TsPayloadReader payloadReader = tsPayloadReaders.get(pid);
    if (payloadReader != null) {
      if (discontinuityFound) {
        payloadReader.seek();
      }
      tsPacketBuffer.setLimit(endOfPacket);
      // Hand the data to the payload reader registered for this PID
      payloadReader.consume(tsPacketBuffer, payloadUnitStartIndicator, output);
      Assertions.checkState(tsPacketBuffer.getPosition() <= endOfPacket);
      tsPacketBuffer.setLimit(limit);
    }
  }

  tsPacketBuffer.setPosition(endOfPacket);
  return RESULT_CONTINUE;
}

That is the HLS fetch path. Data is loaded chunk by chunk: every time a chunk finishes loading, maybeStartLoading() is triggered again to load the next one, and TsExtractor consumes the buffered data one TS packet at a time, handing each packet to the matching payload reader.
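
A highly simplified model of that chunk-driven loop, using made-up interface names rather than ExoPlayer's real classes, might look like this:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Simplified model only: the real HlsSampleSource/Loader pair also handles buffering limits,
// error retries and cancellation.
final class ChunkLoopSketch {

  interface Chunk { void load() throws Exception; }  // stands in for TsChunk.load()
  interface ChunkSource { Chunk nextChunk(); }       // stands in for HlsChunkSource.getChunkOperation()

  private final ChunkSource chunkSource;
  private final ExecutorService loader = Executors.newSingleThreadExecutor();

  ChunkLoopSketch(ChunkSource chunkSource) {
    this.chunkSource = chunkSource;
  }

  void maybeStartLoading() {
    Chunk chunk = chunkSource.nextChunk();
    if (chunk == null) {
      return; // buffer is full or the playlist is exhausted
    }
    loader.submit(() -> {
      try {
        chunk.load();        // pull bytes from the network and feed the extractor
      } catch (Exception e) {
        // the real player reports the error and retries with back-off
      }
      maybeStartLoading();   // chunk finished: immediately consider loading the next one
    });
  }
}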

Demuxing

Before parsing the TS stream, a quick look at the TS container format:

(Figure: TS stream format)

Content at the TS layer is identified by PID. The main elements are the PAT (Program Association Table), the PMT (Program Map Table), and the audio and video elementary streams. To parse a TS stream you first locate the PAT; the PAT leads to the PMT, and the PMT leads to the audio and video streams. The PAT always uses PID 0. Because a viewer may join the stream at any moment, the PAT and PMT are re-inserted into the stream at short intervals, typically every few video frames. PAT and PMT are mandatory; other tables such as the SDT (Service Description Table) may also be present, but PAT and PMT alone are enough to play an HLS stream.
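
Since everything at this layer is routed by PID, here is a minimal standalone sketch (not ExoPlayer code) that pulls the PID out of one raw 188-byte TS packet; a PID of 0 would then be handed to the PAT parser:

final class TsPacketSketch {
  static final int TS_PACKET_SIZE = 188;
  static final int SYNC_BYTE = 0x47;

  // TS packet header: sync byte 0x47, three flag bits, then the 13-bit PID.
  static int readPid(byte[] tsPacket) {
    if (tsPacket.length < TS_PACKET_SIZE || (tsPacket[0] & 0xFF) != SYNC_BYTE) {
      throw new IllegalArgumentException("Not aligned on a TS packet boundary");
    }
    // The low 5 bits of byte 1 are the PID's high bits; byte 2 holds its low 8 bits.
    return ((tsPacket[1] & 0x1F) << 8) | (tsPacket[2] & 0xFF);
  }
}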

To parse the stream you first need to know how its content is encoded: is the audio AAC or DTS, is the video H.264 or H.265? That information is stored in the PMT. The PAT, in turn, stores each program_number together with the PID of its PMT.

So the TS parsing flow is:

  • Create a PatReader bound to PID 0.
  • Parse the PAT and create a PmtReader for each PMT PID it lists.
  • Parse the PMT and, based on each stream_type, create the matching elementary stream reader.

For the full format details, see the Wikipedia article on MPEG transport stream.

The corresponding ExoPlayer code:

TsExtractor

public TsExtractor(PtsTimestampAdjuster ptsTimestampAdjuster, int workaroundFlags) {
   resetPayloadReaders();
}

private void resetPayloadReaders() {
  trackIds.clear();
  tsPayloadReaders.clear();
  // Register a PatReader at PID 0
  tsPayloadReaders.put(TS_PAT_PID, new PatReader());
  id3Reader = null;
  nextEmbeddedTrackId = BASE_EMBEDDED_TRACK_ID;
}

TsExtractor#PatReader.consume
@Override
public void consume(ParsableByteArray data, boolean payloadUnitStartIndicator,
    ExtractorOutput output) {
  ...
  int programCount = (sectionLength - 9) / 4;
  for (int i = 0; i < programCount; i++) {
    sectionData.readBytes(patScratch, 4);
    int programNumber = patScratch.readBits(16);
    patScratch.skipBits(3); // reserved (3)
    if (programNumber == 0) {
      patScratch.skipBits(13); // network_PID (13)
    } else {
      // For each program entry, register a PmtReader at that program's PMT PID
      int pid = patScratch.readBits(13);
      tsPayloadReaders.put(pid, new PmtReader(pid));
    }
  }

}

TsExtractor#PmtReader.consume

@Override
public void consume(ParsableByteArray data, boolean payloadUnitStartIndicator,
    ExtractorOutput output) {

  ...

  while (remainingEntriesLength > 0) {
    sectionData.readBytes(pmtScratch, 5);
    int streamType = pmtScratch.readBits(8);
    pmtScratch.skipBits(3); // reserved
    int elementaryPid = pmtScratch.readBits(13);
    pmtScratch.skipBits(4); // reserved
    int esInfoLength = pmtScratch.readBits(12); // ES_info_length
    if (streamType == 0x06) {
      // Read descriptors in PES packets containing private data.
      streamType = readPrivateDataStreamType(sectionData, esInfoLength);
    } else {
      sectionData.skipBytes(esInfoLength);
    }
    remainingEntriesLength -= esInfoLength + 5;
    int trackId = (workaroundFlags & WORKAROUND_HLS_MODE) != 0 ? streamType : elementaryPid;
    if (trackIds.get(trackId)) {
      continue;
    }
    ElementaryStreamReader pesPayloadReader;
    // Create the reader matching this streamType
    switch (streamType) {
      case TS_STREAM_TYPE_MPA:
        pesPayloadReader = new MpegAudioReader(output.track(trackId));
        break;
      case TS_STREAM_TYPE_MPA_LSF:
        pesPayloadReader = new MpegAudioReader(output.track(trackId));
        break;
      case TS_STREAM_TYPE_AAC:
        pesPayloadReader = (workaroundFlags & WORKAROUND_IGNORE_AAC_STREAM) != 0 ? null
            : new AdtsReader(output.track(trackId), new DummyTrackOutput());
        break;
      case TS_STREAM_TYPE_AC3:
        pesPayloadReader = new Ac3Reader(output.track(trackId), false);
        break;
      case TS_STREAM_TYPE_E_AC3:
        pesPayloadReader = new Ac3Reader(output.track(trackId), true);
        break;
      case TS_STREAM_TYPE_DTS:
      case TS_STREAM_TYPE_HDMV_DTS:
        pesPayloadReader = new DtsReader(output.track(trackId));
        break;
      case TS_STREAM_TYPE_H262:
        pesPayloadReader = new H262Reader(output.track(trackId));
        break;
      case TS_STREAM_TYPE_H264:
        pesPayloadReader = (workaroundFlags & WORKAROUND_IGNORE_H264_STREAM) != 0 ? null
            : new H264Reader(output.track(trackId),
                new SeiReader(output.track(nextEmbeddedTrackId++)),
                (workaroundFlags & WORKAROUND_ALLOW_NON_IDR_KEYFRAMES) != 0,
                (workaroundFlags & WORKAROUND_DETECT_ACCESS_UNITS) != 0);
        break;
      case TS_STREAM_TYPE_H265:
        pesPayloadReader = new H265Reader(output.track(trackId),
            new SeiReader(output.track(nextEmbeddedTrackId++)));
        break;
      case TS_STREAM_TYPE_ID3:
        if ((workaroundFlags & WORKAROUND_HLS_MODE) != 0) {
          pesPayloadReader = id3Reader;
        } else {
          pesPayloadReader = new Id3Reader(output.track(nextEmbeddedTrackId++));
        }
        break;
      default:
        pesPayloadReader = null;
        break;
    }

    if (pesPayloadReader != null) {
      trackIds.put(trackId, true);
      tsPayloadReaders.put(elementaryPid,
          new PesReader(pesPayloadReader, ptsTimestampAdjuster));
    }
  }
  if ((workaroundFlags & WORKAROUND_HLS_MODE) != 0) {
   if (!tracksEnded) {
     output.endTracks();
   }
  } else {
    tsPayloadReaders.remove(TS_PAT_PID);
    tsPayloadReaders.remove(pid);
    output.endTracks();
  }
  tracksEnded = true;
}

Note that a reader such as H264Reader is wrapped in a PesReader before being put into tsPayloadReaders. As the TS layout above shows, audio and video data are carried inside PES packets, so a PES parsing pass has to happen first.

Back in TsExtractor's read() method: once a full packet has been buffered, the packet's PID is used to look up the corresponding reader. H.264 data, for example, is handled by a PesReader that strips the PES header and passes the payload on to H264Reader.
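
Besides framing, the main thing the PES pass contributes is timing: the pesTimeUs seen in the reader code below comes from the 33-bit PTS that is spread across 5 bytes of the PES optional header. A minimal standalone sketch of decoding it (not ExoPlayer's PesReader) looks like this; the 90 kHz tick value is then converted to microseconds before it is used as a sample timestamp.

final class PesPtsSketch {

  // Decodes the 33-bit PTS from the 5 PTS bytes of a PES optional header.
  // Bit layout: '0010'/'0011', PTS[32..30], marker; PTS[29..15], marker; PTS[14..0], marker.
  static long readPts(byte[] b, int offset) {
    long pts = (long) ((b[offset] >> 1) & 0x07) << 30;   // PTS[32..30]
    pts |= (long) (b[offset + 1] & 0xFF) << 22;          // PTS[29..22]
    pts |= (long) ((b[offset + 2] >> 1) & 0x7F) << 15;   // PTS[21..15]
    pts |= (long) (b[offset + 3] & 0xFF) << 7;           // PTS[14..7]
    pts |= (b[offset + 4] >> 1) & 0x7F;                  // PTS[6..0]
    return pts;                                          // in 90 kHz clock ticks
  }
}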

Skipping over PesReader itself, here is H264Reader:

@Override
public void consume(ParsableByteArray data) {
  while (data.bytesLeft() > 0) {
    int offset = data.getPosition();
    int limit = data.limit();
    byte[] dataArray = data.data;

    // Append the data to the buffer.
    totalBytesWritten += data.bytesLeft();
    output.sampleData(data, data.bytesLeft());

    // Scan the appended data, processing NAL units as they are encountered
    while (true) {
      int nalUnitOffset = NalUnitUtil.findNalUnit(dataArray, offset, limit, prefixFlags);

      if (nalUnitOffset == limit) {
        // We've scanned to the end of the data without finding the start of another NAL unit.
        nalUnitData(dataArray, offset, limit);
        return;
      }

      // We've seen the start of a NAL unit of the following type.
      int nalUnitType = NalUnitUtil.getNalUnitType(dataArray, nalUnitOffset);

      // This is the number of bytes from the current offset to the start of the next NAL unit.
      // It may be negative if the NAL unit started in the previously consumed data.
      int lengthToNalUnit = nalUnitOffset - offset;
      if (lengthToNalUnit > 0) {
        nalUnitData(dataArray, offset, nalUnitOffset);
      }
      int bytesWrittenPastPosition = limit - nalUnitOffset;
      long absolutePosition = totalBytesWritten - bytesWrittenPastPosition;
      // Indicate the end of the previous NAL unit. If the length to the start of the next unit
      // is negative then we wrote too many bytes to the NAL buffers. Discard the excess bytes
      // when notifying that the unit has ended.
      endNalUnit(absolutePosition, bytesWrittenPastPosition,
          lengthToNalUnit < 0 ? -lengthToNalUnit : 0, pesTimeUs);
      // Indicate the start of the next NAL unit.
      startNalUnit(absolutePosition, nalUnitType, pesTimeUs);
      // Continue scanning the data.
      offset = nalUnitOffset + 3;
    }
  }
}

This method and the ones after it deal with NAL unit handling; they are skipped here and may be analyzed separately later. The audio stream parsing code is likewise omitted.

Decoding

When the player is initialized it creates a MediaCodecAudioTrackRenderer and a MediaCodecVideoTrackRenderer for decoding and rendering; as the names suggest, they decode with MediaCodec.

The key MediaCodec calls are the following (a minimal loop sketch follows the list):

  • codec.dequeueInputBuffer: obtain an empty input buffer
  • codec.queueInputBuffer: submit the buffer after filling it with a sample
  • codec.dequeueOutputBuffer: obtain a decoded output buffer
  • codec.releaseOutputBuffer: release the output buffer after (optionally) rendering it
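
Tying the four calls together, a sketch of the classic synchronous MediaCodec loop might look like the following; readSampleInto() and samplePresentationTimeUs() are placeholders standing in for ExoPlayer's sample-reading path, not real APIs.

import java.nio.ByteBuffer;

import android.media.MediaCodec;

// Sketch of one pass of a synchronous MediaCodec decode loop (API 21+ for getInputBuffer).
abstract class DecodeLoopSketch {

  // Copies one demuxed sample into the buffer and returns its size, or -1 if no sample is ready.
  abstract int readSampleInto(ByteBuffer buffer);

  abstract long samplePresentationTimeUs();

  void feedAndDrainOnce(MediaCodec codec, MediaCodec.BufferInfo info) {
    int inIndex = codec.dequeueInputBuffer(0);                   // 1. grab an empty input buffer
    if (inIndex >= 0) {
      ByteBuffer input = codec.getInputBuffer(inIndex);
      int size = readSampleInto(input);
      if (size >= 0) {
        codec.queueInputBuffer(inIndex, 0, size, samplePresentationTimeUs(), 0); // 2. submit it
      }
      // (a real implementation would keep the index and retry when size < 0)
    }
    int outIndex = codec.dequeueOutputBuffer(info, 0);           // 3. ask for a decoded buffer
    if (outIndex >= 0) {
      codec.releaseOutputBuffer(outIndex, /* render= */ true);   // 4. render and recycle it
    }
  }
}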

The MediaCodec internals can be analyzed separately if needed; below is how sample data travels from the sample buffer into a MediaCodec input buffer.

SampleSourceTrackRenderer.doSomeWork
    MediaCodecTrackRenderer.doSomeWork
        MediaCodecTrackRenderer.feedInputBuffer
            SampleSourceTrackRenderer.readSource
                HlsSampleSource.readData
                    HlsExtractorWrapper.getSample
                        DefaultTrackOutput.getSample
                            RollingSampleBuffer.readSample
                                RollingSampleBuffer.readData

feedInputBuffer creates a SampleHolder that is passed all the way down this chain; at the bottom, readData fills the holder with the sample data, which is then carried back up. Audio and video decoding follow the same path.
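
The SampleHolder is essentially an out-parameter: the renderer allocates it once, hands it down the chain, and the lowest layer fills in the data, timestamp and flags before returning a result code. A simplified model with made-up names (not ExoPlayer's actual classes):

import java.nio.ByteBuffer;

final class SampleHolderSketch {

  static final int NOTHING_READ = 0;
  static final int SAMPLE_READ = 1;

  // Stand-in for ExoPlayer's SampleHolder: an out-parameter the source fills.
  static final class Holder {
    final ByteBuffer data = ByteBuffer.allocate(64 * 1024);
    long timeUs;
    int flags;
  }

  // Stand-in for HlsSampleSource.readData(...).
  interface Source {
    int readData(Holder holder);
  }

  // Stand-in for feedInputBuffer(): reuse one holder, then copy its contents into the codec buffer.
  static void feedOnce(Source source, Holder holder, ByteBuffer codecInputBuffer) {
    holder.data.clear();
    if (source.readData(holder) == SAMPLE_READ) {
      holder.data.flip();
      codecInputBuffer.put(holder.data);  // then queueInputBuffer(..., holder.timeUs, holder.flags)
    }
  }
}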

Rendering

Video rendering

If a Surface is supplied when the MediaCodec is created, a NativeWindow is set up at the native layer, and calling releaseOutputBuffer with render set to true renders one frame to it.
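
A minimal sketch of that setup using the plain MediaCodec API (the format and surface come from the caller; nothing here is ExoPlayer-specific):

import java.io.IOException;

import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;

final class SurfaceDecodeSketch {

  static MediaCodec createVideoDecoder(MediaFormat videoFormat, Surface surface) throws IOException {
    MediaCodec codec = MediaCodec.createDecoderByType(videoFormat.getString(MediaFormat.KEY_MIME));
    // Passing a Surface here makes MediaCodec back its output with a native window.
    codec.configure(videoFormat, surface, /* crypto= */ null, /* flags= */ 0);
    codec.start();
    return codec;
  }

  static void renderFrame(MediaCodec codec, int outputIndex) {
    // render == true pushes this decoded frame to the Surface as the buffer is released.
    codec.releaseOutputBuffer(outputIndex, /* render= */ true);
  }
}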

Audio playback

ExoPlayer plays audio through AudioTrack: playback simply takes the decoded PCM data and writes it to the AudioTrack.
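
A minimal sketch of that write path using the framework AudioTrack API (44.1 kHz stereo 16-bit is just an example configuration):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

final class AudioOutputSketch {

  static AudioTrack createTrack() {
    int sampleRate = 44100;
    int minBuffer = AudioTrack.getMinBufferSize(
        sampleRate, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
    AudioTrack track = new AudioTrack(
        AudioManager.STREAM_MUSIC, sampleRate, AudioFormat.CHANNEL_OUT_STEREO,
        AudioFormat.ENCODING_PCM_16BIT, minBuffer * 4, AudioTrack.MODE_STREAM);
    track.play();
    return track;
  }

  // Decoded PCM from MediaCodec's output buffer is simply written to the track.
  static void writePcm(AudioTrack track, byte[] pcm, int offset, int size) {
    track.write(pcm, offset, size);
  }
}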

Audio/video synchronization

ExoPlayer's approach to A/V sync is a little unusual in that a single thread drives both the audio and the video output. The thread wakes up roughly every 10 ms; audio is written to the AudioTrack and the call returns without blocking, and video output is timed through MediaCodec, also without blocking.

MediaCodecVideoTrackRenderer.processOutputBuffer
@Override
protected boolean processOutputBuffer(long positionUs, long elapsedRealtimeUs, MediaCodec codec,
    ByteBuffer buffer, MediaCodec.BufferInfo bufferInfo, int bufferIndex, boolean shouldSkip) {

  ...
  // positionUs is the current playback position (driven by the audio clock); elapsedRealtimeUs is the
  // system time recorded at the start of this doSomeWork pass
  // Compute how many microseconds it is until the buffer's presentation time.
  // Time elapsed since the start of this pass
  long elapsedSinceStartOfLoopUs = (SystemClock.elapsedRealtime() * 1000) - elapsedRealtimeUs;
  // Video PTS minus the playback position gives how far in the future this frame should be shown;
  // subtract the time that has already passed within this pass (elapsedSinceStartOfLoopUs)
  long earlyUs = bufferInfo.presentationTimeUs - positionUs - elapsedSinceStartOfLoopUs;

  // Compute the buffer's desired release time in nanoseconds.
  long systemTimeNs = System.nanoTime();
  long unadjustedFrameReleaseTimeNs = systemTimeNs + (earlyUs * 1000);

  // Apply a timestamp adjustment, if there is one.
  // Use a helper to compute a smoother release time. It does two things:
  // 1. averages the frame duration over several frames so that frame pacing is smoother
  // 2. snaps the release time close to a vsync so the frame is displayed right on time
  long adjustedReleaseTimeNs = frameReleaseTimeHelper.adjustReleaseTime(
      bufferInfo.presentationTimeUs, unadjustedFrameReleaseTimeNs);
  earlyUs = (adjustedReleaseTimeNs - systemTimeNs) / 1000;

  // If this frame is already more than 30 ms late, drop it
  if (shouldDropOutputBuffer(earlyUs, elapsedRealtimeUs)) {
    dropOutputBuffer(codec, bufferIndex);
    return true;
  }

  if (Util.SDK_INT >= 21) {
    // Let the underlying framework time the release.
    // On API 21 and above, MediaCodec can schedule the release time itself
    if (earlyUs < 50000) {
      renderOutputBufferV21(codec, bufferIndex, adjustedReleaseTimeNs);
      consecutiveDroppedFrameCount = 0;
      return true;
    }
  } else {
    // We need to time the release ourselves.
    if (earlyUs < 30000) {
      if (earlyUs > 11000) {
        // We're a little too early to render the frame. Sleep until the frame can be rendered.
        // Note: The 11ms threshold was chosen fairly arbitrarily.
        try {
          // Subtracting 10000 rather than 11000 ensures the sleep time will be at least 1ms.
          Thread.sleep((earlyUs - 10000) / 1000);
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      }
      renderOutputBuffer(codec, bufferIndex);
      consecutiveDroppedFrameCount = 0;
      return true;
    }
  }


  return false;
}
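
A quick worked example of the timing above (the numbers are made up): if the frame's presentationTimeUs is 1,020,000 µs, the playback position positionUs is 1,000,000 µs, and 5,000 µs have passed since the start of this pass, then earlyUs = 1,020,000 - 1,000,000 - 5,000 = 15,000 µs. Ignoring the release-time adjustment, on API levels below 21 this falls into the 11,000-30,000 µs band, so the thread sleeps (15,000 - 10,000) / 1000 = 5 ms and then renders; on API 21 and above it is below the 50,000 µs threshold, so the frame is handed to renderOutputBufferV21 with the adjusted release time.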

VideoFrameReleaseTimeHelper.adjustReleaseTime

public long adjustReleaseTime(long framePresentationTimeUs, long unadjustedReleaseTimeNs) {
  long framePresentationTimeNs = framePresentationTimeUs * 1000;

  // Until we know better, the adjustment will be a no-op.
  // The adjustment only applies once we have synced and seen enough frames; until then the inputs are returned unchanged
  long adjustedFrameTimeNs = framePresentationTimeNs;
  long adjustedReleaseTimeNs = unadjustedReleaseTimeNs;

  if (haveSync) {
    // See if we've advanced to the next frame.
    if (framePresentationTimeUs != lastFramePresentationTimeUs) {
      frameCount++;
      adjustedLastFrameTimeNs = pendingAdjustedFrameTimeNs;
    }
    if (frameCount >= MIN_FRAMES_FOR_ADJUSTMENT) {
      // We're synced and have waited the required number of frames to apply an adjustment.
      // Calculate the average frame time across all the frames we've seen since the last sync.
      // This will typically give us a frame rate at a finer granularity than the frame times
      // themselves (which often only have millisecond granularity).
      // Average frame duration computed over the frames seen since the last sync
      long averageFrameDurationNs = (framePresentationTimeNs - syncFramePresentationTimeNs)
          / frameCount;
      // Project the adjusted frame time forward using the average.
      // Candidate frame time ('frame time' here is just another name for the presentation time):
      // previous frame's adjusted time + average frame duration = this frame's time
      long candidateAdjustedFrameTimeNs = adjustedLastFrameTimeNs + averageFrameDurationNs;

      // Drift too large: drop the sync and start over
      if (isDriftTooLarge(candidateAdjustedFrameTimeNs, unadjustedReleaseTimeNs)) {
        haveSync = false;
      } else {
        adjustedFrameTimeNs = candidateAdjustedFrameTimeNs;
        // Compute a suitable release time:
        // release time recorded at sync + (adjusted frame time - frame time recorded at sync)
        adjustedReleaseTimeNs = syncUnadjustedReleaseTimeNs + adjustedFrameTimeNs
            - syncFramePresentationTimeNs;
      }
    } else {
      // We're synced but haven't waited the required number of frames to apply an adjustment.
      // Check drift anyway.
      if (isDriftTooLarge(framePresentationTimeNs, unadjustedReleaseTimeNs)) {
        haveSync = false;
      }
    }
  }

  // If we need to sync, do so now.
  if (!haveSync) {
    syncFramePresentationTimeNs = framePresentationTimeNs;
    syncUnadjustedReleaseTimeNs = unadjustedReleaseTimeNs;
    frameCount = 0;
    haveSync = true;
    onSynced();
  }

  lastFramePresentationTimeUs = framePresentationTimeUs;
  pendingAdjustedFrameTimeNs = adjustedFrameTimeNs;

  if (vsyncSampler == null || vsyncSampler.sampledVsyncTimeNs == 0) {
    return adjustedReleaseTimeNs;
  }

  // Find the timestamp of the closest vsync. This is the vsync that we're targeting.
  // Snap to the vsync timestamp closest to the adjusted release time
  long snappedTimeNs = closestVsync(adjustedReleaseTimeNs,
      vsyncSampler.sampledVsyncTimeNs, vsyncDurationNs);
  // Apply an offset so that we release before the target vsync, but after the previous one.
  return snappedTimeNs - vsyncOffsetNs;
}


Summary:

  • A single thread drives both audio and video output, waking up roughly every 10 ms.
  • Audio is written to the AudioTrack and the call returns immediately without blocking.
  • Video uses a frame duration averaged over multiple frames, which makes frame pacing smoother.