WebRTC Native Source Code 3: Video Encoding Implementation Analysis

This article focuses only on the encoding-related parts, organized into three sections: the initialization flow, the encoding flow, and encoding control.


[Figure: call-sequence diagram of the video encoding pipeline]

As shown in the figure above, steps 1~31 form the initialization flow, which mainly creates the relevant objects; steps 32~49 form the encoding flow.

Initialization Flow

VideoStreamEncoder implements the VideoSinkInterface and EncodedImageCallback interfaces. Looking at the Java code alone, we find that MediaCodecVideoEncoder's methods are almost all package private and are never called by any other Java class; the calls into the MediaCodecVideoEncoder interface all happen in the native layer, in webrtc/sdk/android/src/jni/androidmediaencoder_jni.cc. Acting as a VideoSinkInterface object, VideoStreamEncoder receives the frames distributed by the capture pipeline described in "WebRTC Native Source Code 1: Camera Capture Implementation Analysis". Acting as an EncodedImageCallback object, it receives the bitstream data encoded by MediaCodecVideoEncoder. MediaCodecVideoEncoder's callback_ is a VCMEncodedFrameCallback object, and VCMEncodedFrameCallback's post_encode_callback_ is a VideoStreamEncoder object, so encoded data is called back into VideoStreamEncoder through the callback_ member.

When the video send stream is created, WebRtcVideoSendStream's RecreateWebRtcStream is called; this is where the video encoding module gets initialized. The main code of RecreateWebRtcStream is as follows:

void WebRtcVideoChannel::WebRtcVideoSendStream::RecreateWebRtcStream() {
  webrtc::VideoSendStream::Config config = parameters_.config.Copy();
  stream_ = call_->CreateVideoSendStream(std::move(config),
                                         parameters_.encoder_config.Copy());
  if (source_) {
    stream_->SetSource(this, GetDegradationPreference());
  }
}

It consists of two main steps:

  1. Call the Call object's CreateVideoSendStream to create the VideoSendStream object.
  2. Call SetSource to set the video source; this eventually calls VideoTrack's AddOrUpdateSink to register the sink object, which is a VideoStreamEncoder.

The main code of Call::CreateVideoSendStream is as follows:

webrtc::VideoSendStream* Call::CreateVideoSendStream(
    webrtc::VideoSendStream::Config config,
    VideoEncoderConfig encoder_config) {
  VideoSendStream* send_stream = new VideoSendStream(
      num_cpu_cores_, module_process_thread_.get(), &worker_queue_,
      call_stats_.get(), transport_send_.get(), bitrate_allocator_.get(),
      video_send_delay_stats_.get(), event_log_, std::move(config),
      std::move(encoder_config), suspended_video_send_ssrcs_,
      suspended_video_payload_states_);
  return send_stream;
}

The VideoSendStream constructor is shown below:

VideoSendStream::VideoSendStream(
    int num_cpu_cores,
    ProcessThread* module_process_thread,
    rtc::TaskQueue* worker_queue,
    CallStats* call_stats,
    RtpTransportControllerSendInterface* transport,
    BitrateAllocator* bitrate_allocator,
    SendDelayStats* send_delay_stats,
    RtcEventLog* event_log,
    VideoSendStream::Config config,
    VideoEncoderConfig encoder_config,
    const std::map<uint32_t, RtpState>& suspended_ssrcs,
    const std::map<uint32_t, RtpPayloadState>& suspended_payload_states)
    : worker_queue_(worker_queue),
      thread_sync_event_(false /* manual_reset */, false),
      stats_proxy_(Clock::GetRealTimeClock(),
                   config,
                   encoder_config.content_type),
      config_(std::move(config)),
      content_type_(encoder_config.content_type) {
  video_stream_encoder_.reset(
      new VideoStreamEncoder(num_cpu_cores, &stats_proxy_,
                             config_.encoder_settings,
                             config_.pre_encode_callback,
                             std::unique_ptr<OveruseFrameDetector>()));
  worker_queue_->PostTask(std::unique_ptr<rtc::QueuedTask>(new ConstructionTask(
      &send_stream_, &thread_sync_event_, &stats_proxy_,
      video_stream_encoder_.get(), module_process_thread, call_stats, transport,
      bitrate_allocator, send_delay_stats, event_log, &config_,
      encoder_config.max_bitrate_bps, suspended_ssrcs, suspended_payload_states,
      encoder_config.content_type)));

  // Wait for ConstructionTask to complete so that |send_stream_| can be used.
  // |module_process_thread| must be registered and deregistered on the thread
  // it was created on.
  thread_sync_event_.Wait(rtc::Event::kForever);
  send_stream_->RegisterProcessThread(module_process_thread);
  // TODO(sprang): Enable this also for regular video calls if it works well.
  if (encoder_config.content_type == VideoEncoderConfig::ContentType::kScreen) {
    // Only signal target bitrate for screenshare streams, for now.
    video_stream_encoder_->SetBitrateObserver(send_stream_.get());
  }

  ReconfigureVideoEncoder(std::move(encoder_config));
}

It mainly creates the VideoStreamEncoder and VideoSendStreamImpl objects and then calls ReconfigureVideoEncoder to initialize the encoder. The VideoSendStreamImpl object is created in ConstructionTask's Run function, as shown below:

  bool Run() override {
    send_stream_->reset(new VideoSendStreamImpl(
        stats_proxy_, rtc::TaskQueue::Current(), call_stats_, transport_,
        bitrate_allocator_, send_delay_stats_, video_stream_encoder_,
        event_log_, config_, initial_encoder_max_bitrate_,
        std::move(suspended_ssrcs_), std::move(suspended_payload_states_),
        content_type_));
    return true;
  }

The VideoStreamEncoder constructor is shown below:

VideoStreamEncoder::VideoStreamEncoder(
    uint32_t number_of_cores,
    SendStatisticsProxy* stats_proxy,
    const VideoSendStream::Config::EncoderSettings& settings,
    rtc::VideoSinkInterface<VideoFrame>* pre_encode_callback,
    std::unique_ptr<OveruseFrameDetector> overuse_detector)
    : shutdown_event_(true /* manual_reset */, false),
      number_of_cores_(number_of_cores),
      initial_rampup_(0),
      source_proxy_(new VideoSourceProxy(this)),
      sink_(nullptr),
      settings_(settings),
      codec_type_(PayloadStringToCodecType(settings.payload_name)),
      video_sender_(Clock::GetRealTimeClock(), this),
      overuse_detector_(
          overuse_detector.get()
              ? overuse_detector.release()
              : new OveruseFrameDetector(
                    GetCpuOveruseOptions(settings.full_overuse_time),
                    this,
                    stats_proxy)),
      stats_proxy_(stats_proxy),
      pre_encode_callback_(pre_encode_callback),
      max_framerate_(-1),
      pending_encoder_reconfiguration_(false),
      encoder_start_bitrate_bps_(0),
      max_data_payload_length_(0),
      nack_enabled_(false),
      last_observed_bitrate_bps_(0),
      encoder_paused_and_dropped_frame_(false),
      clock_(Clock::GetRealTimeClock()),
      degradation_preference_(
          VideoSendStream::DegradationPreference::kDegradationDisabled),
      posted_frames_waiting_for_encode_(0),
      last_captured_timestamp_(0),
      delta_ntp_internal_ms_(clock_->CurrentNtpInMilliseconds() -
                             clock_->TimeInMilliseconds()),
      last_frame_log_ms_(clock_->TimeInMilliseconds()),
      captured_frame_count_(0),
      dropped_frame_count_(0),
      bitrate_observer_(nullptr),
      encoder_queue_("EncoderQueue") {
  RTC_DCHECK(stats_proxy);
  encoder_queue_.PostTask([this] {
    RTC_DCHECK_RUN_ON(&encoder_queue_);
    overuse_detector_->StartCheckForOveruse();
    video_sender_.RegisterExternalEncoder(
        settings_.encoder, settings_.payload_type, settings_.internal_source);
  });
}

It mainly initializes the source_proxy_ and video_sender_ members. The VideoSender constructor is shown below:

VideoSender::VideoSender(Clock* clock,
                         EncodedImageCallback* post_encode_callback)
    : _encoder(nullptr),
      _mediaOpt(clock),
      _encodedFrameCallback(post_encode_callback, &_mediaOpt),
      post_encode_callback_(post_encode_callback),
      _codecDataBase(&_encodedFrameCallback),
      frame_dropper_enabled_(true),
      current_codec_(),
      encoder_params_({BitrateAllocation(), 0, 0, 0}),
      encoder_has_internal_source_(false),
      next_frame_types_(1, kVideoFrameDelta) {
  _mediaOpt.Reset();
  // Allow VideoSender to be created on one thread but used on another, post
  // construction. This is currently how this class is being used by at least
  // one external project (diffractor).
  sequenced_checker_.Detach();
}

It mainly initializes the _encodedFrameCallback and _codecDataBase members. The VCMEncodedFrameCallback constructor is shown below; post_encode_callback is a VideoStreamEncoder object:

VCMEncodedFrameCallback::VCMEncodedFrameCallback(
    EncodedImageCallback* post_encode_callback,
    media_optimization::MediaOptimization* media_opt)
    : internal_source_(false),
      post_encode_callback_(post_encode_callback),
      media_opt_(media_opt),
      framerate_(1),
      last_timing_frame_time_ms_(-1),
      timing_frames_thresholds_({-1, 0}),
      incorrect_capture_time_logged_messages_(0),
      reordered_frames_logged_messages_(0),
      stalled_encoder_logged_messages_(0) {
}

The VCMCodecDataBase constructor is shown below; encoded_frame_callback_ is a VCMEncodedFrameCallback object:

VCMCodecDataBase::VCMCodecDataBase(
    VCMEncodedFrameCallback* encoded_frame_callback)
    : number_of_cores_(0),
      max_payload_size_(kDefaultPayloadSize),
      periodic_key_frames_(false),
      pending_encoder_reset_(true),
      send_codec_(),
      receive_codec_(),
      encoder_payload_type_(0),
      external_encoder_(nullptr),
      internal_source_(false),
      encoded_frame_callback_(encoded_frame_callback),
      dec_map_(),
      dec_external_map_() {}

In the VideoStreamEncoder constructor, the call to VideoSender's RegisterExternalEncoder ultimately stores the encoder object in VCMCodecDataBase's external_encoder_ member, as shown below:

void VCMCodecDataBase::RegisterExternalEncoder(VideoEncoder* external_encoder,
                                               uint8_t payload_type,
                                               bool internal_source) {
  // Since only one encoder can be used at a given time, only one external
  // encoder can be registered/used.
  external_encoder_ = external_encoder;
  encoder_payload_type_ = payload_type;
  internal_source_ = internal_source;
  pending_encoder_reset_ = true;
}

The encoder object is created through a WebRtcVideoEncoderFactory, as shown below:

void WebRtcVideoChannel::WebRtcVideoSendStream::SetCodec(
    const VideoCodecSettings& codec_settings,
    bool force_encoder_allocation) {
  std::unique_ptr<webrtc::VideoEncoder> new_encoder;
  if (force_encoder_allocation || !allocated_encoder_ ||
      allocated_codec_ != codec_settings.codec) {
    const webrtc::SdpVideoFormat format(codec_settings.codec.name,
                                        codec_settings.codec.params);
    new_encoder = encoder_factory_->CreateVideoEncoder(format);

    parameters_.config.encoder_settings.encoder = new_encoder.get();

    const webrtc::VideoEncoderFactory::CodecInfo info =
        encoder_factory_->QueryVideoEncoder(format);
    parameters_.config.encoder_settings.full_overuse_time =
        info.is_hardware_accelerated;
    parameters_.config.encoder_settings.internal_source =
        info.has_internal_source;
  } else {
    new_encoder = std::move(allocated_encoder_);
  }
  parameters_.config.encoder_settings.payload_name = codec_settings.codec.name;
  parameters_.config.encoder_settings.payload_type = codec_settings.codec.id;
}

Taking the Android MediaCodec encoder as an example, MediaCodecVideoEncoderFactory's CreateVideoEncoder is defined as follows:

VideoEncoder* MediaCodecVideoEncoderFactory::CreateVideoEncoder(
    const cricket::VideoCodec& codec) {
  if (supported_codecs().empty()) {
    ALOGW << "No HW video encoder for codec " << codec.name;
    return nullptr;
  }
  if (FindMatchingCodec(supported_codecs(), codec)) {
    ALOGD << "Create HW video encoder for " << codec.name;
    JNIEnv* jni = AttachCurrentThreadIfNeeded();
    ScopedLocalRefFrame local_ref_frame(jni);
    return new MediaCodecVideoEncoder(jni, codec, egl_context_);
  }
  ALOGW << "Can not find HW video encoder for type " << codec.name;
  return nullptr;
}

So VCMCodecDataBase's external_encoder_ is a MediaCodecVideoEncoder object.

Back in the VideoSendStream constructor, the ReconfigureVideoEncoder call eventually reaches VCMCodecDataBase's SetSendCodec, shown below. It mainly creates and initializes a VCMGenericEncoder object, where external_encoder_ is a MediaCodecVideoEncoder object and encoded_frame_callback_ is a VCMEncodedFrameCallback object.

bool VCMCodecDataBase::SetSendCodec(const VideoCodec* send_codec,
                                    int number_of_cores,
                                    size_t max_payload_size) {
  ptr_encoder_.reset(new VCMGenericEncoder(
      external_encoder_, encoded_frame_callback_, internal_source_));
  encoded_frame_callback_->SetInternalSource(internal_source_);
  if (ptr_encoder_->InitEncode(&send_codec_, number_of_cores_,
                               max_payload_size_) < 0) {
    RTC_LOG(LS_ERROR) << "Failed to initialize video encoder.";
    DeleteEncoder();
    return false;
  }
  return true;
}

The VCMGenericEncoder constructor is shown below; encoder_ and vcm_encoded_frame_callback_ hold the MediaCodecVideoEncoder object and the VCMEncodedFrameCallback object, respectively.

VCMGenericEncoder::VCMGenericEncoder(
    VideoEncoder* encoder,
    VCMEncodedFrameCallback* encoded_frame_callback,
    bool internal_source)
    : encoder_(encoder),
      vcm_encoded_frame_callback_(encoded_frame_callback),
      internal_source_(internal_source),
      encoder_params_({BitrateAllocation(), 0, 0, 0}),
      streams_or_svc_num_(0) {}

VCMGenericEncoder's InitEncode function is defined as follows:

int32_t VCMGenericEncoder::InitEncode(const VideoCodec* settings,
                                      int32_t number_of_cores,
                                      size_t max_payload_size) {
  RTC_DCHECK_RUNS_SERIALIZED(&race_checker_);
  TRACE_EVENT0("webrtc", "VCMGenericEncoder::InitEncode");
  streams_or_svc_num_ = settings->numberOfSimulcastStreams;
  codec_type_ = settings->codecType;
  if (settings->codecType == kVideoCodecVP9) {
    streams_or_svc_num_ = settings->VP9().numberOfSpatialLayers;
  }
  if (streams_or_svc_num_ == 0)
    streams_or_svc_num_ = 1;

  vcm_encoded_frame_callback_->SetTimingFramesThresholds(
      settings->timing_frame_thresholds);
  vcm_encoded_frame_callback_->OnFrameRateChanged(settings->maxFramerate);

  if (encoder_->InitEncode(settings, number_of_cores, max_payload_size) != 0) {
    RTC_LOG(LS_ERROR) << "Failed to initialize the encoder associated with "
                         "payload name: "
                      << settings->plName;
    return -1;
  }
  vcm_encoded_frame_callback_->Reset();
  encoder_->RegisterEncodeCompleteCallback(vcm_encoded_frame_callback_);
  return 0;
}

It mainly calls MediaCodecVideoEncoder's InitEncode to create and initialize the encoder, and registers the VCMEncodedFrameCallback with the MediaCodecVideoEncoder to receive the encoded data.

InitEncode ends up calling the Java-layer MediaCodecVideoEncoder's initEncode, shown below, which is where the Android MediaCodec encoder is actually created and initialized.

  @CalledByNativeUnchecked
  boolean initEncode(VideoCodecType type, int profile, int width, int height, int kbps, int fps,
      EglBase14.Context sharedContext) {
    final boolean useSurface = sharedContext != null;
    Logging.d(TAG,
        "Java initEncode: " + type + ". Profile: " + profile + " : " + width + " x " + height
            + ". @ " + kbps + " kbps. Fps: " + fps + ". Encode from texture : " + useSurface);

    this.profile = profile;
    this.width = width;
    this.height = height;
    if (mediaCodecThread != null) {
      throw new RuntimeException("Forgot to release()?");
    }
    EncoderProperties properties = null;
    String mime = null;
    int keyFrameIntervalSec = 0;
    boolean configureH264HighProfile = false;
    if (type == VideoCodecType.VIDEO_CODEC_VP8) {
      mime = VP8_MIME_TYPE;
      properties = findHwEncoder(
          VP8_MIME_TYPE, vp8HwList(), useSurface ? supportedSurfaceColorList : supportedColorList);
      keyFrameIntervalSec = 100;
    } else if (type == VideoCodecType.VIDEO_CODEC_VP9) {
      mime = VP9_MIME_TYPE;
      properties = findHwEncoder(
          VP9_MIME_TYPE, vp9HwList, useSurface ? supportedSurfaceColorList : supportedColorList);
      keyFrameIntervalSec = 100;
    } else if (type == VideoCodecType.VIDEO_CODEC_H264) {
      mime = H264_MIME_TYPE;
      properties = findHwEncoder(
          H264_MIME_TYPE, h264HwList, useSurface ? supportedSurfaceColorList : supportedColorList);
      if (profile == H264Profile.CONSTRAINED_HIGH.getValue()) {
        EncoderProperties h264HighProfileProperties = findHwEncoder(H264_MIME_TYPE,
            h264HighProfileHwList, useSurface ? supportedSurfaceColorList : supportedColorList);
        if (h264HighProfileProperties != null) {
          Logging.d(TAG, "High profile H.264 encoder supported.");
          configureH264HighProfile = true;
        } else {
          Logging.d(TAG, "High profile H.264 encoder requested, but not supported. Use baseline.");
        }
      }
      keyFrameIntervalSec = 20;
    }
    if (properties == null) {
      throw new RuntimeException("Can not find HW encoder for " + type);
    }
    runningInstance = this; // Encoder is now running and can be queried for stack traces.
    colorFormat = properties.colorFormat;
    bitrateAdjustmentType = properties.bitrateAdjustmentType;
    if (bitrateAdjustmentType == BitrateAdjustmentType.FRAMERATE_ADJUSTMENT) {
      fps = BITRATE_ADJUSTMENT_FPS;
    } else {
      fps = Math.min(fps, MAXIMUM_INITIAL_FPS);
    }

    forcedKeyFrameMs = 0;
    lastKeyFrameMs = -1;
    if (type == VideoCodecType.VIDEO_CODEC_VP8
        && properties.codecName.startsWith(qcomVp8HwProperties.codecPrefix)) {
      if (Build.VERSION.SDK_INT == Build.VERSION_CODES.LOLLIPOP
          || Build.VERSION.SDK_INT == Build.VERSION_CODES.LOLLIPOP_MR1) {
        forcedKeyFrameMs = QCOM_VP8_KEY_FRAME_INTERVAL_ANDROID_L_MS;
      } else if (Build.VERSION.SDK_INT == Build.VERSION_CODES.M) {
        forcedKeyFrameMs = QCOM_VP8_KEY_FRAME_INTERVAL_ANDROID_M_MS;
      } else if (Build.VERSION.SDK_INT > Build.VERSION_CODES.M) {
        forcedKeyFrameMs = QCOM_VP8_KEY_FRAME_INTERVAL_ANDROID_N_MS;
      }
    }

    Logging.d(TAG, "Color format: " + colorFormat + ". Bitrate adjustment: " + bitrateAdjustmentType
            + ". Key frame interval: " + forcedKeyFrameMs + " . Initial fps: " + fps);
    targetBitrateBps = 1000 * kbps;
    targetFps = fps;
    bitrateAccumulatorMax = targetBitrateBps / 8.0;
    bitrateAccumulator = 0;
    bitrateObservationTimeMs = 0;
    bitrateAdjustmentScaleExp = 0;

    mediaCodecThread = Thread.currentThread();
    try {
      MediaFormat format = MediaFormat.createVideoFormat(mime, width, height);
      format.setInteger(MediaFormat.KEY_BIT_RATE, targetBitrateBps);
      format.setInteger("bitrate-mode", VIDEO_ControlRateConstant);
      format.setInteger(MediaFormat.KEY_COLOR_FORMAT, properties.colorFormat);
      format.setInteger(MediaFormat.KEY_FRAME_RATE, targetFps);
      format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, keyFrameIntervalSec);
      if (configureH264HighProfile) {
        format.setInteger("profile", VIDEO_AVCProfileHigh);
        format.setInteger("level", VIDEO_AVCLevel3);
      }
      Logging.d(TAG, "  Format: " + format);
      mediaCodec = createByCodecName(properties.codecName);
      this.type = type;
      if (mediaCodec == null) {
        Logging.e(TAG, "Can not create media encoder");
        release();
        return false;
      }
      mediaCodec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);

      if (useSurface) {
        eglBase = new EglBase14(sharedContext, EglBase.CONFIG_RECORDABLE);
        // Create an input surface and keep a reference since we must release the surface when done.
        inputSurface = mediaCodec.createInputSurface();
        eglBase.createSurface(inputSurface);
        drawer = new GlRectDrawer();
      }
      mediaCodec.start();
      outputBuffers = mediaCodec.getOutputBuffers();
      Logging.d(TAG, "Output buffers: " + outputBuffers.length);

    } catch (IllegalStateException e) {
      Logging.e(TAG, "initEncode failed", e);
      release();
      return false;
    }
    return true;
  }

MediaCodecVideoEncoder's RegisterEncodeCompleteCallback is defined below; clearly the callback_ member is a VCMEncodedFrameCallback object.

int32_t MediaCodecVideoEncoder::RegisterEncodeCompleteCallback(
    EncodedImageCallback* callback) {
  RTC_DCHECK_CALLED_SEQUENTIALLY(&encoder_queue_checker_);
  JNIEnv* jni = AttachCurrentThreadIfNeeded();
  ScopedLocalRefFrame local_ref_frame(jni);
  callback_ = callback;
  return WEBRTC_VIDEO_CODEC_OK;
}

Back in WebRtcVideoSendStream's RecreateWebRtcStream, VideoSendStream's SetSource is called to set the source; eventually, VideoSourceProxy's SetSource calls WebRtcVideoSendStream's AddOrUpdateSink to register the VideoStreamEncoder sink on the source, so the source's frames can be distributed to the VideoStreamEncoder for encoding.

Video Encoding Flow

The video encoding flow starts when VideoBroadcaster calls back into VideoStreamEncoder's OnFrame, which is defined as follows:

void VideoStreamEncoder::OnFrame(const VideoFrame& video_frame) {
  RTC_DCHECK_RUNS_SERIALIZED(&incoming_frame_race_checker_);
  VideoFrame incoming_frame = video_frame;

  // Local time in webrtc time base.
  int64_t current_time_us = clock_->TimeInMicroseconds();
  int64_t current_time_ms = current_time_us / rtc::kNumMicrosecsPerMillisec;
  // In some cases, e.g., when the frame from decoder is fed to encoder,
  // the timestamp may be set to the future. As the encoding pipeline assumes
  // capture time to be less than present time, we should reset the capture
  // timestamps here. Otherwise there may be issues with RTP send stream.
  if (incoming_frame.timestamp_us() > current_time_us)
    incoming_frame.set_timestamp_us(current_time_us);

  // Capture time may come from clock with an offset and drift from clock_.
  int64_t capture_ntp_time_ms;
  if (video_frame.ntp_time_ms() > 0) {
    capture_ntp_time_ms = video_frame.ntp_time_ms();
  } else if (video_frame.render_time_ms() != 0) {
    capture_ntp_time_ms = video_frame.render_time_ms() + delta_ntp_internal_ms_;
  } else {
    capture_ntp_time_ms = current_time_ms + delta_ntp_internal_ms_;
  }
  incoming_frame.set_ntp_time_ms(capture_ntp_time_ms);

  // Convert NTP time, in ms, to RTP timestamp.
  const int kMsToRtpTimestamp = 90;
  incoming_frame.set_timestamp(
      kMsToRtpTimestamp * static_cast<uint32_t>(incoming_frame.ntp_time_ms()));

  if (incoming_frame.ntp_time_ms() <= last_captured_timestamp_) {
    // We don't allow the same capture time for two frames, drop this one.
    RTC_LOG(LS_WARNING) << "Same/old NTP timestamp ("
                        << incoming_frame.ntp_time_ms()
                        << " <= " << last_captured_timestamp_
                        << ") for incoming frame. Dropping.";
    return;
  }

  bool log_stats = false;
  if (current_time_ms - last_frame_log_ms_ > kFrameLogIntervalMs) {
    last_frame_log_ms_ = current_time_ms;
    log_stats = true;
  }

  last_captured_timestamp_ = incoming_frame.ntp_time_ms();
  encoder_queue_.PostTask(std::unique_ptr<rtc::QueuedTask>(new EncodeTask(
      incoming_frame, this, rtc::TimeMicros(), log_stats)));
}

EncodeTask's Run is defined as follows:

  bool Run() override {
    RTC_DCHECK_RUN_ON(&video_stream_encoder_->encoder_queue_);
    video_stream_encoder_->stats_proxy_->OnIncomingFrame(frame_.width(),
                                                         frame_.height());
    ++video_stream_encoder_->captured_frame_count_;
    const int posted_frames_waiting_for_encode =
        video_stream_encoder_->posted_frames_waiting_for_encode_.fetch_sub(1);
    RTC_DCHECK_GT(posted_frames_waiting_for_encode, 0);
    if (posted_frames_waiting_for_encode == 1) {
      video_stream_encoder_->EncodeVideoFrame(frame_, time_when_posted_us_);
    } else {
      // There is a newer frame in flight. Do not encode this frame.
      RTC_LOG(LS_VERBOSE)
          << "Incoming frame dropped due to that the encoder is blocked.";
      ++video_stream_encoder_->dropped_frame_count_;
      video_stream_encoder_->stats_proxy_->OnFrameDroppedInEncoderQueue();
    }
    if (log_stats_) {
      RTC_LOG(LS_INFO) << "Number of frames: captured "
                       << video_stream_encoder_->captured_frame_count_
                       << ", dropped (due to encoder blocked) "
                       << video_stream_encoder_->dropped_frame_count_
                       << ", interval_ms " << kFrameLogIntervalMs;
      video_stream_encoder_->captured_frame_count_ = 0;
      video_stream_encoder_->dropped_frame_count_ = 0;
    }
    return true;
  }

The video_stream_encoder_ member is a VideoStreamEncoder object; the EncodeVideoFrame function is defined as follows:

void VideoStreamEncoder::EncodeVideoFrame(const VideoFrame& video_frame,
                                          int64_t time_when_posted_us) {
  RTC_DCHECK_RUN_ON(&encoder_queue_);

  if (pre_encode_callback_)
    pre_encode_callback_->OnFrame(video_frame);

  if (!last_frame_info_ || video_frame.width() != last_frame_info_->width ||
      video_frame.height() != last_frame_info_->height ||
      video_frame.is_texture() != last_frame_info_->is_texture) {
    pending_encoder_reconfiguration_ = true;
    last_frame_info_ = rtc::Optional<VideoFrameInfo>(VideoFrameInfo(
        video_frame.width(), video_frame.height(), video_frame.is_texture()));
    RTC_LOG(LS_INFO) << "Video frame parameters changed: dimensions="
                     << last_frame_info_->width << "x"
                     << last_frame_info_->height
                     << ", texture=" << last_frame_info_->is_texture << ".";
  }

  if (initial_rampup_ < kMaxInitialFramedrop &&
      video_frame.size() >
          MaximumFrameSizeForBitrate(encoder_start_bitrate_bps_ / 1000)) {
    RTC_LOG(LS_INFO) << "Dropping frame. Too large for target bitrate.";
    AdaptDown(kQuality);
    ++initial_rampup_;
    return;
  }
  initial_rampup_ = kMaxInitialFramedrop;

  int64_t now_ms = clock_->TimeInMilliseconds();
  if (pending_encoder_reconfiguration_) {
    ReconfigureEncoder();
    last_parameters_update_ms_.emplace(now_ms);
  } else if (!last_parameters_update_ms_ ||
             now_ms - *last_parameters_update_ms_ >=
                 vcm::VCMProcessTimer::kDefaultProcessIntervalMs) {
    video_sender_.UpdateChannelParemeters(rate_allocator_.get(),
                                          bitrate_observer_);
    last_parameters_update_ms_.emplace(now_ms);
  }

  if (EncoderPaused()) {
    TraceFrameDropStart();
    return;
  }
  TraceFrameDropEnd();

  VideoFrame out_frame(video_frame);
  // Crop frame if needed.
  if (crop_width_ > 0 || crop_height_ > 0) {
    int cropped_width = video_frame.width() - crop_width_;
    int cropped_height = video_frame.height() - crop_height_;
    rtc::scoped_refptr<I420Buffer> cropped_buffer =
        I420Buffer::Create(cropped_width, cropped_height);
    // TODO(ilnik): Remove scaling if cropping is too big, as it should never
    // happen after SinkWants signaled correctly from ReconfigureEncoder.
    if (crop_width_ < 4 && crop_height_ < 4) {
      cropped_buffer->CropAndScaleFrom(
          *video_frame.video_frame_buffer()->ToI420(), crop_width_ / 2,
          crop_height_ / 2, cropped_width, cropped_height);
    } else {
      cropped_buffer->ScaleFrom(
          *video_frame.video_frame_buffer()->ToI420().get());
    }
    out_frame =
        VideoFrame(cropped_buffer, video_frame.timestamp(),
                   video_frame.render_time_ms(), video_frame.rotation());
    out_frame.set_ntp_time_ms(video_frame.ntp_time_ms());
  }

  TRACE_EVENT_ASYNC_STEP0("webrtc", "Video", video_frame.render_time_ms(),
                          "Encode");

  overuse_detector_->FrameCaptured(out_frame, time_when_posted_us);

  video_sender_.AddVideoFrame(out_frame, nullptr);
}

It crops and scales the frame if necessary, then calls VideoSender's AddVideoFrame, defined as follows:

// Add one raw video frame to the encoder, blocking.
int32_t VideoSender::AddVideoFrame(const VideoFrame& videoFrame,
                                   const CodecSpecificInfo* codecSpecificInfo) {
  EncoderParameters encoder_params;
  std::vector<FrameType> next_frame_types;
  bool encoder_has_internal_source = false;
  {
    rtc::CritScope lock(&params_crit_);
    encoder_params = encoder_params_;
    next_frame_types = next_frame_types_;
    encoder_has_internal_source = encoder_has_internal_source_;
  }
  rtc::CritScope lock(&encoder_crit_);
  if (_encoder == nullptr)
    return VCM_UNINITIALIZED;
  SetEncoderParameters(encoder_params, encoder_has_internal_source);
  if (_mediaOpt.DropFrame()) {
    RTC_LOG(LS_VERBOSE) << "Drop Frame "
                        << "target bitrate "
                        << encoder_params.target_bitrate.get_sum_bps()
                        << " loss rate " << encoder_params.loss_rate << " rtt "
                        << encoder_params.rtt << " input frame rate "
                        << encoder_params.input_frame_rate;
    post_encode_callback_->OnDroppedFrame(
        EncodedImageCallback::DropReason::kDroppedByMediaOptimizations);
    return VCM_OK;
  }
  // TODO(pbos): Make sure setting send codec is synchronized with video
  // processing so frame size always matches.
  if (!_codecDataBase.MatchesCurrentResolution(videoFrame.width(),
                                               videoFrame.height())) {
    RTC_LOG(LS_ERROR)
        << "Incoming frame doesn't match set resolution. Dropping.";
    return VCM_PARAMETER_ERROR;
  }
  VideoFrame converted_frame = videoFrame;
  const VideoFrameBuffer::Type buffer_type =
      converted_frame.video_frame_buffer()->type();
  const bool is_buffer_type_supported =
      buffer_type == VideoFrameBuffer::Type::kI420 ||
      (buffer_type == VideoFrameBuffer::Type::kNative &&
       _encoder->SupportsNativeHandle());
  if (!is_buffer_type_supported) {
    // This module only supports software encoding.
    // TODO(pbos): Offload conversion from the encoder thread.
    rtc::scoped_refptr<I420BufferInterface> converted_buffer(
        converted_frame.video_frame_buffer()->ToI420());

    if (!converted_buffer) {
      RTC_LOG(LS_ERROR) << "Frame conversion failed, dropping frame.";
      return VCM_PARAMETER_ERROR;
    }
    converted_frame = VideoFrame(converted_buffer,
                                 converted_frame.timestamp(),
                                 converted_frame.render_time_ms(),
                                 converted_frame.rotation());
  }
  int32_t ret =
      _encoder->Encode(converted_frame, codecSpecificInfo, next_frame_types);
  if (ret < 0) {
    RTC_LOG(LS_ERROR) << "Failed to encode frame. Error code: " << ret;
    return ret;
  }

  {
    rtc::CritScope lock(&params_crit_);
    // Change all keyframe requests to encode delta frames the next time.
    for (size_t i = 0; i < next_frame_types_.size(); ++i) {
      // Check for equality (same requested as before encoding) to not
      // accidentally drop a keyframe request while encoding.
      if (next_frame_types[i] == next_frame_types_[i])
        next_frame_types_[i] = kVideoFrameDelta;
    }
  }
  return VCM_OK;
}

The _encoder member is a VCMGenericEncoder object; its Encode function is defined as follows:

int32_t VCMGenericEncoder::Encode(const VideoFrame& frame,
                                  const CodecSpecificInfo* codec_specific,
                                  const std::vector<FrameType>& frame_types) {
  RTC_DCHECK_RUNS_SERIALIZED(&race_checker_);
  TRACE_EVENT1("webrtc", "VCMGenericEncoder::Encode", "timestamp",
               frame.timestamp());

  for (FrameType frame_type : frame_types)
    RTC_DCHECK(frame_type == kVideoFrameKey || frame_type == kVideoFrameDelta);

  for (size_t i = 0; i < streams_or_svc_num_; ++i)
    vcm_encoded_frame_callback_->OnEncodeStarted(frame.timestamp(),
                                                 frame.render_time_ms(), i);

  return encoder_->Encode(frame, codec_specific, &frame_types);
}

Considering the case where encoder_ is a MediaCodecVideoEncoder, its Encode function is defined as follows:

int32_t MediaCodecVideoEncoder::Encode(
    const VideoFrame& frame,
    const CodecSpecificInfo* /* codec_specific_info */,
    const std::vector<FrameType>* frame_types) {
  RTC_DCHECK_CALLED_SEQUENTIALLY(&encoder_queue_checker_);
  if (sw_fallback_required_)
    return WEBRTC_VIDEO_CODEC_FALLBACK_SOFTWARE;
  JNIEnv* jni = AttachCurrentThreadIfNeeded();
  ScopedLocalRefFrame local_ref_frame(jni);
  const int64_t frame_input_time_ms = rtc::TimeMillis();

  if (!inited_) {
    return WEBRTC_VIDEO_CODEC_UNINITIALIZED;
  }

  bool send_key_frame = false;
  if (codec_mode_ == kRealtimeVideo) {
    ++frames_received_since_last_key_;
    int64_t now_ms = rtc::TimeMillis();
    if (last_frame_received_ms_ != -1 &&
        (now_ms - last_frame_received_ms_) > kFrameDiffThresholdMs) {
      // Add limit to prevent triggering a key for every frame for very low
      // framerates (e.g. if frame diff > kFrameDiffThresholdMs).
      if (frames_received_since_last_key_ > kMinKeyFrameInterval) {
        ALOGD << "Send key, frame diff: " << (now_ms - last_frame_received_ms_);
        send_key_frame = true;
      }
      frames_received_since_last_key_ = 0;
    }
    last_frame_received_ms_ = now_ms;
  }

  frames_received_++;
  if (!DeliverPendingOutputs(jni)) {
    if (!ProcessHWError(true /* reset_if_fallback_unavailable */)) {
      return sw_fallback_required_ ? WEBRTC_VIDEO_CODEC_FALLBACK_SOFTWARE
                                   : WEBRTC_VIDEO_CODEC_ERROR;
    }
  }
  if (frames_encoded_ < kMaxEncodedLogFrames) {
    ALOGD << "Encoder frame in # " << (frames_received_ - 1)
          << ". TS: " << static_cast<int>(current_timestamp_us_ / 1000)
          << ". Q: " << input_frame_infos_.size() << ". Fps: " << last_set_fps_
          << ". Kbps: " << last_set_bitrate_kbps_;
  }

  if (drop_next_input_frame_) {
    ALOGW << "Encoder drop frame - failed callback.";
    drop_next_input_frame_ = false;
    current_timestamp_us_ += rtc::kNumMicrosecsPerSec / last_set_fps_;
    frames_dropped_media_encoder_++;
    return WEBRTC_VIDEO_CODEC_OK;
  }

  RTC_CHECK(frame_types->size() == 1) << "Unexpected stream count";

  // Check if we accumulated too many frames in encoder input buffers and drop
  // frame if so.
  if (input_frame_infos_.size() > MAX_ENCODER_Q_SIZE) {
    ALOGD << "Already " << input_frame_infos_.size()
          << " frames in the queue, dropping"
          << ". TS: " << static_cast<int>(current_timestamp_us_ / 1000)
          << ". Fps: " << last_set_fps_
          << ". Consecutive drops: " << consecutive_full_queue_frame_drops_;
    current_timestamp_us_ += rtc::kNumMicrosecsPerSec / last_set_fps_;
    consecutive_full_queue_frame_drops_++;
    if (consecutive_full_queue_frame_drops_ >=
        ENCODER_STALL_FRAMEDROP_THRESHOLD) {
      ALOGE << "Encoder got stuck.";
      return ProcessHWErrorOnEncode();
    }
    frames_dropped_media_encoder_++;
    return WEBRTC_VIDEO_CODEC_OK;
  }
  consecutive_full_queue_frame_drops_ = 0;

  rtc::scoped_refptr<VideoFrameBuffer> input_buffer(frame.video_frame_buffer());

  VideoFrame input_frame(input_buffer, frame.timestamp(),
                         frame.render_time_ms(), frame.rotation());

  if (!MaybeReconfigureEncoder(jni, input_frame)) {
    ALOGE << "Failed to reconfigure encoder.";
    return WEBRTC_VIDEO_CODEC_ERROR;
  }

  const bool key_frame =
      frame_types->front() != kVideoFrameDelta || send_key_frame;
  bool encode_status = true;

  int j_input_buffer_index = -1;
  if (!use_surface_) {
    j_input_buffer_index = Java_MediaCodecVideoEncoder_dequeueInputBuffer(
        jni, j_media_codec_video_encoder_);
    if (CheckException(jni)) {
      ALOGE << "Exception in dequeue input buffer.";
      return ProcessHWErrorOnEncode();
    }
    if (j_input_buffer_index == -1) {
      // Video codec falls behind - no input buffer available.
      ALOGW << "Encoder drop frame - no input buffers available";
      if (frames_received_ > 1) {
        current_timestamp_us_ += rtc::kNumMicrosecsPerSec / last_set_fps_;
        frames_dropped_media_encoder_++;
      } else {
        // Input buffers are not ready after codec initialization, HW is still
        // allocating them - this is expected and should not result in drop
        // frame report.
        frames_received_ = 0;
      }
      return WEBRTC_VIDEO_CODEC_OK;  // TODO(fischman): see webrtc bug 2887.
    } else if (j_input_buffer_index == -2) {
      return ProcessHWErrorOnEncode();
    }
  }

  if (input_frame.video_frame_buffer()->type() !=
      VideoFrameBuffer::Type::kNative) {
    encode_status =
        EncodeByteBuffer(jni, key_frame, input_frame, j_input_buffer_index);
  } else {
    AndroidVideoFrameBuffer* android_buffer =
        static_cast<AndroidVideoFrameBuffer*>(
            input_frame.video_frame_buffer().get());
    switch (android_buffer->android_type()) {
      case AndroidVideoFrameBuffer::AndroidType::kTextureBuffer:
        encode_status = EncodeTexture(jni, key_frame, input_frame);
        break;
      case AndroidVideoFrameBuffer::AndroidType::kJavaBuffer:
        encode_status =
            EncodeJavaFrame(jni, key_frame, NativeToJavaFrame(jni, input_frame),
                            j_input_buffer_index);
        break;
      default:
        RTC_NOTREACHED();
        return WEBRTC_VIDEO_CODEC_ERROR;
    }
  }

  if (!encode_status) {
    ALOGE << "Failed encode frame with timestamp: " << input_frame.timestamp();
    return ProcessHWErrorOnEncode();
  }

  // Save input image timestamps for later output.
  input_frame_infos_.emplace_back(frame_input_time_ms, input_frame.timestamp(),
                                  input_frame.render_time_ms(),
                                  input_frame.rotation());

  last_input_timestamp_ms_ =
      current_timestamp_us_ / rtc::kNumMicrosecsPerMillisec;

  current_timestamp_us_ += rtc::kNumMicrosecsPerSec / last_set_fps_;

  // Start the polling loop if it is not started.
  if (encode_task_) {
    rtc::TaskQueue::Current()->PostDelayedTask(std::move(encode_task_),
                                               kMediaCodecPollMs);
  }

  if (!DeliverPendingOutputs(jni)) {
    return ProcessHWErrorOnEncode();
  }
  return WEBRTC_VIDEO_CODEC_OK;
}

Considering the case where use_surface_ is true, the Java-layer MediaCodecVideoEncoder's encodeTexture is called, defined as follows:

  @CalledByNativeUnchecked
  boolean encodeTexture(boolean isKeyframe, int oesTextureId, float[] transformationMatrix,
      long presentationTimestampUs) {
    checkOnMediaCodecThread();
    try {
      checkKeyFrameRequired(isKeyframe, presentationTimestampUs);
      eglBase.makeCurrent();
      // TODO(perkj): glClear() shouldn't be necessary since every pixel is covered anyway,
      // but it's a workaround for bug webrtc:5147.
      GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
      drawer.drawOes(oesTextureId, transformationMatrix, width, height, 0, 0, width, height);
      eglBase.swapBuffers(TimeUnit.MICROSECONDS.toNanos(presentationTimestampUs));
      return true;
    } catch (RuntimeException e) {
      Logging.e(TAG, "encodeTexture failed", e);
      return false;
    }
  }

It draws into MediaCodec's input Surface via OpenGL, which feeds the image data into OMX for encoding.

然后調(diào)用DeliverPendingOutputs函數(shù),定義如下:

bool MediaCodecVideoEncoder::DeliverPendingOutputs(JNIEnv* jni) {
  RTC_DCHECK_CALLED_SEQUENTIALLY(&encoder_queue_checker_);

  while (true) {
    ScopedJavaLocalRef<jobject> j_output_buffer_info =
        Java_MediaCodecVideoEncoder_dequeueOutputBuffer(
            jni, j_media_codec_video_encoder_);
    if (CheckException(jni)) {
      ALOGE << "Exception in set dequeue output buffer.";
      ProcessHWError(true /* reset_if_fallback_unavailable */);
      return WEBRTC_VIDEO_CODEC_ERROR;
    }
    if (IsNull(jni, j_output_buffer_info)) {
      break;
    }

    int output_buffer_index =
        Java_OutputBufferInfo_getIndex(jni, j_output_buffer_info);
    if (output_buffer_index == -1) {
      ProcessHWError(true /* reset_if_fallback_unavailable */);
      return false;
    }

    // Get key and config frame flags.
    ScopedJavaLocalRef<jobject> j_output_buffer =
        Java_OutputBufferInfo_getBuffer(jni, j_output_buffer_info);
    bool key_frame =
        Java_OutputBufferInfo_isKeyFrame(jni, j_output_buffer_info);

    // Get frame timestamps from a queue - for non config frames only.
    int64_t encoding_start_time_ms = 0;
    int64_t frame_encoding_time_ms = 0;
    last_output_timestamp_ms_ =
        Java_OutputBufferInfo_getPresentationTimestampUs(jni,
                                                         j_output_buffer_info) /
        rtc::kNumMicrosecsPerMillisec;
    if (!input_frame_infos_.empty()) {
      const InputFrameInfo& frame_info = input_frame_infos_.front();
      output_timestamp_ = frame_info.frame_timestamp;
      output_render_time_ms_ = frame_info.frame_render_time_ms;
      output_rotation_ = frame_info.rotation;
      encoding_start_time_ms = frame_info.encode_start_time;
      input_frame_infos_.pop_front();
    }

    // Extract payload.
    size_t payload_size = jni->GetDirectBufferCapacity(j_output_buffer.obj());
    uint8_t* payload = reinterpret_cast<uint8_t*>(
        jni->GetDirectBufferAddress(j_output_buffer.obj()));
    if (CheckException(jni)) {
      ALOGE << "Exception in get direct buffer address.";
      ProcessHWError(true /* reset_if_fallback_unavailable */);
      return WEBRTC_VIDEO_CODEC_ERROR;
    }

    // Callback - return encoded frame.
    const VideoCodecType codec_type = GetCodecType();
    EncodedImageCallback::Result callback_result(
        EncodedImageCallback::Result::OK);
    if (callback_) {
      std::unique_ptr<EncodedImage> image(
          new EncodedImage(payload, payload_size, payload_size));
      image->_encodedWidth = width_;
      image->_encodedHeight = height_;
      image->_timeStamp = output_timestamp_;
      image->capture_time_ms_ = output_render_time_ms_;
      image->rotation_ = output_rotation_;
      image->content_type_ = (codec_mode_ == VideoCodecMode::kScreensharing)
                                 ? VideoContentType::SCREENSHARE
                                 : VideoContentType::UNSPECIFIED;
      image->timing_.flags = TimingFrameFlags::kInvalid;
      image->_frameType = (key_frame ? kVideoFrameKey : kVideoFrameDelta);
      image->_completeFrame = true;
      CodecSpecificInfo info;
      memset(&info, 0, sizeof(info));
      info.codecType = codec_type;
      if (codec_type == kVideoCodecVP8) {
        info.codecSpecific.VP8.pictureId = picture_id_;
        info.codecSpecific.VP8.nonReference = false;
        info.codecSpecific.VP8.simulcastIdx = 0;
        info.codecSpecific.VP8.temporalIdx = kNoTemporalIdx;
        info.codecSpecific.VP8.layerSync = false;
        info.codecSpecific.VP8.tl0PicIdx = kNoTl0PicIdx;
        info.codecSpecific.VP8.keyIdx = kNoKeyIdx;
      } else if (codec_type == kVideoCodecVP9) {
        if (key_frame) {
          gof_idx_ = 0;
        }
        info.codecSpecific.VP9.picture_id = picture_id_;
        info.codecSpecific.VP9.inter_pic_predicted = key_frame ? false : true;
        info.codecSpecific.VP9.flexible_mode = false;
        info.codecSpecific.VP9.ss_data_available = key_frame ? true : false;
        info.codecSpecific.VP9.tl0_pic_idx = tl0_pic_idx_++;
        info.codecSpecific.VP9.temporal_idx = kNoTemporalIdx;
        info.codecSpecific.VP9.spatial_idx = kNoSpatialIdx;
        info.codecSpecific.VP9.temporal_up_switch = true;
        info.codecSpecific.VP9.inter_layer_predicted = false;
        info.codecSpecific.VP9.gof_idx =
            static_cast<uint8_t>(gof_idx_++ % gof_.num_frames_in_gof);
        info.codecSpecific.VP9.num_spatial_layers = 1;
        info.codecSpecific.VP9.spatial_layer_resolution_present = false;
        if (info.codecSpecific.VP9.ss_data_available) {
          info.codecSpecific.VP9.spatial_layer_resolution_present = true;
          info.codecSpecific.VP9.width[0] = width_;
          info.codecSpecific.VP9.height[0] = height_;
          info.codecSpecific.VP9.gof.CopyGofInfoVP9(gof_);
        }
      }
      picture_id_ = (picture_id_ + 1) & 0x7FFF;

      // Generate a header describing a single fragment.
      RTPFragmentationHeader header;
      memset(&header, 0, sizeof(header));
      if (codec_type == kVideoCodecVP8 || codec_type == kVideoCodecVP9) {
        header.VerifyAndAllocateFragmentationHeader(1);
        header.fragmentationOffset[0] = 0;
        header.fragmentationLength[0] = image->_length;
        header.fragmentationPlType[0] = 0;
        header.fragmentationTimeDiff[0] = 0;
        if (codec_type == kVideoCodecVP8) {
          int qp;
          if (vp8::GetQp(payload, payload_size, &qp)) {
            current_acc_qp_ += qp;
            image->qp_ = qp;
          }
        } else if (codec_type == kVideoCodecVP9) {
          int qp;
          if (vp9::GetQp(payload, payload_size, &qp)) {
            current_acc_qp_ += qp;
            image->qp_ = qp;
          }
        }
      } else if (codec_type == kVideoCodecH264) {
        h264_bitstream_parser_.ParseBitstream(payload, payload_size);
        int qp;
        if (h264_bitstream_parser_.GetLastSliceQp(&qp)) {
          current_acc_qp_ += qp;
          image->qp_ = qp;
        }
        // For H.264 search for start codes.
        const std::vector<H264::NaluIndex> nalu_idxs =
            H264::FindNaluIndices(payload, payload_size);
        if (nalu_idxs.empty()) {
          ALOGE << "Start code is not found!";
          ALOGE << "Data:" <<  image->_buffer[0] << " " << image->_buffer[1]
              << " " << image->_buffer[2] << " " << image->_buffer[3]
              << " " << image->_buffer[4] << " " << image->_buffer[5];
          ProcessHWError(true /* reset_if_fallback_unavailable */);
          return false;
        }
        header.VerifyAndAllocateFragmentationHeader(nalu_idxs.size());
        for (size_t i = 0; i < nalu_idxs.size(); i++) {
          header.fragmentationOffset[i] = nalu_idxs[i].payload_start_offset;
          header.fragmentationLength[i] = nalu_idxs[i].payload_size;
          header.fragmentationPlType[i] = 0;
          header.fragmentationTimeDiff[i] = 0;
        }
      }

      callback_result = callback_->OnEncodedImage(*image, &info, &header);
    }

    // Return output buffer back to the encoder.
    bool success = Java_MediaCodecVideoEncoder_releaseOutputBuffer(
        jni, j_media_codec_video_encoder_, output_buffer_index);
    if (CheckException(jni) || !success) {
      ProcessHWError(true /* reset_if_fallback_unavailable */);
      return false;
    }

    // Print per frame statistics.
    if (encoding_start_time_ms > 0) {
      frame_encoding_time_ms = rtc::TimeMillis() - encoding_start_time_ms;
    }
    if (frames_encoded_ < kMaxEncodedLogFrames) {
      int current_latency = static_cast<int>(last_input_timestamp_ms_ -
                                             last_output_timestamp_ms_);
      ALOGD << "Encoder frame out # " << frames_encoded_
            << ". Key: " << key_frame << ". Size: " << payload_size
            << ". TS: " << static_cast<int>(last_output_timestamp_ms_)
            << ". Latency: " << current_latency
            << ". EncTime: " << frame_encoding_time_ms;
    }

    // Calculate and print encoding statistics - every 3 seconds.
    frames_encoded_++;
    current_frames_++;
    current_bytes_ += payload_size;
    current_encoding_time_ms_ += frame_encoding_time_ms;
    LogStatistics(false);

    // Errors in callback_result are currently ignored.
    if (callback_result.drop_next_frame)
      drop_next_input_frame_ = true;
  }
  return true;
}

The main flow of DeliverPendingOutputs is as follows:

  1. Call the Java-layer MediaCodecVideoEncoder's dequeueOutputBuffer to pull encoded data out of the encoder, wrapped as an OutputBufferInfo.
  2. Convert the OutputBufferInfo into an EncodedImage.
  3. Invoke callback_'s OnEncodedImage to distribute the EncodedImage. The callback_ member is a VCMEncodedFrameCallback object, and its OnEncodedImage ultimately hands the EncodedImage to VideoSendStreamImpl, whose OnEncodedImage is defined as follows:

EncodedImageCallback::Result VideoSendStreamImpl::OnEncodedImage(
    const EncodedImage& encoded_image,
    const CodecSpecificInfo* codec_specific_info,
    const RTPFragmentationHeader* fragmentation) {
  // Encoded is called on whatever thread the real encoder implementation run
  // on. In the case of hardware encoders, there might be several encoders
  // running in parallel on different threads.
  size_t simulcast_idx = 0;
  if (codec_specific_info->codecType == kVideoCodecVP8) {
    simulcast_idx = codec_specific_info->codecSpecific.VP8.simulcastIdx;
  }
  if (config_->post_encode_callback) {
    config_->post_encode_callback->EncodedFrameCallback(EncodedFrame(
        encoded_image._buffer, encoded_image._length, encoded_image._frameType,
        simulcast_idx, encoded_image._timeStamp));
  }
  {
    rtc::CritScope lock(&encoder_activity_crit_sect_);
    if (check_encoder_activity_task_)
      check_encoder_activity_task_->UpdateEncoderActivity();
  }

  protection_bitrate_calculator_.UpdateWithEncodedData(encoded_image);
  EncodedImageCallback::Result result = payload_router_.OnEncodedImage(
      encoded_image, codec_specific_info, fragmentation);

  RTC_DCHECK(codec_specific_info);

  int layer = codec_specific_info->codecType == kVideoCodecVP8
                  ? codec_specific_info->codecSpecific.VP8.simulcastIdx
                  : 0;
  {
    rtc::CritScope lock(&ivf_writers_crit_);
    if (file_writers_[layer].get()) {
      bool ok = file_writers_[layer]->WriteFrame(
          encoded_image, codec_specific_info->codecType);
      RTC_DCHECK(ok);
    }
  }

  return result;
}

payload_router_ is a PayloadRouter object; this is where the subsequent RTP packetization and transmission work is done.

Encoding Control

Bitrate Control

MediaCodec exposes only a few rate-control-related interfaces: one is setting the target bitrate and bitrate control mode at configure time; the other is adjusting the target bitrate dynamically (API 19+).

Setting the target bitrate and bitrate control mode at configure time

mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, bitRate);
mediaFormat.setInteger(MediaFormat.KEY_BITRATE_MODE,
        MediaCodecInfo.EncoderCapabilities.BITRATE_MODE_VBR);
// other configuration

mVideoCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);

Three bitrate control modes are defined in the MediaCodecInfo.EncoderCapabilities class, and the framework layer has another set of names whose values map to them one-to-one (a probe snippet follows the list):

  • CQ corresponds to OMX_Video_ControlRateDisable: the codec does not control bitrate at all and preserves image quality as much as it can;
  • CBR corresponds to OMX_Video_ControlRateConstant: the encoder tries to hold the output bitrate at the configured value regardless of content, the "hold steady" behavior mentioned earlier;
  • VBR corresponds to OMX_Video_ControlRateVariable: the encoder adjusts the output bitrate dynamically according to content complexity (in practice, the amount of inter-frame change): complex scenes get a higher bitrate, simple scenes a lower one.
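Not every encoder implements every mode, so it can pay to probe support before calling configure(). A minimal sketch, assuming codecInfo is the android.media.MediaCodecInfo already chosen for an H.264 encoder and mediaFormat is the MediaFormat being built (API 21+):

// Prefer CBR (WebRTC's choice) and fall back to VBR if the codec lacks it.
MediaCodecInfo.EncoderCapabilities caps =
    codecInfo.getCapabilitiesForType("video/avc").getEncoderCapabilities();
int bitrateMode = caps.isBitrateModeSupported(
        MediaCodecInfo.EncoderCapabilities.BITRATE_MODE_CBR)
    ? MediaCodecInfo.EncoderCapabilities.BITRATE_MODE_CBR
    : MediaCodecInfo.EncoderCapabilities.BITRATE_MODE_VBR;
mediaFormat.setInteger(MediaFormat.KEY_BITRATE_MODE, bitrateMode);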

Adjusting the target bitrate dynamically

Bundle param = new Bundle();
param.putInt(MediaCodec.PARAMETER_KEY_VIDEO_BITRATE, bitrate);
mediaCodec.setParameters(param);

The APIs are simple enough, but which mode should we actually use?

  1. For high quality requirements where bandwidth does not matter (e.g., recording to a local file) and the decoder tolerates drastic bitrate swings, CQ is clearly the way to go.
  2. VBR lets the output bitrate fluctuate within a range; it improves blocking artifacts for small camera shake but is powerless against violent shake, and repeatedly lowering the target makes the bitrate plummet. If that is unacceptable, VBR is not a good choice.
  3. WebRTC uses CBR. Stability and predictability are CBR's strengths, and once the output is stable and predictable we can build fairly reliable control of our own on top of it.
  4. On blocking artifacts: since VBR's bitrate has room to fluctuate, it can mitigate blocking to some degree, but against drastic changes in the video content VBR can only look on helplessly.

WebRTC's approach is to read the QP value of every output frame. A high QP means the content is too complex for the current settings; if the QP stays above an upper bound, the encoder is restarted with a lower output resolution. A low QP means the content is too simple; if the QP stays below a lower bound, the encoder is likewise restarted with a higher output resolution. For how the QP is extracted, see the WebRTC code at master/webrtc/common_video/h264/h264_bitstream_parser.cc.
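On the Java side, an encoder opts into this QP-driven scaling by implementing getScalingSettings from the VideoEncoder interface (listed further below). A minimal sketch; the thresholds are illustrative H.264-style values, not the exact numbers WebRTC ships:

  // Enable WebRTC's quality scaler and define the QP band: persistently
  // above |high| means adapt resolution down, persistently below |low|
  // means adapt back up.
  @Override
  public ScalingSettings getScalingSettings() {
    return new ScalingSettings(/* on= */ true, /* low= */ 24, /* high= */ 37);
  }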

Hardware vs. Software Encoding

Video encoding can be done in hardware or in software. On Android, MediaCodec wraps both hardware and software codecs; for software encoding, mainstream open-source libraries can also be used directly, e.g., libx264 and libopenh264 for the H264 standard, and libvpx for VP8/VP9.

WebRTC supports both hardware and software encoding. On Android, hardware encoding uses MediaCodec, wrapped and invoked at the Java layer; software encoding uses libopenh264 for H264 and libvpx for VP8/VP9, both wrapped and invoked at the native layer. WebRTC defines the following main encoding-related classes:

The main Java-layer classes are shown below:

[Figure: main encoding-related classes in the Java layer]

The main native-layer classes are shown below:

[Figure: main encoding-related classes in the native layer]

Both the Java layer and the native layer define the VideoEncoderFactory and VideoEncoder interfaces: VideoEncoderFactory creates VideoEncoders, and VideoEncoder implements the actual encoding. From an implementation standpoint, they fall into the following categories:

  • HardwareVideoEncoder and MediaCodecVideoEncoder, built on the MediaCodec facility provided by the Android system, supporting H264, VP8, and VP9.
  • H264Encoder, built on libopenh264, supporting H264.
  • VP8Encoder and VP9Encoder, built on libvpx, supporting VP8 and VP9 respectively.

The remaining classes are just wrappers. For example, the native-layer VideoEncoderWrapper wraps a Java-layer HardwareVideoEncoder: since encoding is always driven from the native layer, VideoEncoderWrapper unifies the interface there. Likewise, the native-layer VideoEncoderFactoryWrapper wraps a Java-layer VideoEncoderFactory object.

Taking the Java layer as an example, VideoEncoderFactory is defined as follows:

/** Factory for creating VideoEncoders. */
public interface VideoEncoderFactory {
  /** Creates an encoder for the given video codec. */
  @CalledByNative VideoEncoder createEncoder(VideoCodecInfo info);

  /**
   * Enumerates the list of supported video codecs. This method will only be called once and the
   * result will be cached.
   */
  @CalledByNative VideoCodecInfo[] getSupportedCodecs();
}

createEncoder creates the matching encoder from a VideoCodecInfo; getSupportedCodecs returns the supported codec types, where "type" means the coding standard, e.g., H264, VP8, VP9. This information is used to generate the SDP that negotiates which codec the session uses. Two factories are actually defined, HardwareVideoEncoderFactory and SoftwareVideoEncoderFactory; DefaultVideoEncoderFactory is just a wrapper around them. A given VideoEncoderFactory creates a specific VideoEncoder: HardwareVideoEncoderFactory creates a HardwareVideoEncoder, while SoftwareVideoEncoderFactory creates a VP8Encoder or a VP9Encoder, decided by the info parameter, as the sketch after this paragraph illustrates.
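The dispatch SoftwareVideoEncoderFactory performs can be sketched roughly as follows (simplified; the real factory also implements getSupportedCodecs):

  public VideoEncoder createEncoder(VideoCodecInfo info) {
    // libvpx-backed software encoders; anything else is unsupported here.
    if (info.name.equalsIgnoreCase("VP8")) {
      return new VP8Encoder();
    }
    if (info.name.equalsIgnoreCase("VP9")) {
      return new VP9Encoder();
    }
    return null;
  }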

The main definition of VideoEncoder is as follows:

  /**
   * Initializes the encoding process. Call before any calls to encode.
   */
  @CalledByNative VideoCodecStatus initEncode(Settings settings, Callback encodeCallback);

  /**
   * Releases the encoder. No more calls to encode will be made after this call.
   */
  @CalledByNative VideoCodecStatus release();

  /**
   * Requests the encoder to encode a frame.
   */
  @CalledByNative VideoCodecStatus encode(VideoFrame frame, EncodeInfo info);

  /**
   * Informs the encoder of the packet loss and the round-trip time of the network.
   *
   * @param packetLoss How many packets are lost on average per 255 packets.
   * @param roundTripTimeMs Round-trip time of the network in milliseconds.
   */
  @CalledByNative VideoCodecStatus setChannelParameters(short packetLoss, long roundTripTimeMs);

  /** Sets the bitrate allocation and the target framerate for the encoder. */
  @CalledByNative VideoCodecStatus setRateAllocation(BitrateAllocation allocation, int framerate);

  /** Any encoder that wants to use WebRTC provided quality scaler must implement this method. */
  @CalledByNative ScalingSettings getScalingSettings();

  /**
   * Should return a descriptive name for the implementation. Gets called once and cached. May be
   * called from arbitrary thread.
   */
  @CalledByNative String getImplementationName();
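To make the contract concrete, a skeletal implementation might look like this (a sketch only, not WebRTC code; a real encoder such as HardwareVideoEncoder does substantially more in encode()):

class SketchVideoEncoder implements VideoEncoder {
  // initEncode() hands us the Callback that encoded frames are reported
  // through; encode() drives the actual codec and fires that callback.
  private Callback encodeCallback;

  @Override
  public VideoCodecStatus initEncode(Settings settings, Callback encodeCallback) {
    this.encodeCallback = encodeCallback;
    return VideoCodecStatus.OK;
  }

  @Override
  public VideoCodecStatus release() {
    return VideoCodecStatus.OK;
  }

  @Override
  public VideoCodecStatus encode(VideoFrame frame, EncodeInfo info) {
    // Feed |frame| to the underlying codec here; once output is ready,
    // wrap it in an EncodedImage and deliver it:
    //   encodeCallback.onEncodedFrame(encodedImage, codecSpecificInfo);
    return VideoCodecStatus.OK;
  }

  @Override
  public VideoCodecStatus setChannelParameters(short packetLoss, long roundTripTimeMs) {
    return VideoCodecStatus.OK;
  }

  @Override
  public VideoCodecStatus setRateAllocation(BitrateAllocation allocation, int framerate) {
    return VideoCodecStatus.OK;
  }

  @Override
  public ScalingSettings getScalingSettings() {
    return new ScalingSettings(/* on= */ false);
  }

  @Override
  public String getImplementationName() {
    return "SketchVideoEncoder";
  }
}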

In the native layer, EncoderAdapter's internal_encoder_factory_ member is an InternalEncoderFactory object and its external_encoder_factory_ member is a CricketToWebRtcEncoderFactory object; CricketToWebRtcEncoderFactory's external_encoder_factory_ member is a MediaCodecVideoEncoderFactory object; VideoEncoderFactoryWrapper's encoder_factory_ member is a Java-layer VideoEncoderFactory object; and VideoEncoderWrapper's encoder_ member is a Java-layer VideoEncoder object.

With so many VideoEncoderFactory and VideoEncoder variants defined, which one is actually used? That depends on the parameters passed to the WebRTC API, the hardware platform, and the Android OS version.

The parameters are mainly those around PeerConnectionFactory, for example:

  public static void initializeFieldTrials(String fieldTrialsInitString) {
    nativeInitializeFieldTrials(fieldTrialsInitString);
  }

The value of fieldTrialsInitString affects the behavior of VideoEncoderSoftwareFallbackWrapper. The other input is the encoderFactory value passed to the PeerConnectionFactory constructor.

The relevant code is shown below:

  public PeerConnectionFactory(
      Options options, VideoEncoderFactory encoderFactory, VideoDecoderFactory decoderFactory) {
    checkInitializeHasBeenCalled();
    nativeFactory = nativeCreatePeerConnectionFactory(options, encoderFactory, decoderFactory);
    if (nativeFactory == 0) {
      throw new RuntimeException("Failed to initialize PeerConnectionFactory!");
    }
  }

jlong CreatePeerConnectionFactoryForJava(
    JNIEnv* jni,
    const JavaParamRef<jobject>& joptions,
    const JavaParamRef<jobject>& jencoder_factory,
    const JavaParamRef<jobject>& jdecoder_factory,
    rtc::scoped_refptr<AudioProcessing> audio_processor) {

  cricket::WebRtcVideoEncoderFactory* legacy_video_encoder_factory = nullptr;
  cricket::WebRtcVideoDecoderFactory* legacy_video_decoder_factory = nullptr;
  std::unique_ptr<cricket::MediaEngineInterface> media_engine;
  if (jencoder_factory.is_null() && jdecoder_factory.is_null()) {
    // This uses the legacy API, which automatically uses the internal SW
    // codecs in WebRTC.
    if (video_hw_acceleration_enabled) {
      legacy_video_encoder_factory = CreateLegacyVideoEncoderFactory();
      legacy_video_decoder_factory = CreateLegacyVideoDecoderFactory();
    }
    media_engine.reset(CreateMediaEngine(
        adm, audio_encoder_factory, audio_decoder_factory,
        legacy_video_encoder_factory, legacy_video_decoder_factory, audio_mixer,
        audio_processor));
  } else {
    // This uses the new API, does not automatically include software codecs.
    std::unique_ptr<VideoEncoderFactory> video_encoder_factory = nullptr;
    if (jencoder_factory.is_null()) {
      legacy_video_encoder_factory = CreateLegacyVideoEncoderFactory();
      video_encoder_factory = std::unique_ptr<VideoEncoderFactory>(
          WrapLegacyVideoEncoderFactory(legacy_video_encoder_factory));
    } else {
      video_encoder_factory = std::unique_ptr<VideoEncoderFactory>(
          CreateVideoEncoderFactory(jni, jencoder_factory));
    }

    std::unique_ptr<VideoDecoderFactory> video_decoder_factory = nullptr;
    if (jdecoder_factory.is_null()) {
      legacy_video_decoder_factory = CreateLegacyVideoDecoderFactory();
      video_decoder_factory = std::unique_ptr<VideoDecoderFactory>(
          WrapLegacyVideoDecoderFactory(legacy_video_decoder_factory));
    } else {
      video_decoder_factory = std::unique_ptr<VideoDecoderFactory>(
          CreateVideoDecoderFactory(jni, jdecoder_factory));
    }

    rtc::scoped_refptr<AudioDeviceModule> adm_scoped = nullptr;
    media_engine.reset(CreateMediaEngine(
        adm_scoped, audio_encoder_factory, audio_decoder_factory,
        std::move(video_encoder_factory), std::move(video_decoder_factory),
        audio_mixer, audio_processor));
  }

  rtc::scoped_refptr<PeerConnectionFactoryInterface> factory(
      CreateModularPeerConnectionFactory(
          network_thread.get(), worker_thread.get(), signaling_thread.get(),
          std::move(media_engine), std::move(call_factory),
          std::move(rtc_event_log_factory)));
  RTC_CHECK(factory) << "Failed to create the peer connection factory; "
                     << "WebRTC/libjingle init likely failed on this device";
  // TODO(honghaiz): Maybe put the options as the argument of
  // CreatePeerConnectionFactory.
  if (has_options) {
    factory->SetOptions(options);
  }
  OwnedFactoryAndThreads* owned_factory = new OwnedFactoryAndThreads(
      std::move(network_thread), std::move(worker_thread),
      std::move(signaling_thread), legacy_video_encoder_factory,
      legacy_video_decoder_factory, network_monitor_factory, factory.release());
  owned_factory->InvokeJavaCallbacksOnFactoryThreads();
  return jlongFromPointer(owned_factory);
}

As we can see, the encoderFactory argument passed to the PeerConnectionFactory constructor directly determines which VideoEncoderFactory is used, and hence which VideoEncoder. The demo invokes it as follows:

    if (peerConnectionParameters.videoCodecHwAcceleration) {
      encoderFactory = new DefaultVideoEncoderFactory(
          rootEglBase.getEglBaseContext(), true /* enableIntelVp8Encoder */, enableH264HighProfile);
      decoderFactory = new DefaultVideoDecoderFactory(rootEglBase.getEglBaseContext());
    } else {
      encoderFactory = new SoftwareVideoEncoderFactory();
      decoderFactory = new SoftwareVideoDecoderFactory();
    }

    factory = new PeerConnectionFactory(options, encoderFactory, decoderFactory);

So when hardware acceleration is enabled, DefaultVideoEncoderFactory is used, and its createEncoder first tries HardwareVideoEncoderFactory and then falls back to SoftwareVideoEncoderFactory, as shown below:

  public VideoEncoder createEncoder(VideoCodecInfo info) {
    final VideoEncoder videoEncoder = hardwareVideoEncoderFactory.createEncoder(info);
    if (videoEncoder != null) {
      return videoEncoder;
    }
    return softwareVideoEncoderFactory.createEncoder(info);
  }
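
A quick way to see which implementation wins on a given device is to probe the factory directly. A rough sketch, assuming the VideoEncoderFactory.getSupportedCodecs() and VideoEncoder.getImplementationName() signatures of this source snapshot ("EncoderProbe" is just a hypothetical log tag):

  VideoEncoderFactory factory = new DefaultVideoEncoderFactory(
      rootEglBase.getEglBaseContext(), true /* enableIntelVp8Encoder */, false);
  for (VideoCodecInfo info : factory.getSupportedCodecs()) {
    VideoEncoder encoder = factory.createEncoder(info);
    // null means neither a hardware nor a software encoder could be created.
    Logging.d("EncoderProbe",
        info.name + " -> " + (encoder == null ? "none" : encoder.getImplementationName()));
  }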

In other words, software encoding is used only when hardware encoding is unavailable, and whether hardware encoding is available depends on whether the hardware platform and the Android system version support the requested codec, as shown below:

  public VideoEncoder createEncoder(VideoCodecInfo input) {
    VideoCodecType type = VideoCodecType.valueOf(input.name);
    MediaCodecInfo info = findCodecForType(type);

    if (info == null) {
      // No hardware support for this type.
      // TODO(andersc): This is for backwards compatibility. Remove when clients have migrated to
      // new DefaultVideoEncoderFactory.
      if (fallbackToSoftware) {
        SoftwareVideoEncoderFactory softwareVideoEncoderFactory = new SoftwareVideoEncoderFactory();
        return softwareVideoEncoderFactory.createEncoder(input);
      } else {
        return null;
      }
    }

    String codecName = info.getName();
    String mime = type.mimeType();
    Integer surfaceColorFormat = MediaCodecUtils.selectColorFormat(
        MediaCodecUtils.TEXTURE_COLOR_FORMATS, info.getCapabilitiesForType(mime));
    Integer yuvColorFormat = MediaCodecUtils.selectColorFormat(
        MediaCodecUtils.ENCODER_COLOR_FORMATS, info.getCapabilitiesForType(mime));

    if (type == VideoCodecType.H264) {
      boolean isHighProfile = nativeIsSameH264Profile(input.params, getCodecProperties(type, true))
          && isH264HighProfileSupported(info);
      boolean isBaselineProfile =
          nativeIsSameH264Profile(input.params, getCodecProperties(type, false));

      if (!isHighProfile && !isBaselineProfile) {
        return null;
      }
    }

    return new HardwareVideoEncoder(codecName, type, surfaceColorFormat, yuvColorFormat,
        input.params, getKeyFrameIntervalSec(type), getForcedKeyFrameIntervalMs(type, codecName),
        createBitrateAdjuster(type, codecName), sharedContext);
  }

VideoCodecInfo's name carries the codec type to use, e.g. H264, VP8, or VP9, and maps to a VideoCodecType, as shown below:

enum VideoCodecType {
  VP8("video/x-vnd.on2.vp8"),
  VP9("video/x-vnd.on2.vp9"),
  H264("video/avc");

  private final String mimeType;

  private VideoCodecType(String mimeType) {
    this.mimeType = mimeType;
  }

  String mimeType() {
    return mimeType;
  }
}
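
So resolving a codec name to a MediaCodec MIME type is just an enum lookup (VideoCodecType is package-private, so this only compiles inside the org.webrtc package):

  String mime = VideoCodecType.valueOf("H264").mimeType(); // "video/avc"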

findCodecForType checks whether the hardware platform and Android system version support the codec type; if they do, a HardwareVideoEncoder is used, otherwise SoftwareVideoEncoderFactory is asked to create the corresponding software encoder. findCodecForType is defined as follows:

  private MediaCodecInfo findCodecForType(VideoCodecType type) {
    for (int i = 0; i < MediaCodecList.getCodecCount(); ++i) {
      MediaCodecInfo info = null;
      try {
        info = MediaCodecList.getCodecInfoAt(i);
      } catch (IllegalArgumentException e) {
        Logging.e(TAG, "Cannot retrieve encoder codec info", e);
      }

      if (info == null || !info.isEncoder()) {
        continue;
      }

      if (isSupportedCodec(info, type)) {
        return info;
      }
    }
    return null; // No support for this type.
  }

isSupportedCodec is defined as follows:

  private boolean isSupportedCodec(MediaCodecInfo info, VideoCodecType type) {
    if (!MediaCodecUtils.codecSupportsType(info, type)) {
      return false;
    }
    // Check for a supported color format.
    if (MediaCodecUtils.selectColorFormat(
            MediaCodecUtils.ENCODER_COLOR_FORMATS, info.getCapabilitiesForType(type.mimeType()))
        == null) {
      return false;
    }
    return isHardwareSupportedInCurrentSdk(info, type);
  }

It first calls codecSupportsType to match the mimeType, as shown below:

  static boolean codecSupportsType(MediaCodecInfo info, VideoCodecType type) {
    for (String mimeType : info.getSupportedTypes()) {
      if (type.mimeType().equals(mimeType)) {
        return true;
      }
    }
    return false;
  }

It then calls isHardwareSupportedInCurrentSdk to check the hardware platform and the Android system version, as shown below:

  private boolean isHardwareSupportedInCurrentSdk(MediaCodecInfo info, VideoCodecType type) {
    switch (type) {
      case VP8:
        return isHardwareSupportedInCurrentSdkVp8(info);
      case VP9:
        return isHardwareSupportedInCurrentSdkVp9(info);
      case H264:
        return isHardwareSupportedInCurrentSdkH264(info);
    }
    return false;
  }

For H264 this dispatches to isHardwareSupportedInCurrentSdkH264, as shown below:

  private boolean isHardwareSupportedInCurrentSdkH264(MediaCodecInfo info) {
    // First, H264 hardware might perform poorly on this model.
    if (H264_HW_EXCEPTION_MODELS.contains(Build.MODEL)) {
      return false;
    }
    String name = info.getName();
    // QCOM H264 encoder is supported in KITKAT or later.
    return (name.startsWith(QCOM_PREFIX) && Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT)
        // Exynos H264 encoder is supported in LOLLIPOP or later.
        || (name.startsWith(EXYNOS_PREFIX)
               && Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP);
  }
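
For reference, the constants this check relies on look like the following in the source snapshot analyzed here (treat the exact values, especially the exception-model list, as assumptions to be verified per revision):

  private static final String QCOM_PREFIX = "OMX.qcom.";
  private static final String EXYNOS_PREFIX = "OMX.Exynos.";
  // Device models on which H264 hardware encoding is known to misbehave.
  private static final List<String> H264_HW_EXCEPTION_MODELS =
      Arrays.asList("SAMSUNG-SGH-I337", "Nexus 7", "Nexus 4");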

As we can see, this is where the hardware platform and Android system version checks are made: known-problematic device models are excluded, and only QCOM encoders on KitKat or later and Exynos encoders on Lollipop or later are accepted.

All in all, which encoder is actually used depends on the parameters passed through the WebRTC API, the hardware platform, and the Android system version.

Summary

This article analyzed the flow of WebRTC's video encoding module. The flow amounts to creating a series of related objects and then wiring up the encoder's input and output, with the VideoStreamEncoder object bridging the two. On the input side, VideoStreamEncoder is registered with VideoTrack as a VideoSinkInterface, which is how it receives image data. On the output side, VCMEncodedFrameCallback is registered with MediaCodecVideoEncoder, and the encoded bitstream flows back through VideoStreamEncoder, now acting as an EncodedImageCallback, to be packetized and transmitted, work that is handed off to VideoSendStreamImpl.
