iOS WebRTC: Avoiding the Microphone Permission Prompt When Only Subscribing to Streams

Background

While implementing live streaming in our app with OWT (Open WebRTC Toolkit), we found that as soon as the client joined a room and subscribed to a stream, the app requested the user's microphone permission. This is hostile to users who only want to watch and have no intention of going on mic. The behavior we want is to request microphone permission only when the user actually goes on mic, and never touch the microphone otherwise.

Cause

Reading the source, we found that in the official WebRTC SDK, once an AudioTrack is added to an RTCPeerConnection, WebRTC attempts to initialize the audio input and output.
Once the audio channel is established, WebRTC automatically handles capture, transport, and playback.
RTCAudioSession provides a useManualAudio property; when it is set to true, the audio input/output switch is controlled by the isAudioEnabled property instead.
However, isAudioEnabled toggles input and output together; the two cannot be controlled separately (see the sketch below).
Our product needs the microphone to stay off while the app is only subscribing to streams, and to request microphone permission only when publishing (going on mic for co-streaming and similar features) actually requires it.
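
For reference, driving that existing switch from the app looks like this — a minimal sketch against the stock WebRTC Objective-C SDK (RTCAudioSession, useManualAudio, and isAudioEnabled are real API; the snippet itself is illustrative):

// App-side sketch, not part of the patch below.
#import <WebRTC/RTCAudioSession.h>

RTCAudioSession *session = [RTCAudioSession sharedInstance];
// Take manual control: WebRTC no longer starts the audio unit on its own.
session.useManualAudio = YES;
// But this single switch gates capture AND playback together — muting the
// microphone this way also kills remote audio, which is not what we want.
session.isAudioEnabled = NO;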

According to replies from the WebRTC developers, WebRTC is designed for full-duplex VoIP applications, so initializing the microphone is mandatory, and no API is provided to change this behavior.


[Screenshots: replies from the WebRTC team on the official issue tracker, confirming this design decision]

Solution

There is currently no official API for this; the corresponding low-level code is simply not implemented:

// sdk/objc/native/src/audio/audio_device_ios.mm
int32_t AudioDeviceIOS::SetMicrophoneMute(bool enable) {
  RTC_NOTREACHED() << "Not implemented";
  return -1;
}

Analyzing the source, the Audio Unit handling lives in VoiceProcessingAudioUnit. The OnDeliverRecordedData callback receives the recorded audio and forwards it to AudioDeviceIOS through the VoiceProcessingAudioUnitObserver interface:

// sdk/objc/native/src/audio/voice_processing_audio_unit.mm
OSStatus VoiceProcessingAudioUnit::OnDeliverRecordedData(
    void* in_ref_con,
    AudioUnitRenderActionFlags* flags,
    const AudioTimeStamp* time_stamp,
    UInt32 bus_number,
    UInt32 num_frames,
    AudioBufferList* io_data) {
  VoiceProcessingAudioUnit* audio_unit =
      static_cast<VoiceProcessingAudioUnit*>(in_ref_con);
  return audio_unit->NotifyDeliverRecordedData(flags, time_stamp, bus_number,
                                               num_frames, io_data);
}

Characteristics of the I/O Unit

[Figure: a Voice Processing I/O unit with two elements — element 1 connects to the microphone, element 0 to the speaker; each element has an input scope and an output scope]

The I/O unit in the figure above has two elements, and they are independent: you can enable or disable each element separately with the enable I/O property (kAudioOutputUnitProperty_EnableIO) as your application requires. Each element has an input scope and an output scope.

  • Element 1 of the I/O unit connects to the audio input hardware, represented by the microphone in the figure. Developers can access and control only its output scope.
  • Element 0 of the I/O unit connects to the audio output hardware, represented by the speaker in the figure. Developers can access and control only its input scope.

A handy mnemonic: the input element is element 1 (the letter "I" in "Input" looks like a 1), and the output element is element 0 (the letter "O" in "Output" looks like a 0).
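
WebRTC's iOS audio code captures this convention in two constants, which the code below uses as kInputBus and kOutputBus:

// sdk/objc/native/src/audio/voice_processing_audio_unit.mm
// A VP I/O unit's bus 1 connects to input hardware (microphone).
static const AudioUnitElement kInputBus = 1;
// A VP I/O unit's bus 0 connects to output hardware (speaker).
static const AudioUnitElement kOutputBus = 0;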

Analyzing the Audio Unit shows that keeping the microphone off is actually simple: just leave the input disabled when configuring the audio unit at initialization. The code below introduces an isMicrophoneMute flag, read from RTCAudioSessionConfiguration.
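
Before the audio-unit code can read that flag, RTCAudioSessionConfiguration has to carry it. A minimal sketch of the addition (this property is ours, not stock WebRTC):

// sdk/objc/components/audio/RTCAudioSessionConfiguration.h (addition)
// YES = the app does not want the microphone; defaults to NO so existing
// publishing paths keep working unchanged.
@property(nonatomic, assign) BOOL isMicrophoneMute;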

Code example:

// sdk/objc/native/src/audio/voice_processing_audio_unit.mm

bool VoiceProcessingAudioUnit::Init() {
  RTC_DCHECK_EQ(state_, kInitRequired);

  // Create an audio component description to identify the Voice Processing
  // I/O audio unit.
  AudioComponentDescription vpio_unit_description;
  vpio_unit_description.componentType = kAudioUnitType_Output;
  vpio_unit_description.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
  vpio_unit_description.componentManufacturer = kAudioUnitManufacturer_Apple;
  vpio_unit_description.componentFlags = 0;
  vpio_unit_description.componentFlagsMask = 0;

  // Obtain an audio unit instance given the description.
  AudioComponent found_vpio_unit_ref =
      AudioComponentFindNext(nullptr, &vpio_unit_description);

  // Create a Voice Processing IO audio unit.
  OSStatus result = noErr;
  result = AudioComponentInstanceNew(found_vpio_unit_ref, &vpio_unit_);
  if (result != noErr) {
    vpio_unit_ = nullptr;
    RTCLogError(@"AudioComponentInstanceNew failed. Error=%ld.", (long)result);
    return false;
  }

  // Enable input on the input scope of the input element, but only when the
  // microphone is in use. When the flag says the microphone is muted, the
  // input bus stays disabled and iOS never asks for microphone permission.
  RTCAudioSessionConfiguration* webRTCConfiguration =
      [RTCAudioSessionConfiguration webRTCConfiguration];
  if (!webRTCConfiguration.isMicrophoneMute) {
    RTCLog(@"Enabling input on the input scope of the input element.");
    UInt32 enable_input = 1;
    result = AudioUnitSetProperty(vpio_unit_, kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Input, kInputBus, &enable_input,
                                  sizeof(enable_input));
    if (result != noErr) {
      DisposeAudioUnit();
      RTCLogError(@"Failed to enable input on input scope of input element. "
                   "Error=%ld.",
                  (long)result);
      return false;
    }
  } else {
    RTCLog(@"Microphone is muted; leaving input disabled.");
  }

  // Enable output on the output scope of the output element.
  UInt32 enable_output = 1;
  result = AudioUnitSetProperty(vpio_unit_, kAudioOutputUnitProperty_EnableIO,
                                kAudioUnitScope_Output, kOutputBus,
                                &enable_output, sizeof(enable_output));
  if (result != noErr) {
    DisposeAudioUnit();
    RTCLogError(@"Failed to enable output on output scope of output element. "
                 "Error=%ld.",
                (long)result);
    return false;
  }

  // Specify the callback function that provides audio samples to the audio
  // unit.
  AURenderCallbackStruct render_callback;
  render_callback.inputProc = OnGetPlayoutData;
  render_callback.inputProcRefCon = this;
  result = AudioUnitSetProperty(
      vpio_unit_, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input,
      kOutputBus, &render_callback, sizeof(render_callback));
  if (result != noErr) {
    DisposeAudioUnit();
    RTCLogError(@"Failed to specify the render callback on the output bus. "
                 "Error=%ld.",
                (long)result);
    return false;
  }

  // Disable AU buffer allocation for the recorder, we allocate our own.
  // TODO(henrika): not sure that it actually saves resource to make this call.
  // Only relevant when the input bus is active.
  if (!webRTCConfiguration.isMicrophoneMute) {
    UInt32 flag = 0;
    result = AudioUnitSetProperty(
        vpio_unit_, kAudioUnitProperty_ShouldAllocateBuffer,
        kAudioUnitScope_Output, kInputBus, &flag, sizeof(flag));
    if (result != noErr) {
      DisposeAudioUnit();
      RTCLogError(@"Failed to disable buffer allocation on the input bus. "
                   "Error=%ld.",
                  (long)result);
      return false;
    }
  }


  // Specify the callback to be called by the I/O thread when input audio is
  // available. The recorded samples can then be obtained by calling the
  // AudioUnitRender() method. Skipped entirely while the microphone is muted,
  // so no recorded data is ever delivered.
  if (!webRTCConfiguration.isMicrophoneMute) {
    AURenderCallbackStruct input_callback;
    input_callback.inputProc = OnDeliverRecordedData;
    input_callback.inputProcRefCon = this;
    result = AudioUnitSetProperty(vpio_unit_,
                                  kAudioOutputUnitProperty_SetInputCallback,
                                  kAudioUnitScope_Global, kInputBus,
                                  &input_callback, sizeof(input_callback));
    if (result != noErr) {
      DisposeAudioUnit();
      RTCLogError(@"Failed to specify the input callback on the input bus. "
                   "Error=%ld.",
                  (long)result);
      return false;
    }
  }

  state_ = kUninitialized;
  return true;
}

The same flag also gates the stream-format setup on the input bus in Initialize():

// sdk/objc/native/src/audio/voice_processing_audio_unit.mm

bool VoiceProcessingAudioUnit::Initialize(Float64 sample_rate) {
  RTC_DCHECK_GE(state_, kUninitialized);
  RTCLog(@"Initializing audio unit with sample rate: %f", sample_rate);

  OSStatus result = noErr;
  AudioStreamBasicDescription format = GetFormat(sample_rate);
  UInt32 size = sizeof(format);
#if !defined(NDEBUG)
  LogStreamDescription(format);
#endif
    
    
  // Set the format on the output scope of the input element/bus, but only
  // when the microphone is in use; a muted unit has no active input bus.
  RTCAudioSessionConfiguration* webRTCConfiguration =
      [RTCAudioSessionConfiguration webRTCConfiguration];
  if (!webRTCConfiguration.isMicrophoneMute) {
    result =
        AudioUnitSetProperty(vpio_unit_, kAudioUnitProperty_StreamFormat,
                             kAudioUnitScope_Output, kInputBus, &format, size);
    if (result != noErr) {
      RTCLogError(@"Failed to set format on output scope of input bus. "
                   "Error=%ld.",
                  (long)result);
      return false;
    }
  } else {
    RTCLog(@"Microphone is muted; skipping format setup on the input bus.");
  }
     

  // Set the format on the input scope of the output element/bus.
  result =
      AudioUnitSetProperty(vpio_unit_, kAudioUnitProperty_StreamFormat,
                           kAudioUnitScope_Input, kOutputBus, &format, size);
  if (result != noErr) {
    RTCLogError(@"Failed to set format on input scope of output bus. "
                 "Error=%ld.",
                (long)result);
    return false;
  }

  // Initialize the Voice Processing I/O unit instance.
  // Calls to AudioUnitInitialize() can fail if called back-to-back on
  // different ADM instances. The error message in this case is -66635 which is
  // undocumented. Tests have shown that calling AudioUnitInitialize a second
  // time, after a short sleep, avoids this issue.
  // See webrtc:5166 for details.
  int failed_initalize_attempts = 0;
  result = AudioUnitInitialize(vpio_unit_);
  while (result != noErr) {
    RTCLogError(@"Failed to initialize the Voice Processing I/O unit. "
                 "Error=%ld.",
                (long)result);
    ++failed_initalize_attempts;
    if (failed_initalize_attempts == kMaxNumberOfAudioUnitInitializeAttempts) {
      // Max number of initialization attempts exceeded, hence abort.
      RTCLogError(@"Too many initialization attempts.");
      return false;
    }
    RTCLog(@"Pause 100ms and try audio unit initialization again...");
    [NSThread sleepForTimeInterval:0.1f];
    result = AudioUnitInitialize(vpio_unit_);
  }
  if (result == noErr) {
    RTCLog(@"Voice Processing I/O unit is now initialized.");
  }

  // AGC should be enabled by default for Voice Processing I/O units but it is
  // checked below and enabled explicitly if needed. This scheme is used
  // to be absolutely sure that the AGC is enabled since we have seen cases
  // where only zeros are recorded and a disabled AGC could be one of the
  // reasons why it happens.
  int agc_was_enabled_by_default = 0;
  UInt32 agc_is_enabled = 0;
  result = GetAGCState(vpio_unit_, &agc_is_enabled);
  if (result != noErr) {
    RTCLogError(@"Failed to get AGC state (1st attempt). "
                 "Error=%ld.",
                (long)result);
    // Example of error code: kAudioUnitErr_NoConnection (-10876).
    // All error codes related to audio units are negative and are therefore
    // converted into a positive value to match the UMA APIs.
    RTC_HISTOGRAM_COUNTS_SPARSE_100000(
        "WebRTC.Audio.GetAGCStateErrorCode1", (-1) * result);
  } else if (agc_is_enabled) {
    // Remember that the AGC was enabled by default. Will be used in UMA.
    agc_was_enabled_by_default = 1;
  } else {
    // AGC was initially disabled => try to enable it explicitly.
    UInt32 enable_agc = 1;
    result =
        AudioUnitSetProperty(vpio_unit_,
                             kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                             kAudioUnitScope_Global, kInputBus, &enable_agc,
                             sizeof(enable_agc));
    if (result != noErr) {
      RTCLogError(@"Failed to enable the built-in AGC. "
                   "Error=%ld.",
                  (long)result);
      RTC_HISTOGRAM_COUNTS_SPARSE_100000(
          "WebRTC.Audio.SetAGCStateErrorCode", (-1) * result);
    }
    result = GetAGCState(vpio_unit_, &agc_is_enabled);
    if (result != noErr) {
      RTCLogError(@"Failed to get AGC state (2nd attempt). "
                   "Error=%ld.",
                  (long)result);
      RTC_HISTOGRAM_COUNTS_SPARSE_100000(
          "WebRTC.Audio.GetAGCStateErrorCode2", (-1) * result);
    }
  }

  // Track if the built-in AGC was enabled by default (as it should) or not.
  RTC_HISTOGRAM_BOOLEAN("WebRTC.Audio.BuiltInAGCWasEnabledByDefault",
                        agc_was_enabled_by_default);
  RTCLog(@"WebRTC.Audio.BuiltInAGCWasEnabledByDefault: %d",
         agc_was_enabled_by_default);
  // As a final step, add an UMA histogram for tracking the AGC state.
  // At this stage, the AGC should be enabled, and if it is not, more work is
  // needed to find out the root cause.
  RTC_HISTOGRAM_BOOLEAN("WebRTC.Audio.BuiltInAGCIsEnabled", agc_is_enabled);
  RTCLog(@"WebRTC.Audio.BuiltInAGCIsEnabled: %u",
         static_cast<unsigned int>(agc_is_enabled));

  state_ = kInitialized;
  return true;
}

The code above uses the isMicrophoneMute flag to decide whether the input side is set up at all.

With these changes, we can decide at initialization time whether microphone permission is needed. But that alone is far from enough to support dynamically going on and off mic.

What we need is a way to re-initialize the Audio Unit whenever the state flips. Analyzing the source shows that we can add another property, isMicrophoneMute, to RTCAudioSession.

This property is exposed through RTCAudioSession exactly like the existing isAudioEnabled property; by mirroring the isAudioEnabled plumbing we get the whole chain almost for free.
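
A sketch of the declarations this mirroring requires — both are our additions, placed next to the existing isAudioEnabled declarations:

// sdk/objc/components/audio/RTCAudioSession.h (additions)
/** Mute state of the microphone; mirrors isAudioEnabled. */
@property(nonatomic, assign) BOOL isMicrophoneMute;

// New optional method on the RTCAudioSessionDelegate protocol:
- (void)audioSession:(RTCAudioSession *)audioSession
    didChangeMicrophoneMute:(BOOL)isMicrophoneMute;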

RTCAudioSession中實(shí)現(xiàn)isMicrophoneMute屬性昔穴。

Code example:

// sdk/objc/components/audio/RTCAudioSession.mm
- (void)setIsMicrophoneMute:(BOOL)isMicrophoneMute {
  @synchronized(self) {
    if (_isMicrophoneMute == isMicrophoneMute) {
      return;
    }
    _isMicrophoneMute = isMicrophoneMute;
  }
  [self notifyDidChangeMicrophoneMute];
}

- (BOOL)isMicrophoneMute {
  @synchronized(self) {
    return _isMicrophoneMute;
  }
}

- (void)notifyDidChangeMicrophoneMute {
  for (auto delegate : self.delegates) {
    SEL sel = @selector(audioSession:didChangeMicrophoneMute:);
    if ([delegate respondsToSelector:sel]) {
      [delegate audioSession:self didChangeMicrophoneMute:self.isMicrophoneMute];
    }
  }
}

setIsMicrophoneMute forwards the change to AudioDeviceIOS through RTCNativeAudioSessionDelegateAdapter.

Code example:

// sdk/objc/components/audio/RTCNativeAudioSessionDelegateAdapter.mm
- (void)audioSession:(RTCAudioSession *)session 
    didChangeMicrophoneMute:(BOOL)isMicrophoneMute {
  _observer->OnMicrophoneMuteChange(isMicrophoneMute);
}
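
For the adapter call above to compile, the native AudioSessionObserver interface needs a matching hook — a sketch, assuming the upstream file layout:

// sdk/objc/native/src/audio/audio_session_observer.h (addition)
class AudioSessionObserver {
 public:
  // ... existing notifications (interruptions, route changes, ...) ...

  // Called when RTCAudioSession's isMicrophoneMute property changes.
  virtual void OnMicrophoneMuteChange(bool is_microphone_mute) = 0;
};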

The concrete logic lives in AudioDeviceIOS: AudioDeviceIOS::OnMicrophoneMuteChange posts the change to the worker thread, which handles it in OnMessage.

Code example:

// sdk/objc/native/src/audio/audio_device_ios.mm
void AudioDeviceIOS::OnMicrophoneMuteChange(bool is_microphone_mute) {
  RTC_DCHECK(thread_);
  thread_->Post(RTC_FROM_HERE,
                this,
                kMessageTypeMicrophoneMuteChange,
                new rtc::TypedMessageData<bool>(is_microphone_mute));
}

void AudioDeviceIOS::OnMessage(rtc::Message* msg) {
  switch (msg->message_id) {
    // ...
    case kMessageTypeMicrophoneMuteChange: {
      rtc::TypedMessageData<bool>* data = static_cast<rtc::TypedMessageData<bool>*>(msg->pdata);
      HandleMicrophoneMuteChange(data->data());
      delete data;
      break;
    }
  }
}

void AudioDeviceIOS::HandleMicrophoneMuteChange(bool is_microphone_mute) {
  RTC_DCHECK_RUN_ON(&thread_checker_);
  RTCLog(@"Handling MicrophoneMute change to %d", is_microphone_mute);
  if (is_microphone_mute) {
    // Going off mic: tear down recording and rebuild a playout-only unit.
    StopRecording();
    StopPlayout();
    InitPlayout();
    StartPlayout();
  } else {
    // Going on mic: rebuild the audio unit with recording enabled. This is
    // the moment iOS shows the microphone permission prompt.
    StopPlayout();
    InitRecording();
    StartRecording();
    StartPlayout();
  }
}
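
One last piece is needed for this to compile: the new message ID. A sketch, assuming it sits with the other message types in audio_device_ios.mm (the names of the existing entries vary by WebRTC revision):

// sdk/objc/native/src/audio/audio_device_ios.mm (addition)
enum AudioDeviceMessageType : uint32_t {
  kMessageTypeInterruptionBegin,
  // ... other existing message types ...
  kMessageTypeMicrophoneMuteChange,  // added by this patch
};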

That completes the microphone mute feature: permission is requested only when the input path is actually brought up.
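
From the app's point of view, going on and off mic now reduces to flipping one property — a usage sketch (isMicrophoneMute is the property added above, not a stock WebRTC API):

// App-side usage sketch.
RTCAudioSession *session = [RTCAudioSession sharedInstance];

// Viewer only subscribes: keep the mic off; no permission prompt appears.
session.isMicrophoneMute = YES;

// User goes on mic: recording is (re)initialized, and iOS shows the
// microphone permission prompt at this point.
session.isMicrophoneMute = NO;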
