AVFoundation Programming Guide 07: Export

Preface

If you're interested in AVFoundation material, you can follow my collection: 《AVFoundation》
You can also follow my Jianshu account.

Main Text

To read and write video and audio assets, you must use the export APIs provided by the AVFoundation framework. The AVAssetExportSession class provides an interface for simple exporting needs, such as modifying the file format or trimming the length of an asset (see Trimming and Transcoding a Movie). For more in-depth exporting needs, use the AVAssetReader and AVAssetWriter classes.
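For the simple case, an AVAssetExportSession export can be set up in a few lines. The following is only a sketch: the preset, output URL, and ten-second trim range here are illustrative assumptions, not values from this guide.

```objectivec
// Sketch: a simple trim-and-transcode export with AVAssetExportSession.
// `asset` and `outputURL` are assumed to be supplied by the caller.
AVAssetExportSession *exportSession =
    [[AVAssetExportSession alloc] initWithAsset:asset
                                     presetName:AVAssetExportPresetMediumQuality];
exportSession.outputURL = outputURL;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
// Optionally trim the export to a time range (here, the first 10 seconds).
exportSession.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(10.0, 600));
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    if (exportSession.status == AVAssetExportSessionStatusFailed) {
        // Inspect exportSession.error here.
    }
}];
```

When this one-shot interface is not flexible enough, the reader/writer classes described below give per-track control.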

Use an AVAssetReader when you want to perform an operation on the contents of an asset. For example, you might read the audio track of an asset to produce a visual representation of the waveform. To produce an asset from media such as sample buffers or still images, use an AVAssetWriter object.

Note: The asset reader and writer classes are not intended to be used for real-time processing. In fact, an asset reader cannot even be used for reading from a real-time source such as an HTTP live stream. However, if you are using an asset writer with a real-time data source, such as an AVCaptureOutput object, set the expectsMediaDataInRealTime property of your asset writer's inputs to YES. Setting this property to YES for a non-real-time data source will result in your files not being interleaved properly.
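Configuring an input for a real-time source can be sketched as follows (the capture pipeline itself is omitted, and passing nil output settings here is just one possible choice):

```objectivec
// Sketch: an asset writer input fed by a real-time source such as an AVCaptureOutput.
AVAssetWriterInput *writerInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:nil];
// Required for real-time sources; leave as NO for file-based (non-real-time) sources.
writerInput.expectsMediaDataInRealTime = YES;
```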

Reading an Asset

Each AVAssetReader object can be associated with only a single asset at a time, but this asset may contain multiple tracks. For this reason, before you begin reading, you must assign concrete subclasses of the AVAssetReaderOutput class to your asset reader in order to configure how the media data is read. The AVAssetReaderOutput base class has three concrete subclasses that you can use for your asset reading needs: AVAssetReaderTrackOutput, AVAssetReaderAudioMixOutput, and AVAssetReaderVideoCompositionOutput.

Creating the Asset Reader

All you need to initialize an AVAssetReader object is the asset that you want to read.

NSError *outError;
AVAsset *someAsset = <#AVAsset that you want to read#>;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:someAsset error:&outError];
BOOL success = (assetReader != nil);

Note: Always check that the asset reader returned to you is non-nil to ensure that it was initialized successfully. Otherwise, the error parameter (outError in the previous example) will contain the relevant error information.

Setting Up the Asset Reader Outputs

After you have created your asset reader, set up at least one output to receive the media data being read. When setting up your outputs, be sure to set the alwaysCopiesSampleData property to NO. In this way, you reap the benefits of performance improvements. In all of the examples within this chapter, this property could and should be set to NO.
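None of the listings in this chapter show the property being set explicitly; as a one-line sketch (where `trackOutput` stands in for any AVAssetReaderOutput instance you have created):

```objectivec
// Sketch: opt out of per-sample data copying for better performance.
trackOutput.alwaysCopiesSampleData = NO;
```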

If you want only to read media data from one or more tracks and potentially convert that data to a different format, use the AVAssetReaderTrackOutput class, with a single track output object for each AVAssetTrack object that you want to read from your asset. To decompress an audio track to Linear PCM with an asset reader, set up your track output as follows:

AVAsset *localAsset = assetReader.asset;
// Get the audio track to read.
AVAssetTrack *audioTrack = [[localAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
// Decompression settings for Linear PCM
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the output with the audio track and decompression settings.
AVAssetReaderOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:decompressionAudioSettings];
// Add the output to the reader if possible.
if ([assetReader canAddOutput:trackOutput])
    [assetReader addOutput:trackOutput];

Note: To read the media data from a specific asset track in the format in which it was stored, pass nil to the outputSettings parameter.

You use the AVAssetReaderAudioMixOutput and AVAssetReaderVideoCompositionOutput classes to read media data that has been mixed or composited together using an AVAudioMix object or an AVVideoComposition object, respectively. Typically, these outputs are used when your asset reader is reading from an AVComposition object.

With a single audio mix output, you can read multiple audio tracks from your asset that have been mixed together using an AVAudioMix object. To specify how the audio tracks are mixed, assign the mix to the AVAssetReaderAudioMixOutput object after initialization. The following code shows how to create an audio mix output with all of the audio tracks from your asset, decompress the audio tracks to Linear PCM, and assign an audio mix object to the output. For details on how to configure an audio mix, see Editing.

AVAudioMix *audioMix = <#An AVAudioMix that specifies how the audio tracks from the AVAsset are mixed#>;
// Assumes that assetReader was initialized with an AVComposition object.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the audio tracks to read.
NSArray *audioTracks = [composition tracksWithMediaType:AVMediaTypeAudio];
// Get the decompression settings for Linear PCM.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the audio mix output with the audio tracks and decompression settings.
AVAssetReaderOutput *audioMixOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks audioSettings:decompressionAudioSettings];
// Associate the audio mix used to mix the audio tracks being read with the output.
audioMixOutput.audioMix = audioMix;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:audioMixOutput])
    [assetReader addOutput:audioMixOutput];

Note: Passing nil for the audioSettings parameter tells the asset reader to return samples in a convenient uncompressed format. The same is true for the AVAssetReaderVideoCompositionOutput class.

The video composition output behaves in much the same way: you can read multiple video tracks from your asset that have been composited together using an AVVideoComposition object. To read the media data from multiple composited video tracks and decompress it to ARGB, set up your output as follows:

AVVideoComposition *videoComposition = <#An AVVideoComposition that specifies how the video tracks from the AVAsset are composited#>;
// Assumes assetReader was initialized with an AVComposition.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the video tracks to read.
NSArray *videoTracks = [composition tracksWithMediaType:AVMediaTypeVideo];
// Decompression settings for ARGB.
NSDictionary *decompressionVideoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB], (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary] };
// Create the video composition output with the video tracks and decompression settings.
AVAssetReaderOutput *videoCompositionOutput = [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks videoSettings:decompressionVideoSettings];
// Associate the video composition used to composite the video tracks being read with the output.
videoCompositionOutput.videoComposition = videoComposition;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:videoCompositionOutput])
    [assetReader addOutput:videoCompositionOutput];

Reading the Asset's Media Data

To start reading after setting up all of the outputs you need, call the startReading method on your asset reader. Next, retrieve the media data individually from each of your outputs using the copyNextSampleBuffer method. To start up an asset reader with a single output and read all of its media samples, do the following:

// Start the asset reader up.
[self.assetReader startReading];
BOOL done = NO;
while (!done)
{
  // Copy the next sample buffer from the reader output.
  CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
  if (sampleBuffer)
  {
    // Do something with sampleBuffer here.
    CFRelease(sampleBuffer);
    sampleBuffer = NULL;
  }
  else
  {
    // Find out why the asset reader output couldn't copy another sample buffer.
    if (self.assetReader.status == AVAssetReaderStatusFailed)
    {
      NSError *failureError = self.assetReader.error;
      // Handle the error here.
    }
    else
    {
      // The asset reader output has read all of its samples.
      done = YES;
    }
  }
}

Writing an Asset

The AVAssetWriter class writes media data from multiple sources to a single file of a specified file format. You don't need to associate your asset writer object with a specific asset, but you must use a separate asset writer for each output file that you want to create. Because an asset writer can write media data from multiple sources, you must create an AVAssetWriterInput object for each individual track that you want to write to the output file. Each AVAssetWriterInput object expects to receive data in the form of CMSampleBufferRef objects, but if you want to append CVPixelBufferRef objects to your asset writer input, use the AVAssetWriterInputPixelBufferAdaptor class.

Creating the Asset Writer

To create an asset writer, specify the URL for the output file and the desired file type. The following code shows how to initialize an asset writer to create a QuickTime movie:

NSError *outError;
NSURL *outputURL = <#NSURL object representing the URL where you want to save the video#>;
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL
                                                      fileType:AVFileTypeQuickTimeMovie
                                                         error:&outError];
BOOL success = (assetWriter != nil);

Setting Up the Asset Writer Inputs

For your asset writer to be able to write media data, you must set up at least one asset writer input. For example, if your source of media data is already vending media samples as CMSampleBufferRef objects, just use the AVAssetWriterInput class. To set up an asset writer input that compresses audio media data to 128 kbps AAC and connect it to your asset writer, do the following:

// Configure the channel layout as stereo.
AudioChannelLayout stereoChannelLayout = {
    .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
    .mChannelBitmap = 0,
    .mNumberChannelDescriptions = 0
};

// Convert the channel layout object to an NSData object.
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];

// Get the compression settings for 128 kbps AAC.
NSDictionary *compressionAudioSettings = @{
AVFormatIDKey         : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
AVEncoderBitRateKey   : [NSNumber numberWithInteger:128000],
AVSampleRateKey       : [NSNumber numberWithInteger:44100],
AVChannelLayoutKey    : channelLayoutAsData,
AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};

// Create the asset writer input with the compression settings and specify the media type as audio.
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:compressionAudioSettings];
// Add the input to the writer if possible.
if ([assetWriter canAddInput:assetWriterInput])
    [assetWriter addInput:assetWriterInput];

Note: If you want your media data to be written in the format in which it was stored, pass nil in the outputSettings parameter. Pass nil only if the asset writer was initialized with a fileType of AVFileTypeQuickTimeMovie.

Your asset writer input can optionally include some metadata, or specify a different transform for a particular track, using the metadata and transform properties respectively. For an asset writer input whose data source is a video track, you can maintain the video's original transform in the output file by doing the following:

AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
assetWriterInput.transform = videoAssetTrack.preferredTransform;

Note: Set the metadata and transform properties before you begin writing with your asset writer in order for them to take effect.

When writing media data to the output file, sometimes you may want to allocate pixel buffers. To do so, use the AVAssetWriterInputPixelBufferAdaptor class. For greatest efficiency, instead of adding pixel buffers that were allocated using a separate pool, use the pixel buffer pool provided by the pixel buffer adaptor. The following code creates a pixel buffer object working in the RGB domain that will use CGImage objects to create its pixel buffers.

NSDictionary *pixelBufferAttributes = @{
    (id)kCVPixelBufferCGImageCompatibilityKey : [NSNumber numberWithBool:YES],
    (id)kCVPixelBufferCGBitmapContextCompatibilityKey : [NSNumber numberWithBool:YES],
    (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32ARGB]
};
AVAssetWriterInputPixelBufferAdaptor *inputPixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterInput sourcePixelBufferAttributes:pixelBufferAttributes];

Note: All AVAssetWriterInputPixelBufferAdaptor objects must be connected to a single asset writer input. That asset writer input must accept media data of type AVMediaTypeVideo.
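Making use of the adaptor's pool might be sketched as follows. This assumes an `inputPixelBufferAdaptor` created as shown earlier and an already-started sample-writing session; `presentationTime` is an assumed CMTime for the frame, and the CGImage drawing step is only outlined in comments.

```objectivec
// Sketch: allocate a pixel buffer from the adaptor's pool and append it.
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                     inputPixelBufferAdaptor.pixelBufferPool,
                                                     &pixelBuffer);
if (result == kCVReturnSuccess && pixelBuffer != NULL)
{
    // Draw the CGImage into the pixel buffer here: lock the base address,
    // create a CGBitmapContext over it, draw the image, then unlock.
    if ([inputPixelBufferAdaptor.assetWriterInput isReadyForMoreMediaData])
        [inputPixelBufferAdaptor appendPixelBuffer:pixelBuffer
                              withPresentationTime:presentationTime];
    CVPixelBufferRelease(pixelBuffer);
}
```

Note that the pool may be nil until writing has started, which is why the session must be running before buffers are requested.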

Writing Media Data

When you have configured all of the inputs needed for your asset writer, you are ready to begin writing media data. As you did with the asset reader, initiate the writing process with a call to the startWriting method. You then need to start a sample-writing session with a call to the startSessionAtSourceTime: method. All writing done by an asset writer has to occur within one of these sessions, and the time range of each session defines the time range of the media data included from the source. For example, if your source is an asset reader supplying media data read from an AVAsset object and you don't want to include media data from the first half of the asset, you would do the following:

CMTime halfAssetDuration = CMTimeMultiplyByFloat64(self.asset.duration, 0.5);
[self.assetWriter startSessionAtSourceTime:halfAssetDuration];
//Implementation continues.

Normally, to conclude a writing session you must call the endSessionAtSourceTime: method. However, if your writing session goes right up to the end of your file, you can end it simply by calling the finishWriting method. To start up an asset writer with a single input and write all of its media data, do the following:

// Prepare the asset writer for writing.
[self.assetWriter startWriting];
// Start a sample-writing session.
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
 while ([self.assetWriterInput isReadyForMoreMediaData])
 {
      // Get the next sample buffer.
      CMSampleBufferRef nextSampleBuffer = [self copyNextSampleBufferToWrite];
      if (nextSampleBuffer)
      {
           // If it exists, append the next sample buffer to the output file.
           [self.assetWriterInput appendSampleBuffer:nextSampleBuffer];
           CFRelease(nextSampleBuffer);
           nextSampleBuffer = nil;
      }
      else
      {
           // Assume that lack of a next sample buffer means the sample buffer source is out of samples and mark the input as finished.
           [self.assetWriterInput markAsFinished];
           break;
      }
 }
}];

The copyNextSampleBufferToWrite method in the code above is simply a stub. The location of this stub is where you would insert the logic that returns CMSampleBufferRef objects representing the media data you want to write. One possible source of sample buffers is an asset reader output.

Reencoding Assets

You can use an asset reader and an asset writer in tandem to convert an asset from one representation to another. Using these objects, you have more control over the conversion than you do with an AVAssetExportSession object. For example, you can choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process. The first step in this process is just to set up your asset reader outputs and asset writer inputs as desired. After your asset reader and writer are fully configured, you start them both with calls to the startReading and startWriting methods, respectively. The following code snippet shows how to use a single asset writer input to write media data supplied by a single asset reader output:

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];

// Create a serialization queue for reading and writing.
dispatch_queue_t serializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);

// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:serializationQueue usingBlock:^{
 while ([self.assetWriterInput isReadyForMoreMediaData])
 {
      // Get the asset reader output's next sample buffer.
      CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
      if (sampleBuffer != NULL)
      {
           // If it exists, append this sample buffer to the output file.
           BOOL success = [self.assetWriterInput appendSampleBuffer:sampleBuffer];
           CFRelease(sampleBuffer);
           sampleBuffer = NULL;
           // Check for errors that may have occurred when appending the new sample buffer.
           if (!success && self.assetWriter.status == AVAssetWriterStatusFailed)
           {
                NSError *failureError = self.assetWriter.error;
                // Handle the error.
           }
      }
      else
      {
           // If the next sample buffer doesn't exist, find out why the asset reader output couldn't vend another one.
           if (self.assetReader.status == AVAssetReaderStatusFailed)
           {
                NSError *failureError = self.assetReader.error;
                // Handle the error here.
           }
           else
           {
                // The asset reader output must have vended all of its samples. Mark the input as finished.
                [self.assetWriterInput markAsFinished];
                break;
           }
      }
 }
}];

Putting It All Together: Using an Asset Reader and Writer in Tandem to Reencode an Asset

This brief code example illustrates how to use an asset reader and writer to reencode the first video and audio track of an asset into a new file. It shows how to:

  • Use serialization queues to handle the asynchronous nature of reading and writing audiovisual data.

  • Initialize an asset reader and configure two asset reader outputs, one for audio and one for video.

  • Initialize an asset writer and configure two asset writer inputs, one for audio and one for video.

  • Use an asset reader to asynchronously supply media data to an asset writer through two different output/input combinations.

  • Use a dispatch group to be notified of completion of the reencoding process.

  • Allow a user to cancel the reencoding process once it has begun.

Note: To focus on the most relevant code, this example omits several aspects of a complete application. To use AVFoundation, you are expected to have enough experience with Cocoa to infer the missing pieces.

Handling the Initial Setup

Before you create your asset reader and writer and configure their outputs and inputs, you need to handle some initial setup. The first part of this setup involves creating three separate serialization queues to coordinate the reading and writing process.

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];

// Create the main serialization queue.
self.mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];

// Create the serialization queue to use for reading and writing the audio data.
self.rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);
NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];

// Create the serialization queue to use for reading and writing the video data.
self.rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);

The main serialization queue is used to coordinate the starting and stopping of the asset reader and writer (perhaps due to cancellation), and the other two serialization queues are used to serialize the reading and writing by each output/input combination, with a potential cancellation.

Now that you have some serialization queues, load the tracks of your asset and begin the reencoding process.

self.asset = <#AVAsset that you want to reencode#>;
self.cancelled = NO;
self.outputURL = <#NSURL representing desired output URL for file generated by asset writer#>;
// Asynchronously load the tracks of the asset you want to read.
[self.asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{
 // Once the tracks have finished loading, dispatch the work to the main serialization queue.
 dispatch_async(self.mainSerializationQueue, ^{
      // Due to asynchronous nature, check to see if user has already cancelled.
      if (self.cancelled)
           return;
      BOOL success = YES;
      NSError *localError = nil;
      // Check for success of loading the asset's tracks.
      success = ([self.asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
      if (success)
      {
           // If the tracks loaded successfully, make sure that no file exists at the output path for the asset writer.
           NSFileManager *fm = [NSFileManager defaultManager];
           NSString *localOutputPath = [self.outputURL path];
           if ([fm fileExistsAtPath:localOutputPath])
                success = [fm removeItemAtPath:localOutputPath error:&localError];
      }
      if (success)
           success = [self setupAssetReaderAndAssetWriter:&localError];
      if (success)
           success = [self startAssetReaderAndWriter:&localError];
      if (!success)
           [self readingAndWritingDidFinishSuccessfully:success withError:localError];
     });
}];

When the track loading process finishes, whether successfully or not, the rest of the work is dispatched to the main serialization queue to ensure that all of this work is serialized with a potential cancellation. Now all that's left is to implement the cancellation process and the three custom methods at the end of the previous code listing.

Initializing the Asset Reader and Writer

The custom setupAssetReaderAndAssetWriter: method initializes the reader and writer and configures two output/input combinations, one for an audio track and one for a video track. In this example, the audio is decompressed to Linear PCM using the asset reader and compressed back to 128 kbps AAC using the asset writer. The video is decompressed to YUV using the asset reader and compressed to H.264 using the asset writer.

- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
 // Create and initialize the asset reader.
 self.assetReader = [[AVAssetReader alloc] initWithAsset:self.asset error:outError];
 BOOL success = (self.assetReader != nil);
 if (success)
 {
      // If the asset reader was successfully initialized, do the same for the asset writer.
      self.assetWriter = [[AVAssetWriter alloc] initWithURL:self.outputURL fileType:AVFileTypeQuickTimeMovie error:outError];
      success = (self.assetWriter != nil);
 }

 if (success)
 {
      // If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
      AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
      NSArray *audioTracks = [self.asset tracksWithMediaType:AVMediaTypeAudio];
      if ([audioTracks count] > 0)
           assetAudioTrack = [audioTracks objectAtIndex:0];
      NSArray *videoTracks = [self.asset tracksWithMediaType:AVMediaTypeVideo];
      if ([videoTracks count] > 0)
           assetVideoTrack = [videoTracks objectAtIndex:0];

      if (assetAudioTrack)
      {
           // If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
           NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
           self.assetReaderAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings];
           [self.assetReader addOutput:self.assetReaderAudioOutput];
           // Then, set the compression settings to 128kbps AAC and create the asset writer input.
           AudioChannelLayout stereoChannelLayout = {
                .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
                .mChannelBitmap = 0,
                .mNumberChannelDescriptions = 0
           };
           NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
           NSDictionary *compressionAudioSettings = @{
                AVFormatIDKey         : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
                AVEncoderBitRateKey   : [NSNumber numberWithInteger:128000],
                AVSampleRateKey       : [NSNumber numberWithInteger:44100],
                AVChannelLayoutKey    : channelLayoutAsData,
                AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
           };
           self.assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings];
           [self.assetWriter addInput:self.assetWriterAudioInput];
      }

      if (assetVideoTrack)
      {
           // If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
           NSDictionary *decompressionVideoSettings = @{
                (id)kCVPixelBufferPixelFormatTypeKey     : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_422YpCbCr8],
                (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
           };
           self.assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
           [self.assetReader addOutput:self.assetReaderVideoOutput];
           CMFormatDescriptionRef formatDescription = NULL;
           // Grab the video format descriptions from the video track and grab the first one if it exists.
           NSArray *videoFormatDescriptions = [assetVideoTrack formatDescriptions];
           if ([videoFormatDescriptions count] > 0)
                 formatDescription = (__bridge CMFormatDescriptionRef)[videoFormatDescriptions objectAtIndex:0];
           CGSize trackDimensions = {
                .width = 0.0,
                .height = 0.0,
           };
           // If the video track had a format description, grab the track dimensions from there. Otherwise, grab them directly from the track itself.
           if (formatDescription)
                trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
           else
                trackDimensions = [assetVideoTrack naturalSize];
           NSDictionary *compressionSettings = nil;
           // If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
           if (formatDescription)
           {
                NSDictionary *cleanAperture = nil;
                NSDictionary *pixelAspectRatio = nil;
                CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);
                if (cleanApertureFromCMFormatDescription)
                {
                     cleanAperture = @{
                          AVVideoCleanApertureWidthKey            : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
                          AVVideoCleanApertureHeightKey           : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
                          AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
                          AVVideoCleanApertureVerticalOffsetKey   : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
                     };
                }
                CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);
                if (pixelAspectRatioFromCMFormatDescription)
                {
                     pixelAspectRatio = @{
                          AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
                          AVVideoPixelAspectRatioVerticalSpacingKey   : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
                     };
                }
                // Add whichever settings we could grab from the format description to the compression settings dictionary.
                if (cleanAperture || pixelAspectRatio)
                {
                     NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
                     if (cleanAperture)
                          [mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];
                     if (pixelAspectRatio)
                          [mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];
                     compressionSettings = mutableCompressionSettings;
                }
           }
           // Create the video settings dictionary for H.264.
           NSMutableDictionary *videoSettings = [NSMutableDictionary dictionaryWithDictionary:@{
                AVVideoCodecKey  : AVVideoCodecH264,
                AVVideoWidthKey  : [NSNumber numberWithDouble:trackDimensions.width],
                AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
           }];
           // Put the compression settings into the video settings dictionary if we were able to grab them.
           if (compressionSettings)
                [videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];
           // Create the asset writer input and add it to the asset writer.
           self.assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings];
           [self.assetWriter addInput:self.assetWriterVideoInput];
      }
 }
 return success;
}

Reencoding the Asset

Provided that the asset reader and writer were successfully initialized and configured, the startAssetReaderAndWriter: method described in Handling the Initial Setup is called. This method is where the actual reading and writing of the asset takes place.

- (BOOL)startAssetReaderAndWriter:(NSError **)outError
{
 BOOL success = YES;
 // Attempt to start the asset reader.
 success = [self.assetReader startReading];
 if (!success)
      *outError = [self.assetReader error];
 if (success)
 {
      // If the reader started successfully, attempt to start the asset writer.
      success = [self.assetWriter startWriting];
      if (!success)
           *outError = [self.assetWriter error];
 }

 if (success)
 {
      // If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
      self.dispatchGroup = dispatch_group_create();
      [self.assetWriter startSessionAtSourceTime:kCMTimeZero];
      self.audioFinished = NO;
      self.videoFinished = NO;

      if (self.assetWriterAudioInput)
      {
           // If there is audio to reencode, enter the dispatch group before beginning the work.
           dispatch_group_enter(self.dispatchGroup);
           // Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
           [self.assetWriterAudioInput requestMediaDataWhenReadyOnQueue:self.rwAudioSerializationQueue usingBlock:^{
                // Because the block is called asynchronously, check to see whether its task is complete.
                if (self.audioFinished)
                     return;
                BOOL completedOrFailed = NO;
                // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                while ([self.assetWriterAudioInput isReadyForMoreMediaData] && !completedOrFailed)
                {
                     // Get the next audio sample buffer, and append it to the output file.
                     CMSampleBufferRef sampleBuffer = [self.assetReaderAudioOutput copyNextSampleBuffer];
                     if (sampleBuffer != NULL)
                     {
                          BOOL success = [self.assetWriterAudioInput appendSampleBuffer:sampleBuffer];
                          CFRelease(sampleBuffer);
                          sampleBuffer = NULL;
                          completedOrFailed = !success;
                     }
                     else
                     {
                          completedOrFailed = YES;
                     }
                }
                if (completedOrFailed)
                {
                     // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
                     BOOL oldFinished = self.audioFinished;
                     self.audioFinished = YES;
                     if (oldFinished == NO)
                     {
                          [self.assetWriterAudioInput markAsFinished];
                     }
                     dispatch_group_leave(self.dispatchGroup);
                }
           }];
      }

      if (self.assetWriterVideoInput)
      {
           // If we had video to reencode, enter the dispatch group before beginning the work.
           dispatch_group_enter(self.dispatchGroup);
           // Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
           [self.assetWriterVideoInput requestMediaDataWhenReadyOnQueue:self.rwVideoSerializationQueue usingBlock:^{
                // Because the block is called asynchronously, check to see whether its task is complete.
                if (self.videoFinished)
                     return;
                BOOL completedOrFailed = NO;
                // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                while ([self.assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
                {
                     // Get the next video sample buffer, and append it to the output file.
                     CMSampleBufferRef sampleBuffer = [self.assetReaderVideoOutput copyNextSampleBuffer];
                     if (sampleBuffer != NULL)
                     {
                          BOOL success = [self.assetWriterVideoInput appendSampleBuffer:sampleBuffer];
                          CFRelease(sampleBuffer);
                          sampleBuffer = NULL;
                          completedOrFailed = !success;
                     }
                     else
                     {
                          completedOrFailed = YES;
                     }
                }
                if (completedOrFailed)
                {
                     // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
                     BOOL oldFinished = self.videoFinished;
                     self.videoFinished = YES;
                     if (oldFinished == NO)
                     {
                          [self.assetWriterVideoInput markAsFinished];
                     }
                     dispatch_group_leave(self.dispatchGroup);
                }
           }];
      }
      // Set up the notification that the dispatch group will send when the audio and video work have both finished.
      dispatch_group_notify(self.dispatchGroup, self.mainSerializationQueue, ^{
           BOOL finalSuccess = YES;
           NSError *finalError = nil;
           // Check to see if the work has finished due to cancellation.
           if (self.cancelled)
           {
                // If so, cancel the reader and writer.
                [self.assetReader cancelReading];
                [self.assetWriter cancelWriting];
           }
           else
           {
                // If cancellation didn't occur, first make sure that the asset reader didn't fail.
                if ([self.assetReader status] == AVAssetReaderStatusFailed)
                {
                     finalSuccess = NO;
                     finalError = [self.assetReader error];
                }
                // If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
                if (finalSuccess)
                {
                     finalSuccess = [self.assetWriter finishWriting];
                     if (!finalSuccess)
                          finalError = [self.assetWriter error];
                }
           }
           // Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.
           [self readingAndWritingDidFinishSuccessfully:finalSuccess withError:finalError];
      });
 }
 // Return success here to indicate whether the asset reader and writer were started successfully.
 return success;
}

During reencoding, the audio and video tracks are processed asynchronously on their own serialization queues to increase the overall performance of the process, but both queues are contained within the same dispatch group. By placing the work for each track within the same dispatch group, the group can send a notification once all of the work is done, at which point the success of the reencoding process can be determined.
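The dispatch-group pattern described above can be reduced to the following minimal sketch. The queue names and the work inside the blocks are placeholders, not part of the original listing: each track's serialization queue enters the group before its work begins and leaves it once the corresponding writer input has been marked as finished, so the notify block fires exactly once, after both tracks are done.

```objectivec
dispatch_group_t dispatchGroup = dispatch_group_create();
dispatch_queue_t audioQueue = dispatch_queue_create("com.example.audioqueue", NULL);
dispatch_queue_t videoQueue = dispatch_queue_create("com.example.videoqueue", NULL);

dispatch_group_enter(dispatchGroup);   // Audio work is now pending.
dispatch_async(audioQueue, ^{
     // ... append audio sample buffers, then mark the audio input as finished ...
     dispatch_group_leave(dispatchGroup);
});

dispatch_group_enter(dispatchGroup);   // Video work is now pending.
dispatch_async(videoQueue, ^{
     // ... append video sample buffers, then mark the video input as finished ...
     dispatch_group_leave(dispatchGroup);
});

// Runs exactly once, after both tracks have left the group.
dispatch_group_notify(dispatchGroup, dispatch_get_main_queue(), ^{
     // Check for cancellation or reader/writer errors, then finish writing.
});
```

Note that each `dispatch_group_enter` must be balanced by exactly one `dispatch_group_leave`; this is why the completion and cancellation code paths above both guard `markAsFinished` with the `videoFinished`/`audioFinished` Booleans before leaving the group.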

Handling Completion

To handle the completion of the reading and writing process, the readingAndWritingDidFinishSuccessfully: method is called with parameters indicating whether or not the reencoding completed successfully. If the process did not finish successfully, the asset reader and writer are both canceled, and any UI-related tasks are dispatched to the main queue.

- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
 if (!success)
 {
      // If the reencoding process failed, we need to cancel the asset reader and writer.
      [self.assetReader cancelReading];
      [self.assetWriter cancelWriting];
      dispatch_async(dispatch_get_main_queue(), ^{
           // Handle any UI tasks here related to failure.
      });
 }
 else
 {
      // Reencoding was successful, reset booleans.
      self.cancelled = NO;
      self.videoFinished = NO;
      self.audioFinished = NO;
      dispatch_async(dispatch_get_main_queue(), ^{
           // Handle any UI tasks here related to success.
          });
     }
}

Handling Cancellation

Using multiple serialization queues, you can allow the user of your app to cancel the reencoding process with ease. On the main serialization queue, messages are asynchronously sent to each of the asset reencoding serialization queues to cancel their reading and writing. When these two serialization queues complete their cancellation, the dispatch group sends a notification to the main serialization queue, where the cancelled property is set to YES. You might associate the cancel method from the following code listing with a button on your UI.

- (void)cancel
{
 // Handle cancellation asynchronously, but serialize it with the main queue.
 dispatch_async(self.mainSerializationQueue, ^{
      // If we had audio data to reencode, we need to cancel the audio work.
      if (self.assetWriterAudioInput)
      {
           // Handle cancellation asynchronously again, but this time serialize it with the audio queue.
           dispatch_async(self.rwAudioSerializationQueue, ^{
                // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                BOOL oldFinished = self.audioFinished;
                self.audioFinished = YES;
                if (oldFinished == NO)
                {
                     [self.assetWriterAudioInput markAsFinished];
                }
                // Leave the dispatch group since the audio work is finished now.
                dispatch_group_leave(self.dispatchGroup);
           });
      }

      if (self.assetWriterVideoInput)
      {
           // Handle cancellation asynchronously again, but this time serialize it with the video queue.
           dispatch_async(self.rwVideoSerializationQueue, ^{
                // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                BOOL oldFinished = self.videoFinished;
                self.videoFinished = YES;
                if (oldFinished == NO)
                {
                     [self.assetWriterVideoInput markAsFinished];
                }
                // Leave the dispatch group, since the video work is finished now.
                dispatch_group_leave(self.dispatchGroup);
           });
      }
      // Set the cancelled Boolean property to YES to cancel any work on the main queue as well.
      self.cancelled = YES;
     });
}

Asset Output Settings Assistant

The AVOutputSettingsAssistant class aids in creating output-settings dictionaries for an asset reader or writer. This makes setup much simpler, especially for high-frame-rate H264 movies that have a number of specific presets. The following example shows how to use the output settings assistant.

AVOutputSettingsAssistant *outputSettingsAssistant = [AVOutputSettingsAssistant outputSettingsAssistantWithPreset:<some preset>];
CMFormatDescriptionRef audioFormat = [self getAudioFormat];

if (audioFormat != NULL)
     [outputSettingsAssistant setSourceAudioFormat:(CMAudioFormatDescriptionRef)audioFormat];

CMFormatDescriptionRef videoFormat = [self getVideoFormat];

if (videoFormat != NULL)
     [outputSettingsAssistant setSourceVideoFormat:(CMVideoFormatDescriptionRef)videoFormat];

CMTime assetMinVideoFrameDuration = [self getMinFrameDuration];
CMTime averageFrameDuration = [self getAvgFrameDuration];

[outputSettingsAssistant setSourceVideoAverageFrameDuration:averageFrameDuration];
[outputSettingsAssistant setSourceVideoMinFrameDuration:assetMinVideoFrameDuration];

AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:<some URL> fileType:[outputSettingsAssistant outputFileType] error:NULL];
AVAssetWriterInput *audioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:[outputSettingsAssistant audioSettings] sourceFormatHint:audioFormat];
AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:[outputSettingsAssistant videoSettings] sourceFormatHint:videoFormat];