Export - 輸出

文章目錄

  1. Export - 輸出
    1.1. Reading an Asset - 讀取資產(chǎn)
      1.1.1. Creating the Asset Reader - 創(chuàng)建資產(chǎn)讀取器
      1.1.2. Setting Up the Asset Reader Outputs - 建立資產(chǎn)讀取器出口
      1.1.3. Reading the Asset’s Media Data - 讀取資產(chǎn)媒體數(shù)據(jù)
    1.2. Writing an Asset - 寫入資產(chǎn)
      1.2.1. Creating the Asset Writer - 創(chuàng)建資產(chǎn)寫入器
      1.2.2. Setting Up the Asset Writer Inputs - 建立資產(chǎn)寫入器入口
      1.2.3. Writing Media Data - 寫入媒體數(shù)據(jù)
    1.3. Reencoding Assets - 重新編碼資產(chǎn)
    1.4. Putting It All Together: Using an Asset Reader and Writer in Tandem to Reencode an Asset - 總結(jié)：使用資產(chǎn)讀取器和寫入器串聯(lián)重新編碼資產(chǎn)
      1.4.1. Handling the Initial Setup - 處理初始設(shè)置
      1.4.2. Initializing the Asset Reader and Writer - 初始化資產(chǎn)讀取器和寫入器
      1.4.3. Reencoding the Asset - 重新編碼資產(chǎn)
      1.4.4. Handling Completion - 處理完成
      1.4.5. Handling Cancellation - 處理取消
    1.5. Asset Output Settings Assistant - 資產(chǎn)出口設(shè)置助手

Export - 輸出

To read and write audiovisual assets, you must use the export APIs provided by the AVFoundation framework. The AVAssetExportSession class provides an interface for simple exporting needs, such as modifying the file format or trimming the length of an asset (see Trimming and Transcoding a Movie). For more in-depth exporting needs, use the AVAssetReader and AVAssetWriter classes.

必須使用 AVFoundation 框架提供的導(dǎo)出 APIs 去讀寫音視頻資產(chǎn)。AVAssetExportSession 類為簡(jiǎn)單的輸出需求提供了一個(gè)接口，例如修改文件格式或者削減資產(chǎn)的長度（見 Trimming and Transcoding a Movie）。對(duì)於更深入的導(dǎo)出需求，請(qǐng)使用 AVAssetReader 和 AVAssetWriter 類。
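
For comparison, here is a minimal sketch of the simpler AVAssetExportSession path mentioned above (it is not part of the original listings; the asset, preset, and destination URL are placeholders you would supply yourself):

AVAsset *assetToExport = <#AVAsset that you want to export#>;
// A built-in preset; pick whichever preset suits your quality and size needs.
AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:assetToExport presetName:AVAssetExportPresetMediumQuality];
exportSession.outputURL = <#NSURL for the exported file#>;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    if (exportSession.status == AVAssetExportSessionStatusCompleted) {
        // The exported file is now available at outputURL.
    }
}];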

Use an AVAssetReader when you want to perform an operation on the contents of an asset. For example, you might read the audio track of an asset to produce a visual representation of the waveform. To produce an asset from media such as sample buffers or still images, use an AVAssetWriter object.

當(dāng)你想對(duì)一項(xiàng)資產(chǎn)的內(nèi)容進(jìn)行操作時(shí)，使用 AVAssetReader。例如，可以讀取一個(gè)資產(chǎn)的音頻軌道，以產(chǎn)生波形的可視化表示。要從媒體（比如樣本緩沖或者靜態(tài)圖像）生成資產(chǎn)，請(qǐng)使用 AVAssetWriter 對(duì)象。

Note: The asset reader and writer classes are not intended to be used for real-time processing. In fact, an asset reader cannot even be used for reading from a real-time source like an HTTP live stream. However, if you are using an asset writer with a real-time data source, such as an AVCaptureOutput object, set the expectsMediaDataInRealTime property of your asset writer’s inputs to YES. Setting this property to YES for a non-real-time data source will result in your files not being interleaved properly.

注意：資產(chǎn) reader 和 writer 類不適用於實(shí)時(shí)處理。實(shí)際上，資產(chǎn)讀取器甚至不能用於從類似 HTTP 直播流的實(shí)時(shí)資源中讀取。然而，如果你將資產(chǎn)寫入器與實(shí)時(shí)數(shù)據(jù)源（比如 AVCaptureOutput 對(duì)象）一起使用，請(qǐng)把資產(chǎn)寫入器入口的 expectsMediaDataInRealTime 屬性設(shè)置為 YES。對(duì)非實(shí)時(shí)數(shù)據(jù)源把此屬性設(shè)置為 YES，會(huì)導(dǎo)致你的文件不能被正確地交錯(cuò)（interleave）。
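
As an illustration of that note, a writer input fed by a real-time source might be configured along these lines (a small sketch, not from the original text; the output settings are a placeholder):

// Sketch: an input whose samples come from a real-time source such as an AVCaptureOutput.
AVAssetWriterInput *captureWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:<#Video output settings#>];
// Required for real-time sources; leave it at NO for file-based sources so the output stays properly interleaved.
captureWriterInput.expectsMediaDataInRealTime = YES;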

Reading an Asset - 讀取資產(chǎn)

Each AVAssetReader object can be associated only with a single asset at a time, but this asset may contain multiple tracks. For this reason, you must assign concrete subclasses of the AVAssetReaderOutput class to your asset reader before you begin reading in order to configure how the media data is read. There are three concrete subclasses of the AVAssetReaderOutput base class that you can use for your asset reading needs: AVAssetReaderTrackOutput, AVAssetReaderAudioMixOutput, and AVAssetReaderVideoCompositionOutput.

每個(gè) AVAssetReader 對(duì)象一次只能與單個(gè)資產(chǎn)關(guān)聯(lián)，但這個(gè)資產(chǎn)可能包含多個(gè)軌道。為此，在開始讀取之前，你必須為資產(chǎn)讀取器指定 AVAssetReaderOutput 類的具體子類，以配置媒體數(shù)據(jù)如何被讀取。AVAssetReaderOutput 基類有 3 個(gè)具體子類可以滿足資產(chǎn)讀取的需求：AVAssetReaderTrackOutput、AVAssetReaderAudioMixOutput 和 AVAssetReaderVideoCompositionOutput。

  • Creating the Asset Reader - 創(chuàng)建資產(chǎn)讀取器

All you need to initialize an AVAssetReader object is the asset that you want to read.

初始化 AVAssetReader 對(duì)象所需要的，只是你想要讀取的那個(gè)資產(chǎn)。

NSError *outError;
AVAsset *someAsset = <#AVAsset that you want to read#>;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:someAsset error:&outError];
BOOL success = (assetReader != nil);

Note: Always check that the asset reader returned to you is non-nil to ensure that the asset reader was initialized successfully. Otherwise, the error parameter (outError in the previous example) will contain the relevant error information.

注意：總是要檢查返回給你的資產(chǎn)讀取器是否為 non-nil，以確保資產(chǎn)讀取器已經(jīng)成功被初始化。否則，錯(cuò)誤參數(shù)（之前例子中的 outError）將會(huì)包含相關(guān)的錯(cuò)誤信息。
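
In practice, that check might look something like the following (illustrative only):

if (!success) {
    // Initialization failed; outError describes what went wrong.
    NSLog(@"Could not create the asset reader: %@", [outError localizedDescription]);
}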

  • Setting Up the Asset Reader Outputs - 建立資產(chǎn)讀取器出口

After you have created your asset reader, set up at least one output to receive the media data being read. When setting up your outputs, be sure to set the alwaysCopiesSampleData property to NO. In this way, you reap the benefits of performance improvements. In all of the examples within this chapter, this property could and should be set to NO.

在你創(chuàng)建了資產(chǎn)讀取器之後，至少設(shè)置一個(gè)出口以接收正在讀取的媒體數(shù)據(jù)。建立出口時(shí)，確保把 alwaysCopiesSampleData 屬性設(shè)置為 NO。這樣，你就能獲得性能改進(jìn)的好處。這一章的所有例子中，這個(gè)屬性都可以並且應(yīng)該被設(shè)置為 NO。
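
For example, assuming trackOutput is an output such as the one created in the next listing, opting out of the copy is a single assignment (a one-line sketch):

// Avoid copying sample data out of the reader, for better performance.
trackOutput.alwaysCopiesSampleData = NO;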

If you want only to read media data from one or more tracks and potentially convert that data to a different format, use the AVAssetReaderTrackOutput class, using a single track output object for each AVAssetTrack object that you want to read from your asset. To decompress an audio track to Linear PCM with an asset reader, you set up your track output as follows:

如果你只想從一個(gè)或多個(gè)軌道讀取媒體數(shù)據(jù)，並且可能將該數(shù)據(jù)轉(zhuǎn)換為不同的格式，請(qǐng)使用 AVAssetReaderTrackOutput 類，為每個(gè)你想從資產(chǎn)中讀取的 AVAssetTrack 對(duì)象使用一個(gè)單獨(dú)的軌道出口對(duì)象。要使用資產(chǎn)讀取器將音頻軌道解壓縮為 Linear PCM，可以像下面這樣建立軌道出口：

AVAsset *localAsset = assetReader.asset;
// Get the audio track to read.
//獲取音頻軌道以讀取。
AVAssetTrack *audioTrack = [[localAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
// Decompression settings for Linear PCM
//線性PCM的解壓縮設(shè)置
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the output with the audio track and decompression settings.
//使用音軌和解壓縮設(shè)置創(chuàng)建輸出。
AVAssetReaderOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:decompressionAudioSettings];
// Add the output to the reader if possible.
//如果可能的話，將輸出添加到閱讀器中。
if ([assetReader canAddOutput:trackOutput])
    [assetReader addOutput:trackOutput];

Note: To read the media data from a specific asset track in the format in which it was stored, pass nil to the outputSettings parameter.

注意：要以資產(chǎn)軌道中存儲(chǔ)的原始格式讀取媒體數(shù)據(jù)，請(qǐng)給 outputSettings 參數(shù)傳 nil。
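
For instance, a pass-through track output that vends samples in their stored format could be created like this (a sketch reusing audioTrack from the listing above):

AVAssetReaderOutput *passthroughOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:nil];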

You use the AVAssetReaderAudioMixOutput and AVAssetReaderVideoCompositionOutput classes to read media data that has been mixed or composited together using an AVAudioMix object or AVVideoComposition object, respectively. Typically, these outputs are used when your asset reader is reading from an AVComposition object.

使用 AVAssetReaderAudioMixOutput 和 AVAssetReaderVideoCompositionOutput 類來讀取分別使用 AVAudioMix 對(duì)象或者 AVVideoComposition 對(duì)象混合或者合成在一起的媒體數(shù)據(jù)。通常情況下，當(dāng)你的資產(chǎn)讀取器正在從 AVComposition 對(duì)象讀取時(shí)，才使用這些出口。

With a single audio mix output, you can read multiple audio tracks from your asset that have been mixed together using an AVAudioMix object. To specify how the audio tracks are mixed, assign the mix to the AVAssetReaderAudioMixOutput object after initialization. The following code displays how to create an audio mix output with all of the audio tracks from your asset, decompress the audio tracks to Linear PCM, and assign an audio mix object to the output. For details on how to configure an audio mix, see Editing.

使用一個(gè)單一的音頻混合出口，可以從資產(chǎn)中讀取已經(jīng)使用 AVAudioMix 對(duì)象混合在一起的多個(gè)音軌。要指定音軌是如何被混合的，請(qǐng)?jiān)诔跏蓟螅瑢⒃摶旌现付ńo AVAssetReaderAudioMixOutput 對(duì)象。下面的代碼顯示了如何用資產(chǎn)中的所有音軌創(chuàng)建一個(gè)音頻混合出口，將音軌解壓為 Linear PCM，並把音頻混合對(duì)象指定給出口。關(guān)於如何配置音頻混合的細(xì)節(jié)，請(qǐng)參見 Editing。

AVAudioMix *audioMix = <#An AVAudioMix that specifies how the audio tracks from the AVAsset are mixed#>;
// Assumes that assetReader was initialized with an AVComposition object.
//假設(shè)assetReader已使用AVComposition對(duì)象初始化。
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the audio tracks to read.
//獲取要讀取的音軌。
NSArray *audioTracks = [composition tracksWithMediaType:AVMediaTypeAudio];
// Get the decompression settings for Linear PCM.
//獲取線性PCM的解壓縮設(shè)置。
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the audio mix output with the audio tracks and decompression settings.
//使用音軌和解壓縮設(shè)置創(chuàng)建音頻混合輸出。
AVAssetReaderOutput *audioMixOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks audioSettings:decompressionAudioSettings];
// Associate the audio mix used to mix the audio tracks being read with the output.
//將用於混合正在讀取的音軌的音頻混合與輸出關(guān)聯(lián)起來。
audioMixOutput.audioMix = audioMix;
// Add the output to the reader if possible.
//如果可能的話，將輸出添加到閱讀器中。
if ([assetReader canAddOutput:audioMixOutput])
    [assetReader addOutput:audioMixOutput];

Note: Passing nil for the audioSettings parameter tells the asset reader to return samples in a convenient uncompressed format. The same is true for the AVAssetReaderVideoCompositionOutput class.

注意：給 audioSettings 參數(shù)傳遞 nil，是告訴資產(chǎn)讀取器以一種方便的未壓縮格式返回樣本。對(duì)於 AVAssetReaderVideoCompositionOutput 類也是如此。

The video composition output behaves in much the same way: You can read multiple video tracks from your asset that have been composited together using an AVVideoComposition object. To read the media data from multiple composited video tracks and decompress it to ARGB, set up your output as follows:

視頻合成出口的行為與此非常相似：可以從資產(chǎn)中讀取已經(jīng)使用 AVVideoComposition 對(duì)象合成在一起的多個(gè)視頻軌道。要從多個(gè)合成的視頻軌道讀取媒體數(shù)據(jù)並解壓縮為 ARGB，可以像下面這樣建立出口：

AVVideoComposition *videoComposition = <#An AVVideoComposition that specifies how the video tracks from the AVAsset are composited#>;
// Assumes assetReader was initialized with an AVComposition.
//假設(shè)assetReader是用AVComposition初始化的。
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the video tracks to read.
//獲取要讀取的視頻軌道。
NSArray *videoTracks = [composition tracksWithMediaType:AVMediaTypeVideo];
// Decompression settings for ARGB.
// ARGB的解壓縮設(shè)置
NSDictionary *decompressionVideoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB], (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary] };
// Create the video composition output with the video tracks and decompression settings.
//通過視頻軌道和解壓縮設(shè)置創(chuàng)建視頻合成輸出。
AVAssetReaderOutput *videoCompositionOutput = [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks videoSettings:decompressionVideoSettings];
// Associate the video composition used to composite the video tracks being read with the output.
//將用於合成正在讀取的視頻軌道的視頻合成與輸出關(guān)聯(lián)起來。
videoCompositionOutput.videoComposition = videoComposition;
// Add the output to the reader if possible.
//如果可能的話，將輸出添加到閱讀器中。
if ([assetReader canAddOutput:videoCompositionOutput])
    [assetReader addOutput:videoCompositionOutput];

  • Reading the Asset’s Media Data - 讀取資產(chǎn)媒體數(shù)據(jù)

To start reading after setting up all of the outputs you need, call the startReading method on your asset reader. Next, retrieve the media data individually from each output using the copyNextSampleBuffer method. To start up an asset reader with a single output and read all of its media samples, do the following:

在建立好所有需要的出口之後，要開始讀取，請(qǐng)?jiān)谫Y產(chǎn)讀取器上調(diào)用 startReading 方法。接下來，使用 copyNextSampleBuffer 方法從每個(gè)出口分別獲取媒體數(shù)據(jù)。要啟動(dòng)一個(gè)帶有單個(gè)出口的資產(chǎn)讀取器並讀取它的所有媒體樣本，可以這樣做：

// Start the asset reader up.
//啟動(dòng)資產(chǎn)讀取器。
[self.assetReader startReading];
BOOL done = NO;
while (!done)
{
  // Copy the next sample buffer from the reader output.
  //從閱讀器輸出中復(fù)制下一個(gè)樣本緩沖區(qū)。
  CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
  if (sampleBuffer)
  {
    // Do something with sampleBuffer here.
    //在這里用sampleBuffer做些什么。
    CFRelease(sampleBuffer);
    sampleBuffer = NULL;
  }
  else
  {
    // Find out why the asset reader output couldn't copy another sample buffer.
    //找出為什麼資產(chǎn)讀取器輸出無法復(fù)制另一個(gè)樣本緩沖區(qū)。
    if (self.assetReader.status == AVAssetReaderStatusFailed)
    {
      NSError *failureError = self.assetReader.error;
      // Handle the error here.
      //在這裡處理錯(cuò)誤。
    }
    else
    {
      // The asset reader output has read all of its samples.
      //資產(chǎn)讀取器輸出已讀取其所有樣本。
      done = YES;
    }
  }
}

Writing an Asset - 寫入資產(chǎn)

You use the AVAssetWriter class to write media data from multiple sources to a single file of a specified file format. You don’t need to associate your asset writer object with a specific asset, but you must use a separate asset writer for each output file that you want to create. Because an asset writer can write media data from multiple sources, you must create an AVAssetWriterInput object for each individual track that you want to write to the output file. Each AVAssetWriterInput object expects to receive data in the form of CMSampleBufferRef objects, but if you want to append CVPixelBufferRef objects to your asset writer input, use the AVAssetWriterInputPixelBufferAdaptor class.

使用 AVAssetWriter 類可以將來自多個(gè)源的媒體數(shù)據(jù)寫入到指定文件格式的單個(gè)文件中。你不需要將資產(chǎn)寫入器對(duì)象與特定的資產(chǎn)關(guān)聯(lián)起來，但必須為每個(gè)要?jiǎng)?chuàng)建的輸出文件使用一個(gè)獨(dú)立的資產(chǎn)寫入器。因?yàn)橐粋€(gè)資產(chǎn)寫入器可以從多個(gè)來源寫入媒體數(shù)據(jù)，你必須為想寫入輸出文件的每個(gè)獨(dú)立軌道創(chuàng)建一個(gè) AVAssetWriterInput 對(duì)象。每個(gè) AVAssetWriterInput 對(duì)象期望以 CMSampleBufferRef 對(duì)象的形式接收數(shù)據(jù)，但如果你想給資產(chǎn)寫入器入口附加 CVPixelBufferRef 對(duì)象，請(qǐng)使用 AVAssetWriterInputPixelBufferAdaptor 類。

  • Creating the Asset Writer - 創(chuàng)建資產(chǎn)寫入器

To create an asset writer, specify the URL for the output file and the desired file type. The following code displays how to initialize an asset writer to create a QuickTime movie:

為了創(chuàng)建一個(gè)資產(chǎn)寫入器，為出口文件指定 URL 和所需的文件類型。下面的代碼顯示了如何初始化一個(gè)資產(chǎn)寫入器來創(chuàng)建一個(gè) QuickTime 影片：

NSError *outError;
NSURL *outputURL = <#NSURL object representing the URL where you want to save the video#>;
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL
                                                      fileType:AVFileTypeQuickTimeMovie
                                                         error:&outError];
BOOL success = (assetWriter != nil);

  • Setting Up the Asset Writer Inputs - 建立資產(chǎn)寫入器入口

For your asset writer to be able to write media data, you must set up at least one asset writer input. For example, if your source of media data is already vending media samples as CMSampleBufferRef objects, just use the AVAssetWriterInput class. To set up an asset writer input that compresses audio media data to 128 kbps AAC and connect it to your asset writer, do the following:

為了讓資產(chǎn)寫入器能夠?qū)懭朊襟w數(shù)據(jù)，必須至少設(shè)置一個(gè)資產(chǎn)寫入器入口。例如，如果你的媒體數(shù)據(jù)源已經(jīng)以 CMSampleBufferRef 對(duì)象的形式提供媒體樣本，只需使用 AVAssetWriterInput 類。要建立一個(gè)把音頻媒體數(shù)據(jù)壓縮為 128 kbps AAC 的資產(chǎn)寫入器入口，並將它連接到你的資產(chǎn)寫入器，可以這樣做：

// Configure the channel layout as stereo.
//將通道布局配置為立體聲。
AudioChannelLayout stereoChannelLayout = {
    .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
    .mChannelBitmap = 0,
    .mNumberChannelDescriptions = 0
};
 
// Convert the channel layout object to an NSData object.
//將通道布局對(duì)象轉(zhuǎn)換為NSData對(duì)象。
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
 
// Get the compression settings for 128 kbps AAC.
//獲取128 kbps AAC的壓縮設(shè)置。
NSDictionary *compressionAudioSettings = @{
    AVFormatIDKey         : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
    AVEncoderBitRateKey   : [NSNumber numberWithInteger:128000],
    AVSampleRateKey       : [NSNumber numberWithInteger:44100],
    AVChannelLayoutKey    : channelLayoutAsData,
    AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};
 
// Create the asset writer input with the compression settings and specify the media type as audio.
//使用壓縮設(shè)置創(chuàng)建資產(chǎn)寫入器輸入，並將媒體類型指定為音頻。
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:compressionAudioSettings];
// Add the input to the writer if possible.
//如果可能的話，將輸入添加到寫入器中。
if ([assetWriter canAddInput:assetWriterInput])
    [assetWriter addInput:assetWriterInput];

Note: If you want the media data to be written in the format in which it was stored, pass nil in the outputSettings parameter. Pass nil only if the asset writer was initialized with a fileType of AVFileTypeQuickTimeMovie.

注意：如果你想讓媒體數(shù)據(jù)以它被存儲(chǔ)的格式寫入，請(qǐng)給 outputSettings 參數(shù)傳 nil。只有當(dāng)資產(chǎn)寫入器是用 AVFileTypeQuickTimeMovie 的 fileType 初始化時(shí)，才可以傳 nil。
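
For example, a pass-through input for a QuickTime movie file might be set up roughly as follows (a sketch reusing assetWriter from the listing above):

// Write the source samples as-is, without re-encoding (QuickTime movie files only).
AVAssetWriterInput *passthroughInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:nil];
if ([assetWriter canAddInput:passthroughInput])
    [assetWriter addInput:passthroughInput];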

Your asset writer input can optionally include some metadata or specify a different transform for a particular track using the metadata and transform properties respectively. For an asset writer input whose data source is a video track, you can maintain the video’s original transform in the output file by doing the following:

你的資產(chǎn)寫入器入口可以選擇性地包含一些元數(shù)據(jù)，或者為特定的軌道指定不同的變換，分別使用 metadata 和 transform 屬性。對(duì)於數(shù)據(jù)源是視頻軌道的資產(chǎn)寫入器入口，可以通過下面的方式在輸出文件中保持視頻的原始變換：

AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
assetWriterInput.transform = videoAssetTrack.preferredTransform;

Note: Set the metadata and transform properties before you begin writing with your asset writer for them to take effect.

注意：要讓 metadata 和 transform 屬性生效，必須在開始用資產(chǎn)寫入器寫入之前設(shè)置它們。
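
As an illustration, metadata could be attached in much the same way before writing begins (a sketch, not from the original text; the title string is made up):

// Attach a title metadata item to the input before calling startWriting.
AVMutableMetadataItem *titleItem = [AVMutableMetadataItem metadataItem];
titleItem.keySpace = AVMetadataKeySpaceCommon;
titleItem.key = AVMetadataCommonKeyTitle;
titleItem.value = @"My reencoded movie";
assetWriterInput.metadata = @[titleItem];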

When writing media data to the output file, sometimes you may want to allocate pixel buffers. To do so, use the AVAssetWriterInputPixelBufferAdaptor class. For greatest efficiency, instead of adding pixel buffers that were allocated using a separate pool, use the pixel buffer pool provided by the pixel buffer adaptor. The following code creates a pixel buffer object working in the RGB domain that will use CGImage objects to create its pixel buffers.

當(dāng)把媒體數(shù)據(jù)寫入輸出文件時(shí)，有時(shí)你可能想分配像素緩沖區(qū)。為此，請(qǐng)使用 AVAssetWriterInputPixelBufferAdaptor 類。為了獲得最大的效率，請(qǐng)使用像素緩沖適配器提供的像素緩沖池，而不是添加用單獨(dú)的池分配的像素緩沖區(qū)。下面的代碼創(chuàng)建一個(gè)工作在 RGB 色彩域的像素緩沖區(qū)對(duì)象，它將使用 CGImage 對(duì)象來創(chuàng)建它的像素緩沖。

NSDictionary *pixelBufferAttributes = @{
     kCVPixelBufferCGImageCompatibilityKey : [NSNumber numberWithBool:YES],
     kCVPixelBufferCGBitmapContextCompatibilityKey : [NSNumber numberWithBool:YES],
     kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32ARGB]
};
AVAssetWriterInputPixelBufferAdaptor *inputPixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterInput sourcePixelBufferAttributes:pixelBufferAttributes];

Note: All AVAssetWriterInputPixelBufferAdaptor objects must be connected to a single asset writer input. That asset writer input must accept media data of type AVMediaTypeVideo.

注：所有的 AVAssetWriterInputPixelBufferAdaptor 對(duì)象必須連接到單個(gè)資產(chǎn)寫入器入口。該資產(chǎn)寫入器入口必須接受 AVMediaTypeVideo 類型的媒體數(shù)據(jù)。
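
To sketch how the adaptor is then used once writing has started (the presentation time is a placeholder, and the adaptor's pixel buffer pool only becomes available after startWriting has been called):

CVPixelBufferRef pixelBuffer = NULL;
// Draw from the adaptor's own pool rather than allocating buffers separately.
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, inputPixelBufferAdaptor.pixelBufferPool, &pixelBuffer);
// ... render your CGImage content into pixelBuffer here ...
if ([self.assetWriterInput isReadyForMoreMediaData])
    [inputPixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:<#CMTime of this frame#>];
CVPixelBufferRelease(pixelBuffer);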

  • Writing Media Data - 寫入媒體數(shù)據(jù)

When you have configured all of the inputs needed for your asset writer, you are ready to begin writing media data. As you did with the asset reader, initiate the writing process with a call to the startWriting method. You then need to start a sample-writing session with a call to the startSessionAtSourceTime: method. All writing done by an asset writer has to occur within one of these sessions and the time range of each session defines the time range of media data included from within the source. For example, if your source is an asset reader that is supplying media data read from an AVAsset object and you don’t want to include media data from the first half of the asset, you would do the following:

當(dāng)你已經(jīng)為資產(chǎn)寫入器配置好所有需要的入口時(shí)，就可以開始寫入媒體數(shù)據(jù)了。正如在資產(chǎn)讀取器中所做的那樣，調(diào)用 startWriting 方法來發(fā)起寫入過程。然後你需要調(diào)用 startSessionAtSourceTime: 方法來開始一個(gè)樣本寫入會(huì)話。資產(chǎn)寫入器完成的所有寫入都必須發(fā)生在這樣的會(huì)話中，並且每個(gè)會(huì)話的時(shí)間範(fàn)圍定義了來源中所包含媒體數(shù)據(jù)的時(shí)間範(fàn)圍。例如，如果你的來源是一個(gè)提供從 AVAsset 對(duì)象讀取的媒體數(shù)據(jù)的資產(chǎn)讀取器，而你不想包含資產(chǎn)前半部分的媒體數(shù)據(jù)，你可以像下面這樣做：

CMTime halfAssetDuration = CMTimeMultiplyByFloat64(self.asset.duration, 0.5);
[self.assetWriter startSessionAtSourceTime:halfAssetDuration];
//Implementation continues.

Normally, to end a writing session you must call the endSessionAtSourceTime: method. However, if your writing session goes right up to the end of your file, you can end the writing session simply by calling the finishWriting method. To start up an asset writer with a single input and write all of its media data, do the following:

通常，必須調(diào)用 endSessionAtSourceTime: 方法來結(jié)束寫入會(huì)話。然而，如果你的寫入會(huì)話一直進(jìn)行到文件的末尾，可以簡(jiǎn)單地通過調(diào)用 finishWriting 方法來結(jié)束寫入會(huì)話。要啟動(dòng)一個(gè)帶有單一入口的資產(chǎn)寫入器並寫入它的所有媒體數(shù)據(jù)，可以這樣做：

// Prepare the asset writer for writing.
//準(zhǔn)備資產(chǎn)寫入器開始寫入。
[self.assetWriter startWriting];
// Start a sample-writing session.
//開始一個(gè)樣本寫入會(huì)話。
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
//指定要在資產(chǎn)寫入器準(zhǔn)備好接收媒體數(shù)據(jù)時(shí)執(zhí)行的塊，以及調(diào)用它的隊(duì)列。
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
     while ([self.assetWriterInput isReadyForMoreMediaData])
     {
          // Get the next sample buffer.
         //獲取下一個(gè)采樣緩沖區(qū)。
          CMSampleBufferRef nextSampleBuffer = [self copyNextSampleBufferToWrite];
          if (nextSampleBuffer)
          {
               // If it exists, append the next sample buffer to the output file.
               //如果存在，請(qǐng)將下一個(gè)樣本緩沖區(qū)附加到輸出文件。
               [self.assetWriterInput appendSampleBuffer:nextSampleBuffer];
               CFRelease(nextSampleBuffer);
               nextSampleBuffer = nil;
          }
          else
          {
               // Assume that lack of a next sample buffer means the sample buffer source is out of samples and mark the input as finished.
               //假設(shè)沒有下一個(gè)樣本緩沖區(qū)就意味著樣本緩沖源的樣本已經(jīng)用盡，並將輸入標(biāo)記為已完成。
               [self.assetWriterInput markAsFinished];
               break;
          }
     }
}];

The copyNextSampleBufferToWrite method in the code above is simply a stub. The location of this stub is where you would need to insert some logic to return CMSampleBufferRef objects representing the media data that you want to write. One possible source of sample buffers is an asset reader output.

上述代碼中的 copyNextSampleBufferToWrite 方法僅僅是一個(gè) stub。你需要在這個(gè) stub 的位置插入一些邏輯，返回表示你想要寫入的媒體數(shù)據(jù)的 CMSampleBufferRef 對(duì)象。樣本緩沖區(qū)的一個(gè)可能來源是資產(chǎn)讀取器出口。
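
One plausible implementation of that stub, assuming the samples come from an asset reader output as in the earlier listings, is simply:

// Sketch: vend the writer its next sample buffer straight from an asset reader output.
- (CMSampleBufferRef)copyNextSampleBufferToWrite
{
    // Returns NULL once the reader output has no more samples (or has failed).
    return [self.assetReaderOutput copyNextSampleBuffer];
}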

Reencoding Assets - 重新編碼資產(chǎn)

You can use an asset reader and asset writer object in tandem to convert an asset from one representation to another. Using these objects, you have more control over the conversion than you do with an AVAssetExportSession object. For example, you can choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process. The first step in this process is just to set up your asset reader outputs and asset writer inputs as desired. After your asset reader and writer are fully configured, you start up both of them with calls to the startReading and startWriting methods, respectively. The following code snippet displays how to use a single asset writer input to write media data supplied by a single asset reader output:

可以將資產(chǎn)讀取器和資產(chǎn)寫入器對(duì)象串聯(lián)使用，把資產(chǎn)從一種表示形式轉(zhuǎn)換為另一種。使用這些對(duì)象，你對(duì)轉(zhuǎn)換的控制比使用 AVAssetExportSession 對(duì)象時(shí)更多。例如，你可以選擇希望在輸出文件中表示哪些軌道，指定你自己的輸出格式，或者在轉(zhuǎn)換過程中修改該資產(chǎn)。這個(gè)過程的第一步是按需建立資產(chǎn)讀取器出口和資產(chǎn)寫入器入口。資產(chǎn)讀取器和寫入器完全配置好之後，分別調(diào)用 startReading 和 startWriting 方法啟動(dòng)它們。下面的代碼片段顯示了如何使用單個(gè)資產(chǎn)寫入器入口來寫入由單個(gè)資產(chǎn)讀取器出口提供的媒體數(shù)據(jù)：

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];
 
// Create a serialization queue for reading and writing.
//為讀寫創(chuàng)建一個(gè)序列化隊(duì)列。
dispatch_queue_t serializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
 
// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
//指定要在資產(chǎn)寫入器準(zhǔn)備好接收媒體數(shù)據(jù)時(shí)執(zhí)行的塊，以及調(diào)用它的隊(duì)列。
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:serializationQueue usingBlock:^{
     while ([self.assetWriterInput isReadyForMoreMediaData])
     {
          // Get the asset reader output's next sample buffer.
         //獲取資產(chǎn)讀取器輸出的下一個(gè)樣本緩沖區(qū)。
          CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
          if (sampleBuffer != NULL)
          {
               // If it exists, append this sample buffer to the output file.
               //如果存在，請(qǐng)將此樣本緩沖區(qū)附加到輸出文件。
               BOOL success = [self.assetWriterInput appendSampleBuffer:sampleBuffer];
               CFRelease(sampleBuffer);
               sampleBuffer = NULL;
               // Check for errors that may have occurred when appending the new sample buffer.
              //檢查附加新樣本緩沖區(qū)時(shí)可能發(fā)生的錯(cuò)誤。
               if (!success && self.assetWriter.status == AVAssetWriterStatusFailed)
               {
                    NSError *failureError = self.assetWriter.error;
                     //Handle the error.//處理錯(cuò)誤。
               }
          }
          else
          {
               // If the next sample buffer doesn't exist, find out why the asset reader output couldn't vend another one.
               //如果下一個(gè)樣本緩沖區(qū)不存在，請(qǐng)找出為什麼資產(chǎn)讀取器輸出無法再提供另一個(gè)樣本緩沖區(qū)。
               if (self.assetReader.status == AVAssetReaderStatusFailed)
               {
                    NSError *failureError = self.assetReader.error;
                     //Handle the error here.//處理錯(cuò)誤。
               }
               else
               {
                    // The asset reader output must have vended all of its samples. Mark the input as finished.
                     //資產(chǎn)讀取器輸出必定已經(jīng)提供完它的所有樣本。將輸入標(biāo)記為已完成。
                    [self.assetWriterInput markAsFinished];
                    break;
               }
          }
     }
}];

Putting It All Together: Using an Asset Reader and Writer in Tandem to Reencode an Asset - 總結(jié):使用資產(chǎn)讀取器和寫入器串聯(lián)重新編碼資產(chǎn)

This brief code example illustrates how to use an asset reader and writer to reencode the first video and audio track of an asset into a new file. It shows how to:

  • Use serialization queues to handle the asynchronous nature of reading and writing audiovisual data
  • Initialize an asset reader and configure two asset reader outputs, one for audio and one for video
  • Initialize an asset writer and configure two asset writer inputs, one for audio and one for video
  • Use an asset reader to asynchronously supply media data to an asset writer through two different output/input combinations
  • Use a dispatch group to be notified of completion of the reencoding process
  • Allow a user to cancel the reencoding process once it has begun

這個(gè)簡(jiǎn)短的代碼示例說明如何使用資產(chǎn)讀取器和寫入器將一個(gè)資產(chǎn)的第一個(gè)視頻和音頻軌道重新編碼到一個(gè)新文件。它展示了：

  • 使用序列化隊(duì)列來處理讀寫視聽數(shù)據(jù)的異步性
  • 初始化一個(gè)資產(chǎn)讀取器，並配置兩個(gè)資產(chǎn)讀取器出口，一個(gè)用於音頻，一個(gè)用於視頻
  • 初始化一個(gè)資產(chǎn)寫入器，並配置兩個(gè)資產(chǎn)寫入器入口，一個(gè)用於音頻，一個(gè)用於視頻
  • 使用一個(gè)資產(chǎn)讀取器，通過兩個(gè)不同的輸出/輸入組合來異步向資產(chǎn)寫入器提供媒體數(shù)據(jù)
  • 使用一個(gè)調(diào)度組來接收重新編碼過程完成的通知
  • 一旦開始，允許用戶取消重新編碼過程

Note: To focus on the most relevant code, this example omits several aspects of a complete application. To use AVFoundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces.

注：為了聚焦在最相關(guān)的代碼上，這個(gè)例子省略了完整應(yīng)用程序的幾個(gè)方面。要使用 AVFoundation，希望你有足夠的 Cocoa 經(jīng)驗(yàn)，能夠推斷出缺少的部分。

  • Handling the Initial Setup - 處理初始設(shè)置

Before you create your asset reader and writer and configure their outputs and inputs, you need to handle some initial setup. The first part of this setup involves creating three separate serialization queues to coordinate the reading and writing process.

在創(chuàng)建資產(chǎn)讀取器和寫入器並配置它們的出口和入口之前，你需要處理一些初始設(shè)置。此設(shè)置的第一部分包括創(chuàng)建 3 個(gè)獨(dú)立的序列化隊(duì)列來協(xié)調(diào)讀寫過程。

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];
 
// Create the main serialization queue.
//創(chuàng)建主序列化隊(duì)列。
self.mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];
 
// Create the serialization queue to use for reading and writing the audio data.
//創(chuàng)建序列化隊(duì)列用於讀取和寫入音頻數(shù)據(jù)。
self.rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);
NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];
 
// Create the serialization queue to use for reading and writing the video data.
//創(chuàng)建序列化隊(duì)列用於讀取和寫入視頻數(shù)據(jù)。
self.rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);

The main serialization queue is used to coordinate the starting and stopping of the asset reader and writer (perhaps due to cancellation) and the other two serialization queues are used to serialize the reading and writing by each output/input combination with a potential cancellation.

主序列化隊(duì)列用於協(xié)調(diào)資產(chǎn)讀取器和寫入器的啟動(dòng)和停止（停止可能是由於取消），另外兩個(gè)序列化隊(duì)列用於對(duì)每個(gè)出口/入口組合的讀寫進(jìn)行序列化，並處理可能發(fā)生的取消。

Now that you have some serialization queues, load the tracks of your asset and begin the reencoding process.

現(xiàn)在你有了這些序列化隊(duì)列，就可以加載資產(chǎn)的軌道，並開始重新編碼過程。

self.asset = <#AVAsset that you want to reencode#>;
self.cancelled = NO;
self.outputURL = <#NSURL representing desired output URL for file generated by asset writer#>;
// Asynchronously load the tracks of the asset you want to read.
//異步加載你想要讀取的資產(chǎn)的軌道。
[self.asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{
     // Once the tracks have finished loading, dispatch the work to the main serialization queue.
   //一旦軌道完成加載，將工作分派到主序列化隊(duì)列。
     dispatch_async(self.mainSerializationQueue, ^{
          // Due to asynchronous nature, check to see if user has already cancelled.
          //由於是異步操作，檢查用戶是否已經(jīng)取消。
          if (self.cancelled)
               return;
          BOOL success = YES;
          NSError *localError = nil;
          // Check for success of loading the assets tracks.
          success = ([self.asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
          if (success)
          {
               // If the tracks loaded successfully, make sure that no file exists at the output path for the asset writer.
               //如果音軌加載成功，請(qǐng)確保資產(chǎn)寫入器的輸出路徑中不存在任何文件。
               NSFileManager *fm = [NSFileManager defaultManager];
               NSString *localOutputPath = [self.outputURL path];
               if ([fm fileExistsAtPath:localOutputPath])
                    success = [fm removeItemAtPath:localOutputPath error:&localError];
          }
          if (success)
               success = [self setupAssetReaderAndAssetWriter:&localError];
          if (success)
               success = [self startAssetReaderAndWriter:&localError];
          if (!success)
               [self readingAndWritingDidFinishSuccessfully:success withError:localError];
     });
}];

When the track loading process finishes, whether successfully or not, the rest of the work is dispatched to the main serialization queue to ensure that all of this work is serialized with a potential cancellation. Now all that’s left is to implement the cancellation process and the three custom methods at the end of the previous code listing.

當(dāng)軌道加載過程結(jié)束後，無論成功與否，剩下的工作都會(huì)被分派到主序列化隊(duì)列，以確保所有這些工作都與可能發(fā)生的取消一起被序列化。現(xiàn)在，剩下的就是實(shí)現(xiàn)取消流程以及前面代碼清單結(jié)尾處的 3 個(gè)自定義方法。

  • Initializing the Asset Reader and Writer - 初始化資產(chǎn)讀取器和寫入器

The custom setupAssetReaderAndAssetWriter: method initializes the reader and writer and configures two output/input combinations, one for an audio track and one for a video track. In this example, the audio is decompressed to Linear PCM using the asset reader and compressed back to 128 kbps AAC using the asset writer. The video is decompressed to YUV using the asset reader and compressed to H.264 using the asset writer.

自定義的 setupAssetReaderAndAssetWriter: 方法初始化讀取器和寫入器，並配置兩個(gè)出口/入口組合，一個(gè)用於音頻軌道，一個(gè)用於視頻軌道。在這個(gè)例子中，音頻先用資產(chǎn)讀取器解壓縮為 Linear PCM，再用資產(chǎn)寫入器壓縮回 128 kbps AAC；視頻用資產(chǎn)讀取器解壓縮為 YUV，再用資產(chǎn)寫入器壓縮為 H.264。

- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
    // Create and initialize the asset reader.
    //創(chuàng)建並初始化資產(chǎn)讀取器。
    self.assetReader = [[AVAssetReader alloc] initWithAsset:self.asset error:outError];
    BOOL success = (self.assetReader != nil);
    if (success)
    {
        // If the asset reader was successfully initialized, do the same for the asset writer.
        //如果資產(chǎn)讀取器已成功初始化，則對(duì)資產(chǎn)寫入器執(zhí)行相同操作。
        self.assetWriter = [[AVAssetWriter alloc] initWithURL:self.outputURL
                                                     fileType:AVFileTypeQuickTimeMovie
                                                        error:outError];
        success = (self.assetWriter != nil);
    }
    
    if (success)
    {
        // If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
        //如果讀取器和寫入器都已成功初始化，請(qǐng)抓取將要使用的音頻和視頻資產(chǎn)軌道。
        AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
        NSArray *audioTracks = [self.asset tracksWithMediaType:AVMediaTypeAudio];
        if ([audioTracks count] > 0)
            assetAudioTrack = [audioTracks objectAtIndex:0];
        NSArray *videoTracks = [self.asset tracksWithMediaType:AVMediaTypeVideo];
        if ([videoTracks count] > 0)
            assetVideoTrack = [videoTracks objectAtIndex:0];
        
        if (assetAudioTrack)
        {
            // If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
            //如果有音軌需要讀取，請(qǐng)將解壓縮設(shè)置設(shè)置為線性PCM並創(chuàng)建資產(chǎn)讀取器輸出。
            NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
            self.assetReaderAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack
                                                                                     outputSettings:decompressionAudioSettings];
            [self.assetReader addOutput:self.assetReaderAudioOutput];
            // Then, set the compression settings to 128kbps AAC and create the asset writer input.
            //然後，將壓縮設(shè)置設(shè)置為128kbps AAC並創(chuàng)建資產(chǎn)寫入器輸入。
            AudioChannelLayout stereoChannelLayout = {
                .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
                .mChannelBitmap = 0,
                .mNumberChannelDescriptions = 0
            };
            NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
            NSDictionary *compressionAudioSettings = @{
                                                       AVFormatIDKey         : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
                                                       AVEncoderBitRateKey   : [NSNumber numberWithInteger:128000],
                                                       AVSampleRateKey       : [NSNumber numberWithInteger:44100],
                                                       AVChannelLayoutKey    : channelLayoutAsData,
                                                       AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
                                                       };
            self.assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType]
                                                                            outputSettings:compressionAudioSettings];
            [self.assetWriter addInput:self.assetWriterAudioInput];
        }
        
        if (assetVideoTrack)
        {
            // If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
            //如果有視頻軌道要讀取，請(qǐng)為YUV設(shè)置解壓縮設(shè)置並創(chuàng)建資產(chǎn)讀取器輸出。
            NSDictionary *decompressionVideoSettings = @{
                                                         (id)kCVPixelBufferPixelFormatTypeKey     : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_422YpCbCr8],
                                                         (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
                                                         };
            self.assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack
                                                                                     outputSettings:decompressionVideoSettings];
            [self.assetReader addOutput:self.assetReaderVideoOutput];
            CMFormatDescriptionRef formatDescription = NULL;
            // Grab the video format descriptions from the video track and grab the first one if it exists.
            //從視頻軌道抓取視頻格式描述，如果存在，抓取第一個(gè)。
            NSArray *videoFormatDescriptions = [assetVideoTrack formatDescriptions];
            if ([videoFormatDescriptions count] > 0)
                formatDescription = (__bridge CMFormatDescriptionRef)[videoFormatDescriptions objectAtIndex:0];
            CGSize trackDimensions = {
                .width = 0.0,
                .height = 0.0,
            };
            // If the video track had a format description, grab the track dimensions from there. Otherwise, grab them direcly from the track itself.
            //如果視頻軌道具有格式說明，請(qǐng)從中抓取軌道尺寸。否則，直接從軌道本身抓取它們。
            if (formatDescription)
                trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
            else
                trackDimensions = [assetVideoTrack naturalSize];
            NSDictionary *compressionSettings = nil;
            // If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
            //如果視頻軌道具有格式說明，請(qǐng)嘗試抓取視頻使用的干凈光圈設(shè)置和像素寬高比。
            if (formatDescription)
            {
                NSDictionary *cleanAperture = nil;
                NSDictionary *pixelAspectRatio = nil;
                CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);
                if (cleanApertureFromCMFormatDescription)
                {
                    cleanAperture = @{
                                      AVVideoCleanApertureWidthKey            : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
                                      AVVideoCleanApertureHeightKey           : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
                                      AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
                                      AVVideoCleanApertureVerticalOffsetKey   : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
                                      };
                }
                CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);
                if (pixelAspectRatioFromCMFormatDescription)
                {
                    pixelAspectRatio = @{
                                         AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
                                         AVVideoPixelAspectRatioVerticalSpacingKey   : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
                                         };
                }
                // Add whichever settings we could grab from the format description to the compression settings dictionary.
                //將我們可以從格式描述中獲取的任何設(shè)置添加到壓縮設(shè)置字典中。
                if (cleanAperture || pixelAspectRatio)
                {
                    NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
                    if (cleanAperture)
                        [mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];
                    if (pixelAspectRatio)
                        [mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];
                    compressionSettings = mutableCompressionSettings;
                }
            }
            // Create the video settings dictionary for H.264.
            //為H.264創(chuàng)建視頻設(shè)置字典。
            // Use a genuinely mutable dictionary; casting an immutable literal would make the setObject:forKey: call below throw.
            NSMutableDictionary *videoSettings = [NSMutableDictionary dictionaryWithDictionary:@{
                                                                           AVVideoCodecKey  : AVVideoCodecH264,
                                                                           AVVideoWidthKey  : [NSNumber numberWithDouble:trackDimensions.width],
                                                                           AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
                                                                           }];
            // Put the compression settings into the video settings dictionary if we were able to grab them.
            //如果可以的話，把壓縮設(shè)置放到視頻設(shè)置字典中。
            if (compressionSettings)
                [videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];
            // Create the asset writer input and add it to the asset writer.
            //創(chuàng)建資產(chǎn)寫入器輸入並將其添加到資產(chǎn)寫入器中。
            self.assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType]
                                                                            outputSettings:videoSettings];
            [self.assetWriter addInput:self.assetWriterVideoInput];
        }
    }
    return success;
}

  • Reencoding the Asset - 重新編碼資產(chǎn)

Provided that the asset reader and writer are successfully initialized and configured, the startAssetReaderAndWriter: method described in Handling the Initial Setup is called. This method is where the actual reading and writing of the asset takes place.

如果資產(chǎn)讀取器和寫入器都成功地初始化和配置，就會(huì)調(diào)用 Handling the Initial Setup 中描述的 startAssetReaderAndWriter: 方法。這個(gè)方法是實(shí)際進(jìn)行資產(chǎn)讀寫的地方。

- (BOOL)startAssetReaderAndWriter:(NSError **)outError
{
     BOOL success = YES;
     // Attempt to start the asset reader.
     //嘗試啟動(dòng)資產(chǎn)讀取器。
     success = [self.assetReader startReading];
     if (!success)
          *outError = [self.assetReader error];
     if (success)
     {
          // If the reader started successfully, attempt to start the asset writer.
          //如果讀取器成功啟動(dòng)，請(qǐng)嘗試啟動(dòng)資產(chǎn)寫入器。
          success = [self.assetWriter startWriting];
          if (!success)
               *outError = [self.assetWriter error];
     }
 
     if (success)
     {
          // If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
          //如果資產(chǎn)讀取器和寫入器都已成功啟動(dòng)，則在重新編碼的地方創(chuàng)建調(diào)度組，並開始樣本寫入會(huì)話。
          self.dispatchGroup = dispatch_group_create();
          [self.assetWriter startSessionAtSourceTime:kCMTimeZero];
          self.audioFinished = NO;
          self.videoFinished = NO;
 
          if (self.assetWriterAudioInput)
          {
               // If there is audio to reencode, enter the dispatch group before beginning the work.
               //如果有音頻需要重新編碼，請(qǐng)?jiān)陂_始工作前進(jìn)入調(diào)度組。
               dispatch_group_enter(self.dispatchGroup);
               // Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
               //指定要在資產(chǎn)寫入器準(zhǔn)備好接收音頻媒體數(shù)據(jù)時(shí)執(zhí)行的block，並指定要調(diào)用它的隊(duì)列。
               [self.assetWriterAudioInput requestMediaDataWhenReadyOnQueue:self.rwAudioSerializationQueue usingBlock:^{
                    // Because the block is called asynchronously, check to see whether its task is complete.
                    //因?yàn)閎lock是異步調(diào)用的，所以要檢查它的任務(wù)是否完成。
                    if (self.audioFinished)
                         return;
                    BOOL completedOrFailed = NO;
                    // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                    //如果任務(wù)還沒有完成，請(qǐng)確保輸入已經(jīng)為更多的媒體數(shù)據(jù)做好了準(zhǔn)備。
                    while ([self.assetWriterAudioInput isReadyForMoreMediaData] && !completedOrFailed)
                    {
                         // Get the next audio sample buffer, and append it to the output file.
                         //獲取下一個(gè)音頻樣本緩沖區(qū)，並將其附加到輸出文件。
                         CMSampleBufferRef sampleBuffer = [self.assetReaderAudioOutput copyNextSampleBuffer];
                         if (sampleBuffer != NULL)
                         {
                              BOOL success = [self.assetWriterAudioInput appendSampleBuffer:sampleBuffer];
                              CFRelease(sampleBuffer);
                              sampleBuffer = NULL;
                              completedOrFailed = !success;
                         }
                         else
                         {
                              completedOrFailed = YES;
                         }
                    }
                    if (completedOrFailed)
                    {
                         // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
                         //將輸入標(biāo)記為已完成，但前提是尚未完成，然後離開調(diào)度組（因?yàn)橐纛l工作已完成）。
                         BOOL oldFinished = self.audioFinished;
                         self.audioFinished = YES;
                         if (oldFinished == NO)
                         {
                              [self.assetWriterAudioInput markAsFinished];
                         }
                         dispatch_group_leave(self.dispatchGroup);
                    }
               }];
          }
 
          if (self.assetWriterVideoInput)
          {
               // If we had video to reencode, enter the dispatch group before beginning the work.
               //如果我們有要重新編碼的視頻，請(qǐng)?jiān)陂_始工作之前進(jìn)入調(diào)度組。
               dispatch_group_enter(self.dispatchGroup);
               // Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
               //指定要在資產(chǎn)寫入器準(zhǔn)備好視頻媒體數(shù)據(jù)時(shí)要執(zhí)行的block，並指定隊(duì)列調(diào)用它。
               [self.assetWriterVideoInput requestMediaDataWhenReadyOnQueue:self.rwVideoSerializationQueue usingBlock:^{
                    // Because the block is called asynchronously, check to see whether its task is complete.
                    //因?yàn)檫@個(gè)block是異步調(diào)用的，請(qǐng)檢查它的任務(wù)是否完成。
                    if (self.videoFinished)
                         return;
                    BOOL completedOrFailed = NO;
                    // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                    //如果任務(wù)還沒有完成，請(qǐng)確保輸入已經(jīng)為更多的媒體數(shù)據(jù)做好了準(zhǔn)備。
                    while ([self.assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
                    {
                         // Get the next video sample buffer, and append it to the output file.
                         //獲取下一個(gè)視頻采樣緩沖區(qū)，並將其附加到輸出文件。
                         CMSampleBufferRef sampleBuffer = [self.assetReaderVideoOutput copyNextSampleBuffer];
                         if (sampleBuffer != NULL)
                         {
                              BOOL success = [self.assetWriterVideoInput appendSampleBuffer:sampleBuffer];
                              CFRelease(sampleBuffer);
                              sampleBuffer = NULL;
                              completedOrFailed = !success;
                         }
                         else
                         {
                              completedOrFailed = YES;
                         }
                    }
                    if (completedOrFailed)
                    {
                         // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
                         //將輸入標(biāo)記為已完成，但前提是我們尚未完成，然後離開調(diào)度組（因?yàn)橐曨l工作已完成）。
                         BOOL oldFinished = self.videoFinished;
                         self.videoFinished = YES;
                         if (oldFinished == NO)
                         {
                              [self.assetWriterVideoInput markAsFinished];
                         }
                         dispatch_group_leave(self.dispatchGroup);
                    }
               }];
          }
          // Set up the notification that the dispatch group will send when the audio and video work have both finished.
          //設(shè)置當(dāng)音頻和視頻工作都完成時(shí)調(diào)度組將發(fā)送的通知。
          dispatch_group_notify(self.dispatchGroup, self.mainSerializationQueue, ^{
               BOOL finalSuccess = YES;
               NSError *finalError = nil;
               // Check to see if the work has finished due to cancellation.
               //檢查工作是否由於取消而結(jié)束。
               if (self.cancelled)
               {
                    // If so, cancel the reader and writer.
                    //如果取消了，就取消讀取器和寫入器。
                    [self.assetReader cancelReading];
                    [self.assetWriter cancelWriting];
               }
               else
               {
                    // If cancellation didn't occur, first make sure that the asset reader didn't fail.
                     //如果沒有取消，首先要確保資產(chǎn)讀取器沒有失敗。
                    if ([self.assetReader status] == AVAssetReaderStatusFailed)
                    {
                         finalSuccess = NO;
                         finalError = [self.assetReader error];
                    }
                    // If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
                     //如果資產(chǎn)讀取器沒有失敗，請(qǐng)嘗試停止資產(chǎn)寫入器並檢查是否有錯(cuò)誤。
                    if (finalSuccess)
                    {
                         finalSuccess = [self.assetWriter finishWriting];
                         if (!finalSuccess)
                              finalError = [self.assetWriter error];
                    }
               }
               // Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.
                //調(diào)用方法來處理完成，並傳入適當(dāng)?shù)膮?shù)以指示重新編碼是否成功。
               [self readingAndWritingDidFinishSuccessfully:finalSuccess withError:finalError];
          });
     }
     // Return success here to indicate whether the asset reader and writer were started successfully.
      //在這裡返回成功，以指示資產(chǎn)讀取器和寫入器是否已成功啟動(dòng)。
     return success;
}

During reencoding, the audio and video tracks are asynchronously handled on individual serialization queues to increase the overall performance of the process, but both queues are contained within the same dispatch group. By placing the work for each track within the same dispatch group, the group can send a notification when all of the work is done and the success of the reencoding process can be determined.

重新編碼期間，音頻和視頻軌道在各自的序列化隊(duì)列上異步處理，以提高整個(gè)過程的性能，但兩個(gè)隊(duì)列都包含在同一個(gè)調(diào)度組中。把每個(gè)軌道的工作安排在同一個(gè)調(diào)度組內(nèi)，該組就可以在所有工作完成時(shí)發(fā)送通知，從而可以確定重新編碼過程是否成功。

  • Handling Completion - 處理完成

To handle the completion of the reading and writing process, the readingAndWritingDidFinishSuccessfully: method is called—with parameters indicating whether or not the reencoding completed successfully. If the process didn’t finish successfully, the asset reader and writer are both canceled and any UI related tasks are dispatched to the main queue.

為了處理讀寫過程的完成，會(huì)調(diào)用 readingAndWritingDidFinishSuccessfully: 方法，其參數(shù)指出重新編碼是否成功完成。如果過程沒有成功完成，資產(chǎn)讀取器和寫入器都會(huì)被取消，任何與 UI 相關(guān)的任務(wù)都會(huì)被分派到主隊(duì)列。

- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
     if (!success)
     {
          // If the reencoding process failed, we need to cancel the asset reader and writer.
          //如果重新編碼過程失敗，我們需要取消資產(chǎn)讀取器和寫入器。
          [self.assetReader cancelReading];
          [self.assetWriter cancelWriting];
          dispatch_async(dispatch_get_main_queue(), ^{
               // Handle any UI tasks here related to failure.
               //處理任何與失敗相關(guān)的UI任務(wù)。
          });
     }
     else
     {
          // Reencoding was successful, reset booleans.
          //重新編碼成功，重置boolean值。
          self.cancelled = NO;
          self.videoFinished = NO;
          self.audioFinished = NO;
          dispatch_async(dispatch_get_main_queue(), ^{
               // Handle any UI tasks here related to success.
               //處理與成功相關(guān)的任何UI任務(wù)。
          });
     }
}

  • Handling Cancellation - 處理取消

Using multiple serialization queues, you can allow the user of your app to cancel the reencoding process with ease. On the main serialization queue, messages are asynchronously sent to each of the asset reencoding serialization queues to cancel their reading and writing. When these two serialization queues complete their cancellation, the dispatch group sends a notification to the main serialization queue where the cancelled property is set to YES. You might associate the cancel method from the following code listing with a button on your UI.

使用多個(gè)序列化隊(duì)列，你可以讓應(yīng)用程序的用戶輕鬆地取消重新編碼過程。在主序列化隊(duì)列上，消息被異步發(fā)送到每個(gè)資產(chǎn)重新編碼序列化隊(duì)列，以取消它們的讀寫。當(dāng)這兩個(gè)序列化隊(duì)列完成取消後，調(diào)度組向主序列化隊(duì)列發(fā)送一個(gè)通知，在那裡 cancelled 屬性被設(shè)置為 YES。你可以把下面代碼清單中的 cancel 方法與 UI 上的一個(gè)按鈕關(guān)聯(lián)起來。

- (void)cancel
{
     // Handle cancellation asynchronously, but serialize it with the main queue.
     //異步處理取消操作，但使用主隊(duì)列序列化取消操作。
     dispatch_async(self.mainSerializationQueue, ^{
          // If we had audio data to reencode, we need to cancel the audio work.
          //如果我們有音頻數(shù)據(jù)要重新編碼，我們需要取消音頻工作。
          if (self.assetWriterAudioInput)
          {
               // Handle cancellation asynchronously again, but this time serialize it with the audio queue.
               //再次異步處理取消，但是這次使用音頻隊(duì)列將其序列化。
               dispatch_async(self.rwAudioSerializationQueue, ^{
                    // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                    //更新指示任務(wù)已完成的Boolean屬性，並將輸入標(biāo)記為已完成（如果尚未標(biāo)記為已完成）。
                    BOOL oldFinished = self.audioFinished;
                    self.audioFinished = YES;
                    if (oldFinished == NO)
                    {
                         [self.assetWriterAudioInput markAsFinished];
                    }
                    // Leave the dispatch group since the audio work is finished now.
                    //離開調(diào)度組，因?yàn)橐纛l工作已經(jīng)完成。
                    dispatch_group_leave(self.dispatchGroup);
               });
          }
 
          if (self.assetWriterVideoInput)
          {
               // Handle cancellation asynchronously again, but this time serialize it with the video queue.
               //再次異步處理取消，但這一次使用視頻隊(duì)列序列化取消。
               dispatch_async(self.rwVideoSerializationQueue, ^{
                    // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                    //更新指示任務(wù)已完成的Boolean屬性，並將輸入標(biāo)記為已完成（如果尚未標(biāo)記為已完成）。
                    BOOL oldFinished = self.videoFinished;
                    self.videoFinished = YES;
                    if (oldFinished == NO)
                    {
                         [self.assetWriterVideoInput markAsFinished];
                    }
                    // Leave the dispatch group, since the video work is finished now.
                    //離開調(diào)度組，因?yàn)橐曨l工作已經(jīng)完成。
                    dispatch_group_leave(self.dispatchGroup);
               });
          }
          // Set the cancelled Boolean property to YES to cancel any work on the main queue as well.
          //將取消的Boolean屬性設(shè)置為YES以取消主隊(duì)列上的任何工作。
          self.cancelled = YES;
     });
}

Asset Output Settings Assistant - 資產(chǎn)出口設(shè)置助手

The AVOutputSettingsAssistant class aids in creating output-settings dictionaries for an asset reader or writer. This makes setup much simpler, especially for high frame rate H264 movies that have a number of specific presets. Listing 5-1 shows an example of how to use the output settings assistant.

AVOutputSettingsAssistant 類有助於為資產(chǎn)讀取器或者寫入器創(chuàng)建輸出設(shè)置字典。這使得設(shè)置簡(jiǎn)單得多，特別是對(duì)於有許多特定預(yù)設(shè)的高幀率 H264 影片。Listing 5-1 顯示了一個(gè)使用輸出設(shè)置助手的例子。

Listing 5-1 AVOutputSettingsAssistant sample

AVOutputSettingsAssistant *outputSettingsAssistant = [AVOutputSettingsAssistant outputSettingsAssistantWithPreset:<some preset>];
CMFormatDescriptionRef audioFormat = [self getAudioFormat];
 
if (audioFormat != NULL)
    [outputSettingsAssistant setSourceAudioFormat:(CMAudioFormatDescriptionRef)audioFormat];
 
CMFormatDescriptionRef videoFormat = [self getVideoFormat];
 
if (videoFormat != NULL)
    [outputSettingsAssistant setSourceVideoFormat:(CMVideoFormatDescriptionRef)videoFormat];
 
CMTime assetMinVideoFrameDuration = [self getMinFrameDuration];
CMTime averageFrameDuration = [self getAvgFrameDuration];
 
[outputSettingsAssistant setSourceVideoAverageFrameDuration:averageFrameDuration];
[outputSettingsAssistant setSourceVideoMinFrameDuration:assetMinVideoFrameDuration];
 
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:<some URL> fileType:[outputSettingsAssistant outputFileType] error:NULL];
AVAssetWriterInput *audioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:[outputSettingsAssistant audioSettings] sourceFormatHint:audioFormat];
AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:[outputSettingsAssistant videoSettings] sourceFormatHint:videoFormat];

參考文獻(xiàn):
Yofer Zhang的博客
AVFoundation的蘋果官網(wǎng)
