Re-encoding Video on iOS


There are essentially two kinds of video encoding on mobile:

  1. Software encoding, which uses the CPU to encode and decode video. It is flexible but inefficient. (Most mobile apps implement software encoding with FFmpeg.)
  2. Hardware encoding, which uses the GPU or a dedicated media processor. The Video Toolbox framework, opened to third-party developers in iOS 8.0, provides hardware encoding and decoding.

1. Basic VideoToolbox Data Structures

The data structures involved before and after Video Toolbox encoding and decoding:

(1) CVPixelBuffer: the image data structure before encoding and after decoding.

(2) CMTime, CMClock, and CMTimebase: timestamp-related types. Time is represented as a 64-bit value over a 32-bit timescale.

(3) CMBlockBuffer: the data structure holding the image data after encoding.

(4) CMVideoFormatDescription: describes the image storage layout, the codec, and other format details.

(5) CMSampleBuffer: the container structure holding video frames before and after encoding/decoding.
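
CMTime's representation can be pictured as a rational number: a 64-bit value counted in units of a 32-bit timescale, so timestamps stay exact in integer arithmetic. A minimal, portable C sketch of the idea (this models the concept only; it is not the actual CoreMedia API):

```c
#include <assert.h>
#include <stdint.h>

/* Models the core of CMTime: time = value / timescale.
   CoreMedia stores value as a 64-bit integer and timescale as a 32-bit integer. */
typedef struct {
    int64_t value;     /* number of timescale units elapsed */
    int32_t timescale; /* units per second, e.g. 600 or 44100 */
} RationalTime;

static double rational_time_seconds(RationalTime t) {
    return (double)t.value / (double)t.timescale;
}

/* Presentation timestamp of frame n in a 30 fps stream, using the common
   600-unit timescale: each frame lasts exactly 20 units. */
static RationalTime frame_pts_30fps(int64_t n) {
    RationalTime t = { n * 20, 600 };
    return t;
}
```

With a timescale of 600, frame 30 of a 30 fps stream has value 600, i.e. exactly one second; integer value/timescale pairs avoid the drift that accumulating floating-point seconds would introduce.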

2. Using Hardware Decoding and Encoding

  1. Create an AVAssetReader to turn the H.264 stream into decoded CMSampleBuffers.

    (1) Extract the SPS and PPS and build a format description from them.

    (2) Extract the video frame data into a CMBlockBuffer.

    (3) Generate CMTime information as needed.

  2. Create an AVAssetWriter and configure the output and compression properties.
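
Step 1(1), extracting the SPS and PPS, amounts to scanning the Annex-B byte stream for NAL units and checking each unit's type, which is the low 5 bits of the first byte after the start code (7 = SPS, 8 = PPS). A portable C sketch of that scan (illustrative only; the buffers in the usage example below are fabricated placeholders, not real parameter sets):

```c
#include <stddef.h>
#include <stdint.h>

/* Returns the H.264 NAL unit type (7 = SPS, 8 = PPS, 5 = IDR slice, ...)
   of the first NAL unit found after a 00 00 00 01 start code, or -1 if
   no start code is present. */
static int first_nal_type(const uint8_t *buf, size_t len) {
    for (size_t i = 0; i + 4 < len; i++) {
        if (buf[i] == 0 && buf[i+1] == 0 && buf[i+2] == 0 && buf[i+3] == 1)
            return buf[i+4] & 0x1F; /* low 5 bits of the NAL header byte */
    }
    return -1;
}
```

Once the SPS and PPS payloads are located this way, they would be handed to CMVideoFormatDescriptionCreateFromH264ParameterSets to build the format description (not shown here).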


The code below briefly demonstrates how to use an asset reader and writer to re-encode the first video track and first audio track of an asset and write the result to a new file. (Along the way you can change the video's dimensions, frame rate, bit rate, file format, and so on.)


                             Initial Setup

Before creating and configuring the asset reader and writer, some initial setup is needed. First, create three serial queues for the read/write work.

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];

// Create the main serialization queue.
self.mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];

// Create the serialization queue to use for reading and writing the audio data.
self.rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);
NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];

// Create the serialization queue to use for reading and writing the video data.
self.rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);

The mainSerializationQueue queue is used to start, stop, and cancel the asset reader and writer. The other two queues serialize the reading and writing for each output/input pair.

Next, load the asset's tracks and kick off the re-encoding.

    self.asset = <#AVAsset that you want to reencode#>;
    self.cancelled = NO;
    self.outputURL = <#NSURL representing desired output URL for file generated by asset writer#>;
    // Asynchronously load the tracks of the asset you want to read.
    [self.asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{
         // Once the tracks have finished loading, dispatch the work to the main serialization queue.
         dispatch_async(self.mainSerializationQueue, ^{
              // Due to asynchronous nature, check to see if user has already cancelled.
              if (self.cancelled)
                   return;
              BOOL success = YES;
              NSError *localError = nil;
              // Check for success of loading the asset's tracks.
              success = ([self.asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
              if (success)
              {
                   // If the tracks loaded successfully, make sure that no file exists at the output path for the asset writer.
                   NSFileManager *fm = [NSFileManager defaultManager];
                   NSString *localOutputPath = [self.outputURL path];
                   if ([fm fileExistsAtPath:localOutputPath])
                        success = [fm removeItemAtPath:localOutputPath error:&localError];
              }
              if (success)
                   success = [self setupAssetReaderAndAssetWriter:&localError];
              if (success)
                   success = [self startAssetReaderAndWriter:&localError];
              if (!success)
                   [self readingAndWritingDidFinishSuccessfully:success withError:localError];
         });
    }]; 

All that remains is to handle cancellation and to implement the three custom methods.


                      Initializing the Asset Reader and Writer

The custom method setupAssetReaderAndAssetWriter initializes and configures the asset reader and writer. In this example, audio is first decompressed by the asset reader to Linear PCM and then compressed by the asset writer to 128 kbps AAC; video is decompressed by the asset reader to YUV and then compressed by the asset writer to H.264:

- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
        // Create and initialize the asset reader.
     self.assetReader = [[AVAssetReader alloc] initWithAsset:self.asset error:outError];
     BOOL success = (self.assetReader != nil);
     if (success)
     {
          // If the asset reader was successfully initialized, do the same for the asset writer.
          self.assetWriter = [[AVAssetWriter alloc] initWithURL:self.outputURL fileType:AVFileTypeQuickTimeMovie error:outError];
          success = (self.assetWriter != nil);
     }

     if (success)
     {
          // If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
          AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
          NSArray *audioTracks = [self.asset tracksWithMediaType:AVMediaTypeAudio];
          if ([audioTracks count] > 0)
               assetAudioTrack = [audioTracks objectAtIndex:0];
          NSArray *videoTracks = [self.asset tracksWithMediaType:AVMediaTypeVideo];
          if ([videoTracks count] > 0)
               assetVideoTrack = [videoTracks objectAtIndex:0];

          if (assetAudioTrack)
          {
               // If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
               NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
               self.assetReaderAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings];
               [self.assetReader addOutput:self.assetReaderAudioOutput];
               // Then, set the compression settings to 128kbps AAC and create the asset writer input.
               AudioChannelLayout stereoChannelLayout = {
                    .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
                    .mChannelBitmap = 0,
                    .mNumberChannelDescriptions = 0
               };
               NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
               NSDictionary *compressionAudioSettings = @{
                    AVFormatIDKey         : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
                    AVEncoderBitRateKey   : [NSNumber numberWithInteger:128000],
                    AVSampleRateKey       : [NSNumber numberWithInteger:44100],
                    AVChannelLayoutKey    : channelLayoutAsData,
                    AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
               };
               self.assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings];
               [self.assetWriter addInput:self.assetWriterAudioInput];
          }

          if (assetVideoTrack)
          {
               // If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
               NSDictionary *decompressionVideoSettings = @{
                    (id)kCVPixelBufferPixelFormatTypeKey     : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_422YpCbCr8],
                    (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
               };
               self.assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
               [self.assetReader addOutput:self.assetReaderVideoOutput];
               CMFormatDescriptionRef formatDescription = NULL;
               // Grab the video format descriptions from the video track and grab the first one if it exists.
               NSArray *videoFormatDescriptions = [assetVideoTrack formatDescriptions];
               if ([videoFormatDescriptions count] > 0)
                    formatDescription = (__bridge CMFormatDescriptionRef)[videoFormatDescriptions objectAtIndex:0];
               CGSize trackDimensions = {
                    .width = 0.0,
                    .height = 0.0,
               };
               // If the video track had a format description, grab the track dimensions from there. Otherwise, grab them directly from the track itself.
               if (formatDescription)
                    trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
               else
                    trackDimensions = [assetVideoTrack naturalSize];
               NSDictionary *compressionSettings = nil;
               // If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
               if (formatDescription)
               {
                    NSDictionary *cleanAperture = nil;
                    NSDictionary *pixelAspectRatio = nil;
                    CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);
                    if (cleanApertureFromCMFormatDescription)
                    {
                         cleanAperture = @{
                              AVVideoCleanApertureWidthKey            : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
                              AVVideoCleanApertureHeightKey           : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
                              AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
                              AVVideoCleanApertureVerticalOffsetKey   : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
                         };
                    }
                    CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);
                    if (pixelAspectRatioFromCMFormatDescription)
                    {
                         pixelAspectRatio = @{
                              AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
                              AVVideoPixelAspectRatioVerticalSpacingKey   : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
                         };
                    }
                    // Add whichever settings we could grab from the format description to the compression settings dictionary.
                    if (cleanAperture || pixelAspectRatio)
                    {
                         NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
                         if (cleanAperture)
                              [mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];
                         if (pixelAspectRatio)
                               [mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];
                          // Encoder tuning: average bit rate (bps is assumed to be defined elsewhere),
                          // expected source frame rate, key-frame interval, and H.264 profile/level.
                          [mutableCompressionSettings setObject:[NSNumber numberWithFloat:bps] forKey:AVVideoAverageBitRateKey];
                          [mutableCompressionSettings setObject:@(24) forKey:AVVideoExpectedSourceFrameRateKey];
                          [mutableCompressionSettings setObject:@(1) forKey:AVVideoMaxKeyFrameIntervalKey];
                          [mutableCompressionSettings setObject:AVVideoProfileLevelH264Main31 forKey:AVVideoProfileLevelKey];
                          compressionSettings = mutableCompressionSettings;
                    }
               }
               // Create the video settings dictionary for H.264, with the video dimensions.
               // (Note: a mutable copy is required here, since a dictionary literal is immutable
               // and setObject:forKey: is called on it below.)
               NSMutableDictionary *videoSettings = [@{
                    AVVideoCodecKey  : AVVideoCodecH264,
                    AVVideoWidthKey  : [NSNumber numberWithDouble:trackDimensions.width],
                    AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
               } mutableCopy];
               // Put the compression settings into the video settings dictionary if we were able to grab them.
               if (compressionSettings)
                    [videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];
               // Create the asset writer input and add it to the asset writer.
               self.assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings];
               [self.assetWriter addInput:self.assetWriterVideoInput];
          }
     }
     return success;
}

                            Re-encoding the Asset

The method startAssetReaderAndWriter performs the actual reading and writing:

- (BOOL)startAssetReaderAndWriter:(NSError **)outError
{
     BOOL success = YES;
     // Attempt to start the asset reader.
     success = [self.assetReader startReading];
     if (!success)
          *outError = [self.assetReader error];
     if (success)
     {
          // If the reader started successfully, attempt to start the asset writer.
          success = [self.assetWriter startWriting];
          if (!success)
               *outError = [self.assetWriter error];
     }

     if (success)
     {
          // If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
          self.dispatchGroup = dispatch_group_create();
          [self.assetWriter startSessionAtSourceTime:kCMTimeZero];
          self.audioFinished = NO;
          self.videoFinished = NO;

          if (self.assetWriterAudioInput)
          {
               // If there is audio to reencode, enter the dispatch group before beginning the work.
               dispatch_group_enter(self.dispatchGroup);
               // Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
               [self.assetWriterAudioInput requestMediaDataWhenReadyOnQueue:self.rwAudioSerializationQueue usingBlock:^{
                    // Because the block is called asynchronously, check to see whether its task is complete.
                    if (self.audioFinished)
                         return;
                    BOOL completedOrFailed = NO;
                    // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                    while ([self.assetWriterAudioInput isReadyForMoreMediaData] && !completedOrFailed)
                    {
                         // Get the next audio sample buffer, and append it to the output file.
                         CMSampleBufferRef sampleBuffer = [self.assetReaderAudioOutput copyNextSampleBuffer];
                         if (sampleBuffer != NULL)
                         {
                              BOOL success = [self.assetWriterAudioInput appendSampleBuffer:sampleBuffer];
                              CFRelease(sampleBuffer);
                              sampleBuffer = NULL;
                              completedOrFailed = !success;
                         }
                         else
                         {
                              completedOrFailed = YES;
                         }
                    }
                    if (completedOrFailed)
                    {
                         // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
                         BOOL oldFinished = self.audioFinished;
                         self.audioFinished = YES;
                         if (oldFinished == NO)
                         {
                              [self.assetWriterAudioInput markAsFinished];
                         }
                         dispatch_group_leave(self.dispatchGroup);
                    }
               }];
          }

          if (self.assetWriterVideoInput)
          {
               // If we had video to reencode, enter the dispatch group before beginning the work.
               dispatch_group_enter(self.dispatchGroup);
               // Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
               [self.assetWriterVideoInput requestMediaDataWhenReadyOnQueue:self.rwVideoSerializationQueue usingBlock:^{
                    // Because the block is called asynchronously, check to see whether its task is complete.
                    if (self.videoFinished)
                         return;
                    BOOL completedOrFailed = NO;
                    // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                    while ([self.assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
                    {
                         // Get the next video sample buffer, and append it to the output file.
                         CMSampleBufferRef sampleBuffer = [self.assetReaderVideoOutput copyNextSampleBuffer];
                         if (sampleBuffer != NULL)
                         {
                              BOOL success = [self.assetWriterVideoInput appendSampleBuffer:sampleBuffer];
                              CFRelease(sampleBuffer);
                              sampleBuffer = NULL;
                              completedOrFailed = !success;
                         }
                         else
                         {
                              completedOrFailed = YES;
                         }
                    }
                    if (completedOrFailed)
                    {
                         // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
                         BOOL oldFinished = self.videoFinished;
                         self.videoFinished = YES;
                         if (oldFinished == NO)
                         {
                              [self.assetWriterVideoInput markAsFinished];
                         }
                         dispatch_group_leave(self.dispatchGroup);
                    }
               }];
          }
          // Set up the notification that the dispatch group will send when the audio and video work have both finished.
          dispatch_group_notify(self.dispatchGroup, self.mainSerializationQueue, ^{
               BOOL finalSuccess = YES;
               NSError *finalError = nil;
               // Check to see if the work has finished due to cancellation.
               if (self.cancelled)
               {
                    // If so, cancel the reader and writer.
                    [self.assetReader cancelReading];
                    [self.assetWriter cancelWriting];
               }
               else
               {
                    // If cancellation didn't occur, first make sure that the asset reader didn't fail.
                    if ([self.assetReader status] == AVAssetReaderStatusFailed)
                    {
                         finalSuccess = NO;
                         finalError = [self.assetReader error];
                    }
                    // If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
                    if (finalSuccess)
                    {
                         finalSuccess = [self.assetWriter finishWriting];
                         if (!finalSuccess)
                              finalError = [self.assetWriter error];
                    }
               }
               // Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.
               [self readingAndWritingDidFinishSuccessfully:finalSuccess withError:finalError];
          });
     }
     // Return success here to indicate whether the asset reader and writer were started successfully.
     return success;
}

During re-encoding, audio and video are processed on two separate queues for better throughput. Both queues belong to the same dispatch group; once the work on each queue has finished, readingAndWritingDidFinishSuccessfully is called.
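
The dispatch-group usage above is a fork-join: each media type "enters" the group before its work starts and "leaves" when done, and the notify block fires only once every member has left. The same join logic, modeled in portable C with a counter (a sketch of the pattern, not of libdispatch itself):

```c
/* A minimal model of dispatch_group semantics: enter increments a pending
   count, leave decrements it, and the completion fires when it hits zero. */
typedef struct {
    int pending;
    int completed; /* set once all members have left; stands in for the notify block */
} JoinGroup;

static void group_enter(JoinGroup *g) {
    g->pending++;
}

static void group_leave(JoinGroup *g) {
    if (--g->pending == 0)
        g->completed = 1;
}
```

In the real code this counting is done by libdispatch, and the enter/leave calls are serialized by the audio and video queues, so no extra locking is needed.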


                           Handling the Result

Handle the re-encoding result and update the UI accordingly:

- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
     if (!success)
     {
          // If the reencoding process failed, we need to cancel the asset reader and writer.
          [self.assetReader cancelReading];
          [self.assetWriter cancelWriting];
          dispatch_async(dispatch_get_main_queue(), ^{
               // Handle any UI tasks here related to failure.
          });
     }
     else
     {
          // Reencoding was successful, reset booleans.
          self.cancelled = NO;
          self.videoFinished = NO;
          self.audioFinished = NO;
          dispatch_async(dispatch_get_main_queue(), ^{
               // Handle any UI tasks here related to success.
          });
     }
}

Re-encoding can also be cancelled.


Thanks to the serial queues, cancelling a re-encode in progress is straightforward. Hook the following code up to a "Cancel" button in the UI:

- (void)cancel
{
     // Handle cancellation asynchronously, but serialize it with the main queue.
     dispatch_async(self.mainSerializationQueue, ^{
          // If we had audio data to reencode, we need to cancel the audio work.
          if (self.assetWriterAudioInput)
          {
               // Handle cancellation asynchronously again, but this time serialize it with the audio queue.
               dispatch_async(self.rwAudioSerializationQueue, ^{
                    // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                    BOOL oldFinished = self.audioFinished;
                    self.audioFinished = YES;
                    if (oldFinished == NO)
                    {
                         [self.assetWriterAudioInput markAsFinished];
                    }
                    // Leave the dispatch group since the audio work is finished now.
                    dispatch_group_leave(self.dispatchGroup);
               });
          }

          if (self.assetWriterVideoInput)
          {
               // Handle cancellation asynchronously again, but this time serialize it with the video queue.
               dispatch_async(self.rwVideoSerializationQueue, ^{
                    // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                    BOOL oldFinished = self.videoFinished;
                    self.videoFinished = YES;
                    if (oldFinished == NO)
                    {
                         [self.assetWriterVideoInput markAsFinished];
                    }
                    // Leave the dispatch group, since the video work is finished now.
                    dispatch_group_leave(self.dispatchGroup);
               });
          }
          // Set the cancelled Boolean property to YES to cancel any work on the main queue as well.
          self.cancelled = YES;
     });
}


To re-encode a video from the photo library, a few more steps are needed:

  1. Fetch the video asset from Photos

     PHVideoRequestOptions *options = [[PHVideoRequestOptions alloc] init];
     options.version = PHVideoRequestOptionsVersionOriginal;
     options.deliveryMode = PHVideoRequestOptionsDeliveryModeAutomatic;
     options.networkAccessAllowed = YES;
     [[PHImageManager defaultManager] requestAVAssetForVideo:asset options:options resultHandler:^(AVAsset *avasset, AVAudioMix *audioMix, NSDictionary *info) {
         AVURLAsset *videoAsset = (AVURLAsset *)avasset;
         NSLog(@"AVAsset URL: %@", videoAsset.URL);
         NSString *videoPath = videoAsset.URL.path;
         NSLog(@"Original video size: %llu", [[NSFileManager defaultManager] attributesOfItemAtPath:videoPath error:nil].fileSize);
         [self startExportVideoWithVideoAsset:videoAsset completion:completion];
     }];
    
  2. Configure the export session (compression preset, output file type, etc.)

     NSArray *presets = [AVAssetExportSession exportPresetsCompatibleWithAsset:videoAsset];
     if ([presets containsObject:AVAssetExportPreset640x480]) {
         AVAssetExportSession *session = [[AVAssetExportSession alloc]initWithAsset:videoAsset presetName:AVAssetExportPreset640x480];
         
         NSDateFormatter *formater = [[NSDateFormatter alloc] init];
         [formater setDateFormat:@"yyyy-MM-dd-HH:mm:ss"];
         NSString *outputPath = [[CACHES_FOLDER stringByAppendingPathComponent:@"video"] stringByAppendingFormat:@"/%@.mp4",[formater stringFromDate:[NSDate date]]];
         NSLog(@"video outputPath = %@",outputPath);
         session.outputURL = [NSURL fileURLWithPath:outputPath];
         
         // Optimize for network use.
         session.shouldOptimizeForNetworkUse = true;
         
         NSArray *supportedTypeArray = session.supportedFileTypes;
         if ([supportedTypeArray containsObject:AVFileTypeMPEG4]) {
             session.outputFileType = AVFileTypeMPEG4;
         } else if (supportedTypeArray.count == 0) {
             NSLog(@"No supported file types for export");
             return;
         } else {
             session.outputFileType = [supportedTypeArray objectAtIndex:0];
         }
         
         if (![[NSFileManager defaultManager] fileExistsAtPath:[CACHES_FOLDER stringByAppendingPathComponent:@"video"]]) {
             [[NSFileManager defaultManager] createDirectoryAtPath:[CACHES_FOLDER stringByAppendingPathComponent:@"video"] withIntermediateDirectories:YES attributes:nil error:nil];
         }
         
         AVMutableVideoComposition *videoComposition = [self fixedCompositionWithAsset:videoAsset];
         if (videoComposition.renderSize.width) {
             // Fix the video orientation.
             session.videoComposition = videoComposition;
         }
         
         // Begin to export video to the output path asynchronously.
         [session exportAsynchronouslyWithCompletionHandler:^(void) {
             switch (session.status) {
                 case AVAssetExportSessionStatusUnknown:
                     NSLog(@"AVAssetExportSessionStatusUnknown"); break;
                 case AVAssetExportSessionStatusWaiting:
                     NSLog(@"AVAssetExportSessionStatusWaiting"); break;
                 case AVAssetExportSessionStatusExporting:
                     NSLog(@"AVAssetExportSessionStatusExporting"); break;
                 case AVAssetExportSessionStatusCompleted: {
                     NSLog(@"AVAssetExportSessionStatusCompleted");
                     dispatch_async(dispatch_get_main_queue(), ^{
                         if (completion) {
                             completion(outputPath);
                         }
                         NSLog(@"Exported video size: %llu", [[NSFileManager defaultManager] attributesOfItemAtPath:outputPath error:nil].fileSize);
                     });
                 }  break;
                 case AVAssetExportSessionStatusFailed:
                     NSLog(@"AVAssetExportSessionStatusFailed"); break;
                 default: break;
             }
         }];
     }
    
  3. Build a video composition that corrects the orientation

     - (AVMutableVideoComposition *)fixedCompositionWithAsset:(AVAsset *)videoAsset {
         AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
         // Determine the video's rotation.
         int degrees = [self degressFromVideoFileWithAsset:videoAsset];
         if (degrees != 0) {
             CGAffineTransform translateToCenter;
             CGAffineTransform mixedTransform;
             videoComposition.frameDuration = CMTimeMake(1, 30);

             NSArray *tracks = [videoAsset tracksWithMediaType:AVMediaTypeVideo];
             AVAssetTrack *videoTrack = [tracks objectAtIndex:0];

             if (degrees == 90) {
                 // Rotate 90° clockwise.
                 translateToCenter = CGAffineTransformMakeTranslation(videoTrack.naturalSize.height, 0.0);
                 mixedTransform = CGAffineTransformRotate(translateToCenter, M_PI_2);
                 videoComposition.renderSize = CGSizeMake(videoTrack.naturalSize.height, videoTrack.naturalSize.width);
             } else if (degrees == 180) {
                 // Rotate 180° clockwise.
                 translateToCenter = CGAffineTransformMakeTranslation(videoTrack.naturalSize.width, videoTrack.naturalSize.height);
                 mixedTransform = CGAffineTransformRotate(translateToCenter, M_PI);
                 videoComposition.renderSize = CGSizeMake(videoTrack.naturalSize.width, videoTrack.naturalSize.height);
             } else if (degrees == 270) {
                 // Rotate 270° clockwise.
                 translateToCenter = CGAffineTransformMakeTranslation(0.0, videoTrack.naturalSize.width);
                 mixedTransform = CGAffineTransformRotate(translateToCenter, M_PI_2 * 3.0);
                 videoComposition.renderSize = CGSizeMake(videoTrack.naturalSize.height, videoTrack.naturalSize.width);
             }

             AVMutableVideoCompositionInstruction *roateInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
             roateInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, [videoAsset duration]);
             AVMutableVideoCompositionLayerInstruction *roateLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];

             [roateLayerInstruction setTransform:mixedTransform atTime:kCMTimeZero];

             roateInstruction.layerInstructions = @[roateLayerInstruction];
             // Attach the orientation instruction to the composition.
             videoComposition.instructions = @[roateInstruction];
         }

         return videoComposition;
     }
    
  4. Determine the video's rotation angle

     -(int)degressFromVideoFileWithAsset:(AVAsset *)asset {
         int degress = 0;
         NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
         if([tracks count] > 0) {
             AVAssetTrack *videoTrack = [tracks objectAtIndex:0];
             CGAffineTransform t = videoTrack.preferredTransform;
             if(t.a == 0 && t.b == 1.0 && t.c == -1.0 && t.d == 0){
                 // Portrait
                 degress = 90;
             } else if(t.a == 0 && t.b == -1.0 && t.c == 1.0 && t.d == 0){
                 // PortraitUpsideDown
                 degress = 270;
             } else if(t.a == 1.0 && t.b == 0 && t.c == 0 && t.d == 1.0){
                 // LandscapeRight
                 degress = 0;
             } else if(t.a == -1.0 && t.b == 0 && t.c == 0 && t.d == -1.0){
                 // LandscapeLeft
                 degress = 180;
             }
         }
         return degress;
     }
    


One thing observed in practice: once the exported video has been compressed below a certain size, re-encoding it again does not shrink it further; the file actually grows, and no parameter change reduces it. This deserves further investigation. Also, the flow above re-encodes the already-compressed video; the steps can be swapped, first fetching the AVAsset and re-encoding it, then compressing. Experiments show both orderings produce the same result: the file shrinks up to a point, beyond which no parameter tweak makes it any smaller.
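
This size floor follows from arithmetic: for a fixed duration, the output size is governed almost entirely by the audio and video bit rates, so once the encoder's bit rate matches what the content needs, re-encoding cannot shrink the file further. A rough back-of-the-envelope estimate (a sketch that ignores container overhead):

```c
#include <stdint.h>

/* Approximate output file size in bytes:
   (video bps + audio bps) * duration in seconds / 8 bits per byte.
   Container overhead and variable-bit-rate fluctuation are ignored. */
static int64_t estimated_size_bytes(int64_t video_bps, int64_t audio_bps,
                                    int64_t seconds) {
    return (video_bps + audio_bps) * seconds / 8;
}
```

For example, 60 seconds at a 1 Mbps video rate plus the 128 kbps AAC used above comes to about 8.46 MB; lowering one of the bit rates is the only way to move that number.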

Reference: http://www.devzhang.cn/2016/09/20/Asset%E7%9A%84%E9%87%8D%E7%BC%96%E7%A0%81%E5%8F%8A%E5%AF%BC%E5%87%BA/
