Requirement
This article decodes encoded H.264/H.265 video stream files into raw video data; once decoded, the frames can be rendered to the screen or used for other purposes.
How It Works
As we know, encoded data is meant for transmission and cannot be rendered to the screen directly. We therefore use Apple's native VideoToolbox framework on the encoded video stream parsed out of the file, decoding the compressed video data (h264/h265) into raw video data in a chosen format (yuv, RGB) for rendering.
Note: this article focuses on decoding. It relies on an FFmpeg build module, a video parsing module, and a rendering module; all of them are linked in the Prerequisites below.
Prerequisites
- Audio/video fundamentals
- Setting up an FFmpeg environment on iOS
- Parsing video data with FFmpeg
- Rendering video data with OpenGL
- H.264/H.265 stream structure
Code: Video Decoder
Juejin: Video Decoder
Jianshu: Video Decoder
Blog: Video Decoder
Overall Architecture
The overall idea: pack the data FFmpeg parses out into a CMBlockBuffer, pack the vps/sps/pps separated from the extra data into a CMVideoFormatDesc, pack the computed timestamp into a CMTime, and finally assemble them into a complete CMSampleBuffer to feed the decoder.
Simplified Flow
FFmpeg parse flow
- Create the format context: avformat_alloc_context
- Open the file stream: avformat_open_input
- Find stream information: avformat_find_stream_info
- Get the index of the audio/video stream: formatContext->streams[i]->codecpar->codec_type == (isVideoStream ? AVMEDIA_TYPE_VIDEO : AVMEDIA_TYPE_AUDIO)
- Get the audio/video stream: m_formatContext->streams[m_audioStreamIndex]
- Read frames of audio/video data: av_read_frame
- Get the extra data: av_bitstream_filter_filter
VideoToolbox decode flow
- Compare the extra data with the previous one; if it changed, the decoder must be recreated.
- Separate and save the key information (vps, sps, pps) from the extra data FFmpeg parsed out (by comparing NALU headers).
- Load the vps/sps/pps NALU header information via CMVideoFormatDescriptionCreateFromH264ParameterSets / CMVideoFormatDescriptionCreateFromHEVCParameterSets.
- Specify the decoder callback and the decoded output data type (yuv, RGB, ...).
- Create the decoder: VTDecompressionSessionCreate.
- Fill a CMBlockBufferRef with the compressed data, then convert it into a CMSampleBufferRef for the decoder.
- Start decoding: VTDecompressionSessionDecodeFrame.
- In the callback, the CVImageBufferRef is the decoded data; it can be converted into a CMSampleBufferRef and passed out.
File Structure
Quick Start
- Initialize the preview view
The decoded video data is rendered onto this preview layer.
- (void)viewDidLoad {
[super viewDidLoad];
[self setupUI];
}
- (void)setupUI {
self.previewView = [[XDXPreviewView alloc] initWithFrame:self.view.frame];
[self.view addSubview:self.previewView];
[self.view bringSubviewToFront:self.startBtn];
}
- Parse and decode the video data in the file
- (void)startDecodeByVTSessionWithIsH265Data:(BOOL)isH265 {
NSString *path = [[NSBundle mainBundle] pathForResource:isH265 ? @"testh265" : @"testh264" ofType:@"MOV"];
XDXAVParseHandler *parseHandler = [[XDXAVParseHandler alloc] initWithPath:path];
XDXVideoDecoder *decoder = [[XDXVideoDecoder alloc] init];
decoder.delegate = self;
[parseHandler startParseWithCompletionHandler:^(BOOL isVideoFrame, BOOL isFinish, struct XDXParseVideoDataInfo *videoInfo, struct XDXParseAudioDataInfo *audioInfo) {
if (isFinish) {
[decoder stopDecoder];
return;
}
if (isVideoFrame) {
[decoder startDecodeVideoData:videoInfo];
}
}];
}
- Render the decoded data to the screen
Note: if the data contains B-frames, the frames must be reordered before rendering. This demo ships two files: an h264 file without B-frames and an h265 file with B-frames.
- (void)getVideoDecodeDataCallback:(CMSampleBufferRef)sampleBuffer {
if (self.isH265File) {
// Note: the first frame does not need to be sorted.
if (self.isDecodeFirstFrame) {
self.isDecodeFirstFrame = NO;
CVPixelBufferRef pix = CMSampleBufferGetImageBuffer(sampleBuffer);
[self.previewView displayPixelBuffer:pix];
}
// A single sort handler must persist across callbacks; allocating one
// per frame would give every frame its own empty list. (Assumes a
// sortHandler property on self.)
if (!self.sortHandler) {
self.sortHandler = [[XDXSortFrameHandler alloc] init];
self.sortHandler.delegate = self;
}
[self.sortHandler addDataToLinkList:sampleBuffer];
}else {
CVPixelBufferRef pix = CMSampleBufferGetImageBuffer(sampleBuffer);
[self.previewView displayPixelBuffer:pix];
}
}
- (void)getSortedVideoNode:(CMSampleBufferRef)sampleBuffer {
int64_t pts = (int64_t)(CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)) * 1000);
static int64_t lastpts = 0;
NSLog(@"Test marigin - %lld",pts - lastpts);
lastpts = pts;
[self.previewView displayPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
}
Implementation Details
1. Check whether the extra data from the parsed stream needs updating.
Data parsed with FFmpeg is carried in an XDXParseVideoDataInfo struct, defined below. The parse module is covered in the links above; this article covers only the decoding module.
struct XDXParseVideoDataInfo {
uint8_t *data;
int dataSize;
uint8_t *extraData;
int extraDataSize;
Float64 pts;
Float64 time_base;
int videoRotate;
int fps;
CMSampleTimingInfo timingInfo;
XDXVideoEncodeFormat videoFormat;
};
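The pts and time_base fields above eventually feed the CMSampleTimingInfo. As an illustrative sketch (the helper name is mine, not the project's), a PTS counted in time_base ticks converts to milliseconds like this:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative helper (not from the project): convert a PTS counted in
 * time_base ticks (tb_num/tb_den seconds per tick) into milliseconds. */
static int64_t pts_to_ms(int64_t pts_ticks, int64_t tb_num, int64_t tb_den) {
    return pts_ticks * 1000 * tb_num / tb_den;
}
```

With the common 1/90000 video time base, for example, a PTS of 3000 ticks is roughly 33 ms.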
By caching the current extra data we can compare each newly received extra data with the previous one: if it changed, the decoder must be recreated; if not, the decoder can be reused. (This matters especially for network streams, where the video stream parameters may change mid-stream.)
uint8_t *extraData = videoInfo->extraData;
int size = videoInfo->extraDataSize;
BOOL isNeedUpdate = [self isNeedUpdateExtraDataWithNewExtraData:extraData
newSize:size
lastData:&_lastExtraData
lastSize:&_lastExtraDataSize];
......
- (BOOL)isNeedUpdateExtraDataWithNewExtraData:(uint8_t *)newData newSize:(int)newSize lastData:(uint8_t **)lastData lastSize:(int *)lastSize {
BOOL isNeedUpdate = NO;
if (*lastSize == 0) {
isNeedUpdate = YES;
}else {
if (*lastSize != newSize) {
isNeedUpdate = YES;
}else {
if (memcmp(newData, *lastData, newSize) != 0) {
isNeedUpdate = YES;
}
}
}
if (isNeedUpdate) {
[self destoryDecoder];
*lastData = (uint8_t *)malloc(newSize);
memcpy(*lastData, newData, newSize);
*lastSize = newSize;
}
return isNeedUpdate;
}
2. Separate the key information ((h265: vps), sps, pps) from the extra data.
Creating the decoder requires key information from the NALU headers, such as vps, sps, pps, which together form a CMVideoFormatDesc, the data structure describing the video, as shown in the diagram above.
Note: an h264 stream needs sps and pps; an h265 stream needs vps, sps, and pps.
- Locate the NALU headers
First find the start codes by checking whether four consecutive bytes equal 00 00 00 01. In h264 data the start codes are followed by sps and pps; in h265 data, by vps, sps, and pps.
- Determine the NALU header lengths
The sps length follows from the sps index and the pps index, and likewise for the others. Note that the parameter sets are all delimited by 4-byte start codes, so the corresponding length must be subtracted.
- Extract the NALU header data
For h264 data, AND the first NALU byte with 0x1F to get its type; for h265 data, AND it with 0x4F. This follows from the h264/h265 bitstream structure; if it is unclear, see the stream-structure articles linked in the Prerequisites at the top.
Once the data and size of each parameter set are known, they are stored in member variables for later use.
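The start-code scan and type masking described above can be sketched in plain C (a simplified, illustrative version of the Objective-C implementation below; the helper names are mine):

```c
#include <assert.h>
#include <stdint.h>

/* Return the index of the first NALU byte after the next 4-byte start
 * code (00 00 00 01) at or beyond `from`, or -1 if none is found. */
static int next_nalu_index(const uint8_t *buf, int size, int from) {
    for (int i = from; i + 4 <= size; i++) {
        if (buf[i] == 0x00 && buf[i+1] == 0x00 &&
            buf[i+2] == 0x00 && buf[i+3] == 0x01) {
            return i + 4;
        }
    }
    return -1;
}

/* h264: the NALU type is the low 5 bits of the first NALU byte
 * (SPS = 0x07, PPS = 0x08). */
static int h264_nalu_type(uint8_t first_byte) { return first_byte & 0x1F; }

/* h265: masking with 0x4F distinguishes VPS (0x40), SPS (0x42) and
 * PPS (0x44), as the implementation below does. */
static int h265_nalu_type(uint8_t first_byte) { return first_byte & 0x4F; }
```

For an h264 extra-data buffer `00 00 00 01 67 ... 00 00 00 01 68 ...`, `next_nalu_index` returns 4 for the sps and the index just past the second start code for the pps; the sps payload length is then `(pps_index - 4) - sps_index`, subtracting the 4-byte start code as described above.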
if (isNeedUpdate) {
log4cplus_error(kModuleName, "%s: update extra data",__func__);
[self getNALUInfoWithVideoFormat:videoInfo->videoFormat
extraData:extraData
extraDataSize:size
decoderInfo:&_decoderInfo];
}
......
- (void)getNALUInfoWithVideoFormat:(XDXVideoEncodeFormat)videoFormat extraData:(uint8_t *)extraData extraDataSize:(int)extraDataSize decoderInfo:(XDXDecoderInfo *)decoderInfo {
uint8_t *data = extraData;
int size = extraDataSize;
int startCodeVPSIndex = 0;
int startCodeSPSIndex = 0;
int startCodeFPPSIndex = 0;
int startCodeRPPSIndex = 0;
int nalu_type = 0;
for (int i = 0; i < size; i ++) {
if (i >= 3) {
if (data[i] == 0x01 && data[i - 1] == 0x00 && data[i - 2] == 0x00 && data[i - 3] == 0x00) {
if (videoFormat == XDXH264EncodeFormat) {
if (startCodeSPSIndex == 0) {
startCodeSPSIndex = i;
}
if (i > startCodeSPSIndex) {
startCodeFPPSIndex = i;
}
}else if (videoFormat == XDXH265EncodeFormat) {
if (startCodeVPSIndex == 0) {
startCodeVPSIndex = i;
continue;
}
if (i > startCodeVPSIndex && startCodeSPSIndex == 0) {
startCodeSPSIndex = i;
continue;
}
if (i > startCodeSPSIndex && startCodeFPPSIndex == 0) {
startCodeFPPSIndex = i;
continue;
}
if (i > startCodeFPPSIndex && startCodeRPPSIndex == 0) {
startCodeRPPSIndex = i;
}
}
}
}
}
int spsSize = startCodeFPPSIndex - startCodeSPSIndex - 4;
decoderInfo->sps_size = spsSize;
if (videoFormat == XDXH264EncodeFormat) {
int f_ppsSize = size - (startCodeFPPSIndex + 1);
decoderInfo->f_pps_size = f_ppsSize;
nalu_type = ((uint8_t)data[startCodeSPSIndex + 1] & 0x1F);
if (nalu_type == 0x07) {
uint8_t *sps = &data[startCodeSPSIndex + 1];
[self copyDataWithOriginDataRef:&decoderInfo->sps newData:sps size:spsSize];
}
nalu_type = ((uint8_t)data[startCodeFPPSIndex + 1] & 0x1F);
if (nalu_type == 0x08) {
uint8_t *pps = &data[startCodeFPPSIndex + 1];
[self copyDataWithOriginDataRef:&decoderInfo->f_pps newData:pps size:f_ppsSize];
}
} else {
int vpsSize = startCodeSPSIndex - startCodeVPSIndex - 4;
decoderInfo->vps_size = vpsSize;
int f_ppsSize = startCodeRPPSIndex - startCodeFPPSIndex - 4;
decoderInfo->f_pps_size = f_ppsSize;
nalu_type = ((uint8_t) data[startCodeVPSIndex + 1] & 0x4F);
if (nalu_type == 0x40) {
uint8_t *vps = &data[startCodeVPSIndex + 1];
[self copyDataWithOriginDataRef:&decoderInfo->vps newData:vps size:vpsSize];
}
nalu_type = ((uint8_t) data[startCodeSPSIndex + 1] & 0x4F);
if (nalu_type == 0x42) {
uint8_t *sps = &data[startCodeSPSIndex + 1];
[self copyDataWithOriginDataRef:&decoderInfo->sps newData:sps size:spsSize];
}
nalu_type = ((uint8_t) data[startCodeFPPSIndex + 1] & 0x4F);
if (nalu_type == 0x44) {
uint8_t *pps = &data[startCodeFPPSIndex + 1];
[self copyDataWithOriginDataRef:&decoderInfo->f_pps newData:pps size:f_ppsSize];
}
if (startCodeRPPSIndex == 0) {
return;
}
int r_ppsSize = size - (startCodeRPPSIndex + 1);
decoderInfo->r_pps_size = r_ppsSize;
nalu_type = ((uint8_t) data[startCodeRPPSIndex + 1] & 0x4F);
if (nalu_type == 0x44) {
uint8_t *pps = &data[startCodeRPPSIndex + 1];
[self copyDataWithOriginDataRef:&decoderInfo->r_pps newData:pps size:r_ppsSize];
}
}
}
- (void)copyDataWithOriginDataRef:(uint8_t **)originDataRef newData:(uint8_t *)newData size:(int)size {
if (*originDataRef) {
free(*originDataRef);
*originDataRef = NULL;
}
*originDataRef = (uint8_t *)malloc(size);
memcpy(*originDataRef, newData, size);
}
3. Create the decoder
The type of the encoded data determines whether the h264 or the h265 decoder is used. As the diagram above shows, the data must be assembled into a CMSampleBuffer to hand to the decoder.
- Create the CMVideoFormatDescriptionRef
Build the CMVideoFormatDescriptionRef from the (vps,) sps, pps information. Note that some h265 streams carry two pps, so the number of parameter sets must be checked when assembling.
- Choose the decoded data type
Specifying kCVPixelFormatType_420YpCbCr8BiPlanarFullRange sets the output to yuv 420sp; adapt it if another format is needed.
- Specify the callback function
- Create the decoder
With all the information above, VTDecompressionSessionCreate produces the decoder session object.
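As an aside, Core Video pixel format constants are FourCC codes: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange is the four characters '420f' packed into a 32-bit value. A quick sanity check (the helper below is illustrative, not a Core Video API):

```c
#include <assert.h>
#include <stdint.h>

/* Pack four characters into a FourCC code the way Core Video /
 * Core Media OSType constants are defined. Illustrative helper. */
static uint32_t fourcc(char a, char b, char c, char d) {
    return ((uint32_t)(uint8_t)a << 24) | ((uint32_t)(uint8_t)b << 16) |
           ((uint32_t)(uint8_t)c << 8)  |  (uint32_t)(uint8_t)d;
}
```

The video-range variant kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange ('420v') differs only in the last character; full range uses the entire 0-255 luma range.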
// create decoder
if (!_decoderSession) {
_decoderSession = [self createDecoderWithVideoInfo:videoInfo
videoDescRef:&_decoderFormatDescription
videoFormat:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
lock:&_decoder_lock
callback:VideoDecoderCallback
decoderInfo:_decoderInfo];
}
- (VTDecompressionSessionRef)createDecoderWithVideoInfo:(XDXParseVideoDataInfo *)videoInfo videoDescRef:(CMVideoFormatDescriptionRef *)videoDescRef videoFormat:(OSType)videoFormat lock:(pthread_mutex_t *)lock callback:(VTDecompressionOutputCallback)callback decoderInfo:(XDXDecoderInfo)decoderInfo {
// Note: the mutex is taken by pointer; passing a pthread_mutex_t by value
// would lock a copy and provide no mutual exclusion.
pthread_mutex_lock(lock);
OSStatus status;
if (videoInfo->videoFormat == XDXH264EncodeFormat) {
const uint8_t *const parameterSetPointers[2] = {decoderInfo.sps, decoderInfo.f_pps};
const size_t parameterSetSizes[2] = {static_cast<size_t>(decoderInfo.sps_size), static_cast<size_t>(decoderInfo.f_pps_size)};
status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault,
2,
parameterSetPointers,
parameterSetSizes,
4,
videoDescRef);
}else if (videoInfo->videoFormat == XDXH265EncodeFormat) {
if (decoderInfo.r_pps_size == 0) {
const uint8_t *const parameterSetPointers[3] = {decoderInfo.vps, decoderInfo.sps, decoderInfo.f_pps};
const size_t parameterSetSizes[3] = {static_cast<size_t>(decoderInfo.vps_size), static_cast<size_t>(decoderInfo.sps_size), static_cast<size_t>(decoderInfo.f_pps_size)};
if (@available(iOS 11.0, *)) {
status = CMVideoFormatDescriptionCreateFromHEVCParameterSets(kCFAllocatorDefault,
3,
parameterSetPointers,
parameterSetSizes,
4,
NULL,
videoDescRef);
} else {
status = -1;
log4cplus_error(kModuleName, "%s: System version is too low!",__func__);
}
} else {
const uint8_t *const parameterSetPointers[4] = {decoderInfo.vps, decoderInfo.sps, decoderInfo.f_pps, decoderInfo.r_pps};
const size_t parameterSetSizes[4] = {static_cast<size_t>(decoderInfo.vps_size), static_cast<size_t>(decoderInfo.sps_size), static_cast<size_t>(decoderInfo.f_pps_size), static_cast<size_t>(decoderInfo.r_pps_size)};
if (@available(iOS 11.0, *)) {
status = CMVideoFormatDescriptionCreateFromHEVCParameterSets(kCFAllocatorDefault,
4,
parameterSetPointers,
parameterSetSizes,
4,
NULL,
videoDescRef);
} else {
status = -1;
log4cplus_error(kModuleName, "%s: System version is too low!",__func__);
}
}
}else {
status = -1;
}
if (status != noErr) {
log4cplus_error(kModuleName, "%s: NALU header error !",__func__);
pthread_mutex_unlock(lock);
[self destoryDecoder];
return NULL;
}
uint32_t pixelFormatType = videoFormat;
const void *keys[] = {kCVPixelBufferPixelFormatTypeKey};
const void *values[] = {CFNumberCreate(NULL, kCFNumberSInt32Type, &pixelFormatType)};
// Use CFType callbacks so the dictionary retains/releases its contents,
// then release our own reference to the CFNumber to avoid a leak.
CFDictionaryRef attrs = CFDictionaryCreate(NULL, keys, values, 1, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFRelease(values[0]);
VTDecompressionOutputCallbackRecord callBackRecord;
callBackRecord.decompressionOutputCallback = callback;
callBackRecord.decompressionOutputRefCon = (__bridge void *)self;
VTDecompressionSessionRef session;
status = VTDecompressionSessionCreate(kCFAllocatorDefault,
*videoDescRef,
NULL,
attrs,
&callBackRecord,
&session);
CFRelease(attrs);
pthread_mutex_unlock(lock);
if (status != noErr) {
log4cplus_error(kModuleName, "%s: Create decoder failed",__func__);
[self destoryDecoder];
return NULL;
}
return session;
}
4. Start decoding
- Pack the parsed raw data into an XDXDecodeVideoInfo struct, for later use and extension.
typedef struct {
CVPixelBufferRef outputPixelbuffer;
int rotate;
Float64 pts;
int fps;
int source_index;
} XDXDecodeVideoInfo;
- Pack the compressed data into a CMBlockBufferRef.
- Create a CMSampleBufferRef from the CMBlockBufferRef.
- Decode the data
A call to VTDecompressionSessionDecodeFrame decodes one frame of video; its third parameter selects synchronous or asynchronous decoding.
// start decode
[self startDecode:videoInfo
session:_decoderSession
lock:&_decoder_lock];
......
- (void)startDecode:(XDXParseVideoDataInfo *)videoInfo session:(VTDecompressionSessionRef)session lock:(pthread_mutex_t *)lock {
pthread_mutex_lock(lock);
uint8_t *data = videoInfo->data;
int size = videoInfo->dataSize;
int rotate = videoInfo->videoRotate;
CMSampleTimingInfo timingInfo = videoInfo->timingInfo;
// Copy the frame: decoding runs asynchronously, so the parser's buffer
// cannot simply be borrowed.
uint8_t *tempData = (uint8_t *)malloc(size);
memcpy(tempData, data, size);
// Note: allocate sizeof(XDXDecodeVideoInfo), not sizeof(XDXParseVideoDataInfo).
XDXDecodeVideoInfo *sourceRef = (XDXDecodeVideoInfo *)malloc(sizeof(XDXDecodeVideoInfo));
sourceRef->outputPixelbuffer = NULL;
sourceRef->rotate = rotate;
sourceRef->pts = videoInfo->pts;
sourceRef->fps = videoInfo->fps;
// Passing kCFAllocatorMalloc as the block allocator hands ownership of
// tempData to the block buffer, which frees it when the buffer is
// released; freeing it manually here could race with async decoding.
CMBlockBufferRef blockBuffer;
OSStatus status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
(void *)tempData,
size,
kCFAllocatorMalloc,
NULL,
0,
size,
0,
&blockBuffer);
if (status != kCMBlockBufferNoErr) {
free(tempData); // the block buffer was not created, so we still own tempData
free(sourceRef);
pthread_mutex_unlock(lock);
return;
}
CMSampleBufferRef sampleBuffer = NULL;
const size_t sampleSizeArray[] = { static_cast<size_t>(size) };
status = CMSampleBufferCreateReady(kCFAllocatorDefault,
blockBuffer,
_decoderFormatDescription,
1,
1,
&timingInfo,
1,
sampleSizeArray,
&sampleBuffer);
if (status == noErr && sampleBuffer) {
VTDecodeFrameFlags flags = kVTDecodeFrame_EnableAsynchronousDecompression;
VTDecodeInfoFlags flagOut = 0;
OSStatus decodeStatus = VTDecompressionSessionDecodeFrame(session,
sampleBuffer,
flags,
sourceRef,
&flagOut);
CFRelease(sampleBuffer);
if (decodeStatus == kVTInvalidSessionErr) {
free(sourceRef); // the callback will not run for an invalid session
CFRelease(blockBuffer);
pthread_mutex_unlock(lock);
[self destoryDecoder];
return;
}
} else {
free(sourceRef); // sample buffer creation failed; the callback will not run
}
CFRelease(blockBuffer);
pthread_mutex_unlock(lock);
}
5. The decoded data
The decoded data arrives in the callback function, where the decoded CVImageBufferRef is converted into a CMSampleBufferRef and passed out through the delegate.
#pragma mark - Callback
static void VideoDecoderCallback(void *decompressionOutputRefCon, void *sourceFrameRefCon, OSStatus status, VTDecodeInfoFlags infoFlags, CVImageBufferRef pixelBuffer, CMTime presentationTimeStamp, CMTime presentationDuration) {
XDXDecodeVideoInfo *sourceRef = (XDXDecodeVideoInfo *)sourceFrameRefCon;
if (pixelBuffer == NULL) {
log4cplus_error(kModuleName, "%s: pixelbuffer is NULL status = %d",__func__,status);
if (sourceRef) {
free(sourceRef);
}
return;
}
XDXVideoDecoder *decoder = (__bridge XDXVideoDecoder *)decompressionOutputRefCon;
CMSampleTimingInfo sampleTime = {
.presentationTimeStamp = presentationTimeStamp,
.decodeTimeStamp = presentationTimeStamp
};
CMSampleBufferRef samplebuffer = [decoder createSampleBufferFromPixelbuffer:pixelBuffer
videoRotate:sourceRef->rotate
timingInfo:sampleTime];
if (samplebuffer) {
if ([decoder.delegate respondsToSelector:@selector(getVideoDecodeDataCallback:)]) {
[decoder.delegate getVideoDecodeDataCallback:samplebuffer];
}
CFRelease(samplebuffer);
}
if (sourceRef) {
free(sourceRef);
}
}
- (CMSampleBufferRef)createSampleBufferFromPixelbuffer:(CVImageBufferRef)pixelBuffer videoRotate:(int)videoRotate timingInfo:(CMSampleTimingInfo)timingInfo {
if (!pixelBuffer) {
return NULL;
}
CVPixelBufferRef final_pixelbuffer = pixelBuffer;
CMSampleBufferRef samplebuffer = NULL;
CMVideoFormatDescriptionRef videoInfo = NULL;
OSStatus status = CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, final_pixelbuffer, &videoInfo);
status = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, final_pixelbuffer, true, NULL, NULL, videoInfo, &timingInfo, &samplebuffer);
if (videoInfo != NULL) {
CFRelease(videoInfo);
}
if (samplebuffer == NULL || status != noErr) {
return NULL;
}
return samplebuffer;
}
6. Destroy the decoder
Remember to destroy the decoder when done so it can be recreated cleanly next time.
if (_decoderSession) {
VTDecompressionSessionWaitForAsynchronousFrames(_decoderSession);
VTDecompressionSessionInvalidate(_decoderSession);
CFRelease(_decoderSession);
_decoderSession = NULL;
}
if (_decoderFormatDescription) {
CFRelease(_decoderFormatDescription);
_decoderFormatDescription = NULL;
}
7. Supplement: reordering data that contains B-frames
Note: if the video file or stream contains B-frames, the frames must be reordered before rendering. This article focuses on decoding; sorting will be covered in a later article. The code is already implemented; download the demo if you need the details now.
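The idea behind that reordering can be sketched as follows (illustrative only; XDXSortFrameHandler keeps a linked list of CMSampleBuffers, here reduced to bare PTS values): hold a small window of decoded frames and always emit the one with the smallest PTS.

```c
#include <assert.h>
#include <stdint.h>

#define WINDOW 4 /* reorder depth; illustrative, not the project's value */

typedef struct {
    int64_t pts[WINDOW];
    int count;
} ReorderBuf;

/* Push a decoded frame's PTS. While the window is filling, return -1
 * (no frame ready yet). Once full, remove and return the smallest PTS:
 * the next frame that is safe to display. (-1 as a sentinel assumes
 * non-negative PTS values, which is fine for a sketch.) */
static int64_t reorder_push(ReorderBuf *rb, int64_t pts) {
    rb->pts[rb->count++] = pts;
    if (rb->count < WINDOW) return -1;
    int min_i = 0;
    for (int i = 1; i < rb->count; i++) {
        if (rb->pts[i] < rb->pts[min_i]) min_i = i;
    }
    int64_t out = rb->pts[min_i];
    rb->pts[min_i] = rb->pts[--rb->count]; /* compact the window */
    return out;
}
```

Feeding it a decode-order sequence such as 40, 20, 10, 30, 50 (typical with B-frames) yields frames back in presentation order: 10, 20, ...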