In FFmpeg, the avcodec_encode_video2() function is used to encode YUV data into H.264 (.264) data:
/**
* Encode a frame of video.
*
* Takes input raw video data from frame and writes the next output packet, if
* available, to avpkt. The output packet does not necessarily contain data for
* the most recent frame, as encoders can delay and reorder input frames
* internally as needed.
*
* @param avctx codec context
* @param avpkt output AVPacket.
* The user can supply an output buffer by setting
* avpkt->data and avpkt->size prior to calling the
* function, but if the size of the user-provided data is not
* large enough, encoding will fail. All other AVPacket fields
* will be reset by the encoder using av_init_packet(). If
* avpkt->data is NULL, the encoder will allocate it.
* The encoder will set avpkt->size to the size of the
* output packet. The returned data (if any) belongs to the
* caller, he is responsible for freeing it.
*
* If this function fails or produces no output, avpkt will be
* freed using av_free_packet() (i.e. avpkt->destruct will be
* called to free the user supplied buffer).
* @param[in] frame AVFrame containing the raw video data to be encoded.
* May be NULL when flushing an encoder that has the
* CODEC_CAP_DELAY capability set.
* @param[out] got_packet_ptr This field is set to 1 by libavcodec if the
* output packet is non-empty, and to 0 if it is
* empty. If the function returns an error, the
* packet can be assumed to be invalid, and the
* value of got_packet_ptr is undefined and should
* not be used.
* @return 0 on success, negative error code on failure
*/
int avcodec_encode_video2(AVCodecContext *avctx,
                          AVPacket *avpkt,
                          const AVFrame *frame,
                          int *got_packet_ptr);
- AVCodecContext *avctx
  Holds the encoding parameters and related encoder state.
An example of initializing these parameters:
pCodecCtx->bit_rate = 1024*1024;           // target bit rate: 1 Mbit/s
pCodecCtx->width = m_nScreenWidth;         // picture size
pCodecCtx->height = m_nScreenHeight;
pCodecCtx->time_base.num = 1;              // time base 1/25, i.e. 25 fps
pCodecCtx->time_base.den = 25;
pCodecCtx->gop_size = 25;                  // one I-frame every 25 frames
pCodecCtx->max_b_frames = 1;               // allow at most one B-frame between references
pCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;   // input pixel format
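Setting these fields is not enough by itself: before any frame can be encoded, the context has to be opened with avcodec_open2(). A minimal sketch of that step, assuming pCodec and pCodecCtx from the snippets in this post (the "preset" entry is an optional x264 private option, shown here only as an illustration):

AVDictionary *opts = NULL;
av_dict_set(&opts, "preset", "fast", 0);   // optional x264 speed/quality preset

// After this call the context is ready for avcodec_encode_video2().
if (avcodec_open2(pCodecCtx, pCodec, &opts) < 0)
{
    printf("Could not open codec\n");
    return -1;
}
av_dict_free(&opts);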
- AVPacket *avpkt
  The encoded output data.
- const AVFrame *frame
  Contains the input YUV data to be encoded (see the allocation sketch after this list).
- int *got_packet_ptr
  Indicates whether an encoded packet was produced.
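For the frame parameter, one common way to prepare the input picture is to allocate an AVFrame and its YUV420P planes up front, as in the sketch below (an illustrative sketch only; av_image_alloc needs libavutil/imgutils.h, and the frame can also simply point at an existing YUV buffer, as the example further down does):

#include <libavutil/imgutils.h>

AVFrame *pFrame = av_frame_alloc();
if (!pFrame)
    return -1;
pFrame->format = AV_PIX_FMT_YUV420P;
pFrame->width  = pCodecCtx->width;
pFrame->height = pCodecCtx->height;

// Allocate the Y/U/V planes; this fills pFrame->data[] and pFrame->linesize[].
if (av_image_alloc(pFrame->data, pFrame->linesize,
                   pFrame->width, pFrame->height,
                   AV_PIX_FMT_YUV420P, 32) < 0)
{
    printf("Could not allocate raw picture buffer\n");
    return -1;
}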
Note that feeding one YUV frame into the encoder does not necessarily produce one encoded packet. The encoder internally runs intra prediction, inter prediction and other algorithms, and may buffer and reorder frames, before it outputs encoded data (the flush sketch below shows how the delayed packets are drained at end of stream).
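Because of this delay, an encoder with the CODEC_CAP_DELAY capability still holds frames internally when the input ends. They are drained by calling avcodec_encode_video2() with frame set to NULL until got_packet comes back 0, roughly as in this sketch (it reuses the pkt and got_packet variables from the example below; fOut stands for a hypothetical FILE* opened for the .264 output):

// Flush delayed packets out of the encoder at end of stream.
for (;;)
{
    av_init_packet(&pkt);
    pkt.data = NULL;
    pkt.size = 0;
    if (avcodec_encode_video2(pCodecCtx, &pkt, NULL, &got_packet) < 0)
        break;
    if (!got_packet)
        break;                               // encoder fully drained
    fwrite(pkt.data, 1, pkt.size, fOut);     // fOut: hypothetical output FILE*
    av_free_packet(&pkt);
}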
Example call:
AVCodec *pCodec = NULL;
AVCodecContext *pCodecCtx = NULL;
AVFrame *pFrame = NULL;
AVPacket pkt;
int got_packet = 0;
AVCodecID codec_id = AV_CODEC_ID_H264;

avcodec_register_all();

pCodec = avcodec_find_encoder(codec_id);
if (!pCodec)
{
    printf("Codec not found\n");
    return -1;
}

pCodecCtx = avcodec_alloc_context3(pCodec);
if (!pCodecCtx)
{
    printf("Could not allocate video codec context\n");
    return -1;
}

...  // set the context fields, open the codec and allocate pFrame as shown above,
     // then enter the capture/encode loop

// the following runs once per captured frame, inside the loop closed by the brace below:
pFrame->data[0] = m_pYUVBuffer;              // Y plane
pFrame->data[1] = m_pYUVBuffer + m_nUOffset; // U plane
pFrame->data[2] = m_pYUVBuffer + m_nVOffset; // V plane

av_init_packet(&pkt);
pkt.data = NULL; // packet data will be allocated by the encoder
pkt.size = 0;
pFrame->pts = ...;

/* encode the image */
nRet = avcodec_encode_video2(pCodecCtx, &pkt, pFrame, &got_packet);
if (nRet < 0)
{
    printf("Error encoding frame\n");
    return -1;
}
if (got_packet) // an encoded packet was produced
{
    printf("Succeed to encode frame: %5d\tsize:%5d\n", nVideoFrameCnt, pkt.size);
    av_free_packet(&pkt);
}
Sleep(40);
} // end of the encode loop
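In the loop above the packet is freed without being stored anywhere. In practice the encoded bytes are usually appended to a .264 file; the output already carries Annex-B start codes (as the bitstream dump below shows), so a plain fwrite is enough. A sketch, assuming a hypothetical FILE *fOut = fopen("out.264", "wb"):

if (got_packet)
{
    fwrite(pkt.data, 1, pkt.size, fOut); // raw Annex-B data, written as-is
    av_free_packet(&pkt);
}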
So what exactly does the encoded stream produced by avcodec_encode_video2() look like?
[Figure: the H.264 encoded sequence (elementary stream), shown in the bitstream analyzer H264VideoESViewer]
The analyzer shows the order of the NAL units in the sequence:
SPS -> PPS -> SEI -> I-frame -> a number of P-frames -> SPS -> PPS -> SEI -> I-frame -> ...
Also note that in this stream the start code in front of the SEI and I-frame NAL units is 0x00 0x00 0x01, while all the other NAL units are preceded by 0x00 0x00 0x00 0x01.
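This layout is easy to verify programmatically. The sketch below is a small illustrative parser (not part of the original example): it scans a raw .264 buffer for both the 3-byte and 4-byte start codes and prints each NAL unit type (7 = SPS, 8 = PPS, 6 = SEI, 5 = IDR/I slice, 1 = non-IDR/P slice), reproducing the SPS -> PPS -> SEI -> I -> P... order seen in the analyzer; buf and len stand for the file contents read by the caller:

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

static void dump_nal_units(const uint8_t *buf, size_t len)
{
    size_t i = 0;
    while (i + 3 < len)
    {
        int sc = 0;                                   // length of the start code found at i
        if (buf[i] == 0 && buf[i+1] == 0 && buf[i+2] == 1)
            sc = 3;
        else if (i + 4 < len && buf[i] == 0 && buf[i+1] == 0 &&
                 buf[i+2] == 0 && buf[i+3] == 1)
            sc = 4;

        if (sc)
        {
            int nal_type = buf[i + sc] & 0x1F;        // nal_unit_type: low 5 bits of the NAL header
            printf("offset %lu: %d-byte start code, nal_unit_type = %d\n",
                   (unsigned long)i, sc, nal_type);
            i += sc + 1;                              // skip past the start code and NAL header
        }
        else
        {
            i++;
        }
    }
}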