Introduction
When we watch a mobile-game live stream, what we viewers see is the content of the player's screen. How is that implemented? In this post we will hand-write a screen-recording live-streaming demo that achieves an effect similar to mobile-game screen mirroring.
Capturing the screen data is easy; Android provides a system service for it. The hard part is transporting the data to the streaming server. We use RTMPDump to send the RTMP data, and since RTMPDump is implemented in C, we also need NDK development; Java alone will not do. At the time there was no suitable open-source Java implementation of the RTMP protocol, so for RTMP packaging we rely on the RTMPDump C library.
Demo result
Basic flow
- Capture the screen-recording data
- H.264-encode the data
- Package it into RTMP packets
- Push it to the streaming server's publish URL
Capturing screen data
MediaProjection is the video-capture interface in the Android SDK. From the official documentation:
A token granting applications the ability to capture screen contents and/or record system audio. The exact capabilities granted depend on the type of MediaProjection.
A screen capture session can be started through MediaProjectionManager.createScreenCaptureIntent(). This grants the ability to capture screen contents, but not system audio.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == 100 && resultCode == Activity.RESULT_OK) {
        if (editText.getText() != null && !TextUtils.isEmpty(editText.getText().toString())) {
            url = editText.getText().toString();
            Log.i("tuch", "url: " + url);
        }
        Log.i(TAG, " url:" + url);
        mediaProjection = mediaProjectionManager.getMediaProjection(resultCode, data);
    }
}
public void startLive(View view) {
    this.mediaProjectionManager = (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
    Intent captureIntent = mediaProjectionManager.createScreenCaptureIntent();
    startActivityForResult(captureIntent, 100);
}
Creating the VirtualDisplay
Through the returned Intent we obtain the MediaProjection from the media projection service, and from it we create a VirtualDisplay; that is where our raw screen-recording data comes from.
public VirtualDisplay createVirtualDisplay (String name, int width, int height, int dpi, int flags, Surface surface, VirtualDisplay.Callback callback, Handler handler)
Creates a VirtualDisplay to capture the contents of the screen.
mediaProjection ---> produces the screen-recording data
H.264-encoding the data
The raw (YUV) frames obtained through MediaProjection first need to be H.264-encoded. We use the platform MediaCodec for hardware encoding; since this is a demo project, we use MediaCodec directly and do not handle encoder-compatibility issues. For details on using MediaCodec, see the MediaCodec introduction.
public void startLive(MediaProjection mediaProjection) {
    this.mediaProjection = mediaProjection;
    MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC,
            720,
            1280);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    // bit rate, frame rate and key-frame interval (resolution was set above)
    format.setInteger(MediaFormat.KEY_BIT_RATE, 400_000);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 15);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
    try {
        mediaCodec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC); // hardware encoder
        mediaCodec.configure(format, null, null,
                MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface surface = mediaCodec.createInputSurface();
        virtualDisplay = mediaProjection.createVirtualDisplay(
                "screen-codec",
                720, 1280, 1,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_PUBLIC,
                surface, null, null);
    } catch (IOException e) {
        e.printStackTrace();
    }
    LiveTaskManager.getInstance().execute(this);
}
Surface surface = mediaCodec.createInputSurface();
creates a canvas from the encoder: whatever is drawn onto it is encoded automatically. Calling createVirtualDisplay
creates the virtual display (VirtualDisplay), which mirrors the phone screen onto it. createVirtualDisplay takes a Surface (the canvas); the encoder obtains its image data by reading from that Surface.
Once configuration is done, we pull the encoded data from MediaCodec:
@Override
public void run() {
    isLiving = true;
    mediaCodec.start();
    MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
    while (isLiving) {
        // if more than 2 s have passed, ask the encoder to generate an I-frame
        if (System.currentTimeMillis() - timeStamp >= 2000) {
            // notify the codec (DSP) via a Bundle
            Bundle msgBundle = new Bundle();
            msgBundle.putInt(MediaCodec.PARAMETER_KEY_REQUEST_SYNC_FRAME, 0);
            mediaCodec.setParameters(msgBundle);
            timeStamp = System.currentTimeMillis();
        }
        // Routine MediaCodec work from here: poll for an available output buffer.
        // No input-buffer handling is needed; the input Surface feeds the encoder internally.
        int outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 100_000);
        if (outputBufferIndex >= 0) {
            // got an encoded buffer
            ByteBuffer byteBuffer = mediaCodec.getOutputBuffer(outputBufferIndex);
            byte[] outData = new byte[bufferInfo.size];
            byteBuffer.get(outData);
            // hand outData to the RTMP layer, then release the buffer
            mediaCodec.releaseOutputBuffer(outputBufferIndex, false);
        }
    }
}
VideoCodec
In the thread's run() method we keep polling int outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 100_000);
to check whether encoded H.264 data is available. Why is it H.264? Because that is what the MediaFormat.MIMETYPE_VIDEO_AVC
parameter selected during configuration!
At this point you can write the encoded data to a file as raw bytes and inspect it.
You can see key information such as the SPS, PPS, and I-frames.
You can also play it with ffplay:
ffplay -i codec.h264
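A minimal sketch of such a dump, for debugging only (the output path and the helper name dumpToFile are assumptions, not part of the demo):
// Sketch: append each encoded buffer to a file so it can be pulled off the device and inspected.
// The path /sdcard/codec.h264 is illustrative; adjust it and the storage permissions for your device.
private void dumpToFile(byte[] outData) {
    try (FileOutputStream fos = new FileOutputStream("/sdcard/codec.h264", true)) {
        fos.write(outData);
    } catch (IOException e) {
        e.printStackTrace();
    }
}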
At this point we have encoded H.264 data; next we package it into RTMP packets.
LIBRTMP
An open-source RTMP library written in C. It wraps sockets to establish the TCP connection and implements sending and receiving RTMP data.
RTMPDump
rtmpdump is a toolkit for RTMP streams. All forms of RTMP are supported, including rtmp://, rtmpt://, rtmpe://, rtmpte://, and rtmps://.
License: GPLv2
Copyright (C) 2009 Andrej Stepanchuk
Copyright (C) 2010-2011 Howard Chu
Download the source:
git clone git://git.ffmpeg.org/rtmpdump
The latest release is 2.4 which you can check out from git. Aside from various minor bugfixes since 2.3, RTMPE type 9 handshakes are now supported.
We use the third-party library RTMPDump to push the stream to the live server. Since the RTMPDump code base is small, we copy its source directly into the Android project's cpp directory.
# Define the macro: when NO_CRYPTO is defined in the code,
# SSL is not used and rtmps is not supported (we do not need SSL here)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DNO_CRYPTO")
# Collect all files in the current directory into the SOURCE variable
aux_source_directory(. SOURCE)
# Build the static library librtmp.a from the sources referenced by ${SOURCE}
add_library(rtmp STATIC ${SOURCE})
In the project's CMake, pull in and link the built rtmp static library:
# Add the librtmp subdirectory
add_subdirectory(librtmp)
# Link the libraries together
target_link_libraries( # Specifies the target library.
        native-lib
        # Links the target library to the log library
        # included in the NDK.
        ${log-lib}
        rtmp)
Using RTMPDump
- Connect to the server
- RTMP_Init(RTMP *r): initialize
- RTMP_EnableWrite(RTMP *r): enable writing (publishing) data
- RTMP_Connect(RTMP *r, RTMPPacket *cp)
- RTMP_ConnectStream(RTMP *r, int seekTime)
- Send data
RTMPPacket_Alloc(RTMPPacket *p, int nSize)
RTMP_SendPacket(RTMP *r, RTMPPacket *packet, int queue)
RTMPPacket_Free(RTMPPacket *p)
- RTMP keyframe packet format (see the header sketch below)
- RTMP non-keyframe packet format (see the header sketch below)
- SPS/PPS packet
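All three packet formats in the list above share the same FLV-style body layout and differ only in the first bytes. A minimal sketch of the common 5-byte header (the helper name fill_flv_video_header is illustrative, not part of RTMPDump):
#include <stdint.h>

// Sketch: fills the 5 bytes that precede the AVC payload in every RTMP video body.
static void fill_flv_video_header(uint8_t *body, int keyframe, int sequence_header) {
    body[0] = keyframe ? 0x17 : 0x27;        // FrameType(4 bits)|CodecID(4 bits): 1=key, 2=inter, 7=AVC
    body[1] = sequence_header ? 0x00 : 0x01; // AVCPacketType: 0 = sequence header (SPS/PPS), 1 = NALU
    body[2] = body[3] = body[4] = 0x00;      // CompositionTime: 0
}
After these 5 bytes comes either the AVCDecoderConfigurationRecord (for the SPS/PPS packet) or a 4-byte big-endian NALU length followed by the NALU data (for key and non-key frames); the full packaging code appears below.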
Start the configured SRS streaming server and check that it is listening on port 1935:
./objs/srs -c conf/rtmp.conf
lsof -i :1935
Connecting to the live server
For this step, prepare the live publish URL in advance, then implement the native method; the Java-side declaration it maps to is sketched below.
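The JNI symbol in the native code corresponds to a native method on the ScreenLive class; a minimal sketch of that declaration, inferred from the JNI function name and the native-lib target in CMake:
package com.lecture.rtmtscreenlive;

public class ScreenLive {
    static {
        System.loadLibrary("native-lib");
    }
    // returns true once the RTMP connection and stream are established
    public native boolean connect(String url);
}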
extern "C"
JNIEXPORT jboolean JNICALL
Java_com_lecture_rtmtscreenlive_ScreenLive_connect(JNIEnv *env, jobject thiz, jstring url_) {
// 首先 Java 的轉(zhuǎn)成 C 的字符串,不然無法使用
const char *url = env->GetStringUTFChars(url_, 0);
int ret;
do {
live = (Live *) malloc(sizeof(Live));
memset(live, 0, sizeof(Live));
live->rtmp = RTMP_Alloc();// Rtmp 申請內(nèi)存
RTMP_Init(live->rtmp);
live->rtmp->Link.timeout = 10;// 設(shè)置 rtmp 初始化參數(shù)烧栋,比如超時時間写妥、url
LOGI("connect %s", url);
if (!(ret = RTMP_SetupURL(live->rtmp, (char *) url))) break;
RTMP_EnableWrite(live->rtmp);// 開啟 Rtmp 寫入
LOGI("RTMP_Connect");
if (!(ret = RTMP_Connect(live->rtmp, 0))) break;
LOGI("RTMP_ConnectStream ");
if (!(ret = RTMP_ConnectStream(live->rtmp, 0))) break;
LOGI("connect success");
} while (0);
if (!ret && live) {
free(live);
live = nullptr;
}
env->ReleaseStringUTFChars(url_, url);
return ret;
}
2021-03-26 20:25:33.202 9139-9211/com.lecture.rtmtscreenlive I/DDDDDD: connect rtmp://192.168.10.224/live/livestream
2021-03-26 20:25:33.202 9139-9211/com.lecture.rtmtscreenlive I/DDDDDD: RTMP_Connect
2021-03-26 20:25:33.422 9139-9211/com.lecture.rtmtscreenlive I/DDDDDD: RTMP_ConnectStream
2021-03-26 20:25:33.562 9139-9211/com.lecture.rtmtscreenlive I/DDDDDD: connect success
2021-03-26 20:25:34.152 9139-9222/com.lecture.rtmtscreenlive I/------>dddd<---------: run: -2
2021-03-26 20:25:34.152 9139-9222/com.lecture.rtmtscreenlive I/------>dddd<---------: run: 0
The server connection succeeds; now we start sending the RTMP video data.
- Video data packaging
RTMPPacket *createVideoPackage(int8_t *buf, int len, const long tms, Live *live) {
    // the 4-byte start code is dropped; buf then points at the NALU header (0x65 for an IDR frame)
    buf += 4;
    len -= 4;
    int body_size = len + 9;
    RTMPPacket *packet = (RTMPPacket *) malloc(sizeof(RTMPPacket));
    RTMPPacket_Alloc(packet, body_size);
    packet->m_body[0] = 0x27;
    if (buf[0] == 0x65) { // keyframe
        packet->m_body[0] = 0x17;
        LOGI("sending keyframe data");
    }
    packet->m_body[1] = 0x01;
    packet->m_body[2] = 0x00;
    packet->m_body[3] = 0x00;
    packet->m_body[4] = 0x00;
    // NALU length, 4 bytes big-endian
    packet->m_body[5] = (len >> 24) & 0xff;
    packet->m_body[6] = (len >> 16) & 0xff;
    packet->m_body[7] = (len >> 8) & 0xff;
    packet->m_body[8] = (len) & 0xff;
    // NALU data
    memcpy(&packet->m_body[9], buf, len);
    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet->m_nBodySize = body_size;
    packet->m_nChannel = 0x04;
    packet->m_nTimeStamp = tms;
    packet->m_hasAbsTimestamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nInfoField2 = live->rtmp->m_stream_id;
    return packet;
}
- SPS/PPS packaging
RTMPPacket *createVideoPackage(Live *live) { // overload: builds the AVC sequence header packet
    int body_size = 13 + live->sps_len + 3 + live->pps_len;
    RTMPPacket *packet = (RTMPPacket *) malloc(sizeof(RTMPPacket));
    RTMPPacket_Alloc(packet, body_size);
    int i = 0;
    // FrameType/CodecID: the same 0x17 as an IDR frame
    packet->m_body[i++] = 0x17;
    // AVCPacketType: 0x00 = AVC sequence header
    packet->m_body[i++] = 0x00;
    // CompositionTime
    packet->m_body[i++] = 0x00;
    packet->m_body[i++] = 0x00;
    packet->m_body[i++] = 0x00;
    // AVCDecoderConfigurationRecord
    packet->m_body[i++] = 0x01;         // configurationVersion: always 1
    packet->m_body[i++] = live->sps[1]; // profile, e.g. baseline, main, high
    packet->m_body[i++] = live->sps[2]; // profile_compatibility
    packet->m_body[i++] = live->sps[3]; // profile level
    packet->m_body[i++] = 0xFF;         // reserved(6 bits) + lengthSizeMinusOne(2 bits, NALU length field size), always 0xFF
    // sps
    packet->m_body[i++] = 0xE1;         // reserved(3 bits) + number of SPS (5 bits), always 0xE1
    // sps length, 2 bytes
    packet->m_body[i++] = (live->sps_len >> 8) & 0xff; // high byte
    packet->m_body[i++] = live->sps_len & 0xff;        // low byte
    memcpy(&packet->m_body[i], live->sps, live->sps_len);
    i += live->sps_len;
    /* pps */
    packet->m_body[i++] = 0x01;         // number of PPS
    // pps length
    packet->m_body[i++] = (live->pps_len >> 8) & 0xff;
    packet->m_body[i++] = live->pps_len & 0xff;
    memcpy(&packet->m_body[i], live->pps, live->pps_len);
    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet->m_nBodySize = body_size;
    packet->m_nChannel = 0x04;
    packet->m_nTimeStamp = 0;
    packet->m_hasAbsTimestamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nInfoField2 = live->rtmp->m_stream_id;
    return packet;
}
int sendPacket(RTMPPacket *packet) {
    int r = RTMP_SendPacket(live->rtmp, packet, 1);
    RTMPPacket_Free(packet);
    free(packet);
    return r;
}
void prepareVideo(int8_t *data, int len, Live *live) {
    for (int i = 0; i < len; i++) {
        // look for the start code 0x00 0x00 0x00 0x01
        if (i + 4 < len) {
            if (data[i] == 0x00 && data[i + 1] == 0x00
                && data[i + 2] == 0x00
                && data[i + 3] == 0x01) {
                // layout: 00 00 00 01 sps(NAL type 7) 00 00 00 01 pps(NAL type 8)
                // split the sps and pps
                if (data[i + 4] == 0x68) { // found the pps (0x68 = NAL type 8)
                    // strip the start codes
                    live->sps_len = i - 4;
                    live->sps = static_cast<int8_t *>(malloc(live->sps_len));
                    memcpy(live->sps, data + 4, live->sps_len);
                    live->pps_len = len - (4 + live->sps_len) - 4;
                    live->pps = static_cast<int8_t *>(malloc(live->pps_len));
                    memcpy(live->pps, data + 4 + live->sps_len + 4, live->pps_len);
                    LOGI("sps:%d pps:%d", live->sps_len, live->pps_len);
                    break;
                }
            }
        }
    }
}
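One piece of glue is still needed: for each encoded buffer handed down from Java we must decide whether it is the codec config (SPS/PPS) or a frame, and re-send the sequence header before every keyframe so players joining mid-stream can decode. A minimal sketch, assuming a single native send function (the name sendVideo is illustrative, not from the original source):
// Sketch: dispatch one encoded H.264 buffer, still prefixed with its 00 00 00 01 start code.
int sendVideo(int8_t *buf, int len, long tms) {
    int ret = 1;
    if (buf[4] == 0x67) {             // NAL type 7: SPS (the PPS follows in the same buffer)
        prepareVideo(buf, len, live); // cache sps/pps in the Live struct
        return ret;
    }
    if (buf[4] == 0x65) {             // NAL type 5: IDR frame, send the sequence header first
        ret = sendPacket(createVideoPackage(live));
    }
    return ret && sendPacket(createVideoPackage(buf, len, tms, live));
}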
Testing
ffplay rtmp://192.168.10.224/live/livestream
Capturing audio data
- Capture with AudioRecord
// recording setup: with the sample size, channel count and sample rate fixed,
// the captured PCM data is the same regardless of the device
minBufferSize = AudioRecord.getMinBufferSize(44100,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
audioRecord = new AudioRecord(
        MediaRecorder.AudioSource.MIC, 44100,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, minBufferSize);
- Encode the captured PCM data with MediaCodec
MediaFormat format = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 44100, 1);
// AAC profile (recording quality)
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel
        .AACObjectLC);
// AAC bit rate (per second)
format.setInteger(MediaFormat.KEY_BIT_RATE, 64_000);
mediaCodec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
mediaCodec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mediaCodec.start();
- Read the microphone data (PCM)
audioRecord.startRecording();
// fixed-size container
byte[] buffer = new byte[minBufferSize];
// read the microphone PCM data into the buffer (it will be AAC-encoded next)
int len = audioRecord.read(buffer, 0, buffer.length);
- Get the AAC data encoded by MediaCodec: the snippet below feeds the PCM into the encoder, and a sketch of draining the encoded output follows it.
int index = mediaCodec.dequeueInputBuffer(0);
if (index >= 0) {
    ByteBuffer inputBuffer = mediaCodec.getInputBuffer(index);
    inputBuffer.clear();
    inputBuffer.put(buffer, 0, len);
    // queue the buffer back once it has been filled
    mediaCodec.queueInputBuffer(index, 0, len,
            System.nanoTime() / 1000, 0);
}
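Draining the encoded AAC out of the encoder is symmetrical to the video case. A minimal sketch (what to do with outData is left to the RTMP layer):
// Sketch: drain the encoded AAC frames from the encoder's output side.
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
int outIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
while (outIndex >= 0) {
    ByteBuffer outputBuffer = mediaCodec.getOutputBuffer(outIndex);
    byte[] outData = new byte[bufferInfo.size];
    outputBuffer.get(outData);
    // hand outData to the RTMP packaging layer here
    mediaCodec.releaseOutputBuffer(outIndex, false);
    outIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
}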
The audio and video streams carried in RTMP packets are packaged the same way as audio and video data in FLV tags, so we only need to package them following the FLV format.
RTMPPacket *createAudioPacket(int8_t *buf, const int len, const int type, const long tms,
                              Live *live) {
    // build the audio packet: the first two bytes are fixed. Byte 0 is 0xAF; byte 1 is
    // 0x00 for the first packet (the AAC sequence header) and 0x01 for every AAC frame after it
    int body_size = len + 2;
    RTMPPacket *packet = (RTMPPacket *) malloc(sizeof(RTMPPacket));
    RTMPPacket_Alloc(packet, body_size);
    // audio tag header: 0xAF = AAC, 44 kHz, 16-bit (these fields are fixed for AAC)
    packet->m_body[0] = 0xAF;
    if (type == 1) {
        // sequence header
        packet->m_body[1] = 0x00;
    } else {
        packet->m_body[1] = 0x01;
    }
    memcpy(&packet->m_body[2], buf, len);
    packet->m_packetType = RTMP_PACKET_TYPE_AUDIO;
    packet->m_nChannel = 0x05;
    packet->m_nBodySize = body_size;
    packet->m_nTimeStamp = tms;
    packet->m_hasAbsTimestamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nInfoField2 = live->rtmp->m_stream_id;
    return packet;
}
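The type == 1 branch builds the very first audio packet, the AAC sequence header: its payload is the AudioSpecificConfig that MediaCodec emits as a codec-config buffer (for 44.1 kHz mono AAC-LC these are the two bytes 0x12 0x08). A minimal sketch of how the Java side could choose the type (sendAudio is a hypothetical native method matching createAudioPacket's parameters):
// Sketch: codec-config buffers become the sequence header (type 1), everything else a data packet.
if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
    sendAudio(outData, outData.length, 1, 0);         // hypothetical native method
} else {
    sendAudio(outData, outData.length, 0, timestamp); // hypothetical native method
}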