The full Android IPC (inter-process communication) series is as follows:
- 1. Android IPC, Part 1: Linux Basics
- 2. Android IPC, Part 2: Bionic
- 3. Android IPC, Part 3: All About "JNI"
- 4. Android IPC, Part 4: Android IPC Basics (1)
- 4. Android IPC, Part 4: Android IPC Basics (2)
- 5. Android IPC, Part 5: Binder's Three Main Interfaces
- 6. Android IPC, Part 6: The Binder Framework
- 7. Android IPC, Part 7: A Brief Look at Binder-Related Structs
- 8. Android IPC, Part 8: The Binder Driver
- 9. Android IPC, Part 9: Binder's Framework Layer, C++ (1)
- 9. Android IPC, Part 9: Binder's Framework Layer, C++ (2)
- 10. Android IPC, Part 10: Binder's Framework Layer, Java
- 11. Android IPC, Part 11: AIDL
- 12. Android IPC, Part 12: Binder Addenda
- 13. Android IPC, Part 13: Binder Summary
- 14. Android IPC, Part 14: Other IPC Mechanisms
- 15. Android IPC, Part 15: Acknowledgements
IV. Registering a Service
(1) Source locations:
frameworks/native/libs/binder/
- Binder.cpp
- BpBinder.cpp
- IPCThreadState.cpp
- ProcessState.cpp
- IServiceManager.cpp
- IInterface.cpp
- Parcel.cpp
frameworks/native/include/binder/
- IInterface.h (includes BnInterface and BpInterface)
/frameworks/av/media/mediaserver/
- main_mediaserver.cpp
/frameworks/av/media/libmediaplayerservice/
- MediaPlayerService.cpp
(二)吨些、概述
由于服務(wù)注冊(cè)會(huì)涉及到具體的服務(wù)注冊(cè)搓谆,網(wǎng)上大多數(shù)說(shuō)的都是Media注冊(cè)服務(wù),我們也說(shuō)它豪墅。
media入口函數(shù)是 “main_mediaserver.cpp”中的main()方法泉手,代碼如下:
frameworks/av/media/mediaserver/main_mediaserver.cpp 44行
int main(int argc __unused, char** argv)
{
// *** some code omitted ***
InitializeIcuOrDie();
// Obtain the ProcessState instance
sp<ProcessState> proc(ProcessState::self());
// Obtain the BpServiceManager
sp<IServiceManager> sm = defaultServiceManager();
AudioFlinger::instantiate();
// Register the multimedia service
MediaPlayerService::instantiate();
ResourceManagerService::instantiate();
CameraService::instantiate();
AudioPolicyService::instantiate();
SoundTriggerHwService::instantiate();
RadioService::instantiate();
registerExtensions();
// Start the Binder thread pool
ProcessState::self()->startThreadPool();
// Join the current thread to the thread pool
IPCThreadState::self()->joinThreadPool();
}
So main() does the following:
- First, obtain a ProcessState instance.
- Next, call defaultServiceManager() to obtain the IServiceManager instance.
- Then, initialize the important services.
- Finally, call startThreadPool() and joinThreadPool().
PS: Getting the ServiceManager: as the previous article explained, defaultServiceManager() returns a BpServiceManager object, used to communicate with servicemanager.
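For comparison, here is a minimal sketch of what any native service's main() boils down to (the names MyService and "my.service" are hypothetical; the real mediaserver follows the same pattern with its concrete services):
#include <binder/Binder.h>            // BBinder
#include <binder/IPCThreadState.h>
#include <binder/IServiceManager.h>
#include <binder/ProcessState.h>
#include <utils/String16.h>

using namespace android;

// Hypothetical Bn-side service: any BBinder subclass works for this sketch.
class MyService : public BBinder {};

int main() {
    sp<ProcessState> proc(ProcessState::self());           // opens /dev/binder and mmaps it
    defaultServiceManager()->addService(String16("my.service"),
                                        new MyService());  // register with servicemanager
    ProcessState::self()->startThreadPool();               // spawn binder worker threads
    IPCThreadState::self()->joinThreadPool();              // main thread joins the pool too
    return 0;
}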
(三)啡氢、類圖
我們這里主要講解的是Native層的服務(wù),所以我們以native層的media為例术裸,來(lái)說(shuō)一說(shuō)服務(wù)注冊(cè)的過(guò)程倘是,先來(lái)看看media的關(guān)系圖
圖解
- 藍(lán)色代表的是注冊(cè)MediaPlayerService
- 綠色代表的是Binder架構(gòu)中與Binder驅(qū)動(dòng)通信
- 紫色代表的是注冊(cè)服務(wù)和獲取服務(wù)的公共接口/父類
(四)、時(shí)序圖
先通過(guò)一幅圖來(lái)說(shuō)說(shuō)袭艺,media服務(wù)啟動(dòng)過(guò)程是如何向servicemanager注冊(cè)服務(wù)的搀崭。
(5) Flow Walkthrough
1. The instantiate() function
// MediaPlayerService.cpp, line 269
void MediaPlayerService::instantiate() {
defaultServiceManager()->addService(String16("media.player"), new MediaPlayerService());
}
- 1. Create a new service, BnMediaPlayerService, and tell ServiceManager about it by calling addService() on the service manager. Other processes can then query this service from ServiceManager via the string "media.player".
- 2. Register the MediaPlayerService: defaultServiceManager() returns the BpServiceManager, creating the ProcessState object and the BpBinder object along the way. So this call is equivalent to BpServiceManager->addService().
2. The BpServiceManager::addService() function
// /frameworks/native/libs/binder/IServiceManager.cpp, line 155
virtual status_t addService(const String16& name, const sp<IBinder>& service,
bool allowIsolated)
{
//data is the command parcel sent to BnServiceManager
Parcel data, reply;
//First write the interface name, the header info "android.os.IServiceManager"
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
// Then write the new service's name; here name is "media.player"
data.writeString16(name);
// The MediaPlayerService object
data.writeStrongBinder(service);
// allowIsolated = false
data.writeInt32(allowIsolated ? 1 : 0);
//remote() points to the BpBinder held inside this BpServiceManager
status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
return err == NO_ERROR ? reply.readExceptionCode() : err;
}
Registration: this registers the service MediaPlayerService with ServiceManager under the name "media.player", so that other processes can look the service up by that name.
Two calls deserve a closer look here: writeStrongBinder() and the final transact().
2.1 The writeStrongBinder() function
// /frameworks/native/libs/binder/Parcel.cpp, line 872
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
return flatten_binder(ProcessState::self(), val, this);
}
It just calls flatten_binder(), so let's keep following.
2.1.1 The flatten_binder() function
// /frameworks/native/libs/binder/Parcel.cpp, line 205
status_t flatten_binder(const sp<ProcessState>& /*proc*/,
const sp<IBinder>& binder, Parcel* out)
{
flat_binder_object obj;
obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
//The local binder is not null
if (binder != NULL) {
IBinder *local = binder->localBinder();
if (!local) {
BpBinder *proxy = binder->remoteBinder();
const int32_t handle = proxy ? proxy->handle() : 0;
obj.type = BINDER_TYPE_HANDLE;
obj.binder = 0;
obj.handle = handle;
obj.cookie = 0;
} else {
// We take this branch
obj.type = BINDER_TYPE_BINDER;
obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
obj.cookie = reinterpret_cast<uintptr_t>(local);
}
} else {
...
}
return finish_flatten_binder(binder, obj, out);
}
This flattens the Binder object into a flat_binder_object:
- For a Binder entity, cookie records the pointer to the binder entity.
- For a Binder proxy, handle records the Binder proxy's handle.
The localBinder() code is as follows:
//frameworks/native/libs/binder/Binder.cpp, line 191
BBinder* BBinder::localBinder()
{
return this;
}
//frameworks/native/libs/binder/Binder.cpp, line 47
BBinder* IBinder::localBinder()
{
return NULL;
}
Finally, finish_flatten_binder() is called; let's take a look.
2.1.2 The finish_flatten_binder() function
//frameworks/native/libs/binder/Parcel.cpp, line 199
inline static status_t finish_flatten_binder(
const sp<IBinder>& , const flat_binder_object& flat, Parcel* out)
{
return out->writeObject(flat, false);
}
2.2 The transact() function
//frameworks/native/libs/binder/BpBinder.cpp, line 159
status_t BpBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
if (mAlive) {
// code = ADD_SERVICE_TRANSACTION
status_t status = IPCThreadState::self()->transact(
mHandle, code, data, reply, flags);
if (status == DEAD_OBJECT) mAlive = 0;
return status;
}
return DEAD_OBJECT;
}
Binder代理類調(diào)用transact()方法腺晾,真正的工作還是交給IPCThreadState來(lái)進(jìn)行transact工作燕锥,先來(lái),看見IPCThreadState:: self的過(guò)程悯蝉。
Binder代理類調(diào)用transact()方法归形,真正工作還是交給IPCThreadState來(lái)進(jìn)行transact工作。先來(lái) 看看IPCThreadState::self的過(guò)程鼻由。
2.2.1暇榴、IPCThreadState::self()函數(shù)
//frameworks/native/libs/binder/IPCThreadState.cpp 280行
IPCThreadState* IPCThreadState::self()
{
if (gHaveTLS) {
restart:
const pthread_key_t k = gTLS;
IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
if (st) return st;
// Construct a new IPCThreadState object
return new IPCThreadState;
}
if (gShutdown) return NULL;
pthread_mutex_lock(&gTLSMutex);
//On first entry gHaveTLS is false
if (!gHaveTLS) {
// Create the thread's TLS key
if (pthread_key_create(&gTLS, threadDestructor) != 0) {
pthread_mutex_unlock(&gTLSMutex);
return NULL;
}
gHaveTLS = true;
}
pthread_mutex_unlock(&gTLSMutex);
goto restart;
}
TLS stands for Thread Local Storage: each thread owns its own private TLS area, not shared with other threads. pthread_getspecific()/pthread_setspecific() read and write the contents of that area; here they are used to fetch the IPCThreadState object saved in the thread's local storage.
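As a side note, the same TLS pattern can be sketched independently of Binder (the names ThreadState and self are hypothetical; this illustrates the pthread APIs, it is not AOSP code):
#include <pthread.h>

struct ThreadState { int callCount = 0; };   // stand-in for IPCThreadState

static pthread_key_t gKey;
static pthread_once_t gOnce = PTHREAD_ONCE_INIT;

static void destroyState(void* p) { delete static_cast<ThreadState*>(p); }
static void makeKey() { pthread_key_create(&gKey, destroyState); }

// Returns this thread's private ThreadState, creating it on first use,
// mirroring what IPCThreadState::self() does with gTLS.
ThreadState* self() {
    pthread_once(&gOnce, makeKey);
    if (ThreadState* st = static_cast<ThreadState*>(pthread_getspecific(gKey)))
        return st;
    ThreadState* st = new ThreadState();
    pthread_setspecific(gKey, st);
    return st;
}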
Speaking of the IPCThreadState object, let's look at its constructor.
2.2.1.1 The IPCThreadState constructor
//frameworks/native/libs/binder/IPCThreadState.cpp, line 686
IPCThreadState::IPCThreadState()
: mProcess(ProcessState::self()),
mMyThreadId(gettid()),
mStrictModePolicy(0),
mLastTransactionBinderFlags(0)
{
pthread_setspecific(gTLS, this);
clearCaller();
mIn.setDataCapacity(256);
mOut.setDataCapacity(256);
}
Each thread has one IPCThreadState, and each IPCThreadState has one mIn and one mOut. The member mProcess holds the ProcessState (one per process).
- mIn: receives data coming from the Binder device; default capacity 256 bytes.
- mOut: stores data to be sent to the Binder device; default capacity 256 bytes.
2.2.2 The IPCThreadState::transact() function
//frameworks/native/libs/binder/IPCThreadState.cpp, line 548
status_t IPCThreadState::transact(int32_t handle,
uint32_t code, const Parcel& data,
Parcel* reply, uint32_t flags)
{
//Check the data for errors
status_t err = data.errorCheck();
flags |= TF_ACCEPT_FDS;
// *** some code omitted ***
if (err == NO_ERROR) {
//Write the transaction data
err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
}
// *** some code omitted ***
if ((flags & TF_ONE_WAY) == 0) {
if (reply) {
//Wait for the response
err = waitForResponse(reply);
} else {
Parcel fakeReply;
err = waitForResponse(&fakeReply);
}
} else {
//oneway: waitForResponse(NULL, NULL)
err = waitForResponse(NULL, NULL);
}
return err;
}
IPCThreadState handles a transact() call in three parts:
- errorCheck(): checks the data for errors
- writeTransactionData(): writes the transaction data
- waitForResponse(): waits for the response
Let's focus on writeTransactionData() and waitForResponse().
2.2.2.1 The writeTransactionData() function
//frameworks/native/libs/binder/IPCThreadState.cpp, line 904
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags, int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
binder_transaction_data tr;
tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
// handle=0
tr.target.handle = handle;
//code=ADD_SERVICE_TRANSACTION
tr.code = code;
// binderFlags=0
tr.flags = binderFlags;
tr.cookie = 0;
tr.sender_pid = 0;
tr.sender_euid = 0;
// data is the Parcel recording the media service's information
const status_t err = data.errorCheck();
if (err == NO_ERROR) {
tr.data_size = data.ipcDataSize();
tr.data.ptr.buffer = data.ipcData();
tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
tr.data.ptr.offsets = data.ipcObjects();
} else if (statusBuffer) {
tr.flags |= TF_STATUS_CODE;
*statusBuffer = err;
tr.data_size = sizeof(status_t);
tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
tr.offsets_size = 0;
tr.data.ptr.offsets = 0;
} else {
return (mLastError = err);
}
// cmd=BC_TRANSACTION
mOut.writeInt32(cmd);
// Write the binder_transaction_data
mOut.write(&tr, sizeof(tr));
return NO_ERROR;
}
The handle identifies the destination; for service registration the destination is service manager. handle = 0 corresponds to the binder_context_mgr_node object, which is precisely service manager's binder entity. binder_transaction_data is the data structure used to communicate with the binder driver; in the end, this step writes the Binder command BC_TRANSACTION plus the binder_transaction_data into mOut.
Having written the binder_transaction_data, transact() goes on to execute waitForResponse().
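Conceptually, mOut now holds the command word followed by the fixed-size descriptor. A rough sketch of the layout (illustrative, not literal memory contents):
// mOut after writeTransactionData(BC_TRANSACTION, ...):
// [ uint32_t cmd = BC_TRANSACTION ][ binder_transaction_data tr ]
//   tr.target.handle = 0 (service manager)
//   tr.code          = ADD_SERVICE_TRANSACTION
//   tr.data.ptr.buffer / tr.data.ptr.offsets -> the Parcel's payload and object offsets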
2.2.2.2 The waitForResponse() function
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult) {
uint32_t cmd;
int32_t err;
while (1) {
if ((err = talkWithDriver()) < NO_ERROR) break;
err = mIn.errorCheck();
if (err < NO_ERROR) break;
if (mIn.dataAvail() == 0) continue;
cmd = (uint32_t) mIn.readInt32();
IF_LOG_COMMANDS() {
alog << "Processing waitForResponse Command: "
<< getReturnString(cmd) << endl;
}
switch (cmd) {
case BR_TRANSACTION_COMPLETE:
if (!reply && !acquireResult) goto finish;
break;
case BR_DEAD_REPLY:
err = DEAD_OBJECT;
goto finish;
case BR_FAILED_REPLY:
err = FAILED_TRANSACTION;
goto finish;
case BR_ACQUIRE_RESULT: {
ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
const int32_t result = mIn.readInt32();
if (!acquireResult) continue;
*acquireResult = result ? NO_ERROR : INVALID_OPERATION;
}
goto finish;
case BR_REPLY: {
binder_transaction_data tr;
err = mIn.read(&tr, sizeof(tr));
ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
if (err != NO_ERROR) goto finish;
if (reply) {
if ((tr.flags & TF_STATUS_CODE) == 0) {
reply->ipcSetDataReference(
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t),
freeBuffer, this);
} else {
err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
freeBuffer(NULL,
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), this);
}
} else {
freeBuffer(NULL,
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), this);
continue;
}
}
goto finish;
default:
err = executeCommand(cmd);
if (err != NO_ERROR) goto finish;
break;
}
}
finish:
if (err != NO_ERROR) {
if (acquireResult) *acquireResult = err;
if (reply) reply->setError(err);
mLastError = err;
}
return err;
}
In waitForResponse(), the client first receives BR_TRANSACTION_COMPLETE. Meanwhile the target process receives the transaction as BR_TRANSACTION, handles it, and sends the result back to the current process, which then executes the BR_REPLY command.
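A rough sketch of that handshake (BC_* commands flow from a process into the driver; BR_* returns flow from the driver out to a process):
// MediaPlayerService process     Binder driver     servicemanager process
//   BC_TRANSACTION --->
//   <--- BR_TRANSACTION_COMPLETE
//                                 ---> BR_TRANSACTION
//                                 <--- BC_REPLY
//   <--- BR_REPLY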
Now let's look at talkWithDriver() in detail.
2.2.2.3 The talkWithDriver() function
status_t IPCThreadState::talkWithDriver(bool doReceive) {
if (mProcess->mDriverFD <= 0) {
return -EBADF;
}
binder_write_read bwr;
// Is the read buffer empty?
const bool needRead = mIn.dataPosition() >= mIn.dataSize();
// We don't want to write anything if we are still reading
// from data left in the input buffer and the caller
// has requested to read the next data.
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
bwr.write_size = outAvail;
bwr.write_buffer = (uintptr_t) mOut.data();
// This is what we'll read.
if (doReceive && needRead) {
//Fill in the receive-buffer info; any data received later lands directly in mIn.
bwr.read_size = mIn.dataCapacity();
bwr.read_buffer = (uintptr_t) mIn.data();
} else {
bwr.read_size = 0;
bwr.read_buffer = 0;
}
IF_LOG_COMMANDS() {
TextOutput::Bundle _b(alog);
if (outAvail != 0) {
alog << "Sending commands to driver: " << indent;
const void*cmds = (const void*)bwr.write_buffer;
const void*end = ((const uint8_t *)cmds)+bwr.write_size;
alog << HexDump(cmds, bwr.write_size) << endl;
while (cmds < end) cmds = printCommand(alog, cmds);
alog << dedent;
}
alog << "Size of receive buffer: " << bwr.read_size
<< ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
}
// Return immediately if there is nothing to do.
// If both the read buffer and the write buffer are empty, return immediately
if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
bwr.write_consumed = 0;
bwr.read_consumed = 0;
status_t err;
do {
IF_LOG_COMMANDS() {
alog << "About to read/write, write size = " << mOut.dataSize() << endl;
}
#if defined(HAVE_ANDROID_OS)
//Communicate with the Binder driver through repeated ioctl read/write operations
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
else
err = -errno;
#else
err = INVALID_OPERATION;
#endif
if (mProcess->mDriverFD <= 0) {
err = -EBADF;
}
IF_LOG_COMMANDS() {
alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
}
} while (err == -EINTR);
IF_LOG_COMMANDS() {
alog << "Our err: " << (void*)(intptr_t) err << ", write consumed: "
<< bwr.write_consumed << " (of " << mOut.dataSize()
<< "), read consumed: " << bwr.read_consumed << endl;
}
if (err >= NO_ERROR) {
if (bwr.write_consumed > 0) {
if (bwr.write_consumed < mOut.dataSize())
mOut.remove(0, bwr.write_consumed);
else
mOut.setDataSize(0);
}
if (bwr.read_consumed > 0) {
mIn.setDataSize(bwr.read_consumed);
mIn.setDataPosition(0);
}
IF_LOG_COMMANDS() {
TextOutput::Bundle _b(alog);
alog << "Remaining data size: " << mOut.dataSize() << endl;
alog << "Received commands from driver: " << indent;
const void*cmds = mIn.data();
const void*end = mIn.data() + mIn.dataSize();
alog << HexDump(cmds, mIn.dataSize()) << endl;
while (cmds < end) cmds = printReturnCommand(alog, cmds);
alog << dedent;
}
return NO_ERROR;
}
return err;
}
The binder_write_read struct is the structure used to exchange data with the Binder device; communicating with mDriverFD via ioctl is the real data read/write interaction with the Binder driver, operating mainly on mOut and mIn.
The ioctl() goes through a system call into the Binder driver.
The overall flow is shown in the figure below.
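For intuition, here is a bare-bones user-space sketch of that ioctl pattern (illustrative only: error handling is omitted, BC_ENTER_LOOPER is used as the simplest one-word command, and real clients go through ProcessState/IPCThreadState rather than raw ioctl):
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>   // binder_write_read, BINDER_WRITE_READ, BC_*

int main() {
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);

    uint32_t cmd = BC_ENTER_LOOPER;           // one-word command, no payload
    binder_write_read bwr = {};
    bwr.write_size   = sizeof(cmd);
    bwr.write_buffer = (binder_uintptr_t)&cmd;
    bwr.read_size    = 0;                     // nothing to read in this sketch

    // One round trip: the driver consumes BC_* commands from write_buffer and
    // would fill read_buffer with BR_* returns if read_size were non-zero.
    ioctl(fd, BINDER_WRITE_READ, &bwr);
    return 0;
}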
(六)仓坞、Binder驅(qū)動(dòng)
Binder驅(qū)動(dòng)內(nèi)部調(diào)用了流程
ioctl——> binder_ioctl ——> binder_ioctl_write_read
1、binder_ioctl_write_read()函數(shù)處理
static int binder_ioctl_write_read(struct file *filp,
unsigned int cmd, unsigned long arg,
struct binder_thread *thread)
{
struct binder_proc *proc = filp->private_data;
void __user *ubuf = (void __user *)arg;
struct binder_write_read bwr;
//Copy the user-space bwr struct into kernel space
copy_from_user(&bwr, ubuf, sizeof(bwr));
// *** some code omitted ***
if (bwr.write_size > 0) {
//Deliver the data toward the target process
ret = binder_thread_write(proc, thread,
bwr.write_buffer,
bwr.write_size,
&bwr.write_consumed);
// *** some code omitted ***
}
if (bwr.read_size > 0) {
//Read the data from our own queue
ret = binder_thread_read(proc, thread, bwr.read_buffer,
bwr.read_size,
&bwr.read_consumed,
filp->f_flags & O_NONBLOCK);
if (!list_empty(&proc->todo))
wake_up_interruptible(&proc->wait);
// *** some code omitted ***
}
//Copy the kernel-space bwr struct back to user space
copy_to_user(ubuf, &bwr, sizeof(bwr));
// *** some code omitted ***
}
2腰吟、binder_thread_write()函數(shù)處理
static int binder_thread_write(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed)
{
uint32_t cmd;
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
while (ptr < end && thread->return_error == BR_OK) {
//Copy the cmd from user space; here it is BC_TRANSACTION
if (get_user(cmd, (uint32_t __user *)ptr)) return -EFAULT;
ptr += sizeof(uint32_t);
switch (cmd) {
case BC_TRANSACTION:
case BC_REPLY: {
struct binder_transaction_data tr;
//Copy the binder_transaction_data from user space
if (copy_from_user(&tr, ptr, sizeof(tr))) return -EFAULT;
ptr += sizeof(tr);
binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
break;
}
// *** some code omitted ***
}
*consumed = ptr - buffer;
}
return 0;
}
3. binder_transaction() processing
static void binder_transaction(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_transaction_data *tr, int reply){
if (reply) {
// *** some code omitted ***
}else {
if (tr->target.handle) {
// *** some code omitted ***
} else {
// handle = 0 means the target is the servicemanager entity
target_node = binder_context_mgr_node;
}
//target_proc is the servicemanager process
target_proc = target_node->proc;
}
if (target_thread) {
// *** some code omitted ***
} else {
//Find the servicemanager process's todo queue
target_list = &target_proc->todo;
target_wait = &target_proc->wait;
}
t = kzalloc(sizeof(*t), GFP_KERNEL);
tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
//For non-oneway communication, save the current thread into the transaction's from field
if (!reply && !(tr->flags & TF_ONE_WAY))
t->from = thread;
else
t->from = NULL;
t->sender_euid = task_euid(proc->tsk);
t->to_proc = target_proc; //the target process of this communication is servicemanager
t->to_thread = target_thread;
t->code = tr->code; //code = ADD_SERVICE_TRANSACTION for this communication
t->flags = tr->flags; // flags = 0 for this communication
t->priority = task_nice(current);
//Allocate a buffer from the servicemanager process
t->buffer = binder_alloc_buf(target_proc, tr->data_size,
tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
t->buffer->allow_user_free = 0;
t->buffer->transaction = t;
t->buffer->target_node = target_node;
if (target_node)
//Increment the reference count
binder_inc_node(target_node, 1, 0, NULL);
offp = (binder_size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));
//Copy ptr.buffer and ptr.offsets of the user-space binder_transaction_data into the kernel
copy_from_user(t->buffer->data,
(const void __user *)(uintptr_t)tr->data.ptr.buffer, tr->data_size);
copy_from_user(offp,
(const void __user *)(uintptr_t)tr->data.ptr.offsets, tr->offsets_size);
off_end = (void *)offp + tr->offsets_size;
for (; offp < off_end; offp++) {
struct flat_binder_object *fp;
fp = (struct flat_binder_object *)(t->buffer->data + *offp);
off_min = *offp + sizeof(struct flat_binder_object);
switch (fp->type) {
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER: {
struct binder_ref *ref;
struct binder_node *node = binder_get_node(proc, fp->binder);
if (node == NULL) {
//Create the binder_node entity in the service's own process
node = binder_new_node(proc, fp->binder, fp->cookie);
// *** some code omitted ***
}
//Get (or create) the binder_ref in the servicemanager process
ref = binder_get_ref_for_node(target_proc, node);
...
//Change the type to a HANDLE type
if (fp->type == BINDER_TYPE_BINDER)
fp->type = BINDER_TYPE_HANDLE;
else
fp->type = BINDER_TYPE_WEAK_HANDLE;
fp->binder = 0;
fp->handle = ref->desc; //set the handle value
fp->cookie = 0;
binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,
&thread->todo);
} break;
// *** other cases omitted ***
}
if (reply) {
// ***省略部分代碼***
} else if (!(t->flags & TF_ONE_WAY)) {
//BC_TRANSACTION and non-oneway: set up the transaction-stack info
t->need_reply = 1;
t->from_parent = thread->transaction_stack;
thread->transaction_stack = t;
} else {
// *** some code omitted ***
}
//Add BINDER_WORK_TRANSACTION to the target queue; for this communication that is target_proc->todo
t->work.type = BINDER_WORK_TRANSACTION;
list_add_tail(&t->work.entry, target_list);
//Add BINDER_WORK_TRANSACTION_COMPLETE to the current thread's todo queue
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
list_add_tail(&tcomplete->entry, &thread->todo);
//Wake up the wait queue; for this communication that is target_proc->wait
if (target_wait)
wake_up_interruptible(target_wait);
return;
}
- Service registration passes a BBinder object, so in writeStrongBinder() above localBinder is non-null, and flat_binder_object.type is BINDER_TYPE_BINDER.
- Registration creates a binder_node in the service's own process and a binder_ref in the servicemanager process. For the same binder_node, each process creates at most one binder_ref object.
- A BINDER_WORK_TRANSACTION item is added to servicemanager's binder_proc->todo, and execution then moves into the ServiceManager process.
Three important functions involved here are worth a closer look:
- binder_get_node()
- binder_new_node()
- binder_get_ref_for_node()
3.1祥国、binder_get_node()函數(shù)處理
// /kernel/drivers/android/binder.c 904行
static struct binder_node *binder_get_node(struct binder_proc *proc,
binder_uintptr_t ptr)
{
struct rb_node *n = proc->nodes.rb_node;
struct binder_node *node;
while (n) {
node = rb_entry(n, struct binder_node, rb_node);
if (ptr < node->ptr)
n = n->rb_left;
else if (ptr > node->ptr)
n = n->rb_right;
else
return node;
}
return NULL;
}
Looks up the corresponding binder_node in binder_proc by the binder pointer value ptr.
3.2 binder_new_node() processing
//kernel/drivers/android/binder.c, line 923
static struct binder_node *binder_new_node(struct binder_proc *proc,
binder_uintptr_t ptr,
binder_uintptr_t cookie)
{
struct rb_node **p = &proc->nodes.rb_node;
struct rb_node *parent = NULL;
struct binder_node *node;
//Empty on first entry
while (*p) {
parent = *p;
node = rb_entry(parent, struct binder_node, rb_node);
if (ptr < node->ptr)
p = &(*p)->rb_left;
else if (ptr > node->ptr)
p = &(*p)->rb_right;
else
return NULL;
}
//Allocate memory for the newly created binder_node
node = kzalloc(sizeof(*node), GFP_KERNEL);
if (node == NULL)
return NULL;
binder_stats_created(BINDER_STAT_NODE);
//Add the created node into the proc's red-black tree
rb_link_node(&node->rb_node, parent, p);
rb_insert_color(&node->rb_node, &proc->nodes);
node->debug_id = ++binder_last_id;
node->proc = proc;
node->ptr = ptr;
node->cookie = cookie;
//Set the binder_work type
node->work.type = BINDER_WORK_NODE;
INIT_LIST_HEAD(&node->work.entry);
INIT_LIST_HEAD(&node->async_todo);
binder_debug(BINDER_DEBUG_INTERNAL_REFS,
"%d:%d node %d u%016llx c%016llx created\n",
proc->pid, current->pid, node->debug_id,
(u64)node->ptr, (u64)node->cookie);
return node;
}
3.3舌稀、binder_get_ref_for_node()函數(shù)處理
// kernel/drivers/android/binder.c 1066行
static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
struct binder_node *node)
{
struct rb_node *n;
struct rb_node **p = &proc->refs_by_node.rb_node;
struct rb_node *parent = NULL;
struct binder_ref *ref, *new_ref;
//Search the refs_by_node red-black tree; if the binder_ref is found, return it directly.
while (*p) {
parent = *p;
ref = rb_entry(parent, struct binder_ref, rb_node_node);
if (node < ref->node)
p = &(*p)->rb_left;
else if (node > ref->node)
p = &(*p)->rb_right;
else
return ref;
}
//Create the binder_ref
new_ref = kzalloc_preempt_disabled(sizeof(*ref));
new_ref->debug_id = ++binder_last_id;
//Record the process info
new_ref->proc = proc;
// Record the binder node
new_ref->node = node;
rb_link_node(&new_ref->rb_node_node, parent, p);
rb_insert_color(&new_ref->rb_node_node, &proc->refs_by_node);
//Compute the handle value of the binder ref; this value is returned to the target_proc process
new_ref->desc = (node == binder_context_mgr_node) ? 0 : 1;
//Compare from the leftmost handle in the red-black tree, incrementing one by one, until the traversal ends or a larger handle is found.
for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
//Use the address n of the binder_ref's rb_node_desc member to recover the binder_ref's base address
ref = rb_entry(n, struct binder_ref, rb_node_desc);
if (ref->desc > new_ref->desc)
break;
new_ref->desc = ref->desc + 1;
}
// Insert the newly created new_ref into the proc->refs_by_desc red-black tree
p = &proc->refs_by_desc.rb_node;
while (*p) {
parent = *p;
ref = rb_entry(parent, struct binder_ref, rb_node_desc);
if (new_ref->desc < ref->desc)
p = &(*p)->rb_left;
else if (new_ref->desc > ref->desc)
p = &(*p)->rb_right;
else
BUG();
}
rb_link_node(&new_ref->rb_node_desc, parent, p);
rb_insert_color(&new_ref->rb_node_desc, &proc->refs_by_desc);
if (node) {
hlist_add_head(&new_ref->node_entry, &node->refs);
}
return new_ref;
}
Rules for how handle values are computed:
- Within each process's binder_proc, the recorded binder_ref handle values start at 1 and increase.
- Across all processes, the binder_ref recorded with handle = 0 in each binder_proc points to service manager.
- The binder_refs of the same service's binder_node may have different handle values in different processes.
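A hypothetical worked example of the desc computation above: suppose a process's refs_by_desc tree already holds descs {1, 2, 4}. A new ref starts at desc = 1; the loop sees desc 1 (not greater than 1), so the new desc becomes 2; sees desc 2, so it becomes 3; then sees desc 4, which is greater than 3, so the loop breaks and the new ref gets desc = 3, filling the hole.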
(七)峻贮、ServiceManager流程
關(guān)于ServiceManager的啟動(dòng)流程席怪,我這里就不詳細(xì)講解了。啟動(dòng)后月洛,就會(huì)循環(huán)在binder_loop()過(guò)程何恶,當(dāng)來(lái)消息后,會(huì)調(diào)用binder_parse()函數(shù)
1嚼黔、binder_parse()函數(shù)
// framework/native/cmds/servicemanager/binder.c 204行
int binder_parse(struct binder_state *bs, struct binder_io *bio,
uintptr_t ptr, size_t size, binder_handler func)
{
int r = 1;
uintptr_t end = ptr + (uintptr_t) size;
while (ptr < end) {
uint32_t cmd = *(uint32_t *) ptr;
ptr += sizeof(uint32_t);
switch(cmd) {
case BR_TRANSACTION: {
struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
// *** some code omitted ***
binder_dump_txn(txn);
if (func) {
unsigned rdata[256/4];
struct binder_io msg;
struct binder_io reply;
int res;
bio_init(&reply, rdata, sizeof(rdata), 4);
//Parse the binder_io info out of txn
bio_init_from_txn(&msg, txn);
// Handle the received Binder transaction
res = func(bs, txn, &msg, &reply);
// Send the reply event
binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
}
ptr += sizeof(*txn);
break;
}
// *** other cases omitted ***
}
return r;
}
2细层、svcmgr_handler()函數(shù)
//frameworks/native/cmds/servicemanager/service_manager.c
244行
int svcmgr_handler(struct binder_state *bs,
struct binder_transaction_data *txn,
struct binder_io *msg,
struct binder_io *reply)
{
struct svcinfo *si;
uint16_t *s;
size_t len;
uint32_t handle;
uint32_t strict_policy;
int allow_isolated;
// *** some code omitted ***
strict_policy = bio_get_uint32(msg);
s = bio_get_string16(msg, &len);
// *** some code omitted ***
switch(txn->code) {
case SVC_MGR_ADD_SERVICE:
s = bio_get_string16(msg, &len);
...
handle = bio_get_ref(msg); //get the handle
allow_isolated = bio_get_uint32(msg) ? 1 : 0;
//Register the specified service
if (do_add_service(bs, s, len, handle, txn->sender_euid,
allow_isolated, txn->sender_pid))
return -1;
break;
// *** other cases omitted ***
}
bio_put_uint32(reply, 0);
return 0;
}
3惜辑、do_add_service()函數(shù)
// frameworks/native/cmds/servicemanager/service_manager.c 194行
int do_add_service(struct binder_state *bs,
const uint16_t *s, size_t len,
uint32_t handle, uid_t uid, int allow_isolated,
pid_t spid)
{
struct svcinfo *si;
if (!handle || (len == 0) || (len > 127))
return -1;
//Permission check
if (!svc_can_register(s, len, spid)) {
return -1;
}
//Look up the service
si = find_svc(s, len);
if (si) {
if (si->handle) {
//The service is already registered; release the old one
svcinfo_death(bs, si);
}
si->handle = handle;
} else {
si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
//Not enough memory; the allocation failed
if (!si) {
return -1;
}
si->handle = handle;
si->len = len;
//Copy the service info
memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
si->name[len] = '\0';
si->death.func = (void*) svcinfo_death;
si->death.ptr = si;
si->allow_isolated = allow_isolated;
//svclist holds all registered services
si->next = svclist;
svclist = si;
}
//Send a BC_ACQUIRE command, targeting handle, to the binder driver via ioctl
binder_acquire(bs, handle);
//Send a BC_REQUEST_DEATH_NOTIFICATION command to the binder driver via ioctl, mainly for memory cleanup and other finishing work.
binder_link_to_death(bs, handle, &si->death);
return 0;
}
svcinfo records the service name and handle info.
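For reference, svcinfo looks roughly like this (quoted from the service_manager.c of the same era; treat the exact fields as a sketch):
struct svcinfo
{
    struct svcinfo *next;        // singly linked list: svclist
    uint32_t handle;             // the service's binder handle
    struct binder_death death;   // death-notification callback + cookie
    int allow_isolated;
    size_t len;                  // name length, in uint16_t units
    uint16_t name[0];            // UTF-16 service name stored inline
};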
4胎撇、binder_send_reply()函數(shù)
// frameworks/native/cmds/servicemanager/binder.c 170行
void binder_send_reply(struct binder_state *bs,
struct binder_io *reply,
binder_uintptr_t buffer_to_free,
int status)
{
struct {
uint32_t cmd_free;
binder_uintptr_t buffer;
uint32_t cmd_reply;
struct binder_transaction_data txn;
} __attribute__((packed)) data;
//The free-buffer command
data.cmd_free = BC_FREE_BUFFER;
data.buffer = buffer_to_free;
// The reply command
data.cmd_reply = BC_REPLY;
data.txn.target.ptr = 0;
data.txn.cookie = 0;
data.txn.code = 0;
if (status) {
// *** some code omitted ***
} else {
data.txn.flags = 0;
data.txn.data_size = reply->data - reply->data0;
data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);
data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
}
//Communicate with the Binder driver
binder_write(bs, &data, sizeof(data));
}
Once binder_write() enters the binder driver, the BC_FREE_BUFFER and BC_REPLY command protocols are handed to the driver, which sends the reply back to the client.
(8) Summary
The core of the service registration flow (addService): create a binder_node in the service's own process and a binder_ref in the servicemanager process, where a binder_ref's desc is unique within a single process:
- Within each process's binder_proc, the recorded binder_ref handle values start at 1 and increase.
- Across all processes, the binder_ref recorded with handle = 0 points to service manager.
- The binder_refs of the same service's binder_node may have different handle values in different processes.
Registering the media service involves MediaPlayerService (as the client process) and Service Manager (as the server process); the communication flow is shown below:
Step-by-step analysis:
- 1宝鼓、MediaPlayerService進(jìn)程調(diào)用 ioctl()向Binder驅(qū)動(dòng)發(fā)送IPC數(shù)據(jù)刑棵,該過(guò)程可以理解成一個(gè)事物 binder_transaction (記為BT1),執(zhí)行當(dāng)前操作線程的binder_thread(記為 thread1)愚铡,則BT1 ->from_parent=NULL蛉签, BT1 ->from=thread1,thread1 ->transaction_stack=T1茂附。其中IPC數(shù)據(jù)內(nèi)容包括:
- Binder協(xié)議為BC_TRANSACTION
- Handle等于0
- PRC代碼為ADD_SERVICE
- PRC數(shù)據(jù)為"media.player"
- 2正蛙、Binder驅(qū)動(dòng)收到該Binder請(qǐng)求。生成BR_TRANSACTION命令,選擇目標(biāo)處理該請(qǐng)求的線程,即ServiceManager的binder線程(記為thread2)往毡,則T1->to_parent=NULL,T1 -> to_thread=thread2踏揣,并將整個(gè)binder_transaction數(shù)據(jù)(記為BT2)插入到目標(biāo)線程的todo隊(duì)列。
- 3狂塘、Service Manager的線程thread收到BT2后录煤,調(diào)用服務(wù)注冊(cè)函數(shù)將服務(wù)“media.player”注冊(cè)到服務(wù)目錄中。當(dāng)服務(wù)注冊(cè)完成荞胡,生成IPC應(yīng)答數(shù)據(jù)(BC_REPLY)妈踊,BT2->from_parent=BT1,BT2 ->from=thread2泪漂,thread2->transaction_stack=BT2廊营。
- 4歪泳、Binder驅(qū)動(dòng)收到該Binder應(yīng)答請(qǐng)求,生成BR_REPLY命令露筒,BT2->to_parent=BT1呐伞,BT2->to_thread1,thread1->transaction_stack=BT2慎式。在MediaPlayerService收到該命令后伶氢,知道服務(wù)注冊(cè)完成便可以正常使用。
五瘪吏、獲取服務(wù)
(一) 源碼位置
/frameworks/av/media/libmedia/
- IMediaDeathNotifier.cpp
frameworks/native/libs/binder/
- Binder.cpp
- BpBinder.cpp
- IPCThreadState.cpp
- ProcessState.cpp
- IServiceManager.cpp
At the native layer we again take media as the example to walk through getting a service; first, look at the media class diagram.
(2) Class Diagram
Legend:
- Blue: the classes involved in getting the MediaPlayerService service
- Green: the two core classes in the Binder architecture that communicate with the Binder driver
- Purple: the common interfaces/parent classes for registering and getting services
(3) The Get-Service Flow
1扇救、getMediaPlayerService()函數(shù)
//frameworks/av/media/libmedia/IMediaDeathNotifier.cpp 35行
sp<IMediaPlayerService>&
IMediaDeathNotifier::getMediaPlayerService()
{
Mutex::Autolock _l(sServiceLock);
if (sMediaPlayerService == 0) {
// Get the ServiceManager
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder;
do {
//Get the service named "media.player"
binder = sm->getService(String16("media.player"));
if (binder != 0) {
break;
}
usleep(500000); // 0.5s
} while (true);
if (sDeathNotifier == NULL) {
// Create the death-notification object
sDeathNotifier = new DeathNotifier();
}
//Link the death notification to the binder
binder->linkToDeath(sDeathNotifier);
sMediaPlayerService = interface_cast<IMediaPlayerService>(binder);
}
return sMediaPlayerService;
}
The defaultServiceManager() step was covered above; it returns the BpServiceManager.
The request for the service named "media.player" loops deliberately: MediaPlayerService may not have finished registering with ServiceManager, or may not have started yet, so binder can come back NULL; the code sleeps 0.5s and retries until the service is obtained.
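The final step, interface_cast<IMediaPlayerService>(binder), converts the raw IBinder into the typed interface. Its definition in IInterface.h is essentially:
// frameworks/native/include/binder/IInterface.h (simplified)
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    // For a remote IBinder this ends up constructing a Bp<INTERFACE>
    // (here BpMediaPlayerService) that wraps the BpBinder.
    return INTERFACE::asInterface(obj);
}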
2. The BpServiceManager::getService() function
//frameworks/native/libs/binder/IServiceManager.cpp, line 134
virtual sp<IBinder> getService(const String16& name) const
{
unsigned n;
for (n = 0; n < 5; n++){
sp<IBinder> svc = checkService(name);
if (svc != NULL) return svc;
sleep(1);
}
return NULL;
}
The MediaPlayer service is fetched through BpServiceManager: check whether the service exists; if so, return it; if not, sleep 1s and check again, for up to 5 iterations. Why 5? It is probably tied to Android's 5s ANR timeout: 5 iterations with a 1s sleep each, ignoring the time checkService() itself takes, adds up to roughly 5s.
3. The BpServiceManager::checkService() function
//frameworks/native/libs/binder/IServiceManager.cpp, line 146
virtual sp<IBinder> checkService( const String16& name) const
{
Parcel data, reply;
//Write the RPC header
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
//Write the service name
data.writeString16(name);
remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
return reply.readStrongBinder();
}
Checks whether the specified service exists; remote() here is the BpBinder.
4. The BpBinder::transact() function
// /frameworks/native/libs/binder/BpBinder.cpp, line 159
status_t BpBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
if (mAlive) {
status_t status = IPCThreadState::self()->transact(
mHandle, code, data, reply, flags);
if (status == DEAD_OBJECT) mAlive = 0;
return status;
}
return DEAD_OBJECT;
}
The Binder proxy class's transact() again hands the real transact work to IPCThreadState.
4.1 The IPCThreadState::self() function
IPCThreadState* IPCThreadState::self()
{
if (gHaveTLS) {
restart:
const pthread_key_t k = gTLS;
IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
if (st) return st;
//Construct the IPCThreadState
return new IPCThreadState;
}
if (gShutdown) return NULL;
pthread_mutex_lock(&gTLSMutex);
//On first entry gHaveTLS is false
if (!gHaveTLS) {
//Create the thread's TLS key
if (pthread_key_create(&gTLS, threadDestructor) != 0) {
pthread_mutex_unlock(&gTLSMutex);
return NULL;
}
gHaveTLS = true;
}
pthread_mutex_unlock(&gTLSMutex);
goto restart;
}
TLS stands for Thread Local Storage: each thread owns its own private area, not shared between threads; pthread_getspecific()/pthread_setspecific() get and set its contents, and the IPCThreadState object is fetched from the thread's local storage.
The rest of the flow is largely the same as the registration flow above: IPCThreadState::transact(), IPCThreadState::writeTransactionData(), IPCThreadState::waitForResponse() and IPCThreadState::talkWithDriver(). Since those were covered already, we pick up the story at IPCThreadState::talkWithDriver().
4.2 The IPCThreadState::talkWithDriver() function
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
...
binder_write_read bwr;
const bool needRead = mIn.dataPosition() >= mIn.dataSize();
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
bwr.write_size = outAvail;
bwr.write_buffer = (uintptr_t)mOut.data();
//Fill in the receive-buffer info; any data received later lands directly in mIn.
if (doReceive && needRead) {
bwr.read_size = mIn.dataCapacity();
bwr.read_buffer = (uintptr_t)mIn.data();
} else {
bwr.read_size = 0;
bwr.read_buffer = 0;
}
//If both the read buffer and the write buffer are empty, return immediately
if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
bwr.write_consumed = 0;
bwr.read_consumed = 0;
status_t err;
do {
//Communicate with the Binder driver through repeated ioctl read/write operations
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
...
//If interrupted, keep going
} while (err == -EINTR);
...
return err;
}
The binder_write_read struct is used to exchange data with the Binder device; ioctl on mDriverFD is the real read/write interaction with the Binder driver. First, the query-service request is sent toward the service manager process (arriving there as BR_TRANSACTION). When the service manager process receives the command, it runs do_find_service() to look up the handle of the requested service, then answers the sender via binder_send_reply(), sending the BC_REPLY protocol; that in turn calls binder_transaction() to insert a transaction into the service requester's todo queue.
Let's continue with the binder_transaction() step.
4.2.1 The binder_transaction() function
//kernel/drivers/android/binder.c, line 1827
static void binder_transaction(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_transaction_data *tr, int reply){
//根據(jù)各種判定谣旁,獲取以下信息:
// 目標(biāo)線程
struct binder_thread *target_thread;
// 目標(biāo)進(jìn)程
struct binder_proc *target_proc滋早;
/// 目標(biāo)binder節(jié)點(diǎn)
struct binder_node *target_node榄审;
// 目標(biāo) TODO隊(duì)列
struct list_head *target_list;
// 目標(biāo)等待隊(duì)列
wait_queue_head_t *target_wait杆麸;
...
//分配兩個(gè)結(jié)構(gòu)體內(nèi)存
struct binder_transaction *t = kzalloc(sizeof(*t), GFP_KERNEL);
struct binder_work *tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
//從target_proc分配一塊buffer
t->buffer = binder_alloc_buf(target_proc, tr->data_size,
for (; offp < off_end; offp++) {
switch (fp->type) {
case BINDER_TYPE_BINDER: ...
case BINDER_TYPE_WEAK_BINDER: ...
case BINDER_TYPE_HANDLE:
case BINDER_TYPE_WEAK_HANDLE: {
struct binder_ref *ref = binder_get_ref(proc, fp->handle,
fp->type == BINDER_TYPE_HANDLE);
...
//We are running in the servicemanager process here, so ref->node points to the binder entity in the service's own process,
//while target_proc is the process requesting the service; in this case they are not the same.
if (ref->node->proc == target_proc) {
if (fp->type == BINDER_TYPE_HANDLE)
fp->type = BINDER_TYPE_BINDER;
else
fp->type = BINDER_TYPE_WEAK_BINDER;
fp->binder = ref->node->ptr;
// the address of the BBinder service
fp->cookie = ref->node->cookie;
binder_inc_node(ref->node, fp->type == BINDER_TYPE_BINDER, 0, NULL);
} else {
struct binder_ref *new_ref;
//The requesting process is not the service's own process, so create a binder_ref for the requesting process
new_ref = binder_get_ref_for_node(target_proc, ref->node);
fp->binder = 0;
//Re-assign the handle value
fp->handle = new_ref->desc;
fp->cookie = 0;
binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
}
} break;
case BINDER_TYPE_FD: ...
}
}
//Insert work items into target_list and the current thread's todo queue, respectively
t->work.type = BINDER_WORK_TRANSACTION;
list_add_tail(&t->work.entry, target_list);
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
list_add_tail(&tcomplete->entry, &thread->todo);
if (target_wait)
wake_up_interruptible(target_wait);
return;
}
This step matters a great deal; there are two cases:
- Case 1: when the requesting process and the service belong to different processes, a binder_ref object is created for the requesting process, pointing to the binder_node in the service's process.
- Case 2: when the requesting process and the service belong to the same process, no new object is created; the reference count is simply incremented, and the type is changed to BINDER_TYPE_BINDER or BINDER_TYPE_WEAK_BINDER.
4.2.2 The binder_thread_read() function
//kernel/drivers/android/binder.c, line 2650
binder_thread_read(struct binder_proc *proc,struct binder_thread *thread,binder_uintptr_t binder_buffer, size_t size,binder_size_t *consumed, int non_block){
...
//If the thread's todo queue has data, continue onward; otherwise go to sleep and wait
ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
...
while (1) {
uint32_t cmd;
struct binder_transaction_data tr;
struct binder_work *w;
struct binder_transaction *t = NULL;
//First take transaction data from the thread's todo queue
if (!list_empty(&thread->todo)) {
w = list_first_entry(&thread->todo, struct binder_work, entry);
// If the thread's todo queue has no data, take transaction data from the process's todo queue
} else if (!list_empty(&proc->todo) && wait_for_proc_work) {
...
}
switch (w->type) {
case BINDER_WORK_TRANSACTION:
//Get the transaction data
t = container_of(w, struct binder_transaction, work);
break;
// *** other cases omitted ***
}
//Only a BINDER_WORK_TRANSACTION command continues past this point
if (!t) continue;
if (t->buffer->target_node) {
...
} else {
tr.target.ptr = NULL;
tr.cookie = NULL;
//Set the command to BR_REPLY
cmd = BR_REPLY;
}
tr.code = t->code;
tr.flags = t->flags;
tr.sender_euid = t->sender_euid;
if (t->from) {
struct task_struct *sender = t->from->proc->tsk;
//For non-oneway, save the caller process's pid into sender_pid
tr.sender_pid = task_tgid_nr_ns(sender, current->nsproxy->pid_ns);
} else {
...
}
tr.data_size = t->buffer->data_size;
tr.offsets_size = t->buffer->offsets_size;
tr.data.ptr.buffer = (void *)t->buffer->data +
proc->user_buffer_offset;
tr.data.ptr.offsets = tr.data.ptr.buffer +
ALIGN(t->buffer->data_size,
sizeof(void *));
//Write cmd and the data back to user space
put_user(cmd, (uint32_t __user *)ptr);
ptr += sizeof(uint32_t);
copy_to_user(ptr, &tr, sizeof(tr));
ptr += sizeof(tr);
list_del(&t->work.entry);
t->buffer->allow_user_free = 1;
if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
...
} else {
t->buffer->transaction = NULL;
//Communication finished; free the transaction
kfree(t);
}
break;
}
done:
*consumed = ptr - buffer;
if (proc->requested_threads + proc->ready_threads == 0 &&
proc->requested_threads_started < proc->max_threads &&
(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
BINDER_LOOPER_STATE_ENTERED))) {
proc->requested_threads++;
// Generate a BR_SPAWN_LOOPER command, used to spawn a new thread
put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer);
}
return 0;
}
4.3 The readStrongBinder() function
//frameworks/native/libs/binder/Parcel.cpp, line 1334
sp<IBinder> Parcel::readStrongBinder() const
{
sp<IBinder> val;
unflatten_binder(ProcessState::self(), *this, &val);
return val;
}
It mainly calls unflatten_binder(); let's look at it in detail.
4.3.1 The unflatten_binder() function
status_t unflatten_binder(const sp<ProcessState>& proc,
const Parcel& in, sp<IBinder>* out)
{
const flat_binder_object* flat = in.readObject(false);
if (flat) {
switch (flat->type) {
case BINDER_TYPE_BINDER:
// The requesting process and the service belong to the same process
*out = reinterpret_cast<IBinder*>(flat->cookie);
return finish_unflatten_binder(NULL, *flat, in);
case BINDER_TYPE_HANDLE:
//The requesting process and the service belong to different processes
*out = proc->getStrongProxyForHandle(flat->handle);
//Create the BpBinder object
return finish_unflatten_binder(
static_cast<BpBinder*>(out->get()), *flat, in);
}
}
return BAD_TYPE;
}
If the requesting process and the service belong to different processes, getStrongProxyForHandle() is called, so let's study it.
4.3.2 The getStrongProxyForHandle() function
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;
AutoMutex _l(mLock);
//Look up the resource entry for this handle (see lookupHandleLocked() below)
handle_entry* e = lookupHandleLocked(handle);
if (e != NULL) {
IBinder* b = e->binder;
if (b == NULL || !e->refs->attemptIncWeak(this)) {
...
//If the IBinder for this handle does not exist or its weak reference is invalid, create a BpBinder object
b = new BpBinder(handle);
e->binder = b;
if (b) e->refs = b->getWeakRefs();
result = b;
} else {
result.force_set(b);
e->refs->decWeak(this);
}
}
return result;
}
readStrongBinder()'s job is to parse the flat_binder_object and create the BpBinder object.
4.3.3 The lookupHandleLocked() function
ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
const size_t N=mHandleToObject.size();
//Taken when handle is not less than mHandleToObject's current size
if (N <= (size_t)handle) {
handle_entry e;
e.binder = NULL;
e.refs = NULL;
//Starting at position N of mHandleToObject, insert (handle+1-N) copies of e
status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
if (err < NO_ERROR) return NULL;
}
return &mHandleToObject.editItemAt(handle);
}
根據(jù)handle值來(lái)查找對(duì)應(yīng)的handle_entry尸诽。
(三)、死亡通知
死亡通知時(shí)為了讓Bp端知道Bn端的生死情況
- DeathNotifier是繼承IBinder::DeathRecipient類盯另,主要需要實(shí)現(xiàn)其binderDied()來(lái)進(jìn)行死亡通告性含。
- 注冊(cè):binder->linkToDeath(sDeathNotifier)是為了將sDeathNotifier死亡通知注冊(cè)到Binder上。
Bp端只需要覆寫binderDied()方法鸳惯,實(shí)現(xiàn)一些后尾清楚類的工作商蕴,則在Bn端死掉后,會(huì)回調(diào)binderDied()進(jìn)行相應(yīng)處理
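A minimal sketch of such a recipient (the names MyDeathRecipient and service are hypothetical):
#include <binder/IBinder.h>
#include <utils/StrongPointer.h>

using namespace android;

class MyDeathRecipient : public IBinder::DeathRecipient {
public:
    // Called back when the Bn side dies.
    virtual void binderDied(const wp<IBinder>& who) {
        // Cleanup: drop cached proxies, notify listeners, schedule a reconnect, ...
    }
};

// Usage, given an sp<IBinder> named service obtained from ServiceManager:
//   sp<IBinder::DeathRecipient> recipient = new MyDeathRecipient();
//   service->linkToDeath(recipient);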
1芝发、linkToDeath()函數(shù)
// frameworks/native/libs/binder/BpBinder.cpp 173行
status_t BpBinder::linkToDeath(
const sp<DeathRecipient>& recipient, void* cookie, uint32_t flags)
{
Obituary ob;
ob.recipient = recipient;
ob.cookie = cookie;
ob.flags = flags;
{
AutoMutex _l(mLock);
if (!mObitsSent) {
if (!mObituaries) {
mObituaries = new Vector<Obituary>;
if (!mObituaries) {
return NO_MEMORY;
}
getWeakRefs()->incWeak(this);
IPCThreadState* self = IPCThreadState::self();
self->requestDeathNotification(mHandle, this);
self->flushCommands();
}
ssize_t res = mObituaries->add(ob);
return res >= (ssize_t)NO_ERROR ? (status_t)NO_ERROR : res;
}
}
return DEAD_OBJECT;
}
It calls requestDeathNotification().
2. The requestDeathNotification() function
//frameworks/native/libs/binder/IPCThreadState.cpp, line 670
status_t IPCThreadState::requestDeathNotification(int32_t handle, BpBinder* proxy)
{
mOut.writeInt32(BC_REQUEST_DEATH_NOTIFICATION);
mOut.writeInt32((int32_t)handle);
mOut.writePointer((uintptr_t)proxy);
return NO_ERROR;
}
This sends the BC_REQUEST_DEATH_NOTIFICATION command to the binder driver. The subsequent flow matches the **binder_link_to_death()** step in Service Manager described earlier.
3. The binderDied() function
//frameworks/av/media/libmedia/IMediaDeathNotifier.cpp, line 78
void IMediaDeathNotifier::DeathNotifier::binderDied(const wp<IBinder>& who __unused) {
SortedVector< wp<IMediaDeathNotifier> > list;
{
Mutex::Autolock _l(sServiceLock);
// Clear the Bp-side MediaPlayerService
sMediaPlayerService.clear();
list = sObitRecipients;
}
size_t count = list.size();
for (size_t iter = 0; iter < count; ++iter) {
sp<IMediaDeathNotifier> notifier = list[iter].promote();
if (notifier != 0) {
//When MediaServer dies, the application is notified and this callback runs
notifier->died();
}
}
}
The client process obtains the Binder proxy (BpBinder) through the Binder driver. Registering a death notification means the client process registers with the Binder driver a death notice associated with the BBinder, i.e. with the server side that the BpBinder corresponds to.
4. The unlinkToDeath() function
If the Bp side goes away before it receives the server's death notice, it needs to call unlinkToDeath() in its destructor to cancel the death notification:
//frameworks/av/media/libmedia/IMediaDeathNotifier.cpp, line 101
IMediaDeathNotifier::DeathNotifier::~DeathNotifier()
{
Mutex::Autolock _l(sServiceLock);
sObitRecipients.clear();
if (sMediaPlayerService != 0) {
IInterface::asBinder(sMediaPlayerService)->unlinkToDeath(this);
}
}
5拇厢、觸發(fā)時(shí)機(jī)
每當(dāng)service進(jìn)程退出時(shí),service manager 會(huì)收到來(lái)自Binder驅(qū)動(dòng)的死亡通知移迫。這項(xiàng)工作在啟動(dòng)Service Manager時(shí)通過(guò) binder_link_to_death(bs, ptr, &si->death)完成旺嬉。另外,每個(gè)Bp端也可以自己注冊(cè)死亡通知厨埋,能獲取Binder的死亡消息邪媳,比如前面的IMediaDeathNotifier。
那Binder的死亡通知時(shí)如何被出發(fā)的荡陷?對(duì)于Binder的IPC進(jìn)程都會(huì)打開/dev/binder文件雨效,當(dāng)進(jìn)程異常退出的時(shí)候,Binder驅(qū)動(dòng)會(huì)保證釋放將要退出的進(jìn)程中沒有正常關(guān)閉的/dev/binder文件废赞,實(shí)現(xiàn)機(jī)制是binder驅(qū)動(dòng)通過(guò)調(diào)用/dev/binder文件所對(duì)應(yīng)的release回調(diào)函數(shù)徽龟,執(zhí)行清理工作,并且檢查BBinder是否有注冊(cè)死亡通知唉地,當(dāng)發(fā)現(xiàn)存在死亡通知時(shí)据悔,那么久向其對(duì)應(yīng)的BpBinder端發(fā)送死亡通知消息。
(5) Summary
During getService, when execution reaches binder_transaction(), the flow distinguishes which process the request comes from:
- When the requesting process and the service belong to different processes, a binder_ref object is created for the requesting process, pointing to the service process's binder_node.
- When the requesting process and the service belong to the same process, no new object is created; the reference count is simply incremented, and type is changed to BINDER_TYPE_BINDER or BINDER_TYPE_WEAK_BINDER.
- Finally, readStrongBinder() returns the real subclass of the BBinder object.