Please credit the source when reposting: http://www.reibang.com/p/95e61dcaa1fe
The figure at the top of the original post gives a good overview of what the binder driver does; this article walks through the binder driver with that picture in mind.
Most articles about binder online are tied to ServiceManager, and very few explain binder without going through ServiceManager. For that reason this article does not cover ServiceManager at all. Instead it follows this flow: the Client binds to the Server's Service via bindService, obtains a remote proxy for the Server, and then uses that proxy to ask the Server for its version number. The concrete flow is:
where:
- Client: the client process
- Server: the server process
- IDemo: an AIDL file; it auto-generates IDemo.Stub and IDemo.Stub.Proxy. The former is used on the Server side to provide the service, the latter is used by the Client as a proxy for the Server's IDemo.Stub.
- IDemoServer: the Server-side class that extends IDemo.Stub and implements the actual service.
Note that even when crossing processes directly through AIDL, a third process (SystemServer) is involved: the Server does not hand IDemoServer "directly" to the Client; it first passes it to SystemServer, and SystemServer then passes it on to the Client. So describing one of these two hops is enough.
The first two sections of this article briefly describe some important data structures in the binder driver. Section 3 shows how the Server side gets a binder_node and the corresponding binder_ref created in the driver. Section 4 shows how the reference to IDemoServer is passed first to AMS and then from AMS to the Client process. Finally, section 5 briefly shows the Client calling getVersion to use the service.
1. Process and thread structures in the binder driver
1.1 Per-process state: binder_proc
Every process that opens the binder device gets a binder_proc, created in binder_open:
static int binder_open(struct inode *nodp, struct file *filp)
{
    struct binder_proc *proc;

    proc = kzalloc(sizeof(*proc), GFP_KERNEL); // allocate a binder_proc in kernel space describing this process
    if (proc == NULL)
        return -ENOMEM;
    get_task_struct(current);          // take a reference on the task_struct, since binder keeps a pointer to it
    proc->tsk = current;               // remember the current task's descriptor
    INIT_LIST_HEAD(&proc->todo);       // initialize the todo list
    init_waitqueue_head(&proc->wait);  // initialize the wait queue
    ...
    hlist_add_head(&proc->proc_node, &binder_procs); // add this binder_proc to the global binder_procs list
    proc->pid = current->group_leader->pid; // current is really a user-space thread; record its thread-group leader's pid
    filp->private_data = proc;         // stash the binder_proc in the file pointer
    ...
    return 0;
}
filp->private_data = proc stores the current process's binder_proc in the file pointer. So which file is this?
When ProcessState is initialized it opens the "/dev/binder" device and gets back a file descriptor, which is saved in ProcessState->mDriverFD. That open() call eventually reaches binder_open, so this file pointer corresponds to mDriverFD.
Later, binder_ioctl/binder_mmap use this very file pointer to find the calling process's binder_proc.
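To make the user-space side of this concrete, here is a minimal sketch (not the actual ProcessState code; the function name and flags are illustrative) of opening the binder device. The fd it returns is what mDriverFD holds, and every later ioctl on it reaches the same binder_proc through filp->private_data:

/* Minimal sketch of what ProcessState effectively does when it starts up.
 * open() reaches binder_open() in the driver, which allocates this process's
 * binder_proc and stores it in filp->private_data. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

int open_binder_sketch(void)
{
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd < 0)
        return -1;

    /* Any ioctl on this fd (BINDER_VERSION, BINDER_WRITE_READ, ...) lands in
     * binder_ioctl(), which recovers the same binder_proc from the file pointer. */
    struct binder_version vers;
    if (ioctl(fd, BINDER_VERSION, &vers) < 0) {
        close(fd);
        return -1;
    }

    /* ProcessState additionally mmap()s the binder buffer here (binder_mmap); omitted. */
    return fd;
}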
1.2 Per-thread state: binder_thread
A process has many threads, and a client request to the server is issued by one particular thread, so binder needs a structure representing a thread: binder_thread.
Note: the per-thread structures are kept in the threads red-black tree inside binder_proc.
In binder_ioctl, binder_get_thread looks up (or creates) the binder_thread for the calling thread inside the process structure (binder_proc).
Note: strictly speaking the Linux kernel has no separate notion of a "thread"; every user-space thread is backed by a task_struct and looks much like an ordinary process, i.e. it has its own pid.
static struct binder_thread *binder_get_thread(struct binder_proc *proc)
{
    struct binder_thread *thread = NULL;
    struct rb_node *parent = NULL;
    struct rb_node **p = &proc->threads.rb_node; // root of the tree holding all of this process's threads

    while (*p) { // check whether a binder_thread already exists for this thread
        parent = *p;
        thread = rb_entry(parent, struct binder_thread, rb_node);
        // current->pid is the pid of the current (user-space) thread
        if (current->pid < thread->pid)
            p = &(*p)->rb_left;
        else if (current->pid > thread->pid)
            p = &(*p)->rb_right;
        else
            break;
    }
    // if no binder_thread exists yet, create one and add it to binder_proc's threads tree
    if (*p == NULL) {
        thread = kzalloc(sizeof(*thread), GFP_KERNEL);
        thread->proc = proc;                 // point back to the process structure
        thread->pid = current->pid;          // record the pid
        init_waitqueue_head(&thread->wait);  // initialize the thread's wait queue
        INIT_LIST_HEAD(&thread->todo);       // the thread's todo list
        rb_link_node(&thread->rb_node, parent, p);
        rb_insert_color(&thread->rb_node, &proc->threads); // insert into the red-black tree in binder_proc->threads
        thread->looper |= BINDER_LOOPER_STATE_NEED_RETURN;
        thread->return_error = BR_OK;        // initialize the return error codes
        thread->return_error2 = BR_OK;
    }
    return thread;
}
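A small aside on the two pids above: binder_proc.pid is set from current->group_leader->pid (what user space calls the process id), while binder_thread.pid is current->pid (what user space calls the thread id). A tiny user-space sketch of the correspondence, purely for illustration:

/* Illustrative only: how the user-space ids map onto the pids the driver records. */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    printf("getpid() = %d   -> binder_proc.pid   (current->group_leader->pid)\n", getpid());
    printf("gettid() = %ld  -> binder_thread.pid (current->pid)\n",
           (long)syscall(SYS_gettid));
    return 0;
}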
2. The structures behind BBinder and BpBinder in the binder driver
In Android's native layer every binder is either a BBinder (the entity) or a BpBinder (the proxy). How are they represented inside the binder driver?
2.1 The BBinder entity in the driver — the server-side object
It is represented by binder_node, which is created in binder_new_node:
static struct binder_node *binder_new_node(struct binder_proc *proc,
                                           binder_uintptr_t ptr,
                                           binder_uintptr_t cookie)
{
    struct rb_node **p = &proc->nodes.rb_node;
    struct rb_node *parent = NULL;
    struct binder_node *node;

    // search binder_proc's nodes red-black tree to see whether a node for this BBinder already exists
    while (*p) {
        parent = *p;
        node = rb_entry(parent, struct binder_node, rb_node);
        if (ptr < node->ptr)
            p = &(*p)->rb_left;
        else if (ptr > node->ptr)
            p = &(*p)->rb_right;
        else
            return NULL;
    }
    // not found: allocate a new binder_node in kernel space
    node = kzalloc(sizeof(*node), GFP_KERNEL);
    rb_link_node(&node->rb_node, parent, p);
    rb_insert_color(&node->rb_node, &proc->nodes); // insert into binder_proc->nodes
    node->proc = proc;      // point back to the owning binder_proc
    node->ptr = ptr;        // user-space address associated with the BBinder
    node->cookie = cookie;
    node->work.type = BINDER_WORK_NODE;
    return node;
}
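Where do ptr and cookie come from? They are the two addresses user space placed in the flat_binder_object when the local BBinder was flattened (the flatten side is only shown in section 4.2; the field assignments below are quoted from the AOSP Parcel::flatten_binder code from memory, so treat them as an assumption):

/* Sketch of the flat_binder_object a *local* binder (BBinder) is flattened into:
 *   type   = BINDER_TYPE_BINDER          (a real local object, not a handle)
 *   binder = address of the BBinder's weak-ref object  ->  stored as node->ptr
 *   cookie = address of the BBinder itself             ->  stored as node->cookie
 * binder_new_node() keeps these two addresses so the driver can later hand an
 * incoming transaction back to exactly this BBinder in user space. */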
2.2 The BpBinder entity in the driver — a reference (proxy) to a binder_node
It is represented by binder_ref, created in binder_get_ref_for_node:
static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
                                                  struct binder_node *node)
{
    struct rb_node *n;
    struct rb_node **p = &proc->refs_by_node.rb_node;
    struct rb_node *parent = NULL;
    struct binder_ref *ref, *new_ref;

    // check whether this process already has a reference to the binder_node
    while (*p) {
        parent = *p;
        ref = rb_entry(parent, struct binder_ref, rb_node_node);
        if (node < ref->node)
            p = &(*p)->rb_left;
        else if (node > ref->node)
            p = &(*p)->rb_right;
        else
            return ref;
    }
    // no reference yet, so allocate a new binder_ref
    new_ref = kzalloc(sizeof(*ref), GFP_KERNEL);
    new_ref->proc = proc; // the binder_proc of the process that holds the reference
    new_ref->node = node; // the binder_node being referenced, i.e. proxied
    rb_link_node(&new_ref->rb_node_node, parent, p);
    rb_insert_color(&new_ref->rb_node_node, &proc->refs_by_node); // insert into binder_proc->refs_by_node

    new_ref->desc = (node == binder_context_mgr_node) ? 0 : 1; // desc 0 is reserved for the ServiceManager reference
    for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
        ref = rb_entry(n, struct binder_ref, rb_node_desc);
        if (ref->desc > new_ref->desc)
            break;
        new_ref->desc = ref->desc + 1; // the new desc is one past the largest desc currently in use
    }
    p = &proc->refs_by_desc.rb_node;
    while (*p) {
        parent = *p;
        ref = rb_entry(parent, struct binder_ref, rb_node_desc);
        if (new_ref->desc < ref->desc)
            p = &(*p)->rb_left;
        else if (new_ref->desc > ref->desc)
            p = &(*p)->rb_right;
        else
            BUG();
    }
    rb_link_node(&new_ref->rb_node_desc, parent, p);
    rb_insert_color(&new_ref->rb_node_desc, &proc->refs_by_desc); // also insert the binder_ref into binder_proc->refs_by_desc
    if (node) {
        hlist_add_head(&new_ref->node_entry, &node->refs); // link the ref into the node's refs list
    } else {
    }
    return new_ref;
}
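For completeness — later sections call binder_get_ref without showing it — here is a simplified sketch of the reverse lookup: given a handle (desc) that a process holds, find its binder_ref. The real function also takes a flag distinguishing strong and weak lookups:

/* Simplified sketch of binder_get_ref(): walk proc->refs_by_desc by handle. */
static struct binder_ref *binder_get_ref_sketch(struct binder_proc *proc, u32 desc)
{
    struct rb_node *n = proc->refs_by_desc.rb_node;
    struct binder_ref *ref;

    while (n) {
        ref = rb_entry(n, struct binder_ref, rb_node_desc);
        if (desc < ref->desc)
            n = n->rb_left;
        else if (desc > ref->desc)
            n = n->rb_right;
        else
            return ref; /* ref->node then leads to the owning process's binder_node */
    }
    return NULL;
}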
三青自、Server端生成IDemoServer在驅(qū)動(dòng)中的binder_node與binder_ref
先來(lái)看下Server端publishService的過(guò)程, 當(dāng)AMS通知Server端handleBindService后株依,Server就會(huì)將Service publish給AMS
private void handleBindService(BindServiceData data) {
...
IBinder binder = s.onBind(data.intent);
ActivityManagerNative.getDefault().publishService(data.token, data.intent, binder);
...
}
public void publishService(IBinder token,
Intent intent, IBinder service) throws RemoteException {
Parcel data = Parcel.obtain();
Parcel reply = Parcel.obtain();
data.writeInterfaceToken(IActivityManager.descriptor);
data.writeStrongBinder(token);
intent.writeToParcel(data, 0);
data.writeStrongBinder(service);
mRemote.transact(PUBLISH_SERVICE_TRANSACTION, data, reply, 0);
reply.readException();
data.recycle();
reply.recycle();
}
publishService is thus turned into the data flow the original post showed in Figure 2: a binder_write_read whose write_buffer carries BC_TRANSACTION followed by a binder_transaction_data describing the Parcel.
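The figure itself is not reproduced here, so as a rough substitute, here is a sketch of how such a transaction is packed before the ioctl. The real packing happens in IPCThreadState::writeTransactionData()/talkWithDriver(); the helper below and its parameter names are purely illustrative, but the struct fields and constants come from the binder UAPI header:

/* Illustrative sketch: packing one BC_TRANSACTION for ioctl(BINDER_WRITE_READ). */
#include <string.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

static int send_transaction_sketch(int fd, uint32_t handle, uint32_t code,
                                   const void *parcel_data, size_t data_size,
                                   const binder_size_t *objects, size_t objects_size)
{
    struct {
        uint32_t cmd;                        /* BC_TRANSACTION                      */
        struct binder_transaction_data tr;   /* describes the Parcel being sent     */
    } __attribute__((packed)) writebuf;

    memset(&writebuf, 0, sizeof(writebuf));
    writebuf.cmd                 = BC_TRANSACTION;
    writebuf.tr.target.handle    = handle;       /* e.g. the AMS proxy's handle      */
    writebuf.tr.code             = code;         /* e.g. PUBLISH_SERVICE_TRANSACTION */
    writebuf.tr.data_size        = data_size;    /* size of the Parcel's mData       */
    writebuf.tr.offsets_size     = objects_size; /* size of the Parcel's mObjects    */
    writebuf.tr.data.ptr.buffer  = (binder_uintptr_t)(uintptr_t)parcel_data;
    writebuf.tr.data.ptr.offsets = (binder_uintptr_t)(uintptr_t)objects;

    struct binder_write_read bwr;
    memset(&bwr, 0, sizeof(bwr));
    bwr.write_buffer = (binder_uintptr_t)(uintptr_t)&writebuf;
    bwr.write_size   = sizeof(writebuf);
    /* read_buffer/read_size would point at the reply Parcel (mIn); omitted here. */

    return ioctl(fd, BINDER_WRITE_READ, &bwr);
}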
From section 2 we know that binder_node and binder_ref are created in binder_new_node and binder_get_ref_for_node respectively. So when are these two functions called — in other words, where in the code path are they reached?
binder_ioctl
  -> case BINDER_WRITE_READ
    -> binder_ioctl_write_read
      -> binder_thread_write
binder_thread_write is essentially where the data shown in Figure 2 gets unpacked.
static int binder_ioctl_write_read(struct file *filp, unsigned int cmd, unsigned long arg, struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data; // recover this process's binder_proc
    unsigned int size = _IOC_SIZE(cmd);            // cmd here is BINDER_WRITE_READ
    void __user *ubuf = (void __user *)arg;        // address of the user-space data
    struct binder_write_read bwr;

    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) { // copy the binder_write_read (Figure 2) from user space into the kernel
        ...
    }
    if (bwr.write_size > 0) { // write_size > 0 means write_buffer contains data, so perform the write
        ret = binder_thread_write(proc, thread,
                                  bwr.write_buffer,
                                  bwr.write_size,
                                  &bwr.write_consumed);
        ...
    }
    if (bwr.read_size > 0) { // read_size > 0 means the caller wants to read data
        ret = binder_thread_read(proc, thread, bwr.read_buffer,
                                 bwr.read_size,
                                 &bwr.read_consumed,
                                 filp->f_flags & O_NONBLOCK);
        ...
    }
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) { // copy the updated binder_write_read back to user space
        ...
    }
out:
    return ret;
}
3.1 binder_thread_write
static int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
                               binder_uintptr_t binder_buffer, size_t size, binder_size_t *consumed)
{
    uint32_t cmd;
    // binder_buffer is binder_write_read.write_buffer, i.e. the mData of IPCThreadState's mOut Parcel
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed; // consumed is still 0 here, so ptr points at the start of the Parcel data
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        if (get_user(cmd, (uint32_t __user *)ptr)) // the cmd read here is BC_TRANSACTION
            return -EFAULT;
        ptr += sizeof(uint32_t); // advance ptr by 4 bytes (offset 4 in Figure 3); it now points at the binder_transaction_data
        switch (cmd) {
        ...
        case BC_TRANSACTION: // this is the branch taken
        case BC_REPLY: {
            struct binder_transaction_data tr;

            if (copy_from_user(&tr, ptr, sizeof(tr))) // copy the user-space binder_transaction_data into the kernel
                return -EFAULT;
            ptr += sizeof(tr); // advance ptr again; it now points at end
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
        }
        *consumed = ptr - buffer; // update write_consumed in binder_write_read with how many bytes were consumed
    }
    return 0;
}
Now look at binder_transaction. The binder_transaction_data passed in is the one shown in Figure 4, and reply = (cmd == BC_REPLY), which is false here.
static void binder_transaction(struct binder_proc *proc, struct binder_thread *thread,
                               struct binder_transaction_data *tr, int reply)
{
    if (reply) { // reply is false here
        ...
    } else {
        if (tr->target.handle) {
            // As Figure 4 shows, IDemoServer is being published through the IActivityManager proxy,
            // so target.handle is the handle of that proxy and is non-zero.
            struct binder_ref *ref;
            // look up the binder_ref behind the IActivityManager proxy (see binder_get_ref, section 2.2)
            ref = binder_get_ref(proc, tr->target.handle, true);
            // follow the ref to the binder_node, i.e. the BBinder entity of the "activity" service
            target_node = ref->node;
        } else { // handle 0 is the service manager case
            target_node = binder_context_mgr_node;
        }
        target_proc = target_node->proc; // the binder_proc of the process hosting the "activity" service
        ...
    }
    if (target_thread) { // target_thread is NULL here
    } else {
        target_list = &target_proc->todo; // the todo list of the "activity" service's binder_proc
        target_wait = &target_proc->wait; // the wait queue of the system_server process
    }
    // allocate a binder_transaction
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    // allocate a binder_work
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);

    // now fill in the binder_transaction
    t->sender_euid = task_euid(proc->tsk); // the sender's euid
    t->to_proc = target_proc;     // which process's binder_proc the transaction is sent to
    t->to_thread = target_thread; // which thread in that process (if any) it is sent to
    t->code = tr->code;           // PUBLISH_SERVICE_TRANSACTION here
    t->flags = tr->flags;
    t->priority = task_nice(current);
    // binder_alloc_buf allocates a binder_buffer from target_proc, i.e. from the system_server
    // process hosting the "activity" service, to hold the user-space data copied below
    t->buffer = binder_alloc_buf(target_proc, tr->data_size,
                                 tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
    t->buffer->allow_user_free = 0;
    t->buffer->debug_id = t->debug_id;
    t->buffer->transaction = t;
    t->buffer->target_node = target_node;
    ...
    // offp is where the flat_binder_object offsets will be copied, right after the aligned data
    offp = (binder_size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));
    // copy data.ptr.buffer from the binder_transaction_data (the Parcel's mData) into t->buffer->data
    if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
                       tr->data.ptr.buffer, tr->data_size)) {
    }
    // offp sits just past the copied mData; copy the Parcel's mObjects there.
    // Note: mObjects records where each flat_binder_object lives inside mData.
    if (copy_from_user(offp, (const void __user *)(uintptr_t)
                       tr->data.ptr.offsets, tr->offsets_size)) {
    }
    off_end = (void *)offp + tr->offsets_size;
    for (; offp < off_end; offp++) {
        struct flat_binder_object *fp;

        fp = (struct flat_binder_object *)(t->buffer->data + *offp);
        off_min = *offp + sizeof(struct flat_binder_object);
        switch (fp->type) { // only the IDemoServer object is of interest here, not the token
        case BINDER_TYPE_BINDER:
        case BINDER_TYPE_WEAK_BINDER: { // per Figure 2, this is the branch taken
            struct binder_ref *ref;
            struct binder_node *node = binder_get_node(proc, fp->binder);

            if (node == NULL) {
                // so this is where the binder_node for IDemoServer gets created
                node = binder_new_node(proc, fp->binder, fp->cookie);
            }
            // get (or create) the target process's reference to that binder_node
            ref = binder_get_ref_for_node(target_proc, node);
            // IDemoServer is a local binder_node, but the transaction is going to system_server,
            // so rewrite the object as BINDER_TYPE_HANDLE: system_server receives a binder_ref, not the node
            if (fp->type == BINDER_TYPE_BINDER)
                fp->type = BINDER_TYPE_HANDLE;
            else
                fp->type = BINDER_TYPE_WEAK_HANDLE;
            fp->binder = 0;
            fp->handle = ref->desc; // the binder_ref's handle; it is later passed to the BpBinder constructor
            fp->cookie = 0;
        } break;
        }
    }
    if (reply) {
        // not taken here (reply is false, and target_thread is NULL)
        binder_pop_transaction(target_thread, in_reply_to);
    } else if (!(t->flags & TF_ONE_WAY)) {
        ...
    } else {
        ...
    }
    t->work.type = BINDER_WORK_TRANSACTION;
    list_add_tail(&t->work.entry, target_list); // queue the binder_transaction on system_server's todo list
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE; // this one goes back to the sending thread in user space
    list_add_tail(&tcomplete->entry, &thread->todo);    // queue tcomplete on the current thread's todo list
    if (target_wait) // if system_server has a waiter, wake it up
        wake_up_interruptible(target_wait);
    return;
}
As binder_transaction shows, the data is repackaged into a struct binder_transaction.
The main things binder_transaction does are:
- allocate and fill in a binder_transaction
- create the binder_node for IDemoServer and a binder_ref to it
- rewrite the flattened object in the transaction buffer so that what travels to the other process (system_server here) is the reference number, i.e. the binder_ref's desc (handle), rather than the local object
- queue the binder_transaction on the todo list of the target process's binder_proc (system_server here)
- wake up system_server to handle the PUBLISH_SERVICE_TRANSACTION request
In the same way, system_server later passes the same data on to the IDemoClient process, so what IDemoClient ends up holding is the handle of IDemoServer's reference (binder_ref) in the binder driver. See the analysis in section 4.
3.2 binder_thread_read
After finishing binder_thread_write, the server process goes on to run binder_thread_read (see binder_ioctl_write_read).
static int binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,
                              binder_uintptr_t binder_buffer, size_t size, binder_size_t *consumed, int non_block)
{
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;
    int ret = 0;
    int wait_for_proc_work;

    if (*consumed == 0) { // true on the first pass
        if (put_user(BR_NOOP, (uint32_t __user *)ptr)) // write a BR_NOOP return code back to the calling thread
            return -EFAULT;
        ptr += sizeof(uint32_t); // advance ptr by 4 bytes
    }
retry:
    // from section 3.1 the current thread's todo list is not empty (it holds tcomplete), so wait_for_proc_work is false
    wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo);
    if (wait_for_proc_work) { // not taken
        ...
    } else {
        // thread->todo is not empty, so this does not actually block here
        ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
    }
    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;

        if (!list_empty(&thread->todo)) {
            // take the first binder_work, i.e. the tcomplete queued in section 3.1
            w = list_first_entry(&thread->todo, struct binder_work, entry);
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            ...
        } else {
        }
        if (end - ptr < sizeof(tr) + 4)
            break;
        switch (w->type) { // w->type is BINDER_WORK_TRANSACTION_COMPLETE
        case BINDER_WORK_TRANSACTION: {
            ...
        } break;
        case BINDER_WORK_TRANSACTION_COMPLETE: {
            cmd = BR_TRANSACTION_COMPLETE;
            if (put_user(cmd, (uint32_t __user *)ptr)) // write BR_TRANSACTION_COMPLETE back to user space
                return -EFAULT;
            ptr += sizeof(uint32_t); // advance ptr another 4 bytes
            list_del(&w->entry); // remove the binder_work from the thread's todo list
            kfree(w);
        } break;
        }
    }
done:
    *consumed = ptr - buffer; // how many bytes were written into the read buffer
    return 0;
}
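So when the server's binder_ioctl returns from this first round trip, its read buffer (the mIn Parcel) contains just two return commands. A sketch of the layout (the loop that consumes it lives in IPCThreadState::waitForResponse):

/* Sketch of the read buffer handed back to the publishing thread:
 *
 *   offset 0 : BR_NOOP                   (written because *consumed was 0)
 *   offset 4 : BR_TRANSACTION_COMPLETE   (from the tcomplete binder_work)
 *
 * read_consumed = 8. The actual reply to PUBLISH_SERVICE_TRANSACTION (BR_REPLY)
 * arrives in a later binder_thread_read, after AMS has processed the transaction. */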
4. The client process obtains a reference to IDemoServer
4.1 AMS gets the IDemoServer reference first
The server publishes the Service to AMS; let's see how AMS receives it.
From section 3.1 we know that after creating IDemoServer's binder_node, the driver rewrites the flattened IDemoServer so that what actually reaches AMS is the reference number of IDemoServer's binder_ref in the driver, with the type changed to BINDER_TYPE_HANDLE. The driver then wakes an AMS binder thread to receive the binder transaction. The code is:
t->work.type = BINDER_WORK_TRANSACTION;
list_add_tail(&t->work.entry, target_list);
if (target_wait)
wake_up_interruptible(target_wait);
With that background, let's see how AMS receives it, specifically in binder_thread_read:
static int binder_thread_read(...) {
    ...
    // AMS's binder thread blocks here waiting for work
    ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
    ...
    // once woken up
    w = list_first_entry(&proc->todo, struct binder_work, entry); // take a binder_work off binder_proc's todo list
    switch (w->type) {
    case BINDER_WORK_TRANSACTION: { // this branch is taken, see above
        t = container_of(w, struct binder_transaction, work);
        // container_of recovers the enclosing binder_transaction from the embedded binder_work
    } break;
    }
    // the code below packs the binder_transaction into a binder_transaction_data for user space to parse
    tr.data_size = t->buffer->data_size;
    tr.offsets_size = t->buffer->offsets_size;
    tr.data.ptr.buffer = (binder_uintptr_t)((uintptr_t)t->buffer->data + proc->user_buffer_offset);
    tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *));
    if (put_user(cmd, (uint32_t __user *)ptr)) // cmd here is BR_TRANSACTION
        return -EFAULT;
    ptr += sizeof(uint32_t);
    if (copy_to_user(ptr, &tr, sizeof(tr))) // copy the binder_transaction_data to user space
        return -EFAULT;
    ...
}
The main job of binder_thread_read here is to take the binder_transaction, pack its payload into a binder_transaction_data, and place that into the binder_write_read read buffer — which is the mData of the mIn Parcel — before returning from kernel space to user space.
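A sketch of what AMS's binder thread finds in its own read buffer at this point, and of the detail that makes this a single-copy transfer (the payload was already copied into a buffer mapped into AMS's address space, so only the small descriptor is copied out here):

/* Sketch of AMS's read buffer after binder_thread_read returns:
 *
 *   offset 0 : BR_NOOP                       (if *consumed started at 0)
 *   offset 4 : BR_TRANSACTION
 *   offset 8 : struct binder_transaction_data
 *                data.ptr.buffer  -> t->buffer->data + proc->user_buffer_offset
 *                data.ptr.offsets -> just past the aligned data
 *
 * user_buffer_offset is the fixed offset between the kernel mapping and AMS's
 * mmap()ed mapping of the same pages, so the Parcel contents are not copied
 * again; only this descriptor goes through copy_to_user(). */

Back in user space, talkWithDriver picks this up: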
status_t IPCThreadState::talkWithDriver() {
    if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0) // trap into the kernel
    // back in user space after the ioctl returns
    if (bwr.read_consumed > 0) {
        mIn.setDataSize(bwr.read_consumed); // how much data was read
        mIn.setDataPosition(0);             // reset the read position
    }
}
The overall flow is
getAndExecuteCommand -> talkWithDriver -> executeCommand ->
and user space now has the binder_transaction_data:
executeCommand() {
    case BR_TRANSACTION:
        binder_transaction_data tr;
        result = mIn.read(&tr, sizeof(tr));
        // hand the call to ActivityManagerNative's native BBinder::transact
        error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer, &reply, tr.flags);
}
From there it propagates up to ActivityManagerNative:
BBinder::transact -> JavaBBinder::onTransact (native layer)
-> Binder::execTransact -> ActivityManagerNative::onTransact (Java layer)
ActivityManagerNative.java
onTransact() {
case PUBLISH_SERVICE_TRANSACTION: {
data.enforceInterface(IActivityManager.descriptor);
IBinder token = data.readStrongBinder();
Intent intent = Intent.CREATOR.createFromParcel(data);
IBinder service = data.readStrongBinder();
publishService(token, intent, service);
reply.writeNoException();
return true;
}
For now we only care about
IBinder service = data.readStrongBinder();
so let's follow readStrongBinder:
readStrongBinder -> nativeReadStrongBinder -> android_os_Parcel_readStrongBinder
-> javaObjectForIBinder(env, parcel->readStrongBinder())
status_t Parcel::readStrongBinder(sp<IBinder>* val) const
{
return unflatten_binder(ProcessState::self(), *this, val);
}
status_t unflatten_binder(const sp<ProcessState>& proc,
const Parcel& in, sp<IBinder>* out)
{
const flat_binder_object* flat = in.readObject(false);
if (flat) {
switch (flat->type) {
case BINDER_TYPE_BINDER:
*out = reinterpret_cast<IBinder*>(flat->cookie);
return finish_unflatten_binder(NULL, *flat, in);
case BINDER_TYPE_HANDLE:
*out = proc->getStrongProxyForHandle(flat->handle);
return finish_unflatten_binder(
static_cast<BpBinder*>(out->get()), *flat, in);
}
}
return BAD_TYPE;
}
This is essentially parsing the flat_binder_object shown in Figure 6.
Since its type was rewritten to BINDER_TYPE_HANDLE, we end up in that branch:
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    handle_entry* e = lookupHandleLocked(handle);
    // every remote proxy in this process is represented by a handle_entry, kept in mHandleToObject
    if (e != NULL) {
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) { // handle 0 is the ServiceManager
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                    return NULL;
            }
            b = new BpBinder(handle); // create the BpBinder for this handle
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}
getStrongProxyForHandle shows that AMS builds a native BpBinder from the reference number (the binder_ref desc) it was given for IDemoServer's binder_node; this handle is what identifies a remote proxy.
javaObjectForIBinder then creates a BinderProxy holding the BpBinder's address, so what finally reaches AMS at the Java layer is a BinderProxy.
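One point worth making explicit: a handle is only meaningful inside the process that owns the binder_ref, because refs_by_desc lives in each process's own binder_proc. A small sketch:

/* Sketch: the same binder_node, referenced from two processes.
 *
 *   Server process : binder_node(IDemoServer)        <──────────────┐
 *   AMS process    : binder_ref { desc = N }  ── node ──────────────┤
 *   Client process : binder_ref { desc = M }  ── node ──────────────┘
 *
 * N and M are allocated independently by binder_get_ref_for_node() in each
 * process's refs_by_desc tree, so the same service generally has different
 * handle values in different processes. */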
4.2 The client gets the IDemoServer reference
AMS now holds the IDemoServer reference — a BinderProxy at the Java layer. What does it do with it?
Continuing from onTransact() in ActivityManagerNative.java, look at publishService:
onTransact() {
case PUBLISH_SERVICE_TRANSACTION: {
data.enforceInterface(IActivityManager.descriptor);
IBinder token = data.readStrongBinder();
Intent intent = Intent.CREATOR.createFromParcel(data);
IBinder service = data.readStrongBinder();
publishService(token, intent, service);
reply.writeNoException();
return true;
}
publishService eventually reaches publishServiceLocked:
void publishServiceLocked(ServiceRecord r, Intent intent, IBinder service) {
...
c.conn.connected(r.name, service);
...
}
The service argument here is IDemoServer's remote proxy, i.e. the BinderProxy from the end of section 4.1. c.conn.connected() ends up calling ServiceConnection::connected() in the Client, passing that BinderProxy along. So AMS acts purely as a relay; it does not interfere with the communication between Server and Client.
ServiceConnection is itself another binder connection, except that now the "server" side is the ServiceConnection inside the Client, and AMS plays the role of the client.
Next, look at the connected() method of IServiceConnection:
public void connected(android.content.ComponentName name, android.os.IBinder service) throws android.os.RemoteException
{
android.os.Parcel _data = android.os.Parcel.obtain();
try {
_data.writeInterfaceToken(DESCRIPTOR);
if ((name!=null)) {
_data.writeInt(1);
name.writeToParcel(_data, 0);
}
else {
_data.writeInt(0);
}
_data.writeStrongBinder(service);
mRemote.transact(Stub.TRANSACTION_connected, _data, null, android.os.IBinder.FLAG_ONEWAY);
}
finally {
_data.recycle();
}
}
Inside connected(), AMS writes service — the BinderProxy — into the Parcel with writeStrongBinder:
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
return flatten_binder(ProcessState::self(), val, this);
}
status_t flatten_binder(const sp<ProcessState>& /*proc*/,
const sp<IBinder>& binder, Parcel* out)
{
flat_binder_object obj;
obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
if (binder != NULL) {
IBinder *local = binder->localBinder(); // binder is a remote proxy here, so localBinder() returns NULL
if (!local) { // this branch is taken
BpBinder *proxy = binder->remoteBinder();
const int32_t handle = proxy ? proxy->handle() : 0;
obj.type = BINDER_TYPE_HANDLE;
obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
obj.handle = handle;
obj.cookie = 0;
} else {
}
} else {
}
...
}
The flat_binder_object passed along is as follows (the original figure is not reproduced here).
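Its fields, read straight off the flatten_binder code above, are:

/* Sketch of the flat_binder_object written into the Parcel for the BinderProxy:
 *   type   = BINDER_TYPE_HANDLE               (a reference/proxy, not a local binder)
 *   flags  = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS
 *   binder = 0
 *   handle = proxy->handle()                  (AMS's desc for the IDemoServer binder_ref)
 *   cookie = 0
 */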
Finally, back in the driver's binder_transaction, the BINDER_TYPE_HANDLE entry is used to look up the binder_ref:
binder_transaction(...) {
    ...
    case BINDER_TYPE_HANDLE:
    case BINDER_TYPE_WEAK_HANDLE: {
        struct binder_ref *ref = binder_get_ref(proc, fp->handle,
                                                fp->type == BINDER_TYPE_HANDLE);
        if (ref->node->proc == target_proc) {
            // The binder_ref's node lives in the Server process while target_proc is the Client process,
            // so this branch is not taken. (Presumably it covers the case where sender and receiver are
            // the same process; not investigated here.)
        } else {
            struct binder_ref *new_ref;
            new_ref = binder_get_ref_for_node(target_proc, ref->node); // first time the client sees this node, so a new binder_ref is created
            fp->binder = 0;
            fp->handle = new_ref->desc; // the handle the client will use
            fp->cookie = 0;
            binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
        }
    } break;
    ...
}
From here on, the Client follows exactly the same path AMS did when it obtained the IDemoServer reference.
5. The client calls a service on the server process
When the Client wants to call an API on the Server (getVersion in our scenario), it goes through IDemoServer's BinderProxy. From the proxy's handle value the driver easily finds the binder_ref,
from the binder_ref's node field it finds the binder_node — which is IDemoServer's binder_node in the Server process —
and from the binder_node it finds the Server's binder_proc.
The transaction is then queued on that binder_proc's todo list and the binder driver wakes up the Server process to handle it.
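Put as code, the lookup chain the driver walks for that getVersion transaction is just the structures from sections 1 and 2 chained together; the helper below is an illustrative sketch, not driver code:

/* Illustrative sketch: from the handle the Client holds to the Server's binder_proc. */
static struct binder_proc *resolve_target_sketch(struct binder_proc *client_proc,
                                                 u32 handle)
{
    /* handle -> binder_ref (in the client's refs_by_desc tree, see section 2.2) */
    struct binder_ref *ref = binder_get_ref(client_proc, handle, true);
    if (ref == NULL)
        return NULL;

    /* binder_ref -> binder_node: IDemoServer's entity in the Server process */
    struct binder_node *node = ref->node;

    /* binder_node -> binder_proc: the Server process; the transaction is queued
     * on its todo list and its wait queue is woken, exactly as in section 3.1. */
    return node->proc;
}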
6. Binder questions still brewing
At this point we have walked through the whole binder communication flow once. But a few things still feel unexplained: where do binder threads come from, where exactly does the Server process block, how does the Client process get woken up, and so on.
So the plan is to write another article that revisits binder with these questions in mind. To be continued.