Analyzing Binder.linkToDeath starting from AMS.attachApplicationLocked()

After the system creates an application process, AMS.attachApplicationLocked() is called; inside this method the death callback for that process is registered:

//thread is the IApplicationThread proxy that AMS obtained across processes for this app; linkToDeath() is then called on its binder
AppDeathRecipient adr = new AppDeathRecipient(app, pid, thread);
thread.asBinder().linkToDeath(adr, 0);
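Before following linkToDeath() itself, here is a simplified sketch of what such a recipient looks like. The real AppDeathRecipient inside ActivityManagerService holds a ProcessRecord and more state, so treat the fields and names below as approximations of the snippets quoted near the end of this article:

import android.os.IBinder;

// Simplified, hypothetical sketch of an AMS-style death recipient.
class AppDeathRecipientSketch implements IBinder.DeathRecipient {
    private final int mPid;
    private final IBinder mAppThread; // in AMS: the proxy for the app's ApplicationThread

    AppDeathRecipientSketch(int pid, IBinder appThread) {
        mPid = pid;
        mAppThread = appThread;
    }

    @Override
    public void binderDied() {
        // In AMS this is where appDiedLocked(app, pid, thread, ...) runs,
        // cleaning up the records of the dead process (shown near the end of this article).
    }
}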

Back to linkToDeath() itself: if we look up its implementation, we find it is an empty one.

ApplicationThread.java

/**
 * Local implementation is a no-op.
 */
public void linkToDeath(DeathRecipient recipient, int flags) {
}

An empty implementation is naturally puzzling: it does nothing at all. But think about it: the thread here looks like it stands for the app's ActivityThread, yet is it actually the ActivityThread object itself? The answer is no. With that question in mind, let's walk the code backwards and figure out what this thread really is.

The work after the child process is created starts in ActivityThread.main(), so the flow is as follows:

ActivityThread.main

ActivityThread thread = new ActivityThread();//here thread is the ActivityThread itself
thread.attach(false);

attach()

 final ApplicationThread mAppThread = new ApplicationThread();//a member field of ActivityThread
-------
final IActivityManager mgr = ActivityManagerNative.getDefault();//now we cross processes into AMS.attachApplication()/attachApplicationLocked(), which brings us back to where we started
try {
    mgr.attachApplication(mAppThread);
} catch (RemoteException ex) {
    // Ignore
}

So at this point it is clear that thread.asBinder() stands for the ApplicationThread. Note that I say "stands for"; see below.

ActivityManagerNative.java

public void attachApplication(IApplicationThread app) throws RemoteException
{
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    data.writeInterfaceToken(IActivityManager.descriptor);
    data.writeStrongBinder(app.asBinder());//note this line
    mRemote.transact(ATTACH_APPLICATION_TRANSACTION, data, reply, 0);
    reply.readException();
    data.recycle();
    reply.recycle();
}

What gets written here is app.asBinder(); on the AMS side this arrives as a Binder proxy for the ApplicationThread. Still not satisfied, we want to see exactly what ApplicationThread's asBinder() returns.

ApplicationThread.java

private class ApplicationThread extends ApplicationThreadNative {
...
}

ApplicationThreadNative.java

public abstract class ApplicationThreadNative extends Binder
        implements IApplicationThread {
    public IBinder asBinder()
    {
        return this;//the ApplicationThread itself, since ApplicationThread extends this class
    }
}        

Now it is clear: thread.asBinder() resolves through ApplicationThreadNative, and what attachApplication() sends across is the ApplicationThread. asBinder() on an ApplicationThread returns the object itself (ApplicationThread extends ApplicationThreadNative, which extends Binder), so what is written into the Parcel is the Binder entity itself. On the receiving end of the transaction, AMS gets a proxy for that ApplicationThread entity, i.e. an ApplicationThreadProxy whose underlying IBinder is a BinderProxy. That is why the linkToDeath() that actually matters is the one in BinderProxy.
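To make the Stub/Proxy relationship concrete, here is a minimal sketch of the same pattern that ApplicationThreadNative/ApplicationThreadProxy (and AIDL-generated code) follow. IMyService and its descriptor are hypothetical; the point is only that asBinder() on the server side returns the Binder entity itself, while a caller in another process ends up holding a BinderProxy wrapped by the proxy class:

import android.os.Binder;
import android.os.IBinder;
import android.os.IInterface;

// Hypothetical interface, standing in for IApplicationThread.
interface IMyService extends IInterface {
    String DESCRIPTOR = "com.example.IMyService";
}

// Server-side stub, standing in for ApplicationThreadNative.
abstract class MyServiceNative extends Binder implements IMyService {
    MyServiceNative() {
        attachInterface(this, DESCRIPTOR);
    }

    @Override
    public IBinder asBinder() {
        return this; // the Binder entity itself, exactly like ApplicationThreadNative
    }

    static IMyService asInterface(IBinder obj) {
        if (obj == null) return null;
        // Same process: queryLocalInterface() finds the local stub, no proxy involved.
        IInterface local = obj.queryLocalInterface(DESCRIPTOR);
        if (local instanceof IMyService) {
            return (IMyService) local;
        }
        // Different process: obj is a BinderProxy, so wrap it in the proxy class.
        return new MyServiceProxy(obj);
    }
}

// Client-side proxy, standing in for ApplicationThreadProxy.
class MyServiceProxy implements IMyService {
    private final IBinder mRemote; // a BinderProxy when crossing processes

    MyServiceProxy(IBinder remote) {
        mRemote = remote;
    }

    @Override
    public IBinder asBinder() {
        return mRemote; // linkToDeath() on this goes into BinderProxy's native implementation
    }
}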


So let's move on to BinderProxy.java.

BinderProxy.java

//declared as native
public native void linkToDeath(DeathRecipient recipient, int flags)
        throws RemoteException;

This also shows that only holders of a BinderProxy, i.e. client processes, need to handle death notifications; the Binder server side does not, which is why its implementation is empty.
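Seen from the client side, the usage looks like the sketch below. The class and method names here are hypothetical, but linkToDeath()/DeathRecipient is the real android.os.IBinder API, and AMS is exactly such a client with respect to the app's ApplicationThread:

import android.os.IBinder;
import android.os.RemoteException;

// Hypothetical client-side watcher for a remote service binder.
class ServiceDeathWatcher {
    private IBinder mRemote;

    private final IBinder.DeathRecipient mDeathRecipient = new IBinder.DeathRecipient() {
        @Override
        public void binderDied() {
            // Delivered when the process hosting mRemote dies.
            if (mRemote != null) {
                mRemote.unlinkToDeath(this, 0); // drop our registration for the stale proxy
                mRemote = null;
            }
            // rebinding / retry logic would go here
        }
    };

    void watch(IBinder remoteBinder) throws RemoteException {
        mRemote = remoteBinder;
        // On a BinderProxy this ends up in android_os_BinderProxy_linkToDeath();
        // on a local Binder it is the no-op shown above.
        mRemote.linkToDeath(mDeathRecipient, 0 /* flags */);
    }
}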

Let's look at how the native side is written.

static const JNINativeMethod gBinderProxyMethods[] = {
     {"linkToDeath", "(Landroid/os/IBinder$DeathRecipient;I)V", (void*)android_os_BinderProxy_linkToDeath}
 };

android_util_Binder.cpp
//the arguments passed in: the AppDeathRecipient built around the child process's pid/name, and flags = 0

static void android_os_BinderProxy_linkToDeath(JNIEnv* env, jobject obj,
        jobject recipient, jint flags) // throws RemoteException
{
    //incidentally, this also shows how to throw an exception from JNI
    if (recipient == NULL) {
        jniThrowNullPointerException(env, NULL);
        return;
    }
    //fetch the native IBinder (BpBinder) pointer
    IBinder* target = (IBinder*)
        env->GetLongField(obj, gBinderProxyOffsets.mObject);//[1.0]
    if (target == NULL) {
        ALOGW("Binder has been finalized when calling linkToDeath() with recip=%p)\n", recipient);
        assert(false);
    }
    //also note the log printed here
    LOGDEATH("linkToDeath: binder=%p recipient=%p\n", target, recipient);

    if (!target->localBinder()) {//[1.1] true only when target is not a local BBinder, i.e. it is a BpBinder
        DeathRecipientList* list = (DeathRecipientList*)
                env->GetLongField(obj, gBinderProxyOffsets.mOrgue);
        //create the JavaDeathRecipient
        sp<JavaDeathRecipient> jdr = new JavaDeathRecipient(env, recipient, list);
        //this is where the death notification is really registered [3.0]
        status_t err = target->linkToDeath(jdr, NULL, flags);
        if (err != NO_ERROR) {
            // Failure adding the death recipient, so clear its reference
            // now.
            jdr->clearReference();//[2.0]
            signalExceptionForError(env, obj, err, true /*canThrowRemoteException*/);
        }
    }
}

1.0

IBinder* target = (IBinder*)
env->GetLongField(obj, gBinderProxyOffsets.mObject);
-------------------
This uses the JNI function
jlong       (*GetLongField)(JNIEnv*, jobject, jfieldID);
whose purpose is to read, from obj, the value of the field identified by gBinderProxyOffsets.mObject.
--------------------
obj is the BinderProxy object on which linkToDeath() was called (not the recipient);
its mObject field caches the address of the native BpBinder, written when the BinderProxy was created:
//note how this field is set in javaObjectForIBinder()
jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val){
    // The proxy holds a reference to the native object.
    env->SetLongField(object, gBinderProxyOffsets.mObject, (jlong)val.get());
}

1.0.1

For example:
jfieldID fid = (*env)->GetFieldID(env, cls, "key", "Ljava/lang/String;");//obtain the jfieldID of the field
jstring jstr = (*env)->GetObjectField(env, jobj, fid);//read the value of the field identified by that jfieldID


Get<type>Field
NativeType Get<type>Field(JNIEnv *env, jobject obj, jfieldID fieldID);
What it does:
  This family of accessor routines returns the value of an instance (non-static) field of an object. The field to access is specified by a field ID obtained by calling GetFieldID().
Parameters:
  env: the JNI interface pointer.
  obj: the Java object (must not be NULL).
  fieldID: a valid field ID.

<type> can be Boolean, Char, and so on; the full set of Get<type>Field accessors is listed below:

jboolean (*GetBooleanField)(JNIEnv*, jobject, jfieldID);
jbyte (*GetByteField)(JNIEnv*, jobject, jfieldID);
jchar (*GetCharField)(JNIEnv*, jobject, jfieldID);
jshort (*GetShortField)(JNIEnv*, jobject, jfieldID);
jint (*GetIntField)(JNIEnv*, jobject, jfieldID);
jlong (*GetLongField)(JNIEnv*, jobject, jfieldID);
jfloat (*GetFloatField)(JNIEnv*, jobject, jfieldID);
jdouble (*GetDoubleField)(JNIEnv*, jobject, jfieldID);

1.1

BBinder* BBinder::localBinder()
{
    return this;
}

Let's briefly sum up android_os_BinderProxy_linkToDeath():

First we obtain the BpBinder, then the DeathRecipientList, which keeps the list of JavaDeathRecipients for that BpBinder, because a single BpBinder can register several death recipients (a small Java-side illustration follows the bullet points below).
Then a JavaDeathRecipient is created; it derives from IBinder::DeathRecipient:

class JavaDeathRecipient : public IBinder::DeathRecipient
{
public:
    JavaDeathRecipient(JNIEnv* env, jobject object, const sp<DeathRecipientList>& list)
        : mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object)),
          mObjectWeak(NULL), mList(list)
    {
        //add an sp to this object to the DeathRecipientList
        LOGDEATH("Adding JDR %p to DRL %p", this, list.get());
        list->add(this);

        android_atomic_inc(&gNumDeathRefs);
        incRefsCreated(env);
    }
}
  • env->NewGlobalRef(object) creates a global reference for the recipient and stores it in the mObject member;
  • a strong pointer (sp) to the new JavaDeathRecipient is added to the DeathRecipientList.
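In other words, the fan-out happens in user space: several Java DeathRecipients can be linked to one and the same BinderProxy (each becomes a JavaDeathRecipient in this list), while the kernel, as the driver code later in this article shows, keeps a single death registration per binder_ref. A minimal, hypothetical Java-side illustration:

import android.os.IBinder;
import android.os.RemoteException;

// Hypothetical: two independent recipients linked to the same remote binder.
// Both land in the same DeathRecipientList, and both binderDied() callbacks
// run when the remote process dies.
class MultiRecipientExample {
    static void linkTwo(IBinder remoteBinder) throws RemoteException {
        IBinder.DeathRecipient logging = () -> System.out.println("remote died: logging side");
        IBinder.DeathRecipient rebinding = () -> System.out.println("remote died: rebinding side");

        remoteBinder.linkToDeath(logging, 0);
        remoteBinder.linkToDeath(rebinding, 0);
    }
}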

android_util_Binder.cpp

static void incRefsCreated(JNIEnv* env)
{
    int old = android_atomic_inc(&gNumRefsCreated);
    if (old == 2000) {
        android_atomic_and(0, &gNumRefsCreated);
        //trigger a forceGc
        env->CallStaticVoidMethod(gBinderInternalOffsets.mClass,
                gBinderInternalOffsets.mForceGc);
    }
}

This method essentially keeps a counter: every time 2000 references have been created, it triggers one forceGc.

It is called in the following places:

In the JavaBBinder constructor:
    JavaBBinder(JNIEnv* env, jobject object)
        : mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object))
    {
        ALOGV("Creating JavaBBinder %p\n", this);
        android_atomic_inc(&gNumLocalRefs);
        incRefsCreated(env);
    }
When a JavaDeathRecipient is created:
JavaDeathRecipient(JNIEnv* env, jobject object, const sp<DeathRecipientList>& list)
    : mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object)),
      mObjectWeak(NULL), mList(list)
{
    // These objects manage their own lifetimes so are responsible for final bookkeeping.
    // The list holds a strong reference to this object.
    LOGDEATH("Adding JDR %p to DRL %p", this, list.get());
    list->add(this);

    android_atomic_inc(&gNumDeathRefs);
    incRefsCreated(env);
}

And when a native BpBinder is converted into a Java-layer BinderProxy:
jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
 incRefsCreated(env);
}

2.0 clearReference

//clear the reference: remove the JavaDeathRecipient from the DeathRecipientList.
void clearReference()
 {
     sp<DeathRecipientList> list = mList.promote();
     if (list != NULL) {
         list->remove(this); //remove the reference from the list
     }
 }

3.0

status_t BpBinder::linkToDeath(
    const sp<DeathRecipient>& recipient, void* cookie, uint32_t flags)
{
    Obituary ob;
    ob.recipient = recipient; //this is the JavaDeathRecipient
    ob.cookie = cookie; // cookie=NULL
    ob.flags = flags; // flags=0
    {
        AutoMutex _l(mLock);
        if (!mObitsSent) { //sendObituary has not run yet, so proceed
            if (!mObituaries) {
                mObituaries = new Vector<Obituary>;
                if (!mObituaries) {
                    return NO_MEMORY;
                }
                getWeakRefs()->incWeak(this);
                IPCThreadState* self = IPCThreadState::self();
                //[3.1]
                self->requestDeathNotification(mHandle, this);
                //[3.2]
                self->flushCommands();
            }
            //append the newly created Obituary to mObituaries
            ssize_t res = mObituaries->add(ob);
            return res >= (ssize_t)NO_ERROR ? (status_t)NO_ERROR : res;
        }
    }
    return DEAD_OBJECT;
}

3.1 requestDeathNotification

This simply writes the BC_REQUEST_DEATH_NOTIFICATION command into mOut:

status_t IPCThreadState::requestDeathNotification(int32_t handle, BpBinder* proxy)
{
    mOut.writeInt32(BC_REQUEST_DEATH_NOTIFICATION);
    mOut.writeInt32((int32_t)handle);
    mOut.writePointer((uintptr_t)proxy);
    return NO_ERROR;
}

3.2 flushCommands
This flushes the commands to the driver; passing false means it does not block waiting for a reply.

void IPCThreadState::flushCommands()
{
    if (mProcess->mDriverFD <= 0)
        return;
    talkWithDriver(false);
}

binder.c

static int binder_thread_write(struct binder_proc *proc,
      struct binder_thread *thread,
      binder_uintptr_t binder_buffer, size_t size,
      binder_size_t *consumed)
{
  uint32_t cmd;
  //proc and thread both describe the calling (client-side) process
  struct binder_context *context = proc->context;
  void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
  void __user *ptr = buffer + *consumed; 
  void __user *end = buffer + size;
  while (ptr < end && thread->return_error == BR_OK) {
    get_user(cmd, (uint32_t __user *)ptr); //reads BC_REQUEST_DEATH_NOTIFICATION
    ptr += sizeof(uint32_t);
    switch (cmd) {
        case BC_REQUEST_DEATH_NOTIFICATION:{ //register a death notification
            uint32_t target;
            void __user *cookie;
            struct binder_ref *ref;
            struct binder_ref_death *death;

            get_user(target, (uint32_t __user *)ptr); //read the target handle
            ptr += sizeof(uint32_t);
            get_user(cookie, (void __user * __user *)ptr); //read the BpBinder pointer (cookie)
            ptr += sizeof(void *);

            ref = binder_get_ref(proc, target); //look up the binder_ref for the target service

            if (cmd == BC_REQUEST_DEATH_NOTIFICATION) {
                //the native BpBinder may carry several recipients, but the kernel allows only one death notification per ref
                if (ref->death) {
                    break; 
                }
                death = kzalloc(sizeof(*death), GFP_KERNEL);

                INIT_LIST_HEAD(&death->work.entry);
                death->cookie = cookie;
                ref->death = death;
                //if the process hosting the target binder service is already dead, deliver the death notification right away (the unusual case)
                if (ref->node->proc == NULL) { 
                    ref->death->work.type = BINDER_WORK_DEAD_BINDER;
                    //if the current thread is a binder thread, add the work directly to this thread's todo queue
                    if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED)) {
                        list_add_tail(&ref->death->work.entry, &thread->todo);
                    } else {
                        list_add_tail(&ref->death->work.entry, &proc->todo);
                        wake_up_interruptible(&proc->wait);
                    }
                }
            } else {
                ...
            }
        } break;
      case ...;
    }
    *consumed = ptr - buffer;
  }    }

At this point the driver has recorded the death request on the client's binder_ref: ref->death now points to a binder_ref_death whose cookie is the BpBinder. In the normal case nothing is queued yet; only when the server process dies (or, as the code above shows, when it is already dead at registration time) does the driver put a BINDER_WORK_DEAD_BINDER work item on the client's todo list, from which it can recover the registered BpBinder and deliver the callback.

From the analysis so far we also know that several recipients can hang off one BpBinder on the Java/native side, while the kernel keeps exactly one death registration per binder_ref, established through the real BpBinder::linkToDeath(); the work.type field is what later marks the queued item as a death notification.

 DeathRecipientList* list = (DeathRecipientList*)env->GetLongField(obj, gBinderProxyOffsets.mOrgue);
//create the JavaDeathRecipient
sp<JavaDeathRecipient> jdr = new JavaDeathRecipient(env, recipient, list);
//this is where the death notification is really registered [3.0]
status_t err = target->linkToDeath(jdr, NULL, flags);

So when does the notification actually fire?

Following this line of thought: the kernel now knows, through ref->death, which clients want to hear about this binder's death. When is that acted upon? The answer: when the Binder server side dies. So we need to understand what happens after a Binder service dies, which is what we analyze next.

A small side note

start


When debugging Binder, the log contains some diagnostic output. For example:

When the debug switch BINDER_DEBUG_OPEN_CLOSE is enabled, log lines are printed mainly from binder's open, mmap, close, flush and release paths.

The concrete kernel log looks like this:

  • binder_open: 4681:4681
  • binder_mmap: 4681 b6b42000-b6c40000 (1016 K) vma 200071 pagep 79f
  • binder: 4681 close vm area b6b42000-b6c40000 (1016 K) vma 2220051 pagep 79f
  • binder_flush: 4681 woke 0 threads
  • binder_release: 4681 threads 1, nodes 0 (ref 0), refs 2, active transactions 0, buffers 1, pages 1

The corresponding log formats are:

  • binder_open: group_leader->pid:pid
  • binder_mmap: pid vm_start-vm_end (vm_size K) vma vm_flags pagep vm_page_prot
  • binder: pid close vm area vm_start-vm_end (vm_size K) vma vm_flags pagep vm_page_prot
  • binder_flush: pid woke wake_count threads
  • binder_release: pid threads threads, nodes nodes (ref incoming_refs), refs outgoing_refs, active transactions active_transactions, buffers buffers, pages page_count

Field meanings:

  • vm_page_prot: the access permissions of the current process's VMA;
  • wake_count: how many threads sleeping in BINDER_LOOPER_STATE_WAITING this process woke up;
  • threads: the number of binder threads in the process;
  • nodes: the number of binder_node objects created by the process;
  • incoming_refs: the number of refs pointing at this process's nodes;
  • outgoing_refs: the number of refs this process holds to other processes;
  • active_transactions: the total number of transactions across all binder threads of the process;
  • buffers: the number of buffers the process currently has allocated;
  • page_count: the number of physical pages the process currently has allocated.

The corresponding functions:

  • binder_open()
  • binder_vma_open() or binder_mmap()
  • binder_vma_close()
  • binder_deferred_flush(), called from binder_flush (see the call stack below)
  • binder_deferred_release(), called from binder_release (see the call stack below)

end


Here we focus on the call stack of binder_release:

binder_release  
  binder_defer_work(proc, BINDER_DEFERRED_RELEASE);
    queue_work(binder_deferred_workqueue, &binder_deferred_work);
      binder_deferred_func    //via DECLARE_WORK(binder_deferred_work, binder_deferred_func);
        binder_deferred_release

As the name suggests, binder_release is called when the process that opened binder exits. binder_open opens the binder driver /dev/binder, a character device, and obtains a file descriptor; when the process terminates, its open files are closed, close() runs, and the driver method that corresponds to it is release().

Think about it another way: in Linux everything is a file, and Android manipulates many file nodes, such as input event nodes and the binder node. Files come with file operations, and file operations necessarily include open and close; binder confirms this. Since there is a binder_open(), there must be a matching close path for the node, so starting from close is the natural approach.

binder.c

void binder_release(struct binder_state *bs, uint32_t target)
{
    uint32_t cmd[2];
    cmd[0] = BC_RELEASE;
    cmd[1] = target;
    binder_write(bs, cmd, sizeof(cmd));
}
int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}

We know that all binder commands from user space are handled in binder_thread_write:

binder_thread_write(){
    while (ptr < end && thread->return_error == BR_OK) {
        get_user(cmd, (uint32_t __user *)ptr); //read the Binder protocol command (BC code) from the IPC data
        switch (cmd) {
            case BC_INCREFS: ...
            case BC_ACQUIRE: ...
            case BC_RELEASE: ...
            case BC_DECREFS: ...
            case BC_INCREFS_DONE: ...
            case BC_ACQUIRE_DONE: ...
            case BC_FREE_BUFFER: ...
            
            case BC_TRANSACTION:
            case BC_REPLY: {
                struct binder_transaction_data tr;
                copy_from_user(&tr, ptr, sizeof(tr)); //copy tr from user space into the kernel
                // [see section 2.2.1]
                binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
                break;

            case BC_REGISTER_LOOPER: ...
            case BC_ENTER_LOOPER: ...
            case BC_EXIT_LOOPER: ...
            case BC_REQUEST_DEATH_NOTIFICATION: ...
            case BC_CLEAR_DEATH_NOTIFICATION:  ...
            case BC_DEAD_BINDER_DONE: ...
            }
        }
    }
}

We can clearly see that there is a corresponding BC_RELEASE case.
There is no need to say much about this function; Binder has been analyzed before, see my other posts.
User space tells the driver, via a BINDER_WRITE_READ ioctl, that it wants to write data, and that data carries the BC_RELEASE command.
BC_RELEASE ultimately drops the reference on the binder by one; when the references reach zero and the file is closed, the driver's release path is what runs.

binder.c

static const struct file_operations binder_fops = {
  .owner = THIS_MODULE,
  .poll = binder_poll,
  .unlocked_ioctl = binder_ioctl,
  .compat_ioctl = binder_ioctl,
  .mmap = binder_mmap,
  .open = binder_open,
  .flush = binder_flush,
  .release = binder_release, //the callback invoked on release
};

static int binder_release(struct inode *nodp, struct file *filp)
{
  struct binder_proc *proc = filp->private_data;
  debugfs_remove(proc->debugfs_entry);
  binder_defer_work(proc, BINDER_DEFERRED_RELEASE);//see below
  return 0;
}
static void binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer)
{
  mutex_lock(&binder_deferred_lock); //take the lock
  //record BINDER_DEFERRED_RELEASE
  proc->deferred_work |= defer; 
  if (hlist_unhashed(&proc->deferred_work_node)) {
    hlist_add_head(&proc->deferred_work_node, &binder_deferred_list);
    //queue binder_deferred_work onto the work queue [see section 4.4]
    queue_work(binder_deferred_workqueue, &binder_deferred_work);
  }
  mutex_unlock(&binder_deferred_lock); //release the lock
}
//the global work queue
static struct workqueue_struct *binder_deferred_workqueue;

static int __init binder_init(void)
{
  int ret;
  //create a work queue named "binder"
  binder_deferred_workqueue = create_singlethread_workqueue("binder");
  if (!binder_deferred_workqueue)
    return -ENOMEM;
  ...
}

device_initcall(binder_init);

static DECLARE_WORK(binder_deferred_work, binder_deferred_func);

#define DECLARE_WORK(n, f)            \
  struct work_struct n = __WORK_INITIALIZER(n, f)

#define __WORK_INITIALIZER(n, f) {          \
  .data = WORK_DATA_STATIC_INIT(),        \
  .entry  = { &(n).entry, &(n).entry },        \
  .func = (f),              \
  __WORK_INIT_LOCKDEP_MAP(#n, &(n))        \
  }

During Binder driver initialization, binder_init() calls create_singlethread_workqueue("binder") to create a workqueue named "binder". A workqueue is a simple and effective kernel-thread mechanism provided by the kernel for deferring work.

binder_deferred_func

static void binder_deferred_func(struct work_struct *work)
{
    binder_deferred_release(proc);
}
static void binder_deferred_release(struct binder_proc *proc)
{
  struct binder_transaction *t;
  struct rb_node *n;
  int threads, nodes, incoming_refs, outgoing_refs, buffers,
    active_transactions, page_count;

  hlist_del(&proc->proc_node); //remove the proc_node

  if (binder_context_mgr_node && binder_context_mgr_node->proc == proc) {
    binder_context_mgr_node = NULL;
  }

  //release the binder_threads
  threads = 0;
  active_transactions = 0;
  while ((n = rb_first(&proc->threads))) {
    struct binder_thread *thread;
    thread = rb_entry(n, struct binder_thread, rb_node);
    threads++;
    active_transactions += binder_free_thread(proc, thread);
  }

  //release the binder_nodes
  nodes = 0;
  incoming_refs = 0;
  while ((n = rb_first(&proc->nodes))) {
    struct binder_node *node;
    node = rb_entry(n, struct binder_node, rb_node);
    nodes++;
    rb_erase(&node->rb_node, &proc->nodes);
    incoming_refs = binder_node_release(node, incoming_refs);
  }

  //release the binder_refs
  outgoing_refs = 0;
  while ((n = rb_first(&proc->refs_by_desc))) {
    struct binder_ref *ref;

    ref = rb_entry(n, struct binder_ref, rb_node_desc);
    outgoing_refs++;
    binder_delete_ref(ref);
  }
  
  //release the binder_work items
  binder_release_work(&proc->todo);
  binder_release_work(&proc->delivered_death);

  buffers = 0;
  while ((n = rb_first(&proc->allocated_buffers))) {
    struct binder_buffer *buffer;
    buffer = rb_entry(n, struct binder_buffer, rb_node);

    t = buffer->transaction;
    if (t) {
      t->buffer = NULL;
      buffer->transaction = NULL;
    }
    //release the binder buffers
    binder_free_buf(proc, buffer);
    buffers++;
  }

  binder_stats_deleted(BINDER_STAT_PROC);

  page_count = 0;
  if (proc->pages) {
    int i;

    for (i = 0; i < proc->buffer_size / PAGE_SIZE; i++) {
      void *page_addr;
      if (!proc->pages[i])
        continue;

      page_addr = proc->buffer + i * PAGE_SIZE;
      unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
      __free_page(proc->pages[i]);
      page_count++;
    }
    kfree(proc->pages);
    vfree(proc->buffer);
  }
  put_task_struct(proc->tsk);
  kfree(proc);
}

Here proc is the binder_proc of the dying Bn (server) side.

The main work done by binder_deferred_release:

  • binder_free_thread(proc, thread)
  • binder_node_release(node, incoming_refs);
  • binder_delete_ref(ref);
  • binder_release_work(&proc->todo);
  • binder_release_work(&proc->delivered_death);
  • binder_free_buf(proc, buffer);
  • plus freeing the various pieces of memory.

What we care about now is the release of binder_node, i.e. the binder entity:


static int binder_node_release(struct binder_node *node, int refs)
{
  struct binder_ref *ref;
  int death = 0;

  list_del_init(&node->work.entry);
  binder_release_work(&node->async_todo);//key point

  if (hlist_empty(&node->refs)) {
    kfree(node); //no references left, so delete the node directly
    binder_stats_deleted(BINDER_STAT_NODE);
    return refs;
  }

  node->proc = NULL;
  node->local_strong_refs = 0;
  node->local_weak_refs = 0;
  hlist_add_head(&node->dead_node, &binder_dead_nodes);

  hlist_for_each_entry(ref, &node->refs, node_entry) {
    refs++;
    if (!ref->death)
      continue;
    death++;

    if (list_empty(&ref->death->work.entry)) {
      //key step: queue a BINDER_WORK_DEAD_BINDER work item onto the todo list
      ref->death->work.type = BINDER_WORK_DEAD_BINDER;
      list_add_tail(&ref->death->work.entry, &ref->proc->todo);
      wake_up_interruptible(&ref->proc->wait);
    } 
  }
  return refs;
}

This method walks all binder_refs of the binder_node; for every ref that registered a death notification, it adds a BINDER_WORK_DEAD_BINDER work item to the todo queue of the process owning that binder_ref and wakes up the binder threads waiting on proc->wait.

static void binder_release_work(struct list_head *list)
{
  struct binder_work *w;
  while (!list_empty(list)) {
    w = list_first_entry(list, struct binder_work, entry);
    list_del_init(&w->entry); //remove the binder_work
    switch (w->type) {
    case BINDER_WORK_TRANSACTION: {
      struct binder_transaction *t;
      t = container_of(w, struct binder_transaction, work);
      if (t->buffer->target_node &&
          !(t->flags & TF_ONE_WAY)) {
        //send a failed reply
        binder_send_failed_reply(t, BR_DEAD_REPLY);
      } else {
        t->buffer->transaction = NULL;
        kfree(t);
        binder_stats_deleted(BINDER_STAT_TRANSACTION);
      }
    } break;
    
    case BINDER_WORK_TRANSACTION_COMPLETE: {
      kfree(w);
      binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
    } break;
    
    case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
    case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
      struct binder_ref_death *death;
      death = container_of(w, struct binder_ref_death, work);
      kfree(death);
      binder_stats_deleted(BINDER_STAT_DEATH);
    } break;
    
    default:
      break;
    }
  }

}

By now it is clear that during binder_node_release a BINDER_WORK_DEAD_BINDER work item is queued and the binder threads waiting on proc->wait are woken up.

Let's go back and look at binder_thread_read:

static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  binder_uintptr_t binder_buffer, size_t size,
                  binder_size_t *consumed, int non_block)
    ...
    //the binder thread waits here until it is woken up with work to do
    wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
    binder_lock(__func__); //take the lock

    if (wait_for_proc_work)
        proc->ready_threads--; //one fewer idle binder thread
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;

        //take the binder_work queued earlier off the todo list; its type here is BINDER_WORK_DEAD_BINDER
        if (!list_empty(&thread->todo)) {
            w = list_first_entry(&thread->todo, struct binder_work,
                         entry);
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            w = list_first_entry(&proc->todo, struct binder_work,
                         entry);
        }

        switch (w->type) {
          case BINDER_WORK_DEAD_BINDER:
            case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
            case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
                struct binder_ref_death *death;
                uint32_t cmd;

                death = container_of(w, struct binder_ref_death, work);
                if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION)
                    cmd = BR_CLEAR_DEATH_NOTIFICATION_DONE; //clearing finished
                ...
                if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION) {
                    list_del(&w->entry); //remove the work item of the cleared death notification
                    kfree(death);
                    binder_stats_deleted(BINDER_STAT_DEATH);
                } 
                ...
                if (cmd == BR_DEAD_BINDER)
                    goto done;
            } break;
        }
    }
    ...
    return 0;
}

queue_work(binder_deferred_workqueue,&binder_deferred_work);

This queues binder_deferred_work onto the work queue, where binder_deferred_workqueue = create_singlethread_workqueue("binder");

static DECLARE_WORK(binder_deferred_work, binder_deferred_func); this definition ties the work item to the function binder_deferred_func, which runs when the item is processed.

Inside binder_deferred_func we can see:

 if (defer & BINDER_DEFERRED_RELEASE)
      binder_deferred_release(proc);

Let's now condense the call chain:

static int binder_release(struct inode *nodp, struct file *filp)
{
    binder_defer_work(proc, BINDER_DEFERRED_RELEASE);
}
static void binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer)
{
    //record BINDER_DEFERRED_RELEASE
    proc->deferred_work |= defer; 
    //queue binder_deferred_work onto the work queue
    queue_work(binder_deferred_workqueue, &binder_deferred_work);
}

As we now know, the queued work item corresponds to the function binder_deferred_func.

static void binder_deferred_func(struct work_struct *work)
{
    if (defer & BINDER_DEFERRED_RELEASE)
      binder_deferred_release(proc); 
}

static void binder_deferred_release(struct binder_proc *proc)
{
    hlist_del(&proc->proc_node); //remove the proc_node
    //release binder_thread, binder_node, binder_ref, binder_work, binder_buf;
    //releasing each binder_node goes through binder_node_release
    incoming_refs = binder_node_release(node, incoming_refs);
}
static int binder_node_release(struct binder_node *node, int refs)
{
    binder_release_work(&node->async_todo);
    if (list_empty(&ref->death->work.entry)) {
        //queue a BINDER_WORK_DEAD_BINDER work item onto the todo list
        ref->death->work.type = BINDER_WORK_DEAD_BINDER;
        list_add_tail(&ref->death->work.entry, &ref->proc->todo);
        wake_up_interruptible(&ref->proc->wait);
    }
}

So at this point we understand: binder_node_release walks every binder_ref of the binder_node, and for each ref that registered a death notification it adds a BINDER_WORK_DEAD_BINDER work item to the todo queue of the process owning that ref and wakes the binder threads waiting on proc->wait.

As always, the hub for reading data is binder_thread_read; let's see how this method handles the binder death.

static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  binder_uintptr_t binder_buffer, size_t size,
                  binder_size_t *consumed, int non_block){
    while (1) {
        //take the binder_work queued earlier off the todo list; its type is BINDER_WORK_DEAD_BINDER
        if (!list_empty(&thread->todo)) {
            w = list_first_entry(&thread->todo, struct binder_work,
                         entry);
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            w = list_first_entry(&proc->todo, struct binder_work,
                         entry);
        }
        switch (w->type) {
            case BINDER_WORK_DEAD_BINDER: {
                //write the command describing the dead binder back to user space
                put_user(cmd, (uint32_t __user *)ptr);
                //move this work onto the delivered_death list
                list_move(&w->entry, &proc->delivered_death);
            }
            
        }
    }          
}

The command is written back to user space, where a thread must be blocked in a read waiting for it.

IPCThreadState.cpp


status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;
    result = talkWithDriver(); //talk to the Binder driver
    if (result >= NO_ERROR) {
        cmd = mIn.readInt32(); //read the command
        result = executeCommand(cmd);//the core
    }
    return result;
}
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    switch ((uint32_t)cmd) {
      case BR_DEAD_BINDER:
      {
          BpBinder *proxy = (BpBinder*)mIn.readPointer();
          proxy->sendObituary();
          mOut.writeInt32(BC_DEAD_BINDER_DONE);
          mOut.writePointer((uintptr_t)proxy);
      } break;
      ...
    }
    ...
    return result;
}

The reason the death path runs only once here is that there is only one Binder entity, so the driver sends the death notification a single time; BpBinder::sendObituary() then fans it out to every recipient that was registered.

BpBinder::sendObituary


void BpBinder::sendObituary()
{
    IPCThreadState* self = IPCThreadState::self();
    //clear the death notification in the driver [see section 6.2]
    self->clearDeathNotification(mHandle, this);
    self->flushCommands();
    //mObituaries was stashed into obits (and cleared) before this point,
    //so the saved recipients are reported here one by one (abridged)
    for (size_t i = 0; i < obits->size(); ++i) {
        reportOneDeath(obits->itemAt(i));
    }
}

reportOneDeath

void BpBinder::reportOneDeath(const Obituary& obit)
{
    //promote the weak reference to an sp
    sp<DeathRecipient> recipient = obit.recipient.promote();
    if (recipient == NULL) return;
    //invoke the death callback
    recipient->binderDied(this);
}

binderDied

private final class AppDeathRecipient implements IBinder.DeathRecipient {
    ...
    public void binderDied() {
        synchronized(ActivityManagerService.this) {
            appDiedLocked(mApp, mPid, mAppThread, true);
        }
    }
}

Here we finally arrive at the familiar appDiedLocked() method; we will analyze it next time.
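One practical note before moving on: reportOneDeath() runs on a Binder thread, so binderDied() is not delivered on the main thread. Application-level recipients that need to touch UI or main-thread-only state usually just post, for example (a minimal sketch, not AMS code):

import android.os.Handler;
import android.os.IBinder;
import android.os.Looper;

// Hypothetical recipient that hands the death event over to the main thread.
class MainThreadDeathRecipient implements IBinder.DeathRecipient {
    private final Handler mMainHandler = new Handler(Looper.getMainLooper());

    @Override
    public void binderDied() {
        // Called on a Binder thread; hop to the main thread before reacting.
        mMainHandler.post(() -> {
            // rebind the service, update state, notify listeners, ...
        });
    }
}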

unlinkToDeath

With the groundwork above, this is easy to analyze.

BpBinder

status_t BpBinder::unlinkToDeath(
    const wp<DeathRecipient>& recipient, void* cookie, uint32_t flags,
    wp<DeathRecipient>* outRecipient)
{
    mObituaries->removeAt(i); //remove the obituary
    //clear the death notification in the driver
    self->clearDeathNotification(mHandle, this);
    self->flushCommands();
}
status_t IPCThreadState::clearDeathNotification(int32_t handle, BpBinder* proxy)
{
    mOut.writeInt32(BC_CLEAR_DEATH_NOTIFICATION);
    mOut.writeInt32((int32_t)handle);
    mOut.writePointer((uintptr_t)proxy);
    return NO_ERROR;
}

Again this is done by writing BC_CLEAR_DEATH_NOTIFICATION into the kernel.

Same old story by now; no need to spell it out again.

static int binder_thread_write(struct binder_proc *proc,
      struct binder_thread *thread,
      binder_uintptr_t binder_buffer, size_t size,
      binder_size_t *consumed)
{
    switch (cmd) {
        case BC_CLEAR_DEATH_NOTIFICATION: { //clear a death notification
        
            ref = binder_get_ref(proc, target); //look up the binder_ref of the target service
            //queue a BINDER_WORK_CLEAR_DEATH_NOTIFICATION work item
            death->work.type = BINDER_WORK_CLEAR_DEATH_NOTIFICATION;
            list_add_tail(&death->work.entry, &thread->todo);
            
        }
    }
}

The corresponding work item's type is set to BINDER_WORK_CLEAR_DEATH_NOTIFICATION and it is added to the todo list.

In other words, the earlier death registration is now swapped for a clear request.
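Seen from the application layer, all of this is triggered by a single unlinkToDeath() call, which should be paired with the earlier linkToDeath() once the client no longer cares about the remote object. A minimal, hypothetical sketch:

import android.os.IBinder;
import android.os.RemoteException;

// Hypothetical connection wrapper pairing linkToDeath()/unlinkToDeath().
class RemoteConnection implements IBinder.DeathRecipient {
    private IBinder mRemote;

    void open(IBinder remote) throws RemoteException {
        mRemote = remote;
        mRemote.linkToDeath(this, 0);   // ends up as BC_REQUEST_DEATH_NOTIFICATION
    }

    void close() {
        if (mRemote != null) {
            mRemote.unlinkToDeath(this, 0); // ends up as BC_CLEAR_DEATH_NOTIFICATION
            mRemote = null;
        }
    }

    @Override
    public void binderDied() {
        close(); // also releases our side of the registration
    }
}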

Every process doing Binder IPC opens /dev/binder. When a process exits abnormally, the Binder driver guarantees that the /dev/binder file the process failed to close normally gets released: the driver runs the release callback registered for /dev/binder, performs the cleanup, and checks whether any death notifications were registered against the dying BBinder; for each one found, it sends a death notification to the corresponding BpBinder side.
