Android Cross-Process Communication (IPC), Part 12 — Supplementary Notes on Binder

The overall contents of this Android IPC series are as follows.

This article covers:

  • The Binder thread pool
  • Binder permissions
  • Binder's death notification mechanism

I. The Binder Thread Pool

A client can invoke a server's methods through Binder, which raises a few hidden questions. If a server method performs a time-consuming operation, there are risks for both sides: if many clients call it at once, will the server ANR? With multiple concurrent callers, are there synchronization problems? And if the client invokes the slow method on its UI thread, will the client ANR? These concerns are real. The first one does not happen: all of these server-side methods run on a thread pool, not on the server's UI thread, so the server will not ANR. The server does, however, have synchronization issues, so the interface methods we expose on the server side must be written to be thread-safe. The client-side ANR is easy to avoid: simply do not make the call on the UI thread. With that, let's look at Binder's thread pool.

(1) Overview of the Binder Thread Pool

After the Android system finishes booting, the major services such as ActivityManager and PackageManager all run in the system_server process, and apps reach these system services through Binder IPC. So how are Binder threads managed, and how are they created? Whether it is the system_server process or an app process, once the fork completes, the new process executes onZygoteInit(), which starts the Binder thread pool.

(2) Creating the Binder Thread Pool

Binder threads are created along with the process they live in. At the Java level, a process is created via Process.start(), which sends a socket message to the Zygote process asking it to create a new process. On receiving the message, Zygote calls Zygote.forkAndSpecialize() to fork the new process, which then calls RuntimeInit.nativeZygoteInit(); via JNI this eventually lands in onZygoteInit() in app_main.cpp, so we start from that method.

1级野、onZygoteInit()

Code: app_main.cpp, line 91

    virtual void onZygoteInit()
    {
        // Obtain the ProcessState object
        sp<ProcessState> proc = ProcessState::self();
        ALOGV("App process: starting thread pool.\n");
        proc->startThreadPool();
    }

ProcessState's main job is to open the /dev/binder driver device with open(), map the kernel address space with mmap(), and store the driver fd in the ProcessState member mDriverFD for later interaction. startThreadPool() creates a new Binder thread that continuously calls talkWithDriver().

2. ProcessState.startThreadPool()

Code: ProcessState.cpp, line 132

void ProcessState::startThreadPool()
{
     // Synchronize across threads
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}

After starting the Binder thread pool, it sets mThreadPoolStarted = true; this flag guarantees that each application process starts at most one Binder thread pool. The thread created here is the Binder main thread (isMain = true; see spawnPooledThread(true)). All other threads in the pool are created under the control of the Binder driver. Let's follow spawnPooledThread(true).

3誊抛、ProcessState. spawnPooledThread()

Code: ProcessState.cpp, line 286

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        // Get the Binder thread name
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
         // Note: isMain = true here
        sp<Thread> t = new PoolThread(isMain);
        t->run(name.string());
    }
}
3.1列牺、ProcessState. makeBinderThreadName()

Code: ProcessState.cpp, line 279

String8 ProcessState::makeBinderThreadName() {
    int32_t s = android_atomic_add(1, &mThreadPoolSeq);
    String8 name;
    name.appendFormat("Binder_%X", s);
    return name;
}

This returns the Binder thread name, in the form Binder_X, where X is an integer. Within each process, the numbering starts at 1 and increments. Only threads created through makeBinderThreadName() follow this format; threads that join the pool directly via joinThreadPool() do not follow this naming rule. Note that as of Android N the naming has changed to Binder:<pid>_X, which is very helpful when analyzing problems: the pid embedded in the thread name immediately identifies the process the Binder thread belongs to.

3.2. PoolThread.run

Code: ProcessState.cpp, line 52

class PoolThread : public Thread
{
public:
    PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }

protected:
    virtual bool threadLoop()
    {
        IPCThreadState::self()->joinThreadPool(mIsMain); 
        return false;
    }
    const bool mIsMain;
};

Despite the name suggesting a pool, this creates just one thread. PoolThread inherits from the Thread class, and t->run() eventually invokes PoolThread's threadLoop() method.

4饵溅、IPCThreadState. joinThreadPool()

Code: IPCThreadState.cpp


void IPCThreadState::joinThreadPool(bool isMain)
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());
     // Register this thread with the driver's loop
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    // This thread may have been spawned by a thread that was in the background
    // scheduling group, so first we will make sure it is in the foreground
    // one to avoid performing an initial transaction in the background.
    // Set the foreground scheduling policy
    set_sched_policy(mMyThreadId, SP_FOREGROUND);

    status_t result;
    do {
         // Release pending object references in the queue
        processPendingDerefs();
        // now get the next command to be processed, waiting if necessary
        // Fetch and process the next command
        result = getAndExecuteCommand();

        if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
            ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
                  mProcess->mDriverFD, result);
            abort();
        }

        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if(result == TIMED_OUT && !isMain) {
            // A non-main thread that times out exits the loop
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);

    LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%p\n",
        (void*)pthread_self(), getpid(), (void*)result);
    // The thread is leaving the loop
    mOut.writeInt32(BC_EXIT_LOOPER);
    // false means the bwr read_buffer is empty
    talkWithDriver(false);
}

  • 1. With isMain = true, the command is BC_ENTER_LOOPER: this marks the Binder main thread, which never exits.
  • 2. With isMain = false, the command is BC_REGISTER_LOOPER: this marks a thread created at the binder driver's request.

joinThreadPool() contains a do-while loop, which is the heart of this thread. Inside it, two functions do the main work: processPendingDerefs() and getAndExecuteCommand(). Let's look at each in turn.

4.1. IPCThreadState.processPendingDerefs()

Code: IPCThreadState.cpp, line 454

// When we've cleared the incoming command queue, process any pending derefs
void IPCThreadState::processPendingDerefs()
{
    if (mIn.dataPosition() >= mIn.dataSize()) {
        size_t numPending = mPendingWeakDerefs.size();
        if (numPending > 0) {
            for (size_t i = 0; i < numPending; i++) {
                RefBase::weakref_type* refs = mPendingWeakDerefs[i];
                //Decrement the weak reference count
                refs->decWeak(mProcess.get());
            }
            mPendingWeakDerefs.clear();
        }

        numPending = mPendingStrongDerefs.size();
        if (numPending > 0) {
            for (size_t i = 0; i < numPending; i++) {
                BBinder* obj = mPendingStrongDerefs[i];
                //Decrement the strong reference count
                obj->decStrong(mProcess.get());
            }
            mPendingStrongDerefs.clear();
        }
    }
}

So processPendingDerefs() simply releases the pending references recorded in mPendingWeakDerefs and mPendingStrongDerefs, and its result does not affect the loop. The interesting work happens in getAndExecuteCommand().

4.2亲桦、IPCThreadState. getAndExecuteCommand()

Code: IPCThreadState.cpp, line 414

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;
     // Talk to the binder driver
    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing top-level Command: "
                 << getReturnString(cmd) << endl;
        }

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount++;
        pthread_mutex_unlock(&mProcess->mThreadCountLock);
        // Execute the Binder command
        result = executeCommand(cmd);

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount--;
        pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        // After executing the command, ensure that the thread is returned to the
        // foreground cgroup before rejoining the pool.  The driver takes care of
        // restoring the priority, but doesn't do anything with cgroups so we
        // need to take care of that here in userspace.  Note that we do make
        // sure to go in the foreground after executing a transaction, but
        // there are other callbacks into user code that could have changed
        // our group so we want to make absolutely sure it is put back.
        set_sched_policy(mMyThreadId, SP_FOREGROUND);
    }

    return result;
}

getAndExecuteCommand() essentially calls two functions, talkWithDriver() and executeCommand(). Let's look at each.

4.2.1. IPCThreadState.talkWithDriver()

Code: IPCThreadState.cpp, line 803

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    IF_LOG_COMMANDS() {
        TextOutput::Bundle _b(alog);
        if (outAvail != 0) {
            alog << "Sending commands to driver: " << indent;
            const void* cmds = (const void*)bwr.write_buffer;
            const void* end = ((const uint8_t*)cmds)+bwr.write_size;
            alog << HexDump(cmds, bwr.write_size) << endl;
            while (cmds < end) cmds = printCommand(alog, cmds);
            alog << dedent;
        }
        alog << "Size of receive buffer: " << bwr.read_size
            << ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
    }

    // Return immediately if there is nothing to do.
    //If there is nothing to write or read, return immediately
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(HAVE_ANDROID_OS)
        // ioctl performs the binder read/write: via syscall into the Binder driver, reaching binder_ioctl
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)(intptr_t)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
                        << "), read consumed: " << bwr.read_consumed << endl;
    }

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }

    return err;
}

Since we arrived here with isMain = true, what was written into mOut is BC_ENTER_LOOPER. From here execution enters the Binder driver; specifically, the BC_ENTER_LOOPER handling in binder_thread_write().

4.2.1.1. binder_thread_write

Code: binder.c, line 2252

static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;
    while (ptr < end && thread->return_error == BR_OK) {
        //Copy the cmd from user space; here it is BC_ENTER_LOOPER
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        switch (cmd) {
          case BC_REGISTER_LOOPER:
              if (thread->looper & BINDER_LOOPER_STATE_ENTERED) {
                //Error: a thread that already sent BC_ENTER_LOOPER must not take this branch
                thread->looper |= BINDER_LOOPER_STATE_INVALID;

              } else if (proc->requested_threads == 0) {
                //Error: no thread creation was requested
                thread->looper |= BINDER_LOOPER_STATE_INVALID;

              } else {
                proc->requested_threads--;
                proc->requested_threads_started++;
              }
              thread->looper |= BINDER_LOOPER_STATE_REGISTERED;
              break;

          case BC_ENTER_LOOPER:
              if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) {
                //Error: a thread that already sent BC_REGISTER_LOOPER must not take this branch
                thread->looper |= BINDER_LOOPER_STATE_INVALID;
              }
              //Mark this as a Binder main thread
              thread->looper |= BINDER_LOOPER_STATE_ENTERED;
              break;

          case BC_EXIT_LOOPER:
              thread->looper |= BINDER_LOOPER_STATE_EXITED;
              break;
        }
        ...
        *consumed = ptr - buffer;
    }
    return 0;
}

After BC_ENTER_LOOPER is handled, in the normal case the driver sets thread->looper |= BINDER_LOOPER_STATE_ENTERED. So when are additional binder threads created? When a thread has a transaction to process and enters binder_thread_read().

4.2.1.2衫哥、binder_thread_read

Code: binder.c, line 2654

binder_thread_read(){
  ...
retry:
    //The thread is idle if its todo queue and transaction stack are both empty
    wait_for_proc_work = thread->transaction_stack == NULL &&
        list_empty(&thread->todo);

    if (thread->return_error != BR_OK && ptr < end) {
        ...
        put_user(thread->return_error, (uint32_t __user *)ptr);
        ptr += sizeof(uint32_t);
        //On error, jump straight to done
        goto done; 
    }

    thread->looper |= BINDER_LOOPER_STATE_WAITING;
    if (wait_for_proc_work)
         //One more available (idle) thread
        proc->ready_threads++; 
    binder_unlock(__func__);

    if (wait_for_proc_work) {
        if (non_block) {
            ...
        } else
            //Sleep until the process todo queue has work
            ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
    } else {
        if (non_block) {
            ...
        } else
            //Sleep until the thread todo queue has work
            ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
    }

    binder_lock(__func__);
    if (wait_for_proc_work)
        //One fewer available thread
        proc->ready_threads--; 
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

    if (ret)
        //For non-blocking calls, return immediately
        return ret; 

    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;

        //First try the thread's own todo queue
        if (!list_empty(&thread->todo)) {
            w = list_first_entry(&thread->todo, struct binder_work, entry);
        //If the thread todo queue is empty, take work from the process todo queue
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            w = list_first_entry(&proc->todo, struct binder_work, entry);
        } else {
            ... //No work: go back to retry
        }

        switch (w->type) {
            case BINDER_WORK_TRANSACTION: ...  break;
            case BINDER_WORK_TRANSACTION_COMPLETE:...  break;
            case BINDER_WORK_NODE: ...    break;
            case BINDER_WORK_DEAD_BINDER:
            case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
            case BINDER_WORK_CLEAR_DEATH_NOTIFICATION:
                struct binder_ref_death *death;
                uint32_t cmd;

                death = container_of(w, struct binder_ref_death, work);
                if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION)
                  cmd = BR_CLEAR_DEATH_NOTIFICATION_DONE;
                else
                  cmd = BR_DEAD_BINDER;
                put_user(cmd, (uint32_t __user *)ptr);
                ptr += sizeof(uint32_t);
                put_user(death->cookie, (void * __user *)ptr);
                ptr += sizeof(void *);
                ...
                if (cmd == BR_DEAD_BINDER)
                  goto done; //The driver is sending a death notification to the client
                break;
        }

        if (!t)
            continue; //Only BINDER_WORK_TRANSACTION proceeds past this point
        ...
        break;
    }

done:
    *consumed = ptr - buffer;
    //Conditions for requesting a new thread
    if (proc->requested_threads + proc->ready_threads == 0 &&
        proc->requested_threads_started < proc->max_threads &&
        (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
         BINDER_LOOPER_STATE_ENTERED))) {
        proc->requested_threads++;
        //Emit BR_SPAWN_LOOPER to ask userspace to spawn a new thread
        put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer);
    }
    return 0;
}

Any of the following three situations leads to done:

  • the current thread's return_error indicates an error;
  • the Binder driver is sending a death notification to the client;
  • the work type is BINDER_WORK_TRANSACTION (i.e. the received command is BC_TRANSACTION or BC_REPLY).

A Binder thread generates BR_SPAWN_LOOPER, the command used to create a new thread, only when all of the following conditions hold at the same time:

  • 1. The process has no outstanding thread-creation request, i.e. requested_threads = 0.
  • 2. The process has no idle binder thread, i.e. ready_threads = 0 (the number of sleeping threads is the number of idle threads).
  • 3. The number of threads the process has already started is below the upper limit (15 by default).
  • 4. The current thread has already sent BC_ENTER_LOOPER or BC_REGISTER_LOOPER, i.e. it is in the BINDER_LOOPER_STATE_REGISTERED or BINDER_LOOPER_STATE_ENTERED state.

4.2.2扩然、IPCThreadState. executeCommand()

Code: IPCThreadState.cpp, line 947

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    status_t result = NO_ERROR;
    switch ((uint32_t)cmd) {
      ...
      case BR_SPAWN_LOOPER:
          //Spawn a new binder thread
          mProcess->spawnPooledThread(false);
          break;
      ...
    }
    return result;
}

The Binder main thread is created together with its hosting process; the ordinary binder threads created afterwards are created by spawnPooledThread(false).

(3) The Binder Thread Pool Flow

In the Binder design, only the first Binder main thread (the Binder_1 thread) is created actively by the application; the ordinary threads in the pool are created by the Binder driver on demand for IPC. The creation flow is shown below:

Binder thread creation flow.png

Every time Zygote forks a new process, the binder thread pool is created along with it: spawnPooledThread is called to create the binder main thread. Later, when a thread running binder_thread_read finds that there is no idle thread, no pending thread-creation request, and the limit has not been reached, a new binder thread is created.

Binder transactions come in three types:

  • call: the initiating thread is not necessarily a binder thread. In most cases the receiver is addressed only as a process, and it is not known in advance which thread will handle the transaction, so no thread is specified.
  • reply: the initiator must be a binder thread, and the receiver is the thread that made the original call (that thread need not be a binder thread; it can be any thread).
  • async: like call, except that async is oneway and needs no reply; the initiating thread need not be a binder thread, the receiver is addressed as a process, and no handling thread is specified.

Binder threads fall into three classes:

  • Binder main thread: created during process startup via startThreadPool() and then spawnPooledThread(true). Numbering starts at 1, so the main thread is named Binder_1, and it never exits.
  • Ordinary binder threads: the Binder driver decides, based on whether any binder thread is idle, whether to have the process create one via spawnPooledThread(false) (isMain = false); such threads are named Binder_X.
  • Other binder threads: threads that never call spawnPooledThread but instead call IPCThreadState.joinThreadPool() directly, adding the current thread to the binder thread queue. For example, the main threads of mediaserver and servicemanager are binder main threads, while system_server's main thread is not a binder main thread.

二信轿、Binder的權(quán)限

(1) Overview

Earlier articles in this series explained Binder's IPC mechanism. Readers who have looked at the Android source have surely seen Binder.clearCallingIdentity() and Binder.restoreCallingIdentity(), defined in Binder.java:

 // Binder.java
    //Clear the remote caller's uid and pid, replacing them with the current local process's uid and pid
    public static final native long clearCallingIdentity();
     //Restore the remote caller's uid and pid; exactly the reverse of clearCallingIdentity
    public static final native void restoreCallingIdentity(long token);

Both methods involve a uid and a pid. Every thread has its own unique IPCThreadState object, which records the current caller's pid and uid; these can be retrieved with Binder.getCallingPid() and Binder.getCallingUid().

clearCallingIdentity() and restoreCallingIdentity() are always used in pairs; together they support permission checking.

(2) How It Works

By definition both are native methods, reached through Binder's JNI layer; their corresponding JNI implementations are defined in android_util_Binder.cpp.

1榨了、clearCallingIdentity

Code: android_util_Binder.cpp, line 771

static jlong android_os_Binder_clearCallingIdentity(JNIEnv* env, jobject clazz)
{
    return IPCThreadState::self()->clearCallingIdentity();
}

The code here is very simple: it just calls IPCThreadState's clearCallingIdentity() method.

1.1. IPCThreadState::clearCallingIdentity()

Code: IPCThreadState.cpp, line 356

int64_t IPCThreadState::clearCallingIdentity()
{
    int64_t token = ((int64_t)mCallingUid<<32) | mCallingPid;
    clearCaller();
    return token;
}

The UID and PID are IPCThreadState member variables, each a 32-bit int. Shift operations pack both into the 64-bit token: the high 32 bits hold the UID and the low 32 bits hold the PID. It then calls clearCaller() to overwrite mCallingPid and mCallingUid with the local process's own pid and uid (done in IPCThreadState::clearCaller() below), and finally returns the token.

1.1.1. IPCThreadState::clearCaller()

Code: IPCThreadState.cpp, line 356

void IPCThreadState::clearCaller()
{
    mCallingPid = getpid();
    mCallingUid = getuid();
}
2枢步、JNI:restoreCallingIdentity

Code: android_util_Binder.cpp, line 776

static void android_os_Binder_restoreCallingIdentity(JNIEnv* env, jobject clazz, jlong token)
{
    // XXX temporary sanity check to debug crashes.
    //The token's high 32 bits hold the uid; shift right to recover it
    int uid = (int)(token>>32);
    if (uid > 0 && uid < 999) {
        // In Android currently there are no uids in this range.
        //Android currently has no uids in this range, so such a value means a bad token and an exception is thrown
        char buf[128];
        sprintf(buf, "Restoring bad calling ident: 0x%" PRIx64, token);
        jniThrowException(env, "java/lang/IllegalStateException", buf);
        return;
    }
    IPCThreadState::self()->restoreCallingIdentity(token);
}

This method extracts and sanity-checks the uid, then calls IPCThreadState's restoreCallingIdentity(token) method.

2.1. restoreCallingIdentity

Code: IPCThreadState.cpp, line 383

void IPCThreadState::restoreCallingIdentity(int64_t token)
{
    mCallingUid = (int)(token>>32);
    mCallingPid = (int)token;
}

It parses the PID and UID out of the token and assigns them back to the corresponding members; exactly the reverse of clearCallingIdentity().

3货葬、JNI:getCallingPid

Code: android_util_Binder.cpp, line 761

static jint android_os_Binder_getCallingPid(JNIEnv* env, jobject clazz)
{
    return IPCThreadState::self()->getCallingPid();
}

This calls IPCThreadState's getCallingPid() method.

3.1采幌、IPCThreadState::getCallingPid

Code: IPCThreadState.cpp, line 346

pid_t IPCThreadState::getCallingPid() const
{
    return mCallingPid;
}

It simply returns mCallingPid.

4. JNI: getCallingUid

Code: android_util_Binder.cpp, line 766

static jint android_os_Binder_getCallingUid(JNIEnv* env, jobject clazz)
{
    return IPCThreadState::self()->getCallingUid();
}

This calls IPCThreadState's getCallingUid() method.

4.1震桶、IPCThreadState::getCallingUid

Code: IPCThreadState.cpp, line 346

uid_t IPCThreadState::getCallingUid() const
{
    return mCallingUid;
}

It simply returns mCallingUid.

5植榕、遠(yuǎn)程調(diào)用
5.1、binder_thread_read

Code: binder.c, line 2654

binder_thread_read(){
    ...
    while (1) {
      struct binder_work *w;
      switch (w->type) {
        case BINDER_WORK_TRANSACTION:
            t = container_of(w, struct binder_transaction, work);
            break;
        case :...
      }
      if (!t)
        continue; //Only BR_TRANSACTION and BR_REPLY proceed past this point
        
      tr.code = t->code;
      tr.flags = t->flags;
      tr.sender_euid = t->sender_euid; //becomes mCallingUid

      if (t->from) {
          struct task_struct *sender = t->from->proc->tsk;
          //For non-oneway calls, save the caller's pid into sender_pid
          tr.sender_pid = task_tgid_nr_ns(sender, current->nsproxy->pid_ns);
      } else {
          //For oneway calls, the value is 0
          tr.sender_pid = 0;
      }
      ...
}
5.2尼夺、IPCThreadState. executeCommand()

Code: IPCThreadState.cpp, line 947

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch ((uint32_t)cmd) {
        case BR_TRANSACTION:
        {
            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;
            //Set the caller's pid
            mCallingPid = tr.sender_pid; 
            //Set the caller's uid
            mCallingUid = tr.sender_euid;
            ...
            reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
                        &reply, tr.flags);
            //Restore the original pid
            mCallingPid = origPid; 
            //Restore the original uid
            mCallingUid = origUid; 
        }
        
        case :...
    }
}

How mCallingPid and mCallingUid change: on each binder call, the remote process fills in sender_pid and sender_euid during binder_thread_read(); then, when IPCThreadState receives BR_TRANSACTION, it overwrites mCallingPid and mCallingUid with those values.

PS: for oneway calls, mCallingPid = 0, but mCallingUid still carries the correct value.

(3) Discussion

1. Scenario analysis:
(1) Scenario: thread X makes a remote Binder call into thread Y, and thread Y then calls, via Binder, another component in its own process (a Service, an Activity, or the like).
(2) Analysis:
  • 1. Thread X calls thread Y remotely over Binder: thread Y's IPCThreadState now holds thread X's UID and PID in mCallingUid and mCallingPid. In thread Y, Binder.getCallingPid() and Binder.getCallingUid() therefore return thread X's identity, which can be compared against the required permissions to decide whether thread X may invoke the method on thread Y.
  • 2. Thread Y then calls one of its own process's components over Binder: now thread Y is the caller, so mCallingUid and mCallingPid should hold thread Y's own UID and PID; that is what clearCallingIdentity() accomplishes. Once the component call finishes, thread Y is still serving thread X's call, so mCallingUid and mCallingPid must be restored to thread X's UID and PID; that is what restoreCallingIdentity() does.
Scenario analysis.png

In one sentence: in the figure, step 2 (just before calling component 2) executes clearCallingIdentity(), and step 3 (just after the call to component 2 returns) executes restoreCallingIdentity().

2挪拟、實(shí)例分析:

This pattern appears mostly in the various threads of the system_server process (ordinary apps rarely need it), for example in system_server's ActivityManagerService:

Code: ActivityManagerService.java, line 6246

    @Override
    public final void attachApplication(IApplicationThread thread) {
        synchronized (this) {
            //Get the remote Binder caller's pid
            int callingPid = Binder.getCallingPid();
            //Clear the remote caller's uid and pid, saving them into origId
            final long origId = Binder.clearCallingIdentity();
            attachApplicationLocked(thread, callingPid);
            //Restore the remote caller's uid and pid from origId
            Binder.restoreCallingIdentity(origId);
        }
    }

attachApplication() is generally invoked when a remote process calls into a system_server thread, whereas attachApplicationLocked() runs on that same thread; hence the remote caller's uid and pid are cleared before the call and restored after it.

三绒障、Binder的死亡通知機(jī)制

(一)吨凑、概述

Death notification exists so that the Bp side (the client process) can learn about the life and death of the Bn side (the server process): when the Bn process dies, the Bp side is notified.

  • Definition: AppDeathRecipient extends IBinder::DeathRecipient and mainly implements binderDied() to handle the death notification.
  • Registration: binder->linkToDeath(AppDeathRecipient) registers the AppDeathRecipient death notification with the Binder.

The Bp side only needs to override binderDied() to implement its cleanup work; then, when the Bn side dies, binderDied() is called back for the appropriate handling.

(二)怠堪、注冊(cè)死亡通知

1. Java-Layer Code

Code: ActivityManagerService.java, line 6016

public final class ActivityManagerService {
    private final boolean attachApplicationLocked(IApplicationThread thread, int pid) {
        ...
        //Create an IBinder.DeathRecipient subclass object
        AppDeathRecipient adr = new AppDeathRecipient(app, pid, thread);
        //Register the binder death callback
        thread.asBinder().linkToDeath(adr, 0);
        app.deathRecipient = adr;
        ...
        //Unregister the binder death callback
        app.unlinkDeathRecipient();
    }

    private final class AppDeathRecipient implements IBinder.DeathRecipient {
        ...
        public void binderDied() {
            synchronized(ActivityManagerService.this) {
                appDiedLocked(mApp, mPid, mAppThread, true);
            }
        }
    }
}

Two methods are involved here, linkToDeath() and unlinkToDeath(), implemented as follows:

1.1粟矿、linkToDeath()與unlinkToDeath()

Code: Binder.java, line 397

public class Binder implements IBinder {
    public void linkToDeath(DeathRecipient recipient, int flags) {
    }

    public boolean unlinkToDeath(DeathRecipient recipient, int flags) {
        return true;
    }
}

Code: Binder.java, line 509

final class BinderProxy implements IBinder {
    public native void linkToDeath(DeathRecipient recipient, int flags)
            throws RemoteException;
    public native boolean unlinkToDeath(DeathRecipient recipient, int flags);
}

As we can see:

  • On the Binder server side, the two method implementations are empty and provide no real functionality.
  • On the BinderProxy side, native methods implement the real behavior; this is the actual usage scenario.

2. JNI and Native-Layer Code

The native methods linkToDeath() and unlinkToDeath() are implemented via JNI; let's go through them in turn.

2.1 android_os_BinderProxy_linkToDeath()

Code: android_util_Binder.cpp, line 397

static void android_os_BinderProxy_linkToDeath(JNIEnv* env, jobject obj,
        jobject recipient, jint flags)
{
    if (recipient == NULL) {
        jniThrowNullPointerException(env, NULL);
        return;
    }

    //Step 1: read the BinderProxy.mObject field, i.e. the BpBinder object
    IBinder* target = (IBinder*)env->GetLongField(obj, gBinderProxyOffsets.mObject);
    ...

    //Only a Binder proxy object enters this branch
    if (!target->localBinder()) {
        DeathRecipientList* list = (DeathRecipientList*)
                env->GetLongField(obj, gBinderProxyOffsets.mOrgue);
        //Step 2: create the JavaDeathRecipient object
        sp<JavaDeathRecipient> jdr = new JavaDeathRecipient(env, recipient, list);
        //Step 3: register the death notification
        status_t err = target->linkToDeath(jdr, NULL, flags);
        if (err != NO_ERROR) {
            //Step 4: on failure, remove the reference from the list
            jdr->clearReference();
            signalExceptionForError(env, obj, err, true /*canThrowRemoteException*/);
        }
    }
}

The overall flow:

  • Step 1: obtain the BpBinder object.
  • Step 2: construct a JavaDeathRecipient object.
  • Step 3: call BpBinder's linkToDeath() to register the death notification.
  • Step 4: if registration fails, call JavaDeathRecipient's clearReference() to remove it.

Additional notes:

  • Getting the DeathRecipientList: its member mList records this BinderProxy's JavaDeathRecipient entries (one BpBinder can register multiple death callbacks).
  • Creating the JavaDeathRecipient: it inherits from IBinder::DeathRecipient.

Let's walk through the four steps in detail. Obtaining the BpBinder object works the same way as explained in the earlier Binder articles, so we skip straight to step 2.

2.1.1 The JavaDeathRecipient class

The code is in android_util_Binder.cpp, line 348

class JavaDeathRecipient : public IBinder::DeathRecipient
{
public:
    JavaDeathRecipient(JNIEnv* env, jobject object, const sp<DeathRecipientList>& list)
        : mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object)),
          mObjectWeak(NULL), mList(list)
    {
        // add a strong pointer (sp) to this object to the DeathRecipientList
        list->add(this);
        android_atomic_inc(&gNumDeathRefs);
        incRefsCreated(env);
    }
};

The constructor's main work:

  • Create a global reference for recipient via env->NewGlobalRef(object) and store it in the mObject member
  • Add a strong pointer (sp) to the current JavaDeathRecipient object to the DeathRecipientList

Here is the relationship diagram for DeathRecipient:

DeathRecipient关系图.png

On the Java side, BinderProxy.mOrgue points to the DeathRecipientList, and the DeathRecipientList records the JavaDeathRecipient objects.

Finally, the constructor calls the incRefsCreated() function; let's look at it.

2.1.1.1 The incRefsCreated() function

The code is in android_util_Binder.cpp, line 144

static void incRefsCreated(JNIEnv* env)
{
    int old = android_atomic_inc(&gNumRefsCreated);
    if (old == 200) {
        android_atomic_and(0, &gNumRefsCreated);
        // trigger a forceGc
        env->CallStaticVoidMethod(gBinderInternalOffsets.mClass,
                gBinderInternalOffsets.mForceGc);
    } else {
        ALOGV("Now have %d binder ops", old);
    }
}

This method increments the reference counter gNumRefsCreated; every 200 increments it resets the counter and triggers one forceGc.
incRefsCreated() is called in the following scenarios:

  • when a JavaBBinder object is created
  • when a JavaDeathRecipient object is created
  • in javaObjectForIBinder(), which converts a native-layer BpBinder into a Java-layer BinderProxy
2.1.2 BpBinder::linkToDeath()

The code is in BpBinder.cpp, line 173

status_t BpBinder::linkToDeath(
    const sp<DeathRecipient>& recipient, void* cookie, uint32_t flags)
{
    Obituary ob;
    // recipient is the JavaDeathRecipient here
    ob.recipient = recipient;
    // cookie is NULL
    ob.cookie = cookie;
    // flags = 0
    ob.flags = flags;

    LOG_ALWAYS_FATAL_IF(recipient == NULL,
                        "linkToDeath(): recipient must be non-NULL");

    {
        AutoMutex _l(mLock);
        if (!mObitsSent) {
            // sendObituary() has not run yet, so we proceed
            if (!mObituaries) {
                mObituaries = new Vector<Obituary>;
                if (!mObituaries) {
                    return NO_MEMORY;
                }
                ALOGV("Requesting death notification: %p handle %d\n", this, mHandle);
                getWeakRefs()->incWeak(this);
                IPCThreadState* self = IPCThreadState::self();
                // concrete step 1
                self->requestDeathNotification(mHandle, this);
                // concrete step 2
                self->flushCommands();
            }
            // add the newly created Obituary to mObituaries
            ssize_t res = mObituaries->add(ob);
            return res >= (ssize_t)NO_ERROR ? (status_t)NO_ERROR : res;
        }
    }
    return DEAD_OBJECT;
}

The core of this method is the pair of calls on IPCThreadState: **requestDeathNotification(mHandle, this)** and **flushCommands()**. Let's look at them in turn.

2.1.2.1 The IPCThreadState::requestDeathNotification() function

The code is in IPCThreadState.cpp, line 670

status_t IPCThreadState::requestDeathNotification(int32_t handle, BpBinder* proxy)
{
    mOut.writeInt32(BC_REQUEST_DEATH_NOTIFICATION);
    mOut.writeInt32((int32_t)handle);
    mOut.writePointer((uintptr_t)proxy);
    return NO_ERROR;
}

After entering the Binder driver, this leads directly into binder_thread_write(), which handles the BC_REQUEST_DEATH_NOTIFICATION command.

2.1.2.2 The IPCThreadState::flushCommands() function

The code is in IPCThreadState.cpp, line 395

void IPCThreadState::flushCommands()
{
    if (mProcess->mDriverFD <= 0)
        return;
    talkWithDriver(false);
}

flushCommands() pushes the buffered commands to the driver; since the parameter here is false, it does not block waiting for a read. It sends the BC_REQUEST_DEATH_NOTIFICATION command down to the Binder driver in the Linux kernel, where the ioctl ends up in binder_ioctl_write_read().

2.1.3 The clearReference() function

The code is in android_util_Binder.cpp, line 412

void clearReference()
{
    sp<DeathRecipientList> list = mList.promote();
    if (list != NULL) {
        // remove from the list
        list->remove(this);
    }
}
3袜啃、Linux Kernel層代碼
3.1、binder_ioctl_write_read()函數(shù)

The code is in binder.c, line 3138

static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;
    // copy the user-space data ubuf into bwr
    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
    // at this point the write buffer carries data
    if (bwr.write_size > 0) {
        ret = binder_thread_write(proc, thread,
                  bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
         ...
    }
    // at this point the read buffer has no data
    if (bwr.read_size > 0) {
      ...
    }
    // copy the kernel data bwr back to user space ubuf
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
out:
    return ret;
}

The main work is calling binder_thread_write() to process the write buffer; let's look at that function.

3.2 The binder_thread_write() function

The code is in binder.c, line 2252

static int binder_thread_write(struct binder_proc *proc,
      struct binder_thread *thread,
      binder_uintptr_t binder_buffer, size_t size,
      binder_size_t *consumed)
{
  uint32_t cmd;
  // proc and thread both refer to the calling (client) process
  struct binder_context *context = proc->context;
  void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
  void __user *ptr = buffer + *consumed;
  void __user *end = buffer + size;
  while (ptr < end && thread->return_error == BR_OK) {
    get_user(cmd, (uint32_t __user *)ptr); // reads BC_REQUEST_DEATH_NOTIFICATION
    ptr += sizeof(uint32_t);
    switch (cmd) {
        case BC_REQUEST_DEATH_NOTIFICATION: {
            // register a death notification
            uint32_t target;
            void __user *cookie;
            struct binder_ref *ref;
            struct binder_ref_death *death;
            // get the target handle
            get_user(target, (uint32_t __user *)ptr);
            ptr += sizeof(uint32_t);
            // get the BpBinder pointer
            get_user(cookie, (void __user * __user *)ptr);
            ptr += sizeof(void *);
            // look up the binder_ref of the target service
            ref = binder_get_ref(proc, target);

            if (cmd == BC_REQUEST_DEATH_NOTIFICATION) {
                // a native BpBinder may register many recipients, but the
                // kernel allows only one death notification per ref
                if (ref->death) {
                    break;
                }
                death = kzalloc(sizeof(*death), GFP_KERNEL);

                INIT_LIST_HEAD(&death->work.entry);
                death->cookie = cookie;
                ref->death = death;
                // if the target binder service's process is already dead,
                // send the death notification right away (the unusual case)
                if (ref->node->proc == NULL) {
                    ref->death->work.type = BINDER_WORK_DEAD_BINDER;
                    // if the current thread is a binder thread, add the work
                    // directly to its own todo queue
                    if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED)) {
                        list_add_tail(&ref->death->work.entry, &thread->todo);
                    } else {
                        list_add_tail(&ref->death->work.entry, &proc->todo);
                        wake_up_interruptible(&proc->wait);
                    }
                }
            } else {
                ...
            }
        } break;
      case ...;
    }
    *consumed = ptr - buffer;
  }
 }

While handling BC_REQUEST_DEATH_NOTIFICATION, this method covers the case where the target Binder service's process has already died: it adds a BINDER_WORK_DEAD_BINDER item to the todo queue and sends the death notification immediately. That is the unusual case.

The more common scenario is that the binder service's process dies later: binder_release() is then called, which in turn calls binder_node_release(). That path is what delivers the death-notification callback.

(三) Triggering the Death Notification

When the process hosting a Binder service dies, its resources are released, and Binder is one of those resources. binder_open() opens the binder driver /dev/binder — a character device — and obtains a file descriptor. When the process exits, the file-system teardown invokes the driver's close path, whose driver-side counterpart is the release() callback; once the binder fd is released, the method actually invoked is binder_release().

Not every close() system call triggers release(), however. release() is invoked only when the device's data structure is truly freed: the kernel keeps a count of how many times a file structure is in use, and even if an application never explicitly closes its files, the kernel releases all memory and closes the corresponding file resources on process exit(), which ultimately releases the binder through the same close path.

1. release

The code is in binder.c, line 4172

static const struct file_operations binder_fops = {
  .owner = THIS_MODULE,
  .poll = binder_poll,
  .unlocked_ioctl = binder_ioctl,
  .compat_ioctl = binder_ioctl,
  .mmap = binder_mmap,
  .open = binder_open,
  .flush = binder_flush,
  // the release callback maps to binder_release
  .release = binder_release,

Let's look at binder_release.

2. binder_release

The code is in binder.c, line 3536

static int binder_release(struct inode *nodp, struct file *filp)
{
    struct binder_proc *proc = filp->private_data;

    debugfs_remove(proc->debugfs_entry);
    binder_defer_work(proc, BINDER_DEFERRED_RELEASE);

    return 0;
}

It calls the binder_defer_work() function; let's continue there.

3. binder_defer_work

The code is in binder.c, line 3739

static void
binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer)
{
    // acquire the lock
    mutex_lock(&binder_deferred_lock);
    // OR in BINDER_DEFERRED_RELEASE
    proc->deferred_work |= defer;
    if (hlist_unhashed(&proc->deferred_work_node)) {
        hlist_add_head(&proc->deferred_work_node,
                &binder_deferred_list);
        // queue binder_deferred_work on the work queue
        schedule_work(&binder_deferred_work);
    }
    // release the lock
    mutex_unlock(&binder_deferred_lock);
}

This involves the binder_deferred_workqueue structure, so let's look at it.

4. binder_deferred_workqueue

The code is in binder.c, line 3737

static DECLARE_WORK(binder_deferred_work, binder_deferred_func);

The code is in workqueue.h, line 183

#define DECLARE_WORK(n, f)                      \
    struct work_struct n = __WORK_INITIALIZER(n, f)

The code is in workqueue.h, line 169

#define __WORK_INITIALIZER(n, f) {          \
  .data = WORK_DATA_STATIC_INIT(),        \
  .entry  = { &(n).entry, &(n).entry },        \
  .func = (f),              \
  __WORK_INIT_LOCKDEP_MAP(#n, &(n))        \
  }

The pieces above look a bit scattered; putting them together:

static DECLARE_WORK(binder_deferred_work, binder_deferred_func);

#define DECLARE_WORK(n, f)            \
  struct work_struct n = __WORK_INITIALIZER(n, f)

#define __WORK_INITIALIZER(n, f) {          \
  .data = WORK_DATA_STATIC_INIT(),        \
  .entry  = { &(n).entry, &(n).entry },        \
  .func = (f),              \
  __WORK_INIT_LOCKDEP_MAP(#n, &(n))        \
  }

So when is it initialized?

The code is in binder.c, line 4215

// the global work queue
static struct workqueue_struct *binder_deferred_workqueue;
static int __init binder_init(void)
{
  int ret;
  // create a work queue named "binder"
  binder_deferred_workqueue = create_singlethread_workqueue("binder");
  if (!binder_deferred_workqueue)
    return -ENOMEM;
  ...
}

device_initcall(binder_init);

During initialization of the Binder device driver, binder_init() calls create_singlethread_workqueue("binder") to create a work queue named "binder". A workqueue is a simple and effective kernel-thread mechanism provided by the kernel for deferring task execution.
The func of binder_deferred_work here is binder_deferred_func; let's look at that method next.

5. binder_deferred_func

The code is in binder.c, line 2697

static void binder_deferred_func(struct work_struct *work)
{
    struct binder_proc *proc;
    struct files_struct *files;

    int defer;
    do {
        // acquire binder_main_lock
        binder_lock(__func__);
        mutex_lock(&binder_deferred_lock);
        if (!hlist_empty(&binder_deferred_list)) {
            proc = hlist_entry(binder_deferred_list.first,
                    struct binder_proc, deferred_work_node);
            hlist_del_init(&proc->deferred_work_node);
            defer = proc->deferred_work;
            proc->deferred_work = 0;
        } else {
            proc = NULL;
            defer = 0;
        }
        mutex_unlock(&binder_deferred_lock);

        files = NULL;
        if (defer & BINDER_DEFERRED_PUT_FILES) {
            files = proc->files;
            if (files)
                proc->files = NULL;
        }

        if (defer & BINDER_DEFERRED_FLUSH)
            binder_deferred_flush(proc);

        if (defer & BINDER_DEFERRED_RELEASE)
            // the core call: binder_deferred_release()
            binder_deferred_release(proc); /* frees proc */

        binder_unlock(__func__);
        if (files)
            put_files_struct(files);
    } while (proc);
}

As we can see, binder_release ultimately ends up in binder_deferred_release; likewise, binder_flush ultimately ends up in binder_deferred_flush.

6. binder_deferred_release

The code is in binder.c, line 3590


static void binder_deferred_release(struct binder_proc *proc)
{
    struct binder_transaction *t;
    struct binder_context *context = proc->context;
    struct rb_node *n;
    int threads, nodes, incoming_refs, outgoing_refs, buffers,
        active_transactions, page_count;

    BUG_ON(proc->vma);
    BUG_ON(proc->files);

        // remove the proc_node entry
    hlist_del(&proc->proc_node);

    if (context->binder_context_mgr_node &&
        context->binder_context_mgr_node->proc == proc) {
        binder_debug(BINDER_DEBUG_DEAD_BINDER,
                 "%s: %d context_mgr_node gone\n",
                 __func__, proc->pid);
        context->binder_context_mgr_node = NULL;
    }
  
        // free the binder_threads
    threads = 0;
    active_transactions = 0;
    while ((n = rb_first(&proc->threads))) {
        struct binder_thread *thread;

        thread = rb_entry(n, struct binder_thread, rb_node);
        threads++;
        active_transactions += binder_free_thread(proc, thread);
    }

        // free the binder_nodes
    nodes = 0;
    incoming_refs = 0;
    while ((n = rb_first(&proc->nodes))) {
        struct binder_node *node;

        node = rb_entry(n, struct binder_node, rb_node);
        nodes++;
        rb_erase(&node->rb_node, &proc->nodes);
        incoming_refs = binder_node_release(node, incoming_refs);
    }

        // free the binder_refs
    outgoing_refs = 0;
    while ((n = rb_first(&proc->refs_by_desc))) {
        struct binder_ref *ref;

        ref = rb_entry(n, struct binder_ref, rb_node_desc);
        outgoing_refs++;
        binder_delete_ref(ref);
    }

        // free the pending binder_work items
    binder_release_work(&proc->todo);
    binder_release_work(&proc->delivered_death);

    buffers = 0;
    while ((n = rb_first(&proc->allocated_buffers))) {
        struct binder_buffer *buffer;

        buffer = rb_entry(n, struct binder_buffer, rb_node);

        t = buffer->transaction;
        if (t) {
            t->buffer = NULL;
            buffer->transaction = NULL;
            pr_err("release proc %d, transaction %d, not freed\n",
                   proc->pid, t->debug_id);
            /*BUG();*/
        }

                // free the binder_buffer
        binder_free_buf(proc, buffer);
        buffers++;
    }

    binder_stats_deleted(BINDER_STAT_PROC);

    page_count = 0;
    if (proc->pages) {
        int i;

        for (i = 0; i < proc->buffer_size / PAGE_SIZE; i++) {
            void *page_addr;

            if (!proc->pages[i])
                continue;

            page_addr = proc->buffer + i * PAGE_SIZE;
            binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
                     "%s: %d: page %d at %p not freed\n",
                     __func__, proc->pid, i, page_addr);
            unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
            __free_page(proc->pages[i]);
            page_count++;
        }
        kfree(proc->pages);
        vfree(proc->buffer);
    }

    put_task_struct(proc->tsk);

    binder_debug(BINDER_DEBUG_OPEN_CLOSE,
             "%s: %d threads %d, nodes %d (ref %d), refs %d, active transactions %d, buffers %d, pages %d\n",
             __func__, proc->pid, threads, nodes, incoming_refs,
             outgoing_refs, active_transactions, buffers, page_count);

    kfree(proc);
}

The proc here is the binder_proc of the Bn (server) side.
The main work of binder_deferred_release is:

  • binder_free_thread(proc, thread)
  • binder_node_release(node, incoming_refs)
  • binder_delete_ref(ref)
  • binder_release_work(&proc->todo)
  • binder_release_work(&proc->delivered_death)
  • binder_free_buf(proc, buffer)
  • plus freeing the remaining memory
6.1 binder_free_thread

The code is in binder.c, line 3065


static int binder_free_thread(struct binder_proc *proc,
                  struct binder_thread *thread)
{
    struct binder_transaction *t;
    struct binder_transaction *send_reply = NULL;
    int active_transactions = 0;

    rb_erase(&thread->rb_node, &proc->threads);
    t = thread->transaction_stack;
    if (t && t->to_thread == thread)
        send_reply = t;
    while (t) {
        active_transactions++;
        binder_debug(BINDER_DEBUG_DEAD_TRANSACTION,
                 "release %d:%d transaction %d %s, still active\n",
                  proc->pid, thread->pid,
                 t->debug_id,
                 (t->to_thread == thread) ? "in" : "out");

        if (t->to_thread == thread) {
            t->to_proc = NULL;
            t->to_thread = NULL;
            if (t->buffer) {
                t->buffer->transaction = NULL;
                t->buffer = NULL;
            }
            t = t->to_parent;
        } else if (t->from == thread) {
            t->from = NULL;
            t = t->from_parent;
        } else
            BUG();
    }
        // send a failure reply
    if (send_reply)
        binder_send_failed_reply(send_reply, BR_DEAD_REPLY);
    binder_release_work(&thread->todo);
    kfree(thread);
    binder_stats_deleted(BINDER_STAT_THREAD);
    return active_transactions;
}
6.2 binder_node_release

The code is in binder.c, line 3546

static int binder_node_release(struct binder_node *node, int refs)
{
    struct binder_ref *ref;
    int death = 0;

    list_del_init(&node->work.entry);
    binder_release_work(&node->async_todo);

    if (hlist_empty(&node->refs)) {
                // no references left, delete the node directly
        kfree(node);
        binder_stats_deleted(BINDER_STAT_NODE);

        return refs;
    }

    node->proc = NULL;
    node->local_strong_refs = 0;
    node->local_weak_refs = 0;
    hlist_add_head(&node->dead_node, &binder_dead_nodes);

    hlist_for_each_entry(ref, &node->refs, node_entry) {
        refs++;

        if (!ref->death)
            continue;

        death++;

        if (list_empty(&ref->death->work.entry)) {
                       // add a BINDER_WORK_DEAD_BINDER item to the todo queue
            ref->death->work.type = BINDER_WORK_DEAD_BINDER;
            list_add_tail(&ref->death->work.entry,
                      &ref->proc->todo);
            wake_up_interruptible(&ref->proc->wait);
        } else
            BUG();
    }
    binder_debug(BINDER_DEBUG_DEAD_BINDER,
             "node %d now dead, refs %d, death %d\n",
             node->debug_id, refs, death);
    return refs;
}

This method walks all binder_refs of the binder_node; for every ref that registered a death notification, it adds a BINDER_WORK_DEAD_BINDER item to the todo queue of the ref's process and wakes up the binder thread waiting on proc->wait.

6.3 binder_delete_ref

The code is in binder.c, line 1133

static void binder_delete_ref(struct binder_ref *ref)
{
    binder_debug(BINDER_DEBUG_INTERNAL_REFS,
             "%d delete ref %d desc %d for node %d\n",
              ref->proc->pid, ref->debug_id, ref->desc,
              ref->node->debug_id);

    rb_erase(&ref->rb_node_desc, &ref->proc->refs_by_desc);
    rb_erase(&ref->rb_node_node, &ref->proc->refs_by_node);
    if (ref->strong)
        binder_dec_node(ref->node, 1, 1);
    hlist_del(&ref->node_entry);
    binder_dec_node(ref->node, 0, 1);
    if (ref->death) {
        binder_debug(BINDER_DEBUG_DEAD_BINDER,
                 "%d delete ref %d desc %d has death notification\n",
                  ref->proc->pid, ref->debug_id, ref->desc);
        list_del(&ref->death->work.entry);
        kfree(ref->death);
        binder_stats_deleted(BINDER_STAT_DEATH);
    }
    kfree(ref);
    binder_stats_deleted(BINDER_STAT_REF);
}
6.4 binder_release_work

The code is in binder.c, line 2980


static void binder_release_work(struct list_head *list)
{
    struct binder_work *w;

    while (!list_empty(list)) {
        w = list_first_entry(list, struct binder_work, entry);
                 // remove the binder_work
        list_del_init(&w->entry);
        switch (w->type) {
        case BINDER_WORK_TRANSACTION: {
            struct binder_transaction *t;

            t = container_of(w, struct binder_transaction, work);
            if (t->buffer->target_node &&
                !(t->flags & TF_ONE_WAY)) {
                                 // send a failed reply
                binder_send_failed_reply(t, BR_DEAD_REPLY);
            } else {
                binder_debug(BINDER_DEBUG_DEAD_TRANSACTION,
                    "undelivered transaction %d\n",
                    t->debug_id);
                t->buffer->transaction = NULL;
                kfree(t);
                binder_stats_deleted(BINDER_STAT_TRANSACTION);
            }
        } break;
        case BINDER_WORK_TRANSACTION_COMPLETE: {
            binder_debug(BINDER_DEBUG_DEAD_TRANSACTION,
                "undelivered TRANSACTION_COMPLETE\n");
            kfree(w);
            binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
        } break;
        case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
        case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
            struct binder_ref_death *death;

            death = container_of(w, struct binder_ref_death, work);
            binder_debug(BINDER_DEBUG_DEAD_TRANSACTION,
                "undelivered death notification, %016llx\n",
                (u64)death->cookie);
            kfree(death);
            binder_stats_deleted(BINDER_STAT_DEATH);
        } break;
        default:
            pr_err("unexpected work type, %d, not freed\n",
                   w->type);
            break;
        }
    }
}
6.5 binder_free_buf

The code is in binder.c


static void binder_free_buf(struct binder_proc *proc,
                struct binder_buffer *buffer)
{
    size_t size, buffer_size;

    buffer_size = binder_buffer_size(proc, buffer);

    size = ALIGN(buffer->data_size, sizeof(void *)) +
        ALIGN(buffer->offsets_size, sizeof(void *)) +
        ALIGN(buffer->extra_buffers_size, sizeof(void *));

    binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
             "%d: binder_free_buf %p size %zd buffer_size %zd\n",
              proc->pid, buffer, size, buffer_size);

    BUG_ON(buffer->free);
    BUG_ON(size > buffer_size);
    BUG_ON(buffer->transaction != NULL);
    BUG_ON((void *)buffer < proc->buffer);
    BUG_ON((void *)buffer > proc->buffer + proc->buffer_size);

    if (buffer->async_transaction) {
        proc->free_async_space += size + sizeof(struct binder_buffer);

        binder_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
                 "%d: binder_free_buf size %zd async free %zd\n",
                  proc->pid, size, proc->free_async_space);
    }

    binder_update_page_range(proc, 0,
        (void *)PAGE_ALIGN((uintptr_t)buffer->data),
        (void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK),
        NULL);
    rb_erase(&buffer->rb_node, &proc->allocated_buffers);
    buffer->free = 1;
    if (!list_is_last(&buffer->entry, &proc->buffers)) {
        struct binder_buffer *next = list_entry(buffer->entry.next,
                        struct binder_buffer, entry);

        if (next->free) {
            rb_erase(&next->rb_node, &proc->free_buffers);
            binder_delete_free_buffer(proc, next);
        }
    }
    if (proc->buffers.next != &buffer->entry) {
        struct binder_buffer *prev = list_entry(buffer->entry.prev,
                        struct binder_buffer, entry);

        if (prev->free) {
            binder_delete_free_buffer(proc, buffer);
            rb_erase(&prev->rb_node, &proc->free_buffers);
            buffer = prev;
        }
    }
    binder_insert_free_buffer(proc, buffer);
}

(四) Summary

Every process that uses Binder IPC opens the /dev/binder file. When a process exits abnormally, the Binder driver guarantees that any /dev/binder file the process did not close normally is released. The mechanism is that the driver's release callback for /dev/binder runs the cleanup work, checks whether any death notification was registered against the dying BBinder, and if so sends the death-notification message to the corresponding BpBinder side.

The death callback DeathRecipient is only meaningful on the Bp (proxy) side, because DeathRecipient is used to monitor the Bn (server) side dying; if the Bn side registered a death notification on itself, there would be nobody left to notify once its own process died.

Clearing the reference means removing the JavaDeathRecipient from the DeathRecipientList.
