I. IPC: Cross-Process Communication in Android
1.1 Process Communication on Linux
- Process isolation
Memory is not shared between processes; two processes are like two parallel worlds.
- Address space division: user space (User Space) / kernel space (Kernel Space)
Modern operating systems all use virtual memory. For a 32-bit system the virtual address space is 2^32 bytes, i.e. 4 GB. The kernel is the core of the operating system: it can access protected memory and has the privileges to reach the underlying hardware. To protect the kernel, the virtual address space is logically divided into user space and kernel space.
- System call (SystemCall): user mode -> kernel mode
Although user space and kernel space are separated, user code sometimes needs to access kernel resources, which requires a system call.
When a process traps into kernel code through a system call, it is said to be in kernel mode; when it executes its own code, it is in user mode.
The memory-related system calls:
copy_from_user() //copies data from user space into kernel space
copy_to_user() //copies data from kernel space into user space
The basis of IPC:
Each process's user space is independent of the others, but kernel space is shared among processes. Process A can copy data from its user space into kernel space, the kernel relays it, and the data is then copied into process B, achieving cross-process communication.
Traditional IPC has two problems:
- One data transfer requires two copies: user buffer --> kernel buffer --> user buffer.
- It wastes space and time: the receive-side buffer is provided by the receiving process, but the receiver does not know in advance how much space the incoming data will need.
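The two-copy vs. one-copy contrast above can be modeled in a few lines. This is a toy model: the function names and the copy counting are illustrative, not kernel APIs.

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Toy model: count the memcpy() calls needed to move a message
// from a sender's buffer to a receiver's buffer.

// Traditional IPC: user A -> kernel buffer -> user B (two copies).
inline int traditional_ipc(const std::vector<char>& src, std::vector<char>& dst) {
    std::vector<char> kernel_buf(src.size());
    std::memcpy(kernel_buf.data(), src.data(), src.size());     // copy_from_user
    dst.resize(kernel_buf.size());
    std::memcpy(dst.data(), kernel_buf.data(), kernel_buf.size()); // copy_to_user
    return 2; // number of copies performed
}

// Binder-style IPC: the receiver's buffer *is* the kernel buffer
// (same physical pages mapped into both), so only copy_from_user remains.
inline int binder_ipc(const std::vector<char>& src, std::vector<char>& shared_buf) {
    shared_buf.resize(src.size());
    std::memcpy(shared_buf.data(), src.data(), src.size());     // copy_from_user
    return 1;
}
```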
1.2 Binder IPC
Binder IPC also uses kernel space to relay data; the difference is that Binder sets up a memory mapping in kernel space for the server-side user process.
Binder driver memory mapping: binder_mmap
- First reserve, in the kernel virtual address space, a region the same size as the user virtual memory region;
- then allocate one page of physical memory;
- then map that same physical memory into both the kernel virtual address space and the user virtual address space.
This keeps the user-space buffer and the kernel-space buffer in sync, so Binder communication needs only one copy, from user space into kernel space.
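The effect of mapping one physical region at two virtual addresses can be demonstrated with plain mmap. This is a Linux-only sketch using memfd_create: it maps a file's pages twice within one process, whereas binder_mmap pairs a kernel mapping with a user mapping; the sharing principle is the same.

```cpp
#include <cassert>
#include <cstring>
#include <sys/mman.h>
#include <unistd.h>

// Linux-only sketch: map the same physical pages at two virtual addresses,
// analogous to binder_mmap mapping one physical buffer into both the kernel
// and the server's user space. A write through one mapping is visible
// through the other without any extra copy.
inline bool double_mapping_shares_pages() {
    int fd = memfd_create("binder_demo", 0); // anonymous in-memory file
    if (fd < 0) return false;
    if (ftruncate(fd, 4096) != 0) { close(fd); return false; }
    char* a = (char*)mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    char* b = (char*)mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (a == MAP_FAILED || b == MAP_FAILED) { close(fd); return false; }
    std::strcpy(a, "hello");                     // "copy_from_user" into mapping a
    bool shared = std::strcmp(b, "hello") == 0;  // visible through mapping b
    munmap(a, 4096);
    munmap(b, 4096);
    close(fd);
    return shared;
}
```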
1.3 Binder vs. Other Linux IPC
Linux IPC mechanisms roughly comprise: pipes, message queues, sockets, and shared memory.
- Efficiency: message queues, sockets, and pipes all require two copies; Binder needs only one.
If both processes use mmap, that is shared memory; if one process uses copy_from_user and the other uses mmap, that is the Binder mechanism.
- Stability: shared memory performs better than Binder, but it must handle concurrent synchronization and is prone to deadlocks and resource races, so its stability is poor. Binder is based on a C/S architecture in which server and client are relatively independent, so its stability is better.
- Security: with traditional Linux IPC the receiver cannot obtain a reliable UID/PID for the sending process and thus cannot authenticate it; the Binder mechanism assigns each process a UID/PID and validates the UID/PID during Binder communication.
II. Binder's Implementation Principles and Components
Android Binder communication comes in two flavors: ServiceManager operates the binder driver directly via open and ioctl, while everything else goes through the Binder IPC framework.
Binder is based on a C/S structure; from top to bottom it spans the Java (framework) layer, the native layer (including JNI), and the driver layer.
The framework-layer Binder mainly comprises: Binder, BinderProxy, BinderInternal.
The native-layer Binder mainly comprises: JavaBBinder, BBinder, BpBinder, ProcessState, IPCThreadState, ServiceManager.
The driver layer, also called the kernel layer, directly operates the "/dev/binder" device node, mainly via open, ioctl, and mmap().
Java Binder is a mirror of native Binder; Java-level Binder IPC relies on native Binder for the concrete functionality, and everything finally passes through the driver device node.
Below is a brief introduction to each class.
2.1 Framework Binder
The IBinder interface constant FLAG_ONEWAY: a client's Binder call into the server is normally blocking, but if FLAG_ONEWAY is set the call becomes non-blocking; the client returns immediately, and the server notifies the client of completion via a callback. The IBinder interface also has an inner interface DeathRecipient (death notification).
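A minimal sketch of the blocking vs. oneway distinction. The transact function here is an illustrative model, not the real IBinder API; only the FLAG_ONEWAY value matches IBinder.FLAG_ONEWAY.

```cpp
#include <cassert>
#include <string>

const int FLAG_ONEWAY = 1; // same meaning (and value) as IBinder.FLAG_ONEWAY

// Sketch: a blocking transact fills `reply` before returning; a oneway
// transact returns immediately and the server would report back via callback.
inline bool transact(int flags, const std::string& request, std::string* reply) {
    if (flags & FLAG_ONEWAY) {
        // request is queued for the server; the caller does not wait
        return true;
    }
    // caller blocks until the server produces an answer
    *reply = "handled:" + request;
    return true;
}
```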
- Binder is the server-side class that provides the concrete service to the remote end. Its mObject field holds the native-level JavaBBinderHolder; subclasses implement the concrete operations in onTransact().
public class Binder implements IBinder {
/**
* Raw native pointer to JavaBBinderHolder object. Owned by this Java object. Not null.
*/
private final long mObject; //points to the native-level JavaBBinderHolder object
private IInterface mOwner;
private String mDescriptor;
public Binder() {
mObject = getNativeBBinderHolder();
}
public @Nullable IInterface queryLocalInterface(@NonNull String descriptor) {
if (mDescriptor != null && mDescriptor.equals(descriptor)) {
return mOwner;
}
return null;
}
protected boolean onTransact(int code, @NonNull Parcel data, @Nullable Parcel reply,
int flags) throws RemoteException {
return true; //abridged: subclasses override this to handle transaction codes
}
}
- BinderProxy is the Binder proxy class, used on the Client side to send IPC requests to the Server.
final class BinderProxy implements IBinder {
/**
* C++ pointer to BinderProxyNativeData. That consists of strong pointers to the
* native IBinder object, and a DeathRecipientList.
*/
private final long mNativeData;
private BinderProxy(long nativeData) {
mNativeData = nativeData;
}
public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
Binder.checkParcel(this, code, data, "Unreasonably large binder buffer");
try {
return transactNative(code, data, reply, flags);
} finally {
}
}
public native boolean transactNative(int code, Parcel data, Parcel reply,
int flags) throws RemoteException;
}
- BinderInternal is mainly responsible for Binder garbage collection, obtaining the ServiceManagerProxy instance, and similar internals.
public class BinderInternal {
//GcWatcher manages Binder destruction and collection.
static WeakReference<BinderInternal.GcWatcher> sGcWatcher = new WeakReference(new BinderInternal.GcWatcher());
public BinderInternal() {
}
public static void addGcWatcher(Runnable watcher) {
synchronized(sGcWatchers) {
sGcWatchers.add(watcher);
}
}
//add the current thread to the Binder thread pool
public static final native void joinThreadPool();
//getContextObject() obtains the BpBinder with handle=0, i.e. ServiceManager's BpBinder
public static final native IBinder getContextObject();
}
BinderInternal.getContextObject() yields the BpBinder representing the native ServiceManager (C++), from which a ServiceManagerProxy() is then constructed.
- The Java-level ServiceManager manages all Service (IBinder) objects, providing two operations: addService (registration) and getService (lookup). All system services must register with ServiceManager; to use a service, you query it via ServiceManager.getService(name) and obtain a BinderProxy object.
The concrete addService and getService() are ultimately implemented by the C++-level ServiceManager.
frameworks/base/core/java/android/os/ServiceManager.java
public final class ServiceManager {
@UnsupportedAppUsage
public static IBinder getService(String name) {
try {
IBinder service = sCache.get(name);
if (service != null) {
return service;
} else {
return Binder.allowBlocking(rawGetService(name));
}
} catch (RemoteException e) {
Log.e(TAG, "error in getService", e);
}
return null;
}
public static void addService(String name, IBinder service, boolean allowIsolated,
int dumpPriority) {
try {
getIServiceManager().addService(name, service, allowIsolated, dumpPriority);
} catch (RemoteException e) {
Log.e(TAG, "error in addService", e);
}
}
private static IServiceManager getIServiceManager() {
if (sServiceManager != null) {
return sServiceManager;
}
// Find the service manager
sServiceManager = ServiceManagerNative
.asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
return sServiceManager;
}
}
public final class ServiceManagerNative {
private ServiceManagerNative() {}
@UnsupportedAppUsage
public static IServiceManager asInterface(IBinder obj) {
if (obj == null) {
return null;
}
// ServiceManager is never local
return new ServiceManagerProxy(obj);
}
}
2.2 Native-Layer Binder
2.2.1 BpBinder: the native-level Binder proxy class. Its handle field records the id of the Binder node; it is the client-side proxy class.
2.2.2 JavaBBinder: a C++ class deriving from BBinder; it represents the server-side Binder.
Inheritance:
JavaBBinder -> BBinder->IBinder->RefBase
BpBinder->IBinder->RefBase
Ownership:
|Java | Binder.mObject -> |C++| JavaBBinderHolder->JavaBBinder
|Java | BinderProxy.mNativeData -> |C++| BinderProxyNativeData -> BinderProxyNativeData.mObject-> BpBinder
The Java-level Binder object's mObject field holds a JavaBBinderHolder (C++) object, and the JavaBBinderHolder holds the JavaBBinder object.
The Java-level BinderProxy.mNativeData field holds a BinderProxyNativeData, and the BinderProxyNativeData holds the BpBinder object.
2.2.3 ProcessState
ProcessState is a singleton; only one ProcessState instance exists per process.
When ProcessState is instantiated it does two things:
- calls open("/dev/binder") to open the device node, keeping the Binder driver's fd in mDriverFD:
static int open_driver(const char *driver)
{
//"/dev/binder"
int fd = open(driver, O_RDWR | O_CLOEXEC);
if (fd >= 0) {
int vers = 0;
status_t result = ioctl(fd, BINDER_VERSION, &vers);
size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
} else {
ALOGW("Opening '%s' failed: %s\n", driver, strerror(errno));
}
return fd;
}
- calls mmap() to request the shared memory region, whose size is 1 MB - 8 KB:
if (mDriverFD >= 0) {
// mmap the binder, providing a chunk of virtual address space to receive transactions.
mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
}
2.2.4 IPCThreadState: Binder's thread-management class
Its main work:
- using the mDriverFD provided by ProcessState, it performs ioctl() calls to send data to and receive data from the binder driver;
- joinThreadPool starts an infinite loop that polls data from the Binder driver.
IPCThreadState has two Parcel-typed members, mIn and mOut:
mIn - a Parcel used to receive messages from the /dev/binder driver
mOut - a Parcel used to send messages to the /dev/binder driver
IPCThreadState::talkWithDriver reads commands from mOut and sends them to the binder driver; commands received from the driver are written into mIn.
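A toy model of the mOut/mIn exchange. TinyParcel and the fake talkWithDriver are illustrative; the command values are not the real protocol constants.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Toy model of IPCThreadState's mOut/mIn: commands are queued in mOut,
// "talkWithDriver" hands them to the driver and fills mIn with replies.
struct TinyParcel {
    std::vector<uint8_t> buf;
    size_t readPos = 0;
    void writeInt32(int32_t v) {
        const uint8_t* p = reinterpret_cast<const uint8_t*>(&v);
        buf.insert(buf.end(), p, p + 4);
    }
    int32_t readInt32() {
        int32_t v;
        std::memcpy(&v, buf.data() + readPos, 4);
        readPos += 4;
        return v;
    }
};

const int32_t BC_TRANSACTION = 0; // values are illustrative, not the real ones
const int32_t BR_REPLY = 1;

// Fake driver: consumes one BC_ command from mOut, produces one BR_ reply in mIn.
inline void talkWithDriver(TinyParcel& mOut, TinyParcel& mIn) {
    if (mOut.readPos < mOut.buf.size() && mOut.readInt32() == BC_TRANSACTION)
        mIn.writeInt32(BR_REPLY);
}
```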
III. How Java Binder and Native Binder Are Wired Together
In everyday development we use the Java-level Binder, but it ultimately calls into the native-level Binder to do the concrete work. So how do the Java-level and native-level Binders get connected?
During Android boot, when Zygote starts, a VM registration step runs; it calls AndroidRuntime::startReg to register the JNI methods.
startReg
int AndroidRuntime::startReg(JNIEnv* env)
{
androidSetCreateThreadFunc((android_create_thread_fn) javaCreateThreadEtc);
env->PushLocalFrame(200);
//register the JNI methods
if (register_jni_procs(gRegJNI, NELEM(gRegJNI), env) < 0) {
env->PopLocalFrame(NULL);
return -1;
}
env->PopLocalFrame(NULL);
return 0;
}
To register the JNI methods: gRegJNI is an array listing every JNI registration routine to run, and one of its entries is REG_JNI(register_android_os_Binder).
register_android_os_Binder
int register_android_os_Binder(JNIEnv* env) {
// register the Binder class's JNI methods
if (int_register_android_os_Binder(env) < 0)
return -1;
// register the BinderInternal class's JNI methods
if (int_register_android_os_BinderInternal(env) < 0)
return -1;
// register the BinderProxy class's JNI methods
if (int_register_android_os_BinderProxy(env) < 0)
return -1;
...
return 0;
}
Registering Binder
static int int_register_android_os_Binder(JNIEnv* env) {
//kBinderPathName = "android/os/Binder"; look up the class at that path
jclass clazz = FindClassOrDie(env, kBinderPathName);
//save the Java-level Binder class into mClass
gBinderOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
//save the Java-level execTransact() method into mExecTransact
gBinderOffsets.mExecTransact = GetMethodIDOrDie(env, clazz, "execTransact", "(IJJI)Z");
//save the Java-level mObject field into mObject
gBinderOffsets.mObject = GetFieldIDOrDie(env, clazz, "mObject", "J");
//register the JNI methods
return RegisterMethodsOrDie(env, kBinderPathName, gBinderMethods,
NELEM(gBinderMethods));
}
The important Binder-related handles are all cached in gBinderOffsets. There are three main steps:
- mClass: caches the Java-level Binder class
- mExecTransact: caches the execTransact() method
- mObject: caches the mObject field
gBinderOffsets
gBinderOffsets is a global static struct, defined as follows:
static struct bindernative_offsets_t
{
jclass mClass; //caches the Binder class
jmethodID mExecTransact; //caches the execTransact() method
jfieldID mObject; //caches the mObject field
} gBinderOffsets;
gBinderOffsets caches the Binder.java class itself together with its execTransact() method and mObject field, which gives the JNI layer a channel into the Java layer.
Caching the Binder class information in the gBinderOffsets struct is a space-for-time trade-off: the class information does not have to be looked up on every call, which improves efficiency.
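The space-for-time idea above can be sketched as follows. FakeClass, FakeMethodID, and registerOffsets are hypothetical stand-ins for the JNI lookups, not real JNI APIs.

```cpp
#include <cassert>
#include <map>
#include <string>

// Sketch of the gBinderOffsets idea: resolve class/method/field IDs once
// at registration time and cache them, instead of a name lookup on every call.
struct FakeMethodID { int id; };

class FakeClass {
public:
    FakeClass() { table["execTransact"] = {42}; }
    FakeMethodID getMethodID(const std::string& name) {
        ++lookups; // the expensive reflective lookup in real JNI
        return table.at(name);
    }
    int lookups = 0;
private:
    std::map<std::string, FakeMethodID> table;
};

struct FakeBinderOffsets { FakeMethodID mExecTransact; };

// Performed once, at "registration" time.
inline FakeBinderOffsets registerOffsets(FakeClass& clazz) {
    return {clazz.getMethodID("execTransact")}; // looked up exactly once
}
```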
gBinderMethods
static const JNINativeMethod gBinderMethods[] = {
/* name, signature, function pointer */
{ "getCallingPid", "()I", (void*)android_os_Binder_getCallingPid },
{ "getCallingUid", "()I", (void*)android_os_Binder_getCallingUid },
{ "clearCallingIdentity", "()J", (void*)android_os_Binder_clearCallingIdentity },
{ "restoreCallingIdentity", "(J)V", (void*)android_os_Binder_restoreCallingIdentity },
{ "setThreadStrictModePolicy", "(I)V", (void*)android_os_Binder_setThreadStrictModePolicy },
{ "getThreadStrictModePolicy", "()I", (void*)android_os_Binder_getThreadStrictModePolicy },
{ "flushPendingCommands", "()V", (void*)android_os_Binder_flushPendingCommands },
{ "init", "()V", (void*)android_os_Binder_init },
{ "destroy", "()V", (void*)android_os_Binder_destroy },
{ "blockUntilThreadAvailable", "()V", (void*)android_os_Binder_blockUntilThreadAvailable }
};
RegisterMethodsOrDie() establishes a one-to-one mapping for the methods in the gBinderMethods array, giving the Java layer a channel into the JNI layer.
int_register_android_os_Binder has two main jobs:
- via gBinderOffsets, cache the Java-level Binder class information, giving the JNI layer a channel into the Java layer;
- via RegisterMethodsOrDie, register the mappings in the gBinderMethods array, giving the Java layer a channel into the JNI layer.
In other words, this step builds the bridge for Binder calls between the native layer and the framework layer.
Registering BinderInternal
static int int_register_android_os_BinderInternal(JNIEnv* env) {
//kBinderInternalPathName = "com/android/internal/os/BinderInternal"
jclass clazz = FindClassOrDie(env, kBinderInternalPathName);
gBinderInternalOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
gBinderInternalOffsets.mForceGc = GetStaticMethodIDOrDie(env, clazz, "forceBinderGc", "()V");
return RegisterMethodsOrDie(
env, kBinderInternalPathName,
gBinderInternalMethods, NELEM(gBinderInternalMethods));
}
This registers the BinderInternal class's JNI methods; gBinderInternalOffsets caches BinderInternal's forceBinderGc() method.
Below is the JNI method table registered for BinderInternal:
static const JNINativeMethod gBinderInternalMethods[] = {
{ "getContextObject", "()Landroid/os/IBinder;", (void*)android_os_BinderInternal_getContextObject },
{ "joinThreadPool", "()V", (void*)android_os_BinderInternal_joinThreadPool },
{ "disableBackgroundScheduling", "(Z)V", (void*)android_os_BinderInternal_disableBackgroundScheduling },
{ "handleGc", "()V", (void*)android_os_BinderInternal_handleGc }
};
Very much like the Binder registration, this step builds the bridge between the native layer and the framework layer for BinderInternal.
Registering BinderProxy
static int int_register_android_os_BinderProxy(JNIEnv* env) {
//gErrorOffsets caches the Error class information
jclass clazz = FindClassOrDie(env, "java/lang/Error");
gErrorOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
//gBinderProxyOffsets caches the BinderProxy class information
//kBinderProxyPathName = "android/os/BinderProxy"
clazz = FindClassOrDie(env, kBinderProxyPathName);
gBinderProxyOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
gBinderProxyOffsets.mConstructor = GetMethodIDOrDie(env, clazz, "<init>", "()V");
gBinderProxyOffsets.mSendDeathNotice = GetStaticMethodIDOrDie(env, clazz, "sendDeathNotice", "(Landroid/os/IBinder$DeathRecipient;)V");
gBinderProxyOffsets.mObject = GetFieldIDOrDie(env, clazz, "mObject", "J");
gBinderProxyOffsets.mSelf = GetFieldIDOrDie(env, clazz, "mSelf", "Ljava/lang/ref/WeakReference;");
gBinderProxyOffsets.mOrgue = GetFieldIDOrDie(env, clazz, "mOrgue", "J");
//gClassOffsets caches the Class.getName() method
clazz = FindClassOrDie(env, "java/lang/Class");
gClassOffsets.mGetName = GetMethodIDOrDie(env, clazz, "getName", "()Ljava/lang/String;");
return RegisterMethodsOrDie(
env, kBinderProxyPathName,
gBinderProxyMethods, NELEM(gBinderProxyMethods));
}
This registers the BinderProxy class's JNI methods; gBinderProxyOffsets caches BinderProxy's constructor, sendDeathNotice(), mObject, mSelf, and mOrgue.
BinderProxy lives in Binder.java and is the Java-side proxy for a native IBinder object; BinderProxy objects are created in C++ and then passed up to the Java layer.
ibinderForJavaObject: converting a Java object into a C++ IBinder
A Java Binder object is converted into a JavaBBinder;
a Java BinderProxy is converted into a BpBinder.
sp<IBinder> ibinderForJavaObject(JNIEnv* env, jobject obj)
{
if (obj == NULL) return NULL;
// Instance of Binder?
if (env->IsInstanceOf(obj, gBinderOffsets.mClass)) {
JavaBBinderHolder* jbh = (JavaBBinderHolder*)
env->GetLongField(obj, gBinderOffsets.mObject);
return jbh->get(env, obj);
}
// Instance of BinderProxy?
if (env->IsInstanceOf(obj, gBinderProxyOffsets.mClass)) {
return getBPNativeData(env, obj)->mObject;
}
ALOGW("ibinderForJavaObject: %p is not a Binder object", obj);
return NULL;
}
javaObjectForIBinder: converting a C++ IBinder into the corresponding Java object
C++ JavaBBinder -> Java Binder
C++ BpBinder -> Java BinderProxy
jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val) {
if (val == NULL) return NULL;
if (val->checkSubclass(&gBinderOffsets)) { //returns false here
jobject object = static_cast<JavaBBinder*>(val.get())->object();
return object;
}
AutoMutex _l(mProxyLock);
jobject object = (jobject)val->findObject(&gBinderProxyOffsets);
if (object != NULL) { //object is null on the first call
jobject res = jniGetReferent(env, object);
if (res != NULL) {
return res;
}
android_atomic_dec(&gNumProxyRefs);
val->detachObject(&gBinderProxyOffsets);
env->DeleteGlobalRef(object);
}
//create the BinderProxy object
object = env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);
if (object != NULL) {
//the BinderProxy.mObject field records the BpBinder object
env->SetLongField(object, gBinderProxyOffsets.mObject, (jlong)val.get());
val->incStrong((void*)javaObjectForIBinder);
jobject refObject = env->NewGlobalRef(
env->GetObjectField(object, gBinderProxyOffsets.mSelf));
//attach the BinderProxy object's info to BpBinder's mObjects member
val->attachObject(&gBinderProxyOffsets, refObject,
jnienv_to_javavm(env), proxy_cleanup);
sp<DeathRecipientList> drl = new DeathRecipientList;
drl->incStrong((void*)javaObjectForIBinder);
//the BinderProxy.mOrgue field records the death-notification object
env->SetLongField(object, gBinderProxyOffsets.mOrgue, reinterpret_cast<jlong>(drl.get()));
android_atomic_inc(&gNumProxyRefs);
incRefsCreated(env);
}
return object;
}
IV. Binder Data Transfer Walkthrough
The many services (Binders) in the Android system must register with ServiceManager; when a service is needed, it is then queried from ServiceManager.
Let's use ServiceManager.addService() and getService() to take a quick look at how ServiceManager registers and maintains Services (IBinder).
4.1 SM.addService()
public static void addService(String name, IBinder service, boolean allowIsolated) {
try {
//first obtain the SMP object, then perform the service registration
getIServiceManager().addService(name, service, allowIsolated);
} catch (RemoteException e) {
Log.e(TAG, "error in addService", e);
}
}
getIServiceManager
private static IServiceManager getIServiceManager() {
if (sServiceManager != null) {
return sServiceManager;
}
//ServiceManagerNative is used here to obtain sServiceManager
sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());
return sServiceManager;
}
ServiceManager is obtained as a singleton; getIServiceManager() returns a ServiceManagerProxy object.
getContextObject
static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
//obtain the BpBinder(0) C++ object
sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
//create a BinderProxy() object from the BpBinder(0)
return javaObjectForIBinder(env, b);
}
BinderInternal.java has a native method getContextObject(); the JNI call lands in the method above.
Here is the familiar ProcessState::self()->getContextObject(), which we already met when obtaining ServiceManager; it is equivalent to new BpBinder(0).
javaObjectForIBinder
This is the same function listed in full in Part III above: it generates the BinderProxy (Java) object from the BpBinder (C++).
javaObjectForIBinder generates a BinderProxy (Java) object from the BpBinder (C++) and stores the BpBinder's address in the BinderProxy.mObject field.
ServiceManagerNative.asInterface(BinderInternal.getContextObject())
is therefore equivalent to
ServiceManagerNative.asInterface(new BinderProxy());
SMN.asInterface
static public IServiceManager asInterface(IBinder obj)
{
if (obj == null) { //obj is the BinderProxy
return null;
}
//since obj is a BinderProxy, this returns null by default
IServiceManager in = (IServiceManager)obj.queryLocalInterface(descriptor);
IServiceManager in = (IServiceManager)obj.queryLocalInterface(descriptor);
if (in != null) {
return in;
}
return new ServiceManagerProxy(obj);
}
So ServiceManagerNative.asInterface(new BinderProxy()) is equivalent to new ServiceManagerProxy(new BinderProxy()).
ServiceManagerProxy initialization
class ServiceManagerProxy implements IServiceManager {
public ServiceManagerProxy(IBinder remote) {
mRemote = remote;
}
}
mRemote is the BinderProxy object, which corresponds to BpBinder(0); as the Binder proxy end it points at the native-level ServiceManager.
Calls on the framework-level ServiceManager are in fact delegated to ServiceManagerProxy's member BinderProxy;
BinderProxy in turn, via JNI, ultimately calls the BpBinder object.
4.2 SMP.addService()
public void addService(String name, IBinder service, boolean allowIsolated) throws RemoteException {
Parcel data = Parcel.obtain();
Parcel reply = Parcel.obtain();
data.writeInterfaceToken(IServiceManager.descriptor);
data.writeString(name);
data.writeStrongBinder(service);
data.writeInt(allowIsolated ? 1 : 0);
//mRemote is a BinderProxy
mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);
reply.recycle();
data.recycle();
}
It does roughly two things:
1. data.writeStrongBinder(service): Parcel processes the Binder;
2. mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0): BinderProxy.transact() sends the message toward the binder driver.
writeStrongBinder
public final void writeStrongBinder(IBinder val) {
//this is a native call
nativeWriteStrongBinder(mNativePtr, val);
}
android_os_Parcel_writeStrongBinder
static void android_os_Parcel_writeStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr, jobject object) {
//convert the Java-level Parcel into the native-level Parcel
Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
if (parcel != NULL) {
const status_t err = parcel->writeStrongBinder(ibinderForJavaObject(env, object));
if (err != NO_ERROR) {
signalExceptionForError(env, clazz, err);
}
}
}
ibinderForJavaObject
sp<IBinder> ibinderForJavaObject(JNIEnv* env, jobject obj)
{
if (obj == NULL) return NULL;
//a Java-level Binder object
if (env->IsInstanceOf(obj, gBinderOffsets.mClass)) {
JavaBBinderHolder* jbh = (JavaBBinderHolder*)
env->GetLongField(obj, gBinderOffsets.mObject);
return jbh != NULL ? jbh->get(env, obj) : NULL;
}
//a Java-level BinderProxy object
if (env->IsInstanceOf(obj, gBinderProxyOffsets.mClass)) {
return (IBinder*)env->GetLongField(obj, gBinderProxyOffsets.mObject);
}
return NULL;
}
frameworks/base/core/jni/android_util_Binder.cpp
JavaBBinderHolder
class JavaBBinderHolder
sp<JavaBBinder> get(JNIEnv* env, jobject obj)
{
AutoMutex _l(mLock);
sp<JavaBBinder> b = mBinder.promote();
if (b == NULL) {
b = new JavaBBinder(env, obj);
if (mVintf) {
::android::internal::Stability::markVintf(b.get());
}
if (mExtension != nullptr) {
b.get()->setExtension(mExtension);
}
mBinder = b;
ALOGV("Creating JavaBinder %p (refs %p) for Object %p, weakCount=%" PRId32 "\n",
b.get(), b->getWeakRefs(), obj, b->getWeakRefs()->getWeakCount());
}
return b;
}
A JavaBBinderHolder object is created automatically when the Java Binder is created.
JavaBBinderHolder's member mBinder holds the currently created JavaBBinder object. It is a weak reference and may be garbage-collected, so each use must first check whether it still exists.
When mBinder no longer exists, a new JavaBBinder() instance is created, and that JavaBBinder holds the Java-level Binder object.
The reference relationships at this point:
(Java) Binder->JavaBBinderHolder->JavaBBinder
JavaBBinder->(Java) Binder
data.writeStrongBinder(service) is thus equivalent to parcel->writeStrongBinder(new JavaBBinder(env, obj));
writeStrongBinder(C++)
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
return flatten_binder(ProcessState::self(), val, this);
}
flatten_binder
flatten_binder flattens the Binder information into a flat_binder_object struct.
status_t flatten_binder(const sp<ProcessState>& /*proc*/,
const sp<IBinder>& binder, Parcel* out)
{
flat_binder_object obj;
obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
if (binder != NULL) {
IBinder *local = binder->localBinder();
if (!local) {
BpBinder *proxy = binder->remoteBinder();
const int32_t handle = proxy ? proxy->handle() : 0;
obj.type = BINDER_TYPE_HANDLE; //remote Binder
obj.binder = 0;
obj.handle = handle;
obj.cookie = 0;
} else {
obj.type = BINDER_TYPE_BINDER; //local Binder; this branch is taken here
obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
obj.cookie = reinterpret_cast<uintptr_t>(local);
}
} else {
obj.type = BINDER_TYPE_BINDER; //local Binder
obj.binder = 0;
obj.cookie = 0;
}
return finish_flatten_binder(binder, obj, out);
}
- For a Binder entity, cookie records the pointer to the Binder entity and type = BINDER_TYPE_BINDER.
- For a Binder proxy, handle records the proxy's handle and type = BINDER_TYPE_HANDLE.
So before the IPC happens, the Binder object is processed as follows:
- Parcel.writeStrongBinder() - takes the Binder
- ibinderForJavaObject - returns a JavaBBinder (for a local Binder) or BpBinder (for a proxy)
- flatten_binder - produces a flat_binder_object
- (C++) parcel.writeObject(flat): the flat_binder_object is stored into the parcel
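The two flattening cases above can be sketched like this. FlatBinderObject and the helper functions are simplified stand-ins for the real kernel struct, with an illustrative field layout.

```cpp
#include <cassert>
#include <cstdint>

// Simplified model of flat_binder_object: a local Binder travels as a
// pointer (BINDER_TYPE_BINDER), a remote one as a handle (BINDER_TYPE_HANDLE).
enum FlatType { TYPE_BINDER, TYPE_HANDLE };

struct FlatBinderObject {
    FlatType type;
    uintptr_t binder;  // local object pointer (cookie omitted for brevity)
    uint32_t handle;   // remote handle
};

// Local entity: record the object pointer.
inline FlatBinderObject flatten_local(void* localBinder) {
    return {TYPE_BINDER, reinterpret_cast<uintptr_t>(localBinder), 0};
}

// Proxy: record only the handle; the pointer is meaningless across processes.
inline FlatBinderObject flatten_remote(uint32_t handle) {
    return {TYPE_HANDLE, 0, handle};
}
```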
BinderProxy.transact
Once the Binder object has been stored in the Parcel, BinderProxy.transact is called and we enter the IPC call flow.
public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
//checks that the Parcel does not exceed 800 KB
Binder.checkParcel(this, code, data, "Unreasonably large binder buffer");
return transactNative(code, data, reply, flags);
}
mRemote is the BinderProxy. transactNative goes through JNI and enters the method below.
android_os_BinderProxy_transact
static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
jint code, jobject dataObj, jobject replyObj, jint flags)
{
...
//convert the Java Parcels to native Parcels
Parcel* data = parcelForJavaObject(env, dataObj);
Parcel* reply = parcelForJavaObject(env, replyObj);
...
//gBinderProxyOffsets.mObject holds the BpBinder(0) object
IBinder* target = (IBinder*)
env->GetLongField(obj, gBinderProxyOffsets.mObject);
...
//this is BpBinder::transact(), which goes through the native layer into the binder driver
status_t err = target->transact(code, *data, reply, flags);
...
return JNI_FALSE;
}
The Java-level BinderProxy.transact() is ultimately handled by the native-level BpBinder::transact().
The core of addService:
public void addService(String name, IBinder service, boolean allowIsolated) throws RemoteException {
...
Parcel data = Parcel.obtain(); //the Java-level Parcel must also be converted to a native Parcel here
data->writeStrongBinder(new JavaBBinder(env, obj));
BpBinder::transact(ADD_SERVICE_TRANSACTION, *data, reply, 0); //interacts with the binder driver
...
}
4.3 SM.getService
ServiceManager.java
public static IBinder getService(String name) {
try {
IBinder service = sCache.get(name); //check the cache first
if (service != null) {
return service;
} else {
return getIServiceManager().getService(name); //query via ServiceManagerProxy
}
} catch (RemoteException e) {
Log.e(TAG, "error in getService", e);
}
return null;
}
The IBinder is looked up first in the sCache; if the cache misses, the query goes through ServiceManagerProxy().
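A minimal sketch of this cache-first lookup. FakeServiceManager and Token are hypothetical stand-ins; unlike the real sCache, which is seeded separately, this sketch also fills the cache on a miss.

```cpp
#include <cassert>
#include <map>
#include <string>

using Token = std::string; // stand-in for an IBinder

// Sketch of ServiceManager.getService's cache-first lookup: hit sCache
// first, otherwise ask the (here faked) remote ServiceManager.
class FakeServiceManager {
public:
    Token getService(const std::string& name) {
        auto it = sCache.find(name);
        if (it != sCache.end()) return it->second; // cache hit
        ++remoteQueries;                           // would be a Binder transaction
        Token t = "proxy:" + name;
        sCache[name] = t;                          // simplification: cache on miss
        return t;
    }
    int remoteQueries = 0;
private:
    std::map<std::string, Token> sCache;
};
```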
SMP.getService
class ServiceManagerProxy implements IServiceManager {
public IBinder getService(String name) throws RemoteException {
Parcel data = Parcel.obtain();
Parcel reply = Parcel.obtain();
data.writeInterfaceToken(IServiceManager.descriptor);
data.writeString(name);
//mRemote is a BinderProxy
mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
//parse the obtained IBinder out of the reply (see readStrongBinder below)
IBinder binder = reply.readStrongBinder();
reply.recycle();
data.recycle();
return binder;
}
}
BinderProxy.transact
Binder.java
final class BinderProxy implements IBinder {
public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
Binder.checkParcel(this, code, data, "Unreasonably large binder buffer");
return transactNative(code, data, reply, flags);
}
}
android_os_BinderProxy_transact
This is the same JNI function listed in 4.2: it converts the Java Parcels to native Parcels, reads the BpBinder from gBinderProxyOffsets.mObject, and calls target->transact(), i.e. BpBinder::transact().
BpBinder.transact
BpBinder.cpp
status_t BpBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
if (mAlive) {
// calls into IPCThreadState::transact
status_t status = IPCThreadState::self()->transact(
mHandle, code, data, reply, flags);
if (status == DEAD_OBJECT) mAlive = 0;
return status;
}
return DEAD_OBJECT;
}
IPC.transact
IPCThreadState.cpp
status_t IPCThreadState::transact(int32_t handle,
uint32_t code, const Parcel& data,
Parcel* reply, uint32_t flags)
{
status_t err = data.errorCheck(); //check the data for errors
flags |= TF_ACCEPT_FDS;
....
if (err == NO_ERROR) {
//(1) stash handle, code, and data in a binder_transaction_data struct, then write it into mOut (a Parcel)
err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
}
...
// by default the call is non-oneway, i.e. it waits for the server's result
if ((flags & TF_ONE_WAY) == 0) {
if (reply) {
//(2) waitForResponse waits for the reply event
err = waitForResponse(reply);
}else {
Parcel fakeReply;
err = waitForResponse(&fakeReply);
}
} else {
err = waitForResponse(NULL, NULL);
}
return err;
}
writeTransactionData
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
binder_transaction_data tr;
tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
tr.target.handle = handle;
tr.code = code;
tr.flags = binderFlags;
tr.cookie = 0;
tr.sender_pid = 0;
tr.sender_euid = 0;
const status_t err = data.errorCheck();
if (err == NO_ERROR) {
tr.data_size = data.ipcDataSize();
tr.data.ptr.buffer = data.ipcData();
tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
tr.data.ptr.offsets = data.ipcObjects();
} else if (statusBuffer) {
tr.flags |= TF_STATUS_CODE;
*statusBuffer = err;
tr.data_size = sizeof(status_t);
tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
tr.offsets_size = 0;
tr.data.ptr.offsets = 0;
} else {
return (mLastError = err);
}
mOut.writeInt32(cmd);
mOut.write(&tr, sizeof(tr));
return NO_ERROR;
}
IPC.waitForResponse
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
int32_t cmd;
int32_t err;
while (1) {
if ((err=talkWithDriver()) < NO_ERROR) break; //exchange data with the binder driver
err = mIn.errorCheck();
if (err < NO_ERROR) break; //exit the loop on error
//after each exchange with the driver, if mIn received data, process one BR_ command
if (mIn.dataAvail() == 0) continue;
cmd = mIn.readInt32();
switch (cmd) {
case BR_TRANSACTION_COMPLETE:
//only when no reply is needed (i.e. oneway) do we break out here; otherwise keep waiting.
if (!reply && !acquireResult) goto finish; break;
case BR_DEAD_REPLY:
err = DEAD_OBJECT; goto finish;
case BR_FAILED_REPLY:
err = FAILED_TRANSACTION; goto finish;
case BR_REPLY: ... goto finish;
default:
err = executeCommand(cmd); //handle any other command
if (err != NO_ERROR) goto finish;
break;
}
}
finish:
if (err != NO_ERROR) {
if (reply) reply->setError(err); //return the error code to the original caller
}
return err;
}
Receiving any of the following BR_ commands ends the waitForResponse() loop after it is handled:
- BR_TRANSACTION_COMPLETE: the driver's acknowledgment after receiving BC_TRANSACTION; for a oneway transaction, receiving it completes the Binder call;
- BR_DEAD_REPLY: the reply failed, usually because a thread or node was null; the Binder call ends;
- BR_FAILED_REPLY: the reply failed, usually due to a transaction error; the Binder call ends;
- BR_REPLY: the reply message the Binder driver sends to the client; for a non-oneway transaction, receiving it fully completes the Binder call.
Any other command is handled by executeCommand().
Since getService() here is a non-oneway call, waitForResponse returns to the calling process when BR_REPLY is received.
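The decision table above can be captured in a few lines. This is a sketch; the enum values are local to the example, not the real driver protocol values.

```cpp
#include <cassert>

// Sketch of waitForResponse's decision table: which BR_ commands end the wait.
enum Cmd { BR_TRANSACTION_COMPLETE, BR_DEAD_REPLY, BR_FAILED_REPLY, BR_REPLY, BR_NOOP };

// Returns true if this command finishes the wait, given whether the caller
// expects a reply (i.e. the transaction is not oneway).
inline bool finishesWait(Cmd cmd, bool wantsReply) {
    switch (cmd) {
        case BR_TRANSACTION_COMPLETE: return !wantsReply; // ends oneway calls only
        case BR_DEAD_REPLY:
        case BR_FAILED_REPLY:
        case BR_REPLY:                return true;        // always terminal
        default:                      return false;       // executeCommand() and loop on
    }
}
```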
class ServiceManagerProxy implements IServiceManager {
public IBinder getService(String name) throws RemoteException {
Parcel data = Parcel.obtain();
Parcel reply = Parcel.obtain();
//(1) fill in the parcel parameters
data.writeInterfaceToken(IServiceManager.descriptor);
data.writeString(name);
//(2) mRemote is a BinderProxy
mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
//(3) parse the obtained IBinder out of the reply
IBinder binder = reply.readStrongBinder();
reply.recycle();
data.recycle();
return binder;
}
}
Execution thus returns to point (3):
IBinder binder = reply.readStrongBinder();
The JNI implementation of Parcel.readStrongBinder():
static jobject android_os_Parcel_readStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr) {
Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
if (parcel != NULL) {
//see javaObjectForIBinder
return javaObjectForIBinder(env, parcel->readStrongBinder());
}
return NULL;
}
javaObjectForIBinder converts the native-level BpBinder object into a Java-level BinderProxy object.
readStrongBinder (C++)
sp<IBinder> Parcel::readStrongBinder() const
{
sp<IBinder> val;
//see unflatten_binder below
unflatten_binder(ProcessState::self(), *this, &val);
return val;
}
unflatten_binder
status_t unflatten_binder(const sp<ProcessState>& proc,
const Parcel& in, sp<IBinder>* out)
{
const flat_binder_object* flat = in.readObject(false);
if (flat) {
switch (flat->type) {
case BINDER_TYPE_BINDER:
*out = reinterpret_cast<IBinder*>(flat->cookie);
return finish_unflatten_binder(NULL, *flat, in);
case BINDER_TYPE_HANDLE:
//this branch is taken; see getStrongProxyForHandle
*out = proc->getStrongProxyForHandle(flat->handle);
//create the BpBinder object
return finish_unflatten_binder(
static_cast<BpBinder*>(out->get()), *flat, in);
}
}
return BAD_TYPE;
}
getStrongProxyForHandle
ProcessState.cpp
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;
AutoMutex _l(mLock);
//look up the entry for this handle
handle_entry* e = lookupHandleLocked(handle);
if (e != NULL) {
IBinder* b = e->binder;
if (b == NULL || !e->refs->attemptIncWeak(this)) {
...
//when no IBinder exists for this handle or its weak ref is invalid, create a BpBinder object
b = new BpBinder(handle);
e->binder = b;
if (b) e->refs = b->getWeakRefs();
result = b;
} else {
result.force_set(b);
e->refs->decWeak(this);
}
}
return result;
}
This method ultimately creates the BpBinder proxy object pointing at the Binder server.
javaObjectForIBinder then converts that native-level BpBinder into a Java-level BinderProxy object.
getService() therefore ends up with a BinderProxy pointing at the target Binder server.
The concrete steps of readStrongBinder:
- Parcel.unflatten_binder - yields the flat_binder_object
- ProcessState.getStrongProxyForHandle(handle) - returns the BpBinder
- javaObjectForIBinder - returns the BinderProxy object
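The one-proxy-per-handle caching behavior can be sketched like this. FakeProcessState and FakeBpBinder are simplified stand-ins for the real classes.

```cpp
#include <cassert>
#include <map>
#include <memory>

// Sketch of ProcessState::getStrongProxyForHandle's caching behavior:
// one proxy object per handle, created lazily and reused afterwards.
struct FakeBpBinder {
    explicit FakeBpBinder(int h) : handle(h) {}
    int handle;
};

class FakeProcessState {
public:
    std::shared_ptr<FakeBpBinder> getStrongProxyForHandle(int handle) {
        auto& slot = mHandleTable[handle];                        // lookupHandleLocked analogue
        if (!slot) slot = std::make_shared<FakeBpBinder>(handle); // new BpBinder(handle)
        return slot;
    }
private:
    std::map<int, std::shared_ptr<FakeBpBinder>> mHandleTable;
};
```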
The core of getService:
public static IBinder getService(String name) {
...
Parcel reply = Parcel.obtain(); //the Java-level Parcel must also be converted to a native Parcel here
BpBinder::transact(GET_SERVICE_TRANSACTION, *data, reply, 0); //interacts with the binder driver
IBinder binder = javaObjectForIBinder(env, new BpBinder(handle));
...
}
五、Binder Thread Creation
Binder threads are created as part of creating a Java process.
A Java-layer process is always created via Process.start(), which sends a socket message to the Zygote process requesting a new process. On receiving it, Zygote calls Zygote.forkAndSpecialize() to fork the new process; the new process then invokes RuntimeInit.nativeZygoteInit, which is mapped through JNI and finally reaches onZygoteInit in app_main.cpp.
- Process.start()
- Zygote.forkAndSpecialize()
- RuntimeInit.nativeZygoteInit
- onZygoteInit
5.1、onZygoteInit
[-> app_main.cpp]
virtual void onZygoteInit() {
    // obtain the ProcessState object
    sp<ProcessState> proc = ProcessState::self();
    // start a new binder thread; see Section 2.2
    proc->startThreadPool();
}
ProcessState::self() is a singleton. Its main work is to open() the /dev/binder driver device, use mmap() to map the kernel address space, and store the driver fd in the ProcessState member mDriverFD for later interaction. startThreadPool() then creates a new binder thread that loops in talkWithDriver().
5.2、PS.startThreadPool
[-> ProcessState.cpp]
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock); // multithread synchronization
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true); // see Section 2.3
    }
}
After the thread pool is started, mThreadPoolStarted is set to true; this flag guarantees that each application process starts the binder main thread only once. The thread created here is the binder main thread (isMain=true). All other threads in the binder thread pool are created at the request of the Binder driver.
5.3、PS.spawnPooledThread
void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        // get the binder thread name; see Section 2.3.1
        String8 name = makeBinderThreadName();
        // here isMain=true; see Section 2.3.2
        sp<Thread> t = new PoolThread(isMain);
        t->run(name.string());
    }
}
5.4、PoolThread.run
[-> ProcessState.cpp]
class PoolThread : public Thread
{
public:
    PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }
protected:
    virtual bool threadLoop() {
        IPCThreadState::self()->joinThreadPool(mIsMain); // see Section 2.4
        return false;
    }
    const bool mIsMain;
};
Despite its name, this does not create a pool of threads: it creates a single thread. PoolThread inherits from Thread, and t->run() eventually invokes PoolThread's threadLoop() method.
5.5、IPC.joinThreadPool
[-> IPCThreadState.cpp]
void IPCThreadState::joinThreadPool(bool isMain)
{
    // register this thread with the driver as a binder thread
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    set_sched_policy(mMyThreadId, SP_FOREGROUND); // set foreground scheduling policy
    status_t result;
    do {
        processPendingDerefs(); // flush pending object dereferences
        result = getAndExecuteCommand(); // process the next command
        if (result < NO_ERROR && result != TIMED_OUT
                && result != -ECONNREFUSED && result != -EBADF) {
            abort();
        }
        if (result == TIMED_OUT && !isMain) {
            break; // a non-main thread exits on timeout
        }
    } while (result != -ECONNREFUSED && result != -EBADF);
    mOut.writeInt32(BC_EXIT_LOOPER); // the thread is leaving the loop
    talkWithDriver(false); // false means the bwr read_buffer is empty
}
- When isMain=true, the command is BC_ENTER_LOOPER: this marks the binder main thread, which never exits.
- When isMain=false, the command is BC_REGISTER_LOOPER: this marks a thread created at the Binder driver's request; such a thread exits when a timeout occurs.
5.6、IPC.getAndExecuteCommand
status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;
    result = talkWithDriver(); // interact with the binder driver
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount++;
        pthread_mutex_unlock(&mProcess->mThreadCountLock);
        result = executeCommand(cmd); // execute the binder response code
        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount--;
        pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
        pthread_mutex_unlock(&mProcess->mThreadCountLock);
        set_sched_policy(mMyThreadId, SP_FOREGROUND);
    }
    return result;
}
5.7、talkWithDriver
// mOut has data, mIn has none yet; doReceive defaults to true
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    binder_write_read bwr;
    ...
    // return immediately when there is neither input nor output data
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
    ...
    do {
        // ioctl performs the binder read/write; the syscall enters the
        // Binder driver and lands in binder_ioctl
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        ...
    } while (err == -EINTR);
    ...
    return err;
}
5.8、binder_thread_write
[-> binder.c]
static int binder_thread_write(struct binder_proc *proc,
    struct binder_thread *thread,
    binder_uintptr_t binder_buffer, size_t size,
    binder_size_t *consumed)
{
    uint32_t cmd;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;
    while (ptr < end && thread->return_error == BR_OK) {
        // copy the cmd from user space; here it is BC_ENTER_LOOPER
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        switch (cmd) {
        case BC_REGISTER_LOOPER:
            if (thread->looper & BINDER_LOOPER_STATE_ENTERED) {
                // error: a thread that already issued BC_ENTER_LOOPER
                // must not take this branch
                thread->looper |= BINDER_LOOPER_STATE_INVALID;
            } else if (proc->requested_threads == 0) {
                // error: the thread was created without a request
                thread->looper |= BINDER_LOOPER_STATE_INVALID;
            } else {
                proc->requested_threads--;
                proc->requested_threads_started++;
            }
            thread->looper |= BINDER_LOOPER_STATE_REGISTERED;
            break;
        case BC_ENTER_LOOPER:
            if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) {
                // error: a thread that already issued BC_REGISTER_LOOPER
                // must not take this branch
                thread->looper |= BINDER_LOOPER_STATE_INVALID;
            }
            // mark this thread as the binder main thread
            thread->looper |= BINDER_LOOPER_STATE_ENTERED;
            break;
        case BC_EXIT_LOOPER:
            thread->looper |= BINDER_LOOPER_STATE_EXITED;
            break;
        }
        ...
        *consumed = ptr - buffer;
    }
    return 0;
}
After BC_ENTER_LOOPER is handled, the normal outcome is that the BINDER_LOOPER_STATE_ENTERED flag is added to thread->looper:
thread->looper |= BINDER_LOOPER_STATE_ENTERED;
At this point the binder main thread has been fully created, and it can start receiving binder messages.
5.9、When the Binder driver proactively creates binder threads
binder_thread_read
binder_thread_read() {
    ...
retry:
    // the thread is idle when both its todo queue and its transaction stack are empty
    wait_for_proc_work = thread->transaction_stack == NULL &&
        list_empty(&thread->todo);
    if (thread->return_error != BR_OK && ptr < end) {
        ...
        put_user(thread->return_error, (uint32_t __user *)ptr);
        ptr += sizeof(uint32_t);
        goto done; // on error, jump straight to done
    }
    thread->looper |= BINDER_LOOPER_STATE_WAITING;
    if (wait_for_proc_work)
        proc->ready_threads++; // available thread count +1
    binder_unlock(__func__);
    if (wait_for_proc_work) {
        if (non_block) {
            ...
        } else
            // sleep and wait while the process todo queue has no work
            ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
    } else {
        if (non_block) {
            ...
        } else
            // sleep and wait while the thread todo queue has no work
            ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
    }
    binder_lock(__func__);
    if (wait_for_proc_work)
        proc->ready_threads--; // available thread count -1
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
    if (ret)
        return ret; // for non-blocking calls, return immediately
    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;
        // first try to take transaction work from the thread todo queue
        if (!list_empty(&thread->todo)) {
            w = list_first_entry(&thread->todo, struct binder_work, entry);
        // if the thread todo queue is empty, take work from the process todo queue
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            w = list_first_entry(&proc->todo, struct binder_work, entry);
        } else {
            ... // no work: go back to retry
        }
        switch (w->type) {
        case BINDER_WORK_TRANSACTION: ... break;
        case BINDER_WORK_TRANSACTION_COMPLETE: ... break;
        case BINDER_WORK_NODE: ... break;
        case BINDER_WORK_DEAD_BINDER:
        case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
        case BINDER_WORK_CLEAR_DEATH_NOTIFICATION:
            struct binder_ref_death *death;
            uint32_t cmd;
            death = container_of(w, struct binder_ref_death, work);
            if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION)
                cmd = BR_CLEAR_DEATH_NOTIFICATION_DONE;
            else
                cmd = BR_DEAD_BINDER;
            put_user(cmd, (uint32_t __user *)ptr);
            ptr += sizeof(uint32_t);
            put_user(death->cookie, (void * __user *)ptr);
            ptr += sizeof(void *);
            ...
            if (cmd == BR_DEAD_BINDER)
                goto done; // the driver is sending the client a death notification: enter done
            break;
        }
        if (!t)
            continue; // only BINDER_WORK_TRANSACTION work proceeds further
        ...
        break;
    }
done:
    *consumed = ptr - buffer;
    // conditions for spawning a new thread
    if (proc->requested_threads + proc->ready_threads == 0 &&
        proc->requested_threads_started < proc->max_threads &&
        (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
            BINDER_LOOPER_STATE_ENTERED))) {
        proc->requested_threads++;
        // emit the BR_SPAWN_LOOPER command, asking userspace to create a new thread
        put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer);
    }
    return 0;
}
Execution reaches done in any of these three cases:
- the current thread's return_error indicates an error;
- the Binder driver sends a death notification to the client;
- the work type is BINDER_WORK_TRANSACTION (i.e. the received command was BC_TRANSACTION or BC_REPLY).
A binder thread generates the BR_SPAWN_LOOPER command (to create a new thread) only when all of the following conditions hold:
- the process has no idle binder threads, i.e. ready_threads = 0 (the number of threads asleep and waiting is the number of idle threads);
- the number of started driver-requested threads is below the per-process maximum (default 15);
- the current thread has already issued BC_ENTER_LOOPER or BC_REGISTER_LOOPER, i.e. it is in the BINDER_LOOPER_STATE_REGISTERED or BINDER_LOOPER_STATE_ENTERED state.
A system_server binder thread keeps executing the flow IPC.joinThreadPool -> IPC.getAndExecuteCommand() -> IPC.talkWithDriver(); once talkWithDriver receives a transaction, it enters IPC.executeCommand(). Picking up from executeCommand:
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    status_t result = NO_ERROR;
    switch ((uint32_t)cmd) {
    ...
    case BR_SPAWN_LOOPER:
        // create a new binder thread; see Section 2.3
        mProcess->spawnPooledThread(false);
        break;
    ...
    }
    return result;
}
The binder main thread is created together with its hosting process; ordinary binder threads created afterwards come from spawnPooledThread(false).
A question to consider:
Each process's binder thread pool is capped at 15 threads. The cap does not count the binder main thread created via BC_ENTER_LOOPER; it only counts threads created via BC_REGISTER_LOOPER.
If a process's main thread executes the following, what is the maximum number of binder threads the process can have?
ProcessState::self()->setThreadPoolMaxThreadCount(6); // up to 6 driver-spawned threads
ProcessState::self()->startThreadPool(); // 1 thread (the binder main thread)
IPCThread::self()->joinThreadPool(); // 1 thread (the current thread joins the pool)
The driver-created binder threads are capped at 6; the main thread created by startThreadPool() does not count toward that cap; and the last line turns the current thread into a binder thread as well. So the process can have at most 6 + 1 + 1 = 8 binder threads.
Summary of binder thread creation
Every time Zygote forks a new process, a binder thread pool is created along with it: spawnPooledThread is called to create the binder main thread. Later, when a thread executing binder_thread_read finds that there is no idle thread, no outstanding spawn request, and the limit has not been reached, it requests creation of a new binder thread.
The Binder system has three kinds of binder threads:
- Binder main thread: during process creation, startThreadPool() calls spawnPooledThread(true) to create the binder main thread. Numbering starts at 1, so the main thread is named binder_1, and it never exits.
- Ordinary binder threads: the Binder driver decides, based on whether any idle binder thread exists, whether to request a new thread; userspace responds by calling spawnPooledThread(false) (isMain=false), and the thread is named binder_x.
- Other binder threads: threads that never call spawnPooledThread but instead call IPC.joinThreadPool() directly, adding the current thread to the binder thread pool. For example, the main threads of mediaserver and servicemanager are binder threads, while the main thread of system_server is not.
六、The General Binder IPC Call Flow
The figure above illustrates the Binder IPC cross-process flow, using IActivityManager.startService as an example.
Client side BinderProxy.transact -> server side Binder.onTransact is the generic call path.
6.1裸卫、發(fā)起端線程向Binder Driver發(fā)起binder ioctl請求后, 便采用環(huán)不斷talkWithDriver,此時該線程處于阻塞狀態(tài), 直到收到如下BR_XXX命令才會結束該過程.
- BR_TRANSACTION_COMPLETE: oneway模式下,收到該命令則退出
- BR_REPLY: 非oneway模式下,收到該命令才退出;
- BR_DEAD_REPLY: 目標進程/線程/binder實體為空, 以及釋放正在等待reply的binder thread或者binder buffer;
- BR_FAILED_REPLY: 情況較多,比如非法handle, 錯誤事務棧, security, 內存不足, buffer不足, 數(shù)據拷貝失敗, 節(jié)點創(chuàng)建失敗, 各種不匹配等問題
- BR_ACQUIRE_RESULT: 目前未使用的協(xié)議;
6.2仿贬、目標Binder線程創(chuàng)建后, 便進入joinThreadPool()方法, 采用循環(huán)不斷地循環(huán)執(zhí)行getAndExecuteCommand()方法, 當bwr的讀寫buffer都沒有數(shù)據時,則阻塞在binder_thread_read的wait_event過程. 另外,正常情況下binder線程一旦創(chuàng)建則不會退出
七、References
http://www.reibang.com/p/1bef7e865498
http://gityuan.com/2015/11/21/binder-framework/#getiservicemanager