Preface
The code in this article is based on Android 10.0.0 r16; the full code will be uploaded to my GitHub repository.
In Android, every application process and system service process is created by the Zygote process forking a child. Zygote already holds preloaded resources and a warmed-up virtual machine, so each forked child inherits all of that out of the box, which greatly reduces the cost of creating a new process.
(Image source: Gityuan)
As the diagram shows, Zygote bridges the native (C/C++) layer and the Java framework layer. As the parent of every process created afterwards, Zygote already carries a fully initialized Java virtual machine, preloaded class resources, a JNI runtime environment, and so on; it answers only to init and sits above everything else (AMS, ATMS, WMS, ...). When a child is forked, it gets its own copy of the parent's resources and then runs independently of the parent. That is why opening an app, i.e. creating a new process, takes only a few hundred milliseconds.
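To make the fork() semantics concrete, here is a minimal, self-contained sketch in plain POSIX C++ (not AOSP code): after fork() the child works on its own copy of the parent's data, so changes made in the child never show up in the parent.
#include <cstdio>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
int main() {
    int preloaded = 42;  // stands in for Zygote's preloaded state
    pid_t pid = fork();  // the child starts with a copy of the parent's memory
    if (pid == 0) {
        // Child: mutating the copy does not affect the parent's value.
        preloaded = 100;
        printf("child  pid=%d preloaded=%d\n", getpid(), preloaded);
        return 0;
    } else if (pid > 0) {
        waitpid(pid, nullptr, 0);  // wait for the child, then print the unchanged value
        printf("parent pid=%d preloaded=%d\n", getpid(), preloaded);
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}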
How Zygote Is Started
Under system/core/rootdir there are several configuration files written in the Android Init Language (see dedicated articles for the syntax). The primary Zygote and the secondary Zygote correspond to the primary and secondary modes; in init.zygote64_32.rc, for example, the primary mode is 64-bit and the secondary mode is 32-bit.
Currently there are four such rc files:
- init.zygote32.rc
- init.zygote32_64.rc
- init.zygote64.rc
- init.zygote64_32.rc
Taking init.zygote64_32.rc as an example, the source is as follows:
/system/core/rootdir/init.zygote64_32.rc
service zygote /system/bin/app_process64 -Xzygote /system/bin --zygote --start-system-server --socket-name=zygote
class main
priority -20
user root
group root readproc reserved_disk
// Mode 660: the owner and the group can read and write; others have no access.
socket zygote stream 660 root system
socket usap_pool_primary stream 660 root system
onrestart write /sys/android_power/request_state wake
onrestart write /sys/power/state on
onrestart restart audioserver
onrestart restart cameraserver
onrestart restart media
onrestart restart netd
onrestart restart wificond
// When this service process is created, write its pid to /dev/cpuset/foreground/tasks
writepid /dev/cpuset/foreground/tasks
service zygote_secondary /system/bin/app_process32 -Xzygote /system/bin --zygote --socket-name=zygote_secondary --enable-lazy-preload
class main
priority -20
user root
group root readproc reserved_disk
socket zygote_secondary stream 660 root system
socket usap_pool_secondary stream 660 root system
onrestart restart zygote
writepid /dev/cpuset/foreground/tasks
Roughly speaking, the script above tells init, via the service command, to create the zygote process from /system/bin/app_process64 (the class main line simply assigns the service to the main class). The actual entry point is app_process64's main() function, and the code behind app_process64 is defined in app_main.cpp.
app_main.cpp
Let's pick out some of the key code in app_main.cpp:
/frameworks/base/cmds/app_process/app_main.cpp
int main(int argc, char* const argv[])
{
...
// 1
if (zygote) {
runtime.start("com.android.internal.os.ZygoteInit", args, zygote);
} else if (className) {
runtime.start("com.android.internal.os.RuntimeInit", args, zygote);
} else {
fprintf(stderr, "Error: no class name or --zygote supplied.\n");
app_usage();
LOG_ALWAYS_FATAL("app_process: no class name or --zygote supplied.");
}
}
As comment 1 indicates, the runtime here is of type AppRuntime, and AppRuntime inherits from AndroidRuntime. AppRuntime does not override its parent's start() method, so the start() call here resolves to AndroidRuntime's start().
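As a minimal sketch of why the call resolves that way (the class names below are hypothetical stand-ins, not the AOSP definitions): when a derived class does not override a virtual method, the call simply lands in the base-class implementation.
#include <cstdio>
class BaseRuntime {                 // plays the role of AndroidRuntime
public:
    virtual ~BaseRuntime() = default;
    virtual void start(const char* className) {
        printf("BaseRuntime::start(%s)\n", className);
    }
};
class DerivedRuntime : public BaseRuntime {  // plays the role of AppRuntime
    // start() intentionally not overridden
};
int main() {
    DerivedRuntime runtime;
    runtime.start("com.android.internal.os.ZygoteInit");  // prints BaseRuntime::start(...)
    return 0;
}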
AndroidRuntime.cpp
我們?cè)賮砜纯锤割?code>AndroidRuntime.cpp實(shí)現(xiàn)的start()
方法
void AndroidRuntime::start(const char* className, const Vector<String8>& options, bool zygote)
{
// If this log line is missing at boot, Zygote initialization probably failed
ALOGD(">>>>>> START %s uid %d <<<<<<\n",
className != NULL ? className : "(unknown)", getuid());
...
/* start the virtual machine */
JniInvocation jni_invocation;
jni_invocation.Init(NULL);
JNIEnv* env;
// Start the Java virtual machine
if (startVm(&mJavaVM, &env, zygote) != 0) {
return;
}
onVmCreated(env);
/*
* Register android functions.
*/
// Register JNI methods with the Java VM
if (startReg(env) < 0) {
ALOGE("Unable to register all android natives\n");
return;
}
/*
* We want to call main() with a String array with arguments in it.
* At present we have two arguments, the class name and an option string.
* Create an array to hold them.
*/
jclass stringClass;
jobjectArray strArray;
jstring classNameStr;
// classNameStr is converted from the className argument; its value is com.android.internal.os.ZygoteInit
stringClass = env->FindClass("java/lang/String");
assert(stringClass != NULL);
strArray = env->NewObjectArray(options.size() + 1, stringClass, NULL);
assert(strArray != NULL);
classNameStr = env->NewStringUTF(className);
assert(classNameStr != NULL);
env->SetObjectArrayElement(strArray, 0, classNameStr);
for (size_t i = 0; i < options.size(); ++i) {
jstring optionsStr = env->NewStringUTF(options.itemAt(i).string());
assert(optionsStr != NULL);
env->SetObjectArrayElement(strArray, i + 1, optionsStr);
}
/*
* Start VM. This thread becomes the main thread of the VM, and will
* not return until the VM exits.
*/
// Replace the "." in className with "/"; here the class is ZygoteInit
// After replacement: com/android/internal/os/ZygoteInit
char* slashClassName = toSlashClassName(className != NULL ? className : "");
jclass startClass = env->FindClass(slashClassName);
if (startClass == NULL) {
ALOGE("JavaVM unable to locate class '%s'\n", slashClassName);
/* keep going */
} else {
// Find ZygoteInit's main() method
jmethodID startMeth = env->GetStaticMethodID(startClass, "main",
"([Ljava/lang/String;)V");
if (startMeth == NULL) {
ALOGE("JavaVM unable to find main() in '%s'\n", className);
/* keep going */
} else {
// Call ZygoteInit's main() method
// Execution crosses from the native layer into the Java layer
env->CallStaticVoidMethod(startClass, startMeth, strArray);
#if 0
if (env->ExceptionCheck())
threadExitUncaughtException(env);
#endif
}
}
free(slashClassName);
ALOGD("Shutting down VM\n");
if (mJavaVM->DetachCurrentThread() != JNI_OK)
ALOGW("Warning: unable to detach main thread\n");
if (mJavaVM->DestroyJavaVM() != 0)
ALOGW("Warning: VM did not shut down cleanly\n");
}
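The pattern above — start a VM, find a class by its slash-separated name, look up the static main([Ljava/lang/String;)V, and invoke it — is the standard JNI Invocation API. Here is a minimal, self-contained sketch of the same pattern outside AOSP, assuming a desktop JDK (compile against jni.h and link against libjvm); the Hello class name is hypothetical.
#include <jni.h>
#include <cstdio>
int main() {
    JavaVM* jvm = nullptr;
    JNIEnv* env = nullptr;
    JavaVMOption options[1];
    options[0].optionString = const_cast<char*>("-Djava.class.path=.");
    JavaVMInitArgs vmArgs;
    vmArgs.version = JNI_VERSION_1_6;
    vmArgs.nOptions = 1;
    vmArgs.options = options;
    vmArgs.ignoreUnrecognized = JNI_FALSE;
    // Start the VM in this process (AndroidRuntime does the equivalent in startVm()).
    if (JNI_CreateJavaVM(&jvm, reinterpret_cast<void**>(&env), &vmArgs) != JNI_OK) {
        fprintf(stderr, "Unable to create JavaVM\n");
        return 1;
    }
    // Find the class by its slash-separated name and look up static main(String[]).
    jclass startClass = env->FindClass("Hello");  // "Hello" is a hypothetical class on the classpath
    if (startClass != nullptr) {
        jmethodID startMeth = env->GetStaticMethodID(startClass, "main", "([Ljava/lang/String;)V");
        if (startMeth != nullptr) {
            jobjectArray args = env->NewObjectArray(0, env->FindClass("java/lang/String"), nullptr);
            env->CallStaticVoidMethod(startClass, startMeth, args);  // native -> Java
        }
    }
    jvm->DestroyJavaVM();
    return 0;
}
On Android the VM is created exactly once, here in Zygote; forked children reuse it instead of paying this startup cost again.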
ZygoteInit.java
From here on we are in the Java layer. From the earlier runtime.start("com.android.internal.os.ZygoteInit", args, zygote) we know that ZygoteInit.main() is ultimately invoked from native code through JNI. Let's see what ZygoteInit.main() actually does.
frameworks/base/core/java/com/android/internal/os/ZygoteInit.java
@UnsupportedAppUsage
public static void main(String argv[]) {
ZygoteServer zygoteServer = null;
// Make thread creation throw an error, since Zygote must stay single-threaded during initialization
ZygoteHooks.startZygoteNoThreadCreation();
// Zygote goes into its own process group.
try {
Os.setpgid(0, 0);
} catch (ErrnoException ex) {
throw new RuntimeException("Failed to setpgid(0,0)", ex);
}
Runnable caller;
try {
// Record Zygote's start time
if (!"1".equals(SystemProperties.get("sys.boot_completed"))) {
MetricsLogger.histogram(null, "boot_zygote_init",
(int) SystemClock.elapsedRealtime());
}
...
// Enable DDMS
RuntimeInit.enableDdms();
boolean startSystemServer = false;
// The zygote socket name defaults to "zygote"; it may be reassigned below
String zygoteSocketName = "zygote";
String abiList = null;
boolean enableLazyPreload = false;
for (int i = 1; i < argv.length; i++) {
// The arguments from init.zygote64_32.rc arrive here
if ("start-system-server".equals(argv[i])) {
startSystemServer = true;
} else if ("--enable-lazy-preload".equals(argv[i])) {
enableLazyPreload = true;
} else if (argv[i].startsWith(ABI_LIST_ARG)) {
// app_main.cpp reads the ABI list and appends it to the arguments; it is parsed here
abiList = argv[i].substring(ABI_LIST_ARG.length());
} else if (argv[i].startsWith(SOCKET_NAME_ARG)) {
// The socket name is also set in app_main.cpp
zygoteSocketName = argv[i].substring(SOCKET_NAME_ARG.length());
} else {
throw new RuntimeException("Unknown command line argument: " + argv[i]);
}
}
// Zygote.PRIMARY_SOCKET_NAME = "zygote";
final boolean isPrimaryZygote = zygoteSocketName.equals(Zygote.PRIMARY_SOCKET_NAME);
if (abiList == null) {
throw new RuntimeException("No ABI list supplied.");
}
// In some configurations, we avoid preloading resources and classes eagerly.
// In such cases, we will preload things prior to our first fork.
if (!enableLazyPreload) {
bootTimingsTraceLog.traceBegin("ZygotePreload");
EventLog.writeEvent(LOG_BOOT_PROGRESS_PRELOAD_START, SystemClock.uptimeMillis());
// The preload() method is expanded below
preload(bootTimingsTraceLog);
EventLog.writeEvent(LOG_BOOT_PROGRESS_PRELOAD_END,
SystemClock.uptimeMillis());
bootTimingsTraceLog.traceEnd(); // ZygotePreload
} else {
// Thread.currentThread().setPriority(Thread.NORM_PRIORITY);
// Reset the thread priority to NORM_PRIORITY = 5
Zygote.resetNicePriority();
}
// Do an initial gc to clean up after startup
bootTimingsTraceLog.traceBegin("PostZygoteInitGC");
// Reclaim memory left over from the preloading above
gcAndFinalize();
bootTimingsTraceLog.traceEnd(); // PostZygoteInitGC
bootTimingsTraceLog.traceEnd(); // ZygoteInit
// Disable tracing so processes forked later do not carry the earlier trace state
Trace.setTracingEnabled(false, 0);
Zygote.initNativeState(isPrimaryZygote);
// From this point on, new threads may be created
ZygoteHooks.stopZygoteNoThreadCreation();
// Create the server side that AMS and other processes will connect to later
zygoteServer = new ZygoteServer(isPrimaryZygote);
if (startSystemServer) {
// First fork the SystemServer process
// Expanded below
// The returned r is essentially the Runnable from handleSystemServerProcess()
Runnable r = forkSystemServer(abiList, zygoteSocketName, zygoteServer);
// {@code r == null} in the parent (zygote) process, and {@code r != null} in the
// child (system_server) process.
if (r != null) {
r.run();
return;
}
}
Log.i(TAG, "Accepting command socket connections");
// Block, waiting for client connection requests
caller = zygoteServer.runSelectLoop(abiList);
} catch (Throwable ex) {
Log.e(TAG, "System zygote died with exception", ex);
throw ex;
} finally {
if (zygoteServer != null) {
zygoteServer.closeServerSocket();
}
}
// The child process runs the returned caller object
// The parent only ever blocks on connection requests or handles fork requests
if (caller != null) {
caller.run();
}
}
static void preload(TimingsTraceLog bootTimingsTraceLog) {
Log.d(TAG, "begin preload");
bootTimingsTraceLog.traceBegin("BeginPreload");
beginPreload();
bootTimingsTraceLog.traceEnd(); // BeginPreload
bootTimingsTraceLog.traceBegin("PreloadClasses");
// Preload the classes listed in the /system/etc/preloaded-classes file
preloadClasses();
bootTimingsTraceLog.traceEnd(); // PreloadClasses
bootTimingsTraceLog.traceBegin("CacheNonBootClasspathClassLoaders");
// Cache things that many apps use but that cannot go on the boot classpath.
// Mainly two jar files are loaded here:
// /system/framework/android.hidl.base-V1.0-java.jar
// /system/framework/android.hidl.manager-V1.0-java.jar
cacheNonBootClasspathClassLoaders();
bootTimingsTraceLog.traceEnd(); // CacheNonBootClasspathClassLoaders
bootTimingsTraceLog.traceBegin("PreloadResources");
// Preload some resources,
// e.g. R.array.preloaded_drawables, R.array.preloaded_color_state_lists
preloadResources();
bootTimingsTraceLog.traceEnd(); // PreloadResources
Trace.traceBegin(Trace.TRACE_TAG_DALVIK, "PreloadAppProcessHALs");
// Ultimately calls preloadHal() in frameworks/native/libs/ui/GraphicBufferMapper.cpp
nativePreloadAppProcessHALs();
Trace.traceEnd(Trace.TRACE_TAG_DALVIK);
Trace.traceBegin(Trace.TRACE_TAG_DALVIK, "PreloadGraphicsDriver");
// After some condition checks, may call eglGetDisplay() in the native layer (frameworks/native/opengl/libagl/egl.cpp)
maybePreloadGraphicsDriver();
Trace.traceEnd(Trace.TRACE_TAG_DALVIK);
// Load a few shared libraries: android, compiler_rt, jnigraphics
preloadSharedLibraries();
// Warm up text layout/rendering caches
// Also does some native-side initialization: init() in frameworks/base/core/jni/android_text_Hyphenator.cpp
preloadTextResources();
// Ask the WebViewFactory to do any initialization that must run in the zygote process,
// for memory sharing purposes.
// Load the webviewchromium_loader library
WebViewFactory.prepareWebViewInZygote();
// Downgrade preloaded references to soft references so Zygote's GC can reclaim them,
// i.e. when gcAndFinalize() runs
endPreload();
warmUpJcaProviders();
Log.d(TAG, "end preload");
sPreloadComplete = true;
}
private static Runnable forkSystemServer(String abiList, String socketName,
ZygoteServer zygoteServer) {
...
/* Hardcoded command line to start the system server */
String args[] = {
"--setuid=1000",
"--setgid=1000",
"--setgroups=1001,1002,1003,1004,1005,1006,1007,1008,1009,1010,1018,1021,1023,"
+ "1024,1032,1065,3001,3002,3003,3006,3007,3009,3010",
"--capabilities=" + capabilities + "," + capabilities,
"--nice-name=system_server",
"--runtime-args",
"--target-sdk-version=" + VMRuntime.SDK_VERSION_CUR_DEVELOPMENT,
"com.android.server.SystemServer",
};
ZygoteArguments parsedArgs = null;
int pid;
try {
parsedArgs = new ZygoteArguments(args);
Zygote.applyDebuggerSystemProperty(parsedArgs);
Zygote.applyInvokeWithSystemProperty(parsedArgs);
boolean profileSystemServer = SystemProperties.getBoolean(
"dalvik.vm.profilesystemserver", false);
if (profileSystemServer) {
parsedArgs.mRuntimeFlags |= Zygote.PROFILE_SYSTEM_SERVER;
}
// Fork the SystemServer process
// Calls the native method nativeForkSystemServer()
pid = Zygote.forkSystemServer(
parsedArgs.mUid, parsedArgs.mGid,
parsedArgs.mGids,
parsedArgs.mRuntimeFlags,
null,
parsedArgs.mPermittedCapabilities,
parsedArgs.mEffectiveCapabilities);
} catch (IllegalArgumentException ex) {
throw new RuntimeException(ex);
}
// pid == 0 means we are in the child; pid > 0 means the parent
// In the parent, fork returns the child's pid
if (pid == 0) {
if (hasSecondZygote(abiList)) {
waitForSecondaryZygote(socketName);
}
// Close the server socket (the child does not need it)
zygoteServer.closeServerSocket();
return handleSystemServerProcess(parsedArgs);
}
return null;
}
com_android_internal_os_Zygote.cpp
Finally, let's look at the nativeForkSystemServer() method and see how Zygote actually forks the SystemServer process.
/frameworks/base/core/jni/com_android_internal_os_Zygote.cpp
static jint com_android_internal_os_Zygote_nativeForkSystemServer(
JNIEnv* env, jclass, uid_t uid, gid_t gid, jintArray gids,
jint runtime_flags, jobjectArray rlimits, jlong permitted_capabilities,
jlong effective_capabilities) {
// fds_to_close holds Zygote's own fds that the child process must close
// fds_to_ignore holds fds that should be skipped when the open-fd table is built
// (the fd table is created on the first fork; later forks re-check the fds in it)
std::vector<int> fds_to_close(MakeUsapPipeReadFDVector()),
fds_to_ignore(fds_to_close);
fds_to_close.push_back(gUsapPoolSocketFD);
if (gUsapPoolEventFD != -1) {
fds_to_close.push_back(gUsapPoolEventFD);
fds_to_ignore.push_back(gUsapPoolEventFD);
}
// Calls fork() internally
pid_t pid = ForkCommon(env, true,
fds_to_close,
fds_to_ignore);
// Child process
if (pid == 0) {
SpecializeCommon(env, uid, gid, gids, runtime_flags, rlimits,
permitted_capabilities, effective_capabilities,
MOUNT_EXTERNAL_DEFAULT, nullptr, nullptr, true,
false, nullptr, nullptr);
} else if (pid > 0) {
// The zygote process checks whether the child process has died or not.
ALOGI("System server process %d has been created", pid);
gSystemServerPid = pid;
int status;
// Check whether the child process has already died at this point
// WNOHANG makes waitpid non-blocking: it returns the child's pid if the child exited, otherwise 0
if (waitpid(pid, &status, WNOHANG) == pid) {
ALOGE("System server process %d has died. Restarting Zygote!", pid);
RuntimeAbort(env, __LINE__, "System server process has died. Restarting Zygote!");
}
if (UsePerAppMemcg()) {
// Assign system_server to the correct memory cgroup.
// Not all devices mount memcg so check if it is mounted first
// to avoid unnecessarily printing errors and denials in the logs.
if (!SetTaskProfiles(pid, std::vector<std::string>{"SystemMemoryProcess"})) {
ALOGE("couldn't add process %d into system memcg group", pid);
}
}
}
return pid;
}
static pid_t ForkCommon(JNIEnv* env, bool is_system_server,
const std::vector<int>& fds_to_close,
const std::vector<int>& fds_to_ignore) {
...
pid_t pid = fork();
if (pid == 0) {
// The child process.
PreApplicationInit();
// Close every fd in fds_to_close
DetachDescriptors(env, fds_to_close, fail_fn);
// Invalidate the entries in the USAP table.
ClearUsapTable();
// Re-open all remaining open file descriptors so that they aren't shared
// with the zygote across a fork.
gOpenFdTable->ReopenOrDetach(fail_fn);
// Turn fdsan back on.
android_fdsan_set_error_level(fdsan_error_level);
} else {
ALOGD("Forked child process %d", pid);
}
// We blocked SIGCHLD prior to a fork, we unblock it here.
UnblockSignal(SIGCHLD, fail_fn);
return pid;
}
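To illustrate the WNOHANG check in nativeForkSystemServer(), here is a minimal, self-contained sketch in plain POSIX C++ (not AOSP code): a non-blocking waitpid() returns 0 while the child is still running and returns the child's pid once it has exited.
#include <cstdio>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
int main() {
    pid_t pid = fork();
    if (pid == 0) {
        // Child: stay alive briefly, then exit normally.
        sleep(1);
        _exit(0);
    }
    int status = 0;
    // Non-blocking check right after fork: the child is still running, so waitpid returns 0.
    pid_t r = waitpid(pid, &status, WNOHANG);
    printf("first check:  %d (0 means the child is still alive)\n", r);
    sleep(2);  // give the child time to exit
    // Now the child has exited, so the non-blocking check returns its pid.
    r = waitpid(pid, &status, WNOHANG);
    printf("second check: %d (the child's pid once it has exited)\n", r);
    return 0;
}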
ZygoteServer.java
That covers Zygote's own startup and how it launches the SystemServer process. As a bonus, let's look at ZygoteServer's runSelectLoop() method. What is it for? It accepts socket connections from system service processes such as AMS and ATMS, which act as clients asking ZygoteServer to fork() new processes, and it handles those requests (a local-socket sketch of the idea follows below). The method blocks: in the parent process it never returns, and only in a child process does it return a Runnable.
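Before diving into the code, here is a rough, self-contained sketch in plain POSIX C++ of the idea behind a local-socket fork server (the socket path is hypothetical and none of this is AOSP code): bind an AF_UNIX stream socket, accept a client, read its request, and fork a child to handle it.
#include <cstdio>
#include <cstring>
#include <sys/socket.h>
#include <sys/un.h>
#include <sys/wait.h>
#include <unistd.h>
int main() {
    // Hypothetical socket path; Zygote's real socket is created by init and handed to it.
    const char* path = "/tmp/demo_zygote.sock";
    unlink(path);  // remove any stale socket file from a previous run
    int server = socket(AF_UNIX, SOCK_STREAM, 0);
    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    bind(server, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(server, 8);
    int client = accept(server, nullptr, nullptr);  // blocks until a client connects
    char request[128] = {};
    if (read(client, request, sizeof(request) - 1) > 0) {
        if (fork() == 0) {
            // Child: roughly where Zygote would specialize into the requested process.
            printf("child %d handling request: %s\n", getpid(), request);
            _exit(0);
        }
        wait(nullptr);  // parent reaps the child in this toy example
    }
    close(client);
    close(server);
    unlink(path);
    return 0;
}
You could poke it with something like socat - UNIX-CONNECT:/tmp/demo_zygote.sock. The real ZygoteServer additionally multiplexes many such sessions with poll, which is what runSelectLoop() below is doing.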
frameworks/base/core/java/com/android/internal/os/ZygoteServer.java
Runnable runSelectLoop(String abiList) {
ArrayList<FileDescriptor> socketFDs = new ArrayList<FileDescriptor>();
ArrayList<ZygoteConnection> peers = new ArrayList<ZygoteConnection>();
// The first element is ZygoteServer's own server-side fd
// It acts as the gatekeeper: anyone who wants a child forked must first register through this socket
// Only after registering can a fork request be made
socketFDs.add(mZygoteSocket.getFileDescriptor());
// Correspondingly, add a null placeholder to the connection list,
// because no connection has been accepted yet
peers.add(null);
while (true) {
// Fetch the USAP pool's max/min sizes and the refill threshold
// These values are periodically refreshed from the configuration
fetchUsapPoolPolicyPropsWithMinInterval();
// Holds the USAP pool's fds
int[] usapPipeFDs = null;
// Array of StructPollfd used for polling the connections
StructPollfd[] pollFDs = null;
// Allocate enough space for the poll structs, taking into account
// the state of the USAP pool for this Zygote (could be a
// regular Zygote, a WebView Zygote, or an AppZygote).
if (mUsapPoolEnabled) {
// Get the fds of the active USAP pipes
// Implemented natively by MakeUsapPipeReadFDVector()
usapPipeFDs = Zygote.getUsapPipeFDs();
// The extra 1 reserves room for the new StructPollfd holding usapPoolEventFD below
// See comment 2 below
pollFDs = new StructPollfd[socketFDs.size() + 1 + usapPipeFDs.length];
} else {
pollFDs = new StructPollfd[socketFDs.size()];
}
/*
* For reasons of correctness the USAP pool pipe and event FDs
* must be processed before the session and server sockets. This
* is to ensure that the USAP pool accounting information is
* accurate when handling other requests like API blacklist
* exemptions.
*/
int pollIndex = 0;
// Iterate over the stored fds
for (FileDescriptor socketFD : socketFDs) {
pollFDs[pollIndex] = new StructPollfd();
pollFDs[pollIndex].fd = socketFD;
// POLLIN means the fd is readable
pollFDs[pollIndex].events = (short) POLLIN;
++pollIndex;
}
final int usapPoolEventFDIndex = pollIndex;
// 2
if (mUsapPoolEnabled) {
// This is the slot reserved above for this StructPollfd
pollFDs[pollIndex] = new StructPollfd();
pollFDs[pollIndex].fd = mUsapPoolEventFD;
pollFDs[pollIndex].events = (short) POLLIN;
++pollIndex;
// Then append the active USAP pipe fds one by one
for (int usapPipeFD : usapPipeFDs) {
FileDescriptor managedFd = new FileDescriptor();
managedFd.setInt$(usapPipeFD);
pollFDs[pollIndex] = new StructPollfd();
pollFDs[pollIndex].fd = managedFd;
pollFDs[pollIndex].events = (short) POLLIN;
++pollIndex;
}
}
try {
// Block here waiting for events; this relies on Linux I/O multiplexing
Os.poll(pollFDs, -1);
} catch (ErrnoException ex) {
throw new RuntimeException("poll failed", ex);
}
// Marks whether the USAP pool needs to be refilled
boolean usapPoolFDRead = false;
while (--pollIndex >= 0) {
if ((pollFDs[pollIndex].revents & POLLIN) == 0) {
continue;
}
// Index 0 is ZygoteServer's own socket, used to handle new connection requests
// The accepted connection is stored in peers, the ZygoteConnection list
// This is effectively how a system service registers with ZygoteServer
if (pollIndex == 0) {
// Zygote server socket
ZygoteConnection newPeer = acceptCommandPeer(abiList);
peers.add(newPeer);
socketFDs.add(newPeer.getFileDescriptor());
} else if (pollIndex < usapPoolEventFDIndex) {
// Session socket accepted from the Zygote server socket
// An fd for a session that ZygoteServer has already accepted
// This is where we check whether a client has sent a fork request
try {
ZygoteConnection connection = peers.get(pollIndex);
// 3
// This is where fork requests from clients are processed
final Runnable command = connection.processOneCommand(this);
// TODO (chriswailes): Is this extra check necessary?
// mIsForkChild is set in the child process, after forkAndSpecialize()
if (mIsForkChild) {
// We're in the child. We should always have a command to run at this
// stage if processOneCommand hasn't called "exec".
if (command == null) {
throw new IllegalStateException("command == null");
}
return command;
} else {
// We're in the server - we should never have any commands to run.
if (command != null) {
throw new IllegalStateException("command != null");
}
// We don't know whether the remote side of the socket was closed or
// not until we attempt to read from it from processOneCommand. This
// shows up as a regular POLLIN event in our regular processing loop.
// Check whether the peer has closed this connection; if so, remove it from the lists
// so that we do not read from an already-closed connection next time
if (connection.isClosedByPeer()) {
connection.closeSocket();
peers.remove(pollIndex);
socketFDs.remove(pollIndex);
}
}
} catch (Exception e) {
if (!mIsForkChild) {
// We're in the server so any exception here is one that has taken place
// pre-fork while processing commands or reading / writing from the
// control socket. Make a loud noise about any such exceptions so that
// we know exactly what failed and why.
Slog.e(TAG, "Exception executing zygote command: ", e);
// Make sure the socket is closed so that the other end knows
// immediately that something has gone wrong and doesn't time out
// waiting for a response.
ZygoteConnection conn = peers.remove(pollIndex);
conn.closeSocket();
socketFDs.remove(pollIndex);
} else {
// We're in the child so any exception caught here has happened post
// fork and before we execute ActivityThread.main (or any other main()
// method). Log the details of the exception and bring down the process.
Log.e(TAG, "Caught post-fork exception in child process.", e);
throw e;
}
} finally {
// Reset the child flag, in the event that the child process is a child-
// zygote. The flag will not be consulted this loop pass after the Runnable
// is returned.
mIsForkChild = false;
}
} else {
// Either the USAP pool event FD or a USAP reporting pipe.
// If this is the event FD, the payload is the number of USAPs that were removed
// If this is a reporting pipe FD, the payload is the corresponding PID
long messagePayload = -1;
// The remaining fds belong to the USAP pool
try {
byte[] buffer = new byte[Zygote.USAP_MANAGEMENT_MESSAGE_BYTES];
int readBytes = Os.read(pollFDs[pollIndex].fd, buffer, 0, buffer.length);
if (readBytes == Zygote.USAP_MANAGEMENT_MESSAGE_BYTES) {
DataInputStream inputStream =
new DataInputStream(new ByteArrayInputStream(buffer));
messagePayload = inputStream.readLong();
} else {
Log.e(TAG, "Incomplete read from USAP management FD of size "
+ readBytes);
continue;
}
} catch (Exception ex) {
if (pollIndex == usapPoolEventFDIndex) {
Log.e(TAG, "Failed to read from USAP pool event FD: "
+ ex.getMessage());
} else {
Log.e(TAG, "Failed to read from USAP reporting pipe: "
+ ex.getMessage());
}
continue;
}
// Remove the pid carried by this fd from the USAP table entries
if (pollIndex > usapPoolEventFDIndex) {
Zygote.removeUsapTableEntry((int) messagePayload);
}
// The USAP pool should be refilled
usapPoolFDRead = true;
}
}
// Check to see if the USAP pool needs to be refilled.
// If the USAP pool needs refilling,
// fork new USAP processes to bring it back up to the refill target
if (usapPoolFDRead) {
int[] sessionSocketRawFDs =
socketFDs.subList(1, socketFDs.size())
.stream()
.mapToInt(fd -> fd.getInt$())
.toArray();
final Runnable command = fillUsapPool(sessionSocketRawFDs);
// fillUsapPool() returns a Runnable only in a newly forked USAP child; return it in that case
if (command != null) {
return command;
}
}
}
}
The heart of runSelectLoop() is the processOneCommand() method marked at comment 3, which handles a service process's request to fork() a child. That method will be analyzed in detail in the next article; this one is already getting long. If the runSelectLoop() code above still feels unclear, or you want to understand the underlying mechanism, take a look at these three articles on Linux I/O; after reading them you should have a solid grasp of the Linux I/O models.
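As a small taste of what Os.poll() does underneath, here is a minimal, self-contained poll() sketch in plain POSIX C++ (not AOSP code): the parent watches a pipe fd for POLLIN and only wakes up when data arrives.
#include <cstdio>
#include <poll.h>
#include <unistd.h>
int main() {
    int fds[2];
    pipe(fds);  // fds[0] = read end, fds[1] = write end
    if (fork() == 0) {
        // Child: pretend to be a client that sends a request after a short delay.
        sleep(1);
        const char msg[] = "fork-request";
        write(fds[1], msg, sizeof(msg));
        _exit(0);
    }
    // Parent: block until the read end becomes readable, just like Os.poll(pollFDs, -1).
    struct pollfd pfd;
    pfd.fd = fds[0];
    pfd.events = POLLIN;
    printf("polling...\n");
    if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
        char buf[64] = {};
        read(fds[0], buf, sizeof(buf) - 1);
        printf("readable: %s\n", buf);
    }
    return 0;
}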
Closing Remarks
That's it for this article. The next one should cover how Zygote on Android Q receives requests from other service processes, such as ATMS and AMS, to create new processes. Zygote changed quite a bit in Android Q: besides the regular Zygote there are now AppZygote (which does some optimization for app loading) and WebViewZygote, and the USAP pool (a pool of pre-forked, unspecialized app processes) was introduced as well. I'm still not entirely sure what the USAP pool is used for in practice; if anyone can shed light on that in the comments, please do, and if you spot mistakes in the article, point them out there too. Many thanks.
Link to the code for this article