The real reason is that, under the hood, MessageQueue blocks on epoll and the main thread is woken up as soon as a message arrives. We will analyze this mainly through MessageQueue's enqueue path and its next() method.
The MessageQueue constructor is as follows:
MessageQueue(boolean quitAllowed) {
mQuitAllowed = quitAllowed;
mPtr = nativeInit();
}
As you can see, mPtr is initialized by calling nativeInit(), whose implementation is:
static jlong android_os_MessageQueue_nativeInit(JNIEnv* env, jclass clazz) {
NativeMessageQueue* nativeMessageQueue = new NativeMessageQueue();
if (!nativeMessageQueue) {
jniThrowRuntimeException(env, "Unable to allocate native queue");
return 0;
}
nativeMessageQueue->incStrong(env);
return reinterpret_cast<jlong>(nativeMessageQueue);
}
As you can see, the final return value is just the NativeMessageQueue pointer force-cast to a jlong with reinterpret_cast. The NativeMessageQueue constructor looks like this:
NativeMessageQueue::NativeMessageQueue() :
mPollEnv(NULL), mPollObj(NULL), mExceptionObj(NULL) {
// The Looper that drives the message loop shows up at the native layer as well. In a
// message-driven model, each thread has one Looper that loops over its message queue. The call
// below fetches the Looper stored in this thread's thread-local storage (TLS).
mLooper = Looper::getForThread();
// If this is the first time through, the thread has no TLS entry yet, so create a Looper
// and store it in TLS; this is a common per-thread singleton pattern.
if (mLooper == NULL) {
mLooper = new Looper(false);
Looper::setForThread(mLooper);
}
}
In short, what nativeInit() returns is the address of the native message queue object; the Java side stores it in mPtr and casts it back to a pointer whenever it needs to call into the native queue.
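To make that pointer round trip concrete, here is a minimal standalone C++ sketch of the same pattern (FakeQueue, fakeInit and fakeWake are illustrative names, not framework code): allocate an object on the heap, hand its address out as a 64-bit integer handle, and later cast the handle back to call a method on the object.
#include <cstdint>
#include <cstdio>

// Stand-in for NativeMessageQueue (illustrative only).
struct FakeQueue {
    void wake() { std::printf("woken\n"); }
};

// Mirrors nativeInit(): allocate and return the address as a 64-bit handle (jlong in JNI).
int64_t fakeInit() {
    FakeQueue* q = new FakeQueue();
    return reinterpret_cast<int64_t>(q);
}

// Mirrors nativeWake(): cast the handle back to a pointer and use it.
void fakeWake(int64_t ptr) {
    reinterpret_cast<FakeQueue*>(ptr)->wake();
}

int main() {
    int64_t handle = fakeInit();   // what the Java side stores in mPtr
    fakeWake(handle);              // what nativeWake(mPtr) does
    delete reinterpret_cast<FakeQueue*>(handle);
    return 0;
}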
How MessageQueue enqueues messages and triggers a wake-up
MessageQueue.enqueueMessage() is the enqueue operation. The queue is a singly linked list kept ordered by each message's delivery time (when), so messages are taken out earliest-first; a standalone sketch of this ordered insertion follows the method below.
boolean enqueueMessage(Message msg, long when) {
// A message must have a target Handler
if (msg.target == null) {
throw new IllegalArgumentException("Message must have a target.");
}
// A message that is already in use cannot be enqueued again
if (msg.isInUse()) {
throw new IllegalStateException(msg + " This message is already in use.");
}
// Synchronize so that only one thread at a time can modify the queue
synchronized (this) {
// Has the Looper quit? Looper.quit() calls mQueue.quit(false), which sets mQuitting to true
if (mQuitting) {
IllegalStateException e = new IllegalStateException(
msg.target + " sending message to a Handler on a dead thread");
Log.w(TAG, e.getMessage(), e);
// Recycle the message; recycle() would throw if the message were still marked as in use
msg.recycle();
return false;
}
// Mark the message as in use
msg.markInUse();
// Record the delivery time that was passed in
msg.when = when;
Message p = mMessages;
boolean needWake;
// p is the current head of the queue
if (p == null || when == 0 || when < p.when) {
// New head, wake up the event queue if blocked.
msg.next = p;
mMessages = msg;
needWake = mBlocked;
} else {
// Inserted within the middle of the queue. Usually we don't have to wake
// up the event queue unless there is a barrier at the head of the queue
// and the message is the earliest asynchronous message in the queue.
needWake = mBlocked && p.target == null && msg.isAsynchronous();
Message prev;
// Walk the list to find the insertion point: stop when p is null (end of list) or when the new message is due earlier than p; prev is then the node after which the new message is inserted
for (;;) {
prev = p;
p = p.next;
if (p == null || when < p.when) {
break;
}
if (needWake && p.isAsynchronous()) {
needWake = false;
}
}
// Splice the new message in between prev and p:
// its next pointer becomes p, and prev now points at it
msg.next = p; // invariant: p == prev.next
prev.next = msg;
}
// We can assume mPtr != 0 because mQuitting is false.
// Decide whether the (possibly sleeping) looper thread needs to be woken up
if (needWake) {
nativeWake(mPtr);
}
}
return true;
}
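The heart of the method above is an ordered insert into a singly linked list. The following self-contained C++ sketch (Msg and enqueueSorted are illustrative names, not the framework's types) mirrors that logic: a new node either becomes the head, or the list is walked past every node that is due earlier and the node is spliced in before the first one that is due later.
#include <cstdint>
#include <cstdio>

struct Msg {
    int64_t when;        // delivery time, like Message.when
    Msg* next = nullptr;
};

// Insert msg into the list headed by *head, keeping it sorted by 'when'.
// Returns true when msg became the new head (the case where the real
// queue may need to wake the blocked looper thread).
bool enqueueSorted(Msg** head, Msg* msg) {
    Msg* p = *head;
    if (p == nullptr || msg->when == 0 || msg->when < p->when) {
        msg->next = p;          // new head
        *head = msg;
        return true;
    }
    Msg* prev;
    for (;;) {                  // find the first node due later than msg
        prev = p;
        p = p->next;
        if (p == nullptr || msg->when < p->when) break;
    }
    msg->next = p;              // splice in between prev and p
    prev->next = msg;
    return false;
}

int main() {
    Msg* head = nullptr;
    Msg a{100}, b{50}, c{70};
    enqueueSorted(&head, &a);
    enqueueSorted(&head, &b);   // becomes the new head (due earlier)
    enqueueSorted(&head, &c);   // lands between b and a
    for (Msg* m = head; m != nullptr; m = m->next) std::printf("%lld ", (long long)m->when);
    std::printf("\n");          // prints: 50 70 100
    return 0;
}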
The nativeWake() call lands in frameworks/base/core/jni/android_os_MessageQueue.cpp:
static void android_os_MessageQueue_nativeWake(JNIEnv* env, jclass clazz, jlong ptr) {
NativeMessageQueue* nativeMessageQueue = reinterpret_cast<NativeMessageQueue*>(ptr);
nativeMessageQueue->wake();
}
NativeMessageQueue::wake(), also implemented in android_os_MessageQueue.cpp, simply forwards to the Looper:
void NativeMessageQueue::wake() {
mLooper->wake();
}
This invokes the wake() method in system/core/libutils/Looper.cpp:
void Looper::wake() {
#if DEBUG_POLL_AND_WAKE
ALOGD("%p ~ wake", this);
#endif
uint64_t inc = 1;
ssize_t nWrite = TEMP_FAILURE_RETRY(write(mWakeEventFd, &inc, sizeof(uint64_t)));
if (nWrite != sizeof(uint64_t)) {
if (errno != EAGAIN) {
LOG_ALWAYS_FATAL("Could not write wake signal to fd %d: %s",
mWakeEventFd, strerror(errno));
}
}
}
So wake() just calls the system write() to write a 64-bit value into mWakeEventFd, the eventfd the Looper created (older Android versions used a pipe here). Any thread blocked waiting on that descriptor becomes runnable again, which is how a thread that enqueues a message wakes up the looper thread.
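If you have not used eventfd before, the following minimal sketch (my own demo, not framework code) shows what that write() does: an eventfd is a kernel-maintained 64-bit counter; writing adds to the counter and makes the descriptor readable, and reading returns the accumulated value and resets it to zero.
#include <sys/eventfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    // Same flags Looper uses: non-blocking, close-on-exec.
    int fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
    if (fd < 0) { perror("eventfd"); return 1; }

    uint64_t inc = 1;
    write(fd, &inc, sizeof(inc));   // what Looper::wake() does
    write(fd, &inc, sizeof(inc));   // a second wake before anyone reads

    uint64_t value = 0;
    ssize_t n = read(fd, &value, sizeof(value));  // roughly what Looper::awoken() does
    std::printf("read %zd bytes, counter was %llu\n", n, (unsigned long long)value); // counter was 2

    n = read(fd, &value, sizeof(value));          // counter is now 0: fd no longer readable, EAGAIN
    if (n < 0) perror("second read");
    close(fd);
    return 0;
}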
Back in the NativeMessageQueue constructor in frameworks/base/core/jni/android_os_MessageQueue.cpp, a Looper is created when mLooper is null; that Looper lives in system/core/libutils/Looper.cpp:
Looper::Looper(bool allowNonCallbacks) :
mAllowNonCallbacks(allowNonCallbacks), mSendingMessage(false),
mPolling(false), mEpollFd(-1), mEpollRebuildRequired(false),
mNextRequestSeq(0), mResponseIndex(0), mNextMessageUptime(LLONG_MAX) {
// Create the wake event: the eventfd() call returns a file descriptor
// dedicated to event notification
mWakeEventFd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
LOG_ALWAYS_FATAL_IF(mWakeEventFd < 0, "Could not make wake event fd: %s",
strerror(errno));
AutoMutex _l(mLock);
rebuildEpollLocked();
}
During initialization the Looper constructor ends up calling rebuildEpollLocked():
void Looper::rebuildEpollLocked() {
// Close old epoll instance if we have one.
if (mEpollFd >= 0) {
#if DEBUG_CALLBACKS
ALOGD("%p ~ rebuildEpollLocked - rebuilding epoll set", this);
#endif
close(mEpollFd);
}
// Allocate the new epoll instance and register the wake pipe.
// Allocate a new epoll instance and register the wake descriptor with it;
// mEpollFd is the handle to that epoll instance
mEpollFd = epoll_create(EPOLL_SIZE_HINT);
LOG_ALWAYS_FATAL_IF(mEpollFd < 0, "Could not create epoll instance: %s", strerror(errno));
struct epoll_event eventItem;
memset(& eventItem, 0, sizeof(epoll_event)); // zero out unused members of data field union
eventItem.events = EPOLLIN;
eventItem.data.fd = mWakeEventFd;
// epoll_ctl with EPOLL_CTL_ADD registers a descriptor with the epoll instance.
// The third argument is the descriptor to watch, i.e. the wake eventfd,
// and the event mask EPOLLIN in the fourth argument means 'readable data available'.
// In other words: tell epoll to monitor mWakeEventFd for incoming wake writes
int result = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, mWakeEventFd, & eventItem);
LOG_ALWAYS_FATAL_IF(result != 0, "Could not add wake event fd to epoll instance: %s",
strerror(errno));
for (size_t i = 0; i < mRequests.size(); i++) {
const Request& request = mRequests.valueAt(i);
struct epoll_event eventItem;
// Build the epoll_event for this previously registered request (an fd added via Looper::addFd())
request.initEventItem(&eventItem);
// Re-register the request's fd with the new epoll instance so its events keep being monitored
int epollResult = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, request.fd, & eventItem);
if (epollResult < 0) {
ALOGE("Error adding epoll events for fd %d while rebuilding epoll set: %s",
request.fd, strerror(errno));
}
}
}
So the wake-up ultimately works by having epoll monitor I/O on the wake descriptor. The waiting side is MessageQueue.next(): it calls nativePollOnce(), which blocks in epoll_wait() until one of the monitored descriptors (the wake eventfd, or any fd added via addFd()) reports an event.
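Putting the two halves together, here is a small self-contained demo (my own sketch, not framework code; build with -std=c++11 -pthread) of the pattern Looper implements: one thread blocks in epoll_wait() on an eventfd, another thread writes to that eventfd after a delay, and that write is what ends the wait. The timeout argument has the same meaning as in pollInner(): -1 blocks indefinitely, 0 returns immediately, and a positive value waits at most that many milliseconds.
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <thread>
#include <chrono>

int main() {
    int wakeFd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);   // like mWakeEventFd
    int epollFd = epoll_create1(EPOLL_CLOEXEC);            // like mEpollFd

    epoll_event item;
    std::memset(&item, 0, sizeof(item));
    item.events = EPOLLIN;           // wake when the fd becomes readable
    item.data.fd = wakeFd;
    epoll_ctl(epollFd, EPOLL_CTL_ADD, wakeFd, &item);      // like rebuildEpollLocked()

    // A "sender" thread plays the role of Handler.sendMessage() + nativeWake().
    std::thread sender([wakeFd] {
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
        uint64_t inc = 1;
        write(wakeFd, &inc, sizeof(inc));                  // like Looper::wake()
    });

    // The "looper" thread blocks here, like nativePollOnce() -> epoll_wait().
    epoll_event events[16];
    std::printf("blocking in epoll_wait...\n");
    int n = epoll_wait(epollFd, events, 16, -1);           // -1: block until an event arrives
    std::printf("woken up, %d fd(s) ready\n", n);

    // Drain the eventfd so the next epoll_wait() would block again, like Looper::awoken().
    uint64_t value;
    read(wakeFd, &value, sizeof(value));

    sender.join();
    close(epollFd);
    close(wakeFd);
    return 0;
}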
How MessageQueue retrieves messages and waits
Message next() {
// Return here if the message loop has already quit and been disposed.
// This can happen if the application tries to restart a looper after quit
// which is not supported.
final long ptr = mPtr;
// mPtr is 0 once the queue has quit and been disposed
if (ptr == 0) {
return null;
}
int pendingIdleHandlerCount = -1; // -1 only during first iteration
int nextPollTimeoutMillis = 0;
// Loop until a message is ready to be returned
for (;;) {
if (nextPollTimeoutMillis != 0) {
// About to block: flush any pending Binder commands to the kernel driver first
Binder.flushPendingCommands();
}
// Block here, with a timeout. Delayed messages (postDelayed() and friends) work precisely by waiting here until the head message is due
nativePollOnce(ptr, nextPollTimeoutMillis);
synchronized (this) {
// Try to retrieve the next message. Return if found.
final long now = SystemClock.uptimeMillis();
Message prevMsg = null;
Message msg = mMessages;
if (msg != null && msg.target == null) {
// A message with a null target is a sync barrier: skip ahead to the next asynchronous message in the queue
// Stalled by a barrier. Find the next asynchronous message in the queue.
do {
prevMsg = msg;
msg = msg.next;
} while (msg != null && !msg.isAsynchronous());
}
if (msg != null) {
// Is the next message due yet?
if (now < msg.when) {
// Next message is not ready. Set a timeout to wake up when it is ready.
nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
} else {
// Got a message.
mBlocked = false;
// Unlink the message: if there is a previous node, point it past the current one
if (prevMsg != null) {
prevMsg.next = msg.next;
} else {
// No previous node: the message was the head of the queue
mMessages = msg.next;
}
// Detach the returned message from the list
msg.next = null;
if (DEBUG) Log.v(TAG, "Returning message: " + msg);
msg.markInUse();
return msg;
}
} else {
// No more messages.
nextPollTimeoutMillis = -1;
}
// Process the quit message now that all pending messages have been handled.
if (mQuitting) {
dispose();
return null;
}
// If first time idle, then get the number of idlers to run.
// Idle handles only run if the queue is empty or if the first message
// in the queue (possibly a barrier) is due to be handled in the future.
if (pendingIdleHandlerCount < 0
&& (mMessages == null || now < mMessages.when)) {
pendingIdleHandlerCount = mIdleHandlers.size();
}
if (pendingIdleHandlerCount <= 0) {
// No idle handlers to run. Loop and wait some more.
mBlocked = true;
continue;
}
if (mPendingIdleHandlers == null) {
mPendingIdleHandlers = new IdleHandler[Math.max(pendingIdleHandlerCount, 4)];
}
mPendingIdleHandlers = mIdleHandlers.toArray(mPendingIdleHandlers);
}
// Run the idle handlers.
// We only ever reach this code block during the first iteration.
for (int i = 0; i < pendingIdleHandlerCount; i++) {
final IdleHandler idler = mPendingIdleHandlers[i];
mPendingIdleHandlers[i] = null; // release the reference to the handler
boolean keep = false;
try {
keep = idler.queueIdle();
} catch (Throwable t) {
Log.wtf(TAG, "IdleHandler threw exception", t);
}
if (!keep) {
synchronized (this) {
mIdleHandlers.remove(idler);
}
}
}
// Reset the idle handler count to 0 so we do not run them again.
pendingIdleHandlerCount = 0;
// While calling an idle handler, a new message could have been delivered
// so go back and look again for a pending message without waiting.
nextPollTimeoutMillis = 0;
}
}
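The timeout that next() hands to nativePollOnce() is what makes postDelayed() work: it is the time remaining until the head message is due, -1 if the queue is empty, or 0 if the head is already due. A minimal sketch of that computation (the Msg type is illustrative, not the framework's):
#include <climits>
#include <cstdint>
#include <algorithm>
#include <cstdio>

struct Msg { int64_t when; Msg* next; };

// How long should the looper thread block before the head message is due?
// Mirrors the nextPollTimeoutMillis logic in next():
//   queue empty            -> -1 (epoll_wait blocks until an explicit wake)
//   head already due       ->  0 (epoll_wait returns immediately)
//   head due in the future -> remaining milliseconds, capped at INT_MAX
int computePollTimeout(const Msg* head, int64_t nowMs) {
    if (head == nullptr) return -1;
    int64_t delta = head->when - nowMs;
    if (delta <= 0) return 0;
    return (int)std::min<int64_t>(delta, INT_MAX);
}

int main() {
    Msg m{1500, nullptr};
    std::printf("%d\n", computePollTimeout(nullptr, 1000)); // -1: nothing to do, block
    std::printf("%d\n", computePollTimeout(&m, 1000));      // 500: postDelayed-style wait
    std::printf("%d\n", computePollTimeout(&m, 2000));      // 0: message is already due
    return 0;
}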
Under the hood, nativePollOnce() calls android_os_MessageQueue_nativePollOnce() in android_os_MessageQueue.cpp, which calls NativeMessageQueue::pollOnce(), which in turn calls Looper::pollOnce() in Looper.cpp.
Looper::pollOnce() looks like this:
int Looper::pollOnce(int timeoutMillis, int* outFd, int* outEvents, void** outData) {
int result = 0;
for (;;) {
while (mResponseIndex < mResponses.size()) {
const Response& response = mResponses.itemAt(mResponseIndex++);
int ident = response.request.ident;
if (ident >= 0) {
int fd = response.request.fd;
int events = response.events;
void* data = response.request.data;
#if DEBUG_POLL_AND_WAKE
ALOGD("%p ~ pollOnce - returning signalled identifier %d: "
"fd=%d, events=0x%x, data=%p",
this, ident, fd, events, data);
#endif
if (outFd != NULL) *outFd = fd;
if (outEvents != NULL) *outEvents = events;
if (outData != NULL) *outData = data;
return ident;
}
}
if (result != 0) {
#if DEBUG_POLL_AND_WAKE
ALOGD("%p ~ pollOnce - returning result %d", this, result);
#endif
if (outFd != NULL) *outFd = 0;
if (outEvents != NULL) *outEvents = 0;
if (outData != NULL) *outData = NULL;
return result;
}
result = pollInner(timeoutMillis);
}
}
We will not dwell on this function; what matters is that it ends up calling pollInner().
Looper::pollInner() in Looper.cpp:
int Looper::pollInner(int timeoutMillis) {
#if DEBUG_POLL_AND_WAKE
ALOGD("%p ~ pollOnce - waiting: timeoutMillis=%d", this, timeoutMillis);
#endif
// Adjust the timeout based on when the next message is due.
if (timeoutMillis != 0 && mNextMessageUptime != LLONG_MAX) {
nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
int messageTimeoutMillis = toMillisecondTimeoutDelay(now, mNextMessageUptime);
if (messageTimeoutMillis >= 0
&& (timeoutMillis < 0 || messageTimeoutMillis < timeoutMillis)) {
timeoutMillis = messageTimeoutMillis;
}
#if DEBUG_POLL_AND_WAKE
ALOGD("%p ~ pollOnce - next message in %" PRId64 "ns, adjusted timeout: timeoutMillis=%d",
this, mNextMessageUptime - now, timeoutMillis);
#endif
}
// Poll.
int result = POLL_WAKE;
mResponses.clear();
mResponseIndex = 0;
// We are about to idle.
mPolling = true;
struct epoll_event eventItems[EPOLL_MAX_EVENTS];
// Point 1: this is where the thread actually waits.
// epoll_wait() blocks until one of the descriptors registered on mEpollFd (including
// mWakeEventFd) reports an event, or until the timeout expires. Its four arguments:
//   1st: the epoll instance handle
//   2nd: eventItems, the array that receives the ready events
//   3rd: the maximum number of events returned per call
//   4th: the timeout in milliseconds; -1 means block indefinitely until I/O wakes it up
// When the Java MessageQueue has no more messages it passes -1, so the thread just sleeps here.
int eventCount = epoll_wait(mEpollFd, eventItems, EPOLL_MAX_EVENTS, timeoutMillis);
// No longer idling.
mPolling = false;
// Acquire lock.
mLock.lock();
// Rebuild epoll set if needed.
if (mEpollRebuildRequired) {
mEpollRebuildRequired = false;
rebuildEpollLocked();
goto Done;
}
// Check for poll error.
if (eventCount < 0) {
if (errno == EINTR) {
goto Done;
}
ALOGW("Poll failed with an unexpected error: %s", strerror(errno));
result = POLL_ERROR;
goto Done;
}
// Check for poll timeout.
if (eventCount == 0) {
#if DEBUG_POLL_AND_WAKE
ALOGD("%p ~ pollOnce - timeout", this);
#endif
result = POLL_TIMEOUT;
goto Done;
}
// Handle all events.
#if DEBUG_POLL_AND_WAKE
ALOGD("%p ~ pollOnce - handling events from %d fds", this, eventCount);
#endif
// Point 2: eventCount > 0 means that many file descriptors have pending readable events
for (int i = 0; i < eventCount; i++) {
int fd = eventItems[i].data.fd;
uint32_t epollEvents = eventItems[i].events;
// The wake eventfd fired, i.e. someone called wake()
if (fd == mWakeEventFd) {
// EPOLLIN: the descriptor is readable
if (epollEvents & EPOLLIN) {
// Drain the eventfd; at this point the thread has effectively been woken up
awoken();
} else {
ALOGW("Ignoring unexpected epoll events 0x%x on wake event fd.", epollEvents);
}
} else {
ssize_t requestIndex = mRequests.indexOfKey(fd);
if (requestIndex >= 0) {
int events = 0;
if (epollEvents & EPOLLIN) events |= EVENT_INPUT;
if (epollEvents & EPOLLOUT) events |= EVENT_OUTPUT;
if (epollEvents & EPOLLERR) events |= EVENT_ERROR;
if (epollEvents & EPOLLHUP) events |= EVENT_HANGUP;
pushResponse(events, mRequests.valueAt(requestIndex));
} else {
ALOGW("Ignoring unexpected epoll events 0x%x on fd %d that is "
"no longer registered.", epollEvents, fd);
}
}
}
Done: ;
// Invoke pending message callbacks.
mNextMessageUptime = LLONG_MAX;
while (mMessageEnvelopes.size() != 0) {
nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
const MessageEnvelope& messageEnvelope = mMessageEnvelopes.itemAt(0);
if (messageEnvelope.uptime <= now) {
// Remove the envelope from the list.
// We keep a strong reference to the handler until the call to handleMessage
// finishes. Then we drop it so that the handler can be deleted *before*
// we reacquire our lock.
{ // obtain handler
sp<MessageHandler> handler = messageEnvelope.handler;
Message message = messageEnvelope.message;
mMessageEnvelopes.removeAt(0);
mSendingMessage = true;
mLock.unlock();
#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
ALOGD("%p ~ pollOnce - sending message: handler=%p, what=%d",
this, handler.get(), message.what);
#endif
handler->handleMessage(message);
} // release handler
mLock.lock();
mSendingMessage = false;
result = POLL_CALLBACK;
} else {
// The last message left at the head of the queue determines the next wakeup time.
mNextMessageUptime = messageEnvelope.uptime;
break;
}
}
// Release lock.
mLock.unlock();
// Invoke all response callbacks.
for (size_t i = 0; i < mResponses.size(); i++) {
Response& response = mResponses.editItemAt(i);
if (response.request.ident == POLL_CALLBACK) {
int fd = response.request.fd;
int events = response.events;
void* data = response.request.data;
#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
ALOGD("%p ~ pollOnce - invoking fd event callback %p: fd=%d, events=0x%x, data=%p",
this, response.request.callback.get(), fd, events, data);
#endif
// Invoke the callback. Note that the file descriptor may be closed by
// the callback (and potentially even reused) before the function returns so
// we need to be a little careful when removing the file descriptor afterwards.
int callbackResult = response.request.callback->handleEvent(fd, events, data);
if (callbackResult == 0) {
removeFd(fd, response.request.seq);
}
// Clear the callback reference in the response structure promptly because we
// will not clear the response vector itself until the next poll.
response.request.callback.clear();
result = POLL_CALLBACK;
}
}
return result;
}
This function is long, but only two points matter. The first is the call to epoll_wait(): this is where the epoll mechanism parks the thread.
The second point is the for loop that runs after the lock is taken. eventCount, returned by epoll_wait(), is the number of descriptors with pending events: if it is 0 nothing was signalled, and if it is greater than 0 the loop reads the wake data that was written (awoken()) and records events for any other registered descriptors.
There are two cases in which epoll_wait()'s result skips that event-handling loop and jumps straight to the Done label:
- eventCount < 0: an error occurred (or the wait was interrupted), so jump to Done
- eventCount == 0: the poll timed out, so jump to Done
The Done section mainly delivers the native layer's own messages (mMessageEnvelopes) whose time has come, calling each one's MessageHandler::handleMessage().
The remaining part invokes the callbacks of all pending responses, i.e. responses whose ident is POLL_CALLBACK.
Responses are collected (pushResponse()) for every registered request, that is, every fd added through addFd(), that reported an event, and they are then processed after the Done label of pollInner().
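The dispatch pattern in that loop, the wake descriptor versus other registered descriptors, can be shown with a small standalone sketch (my own demo, not the framework's Request/Response machinery): a second descriptor, here one end of a pipe, is registered alongside the wake eventfd, and each ready descriptor is handled according to which one it is.
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstring>

static void addToEpoll(int epollFd, int fd) {
    epoll_event item;
    std::memset(&item, 0, sizeof(item));
    item.events = EPOLLIN;
    item.data.fd = fd;
    epoll_ctl(epollFd, EPOLL_CTL_ADD, fd, &item);
}

int main() {
    int epollFd = epoll_create1(EPOLL_CLOEXEC);
    int wakeFd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
    int pipeFds[2];
    pipe(pipeFds);                       // pipeFds[0] plays the role of an fd added via addFd()

    addToEpoll(epollFd, wakeFd);
    addToEpoll(epollFd, pipeFds[0]);

    // Make both descriptors readable before polling.
    uint64_t inc = 1;
    write(wakeFd, &inc, sizeof(inc));            // a wake() call
    write(pipeFds[1], "hi", 2);                  // data arriving on the extra fd

    epoll_event events[8];
    int n = epoll_wait(epollFd, events, 8, -1);
    for (int i = 0; i < n; i++) {
        int fd = events[i].data.fd;
        if (fd == wakeFd) {
            uint64_t value;
            read(wakeFd, &value, sizeof(value)); // just drain it, like awoken()
            std::printf("wake fd fired\n");
        } else {
            char buf[16];
            ssize_t len = read(fd, buf, sizeof(buf));
            std::printf("callback for fd %d, %zd bytes\n", fd, len);  // like a Response callback
        }
    }
    close(pipeFds[0]); close(pipeFds[1]); close(wakeFd); close(epollFd);
    return 0;
}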
This is why the main thread can sit in Looper.loop() forever without spinning or freezing: when the queue is empty, or the next message is not yet due, loop()'s call into next() blocks in epoll_wait(); when another thread enqueues a message, it writes to the wake eventfd, epoll_wait() returns, and the main thread resumes, pulls the message off the queue and processes it.
Why doesn't Looper's endless loop freeze the app?
First, understand what ANR is.
ANR (Application Not Responding) is raised when an input event or a Message is not handled in time. For an input event, for example, a deadline is recorded when it is dispatched; if handling has not finished within 5 seconds, the Handler-posted timeout message fires and the ANR is reported.
Input (touch/key) event: no response within 5s
Broadcast: not finished within 10s
Service: not finished within 20s
These events are all ultimately delivered as Messages on the main thread's queue. Frame drawing, for example, goes through Choreographer: the vsync signal is turned into a message that runs doFrame(), which dispatches the registered callbacks in doCallbacks().
ANR itself is triggered by a message sent through a Handler, so Looper's endless loop is not the kind of blocking that causes it; the loop itself does not produce ANRs.
Looper's "endless loop" simply means that when the thread has nothing to do it gives up the CPU and sleeps in a blocked state, to be woken as soon as a message arrives.
So Looper's endless loop and ANR are unrelated concerns. Their only connection is that the ANR timeout is itself a Message sent through a Handler, and Looper is what pulls messages, including that one, off the queue and dispatches them.