Android Graphics Rendering Internals (Part 2)

Preface

In the previous article, 《Android圖形渲染原理(上)》, we went through the image consumers in detail and saw how image data in Android is consumed by SurfaceFlinger, the HWComposer, or OpenGL ES. So how is that image data produced in the first place? This article takes a detailed look at the image producers in Android: OpenGL ES, Skia, and Vulkan, the three most important "brushes" in Android.

Image Producers

OpenGL ES

So what is OpenGL? OpenGL is a graphics programming interface. For developers it is essentially a set of C APIs; by calling these functions you can drive the graphics hardware to do graphics work. Although OpenGL defines the API, it does not implement it; the implementation is provided by the graphics driver. As explained in the previous article, the graphics driver is the entry point through which other modules talk to the GPU: developers issue rendering commands (known as draw calls) through the OpenGL API, the driver translates those commands into data the GPU can understand, and then tells the GPU to read the data and act on it. And what is OpenGL ES? It is a trimmed-down version of OpenGL designed for embedded and other less powerful devices, and it is largely consistent with OpenGL. Android has enabled hardware acceleration by default since 4.0, which means OpenGL ES is used by default to generate and render graphics.

Let's look at how to use OpenGL ES.

How do we use OpenGL ES?

To use OpenGL ES on Android we first need to understand EGL. Although OpenGL is cross-platform, it cannot be used directly on every platform, because each platform has a different window system; EGL is the bridging layer that adapts OpenGL ES to Android's native window system.

OpenGL ES 定義了平臺(tái)無關(guān)的 GL 繪圖指令,EGL則定義了控制 displays公般,contexts 以及 surfaces 的統(tǒng)一的平臺(tái)接口


So how do we use EGL and OpenGL ES to produce graphics? It is fairly simple and boils down to three steps:

  1. EGL initializes the Display, Context, and Surface
  2. OpenGL ES issues the drawing commands
  3. EGL submits the drawn buffer

我們?cè)敿?xì)來看一下每一步的流程

1. EGL initialization: we mainly need to initialize three things: the Display, the Context, and the Surface.

  • Display (EGLDisplay) is an abstraction of the actual display device
// Create the connection to the native window system
EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
// Initialize the display
eglInitialize(display, NULL, NULL);
  • Context (EGLContext) stores state information for OpenGL ES drawing
/* create an EGL rendering context */
context = eglCreateContext(display, config, EGL_NO_CONTEXT, NULL);
  • Surface (EGLSurface) is an abstraction of the memory area that stores the rendered image
// Choose the surface configuration
eglChooseConfig(display, attribute_list, &config, 1, &num_config);
// Create the native window
native_window = createNativeWindow();
// Create the surface
surface = eglCreateWindowSurface(display, config, native_window, NULL);
  • After initialization, bind the context (make it current)
// Make the context current
eglMakeCurrent(display, surface, surface, context);

2. OpenGL ES issues the drawing commands: drawing is done through the gl_*() family of API functions. (The snippet below uses the classic immediate-mode glBegin/glEnd calls for brevity; OpenGL ES itself drops immediate mode and draws from vertex arrays/buffers instead.)

// Draw points
glBegin(GL_POINTS);
    glVertex3f(0.7f, -0.5f, 0.0f); // the arguments are a 3D coordinate
    glVertex3f(0.6f, -0.7f, 0.0f);
    glVertex3f(0.6f, -0.8f, 0.0f);
glEnd();
// Draw a line strip
glBegin(GL_LINE_STRIP);
    glVertex3f(-1.0f, 1.0f, 0.0f);
    glVertex3f(-0.5f, 0.5f, 0.0f);
    glVertex3f(-0.7f, 0.5f, 0.0f);
glEnd();
//……

3. EGL submits the drawn buffer: eglSwapBuffers() swaps the two buffers of the double-buffered surface.

EGLBoolean res = eglSwapBuffers(mDisplay, mSurface);

Once eglSwapBuffers swaps the buffers, the graphics hardware processes the image in the buffer, and at that point our image can be shown on screen.

Let's look at a complete usage demo.

#include <stdlib.h>
#include <unistd.h>
#include <EGL/egl.h>
#include <GLES/gl.h>
typedef ... NativeWindowType;
extern NativeWindowType createNativeWindow(void);
static EGLint const attribute_list[] = {
        EGL_RED_SIZE, 1,
        EGL_GREEN_SIZE, 1,
        EGL_BLUE_SIZE, 1,
        EGL_NONE
};
int main(int argc, char ** argv)
{
        EGLDisplay display;
        EGLConfig config;
        EGLContext context;
        EGLSurface surface;
        NativeWindowType native_window;
        EGLint num_config;

        /* get an EGL display connection */
        display = eglGetDisplay(EGL_DEFAULT_DISPLAY);

        /* initialize the EGL display connection */
        eglInitialize(display, NULL, NULL);

        /* get an appropriate EGL frame buffer configuration */
        eglChooseConfig(display, attribute_list, &config, 1, &num_config);

        /* create an EGL rendering context */
        context = eglCreateContext(display, config, EGL_NO_CONTEXT, NULL);

        /* create a native window */
        native_window = createNativeWindow();

        /* create an EGL window surface */
        surface = eglCreateWindowSurface(display, config, native_window, NULL);

        /* connect the context to the surface */
        eglMakeCurrent(display, surface, surface, context);

        /* clear the color buffer */
        glClearColor(1.0, 1.0, 0.0, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);
        glFlush();

        eglSwapBuffers(display, surface);

        sleep(10);
        return EXIT_SUCCESS;
}

Now that we have covered how EGL and OpenGL ES are used, we can look at how Android draws its UI with them. I'll use two scenarios, the boot animation and hardware acceleration, to explain in detail how OpenGL ES, as an image producer, produces (that is, draws) images.

Playing the Boot Animation with OpenGL ES

When the Android system boots, the init process starts, and init starts services such as Zygote, ServiceManager, and SurfaceFlinger. As SurfaceFlinger starts up, the boot animation starts as well. Let's look first at SurfaceFlinger's initialization function.

//文件-->/frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::init() {
    ...
    mStartBootAnimThread = new StartBootAnimThread();
    if (mStartBootAnimThread->Start() != NO_ERROR) {
        ALOGE("Run StartBootAnimThread failed!");
    }
}

//文件-->/frameworks/native/services/surfaceflinger/StartBootAnimThread.cpp
status_t StartBootAnimThread::Start() {
    return run("SurfaceFlinger::StartBootAnimThread", PRIORITY_NORMAL);
}

bool StartBootAnimThread::threadLoop() {
    property_set("service.bootanim.exit", "0");
    property_set("ctl.start", "bootanim");
    // Exit immediately
    return false;
}

From the code above we can see that SurfaceFlinger's init function starts the StartBootAnimThread thread, which sends a notification via property_set. This is a socket-based IPC mechanism; if you are interested in Android IPC, see my article 《掌握Android進(jìn)程間通信機(jī)制》, I won't expand on it here. The init process receives the bootanim notification and then starts our animation process, BootAnimation.
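
To make the other side of this handshake concrete, here is a minimal sketch (my own simplification, not the exact AOSP code) of how the animation process can poll the service.bootanim.exit property that is set to 0 above and flipped to 1 by the system once boot finishes; property_get comes from cutils/properties.h.

#include <cutils/properties.h>
#include <stdlib.h>

// Sketch: poll the exit property once per frame; when the system flips
// "service.bootanim.exit" to 1, the animation loop should stop.
static bool shouldExitBootAnimation() {
    char value[PROPERTY_VALUE_MAX];
    property_get("service.bootanim.exit", value, "0");
    return atoi(value) != 0;
}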

With that background, let's look at the BootAnimation class, which contains all of Android's boot-animation logic. We'll start with the constructor and onFirstRef, the first two functions executed when the class is instantiated:

//文件-->/frameworks/base/cmds/bootanimation/BootAnimation.cpp
BootAnimation::BootAnimation() : Thread(false), mClockEnabled(true), mTimeIsAccurate(false),
        mTimeFormat12Hour(false), mTimeCheckThread(NULL) {
    // Create the SurfaceComposerClient
    mSession = new SurfaceComposerClient();
    //……
}

void BootAnimation::onFirstRef() {
    status_t err = mSession->linkToComposerDeath(this);
    if (err == NO_ERROR) {
        run("BootAnimation", PRIORITY_DISPLAY);
    }
}

The constructor creates a SurfaceComposerClient, the client-side proxy for SurfaceFlinger, through which we can communicate with SurfaceFlinger. After the constructor finishes, onFirstRef() runs and starts the BootAnimation thread.

Next, let's look at the BootAnimation thread's initialization function, readyToRun.

//文件-->/frameworks/base/cmds/bootanimation/BootAnimation.cpp
status_t BootAnimation::readyToRun() {
    mAssets.addDefaultAssets();

    sp<IBinder> dtoken(SurfaceComposerClient::getBuiltInDisplay(
            ISurfaceComposer::eDisplayIdMain));
    DisplayInfo dinfo;
    // Get the display information
    status_t status = SurfaceComposerClient::getDisplayInfo(dtoken, &dinfo);
    if (status)
        return -1;

    // Ask SurfaceFlinger to create a Surface; on success a SurfaceControl proxy is returned
    sp<SurfaceControl> control = session()->createSurface(String8("BootAnimation"),
            dinfo.w, dinfo.h, PIXEL_FORMAT_RGB_565);

    SurfaceComposerClient::openGlobalTransaction();
    // Set this layer's z-order in SurfaceFlinger
    control->setLayer(0x40000000);

    // Get the Surface
    sp<Surface> s = control->getSurface();

    // The EGL initialization flow starts here
    const EGLint attribs[] = {
            EGL_RED_SIZE,   8,
            EGL_GREEN_SIZE, 8,
            EGL_BLUE_SIZE,  8,
            EGL_DEPTH_SIZE, 0,
            EGL_NONE
    };
    EGLint w, h;
    EGLint numConfigs;
    EGLConfig config;
    EGLSurface surface;
    EGLContext context;

    // Step 1: get the Display
    EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    // Step 2: initialize EGL
    eglInitialize(display, 0, 0);
    // Step 3: choose a configuration
    eglChooseConfig(display, attribs, &config, 1, &numConfigs);
    // Step 4: pass in the Surface created via SurfaceFlinger and build the EGLSurface from it
    surface = eglCreateWindowSurface(display, config, s.get(), NULL);
    // Step 5: create the EGL context
    context = eglCreateContext(display, config, NULL, NULL);
    // Step 6: make the EGL context current
    if (eglMakeCurrent(display, surface, surface, context) == EGL_FALSE)
        return NO_INIT;
    //……
}

From readyToRun we can see that it mainly does two things: initialize the Surface and initialize EGL. The EGL initialization follows the same flow described in the OpenGL ES usage section above, so I won't repeat it; here is a brief outline of the Surface initialization (the details are covered in the next article on graphic buffers). The steps are:

  • Create a SurfaceComposerClient
  • Ask SurfaceFlinger, through the SurfaceComposerClient, to create a Surface; a SurfaceControl is returned
  • With the SurfaceControl we can set properties such as the layer's z-order, and obtain the Surface itself
  • Once we have the Surface, bind it into EGL

With the Surface created and EGL set up, we can now use OpenGL to produce the image, that is, the boot animation. Let's look at how the animation is played in the thread's threadLoop function.

//文件-->/frameworks/base/cmds/bootanimation/BootAnimation.cpp
bool BootAnimation::threadLoop()
{
    bool r;
    if (mZipFileName.isEmpty()) {
        r = android();   // default Android animation
    } else {
        r = movie();     // custom animation
    }
    // Cleanup after the animation finishes
    eglMakeCurrent(mDisplay, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
    eglDestroyContext(mDisplay, mContext);
    eglDestroySurface(mDisplay, mSurface);
    mFlingerSurface.clear();
    mFlingerSurfaceControl.clear();
    eglTerminate(mDisplay);
    eglReleaseThread();
    IPCThreadState::self()->stopProcess();
    return r;
}

The function checks whether a custom boot-animation file exists: if not, the default animation is played, otherwise the custom one, and once playback finishes everything is released and cleaned up. The default and custom animations are played in much the same way, so let's take the custom animation as the example and look at the implementation.

//文件-->/frameworks/base/cmds/bootanimation/BootAnimation.cpp
bool BootAnimation::movie()
{
    // Load the animation file from its path
    Animation* animation = loadAnimation(mZipFileName);
    if (animation == NULL)
        return false;

    //……

    
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    // Use OpenGL to reset state and clear the screen
    glShadeModel(GL_FLAT);
    glDisable(GL_DITHER);
    glDisable(GL_SCISSOR_TEST);
    glDisable(GL_BLEND);

    glBindTexture(GL_TEXTURE_2D, 0);
    glEnable(GL_TEXTURE_2D);
    glTexEnvx(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    //……

    // Play the animation
    playAnimation(*animation);

    //……
    
    // Release the animation resources
    releaseAnimation(animation);

    return false;
}

movie does the following:

  1. Load the animation from the given file path
  2. Use OpenGL to clear the screen
  3. Call playAnimation to play the animation
  4. After playback stops, release the resources via releaseAnimation

Let's continue with playAnimation.

//文件-->/frameworks/base/cmds/bootanimation/BootAnimation.cpp
bool BootAnimation::playAnimation(const Animation& animation)
{
    const size_t pcount = animation.parts.size();
    nsecs_t frameDuration = s2ns(1) / animation.fps;
    const int animationX = (mWidth - animation.width) / 2;
    const int animationY = (mHeight - animation.height) / 2;

    // Iterate over the animation parts
    for (size_t i=0 ; i<pcount ; i++) {
        const Animation::Part& part(animation.parts[i]);
        const size_t fcount = part.frames.size();
        glBindTexture(GL_TEXTURE_2D, 0);

        // Handle animation package
        if (part.animation != NULL) {
            playAnimation(*part.animation);
            if (exitPending())
                break;
            continue; //to next part
        }
        
        // Loop over this animation part
        for (int r=0 ; !part.count || r<part.count ; r++) {
            // Exit any non playuntil complete parts immediately
            if(exitPending() && !part.playUntilComplete)
                break;

            
            // Start the audio thread and play the audio clip
            if (r == 0 && part.audioData && playSoundsAllowed()) {               
                if (mInitAudioThread != nullptr) {
                    mInitAudioThread->join();
                }
                audioplay::playClip(part.audioData, part.audioLength);
            }

            glClearColor(
                    part.backgroundColor[0],
                    part.backgroundColor[1],
                    part.backgroundColor[2],
                    1.0f);
            // Draw the boot-animation frame textures in a loop, paced by frameDuration
            for (size_t j=0 ; j<fcount && (!exitPending() || part.playUntilComplete) ; j++) {
                const Animation::Frame& frame(part.frames[j]);
                nsecs_t lastFrame = systemTime();

                if (r > 0) {
                    glBindTexture(GL_TEXTURE_2D, frame.tid);
                } else {
                    if (part.count != 1) {
                        // Generate the texture
                        glGenTextures(1, &frame.tid);
                        // Bind the texture
                        glBindTexture(GL_TEXTURE_2D, frame.tid);
                        glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                        glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                    }
                    int w, h;
                    initTexture(frame.map, &w, &h);
                }

                const int xc = animationX + frame.trimX;
                const int yc = animationY + frame.trimY;
                Region clearReg(Rect(mWidth, mHeight));
                clearReg.subtractSelf(Rect(xc, yc, xc+frame.trimWidth, yc+frame.trimHeight));
                if (!clearReg.isEmpty()) {
                    Region::const_iterator head(clearReg.begin());
                    Region::const_iterator tail(clearReg.end());
                    glEnable(GL_SCISSOR_TEST);
                    while (head != tail) {
                        const Rect& r2(*head++);
                        glScissor(r2.left, mHeight - r2.bottom, r2.width(), r2.height());
                        glClear(GL_COLOR_BUFFER_BIT);
                    }
                    glDisable(GL_SCISSOR_TEST);
                }
                // Draw the texture for this frame
                glDrawTexiOES(xc, mHeight - (yc + frame.trimHeight),
                              0, frame.trimWidth, frame.trimHeight);
                if (mClockEnabled && mTimeIsAccurate && validClock(part)) {
                    drawClock(animation.clockFont, part.clockPosX, part.clockPosY);
                }

                eglSwapBuffers(mDisplay, mSurface);

                nsecs_t now = systemTime();
                nsecs_t delay = frameDuration - (now - lastFrame);
                //ALOGD("%lld, %lld", ns2ms(now - lastFrame), ns2ms(delay));
                lastFrame = now;

                if (delay > 0) {
                    struct timespec spec;
                    spec.tv_sec  = (now + delay) / 1000000000;
                    spec.tv_nsec = (now + delay) % 1000000000;
                    int err;
                    do {
                        err = clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &spec, NULL);
                    } while (err<0 && errno == EINTR);
                }

                checkExit();
            }
            // Sleep for the part's pause, measured in frame durations
            usleep(part.pause * ns2us(frameDuration));

            // Check whether the animation should exit
            if(exitPending() && !part.count)
                break;
        }

    }

    // Release the textures
    for (const Animation::Part& part : animation.parts) {
        if (part.count != 1) {
            const size_t fcount = part.frames.size();
            for (size_t j = 0; j < fcount; j++) {
                const Animation::Frame& frame(part.frames[j]);
                glDeleteTextures(1, &frame.tid);
            }
        }
    }

    // Stop and tear down the audio playback
    audioplay::setPlaying(false);
    audioplay::destroy();

    return true;
}

From the source above we can see how playAnimation plays the animation: it simply calls glDrawTexiOES at a fixed rate to draw each frame's image as a texture, while the audio module plays the accompanying sound.
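
The pacing logic in that loop is worth isolating: each iteration records when the frame started and then sleeps for whatever is left of the frame budget. Below is a minimal sketch of the same idea (my own simplification, not the AOSP code), where frameDurationNs is the nanoseconds-per-frame value computed from animation.fps above.

#include <errno.h>
#include <stdint.h>
#include <time.h>

// Sketch: sleep until the next frame deadline so frames come out at roughly
// 1/frameDurationNs frames per second, no matter how long drawing took.
static void waitForNextFrame(int64_t frameStartNs, int64_t frameDurationNs) {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    int64_t nowNs = now.tv_sec * 1000000000LL + now.tv_nsec;
    int64_t deadlineNs = frameStartNs + frameDurationNs;
    if (deadlineNs <= nowNs) return;  // already late: start the next frame immediately

    struct timespec deadline;
    deadline.tv_sec  = deadlineNs / 1000000000LL;
    deadline.tv_nsec = deadlineNs % 1000000000LL;
    int err;
    do {
        // absolute-time sleep, retried if a signal interrupts it
        err = clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);
    } while (err == EINTR);
}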

That concludes the boot-animation example and one way of playing an animation with OpenGL ES. Let's move on to the second example: how an Activity's UI uses OpenGL ES for hardware acceleration, i.e. hardware drawing.

Hardware Acceleration with OpenGL ES

As we know, displaying an Activity's UI goes through three phases: measure, layout, and draw. The draw phase can be done either in software or in hardware, and hardware drawing is performed with OpenGL ES. Let's look at how OpenGL ES draws in the hardware path; the entry point is ViewRootImpl's performDraw function.

//文件-->/frameworks/base/core/java/android/view/ViewRootImpl.java
private void performDraw() {
    //……
    draw(fullRedrawNeeded);
    //……
}

private void draw(boolean fullRedrawNeeded) {
    Surface surface = mSurface;
    if (!surface.isValid()) {
        return;
    }

    //……

    if (!dirty.isEmpty() || mIsAnimating || accessibilityFocusDirty) {
        if (!dirty.isEmpty() || mIsAnimating || accessibilityFocusDirty) {
            if (mAttachInfo.mThreadedRenderer != null && mAttachInfo.mThreadedRenderer.isEnabled()) {
                
                //……

                // Hardware rendering
                mAttachInfo.mThreadedRenderer.draw(mView, mAttachInfo, this);

            } else {
                
                //……

                // Software rendering
                if (!drawSoftware(surface, mAttachInfo, xOffset, yOffset, scalingRequired, dirty)) {
                    return;
                }
            }
        }

        //……
    }

    //……
}

The code above shows that hardware rendering goes through mThreadedRenderer.draw. Before analyzing that function we need to know what ThreadedRenderer is. It is created before the measure, layout, and draw phases: when setContentView is called in an Activity's onCreate callback, ViewRootImpl.setView eventually runs, and that is where the ThreadedRenderer is created.

//文件-->/frameworks/base/core/java/android/view/ViewRootImpl.java
public void setView(View view, WindowManager.LayoutParams attrs, View panelParentView) {
    synchronized (this) {
        if (mView == null) {
            mView = view;

            //……            
            if (mSurfaceHolder == null) {
                enableHardwareAcceleration(attrs);
            }

            //……
        }
    }
}

private void enableHardwareAcceleration(WindowManager.LayoutParams attrs) {
    mAttachInfo.mHardwareAccelerated = false;
    mAttachInfo.mHardwareAccelerationRequested = false;

    // Hardware acceleration is not enabled in compatibility mode
    if (mTranslator != null) return;

    
    final boolean hardwareAccelerated =
            (attrs.flags & WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED) != 0;

    if (hardwareAccelerated) {
        if (!ThreadedRenderer.isAvailable()) {
            return;
        }

        //……

        if (fakeHwAccelerated) {
            //……
        } else if (!ThreadedRenderer.sRendererDisabled
                || (ThreadedRenderer.sSystemRendererDisabled && forceHwAccelerated)) {
            //……
            // Create the ThreadedRenderer
            mAttachInfo.mThreadedRenderer = ThreadedRenderer.create(mContext, translucent,
                    attrs.getTitle().toString());
            if (mAttachInfo.mThreadedRenderer != null) {
                mAttachInfo.mHardwareAccelerated =
                        mAttachInfo.mHardwareAccelerationRequested = true;
            }
        }
    }
}

As we can see, when ViewRootImpl calls setView it enables hardware acceleration and creates the ThreadedRenderer via ThreadedRenderer.create.

Let's look further into the implementation of the ThreadedRenderer class.

//文件-->/frameworks/base/core/java/android/view/ThreadedRenderer.java
public static ThreadedRenderer create(Context context, boolean translucent, String name) {
    ThreadedRenderer renderer = null;
    if (isAvailable()) {
        renderer = new ThreadedRenderer(context, translucent, name);
    }
    return renderer;
}

ThreadedRenderer(Context context, boolean translucent, String name) {
    //……
    
    // Create the RootRenderNode
    long rootNodePtr = nCreateRootRenderNode();
    mRootNode = RenderNode.adopt(rootNodePtr);
    mRootNode.setClipToBounds(false);
    mIsOpaque = !translucent;
    // Create the RenderProxy
    mNativeProxy = nCreateProxy(translucent, rootNodePtr);
    nSetName(mNativeProxy, name);
    // Start GraphicsStatsService to collect rendering statistics
    ProcessInitializer.sInstance.init(context, mNativeProxy);
    
    loadSystemProperties();
}

The ThreadedRenderer constructor mainly does two things:

  1. It calls the JNI method nCreateRootRenderNode to create the RootRenderNode on the native side. Every View has a corresponding RenderNode, which holds the DisplayList for that View and its child views; the DisplayList consists of rendering commands that OpenGL can understand, each wrapped as an OP.
//文件-->/frameworks/base/core/jni/android_view_ThreadedRenderer.cpp
static jlong android_view_ThreadedRenderer_createRootRenderNode(JNIEnv* env, jobject clazz) {
    RootRenderNode* node = new RootRenderNode(env);
    node->incStrong(0);
    node->setName("RootRenderNode");
    return reinterpret_cast<jlong>(node);
}
  2. It calls the JNI method nCreateProxy to create a RenderProxy in the native layer; the RenderProxy is the handle used to communicate with the render thread. Let's look at the native implementation of nCreateProxy.
//文件-->/frameworks/base/core/jni/android_view_ThreadedRenderer.cpp
static jlong android_view_ThreadedRenderer_createProxy(JNIEnv* env, jobject clazz,
        jboolean translucent, jlong rootRenderNodePtr) {
    RootRenderNode* rootRenderNode = reinterpret_cast<RootRenderNode*>(rootRenderNodePtr);
    ContextFactoryImpl factory(rootRenderNode);
    return (jlong) new RenderProxy(translucent, rootRenderNode, &factory);
}

//文件-->/frameworks/base/libs/hwui/renderthread/RenderProxy.cpp
RenderProxy::RenderProxy(bool translucent, RenderNode* rootRenderNode, IContextFactory* contextFactory)
        : mRenderThread(RenderThread::getInstance())
        , mContext(nullptr) {
    SETUP_TASK(createContext);
    args->translucent = translucent;
    args->rootRenderNode = rootRenderNode;
    args->thread = &mRenderThread;
    args->contextFactory = contextFactory;
    mContext = (CanvasContext*) postAndWait(task);
    mDrawFrameTask.setContext(&mRenderThread, mContext, rootRenderNode);
}

From the RenderProxy constructor we can see that RenderThread::getInstance() creates the RenderThread, which is the render thread used for hardware drawing. Unlike software drawing, which runs on the main thread, hardware acceleration spins up a separate thread, which takes load off the main thread.
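
The postAndWait(task) call above hints at how the UI thread hands work to the render thread and blocks until it completes. As a rough, self-contained illustration of that pattern (my own sketch, not the HWUI implementation), a task queue plus condition variable is enough:

#include <condition_variable>
#include <deque>
#include <functional>
#include <future>
#include <mutex>
#include <thread>

// Sketch: a dedicated worker ("render") thread with a task queue.
// postAndWait() blocks the calling ("UI") thread until the task has run,
// which is the same handoff shape as RenderProxy/RenderThread.
class TinyRenderThread {
public:
    TinyRenderThread() { mWorker = std::thread([this] { loop(); }); }
    ~TinyRenderThread() {
        post([this] { mQuit = true; });
        mWorker.join();
    }

    void post(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mMutex);
            mQueue.push_back(std::move(task));
        }
        mCv.notify_one();
    }

    void postAndWait(std::function<void()> task) {
        std::promise<void> done;
        std::future<void> fut = done.get_future();
        post([&] { task(); done.set_value(); });
        fut.wait();  // the UI thread blocks here until the render thread ran the task
    }

private:
    void loop() {
        while (!mQuit) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mMutex);
                mCv.wait(lock, [this] { return !mQueue.empty(); });
                task = std::move(mQueue.front());
                mQueue.pop_front();
            }
            task();  // executed on the render thread
        }
    }

    std::mutex mMutex;
    std::condition_variable mCv;
    std::deque<std::function<void()>> mQueue;
    bool mQuit = false;
    std::thread mWorker;
};

The real RenderThread additionally drives vsync and frame callbacks, but the handoff shape, posting a task and optionally blocking until the render thread has run it, is the same.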

Having covered how ThreadedRenderer is created and initialized, let's return to the rendering flow, mThreadedRenderer.draw, and look at its source first.

//文件-->/frameworks/base/core/java/android/view/ThreadedRenderer.java
void draw(View view, AttachInfo attachInfo, DrawCallbacks callbacks) {
    attachInfo.mIgnoreDirtyState = true;

    final Choreographer choreographer = attachInfo.mViewRootImpl.mChoreographer;
    choreographer.mFrameInfo.markDrawStart();

    // 1. Build the root view's DisplayList
    updateRootDisplayList(view, callbacks);

    attachInfo.mIgnoreDirtyState = false;

    //…… window animation handling

    final long[] frameInfo = choreographer.mFrameInfo.mFrameInfo;
    // 2. Trigger the rendering
    int syncResult = nSyncAndDrawFrame(mNativeProxy, frameInfo, frameInfo.length);
    
    //…… handling of rendering failures
}

In this flow we only care about two things:

  1. Building the DisplayList
  2. Drawing the DisplayList

After these two steps the UI is on screen. Let's look at each of them in detail.

Building the DisplayList

1. updateRootDisplayList builds the root view's DisplayList. As mentioned before, the DisplayList contains rendering commands that OpenGL can understand. Let's look at the function's implementation first.

//文件-->/frameworks/base/core/java/android/view/ThreadedRenderer.java
private void updateRootDisplayList(View view, DrawCallbacks callbacks) {
    Trace.traceBegin(Trace.TRACE_TAG_VIEW, "Record View#draw()");
    // Build the view tree's DisplayList
    updateViewTreeDisplayList(view);

    if (mRootNodeNeedsUpdate || !mRootNode.isValid()) {
        // Obtain a DisplayListCanvas
        DisplayListCanvas canvas = mRootNode.start(mSurfaceWidth, mSurfaceHeight);
        try {
            final int saveCount = canvas.save();
            canvas.translate(mInsetLeft, mInsetTop);
            callbacks.onPreDraw(canvas);

            canvas.insertReorderBarrier();
            // Merge and optimize the DisplayList
            canvas.drawRenderNode(view.updateDisplayListIfDirty());
            canvas.insertInorderBarrier();

            callbacks.onPostDraw(canvas);
            canvas.restoreToCount(saveCount);
            mRootNodeNeedsUpdate = false;
        } finally {
            // Update the RootRenderNode
            mRootNode.end(canvas);
        }
    }
    Trace.traceEnd(Trace.TRACE_TAG_VIEW);
}

updateRootDisplayList has these main steps:

  1. Build the root view's DisplayList
  2. Merge and optimize the DisplayList

Building the Root View's DisplayList

Let's start with the source of the first step, building the root view's DisplayList.

//文件-->/frameworks/base/core/java/android/view/ThreadedRenderer.java
private void updateViewTreeDisplayList(View view) {
    view.mPrivateFlags |= View.PFLAG_DRAWN;
    view.mRecreateDisplayList = (view.mPrivateFlags & View.PFLAG_INVALIDATED)
            == View.PFLAG_INVALIDATED;
    view.mPrivateFlags &= ~View.PFLAG_INVALIDATED;
    view.updateDisplayListIfDirty();
    view.mRecreateDisplayList = false;
}

//文件-->/frameworks/base/core/java/android/view/View.java
public RenderNode updateDisplayListIfDirty() {
    final RenderNode renderNode = mRenderNode;
    if (!canHaveDisplayList()) {
        return renderNode;
    }

    // Check whether the display list needs to be (re)built
    if ((mPrivateFlags & PFLAG_DRAWING_CACHE_VALID) == 0
        || !renderNode.isValid()
        || (mRecreateDisplayList)) {
        //…… if the display list does not need updating, the renderNode is returned directly
        

        
        // Obtain a DisplayListCanvas
        final DisplayListCanvas canvas = renderNode.start(width, height);       

        try {
            if (layerType == LAYER_TYPE_SOFTWARE) {
                // If software drawing is forced (e.g. for components that do not support, or have disabled, hardware acceleration), the content is first rendered into a bitmap, and the bitmap is then handed to the hardware renderer
                buildDrawingCache(true);
                Bitmap cache = getDrawingCache(true);
                if (cache != null) {
                    canvas.drawBitmap(cache, 0, 0, mLayerPaint);
                }
            } else {                               
                if ((mPrivateFlags & PFLAG_SKIP_DRAW) == PFLAG_SKIP_DRAW) {
                    // Recurse into child views to build or update their display lists
                    dispatchDraw(canvas);                    
                } else {
                    // Call the view's own draw method
                    draw(canvas);
                }
            }
        } finally {
            // Bind the DisplayListCanvas content to the renderNode
            renderNode.end(canvas);
            setDisplayListProperties(renderNode);
        }
    } else {
        mPrivateFlags |= PFLAG_DRAWN | PFLAG_DRAWING_CACHE_VALID;
        mPrivateFlags &= ~PFLAG_DIRTY_MASK;
    }
    return renderNode;
}

updateDisplayListIfDirty mainly does the following:

  1. Obtain a DisplayListCanvas
  2. If the component does not support hardware acceleration, convert it into a bitmap and hand the bitmap to the DisplayListCanvas
  3. Recurse into child views to build their DisplayLists
  4. Call the view's own draw method so it draws into the DisplayListCanvas
  5. Return the RenderNode

At this point you might wonder: why is there no DisplayList to be seen in updateDisplayListIfDirty, and why does it return a RenderNode instead of a DisplayList? The DisplayList is actually created in the native layer. As mentioned earlier, a RenderNode contains the DisplayList, and renderNode.end(canvas) binds the DisplayList to the RenderNode. The job of DisplayListCanvas is precisely to create that DisplayList in the native layer. So let's look at the DisplayListCanvas class next.

//文件-->/frameworks/base/core/java/android/view/RenderNode.java
public DisplayListCanvas start(int width, int height) {
    return DisplayListCanvas.obtain(this, width, height);
}

//文件-->/frameworks/base/core/java/android/view/DisplayListCanvas.java
static DisplayListCanvas obtain(@NonNull RenderNode node, int width, int height) {
    if (node == null) throw new IllegalArgumentException("node cannot be null");
    DisplayListCanvas canvas = sPool.acquire();
    if (canvas == null) {
        canvas = new DisplayListCanvas(node, width, height);
    } else {
        nResetDisplayListCanvas(canvas.mNativeCanvasWrapper, node.mNativeRenderNode,
                                width, height);
    }
    canvas.mNode = node;
    canvas.mWidth = width;
    canvas.mHeight = height;
    return canvas;
}

private DisplayListCanvas(@NonNull RenderNode node, int width, int height) {
    super(nCreateDisplayListCanvas(node.mNativeRenderNode, width, height));
    mDensity = 0; // disable bitmap density scaling
}

We obtain a DisplayListCanvas via RenderNode.start, and obtain() either creates a new canvas or takes one from the sPool cache, a flyweight-style object pool (a generic sketch of the pattern follows below). In the DisplayListCanvas constructor, the JNI method nCreateDisplayListCanvas then creates the native canvas.
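
As an aside, the sPool.acquire()/recycle pair above is a classic object pool: canvases are reused across frames instead of being reallocated each time. A generic sketch of that pattern (mine, not the android.util.Pools implementation):

#include <memory>
#include <mutex>
#include <vector>

// Sketch: a tiny fixed-capacity object pool. acquire() hands out a recycled
// instance when one is available and only allocates when the pool is empty;
// release() puts the instance back for reuse. DisplayListCanvas.obtain()
// follows the same acquire-or-create / recycle shape.
template <typename T>
class ObjectPool {
public:
    explicit ObjectPool(size_t maxSize) : mMaxSize(maxSize) {}

    std::unique_ptr<T> acquire() {
        std::lock_guard<std::mutex> lock(mMutex);
        if (mFree.empty()) {
            return std::make_unique<T>();    // pool empty: allocate a new one
        }
        std::unique_ptr<T> obj = std::move(mFree.back());
        mFree.pop_back();
        return obj;                          // reuse a recycled instance
    }

    void release(std::unique_ptr<T> obj) {
        std::lock_guard<std::mutex> lock(mMutex);
        if (mFree.size() < mMaxSize) {
            mFree.push_back(std::move(obj)); // keep it for the next acquire()
        }                                    // otherwise let it be destroyed
    }

private:
    size_t mMaxSize;
    std::mutex mMutex;
    std::vector<std::unique_ptr<T>> mFree;
};

With that aside out of the way, let's follow the native flow behind nCreateDisplayListCanvas: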

//文件-->/frameworks/base/core/jni/android_view_DisplayListCanvas.cpp
static jlong android_view_DisplayListCanvas_createDisplayListCanvas(jlong renderNodePtr,
        jint width, jint height) {
    RenderNode* renderNode = reinterpret_cast<RenderNode*>(renderNodePtr);
    return reinterpret_cast<jlong>(Canvas::create_recording_canvas(width, height, renderNode));
}

//文件-->/frameworks/base/libs/hwui/hwui/Canvas.cpp
Canvas* Canvas::create_recording_canvas(int width, int height, uirenderer::RenderNode* renderNode) {
    if (uirenderer::Properties::isSkiaEnabled()) {
        return new uirenderer::skiapipeline::SkiaRecordingCanvas(renderNode, width, height);
    }
    return new uirenderer::RecordingCanvas(width, height);
}

So the Java-level DisplayListCanvas corresponds to a native RecordingCanvas or SkiaRecordingCanvas.

Briefly, the difference between the two canvases: before Android 8, HWUI wrapped drawing operations with OpenGL and sent them straight to the GPU for rendering. Starting with Android 8.0, HWUI was refactored and the RenderPipeline concept was introduced. There are three pipeline types, OpenGL, SkiaGL, and SkiaVulkan, each corresponding to a different rendering backend. From Android 8.0 onward Skia's role has been steadily strengthened, and since Android 10 all hardware-accelerated rendering is wrapped by Skia first and then goes through OpenGL or Vulkan before reaching the GPU. The source walked through here is Android 8.0, where, as you can see, the Skia pipeline can already be enabled through configuration.
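
On Android 8.0 the pipeline is selected from a system property that the HWUI code reads via Properties::getRenderPipelineType (we'll see the call in CanvasContext::create later). To the best of my knowledge the property is debug.hwui.renderer with values such as skiagl and skiavk, but treat the exact names as an assumption; the sketch below only mirrors the shape of that selection logic.

#include <cutils/properties.h>
#include <string.h>

enum class RenderPipelineType { OpenGL, SkiaGL, SkiaVulkan };

// Sketch: map a system property to a pipeline type, mirroring the shape of
// Properties::getRenderPipelineType(). Property name/values are assumptions.
static RenderPipelineType getRenderPipelineType() {
    char prop[PROPERTY_VALUE_MAX];
    property_get("debug.hwui.renderer", prop, "opengl");
    if (strcmp(prop, "skiagl") == 0)  return RenderPipelineType::SkiaGL;
    if (strcmp(prop, "skiavk") == 0)  return RenderPipelineType::SkiaVulkan;
    return RenderPipelineType::OpenGL;
}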

To keep the explanation of OpenGL-based hardware rendering simple, I'll stick with RecordingCanvas here. Below are a few of its typical operations.

//文件-->/frameworks/base/libs/hwui/RecordingCanvas.cpp
// Draw points
void RecordingCanvas::drawPoints(const float* points, int floatCount, const SkPaint& paint) {
    if (CC_UNLIKELY(floatCount < 2 || paint.nothingToDraw())) return;
    floatCount &= ~0x1; // round down to nearest two

    addOp(alloc().create_trivial<PointsOp>(
            calcBoundsOfPoints(points, floatCount),
            *mState.currentSnapshot()->transform,
            getRecordedClip(),
            refPaint(&paint), refBuffer<float>(points, floatCount), floatCount));
}

struct PointsOp : RecordedOp {
    PointsOp(BASE_PARAMS, const float* points, const int floatCount)
            : SUPER(PointsOp)
            , points(points)
            , floatCount(floatCount) {}
    const float* points;
    const int floatCount;
};
// Draw lines
void RecordingCanvas::drawLines(const float* points, int floatCount, const SkPaint& paint) {
    if (CC_UNLIKELY(floatCount < 4 || paint.nothingToDraw())) return;
    floatCount &= ~0x3; // round down to nearest four

    addOp(alloc().create_trivial<LinesOp>(
            calcBoundsOfPoints(points, floatCount),
            *mState.currentSnapshot()->transform,
            getRecordedClip(),
            refPaint(&paint), refBuffer<float>(points, floatCount), floatCount));
}
struct LinesOp : RecordedOp {
    LinesOp(BASE_PARAMS, const float* points, const int floatCount)
            : SUPER(LinesOp)
            , points(points)
            , floatCount(floatCount) {}
    const float* points;
    const int floatCount;
};

// Draw a rectangle
void RecordingCanvas::drawRect(float left, float top, float right, float bottom, const SkPaint& paint) {
    if (CC_UNLIKELY(paint.nothingToDraw())) return;

    addOp(alloc().create_trivial<RectOp>(
            Rect(left, top, right, bottom),
            *(mState.currentSnapshot()->transform),
            getRecordedClip(),
            refPaint(&paint)));
}

struct RectOp : RecordedOp {
    RectOp(BASE_PARAMS)
            : SUPER(RectOp) {}
};

struct RoundRectOp : RecordedOp {
    RoundRectOp(BASE_PARAMS, float rx, float ry)
            : SUPER(RoundRectOp)
            , rx(rx)
            , ry(ry) {}
    const float rx;
    const float ry;
};

int RecordingCanvas::addOp(RecordedOp* op) {
    // skip op with empty clip
    if (op->localClip && op->localClip->rect.isEmpty()) {
        // NOTE: this rejection happens after op construction/content ref-ing, so content ref'd
        // and held by renderthread isn't affected by clip rejection.
        // Could rewind alloc here if desired, but callers would have to not touch op afterwards.
        return -1;
    }

    int insertIndex = mDisplayList->ops.size();
    mDisplayList->ops.push_back(op);
    if (mDeferredBarrierType != DeferredBarrierType::None) {
        // op is first in new chunk
        mDisplayList->chunks.emplace_back();
        DisplayList::Chunk& newChunk = mDisplayList->chunks.back();
        newChunk.beginOpIndex = insertIndex;
        newChunk.endOpIndex = insertIndex + 1;
        newChunk.reorderChildren = (mDeferredBarrierType == DeferredBarrierType::OutOfOrder);
        newChunk.reorderClip = mDeferredBarrierClip;

        int nextChildIndex = mDisplayList->children.size();
        newChunk.beginChildIndex = newChunk.endChildIndex = nextChildIndex;
        mDeferredBarrierType = DeferredBarrierType::None;
    } else {
        // standard case - append to existing chunk
        mDisplayList->chunks.back().endOpIndex = insertIndex + 1;
    }
    return insertIndex;
}

As we can see, the primitives we draw through RecordingCanvas are each wrapped into an OP that the renderer (and ultimately the GPU) can work with, and these OPs are stored in mDisplayList. This answers the earlier question of why updateDisplayListIfDirty shows no DisplayList: DisplayListCanvas calls into the native RecordingCanvas, which updates the native mDisplayList.

我們?cè)诮又磖enderNode.end(canvas)函數(shù)蛾扇,如何將Natice層的DisplayList綁定到renderNode中。

//文件-->/frameworks/base/core/java/android/view/RenderNode.java
public void end(DisplayListCanvas canvas) {
    long displayList = canvas.finishRecording();
    nSetDisplayList(mNativeRenderNode, displayList);
    canvas.recycle();
}

Here the JNI method nSetDisplayList binds the DisplayList to the RenderNode. Now the earlier statement should make sense: a RenderNode holds the DisplayList of its View and its child views, and the DisplayList holds a sequence of rendering commands that OpenGL can understand, the OP operations, which are the basic drawable elements the renderer works with.
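
To make the record-then-replay idea concrete, here is a deliberately simplified, hypothetical sketch (names such as Backend, DrawRectOp, and replay are mine, not HWUI's): a recording canvas appends ops to a display list, and a later pass replays them against the real rendering backend.

#include <memory>
#include <vector>

// Hypothetical backend interface standing in for the GL renderer.
struct Backend {
    virtual ~Backend() = default;
    virtual void fillRect(float l, float t, float r, float b) = 0;
};

// A recorded operation: it knows how to replay itself onto a backend.
struct RecordedOp {
    virtual ~RecordedOp() = default;
    virtual void replay(Backend& out) const = 0;
};

struct DrawRectOp : RecordedOp {
    float l, t, r, b;
    DrawRectOp(float l, float t, float r, float b) : l(l), t(t), r(r), b(b) {}
    void replay(Backend& out) const override { out.fillRect(l, t, r, b); }
};

// "DisplayList": just the ordered list of recorded ops.
struct DisplayList {
    std::vector<std::unique_ptr<RecordedOp>> ops;
};

// "RecordingCanvas": drawRect() records an op instead of drawing immediately.
struct RecordingCanvas {
    DisplayList list;
    void drawRect(float l, float t, float r, float b) {
        list.ops.push_back(std::make_unique<DrawRectOp>(l, t, r, b));
    }
};

// Later, on the render thread, the display list is replayed against the GPU backend.
void replayDisplayList(const DisplayList& list, Backend& backend) {
    for (const auto& op : list.ops) op->replay(backend);
}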

Merging and Optimizing the DisplayList

updateViewTreeDisplayList has already done the heavy lifting: every view's DisplayList, and the DrawOp tree inside it, has been built. So why call canvas.drawRenderNode(view.updateDisplayListIfDirty()) as well? The main purpose of this call is to optimize and merge the DisplayLists built earlier; let's look at the implementation details.

//文件-->/frameworks/base/core/java/android/view/DisplayListCanvas.java
public void drawRenderNode(RenderNode renderNode) {
    nDrawRenderNode(mNativeCanvasWrapper, renderNode.getNativeDisplayList());
}

//文件-->/frameworks/base/core/jni/android_view_DisplayListCanvas.cpp
static void android_view_DisplayListCanvas_drawRenderNode(jlong canvasPtr, jlong renderNodePtr) {
    Canvas* canvas = reinterpret_cast<Canvas*>(canvasPtr);
    RenderNode* renderNode = reinterpret_cast<RenderNode*>(renderNodePtr);
    canvas->drawRenderNode(renderNode);
}

//文件-->/frameworks/base/libs/hwui/RecordingCanvas.cpp
void RecordingCanvas::drawRenderNode(RenderNode* renderNode) {
    auto&& stagingProps = renderNode->stagingProperties();
    RenderNodeOp* op = alloc().create_trivial<RenderNodeOp>(
            Rect(stagingProps.getWidth(), stagingProps.getHeight()),
            *(mState.currentSnapshot()->transform),
            getRecordedClip(),
            renderNode);
    int opIndex = addOp(op);
    if (CC_LIKELY(opIndex >= 0)) {
        int childIndex = mDisplayList->addChild(op);

        // update the chunk's child indices
        DisplayList::Chunk& chunk = mDisplayList->chunks.back();
        chunk.endChildIndex = childIndex + 1;

        if (renderNode->stagingProperties().isProjectionReceiver()) {
            // use staging property, since recording on UI thread
            mDisplayList->projectionReceiveIndex = opIndex;
        }
    }
}

As we can see, execution ends up in RecordingCanvas::drawRenderNode, which is where the DisplayList gets merged and optimized.

Drawing the DisplayList

After that fairly long stretch we have finished the first part of mThreadedRenderer.draw, building the DisplayList. Now for the second part: nSyncAndDrawFrame draws the frame, and once it completes the UI appears on screen. nSyncAndDrawFrame is a native method; let's look at its implementation.

static int android_view_ThreadedRenderer_syncAndDrawFrame(JNIEnv* env, jobject clazz,
        jlong proxyPtr, jlongArray frameInfo, jint frameInfoSize) {
    LOG_ALWAYS_FATAL_IF(frameInfoSize != UI_THREAD_FRAME_INFO_SIZE,
            "Mismatched size expectations, given %d expected %d",
            frameInfoSize, UI_THREAD_FRAME_INFO_SIZE);
    RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
    env->GetLongArrayRegion(frameInfo, 0, frameInfoSize, proxy->frameInfo());
    return proxy->syncAndDrawFrame();
}

int RenderProxy::syncAndDrawFrame() {
    return mDrawFrameTask.drawFrame();
}


nSyncAndDrawFrame calls RenderProxy::syncAndDrawFrame, which in turn calls DrawFrameTask::drawFrame().

//文件-->/frameworks/base/libs/hwui/renderthread/DrawFrameTask.cpp
int DrawFrameTask::drawFrame() {
    LOG_ALWAYS_FATAL_IF(!mContext, "Cannot drawFrame with no CanvasContext!");

    mSyncResult = SyncResult::OK;
    mSyncQueued = systemTime(CLOCK_MONOTONIC);
    postAndWait();

    return mSyncResult;
}

void DrawFrameTask::postAndWait() {
    AutoMutex _lock(mLock);
    mRenderThread->queue(this);
    mSignal.wait(mLock);
}

void DrawFrameTask::run() {
    ATRACE_NAME("DrawFrame");

    bool canUnblockUiThread;
    bool canDrawThisFrame;
    {
        TreeInfo info(TreeInfo::MODE_FULL, *mContext);
        canUnblockUiThread = syncFrameState(info);
        canDrawThisFrame = info.out.canDrawThisFrame;
    }

    // Grab a copy of everything we need
    CanvasContext* context = mContext;

    // From this point on anything in "this" is *UNSAFE TO ACCESS*
    if (canUnblockUiThread) {
        unblockUiThread();
    }

    if (CC_LIKELY(canDrawThisFrame)) {
        context->draw();
    } else {
        // wait on fences so tasks don't overlap next frame
        context->waitOnFences();
    }

    if (!canUnblockUiThread) {
        unblockUiThread();
    }
}

DrawFrameTask does two things:

  1. Call syncFrameState to synchronize the frame information
  2. Call CanvasContext::draw() to perform the drawing

Synchronizing the Frame Information

Let's start with the first one, synchronizing the frame information. Its main job is to sync the RenderNode data prepared on the main thread over to the render thread. In mAttachInfo.mThreadedRenderer.draw, the first step built the DisplayList and bound it to a RenderNode created on the main thread, whereas DrawFrameTask runs on the native RenderThread, so the data has to be synchronized across.

//文件-->/frameworks/base/libs/hwui/renderthread/DrawFrameTask.cpp
bool DrawFrameTask::syncFrameState(TreeInfo& info) {
    ATRACE_CALL();
    int64_t vsync = mFrameInfo[static_cast<int>(FrameInfoIndex::Vsync)];
    mRenderThread->timeLord().vsyncReceived(vsync);
    bool canDraw = mContext->makeCurrent();
    mContext->unpinImages();

    for (size_t i = 0; i < mLayers.size(); i++) {
        mLayers[i]->apply();
    }
    mLayers.clear();
    mContext->prepareTree(info, mFrameInfo, mSyncQueued, mTargetNode);
    
    //……
   
    // If prepareTextures is false, we ran out of texture cache space
    return info.prepareTextures;
}

prepareTree is called on mContext; mContext is discussed in more detail below, so for now let's just look at what this method does.

//文件-->/frameworks/base/libs/hwui/renderthread/CanvasContext.cpp
void CanvasContext::prepareTree(TreeInfo& info, int64_t* uiFrameInfo,
        int64_t syncQueued, RenderNode* target) {
   //……
    for (const sp<RenderNode>& node : mRenderNodes) {
        // Only the primary target node will be drawn full - all other nodes would get drawn in
        // real time mode. In case of a window, the primary node is the window content and the other
        // node(s) are non client / filler nodes.
        info.mode = (node.get() == target ? TreeInfo::MODE_FULL : TreeInfo::MODE_RT_ONLY);
        node->prepareTree(info);
        GL_CHECKPOINT(MODERATE);
    }
    //……
}

void RenderNode::prepareTree(TreeInfo& info) {
    bool functorsNeedLayer = Properties::debugOverdraw;
    prepareTreeImpl(info, functorsNeedLayer);
}

void RenderNode::prepareTreeImpl(TreeInfo& info, bool functorsNeedLayer) {
    info.damageAccumulator->pushTransform(this);

    if (info.mode == TreeInfo::MODE_FULL) {
        // Sync the staging properties
        pushStagingPropertiesChanges(info);
    }
     
    // layer
    prepareLayer(info, animatorDirtyMask);
    // Sync the DrawOp tree
    if (info.mode == TreeInfo::MODE_FULL) {
        pushStagingDisplayListChanges(info);
    }
    // Recurse into child views
    prepareSubTree(info, childFunctorsNeedLayer, mDisplayListData);
    // push
    pushLayerUpdate(info);
    info.damageAccumulator->popTransform();
}

With the frame synchronization done, let's look at the final step, the actual drawing.

Performing the Draw

The hardware rendering itself is performed by calling CanvasContext::draw. So what is CanvasContext?

It is the rendering context. CanvasContext can render through different pipelines, which is a strategy-pattern design. Looking at CanvasContext::create, we can see that it builds a different render pipeline depending on the render type; there are three pipelines in total: OpenGL, SkiaGL, and SkiaVulkan.

CanvasContext* CanvasContext::create(RenderThread& thread,
        bool translucent, RenderNode* rootRenderNode, IContextFactory* contextFactory) {

    auto renderType = Properties::getRenderPipelineType();

    switch (renderType) {
        case RenderPipelineType::OpenGL:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                    std::make_unique<OpenGLPipeline>(thread));
        case RenderPipelineType::SkiaGL:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                    std::make_unique<skiapipeline::SkiaOpenGLPipeline>(thread));
        case RenderPipelineType::SkiaVulkan:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                                std::make_unique<skiapipeline::SkiaVulkanPipeline>(thread));
        default:
            LOG_ALWAYS_FATAL("canvas context type %d not supported", (int32_t) renderType);
            break;
    }
    return nullptr;
}

Here we'll only look at OpenGLPipeline, the pipeline that renders through OpenGL.

OpenGLPipeline::OpenGLPipeline(RenderThread& thread)
        :  mEglManager(thread.eglManager())
        , mRenderThread(thread) {
}

In its constructor, OpenGLPipeline obtains the EglManager, which wraps all of the EGL operations for us. Let's look at EglManager's initialize method.

//文件-->/frameworks/base/libs/hwui/renderthread/EglManager.cpp
void EglManager::initialize() {
    if (hasEglContext()) return;

    ATRACE_NAME("Creating EGLContext");

    // Get the EGLDisplay object
    mEglDisplay = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    LOG_ALWAYS_FATAL_IF(mEglDisplay == EGL_NO_DISPLAY,
            "Failed to get EGL_DEFAULT_DISPLAY! err=%s", eglErrorString());

    EGLint major, minor;
    // Initialize the connection to the EGLDisplay
    LOG_ALWAYS_FATAL_IF(eglInitialize(mEglDisplay, &major, &minor) == EGL_FALSE,
            "Failed to initialize display %p! err=%s", mEglDisplay, eglErrorString());
    //……

    // Load the EGL configuration
    loadConfig();
    // Create the EGL context
    createContext();
    // Create the offscreen Pbuffer surface
    createPBufferSurface();
    // Make the context current
    makeCurrent(mPBufferSurface);
    DeviceInfo::initialize();
    mRenderThread.renderState().onGLContextCreated();
}

Here we see a familiar flow: EglManager's initialization is the same EGL initialization we've seen before. However, no WindowSurface is set up here; only a PBufferSurface, an offscreen rendering buffer, is created. A quick comparison of WindowSurface and PbufferSurface:

  • WindowSurface: tied to a window, i.e. a wrapper around a display region on screen; whatever is rendered to it is shown on screen.
  • PbufferSurface: a region allocated in graphics memory where rendered frames are stored offscreen (a minimal creation sketch follows this list).
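
For reference, creating such an offscreen surface is a standard EGL call; a minimal sketch, assuming the display and config have already been initialized as earlier in this article:

#include <EGL/egl.h>

// Sketch: create a small offscreen Pbuffer surface. Unlike a window surface,
// nothing rendered here reaches the screen; it only lives in graphics memory.
EGLSurface createOffscreenSurface(EGLDisplay display, EGLConfig config) {
    const EGLint attribs[] = {
        EGL_WIDTH, 1,
        EGL_HEIGHT, 1,
        EGL_NONE
    };
    return eglCreatePbufferSurface(display, config, attribs);
}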

Clearly, without a WindowSurface, what OpenGL ES renders cannot appear on screen. In fact EglManager already wraps a method for creating a WindowSurface:

//文件-->/frameworks/base/libs/hwui/renderthread/EglManager.cpp
EGLSurface EglManager::createSurface(EGLNativeWindowType window) {
    initialize();

    EGLint attribs[] = {
#ifdef ANDROID_ENABLE_LINEAR_BLENDING
            EGL_GL_COLORSPACE_KHR, EGL_GL_COLORSPACE_SRGB_KHR,
            EGL_COLORSPACE, EGL_COLORSPACE_sRGB,
#endif
            EGL_NONE
    };

    EGLSurface surface = eglCreateWindowSurface(mEglDisplay, mEglConfig, window, attribs);
    LOG_ALWAYS_FATAL_IF(surface == EGL_NO_SURFACE,
            "Failed to create EGLSurface for window %p, eglErr = %s",
            (void*) window, eglErrorString());

    if (mSwapBehavior != SwapBehavior::Preserved) {
        LOG_ALWAYS_FATAL_IF(eglSurfaceAttrib(mEglDisplay, surface, EGL_SWAP_BEHAVIOR, EGL_BUFFER_DESTROYED) == EGL_FALSE,
                            "Failed to set swap behavior to destroyed for window %p, eglErr = %s",
                            (void*) window, eglErrorString());
    }

    return surface;
}

So when is this surface set? In the Activity display flow, after setView, ViewRootImpl runs performTraversals and then the measure, layout, and draw phases. setView was covered above (it enables hardware acceleration and creates the ThreadedRenderer), and so was draw; measure and layout have nothing to do with OpenGL, so I'll skip them. The point here is that performTraversals also sets the EGL surface, which shows just how important this function is. Let's take a look.

private void performTraversals() {
    //……
    if (mAttachInfo.mThreadedRenderer != null) {
        try {
            // Call ThreadedRenderer's initialize function
            hwInitialized = mAttachInfo.mThreadedRenderer.initialize(
                    mSurface);
            if (hwInitialized && (host.mPrivateFlags
                    & View.PFLAG_REQUEST_TRANSPARENT_REGIONS) == 0) {
                // Don't pre-allocate if transparent regions
                // are requested as they may not be needed
                mSurface.allocateBuffers();
            }
        } catch (OutOfResourcesException e) {
            handleOutOfResourcesException(e);
            return;
        }
    }
    //……
}

boolean initialize(Surface surface) throws OutOfResourcesException {
    boolean status = !mInitialized;
    mInitialized = true;
    updateEnabledState(surface);
    nInitialize(mNativeProxy, surface);
    return status;
}


ThreadedRenderer's initialize function calls the native initialize method.

static void android_view_ThreadedRenderer_initialize(JNIEnv* env, jobject clazz,
        jlong proxyPtr, jobject jsurface) {
    RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
    sp<Surface> surface = android_view_Surface_getSurface(env, jsurface);
    proxy->initialize(surface);
}

void RenderProxy::initialize(const sp<Surface>& surface) {
    SETUP_TASK(initialize);
    args->context = mContext;
    args->surface = surface.get();
    post(task);
}

void CanvasContext::initialize(Surface* surface) {
    setSurface(surface);
}

void CanvasContext::setSurface(Surface* surface) {
    ATRACE_CALL();

    mNativeSurface = surface;

    bool hasSurface = mRenderPipeline->setSurface(surface, mSwapBehavior);

    mFrameNumber = -1;

    if (hasSurface) {
         mHaveNewSurface = true;
         mSwapHistory.clear();
    } else {
         mRenderThread.removeFrameCallback(this);
    }
}

So the EGL surface is actually set up quite early in the flow.

At this point all of the EGL initialization in our flow is complete and drawing can begin; let's return to the draw step of DrawFrameTask::run.

void CanvasContext::draw() {
    SkRect dirty;
    mDamageAccumulator.finish(&dirty);

    mCurrentFrameInfo->markIssueDrawCommandsStart();

    Frame frame = mRenderPipeline->getFrame();

    SkRect windowDirty = computeDirtyRect(frame, &dirty);
    // Call the (OpenGL) pipeline's draw function
    bool drew = mRenderPipeline->draw(frame, windowDirty, dirty, mLightGeometry, &mLayerUpdateQueue,
            mContentDrawBounds, mOpaque, mLightInfo, mRenderNodes, &(profiler()));

    waitOnFences();

    bool requireSwap = false;
    // Swap the buffers
    bool didSwap = mRenderPipeline->swapBuffers(frame, drew, windowDirty, mCurrentFrameInfo,
            &requireSwap);

    mIsDirty = false;

    //……

}

Here CanvasContext calls mRenderPipeline->draw, which is effectively OpenGL's draw, and then mRenderPipeline->swapBuffers to swap the buffers.

//文件-->/frameworks/base/libs/hwui/renderthread/OpenGLPipeline.cpp
bool OpenGLPipeline::draw(const Frame& frame, const SkRect& screenDirty, const SkRect& dirty,
        const FrameBuilder::LightGeometry& lightGeometry,
        LayerUpdateQueue* layerUpdateQueue,
        const Rect& contentDrawBounds, bool opaque,
        const BakedOpRenderer::LightInfo& lightInfo,
        const std::vector< sp<RenderNode> >& renderNodes,
        FrameInfoVisualizer* profiler) {

    //……
    // BakedOpRenderer replaces the old OpenGLRenderer
    BakedOpRenderer renderer(caches, mRenderThread.renderState(),
            opaque, lightInfo);
    frameBuilder.replayBakedOps<BakedOpDispatcher>(renderer);
    // Kick off the GPU rendering
    drew = renderer.didDraw();

    //……

    return drew;
}

bool OpenGLPipeline::swapBuffers(const Frame& frame, bool drew, const SkRect& screenDirty,
        FrameInfo* currentFrameInfo, bool* requireSwap) {

    GL_CHECKPOINT(LOW);

    // Even if we decided to cancel the frame, from the perspective of jank
    // metrics the frame was swapped at this point
    currentFrameInfo->markSwapBuffers();

    *requireSwap = drew || mEglManager.damageRequiresSwap();

    if (*requireSwap && (CC_UNLIKELY(!mEglManager.swapBuffers(frame, screenDirty)))) {
        return false;
    }

    return *requireSwap;
}

That concludes the main flow of hardware rendering with OpenGL ES. After these two examples, you should have a good picture of how OpenGL ES produces images as an image producer. Let's move on to the next image producer: Skia.

Skia

Skia is an open-source, cross-platform 2D graphics engine from Google. It is used as the graphics engine by Google Chrome, Android, Flutter, Firefox, Firefox OS, and many other products, and it ships with Android as third-party software under external/skia/. Although Android has enabled hardware acceleration by default since 4.0, that does not make Skia unimportant; its role in Android keeps growing. From Android 8 we can choose Skia for hardware-accelerated rendering, and from Android 9 Skia is used for hardware acceleration by default. Skia's hardware acceleration is implemented mainly by going through the copybit module into OpenGL or Vulkan.


Since Skia's hardware-accelerated path also ends up calling the OpenGL or Vulkan interfaces, here we'll only talk about Skia drawing with the CPU, i.e. the software path. As usual, let's first look at how Skia is used.

How do we use Skia?

Using OpenGL ES requires EGL, with a display, surface, and context to initialize, which is fairly tedious. Skia is much more convenient. Once you have the three elements of Skia drawing, the canvas (SkCanvas), the "paper" (SkBitmap), and the paint (SkPaint), you can draw graphics with Skia very easily.

Here are the three drawing elements in more detail:

  1. SkBitmap stores the pixel data and wraps the bitmap-related operations
SkBitmap* bitmap = new SkBitmap();
// Set the bitmap format, width and height
bitmap->setConfig(SkBitmap::kRGB_565_Config, 800, 480);
// Allocate the bitmap's pixel storage
bitmap->allocPixels();
  2. SkCanvas wraps all of the drawing functions; by calling them we perform the actual drawing.
// Attach the bitmap before use
SkCanvas canvas(*bitmap);
// Translate, scale, rotate and skew operations
translate(SkScalar dx, SkScalar dy);
scale(SkScalar sx, SkScalar sy);
rotate(SkScalar degrees);
skew(SkScalar sx, SkScalar sy);
// Drawing operations
drawARGB(u8 a, u8 r, u8 g, u8 b....)  // Fill the whole drawable area with the given alpha and red/green/blue values.
drawColor(SkColor color...) // Fill the whole drawable area with the given color.
drawPaint(SkPaint& paint) // Fill the whole area with the given paint.
drawPoint(...) // Draw points, with several parameter variants.
drawLine(x0, y0, x1, y1, paint) // Draw a line from (x0, y0) to (x1, y1) with the given paint.
drawRect(rect, paint) // Draw a rectangle whose bounds are given by rect, with the given paint.
drawRectCoords(left, top, right, bottom, paint) // Draw a rectangle given its four edges.
drawOval(SkRect& oval, SkPaint& paint) // Draw an ellipse bounded by the oval rectangle.
//…… other operations
  3. SkPaint describes the style, appearance, and color of what is drawn
setAntiAlias: enable or disable anti-aliasing.
setColor: set the paint color.
setARGB: set the paint's a, r, g, b values.
setAlpha: set the alpha value.
setTextSize: set the text size.
setStyle: set the paint style, stroke or fill.
setStrokeWidth: set the stroke width.
getColor: get the paint color.
getAlpha: get the paint's alpha value.

Let's look at a complete demo:

void draw() {
    SkBitmap bitmap;
    // Set the bitmap format, width and height
    bitmap.setConfig(SkBitmap::kRGB_565_Config, 800, 480);
    // Allocate the bitmap's pixel storage
    bitmap.allocPixels();
    // Attach the bitmap to the canvas before drawing
    SkCanvas canvas(bitmap);
    // Define the paints
    SkPaint paint1, paint2, paint3;

    paint1.setAntiAlias(true);
    paint1.setColor(SkColorSetRGB(255, 0, 0));
    paint1.setStyle(SkPaint::kFill_Style);

    paint2.setAntiAlias(true);
    paint2.setColor(SkColorSetRGB(0, 136, 0));
    paint2.setStyle(SkPaint::kStroke_Style);
    paint2.setStrokeWidth(SkIntToScalar(3));

    paint3.setAntiAlias(true);
    paint3.setColor(SkColorSetRGB(136, 136, 136));

    sk_sp<SkTextBlob> blob1 =
            SkTextBlob::MakeFromString("Skia!", SkFont(nullptr, 64.0f, 1.0f, 0.0f));
    sk_sp<SkTextBlob> blob2 =
            SkTextBlob::MakeFromString("Skia!", SkFont(nullptr, 64.0f, 1.5f, 0.0f));

    canvas.clear(SK_ColorWHITE);
    canvas.drawTextBlob(blob1.get(), 20.0f, 64.0f, paint1);
    canvas.drawTextBlob(blob1.get(), 20.0f, 144.0f, paint2);
    canvas.drawTextBlob(blob2.get(), 20.0f, 224.0f, paint3);
}

This demo renders "Skia!" three times with the three different paints (the original post shows a screenshot of the result here).

Now that we know how to use Skia, let's look at two scenarios: software drawing with Skia, and Flutter UI drawing.

Software Drawing with Skia

Above I covered hardware drawing rendered with OpenGL; here I'll cover software drawing rendered with Skia. Although Android enables hardware acceleration by default, hardware acceleration costs power and memory, so some system apps and long-lived (resident) apps still draw in software. The software path's entry point is also the draw method.

//文件-->/frameworks/base/core/java/android/view/ViewRootImpl.java
private void performDraw() {
    //……
    draw(fullRedrawNeeded);
    //……
}

private void draw(boolean fullRedrawNeeded) {
    Surface surface = mSurface;
    if (!surface.isValid()) {
        return;
    }

    //……

    if (!dirty.isEmpty() || mIsAnimating || accessibilityFocusDirty) {
        if (!dirty.isEmpty() || mIsAnimating || accessibilityFocusDirty) {
            if (mAttachInfo.mThreadedRenderer != null && mAttachInfo.mThreadedRenderer.isEnabled()) {
                
                //……

                // Hardware rendering
                mAttachInfo.mThreadedRenderer.draw(mView, mAttachInfo, this);

            } else {
                
                //……

                // Software rendering
                if (!drawSoftware(surface, mAttachInfo, xOffset, yOffset, scalingRequired, dirty)) {
                    return;
                }
            }
        }

        //……
    }

    //……
}

Let's look at the implementation of drawSoftware.

private boolean drawSoftware(Surface surface, AttachInfo attachInfo, int xoff, int yoff,
        boolean scalingRequired, Rect dirty) {

    // Draw with software renderer.
    final Canvas canvas;

    //……

    canvas = mSurface.lockCanvas(dirty);
    
    //……
        
    mView.draw(canvas);
    
    //……
    
    surface.unlockCanvasAndPost(canvas);
    
    //……
    
    return true;
}

drawSoftware has three main steps:

  1. Obtain a Canvas via mSurface.lockCanvas
  2. Use draw to render the root View and all of its children onto the Canvas
  3. Use surface.unlockCanvasAndPost to hand the drawn content to SurfaceFlinger for composition (a native-level sketch of the same lock/draw/post pattern follows this list)
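
The same lock/draw/post pattern is exposed to native code through the NDK's ANativeWindow API. A minimal sketch (error handling mostly omitted) that fills the window with a solid color, assuming an RGBA_8888 buffer:

#include <android/native_window.h>
#include <stdint.h>

// Sketch: lock the window's next buffer, write pixels into it on the CPU,
// then post it back so the compositor (SurfaceFlinger) can pick it up.
void drawSolidColor(ANativeWindow* window, uint32_t abgr) {
    ANativeWindow_Buffer buffer;
    if (ANativeWindow_lock(window, &buffer, nullptr) != 0) return;

    uint32_t* pixels = static_cast<uint32_t*>(buffer.bits);
    for (int y = 0; y < buffer.height; ++y) {
        for (int x = 0; x < buffer.width; ++x) {
            pixels[y * buffer.stride + x] = abgr;  // stride is in pixels, not bytes
        }
    }
    ANativeWindow_unlockAndPost(window);
}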

Lock Surface

Let's start with the first step; this Canvas corresponds to an SkCanvas in the native layer.

//文件-->/frameworks/base/core/java/android/view/Surface.java
public Canvas lockCanvas(Rect inOutDirty)
    throws Surface.OutOfResourcesException, IllegalArgumentException {
    synchronized (mLock) {
        checkNotReleasedLocked();
        if (mLockedObject != 0) {           
            throw new IllegalArgumentException("Surface was already locked");
        }
        mLockedObject = nativeLockCanvas(mNativeObject, mCanvas, inOutDirty);
        return mCanvas;
    }
}

lockCanvas uses the JNI function nativeLockCanvas to create the native canvas. Its mNativeObject argument corresponds to the native Surface; Surfaces and buffers are covered in detail in the next article on graphic buffers, so I won't go into them here. Let's look directly at the implementation of nativeLockCanvas.

static jlong nativeLockCanvas(JNIEnv* env, jclass clazz,
        jlong nativeObject, jobject canvasObj, jobject dirtyRectObj) {
    sp<Surface> surface(reinterpret_cast<Surface *>(nativeObject));

    if (!isSurfaceValid(surface)) {
        doThrowIAE(env);
        return 0;
    }

    Rect dirtyRect(Rect::EMPTY_RECT);
    Rect* dirtyRectPtr = NULL;

    if (dirtyRectObj) {
        dirtyRect.left   = env->GetIntField(dirtyRectObj, gRectClassInfo.left);
        dirtyRect.top    = env->GetIntField(dirtyRectObj, gRectClassInfo.top);
        dirtyRect.right  = env->GetIntField(dirtyRectObj, gRectClassInfo.right);
        dirtyRect.bottom = env->GetIntField(dirtyRectObj, gRectClassInfo.bottom);
        dirtyRectPtr = &dirtyRect;
    }

    ANativeWindow_Buffer outBuffer;
    //1埂奈,獲取用來存儲(chǔ)圖形繪制的buffer
    status_t err = surface->lock(&outBuffer, dirtyRectPtr);
    if (err < 0) {
        const char* const exception = (err == NO_MEMORY) ?
                OutOfResourcesException :
                "java/lang/IllegalArgumentException";
        jniThrowException(env, exception, NULL);
        return 0;
    }

    SkImageInfo info = SkImageInfo::Make(outBuffer.width, outBuffer.height,
                                         convertPixelFormat(outBuffer.format),
                                         outBuffer.format == PIXEL_FORMAT_RGBX_8888
                                                 ? kOpaque_SkAlphaType : kPremul_SkAlphaType,
                                         GraphicsJNI::defaultColorSpace());

    SkBitmap bitmap;
    ssize_t bpr = outBuffer.stride * bytesPerPixel(outBuffer.format);
    bitmap.setInfo(info, bpr);
   
    if (outBuffer.width > 0 && outBuffer.height > 0) {
         // Hook the buffer's pixel data (carrying over the previous buffer's content) into the bitmap
        bitmap.setPixels(outBuffer.bits);
    } else {
        // be safe with an empty bitmap.
        bitmap.setPixels(NULL);
    }

    //2迄损,創(chuàng)建一個(gè)SKCanvas
    Canvas* nativeCanvas = GraphicsJNI::getNativeCanvas(env, canvasObj);
    // 3. Set the bitmap on the SkCanvas
    nativeCanvas->setBitmap(bitmap);
    // If a dirty region was specified, clip to it
    if (dirtyRectPtr) {
        nativeCanvas->clipRect(dirtyRect.left, dirtyRect.top,
                dirtyRect.right, dirtyRect.bottom, SkClipOp::kIntersect);
    }

    if (dirtyRectObj) {
        env->SetIntField(dirtyRectObj, gRectClassInfo.left,   dirtyRect.left);
        env->SetIntField(dirtyRectObj, gRectClassInfo.top,    dirtyRect.top);
        env->SetIntField(dirtyRectObj, gRectClassInfo.right,  dirtyRect.right);
        env->SetIntField(dirtyRectObj, gRectClassInfo.bottom, dirtyRect.bottom);
    }

    sp<Surface> lockedSurface(surface);
    lockedSurface->incStrong(&sRefBaseOwner);
    return (jlong) lockedSurface.get();
}

nativeLockCanvas mainly does the following

  1. Obtain the buffer used for drawing via the surface->lock function
  2. Create an SkBitmap from the buffer's information
  3. Create and initialize an SkCanvas based on the SkBitmap

With nativeLockCanvas we now have an SkCanvas with a drawable bitmap attached, so we can draw graphics into the bitmap through the SkCanvas. That is exactly what the mView.draw() function does.
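
What nativeLockCanvas sets up can be reproduced in a few lines of plain Skia. The sketch below (my own example, not the framework code) wraps an externally owned pixel buffer, which is what the locked ANativeWindow_Buffer amounts to, in an SkBitmap and draws on it through an SkCanvas; the parameter names simply stand in for the fields of ANativeWindow_Buffer, and the include paths assume a recent Skia checkout.

#include <cstddef>
#include "include/core/SkBitmap.h"
#include "include/core/SkCanvas.h"
#include "include/core/SkImageInfo.h"
#include "include/core/SkPaint.h"
#include "include/core/SkRect.h"

// Sketch: wrap externally owned pixel memory (like a locked window buffer)
// in an SkBitmap and draw on it through an SkCanvas.
void drawIntoBuffer(void* pixels, int width, int height, size_t rowBytes) {
    SkImageInfo info = SkImageInfo::MakeN32Premul(width, height);

    SkBitmap bitmap;
    bitmap.installPixels(info, pixels, rowBytes);   // no copy: the bitmap points at the buffer

    SkCanvas canvas(bitmap);                        // canvas draws straight into that memory
    SkPaint paint;
    paint.setColor(SK_ColorGREEN);
    canvas.drawRect(SkRect::MakeXYWH(0, 0, width / 2.0f, height / 2.0f), paint);
}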

Draw

Next, let's look at the draw() function in View

//File --> /frameworks/base/core/java/android/view/View.java
public void draw(Canvas canvas) {
    final int privateFlags = mPrivateFlags;
    final boolean dirtyOpaque = (privateFlags & PFLAG_DIRTY_MASK) == PFLAG_DIRTY_OPAQUE &&
            (mAttachInfo == null || !mAttachInfo.mIgnoreDirtyState);
    mPrivateFlags = (privateFlags & ~PFLAG_DIRTY_MASK) | PFLAG_DRAWN;

    int saveCount;
    //1. Draw the background
    if (!dirtyOpaque) {
        drawBackground(canvas);
    }

    final int viewFlags = mViewFlags;
    boolean horizontalEdges = (viewFlags & FADING_EDGE_HORIZONTAL) != 0;
    boolean verticalEdges = (viewFlags & FADING_EDGE_VERTICAL) != 0;
    if (!verticalEdges && !horizontalEdges) {
        // 2. Draw this view's own content
        if (!dirtyOpaque) onDraw(canvas);

        // 3. Draw the child views
        dispatchDraw(canvas);

        drawAutofilledHighlight(canvas);

        // Overlay is part of the content and draws beneath Foreground
        if (mOverlay != null && !mOverlay.isEmpty()) {
            mOverlay.getOverlayView().dispatchDraw(canvas);
        }

        // 4. Draw decorations such as scrollbars and the foreground
        onDrawForeground(canvas);

        // 5. Draw the default focus highlight
        drawDefaultFocusHighlight(canvas);

        if (debugDraw()) {
            debugDrawFocus(canvas);
        }

        // we're done...
        return;
    }

    //……
}

The draw function does the following

  1. Draw the background
  2. Draw the view itself
  3. Traverse and draw the child views
  4. Draw the foreground

Let's take a look at the drawing methods in Canvas. They are all JNI methods, and each corresponds one-to-one with a drawing method on SkCanvas (a standalone Skia sketch follows the list of methods).

//File --> /frameworks/base/graphics/java/android/graphics/Canvas.java

//……
private static native void nDrawBitmap(long nativeCanvas, int[] colors, int offset, int stride,
            float x, float y, int width, int height, boolean hasAlpha, long nativePaintOrZero);

private static native void nDrawColor(long nativeCanvas, int color, int mode);

private static native void nDrawPaint(long nativeCanvas, long nativePaint);

private static native void nDrawPoint(long canvasHandle, float x, float y, long paintHandle);

private static native void nDrawPoints(long canvasHandle, float[] pts, int offset, int count,
                                       long paintHandle);

private static native void nDrawLine(long nativeCanvas, float startX, float startY, float stopX,
                                     float stopY, long nativePaint);

private static native void nDrawLines(long canvasHandle, float[] pts, int offset, int count,
                                      long paintHandle);

private static native void nDrawRect(long nativeCanvas, float left, float top, float right,
                                     float bottom, long nativePaint);

private static native void nDrawOval(long nativeCanvas, float left, float top, float right,
                                     float bottom, long nativePaint);

private static native void nDrawCircle(long nativeCanvas, float cx, float cy, float radius,
                                       long nativePaint);

private static native void nDrawArc(long nativeCanvas, float left, float top, float right,
                                    float bottom, float startAngle, float sweep, boolean useCenter, long nativePaint);

private static native void nDrawRoundRect(long nativeCanvas, float left, float top, float right,
                                          float bottom, float rx, float ry, long nativePaint);
//……
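
On the native side these nDraw* methods end up as SkCanvas draw calls. The following standalone sketch (my own example, not the hwui dispatch code; include paths assume a recent Skia checkout) shows the SkCanvas counterparts of a few of them, drawing into a canvas-owned raster bitmap:

#include "include/core/SkBitmap.h"
#include "include/core/SkCanvas.h"
#include "include/core/SkPaint.h"
#include "include/core/SkRect.h"

// Sketch: the SkCanvas calls that roughly correspond to nDrawColor, nDrawRect,
// nDrawCircle and nDrawLine above.
void drawPrimitives() {
    SkBitmap bitmap;
    bitmap.allocN32Pixels(256, 256);     // CPU-backed pixel storage
    SkCanvas canvas(bitmap);             // canvas backed by the bitmap

    SkPaint paint;
    paint.setAntiAlias(true);
    paint.setColor(SK_ColorBLUE);

    canvas.drawColor(SK_ColorWHITE);                             // ~ nDrawColor
    canvas.drawRect(SkRect::MakeXYWH(10, 10, 100, 60), paint);   // ~ nDrawRect
    canvas.drawCircle(180, 60, 40, paint);                       // ~ nDrawCircle
    canvas.drawLine(10, 150, 240, 150, paint);                   // ~ nDrawLine
}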

Post Surface

The last step of software drawing submits the drawn content to SurfaceFlinger via surface.unlockCanvasAndPost. Once SurfaceFlinger, as the consumer, has processed the graphics, our UI is displayed.

public void unlockCanvasAndPost(Canvas canvas) {
    synchronized (mLock) {
        checkNotReleasedLocked();

        if (mHwuiContext != null) {
            mHwuiContext.unlockAndPost(canvas);
        } else {
            unlockSwCanvasAndPost(canvas);
        }
    }
}

private void unlockSwCanvasAndPost(Canvas canvas) {
    if (canvas != mCanvas) {
        throw new IllegalArgumentException("canvas object must be the same instance that "
                                           + "was previously returned by lockCanvas");
    }
    if (mNativeObject != mLockedObject) {
        Log.w(TAG, "WARNING: Surface's mNativeObject (0x" +
              Long.toHexString(mNativeObject) + ") != mLockedObject (0x" +
              Long.toHexString(mLockedObject) +")");
    }
    if (mLockedObject == 0) {
        throw new IllegalStateException("Surface was not locked");
    }
    try {
        nativeUnlockCanvasAndPost(mLockedObject, canvas);
    } finally {
        nativeRelease(mLockedObject);
        mLockedObject = 0;
    }
}

This calls the native function nativeUnlockCanvasAndPost, so let's keep going.

static void nativeUnlockCanvasAndPost(JNIEnv* env, jclass clazz,
        jlong nativeObject, jobject canvasObj) {
    sp<Surface> surface(reinterpret_cast<Surface *>(nativeObject));
    if (!isSurfaceValid(surface)) {
        return;
    }

    // detach the canvas from the surface
    Canvas* nativeCanvas = GraphicsJNI::getNativeCanvas(env, canvasObj);
    nativeCanvas->setBitmap(SkBitmap());

    // unlock surface
    status_t err = surface->unlockAndPost();
    if (err < 0) {
        doThrowIAE(env);
    }
}

Here, surface->unlockAndPost() passes the image drawn by Skia to SurfaceFlinger for composition. That completes the software drawing flow with Skia. How a buffer is obtained through the Surface, and how surface->unlockAndPost() notifies SurfaceFlinger once the buffer has been filled, will be explained in detail in the next article on graphics buffers.

As you can see, the software drawing flow with Skia is much simpler than hardware drawing. Next, let's look at how Skia is used to draw Flutter UIs.

Drawing the Flutter UI with Skia

Before explaining how Flutter produces images through Skia, here is a brief introduction to Flutter. Flutter's architecture consists of three layers: the Framework layer, the Engine layer, and the Embedder.

  • The Framework layer is implemented in Dart and includes Widgets such as UI, text, images and buttons, plus rendering, animation, gestures and so on.

  • The Engine is implemented in C++ and mainly contains the Skia rendering engine, the Dart virtual machine, and text layout modules.

  • The Embedder is an embedding layer through which Flutter is embedded into each platform; its main work includes setting up the rendering Surface, thread setup, and plugins

With Flutter's architecture in mind, let's look at how Flutter displays a screen. In Android, displaying a screen means parsing the XML layout into ViewGroups and then going through the Measure, Layout and Draw passes. Flutter works a little differently: it converts the Widget layout written in Dart into an Element tree and a RenderObject tree. The Element tree roughly corresponds to the ViewGroup hierarchy, and the RenderObject tree corresponds to that hierarchy after Measure and Layout. This pattern is used in many places; a WebView, for example, also builds a DOM tree, a render tree and render objects when rendering a page. The benefit is that changed components can be found by diffing, so only those components are re-rendered; it is also friendly to cross-platform work, since this tree form can abstract out the parts common to different platforms.


With those two pieces of background covered, let's look directly at how Flutter uses Skia to draw its UI.

Below is a demo of a Flutter page

import 'package:flutter/material.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: const MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

This page is a Widget tree, roughly the equivalent of an Activity's XML. The Widget tree is converted into an Element tree and a RenderObject tree; let's see how the entry function runApp performs that conversion.

//File --> /packages/flutter/lib/src/widgets
void runApp(Widget app) {
  WidgetsFlutterBinding.ensureInitialized()
    ..scheduleAttachRootWidget(app)
    ..scheduleWarmUpFrame();
}

void scheduleAttachRootWidget(Widget rootWidget) {
    Timer.run(() {
        attachRootWidget(rootWidget);
    });
}

void attachRootWidget(Widget rootWidget) {
    _readyToProduceFrames = true;
    _renderViewElement = RenderObjectToWidgetAdapter<RenderBox>(
        container: renderView,
        debugShortDescription: '[root]',
        child: rootWidget,
    ).attachToRenderTree(buildOwner, renderViewElement as RenderObjectToWidgetElement<RenderBox>);
}

Next, look at the attachToRenderTree function

RenderObjectToWidgetElement<T> attachToRenderTree(BuildOwner owner, [RenderObjectToWidgetElement<T> element]) {
    if (element == null) {
      owner.lockState(() {
        element = createElement();  //create the root Element
        element.assignOwner(owner); //bind the BuildOwner
      });
      owner.buildScope(element, () { //initialization of the child widgets starts here
        element.mount(null, null);  // run the root Element's mount method before initializing the child widgets
      });
    } else {
      ...
    }
    return element;
  }

void mount(Element parent, dynamic newSlot) {
  super.mount(parent, newSlot);
  _renderObject = widget.createRenderObject(this);
  attachRenderObject(newSlot);
  _dirty = false;
}

As the code shows, the Widgets are converted into Elements, and each Element then calls its mount method, where the Widget is further converted into a RenderObject. At this point both the Element tree and the RenderObject tree for the Widget tree have been generated.

As mentioned earlier, a RenderObject is similar to a ViewGroup that has already gone through Measure and Layout; I won't cover RenderObject's Measure and Layout here. That leaves the Draw pass, which also happens in RenderObject, with its entry point in RenderObject's paint function.

// Drawing entry point: starting from the root of the view tree, every child node is painted in turn
@override
  void paint(PaintingContext context, Offset offset) {
    if (child != null)
      context.paintChild(child, offset);
  }

As you can see, RenderObject draws through a PaintingContext. Let's find out what PaintingContext is.

//File --> /packages/flutter/lib/src/rendering/object.dart

import 'dart:ui' as ui show PictureRecorder;

class PaintingContext extends ClipContext {
  @protected
  PaintingContext(this._containerLayer, this.estimatedBounds)
 
  final ContainerLayer _containerLayer;
  final Rect estimatedBounds;
  
  PictureLayer _currentLayer;
  ui.PictureRecorder _recorder;
  Canvas _canvas;
 
  @override
  Canvas get canvas {
    if (_canvas == null)
      _startRecording();
    return _canvas;
  }
 
  void _startRecording() {
    _currentLayer = PictureLayer(estimatedBounds);
    _recorder = ui.PictureRecorder();
    _canvas = Canvas(_recorder);
    _containerLayer.append(_currentLayer);
  }
  
   void stopRecordingIfNeeded() {
    if (!_isRecording)
      return;
    _currentLayer.picture = _recorder.endRecording();
    _currentLayer = null;
    _recorder = null;
    _canvas = null;
}

As you can see, PaintingContext is a drawing context. The CanvasContext mentioned earlier when discussing OpenGL hardware acceleration is also a drawing context, wrapping the Skia, OpenGL or Vulkan rendering pipeline; the PaintingContext here wraps Skia.

We can obtain a Canvas through PaintingContext's canvas getter, which calls _startRecording; that function creates a PictureRecorder and a Canvas. Both classes live in the dart:ui library, which sits in the Engine layer. As mentioned in the architecture overview, Flutter is split into the Framework, Engine and Embedder layers, and the Engine contains Skia, the Dart virtual machine and text layout; dart:ui belongs to the Engine layer.

Let's go to the Engine-layer code and look at the implementation of Canvas.

//File --> engine-master\lib\ui\canvas.dart
Canvas(PictureRecorder recorder, [ Rect? cullRect ]) : assert(recorder != null) { // ignore: unnecessary_null_comparison
    if (recorder.isRecording)
      throw ArgumentError('"recorder" must not already be associated with another Canvas.');
    _recorder = recorder;
    _recorder!._canvas = this;
    cullRect ??= Rect.largest;
    _constructor(recorder, cullRect.left, cullRect.top, cullRect.right, cullRect.bottom);
  }
void _constructor(PictureRecorder recorder,
                  double left,
                  double top,
                  double right,
                  double bottom) native 'Canvas_constructor';

Here Canvas calls the native method Canvas_constructor; let's look at its implementation.

//File --> engine-master\lib\ui\painting\engine.cc
static void Canvas_constructor(Dart_NativeArguments args) {
  UIDartState::ThrowIfUIOperationsProhibited();
  DartCallConstructor(&Canvas::Create, args);
}
fml::RefPtr<Canvas> Canvas::Create(PictureRecorder* recorder,
                                   double left,
                                   double top,
                                   double right,
                                   double bottom) {
  if (!recorder) {
    Dart_ThrowException(
        ToDart("Canvas constructor called with non-genuine PictureRecorder."));
    return nullptr;
  }
  fml::RefPtr<Canvas> canvas = fml::MakeRefCounted<Canvas>(
      recorder->BeginRecording(SkRect::MakeLTRB(left, top, right, bottom)));
  recorder->set_canvas(canvas);
  return canvas;
}

Canvas::Canvas(SkCanvas* canvas) : canvas_(canvas) {}

As you can see, an SkCanvas is created here via PictureRecorder->BeginRecording. This is actually another way of using SkCanvas; here is a simple usage demo.

Picture createSolidRectanglePicture(
  Color color, double width, double height)
{

  PictureRecorder recorder = PictureRecorder();
  Canvas canvas = Canvas(recorder);

  Paint paint = Paint();
  paint.color = color;

  canvas.drawRect(Rect.fromLTWH(0, 0, width, height), paint);
  return recorder.endRecording();
}

This demo simply records a solid-color rectangle; the way it creates the Skia canvas is the same way Flutter does.


At this point our SkCanvas has been created and can be obtained directly through PaintingContext's canvas getter. Once we have the SkCanvas, calling the Canvas drawing APIs is all it takes to draw the image.
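
At the Skia (C++) level, the PictureRecorder/Canvas pair exposed by dart:ui corresponds closely to SkPictureRecorder and the SkCanvas it hands out. Below is a minimal standalone sketch of that recording pattern (my own example, not Flutter engine code; recordSolidRect is a hypothetical helper and the include paths assume a recent Skia checkout):

#include "include/core/SkCanvas.h"
#include "include/core/SkPaint.h"
#include "include/core/SkPicture.h"
#include "include/core/SkPictureRecorder.h"
#include "include/core/SkRect.h"

// Record draw commands into an SkPicture instead of rasterizing immediately;
// the picture can be replayed later on any canvas (CPU or GPU backed).
sk_sp<SkPicture> recordSolidRect(float width, float height) {
    SkPictureRecorder recorder;
    SkCanvas* canvas = recorder.beginRecording(SkRect::MakeWH(width, height));

    SkPaint paint;
    paint.setColor(SK_ColorRED);
    canvas->drawRect(SkRect::MakeWH(width, height), paint);   // recorded, not rasterized yet

    return recorder.finishRecordingAsPicture();               // ~ PictureRecorder.endRecording()
}

// Later, some target canvas replays it:
//   targetCanvas->drawPicture(recordSolidRect(200, 100));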

The full flow of displaying a Flutter UI is fairly complex. Flutter builds its own image display pipeline: it does not rely on Android's SurfaceFlinger for its internal layer composition, nor on Android's Gralloc module to allocate its internal image buffers, so it needs its own image producers, its own image consumers, and its own graphics buffers. That involves many steps, such as how VSync is received, how layers are processed and composited, and how image buffers are created. This has only been a first look at the image-producer part of Flutter; I won't go into further detail here, and will write a dedicated series of articles about Flutter later.

Vulkan

Compared with OpenGL, Vulkan lets you describe to the graphics card in much more detail what your application intends to do, which brings better performance and lower driver overhead. As OpenGL's successor it was designed from the start to be cross-platform, so you can develop for Windows, Linux and Android, and it can even run on macOS. Android added Vulkan support starting with 7.0. Vulkan is clearly the direction things are heading, since it is more powerful and performs better than OpenGL. Let's now see how to use Vulkan to produce images.

How do you use Vulkan?

Using Vulkan is similar to OpenGL, again in three steps: initialize, draw, and submit the buffer. Let's look at the concrete flow.

1. Initialize the Vulkan instance, the physical device, the queues, and the Surface

  • Create the instance (VkInstance)
VkInstanceCreateInfo instance_create_info = { 
  VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO, 
  nullptr, 
  0, 
  &application_info, 
  0, 
  nullptr, 
  static_cast<uint32_t>(desired_extensions.size()), 
  desired_extensions.size() > 0 ? &desired_extensions[0] : nullptr 
};

VkInstance inst;
VkResult result = vkCreateInstance( &instance_create_info, nullptr, &inst ); 
  • Initialize the physical device, i.e. the graphics card. Vulkan is designed to support multiple GPUs; picking the first device is enough here.
uint32_t extensions_count = 0; 
VkResult result = VK_SUCCESS; 

//After picking a physical device, query the device extensions it supports
result = vkEnumerateDeviceExtensionProperties( physical_device, nullptr, &extensions_count, &available_extensions[0]); 
if( (result != VK_SUCCESS) || 
    (extensions_count == 0) ) { 
  std::cout << "Could not get the number of device extensions." << std::endl; 
  return false; 
}
  • Get the queue. Every Vulkan operation, from drawing to uploading textures, is performed by submitting commands to a queue
uint32_t queue_families_count = 0; 

//Query the number of queue families first, then fetch their properties and pick one
vkGetPhysicalDeviceQueueFamilyProperties( physical_device, &queue_families_count, nullptr );
if( queue_families_count == 0 ) {
  std::cout << "Could not acquire properties of queue families." << std::endl;
  return false;
}
queue_families.resize( queue_families_count );
vkGetPhysicalDeviceQueueFamilyProperties( physical_device, &queue_families_count, &queue_families[0] );
  • Initialize the logical device. After choosing which physical device to use, we need to set up a logical device to interact with it.
VkResult result = vkCreateDevice( physical_device, &device_create_info, nullptr, &logical_device ); 
if( (result != VK_SUCCESS) || 
    (logical_device == VK_NULL_HANDLE) ) { 
  std::cout << "Could not create logical device." << std::endl; 
  return false; 
} 

return true;
  • Once the above initialization is done, initialize the Surface next, and then we can draw with Vulkan
#ifdef VK_USE_PLATFORM_WIN32_KHR 

//Create a Win32 surface; on Android, VkAndroidSurfaceCreateInfoKHR is used instead
VkWin32SurfaceCreateInfoKHR surface_create_info = { 
  VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR, 
  nullptr, 
  0, 
  window_parameters.HInstance, 
  window_parameters.HWnd 
}; 

VkResult result = vkCreateWin32SurfaceKHR( instance, &surface_create_info, nullptr, &presentation_surface );

2. Draw the image via the vkCmdDraw function

void vkCmdDraw(
    //In Vulkan, operations such as draw commands and memory transitions are not executed by direct function calls; they all have to be recorded into a command buffer
    VkCommandBuffer commandBuffer,
    uint32_t vertexCount, //number of vertices
    uint32_t instanceCount, // number of instances to draw; set to 1 if not instancing
    uint32_t firstVertex,// index of the first vertex in the vertex buffer, related to gl_VertexIndex in the vertex shader
    uint32_t firstInstance);// analogous to firstVertex, but for instances
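
To put vkCmdDraw in context, here is a hedged sketch of how a draw is typically recorded into a command buffer. The command buffer, render pass, framebuffer, pipeline and extent are all assumed to have been created beforehand; recordDraw is a hypothetical helper, not code from the Android sources.

#include <vulkan/vulkan.h>

// Sketch: record one triangle draw into a command buffer.
// All handles passed in are assumed to be created elsewhere.
void recordDraw(VkCommandBuffer cmdBuffer, VkRenderPass renderPass,
                VkFramebuffer framebuffer, VkPipeline pipeline, VkExtent2D extent) {
    VkCommandBufferBeginInfo beginInfo = {};
    beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    vkBeginCommandBuffer(cmdBuffer, &beginInfo);

    VkClearValue clearColor = {{{0.0f, 0.0f, 0.0f, 1.0f}}};
    VkRenderPassBeginInfo rpInfo = {};
    rpInfo.sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO;
    rpInfo.renderPass = renderPass;
    rpInfo.framebuffer = framebuffer;
    rpInfo.renderArea.extent = extent;
    rpInfo.clearValueCount = 1;
    rpInfo.pClearValues = &clearColor;

    vkCmdBeginRenderPass(cmdBuffer, &rpInfo, VK_SUBPASS_CONTENTS_INLINE);
    vkCmdBindPipeline(cmdBuffer, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
    vkCmdDraw(cmdBuffer, 3, 1, 0, 0);   // 3 vertices, 1 instance, no offsets
    vkCmdEndRenderPass(cmdBuffer);

    vkEndCommandBuffer(cmdBuffer);
}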

3. Submit the buffer

if (vkQueueSubmit(graphicsQueue, 1, &submitInfo, VK_NULL_HANDLE) != VK_SUCCESS) {
    throw std::runtime_error("failed to submit draw command buffer!");
}
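
The submitInfo referenced above is not shown in the snippet; here is a hedged sketch of how it is usually filled in. The function name and the semaphore handles (submitFrame, imageAvailable, renderFinished) are assumptions for illustration, not names from the Android sources.

#include <stdexcept>
#include <vulkan/vulkan.h>

// Sketch: fill in the VkSubmitInfo referenced above and submit it to the queue.
void submitFrame(VkQueue graphicsQueue, VkCommandBuffer commandBuffer,
                 VkSemaphore imageAvailable, VkSemaphore renderFinished) {
    VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;

    VkSubmitInfo submitInfo = {};
    submitInfo.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submitInfo.waitSemaphoreCount = 1;              // wait until the swapchain image is ready
    submitInfo.pWaitSemaphores = &imageAvailable;
    submitInfo.pWaitDstStageMask = &waitStage;
    submitInfo.commandBufferCount = 1;
    submitInfo.pCommandBuffers = &commandBuffer;    // the command buffer recorded above
    submitInfo.signalSemaphoreCount = 1;            // signalled when rendering completes
    submitInfo.pSignalSemaphores = &renderFinished;

    if (vkQueueSubmit(graphicsQueue, 1, &submitInfo, VK_NULL_HANDLE) != VK_SUCCESS) {
        throw std::runtime_error("failed to submit draw command buffer!");
    }
}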

This has been a fairly shallow introduction to Vulkan, barely scratching the surface. Using Vulkan is far more complex than OpenGL, and its mechanisms are much more involved; to really understand it you need dedicated, in-depth study. Even so, this is enough to let us see how Vulkan, as an image producer, produces images in the Android system, so let's take a look.

Hardware acceleration with Vulkan

When discussing OpenGL hardware acceleration earlier, I mentioned CanvasContext, which selects a rendering pipeline according to the render type. Whether Android renders through Vulkan or through OpenGL mainly comes down to which rendering pipeline CanvasContext picks.

CanvasContext* CanvasContext::create(RenderThread& thread,
        bool translucent, RenderNode* rootRenderNode, IContextFactory* contextFactory) {

    auto renderType = Properties::getRenderPipelineType();

    switch (renderType) {
        case RenderPipelineType::OpenGL:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                    std::make_unique<OpenGLPipeline>(thread));
        case RenderPipelineType::SkiaGL:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                    std::make_unique<skiapipeline::SkiaOpenGLPipeline>(thread));
        case RenderPipelineType::SkiaVulkan:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                                std::make_unique<skiapipeline::SkiaVulkanPipeline>(thread));
        default:
            LOG_ALWAYS_FATAL("canvas context type %d not supported", (int32_t) renderType);
            break;
    }
    return nullptr;
}

Here we'll look directly at SkiaVulkanPipeline.

//File --> /frameworks/base/libs/hwui/pipeline/skia/SkiaVulkanPipeline.cpp
SkiaVulkanPipeline::SkiaVulkanPipeline(renderthread::RenderThread& thread)
        : SkiaPipeline(thread), mVkManager(thread.vulkanManager()) {}

SkiaVulkanPipeline's constructor initializes a VulkanManager. VulkanManager wraps the use of Vulkan, similar to the EglManager in the OpenGLPipeline discussed earlier. Let's look at VulkanManager's initialize function.

//File --> /frameworks/base/libs/hwui/renderthread/VulkanManager.cpp
void VulkanManager::initialize() {
    if (hasVkContext()) {
        return;
    }

    auto canPresent = [](VkInstance, VkPhysicalDevice, uint32_t) { return true; };
    mBackendContext.reset(GrVkBackendContext::Create(vkGetInstanceProcAddr, vkGetDeviceProcAddr,
                                                     &mPresentQueueIndex, canPresent));
    //……
}

In the initialize function we mainly care about the GrVkBackendContext::Create method.

// Create the base Vulkan objects needed by the GrVkGpu object
const GrVkBackendContext* GrVkBackendContext::Create(uint32_t* presentQueueIndexPtr,
                                                     CanPresentFn canPresent,
                                                     GrVkInterface::GetProc getProc) {
    //……

    const VkInstanceCreateInfo instance_create = {
        VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,    // sType
        nullptr,                                   // pNext
        0,                                         // flags
        &app_info,                                 // pApplicationInfo
        (uint32_t) instanceLayerNames.count(),     // enabledLayerNameCount
        instanceLayerNames.begin(),                // ppEnabledLayerNames
        (uint32_t) instanceExtensionNames.count(), // enabledExtensionNameCount
        instanceExtensionNames.begin(),            // ppEnabledExtensionNames
    };

    ACQUIRE_VK_PROC(CreateInstance, VK_NULL_HANDLE, VK_NULL_HANDLE);
    //1. Create the Vulkan instance
    err = grVkCreateInstance(&instance_create, nullptr, &inst);
    if (err < 0) {
        SkDebugf("vkCreateInstance failed: %d\n", err);
        return nullptr;
    }

    

    uint32_t gpuCount;
    //2. Query the available physical devices
    err = grVkEnumeratePhysicalDevices(inst, &gpuCount, nullptr);
    if (err) {
        //……
    }
    //……
    gpuCount = 1;
    //3. Select a physical device
    
    err = grVkEnumeratePhysicalDevices(inst, &gpuCount, &physDev);
    if (err) {
        //……
    }

    //4. Query the queue families
    uint32_t queueCount;
    grVkGetPhysicalDeviceQueueFamilyProperties(physDev, &queueCount, nullptr);
    if (!queueCount) {
        //……
        return nullptr;
    }

    SkAutoMalloc queuePropsAlloc(queueCount * sizeof(VkQueueFamilyProperties));
    // now get the actual queue props
    VkQueueFamilyProperties* queueProps = (VkQueueFamilyProperties*)queuePropsAlloc.get();
    //5. Select a queue family
    grVkGetPhysicalDeviceQueueFamilyProperties(physDev, &queueCount, queueProps);
    
    //……

    // iterate to find the graphics queue
    const VkDeviceCreateInfo deviceInfo = {
        VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,    // sType
        nullptr,                                 // pNext
        0,                                       // VkDeviceCreateFlags
        queueInfoCount,                          // queueCreateInfoCount
        queueInfo,                               // pQueueCreateInfos
        (uint32_t) deviceLayerNames.count(),     // layerCount
        deviceLayerNames.begin(),                // ppEnabledLayerNames
        (uint32_t) deviceExtensionNames.count(), // extensionCount
        deviceExtensionNames.begin(),            // ppEnabledExtensionNames
        &deviceFeatures                          // ppEnabledFeatures
    };
    //6. Create the logical device
    err = grVkCreateDevice(physDev, &deviceInfo, nullptr, &device);
    if (err) {
        SkDebugf("CreateDevice failed: %d\n", err);
        grVkDestroyInstance(inst, nullptr);
        return nullptr;
    }

    auto interface =
        sk_make_sp<GrVkInterface>(getProc, inst, device, extensionFlags);
    if (!interface->validate(extensionFlags)) {
        SkDebugf("Vulkan interface validation failed\n");
        grVkDeviceWaitIdle(device);
        grVkDestroyDevice(device, nullptr);
        grVkDestroyInstance(inst, nullptr);
        return nullptr;
    }

    VkQueue queue;
    grVkGetDeviceQueue(device, graphicsQueueIndex, 0, &queue);

    GrVkBackendContext* ctx = new GrVkBackendContext();
    ctx->fInstance = inst;
    ctx->fPhysicalDevice = physDev;
    ctx->fDevice = device;
    ctx->fQueue = queue;
    ctx->fGraphicsQueueIndex = graphicsQueueIndex;
    ctx->fMinAPIVersion = kGrVkMinimumVersion;
    ctx->fExtensions = extensionFlags;
    ctx->fFeatures = featureFlags;
    ctx->fInterface.reset(interface.release());
    ctx->fOwnsInstanceAndDevice = true;

    return ctx;
}

As you can see, what GrVkBackendContext::Create does is initialize Vulkan, and the initialization steps are the same as those described above in how to use Vulkan; they are the standard, generic flow.

With initialization done, let's look next at how Vulkan binds a Surface; only after a Surface is bound can we draw images with Vulkan.

//File --> /frameworks/base/libs/hwui/renderthread/VulkanManager.cpp
VulkanSurface* VulkanManager::createSurface(ANativeWindow* window) {
    initialize();

    if (!window) {
        return nullptr;
    }

    VulkanSurface* surface = new VulkanSurface();

    VkAndroidSurfaceCreateInfoKHR surfaceCreateInfo;
    memset(&surfaceCreateInfo, 0, sizeof(VkAndroidSurfaceCreateInfoKHR));
    surfaceCreateInfo.sType = VK_STRUCTURE_TYPE_ANDROID_SURFACE_CREATE_INFO_KHR;
    surfaceCreateInfo.pNext = nullptr;
    surfaceCreateInfo.flags = 0;
    surfaceCreateInfo.window = window;

    VkResult res = mCreateAndroidSurfaceKHR(mBackendContext->fInstance, &surfaceCreateInfo, nullptr,
                                            &surface->mVkSurface);
    if (VK_SUCCESS != res) {
        delete surface;
        return nullptr;
    }

    SkDEBUGCODE(VkBool32 supported; res = mGetPhysicalDeviceSurfaceSupportKHR(
                                            mBackendContext->fPhysicalDevice, mPresentQueueIndex,
                                            surface->mVkSurface, &supported);
                // All physical devices and queue families on Android must be capable of
                // presentation with any
                // native window.
                SkASSERT(VK_SUCCESS == res && supported););

    if (!createSwapchain(surface)) {
        destroySurface(surface);
        return nullptr;
    }

    return surface;
}

As you can see, this creates a VulkanSurface and binds it to an ANativeWindow, Android's native window. The createSurface function was also mentioned earlier when introducing OpenGL hardware rendering; it is invoked during performDraw, so I won't repeat that here.

Next comes the flow of calling the Vulkan APIs to draw the image

bool SkiaVulkanPipeline::draw(const Frame& frame, const SkRect& screenDirty, const SkRect& dirty,
                              const FrameBuilder::LightGeometry& lightGeometry,
                              LayerUpdateQueue* layerUpdateQueue, const Rect& contentDrawBounds,
                              bool opaque, bool wideColorGamut,
                              const BakedOpRenderer::LightInfo& lightInfo,
                              const std::vector<sp<RenderNode>>& renderNodes,
                              FrameInfoVisualizer* profiler) {
    sk_sp<SkSurface> backBuffer = mVkSurface->getBackBufferSurface();
    if (backBuffer.get() == nullptr) {
        return false;
    }
    SkiaPipeline::updateLighting(lightGeometry, lightInfo);
    renderFrame(*layerUpdateQueue, dirty, renderNodes, opaque, wideColorGamut, contentDrawBounds,
                backBuffer);
    layerUpdateQueue->clear();

    // Draw visual debugging features
    if (CC_UNLIKELY(Properties::showDirtyRegions ||
                    ProfileType::None != Properties::getProfileType())) {
        SkCanvas* profileCanvas = backBuffer->getCanvas();
        SkiaProfileRenderer profileRenderer(profileCanvas);
        profiler->draw(profileRenderer);
        profileCanvas->flush();
    }

    // Log memory statistics
    if (CC_UNLIKELY(Properties::debugLevel != kDebugDisabled)) {
        dumpResourceCacheUsage();
    }

    return true;
}

Finally, the drawn content is submitted through swapBuffers

void VulkanManager::swapBuffers(VulkanSurface* surface) {
    if (CC_UNLIKELY(Properties::waitForGpuCompletion)) {
        ATRACE_NAME("Finishing GPU work");
        mDeviceWaitIdle(mBackendContext->fDevice);
    }

    SkASSERT(surface->mBackbuffers);
    VulkanSurface::BackbufferInfo* backbuffer =
            surface->mBackbuffers + surface->mCurrentBackbufferIndex;
    GrVkImageInfo* imageInfo;
    SkSurface* skSurface = surface->mImageInfos[backbuffer->mImageIndex].mSurface.get();
    skSurface->getRenderTargetHandle((GrBackendObject*)&imageInfo,
                                     SkSurface::kFlushRead_BackendHandleAccess);
    // Check to make sure we never change the actually wrapped image
    SkASSERT(imageInfo->fImage == surface->mImages[backbuffer->mImageIndex]);

    // We need to transition the image to VK_IMAGE_LAYOUT_PRESENT_SRC_KHR and make sure that all
    // previous work is complete for before presenting. So we first add the necessary barrier here.
    VkImageLayout layout = imageInfo->fImageLayout;
    VkPipelineStageFlags srcStageMask = layoutToPipelineStageFlags(layout);
    VkPipelineStageFlags dstStageMask = VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT;
    VkAccessFlags srcAccessMask = layoutToSrcAccessMask(layout);
    VkAccessFlags dstAccessMask = VK_ACCESS_MEMORY_READ_BIT;

    VkImageMemoryBarrier imageMemoryBarrier = {
            VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,     // sType
            NULL,                                       // pNext
            srcAccessMask,                              // outputMask
            dstAccessMask,                              // inputMask
            layout,                                     // oldLayout
            VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,            // newLayout
            mBackendContext->fGraphicsQueueIndex,       // srcQueueFamilyIndex
            mPresentQueueIndex,                         // dstQueueFamilyIndex
            surface->mImages[backbuffer->mImageIndex],  // image
            {VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1}     // subresourceRange
    };

    mResetCommandBuffer(backbuffer->mTransitionCmdBuffers[1], 0);
    VkCommandBufferBeginInfo info;
    memset(&info, 0, sizeof(VkCommandBufferBeginInfo));
    info.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    info.flags = 0;
    mBeginCommandBuffer(backbuffer->mTransitionCmdBuffers[1], &info);
    mCmdPipelineBarrier(backbuffer->mTransitionCmdBuffers[1], srcStageMask, dstStageMask, 0, 0,
                        nullptr, 0, nullptr, 1, &imageMemoryBarrier);
    mEndCommandBuffer(backbuffer->mTransitionCmdBuffers[1]);

    surface->mImageInfos[backbuffer->mImageIndex].mImageLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;

    // insert the layout transfer into the queue and wait on the acquire
    VkSubmitInfo submitInfo;
    memset(&submitInfo, 0, sizeof(VkSubmitInfo));
    submitInfo.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submitInfo.waitSemaphoreCount = 0;
    submitInfo.pWaitDstStageMask = 0;
    submitInfo.commandBufferCount = 1;
    submitInfo.pCommandBuffers = &backbuffer->mTransitionCmdBuffers[1];
    submitInfo.signalSemaphoreCount = 1;
    // When this command buffer finishes we will signal this semaphore so that we know it is now
    // safe to present the image to the screen.
    submitInfo.pSignalSemaphores = &backbuffer->mRenderSemaphore;

    // Attach second fence to submission here so we can track when the command buffer finishes.
    mQueueSubmit(mBackendContext->fQueue, 1, &submitInfo, backbuffer->mUsageFences[1]);

    // Submit present operation to present queue. We use a semaphore here to make sure all rendering
    // to the image is complete and that the layout has been change to present on the graphics
    // queue.
    const VkPresentInfoKHR presentInfo = {
            VK_STRUCTURE_TYPE_PRESENT_INFO_KHR,  // sType
            NULL,                                // pNext
            1,                                   // waitSemaphoreCount
            &backbuffer->mRenderSemaphore,       // pWaitSemaphores
            1,                                   // swapchainCount
            &surface->mSwapchain,                // pSwapchains
            &backbuffer->mImageIndex,            // pImageIndices
            NULL                                 // pResults
    };

    mQueuePresentKHR(mPresentQueue, &presentInfo);

    surface->mBackbuffer.reset();
    surface->mImageInfos[backbuffer->mImageIndex].mLastUsed = surface->mCurrentTime;
    surface->mImageInfos[backbuffer->mImageIndex].mInvalid = false;
    surface->mCurrentTime++;
}

These steps are the same as with OpenGL: initialize, bind the Surface, draw, submit, so I won't go into more detail; if you are interested in Vulkan you can study it in depth. With that, the flow by which Vulkan, Android's other image producer, produces images has also been covered.

Conclusion

OpenGL, Skia and Vulkan are all cross-platform image producers. They can be used not only on Android devices, but also on iOS and Windows, with essentially the same flow as above, adapted to each device's native window and buffers. So once we understand how Android draws images, we are equipped to understand how images are drawn on any other device.

In the next article I will cover the last part of Android's graphics rendering principles: the graphics buffer. Once all three parts are understood, we will have a solid grasp of how images are drawn in Android.
