Preface
At a high level, primitive composition breaks down into the following 7 steps:
- 1.preComposition: pre-composition processing
- 2.rebuildLayerStacks: rebuild the Layer stacks
- 3.setUpHWComposer: HWC rendering, or preparation for it
- 4.doDebugFlashRegions: enable the debug flash-regions drawing mode
- 5.doTracing: trace logging
- 6.doComposition: composite the primitives
- 7.postComposition: post-composition wrap-up such as vsync bookkeeping.
The previous article analyzed steps 1, 2 and 3, the SF drawing preparation flow. This one focuses on steps 6 and 7, where SF actually composites the primitives.
If you run into problems, please bring them to the discussion at http://www.reibang.com/p/65a3f8ac88c1
Main Text
doComposition: compositing primitives with OpenGL ES
File: /frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::doComposition() {
const bool repaintEverything = android_atomic_and(0, &mRepaintEverything);
for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
const sp<DisplayDevice>& hw(mDisplays[dpy]);
if (hw->isDisplayOn()) {
// transform the dirty region into this screen's coordinate space
const Region dirtyRegion(hw->getDirtyRegion(repaintEverything));
// repaint the framebuffer (if needed)
doDisplayComposition(hw, dirtyRegion);
hw->dirtyRegion.clear();
hw->flip();
}
}
postFramebuffer();
}
Composition consists of two parts:
- 1.doDisplayComposition: composite the primitives through each DisplayDevice
- 2.postFramebuffer: after composition, send the primitives on to the fb driver if needed.
doDisplayComposition: compositing the primitives
void SurfaceFlinger::doDisplayComposition(
const sp<const DisplayDevice>& displayDevice,
const Region& inDirtyRegion)
{
...
if (!doComposeSurfaces(displayDevice)) return;
displayDevice->swapBuffers(getHwComposer());
}
- 1.doComposeSurfaces: composite the primitives of the Layer behind each Surface
- 2.swapBuffers: perform one buffer swap, pushing the data into the FramebufferSurface for further consumption.
doComposeSurfaces: compositing the primitives
This method is long, so I split it into two parts for analysis:
bool SurfaceFlinger::doComposeSurfaces(const sp<const DisplayDevice>& displayDevice)
{
const Region bounds(displayDevice->bounds());
const DisplayRenderArea renderArea(displayDevice);
const auto hwcId = displayDevice->getHwcDisplayId();
const bool hasClientComposition = getBE().mHwc->hasClientComposition(hwcId);
bool applyColorMatrix = false;
bool needsLegacyColorMatrix = false;
bool legacyColorMatrixApplied = false;
if (hasClientComposition) {
Dataspace outputDataspace = Dataspace::UNKNOWN;
if (displayDevice->hasWideColorGamut()) {
outputDataspace = displayDevice->getCompositionDataSpace();
}
getBE().mRenderEngine->setOutputDataSpace(outputDataspace);
getBE().mRenderEngine->setDisplayMaxLuminance(
displayDevice->getHdrCapabilities().getDesiredMaxLuminance());
const bool hasDeviceComposition = getBE().mHwc->hasDeviceComposition(hwcId);
const bool skipClientColorTransform = getBE().mHwc->hasCapability(
HWC2::Capability::SkipClientColorTransform);
applyColorMatrix = !hasDeviceComposition && !skipClientColorTransform;
if (applyColorMatrix) {
getRenderEngine().setupColorTransform(mDrawingState.colorMatrix);
}
needsLegacyColorMatrix =
(displayDevice->getActiveRenderIntent() >= RenderIntent::ENHANCE &&
outputDataspace != Dataspace::UNKNOWN &&
outputDataspace != Dataspace::SRGB);
if (!displayDevice->makeCurrent()) {
getRenderEngine().resetCurrentSurface();
if(!getDefaultDisplayDeviceLocked()->makeCurrent()) {
}
return false;
}
if (hasDeviceComposition) {
getBE().mRenderEngine->clearWithColor(0, 0, 0, 0);
} else {
const Region letterbox(bounds.subtract(displayDevice->getScissor()));
Region region(displayDevice->undefinedRegion.merge(letterbox));
if (!region.isEmpty()) {
drawWormhole(displayDevice, region);
}
}
if (displayDevice->getDisplayType() != DisplayDevice::DISPLAY_PRIMARY) {
const Rect& bounds(displayDevice->getBounds());
const Rect& scissor(displayDevice->getScissor());
if (scissor != bounds) {
const uint32_t height = displayDevice->getHeight();
getBE().mRenderEngine->setScissor(scissor.left, height - scissor.bottom,
scissor.getWidth(), scissor.getHeight());
}
}
}
...
}
In this first half of doComposeSurfaces, SF checks whether each DisplayDevice contains any Layer in Client composition mode; if so, it sets up the RenderEngine environment for those Layers:
- 1.Fetch the Dataspace and set it on the RenderEngine
- 2.Set the color matrix (ColorMatrix) on the RenderEngine
- 3.DisplayDevice::makeCurrent binds the current OpenGL ES context
- 4.If Device-mode Layers exist, i.e. Layers the HWC will draw as overlays, clear the background with OpenGL ES; otherwise call drawWormhole
- 5.Finally, clip the Layer region in OpenGL ES with the scissor.
DisplayDevice makeCurrent
bool DisplayDevice::makeCurrent() const {
bool success = mFlinger->getRenderEngine().setCurrentSurface(*mSurface);
setViewportAndProjection();
return success;
}
RenderEngine sets the DisplayDevice's mSurface as the current render target. mSurface is the RE::Surface object created by RenderEngine that was introduced in the first article of this series. Finally the view matrix is initialized; it is an orthographic projection.
drawWormhole
void SurfaceFlinger::drawWormhole(const sp<const DisplayDevice>& displayDevice, const Region& region) const {
const int32_t height = displayDevice->getHeight();
auto& engine(getRenderEngine());
engine.fillRegionWithColor(region, height, 0, 0, 0, 0);
}
At first glance this method looks just like the background clear in the Device-mode branch above; let's check the source:
void RenderEngine::fillRegionWithColor(const Region& region, uint32_t height, float red,
float green, float blue, float alpha) {
size_t c;
Rect const* r = region.getArray(&c);
Mesh mesh(Mesh::TRIANGLES, c * 6, 2);
Mesh::VertexArray<vec2> position(mesh.getPositionArray<vec2>());
for (size_t i = 0; i < c; i++, r++) {
position[i * 6 + 0].x = r->left;
position[i * 6 + 0].y = height - r->top;
position[i * 6 + 1].x = r->left;
position[i * 6 + 1].y = height - r->bottom;
position[i * 6 + 2].x = r->right;
position[i * 6 + 2].y = height - r->bottom;
position[i * 6 + 3].x = r->left;
position[i * 6 + 3].y = height - r->top;
position[i * 6 + 4].x = r->right;
position[i * 6 + 4].y = height - r->bottom;
position[i * 6 + 5].x = r->right;
position[i * 6 + 5].y = height - r->top;
}
setupFillWithColor(red, green, blue, alpha);
drawMesh(mesh);
}
void RenderEngine::clearWithColor(float red, float green, float blue, float alpha) {
glClearColor(red, green, blue, alpha);
glClear(GL_COLOR_BUFFER_BIT);
}
As you can see, clearWithColor clears the whole screen region OpenGL ES is targeting to the given color, here transparent black, while fillRegionWithColor fills only the given region with a single color, here also black.
We can set the scissor clipping aside for now. Next, let's look at the core logic in the second half.
const Transform& displayTransform = displayDevice->getTransform();
bool firstLayer = true;
for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
const Region clip(bounds.intersect(
displayTransform.transform(layer->visibleRegion)));
if (!clip.isEmpty()) {
switch (layer->getCompositionType(hwcId)) {
case HWC2::Composition::Cursor:
case HWC2::Composition::Device:
case HWC2::Composition::Sideband:
case HWC2::Composition::SolidColor: {
const Layer::State& state(layer->getDrawingState());
if (layer->getClearClientTarget(hwcId) && !firstLayer &&
layer->isOpaque(state) && (state.color.a == 1.0f)
&& hasClientComposition) {
layer->clearWithOpenGL(renderArea);
}
break;
}
case HWC2::Composition::Client: {
...
layer->draw(renderArea, clip);
break;
}
default:
break;
}
} else {
}
firstLayer = false;
}
if (applyColorMatrix) {
getRenderEngine().setupColorTransform(mat4());
}
if (needsLegacyColorMatrix && legacyColorMatrixApplied) {
getRenderEngine().setSaturationMatrix(mat4());
}
getBE().mRenderEngine->disableScissor();
return true;
Note that the actual composition step handles the Cursor, Device, Sideband, SolidColor and Client composition types separately.
For the first four, the Layer is merely cleared so its background cannot interfere with the HWC rendering, which is deferred to a later point; only Layers in Client mode are actually rendered in this step. Finally, if a color matrix was applied, it is reset.
layer->draw calls straight into the subclass's onDraw method, which is the core of the OpenGL ES composition logic.
BufferLayer onDraw
File: /frameworks/native/services/surfaceflinger/BufferLayer.cpp
void BufferLayer::onDraw(const RenderArea& renderArea, const Region& clip,
bool useIdentityTransform) const {
if (CC_UNLIKELY(getBE().compositionInfo.mBuffer == 0)) {
...
return;
}
// Bind the current buffer to the GL texture, and wait for it to be
// ready for us to draw into.
status_t err = mConsumer->bindTextureImage();
...
bool blackOutLayer = isProtected() || (isSecure() && !renderArea.isSecure());
auto& engine(mFlinger->getRenderEngine());
if (!blackOutLayer) {
const bool useFiltering = getFiltering() || needsFiltering(renderArea) || isFixedSize();
float textureMatrix[16];
mConsumer->setFilteringEnabled(useFiltering);
mConsumer->getTransformMatrix(textureMatrix);
if (getTransformToDisplayInverse()) {
uint32_t transform = DisplayDevice::getPrimaryDisplayOrientationTransform();
mat4 tr = inverseOrientation(transform);
sp<Layer> p = mDrawingParent.promote();
if (p != nullptr) {
const auto parentTransform = p->getTransform();
tr = tr * inverseOrientation(parentTransform.getOrientation());
}
const mat4 texTransform(mat4(static_cast<const float*>(textureMatrix)) * tr);
memcpy(textureMatrix, texTransform.asArray(), sizeof(textureMatrix));
}
// Set things up for texturing.
mTexture.setDimensions(getBE().compositionInfo.mBuffer->getWidth(),
getBE().compositionInfo.mBuffer->getHeight());
mTexture.setFiltering(useFiltering);
mTexture.setMatrix(textureMatrix);
engine.setupLayerTexturing(mTexture);
} else {
engine.setupLayerBlackedOut();
}
drawWithOpenGL(renderArea, useIdentityTransform);
engine.disableTexturing();
}
- BufferLayerConsumer::bindTextureImage binds the GraphicBuffer into OpenGL ES.
- Since the protected and secure flags are both off by default, blackOutLayer is false here. setFilteringEnabled computes the crop/rotation transform matrix, getTransformMatrix reads back the matrix computed there, the matrix is assigned to the Texture object, and setupLayerTexturing binds that texture state into OpenGL ES.
- drawWithOpenGL ultimately calls glDrawArrays to composite the results of all the preceding OpenGL ES setup.
BufferLayerConsumer bindTextureImage
File: /frameworks/native/services/surfaceflinger/BufferLayerConsumer.cpp
status_t BufferLayerConsumer::bindTextureImage() {
Mutex::Autolock lock(mMutex);
return bindTextureImageLocked();
}
status_t BufferLayerConsumer::bindTextureImageLocked() {
mRE.checkErrors();
if (mCurrentTexture == BufferQueue::INVALID_BUFFER_SLOT && mCurrentTextureImage == nullptr) {
mRE.bindExternalTextureImage(mTexName, *mRE.createImage());
return NO_INIT;
}
const Rect& imageCrop = canUseImageCrop(mCurrentCrop) ? mCurrentCrop : Rect::EMPTY_RECT;
status_t err = mCurrentTextureImage->createIfNeeded(imageCrop);
if (err != NO_ERROR) {
...
return UNKNOWN_ERROR;
}
mRE.bindExternalTextureImage(mTexName, mCurrentTextureImage->image());
return doFenceWaitLocked();
}
canUseImageCrop first decides whether the crop can be applied, and then the crop region is set on mCurrentTextureImage. What is mCurrentTextureImage? It is an RE::Image object, which wraps the GraphicBuffer.
Finally, doFenceWaitLocked waits on a Fence for OpenGL ES to finish binding the primitive.
RenderEngine bindExternalTextureImage
File: /frameworks/native/services/surfaceflinger/RenderEngine/RenderEngine.cpp
void RenderEngine::bindExternalTextureImage(uint32_t texName, const android::RE::Image& image) {
return bindExternalTextureImage(texName, static_cast<const android::RE::impl::Image&>(image));
}
void RenderEngine::bindExternalTextureImage(uint32_t texName,
const android::RE::impl::Image& image) {
const GLenum target = GL_TEXTURE_EXTERNAL_OES;
glBindTexture(target, texName);
if (image.getEGLImage() != EGL_NO_IMAGE_KHR) {
glEGLImageTargetTexture2DOES(target, static_cast<GLeglImageOES>(image.getEGLImage()));
}
}
Remember the OpenGL ES optimizations we discussed in my article on the OpenGL ES wrapper? The EGLImage set by glEGLImageTargetTexture2DOES is essentially our GraphicBuffer object; from then on, drawing uses that GraphicBuffer as the texture source.
setFilteringEnabled: computing the transform matrix
void BufferLayerConsumer::setFilteringEnabled(bool enabled) {
Mutex::Autolock lock(mMutex);
...
bool needsRecompute = mFilteringEnabled != enabled;
mFilteringEnabled = enabled;
....
if (needsRecompute && mCurrentTextureImage != nullptr) {
computeCurrentTransformMatrixLocked();
}
}
void BufferLayerConsumer::computeCurrentTransformMatrixLocked() {
sp<GraphicBuffer> buf =
(mCurrentTextureImage == nullptr) ? nullptr : mCurrentTextureImage->graphicBuffer();
const Rect& cropRect = canUseImageCrop(mCurrentCrop) ? Rect::EMPTY_RECT : mCurrentCrop;
GLConsumer::computeTransformMatrix(mCurrentTransformMatrix, buf, cropRect, mCurrentTransform,
mFilteringEnabled);
}
Normally the isFixedSize, getFiltering and needsFiltering flags are all false, so execution reaches computeTransformMatrix to process the transform matrix. Note that it consumes mCurrentTransform, obtained earlier during the invalidate step; mCurrentTransform is really a bitmask of transform flags.
GLConsumer computeTransformMatrix
void GLConsumer::computeTransformMatrix(float outTransform[16],
const sp<GraphicBuffer>& buf, const Rect& cropRect, uint32_t transform,
bool filtering) {
static const mat4 mtxFlipH(
-1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
1, 0, 0, 1
);
static const mat4 mtxFlipV(
1, 0, 0, 0,
0, -1, 0, 0,
0, 0, 1, 0,
0, 1, 0, 1
);
static const mat4 mtxRot90(
0, 1, 0, 0,
-1, 0, 0, 0,
0, 0, 1, 0,
1, 0, 0, 1
);
mat4 xform;
if (transform & NATIVE_WINDOW_TRANSFORM_FLIP_H) {
xform *= mtxFlipH;
}
if (transform & NATIVE_WINDOW_TRANSFORM_FLIP_V) {
xform *= mtxFlipV;
}
if (transform & NATIVE_WINDOW_TRANSFORM_ROT_90) {
xform *= mtxRot90;
}
if (!cropRect.isEmpty()) {
float tx = 0.0f, ty = 0.0f, sx = 1.0f, sy = 1.0f;
float bufferWidth = buf->getWidth();
float bufferHeight = buf->getHeight();
float shrinkAmount = 0.0f;
if (filtering) {
switch (buf->getPixelFormat()) {
case PIXEL_FORMAT_RGBA_8888:
case PIXEL_FORMAT_RGBX_8888:
case PIXEL_FORMAT_RGBA_FP16:
case PIXEL_FORMAT_RGBA_1010102:
case PIXEL_FORMAT_RGB_888:
case PIXEL_FORMAT_RGB_565:
case PIXEL_FORMAT_BGRA_8888:
shrinkAmount = 0.5;
break;
default:
// If we don't recognize the format, we must assume the
// worst case (that we care about), which is YUV420.
shrinkAmount = 1.0;
break;
}
}
if (cropRect.width() < bufferWidth) {
tx = (float(cropRect.left) + shrinkAmount) / bufferWidth;
sx = (float(cropRect.width()) - (2.0f * shrinkAmount)) /
bufferWidth;
}
if (cropRect.height() < bufferHeight) {
ty = (float(bufferHeight - cropRect.bottom) + shrinkAmount) /
bufferHeight;
sy = (float(cropRect.height()) - (2.0f * shrinkAmount)) /
bufferHeight;
}
mat4 crop(
sx, 0, 0, 0,
0, sy, 0, 0,
0, 0, 1, 0,
tx, ty, 0, 1
);
xform = crop * xform;
}
// SurfaceFlinger expects the top of its window textures to be at a Y
// coordinate of 0, so GLConsumer must behave the same way. We don't
// want to expose this to applications, however, so we must add an
// additional vertical flip to the transform after all the other transforms.
xform = mtxFlipV * xform;
memcpy(outTransform, xform.asArray(), sizeof(xform));
}
Some matrix math is involved here. I won't repeat the derivations; they are in my earlier posts OpenGL (4) Coordinates and OpenGL (3) Basic Matrix Usage.
First look at the three leading matrices; they are very simple.
static const mat4 mtxFlipH(
-1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
1, 0, 0, 1
);
static const mat4 mtxFlipV(
1, 0, 0, 0,
0, -1, 0, 0,
0, 0, 1, 0,
0, 1, 0, 1
);
static const mat4 mtxRot90(
0, 1, 0, 0,
-1, 0, 0, 0,
0, 0, 1, 0,
1, 0, 0, 1
);
In the first matrix, the x-axis scale component changes from 1 to -1, i.e. a horizontal flip. Likewise, mtxFlipV is a vertical flip along the y axis, and mtxRot90 is a 90-degree rotation around the z axis. They are applied according to the transform flags set on the Surface.
if (filtering) {
switch (buf->getPixelFormat()) {
case PIXEL_FORMAT_RGBA_8888:
case PIXEL_FORMAT_RGBX_8888:
case PIXEL_FORMAT_RGBA_FP16:
case PIXEL_FORMAT_RGBA_1010102:
case PIXEL_FORMAT_RGB_888:
case PIXEL_FORMAT_RGB_565:
case PIXEL_FORMAT_BGRA_8888:
shrinkAmount = 0.5;
break;
default:
shrinkAmount = 1.0;
break;
}
}
if (cropRect.width() < bufferWidth) {
tx = (float(cropRect.left) + shrinkAmount) / bufferWidth;
sx = (float(cropRect.width()) - (2.0f * shrinkAmount)) /
bufferWidth;
}
if (cropRect.height() < bufferHeight) {
ty = (float(bufferHeight - cropRect.bottom) + shrinkAmount) /
bufferHeight;
sy = (float(cropRect.height()) - (2.0f * shrinkAmount)) /
bufferHeight;
}
mat4 crop(
sx, 0, 0, 0,
0, sy, 0, 0,
0, 0, 1, 0,
tx, ty, 0, 1
);
xform = crop * xform;
Next, look at the following segment: when the crop region is smaller than the primitive, a crop matrix is multiplied onto the existing transform, where sx is the scale along the x axis, sy the scale along the y axis, and tx, ty adjust the translation.
The scale follows these formulas:
For RGBA formats: (crop width - (2 * 0.5)) / buffer width
(crop height - (2 * 0.5)) / buffer height
For non-RGBA formats (typically YUV420): (crop width - (2 * 1)) / buffer width
(crop height - (2 * 1)) / buffer height
BufferLayer drawWithOpenGL: drawing with the state staged in OpenGL ES
void BufferLayer::drawWithOpenGL(const RenderArea& renderArea, bool useIdentityTransform) const {
ATRACE_CALL();
const State& s(getDrawingState());
computeGeometry(renderArea, getBE().mMesh, useIdentityTransform);
const Rect bounds{computeBounds()}; // Rounds from FloatRect
Transform t = getTransform();
Rect win = bounds;
if (!s.finalCrop.isEmpty()) {
win = t.transform(win);
if (!win.intersect(s.finalCrop, &win)) {
win.clear();
}
win = t.inverse().transform(win);
if (!win.intersect(bounds, &win)) {
win.clear();
}
}
float left = float(win.left) / float(s.active.w);
float top = float(win.top) / float(s.active.h);
float right = float(win.right) / float(s.active.w);
float bottom = float(win.bottom) / float(s.active.h);
// TODO: we probably want to generate the texture coords with the mesh
// here we assume that we only have 4 vertices
Mesh::VertexArray<vec2> texCoords(getBE().mMesh.getTexCoordArray<vec2>());
texCoords[0] = vec2(left, 1.0f - top);
texCoords[1] = vec2(left, 1.0f - bottom);
texCoords[2] = vec2(right, 1.0f - bottom);
texCoords[3] = vec2(right, 1.0f - top);
auto& engine(mFlinger->getRenderEngine());
engine.setupLayerBlending(mPremultipliedAlpha, isOpaque(s), false /* disableTexture */,
getColor());
engine.setSourceDataSpace(mCurrentDataSpace);
if (isHdrY410()) {
engine.setSourceY410BT2020(true);
}
engine.drawMesh(getBE().mMesh);
engine.disableBlending();
engine.setSourceY410BT2020(false);
}
The core idea here is covered in my earlier post OpenGL Texture Basics and Indices: it sets up the texture coordinates, the blending mode between textures, and the color space.
Since texture coordinates live in [0,1], the window rect must be scaled against the active region of the GraphicBuffer, normalizing it into that range, before being handed to the RenderEngine.
Finally, drawMesh is called to combine the OpenGL ES render state and draw.
RenderEngine drawMesh
void GLES20RenderEngine::drawMesh(const Mesh& mesh) {
ATRACE_CALL();
if (mesh.getTexCoordsSize()) {
glEnableVertexAttribArray(Program::texCoords);
glVertexAttribPointer(Program::texCoords, mesh.getTexCoordsSize(), GL_FLOAT, GL_FALSE,
mesh.getByteStride(), mesh.getTexCoords());
}
glVertexAttribPointer(Program::position, mesh.getVertexSize(), GL_FLOAT, GL_FALSE,
mesh.getByteStride(), mesh.getPositions());
// wide-color path (the elided code builds wideColorState from mState
// according to the output DataSpace)
if (...) {
ProgramCache::getInstance().useProgram(wideColorState);
glDrawArrays(mesh.getPrimitive(), 0, mesh.getVertexCount());
...
} else {
ProgramCache::getInstance().useProgram(mState);
glDrawArrays(mesh.getPrimitive(), 0, mesh.getVertexCount());
}
if (mesh.getTexCoordsSize()) {
glDisableVertexAttribArray(Program::texCoords);
}
}
This is the familiar OpenGL ES draw loop:
- 1.glVertexAttribPointer tells OpenGL ES how to interpret the vertices
- 2.ProgramCache::getInstance().useProgram(wideColorState) selects the shader program.
- 3.glDrawArrays combines all the staged OpenGL ES state and draws into the GraphicBuffer previously attached through the EGLImage
I already analyzed the underlying principle in the software-rendered OpenGL ES pipeline section of my post OpenGL ES Wrapper (Part 2).
DisplayDevice swapBuffers
Once the OpenGL ES drawing is done, DisplayDevice::swapBuffers is called to swap the OpenGL ES buffers and send the result to the consumer side.
File: /frameworks/native/services/surfaceflinger/DisplayDevice.cpp
void DisplayDevice::swapBuffers(HWComposer& hwc) const {
if (hwc.hasClientComposition(mHwcDisplayId) || hwc.hasFlipClientTargetRequest(mHwcDisplayId)) {
mSurface->swapBuffers();
}
status_t result = mDisplaySurface->advanceFrame();
}
A DisplayDevice manages two surfaces: an RE::Surface and a FramebufferSurface.
The RE::Surface is the render target OpenGL ES drew into during the bindTexture step above.
- 1.If Client-mode rendering is required, RE::Surface's swapBuffers is called
- 2.FramebufferSurface::advanceFrame acquires the next primitive to be consumed.
To understand how the two relate, we have to go back to my first article, SurfaceFlinger Initialization.
First, look at this snippet from processDisplayChangesLocked:
sp<DisplaySurface> dispSurface;
sp<IGraphicBufferProducer> producer;
sp<IGraphicBufferProducer> bqProducer;
sp<IGraphicBufferConsumer> bqConsumer;
mCreateBufferQueue(&bqProducer, &bqConsumer, false);
...
dispSurface = new FramebufferSurface(*getBE().mHwc, hwcId, bqConsumer);
FramebufferSurface holds an IGraphicBufferConsumer underneath, a primitive consumer. So a FramebufferSurface is not the Surface we usually talk about: a client-side Surface generally acts as the producer, generating primitives and pushing them into SF, whereas this FramebufferSurface consumes them.
Now look at the second snippet, from setupNewDisplayDeviceInternal:
auto nativeWindowSurface = mCreateNativeWindowSurface(producer);
auto nativeWindow = nativeWindowSurface->getNativeWindow();
...
std::unique_ptr<RE::Surface> renderSurface = getRenderEngine().createSurface();
renderSurface->setCritical(state.type == DisplayDevice::DISPLAY_PRIMARY);
renderSurface->setAsync(state.type >= DisplayDevice::DISPLAY_VIRTUAL);
renderSurface->setNativeWindow(nativeWindow.get());
From this we can see that when the RE::Surface is created, the IGraphicBufferProducer is used to build the egl_surface_v2_t structure used by OpenGL ES; it is the object that carries the primitive drawing results in OpenGL ES.
Combined with my post OpenGL ES Wrapper (Part 1), we can tell that a swapBuffers is really just the IGraphicBufferProducer going through the primitive flow from the previous articles, all the way to the queue step.
With that background, let's look at the mDisplaySurface->advanceFrame method.
FramebufferSurface advanceFrame
File: /frameworks/native/services/surfaceflinger/DisplayHardware/FramebufferSurface.cpp
status_t FramebufferSurface::advanceFrame() {
uint32_t slot = 0;
sp<GraphicBuffer> buf;
sp<Fence> acquireFence(Fence::NO_FENCE);
Dataspace dataspace = Dataspace::UNKNOWN;
status_t result = nextBuffer(slot, buf, acquireFence, dataspace);
mDataSpace = dataspace;
...
return result;
}
status_t FramebufferSurface::nextBuffer(uint32_t& outSlot,
sp<GraphicBuffer>& outBuffer, sp<Fence>& outFence,
Dataspace& outDataspace) {
Mutex::Autolock lock(mMutex);
BufferItem item;
status_t err = acquireBufferLocked(&item, 0);
if (err == BufferQueue::NO_BUFFER_AVAILABLE) {
mHwcBufferCache.getHwcBuffer(mCurrentBufferSlot, mCurrentBuffer,
&outSlot, &outBuffer);
return NO_ERROR;
} else if (err != NO_ERROR) {
...
return err;
}
if (mCurrentBufferSlot != BufferQueue::INVALID_BUFFER_SLOT &&
item.mSlot != mCurrentBufferSlot) {
mHasPendingRelease = true;
mPreviousBufferSlot = mCurrentBufferSlot;
mPreviousBuffer = mCurrentBuffer;
}
mCurrentBufferSlot = item.mSlot;
mCurrentBuffer = mSlots[mCurrentBufferSlot].mGraphicBuffer;
mCurrentFence = item.mFence;
outFence = item.mFence;
mHwcBufferCache.getHwcBuffer(mCurrentBufferSlot, mCurrentBuffer,
&outSlot, &outBuffer);
outDataspace = static_cast<Dataspace>(item.mDataSpace);
status_t result =
mHwc.setClientTarget(mDisplayType, outSlot, outFence, outBuffer, outDataspace);
...
return NO_ERROR;
}
The logic here needs little introduction; it is almost identical to the primitive consumption I covered before. The acquireBufferLocked step obtains the GraphicBuffer that OpenGL ES has just finished compositing and that now needs to be shown on screen.
SF postFramebuffer
void SurfaceFlinger::postFramebuffer()
{
const nsecs_t now = systemTime();
mDebugInSwapBuffers = now;
for (size_t displayId = 0; displayId < mDisplays.size(); ++displayId) {
auto& displayDevice = mDisplays[displayId];
if (!displayDevice->isDisplayOn()) {
continue;
}
const auto hwcId = displayDevice->getHwcDisplayId();
if (hwcId >= 0) {
getBE().mHwc->presentAndGetReleaseFences(hwcId);
}
displayDevice->onSwapBuffersCompleted();
displayDevice->makeCurrent();
for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
auto hwcLayer = layer->getHwcLayer(hwcId);
sp<Fence> releaseFence = getBE().mHwc->getLayerReleaseFence(hwcId, hwcLayer);
if (layer->getCompositionType(hwcId) == HWC2::Composition::Client) {
releaseFence = Fence::merge("LayerRelease", releaseFence,
displayDevice->getClientTargetAcquireFence());
}
layer->onLayerDisplayed(releaseFence);
}
if (!displayDevice->getLayersNeedingFences().isEmpty()) {
sp<Fence> presentFence = getBE().mHwc->getPresentFence(hwcId);
for (auto& layer : displayDevice->getLayersNeedingFences()) {
layer->onLayerDisplayed(presentFence);
}
}
if (hwcId >= 0) {
getBE().mHwc->clearReleaseFences(hwcId);
}
}
mLastSwapBufferTime = systemTime() - now;
mDebugInSwapBuffers = 0;
if (getBE().mHwc->isConnected(HWC_DISPLAY_PRIMARY)) {
uint32_t flipCount = getDefaultDisplayDeviceLocked()->getPageFlipCount();
if (flipCount % LOG_FRAME_STATS_PERIOD == 0) {
logFrameStats();
}
}
}
- 1.presentAndGetReleaseFences: present everything to the screen through the HWC
- 2.onSwapBuffersCompleted: release the current GraphicBuffer back to the Free state.
- 3.onLayerDisplayed: set the release-state Fence.
Remember that however we draw, the HWC HAL, the hardware, and OpenGL ES all hold the same GraphicBuffer through its buffer_handle_t handle; in other words, whichever of the three parties modifies the GraphicBuffer cached on its side is modifying the same ion memory. That is why SF can present everything to the screen through the HWC in one pass.
我們主要關(guān)注前兩個(gè)核心的方法擎淤。
HWC presentAndGetReleaseFences
File: /frameworks/native/services/surfaceflinger/DisplayHardware/HWComposer.cpp
status_t HWComposer::presentAndGetReleaseFences(int32_t displayId) {
ATRACE_CALL();
RETURN_IF_INVALID_DISPLAY(displayId, BAD_INDEX);
auto& displayData = mDisplayData[displayId];
auto& hwcDisplay = displayData.hwcDisplay;
...
auto error = hwcDisplay->present(&displayData.lastPresentFence);
std::unordered_map<HWC2::Layer*, sp<Fence>> releaseFences;
error = hwcDisplay->getReleaseFences(&releaseFences);
displayData.releaseFences = std::move(releaseFences);
return NO_ERROR;
}
- 1.Call the HAL's present method to render
- 2.getReleaseFences fetches the release Fences from the HAL
Let's look directly at presentDisplay in ComposerHal:
Error Composer::presentDisplay(Display display, int* outPresentFence)
{
mWriter.selectDisplay(display);
mWriter.presentDisplay();
Error error = execute();
if (error != Error::NONE) {
return error;
}
mReader.takePresentFence(display, outPresentFence);
return Error::NONE;
}
This still uses the ComposerCommandEngine approach from the previous article to command the HWC to render the data into the fb driver.
I won't retrace that flow; let's go straight to the core code, HWC2On1Adapter::Display::present.
HWC2On1Adapter::Display::present
File: /hardware/interfaces/graphics/composer/2.1/utils/hwc2on1adapter/HWC2On1Adapter.cpp
Error HWC2On1Adapter::Display::present(int32_t* outRetireFence) {
std::unique_lock<std::recursive_mutex> lock(mStateMutex);
if (mChanges) {
Error error = mDevice.setAllDisplays();
if (error != Error::None) {
return error;
}
}
*outRetireFence = mRetireFence.get()->dup();
return Error::None;
}
The core is the call to setAllDisplays, which sends the primitives of every Display; the dup duplicates the retire fence fd that callers will later wait on.
HWC2On1Adapter::setAllDisplays
Error HWC2On1Adapter::setAllDisplays() {
ATRACE_CALL();
std::unique_lock<std::recursive_timed_mutex> lock(mStateMutex);
// Make sure we're ready to validate
for (size_t hwc1Id = 0; hwc1Id < mHwc1Contents.size(); ++hwc1Id) {
if (mHwc1Contents[hwc1Id] == nullptr) {
continue;
}
auto displayId = mHwc1DisplayMap[hwc1Id];
auto& display = mDisplays[displayId];
Error error = display->set(*mHwc1Contents[hwc1Id]);
if (error != Error::None) {
return error;
}
}
{
mHwc1Device->set(mHwc1Device, mHwc1Contents.size(),
mHwc1Contents.data());
}
// Add retire and release fences
for (size_t hwc1Id = 0; hwc1Id < mHwc1Contents.size(); ++hwc1Id) {
if (mHwc1Contents[hwc1Id] == nullptr) {
continue;
}
auto displayId = mHwc1DisplayMap[hwc1Id];
auto& display = mDisplays[displayId];
auto retireFenceFd = mHwc1Contents[hwc1Id]->retireFenceFd;
display->addRetireFence(mHwc1Contents[hwc1Id]->retireFenceFd);
display->addReleaseFences(*mHwc1Contents[hwc1Id]);
}
return Error::None;
}
- 1.HWC2On1Adapter::Display::set sets the render target
- 2.The hw_device_t device's set method is called
- 3.Each display records the release fences.
Let's look directly at the set implementation for msm8960.
The hardware sends the image to the fb driver
File: /hardware/qcom/display/msm8960/libhwcomposer/hwc.cpp
static int hwc_set(hwc_composer_device_1 *dev,
size_t numDisplays,
hwc_display_contents_1_t** displays)
{
int ret = 0;
hwc_context_t* ctx = (hwc_context_t*)(dev);
Locker::Autolock _l(ctx->mBlankLock);
for (uint32_t i = 0; i < numDisplays; i++) {
hwc_display_contents_1_t* list = displays[i];
switch(i) {
case HWC_DISPLAY_PRIMARY:
ret = hwc_set_primary(ctx, list);
break;
case HWC_DISPLAY_EXTERNAL:
ret = hwc_set_external(ctx, list, i);
break;
case HWC_DISPLAY_VIRTUAL:
ret = hwc_set_virtual(ctx, list, i);
break;
default:
ret = -EINVAL;
}
}
CALC_FPS();
MDPComp::resetIdleFallBack();
ctx->mVideoTransFlag = false;
return ret;
}
As usual this dispatches to the render method for each of the three display types; we focus on hwc_set_primary.
static int hwc_set_primary(hwc_context_t *ctx, hwc_display_contents_1_t* list) {
ATRACE_CALL();
int ret = 0;
const int dpy = HWC_DISPLAY_PRIMARY;
if (LIKELY(list) && ctx->dpyAttr[dpy].isActive) {
uint32_t last = list->numHwLayers - 1;
hwc_layer_1_t *fbLayer = &list->hwLayers[last];
int fd = -1;
bool copybitDone = false;
if(ctx->mCopyBit[dpy])
copybitDone = ctx->mCopyBit[dpy]->draw(ctx, list, dpy, &fd);
if(list->numHwLayers > 1)
hwc_sync(ctx, list, dpy, fd);
if (!ctx->mMDPComp[dpy]->draw(ctx, list)) {
ret = -1;
}
private_handle_t *hnd = (private_handle_t *)fbLayer->handle;
if(copybitDone) {
hnd = ctx->mCopyBit[dpy]->getCurrentRenderBuffer();
}
if(hnd) {
if (!ctx->mFBUpdate[dpy]->draw(ctx, hnd)) {
ret = -1;
}
}
if (display_commit(ctx, dpy) < 0) {
return -1;
}
}
closeAcquireFds(list);
return ret;
}
Recall what the hwc did in prepare and it becomes clear what happens here: MDP picks out the Layers it will handle itself and hands the remaining Layers to FBUpdate to finish.
So there are 5 steps:
- 1.mCopyBit draws
- 2.hwc_sync
- 3.the MDPComp object for the display calls draw
- 4.the FBUpdate object for the display calls draw
- 5.display_commit submits all the rendering
In prepare, the mCopyBit data was not processed on msm8960, so we will not look at it for now.
hwc_sync
int hwc_sync(hwc_context_t *ctx, hwc_display_contents_1_t* list, int dpy,
int fd) {
int ret = 0;
int acquireFd[MAX_NUM_APP_LAYERS];
int count = 0;
int releaseFd = -1;
int retireFd = -1;
int fbFd = -1;
bool swapzero = false;
int mdpVersion = qdutils::MDPVersion::getInstance().getMDPVersion();
struct mdp_buf_sync data;
memset(&data, 0, sizeof(data));
//Until B-family supports sync for rotator
if(mdpVersion >= qdutils::MDSS_V5) {
data.flags = MDP_BUF_SYNC_FLAG_WAIT;
}
data.acq_fen_fd = acquireFd;
data.rel_fen_fd = &releaseFd;
data.retire_fen_fd = &retireFd;
...
#ifndef MDSS_TARGET
if(mdpVersion < qdutils::MDSS_V5) {
//A-family
int rotFd = ctx->mRotMgr->getRotDevFd();
struct msm_rotator_buf_sync rotData;
for(uint32_t i = 0; i < ctx->mLayerRotMap[dpy]->getCount(); i++) {
memset(&rotData, 0, sizeof(rotData));
int& acquireFenceFd =
ctx->mLayerRotMap[dpy]->getLayer(i)->acquireFenceFd;
rotData.acq_fen_fd = acquireFenceFd;
rotData.session_id = ctx->mLayerRotMap[dpy]->getRot(i)->getSessId();
ioctl(rotFd, MSM_ROTATOR_IOCTL_BUFFER_SYNC, &rotData);
close(acquireFenceFd);
acquireFenceFd = dup(rotData.rel_fen_fd);
ctx->mLayerRotMap[dpy]->getLayer(i)->releaseFenceFd =
rotData.rel_fen_fd;
}
} else {
//TODO B-family
}
#endif
for(uint32_t i = 0; i < list->numHwLayers; i++) {
if(list->hwLayers[i].compositionType == HWC_OVERLAY &&
list->hwLayers[i].acquireFenceFd >= 0) {
if(UNLIKELY(swapzero))
acquireFd[count++] = -1;
else
acquireFd[count++] = list->hwLayers[i].acquireFenceFd;
}
if(list->hwLayers[i].compositionType == HWC_FRAMEBUFFER_TARGET) {
if(UNLIKELY(swapzero))
acquireFd[count++] = -1;
else if(fd >= 0) {
acquireFd[count++] = fd;
data.flags &= ~MDP_BUF_SYNC_FLAG_WAIT;
} else if(list->hwLayers[i].acquireFenceFd >= 0)
acquireFd[count++] = list->hwLayers[i].acquireFenceFd;
}
}
data.acq_fen_fd_cnt = count;
fbFd = ctx->dpyAttr[dpy].fd;
//Waits for acquire fences, returns a release fence
if(LIKELY(!swapzero)) {
uint64_t start = systemTime();
ret = ioctl(fbFd, MSMFB_BUFFER_SYNC, &data);
}
if(ret < 0) {
}
for(uint32_t i = 0; i < list->numHwLayers; i++) {
if(list->hwLayers[i].compositionType == HWC_OVERLAY ||
list->hwLayers[i].compositionType == HWC_FRAMEBUFFER_TARGET) {
if(UNLIKELY(swapzero)) {
list->hwLayers[i].releaseFenceFd = -1;
} else if(list->hwLayers[i].releaseFenceFd < 0) {
list->hwLayers[i].releaseFenceFd = dup(releaseFd);
}
}
}
if(fd >= 0) {
close(fd);
fd = -1;
}
if (ctx->mCopyBit[dpy])
ctx->mCopyBit[dpy]->setReleaseFd(releaseFd);
//A-family
if(mdpVersion < qdutils::MDSS_V5) {
ctx->mLayerRotMap[dpy]->setReleaseFd(releaseFd);
}
close(releaseFd);
if(UNLIKELY(swapzero))
list->retireFenceFd = -1;
else
list->retireFenceFd = retireFd;
return ret;
}
When MDSS_TARGET is not defined at build time and the MDP version is low (A-family), the MDP does not yet support rotation for these buffers, so that work is delegated to the msm_rotator driver: the MSM_ROTATOR_IOCTL_BUFFER_SYNC command tells the driver to synchronize.
Next, the MSMFB_BUFFER_SYNC command is sent to the fb driver to perform the same synchronization there.
MDPCompLowRes draw
bool MDPCompLowRes::draw(hwc_context_t *ctx, hwc_display_contents_1_t* list) {
...
/* reset Invalidator */
if(idleInvalidator && !sIdleFallBack && mCurrentFrame.mdpCount)
idleInvalidator->markForSleep();
overlay::Overlay& ov = *ctx->mOverlay;
LayerProp *layerProp = ctx->layerProp[mDpy];
int numHwLayers = ctx->listStats[mDpy].numAppLayers;
for(int i = 0; i < numHwLayers && mCurrentFrame.mdpCount; i++ )
{
if(mCurrentFrame.isFBComposed[i]) continue;
hwc_layer_1_t *layer = &list->hwLayers[i];
private_handle_t *hnd = (private_handle_t *)layer->handle;
if(!hnd) {
return false;
}
int mdpIndex = mCurrentFrame.layerToMDP[i];
MdpPipeInfoLowRes& pipe_info =
*(MdpPipeInfoLowRes*)mCurrentFrame.mdpToLayer[mdpIndex].pipeInfo;
ovutils::eDest dest = pipe_info.index;
if(dest == ovutils::OV_INVALID) {
return false;
}
if(!(layerProp[i].mFlags & HWC_MDPCOMP)) {
continue;
}
int fd = hnd->fd;
uint32_t offset = hnd->offset;
Rotator *rot = mCurrentFrame.mdpToLayer[mdpIndex].rot;
if(rot) {
if(!rot->queueBuffer(fd, offset))
return false;
fd = rot->getDstMemId();
offset = rot->getDstOffset();
}
if (!ov.queueBuffer(fd, offset, dest)) {
return false;
}
layerProp[i].mFlags &= ~HWC_MDPCOMP;
}
return true;
}
The core is a single idea: call the queueBuffer methods of Rotator and Overlay, handing the fd backing each layer's buffer to the overlay engine for further consumption.
Eventually this ends up calling the following method:
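One detail in the loop worth isolating is the rotator indirection: when a Rotator session exists for a layer, the buffer is queued to the rotator first, and the overlay must then consume the rotator's *destination* buffer rather than the layer's original handle. A minimal sketch, with `SimRotator` and `pickSourceBuffer` invented for illustration:

```cpp
#include <cstdint>
#include <utility>

// Illustrative mock of the rotator indirection in MDPCompLowRes::draw
// (SimRotator / pickSourceBuffer are made up for this sketch).
struct SimRotator {
    int dstFd;            // fd of the rotated output buffer
    uint32_t dstOffset;   // offset of the rotated output in that buffer
};

// If a rotator session exists for the layer, the overlay consumes the
// rotator's destination fd/offset; otherwise the layer's own handle is
// used directly, exactly as in the if(rot) branch above.
std::pair<int, uint32_t> pickSourceBuffer(int layerFd, uint32_t layerOffset,
                                          const SimRotator* rot) {
    if (rot) return {rot->dstFd, rot->dstOffset};
    return {layerFd, layerOffset};
}
```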
文件:/hardware/qcom/display/msm8960/liboverlay/mdpWrapper.h
inline bool play(int fd, msmfb_overlay_data& od) {
if (ioctl(fd, MSMFB_OVERLAY_PLAY, &od) < 0) {
return false;
}
return true;
}
Finally, ioctl sends MSMFB_OVERLAY_PLAY to the fb driver, which renders the ion memory referenced by the fd handle.
FBUpdateLowRes::draw
bool FBUpdateLowRes::draw(hwc_context_t *ctx, private_handle_t *hnd)
{
if(!mModeOn) {
return true;
}
bool ret = true;
overlay::Overlay& ov = *(ctx->mOverlay);
ovutils::eDest dest = mDest;
if (!ov.queueBuffer(hnd->fd, hnd->offset, dest)) {
ret = false;
}
return ret;
}
The logic here is the same as the MDP path: queueBuffer is called through Overlay, which ends up sending the MSMFB_OVERLAY_PLAY command to the fb driver and stages the data there.
display_commit
static int display_commit(hwc_context_t *ctx, int dpy) {
int fbFd = ctx->dpyAttr[dpy].fd;
if(fbFd == -1) {
ALOGE("%s: Invalid FB fd for display: %d", __FUNCTION__, dpy);
return -1;
}
struct mdp_display_commit commit_info;
memset(&commit_info, 0, sizeof(struct mdp_display_commit));
commit_info.flags = MDP_DISPLAY_COMMIT_OVERLAY;
if(ioctl(fbFd, MSMFB_DISPLAY_COMMIT, &commit_info) == -1) {
return -errno;
}
return 0;
}
Finally MSMFB_DISPLAY_COMMIT commits all of the layers to the fb driver for rendering. The fb driver takes a block of memory from the LCD panel driver, copies the ion-backed pixel data into it, and completes the rendering to the LCD screen.
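The way display_commit builds its argument can be sketched with stand-in types. `SimDisplayCommit` and the constant below are invented for this example (the real `struct mdp_display_commit` has more fields): the point is simply that the struct is zero-filled with memset and only the overlay-commit flag is set before the MSMFB_DISPLAY_COMMIT ioctl.

```cpp
#include <cstdint>
#include <cstring>

// Illustrative stand-ins for the MDP commit structures; the field layout
// here is simplified, only the flag handling mirrors display_commit().
constexpr uint32_t SIM_MDP_DISPLAY_COMMIT_OVERLAY = 0x1;
struct SimDisplayCommit { uint32_t flags; uint32_t reserved[7]; };

// Zero-fill the struct, then set only the overlay-commit flag, as
// display_commit() does before the MSMFB_DISPLAY_COMMIT ioctl.
SimDisplayCommit makeCommitInfo() {
    SimDisplayCommit info;
    std::memset(&info, 0, sizeof(info));
    info.flags = SIM_MDP_DISPLAY_COMMIT_OVERLAY;
    return info;
}
```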
postComposition Handling the work after composition
文件:/frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::postComposition(nsecs_t refreshStartTime)
{
// Release any buffers which were replaced this frame
nsecs_t dequeueReadyTime = systemTime();
for (auto& layer : mLayersWithQueuedFrames) {
layer->releasePendingBuffer(dequeueReadyTime);
}
// |mStateLock| not needed as we are on the main thread
const sp<const DisplayDevice> hw(getDefaultDisplayDeviceLocked());
getBE().mGlCompositionDoneTimeline.updateSignalTimes();
std::shared_ptr<FenceTime> glCompositionDoneFenceTime;
if (hw && getBE().mHwc->hasClientComposition(HWC_DISPLAY_PRIMARY)) {
glCompositionDoneFenceTime =
std::make_shared<FenceTime>(hw->getClientTargetAcquireFence());
getBE().mGlCompositionDoneTimeline.push(glCompositionDoneFenceTime);
} else {
glCompositionDoneFenceTime = FenceTime::NO_FENCE;
}
getBE().mDisplayTimeline.updateSignalTimes();
sp<Fence> presentFence = getBE().mHwc->getPresentFence(HWC_DISPLAY_PRIMARY);
auto presentFenceTime = std::make_shared<FenceTime>(presentFence);
getBE().mDisplayTimeline.push(presentFenceTime);
nsecs_t vsyncPhase = mPrimaryDispSync.computeNextRefresh(0);
nsecs_t vsyncInterval = mPrimaryDispSync.getPeriod();
updateCompositorTiming(
vsyncPhase, vsyncInterval, refreshStartTime, presentFenceTime);
CompositorTiming compositorTiming;
{
std::lock_guard<std::mutex> lock(getBE().mCompositorTimingLock);
compositorTiming = getBE().mCompositorTiming;
}
mDrawingState.traverseInZOrder([&](Layer* layer) {
bool frameLatched = layer->onPostComposition(glCompositionDoneFenceTime,
presentFenceTime, compositorTiming);
if (frameLatched) {
recordBufferingStats(layer->getName().string(),
layer->getOccupancyHistory(false));
}
});
if (presentFenceTime->isValid()) {
if (mPrimaryDispSync.addPresentFence(presentFenceTime)) {
enableHardwareVsync();
} else {
disableHardwareVsync(false);
}
}
if (!hasSyncFramework) {
if (getBE().mHwc->isConnected(HWC_DISPLAY_PRIMARY) && hw->isDisplayOn()) {
enableHardwareVsync();
}
}
if (mAnimCompositionPending) {
mAnimCompositionPending = false;
if (presentFenceTime->isValid()) {
mAnimFrameTracker.setActualPresentFence(
std::move(presentFenceTime));
} else if (getBE().mHwc->isConnected(HWC_DISPLAY_PRIMARY)) {
// The HWC doesn't support present fences, so use the refresh
// timestamp instead.
nsecs_t presentTime =
getBE().mHwc->getRefreshTimestamp(HWC_DISPLAY_PRIMARY);
mAnimFrameTracker.setActualPresentTime(presentTime);
}
mAnimFrameTracker.advanceFrame();
}
mTimeStats.incrementTotalFrames();
if (mHadClientComposition) {
mTimeStats.incrementClientCompositionFrames();
}
if (getBE().mHwc->isConnected(HWC_DISPLAY_PRIMARY) &&
hw->getPowerMode() == HWC_POWER_MODE_OFF) {
return;
}
nsecs_t currentTime = systemTime();
if (mHasPoweredOff) {
mHasPoweredOff = false;
} else {
nsecs_t elapsedTime = currentTime - getBE().mLastSwapTime;
size_t numPeriods = static_cast<size_t>(elapsedTime / vsyncInterval);
if (numPeriods < SurfaceFlingerBE::NUM_BUCKETS - 1) {
getBE().mFrameBuckets[numPeriods] += elapsedTime;
} else {
getBE().mFrameBuckets[SurfaceFlingerBE::NUM_BUCKETS - 1] += elapsedTime;
}
getBE().mTotalTime += elapsedTime;
}
getBE().mLastSwapTime = currentTime;
}
The main job here is bookkeeping after each screen refresh: the current timestamps are recorded on the fence Timelines, and the timing model in mPrimaryDispSync is updated and recomputed.
The details will be covered in a later article.
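One self-contained piece of that bookkeeping is the swap-time histogram at the tail of postComposition. The sketch below models it under assumptions (`NUM_BUCKETS` is fixed at 8 here purely for illustration; `bucketForElapsed` is an invented helper): the elapsed time since the last swap is divided by the vsync interval, and the quotient picks a bucket, with the last bucket absorbing every slower frame, matching the numPeriods logic in the code above.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative model of the mFrameBuckets update in postComposition.
// The bucket count is an assumption for this sketch.
constexpr std::size_t NUM_BUCKETS = 8;

// elapsedNs / vsyncIntervalNs = how many vsync periods the frame took;
// everything at or beyond the last bucket is clamped into it.
std::size_t bucketForElapsed(int64_t elapsedNs, int64_t vsyncIntervalNs) {
    std::size_t numPeriods =
            static_cast<std::size_t>(elapsedNs / vsyncIntervalNs);
    return numPeriods < NUM_BUCKETS - 1 ? numPeriods : NUM_BUCKETS - 1;
}
```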
Summary
At this point we have walked through the whole pipeline, from buffer consumption to composition to hardware rendering. If you are interested in the concrete details of the fb driver, 老羅's writing is worth reading; it covers an older but classic design. I will not dig into what the fb driver does for now; if the opportunity arises later I will analyze its internals.
As usual, let's review the whole flow from buffer consumption to rendering and express it as a diagram. The figure below is the most condensed version I could produce.
I split the whole flow, from consumption to composition, into three stages:
Deciding whether SF needs to refresh the screen
- 1.handleMessageTransaction processes each Layer's pending transactions. Its core job is to replace each Layer's previous-frame mDrawingState with the current frame's mCurrentState. Whenever there is a transaction to process, some Surface's state has changed (its size, its position, and so on), and the whole screen must be refreshed.
- 2.handleMessageInvalidate does two core things:
- First, it detects which layers have a frame ready to display and adds them to mLayersWithQueuedFrames. A queued frame qualifies only if its desired present time is not later than the expected present time, and timestamps more than one second beyond it are treated as bogus (mQueueItems is filled from the onFrameAvailable callback).
- Then it walks every layer that has a frame to display and calls latchBuffer. The core of that method is updateTexImage, which breaks down into three steps:
acquireBufferLocked essentially takes the oldest buffer in mQueue as the one about to be displayed. But when the gap between a buffer's display time and the expected time is too large and the buffer is already stale (in the free state), frames are skipped until the most recent due frame is found.
LayerRejecter checks whether frozen-window mode is on; if it is and the buffer's size does not match, the frame is rejected. Otherwise mDrawingState's requested values are copied into active.
updateAndReleaseLocked releases the previous frame's buffer and sets the newly consumed buffer as the image to be drawn.
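The frame-dropping idea in acquireBufferLocked can be sketched as follows. This is a deliberately simplified model (`SimFrame` and `latchFrame` are invented, and the real code also handles "present later" deferral and the one-second bogus-timestamp guard): while the *next* queued frame is also already due, the head frame is stale and gets dropped, so only the most recent due frame is latched.

```cpp
#include <cstdint>
#include <deque>

// Illustrative sketch of frame dropping during buffer acquisition.
struct SimFrame { int id; int64_t expectedPresentNs; };

// Returns the id of the frame that gets latched, or -1 if the queue is
// empty. Head frames are dropped while a newer frame is also due, since
// showing the stale head would waste the deadline.
int latchFrame(std::deque<SimFrame>& queue, int64_t nowNs) {
    if (queue.empty()) return -1;
    while (queue.size() > 1 && queue[1].expectedPresentNs <= nowNs)
        queue.pop_front();          // drop the stale head frame
    SimFrame f = queue.front();
    queue.pop_front();
    return f.id;
}
```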
SF's preparation flow for drawing
- 1.preComposition notifies the layers that need drawing to unblock their pending mLocalSyncPoints.
- 2.rebuildLayerStacks: if a Layer was added or new buffers arrived in SF, the visible Layer stack is traversed again and the visible, covered, transparent, and opaque regions are recomputed.
- 3.setUpHWComposer prepares the HWComposer HAL and the lib layer that controls the hardware. Its work falls into four steps:
Iterate over every DisplayDevice and call beginFrame, getting ready to draw.
Iterate over every DisplayDevice, first determining its color space and setting the color matrix. Then, for every Layer to be drawn on that display, check whether an hwcLayer has been created, creating one if not; if creation fails, set forceClientComposition, forcing Client mode so the layer is rendered with OpenGL ES. Finally call setGeometry.
Iterate over every DisplayDevice and, based on the DataSpace, decide once more whether Client mode must be forced; then call each layer's setPerFrameData, which ultimately reaches the HAL's setBuffer and stores the buffer handle in the corresponding hw_layer_t.
Iterate over every DisplayDevice and call prepareFrame, preparing the hwc device (hw_device_t) for each display. During this step FBUpdate and MDPComp each request a pipe of a suitable type from PipeBook to carry the parameters of the upcoming rendering.
SF's composition of the buffers
- doComposition: its main job, once the OpenGL ES rendering mode is detected, is to call every Layer's onDraw. Each Layer renders itself through RenderEngine with ordinary OpenGL ES calls, and swapBuffers then pushes the result from RE::Surface into FramebufferSurface for consumption. The frame to be drawn is cached, and finally presentAndGetReleaseFences notifies the HAL to render; the HAL drives the hwc hardware, and hwc in turn has the fb driver render to the screen through the LCD driver.
- postComposition handles the synchronization parameters left over after rendering.
We can therefore draw a conclusion: Android has two rendering modes, OpenGL ES and HWC, and both ultimately reach the fb driver through HWC's pipes.
Nor is the Android rendering stack built around just one producer-consumer pair:
到這里外莲,我們對(duì)整個(gè)SF的流程已經(jīng)有一個(gè)透徹的理解猪半。但是有一個(gè)問題兔朦,里面包含了幾個(gè)不同對(duì)象,SF磨确,app應(yīng)用沽甥,OpenGL es,HWC乏奥,Android摆舟。這些都想都在自己的進(jìn)程中,有著自己的時(shí)間順序邓了,那么Android是怎么把這些對(duì)象同步起來恨诱,接下來我們將會(huì)對(duì)這個(gè)問題進(jìn)行剖析。