Version History
Version | Date |
---|---|
V1.0 | 2018.01.19 |
Preface
I had never actually used the OpenGL graphics library in my projects. Recently I wanted to learn how to use it and found it quite interesting, so naturally I wanted to write up a proper summary, hoping it will be helpful to everyone. The content below comes from 歡迎來到OpenGL的世界 (Welcome to the World of OpenGL).
1. Using the OpenGL Graphics Library (1) — Basic Concepts
2. Using the OpenGL Graphics Library (2) — Rendering Modes, Objects, Extensions, and the State Machine
3. Using the OpenGL Graphics Library (3) — Shaders, Data Types, and Input/Output
4. Using the OpenGL Graphics Library (4) — Uniforms and More Attributes
5. Using the OpenGL Graphics Library (5) — Textures
6. Using the OpenGL Graphics Library (6) — Transformations
7. Using the OpenGL Graphics Library (7) — Coordinate Systems: The Five Coordinate Systems (Part 1)
8. Using the OpenGL Graphics Library (8) — Coordinate Systems: 3D Effects (Part 2)
9. Using the OpenGL Graphics Library (9) — Camera (Part 1)
10. Using the OpenGL Graphics Library (10) — Camera (Part 2)
11. Using the OpenGL Graphics Library (11) — Lighting: Colors
12. Using the OpenGL Graphics Library (12) — Lighting: Basic Lighting
13. Using the OpenGL Graphics Library (13) — Lighting: Materials
14. Using the OpenGL Graphics Library (14) — Lighting: Lighting Maps
15. Using the OpenGL Graphics Library (15) — Lighting: Light Casters
16. Using the OpenGL Graphics Library (16) — Lighting: Multiple Lights
17. Using the OpenGL Graphics Library (17) — Lighting: Review and Summary
18. Using the OpenGL Graphics Library (18) — Model Loading: Assimp
19. Using the OpenGL Graphics Library (19) — Model Loading: Mesh
20. Using the OpenGL Graphics Library (20) — Model Loading: Model
21. Using the OpenGL Graphics Library (21) — Advanced OpenGL: Depth Testing
22. Using the OpenGL Graphics Library (22) — Advanced OpenGL: Stencil Testing
23. Using the OpenGL Graphics Library (23) — Advanced OpenGL: Blending
24. Using the OpenGL Graphics Library (24) — Advanced OpenGL: Face Culling
25. Using the OpenGL Graphics Library (25) — Advanced OpenGL: Framebuffers
26. Using the OpenGL Graphics Library (26) — Advanced OpenGL: Cubemaps
27. Using the OpenGL Graphics Library (27) — Advanced OpenGL: Advanced Data
28. Using the OpenGL Graphics Library (28) — Advanced OpenGL: Advanced GLSL
29. Using the OpenGL Graphics Library (29) — Advanced OpenGL: Geometry Shader
30. Using the OpenGL Graphics Library (30) — Advanced OpenGL: Instancing
31. Using the OpenGL Graphics Library (31) — Advanced OpenGL: Anti-Aliasing
32. Using the OpenGL Graphics Library (32) — Advanced Lighting: Advanced Lighting
33. Using the OpenGL Graphics Library (33) — Advanced Lighting: Gamma Correction
34. Using the OpenGL Graphics Library (34) — Advanced Lighting: Shadows - Shadow Mapping
35. Using the OpenGL Graphics Library (35) — Advanced Lighting: Shadows - Point Shadows
36. Using the OpenGL Graphics Library (36) — Advanced Lighting: Normal Mapping
37. Using the OpenGL Graphics Library (37) — Advanced Lighting: Parallax Mapping
38. Using the OpenGL Graphics Library (38) — Advanced Lighting: HDR
39. Using the OpenGL Graphics Library (39) — Advanced Lighting: Bloom
Deferred Shading
The way we have been doing lighting so far is called forward rendering or forward shading. It is a very straightforward approach: we render an object and light it according to all the light sources in the scene, then render the next object, and so on for every object in the scene. It is quite easy to understand and implement, but it is also quite heavy on performance: for every rendered object the program has to iterate over every light source for every rendered fragment, which adds up very quickly. Forward rendering also tends to waste a lot of fragment shader runs in scenes with high depth complexity (multiple objects covering the same pixel), as most fragment shader outputs are overwritten by later outputs anyway.
Deferred shading, or deferred rendering, was created to solve these issues and dramatically changes the way we render objects. It gives us several new options to significantly optimize scenes with a large number of lights, allowing us to render hundreds or even thousands of lights with an acceptable frame rate. The following image shows a scene with a total of 1874 point lights rendered with deferred shading, something that would be nearly impossible with forward rendering (image source: Hannes Nevalainen).
Deferred shading is based on the idea that we defer (or postpone) most of the heavy rendering work (like lighting) to a later stage. It consists of two passes: in the first pass, the geometry pass, we render the scene once and retrieve all kinds of geometric information from the objects that we store in a collection of textures called the G-buffer; think of position vectors, color vectors, normal vectors, and/or specular values. The geometric information of the scene stored in the G-buffer is then later used for the (more complex) lighting calculations. Below is the content of a G-buffer for a single frame:
In the second pass, the lighting pass, we use the textures of the G-buffer. In the lighting pass we render a screen-filling quad and calculate the scene's lighting for each fragment using the geometric information stored in the G-buffer; we iterate over the G-buffer pixel by pixel. Instead of taking each object all the way from the vertex shader to the fragment shader, we decouple the rendering process and move its advanced fragment processing to a later stage. The lighting calculations stay exactly the same, but this time we take all required input variables from the corresponding G-buffer textures instead of from the vertex shader (plus some uniform variables).
The image below nicely illustrates the entire process of deferred shading:
A major advantage of this approach is that whatever fragment ends up in the G-buffer is the exact same fragment information that ends up as the visible pixel on screen, because the depth test has already concluded it to be the top-most fragment. This ensures that for every pixel we process in the lighting pass, we only calculate lighting once, saving us a lot of useless render work. On top of that, deferred rendering opens up the possibility of further optimizations that allow us to render a much larger number of light sources.
This approach also comes with a few drawbacks: the G-buffer requires us to store a relatively large amount of scene data in its texture color buffers, which eats up quite a bit of video memory, especially since scene data like position vectors requires high precision. Another disadvantage is that it does not support blending (we only have the information of the top-most fragment), and MSAA can no longer be used either. There are several workarounds for these drawbacks that we will discuss at the end of this tutorial.
Filling the G-buffer in the geometry pass is quite efficient, as we directly store object information like position, color, or normals into a framebuffer with almost no processing cost. By also using multiple render targets (MRT), we can even do all of this in a single render pass.
The G-buffer
The G-buffer is the collective term for all the textures used to store lighting-relevant data that is needed in the final lighting pass. Let's take this moment to briefly review all the data we need to light a fragment with forward rendering:
- a 3D position vector to calculate the (interpolated) fragment position variable used by lightDir and viewDir
- an RGB diffuse color vector, also known as albedo
- a 3D normal vector to determine a surface's slope
- a specular intensity float value
- all light source position and color vectors
- the player's or viewer's position vector
With these (per-fragment) variables at our disposal we are able to calculate the (Blinn-)Phong lighting we are used to. The light source positions and colors, and the player's view position, can be configured using uniform variables, but the other variables are all specific to each of an object's fragments. If we can somehow pass the exact same data to the final deferred lighting pass, we can calculate the same lighting effects as before, even though we are only rendering fragments of a 2D quad.
There is no limit in OpenGL to what we can store in a texture, so it should be clear by now that it is feasible to store all per-fragment data in one or multiple screen-sized textures and use them later in the lighting pass. Because the G-buffer textures have the same size as the lighting pass's 2D quad, we get exactly the same fragment data we would have had in a forward rendering setting, but this time with a one-to-one mapping in the lighting pass.
In pseudocode the entire process looks a bit like this:
while(...) // game loop
{
// 1. geometry pass: render all geometry/color data into the G-buffer
glBindFramebuffer(GL_FRAMEBUFFER, gBuffer);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
gBufferShader.Use();
for(Object obj : Objects)
{
ConfigureShaderTransformsAndUniforms();
obj.Draw();
}
// 2. lighting pass: calculate the scene's lighting using the G-buffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT);
lightingPassShader.Use();
BindAllGBufferTextures();
SetLightingUniforms();
RenderQuad();
}
The data we need to store per fragment is: a position vector, a normal vector, a color vector, and a specular intensity value. In the geometry pass we therefore need to render all objects of the scene and store these data components in the G-buffer. We can again use multiple render targets to render to multiple color buffers in a single render pass; this was briefly discussed in the earlier bloom tutorial.
For the geometry pass we first need to initialize a framebuffer object that we intuitively call gBuffer and that has multiple color buffers attached plus a single depth renderbuffer object. For the position and normal textures we would prefer high-precision textures (16- or 32-bit floats per component); for the albedo and specular values the default texture precision (8 bits per component) is fine.
GLuint gBuffer;
glGenFramebuffers(1, &gBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, gBuffer);
GLuint gPosition, gNormal, gAlbedoSpec;
// - position color buffer
glGenTextures(1, &gPosition);
glBindTexture(GL_TEXTURE_2D, gPosition);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gPosition, 0);
// - normal color buffer
glGenTextures(1, &gNormal);
glBindTexture(GL_TEXTURE_2D, gNormal);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, gNormal, 0);
// - color + specular color buffer
glGenTextures(1, &gAlbedoSpec);
glBindTexture(GL_TEXTURE_2D, gAlbedoSpec);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, gAlbedoSpec, 0);
// - tell OpenGL which color attachments of this framebuffer we'll use for rendering
GLuint attachments[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers(3, attachments);
// - then also attach a renderbuffer object as the depth buffer and check framebuffer completeness
[...]
Since we use multiple render targets, we have to explicitly tell OpenGL which of the color buffers associated with GBuffer we want to render to with glDrawBuffers. Also interesting to note is that we store position and normal data in RGB textures, as each only has three components; the color and specular intensity data, however, are combined into a single RGBA texture, which saves us from having to declare an additional color buffer texture. As your deferred rendering pipeline gets more complex and needs more data, you will quickly find new ways to combine data into individual textures.
Next we need to render the data into the G-buffer. Assuming each object has a diffuse, a normal, and a specular intensity texture, we would use something like the following fragment shader to render them into the G-buffer.
#version 330 core
layout (location = 0) out vec3 gPosition;
layout (location = 1) out vec3 gNormal;
layout (location = 2) out vec4 gAlbedoSpec;
in vec2 TexCoords;
in vec3 FragPos;
in vec3 Normal;
uniform sampler2D texture_diffuse1;
uniform sampler2D texture_specular1;
void main()
{
// store the fragment's position vector in the first G-buffer texture
gPosition = FragPos;
// also store the per-fragment normal in the G-buffer
gNormal = normalize(Normal);
// and the per-fragment diffuse color
gAlbedoSpec.rgb = texture(texture_diffuse1, TexCoords).rgb;
// store the specular intensity in gAlbedoSpec's alpha component
gAlbedoSpec.a = texture(texture_specular1, TexCoords).r;
}
As we use multiple render targets, the layout specifiers tell OpenGL which color buffer of the currently active framebuffer we render to. Note that we do not store the specular intensity in a separate color buffer texture, as we can store its single float value in the alpha component of one of the other color buffer textures.
Keep in mind that, because of the lighting calculations, it is crucial to keep all the relevant variables in the same coordinate space; in this case we store (and calculate) all variables in world space.
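The geometry pass's vertex shader (the g_buffer.vs referenced further below) is not listed in this article. A minimal sketch of what it could look like — assuming the usual model/view/projection uniforms and a position/normal/texCoords attribute layout, so treat it only as an illustration:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 texCoords;
out vec3 FragPos;
out vec2 TexCoords;
out vec3 Normal;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
    // world-space position, interpolated into FragPos for the G-buffer fragment shader
    vec4 worldPos = model * vec4(position, 1.0);
    FragPos = worldPos.xyz;
    gl_Position = projection * view * worldPos;
    TexCoords = texCoords;
    // transform the normal to world space with the normal matrix
    mat3 normalMatrix = transpose(inverse(mat3(model)));
    Normal = normalMatrix * normal;
}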
If we now were to render a large collection of nanosuit objects into the gBuffer framebuffer and visualize its content by projecting each color buffer, one by one, onto a screen-filling quad, we would see something like this:
Try to visualize that the world-space position and normal vectors are indeed correct. For instance, normal vectors pointing to the right would be more strongly aligned to a red color, and similarly for position vectors that point from the scene's origin to the right. As soon as you are satisfied with the content of the G-buffer, it is time to move on to the next step: the lighting pass.
The deferred lighting pass
With a large collection of fragment data at our disposal in the G-buffer, we have the option to completely calculate the scene's final lit colors by iterating over each of the G-buffer textures pixel by pixel and using their content as input to the lighting algorithm. Because the G-buffer texture values all represent the final transformed fragment values, we only have to do the expensive lighting operations once per pixel. This makes deferred lighting quite efficient, especially in complex scenes where it is easy to invoke a large number of expensive fragment shader calls with forward rendering.
For the lighting pass we render a 2D screen-filling quad (a bit like a post-processing effect) and execute an expensive lighting fragment shader on each pixel.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
shaderLightingPass.Use();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, gPosition);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, gNormal);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, gAlbedoSpec);
// also send the light-relevant uniforms
SendAllLightUniformsToShader(shaderLightingPass);
glUniform3fv(glGetUniformLocation(shaderLightingPass.Program, "viewPos"), 1, &camera.Position[0]);
RenderQuad();
We bind all the relevant textures of the G-buffer before rendering and also send the lighting-relevant uniform variables to the shader.
The fragment shader of the lighting pass is largely similar to the lighting shaders we have used so far, except for the method we use to obtain the lighting's input variables, which we now sample directly from the G-buffer.
#version 330 core
out vec4 FragColor;
in vec2 TexCoords;
uniform sampler2D gPosition;
uniform sampler2D gNormal;
uniform sampler2D gAlbedoSpec;
struct Light {
vec3 Position;
vec3 Color;
};
const int NR_LIGHTS = 32;
uniform Light lights[NR_LIGHTS];
uniform vec3 viewPos;
void main()
{
// retrieve data from the G-buffer
vec3 FragPos = texture(gPosition, TexCoords).rgb;
vec3 Normal = texture(gNormal, TexCoords).rgb;
vec3 Albedo = texture(gAlbedoSpec, TexCoords).rgb;
float Specular = texture(gAlbedoSpec, TexCoords).a;
// then calculate lighting as usual
vec3 lighting = Albedo * 0.1; // hard-coded ambient component
vec3 viewDir = normalize(viewPos - FragPos);
for(int i = 0; i < NR_LIGHTS; ++i)
{
// diffuse
vec3 lightDir = normalize(lights[i].Position - FragPos);
vec3 diffuse = max(dot(Normal, lightDir), 0.0) * Albedo * lights[i].Color;
lighting += diffuse;
}
FragColor = vec4(lighting, 1.0);
}
The lighting pass shader accepts three uniform textures that represent the G-buffer and hold all the data we stored in the geometry pass. If we were to sample these with the current fragment's texture coordinates, we would get exactly the same fragment values as if we were rendering the geometry directly. At the start of the fragment shader we retrieve the lighting-relevant variables from the G-buffer textures with simple texture lookups. Note that we retrieve both the Albedo color and the Specular intensity from the single gAlbedoSpec texture.
As we now have the per-fragment variables (and the relevant uniform variables) necessary to calculate Blinn-Phong lighting, we do not have to make any changes to the lighting code. The only thing we change in deferred shading is the method of obtaining the lighting's input variables.
Running a simple demo that contains a total of 32 small lights looks like this:
You can find the complete source code of the demo below, together with the vertex and fragment shaders of the geometry pass and the vertex and fragment shaders of the lighting pass.
// GLEW
#define GLEW_STATIC
#include <GL/glew.h>
// GLFW
#include <GLFW/glfw3.h>
// GL includes
#include <learnopengl/shader.h>
#include <learnopengl/camera.h>
#include <learnopengl/model.h>
// GLM Mathematics
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
// Other Libs
#include <SOIL.h>
// Properties
const GLuint SCR_WIDTH = 800, SCR_HEIGHT = 600;
// Function prototypes
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode);
void scroll_callback(GLFWwindow* window, double xoffset, double yoffset);
void mouse_callback(GLFWwindow* window, double xpos, double ypos);
void Do_Movement();
GLuint loadTexture(GLchar* path);
void RenderCube();
void RenderQuad();
// Camera
Camera camera(glm::vec3(0.0f, 0.0f, 5.0f));
// Delta
GLfloat deltaTime = 0.0f;
GLfloat lastFrame = 0.0f;
// The MAIN function, from here we start our application and run our Game loop
int main()
{
// Init GLFW
glfwInit();
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
GLFWwindow* window = glfwCreateWindow(SCR_WIDTH, SCR_HEIGHT, "LearnOpenGL", nullptr, nullptr); // Windowed
glfwMakeContextCurrent(window);
// Set the required callback functions
glfwSetKeyCallback(window, key_callback);
glfwSetCursorPosCallback(window, mouse_callback);
glfwSetScrollCallback(window, scroll_callback);
// Options
glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);
// Initialize GLEW to setup the OpenGL Function pointers
glewExperimental = GL_TRUE;
glewInit();
// Define the viewport dimensions
glViewport(0, 0, SCR_WIDTH, SCR_HEIGHT);
// Setup some OpenGL options
glEnable(GL_DEPTH_TEST);
// Setup and compile our shaders
Shader shaderGeometryPass("g_buffer.vs", "g_buffer.frag");
Shader shaderLightingPass("deferred_shading.vs", "deferred_shading.frag");
// Set samplers
shaderLightingPass.Use();
glUniform1i(glGetUniformLocation(shaderLightingPass.Program, "gPosition"), 0);
glUniform1i(glGetUniformLocation(shaderLightingPass.Program, "gNormal"), 1);
glUniform1i(glGetUniformLocation(shaderLightingPass.Program, "gAlbedoSpec"), 2);
// Models
Model cyborg("../../../resources/objects/nanosuit/nanosuit.obj");
std::vector<glm::vec3> objectPositions;
objectPositions.push_back(glm::vec3(-3.0, -3.0, -3.0));
objectPositions.push_back(glm::vec3(0.0, -3.0, -3.0));
objectPositions.push_back(glm::vec3(3.0, -3.0, -3.0));
objectPositions.push_back(glm::vec3(-3.0, -3.0, 0.0));
objectPositions.push_back(glm::vec3(0.0, -3.0, 0.0));
objectPositions.push_back(glm::vec3(3.0, -3.0, 0.0));
objectPositions.push_back(glm::vec3(-3.0, -3.0, 3.0));
objectPositions.push_back(glm::vec3(0.0, -3.0, 3.0));
objectPositions.push_back(glm::vec3(3.0, -3.0, 3.0));
// - Colors
const GLuint NR_LIGHTS = 32;
std::vector<glm::vec3> lightPositions;
std::vector<glm::vec3> lightColors;
srand(13);
for (GLuint i = 0; i < NR_LIGHTS; i++)
{
// Calculate slightly random offsets
GLfloat xPos = ((rand() % 100) / 100.0) * 6.0 - 3.0;
GLfloat yPos = ((rand() % 100) / 100.0) * 6.0 - 4.0;
GLfloat zPos = ((rand() % 100) / 100.0) * 6.0 - 3.0;
lightPositions.push_back(glm::vec3(xPos, yPos, zPos));
// Also calculate random color
GLfloat rColor = ((rand() % 100) / 200.0f) + 0.5; // Between 0.5 and 1.0
GLfloat gColor = ((rand() % 100) / 200.0f) + 0.5; // Between 0.5 and 1.0
GLfloat bColor = ((rand() % 100) / 200.0f) + 0.5; // Between 0.5 and 1.0
lightColors.push_back(glm::vec3(rColor, gColor, bColor));
}
// Set up G-Buffer
// 3 textures:
// 1. Positions (RGB)
// 2. Color (RGB) + Specular (A)
// 3. Normals (RGB)
GLuint gBuffer;
glGenFramebuffers(1, &gBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, gBuffer);
GLuint gPosition, gNormal, gAlbedoSpec;
// - Position color buffer
glGenTextures(1, &gPosition);
glBindTexture(GL_TEXTURE_2D, gPosition);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gPosition, 0);
// - Normal color buffer
glGenTextures(1, &gNormal);
glBindTexture(GL_TEXTURE_2D, gNormal);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, gNormal, 0);
// - Color + Specular color buffer
glGenTextures(1, &gAlbedoSpec);
glBindTexture(GL_TEXTURE_2D, gAlbedoSpec);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, gAlbedoSpec, 0);
// - Tell OpenGL which color attachments we'll use (of this framebuffer) for rendering
GLuint attachments[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers(3, attachments);
// - Create and attach depth buffer (renderbuffer)
GLuint rboDepth;
glGenRenderbuffers(1, &rboDepth);
glBindRenderbuffer(GL_RENDERBUFFER, rboDepth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, SCR_WIDTH, SCR_HEIGHT);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboDepth);
// - Finally check if framebuffer is complete
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
std::cout << "Framebuffer not complete!" << std::endl;
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
// Game loop
while (!glfwWindowShouldClose(window))
{
// Set frame time
GLfloat currentFrame = glfwGetTime();
deltaTime = currentFrame - lastFrame;
lastFrame = currentFrame;
// Check and call events
glfwPollEvents();
Do_Movement();
// 1. Geometry Pass: render scene's geometry/color data into gbuffer
glBindFramebuffer(GL_FRAMEBUFFER, gBuffer);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glm::mat4 projection = glm::perspective(camera.Zoom, (GLfloat)SCR_WIDTH / (GLfloat)SCR_HEIGHT, 0.1f, 100.0f);
glm::mat4 view = camera.GetViewMatrix();
glm::mat4 model;
shaderGeometryPass.Use();
glUniformMatrix4fv(glGetUniformLocation(shaderGeometryPass.Program, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
glUniformMatrix4fv(glGetUniformLocation(shaderGeometryPass.Program, "view"), 1, GL_FALSE, glm::value_ptr(view));
for (GLuint i = 0; i < objectPositions.size(); i++)
{
model = glm::mat4();
model = glm::translate(model, objectPositions[i]);
model = glm::scale(model, glm::vec3(0.25f));
glUniformMatrix4fv(glGetUniformLocation(shaderGeometryPass.Program, "model"), 1, GL_FALSE, glm::value_ptr(model));
cyborg.Draw(shaderGeometryPass);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// 2. Lighting Pass: calculate lighting by iterating over a screen filled quad pixel-by-pixel using the gbuffer's content.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
shaderLightingPass.Use();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, gPosition);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, gNormal);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, gAlbedoSpec);
// Also send light relevant uniforms
for (GLuint i = 0; i < lightPositions.size(); i++)
{
glUniform3fv(glGetUniformLocation(shaderLightingPass.Program, ("lights[" + std::to_string(i) + "].Position").c_str()), 1, &lightPositions[i][0]);
glUniform3fv(glGetUniformLocation(shaderLightingPass.Program, ("lights[" + std::to_string(i) + "].Color").c_str()), 1, &lightColors[i][0]);
// Update attenuation parameters and calculate radius
const GLfloat constant = 1.0; // Note that we don't send this to the shader, we assume it is always 1.0 (in our case)
const GLfloat linear = 0.7;
const GLfloat quadratic = 1.8;
glUniform1f(glGetUniformLocation(shaderLightingPass.Program, ("lights[" + std::to_string(i) + "].Linear").c_str()), linear);
glUniform1f(glGetUniformLocation(shaderLightingPass.Program, ("lights[" + std::to_string(i) + "].Quadratic").c_str()), quadratic);
}
glUniform3fv(glGetUniformLocation(shaderLightingPass.Program, "viewPos"), 1, &camera.Position[0]);
// Finally render quad
RenderQuad();
// Swap the buffers
glfwSwapBuffers(window);
}
glfwTerminate();
return 0;
}
// RenderQuad() Renders a 1x1 quad in NDC, best used for framebuffer color targets
// and post-processing effects.
GLuint quadVAO = 0;
GLuint quadVBO;
void RenderQuad()
{
if (quadVAO == 0)
{
GLfloat quadVertices[] = {
// Positions // Texture Coords
-1.0f, 1.0f, 0.0f, 0.0f, 1.0f,
-1.0f, -1.0f, 0.0f, 0.0f, 0.0f,
1.0f, 1.0f, 0.0f, 1.0f, 1.0f,
1.0f, -1.0f, 0.0f, 1.0f, 0.0f,
};
// Setup plane VAO
glGenVertexArrays(1, &quadVAO);
glGenBuffers(1, &quadVBO);
glBindVertexArray(quadVAO);
glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadVertices), &quadVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
}
glBindVertexArray(quadVAO);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindVertexArray(0);
}
// RenderCube() Renders a 1x1 3D cube in NDC.
GLuint cubeVAO = 0;
GLuint cubeVBO = 0;
void RenderCube()
{
// Initialize (if necessary)
if (cubeVAO == 0)
{
GLfloat vertices[] = {
// Back face
-0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f, // Bottom-left
0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f, // top-right
0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f, // bottom-right
0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f, // top-right
-0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f, // bottom-left
-0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f,// top-left
// Front face
-0.5f, -0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, // bottom-left
0.5f, -0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, // bottom-right
0.5f, 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, // top-right
0.5f, 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, // top-right
-0.5f, 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, // top-left
-0.5f, -0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, // bottom-left
// Left face
-0.5f, 0.5f, 0.5f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top-right
-0.5f, 0.5f, -0.5f, -1.0f, 0.0f, 0.0f, 1.0f, 1.0f, // top-left
-0.5f, -0.5f, -0.5f, -1.0f, 0.0f, 0.0f, 0.0f, 1.0f, // bottom-left
-0.5f, -0.5f, -0.5f, -1.0f, 0.0f, 0.0f, 0.0f, 1.0f, // bottom-left
-0.5f, -0.5f, 0.5f, -1.0f, 0.0f, 0.0f, 0.0f, 0.0f, // bottom-right
-0.5f, 0.5f, 0.5f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top-right
// Right face
0.5f, 0.5f, 0.5f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top-left
0.5f, -0.5f, -0.5f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, // bottom-right
0.5f, 0.5f, -0.5f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, // top-right
0.5f, -0.5f, -0.5f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, // bottom-right
0.5f, 0.5f, 0.5f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top-left
0.5f, -0.5f, 0.5f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, // bottom-left
// Bottom face
-0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 1.0f, // top-right
0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 1.0f, // top-left
0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f,// bottom-left
0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f, // bottom-left
-0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 0.0f, // bottom-right
-0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 1.0f, // top-right
// Top face
-0.5f, 0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f,// top-left
0.5f, 0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, // bottom-right
0.5f, 0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, // top-right
0.5f, 0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, // bottom-right
-0.5f, 0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f,// top-left
-0.5f, 0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f // bottom-left
};
glGenVertexArrays(1, &cubeVAO);
glGenBuffers(1, &cubeVBO);
// Fill buffer
glBindBuffer(GL_ARRAY_BUFFER, cubeVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// Link vertex attributes
glBindVertexArray(cubeVAO);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)(6 * sizeof(GLfloat)));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
// Render Cube
glBindVertexArray(cubeVAO);
glDrawArrays(GL_TRIANGLES, 0, 36);
glBindVertexArray(0);
}
bool keys[1024];
bool keysPressed[1024];
// Moves/alters the camera positions based on user input
void Do_Movement()
{
// Camera controls
if (keys[GLFW_KEY_W])
camera.ProcessKeyboard(FORWARD, deltaTime);
if (keys[GLFW_KEY_S])
camera.ProcessKeyboard(BACKWARD, deltaTime);
if (keys[GLFW_KEY_A])
camera.ProcessKeyboard(LEFT, deltaTime);
if (keys[GLFW_KEY_D])
camera.ProcessKeyboard(RIGHT, deltaTime);
}
GLfloat lastX = 400, lastY = 300;
bool firstMouse = true;
// Is called whenever a key is pressed/released via GLFW
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode)
{
if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
glfwSetWindowShouldClose(window, GL_TRUE);
if (key >= 0 && key < 1024)
{
if (action == GLFW_PRESS)
keys[key] = true;
else if (action == GLFW_RELEASE)
{
keys[key] = false;
keysPressed[key] = false;
}
}
}
void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
if (firstMouse)
{
lastX = xpos;
lastY = ypos;
firstMouse = false;
}
GLfloat xoffset = xpos - lastX;
GLfloat yoffset = lastY - ypos;
lastX = xpos;
lastY = ypos;
camera.ProcessMouseMovement(xoffset, yoffset);
}
void scroll_callback(GLFWwindow* window, double xoffset, double yoffset)
{
camera.ProcessMouseScroll(yoffset);
}
One of the disadvantages of deferred shading is that it is not possible to do blending, as all the data in the G-buffer comes from a single fragment, while blending operates on combinations of multiple fragments. Another disadvantage is that deferred shading forces you to use the same lighting algorithm for most of the scene's lighting; you can somewhat alleviate this by storing more material-specific data in the G-buffer.
To overcome these disadvantages (especially blending), we often split the renderer into two parts: a deferred part and a forward part specifically meant for blending or for special shader effects not suited for the deferred pipeline. To illustrate how this works, we will render the light sources as small cubes using a forward renderer, as the light cubes require a special shader (one that simply outputs a single light color).
Combining deferred rendering with forward rendering
Say we want to render each of the light sources as a 3D cube positioned at the light source's position, emitting the color of the light, alongside the deferred renderer. The first idea that comes to mind is to simply forward-render all the light sources on top of the deferred quad at the end of the deferred rendering pipeline. So basically we render the cubes as we would normally, but only after we have finished the deferred rendering operations. The code would look a bit like this:
// deferred lighting pass
[...]
RenderQuad();
// now forward-render all the light cubes as usual
shaderLightBox.Use();
glUniformMatrix4fv(locProjection, 1, GL_FALSE, glm::value_ptr(projection));
glUniformMatrix4fv(locView, 1, GL_FALSE, glm::value_ptr(view));
for (GLuint i = 0; i < lightPositions.size(); i++)
{
model = glm::mat4();
model = glm::translate(model, lightPositions[i]);
model = glm::scale(model, glm::vec3(0.25f));
glUniformMatrix4fv(locModel, 1, GL_FALSE, glm::value_ptr(model));
glUniform3fv(locLightcolor, 1, &lightColors[i][0]);
RenderCube();
}
However, these rendered cubes do not take any of the stored geometry depth of the deferred renderer into account and, as a result, they are always rendered on top of the previously rendered objects; this is not the result we are looking for.
What we need to do is first copy the depth information stored in the geometry pass into the default framebuffer's depth buffer and only then render the light cubes. This way the light cubes' fragments are only rendered when they are on top of the previously rendered geometry. We can copy the content of one framebuffer to another with the help of glBlitFramebuffer, a function we also used in the anti-aliasing tutorial to resolve multisampled framebuffers. glBlitFramebuffer allows us to copy a user-defined region of one framebuffer to a user-defined region of another framebuffer.
We stored the depth of all the objects rendered in the deferred geometry pass in the gBuffer FBO. If we simply copy the content of its depth buffer to the depth buffer of the default framebuffer, the light cubes will then render as if all of the scene's geometry was rendered with forward rendering. As briefly explained in the anti-aliasing tutorial, we have to specify a framebuffer as the read framebuffer and similarly specify a framebuffer as the write framebuffer:
glBindFramebuffer(GL_READ_FRAMEBUFFER, gBuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); // write to the default framebuffer
glBlitFramebuffer(
0, 0, SCR_WIDTH, SCR_HEIGHT, 0, 0, SCR_WIDTH, SCR_HEIGHT, GL_DEPTH_BUFFER_BIT, GL_NEAREST
);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// now render the light cubes as before
[...]
Here we copy the entire read framebuffer's depth buffer content to the default framebuffer's depth buffer; this can be done in a similar fashion for color buffers and stencil buffers. If we then render the light cubes, they do indeed render correctly over the scene's geometry instead of simply being pasted on top of the 2D quad:
You can find the full source code of the demo below, together with the vertex and fragment shaders of the light cubes.
// GLEW
#define GLEW_STATIC
#include <GL/glew.h>
// GLFW
#include <GLFW/glfw3.h>
// GL includes
#include <learnopengl/shader.h>
#include <learnopengl/camera.h>
#include <learnopengl/model.h>
// GLM Mathematics
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
// Other Libs
#include <SOIL.h>
// Properties
const GLuint SCR_WIDTH = 800, SCR_HEIGHT = 600;
// Function prototypes
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode);
void scroll_callback(GLFWwindow* window, double xoffset, double yoffset);
void mouse_callback(GLFWwindow* window, double xpos, double ypos);
void Do_Movement();
GLuint loadTexture(GLchar* path);
void RenderCube();
void RenderQuad();
// Camera
Camera camera(glm::vec3(0.0f, 0.0f, 5.0f));
// Delta
GLfloat deltaTime = 0.0f;
GLfloat lastFrame = 0.0f;
// The MAIN function, from here we start our application and run our Game loop
int main()
{
// Init GLFW
glfwInit();
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
GLFWwindow* window = glfwCreateWindow(SCR_WIDTH, SCR_HEIGHT, "LearnOpenGL", nullptr, nullptr); // Windowed
glfwMakeContextCurrent(window);
// Set the required callback functions
glfwSetKeyCallback(window, key_callback);
glfwSetCursorPosCallback(window, mouse_callback);
glfwSetScrollCallback(window, scroll_callback);
// Options
glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);
// Initialize GLEW to setup the OpenGL Function pointers
glewExperimental = GL_TRUE;
glewInit();
// Define the viewport dimensions
glViewport(0, 0, SCR_WIDTH, SCR_HEIGHT);
// Setup some OpenGL options
glEnable(GL_DEPTH_TEST);
// Setup and compile our shaders
Shader shaderGeometryPass("g_buffer.vs", "g_buffer.frag");
Shader shaderLightingPass("deferred_shading.vs", "deferred_shading.frag");
Shader shaderLightBox("deferred_light_box.vs", "deferred_light_box.frag");
// Set samplers
shaderLightingPass.Use();
glUniform1i(glGetUniformLocation(shaderLightingPass.Program, "gPosition"), 0);
glUniform1i(glGetUniformLocation(shaderLightingPass.Program, "gNormal"), 1);
glUniform1i(glGetUniformLocation(shaderLightingPass.Program, "gAlbedoSpec"), 2);
// Models
Model cyborg("../../../resources/objects/nanosuit/nanosuit.obj");
std::vector<glm::vec3> objectPositions;
objectPositions.push_back(glm::vec3(-3.0, -3.0, -3.0));
objectPositions.push_back(glm::vec3(0.0, -3.0, -3.0));
objectPositions.push_back(glm::vec3(3.0, -3.0, -3.0));
objectPositions.push_back(glm::vec3(-3.0, -3.0, 0.0));
objectPositions.push_back(glm::vec3(0.0, -3.0, 0.0));
objectPositions.push_back(glm::vec3(3.0, -3.0, 0.0));
objectPositions.push_back(glm::vec3(-3.0, -3.0, 3.0));
objectPositions.push_back(glm::vec3(0.0, -3.0, 3.0));
objectPositions.push_back(glm::vec3(3.0, -3.0, 3.0));
// - Colors
const GLuint NR_LIGHTS = 32;
std::vector<glm::vec3> lightPositions;
std::vector<glm::vec3> lightColors;
srand(13);
for (GLuint i = 0; i < NR_LIGHTS; i++)
{
// Calculate slightly random offsets
GLfloat xPos = ((rand() % 100) / 100.0) * 6.0 - 3.0;
GLfloat yPos = ((rand() % 100) / 100.0) * 6.0 - 4.0;
GLfloat zPos = ((rand() % 100) / 100.0) * 6.0 - 3.0;
lightPositions.push_back(glm::vec3(xPos, yPos, zPos));
// Also calculate random color
GLfloat rColor = ((rand() % 100) / 200.0f) + 0.5; // Between 0.5 and 1.0
GLfloat gColor = ((rand() % 100) / 200.0f) + 0.5; // Between 0.5 and 1.0
GLfloat bColor = ((rand() % 100) / 200.0f) + 0.5; // Between 0.5 and 1.0
lightColors.push_back(glm::vec3(rColor, gColor, bColor));
}
// Set up G-Buffer
// 3 textures:
// 1. Positions (RGB)
// 2. Color (RGB) + Specular (A)
// 3. Normals (RGB)
GLuint gBuffer;
glGenFramebuffers(1, &gBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, gBuffer);
GLuint gPosition, gNormal, gAlbedoSpec;
// - Position color buffer
glGenTextures(1, &gPosition);
glBindTexture(GL_TEXTURE_2D, gPosition);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gPosition, 0);
// - Normal color buffer
glGenTextures(1, &gNormal);
glBindTexture(GL_TEXTURE_2D, gNormal);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, gNormal, 0);
// - Color + Specular color buffer
glGenTextures(1, &gAlbedoSpec);
glBindTexture(GL_TEXTURE_2D, gAlbedoSpec);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, gAlbedoSpec, 0);
// - Tell OpenGL which color attachments we'll use (of this framebuffer) for rendering
GLuint attachments[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers(3, attachments);
// - Create and attach depth buffer (renderbuffer)
GLuint rboDepth;
glGenRenderbuffers(1, &rboDepth);
glBindRenderbuffer(GL_RENDERBUFFER, rboDepth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, SCR_WIDTH, SCR_HEIGHT);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboDepth);
// - Finally check if framebuffer is complete
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
std::cout << "Framebuffer not complete!" << std::endl;
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
// Game loop
while (!glfwWindowShouldClose(window))
{
// Set frame time
GLfloat currentFrame = glfwGetTime();
deltaTime = currentFrame - lastFrame;
lastFrame = currentFrame;
// Check and call events
glfwPollEvents();
Do_Movement();
// 1. Geometry Pass: render scene's geometry/color data into gbuffer
glBindFramebuffer(GL_FRAMEBUFFER, gBuffer);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glm::mat4 projection = glm::perspective(camera.Zoom, (GLfloat)SCR_WIDTH / (GLfloat)SCR_HEIGHT, 0.1f, 100.0f);
glm::mat4 view = camera.GetViewMatrix();
glm::mat4 model;
shaderGeometryPass.Use();
glUniformMatrix4fv(glGetUniformLocation(shaderGeometryPass.Program, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
glUniformMatrix4fv(glGetUniformLocation(shaderGeometryPass.Program, "view"), 1, GL_FALSE, glm::value_ptr(view));
for (GLuint i = 0; i < objectPositions.size(); i++)
{
model = glm::mat4();
model = glm::translate(model, objectPositions[i]);
model = glm::scale(model, glm::vec3(0.25f));
glUniformMatrix4fv(glGetUniformLocation(shaderGeometryPass.Program, "model"), 1, GL_FALSE, glm::value_ptr(model));
cyborg.Draw(shaderGeometryPass);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// 2. Lighting Pass: calculate lighting by iterating over a screen filled quad pixel-by-pixel using the gbuffer's content.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
shaderLightingPass.Use();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, gPosition);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, gNormal);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, gAlbedoSpec);
// Also send light relevant uniforms
for (GLuint i = 0; i < lightPositions.size(); i++)
{
glUniform3fv(glGetUniformLocation(shaderLightingPass.Program, ("lights[" + std::to_string(i) + "].Position").c_str()), 1, &lightPositions[i][0]);
glUniform3fv(glGetUniformLocation(shaderLightingPass.Program, ("lights[" + std::to_string(i) + "].Color").c_str()), 1, &lightColors[i][0]);
// Update attenuation parameters and calculate radius
const GLfloat constant = 1.0; // Note that we don't send this to the shader, we assume it is always 1.0 (in our case)
const GLfloat linear = 0.7;
const GLfloat quadratic = 1.8;
glUniform1f(glGetUniformLocation(shaderLightingPass.Program, ("lights[" + std::to_string(i) + "].Linear").c_str()), linear);
glUniform1f(glGetUniformLocation(shaderLightingPass.Program, ("lights[" + std::to_string(i) + "].Quadratic").c_str()), quadratic);
}
glUniform3fv(glGetUniformLocation(shaderLightingPass.Program, "viewPos"), 1, &camera.Position[0]);
// Finally render quad
RenderQuad();
// 2.5. Copy content of geometry's depth buffer to default framebuffer's depth buffer
glBindFramebuffer(GL_READ_FRAMEBUFFER, gBuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); // Write to default framebuffer
// blit to default framebuffer. Note that this may or may not work as the internal formats of both the FBO and default framebuffer have to match.
// the internal formats are implementation defined. This works on all of my systems, but if it doesn't on yours you'll likely have to write to the
// depth buffer in another shader stage (or somehow see to match the default framebuffer's internal format with the FBO's internal format).
glBlitFramebuffer(0, 0, SCR_WIDTH, SCR_HEIGHT, 0, 0, SCR_WIDTH, SCR_HEIGHT, GL_DEPTH_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// 3. Render lights on top of scene
shaderLightBox.Use();
glUniformMatrix4fv(glGetUniformLocation(shaderLightBox.Program, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
glUniformMatrix4fv(glGetUniformLocation(shaderLightBox.Program, "view"), 1, GL_FALSE, glm::value_ptr(view));
for (GLuint i = 0; i < lightPositions.size(); i++)
{
model = glm::mat4();
model = glm::translate(model, lightPositions[i]);
model = glm::scale(model, glm::vec3(0.25f));
glUniformMatrix4fv(glGetUniformLocation(shaderLightBox.Program, "model"), 1, GL_FALSE, glm::value_ptr(model));
glUniform3fv(glGetUniformLocation(shaderLightBox.Program, "lightColor"), 1, &lightColors[i][0]);
RenderCube();
}
// Swap the buffers
glfwSwapBuffers(window);
}
glfwTerminate();
return 0;
}
// RenderQuad() Renders a 1x1 quad in NDC, best used for framebuffer color targets
// and post-processing effects.
GLuint quadVAO = 0;
GLuint quadVBO;
void RenderQuad()
{
if (quadVAO == 0)
{
GLfloat quadVertices[] = {
// Positions // Texture Coords
-1.0f, 1.0f, 0.0f, 0.0f, 1.0f,
-1.0f, -1.0f, 0.0f, 0.0f, 0.0f,
1.0f, 1.0f, 0.0f, 1.0f, 1.0f,
1.0f, -1.0f, 0.0f, 1.0f, 0.0f,
};
// Setup plane VAO
glGenVertexArrays(1, &quadVAO);
glGenBuffers(1, &quadVBO);
glBindVertexArray(quadVAO);
glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadVertices), &quadVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
}
glBindVertexArray(quadVAO);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindVertexArray(0);
}
// RenderCube() Renders a 1x1 3D cube in NDC.
GLuint cubeVAO = 0;
GLuint cubeVBO = 0;
void RenderCube()
{
// Initialize (if necessary)
if (cubeVAO == 0)
{
GLfloat vertices[] = {
// Back face
-0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f, // Bottom-left
0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f, // top-right
0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f, // bottom-right
0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f, // top-right
-0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f, // bottom-left
-0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f,// top-left
// Front face
-0.5f, -0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, // bottom-left
0.5f, -0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, // bottom-right
0.5f, 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, // top-right
0.5f, 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, // top-right
-0.5f, 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, // top-left
-0.5f, -0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, // bottom-left
// Left face
-0.5f, 0.5f, 0.5f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top-right
-0.5f, 0.5f, -0.5f, -1.0f, 0.0f, 0.0f, 1.0f, 1.0f, // top-left
-0.5f, -0.5f, -0.5f, -1.0f, 0.0f, 0.0f, 0.0f, 1.0f, // bottom-left
-0.5f, -0.5f, -0.5f, -1.0f, 0.0f, 0.0f, 0.0f, 1.0f, // bottom-left
-0.5f, -0.5f, 0.5f, -1.0f, 0.0f, 0.0f, 0.0f, 0.0f, // bottom-right
-0.5f, 0.5f, 0.5f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top-right
// Right face
0.5f, 0.5f, 0.5f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top-left
0.5f, -0.5f, -0.5f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, // bottom-right
0.5f, 0.5f, -0.5f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, // top-right
0.5f, -0.5f, -0.5f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, // bottom-right
0.5f, 0.5f, 0.5f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top-left
0.5f, -0.5f, 0.5f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, // bottom-left
// Bottom face
-0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 1.0f, // top-right
0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 1.0f, // top-left
0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f,// bottom-left
0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f, // bottom-left
-0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 0.0f, // bottom-right
-0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 1.0f, // top-right
// Top face
-0.5f, 0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f,// top-left
0.5f, 0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, // bottom-right
0.5f, 0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, // top-right
0.5f, 0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, // bottom-right
-0.5f, 0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f,// top-left
-0.5f, 0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f // bottom-left
};
glGenVertexArrays(1, &cubeVAO);
glGenBuffers(1, &cubeVBO);
// Fill buffer
glBindBuffer(GL_ARRAY_BUFFER, cubeVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// Link vertex attributes
glBindVertexArray(cubeVAO);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)(6 * sizeof(GLfloat)));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
// Render Cube
glBindVertexArray(cubeVAO);
glDrawArrays(GL_TRIANGLES, 0, 36);
glBindVertexArray(0);
}
bool keys[1024];
bool keysPressed[1024];
// Moves/alters the camera positions based on user input
void Do_Movement()
{
// Camera controls
if (keys[GLFW_KEY_W])
camera.ProcessKeyboard(FORWARD, deltaTime);
if (keys[GLFW_KEY_S])
camera.ProcessKeyboard(BACKWARD, deltaTime);
if (keys[GLFW_KEY_A])
camera.ProcessKeyboard(LEFT, deltaTime);
if (keys[GLFW_KEY_D])
camera.ProcessKeyboard(RIGHT, deltaTime);
}
GLfloat lastX = 400, lastY = 300;
bool firstMouse = true;
// Is called whenever a key is pressed/released via GLFW
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode)
{
if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
glfwSetWindowShouldClose(window, GL_TRUE);
if (key >= 0 && key < 1024)
{
if (action == GLFW_PRESS)
keys[key] = true;
else if (action == GLFW_RELEASE)
{
keys[key] = false;
keysPressed[key] = false;
}
}
}
void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
if (firstMouse)
{
lastX = xpos;
lastY = ypos;
firstMouse = false;
}
GLfloat xoffset = xpos - lastX;
GLfloat yoffset = lastY - ypos;
lastX = xpos;
lastY = ypos;
camera.ProcessMouseMovement(xoffset, yoffset);
}
void scroll_callback(GLFWwindow* window, double xoffset, double yoffset)
{
camera.ProcessMouseScroll(yoffset);
}
With this approach we can easily combine deferred shading with forward shading. This is great, as we can now still apply blending and render objects that require special shader effects, something that is not possible in a pure deferred rendering context.
A larger number of lights
What deferred rendering is often praised for is its ability to render an enormous amount of light sources without a heavy cost on performance. Deferred rendering by itself, however, does not allow for a very large amount of light sources, as we would still have to calculate each fragment's lighting component for every light source in the scene. What makes a large amount of light sources possible is a very neat optimization we can apply to the deferred rendering pipeline: light volumes.
Normally, when we render a fragment in a large, lit scene, we calculate the contribution of each light source in the scene, regardless of its distance to the fragment. A large portion of these light sources will never even reach the fragment, so why waste all those lighting computations?
The idea behind light volumes is to calculate the radius, or volume, of a light source: the area over which its light is able to reach fragments. As most light sources use some form of attenuation, we can use that to calculate the maximum distance, or radius, their light is able to reach. We then only do the expensive lighting calculations if a fragment is inside one or more of these light volumes. This can save us a considerable amount of computation, as we now only calculate lighting where it is necessary.
The trick to this approach is mostly figuring out the size, or radius, of the light volume of a light source.
1. Calculating a light's volume or radius
To obtain a light's volume radius we have to solve the attenuation equation for a brightness value we deem to be dark; this can be 0.0, or something slightly brighter that is still perceived as dark, like 0.03. To demonstrate how we can calculate a light's volume radius, we will use one of the more complicated, but very flexible, attenuation functions introduced in the light casters tutorial:
\[ F_{light} = \frac{I}{K_c + K_l \cdot d + K_q \cdot d^2} \]
What we want to do is solve this equation for the case where F_light is 0.0, that is, where the light is completely dark at that distance. However, this equation never exactly reaches the value 0.0, so there is no solution. What we can do instead is not solve the equation for 0.0, but solve it for a brightness value that is close to 0.0 and still perceived as dark. The brightness value we choose as acceptable for this tutorial's demo scene is 5/256; divided by 256 because the default 8-bit framebuffer can only display that many intensities per component.
The attenuation function we use is mostly dark in its visible range, so if we were to limit it to an even darker brightness than 5/256, the light volume would become too large and thus less effective. As long as a user cannot see a sudden cut-off of a light source at its volume's borders, we are fine. Of course this always depends on the type of scene: a higher brightness threshold results in smaller light volumes and thus better efficiency, but it can produce noticeable artifacts where the lighting seems to break off at a volume's borders.
The attenuation equation we have to solve then becomes:
\[ \frac{5}{256} = \frac{I_{max}}{K_c + K_l \cdot d + K_q \cdot d^2} \]
Here, I_max is the light source's brightest color component. We use the brightest color component, as solving the equation for the brightest intensity value best reflects the ideal light volume radius.
From here on we continue solving the equation:
\[ K_q \cdot d^2 + K_l \cdot d + K_c - \frac{256}{5} I_{max} = 0 \]
This is a quadratic equation in d that we can solve with the quadratic formula:
\[ d = \frac{-K_l + \sqrt{K_l^2 - 4 K_q \left(K_c - \frac{256}{5} I_{max}\right)}}{2 K_q} \]
This gives us a general equation that allows us to calculate d, the light's volume radius, given the constant, linear, and quadratic attenuation parameters:
GLfloat constant = 1.0;
GLfloat linear = 0.7;
GLfloat quadratic = 1.8;
GLfloat lightMax = std::fmaxf(std::fmaxf(lightColor.r, lightColor.g), lightColor.b);
GLfloat radius =
(-linear + std::sqrtf(linear * linear - 4 * quadratic * (constant - (256.0 / 5.0) * lightMax)))
/ (2 * quadratic);
This returns a radius roughly between 1.0 and 5.0, depending on the light's maximum intensity.
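As a quick sanity check, plugging the demo's attenuation parameters (constant = 1.0, linear = 0.7, quadratic = 1.8) and the brightest possible color component (I_max = 1.0) into the formula gives:
\[ d = \frac{-0.7 + \sqrt{0.7^2 - 4 \cdot 1.8 \cdot \left(1.0 - \frac{256}{5} \cdot 1.0\right)}}{2 \cdot 1.8} \approx \frac{-0.7 + \sqrt{361.93}}{3.6} \approx 5.1 \]
A dimmer light whose brightest component is 0.5 (the lower bound of the demo's random colors) yields a radius of roughly 3.5 with the same parameters.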
For each light source in the scene we calculate its radius and only calculate that light source's lighting if the fragment is inside the light source's volume. Below is the updated lighting pass fragment shader that takes the calculated light volumes into account. Note that this approach is merely done for teaching purposes and is not viable in a practical setting, as we will discuss shortly:
struct Light {
[...]
float Radius;
};
void main()
{
[...]
for(int i = 0; i < NR_LIGHTS; ++i)
{
// calculate the distance between the light source and the fragment
float distance = length(lights[i].Position - FragPos);
if(distance < lights[i].Radius)
{
// do expensive lighting
[...]
}
}
}
The results are exactly the same as before, but this time each fragment only calculates lighting for the light sources in whose volume it resides.
You can find the final source code of the demo below, together with the updated fragment shader of the lighting pass.
// GLEW
#define GLEW_STATIC
#include <GL/glew.h>
// GLFW
#include <GLFW/glfw3.h>
// GL includes
#include <learnopengl/shader.h>
#include <learnopengl/camera.h>
#include <learnopengl/model.h>
// GLM Mathematics
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
// Other Libs
#include <SOIL.h>
// Properties
const GLuint SCR_WIDTH = 800, SCR_HEIGHT = 600;
// Function prototypes
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode);
void scroll_callback(GLFWwindow* window, double xoffset, double yoffset);
void mouse_callback(GLFWwindow* window, double xpos, double ypos);
void Do_Movement();
GLuint loadTexture(GLchar* path);
void RenderCube();
void RenderQuad();
// Camera
Camera camera(glm::vec3(0.0f, 0.0f, 5.0f));
// Delta
GLfloat deltaTime = 0.0f;
GLfloat lastFrame = 0.0f;
// The MAIN function, from here we start our application and run our Game loop
int main()
{
// Init GLFW
glfwInit();
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
GLFWwindow* window = glfwCreateWindow(SCR_WIDTH, SCR_HEIGHT, "LearnOpenGL", nullptr, nullptr); // Windowed
glfwMakeContextCurrent(window);
// Set the required callback functions
glfwSetKeyCallback(window, key_callback);
glfwSetCursorPosCallback(window, mouse_callback);
glfwSetScrollCallback(window, scroll_callback);
// Options
glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);
// Initialize GLEW to setup the OpenGL Function pointers
glewExperimental = GL_TRUE;
glewInit();
// Define the viewport dimensions
glViewport(0, 0, SCR_WIDTH, SCR_HEIGHT);
// Setup some OpenGL options
glEnable(GL_DEPTH_TEST);
// Setup and compile our shaders
Shader shaderGeometryPass("g_buffer.vs", "g_buffer.frag");
Shader shaderLightingPass("deferred_shading.vs", "deferred_shading.frag");
Shader shaderLightBox("deferred_light_box.vs", "deferred_light_box.frag");
// Set samplers
shaderLightingPass.Use();
glUniform1i(glGetUniformLocation(shaderLightingPass.Program, "gPosition"), 0);
glUniform1i(glGetUniformLocation(shaderLightingPass.Program, "gNormal"), 1);
glUniform1i(glGetUniformLocation(shaderLightingPass.Program, "gAlbedoSpec"), 2);
// Models
Model cyborg("../../../resources/objects/nanosuit/nanosuit.obj");
std::vector<glm::vec3> objectPositions;
objectPositions.push_back(glm::vec3(-3.0, -3.0, -3.0));
objectPositions.push_back(glm::vec3(0.0, -3.0, -3.0));
objectPositions.push_back(glm::vec3(3.0, -3.0, -3.0));
objectPositions.push_back(glm::vec3(-3.0, -3.0, 0.0));
objectPositions.push_back(glm::vec3(0.0, -3.0, 0.0));
objectPositions.push_back(glm::vec3(3.0, -3.0, 0.0));
objectPositions.push_back(glm::vec3(-3.0, -3.0, 3.0));
objectPositions.push_back(glm::vec3(0.0, -3.0, 3.0));
objectPositions.push_back(glm::vec3(3.0, -3.0, 3.0));
// - Colors
const GLuint NR_LIGHTS = 32;
std::vector<glm::vec3> lightPositions;
std::vector<glm::vec3> lightColors;
srand(13);
for (GLuint i = 0; i < NR_LIGHTS; i++)
{
// Calculate slightly random offsets
GLfloat xPos = ((rand() % 100) / 100.0) * 6.0 - 3.0;
GLfloat yPos = ((rand() % 100) / 100.0) * 6.0 - 4.0;
GLfloat zPos = ((rand() % 100) / 100.0) * 6.0 - 3.0;
lightPositions.push_back(glm::vec3(xPos, yPos, zPos));
// Also calculate random color
GLfloat rColor = ((rand() % 100) / 200.0f) + 0.5; // Between 0.5 and 1.0
GLfloat gColor = ((rand() % 100) / 200.0f) + 0.5; // Between 0.5 and 1.0
GLfloat bColor = ((rand() % 100) / 200.0f) + 0.5; // Between 0.5 and 1.0
lightColors.push_back(glm::vec3(rColor, gColor, bColor));
}
// Set up G-Buffer
// 3 textures:
// 1. Positions (RGB)
// 2. Color (RGB) + Specular (A)
// 3. Normals (RGB)
GLuint gBuffer;
glGenFramebuffers(1, &gBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, gBuffer);
GLuint gPosition, gNormal, gAlbedoSpec;
// - Position color buffer
glGenTextures(1, &gPosition);
glBindTexture(GL_TEXTURE_2D, gPosition);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gPosition, 0);
// - Normal color buffer
glGenTextures(1, &gNormal);
glBindTexture(GL_TEXTURE_2D, gNormal);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, gNormal, 0);
// - Color + Specular color buffer
glGenTextures(1, &gAlbedoSpec);
glBindTexture(GL_TEXTURE_2D, gAlbedoSpec);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, gAlbedoSpec, 0);
// - Tell OpenGL which color attachments we'll use (of this framebuffer) for rendering
GLuint attachments[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers(3, attachments);
// - Create and attach depth buffer (renderbuffer)
GLuint rboDepth;
glGenRenderbuffers(1, &rboDepth);
glBindRenderbuffer(GL_RENDERBUFFER, rboDepth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, SCR_WIDTH, SCR_HEIGHT);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboDepth);
// - Finally check if framebuffer is complete
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
std::cout << "Framebuffer not complete!" << std::endl;
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
// Game loop
while (!glfwWindowShouldClose(window))
{
// Set frame time
GLfloat currentFrame = glfwGetTime();
deltaTime = currentFrame - lastFrame;
lastFrame = currentFrame;
// Check and call events
glfwPollEvents();
Do_Movement();
// 1. Geometry Pass: render scene's geometry/color data into gbuffer
glBindFramebuffer(GL_FRAMEBUFFER, gBuffer);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glm::mat4 projection = glm::perspective(camera.Zoom, (GLfloat)SCR_WIDTH / (GLfloat)SCR_HEIGHT, 0.1f, 100.0f);
glm::mat4 view = camera.GetViewMatrix();
glm::mat4 model;
shaderGeometryPass.Use();
glUniformMatrix4fv(glGetUniformLocation(shaderGeometryPass.Program, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
glUniformMatrix4fv(glGetUniformLocation(shaderGeometryPass.Program, "view"), 1, GL_FALSE, glm::value_ptr(view));
for (GLuint i = 0; i < objectPositions.size(); i++)
{
model = glm::mat4();
model = glm::translate(model, objectPositions[i]);
model = glm::scale(model, glm::vec3(0.25f));
glUniformMatrix4fv(glGetUniformLocation(shaderGeometryPass.Program, "model"), 1, GL_FALSE, glm::value_ptr(model));
cyborg.Draw(shaderGeometryPass);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// 2. Lighting Pass: calculate lighting by iterating over a screen filled quad pixel-by-pixel using the gbuffer's content.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
shaderLightingPass.Use();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, gPosition);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, gNormal);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, gAlbedoSpec);
// Also send light relevant uniforms
for (GLuint i = 0; i < lightPositions.size(); i++)
{
glUniform3fv(glGetUniformLocation(shaderLightingPass.Program, ("lights[" + std::to_string(i) + "].Position").c_str()), 1, &lightPositions[i][0]);
glUniform3fv(glGetUniformLocation(shaderLightingPass.Program, ("lights[" + std::to_string(i) + "].Color").c_str()), 1, &lightColors[i][0]);
// Update attenuation parameters and calculate radius
const GLfloat constant = 1.0; // Note that we don't send this to the shader, we assume it is always 1.0 (in our case)
const GLfloat linear = 0.7;
const GLfloat quadratic = 1.8;
glUniform1f(glGetUniformLocation(shaderLightingPass.Program, ("lights[" + std::to_string(i) + "].Linear").c_str()), linear);
glUniform1f(glGetUniformLocation(shaderLightingPass.Program, ("lights[" + std::to_string(i) + "].Quadratic").c_str()), quadratic);
// Then calculate radius of light volume/sphere
const GLfloat maxBrightness = std::fmaxf(std::fmaxf(lightColors[i].r, lightColors[i].g), lightColors[i].b);
GLfloat radius = (-linear + std::sqrtf(linear * linear - 4 * quadratic * (constant - (256.0 / 5.0) * maxBrightness))) / (2 * quadratic);
glUniform1f(glGetUniformLocation(shaderLightingPass.Program, ("lights[" + std::to_string(i) + "].Radius").c_str()), radius);
}
glUniform3fv(glGetUniformLocation(shaderLightingPass.Program, "viewPos"), 1, &camera.Position[0]);
RenderQuad();
// 2.5. Copy content of geometry's depth buffer to default framebuffer's depth buffer
glBindFramebuffer(GL_READ_FRAMEBUFFER, gBuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); // Write to default framebuffer
// blit to default framebuffer. Note that this may or may not work as the internal formats of both the FBO and default framebuffer have to match.
// the internal formats are implementation defined. This works on all of my systems, but if it doesn't on yours you'll likely have to write to the
// depth buffer in another shader stage (or somehow see to match the default framebuffer's internal format with the FBO's internal format).
glBlitFramebuffer(0, 0, SCR_WIDTH, SCR_HEIGHT, 0, 0, SCR_WIDTH, SCR_HEIGHT, GL_DEPTH_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// 3. Render lights on top of scene
shaderLightBox.Use();
glUniformMatrix4fv(glGetUniformLocation(shaderLightBox.Program, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
glUniformMatrix4fv(glGetUniformLocation(shaderLightBox.Program, "view"), 1, GL_FALSE, glm::value_ptr(view));
for (GLuint i = 0; i < lightPositions.size(); i++)
{
model = glm::mat4();
model = glm::translate(model, lightPositions[i]);
model = glm::scale(model, glm::vec3(0.25f));
glUniformMatrix4fv(glGetUniformLocation(shaderLightBox.Program, "model"), 1, GL_FALSE, glm::value_ptr(model));
glUniform3fv(glGetUniformLocation(shaderLightBox.Program, "lightColor"), 1, &lightColors[i][0]);
RenderCube();
}
// Swap the buffers
glfwSwapBuffers(window);
}
glfwTerminate();
return 0;
}
// RenderQuad() Renders a 1x1 quad in NDC, best used for framebuffer color targets
// and post-processing effects.
GLuint quadVAO = 0;
GLuint quadVBO;
void RenderQuad()
{
if (quadVAO == 0)
{
GLfloat quadVertices[] = {
// Positions // Texture Coords
-1.0f, 1.0f, 0.0f, 0.0f, 1.0f,
-1.0f, -1.0f, 0.0f, 0.0f, 0.0f,
1.0f, 1.0f, 0.0f, 1.0f, 1.0f,
1.0f, -1.0f, 0.0f, 1.0f, 0.0f,
};
// Setup plane VAO
glGenVertexArrays(1, &quadVAO);
glGenBuffers(1, &quadVBO);
glBindVertexArray(quadVAO);
glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadVertices), &quadVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
}
glBindVertexArray(quadVAO);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindVertexArray(0);
}
// RenderCube() Renders a 1x1 3D cube in NDC.
GLuint cubeVAO = 0;
GLuint cubeVBO = 0;
void RenderCube()
{
// Initialize (if necessary)
if (cubeVAO == 0)
{
GLfloat vertices[] = {
// Back face
-0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f, // Bottom-left
0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f, // top-right
0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f, // bottom-right
0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f, // top-right
-0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f, // bottom-left
-0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f,// top-left
// Front face
-0.5f, -0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, // bottom-left
0.5f, -0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, // bottom-right
0.5f, 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, // top-right
0.5f, 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, // top-right
-0.5f, 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, // top-left
-0.5f, -0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, // bottom-left
// Left face
-0.5f, 0.5f, 0.5f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top-right
-0.5f, 0.5f, -0.5f, -1.0f, 0.0f, 0.0f, 1.0f, 1.0f, // top-left
-0.5f, -0.5f, -0.5f, -1.0f, 0.0f, 0.0f, 0.0f, 1.0f, // bottom-left
-0.5f, -0.5f, -0.5f, -1.0f, 0.0f, 0.0f, 0.0f, 1.0f, // bottom-left
-0.5f, -0.5f, 0.5f, -1.0f, 0.0f, 0.0f, 0.0f, 0.0f, // bottom-right
-0.5f, 0.5f, 0.5f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top-right
// Right face
0.5f, 0.5f, 0.5f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top-left
0.5f, -0.5f, -0.5f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, // bottom-right
0.5f, 0.5f, -0.5f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, // top-right
0.5f, -0.5f, -0.5f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, // bottom-right
0.5f, 0.5f, 0.5f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top-left
0.5f, -0.5f, 0.5f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, // bottom-left
// Bottom face
-0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 1.0f, // top-right
0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 1.0f, // top-left
0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f,// bottom-left
0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f, // bottom-left
-0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 0.0f, // bottom-right
-0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 1.0f, // top-right
// Top face
-0.5f, 0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f,// top-left
0.5f, 0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, // bottom-right
0.5f, 0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, // top-right
0.5f, 0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, // bottom-right
-0.5f, 0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f,// top-left
-0.5f, 0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f // bottom-left
};
glGenVertexArrays(1, &cubeVAO);
glGenBuffers(1, &cubeVBO);
// Fill buffer
glBindBuffer(GL_ARRAY_BUFFER, cubeVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
// Link vertex attributes
glBindVertexArray(cubeVAO);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)(6 * sizeof(GLfloat)));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
// Render Cube
glBindVertexArray(cubeVAO);
glDrawArrays(GL_TRIANGLES, 0, 36);
glBindVertexArray(0);
}
bool keys[1024];
bool keysPressed[1024];
// Moves/alters the camera positions based on user input
void Do_Movement()
{
// Camera controls
if (keys[GLFW_KEY_W])
camera.ProcessKeyboard(FORWARD, deltaTime);
if (keys[GLFW_KEY_S])
camera.ProcessKeyboard(BACKWARD, deltaTime);
if (keys[GLFW_KEY_A])
camera.ProcessKeyboard(LEFT, deltaTime);
if (keys[GLFW_KEY_D])
camera.ProcessKeyboard(RIGHT, deltaTime);
}
GLfloat lastX = 400, lastY = 300;
bool firstMouse = true;
// Is called whenever a key is pressed/released via GLFW
void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode)
{
if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
glfwSetWindowShouldClose(window, GL_TRUE);
if (key >= 0 && key < 1024)
{
if (action == GLFW_PRESS)
keys[key] = true;
else if (action == GLFW_RELEASE)
{
keys[key] = false;
keysPressed[key] = false;
}
}
}
void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
if (firstMouse)
{
lastX = xpos;
lastY = ypos;
firstMouse = false;
}
GLfloat xoffset = xpos - lastX;
GLfloat yoffset = lastY - ypos;
lastX = xpos;
lastY = ypos;
camera.ProcessMouseMovement(xoffset, yoffset);
}
void scroll_callback(GLFWwindow* window, double xoffset, double yoffset)
{
camera.ProcessMouseScroll(yoffset);
}
2. How we really use light volumes
The fragment shader shown above does not really work in practice; it only illustrates how we could, more or less, use a light's volume to reduce lighting calculations. The reality is that your GPU and GLSL are pretty bad at optimizing loops and branches. The reason for this is that shader execution on the GPU is highly parallel, and most architectures require that, for a large collection of threads, the GPU runs exactly the same shader code to be efficient. This often means that a shader is run executing all branches of an if statement to ensure the shader runs are identical for that group of threads, which makes our previous radius-check optimization completely useless: we would still calculate lighting for all light sources!
The appropriate approach to using light volumes is to render actual spheres, scaled by the light volume's radius. The centers of these spheres are positioned at the light source's position, and since each sphere is scaled by the light volume's radius, it exactly encompasses the light's visible volume. This is where the trick comes in: we use largely the same deferred fragment shader for rendering the sphere. As the rendered sphere produces fragment shader invocations that exactly match the pixels the light source affects, we only render the relevant pixels and skip all other pixels. The image below illustrates this:
This is done for each light source in the scene, and the resulting fragments are additively blended together. The result is exactly the same scene as before, but this time rendering only the relevant fragments per light source. This effectively reduces the computations from nr_objects * nr_lights to nr_objects + nr_lights, which makes rendering scenes with many lights incredibly efficient. This is exactly what makes deferred rendering so suitable for rendering a large number of lights.
There is still an issue with this approach: face culling needs to be enabled (otherwise we would render a light's effect twice), and when it is enabled the user might enter a light source's volume, after which the volume is no longer rendered (due to back-face culling), removing the light source's influence; this can be solved with a stencil buffer trick, as shown in the sketch below.
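A minimal host-side sketch of such a sphere-based lighting pass is shown here. It assumes a hypothetical RenderSphere() helper (analogous to RenderCube()/RenderQuad() above), a lightRadii vector precomputed with the radius formula from earlier, and a shaderLightVolume program that lights a fragment for exactly one light by sampling the G-buffer; none of these, nor the uniform names, are part of the demo code above.
// light-volume lighting pass: one scaled sphere per light, contributions blended additively
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);   // additive blending of each light's contribution
glDepthMask(GL_FALSE);         // don't write depth while accumulating light
glDisable(GL_DEPTH_TEST);      // coverage comes from the sphere's screen footprint; the shader still checks the radius
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);          // draw back faces so the volume still shades when the camera is inside it
shaderLightVolume.Use();
// bind gPosition, gNormal and gAlbedoSpec exactly as in the quad-based lighting pass
glUniformMatrix4fv(glGetUniformLocation(shaderLightVolume.Program, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
glUniformMatrix4fv(glGetUniformLocation(shaderLightVolume.Program, "view"), 1, GL_FALSE, glm::value_ptr(view));
for (GLuint i = 0; i < lightPositions.size(); i++)
{
    glm::mat4 model;
    model = glm::translate(model, lightPositions[i]);
    model = glm::scale(model, glm::vec3(lightRadii[i]));   // scale the unit sphere by the light's volume radius
    glUniformMatrix4fv(glGetUniformLocation(shaderLightVolume.Program, "model"), 1, GL_FALSE, glm::value_ptr(model));
    glUniform3fv(glGetUniformLocation(shaderLightVolume.Program, "light.Position"), 1, &lightPositions[i][0]);
    glUniform3fv(glGetUniformLocation(shaderLightVolume.Program, "light.Color"), 1, &lightColors[i][0]);
    RenderSphere();   // hypothetical helper that draws a unit sphere
}
glCullFace(GL_BACK);
glDepthMask(GL_TRUE);
glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
This sketch simply relies on the sphere's screen coverage plus the radius check in the fragment shader; precisely limiting which fragments a volume lights is what the stencil buffer trick mentioned above is for.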
Rendering light volumes does take a heavy toll on performance, and while it is generally faster than normal deferred shading, it is still not the best optimization. Two other popular (and more efficient) extensions on top of deferred shading exist, called deferred lighting and tile-based deferred shading. These are even more efficient at rendering large amounts of light and also allow for relatively efficient multisample anti-aliasing (MSAA). For the sake of this tutorial's length, however, I will leave these optimizations for a later tutorial.
Deferred rendering vs forward rendering
By itself (without light volumes), deferred shading is already a large optimization, as each pixel only runs a single fragment shader, whereas with forward rendering we would often run the fragment shader multiple times per pixel. Deferred rendering does come with a few disadvantages though: a large memory overhead, no MSAA, and blending still has to be done with forward rendering.
When you have a small scene and not too many lights, deferred rendering is not necessarily faster, and sometimes it is even slower, as the overhead then outweighs the benefits. In more complex scenes, however, deferred rendering quickly becomes a significant optimization, especially with the more advanced optimization extensions.
As a final note I would like to mention that basically all effects that can be accomplished with forward rendering can also be implemented in a deferred rendering context; this often only requires a small translation step. For instance, if we want to use normal mapping in a deferred renderer, we would change the geometry pass shaders to output a world-space normal extracted from the normal map (using a TBN matrix) instead of the surface normal; the lighting calculations in the lighting pass do not need to change at all. And if you want parallax mapping to work, you would first want to displace the texture coordinates in the geometry pass before sampling an object's diffuse, specular, and normal textures. Once you understand the idea behind deferred rendering, it is not difficult to get creative; a sketch of the normal-mapping change follows below.
Additional resources
- Tutorial 35: Deferred Shading - Part 1: a three-part deferred shading tutorial by OGLDev; parts 2 and 3 cover rendering light volumes.
- Deferred Rendering for Current and Future Rendering Pipelines: slides by Andrew Lauritzen discussing high-level tile-based deferred shading and deferred lighting.
Afterword
That is it for this article; the next one will be about SSAO.