Volume Raymarching
The basic concept behind volumetric rendering is to evaluate rays of light as they pass through a volume. This generally means returning an Opacity and a Color for each pixel that intersects the volume. If your volume is an analytical function you can probably calculate the result directly, but if your volume is stored in a texture, you will need to take multiple steps through the volume, looking up the texture at each step. This can be broken down into two parts:
1) Opacity (Light Absorption)
2) Color (Illumination, Scattering)
Opacity Sampling
To generate an opacity for a volume, the density or thickness at each visible point must be known. If the volume is assumed to have a constant density and color, all that is needed is the total length of each ray before it hits an opaque occluder. For simple untextured fog, this is just the Scene Depth, which gets remapped using a standard function: D3DFOG_EXP. This function is defined as:
F = 1 / e^(t * d)
Where t is the distance traveled through some media and d is the density of the media. This is how cheap unlit fog has been calculated in games for quite some time. This comes from the Beer-Lambert law which defines transmittance through a volume of particles as:
Transmittance = e^(-t * d)
These may look similar, because they are exactly the same thing. Note that x^(-y) is the same as 1/(x^y), so the Exponential Fog function is really just an applied version of the Beer-Lambert law. To understand how these functions apply to volumetrics, we can look at an equation from an old paper by Drebin [1]. It describes how much light will exit a voxel in the ray direction as it passes through it. It is designed to return an accurate color for a volume having a unique color at every voxel:
Cout(v) = Cin(v) * (1 - Opacity(x)) + Color(x) * Opacity(x)
Cin(v) is the light color before it passes through the voxel, and Cout(v) is the color after passing through it. This states that as a ray of light passes through a volume, at every voxel, the color of the light will be multiplied by the inverse opacity of the current voxel to simulate absorption, and the color of the current voxel times the opacity of the current voxel will be added to simulate scattering. This code can work as is, as long as the volume is traced back to front. If we track a variable for Transmittance that is initialized to 1, the volume can be traced in either direction. Transmittance can be thought of as the inverse of opacity.
This is where Exp, or the e^x function, comes into play. Similar to the problem of bank account interest, the more often you apply interest to an account, the more money will be earned, but only up to a certain point. That point is defined by e. The same effect is found when comparing the results of integrating density over a volume. The more steps that are taken, the more the final result will converge on a solution defined by the function Exp, or e raised to some power. This is where the Beer-Lambert law as well as the D3DFOG_EXP function come from.
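That convergence can be checked numerically. Here is a small illustrative Python sketch (the function names are my own, not from any shader code): splitting the absorption into more and more discrete steps converges on the Beer-Lambert result.

```python
import math

def discrete_transmittance(t, d, steps):
    # Each step removes (d * step_length) of the remaining light. As the
    # number of steps grows, this product converges on e^(-t * d).
    step = t / steps
    result = 1.0
    for _ in range(steps):
        result *= 1.0 - d * step
    return result

def beer_lambert(t, d):
    # Transmittance = e^(-t * d)
    return math.exp(-t * d)
```

With t = 1 and d = 0.5, four steps give roughly 0.586 while the limit is about 0.607; ten thousand steps agree with the exponential to several decimal places.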
The math we have explored so far gives us some hints about how to proceed to build a custom volume renderer. We know we need to figure out the thickness of the volume at each point. This thickness value can then be used with an exponential density function to approximate how much light the volume would block.
To sample the density of our volume, several steps are taken along each ray passing through the volume, and the value of the volume texture is read at each point. This example shows an imagined volume texture of a sphere. The camera rays show the result of sampling the volume at regular intervals to measure distance traveled within the media:
If the ray is inside the media during a step, the step length is added to an accumulation variable. If the ray is outside of the media during a step, nothing is accumulated during that step. At the end of this, for each pixel, we have a value describing how far the camera ray traveled while inside of the media in the volume texture. Because the distance is also multiplied by the opacity at each point, the final distance returned represents Linear Density.
That distance is represented in the above example as the yellow line between the yellow dots. Note that when low step counts are used like in the above example, the distances may not match the actual content very well and slicing artifacts become visible. These kinds of artifacts and solutions will be described in more detail further on.
At this point we are just accumulating linear values and returning a linear distance at the end. In order to make this look volumetric, we use an exponential function to remap the final value. The standard Direct3D exponential fog function D3DFOG_EXP mentioned above works well for this.
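As a sketch of that remap (Python, with the density factor as an assumed user parameter; this is illustrative, not part of the material itself):

```python
import math

def remap_opacity(accumdist, density):
    # D3DFOG_EXP-style remap: turn the accumulated linear distance into an
    # exponential transmittance, then invert it to get an opacity.
    transmittance = math.exp(-accumdist * density)
    return 1.0 - transmittance
```

An accumulated distance of 0 yields opacity 0, and opacity approaches 1 as the distance grows, which is exactly the saturating behavior the linear accumulation lacks.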
Example Opacity-Only Ray March
It is possible to do all of the ray marching code in the custom node, but that requires nested function calls, which requires multiple custom nodes. Custom nodes get auto-named by the translator, which means you have to call them assuming you know the order the compiler will add them (i.e., CustomExpression0, 1, 2...). The compiler can start renaming the functions just by adding new ones or changing how they are hooked up between various material pins.
To make this part a bit easier, I have added a PseudoVolumeTexture function into the common.usf file. Simply download and overwrite the Common.usf located in Engine\Shaders. You can do this with the editor running and it should work immediately. This is basically just repeated code from the previous post on pseudo volume textures. Having this function greatly simplifies the raymarching code, and it can just be swapped for a standard 3d texture sample when a future version of UE4 adds support. If you do not have one of the versions below, download one of them and just copy the last 2 functions into your version. I suggest using 4.13.2 for now over 4.14 until the 4.14.1 version is released. I will go into that at the very end.
common.usf (UE4.14):
https://www.dropbox.com/s/1ee9630r6fqbese/Common.usf?dl=0
common.usf (UE4.13.2):
https://www.dropbox.com/s/bagvoru81yc3aij/Common.usf?dl=0
Example Volume Texture of Smoke Ball:
https://www.dropbox.com/s/9h98z1mlhp1yw55/T_Volume_Wisp_01.tga?dl=0
RayMarching Code:
float numFrames = XYFrames * XYFrames;
float accumdist = 0;
// bring the camera vector into local space so we can step in 0-1 texture space
float3 localcamvec = normalize( mul(Parameters.CameraVector, Primitive.WorldToLocal) );
float StepSize = 1 / MaxSteps;

for (int i = 0; i < MaxSteps; i++)
{
    // read the density at the current position and accumulate it, weighted by step length
    float cursample = PseudoVolumeTexture(Tex, TexSampler, saturate(CurPos), XYFrames, numFrames).r;
    accumdist += cursample * StepSize;
    CurPos += -localcamvec * StepSize;
}
return accumdist;
This simple code advances a ray through a specified volume texture over a distance of 0-1 in texture space and returns the linear density of the particulates traveled through. It is by no means complete and is missing crucial details. Some bits will be added to the code later, and some of the details will be provided in the form of material nodes.
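To see the loop's behavior outside the engine, here is a CPU-side Python sketch of the same density-only march. An analytic sphere stands in for the PseudoVolumeTexture lookup, and all of the names are illustrative rather than UE4 API:

```python
def sample_density(pos):
    # Stand-in for the volume texture: a solid sphere of radius 0.4
    # centered in the 0-1 cube, density 1 inside and 0 outside.
    dx, dy, dz = pos[0] - 0.5, pos[1] - 0.5, pos[2] - 0.5
    return 1.0 if dx * dx + dy * dy + dz * dz < 0.16 else 0.0

def density_march(start, direction, max_steps):
    step_size = 1.0 / max_steps
    accumdist = 0.0
    pos = list(start)
    for _ in range(max_steps):
        clamped = [min(max(c, 0.0), 1.0) for c in pos]  # saturate(CurPos)
        accumdist += sample_density(clamped) * step_size
        pos = [p + d * step_size for p, d in zip(pos, direction)]
    return accumdist
```

A ray entering at (0.5, 0.5, 1.0) and marching along -Z passes through the sphere's center, so with enough steps the result approaches the 0.8 chord length, while a ray near the cube's edge accumulates nothing.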
This allows you to control the number of steps and frame layout you want to use.
In this simplified example, the node BoundingBoxBased_0-1_UVW is used because it's an easy way to get a local 0-1 starting position. It works with box or sphere meshes, but it is not what we will end up using by the end of this, for reasons that will soon be apparent.
Here is what this should look like if you put it on StaticMesh'/Engine/EditorMeshes/EditorCube.EditorCube' with 64 steps:
A random volumetric puffball, neat! But let's not get too excited yet. With the above 64 steps, the result looks pretty smooth. With 32 steps, strange slicing artifacts appear:
These artifacts betray the box geometry used to render the material. They are a kind of moire pattern that results from tracing the volume texture starting at exactly the surface of the box intersection. Starting there causes the sampling pattern to continue the box shape. By snapping the start positions to view-aligned planes, the artifacts can be reduced.
This is an example of emulating a geometric slicing approach using only the pixel shader. It still has slicing artifacts in motion, but they are far less noticeable and do not betray the box geometry, which is key. Additional sampling improvements can be had with low step counts by introducing temporal jitter. More on that later. Here is the additional code to align the samples.
// Plane Alignment
// get object scale factor
//NOTE: This assumes the volume will only be UNIFORMLY scaled. Non uniform scale would require tons of little changes.
float scale = length( TransformLocalVectorToWorld(Parameters, float3(1.00000000,0.00000000,0.00000000)).xyz);
float worldstepsize = scale * Primitive.LocalObjectBoundsMax.x*2 / MaxSteps;
float camdist = length( ResolvedView.WorldCameraOrigin - GetObjectWorldPosition(Parameters) );
float planeoffset = GetScreenPosition(Parameters).w / worldstepsize;
float actoroffset = camdist / worldstepsize;
planeoffset = frac( planeoffset - actoroffset);
float3 localcamvec = normalize( mul(Parameters.CameraVector, Primitive.WorldToLocal) );
float3 offsetvec = localcamvec * StepSize * planeoffset;
return float4(offsetvec, planeoffset * worldstepsize);
Notice that both the depth and the actor position are accounted for. That stabilizes the slices relative to the actor so there is no movement as the camera moves towards or away. I put this into another custom node for now. It will help to keep the setup part of the code separate from the core raymarching code so that other primitives like spheres can be added more easily. This is not a nested custom node since the value is used directly and only once. It is never called specifically by other custom nodes.
The next task is to control the step count more carefully. You may have noticed that the code so far is saturating the ray position to keep it inside the 0-1 space. That means whenever the tracer hits the edge of the box, it continues to waste time checking the volume. It also will never trace the full corner-to-corner distance of the volume, since the trace distance is limited to 1 and the corner-to-corner distance of the volume is 1.732. This just happens to not be a problem in the example volume so far because the content is roundish. One way to fix this is by checking to see if the ray exits the volume during the loop, but a solution like that is not ideal because it adds to the overhead of the loop, and that should be kept as simple as possible. A better solution is to pre-calculate the number of steps that fit.
It helps to use a simple primitive like a box or a sphere so that you can use simple math to determine thickness. While spheres may be the more performant shape due to covering fewer screen pixels, boxes let us display the entire content of volume textures and tend to be more flexible when distorting the volume. For now we will just deal with using a box. Here is how we precalculate the steps for a box. The world-to-local transforms allow the mesh to move. Note that this actually changes a few things about how we calculate the above plane alignment, so I rolled the above code into this. Now the function returns the local Ray Entry Position and Thickness directly:
//bring vectors into local space to support object transforms
float3 localcampos = mul(float4( ResolvedView.WorldCameraOrigin,1.00000000), (Primitive.WorldToLocal)).xyz;
float3 localcamvec = -normalize( mul(Parameters.CameraVector, Primitive.WorldToLocal) );
//make camera position 0-1
localcampos = (localcampos / (Primitive.LocalObjectBoundsMax.x * 2)) + 0.5;
float3 invraydir = 1 / localcamvec;
float3 firstintersections = (0 - localcampos) * invraydir;
float3 secondintersections = (1 - localcampos) * invraydir;
float3 closest = min(firstintersections, secondintersections);
float3 furthest = max(firstintersections, secondintersections);
float t0 = max(closest.x, max(closest.y, closest.z));
float t1 = min(furthest.x, min(furthest.y, furthest.z));
float planeoffset = 1-frac( ( t0 - length(localcampos-0.5) ) * MaxSteps );
t0 += (planeoffset / MaxSteps) * PlaneAlignment;
t0 = max(0, t0);
float boxthickness = max(0, t1 - t0);
float3 entrypos = localcampos + (max(0,t0) * localcamvec);
return float4( entrypos, boxthickness );
The node marked "Ray Entry" hooks to the CurPos input on the main ray marching node. The parameter PlaneAlignment allows toggling the alignment on and off.
Note that parts of the code now assume that you are using a Box static mesh that has its pivot at the center of the box and not on the floor of the box.
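For reference, the slab-method intersection at the heart of that setup can be sketched on the CPU. This is Python, not the shader itself; the plane-snapping and depth terms are omitted, and the ray direction components are assumed nonzero to keep the sketch simple:

```python
def box_march_setup(campos, camvec):
    # Intersect a ray with the unit cube [0,1]^3 using the slab method,
    # mirroring the t0/t1 math above. Returns the entry point and the
    # thickness the march should cover.
    inv = [1.0 / c for c in camvec]
    first = [(0.0 - p) * i for p, i in zip(campos, inv)]
    second = [(1.0 - p) * i for p, i in zip(campos, inv)]
    closest = [min(f, s) for f, s in zip(first, second)]
    furthest = [max(f, s) for f, s in zip(first, second)]
    t0 = max(0.0, max(closest))   # clamping lets a camera inside start at itself
    t1 = min(furthest)
    thickness = max(0.0, t1 - t0)
    entry = [p + t0 * v for p, v in zip(campos, camvec)]
    return entry, thickness
```

A ray fired down the main diagonal from outside reports the full 1.732 corner-to-corner thickness, while a camera sitting at the cube's center sees half of it.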
Sorting
So far we have been using the local position of the geometry to easily start a trace from the outside, but that won't let the camera go inside the volume. To support going inside, we can instead use the Ray Entry Position output from the already solved box intersection above, and then flip the faces of the polygons on the box geometry so they face inwards. This works because we know where the ray would have intersected the outside of the volume and we also know how long the ray will travel through the volume.
Flipping the faces and using the intersection will allow the camera to go inside the volume, but it will not make objects sort correctly. Any object inside the cube will appear to draw completely on top of the volume. To solve that, we just need to take the localized scene depth into account when calculating the ray distance within the box. This requires a few new lines to be added to the setup function:
float scale = length( TransformLocalVectorToWorld(Parameters, float3(1.00000000,0.00000000,0.00000000)).xyz);
float localscenedepth = CalcSceneDepth(ScreenAlignedPosition(GetScreenPosition(Parameters)));
float3 camerafwd = mul(float3(0.00000000,0.00000000,1.00000000),ResolvedView.ViewToTranslatedWorld);
localscenedepth /= (Primitive.LocalObjectBoundsMax.x * 2 * scale);
localscenedepth /= abs( dot( camerafwd, Parameters.CameraVector ) );
//this line goes just before the line: t0 = max(0, t0);
t1 = min(t1, localscenedepth);
Now, in the material settings, Disable Depth Test should be set to true in order to gain control over how the material blends with the scene. Sorting with other translucent objects will be done on a per-object basis and we won't have much control over that, but at least we can solve sorting with opaque objects. While in the material settings, also change the blend mode to AlphaComposite to avoid edge blending artifacts that occur with translucency. Also make sure the material is set to unlit.
Now we can generate accurate sorting with opaque geometry by adding one Scene Depth lookup. This automatically causes the ray marcher to return the correct opacity because we are stopping the ray from accumulating beyond the scene depth. There is still one artifact to fix, though. Because we are stopping the ray march using whole step sizes, we will see stair-step artifacts where opaque geometry intersects the volume:
To fix those slicing artifacts requires just taking one additional step. We track how many steps would have fit up to the scene depth and then take one final step sized to fit the remainder. That assures we end up taking a final sample right at the depth location which smooths out those seams. In order to keep the main tracing loop as simple as possible, we do this outside of the main loop as an additional density/shadow pass.
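The remainder step is simple to express. Here is a hedged Python sketch of the idea (illustrative names, not the shader code itself):

```python
import math

def partial_steps(depth_along_ray, step_size):
    # How many whole steps fit before the opaque surface, plus one final
    # partial step sized to land exactly on the scene depth.
    whole = math.floor(depth_along_ray / step_size)
    final_step = depth_along_ray - whole * step_size
    return whole, final_step
```

The whole steps plus the final partial step always sum back to the exact depth along the ray, which is what removes the stair-stepping at intersections.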
The resulting blend with opaque objects appears accurate as objects move and the view direction changes:
https://youtu.be/0kzmFcmV3Ag
So far we have a fairly functional density-only ray marcher. As you can see, the core ray marching part of a shader is probably the simplest part. Handling the tracing behavior for different primitives, sampling, and sorting problems are the tricky bits.
Light Sampling
To render convincingly lit volumes, the behavior of light transport must be modeled. As rays of light pass through a volume, a certain amount of that light will be absorbed and scattered by the particulates in the volume. Absorption is how much light energy is lost to the volume and scattering is how much light is reflected out. The ratio of Absorption (A) to Scattering (S) determines the diffuse brightness of the particulates [shopf2007].
In this case, we are only going to care about one kind of scattering for simplicity and performance reasons: Out-Scattering. That is basically how much of the light that hits the volume will be reflected back out isotropically or diffusely. In-Scattering refers to light bouncing around within the volume, which is generally too expensive to do in real time, but it can be decently approximated by blurring the results of the Out-Scattering. To know the out-scattering at a given point, it must be known how much light energy was lost due to absorption as the photons reached that point from the light source, as well as how much energy will then be lost heading towards the eye back out of the volume.
There are a number of techniques to calculate?these values, but this post will deal primarily with the brute force method of performing a nested ray march towards the light from each density sample. This method is quite expensive as it means the cost of the shader will be DensitySteps * ShadowSteps, or N*M. It is also by far the easiest and most flexible to implement.
The above example shows nested shadow samples being traced from each density sample originating from a single camera ray. Note that only density samples that are inside of the volume media have to perform the shadow samples, and the shadow loop can quit early if a ray reaches the volume border, or if the shadow density exceeds a threshold where close to full absorption has?occurred. These few things can reduce the drastic N * M situation a bit.
At each sample, the density is taken and used to determine how much light that sample can scatter back out. That also affects how much transmittance will decrease for the next iteration. The shader then shoots rays towards the light and sees how much of the potential light energy made it to that point. Thus, the visible light transmitted from the point to the camera is controlled by the total photon path length through the volume and the scattering coefficient of the point itself. This process can still be described by the prior formula from Drebin, 1988 [1]:
Cout(v) = Cin(v) * (1 - Opacity(x)) + Color(x) * Opacity(x)
But the above formula only describes a single light path to the camera. To be able to propagate light from out-scattering as well as calculate volume opacity, we need to recreate that iterative ray sample at each sample location, towards the light. Let's define a few basic functions which describe out lighting calculations.
Linear Density is defined at each point x along the ray as simply Opacity * Density Parameter. The parameter allows user tweaking of the density but will be dropped from the equations for simplicity from here on out, as it could also be pre-multiplied into the volume opacity.
Linear Density is accumulated along a ray from point x to point x' like this:

LinearDensity(x, x') = Integral from x to x' of Opacity(s) ds
Thus, Transmittance over the length of a ray from point x to x' is defined as:

Transmittance(x, x') = e^(-LinearDensity(x, x'))
This is how we calculated the density for the density-only ray march started above. To add lighting, we now need to account for the light scattering and absorption at each point along the ray. This involves nesting a bunch of these terms. At a point x within the volume, the amount of out-scattering that makes it to that point from a light from direction w is equal to:

OutScattering(x, w) = LightColor * e^(-LinearDensity(x, l))
Where w is the light direction and l is a point outside the volume towards the negative light direction. The term -LinearDensity(x, l) represents the linear density accumulated from point x towards the light until the volume boundary is reached, which represents the amount of particulate that would absorb light. Note that this is still only the value for the amount of light visible at that point; it does not yet account for the fraction of that light absorbed based on the opacity of the sample. For that, the OutScattering term gets multiplied by Opacity(x). It also does not account for further transmission loss as that light exits back out of the volume. To account for that loss, the transmittance from the camera to the point x must be determined.
We can make a modified function TotalOutScattering(x', w), which describes how much out-scattering is visible along a ray w from point x to point x', rather than just describing it for a single point:

TotalOutScattering(x', w) = Integral from x to x' of OS(s, w) * T(s, x') ds
Note that OS and T are short for the OutScattering and Transmittance terms above. OS should also be multiplied by Opacity(s), which I forgot to add but may recreate the expression later. This function will return the total scattering from all points along a view ray through the volume. It is actually a few nested integrals, which is too nasty to bother writing out in the expanded form, so we might as well start dealing with the code itself. Terms like OutScattering are implied to be multiplied by light color and diffuse color at the beginning.
Traditionally you may see this equation written as Radiance (L) in other papers but I have excluded that because for radiance you also account for the amount of background color transmitted into the volume which is basically just SceneColor * FinalOpacity. We won't add that into the math here for reasons that I somewhat arbitrarily decided upon:
1) We aren't going to blend the background color like that. Instead we will just use the AlphaComposite blend mode and plug in our opacity.
2) We aren't actually going to be blurring or scattering the background color which is why I am not going to bother talking about that term too much. For much more detail on the full math, see Shopf [2]. Much of the math on this page is based on equations from that page but I have attempted to make them more artist friendly by using real words instead of greek symbols and explaining the relationships in more simplified ways.
Example Shadowed Volume Code
float numFrames = XYFrames * XYFrames;
float curdensity = 0;
float transmittance = 1;
// pre-multiply the step sizes outside of the loops
float3 localcamvec = normalize( mul(Parameters.CameraVector, Primitive.WorldToLocal) ) * StepSize;
float shadowstepsize = 1 / ShadowSteps;
LightVector *= shadowstepsize;
ShadowDensity *= shadowstepsize;
Density *= StepSize;
float3 lightenergy = 0;

for (int i = 0; i < MaxSteps; i++)
{
    float cursample = PseudoVolumeTexture(Tex, TexSampler, saturate(CurPos), XYFrames, numFrames).r;

    //Sample Light Absorption and Scattering
    if( cursample > 0.001)
    {
        float3 lpos = CurPos;
        float shadowdist = 0;
        for (int s = 0; s < ShadowSteps; s++)
        {
            lpos += LightVector;
            float lsample = PseudoVolumeTexture(Tex, TexSampler, saturate(lpos), XYFrames, numFrames).r;
            shadowdist += lsample;
        }
        curdensity = saturate(cursample * Density);
        float shadowterm = exp(-shadowdist * ShadowDensity);
        float3 absorbedlight = shadowterm * curdensity;
        lightenergy += absorbedlight * transmittance;
        transmittance *= 1-curdensity;
    }
    CurPos -= localcamvec;
}
return float4( lightenergy, transmittance);
As you can see, just adding basic shadowing adds quite a lot of complexity to the simple density-only tracer we started with.
Notice that in this version, the cameravector and lightvector get pre-multiplied by their respective stepsize in the beginning, outside of the loop. That is because shadow tracing makes the shader much more expensive so we want to move as many operations outside of the loops as possible (especially the inner loop).
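As a rough CPU-side Python sketch of the shadowed loop, again substituting an analytic sphere for the texture lookup (all names are illustrative; the step sizes are pre-multiplied outside the loops as just described):

```python
import math

def sample_density(pos):
    # Stand-in for the volume texture: a solid sphere of radius 0.4
    # centered in the 0-1 cube, density 1 inside and 0 outside.
    dx, dy, dz = pos[0] - 0.5, pos[1] - 0.5, pos[2] - 0.5
    return 1.0 if dx * dx + dy * dy + dz * dz < 0.16 else 0.0

def shadowed_march(curpos, camvec, lightvec, max_steps, shadow_steps,
                   density, shadow_density):
    step_size = 1.0 / max_steps
    shadow_step = 1.0 / shadow_steps
    lstep = [c * shadow_step for c in lightvec]  # pre-multiplied, as in the HLSL
    shadow_density = shadow_density * shadow_step
    density = density * step_size
    transmittance = 1.0
    light_energy = 0.0
    pos = list(curpos)
    for _ in range(max_steps):
        clamped = [min(max(c, 0.0), 1.0) for c in pos]
        cursample = sample_density(clamped)
        if cursample > 0.001:  # only shadow-trace samples inside the media
            lpos = list(clamped)
            shadowdist = 0.0
            for _ in range(shadow_steps):
                lpos = [a + b for a, b in zip(lpos, lstep)]
                shadowdist += sample_density([min(max(c, 0.0), 1.0) for c in lpos])
            curdensity = min(1.0, cursample * density)  # saturate
            shadowterm = math.exp(-shadowdist * shadow_density)
            light_energy += shadowterm * curdensity * transmittance
            transmittance *= 1.0 - curdensity
        pos = [p - c * step_size for p, c in zip(pos, camvec)]
    return light_energy, transmittance
```

Marching a dense sphere with the light coming from the camera side yields a scattered energy between 0 and 1 and a transmittance near 0, as the telescoping transmittance product predicts.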
In the current form, the shader code above is still very slow. We did add one optimization: the shader only evaluates a voxel if it has an opacity > 0.001. This can potentially save a lot of time if our volume texture has a lot of empty space, but it won't help at all if the whole volume is written to. We need more optimizations to make this shader practical.
The biggest problem with the above version is that it is going to run all shadow steps for all density samples. So if we used something like 64 density steps and 64 shadow steps, that would be 4096 samples. Because our pseudovolume function requires 2 lookups, that means our shader would be doing 8192 texture lookups per pixel! That is pretty bad, but we can optimize it significantly by quitting early if either the ray leaves the volume or full absorption is reached.
The first part can be handled by checking if the ray has left the volume at each shadow iteration. That would be something like:
if(lpos.x > 1 || lpos.x < 0 || lpos.y > 1 || lpos.y < 0 || lpos.z > 1 || lpos.z < 0) break;
While a check like that works, it turns out to be pretty slow since the shadow loop runs so many times. I have also tried precalculating the number of shadow steps before each shadow loop instead, very similar to how I precalculated the number of density iterations for a box shape. Surprisingly, that turned out to be the slowest method. The fastest method I have found so far to early-terminate the shadow loop is this simple box test:
float3 shadowboxtest = floor( 0.5 + abs( 0.5 - lpos ) );
float exitshadowbox = shadowboxtest.x + shadowboxtest.y + shadowboxtest.z;
if(exitshadowbox >= 1) break;
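To see why this works, here is a commented walkthrough of the same two lines (an explanation, not new shader code):

```hlsl
// For lpos inside the unit cube, every component of abs(0.5 - lpos) is < 0.5,
// so floor(0.5 + ...) returns 0 per axis and the sum stays 0.
// Once any component of lpos reaches the 0 or 1 face or beyond, abs(0.5 - lpos)
// reaches 0.5 or more, floor() returns at least 1, and the sum becomes >= 1:
// a branchless "left the box" flag built from cheap ALU ops instead of six
// comparisons.
//
// Example: lpos = (0.3, 0.9, 1.2)
//   abs(0.5 - lpos)        = (0.2, 0.4, 0.7)
//   floor(0.5 + the above) = (0, 0, 1)   -> sum = 1 -> break
```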
The next bit we need to add is early termination based on an absorption threshold. Typically this means you quit the shadow loop once the transmittance is below some small number such as 0.001. The larger this threshold, the more artifacts will appear, so this value should be tweaked to be as large as is visually acceptable.
If we wrote the shadow marching loop by just multiplying the light transmittance by the inverse opacity at each point, then we would implicitly know the transmittance at every iteration, and checking for the threshold would be as simple as:
if( transmittance < threshold) break;
But notice that we are not actually calculating transmittance during the shadow iterations. We are accumulating linear density, just like in our first density-only example. This is in an effort to make the shadow loop as cheap as possible, since doing a single add for each shadow accumulation is much cheaper than the two multiplies and a 1-x that would otherwise be required. This just means we need to use some math to express our shadow threshold as a distance rather than a transmittance value.
To do that, we simply invert the final transmittance term, which is calculated as e ^ (-t * d). We want to determine for what value of t the transmittance drops below our threshold. Thankfully, this is exactly what the function log(x) does. The default base of log is e; it answers the question "e raised to what power equals x?". So if we want to know at what value of t the transmittance would be less than 0.001, we can calculate:
DistanceThreshold = -log(0.001) / d;
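Written out, the rearrangement is just solving the Beer-Lambert expression for t, with ε standing for the transmittance threshold (0.001 here):

```latex
e^{-t d} < \epsilon
\iff -t d < \ln \epsilon
\iff t > \frac{-\ln \epsilon}{d}
```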
Assuming the user-defined density d = 1, this gives a linear accumulation value of 6.907755 needed to reach 0.001 transmittance. We add this to our shader code with the line:
float shadowthresh = -log(ShadowThreshold) / ShadowDensity;
Where ShadowThreshold is a user defined transmittance threshold and ShadowDensity is a user defined shadow density multiplier. This line needs to go after the line that multiplies ShadowDensity by shadowstepsize, above the loops.
Updated Shadow Code:
Adding in the shadow exit and transmittance thresholds, as well as the final partial step evaluation outside of the loop (which also has to perform the same shadow steps), yields this code:
float numFrames = XYFrames * XYFrames;
float accumdist = 0;
float curdensity = 0;
float transmittance = 1;
float3 localcamvec = normalize( mul(Parameters.CameraVector, Primitive.WorldToLocal) ) * StepSize;
float shadowstepsize = 1 / ShadowSteps;
LightVector *= shadowstepsize;
ShadowDensity *= shadowstepsize;
Density *= StepSize;
float3 lightenergy = 0;
float shadowthresh = -log(ShadowThreshold) / ShadowDensity;
for (int i = 0; i < MaxSteps; i++)
{
float cursample = PseudoVolumeTexture(Tex, TexSampler, saturate(CurPos), XYFrames, numFrames).r;
//Sample Light Absorption and Scattering
if( cursample > 0.001)
{
float3 lpos = CurPos;
float shadowdist = 0;
for (int s = 0; s < ShadowSteps; s++)
{
lpos += LightVector;
float lsample = PseudoVolumeTexture(Tex, TexSampler, saturate(lpos), XYFrames, numFrames).r;
float3 shadowboxtest = floor( 0.5 + abs( 0.5 - lpos ) );
float exitshadowbox = shadowboxtest.x + shadowboxtest.y + shadowboxtest.z;
shadowdist += lsample;
if(shadowdist > shadowthresh || exitshadowbox >= 1) break;
}
curdensity = saturate(cursample * Density);
float shadowterm = exp(-shadowdist * ShadowDensity);
float3 absorbedlight = shadowterm * curdensity;
lightenergy += absorbedlight * transmittance;
transmittance *= 1-curdensity;
}
CurPos -= localcamvec;
}
CurPos += localcamvec * (1 - FinalStep);
float cursample = PseudoVolumeTexture(Tex, TexSampler, saturate(CurPos), XYFrames, numFrames).r;
//Sample Light Absorption and Scattering
if( cursample > 0.001)
{
float3 lpos = CurPos;
float shadowdist = 0;
for (int s = 0; s < ShadowSteps; s++)
{
lpos += LightVector;
float lsample = PseudoVolumeTexture(Tex, TexSampler, saturate(lpos), XYFrames, numFrames).r;
float3 shadowboxtest = floor( 0.5 + abs( 0.5 - lpos ) );
float exitshadowbox = shadowboxtest.x + shadowboxtest.y + shadowboxtest.z;
shadowdist += lsample;
if(shadowdist > shadowthresh || exitshadowbox >= 1) break;
}
curdensity = saturate(cursample * Density);
float shadowterm = exp(-shadowdist * ShadowDensity);
float3 absorbedlight = shadowterm * curdensity;
lightenergy += absorbedlight * transmittance;
transmittance *= 1-curdensity;
}
return float4( lightenergy, transmittance);
Now we have a functioning translucent volume ray marcher that can self-shadow from one directional light. The above shadow steps would have to be repeated for each additional light supported. The code can easily support point lights in addition to directional lights by calculating inverse-square falloff on top of each shadow term, but the vector from CurPos to the light must then be recalculated at each density sample.
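A rough sketch of that point-light variant follows. This is my own illustration, not code from the original material: LightPosition is an assumed parameter giving the light position in the same 0-1 local space as CurPos, and the block replaces the fixed LightVector pre-multiplication that the directional version does outside the loop.

```hlsl
// Inside the main density loop, once per density sample (hypothetical sketch).
// LightPosition is an assumed float3 parameter in the volume's 0-1 local space.
float3 tolight = LightPosition - CurPos;
float distsq   = max(dot(tolight, tolight), 0.0001);   // avoid divide-by-zero
float3 LightVector = normalize(tolight) * shadowstepsize;

// ... run the same shadow loop as before using this per-sample LightVector ...

// Then attenuate the shadow term with inverse-square falloff:
float shadowterm = exp(-shadowdist * ShadowDensity) / distsq;
```

Re-deriving LightVector per sample is what makes this more expensive than the directional case, since normalize can no longer be hoisted out of the loop.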
Ambient Light
So far we have only been dealing with out-scattering contributed by a single light. This generally will not look very good: if the light is fully shadowed, the volume will appear flat in the shadow. Usually some kind of ambient light term is added to address this. There are lots of ways to handle the ambient light. One way is to pre-calculate the ambience inside of the volume texture, like deep shadow maps. The downside to that approach is that you won't be able to rotate and instance the volumes, as the ambient light would remain fixed. A realtime approach is to cast a few sparse rays up from each voxel to estimate overhead shadowing. This can be done with one additional offset sample, but the results get better with each additional averaged sample.
Another reason to favor a dynamic ambient term over a prebaked one is if you are planning to procedurally stack multiple volume textures. One example of this is described in the Horizon Zero Dawn cloud paper [3]. In this paper, one volume texture describes the macro shape of unique detail over an entire area and a second tiling volume texture is used to modulate the density of the base volume. An approach like this is very powerful as volume rendering techniques are currently limited by resolution. Applying blend modulation is a great way to create the appearance of more detail, but it means methods that precalculate lighting will not match the new details that arise from the combination of volume textures.
Here is how we take three additional offset samples to estimate overhead ambient occlusion. This can go just after the transmittance is multiplied in the main loop:
//Sky Lighting
shadowdist = 0;
lpos = CurPos + float3(0,0,0.05);
float lsample = PseudoVolumeTexture(Tex, TexSampler, saturate(lpos), XYFrames, numFrames).r;
shadowdist += lsample;
lpos = CurPos + float3(0,0,0.1);
lsample = PseudoVolumeTexture(Tex, TexSampler, saturate(lpos), XYFrames, numFrames).r;
shadowdist += lsample;
lpos = CurPos + float3(0,0,0.2);
lsample = PseudoVolumeTexture(Tex, TexSampler, saturate(lpos), XYFrames, numFrames).r;
shadowdist += lsample;
//shadowterm = exp(-shadowdist * AmbientDensity);
//absorbedlight = exp(-shadowdist * AmbientDensity) * curdensity;
lightenergy += exp(-shadowdist * AmbientDensity) * curdensity * SkyColor * transmittance;
The two commented out terms were just an attempt to reduce the number of temporaries used. The same can be done to all of the code.
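For what it's worth, the three unrolled taps above could equally be written as a small loop over the same offsets. This is purely a stylistic alternative, not a change in behavior:

```hlsl
// Loop form of the three sky-occlusion taps; offsets unchanged from above.
static const float SkyOffsets[3] = { 0.05, 0.1, 0.2 };
shadowdist = 0;
[unroll]
for (int a = 0; a < 3; a++)
{
    float3 skypos = CurPos + float3(0, 0, SkyOffsets[a]);
    shadowdist += PseudoVolumeTexture(Tex, TexSampler, saturate(skypos), XYFrames, numFrames).r;
}
lightenergy += exp(-shadowdist * AmbientDensity) * curdensity * SkyColor * transmittance;
```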
Light Extinction Color
Notice that we are only applying the LightColor to the shadow term once per density sample. Doing it this way does not allow the scattering to change color with depth. The scattering from clouds in real life is mostly Mie scattering, which scatters all light wavelengths roughly equally, so the single-color scatter is not bad for clouds. Still, colored extinction can emulate extinction spectra in liquids, a sunset IBL response, or artistic effects just by replacing the ShadowDensity parameter with a V3 (float3). You divide the Shadow Density by the color you want it to show:
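A hedged sketch of that change (ExtinctionColor is an assumed float3 parameter; to keep the early-out conservative, the scalar threshold can be computed from the smallest density channel, which decays slowest):

```hlsl
// ShadowDensity promoted to float3 by dividing the scalar by the desired color.
// Channels with a smaller density survive absorption longer, so dense regions
// shift toward ExtinctionColor as light passes through them.
float3 ShadowDensity3 = ShadowDensity / ExtinctionColor;   // ExtinctionColor assumed, e.g. float3(1, 0.5, 0.3)
float shadowthresh = -log(ShadowThreshold)
                   / min(ShadowDensity3.r, min(ShadowDensity3.g, ShadowDensity3.b));

// ... shadow loop unchanged, still accumulating scalar shadowdist ...

float3 shadowterm = exp(-shadowdist * ShadowDensity3);     // per-channel transmittance
float3 absorbedlight = shadowterm * curdensity;
```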
Here is what the entire material should look like now:
Notice that a phase function was added to the light color (that function exists in engine\content but is not exposed to the function library). It was done this way, rather than on the output side of the ray marcher, so that the phase function could be applied to just the directional light and not affect the ambient light.
Additional Shadowing Options
It is possible to add support for various shadowing methods, such as the custom per-object depth based shadow maps discussed in a previous post. While a solution like that can work here, depth based shadowmaps do not look great for volumetrics because the shadow will be crisp without performing expensive custom blurring (and remember we are already inside of a crazy expensive nested loop).
I have only experimented so far with enabling Distance Field Shadows. Distance field shadows are nice for volumetrics because the shadows can be made soft without extra cost. The downside is that looking up the global distance fields many times for volumetric purposes is extremely expensive and the resolution of the distance fields themselves is not great. Only try this if you have a 980+ level gpu.
Adding distance field shadows also requires passing in, or re-computing, the world space light vector, preferably outside of the loop:
float3 LightVectorWS = normalize( mul( LightVector, Primitive.LocalToWorld));
Then inside of the main loop, just after the shadow steps:
float3 dfpos = 2 * (CurPos - 0.5) * Primitive.LocalObjectBoundsMax.x;
dfpos = TransformLocalPositionToWorld(Parameters, dfpos).xyz;
float dftracedist = 1;
float dfshadow = 1;
float curdist = 0;
float DistanceAlongCone = 0;
for (int d = 0; d < DFSteps; d++)
{
DistanceAlongCone += curdist;
curdist = GetDistanceToNearestSurfaceGlobal(dfpos.xyz);
float SphereSize = DistanceAlongCone * LightTangent;
dfshadow = min( saturate(curdist / SphereSize) , dfshadow);
dfpos.xyz += LightVectorWS * dftracedist * curdist;
dftracedist *= 1.0001;
}
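An annotated read of that loop, as comments (my interpretation of what each term is doing, not new code):

```hlsl
// DistanceAlongCone tracks how far we have marched toward the light, and
// SphereSize = DistanceAlongCone * LightTangent is the radius of a cone
// (with half-angle tangent LightTangent) at that distance.
// saturate(curdist / SphereSize) is 1 when the nearest surface lies outside
// the cone (fully lit) and falls toward 0 as the surface cuts into it,
// so the penumbra widens with distance at no extra sample cost.
// dftracedist *= 1.0001 appears to slightly lengthen each successive step
// so the march does not stall against nearly-parallel surfaces.
```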
Then the term dfshadow gets multiplied by the absorbed light.
Temporal Jitter
Sometimes slicing artifacts will show up even with high step counts, and other times the resolution of the volume texture itself can cause artifacts. When low step counts are used, still images can be improved by using the plane snapping described above, but camera motion will still show the slicing artifacts as the slices rotate. Temporal jitter randomly offsets the ray start location every frame and smooths the result over time. It generally works well unless you have moving objects in front of the jittered surface.
In the past I used the DitherTemporalAA material function to do this, but there is a cheaper and better way now, thanks to Marc Olano's improved pseudorandom functions added to UE4 in 4.12. It boils down to these three lines (note that localcamvec has been pre-multiplied by step size at this point):
int3 randpos = int3(Parameters.SvPosition.xy, View.StateFrameIndexMod8);
float rand = float(Rand3DPCG16(randpos).x) / 0xffff;
CurPos += localcamvec * rand * Jitter;
https://youtu.be/KTdj9nzZJWo
Final Notes
Earlier I suggested using 4.13.2 since 4.14 introduced a regression that prevents the material compiler from sharing instructions between pins. So connecting the opacity and emissive color means the entire raymarch function is done twice. One workaround in 4.14 is to use 1.0 for opacity and then use the opacity to lerp between emissive and scene color.
(I had more notes, but it turns out this blog template limits the post length and simply omits things beyond that point, so I will add more information in a followup post. It won't even let me fit all the references.)
Citations:
[1]: Drebin, R. A., Carpenter, L., and Hanrahan, P. Volume Rendering. In SIGGRAPH '88: Proceedings of the 15th Annual Conference on Computer Graphics and Interactive Techniques (1988), pp. 65–74.