As you may already know, the KRI engine aims to reproduce the Blender rendering pipeline as closely as possible. This is meant to free the artist's hands and imagination from engine limitations, keeping the transition into real time as smooth as possible.
Today's post is about Blender texture units and how they change the entire rendering pipeline. Each material in Blender is applied to a surface in the following way:
- The base properties are set (diffuse, specular, emission, lighting model).
- The first texture unit is applied, for example, by mixing the diffuse color with a texture value.
- The second texture unit is applied, for example, by adding a texture value to the specularity and specular hardness.
- The third texture unit, for example, replaces the normal with one extracted from a tangent-space normal map.
- ... and so on for all other units...
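The stack above can be sketched in a few lines of Python. This is not KRI or Blender code, just a toy model of how successive texture units modify the material values; the `mix` and `add` functions follow Blender's blend modes of the same names, and everything else is hypothetical:

```python
# Toy model of Blender-style texture units applied one after another.

def mix(base, tex, factor):
    """Blender's 'Mix' blend: linear interpolation between base and texture."""
    return tuple(b * (1.0 - factor) + t * factor for b, t in zip(base, tex))

def add(base, tex, factor):
    """Blender's 'Add' blend: the texture value, scaled by factor, is added."""
    return tuple(b + t * factor for b, t in zip(base, tex))

# Base material properties.
material = {"diffuse": (0.75, 0.25, 0.25), "specular": (0.5,)}

# Texture unit stack: (target property, blend function, sampled texel, factor).
units = [
    ("diffuse",  mix, (0.0, 0.0, 1.0), 0.5),  # unit 1: mix a blue texel into diffuse
    ("specular", add, (0.25,),         1.0),  # unit 2: add a texel to specularity
]

for target, blend, texel, factor in units:
    material[target] = blend(material[target], texel, factor)

print(material["diffuse"])   # (0.375, 0.125, 0.625)
print(material["specular"])  # (0.75,)
```

The important property is that the units form an ordered list: each one reads the value left by its predecessors, which is exactly what makes a fixed single-pass shader hard to generate for arbitrary stacks.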
KRI's forward-rendering meta techniques allowed any material value to be fetched from an arbitrary source, be it a texture or a uniform value. This functionality is fairly complex and worked well in many cases. However, there are several scenarios that meta techniques do not cover:
- It is impossible to have more than one texture affect a value.
- The blending mode for each value is pre-defined (fixed). For example, it is not possible to add a color from a texture onto the diffuse component (the workaround is to create a separate renderer, which is not very convenient).
- Decals are not supported in the way Blender provides them.
- Since we don't know in advance whether a normal map is present, or which space it is defined in, we always have to provide quaternions (or some other tangent-space basis) for the vertices. This introduces excessive difficulties for simple materials applied to objects without a given tangent space (like a plain diffuse texture in screen coordinates).
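To illustrate the last point: a normal sampled from a tangent-space map is useless on its own and has to be rotated into a common space by the per-vertex basis, which is why that basis must always be supplied when the engine cannot rule a normal map out. A minimal sketch (not KRI code; the function name and quaternion layout are my own) of that rotation:

```python
# Rotating a tangent-space normal into world space by a unit quaternion.

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    # v' = v + 2*r x (r x v + w*v), where r = (x, y, z)
    cx = y * v[2] - z * v[1] + w * v[0]
    cy = z * v[0] - x * v[2] + w * v[1]
    cz = x * v[1] - y * v[0] + w * v[2]
    return (
        v[0] + 2.0 * (y * cz - z * cy),
        v[1] + 2.0 * (z * cx - x * cz),
        v[2] + 2.0 * (x * cy - y * cx),
    )

# With the identity quaternion the tangent space coincides with world
# space, so a flat normal-map texel passes through unchanged.
flat_normal = (0.0, 0.0, 1.0)  # decoded from a normal map texel
print(quat_rotate((1.0, 0.0, 0.0, 0.0), flat_normal))  # (0.0, 0.0, 1.0)
```

For a simple diffuse material, computing and storing this quaternion per vertex is pure overhead, which is exactly the cost the layered approach avoids.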
One day I was looking into deferred techniques and, at the same time, wanted to get more flexibility with texture units. The two ideas merged into the newly discovered Layered rendering! It is performed in the following stages:
- A "G" frame-buffer is prepared, containing three textures: (diffuse, emission), (specular, hardness), (normal). It is cleared together with the depth buffer.
- Objects are drawn into it, filling in the basic material properties. This can be done either while filling the depth buffer or in a separate stage after an early-Z pass.
- Non-normal texture units are applied as follows: the blend mode and component write masks are set, and the objects are drawn into texture-1 and texture-2, modifying the diffuse and specular properties.
- Normal texture units are applied without blending, affecting strictly texture-3. The tangent space is provided only if a texture requires it.
- A full-screen quad is drawn to the main screen, sampling from texture-1 and filling in the emission component of the models.
- Each light source is drawn as a light volume with additive blending, sampling from all three textures and applying lighting to the resulting color. This is equivalent to traditional deferred rendering (it actually shares the same routines in KRI).
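The whole sequence can be condensed into a toy, single-pixel simulation. Again, this is not KRI code: the G-buffer layout follows the three textures described above, but all names, the single hard-coded texture unit, and the simple Lambert light are hypothetical stand-ins:

```python
# Toy single-pixel walk-through of the layered stages.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Stages 1-2: the G-buffer's three "textures", filled with base properties.
gbuf = {
    "tex1": {"diffuse": [0.5, 0.5, 0.5], "emission": 0.0},  # (diffuse, emission)
    "tex2": {"specular": 0.4, "hardness": 16.0},            # (specular, hardness)
    "tex3": {"normal": [0.0, 0.0, 1.0]},                    # (normal)
}

# Stage 3: a non-normal texture unit with an 'add' blend mode, its write
# mask restricted to the diffuse components of texture-1.
texel = [0.25, 0.0, 0.0]
gbuf["tex1"]["diffuse"] = [b + t for b, t in zip(gbuf["tex1"]["diffuse"], texel)]

# Stage 5: the emission component reaches the screen first, unlit.
screen = [gbuf["tex1"]["emission"]] * 3

# Stage 6: each light volume adds its contribution on top (additive
# blending); here a single directional light with a Lambert term.
lights = [{"dir": [0.0, 0.0, 1.0], "color": [1.0, 1.0, 1.0]}]
for light in lights:
    n_dot_l = max(0.0, dot(gbuf["tex3"]["normal"], light["dir"]))
    for i in range(3):
        screen[i] += gbuf["tex1"]["diffuse"][i] * light["color"][i] * n_dot_l

print(screen)  # [0.75, 0.5, 0.5]
```

Note how the texture unit pass only ever touches the component its mask allows, and how the light loop never looks at the original material, only at the accumulated G-buffer, which is what makes the stages independent of each other.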
One obvious performance downside of this approach is the additional pass for the basic material properties and for each texture unit (compared to the single pass of the meta technique). On the positive side, the rendering sequence becomes more transparent (easier to debug, maintain and understand). There is a one-to-one correspondence with Blender's model of applying texture units, allowing artists to use any blend modes, decals and simplified meshes, and to see them in real time right away. Once the proper texture unit configuration is settled, a programmer can compose a single-pass renderer for that object to optimize performance.
The Layered profile has been added to the viewer and can be tested any time. Enjoy!
Edit-1: I've added a comparison image to show how close the KRI renderer now is to Blender.