AmzBee's picture

Light map occlusion...

Hey everyone, I know I've been asking a lot of basic questions recently, but I just want you to know that using OpenTK has made programming fun for me again, and I really learn something every time someone helps me out.

Recently I managed to build quite an efficient light mapping class for my project that gives some really nice results. Unfortunately I think I've missed something: the process does not take into account whether the polys are facing the light. So I was wondering if there are some standard methods for checking this?

Also, I've started to look into shadows. I've read a few papers on the subject, but haven't yet got my head around how I would apply what I know in a managed language. Could anyone point me towards a paper that would help me in this area, preferably one that takes advantage of shaders? I've recently come to understand their potential performance benefit and would very much like to take advantage of it.

Once again, thanks for any replies, I always thoroughly appreciate it :)

Aimee.


Comments

puklaus's picture

If you are using shaders, then set up normals as usual, and calculate the lighting value for every fragment from the fragment's normal and the light direction. If this is what you mean, check e.g.
http://www.lighthouse3d.com/tutorials/glsl-tutorial/directional-light-pe...

There are more good examples of using GLSL with lighting on lighthouse3d.com.
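
The whole idea is something like this, just a sketch (the uniform and attribute names are only examples, not from anyone's real code):

using OpenTK.Graphics.OpenGL;

static class DiffuseDirectionalShader
{
    // Vertex shader: pass an eye-space normal on to the fragment stage.
    const string VertexSource = @"
        varying vec3 normal;
        void main()
        {
            normal      = gl_NormalMatrix * gl_Normal;
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }";

    // Fragment shader: N dot L is the diffuse term; the max() clamps
    // fragments that face away from the light to zero.
    const string FragmentSource = @"
        varying vec3 normal;
        uniform vec3 lightDirection;   // normalized, in eye space, pointing towards the light
        uniform vec4 lightColour;
        void main()
        {
            float nDotL  = max(dot(normalize(normal), lightDirection), 0.0);
            gl_FragColor = vec4(lightColour.rgb * nDotL, 1.0);
        }";

    // Compile and link both stages into a program object.
    public static int Build()
    {
        int vs = GL.CreateShader(ShaderType.VertexShader);
        GL.ShaderSource(vs, VertexSource);
        GL.CompileShader(vs);

        int fs = GL.CreateShader(ShaderType.FragmentShader);
        GL.ShaderSource(fs, FragmentSource);
        GL.CompileShader(fs);

        int program = GL.CreateProgram();
        GL.AttachShader(program, vs);
        GL.AttachShader(program, fs);
        GL.LinkProgram(program);
        return program;
    }
}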

tksuoran's picture

In general, when it comes to lights and shadows, understanding the rendering equation (see Wikipedia) at least on some level is important.

For light maps and polygon direction: I have not used light maps myself, and I am not sure about the details of your implementation so far. You can factor the rendering equation in different ways and consequently store different terms in a texture map. For a perfectly diffuse surface, the exitant radiance (the light that leaves a point on the surface) does not depend on the viewing direction, so you could store the "final" exitant radiance directly in the lightmap texture, assuming static lighting conditions. Alternatively you could store only "light visibility", which does not take any of the surface properties or the viewing direction into account.
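
Very roughly, in my notation and for direct lighting only: for a diffuse surface with albedo \rho the radiance leaving a lightmap texel is

    L_o(x) = (\rho(x) / \pi) \int_\Omega L_i(x, \omega) \, V(x, \omega) \, (n \cdot \omega) \, d\omega

where L_i is the incident radiance from the lights, V is visibility and n the surface normal. In the first case the whole right hand side goes into the lightmap; in the second only V (or the cosine-weighted visibility integral) does, and the surface colour is applied at run time.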

Concerning shadow maps: before you start you need some basic tools. At a minimum you need to be comfortable with render to texture, and you need basic classes like Camera, Viewport, Texture and Program (a GLSL / GL program). You should also be comfortable rendering with shaders (programs). Then:

  • First pretend the light is just a camera, and get it working so that you can render a texture from the light's point of view. So you have two rendering passes in your render frame code: RenderShadowMap() and Render3D(). I use an orthographic camera; this matches a directional light, but one which still has a position. You need to place the light camera so that what you see through it matches your 3D scene camera. This is easiest in the beginning if you have a small sandbox scene and the light camera can see all of it; you can improve this later. (There is a rough sketch of the two passes after this list.)
  • After this works, in RenderShadowMap() use a program which writes fragment depth to the texture. You can move to a single channel (red) texture at this point, perhaps using floats or 16-bit floats.
  • In Render3D(), bind the shadow map texture in your shadow-mapped shaders, and have the shader code compare the fragment depth with the shadow map depth. The comparison result is either 0 or 1 for light visibility without PCF (add that later).
  • Once you get initial shadow mapping "working" you will notice issues such as shadow acne/light bleeding and aliasing. For acne/light bleeding you can enable bilinear filtering for your shadow map samplers and apply a slope-based bias. For aliasing you can increase the shadow map resolution up to a few k, but after that you need to either constrain your scene and cameras or go for cascaded shadow maps. Keep in mind to optimize for the worst cases, not the best cases.
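
Very roughly, the setup and the two passes can look something like this in OpenTK terms. This is only a sketch and not RenderStack code; the texture size, the names and the drawScene callback are made up:

using System;
using OpenTK;
using OpenTK.Graphics.OpenGL;

class ShadowMapPass
{
    const int Size = 1024;
    int depthTexture;
    int framebuffer;
    public Matrix4 LightViewProjection;

    // One-time setup: a depth texture attached to an FBO with no colour buffer.
    public void Initialize()
    {
        depthTexture = GL.GenTexture();
        GL.BindTexture(TextureTarget.Texture2D, depthTexture);
        GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.DepthComponent24,
                      Size, Size, 0, PixelFormat.DepthComponent, PixelType.Float, IntPtr.Zero);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest);

        GL.GenFramebuffers(1, out framebuffer);
        GL.BindFramebuffer(FramebufferTarget.Framebuffer, framebuffer);
        GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment.DepthAttachment,
                                TextureTarget.Texture2D, depthTexture, 0);
        GL.DrawBuffer(DrawBufferMode.None);   // depth only, no colour attachment
        GL.ReadBuffer(ReadBufferMode.None);
        GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
    }

    // Pass 1: the light pretends to be a camera; scene depth is rendered from its point of view.
    public void RenderShadowMap(Action<Matrix4> drawScene, Vector3 lightPosition, Vector3 lightTarget)
    {
        Matrix4 view       = Matrix4.LookAt(lightPosition, lightTarget, Vector3.UnitY);
        Matrix4 projection = Matrix4.CreateOrthographic(20f, 20f, 1f, 100f); // directional light
        LightViewProjection = view * projection;

        GL.BindFramebuffer(FramebufferTarget.Framebuffer, framebuffer);
        GL.Viewport(0, 0, Size, Size);
        GL.Clear(ClearBufferMask.DepthBufferBit);
        drawScene(LightViewProjection);       // draw with a depth-writing program
        GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
    }

    // Pass 2: bind the shadow map so the scene shaders can compare depths against it.
    public void BindForRender3D(int shadowMappedProgram)
    {
        GL.ActiveTexture(TextureUnit.Texture1);
        GL.BindTexture(TextureTarget.Texture2D, depthTexture);
        GL.UseProgram(shadowMappedProgram);
        GL.Uniform1(GL.GetUniformLocation(shadowMappedProgram, "shadowMap"), 1);
        GL.UniformMatrix4(GL.GetUniformLocation(shadowMappedProgram, "lightViewProjection"), false, ref LightViewProjection);
    }
}

In the shadow-mapped fragment shader you then transform the fragment with lightViewProjection (remapped from [-1,1] to [0,1]), sample shadowMap, and treat the fragment as lit only when the stored depth plus a small slope-based bias is not less than the fragment's light-space depth.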

I have basic shadow mapping working on top of RenderStack (which runs on top of OpenTK), and I might release that in a while.

AmzBee's picture

Thanks guys for your replies, I can't tell you how much that has cleared things up for me :) Also, just before I came back to check, I managed to get the lighting to work almost how I pictured it (though I'm still having trouble getting the VBO to read the base texture coords properly for some reason, hmm). I've created an adaptive static light map generator that also takes into account whether each face is seen by the light sources.
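
(For anyone else wondering about the facing check from my first post: the standard test turns out to be just a dot product between the face normal and the direction towards the light, roughly like this; a simplified sketch, not my exact code:)

using OpenTK;

// True when the light is on the front side of the face, i.e. the face can receive light.
static bool FaceSeesLight(Vector3 faceCenter, Vector3 faceNormal, Vector3 lightPosition)
{
    Vector3 toLight = Vector3.Normalize(lightPosition - faceCenter);
    return Vector3.Dot(faceNormal, toLight) > 0f;
}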

Here's a screenie of the work in progress.

Once I get this sorted, I'm gonna start working on the shadows. The biggest gap in my knowledge at the moment is probably how to do a depth test, and how to pass uniform variables and other data to shaders. I've had a shader up and running before, but it did nothing more than just sit there behaving itself.

Aimee.

puklaus's picture

Ah, you are using a light map generator (whereas I was suggesting using a pre-rendered texture (a Blender or Max lightmap bake) with a second set of UVs, and then multitexturing).
I thought that you were going to use shadow mapping (render the scene from the light's position and direction, and use that in pass 2). Done that way you can calculate the final colour while checking whether each texel is shadowed or not.
(My csat engine uses simple shadow mapping now, and it is on Google Code, but I'm going to make it use cascaded_shadow_maps (from the NVIDIA demos) and VSM.)
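
If you go the baked-lightmap route, the multitexturing side is roughly this (a fixed function sketch; with shaders you would instead sample both textures and multiply them):

using OpenTK.Graphics.OpenGL;

static void BindBaseAndLightmap(int baseTextureId, int lightmapTextureId)
{
    // Unit 0: the normal diffuse texture, using the first UV set.
    GL.ActiveTexture(TextureUnit.Texture0);
    GL.Enable(EnableCap.Texture2D);
    GL.BindTexture(TextureTarget.Texture2D, baseTextureId);

    // Unit 1: the baked lightmap, using the second UV set.
    // MODULATE multiplies it with the result of unit 0.
    GL.ActiveTexture(TextureUnit.Texture1);
    GL.Enable(EnableCap.Texture2D);
    GL.BindTexture(TextureTarget.Texture2D, lightmapTextureId);
    GL.TexEnv(TextureEnvTarget.TextureEnv, TextureEnvParameter.TextureEnvMode, (int)TextureEnvMode.Modulate);
}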

AmzBee's picture

The problem I'm having is that the UVs my code generates for the lighting work a treat and render fine with the VBO, but for some reason when I add just regular bog standard textures back into the mix, the UV coords from the obj file seem to get ignored. They're sat in the VBO data, so perhaps I'm not setting the stride properly. Going to have to have a closer look at VBOs and multitexturing I think.
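
For reference, my understanding of what the interleaved setup should look like is something like this (made-up layout of position, normal, base UV and lightmap UV; the stride is the byte size of one whole vertex and each pointer gets its own byte offset into it, which is where I suspect I've slipped up):

using System;
using OpenTK.Graphics.OpenGL;

static void SetupVertexPointers(int vboId)
{
    // Hypothetical interleaved layout, 10 floats per vertex:
    //   position.xyz | normal.xyz | baseUv.xy | lightmapUv.xy
    const int stride = 10 * sizeof(float);   // bytes per whole vertex, NOT per attribute

    GL.BindBuffer(BufferTarget.ArrayBuffer, vboId);

    GL.EnableClientState(ArrayCap.VertexArray);
    GL.VertexPointer(3, VertexPointerType.Float, stride, IntPtr.Zero);

    GL.EnableClientState(ArrayCap.NormalArray);
    GL.NormalPointer(NormalPointerType.Float, stride, new IntPtr(3 * sizeof(float)));

    // Base texture coords (the obj UVs) for texture unit 0.
    GL.ClientActiveTexture(TextureUnit.Texture0);
    GL.EnableClientState(ArrayCap.TextureCoordArray);
    GL.TexCoordPointer(2, TexCoordPointerType.Float, stride, new IntPtr(6 * sizeof(float)));

    // Lightmap coords for texture unit 1 - each client texture unit needs its own pointer and enable.
    GL.ClientActiveTexture(TextureUnit.Texture1);
    GL.EnableClientState(ArrayCap.TextureCoordArray);
    GL.TexCoordPointer(2, TexCoordPointerType.Float, stride, new IntPtr(8 * sizeof(float)));
}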