Inertia's picture

Texture Loaders

Some info regarding the texture loaders:

There are currently two different loaders:
a) one uses .Net's GDI+ to load uncompressed textures.
b) the other loads compressed .dds textures in the DXT1/3/5 formats with custom logic.

Both loaders only set the texture's base level and max level; they do not set filtering, wrapping or anisotropy, and they do not touch any other OpenGL state besides the currently bound texture.

They are loaders only: there is no texture pool and no GL.DeleteTextures(). They should also be considered "middleware" to get an image from disk into OpenGL, not an ideal solution for production/release. So far every single image I've thrown at them had to be flipped upside down, which implies a copy of the texture. For production/release-ready texture loading you might want to define a simple custom format: just GL.GetTexImage() whatever the OpenTK loaders gave you and save that out into your own file format. This isn't very hard to do, and your application will load faster.
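
As a rough illustration, that caching step could look something like the sketch below. The level-size queries, the variable names and the "write it out" step are my assumptions, not something the loaders provide; it assumes the OpenTK.Graphics.OpenGL namespace (or whatever GL namespace your OpenTK version exposes), and "handle" stands for the texture id the loader returned:

// handle = the texture id the OpenTK loader returned
GL.BindTexture( TextureTarget.Texture2D, handle );

int width, height;
GL.GetTexLevelParameter( TextureTarget.Texture2D, 0, GetTextureParameter.TextureWidth, out width );
GL.GetTexLevelParameter( TextureTarget.Texture2D, 0, GetTextureParameter.TextureHeight, out height );

byte[] pixels = new byte[ width * height * 4 ]; // BGRA, 4 bytes per pixel
GL.GetTexImage( TextureTarget.Texture2D, 0, PixelFormat.Bgra, PixelType.UnsignedByte, pixels );

// write width, height and pixels into your own file format here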

What the loaders do return is a boolean for success/failure, the TextureTarget of the loaded image (TextureTarget.Texture1D, Texture2D or TextureCubeMap) and the texture handle you can use with GL.BindTexture().
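
A typical call would look roughly like this (parameter order taken from the snippet further down; the handle type and the file name are just placeholders):

TextureTarget TexDimension;
int TextureID; // handle type assumed here
bool Success = ImageGDI.LoadFromDisk( "wall.png", true, true, out TextureID, out TexDimension ); // flip, build mipmaps
if ( Success )
    GL.BindTexture( TexDimension, TextureID );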

My questions regarding the loaders:

1) Currently the loaders return GL_TEXTURE_2D for images that are neither 1D nor cube maps; should GL_TEXTURE_RECTANGLE be considered as an option? (It's primarily interesting for sprite-based games: texture coordinates are not specified in the [0f..1f] range and there are no mipmaps.)

2) The GDI+ loader has a boolean parameter to automatically build mipmaps via Glu; should there be a parameter for automatic texture compression as well? (I'd recommend using .dds if you want compression, because automatic compression only offers GL.Hint() as influence over what the compression result *could* look like.)

3) Currently the loaders catch all exceptions thrown inside their block and return OpenGL's default texture handle, zero, if things went wrong. Should this be configurable to a custom value? (The Source engine does that with a black/pink checkerboard texture when a texture is missing. OpenGL's default texture is just white.)

4) How far should the loaders be wrapped? Just a single LoadTexture() function, which internally figures out the file extension and calls the appropriate loader? Or expose loading .dds and GDI+ images as two separate functions? Or offer all three options?

The LoadTexture function would internally look something like this:

bool Success = false;

switch ( filename.Substring( filename.Length - 4, 4 ) ) // file extension (last 4 characters)
{
    case "exif":
    case "tiff":
    case ".tif":
    case ".bmp":
    case ".gif":
    case ".jpg":
    case ".png":
        Success = ImageGDI.LoadFromDisk( filename, FlipImage, BuildMipMaps, out TextureID, out TexDimension );
        break;
    case ".dds":
        Success = ImageDDS.LoadFromDisk( filename, FlipImage, out TextureID, out TexDimension );
        break;
    default:
        // unrecognized format
        break;
}

if ( Success )
{
    // set filters/wrapping etc. on the freshly bound texture here
}

Comments

Inertia's picture

Dear Inertia,

since you posted this topic two weeks ago and nobody has cared to reply, I'll reply to you myself:

1-4) Just do what you consider the right thing, and also do that for:

5) Should the texture loaders set some default filter/wrap modes, to make sure the texture will draw properly? Like Nearest and NearestMipmapNearest filters and Repeat wrapping?

P.S.: next time, don't dare to justify your decisions :P

the Fiddler's picture

5) Yes, nearest filtering by default. It sucks to get a white texture and fumble around for 15 minutes before you remember you have to set this first.
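
Something along these lines, as a sketch (TexDimension/TextureID being the loader's out parameters; the mipmap-aware min filter only makes sense if mipmaps were actually built):

GL.BindTexture( TexDimension, TextureID );
GL.TexParameter( TexDimension, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest );
GL.TexParameter( TexDimension, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.NearestMipmapNearest ); // or TextureMinFilter.Nearest without mipmaps
GL.TexParameter( TexDimension, TextureParameterName.TextureWrapS, (int)TextureWrapMode.Repeat );
GL.TexParameter( TexDimension, TextureParameterName.TextureWrapT, (int)TextureWrapMode.Repeat );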

4) What's the intended usage? Probably two things: directly uploading to GL and returning a texture id, or creating a System.Drawing.Bitmap. Maybe Load() and LoadBitmap()?

3) Throw the correct exception and leave it up to the user. He can catch it and use a checkerboard texture, retry, or fail. Silently catching errors is bad imho (*why* did it fail? A malformed file? A missing resource? No permission to access the file?)

(yes, the permission thing actually happened to me yesterday; I wouldn't have figured it out without seeing the exception text)

2) This might have to do with (5) too. Maybe pass a few optional flags to Load(), like TextureHint.Compress | TextureHint.Mipmap?
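
Roughly something like this (TextureHint and the Load() call are purely hypothetical here, not existing OpenTK types):

[Flags]
public enum TextureHint
{
    None               = 0,
    Mipmap             = 1,
    Compress           = 2,
    ResizeToPowerOfTwo = 4  // see (1b) below
}

// usage: Load( "wall.png", TextureHint.Compress | TextureHint.Mipmap );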

1) a) Can we distinguish 1D textures from 2D textures in a .dds? (it is trivial through System.Drawing.Bitmap)

b) What about another hint, like TextureHint.ResizeToPowerOfTwo? I'd prefer the loader to leave the size intact unless the user requests otherwise, *or* the hardware doesn't support GL 2.0 NPOT textures (then autoresize through GLU/SGI).

My 2cc :)

Inertia's picture

1.a) Actually the loader is ditching all mipmap levels smaller than 4x4, because smaller levels only partially fill a DXT block. I haven't supported loading 1D textures from .dds for the same reason: 3/4 of the pixels in each block would be undefined, and you still get compression artifacts. For 1D textures the memory requirements of compressed and uncompressed data are also the same, but with a .bmp you get no quality loss from compression.

1.b) "automatic" resizing is one of the things i truely disliked about DevIL. That's why the loaders return you errors, so you can try load a different (maybe smaller, maybe 2^n) texture.

2&5) Automatic mipmap creation is done through Glu (and is implemented in the GDI+ loader). I'll take a look at reducing the number of parameters the functions require, through enums or static variables :)

3) Currently the full exception is printed with Trace.WriteLine(), so no information for debugging is lost. Forwarding the exceptions is a good idea, so programmers can take action at runtime.

4) The only intent is to load an image from disk into OpenGL. If you want a Drawing.Bitmap, there's little point in using anything besides .Net's own API. I can't think of a scenario where it would be interesting to have the .dds converted to a Bitmap, because you'd lose the compression while still keeping its artifacts.

Thank you very much, any feedback helps a lot with deciding the direction this should be heading.

the Fiddler's picture

4) Makes sense. Only one Load function then :)

One suggestion: provide a Load() overload that takes a System.Drawing.Image, to allow loading resources from memory.

1.b) Agreed. If we allow this, the user should have to explicitly request it.

Inertia's picture

[System.Drawing.Image overload]
Sure, that's not a problem.

1.b) It's also extremely complicated to decompress a DXT texture, resize it and compress it again. There are a couple of DXT compression algorithms to choose from, and either way the image quality will probably end up quite bad, because the image is compressed twice in the end.
With a Drawing.Image overload you can manually resize the image to whatever you like before sending it to OpenGL :)
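
Internally such an overload would boil down to a LockBits upload along these lines. This is a rough sketch, not the actual loader code; it assumes the System.Drawing, System.Drawing.Imaging and OpenTK.Graphics.OpenGL namespaces (adjust for the OpenTK version you're on), and "image" is the System.Drawing.Image passed in:

Bitmap bmp = new Bitmap( image );
BitmapData data = bmp.LockBits( new Rectangle( 0, 0, bmp.Width, bmp.Height ),
                                ImageLockMode.ReadOnly,
                                System.Drawing.Imaging.PixelFormat.Format32bppArgb );

int handle;
GL.GenTextures( 1, out handle );
GL.BindTexture( TextureTarget.Texture2D, handle );
GL.TexImage2D( TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba8,
               data.Width, data.Height, 0,
               OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte,
               data.Scan0 );

bmp.UnlockBits( data );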

the Fiddler's picture

1.b) I suppose anyone who uses DXT textures will have thought about possible POT issues anyway ;)

Inertia's picture

6) Something I had considered but not yet felt the necessity to implement: some restriction to limit the maximum mipmap size taken from the .dds file. Let's assume you have a compressed DXT5 file with the following mipmap levels:

  1. 4096x2048
  2. 2048x1024
  3. 1024x512
  4. 512x256
  5. 256x128
  6. 128x64
  7. 64x32
  8. 32x16
  9. 16x8
  10. 8x4
  11. 4x2
  12. 2x1
  13. 1x1

Currently the loader will read all those mipmaps but not call TexImage for 11, 12 and 13; some .dds authoring tools have come to the same conclusion and don't even create those tiny mipmaps.

But it might be interesting to tell the loader that you only want mipmaps 3-10, because the graphics card does not support textures larger than 1024^2 pixels. I don't think it should be the loader's responsibility to figure out the max. texture size and try to be smart; instead it should offer the programmer some way to set a limit and ignore larger mipmaps.
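
As a sketch, the upload loop in the DDS path could then look roughly like this. The per-level parsing, the variable names (mipmapCount, baseWidth/baseHeight, mipmapData, maxMipmapSize) and the DXT5 block math are assumptions for illustration, not the actual loader code:

int glLevel = 0;
for ( int level = 0; level < mipmapCount; level++ )
{
    int width  = Math.Max( 1, baseWidth  >> level );
    int height = Math.Max( 1, baseHeight >> level );

    if ( width > maxMipmapSize || height > maxMipmapSize )
        continue; // above the limit: skip this mipmap entirely

    // DXT5: 16 bytes per 4x4 block
    int size = Math.Max( 1, width / 4 ) * Math.Max( 1, height / 4 ) * 16;
    GL.CompressedTexImage2D( TextureTarget.Texture2D, glLevel++,
                             PixelInternalFormat.CompressedRgbaS3tcDxt5Ext,
                             width, height, 0, size, mipmapData[ level ] );
}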

the Fiddler's picture

An optional parameter like int maxTextureSize would be useful (e.g. for limiting memory usage on low-end machines).

Figuring out max texture size is dead easy:

GL.GetInteger(GetPName.MaxTextureSize, out size);

I cannot think of any use case where *not* limiting the max mipmap size would be useful, though. What happens if one tries to upload textures bigger than the supported size?

Inertia's picture

The problem isn't figuring out the max. texture size, but rather that the loader tries to be smart, screws up, and the user is left wondering what went wrong. If you try to load a 1027^2 .bmp and the limit is set to 1024, the loader currently wouldn't report anything, because technically nothing went wrong.

I've tried loading an 800MB texture; GL.GetError returns "invalid enum".

This whole restriction thing might be best left for version 2 of the loaders. A static int MaxAllowedTextureSize could be added later on; the GDI+ loader would manually rescale the image to meet the requirement, while the DDS loader would skip larger mipmaps. There's still the problem of what to do if the .dds file has no mipmaps and is too large, though.

the Fiddler's picture

Yeah, no rush to get it out right now. There's room for improvements later on.

We need to see what can go wrong:

  1. Invalid, corrupt, or unavailable image.
  2. No OpenGL context.
  3. Image too large.

We should throw the correct exception in each case. What about ArgumentNullException or ArgumentException("Invalid filename or image corrupt") for (1), GraphicsContextException("Need a current GraphicsContext") for (2), and ImageTooLargeException("Max allowed size: {0}. Image size: {1}") for (3)?
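
In code that might end up looking something like this (ImageTooLargeException doesn't exist and would have to be defined; imageWidth/imageHeight stand in for whatever the loader parsed out of the file; assumes the usual System, System.IO, OpenTK.Graphics and OpenTK.Graphics.OpenGL namespaces):

public class ImageTooLargeException : Exception
{
    public ImageTooLargeException( string message ) : base( message ) { }
}

// inside Load():
if ( filename == null )
    throw new ArgumentNullException( "filename" );
if ( !File.Exists( filename ) )
    throw new ArgumentException( "Invalid filename or image corrupt", "filename" );
if ( GraphicsContext.CurrentContext == null )
    throw new GraphicsContextException( "Need a current GraphicsContext" );

int maxSize;
GL.GetInteger( GetPName.MaxTextureSize, out maxSize );
if ( imageWidth > maxSize || imageHeight > maxSize )
    throw new ImageTooLargeException(
        String.Format( "Max allowed size: {0}. Image size: {1}x{2}", maxSize, imageWidth, imageHeight ) );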