solo

Sudden slowdown

Hi,

I've recently picked up OpenGL with OpenTK using C#.
I've got a relatively simple display of models using imported textures and polygon lists.
I added lightmaps using multitexturing; it worked but was a bit slow, so I rewrote it.

However, now it's even slower, even though I'm only using call lists. The sudden slowdown is puzzling:
it drops from about 100 fps to several seconds per frame once the combined number of textures and lightmap textures goes over about 17.

However, before I rewrote it, it wasn't this slow!

The only real difference is that I'm now creating textures from memory rather than from a bitmap file. The models otherwise display fine, including the lightmaps, which work for a simple room of 6 surfaces with 9 textures in total.

Is there something else I might be doing wrong? Is the graphics card thrashing wildly between textures or something? It's so strange that it didn't do this before.

And the strange thing is that without lightmaps it runs just as fast even on maps with lots of textures; I've got a map with 148 textures which runs at 50 fps with virtually no CPU usage.

Yet as soon as I add one lightmap, which pushes the texture count above 17 or so, it goes extremely slow; I've waited over a minute for a single frame.

I've been scratching my head over this for some time now. Does anyone have any ideas where to look?

many thanks
Solo =^.^=


Comments

Inertia

Welcome,

The problem is most likely that your card cannot handle more than 16 textures bound simultaneously (i.e. you're using GL.Active[Client]Texture() for each texture, rather than swapping which texture is currently bound).

You can query the maximum with int i; GL.GetInteger( GetPName.MaxTextureUnits, out i ); and you should check GL.GetError() as well.
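Put together, a minimal sketch (assuming an active OpenTK GL context and the classic fixed-function API) might look like this:

```csharp
// Query the fixed-function texture unit limit for this card.
int maxUnits;
GL.GetInteger(GetPName.MaxTextureUnits, out maxUnits);
Console.WriteLine("Max fixed-function texture units: " + maxUnits);

// Check whether any preceding GL call raised an error.
ErrorCode err = GL.GetError();
if (err != ErrorCode.NoError)
    Console.WriteLine("GL error: " + err);
```

On older cards this often reports 8 or 16, which would line up with the slowdown appearing around 17 textures.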

solo

Thanks, but I only have two textures bound at a time:
one for the main texture in TextureUnit.Texture0 and one for the lightmap in TextureUnit.Texture1.
I activate the texture unit and bind the texture; I assume this unbinds the one previously bound to the active unit?

Anyway, the strange thing is that I had made sure my lightmap texture coordinates never went outside the lightmap texture.
I also check this and it never fails, so I no longer bothered to set TextureWrapMode.Clamp when I rewrote that part of the code.
However, setting it for the lightmap textures solves the problem. How weird!

The maximum number of bound textures was 8 on my Radeon 9800 AIW Pro, but it was working fine with 10 textures.
Some of my lightmaps are quite small; is it possible they can cause a problem if they are very small?

I need to partition my model into smaller chunks now to speed it up, but I had to tidy up a lot of code before I could do that.
I'm not sure yet whether I should stick with a simple wrapper or go for a full engine like Irrlicht or Axiom;
I've tried both, but they're a bit hard to get into to do exactly what I want.

teichgraf

You should also check that your texture dimensions are powers of two (2^n) when you are using TextureTarget.Texture2D.
And, as Inertia mentioned above, you should test for an OpenGL error. Add some code like this after your OGL calls:

ErrorCode errCode = GL.GetError();
if (errCode != ErrorCode.NoError)
{
    throw new InvalidOperationException("OGL Error: " + Glu.ErrorString(errCode));
}
solo

Ah, I hadn't thought of the 2^n requirement. I think I might have some that aren't;
some of the shadow masks might have been odd sizes,
and I build the lightmaps from those. I'll force them to be 2^n.
I'm sure the rest of the actual textures are 2^n.
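Forcing the sizes up to the next power of two can be done with a small helper like this (a hypothetical sketch; the lightmap would then be resized or padded to the rounded dimensions before upload):

```csharp
// Round a texture dimension up to the next power of two,
// e.g. 100 -> 128, 37 -> 64, 256 -> 256.
static int NextPowerOfTwo(int n)
{
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}
```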

But it seems to work fine when I use clamp,
and not otherwise. Could non-2^n textures make it go slow,
and could clamp somehow fix that?

Does the error code work for calls made while building a call list?

thanks
Solo =^.^=

teichgraf

Does the error code work for calls made while building a call list?

GL.GetError() just checks the error flag. If an error occurred during a GL call, the flag is set.
See the man page for details:
[...] When an error occurs, the error flag is set to the appropriate error code value. No other errors are recorded until glGetError is called, the error code is returned, and the flag is reset to GL_NO_ERROR. [...]

solo

Aha, OK, thanks. That's useful then: whenever I check it, it shows the first error since I last checked.

I must admit I have trouble finding manual entries, since the OpenTK names are modified from the OpenGL names. I started OpenGL with the OpenTK samples, so I tend to think in terms of the OpenTK names.

I've fixed the texture sizes to be powers of two now, and it no longer goes dead slow when I don't set clamp,
although it's about the same speed as when I had odd-sized textures with clamp.

I have a map with 32,000 polygons, 151,000 vertices, and 12,000 textures including light textures. Using just one call list for all of that, it still manages a good enough fps to pan and move smoothly.

Which is quite a bit better than before. The only difference is that I've sorted the polygons by both textures, so there's less texture switching.
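Sorting by both texture keys can be sketched like this (assuming a hypothetical Poly type holding diffuse and lightmap texture ids; the names are illustrative, not from the original code):

```csharp
// Group polygons sharing the same (diffuse, lightmap) pair so that
// GL.BindTexture is called as rarely as possible while emitting geometry.
polys.Sort((a, b) =>
{
    int c = a.TextureId.CompareTo(b.TextureId);
    return c != 0 ? c : a.LightmapId.CompareTo(b.LightmapId);
});
```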

I was going to partition it, but I'm not so sure I need to now.

thanks for the help
Solo =^.^=

lagz

I've observed massive slowdowns on ATI graphics cards when using non-power-of-two textures without clamp. I have no idea why, because the card supported OpenGL 2.0. It doesn't happen on NVIDIA cards.

What does clamp actually do? I kind of ran into the solution by accident.

JTalton

GL_CLAMP causes the coordinates to be clamped to the range [0,1] and is useful for preventing wrapping artifacts when mapping a single image onto an object.
GL_REPEAT causes the integer part of the s coordinate to be ignored; OpenGL uses only the fractional part, thereby creating a repeating pattern.

Thus, if using clamp, the hardware can use the texture coordinates directly and clamp them to the range 0 to 1.
If using repeat, it has to calculate the fractional part and use that to create the repeating texture.
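In OpenTK terms, setting clamped wrapping on a texture looks roughly like this (a sketch assuming an active GL context with the lightmap currently bound as Texture2D):

```csharp
// Clamp both the s and t coordinates of the bound texture to [0,1],
// instead of the default GL_REPEAT wrapping.
GL.TexParameter(TextureTarget.Texture2D,
    TextureParameterName.TextureWrapS, (int)TextureWrapMode.Clamp);
GL.TexParameter(TextureTarget.Texture2D,
    TextureParameterName.TextureWrapT, (int)TextureWrapMode.Clamp);
```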

The older OpenGL spec required that textures be a power of two, so many hardware vendors only supported that in hardware.
I'm guessing that on those older cards, repeat must be done in software for textures that are not a power of two.
I even have a few older cards that don't display non-power-of-two textures properly at all.

I believe most newer hardware properly supports non-power-of-two textures.

teichgraf

Maybe you should test for the ARB_texture_non_power_of_two extension or check the ARB_texture_rectangle extension.
Here you can find a comparison of GL_TEXTURE_2D and GL_TEXTURE_RECTANGLE_ARB.
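A simple way to test for these at runtime is to scan the extension string (a sketch assuming an active GL context and the legacy GL.GetString query):

```csharp
// The extension string is a space-separated list of supported extensions.
string extensions = GL.GetString(StringName.Extensions);
bool supportsNpot = extensions.Contains("GL_ARB_texture_non_power_of_two");
bool supportsRect = extensions.Contains("GL_ARB_texture_rectangle");
```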