Hey everyone, sorry to make my first post a question, but just so you know, I tried a lot of things before reaching out. I'm in a bit of a pickle. This morning I started down the path of implementing VBOs to boost the performance of my tile-based engine. I got my head around them, and I actually like the neat way they deal with lots of verts at once. However, after getting an "entry point not defined" error, I realised that I can't even use VBOs on this system spec:
Windows XP Machine
ATI Radeon 9200 (apologies for saying GeForce)
Xeon 2.4 Dual Core X2 CPU
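In case it helps anyone else hitting the same error, here's roughly how support can be checked before calling any VBO functions (a sketch assuming OpenTK; on pre-1.5 drivers VBOs only exist via the GL_ARB_vertex_buffer_object extension, which the 9200's driver may not expose):

```csharp
using System;
using OpenTK.Graphics.OpenGL;

static class VboCheck
{
    // Must be called with a current GL context.
    // Checks both the core version and the ARB extension, since the
    // Radeon 9200's driver reports an OpenGL version below 1.5.
    public static bool HasVboSupport()
    {
        string version = GL.GetString(StringName.Version);       // e.g. "1.3.x"
        string extensions = GL.GetString(StringName.Extensions);

        bool core15 = string.Compare(version, "1.5") >= 0;       // crude lexicographic check, fine for 1.x
        bool arbExt = extensions.Contains("GL_ARB_vertex_buffer_object");
        return core15 || arbExt;
    }
}
```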
Now, I could go and get the FX5200 out of another XP machine I have; even though it's old, it does support OpenGL 1.5. But the truth is there are plenty of games that run wonderfully on this machine, and I'm puzzled as to why drawing a 640x680 screen with 32x32 quads gives me 20fps at best. Is there a more efficient way to stream quads to the graphics card, or should I just admit defeat and get that FX5200?
Here are a few optimizations that have been made to speed things up:
- Entire maps are not drawn; only the tiles that fall within the screen's region are processed.
- Textures are only bound if the texture ID doesn't match the last one bound.
- I modified the code so all quads are submitted within a single GL.Begin(BeginMode.Quads)/GL.End() pair.
- Also tried making the tiles 64x64 instead to lower the poly count; it didn't make much difference.
- Following some advice from a few months ago (when I was doing PHP), I replaced foreach loops with for loops, caching the count in an int instead of calling .Count() each iteration, to reduce the time spent selecting regional tiles.
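One subtlety worth flagging about combining the texture-binding and batching points above: GL.BindTexture is not legal between GL.Begin and GL.End, so a single Begin/End pair only works if every visible tile comes from one texture (i.e. a tile atlas). A rough sketch of the culled, single-batch loop under that assumption (tileSize, camX/camY, screenW/screenH, map, mapW/mapH, atlasTextureId, and the AtlasUv lookup are illustrative placeholders, not the engine's actual names):

```csharp
// Visible-region culling plus a single Begin/End batch.
// Assumes all tiles live in one atlas texture, bound before the batch starts.
int firstCol = Math.Max(0, camX / tileSize);
int firstRow = Math.Max(0, camY / tileSize);
int lastCol  = Math.Min(mapW - 1, (camX + screenW) / tileSize);
int lastRow  = Math.Min(mapH - 1, (camY + screenH) / tileSize);

GL.BindTexture(TextureTarget.Texture2D, atlasTextureId); // bind once, outside Begin/End
GL.Begin(BeginMode.Quads);
for (int row = firstRow; row <= lastRow; row++)
{
    for (int col = firstCol; col <= lastCol; col++)
    {
        int x = col * tileSize - camX;
        int y = row * tileSize - camY;
        RectangleF uv = AtlasUv(map[row, col]); // placeholder: UVs for this tile type

        GL.TexCoord2(uv.Left, uv.Top);     GL.Vertex2(x, y);
        GL.TexCoord2(uv.Right, uv.Top);    GL.Vertex2(x + tileSize, y);
        GL.TexCoord2(uv.Right, uv.Bottom); GL.Vertex2(x + tileSize, y + tileSize);
        GL.TexCoord2(uv.Left, uv.Bottom);  GL.Vertex2(x, y + tileSize);
    }
}
GL.End();
```

If tiles live in separate textures, the batch has to be broken at each texture change, which is exactly why grouping tiles by texture (or atlasing them) matters.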
Plus a load more, which even included reducing the quality of textures just to squeeze some more performance out of it. Thanks for giving this post a read, and I hope someone has a bit of insight into any other areas I could explore. Where there's a will there's a way, eh? :P
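One VBO-free avenue worth exploring first: client-side vertex arrays (GL.VertexPointer/GL.DrawArrays) have been core OpenGL since 1.1, so the Radeon 9200 does support them, and they cut the per-vertex call overhead of immediate mode dramatically. A minimal sketch, assuming the visible tiles' positions and UVs are rebuilt into flat float arrays each frame (verts, coords, and quadCount are illustrative names):

```csharp
// Client-side vertex arrays: core since OpenGL 1.1, no VBO extension needed.
// verts holds x,y pairs and coords holds u,v pairs for quadCount quads,
// filled by the tile-culling loop each frame.
GL.EnableClientState(ArrayCap.VertexArray);
GL.EnableClientState(ArrayCap.TextureCoordArray);

GL.VertexPointer(2, VertexPointerType.Float, 0, verts);
GL.TexCoordPointer(2, TexCoordPointerType.Float, 0, coords);

GL.DrawArrays(BeginMode.Quads, 0, quadCount * 4); // 4 vertices per quad

GL.DisableClientState(ArrayCap.TextureCoordArray);
GL.DisableClientState(ArrayCap.VertexArray);
```

This replaces four GL.Vertex2/GL.TexCoord2 calls per tile with one draw call for the whole visible region, which is usually the biggest win available on fixed-function hardware without VBOs.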