CheatCat wrote:

[Solved] All textures appear the same size!

I have 3 images in different sizes: 32x32, 128x128 and 256x256. But when I use them as textures, they all appear at the same size!
What is wrong?

The code:

using System;
using System.Drawing;
 
using OpenTK;
using OpenTK.Graphics;
using OpenTK.Audio;
using OpenTK.Math;
using OpenTK.Input;
using OpenTK.Platform;
 
namespace StarterKit
{
    class Game : GameWindow
    {
        // Creates a new TextPrinter to draw text on the screen.
        TextPrinter printer = new TextPrinter(TextQuality.Medium);
        Font sans_serif = new Font(FontFamily.GenericSansSerif, 18.0f);
        int texture = 0, tex2 = 0, tex3 = 0;
 
        public Game() : base(800, 600, GraphicsMode.Default, "Test")
        {
            VSync = VSyncMode.On;
        }
 
        public override void OnLoad(EventArgs e)
        {
            TexLib.TexUtil.InitTexturing();
            GL.ClearColor(System.Drawing.Color.CornflowerBlue);
            texture = TexLib.TexUtil.CreateTextureFromFile("hej.png");
            tex2 = TexLib.TexUtil.CreateTextureFromFile("tjo.png");
            tex3 = TexLib.TexUtil.CreateTextureFromFile("tja.png");
            GL.Enable(EnableCap.DepthTest);
        }
 
        protected override void OnResize(ResizeEventArgs e)
        {
            GL.Viewport(0, 0, Width, Height);
            GL.MatrixMode(MatrixMode.Projection);
            GL.LoadIdentity();
            Glu.Perspective(45.0, Width / (double)Height, 1.0, 64.0);
        }
 
        public override void OnUpdateFrame(UpdateFrameEventArgs e)
        {
            if (Keyboard[Key.Escape])
            {
                Exit();
            }
        }
 
        public override void OnRenderFrame(RenderFrameEventArgs e)
        {
            GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
            GL.MatrixMode(MatrixMode.Modelview);
            GL.LoadIdentity();
            Glu.LookAt(Vector3.Zero, Vector3.UnitZ, Vector3.UnitY);
 
            //First texture
            GL.BindTexture(TextureTarget.Texture2D, texture);
            GL.LoadIdentity();
            GL.Translate(6, 0, -15);

            GL.Begin(BeginMode.Quads);
            GL.TexCoord2(0, 0); GL.Vertex3(-1, 1, 0);
            GL.TexCoord2(1, 0); GL.Vertex3(1, 1, 0);
            GL.TexCoord2(1, 1); GL.Vertex3(1, -1, 0);
            GL.TexCoord2(0, 1); GL.Vertex3(-1, -1, 0);
            GL.End();

            //Second texture
            GL.BindTexture(TextureTarget.Texture2D, tex2);
            GL.LoadIdentity();
            GL.Translate(-3, 0, -15);

            GL.Begin(BeginMode.Quads);
            GL.TexCoord2(0, 0); GL.Vertex3(-1, 1, 0);
            GL.TexCoord2(1, 0); GL.Vertex3(1, 1, 0);
            GL.TexCoord2(1, 1); GL.Vertex3(1, -1, 0);
            GL.TexCoord2(0, 1); GL.Vertex3(-1, -1, 0);
            GL.End();

            //Third texture
            GL.BindTexture(TextureTarget.Texture2D, tex3);
            GL.LoadIdentity();
            GL.Translate(5, 5, -15);

            GL.Begin(BeginMode.Quads);
            GL.TexCoord2(0, 0); GL.Vertex3(-1, 1, 0);
            GL.TexCoord2(1, 0); GL.Vertex3(1, 1, 0);
            GL.TexCoord2(1, 1); GL.Vertex3(1, -1, 0);
            GL.TexCoord2(0, 1); GL.Vertex3(-1, -1, 0);
            GL.End();
 
           SwapBuffers();
        }
 
        [STAThread]
        static void Main()
        {
            using (Game game = new Game())
            {
                game.Run(30.0, 0.0);
            }
        }
    }
}

Comments

objarni wrote:

CheatCat, you are confusing the concept of "Quads" with the concept of "Textures".

Forget all you know about sprites/icons/bitmaps/surfaces and such things from 2d-libraries! :)

Open Graphics Library (OpenGL) is more "general" than that.

Textures are "bitmaps in OpenGL memory" (in system memory or on the gfx card). They can then be used to manipulate the pixel colors when drawing lines, triangles, quads or even polygons.

When you "send a primitive" to OpenGL, you specify the coordinates where to draw the primitive, for example a triangle. You also specify what texture coordinates to "attach" to each vertex via TexCoord. Then OpenGL chooses what colors to draw using the texture stored in OpenGL memory.

You can even draw a texture over the whole screen using a "big quad", or over a single pixel of the screen using a "small quad". It all depends on what you send to OpenGL.

So the dimensions of the texture objects in OpenGL memory (64x64, 32x32, 128x128) do not matter: texture coordinates always range from 0 (minimum) to 1 (maximum).
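A minimal sketch of that point, in the same old immediate-mode OpenTK style as the code above (the texture id `tex` is hypothetical): the on-screen size is decided entirely by the quad's vertex coordinates, never by the texture's pixel dimensions.

```csharp
// Sketch: one and the same texture drawn at two different sizes.
// Texture coordinates span 0..1 regardless of the image's resolution.
GL.BindTexture(TextureTarget.Texture2D, tex);

// A small quad: the whole texture squeezed into a 1x1 area.
GL.Begin(BeginMode.Quads);
GL.TexCoord2(0, 0); GL.Vertex3(-0.5, 0.5, 0);
GL.TexCoord2(1, 0); GL.Vertex3(0.5, 0.5, 0);
GL.TexCoord2(1, 1); GL.Vertex3(0.5, -0.5, 0);
GL.TexCoord2(0, 1); GL.Vertex3(-0.5, -0.5, 0);
GL.End();

// A big quad: the very same texture stretched over a 4x4 area.
GL.Begin(BeginMode.Quads);
GL.TexCoord2(0, 0); GL.Vertex3(-2, 2, 0);
GL.TexCoord2(1, 0); GL.Vertex3(2, 2, 0);
GL.TexCoord2(1, 1); GL.Vertex3(2, -2, 0);
GL.TexCoord2(0, 1); GL.Vertex3(-2, -2, 0);
GL.End();
```

Both quads sample the full 0..1 texture range; only the vertex positions differ, so the second appears four times larger.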

Hope you understand this. I know it is hard to "rethink" concepts -- I too came from a 2d-API background when I started using OpenGL.

the Fiddler wrote:

To put it in different words, in OpenGL you "attach" a texture to geometry and then render that geometry. This is in contrast with typical 2d, where you blit the texture directly to the screen!

As you may know, OpenGL projects geometry from 3d space onto your screen. Two things affect the final size on your screen: the projection matrix and the position of the geometry in 3d space. If you wish to emulate 2d graphics you'll have to set up an orthographic projection:

// Inside the Resize event:
GL.MatrixMode(MatrixMode.Projection);
GL.LoadIdentity();
GL.Ortho(0, 640, 480, 0, -1, 1);

Now, you can use typical 2d coordinates to draw objects on the screen: (0, 0) is the top-left corner and (640, 480) is the bottom-right. For example, a typical 2d blit now looks like:

public void Blit(int id, float x, float y, float width, float height)
{
    GL.BindTexture(TextureTarget.Texture2D, id);
    GL.Begin(BeginMode.Quads);
    GL.TexCoord2(0, 0); GL.Vertex2(x, y);
    GL.TexCoord2(1, 0); GL.Vertex2(x + width, y);
    GL.TexCoord2(1, 1); GL.Vertex2(x + width, y + height);
    GL.TexCoord2(0, 1); GL.Vertex2(x, y + height);
    GL.End();
}

This brings you back to plain 2d.

Which is fine, actually, unless you care about the flexibility (and speed) true 3d could grant you.

Edit:
What do I mean by flexibility? Imagine having different "layers" of objects to simulate parallax effects. With 3d, you can do this trivially, by changing the z-coordinate of each layer. In 2d mode you have to animate each layer yourself.
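A rough sketch of the 3d approach, assuming the perspective projection from the OnResize code above (the texture ids and the DrawLayer helper are made up for illustration): every layer gets the same camera offset, and perspective division automatically makes distant layers move less.

```csharp
// Sketch: parallax "for free" in 3d. Farther z => smaller apparent
// movement and size, which is exactly the parallax effect.
void DrawLayer(int textureId, float cameraX, float z)
{
    GL.BindTexture(TextureTarget.Texture2D, textureId);
    GL.LoadIdentity();
    // Same camera offset for every layer; the perspective projection
    // shrinks the offset for layers that are farther away.
    GL.Translate(-cameraX, 0, z);
    // ... draw this layer's textured quads here ...
}

// Render back to front: farthest layer first.
DrawLayer(skyTexture, cameraX, -40f);
DrawLayer(hillsTexture, cameraX, -25f);
DrawLayer(playfieldTexture, cameraX, -15f);
```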

objarni wrote:

Great advice with the Blit function. Actually, it could be written with ints only, so that no "blurring" occurs when blitting textures.

the Fiddler wrote:

It's a compromise between animation smoothness and blurriness. With float coordinates, you need to take care to position the camera on whole numbers when it stops moving.

That's a lie, actually: you have to add 0.375 to your vertex coordinates for optimal results. The reason is that texture coordinates and vertex coordinates are defined differently (top-left corner of the texel vs. center of a fragment, respectively).
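In code, the classic trick looks something like this (a sketch, assuming the orthographic projection set up earlier in this thread):

```csharp
// Sketch: after setting up the ortho projection, nudge the modelview
// matrix by 0.375 so fragment centers line up with texel corners and
// pixel-aligned geometry rasterizes predictably.
GL.MatrixMode(MatrixMode.Modelview);
GL.LoadIdentity();
GL.Translate(0.375f, 0.375f, 0f);
// ... draw pixel-aligned 2d geometry here ...
```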

objarni wrote:

Haha cool Fiddler.

But if you use ints and disable all filtering, you should be fine, right? That's the way I used to build 2d games on top of OpenGL in SDL, at least...

the Fiddler wrote:

Yes, that's the way to go if you want 100% parity with 2d libraries.

Still, you'll only disable my sweet, bilinearly-filtered, scaled textures over my dead body. :D

objarni wrote:

Hehe, I guess it is a matter of taste :) I prefer my pixels pristine and crisp!

CheatCat wrote:

But how do I do layers if I set up an orthographic projection?? :S

objarni wrote:

CheatCat - you have to start inventing things for yourself ;)

It is more fun that way.

I'll give you some hours to feel the pain and joy of thinking. (unless someone else tells you before the time is up!) Good luck!

the Fiddler wrote:

By using a small hack:

Render layers back to front and use a "movement multiplier" to simulate parallax. For example, if a layer is further away than the "main" layer, multiply its movement by a number between (0.0, 1.0) to make it move slower relative to the main layer (this gives the illusion of depth). If it's closer, multiply by a number > 1.0. The "main" layer always moves in step with the camera (no multiplier).
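The multiplier idea boils down to one line of arithmetic. A sketch (the helper name is made up): each layer's horizontal offset is the camera offset scaled by a per-layer factor.

```csharp
// Sketch of the "movement multiplier" idea for 2d parallax.
// multiplier < 1.0 => layer lags behind the camera (looks far away);
// multiplier > 1.0 => layer overtakes the camera (looks close);
// multiplier == 1.0 => the "main" layer, locked to the camera.
static float LayerOffset(float cameraX, float multiplier)
{
    return cameraX * multiplier;
}

// Example: the camera has scrolled 100 units to the right.
float background = LayerOffset(100f, 0.5f); // moves 50 units
float mainLayer  = LayerOffset(100f, 1.0f); // moves 100 units
float foreground = LayerOffset(100f, 1.5f); // moves 150 units
```

Draw each layer shifted by its own offset, back to front, and the slower-moving background reads as depth.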

As objarni said, pure 2d allows you to display pixel-perfect bitmap graphics, which is generally impossible in real 3d. This is both an advantage and a disadvantage: an advantage because pure 2d makes pixel art look good; a disadvantage because you lose flexibility: your graphics are now bound to a specific screen resolution. Change the resolution and you'll either make everything smaller (like Baldur's Gate) or lose pixel-perfect accuracy (like World of Goo). 3d doesn't suffer from this: the higher the resolution, the greater the fidelity of the graphics (size remains the same).

This is ultimately a question of style: 2d and 3d are two fundamentally different approaches to graphics. Which one you choose depends on your project and the art direction.

OpenGL is built with 3d in mind, but supports both styles of rendering equally well.

Edit: Hehe, I gave away the secret recipe. Doesn't matter, there are many other things to bang your head against. :)