Axelill

VBO using OpenTK

I tried to create a VBO with OpenTK, without success.

I've already implemented VBOs in C++, so I think my initialization is roughly correct.

Maybe I'm passing a wrong parameter type somewhere. When I try to destroy the buffer, it fails with an error telling me that my memory may be corrupted. Any ideas?

public sealed class CIndexedTrianglesMesh : CMesh
    {
        Int32[] m_indexes;
        Single[] m_data;
 
        UInt32 m_gl_index_buffer;
        UInt32 m_gl_vertex_buffer;
 
        public CIndexedTrianglesMesh(String _name)
            : base(_name)
        {
        }
 
        ~CIndexedTrianglesMesh()
        {
            // Note: GL calls are unsafe here; the finalizer runs on a
            // separate thread with no OpenGL context current.
            //GL.DeleteBuffers(1, ref m_gl_index_buffer);
            //GL.DeleteBuffers(1, ref m_gl_vertex_buffer);
        }
 
        public override void GLBuildObject()
        {
            GL.ARB.GenBuffers(1, out m_gl_index_buffer);
            GL.ARB.GenBuffers(1, out m_gl_vertex_buffer);
 
            // Buffer sizes in bytes.
            IntPtr t1 = new IntPtr(m_data.Length * sizeof(Single));
            IntPtr t2 = new IntPtr(m_indexes.Length * sizeof(Int32));
 
            GL.ARB.BindBuffer(GL.Enums.ARB_vertex_buffer_object.ARRAY_BUFFER_ARB, m_gl_vertex_buffer);
            GL.ARB.BufferData(GL.Enums.ARB_vertex_buffer_object.ARRAY_BUFFER_ARB, 
                t1, m_data, GL.Enums.ARB_vertex_buffer_object.DYNAMIC_DRAW_ARB);
 
            GL.ARB.BindBuffer(GL.Enums.ARB_vertex_buffer_object.ELEMENT_ARRAY_BUFFER_ARB, m_gl_index_buffer);
            GL.ARB.BufferData(GL.Enums.ARB_vertex_buffer_object.ELEMENT_ARRAY_BUFFER_ARB,
                t2 , m_indexes, GL.Enums.ARB_vertex_buffer_object.DYNAMIC_DRAW_ARB);
 
        }
        public override void GLDrawObject()
        {
            GL.ARB.BindBuffer(GL.Enums.ARB_vertex_buffer_object.ARRAY_BUFFER_ARB, m_gl_vertex_buffer);
            GL.ARB.BindBuffer(GL.Enums.ARB_vertex_buffer_object.ELEMENT_ARRAY_BUFFER_ARB, m_gl_index_buffer);
 
            GL.EnableClientState(GL.Enums.EnableCap.VERTEX_ARRAY);
            GL.VertexPointer(3, GL.Enums.VertexPointerType.FLOAT, 8 * sizeof(Single), 0);
            //GL.EnableClientState(GL.Enums.EnableCap.NORMAL_ARRAY);
            //GL.NormalPointer(GL.Enums.NormalPointerType.FLOAT, 8 * sizeof(Single), 3 * sizeof(Single));
            //GL.EnableClientState(GL.Enums.EnableCap.TEXTURE_COORD_ARRAY);
            //GL.TexCoordPointer(2, GL.Enums.TexCoordPointerType.FLOAT, 8 * sizeof(Single), 6 * sizeof(Single));
 
 
            // UNSIGNED_INT, not INT: signed types are not valid index types
            // for glDrawElements and can cause errors or memory corruption.
            GL.DrawElements(GL.Enums.BeginMode.TRIANGLES, m_indexes.Length, GL.Enums.All.UNSIGNED_INT, 0);
 
            GL.DisableClientState(GL.Enums.EnableCap.TEXTURE_COORD_ARRAY);
            GL.DisableClientState(GL.Enums.EnableCap.NORMAL_ARRAY);
            GL.DisableClientState(GL.Enums.EnableCap.VERTEX_ARRAY);
         }
    }
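Since the error appears when the buffers are destroyed, one option is to replace the finalizer with an explicit cleanup method that is called while the GL context is still current. This is a sketch, not part of the original class; the method name is an assumption, and it reuses the GL.DeleteBuffers call commented out in the finalizer above:

```csharp
// Explicit cleanup, to be called from the thread that owns the GL
// context (finalizers run on a separate thread without a context).
public void GLDestroyObject()
{
    if (m_gl_vertex_buffer != 0)
    {
        GL.DeleteBuffers(1, ref m_gl_vertex_buffer);
        m_gl_vertex_buffer = 0;
    }
    if (m_gl_index_buffer != 0)
    {
        GL.DeleteBuffers(1, ref m_gl_index_buffer);
        m_gl_index_buffer = 0;
    }
}
```

Zeroing the fields afterwards makes the method safe to call more than once, since buffer name 0 is never returned by glGenBuffers.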

Comments

the Fiddler.

Many thanks, I'll add this to the tutorial section of the site.

Inertia

Correct me if I'm wrong, but I think you should set glVertexPointer last. IIRC, an NVIDIA paper about VBOs stated that a lot of the optimization of the in-video-memory representation is done by the glVertexPointer call, and it is expected to be the last pointer set. This kinda made sense, since anything else wouldn't work with glInterleavedArrays().
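The suggested ordering can be sketched against the draw code above, using the same GL.ARB bindings and fields from the posted snippet (whether the driver actually optimizes on glVertexPointer is the assumption being discussed here):

```csharp
// Bind both buffers, set the secondary pointers first, and make
// glVertexPointer the last pointer call.
GL.ARB.BindBuffer(GL.Enums.ARB_vertex_buffer_object.ARRAY_BUFFER_ARB, m_gl_vertex_buffer);
GL.ARB.BindBuffer(GL.Enums.ARB_vertex_buffer_object.ELEMENT_ARRAY_BUFFER_ARB, m_gl_index_buffer);

GL.EnableClientState(GL.Enums.EnableCap.NORMAL_ARRAY);
GL.NormalPointer(GL.Enums.NormalPointerType.FLOAT, 8 * sizeof(Single), 3 * sizeof(Single));
GL.EnableClientState(GL.Enums.EnableCap.TEXTURE_COORD_ARRAY);
GL.TexCoordPointer(2, GL.Enums.TexCoordPointerType.FLOAT, 8 * sizeof(Single), 6 * sizeof(Single));

// Vertex pointer last: the driver may use this call to finalize its
// internal layout for the currently bound VBO.
GL.EnableClientState(GL.Enums.EnableCap.VERTEX_ARRAY);
GL.VertexPointer(3, GL.Enums.VertexPointerType.FLOAT, 8 * sizeof(Single), 0);
```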

the Fiddler.

Didn't know that, although in retrospect it makes sense.

I think a small benchmark between different layouts (interleaved, not interleaved etc) and batch sizes would be very interesting. Something to add in some future version, I guess!

Axelill

Yes, true, that makes sense. I'll make the modification and try to find out whether the framerate increases or not.

Thanks for the advice.

Inertia

I've read a discussion about that topic (think it was at gamedev.net?) and in essence there was only a minimal performance improvement when using glInterleavedArrays() over manually setting pointers and strides. My assumption is that the driver does some optimization behind the scenes, because it knows exactly how your vertices are laid out (as in whether you are using normals or not, texcoords or not, colors or not, etc.). glInterleavedArrays() has no concept of vertex attributes, though.

Using non-interleaved data is always slower, because the GPU cannot read 8 floats in a row. Instead it must seek the position of the normal, read 3 floats, seek the texcoord, read 2 floats, seek the position, read 3 floats.
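The interleaved layout in question is the one the posted code already uses: a stride of 8 floats per vertex, with the attribute offsets matching the pointer calls in GLDrawObject. A sketch of how one vertex sits in m_data:

```csharp
// One vertex = 8 consecutive Singles in m_data:
//
//   floats 0-2: position  x,  y,  z   -> VertexPointer,   offset 0
//   floats 3-5: normal    nx, ny, nz  -> NormalPointer,   offset 3 * sizeof(Single)
//   floats 6-7: texcoord  u,  v       -> TexCoordPointer, offset 6 * sizeof(Single)
//
// stride = 8 * sizeof(Single) = 32 bytes, so the GPU reads each vertex
// as one contiguous block instead of seeking between separate arrays.
```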

Haven't done benchmarks to prove this; some things I just take for granted since they fit into my picture of how computers work ;-)