Radar's picture

How do you manage VBOs, IBOs and Shaders?

Hi all,

I was wondering how you guys manage your assets.

At the moment I import a mesh and convert it into a VBO.
For each surface of the mesh, I create an IBO and attach the textures and shader to it.
To render the object I bind the VBO and then loop over the IBOs. It works, BUT as soon as I try to send a uniform to the shader it starts to get messy.
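
For reference, this is roughly what that setup looks like as a minimal C#/OpenTK sketch (the Surface and RenderMesh names are just illustrative, and attribute pointer / VAO setup is omitted):

using System.Collections.Generic;
using OpenTK;
using OpenTK.Graphics.OpenGL;

class Surface
{
    public int Ibo;          // index buffer for this surface
    public int IndexCount;
    public int Texture;      // GL texture handle
    public int Program;      // GL shader program handle
}

class RenderMesh
{
    public int Vbo;
    public List<Surface> Surfaces = new List<Surface>();

    public void Draw(Matrix4 modelViewProjection)
    {
        GL.BindBuffer(BufferTarget.ArrayBuffer, Vbo);
        foreach (var surface in Surfaces)
        {
            GL.UseProgram(surface.Program);
            // this is the part that gets messy: every surface needs its own uniform setup
            int mvpLocation = GL.GetUniformLocation(surface.Program, "mvp");
            GL.UniformMatrix4(mvpLocation, false, ref modelViewProjection);

            GL.BindTexture(TextureTarget.Texture2D, surface.Texture);
            GL.BindBuffer(BufferTarget.ElementArrayBuffer, surface.Ibo);
            GL.DrawElements(PrimitiveType.Triangles, surface.IndexCount,
                            DrawElementsType.UnsignedInt, 0);
        }
    }
}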

What would be a better approach?
Any idea is welcome!


Comments

ebnf's picture

Model {
Mesh
Material
}

Mesh {
Texture
Shader
}

Mesh {
IBO
VBO
}

Icefox's picture
ebnf wrote:

Model {
Mesh
Material
}

Material {
Texture
Shader
}

Mesh {
IBO
VBO
}

Corrections in bold, I assume? I do basically the same thing. After that it's up to the game framework; I generally keep a cache of loaded resources that hangs on to each model, plus a "graphics context" object holding some scene-specific data for lighting and postprocessing shaders and such.
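
In C#/OpenTK terms, a minimal sketch of that split might look something like this (the ResourceCache is only meant to illustrate the "cache of loaded resources" idea, not code from an actual framework):

using System;
using System.Collections.Generic;

class Material
{
    public int Texture;   // GL texture handle
    public int Shader;    // GL program handle
}

class Mesh
{
    public int Vbo;
    public int Ibo;
    public int IndexCount;
}

class Model
{
    public Mesh Mesh;
    public Material Material;
}

// hangs on to each loaded model so the same file is only imported once
class ResourceCache
{
    readonly Dictionary<string, Model> models = new Dictionary<string, Model>();

    public Model Load(string path)
    {
        Model model;
        if (!models.TryGetValue(path, out model))
        {
            model = ImportFromDisk(path);   // hypothetical importer
            models[path] = model;
        }
        return model;
    }

    Model ImportFromDisk(string path)
    {
        // mesh import, VBO/IBO creation and texture/shader loading would go here
        throw new NotImplementedException();
    }
}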

puklaus's picture

When you add objects to the scene, you can sort them so that objects sharing the same texture id, shader id, etc. end up next to each other. That way, when rendering the scene tree, these don't change on every object. Also try to keep the same textures bound on different texture units, and only change them when needed.
When loading, I suggest using Ogre3D's models and other Ogre files; they are great, because there are .mesh.xml and .scene exporters for various 3D modelling packages (I personally have some simple .xml parsers, .material readers etc., because these formats are easy to edit and not so hard to implement in your own engine).
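
One way to get that grouping, sketched in C#/OpenTK below (the DrawItem type is made up for the example), is to sort the visible draws by shader and texture id before rendering, and only change GL state when the id actually changes:

using System.Collections.Generic;
using OpenTK.Graphics.OpenGL;

struct DrawItem
{
    public int Shader;    // GL program handle
    public int Texture;   // GL texture handle
    public int Vbo;
    public int Ibo;
    public int IndexCount;
}

static class BatchedRenderer
{
    public static void Render(List<DrawItem> items)
    {
        // group draws so shader/texture switches happen as rarely as possible
        items.Sort((a, b) =>
        {
            int byShader = a.Shader.CompareTo(b.Shader);
            return byShader != 0 ? byShader : a.Texture.CompareTo(b.Texture);
        });

        int currentShader = -1, currentTexture = -1;
        foreach (var item in items)
        {
            if (item.Shader != currentShader)
            {
                GL.UseProgram(item.Shader);
                currentShader = item.Shader;
            }
            if (item.Texture != currentTexture)
            {
                GL.BindTexture(TextureTarget.Texture2D, item.Texture);
                currentTexture = item.Texture;
            }
            GL.BindBuffer(BufferTarget.ArrayBuffer, item.Vbo);
            GL.BindBuffer(BufferTarget.ElementArrayBuffer, item.Ibo);
            GL.DrawElements(PrimitiveType.Triangles, item.IndexCount,
                            DrawElementsType.UnsignedInt, 0);
        }
    }
}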

ebnf's picture
Icefox wrote:
ebnf wrote:

Model {
Mesh
Material
}

Material {
Texture
Shader
}

Mesh {
IBO
VBO
}

Corrections in bold, I assume? I do basically the same thing. After that it's up to the game framework; I generally keep a cache of loaded resources that hangs on to each model, plus a "graphics context" object holding some scene-specific data for lighting and postprocessing shaders and such.

Oh, yes. Thanks.

Radar's picture

Hi,
Sorry it took so long. I just wanted to say thanks!
This helped me and I think I have a good solution now.

tksuoran's picture

Here is a very brief summary of what RenderStack will do:

Model: List<Batch>, Frame
Batch: Material, Mesh
Material: Program, parameters
Mesh: VertexBufferRange, IndexBufferRange[]
VertexBufferRange: int BaseVertex, Buffer
Buffer: List, VertexFormat (not used for index buffers), GL buffer object handle, BufferTarget, BufferUsageHint
VertexFormat: List, int Stride

This way a vertex buffer can contain many meshes when vertex format strides and buffer usage hints match, and similarly with the index buffer. This reduces the need to change attrib pointers when drawing a different mesh. Use this together with DrawElementsBaseVertex, or rebase your indices. Use of BufferRanges is not yet in the latest released RenderStack version.
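
As a rough illustration of the BaseVertex idea in C#/OpenTK (the MeshRange struct is simplified compared to the actual RenderStack BufferRange types):

using System;
using System.Collections.Generic;
using OpenTK.Graphics.OpenGL;

struct MeshRange
{
    public int BaseVertex;    // where this mesh's vertices start in the shared vertex buffer
    public int IndexOffset;   // byte offset of this mesh's indices in the shared index buffer
    public int IndexCount;
}

static class SharedBufferDraw
{
    public static void Draw(int sharedVbo, int sharedIbo, IEnumerable<MeshRange> meshes)
    {
        // attribute pointers are set up once for the shared vertex buffer
        GL.BindBuffer(BufferTarget.ArrayBuffer, sharedVbo);
        GL.BindBuffer(BufferTarget.ElementArrayBuffer, sharedIbo);

        foreach (var mesh in meshes)
        {
            // indices stay zero-based per mesh; BaseVertex rebases them at draw time
            GL.DrawElementsBaseVertex(PrimitiveType.Triangles, mesh.IndexCount,
                                      DrawElementsType.UnsignedInt,
                                      (IntPtr)mesh.IndexOffset, mesh.BaseVertex);
        }
    }
}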

Model can contain a number of batches; this is necessary if you have multiple materials in a single Model.

Mesh contains an array of IndexBufferRanges, some of which can be null; these are indexed with MeshMode, which can be for example PolygonFill, EdgeLines, CornerPoints, PolygonCentroids and so on. They all contain indices into a single VertexBufferRange in the Mesh. This way you can render an object filled or with edge lines.
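
A simplified sketch of that layout (the types below are illustrative, not the real RenderStack classes; the enum values follow the names mentioned above):

using OpenTK.Graphics.OpenGL;

enum MeshMode { PolygonFill, EdgeLines, CornerPoints, PolygonCentroids, Count }

class VertexBufferRange
{
    public int BaseVertex;
    public int Buffer;    // GL vertex buffer handle, simplified
}

class IndexBufferRange
{
    public int Offset;
    public int Count;
    public PrimitiveType Primitive;   // triangles for fill, lines for edges, points for corners
}

class Mesh
{
    public VertexBufferRange VertexBufferRange;

    // indexed by MeshMode; an entry may be null if that mode was never built
    public IndexBufferRange[] IndexBufferRanges =
        new IndexBufferRange[(int)MeshMode.Count];

    public IndexBufferRange GetRange(MeshMode mode)
    {
        return IndexBufferRanges[(int)mode];
    }
}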

nythrix's picture

My engine feeds the scene data into the renderer, which manages VBOs, IBOs, textures and the rest. Works pretty well since the renderer can have its own state (resource tracking and management), which is independent of the scenegraph.

tksuoran's picture
nythrix wrote:

My engine feeds the scene data into the renderer, which manages VBOs, IBOs, textures and the rest. Works pretty well since the renderer can have its own state (resource tracking and management), which is independent of the scenegraph.

That doesn't actually say what the renderer does - I thought that's what we were talking about.

RenderStack doesn't yet have a renderer in the lower level assemblies. The Graphics namespace has VertexFormat, Buffer, BufferRange, Program etc., the Mesh namespace has Mesh and Material, and the Scene namespace has Camera, Frame and Transform, BUT there is no renderer yet. I think it is nice to have these lower level parts independent of the Renderer so that I can experiment with a few different renderers.

Also, I have not yet bothered to write a "proper" renderer with all sorts of optimizations... But I am working on some improvements :)

nythrix's picture

Ok. To be more precise, the renderer processes the culled scene by checking every scene object ID. These steps then occur (a rough sketch follows below):
a) If an ID is unknown to the renderer, its data is uploaded to the GPU. Depending on the object type, the renderer decides whether to upload a VBO, IBO or Texture(*). The renderer binds the object ID to the generated OpenGL ID for future reference.
b) If the object ID is known from (a) or a previous frame, its OpenGL ID is retrieved and bound with the appropriate GL.Bind*() call.
c) If an object ID hasn't been encountered for several frames, or the renderer is approaching the GPU memory limit it is allowed to consume, it will, depending on its settings, discard some of the unused data from the GPU. This step is optional.
(*) No shaders yet. I need support for several renderers (GL1.5, GL3.3, OpenCL raytracing, software raytracing) so I'm not sure how to go about doing it.
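
A rough sketch of those three steps in C#/OpenTK (SceneObject, the handle table and the eviction policy are all placeholders for the example):

using System.Collections.Generic;
using OpenTK.Graphics.OpenGL;

class SceneObject { public int Id; }

class Renderer
{
    // maps engine-side object IDs to the GL handles generated for them
    readonly Dictionary<int, int> glHandles = new Dictionary<int, int>();
    readonly Dictionary<int, int> lastSeenFrame = new Dictionary<int, int>();
    int frame;

    public void Process(IEnumerable<SceneObject> culledScene)
    {
        frame++;
        foreach (var obj in culledScene)
        {
            int handle;
            if (!glHandles.TryGetValue(obj.Id, out handle))
            {
                handle = Upload(obj);        // (a) first encounter: upload to the GPU
                glHandles[obj.Id] = handle;
            }
            // (b) known ID: bind to the proper target (ArrayBuffer shown for simplicity)
            GL.BindBuffer(BufferTarget.ArrayBuffer, handle);
            lastSeenFrame[obj.Id] = frame;
        }
        Evict(maxUnusedFrames: 300);         // (c) optionally drop stale data
    }

    int Upload(SceneObject obj)
    {
        // GenBuffers/TexImage2D etc., depending on the object type
        return GL.GenBuffer();
    }

    void Evict(int maxUnusedFrames)
    {
        // delete GL handles whose IDs haven't been seen for maxUnusedFrames frames
    }
}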

Of course, the whole thing is a bit more complicated and I could go on for hours but I'm a bit lazy right now. Is this elaborated enough? :)