raidersv's picture

Primitive selection using VBO

How can I select the primitive rendered under the mouse pointer when rendering uses a VBO and the vertex count exceeds 5M? During VBO creation each vertex gets a unique ID, and this ID is uploaded to video memory. The Vertex structure is:
[StructLayout(LayoutKind.Sequential)]
internal struct Vertex
{
    public float X, Y, Z;
    public byte R, G, B;
    public ulong ID;
}


Comments

iliak's picture

Take care of byte alignment. The length of your struct *must* be a multiple of 32 bits:

sizeof(float) => 4
sizeof(byte) => 1
sizeof(ulong) => 8

4*3 + 1*3 + 8 = 23

So change your struct to :

internal struct Vertex
{
   public float X, Y, Z;
   public byte R, G, B, A;
   public ulong ID;
}
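One way to check what the CLR actually does with the layout is a quick sketch like the following; Marshal.SizeOf reports the marshalled size, which with the padding byte added comes out to a clean multiple of 4:

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
internal struct Vertex
{
    public float X, Y, Z;
    public byte R, G, B, A;   // A added as padding, per iliak's suggestion
    public ulong ID;
}

internal static class Program
{
    private static void Main()
    {
        // 3*4 (floats) + 4*1 (bytes) + 8 (ulong) = 24 bytes.
        Console.WriteLine(Marshal.SizeOf(typeof(Vertex)));  // prints 24
    }
}
```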
Inertia's picture

One way to do this might be an FBO with 2 extra color textures attached, besides the usual depth/color. Using bit shifts in GLSL you can split the uint64 into 2x uint32 or 8x uint8 and write that into the 2 extra textures. In your application you can read back from the FBO attachments, reassemble the ID and test whether it is GL.ClearColor or a valid ID. Be warned: according to the GLSL spec, integers are always 32 bits wide, so your ulong will get truncated somewhere along the way. Does it really have to be ulong? uint32 should be sufficient, and it only requires 1 color attachment, too.

The less hackish approach would be the Transform Feedback extension or determining collisions with help of the CPU.
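A minimal GLSL sketch of the bit-shift splitting described above, assuming the ID fits in 32 bits and arrives in the fragment shader as a flat uint (the names `vertexID`, `color` and `idOut` are made up for illustration):

```glsl
#version 330

flat in uint vertexID;                // passed from the vertex shader, not interpolated

layout(location = 0) out vec4 color;  // regular rendering output
layout(location = 1) out vec4 idOut;  // ID render target (RGBA8)

void main()
{
    color = vec4(1.0);  // normal shading would go here

    // Split the 32-bit ID into four bytes, encoded as 0..1 floats for an RGBA8 target.
    idOut = vec4(
        float((vertexID >> 24u) & 0xFFu) / 255.0,
        float((vertexID >> 16u) & 0xFFu) / 255.0,
        float((vertexID >>  8u) & 0xFFu) / 255.0,
        float( vertexID         & 0xFFu) / 255.0);
}
```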

raidersv's picture

Hi Inertia

The ulong is used as the identifier of the object which owns the vertices, and a uint doesn't have enough space to encode the identifier. During picking I need to extract this ulong to be able to identify which object is picked.

I am not so familiar with FBOs, can you make an example for this case?

Inertia's picture

http://www.opentk.com/doc/graphics/frame-buffer-objects

If there's no way around uint64, consider passing it as 2x uint32 in the vertex attributes. Besides, how would you specify ulong as the type for GL.VertexAttribPointer?!
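Passing the 64-bit ID as two 32-bit attributes could look roughly like this sketch (OpenTK-style calls; the attribute locations, offsets and stride are assumptions based on the padded Vertex struct from this thread with the ulong stored as two uints — note the enum name for integer attributes varies between OpenTK versions):

```csharp
// 3 floats (12) + 4 color bytes (4) + 2 uints (8) = 24-byte stride, Pack = 1 assumed.
const int stride = 24;

GL.EnableVertexAttribArray(3);  // low 32 bits of the ID (location chosen here)
GL.EnableVertexAttribArray(4);  // high 32 bits

// Integer attributes must use VertexAttribIPointer;
// plain VertexAttribPointer would convert them to float.
GL.VertexAttribIPointer(3, 1, VertexAttribIntegerType.UnsignedInt, stride, (IntPtr)16);
GL.VertexAttribIPointer(4, 1, VertexAttribIntegerType.UnsignedInt, stride, (IntPtr)20);
```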

raidersv's picture

I pushed that structure to video memory using BufferData/MapBuffer, and during the render pass I use VertexPointer, ColorPointer and DrawElements. The ulong ID is supposed to be used as custom data.

How may VertexAttribPointer help with picking? Is it possible to read back the assigned attribute under the mouse pointer even if that position isn't a vertex of the primitive?

the Fiddler's picture

If I understand this correctly, Inertia's idea involves writing the IDs of the transformed vertices to a texture (using an FBO). Once you have this texture, picking becomes trivial: just read the texel that falls under the mouse cursor - its value is your vertex ID.

Two caveats:

  1. You need shaders to do this.
  2. Shaders don't support ulong values, so you have to split the ID into two uints (2x32bits, high- and low-order) and render those into two separate textures.

That said, 32 bits afford you 4 billion vertices. Unless I were 100% certain you would need more in a single scene, I would concentrate on implementing 32 bits first and extrapolate to 64 bits once the need arises.
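Reading the texel under the cursor back from the ID attachment might look like this sketch (OpenTK-style; `pickingFbo`, `mouseX`, `mouseY` and `viewportHeight` are assumed to exist, and it matches the RGBA8 byte-packing scheme discussed earlier):

```csharp
// Sketch: picking by reading one texel from the ID color attachment.
GL.BindFramebuffer(FramebufferTarget.ReadFramebuffer, pickingFbo);
GL.ReadBuffer(ReadBufferMode.ColorAttachment1);   // the ID texture

// GL's origin is bottom-left; window mouse coordinates are usually top-left.
int x = mouseX;
int y = viewportHeight - mouseY - 1;

byte[] px = new byte[4];
GL.ReadPixels(x, y, 1, 1, PixelFormat.Rgba, PixelType.UnsignedByte, px);

// Reassemble the ID from the four bytes written by the fragment shader.
uint id = ((uint)px[0] << 24) | ((uint)px[1] << 16) | ((uint)px[2] << 8) | px[3];

// Compare against the FBO clear value to detect "nothing under the cursor".
bool hit = id != 0;
```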

raidersv's picture

What kind of job will the shader do? Picking or rendering? In the case of an FBO, what do its width/height have to be? The same as the entire scene? How may an FBO impact memory usage? Currently my application can render >20M vertices but memory usage is huge: >1.7GB.

the Fiddler's picture

Disclaimer: I have not implemented this but it seems doable and reasonably efficient.

An uncompressed 1920x1200 texture consumes about 9MB of video memory so you are looking at a maximum overhead of about 20-25MB (depending on the implementation). Obviously, smaller viewports will consume significantly less memory: for 640x480 you would need a negligible ~3MB of texture memory.

The shader will do regular rendering in addition to writing the vertex IDs into a separate texture [1]. On a high level, the process would look like this:

  • Startup:
    1. Create 2x color textures (for regular rendering and vertex IDs) and a depth texture (for depth testing). Size should be the same as the viewport [2].
    2. Create FBO and attach the color and depth textures.
    3. Load vertex and fragment shader [3].
  • Main loop:
    1. Bind and clear FBO.
    2. Bind shader.
    3. Set up vertex attributes.
    4. Render.
    5. Get picking results from the vertex IDs texture. [4]

[1] You can also render in two passes: 1st - normal rendering to screen; 2nd - render vertex IDs to texture. This is more flexible and more robust (not all drivers can do multiple render targets (MRTs)) but comes at a performance cost: you have to transform each vertex twice. Not a problem for typical applications but >20M vertices is far from typical!

[2] Best results for texture size == viewport size. You can use smaller textures (faster, lower memory consumption) but accuracy will suffer. Larger textures are possible but very inefficient. Using a multisampled renderbuffer is probably not a good idea, as multisampling will interfere with the vertex IDs.

[3] The vertex shader simply transforms the vertices and passes the results to the fragment shader. It should keep vertex IDs intact i.e. not interpolate them (actually, I don't know if it's even possible to interpolate integer attributes). The fragment shader writes to the two separate render targets (color and vertex ID textures).

[4] You can insert a small delay before reading back the vertex ID. This can improve performance - 1 frame (~16ms) should be enough (and completely invisible to the user).
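The startup steps above could be sketched like this (OpenTK-style; the texture and FBO handle names are made up, and the completeness check at the end is the part you really don't want to skip):

```csharp
// Sketch: FBO with a regular color texture, an ID texture and a depth texture.
int colorTex = GL.GenTexture();
int idTex    = GL.GenTexture();
int depthTex = GL.GenTexture();

GL.BindTexture(TextureTarget.Texture2D, colorTex);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba8,
              width, height, 0, PixelFormat.Rgba, PixelType.UnsignedByte, IntPtr.Zero);

GL.BindTexture(TextureTarget.Texture2D, idTex);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba8,   // holds the packed ID
              width, height, 0, PixelFormat.Rgba, PixelType.UnsignedByte, IntPtr.Zero);

GL.BindTexture(TextureTarget.Texture2D, depthTex);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.DepthComponent24,
              width, height, 0, PixelFormat.DepthComponent, PixelType.Float, IntPtr.Zero);

int fbo = GL.GenFramebuffer();
GL.BindFramebuffer(FramebufferTarget.Framebuffer, fbo);
GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment.ColorAttachment0,
                        TextureTarget.Texture2D, colorTex, 0);
GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment.ColorAttachment1,
                        TextureTarget.Texture2D, idTex, 0);
GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment.DepthAttachment,
                        TextureTarget.Texture2D, depthTex, 0);

// Render to both color attachments at once (MRT).
GL.DrawBuffers(2, new[] { DrawBuffersEnum.ColorAttachment0, DrawBuffersEnum.ColorAttachment1 });

if (GL.CheckFramebufferStatus(FramebufferTarget.Framebuffer) != FramebufferErrorCode.FramebufferComplete)
    throw new InvalidOperationException("FBO incomplete");
```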

I'm sure there are ways to implement this without shaders if necessary but my OpenGL 1.0 foo is not strong enough.

Inertia's picture

I haven't implemented this either, but it should work fine if uint32 is sufficient for the ID field.

@Iliak: I seriously doubt that padding will do any good in this scenario; 1.7GB is a LOT and you will want to reduce memory reads at any cost. When you draw 20 million vertices per frame, it may be worth taking a look at the half-precision floating-point type we added to OpenTK.

http://www.opentk.com/doc/chapter/3/half-type

struct Vertex
{
   public Half X, Y, Z; // might give precision problems, depending on the object
   public byte R, G, B;
   public uint ID;
}

[GL-1.0-foo]
OpenGL's selection mechanism could be used to find out which triangle was hit, but probably not at interactive rates: 1,000,000 triangles == 1,000,000 GL.Draw*** calls :P

Inertia's picture

Before I forget it again:

Assuming UInt32 is sufficient, newer GL drivers could use gl_PrimitiveID instead of a vertex attribute to identify the triangle number. An offset, passed as a uniform, to distinguish multiple objects is a must, but it would save memory compared to the first suggested solution.
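A rough GLSL sketch of the gl_PrimitiveID variant (gl_PrimitiveID is available in fragment shaders from GLSL 1.50 / GL 3.2; the `objectOffset` uniform name is an assumption):

```glsl
#version 150

uniform uint objectOffset;   // per-object base ID, set before each draw call

out vec4 idOut;

void main()
{
    // gl_PrimitiveID counts primitives within the current draw call,
    // so add the per-object offset to make the ID globally unique.
    uint id = objectOffset + uint(gl_PrimitiveID);

    // Same RGBA8 byte-packing as in the vertex-attribute approach.
    idOut = vec4(
        float((id >> 24u) & 0xFFu) / 255.0,
        float((id >> 16u) & 0xFFu) / 255.0,
        float((id >>  8u) & 0xFFu) / 255.0,
        float( id         & 0xFFu) / 255.0);
}
```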