Invoke's picture

Trouble with context sharing

Hello, I've run into a problem which I can't seem to solve on my own.

I'm having trouble using context sharing.
I've set up a 'main' context on the primary thread, which does all the drawing to the screen.
On a different thread I've created a second context, which I use to create a framebuffer and a target texture onto which I draw several textures.
When the second thread is done working, it stores the IDs of the textures, together with some other info, in a thread-safe place.

The problem is that the textures created in the secondary context start at ID 1, even though I have already loaded several textures before starting the secondary thread.

[main thread]
1. Toolkit.init
2. GraphicsContext.ShareContexts = true
3. Create main context
4. Load several textures and other things
5. GameWindow.Run
6. At some point start the secondary thread
[secondary thread]
1.

            // Create a throw-away context for this thread if none is current yet.
            IGraphicsContext glc = null;
            bool contextCreated = false;
            if (GraphicsContext.CurrentContext == null)
            {
                GameWindow wnd = Program.createWindowContext((int)width, (int)height);
                glc = wnd.Context;
                wnd.MakeCurrent();  // bind the new context to this thread
                glc.LoadAll();      // load the GL entry points for this context

                contextCreated = true;
            }

2. create textures
3. do work with framebuffer and rendering to textures
4.

            if (contextCreated)
            {
                glc.MakeCurrent(null); // release the context from this thread
                glc.Dispose();
            }

5. return texture ids and other info

In step 2 on the secondary thread, I expect the texture IDs to be higher than the number of textures already created on the main thread.

I hope I'm explaining this well enough.

Help would be greatly appreciated :)
Hardware is an Nvidia GTX 670 on Windows 8 Pro x64, OpenTK 1.1.1.


Comments

the Fiddler's picture

You need to create the secondary context *before* creating any resources on the main context. Make sure that both contexts have the same GraphicsMode.
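In OpenTK 1.x terms, the setup order would look roughly like this (a sketch only; `createWindowContext` is your own helper, and `LoadTextures` is a placeholder for your resource loading):

```csharp
using OpenTK;
using OpenTK.Graphics;

// Sketch: create BOTH contexts before any textures exist,
// so that object names are shared from the start.
Toolkit.Init();
GraphicsContext.ShareContexts = true;

GameWindow mainWindow = new GameWindow(1280, 720, GraphicsMode.Default);

// Secondary context, with the SAME GraphicsMode, created before
// any resources. createWindowContext is the helper from your code.
GameWindow workerWindow = Program.createWindowContext(1024, 1024);

// Only now load textures and other resources on the main context.
mainWindow.MakeCurrent();
LoadTextures(); // placeholder

// Later, hand workerWindow.Context to the secondary thread.
```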

Invoke's picture

Thanks for the answer, I'll try it in a second.

That's too bad though; the extra threads work on assets which are loaded asynchronously, so I was hoping to create and destroy contexts during that process.
(Rendering a set of tiles to one big texture and then drawing that, instead of issuing a lot of draw calls per tile each frame.)

Unfortunately it's more work than just loading textures, or I would have gone for pixel buffer objects.

Edit: got an idea while eating pizza... I'm going to try making a 'contextPool' that creates a predefined number of contexts (a maximum of 2, I think) on the main thread and 'lends' those contexts out to other threads, making a thread wait when all the contexts are in use.
(Since it is possible to create multiple contexts on one thread, as long as I only have one active/current per thread, right?)
That way I can create all the contexts beforehand and I won't have the overhead of creating and destroying contexts on the fly.
Let me know if it's a stupid idea :)
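Something like this is what I have in mind (a sketch only; the `ContextPool` name and shape are made up, not an OpenTK API):

```csharp
using System;
using System.Collections.Concurrent;
using OpenTK.Graphics;
using OpenTK.Platform;

// Sketch of the pool idea: contexts are created up-front on the main
// thread, then lent out to worker threads one at a time.
class ContextPool
{
    private readonly BlockingCollection<IGraphicsContext> available =
        new BlockingCollection<IGraphicsContext>();

    // Call on the main thread, before any resources are created,
    // so all contexts share the same object names.
    public ContextPool(int count, Func<IGraphicsContext> create)
    {
        for (int i = 0; i < count; i++)
            available.Add(create());
    }

    // Blocks the calling worker thread until a context is free.
    public IGraphicsContext Acquire(IWindowInfo window)
    {
        IGraphicsContext glc = available.Take();
        glc.MakeCurrent(window); // bind to the calling thread
        return glc;
    }

    public void Release(IGraphicsContext glc)
    {
        glc.MakeCurrent(null);   // unbind from this thread first
        available.Add(glc);
    }
}
```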

the Fiddler's picture

This approach sounds very reasonable.

I must warn you, however, that context sharing is a little bit spotty. You can expect it to work on Nvidia and Ati drivers, as well as recent Intel drivers, but it will most likely fail anywhere else (including most/all mobile platforms, which are notoriously buggy.)

Then, there is this little gem here:

Quote:

Short version:
Never use shared contexts for performance-conscious code, it costs way more than the failed (more on that later) overlap of the texture uploads.

Long version:
During early development of a major product (Steam Big Picture Mode) in the past that used multiple contexts for background uploading of OpenGL textures, we were told by multiple desktop GPU vendors that the drivers flatly mutex every OpenGL call when you have shared contexts, this can result in major (~20%) fps loss even if you don't use the other context at all, it gets worse if you do, and in particular the texture upload does NOT happen in parallel with rendering due to that mutexing.

So my advice is never do this, we changed the product to not do this before launch because it was completely not performant, we had been struggling to keep up 60fps until we did, then it easily exceeded 200fps with that one change.

The hitching of texture uploads is pretty much unavoidable in OpenGL ES (iOS, Android, etc), on desktop OpenGL you can somewhat hide it with GL_ARB_pixel_buffer_object - where you glMapBuffer on the main thread and then write the pixels from another thread, when done you glUnmapBuffer on the main thread and then issue the glTexImage2D with the pixel buffer object bound, so that it sources its pixels from that object rather than blocking on a client memory copy, but I'm sure this isn't free and I have not tried it in practice, it also requires that you more or less queue your uploads for the main thread to prepare in stages so that's some lovely ping-pong there.

While I too would greatly appreciate the addition of some background object upload functionality in OpenGL, or even an entire deferred command buffer system (I proposed this in a hardware-agnostic way but it didn't gain traction), the reality today is that OpenGL contexts and threading are completely non-viable.

I should note that Doom 3 BFG Edition seems to use a glMapBuffer on each of 3 buffer objects (vertex, index, uniforms) at the beginning of the frame, queue jobs for all of the processing it wants to do, so that threads write into those mapped buffers, and then at end of frame it does the glUnmapBuffer and walks its own command list to issue all the real GL calls that depend on that data - this works very well, but is out of the scope of most OpenGL threading discussions.
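For reference, the pixel-buffer-object flow described in the quote looks roughly like this in OpenTK terms (an untested sketch; `byteCount`, `texture`, `width` and `height` are illustrative parameters, and a context must be current on the main thread):

```csharp
using System;
using OpenTK.Graphics.OpenGL;

// Sketch of the GL_ARB_pixel_buffer_object upload flow from the quote above.
static class PboUpload
{
    // Main thread: allocate and map the PBO so a worker can fill it.
    public static int BeginUpload(int byteCount, out IntPtr pixels)
    {
        int pbo = GL.GenBuffer();
        GL.BindBuffer(BufferTarget.PixelUnpackBuffer, pbo);
        GL.BufferData(BufferTarget.PixelUnpackBuffer, (IntPtr)byteCount,
                      IntPtr.Zero, BufferUsageHint.StreamDraw);
        pixels = GL.MapBuffer(BufferTarget.PixelUnpackBuffer,
                              BufferAccess.WriteOnly);
        return pbo;
    }

    // Worker thread: write raw pixel data through 'pixels' - this is plain
    // memory, so no GL calls (and no second context) are needed.

    // Main thread, after the worker is done: unmap and issue the real upload.
    public static void EndUpload(int pbo, int texture, int width, int height)
    {
        GL.BindBuffer(BufferTarget.PixelUnpackBuffer, pbo);
        GL.UnmapBuffer(BufferTarget.PixelUnpackBuffer);
        GL.BindTexture(TextureTarget.Texture2D, texture);
        // With a PBO bound, the data argument is an offset into the buffer,
        // not a client memory pointer.
        GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba,
                      width, height, 0, PixelFormat.Bgra,
                      PixelType.UnsignedByte, IntPtr.Zero);
        GL.BindBuffer(BufferTarget.PixelUnpackBuffer, 0);
    }
}
```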

Invoke's picture

Thanks for the information, it's a good read.
I'll keep a careful eye on performance and evaluate other options when needed.

Quote:

While I too would greatly appreciate the addition of some background object upload functionality in OpenGL, or even an entire deferred command buffer system

Still a little hard to believe that such a thing doesn't exist in 2014, especially considering the parallel nature of graphics hardware and CPUs moving towards an increasing number of cores.

Invoke's picture

The texture IDs are correct now, but I'm currently having a problem with the secondary context which doesn't occur if I run the code on the main context:
the textures which I've created and drawn to with a framebuffer are still blank.
I have no idea why.

the Fiddler's picture

Try calling GL.Finish() on the secondary context as soon as you are done with each texture.

Invoke's picture

Tried it, no change.

the Fiddler's picture

Double check that your framebuffers are complete before rendering.
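For example, something along these lines after attaching the texture and before drawing (OpenTK sketch):

```csharp
using OpenTK.Graphics.OpenGL;

// With the framebuffer bound, verify it is complete before rendering.
FramebufferErrorCode status =
    GL.CheckFramebufferStatus(FramebufferTarget.Framebuffer);
if (status != FramebufferErrorCode.FramebufferComplete)
    System.Console.WriteLine("FBO incomplete: " + status);
```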

I would also suggest using apitrace to capture the state of the application - it might give some insight on what is going wrong.

Invoke's picture

I've checked it again; no access attempt is made until everything is done, which I've verified with debug output.
I can't get apitrace to work properly; playing back the trace with glretrace just crashes it after a trace window comes up. I do have a dump, though.