Inertia's picture

OpenGL 3.0 bindings

Project:The Open Toolkit library
Category:feature request
Assigned:the Fiddler

ATI has done its homework: over half of the GL 3.0 extensions are already supported.

This feature request does not cover forward compatible contexts. No remarks about deprecation either.

I'd really like to see this going in 0.9.2, so we have all gremlins hunted down by the time 0.9.3 is due.


the Fiddler's picture


Regardless of how we are going to tackle forward compatibility, these resources will prove useful:

  1. Forward compatible functions
  2. Forward compatible enums
  3. All OpenGL extensions, documented and categorized.
the Fiddler's picture


I have now merged the GL3 specs into the gl3 branch.

There are still two things left to do:

  1. Make the new methods type-safe. We'll need to divide the large Version30 enum into smaller ones and modify the gl.spec to use those (like we did with Version12 - Version21).
  2. Implement the new context creation logic.

Task #1 entails reading each GL3 function spec to see which Version30 tokens it uses; creating new enums with those tokens; modifying gl.spec to use those.
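To make the shape of that change concrete, here is a hypothetical gl.spec fragment (ClampColor and the group names are only illustrative; the real parameter groups have to be chosen by reading each function's spec):

```
# Before: parameters fall back to the catch-all Version30 group.
ClampColor(target, clamp)
	return     void
	param      target     Version30 in value
	param      clamp      Version30 in value

# After: dedicated groups, which the generator turns into
# type-safe C# enums.
ClampColor(target, clamp)
	return     void
	param      target     ClampColorTarget in value
	param      clamp      ClampColorMode in value
```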

For #2, we'll need to find some example code for each operating system (Windows, Linux, MacOS). Pure C is ok.

If anyone wishes to lend a hand, please make a post here.

the Fiddler's picture


Status: open » in progress
Inertia's picture


1.a) Is it ok if I just post new enums, like we did with the Version12-Version21 enum purification?

1.b) Do you intend to raise the bar for what is not ARB-suffixed to DirectX 10 hardware? It might be wise to find an OpenGL extension list dumped from an S3 Chrome graphics card to see what they do support. I don't think they will kick ATI or NVIDIA out of the competition, but their cards appear competitive.

1.c) Taking a quick look at the port of the GL 2.1 spec, one problem that is (afaik) better dealt with in the tool still exists:

These are quotes from the TextureTarget enum. They can be found by hand, but I think it's much better if a computer searches for them. The problem with having duplicates is that when you query a parameter from OpenGL with GL.Get* and then cast it to our enums, it might return something that is actually correct (because of the value) but looks wrong in text.

TextureRectangleArb = ((int)0X84f5),
TextureRectangleNv = ((int)0X84f5),

Texture3D = ((int)0X806f),
Texture3DExt = ((int)0X806f),

ProxyTexture3D = ((int)0X8070),
ProxyTexture3DExt = ((int)0X8070),

These four are most likely not valid TextureTarget values, but parameters for GL.TexParameter(). This might be better corrected in the .spec:

TextureMaxLevelSgis = ((int)0X813D),
TextureBaseLevelSgis = ((int)0X813c),
TextureMaxLodSgis = ((int)0X813b),
TextureMinLodSgis = ((int)0X813a),

C&P from existing enum TextureParameterName as proof:

TextureMaxLevel = ((int)0X813D),
TextureBaseLevel = ((int)0X813c),
TextureMaxLod = ((int)0X813b),
TextureMinLod = ((int)0X813a),

2) It is now possible to create a window not only using linear RGB color space, but also non-linear sRGB color space.

I don't think this is a priority atm, but it should not be forgotten when designing the API.

the Fiddler's picture


1.a) I'm perfectly fine with that.

1.b) Not sure I follow you here. The GL3 specs leave some DX10-level functionality out of the core, as ARB/EXT extensions. These functions are included in OpenTK (check svn diff on gl.spec and GLdelegates.cs), so I don't see a problem with that.

1.c) I agree - the generator should test and remove duplicate entries in tokens. I'll file this as a feature request, so it doesn't get forgotten. (Edit: filed)

However, this is more of a cosmetic issue in practice:

class Program
{
    enum Test { A = 1, AExt = 1 }
    static void Main(string[] args)
    {
        System.Console.WriteLine(Test.A == Test.AExt);  // True
    }
}

2) Manufacturers are starting to release wide-gamut (aRGB) monitors. I expect these will become the norm in the not-too-distant future, which makes this extension quite important for people who care about correct color reproduction. This is something I'd like to support in OpenTK (mostly because I am shopping for a new monitor now :) ), but I agree it's not a high priority.

Inertia's picture


1.b) My thinking about the Graphics namespace is that it's ultimately limited by what existing hardware exposes. This is dictated by Microsoft's DirectX, because no graphics chip vendor would produce something without support for DX. Any extension to OpenGL that is supported by both ATI and NVIDIA - and is part of any recent DirectX version - is rather reliable and should be part of OpenTK's GL class for easy access.

This is OpenGL countryside: it's the application developer's responsibility to query extension strings and provide fallbacks, or to set minimum hardware requirements accordingly. I don't think making too many concessions for Intel does anyone any good. Hardware is cheap, work hours are not. Hiding useful extensions like FBO in the Ext class did not help anyone - I'm aware this is core now, I'm just trying to prove my point.

1.c) I created a depth renderbuffer with DepthComponent32fArb, and the query returned DepthComponent32fNv. I was quite puzzled at first about what went wrong, because the test app was running fine. I see no good reason to have both in the enum, unless their values differed.

2) If you want a hint, pick one that allows physical rotation by 90° or can be wall-mounted. It is very very cool to have this, because your code is usually larger in height than width. Check center monitor in this pic. <3

(I consider this a low priority because it's completely independent from the bindings, but it should definitely be exposed in context creation)

the Fiddler's picture


1.b) It's not that simple and I really don't think that OpenTK should deviate from the specs by moving extensions to core.

FBOs were in the GL.Ext class on GL2.1, simply because the specs defined them as EXT-class extensions. The fact that they were widely supported and useful was the reason why they were promoted in GL3.

Had we moved FBOs (or any other functionality) to the GL class, instead of GL.Ext, we would have been in big trouble, because the ARB often changes the specs whenever they promote an extension (see core GLSL vs ARB GLSL).

Also, I don't think GL.Ext.FooBar is any more obfuscated than glFooBarEXT. Any gfx programmer worth his salt should be able to RTM (specs) and find out how to do what he needs.

1.c) Implemented in the gl3 branch, I'll be committing in a few minutes.

2) Thanks, that's exactly what I had in mind. :) Just looking for a panel with good vertical viewing angles.

Edit: 1.c is now committed. I'd also love to get rid of the Sgis junk in the core API, but that's a secondary issue (a lot of work to verify that these tokens are indeed useless).

Inertia's picture


1.a) I've done all the ARB_ stuff from the registry, besides ARB_framebuffer_object. I'll take a look at that too, but I thought the stuff that wasn't touched at all was the priority.

1.b) "GL.Ext.FooBar is any more obfuscated than glFooBarEXT"
I disagree; thanks to IntelliSense this is a big difference. I don't really want to waste brain cells memorizing what is currently Ext, Arb or core. If you stop typing to add ".Arb" to "GL.SuperCoolFunc", IntelliSense's autocomplete breaks and you have to type the "tion" on your own.

We can argue now how much work this is in practice :P My point was, if DirectX *forces* hardware to support a feature, it *will* be added to OpenGL at some point. The hardware is capable of it, why wouldn't they expose such features in OpenGL?

I don't want every GL.Ext promoted to GL, only a few selected candidates. For example, transform feedback might not become core, due to OpenCL. But it was crystal clear that FBO, float depth buffers, geometry shaders and instanced drawing would be promoted.

1.c) Great :) Gonna grab some food first, then take a look.

the Fiddler's picture


"I don't want every GL.Ext promoted to GL, only a few selected candidates."
And what happens when the ARB actually promotes these, only with different parameters? All existing apps are now broken, and we are left with a perfect mess.

Sorry, but that change is too invasive. Khronos is the one to blame here - if they updated the specs in a timely manner, this would be a non-issue.

I'm testing some stuff on the GameWindow now (Kamujin's interleaved scheduler and JTalton's feature request) and I also need to finalize the text updates. After that I'll go through the ARB_ stuff.

Inertia's picture


Sweet, I really like the lighter enums :)

But TextureTarget still contains:

TextureMaxLevelSgis = ((int)0X813D),
TextureBaseLevelSgis = ((int)0X813c),
TextureMaxLodSgis = ((int)0X813b),
TextureMinLodSgis = ((int)0X813a),

From the reference pages for glTexParameter:

GL_TEXTURE_MIN_LOD
Sets the minimum level-of-detail parameter. This floating-point value limits the selection of the highest resolution mipmap (lowest mipmap level). The initial value is -1000.

GL_TEXTURE_MAX_LOD
Sets the maximum level-of-detail parameter. This floating-point value limits the selection of the lowest resolution mipmap (highest mipmap level). The initial value is 1000.

GL_TEXTURE_BASE_LEVEL
Specifies the index of the lowest defined mipmap level. This is an integer value. The initial value is 0.

GL_TEXTURE_MAX_LEVEL
Sets the index of the highest defined mipmap level. This is an integer value. The initial value is 1000.