JTalton's picture

What's up?

The forums have been a little quiet lately. I was wondering: what is everyone working on?

I have moved my Gtk Sharp glWidget to https://sourceforge.net/projects/glwidget/. I plan to make it a little more robust, support multi-threaded OpenGL, and hopefully support OS X, and I still hope it can be used with OpenTK. I'm not actively working on it at the moment, though, since other projects are taking up my time.

As for Golem3D (http://www.golem3d.com), motivation has been low. I've been playing with Blender, and it does everything. The problem is that Blender is a pain in the *** to learn and use, so I am back to playing around with Golem for fun. I have separated the windowing/input/font section of my framework out into an interface, allowing both an SDL and an OpenTK backend. I have also separated out my UIFramework and have been working heavily on its data binding.

Anyone have any opinions on OpenGL 3.0? The specs were released and they stripped a lot of good stuff out.


Comments

oyvindra's picture

I'm considering creating my own simple C# NURBS library, and I'm looking forward to getting back to student life in a week or so. :)

nythrix's picture

Anyone have any opinions on OpenGL 3.0? The specs were released and they stripped a lot of good stuff out.
That's not half as bad as leaving out its replacement. Apparently they adopted the "everyone and his sister should be able to write a GL3 driver till noon" philosophy.

After a year of waiting I'm pretty much disgusted...

Inertia's picture

I've been taking a look at PhysX in the past few weeks (and at C++, to write a somewhat efficient wrapper), as it appears to be the only physics API that takes advantage of multi-core CPU and GPU optimizations. The existing wrappers I found are for XNA and are either mature but closed source, or open source but incomplete, so I'm considering spending some time working on this.

Regarding OpenGL 3.0 ...

(Please note that I'm neither angry, nor expressing anyone's opinion besides my own)

My excitement for GL3 is limited, because it's not the promised Longs Peak revamp, but just a handful of promoted extensions, some GLSL polish, and a deprecation model. I do understand that one of OpenGL's strengths is backwards compatibility and that building a new API from scratch takes time, but they could have told us in January, when the decision was made, that Longs Peak was on hold, rather than letting people wrap their brains around the idea of a cleaner, lighter-weight API. (This is imho why there is so much whining on the opengl.org forum atm.)

Sure, floating-point depth buffers, the RGB9E5 color format, FBOs promoted to core, and instancing and geometry shaders moved to ARB extension status are steps forward, but it's the least they could do, not the best. So I don't really feel like shouting 'hurray!', since most of these praised new extensions had been available from Nvidia, in stable form, for over a year. The version number 3.0 doesn't feel quite right either: something around 2.5, saving 3.0 for the point where the deprecated functionality is actually moved out into an extension, would have better matched the progress.

Please note I'm not saying it's bad, and it's probably better than a flawed Longs Peak release that would not have survived the year 2010. The functionality I wanted has been promoted to core/ARB status; it's just not as shiny as a new LP logo would have been ;)

objarni's picture

@Inertia
What was the big thing that didn't make it to GL3, in not-so-technical language?

the Fiddler's picture

GL3 was supposed to be a cleaned-up API based on objects (you create an object, then pass it around). The current GL3 remains state-based, i.e. you "bind" a resource to perform operations on it (and the bound resource is remembered until you bind a different one).

The proposed API would have been cleaner (no more 20-year-old cruft), easier to implement (more robust drivers) and faster (object validation would only happen at creation time, not every time the object is used).
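To make the contrast concrete, here is a rough sketch in pseudocode. The calls in the first half are real GL 2.x calls; the object-style names in the second half are invented purely for illustration, since the Longs Peak API was never finalized:

```
// Current, state-based GL: commands act on whatever happens to be bound.
glBindTexture(GL_TEXTURE_2D, texture);                             // select a texture...
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  // ...then modify "the bound one"

// Proposed object-based style (hypothetical names): the object is passed
// explicitly, there is no hidden "currently bound" state, and the object
// is validated once, at creation time.
texture = CreateTexture(textureTemplate);
SetTextureFilter(texture, LINEAR);
Draw(program, texture, vertexArray);
```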

What happened is that the ARB decided there was not enough time to build a robust implementation of that idea (read this for the whole explanation). Instead of rewriting the API, they decided to move forward with the current one, deprecating old features. Future versions will remove those old features in "profiles", which may or may not be implemented, at the IHVs' discretion.

So in the end, what we have is not what we were promised two years ago, and while it's better than nothing, it's still a little disappointing. On the other hand, this has motivated the community to take matters into its own hands: work has started on an OpenGL SDK (tutorials, documentation, helper libraries) that will hopefully help attract newcomers to OpenGL.

Right now, I'm waiting for the specs to be updated with the new entry points so I can update OpenTK. The new enums are already available, so this should happen soon.

Inertia's picture

What the Fiddler said, plus a new API could have cleaned up some old problems:

  • A more accurate error description when an error occurs. Right now, the response "invalid operation" only narrows down the possibilities of what could have led to the error: you still have to look up in the manual what is accepted and which of the parameters caused the problem. OpenTK, with its enums, already narrows down what can go wrong (so this is not too bad for us), but I would have embraced a more verbose error report.
  • Commands often need to be used in sequence with each other; they make no sense called alone. A new API would have been a chance to group some of them into a single command.
  • The mechanism that reports the success of shader compilation is not great either. You cannot be certain whether the shader runs in hardware or in software unless you parse the info log, and there is no standard for how that log must be formatted, which makes this more complicated than necessary. Although it has not happened to me yet, I can imagine a Chinese version of a GL driver reporting "success to run in hardware" in Chinese, and the parsing code failing to recognize the success.
  • Shaders, again: when specifying custom vertex attributes (tangent, radius, weights, whatever non-standard thing you need), you have to use a string and an integer to describe the location.
  • And one more: glGetUniformLocation could have been changed so that you only call it once after shader linking, and the returned value remains valid every time the shader is made current.

These are some things I have repeatedly run into and have not found a nice solution for yet. My hope was that Longs Peak would address them and present a more elegant solution.
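For the glGetUniformLocation point in particular, the usual workaround is to look each location up once after linking and cache it, instead of querying every frame. A minimal sketch of that pattern (in Python for brevity; `fake_get_uniform_location` is a made-up stand-in for the real GL call, which you would substitute in practice):

```python
class UniformCache:
    """Cache uniform locations so the (expensive) lookup runs only once
    per name, instead of once per frame."""

    def __init__(self, lookup):
        self._lookup = lookup    # e.g. lambda n: GL.GetUniformLocation(program, n)
        self._locations = {}     # uniform name -> cached location

    def __getitem__(self, name):
        if name not in self._locations:
            self._locations[name] = self._lookup(name)  # real lookup, once
        return self._locations[name]

# Demo with a fake lookup standing in for glGetUniformLocation:
calls = []
def fake_get_uniform_location(name):
    calls.append(name)
    return len(calls) - 1

uniforms = UniformCache(fake_get_uniform_location)
uniforms["mvpMatrix"]    # triggers the one real lookup
uniforms["mvpMatrix"]    # served from the cache
print(len(calls))        # -> 1
```

This only papers over the problem, of course; the point of the list above is that the API itself could have made the returned value stable, so no wrapper would be needed.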

The good thing about GL3 is that you don't have to change any code. You 'may' use a forward-compatible rendering context that does not support glBegin & Co., but you are not forced to. This should help a lot with adding GL 3.1 render paths to existing applications to support future OpenGL versions.

Like I said before, it's not as bad as the people at opengl.org claim. It certainly is an improvement over 2.1. It just isn't Longs Peak.

Why I was looking forward to Longs Peak: I kinda got my hopes up that it would be elegant enough to convince more programmers to use it instead of DirectX. The way I understood the draft, it would set a rather high minimum hardware requirement, one that would be sufficient for pretty much everything except GPGPU.
Apple and Linux market shares have increased over the past few years, and it's certainly worth considering those platforms as well, since the competition from existing applications/games there is not as strong as on Windows. (I think I've elaborated on this in another discussion already, so I'm gonna stop here and point at the search function ;p)

objarni's picture

@Fiddler, Inertia.

Thanks for the info ... This kind of reminds me of the Betamax <-> VHS format fight:

http://en.wikipedia.org/wiki/Betamax

(I mean the sad but true lesson that "compatibility is king" and that the best technology doesn't always win.)
(That doesn't mean I have a political standpoint in the OpenGL 3 discussion; it's just an observation.)