objarni's picture

OpenTK API: When is it OK to favor elegance before optimal performance?

Following a recent debate in the thread about the new Half datatype, I'm starting this thread to discuss the open (and hard) question of how to balance elegant API design against performance.

In my view, OpenTK should prefer elegant/easy-to-use/clean APIs over cumbersome ones, as long as the performance gain from the cumbersome API would not be noticeable in a typical OpenTK application.

Of course there are many vague concepts in that:

1) what is elegant? easy to use? clean?
2) what is cumbersome?
3) what is noticeable?
4) what is a typical OpenTK application?

Let me offer my answers to these questions, as something to talk about:

1) what is elegant? easy to use? clean?
- A library that has a coherent design, for example using its own Vector4/Matrix4 types in the other APIs.
- Methods with ref/out parameters are uglier than those without 'em
- Consistent naming of APIs/parameters etc.
- Operator overloads for ordinary maths types (vectors and matrices)
- Follows cultural practice of the language/environment in which it lives (.NET/C#)

2) what is cumbersome?
- ref/out/arrays/pointers as parameters to methods
- in general, anything that is not readable (e.g. vectors/matrices could be written in a mathematical notation, something that is not possible in OpenTK because of the design choices made) - see the sketch below for the contrast
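
To make the contrast concrete, here is a minimal sketch of the two styles (the Mult(ref, ref, out) signature is assumed from memory and may not match OpenTK's actual API exactly):

// Elegant: operator overloads read like the maths.
Matrix4 result = a * b * c;

// Cumbersome: ref/out parameters avoid copying the 64-byte structs on each call,
// but the code no longer reads like the expression above.
Matrix4 temp;
Matrix4.Mult(ref a, ref b, out temp);
Matrix4.Mult(ref temp, ref c, out result);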

3) what is noticeable?
I'd say that if a person used to playing PC games cannot tell the difference in "smoothness" between the application with and without an optimization, that optimization is quite unnecessary.

4) what is a typical OpenTK application?
Hard one. But some typical apps seem to be: 3D model viewers/editors, simple 3D games (no AAA game yet, at least...), and the graphics driver inside higher-level libraries (e.g. AgateLib).


Comments

Inertia's picture

Thanks for moving this into a separate topic. This is the right place for it.

1&2) The answer is simple:

Good for Programmer - Fluffy cuddle API
Good for Users - Fast API

The users will not care if you coded your application in a 5000-year-old Chinese Fortran dialect, as long as it runs flawlessly. Faster execution -> lower hardware requirements -> more people able to run the software.

3) Below the monitor's refresh rate it's noticeable. Download the Crysis demo and put all sliders to "max. detail" if you don't believe it. The games you are talking about went through 100s of iterations of "debug, profile, optimize and repeat" before they got where they are now.

4) None. There's no limitation from OpenTK's side, the limiting factors are imagination, viability and hardware. No type of application is favored over another.

objarni's picture

1 & 2)

But what if the optimization is not even noticeable? If it is the difference between 99 frames per second and 98 frames per second?

The way you are reasoning now seems to be:

Elegance is NEVER a higher priority than performance

If that is your rule, then I think you are in the wrong language/environment. That kind of "tough attitude" leads you to C/Asm. No doubt about it!

What do others think about this?

Mincus's picture

I think it depends entirely on what area you're aiming at.
For example you wouldn't use Flash for developing an in-depth game as the interface is designed mostly for drawing.
You equally wouldn't use C (even with a simple API like SDL) for knocking up a daft 2D game.

Obviously those are extremes and C# with OpenTK fits snugly in the middle, catering to both ends to some extent. It's not the fastest and it's not the easiest way to put something together but it mixes both nicely.
Quick dev times where you don't spend ages reinventing the wheel (which often happens with C/C++) whilst maintaining a significant level of flexibility that something like Flash lacks.

On the elegance side, it can be difficult to write elegant C code; it often isn't obvious what's happening when you start delving into some of the things you can do with pointers. Equally, Flash looks nice and is easy to use but lacks the ability to easily optimise. No matter what you do, Flash is not going to hit 60fps on a 1280x960 screen whilst keeping things moving (it often struggles to hit 25fps just showing video).

That's my assessment of the situation anyway. OpenTK is, imo, just right for indie games developers and the like, and has clear commercial applications where fast dev time is needed for 3D applications that don't require tons of CPU power.
It's the last part that suggests to me that developing a AAA commercial game, such as Left 4 Dead (to take a recent example) is beyond OpenTK, but as I said, it's a good middle ground between everything.

As a final note, what would take it even further for me is integrating something like the PhysX API into the toolkit. Obviously I don't mean right now, I don't even mean for the big v1.0, but maybe as an idea for a future addition?
This would remove a significant load from the CPU that would mean the reduction in speed from using C# over C/C++ would be compensated for. Or am I dreaming with that one? :o)

Inertia's picture

PhysX is pretty mature, but it has 3 major problems: no OSX support, no x64 support, and it's NVIDIA-only. This kinda doesn't match OpenTK's platform-neutral philosophy. Other libraries like Bullet, ODE, Newton etc. keep ignoring the possibilities multi-core CPUs offer and have not made any announcements about supporting OpenCL. Graphics cards are moving heavily in the "FPU addon card" direction, making OpenCL very attractive for physics. I don't think there will be a decision on whether to add physics bindings or not until the physics APIs have made their moves regarding OpenCL.

Mincus's picture

I agree; I was just using PhysX as a well-known example of such an API, and I know there are several around.
It's also why I pointed out it's likely to be a long way in the future if it happens at all. OpenTK is still fine-tuning the graphics, and there's sound to finish off before anything else is even considered; I just thought it a relevant future possibility.

Inertia's picture

"then I think you are in the wrong language/environment."
C is like a gun without a safety switch. C# has that switch, so it's your decision when to take the safety off.

What is the point of this topic anyway? Trying to continue the "pointers are evil and must be exorcized" crusade from half a year ago? OpenTK allows both options, and I don't think it would be smart to force either of them on users. Choose whatever you prefer.
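
To illustrate the "both options" point, something along these lines (overload names from memory of the generated bindings; the exact signatures may differ):

// Safe path: strongly typed, no unsafe code required.
GL.LoadMatrix(ref modelview);            // modelview is a Matrix4

// Unsafe path: raw pointers, if you decide to take the safety off.
unsafe
{
    fixed (float* ptr = matrixArray)     // matrixArray is a float[16]
    {
        GL.LoadMatrix(ptr);
    }
}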

objarni's picture

One of the primary reasons I use OpenTK instead of Tao.GL is that it feels more .NET-idiomatic.

The Tao way looks like this:

Gl.glBegin(Gl.GL_TRIANGLES);

While in OpenTK it looks like this:

GL.Begin(BeginMode.Triangles);

OpenTK uses many small enums instead of one huge one, reducing the risk of picking the wrong constant. It also dares to move away from the GL convention of prefixing every method with "gl", and makes it more .NET-idiomatic by using the "GL." class prefix instead.
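
For example (enum names as I remember them from the bindings), the typed parameters rule out whole classes of mistakes at compile time:

GL.Begin(BeginMode.Triangles);     // only BeginMode values are accepted here
GL.Enable(EnableCap.DepthTest);    // a different enum for a different call

// In the Tao-style API everything is an int-like constant, so this compiles
// even though GL_DEPTH_TEST makes no sense as a primitive type:
// Gl.glBegin(Gl.GL_DEPTH_TEST);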

These things point towards OpenTK favoring readability and productivity over convention. And from the beginning I thought this was one of the primary focuses of OpenTK.

I think Inertia and I have spoken out quite clearly about this issue. What I think we are lacking is the Fiddler's opinion, or that of anyone else interested in OpenTK.

Now I'm starting to think (a little simplified but in essence) "it does not matter so much if I choose the Tao libraries or OpenTK - they are trying to do the same thing".

the Fiddler's picture

I started working on OpenTK because I found the OpenGL API both frustrating and inelegant. The biggest issue was the complete lack of type safety: you have a few thousand tokens, a couple hundred functions and the documentation as the sole safeguard against errors. Not good.

While this lack of type safety may be understandable for a C API, it is hard to accept that languages like C# will have to suffer from the same problem. This is the raison d'être for OpenTK: to address the most common problems one faces when working with OpenGL. That was the initial scope at least. It has grown a little since and now fills its own niche market.

My goal is to make OpenTK:

  1. Suitable for RAD, which means intuitive, safe and *useful*.
  2. Portable, self-contained and easy to deploy.
  3. Fast enough that you won't need to drop back to unmanaged code (as that conflicts with both of the above).

#1 means removing cruft ("gl"), adding necessary functionality (WinForms, audio, fonts, math) and optimizing the safe path.

#2 means not relying on unmanaged code, minimizing dependencies and working around known bugs. From a coding standpoint, I consider this the most difficult part. However, this is what makes it possible to code on Windows, copy & paste on Linux and have everything work.

#3 means understanding the limits of the managed environment and working with them: minimizing GC impact, making common operations fast and, sometimes, making sacrifices for speed. Design-wise, this is the most complex part and there is usually a trade off between speed, ease of use and ease of implementation.

The last item also affects the kind of applications that can be built with OpenTK: a 10ms GC pause that fires when a modeling app is waiting for input will go unnoticed. A 10ms stall when you are going for the kill in an FPS is bad. A single 10ms hiccup in a flight simulator is the difference between achieving verification or not. Ideally OpenTK would be able to support all these applications (realistically it won't), but the faster it is, the more diverse its uses.
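
What that means in practice for application code is keeping the per-frame path allocation-free (a minimal sketch, my illustration rather than anything inside OpenTK itself):

class ParticleSystem
{
    const int MaxParticles = 4096;

    // Allocate once, up front; nothing is allocated per frame,
    // so the GC never has a reason to fire mid-game.
    readonly Vector3[] positions = new Vector3[MaxParticles];

    public void Update(float dt)
    {
        for (int i = 0; i < positions.Length; i++)
        {
            // Mutate the structs in place instead of creating new ones.
            positions[i].Y += dt;
        }
    }
}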

When confronted with the question "When is it OK to favor elegance before optimal performance?", my answer would be "when elegance does not impact usability."

It's great when something is elegant and fast. It's ok when you are able to provide an elegant interface with an inelegant implementation, for the sake of speed. Finally, it's sometimes impossible to achieve both elegance and speed (e.g. because you have reached the limits of the language or the runtime). In that case, you have to weigh the impact and pick the most suitable solution (and in exceptional cases provide both fast/ugly and slow/nice).

There is no clear-cut rule of thumb - you have to weigh each case individually. Example: does it make sense to implement the operator * for matrices, when it is an order of magnitude slower (figuratively speaking) than ref parameters?
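
The only honest way to answer that is to measure. A rough micro-benchmark sketch (illustrative only; it assumes a Mult(ref, ref, out) overload alongside the operator):

using System;
using System.Diagnostics;
using OpenTK;

class MatMulBench
{
    static void Main()
    {
        Matrix4 a = Matrix4.Identity;
        Matrix4 b = Matrix4.Identity;
        Matrix4 r = Matrix4.Identity;
        const int N = 1000000;

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
            r = a * b;                          // operator: structs copied by value
        sw.Stop();
        Console.WriteLine("operator*: {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
            Matrix4.Mult(ref a, ref b, out r);  // ref/out: no copies
        sw.Stop();
        Console.WriteLine("Mult(ref): {0} ms", sw.ElapsedMilliseconds);

        Console.WriteLine(r.M11);               // use r so the loops can't be optimized away
    }
}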

Kamujin's picture

Since we are speaking in general terms and not about a specific case, I would characterize my opinion as follows.

1) Public facing classes and methods should be safe, elegant, and intuitive.*

2) Do your optimizations behind the scenes.

3) When possible use declarative patterns to coerce efficient use.

*intuitive is in the eye of the beholder. It is the framework consumer's burden to understand the subject matter before trying to solve problems in that area. It is not the framework's responsibility to educate the framework consumer. The framework should use naming conventions and solution patterns commonly used by experts on the subject. A framework should strive to be "obvious" to an expert in the subject with no prior knowledge of the framework.

objarni's picture

Thanks for your answer, Fiddler! Many of the points you make in 1-3 would, I think, be good to mention somewhere on the site; they represent a kind of "project statement" or "target & scope" for OpenTK, so I think they are important. And if they are already mentioned somewhere, I apologize.

[Side note: Would you consider "developer productivity", that is, easy to deploy, easy to use and still fast enough for realtime 3D applications, to be an accurate distillation of your three rules?]

There is no clear-cut rule of thumb - you have to weigh each case individually. Example: does it make sense to implement the operator * for matrices, when it is an order of magnitude slower (figuratively speaking) than ref parameters?

Is the estimated effect of the optimization a noticeable speedup in a typical application?

What I am getting at, is if the optimization is unnoticeable (in all but performance test-programs!) - it is not necessary!

An order of magnitude is unnoticeable if the operation is something that is executed at initialization time, or if the operation in question is so fast that, executed every frame, it is a matter of 1 microsecond versus 10 microseconds (both negligible).

If we are talking about an operation performed 1,000 times per frame in a typical application - and the execution time of the operation is measured in microseconds rather than nanoseconds - then it is noticeable.
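
To put rough numbers on that (my own back-of-envelope figures): at 60 fps the frame budget is about 16.7 ms. 1,000 calls at 1 microsecond each cost 1 ms, i.e. about 6% of the frame, which is noticeable. 1,000 calls at 10 nanoseconds each cost 10 microseconds, i.e. less than 0.1% of the frame, which is lost in the noise.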

Reducing those two paragraphs to one sentence: "How common is the operation in inner loops?"

Of course, this kind of analysis of how an operation typically will be used is hard - especially when designing a general-purpose library in which you are not in control of how the operations will be used at all!

To make such an analysis, you have to have a kind of "human" experience in the field the library is being developed for, knowing the "folklore" of the community. In this case, the computer graphics community.

That's why we all know that optimizing GL.ClearColor() is a waste of time, while micro-optimizing GL.Vertex3f calls is important (at least to an extent -- since there are so many faster ways to send geometry to the graphics card, one could argue that optimizing immediate-mode calls is a waste of time!)
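
For comparison, the faster path looks roughly like this (enum and overload names from memory of the GL bindings, and 'vertices' is assumed to be a Vector3[] prepared elsewhere):

// Immediate mode: one managed-to-native transition per vertex, every frame.
GL.Begin(BeginMode.Triangles);
foreach (Vector3 v in vertices)
    GL.Vertex3(v);
GL.End();

// Vertex buffer object: upload once, then a single draw call per frame.
int vbo;
GL.GenBuffers(1, out vbo);
GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
GL.BufferData(BufferTarget.ArrayBuffer,
              (IntPtr)(vertices.Length * Vector3.SizeInBytes),
              vertices, BufferUsageHint.StaticDraw);
GL.EnableClientState(ArrayCap.VertexArray);
GL.VertexPointer(3, VertexPointerType.Float, 0, IntPtr.Zero);
GL.DrawArrays(BeginMode.Triangles, 0, vertices.Length);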

One more short comment:

"when elegance does not impact usability."

Do you mean "usability" as in "possible applications/usage scenarios"? For me usability means ease-of-use.