# The Matrix Stack

Posted Tuesday, 15 March, 2011 - 13:05 by fkj

In my program I use:

```csharp
GL.MatrixMode( MatrixMode.Modelview );
Matrix4 modelview = Matrix4.LookAt( eye, target, up );
GL.LoadMatrix( ref modelview );
```

to set the camera, and it is currently the only transformation I'm using. I understand OpenGL uses a matrix stack, but I'm not sure how to use it.

If I want to "add" (matrix-multiply) another transformation, e.g.:

`Matrix4 transformation;`

to the current transformation, to turn a specific shape (e.g. a quad with a texture), should I use this template?:

```csharp
GL.PushMatrix();
GL.LoadMatrix( ref transformation );
GL.Begin( BeginMode.Quads );
{
    // render texture
}
GL.End();
GL.PopMatrix();
```

Using a stack, I would expect to push a transformation when I want to add it, and pop to remove the last one added. But why does PushMatrix take no parameter?

I understand that doing matrix multiplications on the GPU is much faster than on the CPU, and that I should avoid doing them in my program (at least not too many of them). The transformation I want is a single matrix I calculate in the program. The same transformation could be expressed as several translations and rotations, but building the resulting matrix myself takes no more than a sin(), a cos(), and a few multiplications. Is OpenGL slower with user-defined matrices? Slow enough that 4-6 of its own transformations would be faster than one user-defined matrix?

## Comments

## Re: The Matrix Stack

OpenGL matrix stack tutorial

> why push without a parameter?

`GL.PushMatrix`/`GL.PopMatrix` affect the *current* matrix, which is selected through `GL.MatrixMode`.
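A minimal sketch of what that means in practice (assumes a valid GL context and that `transformation` is the matrix from the question; note that `GL.MultMatrix` combines with the current matrix, whereas `GL.LoadMatrix` would overwrite the camera):

```csharp
// Push/Pop take no parameter because they operate on the stack of
// whichever matrix is current, as selected by GL.MatrixMode.
GL.MatrixMode( MatrixMode.Modelview );

GL.PushMatrix();                       // duplicate the current modelview matrix on its stack
GL.MultMatrix( ref transformation );   // multiply into the camera transform
// ... draw the transformed shape here ...
GL.PopMatrix();                        // restore the matrix saved by PushMatrix
```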

.I understand doing matrix multiplications on the GPU is much faster than to use the CPU, and I should avoid making them in my program (at least not too many of them). The transformation I want to use is a single matrix I calculate in the program, but the same transformation can be done using various translations and rotations, but it does not take more than a sin() and a cos(), and a few multiplications to make the resulting matrix. Is OpenGL slower on user defined matrices? Slow enough for 4-6 of its own transformations to be faster than one user defined?

GL.MultMatrix/Rotate/Translate/Scale (and the rest of the matrix stack) are calculated on the CPU, just like the equivalent OpenTK.Matrix4 methods. (I don't know which approach is faster; I would expect the former to perform better, but some people have reported better performance with Matrix4.)
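For illustration, here are the two equivalent approaches side by side (a sketch; the translation and 45-degree rotation are arbitrary example values, and `modelview` is the camera matrix from the question):

```csharp
// Option 1: let the GL matrix stack do the multiplications (driver-side CPU work).
GL.MatrixMode( MatrixMode.Modelview );
GL.LoadMatrix( ref modelview );          // camera from Matrix4.LookAt
GL.Translate( 1.0f, 0.0f, 0.0f );
GL.Rotate( 45.0f, 0.0f, 0.0f, 1.0f );    // degrees, around the z axis

// Option 2: do the same multiplications yourself with OpenTK's Matrix4.
// OpenTK uses a row-vector convention, so the transform applied first
// goes on the left of the product.
Matrix4 local = Matrix4.CreateRotationZ( MathHelper.DegreesToRadians( 45.0f ) )
              * Matrix4.CreateTranslation( 1.0f, 0.0f, 0.0f );
Matrix4 combined = local * modelview;
GL.LoadMatrix( ref combined );
```

Either way the multiplication happens on the CPU; the GPU only ever sees the final matrix.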

To perform calculations on the GPU you'll need to use shader programs.
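With shaders, that typically means uploading the precomputed matrix as a uniform. A sketch (the names `shaderProgram` and `mvp`, and the `mvpMatrix` variable, are assumptions for illustration):

```csharp
// Upload a precomputed matrix to a shader uniform - the modern
// replacement for the matrix stack. Assumes shaderProgram is a linked
// program whose vertex shader declares: uniform mat4 mvp;
GL.UseProgram( shaderProgram );
int location = GL.GetUniformLocation( shaderProgram, "mvp" );
GL.UniformMatrix4( location, false, ref mvpMatrix );
// In the vertex shader: gl_Position = mvp * vec4(position, 1.0);
```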

## Re: The Matrix Stack

The matrix stack operations push, pop, translate, scale, rotate and so on are implemented purely in driver software. Only the current values of the matrices are loaded to the hardware for each draw call, so it really does not matter whether you use the GL matrix stack or your own. The GL matrix stack is in fact deprecated and has been removed from the latest GL Core profiles.

The workload of matrix multiplications on the CPU, even hundreds or thousands of them on a modern CPU, is still very little compared to the operations executed by the GPU. This is because the GPU typically executes matrix operations per vertex (in the vertex shader), and vertex counts can reach hundreds of thousands or millions. In that light it is much better, for example, to compute the full modelview-projection matrix once on the CPU instead of multiplying three matrices together on the GPU a million times (producing the same result every time).
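In OpenTK terms, that one-time CPU multiply might look like this (a sketch; `angle`, `eye`, `target`, `up`, `aspectRatio` and `mvpLocation` are assumed to exist, and the product order follows OpenTK's row-vector convention: model, then view, then projection):

```csharp
// Build each matrix once per frame on the CPU ...
Matrix4 model      = Matrix4.CreateRotationY( angle );
Matrix4 view       = Matrix4.LookAt( eye, target, up );
Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, aspectRatio, 0.1f, 100.0f );

// ... combine them once (two matrix multiplies in total) ...
Matrix4 mvp = model * view * projection;

// ... instead of letting every one of a million vertex-shader
// invocations redo the same three-matrix product.
GL.UniformMatrix4( mvpLocation, false, ref mvp );
```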