rotard:

Radeon HD 4850x2 issues?

A while ago, I picked up a copy of the OpenGL SuperBible, fifth edition, and slowly and painfully started teaching myself OpenGL the 3.3 way, after having been used to the 1.0 way from school way back when. On my laptop, I put together a program that let the user walk around a simple landscape, using a couple of shaders that implemented per-vertex coloring, lighting, and texture mapping. Everything worked brilliantly until I ran the same program on my desktop.

Disaster! Nothing would render! I have chopped my program down to the point where the camera sits near the origin, pointing _at_ the origin, and renders a square (technically, a triangle fan). The quad renders perfectly on my laptop, coloring, lighting, texturing and all, but the desktop renders a small distorted non-square quadrilateral that is colored incorrectly, not affected by the lights, and not textured.

I suspect the graphics card is at fault, because I get the same result whether I boot into Ubuntu 10.10 or Windows XP. I did find that if I pare the vertex shader down to ONLY outputting the positional data, and the fragment shader to ONLY outputting a solid color (white), the quad renders correctly. But as SOON as I start passing in color data (whether or not I use it in the fragment shader), the output from the vertex shader is distorted again. The shaders follow. I left the pre-existing code in, but commented out, so you can get an idea of what I was trying to do. I'm a noob at GLSL, so the code could probably be a lot better.

My laptop is an old Lenovo T61p with a Centrino (Core 2) Duo and an nVidia Quadro graphics card, running Ubuntu 10.10.
My desktop has an i7 with a Radeon HD 4850 X2 (a single card with two GPUs) from Sapphire, dual-booting Ubuntu 10.10 and Windows XP. The problem occurs in both.

Can anyone see something wrong that I am missing? What is "special" about my HD 4850x2?

string vertexShaderSource = @"
#version 330

precision highp float;

uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;
//uniform mat4 normal_matrix;
//uniform mat4 cmv_matrix; //Camera modelview. Light sources are transformed by this matrix.
//uniform vec3 ambient_color;
//uniform vec3 diffuse_color;
//uniform vec3 diffuse_direction;

in vec4 in_position;
in vec4 in_color;
//in vec3 in_normal;
//in vec3 in_tex_coords;

out vec4 varyingColor;
//out vec3 varyingTexCoords;

void main(void)
{
    //Get surface normal in eye coordinates
    //vec4 vEyeNormal = normal_matrix * vec4(in_normal, 0);

    //Get vertex position in eye coordinates
    //vec4 vPosition4 = modelview_matrix * in_position;
    //vec3 vPosition3 = vPosition4.xyz / vPosition4.w;

    //Get vector to light source in eye coordinates
    //vec3 lightVecNormalized = normalize(diffuse_direction);
    //vec3 vLightDir = normalize((cmv_matrix * vec4(lightVecNormalized, 0)).xyz);

    //Dot product gives us diffuse intensity
    //float diff = max(0.0, dot(vEyeNormal.xyz, vLightDir));

    //Multiply intensity by diffuse color, force alpha to 1.0
    //varyingColor = vec4(diffuse_color * diff * in_color.rgb, 1.0);
    varyingColor = in_color;

    //varyingTexCoords = in_tex_coords;
    gl_Position = projection_matrix * modelview_matrix * in_position;
}";

string fragmentShaderSource = @"
#version 330
//#extension GL_EXT_gpu_shader4 : enable

precision highp float;

//uniform sampler2DArray colorMap;

//in vec4 varyingColor;
//in vec3 varyingTexCoords;

out vec4 out_frag_color;

void main(void)
{
    out_frag_color = vec4(1.0, 1.0, 1.0, 1.0);
    //out_frag_color = varyingColor;
    //out_frag_color = varyingColor * texture(colorMap, varyingTexCoords);
    //out_frag_color = varyingColor * texture(colorMap, vec3(varyingTexCoords.st, 0));
    //out_frag_color = varyingColor * texture2DArray(colorMap, varyingTexCoords);
}";

Note that in this code the color data is accepted but not actually used. The geometry comes out the same (wrong) whether or not the fragment shader uses varyingColor. Only if I comment out the line "varyingColor = in_color;" does the geometry render correctly. Originally the shaders took vec3 inputs; I only changed them to vec4s while troubleshooting.


the Fiddler:

Do the shader logs report anything unusual? (GL.GetShaderInfoLog and GL.GetProgramInfoLog) Are you getting any OpenGL errors? (GL.GetError)

gDEBugger is now available for free, try debugging your program with that.
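A minimal sketch of those checks in OpenTK-style C# (the `vertexShader` and `program` handles are placeholders for your own objects):

```csharp
// After GL.CompileShader(vertexShader):
int status;
GL.GetShader(vertexShader, ShaderParameter.CompileStatus, out status);
if (status == 0)
    Console.WriteLine(GL.GetShaderInfoLog(vertexShader));

// After GL.LinkProgram(program):
GL.GetProgram(program, ProgramParameter.LinkStatus, out status);
if (status == 0)
    Console.WriteLine(GL.GetProgramInfoLog(program));

// Once per frame, e.g. at the end of the render loop:
ErrorCode err = GL.GetError();
if (err != ErrorCode.NoError)
    Console.WriteLine("GL error: " + err);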

rotard:

Yes, I have been using GetShaderInfoLog and GetProgramInfoLog to check for compile and linker errors; if any error is reported, nothing renders at all. I am also checking GL.GetError() in my render loop and am getting NoError.

Coki04:

In the vertex shader, try replacing your "in" variables as follows:

layout(location=0) in vec4 in_position;
layout(location=1) in vec4 in_color;

If you use any more "in" variables, give them location 2 and up.

I hope it works.
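With explicit locations like these, the client-side attribute setup can hardcode the same indices. A sketch in OpenTK-style C# (the `positionVbo`/`colorVbo` names are placeholders; assumes tightly packed vec4 data in separate buffers):

```csharp
// Matches: layout(location = 0) in vec4 in_position;
GL.BindBuffer(BufferTarget.ArrayBuffer, positionVbo);
GL.EnableVertexAttribArray(0);
GL.VertexAttribPointer(0, 4, VertexAttribPointerType.Float, false, 0, 0);

// Matches: layout(location = 1) in vec4 in_color;
GL.BindBuffer(BufferTarget.ArrayBuffer, colorVbo);
GL.EnableVertexAttribArray(1);
GL.VertexAttribPointer(1, 4, VertexAttribPointerType.Float, false, 0, 0);
```

Note that the layout qualifier needs #version 330 (or the GL_ARB_explicit_attrib_location extension on older contexts).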

JTalton:

I have seen many different issues getting my shaders to work across different systems.

On my older Mac laptop with an ATI card, every shader value used with a float had to be specified as a float (i.e. 2.0 instead of 2). GL.GetShaderInfoLog helped track all those down.

Something that differs between systems, is not reported by GL.GetShaderInfoLog, and can be hard to track down is attribute locations.
The locations of attributes differ between ATI and NVidia in their compiled shaders, so I use GL.GetAttribLocation() to determine where an attribute actually ended up. My tests with GL.BindAttribLocation seemed to indicate that it did not work on NVidia, so I work around it with GL.GetAttribLocation. In general, your vertex position should always be the first attribute in the shader; then use GL.GetAttribLocation to find its real location. ATI, I believe, reorders the attributes alphabetically.

On a side note: if you try to use instancing, there is a known issue on ATI where gl_InstanceID gets added to the attribute list, which is alphabetical, so it pushes down all the attributes whose names sort after it. That would not seem to be an issue if you were using GL.GetAttribLocation, but I don't think GL.GetAttribLocation takes this into account, so your binds don't bind to the correct locations. The only fix is to name your variables so that this does not happen; I prefix all my attributes with '_' to get around it.

In your code you have

in vec4 in_position;
in vec4 in_color;

On ATI I believe it will reorder these alphabetically, so in_color will be at location 0 and in_position at location 1. A quick test is to rename them so the alphabetical order is switched. If that is the issue, the long-term fix is to use GL.GetAttribLocation.

Another benefit of GL.GetAttribLocation: if the shader compiler detects that an attribute is unused, it removes it, and all the remaining attributes shift locations. In that case -1 is returned for the location, so check for that before binding.
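Putting that together, a sketch of querying locations after linking in OpenTK-style C# (the handle names are placeholders):

```csharp
// Query only AFTER GL.LinkProgram(program):
int posLoc = GL.GetAttribLocation(program, "in_position");
int colLoc = GL.GetAttribLocation(program, "in_color");

// -1 means the attribute was not found (e.g. optimized away as unused).
if (posLoc != -1)
{
    GL.BindBuffer(BufferTarget.ArrayBuffer, positionVbo);
    GL.EnableVertexAttribArray(posLoc);
    GL.VertexAttribPointer(posLoc, 4, VertexAttribPointerType.Float, false, 0, 0);
}
if (colLoc != -1)
{
    GL.BindBuffer(BufferTarget.ArrayBuffer, colorVbo);
    GL.EnableVertexAttribArray(colLoc);
    GL.VertexAttribPointer(colLoc, 4, VertexAttribPointerType.Float, false, 0, 0);
}
```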

Just read Coki04's post. I did not know about "layout(location=X)"; that seems like it would solve the issue too. Very handy. Thanks!

c2woody:

"My tests with using GL.BindAttribLocation seemed to indicate that it did not work on NVidia"

When exactly are you binding the attribute indices? It has to be done before the shaders are linked.
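That is, the correct order looks roughly like this (OpenTK-style C#, handle names are placeholders):

```csharp
GL.AttachShader(program, vertexShader);
GL.AttachShader(program, fragmentShader);

// Bind BEFORE linking; bindings only take effect at link time.
GL.BindAttribLocation(program, 0, "in_position");
GL.BindAttribLocation(program, 1, "in_color");

GL.LinkProgram(program);
```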

JTalton:

Ah, yes, that makes sense. I think it may have been after. Interestingly enough, I think ATI's implementation will let you do it after the shaders are linked. Thanks for the info.

c2woody:

For me it was the other way around: ATI's drivers didn't allow changing the binding, and since their default indexing was different from the ordering you'd expect, it got messed up. But that may have been a specific problem of that driver/card combination; I've since removed the binding altogether, as the GetAttribLocation approach worked.

rotard:

Thank you SO MUCH! Quickly implementing Coki04's suggestion fixed it for me, but it was nice to see JTalton's full explanation. I had noticed that the uniforms get reordered, but since I was already using GetUniformLocation it didn't matter. I don't know why it never occurred to me that there might be a GetAttribLocation as well.

c2woody:

GetAttribLocation seems to be OpenGL 3.2+ only (GLSL 1.5/GL_ARB_explicit_attrib_location), right?

JTalton:

glGetAttribLocation is available whenever the GL version is 2.0 or greater.

I have not had any issues calling it on an old Mac with an ATI card supporting only OpenGL 2.0. It also works on my other machines under Windows with NVidia and ATI cards.