HermanssoN

GLSL

Hello, I have some questions about GLSL.
I wrote a small shader that will handle diffuse textures:

#version 150
 
//precision highp float;
uniform mat4 proj;
uniform mat4 view;
uniform mat4 world;
 
//These work 
layout (location = 0)in vec3 inPosition;
layout (location = 1)in vec3 inNormal;
layout (location = 2)in vec2 inTexCoord;
 
out vec4 vertexPos;
out vec3 normal;
out vec2 texCoord;
 
void main()
{
 
	//forced to use gl_Position and gl_TexCoord[0]
	gl_Position = proj * view * world * vec4(inPosition,1.0);
	gl_TexCoord[0].st = inTexCoord;
 
 
	//these won't work?? No compile errors though; I commented out gl_Position and gl_TexCoord at first
	vertexPos = proj * view * world * vec4(inPosition,1.0);
	normal 	   = inNormal;
	texCoord = inTexCoord;
}

and

#version 150
 
//precision highp float;
 
uniform sampler2D difuseMap;
 
 
//Not used right now; tried to use inTexCoord but it did not work
in vec4 inVertexPos;
in vec2 inTexCoord;
in vec3 inNormal;
 
//out keyword works here!
out vec4 outColor;
 
void main(void)
{ 
 
 
	//must use gl_TexCoord[0] here, why? inTexCoord just results in weird flickering
	outColor = texture2D( difuseMap, gl_TexCoord[0].st );
}

I'm using OpenGL 3.3 through the GLControl.
The code I posted works, but as my comments show, I'm forced to use gl_TexCoord and gl_Position.
My question is why?

I would like to use only my own attributes marked as in and out.
Does this have to do with the version of GLSL?


Comments

mOfl

First of all, "it did not work" in the comment is not a very precise description of your problem. What do you experience? Black screen, random colors, application crash, ...?

gl_Position is a built-in variable that has little to do with your attributes. You always write gl_Position as the homogeneous vertex position for the rasterizer.
You only pass the vertex position (transformed to image space or left in world space, as needed) as an additional attribute to the fragment shader if you need it for calculations, e.g. for lighting. This attribute has nothing to do with the actual fragment position, however, which depends solely on gl_Position. So, technically - though this would lead to wrong shading results - you could set gl_Position to the real position in the vertex shader and pass a different position to the fragment shader for shading calculations.
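For example, a vertex shader can write gl_Position for the rasterizer and still pass a separate position to the fragment shader as an ordinary out variable. A minimal sketch, assuming the same proj/view/world uniforms as in your shader (the worldPos name is just illustrative):

```glsl
#version 150

uniform mat4 proj;
uniform mat4 view;
uniform mat4 world;

in vec3 inPosition;

// Interpolated world-space position, e.g. for lighting calculations
out vec4 worldPos;

void main()
{
    // gl_Position feeds the rasterizer; it is not a regular attribute
    gl_Position = proj * view * world * vec4(inPosition, 1.0);

    // This separate varying is what the fragment shader actually receives
    worldPos = world * vec4(inPosition, 1.0);
}
```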

What OpenGL profile are you using? The core profile of OpenGL 3.3 forbids deprecated shader variables (an overview of the variables can be found here: www.khronos.org/files/opengl-quick-reference-card.pdf), so gl_TexCoord is not present anymore and an assignment to it could/should lead to an error.

So, your normal and texture coordinate should be passed with out and in attributes as you already did, but your position is set in the vertex shader with gl_Position = proj * view * world * vec4(inPosition,1.0);.

A second thing is your attribute locations. I am unfamiliar with the "layout (location = 0)" practice. Can you confirm with easy debugging output that this works? Usually, you would query the attribute location from your application by its name, e.g. GL.GetAttribLocation(shaderHandle, "inPosition");. This adds more flexibility to the application, as you do not have to change/remember preset locations.

HermanssoN

Hi.

When I create my VAO I use:

GL.EnableVertexAttribArray(0);
GL.BindBuffer(BufferTarget.ArrayBuffer, mPositionBuffer);
GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, true, Vector3.SizeInBytes, 0);
GL.BindAttribLocation(shader.Program, 0, "inPosition");
 
code = GL.GetError();
 
GL.EnableVertexAttribArray(1);
GL.BindBuffer(BufferTarget.ArrayBuffer, mNormalBuffer);
GL.VertexAttribPointer(1, 3, VertexAttribPointerType.Float, true, Vector3.SizeInBytes, 0);
GL.BindAttribLocation(shader.Program, 1, "inNormal");
 
code = GL.GetError();
 
GL.EnableVertexAttribArray(2);
GL.BindBuffer(BufferTarget.ArrayBuffer, mTexCoordBuffer);
GL.VertexAttribPointer(2, 2, VertexAttribPointerType.Float, false, Vector2.SizeInBytes, 0);
GL.BindAttribLocation(shader.Program, 2, "inTexCoord");

I tried to use:

in vec3 inPosition;
in vec3 inNormal;
in vec2 inTexCoord;

instead of :

layout (location = 0)in vec3 inPosition;
layout (location = 1)in vec3 inNormal;
layout (location = 2)in vec2 inTexCoord;

Instead of my mesh I got a small round cluster of triangles, so layout (location = 0) seems to make sure that the data is placed correctly.

I tried using GL.GetAttribLocation and got:

GL.GetAttribLocation(shader.Program, "inPosition"); // = 0
GL.GetAttribLocation(shader.Program, "inNormal"); // = -1
GL.GetAttribLocation(shader.Program, "inTexCoord"); // = 1

I tried out my code using a normal GameWindow that I set to use OpenGL 3.3:

 
 
public AnimationSample()
    : base(1280, 720, new OpenTK.Graphics.GraphicsMode(), "Animation Sample", 0, DisplayDevice.Default, 3, 3,
           OpenTK.Graphics.GraphicsContextFlags.ForwardCompatible | OpenTK.Graphics.GraphicsContextFlags.Debug)

About layout (location = 0):

"With this syntax, you can forgo the use of glBindAttribLocation entirely. If you try to combine the two and they conflict, the layout qualifier always wins. "

from: http://www.opengl.org/wiki/GLSL_Type_Qualifiers
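One caveat worth noting: explicit attribute locations only became core in GLSL 3.30 (#version 330). With #version 150 they are, strictly speaking, only available through an extension, which could explain driver-dependent behavior. A hedged sketch of how the extension would be requested:

```glsl
#version 150
// layout (location = N) is core only from GLSL 3.30 onwards.
// In 1.50 it requires this extension (driver support may vary):
#extension GL_ARB_explicit_attrib_location : require

layout (location = 0) in vec3 inPosition;
layout (location = 1) in vec3 inNormal;
layout (location = 2) in vec2 inTexCoord;
```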

mOfl
HermanssoN wrote:

When I Create my VAO i use:

 GL.EnableVertexAttribArray(0);

Please note that there is a difference between generic vertex attribute arrays and Vertex Array Objects (VAO). The latter is just a state object that collects bindings of Vertex Buffer Objects such that you can bind everything in just a single call when drawing. What you use are just regular vertex attribute arrays.

Quote:

Instead of my mesh I got a small round cluster of triangles, so layout (location = 0) seems to make sure that the data is placed correctly.

This should not happen if everything is set up correctly. When referring to shader attributes by name only, you have to do so everywhere you access them in your application instead of using explicit indices, as you leave the indexing to OpenGL.

Quote:

GL.GetAttribLocation(shader.Program, "inNormal"); // = -1

This indicates that there's an error somewhere in your application. Make sure that "inNormal" is actually used in your shader, not just declared. If you do not use it, or if you only compute values with it that are not handed over to the fragment shader, the shader compiler will optimize away the whole variable.
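For example, in a vertex shader like the following, the compiler is free to eliminate inNormal entirely, in which case GL.GetAttribLocation would return -1 for it (illustrative sketch):

```glsl
#version 150

in vec3 inPosition;
in vec3 inNormal;   // declared but never used: the linker may drop it,
                    // making its queried attribute location -1

void main()
{
    // Only inPosition contributes to any output, so only it is
    // guaranteed to survive optimization and get a valid location.
    gl_Position = vec4(inPosition, 1.0);
}
```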

OpenTK.Graphics.GraphicsContextFlags.ForwardCompatible

Are your video drivers up to date? As far as I know, your application should not work as long as you are using a forward-compatible OpenGL context. The forward-compatibility flag removes all deprecated functionality and should prevent you from using it, including the usage of gl_TexCoord in your shader.

HermanssoN

Hi.

I managed to resolve the errors; my code now looks like this:

OpenGL

private void InitBuffers(Shader shader)
{
    ErrorCode code = GL.GetError();
 
    //1. Create buffers for positions, normals and texCoords (VBOs)
    //2. Bind those buffers to the array buffer target
    //3. Create an index buffer and bind it to the element array buffer
 
    //1. create buffer, 2. bind buffer, 3. write the correct data to the buffer
    GL.GenBuffers(1, out mPositionBuffer);
    GL.BindBuffer(BufferTarget.ArrayBuffer, mPositionBuffer);
    GL.BufferData<Vector3>(BufferTarget.ArrayBuffer, new IntPtr(mPositions.Length * Vector3.SizeInBytes), mPositions, BufferUsageHint.StaticDraw);
 
    code = GL.GetError();
 
    GL.GenBuffers(1, out mNormalBuffer);
    GL.BindBuffer(BufferTarget.ArrayBuffer, mNormalBuffer);
    GL.BufferData<Vector3>(BufferTarget.ArrayBuffer, new IntPtr(mNormals.Length * Vector3.SizeInBytes), mNormals, BufferUsageHint.StaticDraw);
 
    code = GL.GetError();
 
    GL.GenBuffers(1, out mTexCoordBuffer);
    GL.BindBuffer(BufferTarget.ArrayBuffer, mTexCoordBuffer);
    GL.BufferData<Vector2>(BufferTarget.ArrayBuffer, new IntPtr(mTexCoords.Length * Vector2.SizeInBytes), mTexCoords, BufferUsageHint.StaticDraw);
 
    code = GL.GetError();
 
    GL.GenBuffers(1, out mIndexBuffer);
    GL.BindBuffer(BufferTarget.ElementArrayBuffer, mIndexBuffer);
    GL.BufferData(BufferTarget.ElementArrayBuffer, new IntPtr(sizeof(uint) * mIndices.Length), mIndices, BufferUsageHint.StaticDraw);
 
    code = GL.GetError();
 
    GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
    GL.BindBuffer(BufferTarget.ElementArrayBuffer, 0);
 
    code = GL.GetError();
 
    //Now create the vertex array object (VAO) so we can send the correct VBOs to the shader
    GL.GenVertexArrays(1, out mVertexArray);
    GL.BindVertexArray(mVertexArray);
 
    code = GL.GetError();
 
    int posLoc = GL.GetAttribLocation(shader.Program, "inPosition");
    int normalLoc = GL.GetAttribLocation(shader.Program, "inNormal");
    int texLoc = GL.GetAttribLocation(shader.Program, "inTexCoord");
 
    GL.EnableVertexAttribArray(posLoc);
    GL.BindBuffer(BufferTarget.ArrayBuffer, mPositionBuffer);
    GL.VertexAttribPointer(posLoc, 3, VertexAttribPointerType.Float, true, Vector3.SizeInBytes, 0);
 
    code = GL.GetError();
 
    GL.EnableVertexAttribArray(normalLoc);
    GL.BindBuffer(BufferTarget.ArrayBuffer, mNormalBuffer);
    GL.VertexAttribPointer(normalLoc, 3, VertexAttribPointerType.Float, true, Vector3.SizeInBytes, 0);
 
    code = GL.GetError();
 
    GL.EnableVertexAttribArray(texLoc);
    GL.BindBuffer(BufferTarget.ArrayBuffer, mTexCoordBuffer);
    GL.VertexAttribPointer(texLoc, 2, VertexAttribPointerType.Float, true, Vector2.SizeInBytes, 0);
 
    code = GL.GetError();
 
    GL.BindBuffer(BufferTarget.ElementArrayBuffer, mIndexBuffer);
 
    GL.BindVertexArray(0);
}

vertex shader :

#version 150
 
uniform mat4 proj;
uniform mat4 view;
uniform mat4 world;
 
in vec3 inPosition;
in vec3 inNormal;
in vec2 inTexCoord;
 
out vec2 texCoord;
 
void main()
{
	//for later use
	vec3 lightDir = vec3(1,1,1);
	float dot = dot( inNormal, lightDir);
	gl_Position = proj * view * world * vec4(inPosition,1.0);
	texCoord = inTexCoord;
}

Fragment shader:

#version 150
 
uniform sampler2D difuseMap;
 
 
in vec2 texCoord;
out vec4 outColor;
 
void main(void)
{ 
	outColor = texture2D( difuseMap, texCoord );
}

This time around I changed my names so I had unique names for the incoming texCoords.
If both the vertex shader and the fragment shader had the variable in vec2 texCoord;
I got weird results. Note that I removed inNormal from the fragment shader; I believe that's why the positions got the wrong data.
If I send my normals to the position slot, the result looks like a small ball of messed-up triangles :)

So the lesson I learned from this is to be careful with variable names inside the shaders; the same names in both fragment and vertex shader will not give an error, just bad results.
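To summarize the naming rule: in GLSL 1.50 a vertex shader out and the corresponding fragment shader in are matched by name and type, and those names should not collide with the vertex attribute names. A minimal sketch of the interface:

```glsl
// --- Vertex shader ---
#version 150
in vec2 inTexCoord;   // vertex attribute: one name...
out vec2 texCoord;    // ...varying to the fragment shader: a different name

void main()
{
    texCoord = inTexCoord;
    // gl_Position assignment etc. omitted for brevity
}

// --- Fragment shader ---
#version 150
in vec2 texCoord;     // must match the vertex shader's out by name and type
out vec4 outColor;

void main()
{
    // sample the diffuse map with texCoord here
    outColor = vec4(texCoord, 0.0, 1.0);
}
```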

Thank you for your help!