I've been messing around with 3D rendering recently and ran into trouble when trying to apply transformations in a vertex shader.
Usually I do matrix transformations with the standard fixed-function OpenGL commands (glScalef, glRotatef, glTranslatef), but those don't work when using shaders.
My attempted fix was to apply all the transformations as I normally would, then read the current matrix back from OpenGL and pass it to the shader as a uniform. In the vertex shader, I multiply the position by that matrix to get the transformed vertex.
However, this didn't work properly and produced some pretty wonky results.
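For context, the fixed-function calls just multiply 4x4 matrices onto the current matrix stack, so the matrix you read back is their product. Here's a rough numpy sketch of the equivalent math (the helper names are mine, not OpenGL's):

```python
import numpy as np

def translate(x, y, z):
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

def scale(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

# glTranslatef(1, 0, 0) followed by glScalef(2, 2, 2) composes as T @ S:
# the call issued last is the one applied to the vertex first.
M = translate(1, 0, 0) @ scale(2, 2, 2)

v = np.array([1.0, 0.0, 0.0, 1.0])   # vertex at (1, 0, 0)
print(M @ v)                          # scaled to (2, 0, 0), then moved to (3, 0, 0)
```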

Here is my shader code:
#version 330

in vec3 position;
uniform mat4 transform;
out vec4 color;

void main()
{
    gl_Position = transform * vec4(position, 1.0);
    // Remap z from [-1, 1] to [0, 1] to visualize depth as a grayscale color.
    float depth = (gl_Position.z + 1.0) / 2.0;
    color = vec4(depth, depth, depth, 1.0);
}

I just can't seem to figure out what I'm doing wrong, so I decided to come over here to see if you guys had any ideas.

I reworked my renderer based on the example you gave me, but I still get problems.
The issue seems to be linked to the camera's transformation.
One interesting thing I discovered is that in the shader, z coordinates must end up inside the range -1 to 1. Since I'm not applying a projection matrix, w stays 1, so this range is enforced directly on z, and the camera transformation pushes the shape outside it, which makes the object only viewable near the origin of the world.
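You can see that clipping numerically. This sketch (plain numpy, matrices hand-built for illustration) applies only a camera-style translation with no projection, so w stays 1 and z is compared against [-1, 1] directly:

```python
import numpy as np

# A view-style matrix that moves the world 5 units along -z
# (the camera backing up), with no projection applied afterwards.
view = np.eye(4)
view[2, 3] = -5.0

v = np.array([0.0, 0.0, 0.0, 1.0])   # vertex at the world origin
clip = view @ v
print(clip)   # z = -5 while w = 1, so z/w = -5: outside [-1, 1], clipped
```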

Adjust your model-view matrix to obtain the desired effect. If your object goes beyond the view frustum, only the part still inside the frustum will be visible; everything outside it is clipped away.
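In clip space, the test the hardware applies is each coordinate against w. A minimal sketch of that rule (my own helper, not an OpenGL call):

```python
import numpy as np

def inside_frustum(clip):
    # A clip-space point survives clipping iff -w <= x, y, z <= w.
    x, y, z, w = clip
    return all(-w <= c <= w for c in (x, y, z))

print(inside_frustum(np.array([0.0, 0.0, 0.5, 1.0])))    # inside
print(inside_frustum(np.array([0.0, 0.0, -5.0, 1.0])))   # outside, clipped
```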

Thank you! I started researching model-view-projection matrices and found out what I was doing wrong:
it turns out that all I had to do was incorporate the projection matrix into my calculations, and now everything works great!
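In case it helps anyone else, here's roughly the math that fixed it, sketched in numpy (the perspective helper mirrors the matrix gluPerspective builds; the specific numbers are just examples):

```python
import numpy as np

def perspective(fovy_deg, aspect, near, far):
    # Maps eye-space z in [-near, -far] to NDC z in [-1, 1]
    # after the divide by w (which ends up holding -eye_z).
    f = 1.0 / np.tan(np.radians(fovy_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

proj = perspective(60.0, 16 / 9, 0.1, 100.0)
view = np.eye(4)
view[2, 3] = -5.0          # camera 5 units back, same move that broke before
model = np.eye(4)

clip = proj @ view @ model @ np.array([0.0, 0.0, 0.0, 1.0])
ndc_z = clip[2] / clip[3]
print(ndc_z)   # now lands inside [-1, 1], so the vertex is no longer clipped
```

With the projection in place, the divide by w is what brings depth back into range, which is exactly what the translation-only version was missing.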