
Re: Where is glVertexAttribIPointer that doesn't take a Buffer?

Posted by Jeff Quigley on Sep 03, 2012; 1:29am
URL: https://forum.jogamp.org/Where-is-glVertexAttribIPointer-that-doesn-t-take-a-Buffer-tp4026005p4026009.html

Okay, done.

I already found a workaround for my main problem, so I don't need you to answer my other question.  But if you are interested, then read on.  :)

>What is the phenomenon .. i.e. how you determine it's refused?
>No VBO buffer bound Exception ?

Nothing crashes or reports errors. The GL accepts my int[] full of attribute data, and then in the vertex shader, the ivec2 that's supposed to take its two values from the array ends up with a weirdly altered version of those values. 0 in the array becomes 0 in the vector, but other values get their bits shifted around strangely. I tried putting the number 0b11110111011 in the array, and when it reached the shader its bits were something like 01000100111101110110000000000000 (note the many extra 0's at the right end, and the two extra 1's near the left). I thought endianness might be the problem, but when I told my IntBuffer to use the opposite endianness the problem persisted.

I figured I just had some offsets wrong somewhere, but in that case, telling glVertexAttribPointer() that I was sending GL_FLOATs should not have worked, right? I'm mystified that I can send ints while telling it I'm sending floats and end up with the correct ints in the shader. It only sees a Buffer, right, not an IntBuffer or a FloatBuffer? So I would expect that if I sent ints and said I was sending floats, the library would interpret the int bits as floats and produce some strange floats, which maybe would then get cast back to ints so they could fit into the ivec2... or else it should just crash? I don't understand why GL_FLOAT works, and I don't understand why GL_INT and GL_UNSIGNED_INT did those weird shiftings.
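For what it's worth, here's a quick plain-JDK check (just my own sketch, nothing JOGL-specific; the class name is made up) comparing the raw bits of that int with the bits of the same number after a conversion to float:

public class BitCheck {
    public static void main(String[] args) {
        int value = 0b11110111011; // 1979, the value I put in the array

        // The int's own bits, as stored in the array.
        System.out.println(Integer.toBinaryString(value));
        // prints: 11110111011

        // The bits of 1979.0f, i.e. what would land in the ivec2 if
        // something converted the int to a float along the way and the
        // shader then read the float's raw bits back as an integer.
        System.out.println(Integer.toBinaryString(Float.floatToIntBits((float) value)));
        // prints: 1000100111101110110000000000000
        // (toBinaryString drops the leading 0; with it restored, that's
        // 01000100111101110110000000000000, the pattern I saw)
    }
}

The second printout matches the "shifted" pattern that reached my shader, so maybe the int is getting converted to a float somewhere along the pipeline, though I don't know where that would happen.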