Hi all!
Can anyone point me to an example of allocating an in-memory texture with compression?
Here is my uncompressed example (from World Wind Java):
TextureData td = new TextureData(
    gl.getGLProfile(),      // GL profile
    GL.GL_RGBA8,            // internal format
    width, height,          // dimensions
    0,                      // border
    GL.GL_RGBA,             // pixel format
    GL.GL_UNSIGNED_BYTE,    // pixel type
    this.isUseMipmaps(),    // mipmap
    false, false,           // dataIsCompressed, mustFlipVertically
    null, null);            // buffer, flusher
Texture t = TextureIO.newTexture(td);
t.bind(gl);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER,
    this.isUseMipmaps() ? GL.GL_LINEAR_MIPMAP_LINEAR : GL.GL_LINEAR);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, GL.GL_CLAMP_TO_EDGE);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, GL.GL_CLAMP_TO_EDGE);
I have tried the same code with a compressed internal format, and it used to work with DXT3/DXT5/BPTC until this year (a driver update, I suppose, but was I wrong?):
TextureData td = new TextureData(
    gl.getGLProfile(),      // GL profile
    GL.GL_COMPRESSED_RGBA_S3TC_DXT3_EXT, // internal format
    width, height,          // dimensions
    0,                      // border
    GL.GL_RGBA,             // pixel format
    GL.GL_UNSIGNED_BYTE,    // pixel type
    this.isUseMipmaps(),    // mipmap
    false, false,           // dataIsCompressed, mustFlipVertically
    null, null);            // buffer, flusher
Texture t = TextureIO.newTexture(td);
t.bind(gl);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER,
    this.isUseMipmaps() ? GL.GL_LINEAR_MIPMAP_LINEAR : GL.GL_LINEAR);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, GL.GL_CLAMP_TO_EDGE);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, GL.GL_CLAMP_TO_EDGE);
(I didn't set dataIsCompressed=true or assign the buffer.) Mipmapping is off. It used to work fine on NVIDIA cards (there were artifacts on AMD's), but now it shows complete garbage like this:
PS: The garbage parts are the compressed texture tiles. I can still load DXT3/DXT5 from files without problems.
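As an aside, if you ever upload precompressed data yourself (e.g. via glCompressedTexImage2D) rather than letting the driver compress, the required buffer size follows from the S3TC block layout: 4x4 texel blocks, 8 bytes per block for DXT1 and 16 bytes per block for DXT3/DXT5. A minimal sketch of that arithmetic in plain Java (the class and method names are my own, not from WWJ):

```java
public class S3tcSize {
    /**
     * Bytes needed for one mip level of an S3TC-compressed texture.
     * blockBytes is 8 for DXT1, 16 for DXT3/DXT5.
     */
    static int compressedSize(int width, int height, int blockBytes) {
        int blocksWide = (width + 3) / 4;   // round up to whole 4x4 blocks
        int blocksHigh = (height + 3) / 4;
        return blocksWide * blocksHigh * blockBytes;
    }

    public static void main(String[] args) {
        // A 256x256 DXT3 tile: 64 x 64 blocks x 16 bytes = 65536 bytes
        System.out.println(compressedSize(256, 256, 16));
        // Dimensions that are not multiples of 4 still round up to whole blocks
        System.out.println(compressedSize(5, 5, 16));
    }
}
```

Note the rounding: a 5x5 image still occupies 2x2 blocks, the same as an 8x8 one.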
I added error checks everywhere and located the problem: glFramebufferTexture2D.
The texture is meant to be used with a framebuffer (to store whatever is painted onto it), and glFramebufferTexture2D raises "invalid framebuffer operation" when the given texture's internal format is DXT3/DXT5, but no error when it's plain RGBA8.
But why? It doesn't seem to be documented anywhere.
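One way to make such failures visible early is to check glCheckFramebufferStatus right after attaching the texture and print a readable name instead of a raw enum. A small self-contained helper (the status values are the hex enums from the OpenGL specification, written out so this compiles without JOGL; the class and method names are mine):

```java
public class FboStatus {
    // Enum values from the OpenGL specification (GL_FRAMEBUFFER_* constants)
    static final int GL_FRAMEBUFFER_COMPLETE = 0x8CD5;
    static final int GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT = 0x8CD6;
    static final int GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT = 0x8CD7;
    static final int GL_FRAMEBUFFER_UNSUPPORTED = 0x8CDD;

    /** Map a glCheckFramebufferStatus(GL_FRAMEBUFFER) result to a readable name. */
    static String statusName(int status) {
        switch (status) {
            case GL_FRAMEBUFFER_COMPLETE:
                return "GL_FRAMEBUFFER_COMPLETE";
            case GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT:
                return "GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT";
            case GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT:
                return "GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT";
            case GL_FRAMEBUFFER_UNSUPPORTED:
                return "GL_FRAMEBUFFER_UNSUPPORTED";
            default:
                return String.format("unknown status 0x%04X", status);
        }
    }

    public static void main(String[] args) {
        System.out.println(statusName(0x8CDD));
    }
}
```

In the rendering code you would call int status = gl.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER); after glFramebufferTexture2D and log statusName(status) whenever it isn't GL_FRAMEBUFFER_COMPLETE.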
Here is the simplified code. It turns out WWJ uses TextureData only for empty texture allocation, so I can skip it to avoid the confusion.
int internalFormat = GL.GL_COMPRESSED_RGBA_S3TC_DXT3_EXT;
int pixelFormat = GL.GL_RGBA;
GL gl = dc.getGL();
Texture t = new Texture(0, GL.GL_TEXTURE_2D, width, height, width, height, false);
t.bind(gl);
gl.glTexImage2D(t.getTarget(), 0, internalFormat, width, height, 0,
    pixelFormat, GL.GL_UNSIGNED_BYTE, null);
gl.glTexParameteri(t.getTarget(), GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
gl.glTexParameteri(t.getTarget(), GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
gl.glTexParameteri(t.getTarget(), GL.GL_TEXTURE_WRAP_S, GL.GL_CLAMP_TO_EDGE);
gl.glTexParameteri(t.getTarget(), GL.GL_TEXTURE_WRAP_T, GL.GL_CLAMP_TO_EDGE);
All right, it turns out a compressed texture is not supposed to work as an FBO render target: compressed internal formats are not color-renderable, so the framebuffer ends up incomplete. It was probably just luck that it worked last year.
UPDATE:
Found the correct way of doing it: convert the texture to GL_COMPRESSED_RGBA (this works for DXTC too).
Basically I just have to use a renderbuffer instead of a texture as the color attachment, then copy the result into the compressed texture afterwards with glCopyTexImage2D. It's slower, but it works.
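For anyone landing here later, the renderbuffer-plus-copy path looks roughly like this. This is an untested sketch assuming a current JOGL 2 context, an already-bound FBO, and an existing compressed texture object (textureId, width, height are placeholders):

```java
// 1. Attach an RGBA8 renderbuffer (which IS color-renderable)
//    instead of the compressed texture.
int[] rb = new int[1];
gl.glGenRenderbuffers(1, rb, 0);
gl.glBindRenderbuffer(GL.GL_RENDERBUFFER, rb[0]);
gl.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_RGBA8, width, height);
gl.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0,
        GL.GL_RENDERBUFFER, rb[0]);

// 2. ...render into the FBO as usual...

// 3. Copy the framebuffer contents into the compressed texture;
//    the driver performs the compression during the copy.
gl.glBindTexture(GL.GL_TEXTURE_2D, textureId);
gl.glCopyTexImage2D(GL.GL_TEXTURE_2D, 0,
        GL.GL_COMPRESSED_RGBA_S3TC_DXT3_EXT, 0, 0, width, height, 0);
```

The extra cost comes from step 3: glCopyTexImage2D re-reads the framebuffer and compresses on every copy, which is why this is slower than rendering straight into the texture.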
Administrator
Thank you for sharing your findings, it wasn't obvious to me.
Julien Gouesse | Personal blog | Website