Getting close on the CL/GL interop test. I've got a textured triangle up, and am creating the CL image object correctly (there are lots of ways to make this SIGSEGV!). Now I just need to enqueue the kernel to write the texture and see what happens.
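For context, the creation step looks roughly like this in host C code. This is only a sketch (variable names like `context` and `gl_texture_id` are illustrative, and it assumes a CL context already created with GL-sharing properties); the comments note the traps that can crash:

```c
/* Sketch of creating a CL image from a GL texture (OpenCL 1.1 API).
 * Common crash traps: the GL context must be current, and must be the
 * same context the CL context was created to share with; the texture
 * must be complete (consistent mipmap/filter state) before this call. */
cl_int err;
cl_mem cl_tex = clCreateFromGLTexture2D(
    context,            /* CL context created with GL-sharing properties */
    CL_MEM_WRITE_ONLY,  /* the kernel only writes the texture */
    GL_TEXTURE_2D,      /* texture target */
    0,                  /* mip level */
    gl_texture_id,      /* GLuint from glGenTextures() */
    &err);
/* Always check err: a bad context/texture combination may fail here
 * instead of crashing later. Then, around each kernel run:
 * glFinish(), clEnqueueAcquireGLObjects(), enqueue the kernel,
 * clEnqueueReleaseGLObjects(), clFinish(). */
```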
I finished the CL/GL interoperability test, and have some good news!
Initially it didn't work on my GeForce GTX 660. So after working on it a while and making sure I hadn't missed anything obvious, I put in my old ATI Radeon HD 5450 that I use for testing. That didn't work either! But it failed in a different way -- the texture was getting set to black.
So I tried changing write_imageui() in the OpenCL kernel to write_imagei(), which didn't work any better. But then when I tried write_imagef(), it worked like a charm! I also tried it on my MacBook Air's GeForce 320M, and it works on that too, so it's not just ATI cards.
But why does this work? I created the OpenGL texture as GL_RGBA with type GL_UNSIGNED_BYTE; surely that should be written with write_imageui(). But if you look at table 9.4 on page 324 of the OpenCL 1.1 spec (http://www.khronos.org/registry/cl/specs/opencl-1.1.pdf), it says that the GL internal format GL_RGBA maps to the CL format CL_RGBA with channel type CL_UNORM_INT8. And in the spec for write_image (http://www.khronos.org/registry/cl/sdk/1.1/docs/man/xhtml/write_image.html), the only function that writes channel type CL_UNORM_INT8 is write_imagef()! It even says "Appropriate data format conversion will be done to convert channel data from a floating-point value to actual data format in which the channels are stored." So the cards are doing exactly what the spec says; we just weren't using the API correctly.
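For reference, the fix boils down to a kernel shaped roughly like this. This is an illustrative OpenCL C sketch, not the actual test kernel:

```c
/* OpenCL C kernel sketch (illustrative names). For a GL_RGBA /
 * GL_UNSIGNED_BYTE texture, the CL image view is CL_RGBA /
 * CL_UNORM_INT8, so it must be written with write_imagef(),
 * not write_imageui(). */
__kernel void fill_texture(write_only image2d_t tex)
{
    int2 coord = (int2)(get_global_id(0), get_global_id(1));
    /* Values are floats in [0, 1]; the implementation converts
     * them to the stored UNORM_INT8 bytes, per the spec. */
    float4 color = (float4)(1.0f, 0.0f, 0.0f, 1.0f); /* opaque red */
    write_imagef(tex, coord, color);
}
```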
write_imageui() can write to channel type CL_UNSIGNED_INT8, but apparently that's not compatible with CL_UNORM_INT8, even though the binary values in them should be exactly equal. Perhaps this is because normalized integer textures in GL resolve to floats inside GL shaders, whereas integer textures in GL resolve to integers inside GL shaders.
Please give this a try on your hardware and let me know what happens. I'll write a unit test for JOCL to preserve this solution.
Sorry, I was busy for the last week.
So thanks a lot, Wade, your explanation makes total sense.
With write_imagef() you write values in [0, 1] to CL_UNORM_INT8 (internal format GL_RGBA/GL_RGBA8),
and arbitrary values to CL_FLOAT (if you defined the texture with internal format GL_RGBA32F).
Could you please post which Windows driver you use for your NVIDIA GeForce GTX 660?
I am still getting an unknown exception from createFromGLTexture2d().
Do you use a computer with two graphics chips?
I think I'm using the Nvidia 335.23 WHQL drivers, on 64-bit Windows 8. I can't check directly until I pull the AMD card out and replace the Nvidia card :) My PC only has one video card, with no built-in graphics, so the only reported platform and device are Nvidia's.
I've never been able to duplicate the error on createFromGLTexture2d(), either on Mac, PC, Nvidia, or AMD card, so I'm not sure what's going on there. Once I finish writing the proper JUnit test of texture interop, I can run it on more types of system to make sure it really works on all of them.
Maybe this problem is specific to the Intel HD4600 you have? Do you have only integrated graphics, or do you also have a separate video card?
Thanks, I'd like to try the JUnit test on my computer with hybrid graphics as well.
It is a 64-bit Windows 7 SP1 notebook with an Intel Haswell CPU, Intel HD 4600 integrated graphics, and an NVIDIA GTX 765M, with exactly the driver version you mentioned.
I switch everything manually (Preferred graphics processor: Integrated graphics / High-performance NVIDIA processor)
and select the CL platform with the GPU device.
A notebook with hybrid graphics seems to be missing in your test suite :-)
I haven't tried this on the JogAmp cluster machines yet; I'm waiting for Jenkins to be restarted. If the problem the OP is seeing is due to driver issues on Intel integrated graphics, our cluster won't tell us (since we don't have such a card). But this test works on enough other kinds of cards that if it doesn't work for the OP, it seems likely that it's Intel's fault and not ours :)
Yes, that would be very helpful :) I'm not sure why the OP gets an error when trying to create the CL image object, it might be that their driver just doesn't support interop on some texture formats. Hopefully I'll have this unit test checked into the trunk this week so you can just download a nightly build, I'm just waiting for some previous checkins to get built first.
No problem, I'll be reviewing the build logs of the latest changes I put in to make sure everything is correct on all platforms. It looked like there was a little trouble where some builds timed out, so I need to double-check to make sure things are OK.
OK, the new texture interop test is pushed to master, and shows up in autobuild jocl-2.2-b958-20140414. It passes on all platforms except for Linux 32- and 64-bit with Nvidia cards, which is better than I expected. I'll debug those two cases next week. The failures aren't crashes at least, they're just texel value comparison failures.