Graphics Driver Crashes

Graphics Driver Crashes

lukej
Hello,

I've got a rather simple example using VBOs that seems to crash the graphics drivers on nearly every machine I run it on.  Granted, my array of vertices is rather large (3M x 3 floats), but I know I've rendered much larger data sets than this in the past.  The interesting thing is that I've tested this on a lot of different hardware (though all on Windows 7) and it seems to crash all of it.  I've tested an NVIDIA 9500 GT, an ATI Radeon HD 5700, and an NVIDIA Quadro FX 4800.  They all crash.  It is also worth noting that while I have the number of vertices set to 3M, many of the cards crashed with much lower numbers (some as low as 1M or less).

Does anything in this code look wrong?  Can anyone else replicate the crashes on their machine?

Here is a link to the pastebin with the code:
http://pastebin.com/U4SxD5Z7
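In case the pastebin goes away, the core of it boils down to something like the following. This is only a rough sketch of the setup described above (assuming GLEW for function loading, a bound VAO, and a shader with the position attribute at location 0), not necessarily the exact pastebin code:

// Rough sketch of the setup described above; assumes a current OpenGL context,
// GLEW for function loading, a bound VAO, and the position attribute at location 0.
#include <GL/glew.h>
#include <vector>

static const int   NUMBER_OF_VERTICES = 3000000;  // 3M points
static const float POINT_SIZE         = 64.0f;

GLuint createPointVbo()
{
    std::vector<float> vertices(NUMBER_OF_VERTICES * 3);  // x, y, z per point
    // ... fill 'vertices' with the actual point data ...

    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(float),
                 vertices.data(), GL_STATIC_DRAW);
    return vbo;
}

void drawPoints(GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);  // stride 0 = tightly packed
    glPointSize(POINT_SIZE);
    glDrawArrays(GL_POINTS, 0, NUMBER_OF_VERTICES);               // one large draw call
    glDisableVertexAttribArray(0);
}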

Thanks,
Luke

Re: Graphics Driver Crashes

cubus
It runs on a GTX 670 here.

Maybe it's caused by the 0 stride param in glVertexAttribPointer();
the stride should be 12 (3 * 4),
although the driver should calculate it if it's 0 ...
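i.e. something like this (just a sketch, assuming the attribute is three tightly packed floats at location 0):

// stride 0: the driver derives the stride from size and type (tightly packed)
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

// explicit stride: 3 floats * 4 bytes = 12 bytes from one vertex to the next
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), nullptr);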

Re: Graphics Driver Crashes

lukej
I just tried it with a stride of 12 - same result.

The documentation says that a stride of 0 means that it is tightly packed, which should be the same thing anyway.

Cubus, what OS were you running on?  Did you try dialing up the point size and/or the number of vertices to see if/when it crashes?  Not every card I tried crashed at the same place, but they did all eventually crash, and at what I thought was a pretty low number (3M vertices with a point size of 64 crashed all the cards I tested with).

Re: Graphics Driver Crashes

cubus
Yes, the documentation is a little bit strange. The stride is always the byte distance from the beginning of one element to the beginning of the next; 0 just forces the driver to calculate it from the attribute's size and type. However...
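The case where an explicit stride really matters is interleaved data, e.g. (just a sketch):

// Tightly packed positions only: stride 0 and stride 12 mean the same thing.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

// Interleaved position (3 floats) + color (4 floats): the stride must be the
// full vertex size (28 bytes); a stride of 0 would be wrong here.
const GLsizei vertexSize = 7 * sizeof(float);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, vertexSize, (const void*)0);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, vertexSize, (const void*)(3 * sizeof(float)));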

I'm running Vista 32-bit.

With POINT_SIZE = 64..1024 (and probably more) it worked with NUMBER_OF_VERTICES = 3000000 * 4 (with 3000000 * 5 it crashed).
With POINT_SIZE = 1 it worked with NUMBER_OF_VERTICES = 3000000 * 10 (I could not test higher than that because of limited RAM).

---------------------------
NVIDIA OpenGL Driver
---------------------------
The NVIDIA OpenGL driver lost connection with the display
driver due to exceeding the Windows Time-Out limit and is unable to continue.
The application must close.

Error code: 7
Would you like to visit http://nvidia.custhelp.com/cgi-bin/nvidia.cfg/php/enduser/std_adp.php?p_faqid=3007 for help?

Re: Graphics Driver Crashes

lukej
Ok, so I'm glad you got it to crash with a larger set of points, though that's a much larger set than it took on any of the cards I tested.

I'm familiar with the Windows Time-Out limit, and that is almost definitely what the problem is.  I'm just surprised I'm hitting it with such a small set of vertices (as low as 1M on some cards like the 9500 GT).  I've actually rendered sets of up to 160M points on some graphics cards, and I hadn't had any issues until just recently.  I'm not entirely sure what changed.  We suspected there may have been a Windows update that affected it, but we tried removing some updates that seemed suspect and there was no change.

Basically I just wanted to make sure that my code looked correct and that I wasn't doing anything obviously wrong.
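If it really is just the time-out, one workaround would be to split the single big glDrawArrays into smaller batches so that no single submission runs long enough to trip it. A rough sketch (the batch size is only a guess and would need tuning per card):

// Split one huge draw into smaller ranges so no single GPU submission runs
// long enough to trigger the Windows time-out (TDR). BATCH_SIZE is only a
// guess and would need tuning per card.
static const GLsizei BATCH_SIZE = 500000;

void drawPointsBatched(GLsizei totalVertices)
{
    for (GLsizei first = 0; first < totalVertices; first += BATCH_SIZE)
    {
        GLsizei count = totalVertices - first;
        if (count > BATCH_SIZE)
            count = BATCH_SIZE;
        glDrawArrays(GL_POINTS, first, count);
        glFinish();  // make each batch complete before queuing the next one
    }
}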


Re: Graphics Driver Crashes

EvilMax
Interesting issue. I have a similar problem with a shader. This GLSL program performs volume rendering. If I use a big dataset and/or render with significant oversampling, rendering resets the driver on intensive user input or at the first rendered frame. Sometimes it even hangs the driver and the PC, so only a hard reset works. But a similar program using a C++ renderer (VTK) does not suffer from these issues.

So, the main question is how to prevent driver reset/hang.

1. Is it possible without modifying the registry? How can the OpenGL rendering pipeline be changed so that the driver doesn't think the GPU has hung?
2. Is it possible to measure the time required to render a frame? (I'm thinking about dynamically changing the sampling rate.)
3. Is it possible to 'talk' between the GPU and the host for progress-detection purposes?

If something can be done, it would be good to have some kind of example or a link to a related project.
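For example, if timer queries and fence sync objects are the right direction for 2 and 3, would something like this sketch be a reasonable start? (Assumes an OpenGL 3.3 context; error handling omitted.)

#include <GL/glew.h>

GLuint timerQuery = 0;

void renderFrameTimed()
{
    if (timerQuery == 0)
        glGenQueries(1, &timerQuery);

    // Question 2: measure the GPU time spent on this frame with a timer query.
    glBeginQuery(GL_TIME_ELAPSED, timerQuery);
    // ... issue the volume-rendering draw calls here ...
    glEndQuery(GL_TIME_ELAPSED);

    // Question 3: a fence lets the host poll whether the GPU has reached this
    // point without blocking, which can serve as a coarse progress signal.
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    while (glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000) == GL_TIMEOUT_EXPIRED)
    {
        // still rendering; a progress indicator could be updated here
    }
    glDeleteSync(fence);

    // Read back the elapsed GPU time (in nanoseconds) and adapt the sampling
    // rate for the next frame if it gets close to the time-out limit.
    GLuint64 elapsedNs = 0;
    glGetQueryObjectui64v(timerQuery, GL_QUERY_RESULT, &elapsedNs);
    double elapsedMs = elapsedNs / 1.0e6;
    if (elapsedMs > 500.0)
    {
        // e.g. reduce the oversampling factor for the next frame (the policy here is just a guess)
    }
}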

P.S. Some details: Dell E6500, NVIDIA Quadro NVS 160M with 256 MB VRAM. Driver version: 340.62.