b224 vs b226 depth buffer issue

b224 vs b226 depth buffer issue

BrickFarmer
As previously mentioned, I've been trying to track down the change that caused my GL_POLYGON_OFFSET_FILL / depth buffer issue.  This was working fine for many years until recently.  I've narrowed it down to something that changed between b224 and b226 (b225 also, but that seems to be a partial build).  When I try to trace this back to Hudson, the dates/times don't seem to tie up with anything relevant, so I suspect the change was actually made one day earlier: 'choose and use graphics capability' (96af6c9bf2d683115996214cd895f9f9ef7ceea6)

Further testing has shown that prior to b225 a 24-bit depth buffer was selected automatically; after that, only a 16-bit buffer is selected.  So it seems the code fails to detect the best available?  If I manually request a 24-bit depth buffer, then things are working again :)
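
For reference, this is roughly how I'm requesting it now (a minimal sketch against the JOGL 2 javax.media.opengl API; package names and constructors may differ between builds):

    import javax.media.opengl.GLCapabilities;
    import javax.media.opengl.GLProfile;
    import javax.media.opengl.awt.GLCanvas;

    public class DepthRequestSketch {
        public static GLCanvas createCanvas() {
            // Explicitly request a 24-bit depth buffer instead of relying on the 16-bit default.
            final GLCapabilities caps = new GLCapabilities(GLProfile.getDefault());
            caps.setDepthBits(24);
            return new GLCanvas(caps);
        }
    }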

Re: b224 vs b226 depth buffer issue

Sven Gothel
Administrator
On Wednesday, December 15, 2010 13:01:54 BrickFarmer [via jogamp] wrote:
>
> As previously mentioned, I've been trying to track down the change that caused my GL_POLYGON_OFFSET_FILL /depth buffer issue.  This was working fine for many years until recently.  I've narrowed it down to something that changed between b224 and b226 (225 also, but that seems to be a partial build).  When I try to track this back to hudson, the dates/times don't seem to tie up with anything relevant.  So I'm suspecting the change was actually 1 day earlier. 'choose and use graphics capability' (96af6c9bf2d683115996214cd895f9f9ef7ceea6)
>
> Further testing has shown that prior to 225 a 24bit depth buffer was selected automatically, after that date only a 16bit buffer is selected.  So it seems like the code fails to detect the best available?  If I manually select 24bit in my request, then things are working again :)

A perfect analysis, good job.

Indeed, I have reduced the default depth value of GLCapabilities from 24 to 16
        2aa296771e3e8dd6cf027f27b0455d1803244bfe
since some GPUs don't support such a high value.

So .. this will be the default now, feel free to request a higher one :)

Sorry for the inconvenience.

~Sven

Re: b224 vs b226 depth buffer issue

BrickFarmer
I thought the code was designed to pick out the best bit depth on offer?  At least I saw something during my investigations that implied that, but I could be mistaken.

Re: b224 vs b226 depth buffer issue

Sven Gothel
Administrator
On Wednesday, December 15, 2010 13:50:05 BrickFarmer [via jogamp] wrote:
>
> I thought the code was designed to pick out the best bit depth on offer?  at least I saw something during my investigations that implied that, but I could be mistaken.

If you explicitly pass a CapsChooser .. that one may do that;
if not, and the native code recommends a format, we use that one.

The native recommendation is what the WGL/GLX ARB choose functions provide;
I thought this matches the native behavior best.

But you may override it with your own chooser ..
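
For illustration, a rough sketch of such a chooser that simply picks the deepest depth buffer on offer (assuming the List-based GLCapabilitiesChooser interface; the exact signature and package names may differ in your build):

    import java.util.List;

    import javax.media.nativewindow.CapabilitiesImmutable;
    import javax.media.opengl.GLCapabilitiesChooser;
    import javax.media.opengl.GLCapabilitiesImmutable;

    public class DeepestDepthChooser implements GLCapabilitiesChooser {
        // Pick the available pixel format with the largest depth buffer,
        // otherwise fall back to the window system's recommendation.
        public int chooseCapabilities(final CapabilitiesImmutable desired,
                                      final List<? extends CapabilitiesImmutable> available,
                                      final int windowSystemRecommendedChoice) {
            int chosen = windowSystemRecommendedChoice;
            int bestDepth = -1;
            for (int i = 0; i < available.size(); i++) {
                final CapabilitiesImmutable caps = available.get(i);
                if (caps instanceof GLCapabilitiesImmutable) {
                    final int depth = ((GLCapabilitiesImmutable) caps).getDepthBits();
                    if (depth > bestDepth) {
                        bestDepth = depth;
                        chosen = i;
                    }
                }
            }
            return chosen;
        }
    }

You would pass an instance of it alongside your GLCapabilities when creating the GLCanvas / GLWindow.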

And maybe .. we just have a bug :)

If you like, your analysis is very welcome, as always.

I'm currently trying to fix the Windows BITMAP bug (offscreen rendering if no GLPbuffer is available).

~Sven