Posted by philjord on Jul 27, 2022; 9:56pm
URL: https://forum.jogamp.org/how-to-transform-objects-not-using-Transform3D-tp4041515p4041801.html
This is certainly a tricky request: Java3D and the underlying OpenGL (JOGL) are 100% dedicated to the tasks of drawing triangles, sorting out what is in front by z order, determining which faces are front facing, and implementing Phong-style lighting models when rendering faces.
So we can turn Z-buffering off with some calls on the Shape3D's appearance's rendering attributes, like so:
RenderingAttributes ra = new RenderingAttributes();
ra.setDepthBufferEnable(false); // triangles are no longer z-tested
app.setRenderingAttributes(ra); // app is the Shape3D's Appearance
Now the triangles are rendered in whatever order they are submitted, with the last ones drawn ending up on top.
We can also turn off back-face culling like this:
PolygonAttributes pa = new PolygonAttributes();
pa.setCullFace(PolygonAttributes.CULL_NONE); // render both sides of every triangle
app.setPolygonAttributes(pa);
Now things get complex, because we are deliberately trying NOT to use Java3D yet still do some of its work ourselves. What we want is "mixed-mode immediate rendering", a term meaning: we decide what geometry to render, when, and if at all. Compare that to the normal "retained mode", where Java3D manages all of that for us.
More info here:
https://docs.oracle.com/cd/E17802_01/j2se/javase/technologies/desktop/java3d/forDevelopers/j3dguide/Immediate.doc.html
I suggest you read that article before continuing.
So to use mixed mode we need to:
1/ Create our own Canvas3D
2/ Put that Canvas3D on screen (just simple pure java AWT code)
3/ Implement the renderField callback method of Canvas3D
4/ Tell the renderer when we have changed data and should re-render the scene
Here is the code: notusingtransform3D.java
In there you'll see the new Canvas3D with the renderField method (and a postRender method that's not mandatory).
renderField could have been called render, but the field being rendered might be the left eye, the right eye, or both, because of VR support.
In renderField you can see we are now using only an Appearance and draw(Geometry); this means Shape3D is no longer needed to group them up. In fact, all we need is a bunch of Geometries and one or more Appearances.
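To make that concrete, here is a minimal sketch of what such a Canvas3D subclass can look like (the class name and the geometry list are illustrative; the real thing is in the attached notusingtransform3D.java):

import org.jogamp.java3d.Appearance;
import org.jogamp.java3d.Canvas3D;
import org.jogamp.java3d.Geometry;
import org.jogamp.java3d.GraphicsContext3D;
import java.awt.GraphicsConfiguration;
import java.util.List;

// Mixed-mode canvas: Java3D still renders the retained scene graph,
// then calls renderField each frame so we can draw extra geometry ourselves.
public class ImmediateCanvas3D extends Canvas3D {

    private final Appearance appearance;     // z-buffer/culling already disabled on this
    private final List<Geometry> geometries; // the faces/triangles we manage by hand

    public ImmediateCanvas3D(GraphicsConfiguration config,
                             Appearance appearance, List<Geometry> geometries) {
        super(config);
        this.appearance = appearance;
        this.geometries = geometries;
    }

    @Override
    public void renderField(int fieldDesc) {
        // fieldDesc says which eye we are rendering (FIELD_LEFT, FIELD_RIGHT
        // or FIELD_ALL); on a normal mono display it is FIELD_ALL.
        GraphicsContext3D gc3d = getGraphicsContext3D();
        gc3d.setAppearance(appearance);
        for (Geometry g : geometries) {
            gc3d.draw(g); // immediate-mode draw: no Shape3D needed
        }
    }
}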
Appearances can have lighting set up, textures, and all sorts of cool stuff; for now we are using a basic one that just turns off the z-buffer, turns off culling, and renders per-vertex colors.
Below the new Canvas3D you'll see the new Appearance setup code, then some Shape3Ds being removed.
Notice the legend is still in the scene graph, so we are rendering regular retained-mode nodes "mixed" with our selected geometries.
Then there is a weird flush() method wired into our key listener; it tells the renderer to redraw the scene, and we call it after every geometry update. To be honest we really want the renderer to run continuously, but that was considered wasteful, so in mixed or pure immediate mode the default behaviour is to only update when told to.
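As a sketch of how that update path can look (moveSelectedGeometry() and flush() here stand in for the helpers in the attached notusingtransform3D.java; we don't reproduce their bodies):

import java.awt.event.KeyAdapter;
import java.awt.event.KeyEvent;

// Mutate our hand-managed geometry, then ask the renderer for a new frame.
canvas.addKeyListener(new KeyAdapter() {
    @Override
    public void keyPressed(KeyEvent e) {
        if (e.getKeyCode() == KeyEvent.VK_LEFT) {
            moveSelectedGeometry(-0.1f, 0f, 0f); // hypothetical geometry updater
            flush(); // tell Java3D the scene changed and needs re-rendering
        }
    }
});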
Finally, a bit of code to throw the Canvas3D on screen: very vanilla, very AWT.
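For reference, a minimal version of that setup could look like this, assuming the ImmediateCanvas3D sketch above and a retained BranchGroup called retainedScene (SimpleUniverse is just the convenient way to wire the canvas to a View):

import java.awt.Frame;
import org.jogamp.java3d.utils.universe.SimpleUniverse;

ImmediateCanvas3D canvas = new ImmediateCanvas3D(
        SimpleUniverse.getPreferredConfiguration(), appearance, geometries);
SimpleUniverse universe = new SimpleUniverse(canvas); // attaches the canvas to a View
universe.getViewingPlatform().setNominalViewingTransform(); // step back so the origin is visible
universe.addBranchGraph(retainedScene); // retained nodes (like the legend) still live here

Frame frame = new Frame("Mixed-mode immediate rendering");
frame.add(canvas);
frame.setSize(800, 600);
frame.setVisible(true);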
Now, just touching on the postRender method above: this is the easiest way to get 2D overlay interfaces going, as it allows us to draw on the rendered image before it is displayed on screen. 3D APIs hate this, and it costs at least a halving of performance, so it shouldn't be used in high-performance situations, but today, with 2 cubes, we are OK with it.
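A minimal sketch of such a postRender overlay, using Canvas3D's built-in J3DGraphics2D (the flush(true) call at the end is what actually pushes the 2D drawing onto the frame):

import org.jogamp.java3d.J3DGraphics2D;
import java.awt.Color;

@Override
public void postRender() {
    // Draw a 2D overlay directly onto the freshly rendered frame.
    J3DGraphics2D g2d = getGraphics2D();
    g2d.setColor(Color.WHITE);
    g2d.drawString("2 cubes, immediate mode", 10, 20);
    g2d.flush(true); // required: commits the 2D drawing to the canvas
}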
Now back to you: you need to create HEAPS of geometries, one for each face of the cubes (so 2 cubes * 6 faces = 12 geometries), or even one for each triangle of the cubes. Create these by breaking the cube code into small pieces, using the unchanged coord float array but limiting the number of coord indices given to each geometry (6 indices per face, i.e. 2 triangles' worth).
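One way to cut the cube up like that (the corner layout and the faceIndices table are whatever your existing cube code uses; only the shared-8-vertex, 6-index-per-face split matters):

import org.jogamp.java3d.GeometryArray;
import org.jogamp.java3d.IndexedTriangleArray;

// One small geometry per cube face: the same 8-corner coord array every
// time, but only the 6 indices (2 triangles) belonging to that face.
// Add GeometryArray.COLOR_3 to the format if you want per-vertex colors.
static IndexedTriangleArray makeFace(float[] cubeCoords, int[] faceIndices) {
    IndexedTriangleArray face = new IndexedTriangleArray(
            8, GeometryArray.COORDINATES, 6);
    face.setCoordinates(0, cubeCoords);        // all 8 corners, unchanged
    face.setCoordinateIndices(0, faceIndices); // 6 indices = 2 triangles
    return face;
}
// Call this 6 times per cube to build the 12 geometries for renderField.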
Then in renderField you decide IF you should render each face, using front/back facing tests, and then decide what order to render the faces in to get the scene drawn back to front, so everything correctly occludes the right thing.
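A sketch of both decisions, assuming you have already transformed the face corners into eye space (the Face record and its names are illustrative):

import org.jogamp.vecmath.Point3f;
import org.jogamp.vecmath.Vector3f;
import java.util.Comparator;
import java.util.List;

// Illustrative face record: three corners already in eye space.
record Face(Point3f a, Point3f b, Point3f c) {
    float centerZ() { return (a.z + b.z + c.z) / 3f; }
}

// Back-face test: the winding normal of a front-facing triangle points
// toward the camera, which in eye space looks down -Z.
static boolean isFrontFacing(Face f) {
    Vector3f ab = new Vector3f(f.b().x - f.a().x, f.b().y - f.a().y, f.b().z - f.a().z);
    Vector3f ac = new Vector3f(f.c().x - f.a().x, f.c().y - f.a().y, f.c().z - f.a().z);
    Vector3f n = new Vector3f();
    n.cross(ab, ac);
    return n.z > 0f;
}

// Painter's algorithm: draw the farthest faces first
// (most negative eye-space z), so nearer faces paint over them.
static void sortBackToFront(List<Face> faces) {
    faces.sort(Comparator.comparingDouble(Face::centerZ));
}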
The requirement to implement a Phong lighting model will be almost self-defeating, as 3D APIs literally do this for you; you can't stop them and do it for them, as that isn't a facility they need to provide.
The only way to implement the lighting model yourself would be to get a callback from the 3D API for each pixel, do that pixel's color calculation, and hand the color back to the renderer. This is not only possible but the very heart and soul of 3D programming, because it is writing a (fragment) shader program, the key purpose of which is to give the graphics card a program to run per pixel to compute the colors. However, that is a huge step away from where we are now.
(As an aside, Java3D 1.7 fully supports this and does some cool things, like auto-generating the lighting program based on the parameters passed in.)
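For flavor only, here is roughly what hanging your own GLSL program on an appearance looks like with Java3D's shader classes (this is not the auto-generated lighting the aside mentions, and the shader source is a trivial stand-in, not a Phong implementation):

import org.jogamp.java3d.GLSLShaderProgram;
import org.jogamp.java3d.Shader;
import org.jogamp.java3d.ShaderAppearance;
import org.jogamp.java3d.ShaderProgram;
import org.jogamp.java3d.SourceCodeShader;

// Pass-through vertex shader and a flat-orange fragment shader.
String vertexSource =
    "void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }";
String fragmentSource =
    "void main() { gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); }";

SourceCodeShader vs = new SourceCodeShader(
        Shader.SHADING_LANGUAGE_GLSL, Shader.SHADER_TYPE_VERTEX, vertexSource);
SourceCodeShader fs = new SourceCodeShader(
        Shader.SHADING_LANGUAGE_GLSL, Shader.SHADER_TYPE_FRAGMENT, fragmentSource);

ShaderProgram program = new GLSLShaderProgram();
program.setShaders(new Shader[] { vs, fs });

// A ShaderAppearance replaces the fixed-function lighting with our program.
ShaderAppearance shaderApp = new ShaderAppearance();
shaderApp.setShaderProgram(program);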
It might be easier for you to comment out the app.setPolygonAttributes(pa); line and get the render order correct first, then try the front-facing calculations once you know the order is right.
Good luck, I'm happy to give advice at any moment!