how to transform objects not using Transform3D


how to transform objects not using Transform3D

Mr. Broccoli
Hi All,

I'm trying to figure out how I can transform elements that have already gone 'live', but without using Transform3D.
Searching for transformations in Java3D just points me to the Transform3D 'way', and I need to do the legwork myself.

What I do now is group Point3d objects into a LineArray, then into a Shape3D, and then add them to a BranchGroup.
This allows me to display these wireframe cubes on the screen.

I went through the APIs for SimpleUniverse, Canvas3D, Locale and BranchGroup, trying to find a method that would let me clear the display, transform the basic points in the background, and display the refreshed objects, but to no avail.

I tried using SimpleUniverse -> Locale -> BranchGroup and then calling SU.removeAllLocales() on a key press, but I'm missing something, because the window does not refresh and I still see the 'old' cubes. I don't know what to expect.
I couldn't find how this method is supposed to work in detail, except for a succinct description that just expands the method's name. The name is promising :)

I also tried the smash-and-grab method of just creating a new object, but it gets created in a new (default) frame.
Maybe somebody could tell me how to keep it in the same frame? I haven't done anything with frames so far, but that could do (even though it feels wrong). I don't need it to be ideal, just working.

Maybe there is a way to extract points back in an orderly manner after placing them in a BranchGroup?

Attached is what I have now.
It's not finished; I'm struggling because I lack a general concept of how to avoid using Transform3D.

I would be thankful if somebody could point me in the right direction.

Commenting on attached files:

On pressing 'a', a new object gets created with a new universe and a small translation.
On pressing 'c', I was trying to see what removeAllLocales() really does, but nothing visible happens.

There's a bunch of RealMatrix methods in tuval1 that I intend to use for transformations, but right now they don't matter; tuval1 is just there for 'new tuval7()'.

tuval1.java
tuval7.java
cube_creation.java


Kind regards, Mr. B.


Re: how to transform objects not using Transform3D

philjord
Hi,
I'm currently moving house so I can't give a good reply.

If the basic concept of what you want to do is to transform/alter the vertices of a geometry, then that can be done by simply changing all the floats (or Point3f values) and then telling the geometry about it.

Because the geometry might be in use by the rendering pipeline at any moment, we aren't allowed to call the data-setting methods (e.g. setCoordinates(int index, float[] coordinates)) unless we call them from within a GeometryUpdater handed into the updateData method of the geometry.

The updater will be called in sync with the renderer before the next frame.

You will probably be using a TriangleArray, so you'll need to know whether it's by-ref or not, because a by-ref array actually lets you edit the original float array inside the updateData/GeometryUpdater call.

I'd suggest you don't use anything but IndexedTriangleArray and raw float[] data as your geometry data; the unindexed and Point3f etc. variants are handy, but they don't match the underlying data and so cause a performance loss.
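To make that concrete, here's a minimal sketch of the pattern (the names geom and coords are just illustrative: a by-reference GeometryArray and the flat float[] that was handed to setCoordRefFloat):

// geom was created with the GeometryArray.BY_REFERENCE vertex format and the
// ALLOW_REF_DATA_WRITE capability; coords is the float[] we gave to setCoordRefFloat()
geom.updateData(new GeometryUpdater() {
    @Override
    public void updateData(Geometry g) {
        // called back by the renderer between frames, so it's safe to edit the array here
        for (int i = 0; i < coords.length; i += 3) {
            coords[i] += 0.1f;   // nudge every vertex along x
        }
    }
});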


One interesting aside: if you mix TransformGroup updates with updateData, the updateData call will happen before the next frame, but the TransformGroup has a 2-frame delay before it gets seen, so skin/bone animations will leave cracks.

Re: how to transform objects not using Transform3D

Mr. Broccoli
Hi Phil,
I hope you're not moving anywhere currently ;)
Had to do some other assignments, but now I've got time to pick this up again.

Following what you've written, I've changed the data from Point3f to float[][] arrays.
Now I'm trying to use GeometryUpdater (javadoc), but that's a struggle.
I created a new class (Updater.java) that implements GeometryUpdater, in which I'm kind of forced by Eclipse to implement the
public void updateData(Geometry geometry){}
method. The documentation says that all changes should be made using that method, but the problem I'm having is that the only parameter is the geometry, and I don't see how the updated information is supposed to be passed in.

As I understand it, to allow for 'byref' geometry creation, I need to use the BY_REFERENCE flag, e.g.
LineArray cube = new LineArray(cube_edges.length,GeometryArray.COORDINATES|GeometryArray.BY_REFERENCE|GeometryArray.ALLOW_REF_DATA_WRITE);

Adding that flag now throws an error at the line where I previously populated that LineArray with points using the setCoordinates method.
I left this as is (with the error) in cube_creation.java.
Again, I'm stuck on populating a 'byref' Geometry with these coordinates.
Would you be so kind as to give a coding example of how to properly feed a 'byref' Geometry with initial data/changes?

Files I'm using:
tuval1.java - main method, creates new tuval7
tuval7.java
Updater.java - not sure how to use the updateData method here
cube_creation.java - creating the geometry worked fine before adding the BY_REFERENCE flag, but now I'm stuck

In tuval7.java there's a KeyEvent handler where I want to use the 'a' key to translate the cubes to the right.
This is also where I attempt to alter the geometry by feeding in a new point array, but to no avail (the updateData method is empty, so that's probably it, but I have no clue how to populate it).

Kind regards, Mr. B

Re: how to transform objects not using Transform3D

philjord
Mr B.

You've made a lot of progress towards the by-ref approach; it just needed a little more.

I've given a "working" alteration to your code, with notes, at the bottom of this post.

It'll move some of the vertices when pressing the keys, but it currently only grabs half of cube one; I'll leave it to you to get the other coordinates changed.

I've pushed all the code into one class, because it's just an example and readability outweighs code encapsulation (in this case).

The only format change you should adopt is that the GeometryUpdater should be passed in as an anonymous inner class; this is just the way thread-synchronized code tends to be passed around in modern thread-conscious programming paradigms (for example, look at any Android example from Google: they're full of interface implementations for threaded work).

The only downside is that it's easy to have code in a heavy-duty worker-thread class and put some disk or network fetching code into it, only to realize later that you've pushed that code across to the main render thread and everything goes choppy.

So, to get your code working, I just had to flatten out your float arrays, hang on to the references to them, and then edit the floats directly in them at update time.

The reason for flattening out the float arrays is that by-ref wants to be "closer to the metal", so the actual 3D pipeline calls always use long flat arrays. In fact they tend to use only long flat byte arrays, and the API calls have to tell the driver how many points of data there are and what the data is. Normally the coord data, the color data, the texture map data etc. are all interleaved together, so it's not just every 3 floats being a coord, but every 11 floats being 3 coords, 3 colors, 2 texture coords and 3 normals, and in fact it's every 44 bytes, but the driver knows how to read them as floats.

So we have to use flat float arrays.

Then we need to hold onto a ref to the array (or we could ask the geom to give it to us in the call).

I also edited the translate call to use deltas because it seemed clearer to me.

Finally, I edited the way the LineArray is created to make it clearer that we don't have to recreate it.

I hope this helps. Please ask questions about it once you've seen it working; there is certainly a whole lot in the change from Java3D's Transform3D to direct modification of the float data.

Phil.


package _testcase;

import org.jogamp.java3d.Appearance;
import org.jogamp.java3d.BranchGroup;
import org.jogamp.java3d.Geometry;
import org.jogamp.java3d.GeometryArray;
import org.jogamp.java3d.GeometryUpdater;
import org.jogamp.java3d.LineArray;
import org.jogamp.java3d.Locale;
import org.jogamp.java3d.Shape3D;
import org.jogamp.java3d.utils.universe.SimpleUniverse;
 

import java.awt.event.KeyEvent;
import java.awt.event.KeyListener;

 

public class notusingtransform3D {

        public static void main(String[] args) {
                System.setProperty("sun.awt.noerasebackground", "true");
                new notusingtransform3D();
        }

        static float xloc = 0;
        static float yloc = 0;
        static float zloc = 0;

        LineArray cube_one, cube_two, cube_three;
        // PJ: as we are using byref geometry we can hold onto simply the float array itself and modify inplace in an updater call
        float[] cube_oneData, cube_twoData, cube_threeData;
       
        float[][] cube1_pt = {{xloc - 2, yloc + 0, zloc - 10}, {xloc - 2, yloc + 0, zloc - 11},
                {xloc - 2, yloc + 1, zloc - 11}, {xloc - 2, yloc + 1, zloc - 10}, {xloc - 1, yloc + 1, zloc - 10},
                {xloc - 1, yloc + 0, zloc - 10}, {xloc - 1, yloc + 0, zloc - 11}, {xloc - 1, yloc + 1, zloc - 11}};

        float[][] cube2_pt = {{xloc - 3, yloc + 0, zloc - 12}, {xloc - 3, yloc + 0, zloc - 14},
                {xloc - 3, yloc + 2, zloc - 14}, {xloc - 3, yloc + 2, zloc - 12}, {xloc - 1, yloc + 2, zloc - 12},
                {xloc - 1, yloc + 0, zloc - 12}, {xloc - 1, yloc + 0, zloc - 14}, {xloc - 1, yloc + 2, zloc - 14}};

        float[][] cube3_pt = {{xloc - 2, yloc + 0, zloc - 15}, {xloc - 2, yloc + 0, zloc - 17},
                {xloc - 2, yloc + 3, zloc - 17}, {xloc - 2, yloc + 3, zloc - 15}, {xloc - 1, yloc + 3, zloc - 15},
                {xloc - 1, yloc + 0, zloc - 15}, {xloc - 1, yloc + 0, zloc - 17}, {xloc - 1, yloc + 3, zloc - 17}};

        public float[][] translateArray(float[][] cube, int dir) {
                float[][] cube1 = new float[8][3];
                //copy array
                for (int i = 0; i < 8; i++) {
                        for (int j = 0; j < 3; j++) {
                                cube1 [i] [j] = cube [i] [j];
                        }
                }

                //translate
                for (int i = 0; i < 8; i++) {
                        for (int j = 0; j < 3; j++) {
                                if (dir == 0)
                                        cube1 [i] [0] = cube [i] [0] + xloc;
                                if (dir == 1)
                                        cube1 [i] [1] = cube [i] [1] + yloc;
                                if (dir == 2)
                                        cube1 [i] [2] = cube [i] [2] + zloc;
                        }
                }

                /*System.out.println("translate");
                for(int i=0; i<cube1.length; i++) {
                        System.out.print(i + ")  ");
                        for(int j=0; j<3; j++) {
                                System.out.print(cube1[i][j] + "  ");
                        }
                        System.out.println();
                }*/

                return cube1;
        }
       
        // PJ: takes a flatten array of floats as input (each set of 3 floats is a vertex coordinate)
        // translates directly into the array as arrays are like object pointers and not passed by value like a primitive
        public void translateArray(float[] cube,float deltaX,float deltaY, float deltaZ) {
                //translate, if cases will be slower than just setting the values
                //deltas are easier to understand here
                for (int i = 0; i < 8; i++) {
                        cube [i*3+0] += deltaX;
                        cube [i*3+1] += deltaY;
                        cube [i*3+2] += deltaZ;
                }
        }

        public notusingtransform3D() {

                SimpleUniverse uni = new SimpleUniverse();
                BranchGroup group = new BranchGroup();
                Appearance app = new Appearance();
                Locale loc = new Locale(uni);

                cube_one = ccr1(cube1_pt);//creates LineArray (Geometry) from float points array
                // PJ: grab the geometry data ref for use in the keyevent, because we can use it directly
                cube_oneData = cube_one.getCoordRefFloat();
                cube_two = ccr1(cube2_pt);
                cube_twoData = cube_two.getCoordRefFloat();
                cube_three = ccr1(cube3_pt);
                cube_threeData = cube_three.getCoordRefFloat();

                Shape3D cube1 = new Shape3D(cube_one, app); //creates Shape3D from Geometry and Appearance
                Shape3D cube2 = new Shape3D(cube_two, app);
                Shape3D cube3 = new Shape3D(cube_three, app);

                group.addChild(cube1); //adding Shape3D to BranchGroup
                group.addChild(cube2);
                group.addChild(cube3);

                loc.addBranchGraph(group);
                uni.getViewingPlatform().setNominalViewingTransform();

                uni.getCanvas().addKeyListener(new KeyListener() {

                        @Override
                        public void keyTyped(KeyEvent e) {
                        }

                        @Override
                        public void keyReleased(KeyEvent e) {
                        }

                        @Override
                        public void keyPressed(KeyEvent e) {
                                if (e.getKeyCode() == KeyEvent.VK_A) {
                                        System.out.println("a");
                                        notusingtransform3D.xloc++;
       
                                        GeometryUpdater updt = new GeometryUpdater() {
                                                @Override
                                                public void updateData(Geometry geom) {
                                                        //PJ: use the altered flat in place method, with the cleaner delta values
                                                        translateArray(cube_oneData,1,0,0);
                                        }};
                                        // PJ: we don't call this method, the rendering thread does later when it's ready to have the data changed (between frames)
                                        //updt.updateData(cube_one);
                                       
                                        // PJ: instead we call this to set the updater to be used once on the next rendering pass
                                        cube_one.updateData(updt);
                                       
                                        System.out.println(notusingtransform3D.xloc);
                                }
                                if (e.getKeyCode() == KeyEvent.VK_D) {
                                        notusingtransform3D.xloc--;
                                        // tighter layout, something something lambdas
                                        cube_one.updateData(new GeometryUpdater() {
                                                @Override
                                                public void updateData(Geometry geom) {
                                                        translateArray(cube_oneData,-1,0,0);
                                                }});
                                        System.out.println(notusingtransform3D.xloc);
                                }
                                if (e.getKeyCode() == KeyEvent.VK_W) {
                                        notusingtransform3D.yloc++;
                                        cube_one.updateData(new GeometryUpdater() {
                                                @Override
                                                public void updateData(Geometry geom) {
                                                        translateArray(cube_oneData,0,1,0);
                                                }});
                                }
                                if (e.getKeyCode() == KeyEvent.VK_S) {
                                        notusingtransform3D.yloc--;
                                        cube_one.updateData(new GeometryUpdater() {
                                                @Override
                                                public void updateData(Geometry geom) {
                                                        translateArray(cube_oneData,0,-1,0);
                                                }});
                                }
                               
                        }
                });
        }

 
        // PJ: so we don't want to recreate the geometry
        // we want to reset the coords in the geometry
        // so ccr methods need to be broken into 2 pieces, one that just translate the coords
        // and one that creates the geometry
        // basically pull new LineArray up out of it
        public static LineArray ccr1(float cube_pts[][]) {
                 
                LineArray cube = new LineArray(24,
                                GeometryArray.COORDINATES | GeometryArray.BY_REFERENCE );
               
                // PJ: the ALLOW flags are for capabilities not vertex formats, they are very similar looking on Geometry objects
                // java 3d demands lots of explicit allowing to get things done as it does a lot of optimization under the hood
                cube.setCapability(GeometryArray.ALLOW_REF_DATA_WRITE);
               
               
                ccr2(cube, cube_pts );
                return cube;
        }
       
        public static void ccr2(LineArray geom, float cube_pts[][]) {
                float[][] cube_edges = new float[24][3];
                int offset = 0;
                for (int i = 0; i < 7; i++) {
                        for (int j = i + 1; j < 8; j++) {
                                int counter = 0;
                                if (cube_pts [i] [0] == cube_pts [j] [0])
                                        counter++;
                                if (cube_pts [i] [1] == cube_pts [j] [1])
                                        counter++;
                                if (cube_pts [i] [2] == cube_pts [j] [2])
                                        counter++;
                                if (counter == 2) {
                                        for (int l = 0; l < 3; l++) {
                                                cube_edges [offset + 0] [l] = cube_pts [i] [l];
                                                cube_edges [offset + 1] [l] = cube_pts [j] [l];
                                        }
                                        offset += 2;
                                }
                        }
                }

                // PJ: make it a single flat array (3 times as long)
                float[] flat_pts = new float[cube_edges.length*3];
                for (int i = 0; i < cube_edges.length; i++) {
                        flat_pts[i*3+0] = cube_edges [i][0];
                        flat_pts[i*3+1] = cube_edges [i][1];
                        flat_pts[i*3+2] = cube_edges [i][2];
                }
                // PJ: notice the ref method and no start index, it uses the entire array
                geom.setCoordRefFloat(flat_pts);
               
                //for (int i = 0; i < cube_edges.length; i++) {
                        //Exception in thread "main" java.lang.IllegalStateException: GeometryArray: cannot directly access data in BY_REFERENCE mode
                // geom.setCoordinates(i, cube_edges [i]);
                        // so the essential issue here is that the coords should be
                        // set as one large float array of sets of 3 coords
                        // meaning we should hang onto the reference to the float[] and play with it directly
                //}

                /*System.out.println("cube pts length = " + cube_pts.length);
                for(int i=0; i<cube_pts.length; i++) {
                        System.out.print(i + ")  ");
                        for(int j=0; j<3; j++) {
                                System.out.print(cube_pts[i][j] + "  ");
                        }
                        System.out.println();
                }
               
                System.out.println("cube edges length = " + cube_edges.length);
                for(int i=0; i<cube_edges.length; i++) {
                        System.out.print(i + ")  ");
                        for(int j=0; j<3; j++) {
                                System.out.print(cube_edges[i][j] + "  ");
                        }
                        System.out.println();
                }
                */
                 
        }

}


Re: how to transform objects not using Transform3D

Mr. Broccoli
Phil,

First of all, thank you for your replies. This really helps me a lot.
For example, I don't see how I could have come up with this implementation of the updateData method on my own.

When it comes to progress, I've cleaned up the file a bit.
One of the requirements I had was to use matrix multiplication for all transformations, so I implemented that.
I tried to use the javax.vecmath import, with Quat4f and Matrix4f, to keep float as a uniform data type, but couldn't find any methods for multiplying a Quat4f by a Matrix4f (a point and the appropriate transformation matrix).
Therefore I decided to just use RealMatrix for both the 4-component geometry points and the transformation matrices (a minimal sketch follows the imports), and used the following imports:
        import org.apache.commons.math3.linear.MatrixUtils;
        import org.apache.commons.math3.linear.RealMatrix;
        import org.apache.commons.math3.util.FastMath;
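A minimal sketch of what I mean (the matrix values and names here are just illustrative):

// a 4x4 homogeneous translation applied to a column-vector point (x, y, z, 1)
RealMatrix translation = MatrixUtils.createRealMatrix(new double[][] {
        {1, 0, 0, 0.5},   // +0.5 along x
        {0, 1, 0, 0},
        {0, 0, 1, 0},
        {0, 0, 0, 1}});
RealMatrix point = MatrixUtils.createRealMatrix(new double[][] {{-2}, {0}, {-10}, {1}});
RealMatrix moved = translation.multiply(point);
float newX = (float) moved.getEntry(0, 0);   // back to float for the geometry array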

Key mapping is as follows:
WSAD - like walking in games,
Arrows - rotating camera up/down/left/right
ZX - zoom in/out
CV - rotate camera along Z axis

To get a better feel for the camera position, zoom and rotation, I wanted to place text displaying all the parameters in the upper-left corner of the window, but I'm struggling to display text that changes dynamically.
I thought this would just be a case of changing the variable's value (so no need to set a by-ref pointer; btw, I couldn't find applicable methods either), but I'm missing something, since it does not change on the display.
I only tried it with the 'x' camera position initially (the A and D keys change that).

Also, I've got a question about the updateData method, which I changed by adding additional translateArray calls.
Why does the following code change all three cubes and not only cube_one?

        cube_one.updateData(new GeometryUpdater() {
                @Override
                public void updateData(Geometry geom) {
                        translateArray(cube_oneData, -dX, 0, 0);
                        translateArray(cube_twoData, -dX, 0, 0);
                        translateArray(cube_threeData, -dX, 0, 0);
                }
        });

I mean I see that we're updating the entire 'geom' in the inner updateData method, but the call comes only from cube_one. How should I understand this?

Finally, as the last part of the assignment I've got, I need to either:
a) implement an algorithm for visible surface determination in 3D (not Z-buffer, RT or RT-like); it can be 'our' scene, but I'd need to add surfaces or at least colour the lines, or
b) implement a simple light reflection algorithm (i.e. Phong) on a scene with a curved surface (e.g. a sphere) and a flat surface, using a single light source, with the possibility to change the reflection coefficients.

If I show a mix of a & b, that's also OK. Can you point me in the right direction to achieve this? I'm asking because it seems to me that I lost a lot of time just by 'starting coding' without asking for a bit of guidance in the first place.
I probably didn't want to get labeled as lazy, but the effectiveness of that approach is just poor.

The file:
notusingtransform3D.java

Kind regards, Mr. B

Re: how to transform objects not using Transform3D

Mr. Broccoli
Hi again,

I tried to mess around a bit with point a) above, so I changed line colours, widths and cube positions (including overlapping) and played around a bit.
It seems to me that some visible surface determination algorithm is already working in the background, as all the lines get displayed properly regardless of how I place these objects and move and turn the camera.
I suppose it is somehow implemented in the camera/universe/geometry, but I'm not sure how it is actually done.
I wonder if this can be switched off somehow, so that I could write my own method for sorting these lines, e.g. using the painter's algorithm, and see how that works?

Kind regards, Mr. B

Re: how to transform objects not using Transform3D

Mr. Broccoli
In reply to this post by Mr. Broccoli
Mr. Broccoli wrote
To get a better feel for the camera position, zoom and rotation, I wanted to place text displaying all the parameters in the upper-left corner of the window, but I'm struggling to display text that changes dynamically.
I thought this would just be a case of changing the variable's value (so no need to set a by-ref pointer; btw, I couldn't find applicable methods either), but I'm missing something, since it does not change on the display.
I only tried it with the 'x' camera position initially (the A and D keys change that).
I see that Text2D has a 'setString' method which, when used inside the updateData method, works as desired, so instead of updating the string directly, I set it through the Text2D object. The resolution, though, is awful, and I need to dig a bit more to make it look nice and move it to a corner.

Re: how to transform objects not using Transform3D

philjord
In reply to this post by Mr. Broccoli
Hi,
Yes, it certainly took me a very long time of playing and experimenting to get a good handle on how by-ref geometry updating should work.

To be honest, Java3D is really aimed at being a high-level scene graph, so it's engineered to want the user to use the nodes of the graph to work with 3D object rendering; for example, it would like a TransformGroup above a Shape3D to modify an object, rather than the geometry itself being modified. However, OpenGL and other 3D drivers want to be as bare-bones as possible, nothing extra at all, everything in long linear buffers of bytes, all modified in place by matrix maths.

But of course anyone with a fun 3D model has it full of morphing vertices to show cool animations of sword fighting and cloth simulations, so obviously updateData and by-ref geometries are wanted.


>>> as all the lines get displayed properly regardless of how
This is because 3D drivers don't really care about lines at all, and lines get drawn very badly, with wildly different mechanisms, in all cases.
If you want visible surface determination and depth (z-buffering) to work, you need to render triangles.
So to that end you need to swap from LineArray to IndexedTriangleArray. The reason for an indexed triangle array is that each tri you render shares vertices with many other triangles, so it's easiest to give in the same set of coords like you are now, plus a list of indices to tell it the order in which to build faces. Each 3 indices reference 3 verts to turn into one tri, from your list of coords. Note that the triangle winding determines front facing, so the indices may be in an unexpected order.
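A minimal sketch of what that looks like (the counts and array names here are just illustrative):

// 8 shared cube corners, 12 triangles * 3 indices = 36 indices
IndexedTriangleArray tris = new IndexedTriangleArray(8,
        GeometryArray.COORDINATES | GeometryArray.BY_REFERENCE, 36);
tris.setCapability(GeometryArray.ALLOW_REF_DATA_WRITE);
tris.setCoordRefFloat(flatCubeCoords);           // the same flat float[] of 8 * 3 coords
tris.setCoordinateIndices(0, triangleIndices);   // 3 ints per triangle, counter-clockwise = front facing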


>>>but couldn't find any methods for multiplying a Quat4f with a Matrix4f
The Transform3D object handles all of this for you; it encapsulates an affine transformation matrix and does all the work, so a Quat4f times a translation would be

Transform3D t1 = new Transform3D();
t1.set(new Vector3f(1,2,3));
Transform3D t2 = new Transform3D();
t2.set(new Quat4f(0,0,0,1));
t1.mul(t2);

then use t1 via transform(Point3f) (the mul call leaves the result in t1).


>>>I see that Text2D has 'setString' method
Yes, this is the right thing to do. Maddeningly, 3D drivers don't ever let you mix in 2D paint operations; they want all on-screen render elements to be actual, literal textures applied to geometries, so every 3D interface you've ever seen in a game is just a carefully drawn texture of the correct font strings on a billboarded quad, at just the right distance from the camera to make it work. Possibly these days, with programmable shaders, they've got a better approach, but I'm not aware of it.

This example might help
https://github.com/philjord/java3d-examples/tree/master/src/main/java/org/jdesktop/j3d/examples/overlay2d


>>>Why does the following code change all three cubes and not only cube_one?
Because the updateData call is guaranteed to be called back later on the correct thread (in this case the Java3D renderer thread), at a time when it is allowable to modify the geometry data (between rendering of frames).
As it happens, the thread that's correct and the time that's correct are exactly the same for all 3 GeometryArrays, because they are all compiled and running against the same universe. It's unlikely that you'd have a lot of different universes, but if you did, you'd probably want to update each Geometry in its own callback.


I've run out of time tonight, but here is the file with the points used to build triangles.
I've added colour info to these to make it easier to see what's what.

notusingtransform3D.java



Re: how to transform objects not using Transform3D

Mr. Broccoli
"If you want visible surface determination and depth (z buffering) to work you need to render triangles. "

I want to be able to disable the default visible surface determination.
Then I'd like to write my own method for setting the display order (I can't use Z-buffering or RT).
I'd just go for sorting the lines using the painter's algorithm, but unfortunately(!) they get sorted really well by default.
Is there a setting to turn that sorting off?

OR

If it's simpler, I can go for displaying some entities (cube, sphere), implementing the Phong reflection algorithm, and giving the user the ability to control the material reflection parameters, which I think are (a sketch of the resulting intensity calculation follows the list):
    ks, a specular reflection constant, the ratio of reflection of the specular term of incoming light,
    kd, a diffuse reflection constant, the ratio of reflection of the diffuse term of incoming light (Lambertian reflectance),
    ka, an ambient reflection constant, the ratio of reflection of the ambient term present in all points of the rendered scene, and
    α, a shininess constant for this material, which is larger for surfaces that are smoother and more mirror-like. When this constant is large, the specular highlight is small.
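The intensity calculation I have in mind is roughly this sketch (one white light, vectors already normalised; the helper method and names are mine, not part of Java3D):

// Phong: I = ka + kd*(N.L) + ks*(R.V)^alpha   (light and ambient intensities taken as 1)
// n = surface normal, l = direction to the light, v = direction to the viewer, all normalised float[3]
static float phong(float[] n, float[] l, float[] v, float ka, float kd, float ks, float alpha) {
    float nDotL = Math.max(0f, n[0]*l[0] + n[1]*l[1] + n[2]*l[2]);
    // reflection of the light direction about the normal: r = 2*(n.l)*n - l
    float[] r = { 2*nDotL*n[0] - l[0], 2*nDotL*n[1] - l[1], 2*nDotL*n[2] - l[2] };
    float rDotV = Math.max(0f, r[0]*v[0] + r[1]*v[1] + r[2]*v[2]);
    return ka + kd * nDotL + ks * (float) Math.pow(rDotV, alpha);
}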

Re: how to transform objects not using Transform3D

philjord
This is certainly a tricky request. Java3D and the underlying OpenGL (JOGL) are 100% dedicated to the task of drawing triangles back to front by z order, determining front facing, and also to implementing Phong-style lighting models when rendering faces.

So we can turn z-buffering off with some calls on the Shape3D appearance's rendering attributes, like so:

RenderingAttributes ra = new RenderingAttributes();
ra.setDepthBufferEnable(false);
app.setRenderingAttributes(ra);

Now the triangles are rendered in whatever order they are submitted, with the last ones drawn ending up on top.

We can also turn off back-facing triangle culling like this:
PolygonAttributes pa = new PolygonAttributes();
pa.setCullFace(PolygonAttributes.CULL_NONE);
app.setPolygonAttributes(pa);


Now things get complex, because we are forcibly trying NOT to use Java3D but to do some of its work ourselves. So what we want is "mixed-mode immediate rendering", a term meaning: we will decide what geometry to render, when, and if. Compare that to the normal "retained mode".

More info here:
https://docs.oracle.com/cd/E17802_01/j2se/javase/technologies/desktop/java3d/forDevelopers/j3dguide/Immediate.doc.html

I suggest you read that article before continuing.

So to use mixed mode we need to
1/ Create our own Canvas3D
2/ Put that Canvas3D on screen (just simple pure java AWT code)
3/ Implement the renderField callback method of Canvas3D (a bare-bones sketch follows this list)
4/ Tell the renderer when we have changed data and should re-render the scene
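
A bare-bones sketch of steps 1 and 3 (app and geometriesInDrawOrder stand in for whatever fields you decide to hold onto; this is not the attached file verbatim):

Canvas3D canvas = new Canvas3D(SimpleUniverse.getPreferredConfiguration()) {
    @Override
    public void renderField(int fieldDesc) {
        GraphicsContext3D gc = getGraphicsContext3D();
        gc.setAppearance(app);                        // z-buffer and culling switched off in here
        for (Geometry g : geometriesInDrawOrder) {    // we decide what gets drawn, and in what order
            gc.draw(g);
        }
    }
};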


Here is the code:
notusingtransform3D.java

So in there you'll see the new Canvas3D with the renderField method (and a postRender method that's not mandatory).

renderField could have been called render, but it might be the left eye, the right eye or both, due to VR support.

In renderField you'll see we are now using only setAppearance and draw(Geometry); this means Shape3D is no longer needed to group things up, in fact all we need is a bunch of geometries and one or more appearances.

Appearances can have lighting set up, and textures, and all sorts of cool stuff; for now we are using a basic one that just turns off the z-buffer, turns off culling and renders per-vertex colours.

Below the new Canvas3D you'll see the new appearance-setting code, then some Shape3Ds removed.

Notice the legend is still in the scene graph, so we are rendering regular node objects "mixed" with our selected geometries.

Then there's a weird flush() method put into our key listener; this method tells the renderer to redraw the scene, and we call it after every geometry update. To be honest, we really want the renderer to run continuously, but this was considered wasteful, so the default behaviour is to only update when told to in mixed or pure immediate mode.

Finally a bit of code to throw the Canvas3D on screen, very vanilla, very AWT.

Now, just touching on the postRender method above: this is the easiest way to get 2D overlay interfaces going, as it allows us to draw on the rendered image before it is displayed on screen. 3D APIs hate this, and it causes at least a halving of performance, so it shouldn't be used in high-performance situations, but today, with 2 cubes, we're OK with it.

Now back to you: you need to create HEAPS of geometries, one for each face of the cubes (so 2 cubes * 6 faces = 12 geometries), or even one for each triangle of the cubes. Create these by just breaking the cube code into small pieces, using the unchanged coord float array, but limiting the number of coord indices given to each geometry (6 indices for each face, i.e. 2 triangles' worth).

Then in renderField you decide IF you should render each face, using front/rear facing decisions, and then decide what order to render the faces in so that the scene is ordered back to front and everything correctly occludes the right thing.

The requirement to implement a Phong lighting model will be almost self-defeating, as 3D APIs literally do this for you; you can't stop them and do it for them, as that isn't a facility they need or provide.

The only way to implement the lighting model yourself would be to get a callback from the 3D API for each pixel, implement that pixel's colour calculation, and give the colour back to the renderer. This is not only possible but the very heart and soul of 3D programming, because it is writing a (fragment) shader program, the key purpose of which is to implement a program for the graphics card to run per pixel to draw the colours; however, that is a huge step away from where we are now.

(As an aside, Java3D 1.7 fully supports this and does some cool things, like auto-generating the lighting program based on the parameters passed in.)

It might be easier for you to comment out the app.setPolygonAttributes(pa); line and get the render order correct first, then try the front-facing calcs once you know the order is right.

Good luck, I'm happy to give advice at any moment!




Re: how to transform objects not using Transform3D

Mr. Broccoli
Hi Phil,

I believe I have managed to split cube_one into IndexedTriangleArrays consisting of 1 triangle each, 12 geometries per cube.
It took me a while of trial and error in debug mode to understand what is happening behind the scenes with the coordinates, vertices and indices, but I think it works now.
I have also tweaked the updateData calls. They're not the cutest, but for now they work.
Have a look: notusingtransform3D.java

Could you expand a bit on the draw order in renderField?
I understand that each triangle has a front and a back face.
Assuming a triangle's back face should never be visible, there's no point in displaying that triangle if we're seeing its back face. Then the remaining front-facing triangles should be displayed in z order.
Now, how can I check/know whether a triangle is properly oriented and the back face's normal is directed towards the inside of the cube? I created these triangles counter-clockwise, looking at each face from opposite its normal direction, as per the Wikipedia tutorial page.
How can I extract and use information on the normals?
For the z-order, the idea I have is to calculate the average Z of each triangle's vertices and sort the geometries by that value.
Any hints/remarks/useful methods for what I'm about to do?

Thanks, Mr. B

Re: how to transform objects not using Transform3D

Mr. Broccoli
Hi again,

So, I figured out that the half-space defined by the triangle's normal needs to contain the point 0,0,0 for the face to be visible. I'm checking this by looking at the dot product of the normal and one of the triangle's points.
I'm using these methods:
boolean[] isFrontFacing   // for determining if the triangle is front-facing (ff)
IndexedTriangleArray[] frontFacingTriangles   // for creating a subset of ff triangles to display

These methods seem to work pretty neatly, although I couldn't fully test them yet, as I can't figure out exactly how to make them work with transformations.
I don't understand how the BY_REFERENCE bit for normals is supposed to work.
When it comes to coordinates, I believe this is 'kind of' intuitive: you have a long flat float[] with coords, you specify indices, and then you say that these particular 3 indices are going to create a new triangle geometry.
It is not a problem that four triangles share the same vertex, as its x,y,z are the same for all of them.
Now, for normals, it gets problematic, as the same point, after separating the triangles, has 3 different normals.
How should I feed data into geom.setNormalRefFloat(flat_normals); in the ccr2 method, or in general how should I handle normal data in Java3D BY_REF? Am I supposed to generate a 3*number_of_vertices float[] for each triangle separately, and that's it? I cannot see any method for setting indices for normals. Do I then call .getNormalRefFloat() and transform it when striking keys? I will try that next.

Just writing the data down without much thinking, I have coded a float[][] cube1_normals with 6 normal vectors.
I have also set an int[] cube_normals_idx that lists the normal indices for all 12 triangles (with duplicate entries, as I thought it would simplify iterating to 12).

I also did experiment with GeometryInfo class, but this led me nowhere.

I suppose that the display order is just a matter of sorting the front-facing triangles in Z order, true?

The current file: notusingtransform3D.java
It seems to me that the triangles get sorted once and don't get re-evaluated after a transformation.

Kind regards, Mr B.

Re: how to transform objects not using Transform3D

philjord
Ok great stuff, your split code works.

I notice in the key listener you've added empty updaters for all the triangle geometries; however, as all the geometries use different parts of the exact same float array of coordinate points, we only need to call an update on that one shared set of float coords once. So we shouldn't need to do this; however, the Java3D scene graph is once again trying to optimize and has decided the other geoms are not "dirty", so it doesn't resend their data to the 3D driver. I have, however, moved the "dirtying" calls into the single flush to make it tidier.

I've also for clarity put in a bit more output for the transformation variables in use.

At this point I'd note that the "zoom" calls you are making are actually scaling calls, so the code as it is now literally makes the cubes bigger or smaller. For an actual zoom we would push the camera itself backwards and forwards, so I've put in a comment on how the zoom method could do that to the view platform (which is the Java3D camera). Note, however, that what we'd be doing there is actually called a "dolly shot", moving the camera forward and back; a real zoom in a movie would be done by changing the lens position, which can also be done in 3D by altering the field of view, and it does lots of interesting things to the way the scene looks, but that is not important here.
notusingtransform3D.java
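For reference, the dolly version of the zoom would look something like this sketch (uni is the SimpleUniverse; using Transform3D on the camera is fine, it's only the cube geometry we're avoiding it for):

// dolly: slide the ViewingPlatform (the camera) along z instead of scaling the cubes
TransformGroup vpTg = uni.getViewingPlatform().getViewPlatformTransform();
Transform3D vpT = new Transform3D();
vpTg.getTransform(vpT);
Vector3d camPos = new Vector3d();
vpT.get(camPos);                  // current camera position
camPos.z += 0.5;                  // step the camera back half a unit
vpT.setTranslation(camPos);
vpTg.setTransform(vpT);

// a true (lens) zoom would instead change the field of view
uni.getViewer().getView().setFieldOfView(Math.toRadians(30));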

I'm a bit pressed for time, so I'm just going to describe concepts and give URLs rather than put example code in place.

Your work on front facing is good.
Concept:
https://www.khronos.org/opengl/wiki/Face_Culling

The more complex maths:
https://computergraphics.stackexchange.com/questions/9537/how-does-graphics-api-like-opengl-determine-which-triangle-is-back-face-to-cull
It's about the determinant; notice that it's nearly the dot product, but not quite. You can go with the dot product for now, but be aware.
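In code, the dot-product version of your test comes out to something like this sketch (assuming the three corners are already in camera space, with the eye at 0,0,0):

// p0, p1, p2 are the triangle's corners as float[3], counter-clockwise when seen from outside
float[] e1 = { p1[0]-p0[0], p1[1]-p0[1], p1[2]-p0[2] };
float[] e2 = { p2[0]-p0[0], p2[1]-p0[1], p2[2]-p0[2] };
// face normal = e1 x e2
float[] n = { e1[1]*e2[2] - e1[2]*e2[1],
              e1[2]*e2[0] - e1[0]*e2[2],
              e1[0]*e2[1] - e1[1]*e2[0] };
// the view ray from the eye (at the origin) to the face is just p0; front facing if it opposes the normal
boolean frontFacing = (p0[0]*n[0] + p0[1]*n[1] + p0[2]*n[2]) < 0;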


So your front facing  list should be good to go.

The normal refs you are looking into are in fact for doing something much more interesting than determining the facing of a triangle.
Triangles are flat, so the normal at any of a triangle's coordinates is exactly the same: perpendicular to the face.

However, in most rendering we actually want our triangles to seem a bit curved, so that a lot of them together look like a smoothly curved surface, or possibly mostly smooth with some creases here and there, the use case being up to the artist who draws the 3D object.

So when doing lighting for a pair of adjacent triangles, we might want to set the normal to the average of the two, so that the Phong shading calculation will average or interpolate between them and there is no noticeable sharp edge.


In fact in a real engine we want every single pixel to be able to point in a different direction, so light can bounce off in any crazy way the artist wants.
https://cgcookie.com/posts/normal-vs-displacement-mapping-why-games-use-normals

Long story short: for today, don't think about per-vertex normals; they're for lighting interpolation, not our use case.


>>I also did experiment with GeometryInfo class, but this led me nowhere.
GeometryInfo is a separate system for generating GeometryArrays out of other utilities; it just creates the Geometry and is not used after that.



>>I suppose that display order is just a matter of sorting ff triangles in Z order, true?
Exactly perfect.
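i.e. something along these lines, assuming each entry knows the z values of its three corners (the names are illustrative, not a specific class):

// painter's algorithm: draw back to front; with the camera looking down -z,
// the most negative centroid z is farthest away, so ascending order works
frontFacingTris.sort(Comparator.comparingDouble(t -> (t.z0 + t.z1 + t.z2) / 3.0));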


One final note to make life more complex for you: notice that the geometry gets transformed "up the scene graph" until you arrive at fully transformed geometry; this is referred to as virtual world coordinates, i.e. some geometry shape plus any positioning elements etc.

If a light existed in the scene it would have a virtual world coordinate (sometimes called a model coord), and we could compare the positions of these 2 objects for lighting calcs.

But in addition to these there is also a position for the camera, which is also used in lighting calcs (specular highlights).

Finally, there is a transform required to turn the model, as seen from the camera, into a flat 2D location on screen.

So the rendering engine always wants 3 transforms set: 1/ model, 2/ view, 3/ screen (projection).

In a 3D API these are set by
gl.glMatrixMode(GL2.GL_MODELVIEW);
and
gl.glMatrixMode(GL2.GL_PROJECTION);

These transforms are simply matrices applied to your coords to figure out where they are, and the results are used to do the lighting calcs.



Re: how to transform objects not using Transform3D

Mr. Broccoli
When it comes to how zoom works, I was told that moving the camera backwards and forwards is different: then you can see how vertices move relative to each other (unlike with zoom). This forward and backward movement is performed by pressing the W and S keys, although it's not the camera moving but the objects. Isn't it the case that the scaling I call zooming looks pretty similar to changing the focal length (a true zoom)?

From the links you sent I see that, for normals perpendicular to the surface, the dot product is the solution.
Thanks for the context, though. What you've written is similar to what the NormalGenerator description says; I get the concept.

Since you said not to follow BY_REF normals, I briefly looked at the code again and tried to make my methods work.
I changed my isFrontFacing method so that it uses a reference to the flat point array (the same one used for transformations so far).
I call this method, together with the final triangle sorting, in the flush method, so it is applied on each key press (I also tried calling it outside flush on a key press, with the same results).
It works as it should (looking at the display), but each key press causes this flurry of errors to pop up on 'draw':

Exception occurred during Canvas3D callback:
java.lang.NullPointerException: Cannot read field "retained" because "<parameter1>" is null
        at javax.media.j3d.GraphicsContext3D.doDraw(GraphicsContext3D.java:1981)
        at javax.media.j3d.GraphicsContext3D.draw(GraphicsContext3D.java:2128)
        at notusingtransform3D$1.renderField(notusingtransform3D.java:316)
        at javax.media.j3d.Renderer.doWork(Renderer.java:1327)
        at javax.media.j3d.J3dThread.run(J3dThread.java:271)


Any help on this?

The code: notusingtransform3D.java

Re: how to transform objects not using Transform3D

Mr. Broccoli
Phil,

Looks like... it's almost done(?).
No errors, and by the looks of it, it works OK-ish.
The only thing that is not OK is the 'zooming'. If I move while the geometry is scaled ('zoomed') and then go back to the original 0,0,0 (judging by the on-screen coords), the cube alignment looks different than it did initially.
You gave me a code snippet to change the camera position; I haven't tried that out yet, but I'm worried moving the camera will be the same as moving the objects, and I'm already doing that with the W/S keys.
Maybe playing with the FOV could be the key to doing it properly?
Or, I just thought of it now: I could multiply the translation deltas by the scale factor, so that even if I don't end up in the same position I started from, at least the coords would be realistic?

Anyway, have a look:
notusingtransform3D.java
trianglesToDraw.java

The second file is a class introduced to marry a triangle to its Z depth, to enable easier sorting.
In the original file I introduced an 'addTTDs' method, which calculates the centroid's Z and adds a triangleToDraw to the list I iterate through in renderField.
There is also a new 'updateDataEmpty' method, which gets called on each key press for each cube reference.
This marks some triangles as 'dirty' (as you phrased it) and helps populate the TTD list.
I also restored the flush() method to its original simplicity: it has just one line and is called once per key press.

Regards, Mr. B.

Re: how to transform objects not using Transform3D

philjord
Awesome,
A quick test, flipping the front-facing code
from
        if (dotProduct(p1, normal) < 0) {
to
        if (dotProduct(p1, normal) >= 0) {

shows it rendering some weird inside-out cubes where the back faces can be seen fully drawn. Nice work!


and reversing the z order with
Collections.reverse(listOfAllFFT);
in flush (since there are lots of sort calls),

I see a crazy rendering order, with that tall cube always at the front.

Yes, I agree: you've managed front facing and z sorting very well indeed.


I'm not seeing the cubes go out of alignment; I can zoom and rotate and move, and at 0,0,0 they come back to good. I've got 2 apps running, one untouched and one I'm playing with, next to each other, and they look like they come back to the same image for me.


Your extra class is definitely good. I recommend many, many classes; make them inner classes to keep things tidy. If nothing else needs them as an interface, then encapsulate them inside the main class as inner classes; encapsulating sub-data is very important and useful.