We already know from our 2D Processing experience that the x axis moves left and right, and the y axis moves up and down. But what if we add another dimension to this system: movement in and out of the screen, otherwise known as the z axis?
The top left wall of this image is our original Processing screen, but now we have an extra dimension: how close or far away our objects are.
When working in 3D it can be useful to control the perspective, because what actually ends up on screen is not always the image you expect.
Just like in the real world, perspectives can be changed, skewed, even distorted.
This image represents the "view frustum", the volume that captures the 3D scene and decides how it will get translated into 2D before appearing on the screen.
In Processing you control the size and shape of the view frustum using the perspective() method.
perspective(fov, aspect, zNear, zFar);
Where:
fov = field of view (default = PI/3.0; a larger fov makes distant objects appear smaller)
aspect = aspect ratio of the screen (default = (float)width/(float)height)
zNear = near clipping plane; anything closer than this is clipped away
zFar = far clipping plane; anything farther than this is clipped away
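Here's a minimal sketch showing a perspective() call. The defaults listed above are kept in variables for comparison; the near and far plane values here are arbitrary choices for illustration, not Processing's defaults.

void setup() {
  size(400, 400, P3D);
}

void draw() {
  background(0);
  float fov = PI / 3;                    // the default field of view
  float aspect = (float) width / height; // the default aspect ratio
  // Halving the fov narrows the frustum, which acts like a zoom.
  perspective(fov / 2, aspect, 1, 1000); // near/far values are arbitrary here
  translate(width / 2, height / 2, -200);
  box(100);
}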
We must also specify where our camera is. This is done using the camera() method.
The camera method has 9 parameters, but they break down into 3 groups of 3: position, target position, and tilt.
Here's an example call to the camera method using PVectors for each of the groups:
camera(pos.x, pos.y, pos.z,
       target.x, target.y, target.z,
       tilt.x, tilt.y, tilt.z);
Where:
pos = the position of the camera in 3-space
target = the viewing target of the camera (what is it looking at)
tilt = which way is up (default = (0, 1, 0), meaning the y axis is up)
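Putting it together, here's a minimal sketch; the camera position and target are arbitrary values picked just to frame a cube at the origin.

// Arbitrary camera position and target, chosen for illustration.
PVector pos    = new PVector(200, -100, 300);
PVector target = new PVector(0, 0, 0);
PVector tilt   = new PVector(0, 1, 0); // the default: y axis is up

void setup() {
  size(400, 400, P3D);
}

void draw() {
  background(0);
  camera(pos.x, pos.y, pos.z,
         target.x, target.y, target.z,
         tilt.x, tilt.y, tilt.z);
  box(100); // a cube at the origin for the camera to look at
}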
Adding an extra dimension may sound like a good idea at first, but it opens up a number of new and unexpected challenges.
There are lots of problems when it comes to choosing the type of camera for a 3D environment. Where is your camera going to be? Will it capture all the action? Can the user see everything? Are there objects in the way? These are just a few of the issues that arise.
In 2D the worst problem we had to deal with was draw order, handled by the painter's algorithm. Each item drawn to the screen overlaps everything already there, so the surface of the screen is built up like a stack: the first items drawn are at the bottom, and the last item drawn ends up on top.
We can choose to fix our camera in 3D at a specific location, such that it never moves. This basically leaves us with the same problems as a 2D scene, except that instead of the painter's algorithm choosing the order in which to paint items, we have to order them along the z axis.
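One way to do that ordering is to sort objects back to front before drawing them. This sketch assumes each object is just a PVector position; in Processing, farther objects sit at more negative z, so an ascending sort draws them first.

import java.util.Collections;
import java.util.Comparator;

ArrayList<PVector> items = new ArrayList<PVector>();

void setup() {
  size(400, 400, P3D);
  for (int i = 0; i < 50; i++) {
    items.add(new PVector(random(-150, 150), random(-150, 150), random(-500, 0)));
  }
}

void draw() {
  background(0);
  translate(width / 2, height / 2, 0);
  // Sort back to front along z: the painter's algorithm, one dimension up.
  Collections.sort(items, new Comparator<PVector>() {
    public int compare(PVector a, PVector b) {
      return Float.compare(a.z, b.z); // most negative (farthest) first
    }
  });
  for (PVector p : items) {
    pushMatrix();
    translate(p.x, p.y, p.z);
    box(20);
    popMatrix();
  }
}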
A fixed camera can also mean the camera is fixed along one or two axes, so it may move left and right, or up and down, with the scene. The issues are the same as above: objects must have the correct z depth to be visible.
This is where things become difficult. Not only do you have to worry about the items in your scene being in the correct place to be visible, but the camera, as it floats around, may get stuck behind items. When items are hidden from view like this, it is called occlusion.
If the user is supposed to interact with Block A, but the camera is behind Block B so that Block A is occluded, there's really not much that can happen.
Additionally, a constantly moving camera must be smooth and not jerky, but fast enough to catch the action. There are many challenges here.
Up until now, our generative pieces in 2D have all had something in common: we don't clear the background, so the image can grow as it is generated.
This is possible in 3D but with one significant drawback: the camera cannot move. Which begs the question, why use 3D at all?
If the camera is moved, the entire scene must be re-drawn to account for the changed perspective. With some generative pieces drawing millions of shapes, this is not an easy task.
Think about a collision in 2D: we only have to check two conditions for a simple box collision. But in 3D we have that extra dimension to worry about. Now every box has a width, a height AND a depth.
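Here's a sketch of that extra check, using a hypothetical Box class with a center position and a size on each axis:

// Hypothetical axis-aligned box: a center position and a size per axis.
class Box {
  PVector pos, size;
  Box(PVector pos, PVector size) {
    this.pos = pos;
    this.size = size;
  }
}

// In 2D we would only need the x and y conditions; 3D adds the z condition.
boolean collides(Box a, Box b) {
  boolean overlapX = abs(a.pos.x - b.pos.x) * 2 < a.size.x + b.size.x;
  boolean overlapY = abs(a.pos.y - b.pos.y) * 2 < a.size.y + b.size.y;
  boolean overlapZ = abs(a.pos.z - b.pos.z) * 2 < a.size.z + b.size.z;
  return overlapX && overlapY && overlapZ;
}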
The bounds of the world are also an additional headache. We must make sure our objects don't go outside the z axis as well as the x and y axes. This simply means more checks and more if statements: three axes to test instead of two, so with a large number of objects in a scene, this part of the program theoretically slows to about 66% of the speed of its 2D counterpart.
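For instance, a simple clamp to a cube-shaped world (the symmetric bounds and the keepInBounds helper are assumptions for illustration) just gains one more line:

// Keep an object inside a world spanning [-bound, bound] on every axis.
void keepInBounds(PVector pos, float bound) {
  pos.x = constrain(pos.x, -bound, bound);
  pos.y = constrain(pos.y, -bound, bound);
  pos.z = constrain(pos.z, -bound, bound); // the extra check 2D never needed
}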
The only saving grace when moving to 3D is circular (now spherical) collisions and interactions. While the distance formula must now incorporate the z dimension, the check that the distance is less than the combined radii of the two objects remains the same.
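In Processing, PVector.dist() already handles the z component, so the spherical test looks just like the circular one:

// True when two spheres (centers a and b, radii ra and rb) overlap.
boolean spheresCollide(PVector a, float ra, PVector b, float rb) {
  return PVector.dist(a, b) < ra + rb; // dist() covers x, y, and z
}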
PVectors are a special datatype in Processing that store x, y, and z components. They also have a lot of useful built-in math functions.
Using methods, we can add, sub, mult, div, dot, normalize, mag, limit... etc... our PVector objects.
When working in 3D there's absolutely no reason not to use PVectors. There are so many small additions and multiplications that can be hidden away by learning to use the PVector object efficiently.
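A quick tour of the methods listed above; the values in the comments follow from the two inputs shown:

PVector v1 = new PVector(1, 2, 3);
PVector v2 = new PVector(4, 5, 6);

v1.add(v2);           // component-wise addition: v1 is now (5, 7, 9)
v1.sub(v2);           // subtraction: back to (1, 2, 3)
v1.mult(2);           // scale by 2: (2, 4, 6)
v1.div(2);            // divide by 2: back to (1, 2, 3)
float d = v1.dot(v2); // dot product: 1*4 + 2*5 + 3*6 = 32
float m = v1.mag();   // magnitude: sqrt(1 + 4 + 9), about 3.74
v1.normalize();       // scale v1 to unit length
v1.limit(0.5);        // clamp v1's magnitude to at most 0.5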
When working with vectors in C++, you can have vectors of any type (int, float, bool...). Also, the math operators (+, -, *, /, +=, -=, ...) can be overloaded, which means that adding two vectors is as easy as: Vec1 += Vec2.