First, a short note on the structure of our collision detection program. An overview of the processing pipeline is given in Figure 1. We assume that before entering the pipeline a scene has been loaded containing some objects; in our implementation these are LightWave objects listed in a small script, which we also use to set several switches that determine the operating mode. At the object-object level the strategy is the Sweep and Prune (S&P) algorithm as presented previously. For the face-level intersection tests the following algorithms are available: Axis-Aligned Bounding Box trees (AABB), Oriented Bounding Box trees with or without computation of the convex hull in the box-orientation calculation (OBB and OBBCV), and finally the V-Clip algorithm ([Mir98]), which we have not presented earlier and will not discuss further, since it was not parallelized. As Figure 1 shows, the S&P step can be skipped: in Slave Mode it is performed by the Master, and in some applications the scenes are so dense that the S&P check cannot deliver a speed-up and is best switched off. The face-level detection can also be handed off to slaves, which is of particular interest in this paper.
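To make this structure concrete, the sketch below shows in C how the stages could be chained for one collision query: a broad phase that prunes distant object pairs, followed by a face-level test for the surviving pairs. It is only an illustration under simplifying assumptions; the names (Object, test_faces, collision_step) are placeholders rather than our program's actual identifiers, and the Sweep and Prune shown is a simplified single-frame, single-axis variant of the incremental algorithm presented earlier.

    #include <stdio.h>
    #include <stdlib.h>

    /* Minimal stand-in for a loaded LightWave object: an id and its world-space AABB. */
    struct Object { int id; float min[3], max[3]; };

    static int cmp_min_x(const void *a, const void *b)
    {
        const struct Object *oa = a, *ob = b;
        return (oa->min[0] > ob->min[0]) - (oa->min[0] < ob->min[0]);
    }

    /* Narrow-phase placeholder: here the AABB/OBB/OBBCV tree test or V-Clip would run,
     * or the pair would be shipped to a slave when face-level detection is delegated. */
    static void test_faces(const struct Object *a, const struct Object *b)
    {
        printf("face-level test for objects %d and %d\n", a->id, b->id);
    }

    /* Broad phase: a simplified single-frame Sweep and Prune along the x axis,
     * followed by a full AABB overlap check for the pairs that survive the sweep.
     * With use_sweep_and_prune == 0 (Slave Mode, or very dense scenes) every pair
     * goes straight to the face-level test. */
    static void collision_step(struct Object *obj, int n, int use_sweep_and_prune)
    {
        int i, j, k;
        if (use_sweep_and_prune)
            qsort(obj, n, sizeof obj[0], cmp_min_x);
        for (i = 0; i < n; ++i)
            for (j = i + 1; j < n; ++j) {
                if (use_sweep_and_prune && obj[j].min[0] > obj[i].max[0])
                    break;              /* sweep: no later object can overlap i */
                int overlap = 1;
                for (k = 0; k < 3; ++k)
                    overlap = overlap && obj[i].min[k] <= obj[j].max[k]
                                      && obj[j].min[k] <= obj[i].max[k];
                if (overlap)
                    test_faces(&obj[i], &obj[j]);
            }
    }

    int main(void)
    {
        struct Object scene[] = {
            {0, {0.0f, 0.0f, 0.0f}, {1.0f, 1.0f, 1.0f}},
            {1, {0.5f, 0.5f, 0.0f}, {1.5f, 1.5f, 1.0f}},
            {2, {5.0f, 5.0f, 5.0f}, {6.0f, 6.0f, 6.0f}},
        };
        collision_step(scene, 3, 1);
        return 0;
    }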
For our implementation we used the Parallel Virtual Machine (PVM) library, which allows a network of computers to be used as a single parallel machine. Later on we will discuss what exactly PVM offers, but first we have to temper expectations a bit.
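As a first taste of the interface, the sketch below shows a PVM master that spawns a few slave tasks and exchanges one message with each over the virtual machine. The slave executable name ("cd_slave"), the number of tasks and the message tags are made-up placeholders, not the layout our program actually uses.

    #include <stdio.h>
    #include <pvm3.h>

    int main(void)
    {
        int tids[4];                                   /* task ids of the slaves     */
        int nslaves, i, answer;

        pvm_mytid();                                   /* enrol this process in PVM  */
        nslaves = pvm_spawn("cd_slave", NULL,          /* start up to 4 slave tasks  */
                            PvmTaskDefault, "", 4, tids);

        for (i = 0; i < nslaves; ++i) {
            pvm_initsend(PvmDataDefault);              /* fresh send buffer          */
            pvm_pkint(&i, 1, 1);                       /* pack some work description */
            pvm_send(tids[i], 1);                      /* message tag 1: work        */
        }
        for (i = 0; i < nslaves; ++i) {
            pvm_recv(-1, 2);                           /* tag 2: a result, any slave */
            pvm_upkint(&answer, 1, 1);
            printf("received result %d\n", answer);
        }

        pvm_exit();                                    /* leave the virtual machine  */
        return 0;
    }

Such a program is linked against the PVM library (-lpvm3) and requires the PVM daemon to be running on every host that takes part in the virtual machine.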