This is a work in progress. I want to put most of this explanation out now and I will keep adding figures and working on the details. Feel free to leave a comment if you have a question or want to make a suggestion.
Triangulation scanning is ultimately based on a very simple principle. A laser is mechanically attached to a camera and pointed at an angle with respect to the camera's viewing axis. When the laser light reaches a target, its horizontal distance from the center line is related by geometry to its distance from the camera. The figure below illustrates this concept.
The closer the object is to the camera, the farther the laser illumination point moves from the center line. The horizontal displacement equals the distance from the camera multiplied by the tangent of the angle between the center line and the laser beam. To map the surface features of the target object, the camera and laser can be moved horizontally, or the target object can be rotated about its center point. If a round laser beam is used, focused to a point on the target object, the object (or the camera and laser) must also be scanned vertically (out of the plane of the paper) to map the surface. Commercial triangulation scanning heads are used on CNC machines that can manipulate a target or the scanner head on four axes (X, Y, Z, and theta rotation) so that the distance from the head to the object surface can be mapped at all points.
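The displacement relation above can be inverted to recover distance from a measured offset. A minimal sketch of that inversion (the function name and the sample numbers are mine, not taken from the scanner software):

```python
import math

def distance_from_offset(offset, laser_angle_deg):
    """Invert offset = distance * tan(angle): given the horizontal
    displacement of the laser spot from the center line, recover the
    distance along the center line."""
    return offset / math.tan(math.radians(laser_angle_deg))

# A spot displaced by 10 * tan(30 deg) mm under a 30-degree beam
# sits 10 mm away along the center line.
```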
I chose to rotate the object about its center point and to use two line lasers rather than a single point laser. The line lasers project thin vertical stripes onto the target object rather than points. Using two lasers solves an important problem: as is apparent in Fig. 1, as the object rotates there will be cases where the laser line is occluded (the laser cannot reach the object surface) by features that stick out from the object. With two beams at positive and negative angles, there are fewer such cases.
During laser alignment, I place a flat, vertically oriented square target at the center of the platform and rotate it to be perpendicular to the camera's center line. A third line laser, mounted with the webcam, defines the center line of the scanner; it is used only for alignment, never during scanning. So for alignment I run three line lasers, and I align them in three important ways. First, I adjust each lens focus so that the narrowest laser line falls midway between the rotating platform edge and the platform center. Second, I rotate the laser lines to make sure that they are all vertically oriented. This is accomplished by comparing each laser line to the vertical edge of the calibration board, with all three laser lines close together near the platform center. If the target board is indeed perpendicular to the center line, the two outer beams will separate by equal distances as the camera steps toward the platform center. The lines must be parallel and vertical. Third, after this vertical alignment, I put down a printed paper sheet with three lines on it: a center line and one line each at +/- 30 degrees. I rotate each laser in its mount so that the incident beams are as close as possible to +/- 30 degree incidence.
After these three steps, all lasers are focused, vertical, and aligned as closely as possible to +/- 30 degrees. I also adjust the rotation of the webcam to make sure it is aligned with the vertical direction of the lasers. This is accomplished by looking at the webcam output and dragging a separate desktop window next to the image of a vertical laser line: if the laser line is parallel to the window edge, the camera is close to aligned.
I perform another calibration routine to determine the correct relationship between laser line offset and the distance from the camera to the object feature. The calibration routine runs a computer program that steps the camera toward the rotating platform, collecting an image of the two laser lines on a flat, perpendicular plate at each step. I calculate the average displacement over the whole line on each side at each step. At the end of the calibration I fit the collected points to a second-degree polynomial: line displacement = C1 + C2*(camera distance) + C3*(camera distance)^2. The nonlinear term comes from lens magnification. The linear term depends on geometry: the exact angle of the incident beam. I could predict its value knowing that the beams are aligned for +/- 30 degrees, but the calibration gives a more exact value.
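The fit, and its inversion at scan time, can be sketched with NumPy. The distance/displacement numbers below are fabricated for illustration (generated from an exact quadratic so the fit is easy to check); they are not my measured calibration values:

```python
import numpy as np

# Illustrative calibration data: camera distance in mm vs. average
# laser-line displacement in pixels on the flat perpendicular plate.
camera_distance = np.array([200.0, 220.0, 240.0, 260.0, 280.0, 300.0])
displacement = 800.0 - 2.5 * camera_distance + 0.002 * camera_distance**2

# Fit displacement = C1 + C2*d + C3*d^2.
# np.polyfit returns the highest-order coefficient first.
C3, C2, C1 = np.polyfit(camera_distance, displacement, 2)

def distance_from_displacement(disp):
    """Invert the fit at scan time: solve C3*d^2 + C2*d + (C1 - disp) = 0
    and keep the root inside the calibrated distance range."""
    roots = np.roots([C3, C2, C1 - disp])
    real = roots[np.isreal(roots)].real
    return real[(real > 150.0) & (real < 350.0)][0]
```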
For the Z direction there is no geometrical effect to consider, but the nonlinear lens effect remains. To calibrate it, I printed a 30mm black square on the test target plate and repeated the calibration collection procedure, this time finding the edges of the black square image as a function of camera distance. Again I fit to a second-degree polynomial; in this case the fit gives pixels per mm = D1 + D2*(camera distance) + D3*(camera distance)^2. This fit should follow the lens equation 1/f = 1/d0 + 1/di. I read on one website that the focal length of the camera lens is f = 32mm for the Microsoft Lifecam Cinema, but I could not find di, the distance from the lens to the CCD plane, and I do not know d0 exactly, so direct calibration is again preferable to estimation. I could try to calculate the constants C1, C2, and C3 and D1, D2, and D3 using geometry and the lens equation, but the calculations would rest on estimates and uncertain values. Direct calibration provides exactly what I need to determine point cloud coordinates.
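Applying the magnification fit is then a single division. The D coefficients below are placeholders chosen so the scale falls plausibly with distance; they are not my calibrated values:

```python
# Placeholder coefficients for pixels_per_mm = D1 + D2*d + D3*d^2
# (illustrative only, not actual calibration results).
D1, D2, D3 = 14.0, -0.025, 1.5e-5

def pixels_to_mm(pixels, camera_distance):
    """Convert a vertical extent in image pixels to millimetres at the
    given camera-to-surface distance, using the fitted lens scale."""
    scale = D1 + D2 * camera_distance + D3 * camera_distance**2
    return pixels / scale
```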
To perform a 3D laser scan, I rotate the target object with the left and right lasers turned on and collect images at each angle through one full rotation, ending up with 400 images. My analysis program then opens one image at a time and scans line by line through the laser lines. A peak-finding routine identifies the left and right peaks. I apply the laser-offset-to-distance calibration to calculate the object radius at the laser line, then apply geometry and the platform angle to place the radius points in X, Y, Z space for the left and right beams. Here I also apply the magnification fit to the Z location of each point.
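Per image, the work reduces to two small operations: finding the laser peaks in each row, and a cylindrical-to-Cartesian transform using the platform angle. A sketch under the 400-steps-per-revolution assumption (function names and the toy peak finder are mine):

```python
import math

STEPS_PER_REV = 400  # one image per platform step

def brightest_two(row):
    """Toy stand-in for the peak finder: return the column indices of
    the two brightest pixels in one image row (left and right lines)."""
    ranked = sorted(range(len(row)), key=lambda i: row[i], reverse=True)
    return sorted(ranked[:2])

def point_from_radius(radius, step_index, z):
    """Place one radius measurement in X, Y, Z space using the platform
    rotation angle for this image."""
    angle = 2.0 * math.pi * step_index / STEPS_PER_REV
    return (radius * math.cos(angle), radius * math.sin(angle), z)
```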
Figure 5: Scanned object on rotating platform
Figure 6: Image of laser lines on target object
I currently run two programs to examine the first and second peak found on each line. Each program produces a separate point cloud, and I cut and paste the second onto the first. I also do a “replace all” to remove the lines that read “0.00, 0.00, 0.00.” These are two steps that I should be able to automate.
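Both manual steps, the cut-and-paste merge and the zero-line removal, could be folded into one small script along these lines (file names and sample points are illustrative):

```python
def merge_clouds(in_paths, out_path):
    """Concatenate point-cloud text files into one, skipping the
    '0.00, 0.00, 0.00' placeholder lines."""
    with open(out_path, "w") as out:
        for path in in_paths:
            with open(path) as f:
                for line in f:
                    if line.strip() and line.strip() != "0.00, 0.00, 0.00":
                        out.write(line)

# Tiny demonstration with made-up points:
with open("peak1.asc", "w") as f:
    f.write("1.00, 2.00, 3.00\n0.00, 0.00, 0.00\n")
with open("peak2.asc", "w") as f:
    f.write("4.00, 5.00, 6.00\n")
merge_clouds(["peak1.asc", "peak2.asc"], "merged.asc")
```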
Now I have a complete X, Y, Z point cloud for my target object. The next task is to convert this point cloud to a mesh, which I do in Meshlab. I saved the point cloud as a long text file with a “.ASC” extension, one X, Y, Z point per line.
Meshlab processing steps
Import Mesh, Select the .ASC file that I created and edited with all points, deselect “Grid triangulation”
Manually remove points corresponding to the rotating platform edge or any other points not obviously connected to the target object.
Figure 7: Example point cloud with points corresponding to the platform edge
Export mesh as, save with same file name but with a .PLY extension
Select “New Empty Project”
Open the .PLY file just saved
If you try to edit out extraneous points and then proceed with the rest of the process without saving to .PLY and re-opening, the extra points pop back up for some reason once Poisson-disk sampling is performed. Saving to .PLY and opening the point cloud file as a new project eliminates these points in subsequent steps.
Show Layer Dialog
Look at the number of points in the mesh: 380,261
Filters, Sampling, Poisson-disk Sampling, Number of samples 100000, select “Base Mesh Subsampling”
Filters, Normals Curvatures and Orientation, Compute normals for point sets, Number of neighbors 20
Filters, Point set, Surface Reconstruction: Poisson, Octree Depth 11
Switch view from “Points” to “Hidden Lines” to view the mesh
Figure 8: Mesh generated in Meshlab
Export Mesh As, Select a new file name for your mesh
Now you can import the .PLY mesh into Blender to reorient, scale, and render
Figure 9: Mesh opened in Blender
Export mesh as .STL format
Open the .STL in ReplicatorG to place and center it on the virtual printing platform and scale as desired
Figure 10: STL format opened in Replicator G