The most important improvement to my 3D laser scanning has been focusing the laser stripe on the scanned object. I used lenses from CVS or Dollar Store reading glasses. After trying lots of different spacings, I found that putting a lens directly over the laser output worked best, producing a thin stripe about a foot away from the laser. This makes sense: these lenses have a long focal length, so they simply add a little extra convergence. The line lasers I originally purchased from eBay have a fixed focus, with thin lines about 1-2 meters away from the output. A shorter focus is better both for the convenience of the arrangement (the whole thing fits in a small space) and because it puts more laser intensity on the object.
This web site has a nice video that demonstrates how I am measuring object radius by determining how far the laser line moves from the center point.
I switched back to using a webcam, but a much better webcam than before. With this one I can control focus, zoom, saturation, and sensitivity through software. I am using a Microsoft Lifecam, but there is really nothing particularly important about that brand or model. What is important are the freeware programs that I have recently found. I am recording images now using Debut Video Capture Software. This looks like a good program for recording computer actions to make instructional videos, but for me the important part is that it will take still shots based on a hotkey. Not only that, but it will take them in very rapid succession. I found instructions about how to use Python to interact with a Windows program and to deliver key presses. Here is my new Python script for 3D laser scanning.
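To show the idea, here is a minimal sketch of driving the capture hotkey from Python. The F3 hotkey, step count, and interval are illustrative assumptions, not my exact script; the key press uses the Windows keybd_event call, so on other platforms the sketch only computes its capture schedule.

```python
# Sketch: fire a still-capture hotkey (assumed F3) at a fixed interval
# while the turntable steps. Hotkey, interval, and step count are
# assumptions for illustration.
import sys
import time

VK_F3 = 0x72              # Windows virtual-key code for F3
KEYEVENTF_KEYUP = 0x0002  # key-release flag for keybd_event

def press_f3():
    """Send one F3 key press to the foreground program (Windows only)."""
    if sys.platform == "win32":
        import ctypes
        ctypes.windll.user32.keybd_event(VK_F3, 0, 0, 0)
        ctypes.windll.user32.keybd_event(VK_F3, 0, KEYEVENTF_KEYUP, 0)

def capture_schedule(n_steps, interval_s):
    """Return the relative times (seconds) at which captures fire."""
    return [i * interval_s for i in range(n_steps)]

def run(n_steps=200, interval_s=0.25, dry_run=True):
    """Press F3 once per turntable step; dry_run skips the key presses."""
    times = capture_schedule(n_steps, interval_s)
    if not dry_run:
        for _ in times:
            press_f3()
            time.sleep(interval_s)
    return times
```

The capture program must be the foreground window for the key press to land, which is why taking rapid-succession stills on a single hotkey matters.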
Here is a video showing the system in action. I also narrate a description.
Here is a video of the laser stripe capture. The first frame of the video shows the center line image capture.
My next big step is determining how to effectively turn a point cloud into a mesh. One promising program is CloudCompare. This is also a free program.
Processing the two-stripe data requires identifying the center line of the scan, shown in the first frame of the video linked above. The offset of the left and right peaks from this center line gives the radii of those two points. The program has to find multiple peaks on each row of image data and bin each peak so that I can use geometry to convert the peaks into X, Y, Z data. I wrote a simple program from scratch to do this job.

At any given point of platform rotation, a stripe peak represents the radius of the object, related through geometry to its X offset from the center line. This relationship is constant and linear: X = R * COS(Theta), where Theta is the angle of the incident laser. Since Theta never changes, the X offset gives the radius directly. So now I have the radius of each point, collected along Z of the object. For every rotation step, I can convert this radius to X and Y: if the rotation angle of the platform is Phi, then X = Radius * COS(Phi) and Y = Radius * SIN(Phi). I wrote a program to collect these many thousands of X, Y, Z points as the platform rotates, and I plan to optimize it and code it in some reasonable way. I also just purchased a Raspberry Pi with a video camera, and I might perform all of the image processing on that platform instead. I positioned the two stripes to be 90 degrees apart.
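A stripped-down sketch of that per-row conversion might look like the following. The crude peak detector, the image scale (MM_PER_PIXEL), and the exact 45-degree stripe angle are illustrative assumptions, not my actual program.

```python
# Sketch: find stripe peaks in one image row, then convert a peak's
# offset from the center line into an (X, Y, Z) point using
# offset = R * cos(Theta) and X = R*cos(Phi), Y = R*sin(Phi).
import math

LASER_ANGLE = math.radians(45.0)  # assumed angle of the incident laser
MM_PER_PIXEL = 0.5                # assumed image scale (calibration value)

def find_peaks(row, threshold=128):
    """Return column indices of bright local maxima in one row of
    brightness values -- a crude stand-in for real peak detection."""
    peaks = []
    for c in range(1, len(row) - 1):
        if row[c] >= threshold and row[c] >= row[c - 1] and row[c] > row[c + 1]:
            peaks.append(c)
    return peaks

def point_from_peak(peak_col, center_col, row_index, phi):
    """Convert one stripe peak to an (x, y, z) point at platform angle phi."""
    offset_mm = (peak_col - center_col) * MM_PER_PIXEL
    radius = abs(offset_mm) / math.cos(LASER_ANGLE)  # offset = R * cos(Theta)
    x = radius * math.cos(phi)
    y = radius * math.sin(phi)
    z = row_index * MM_PER_PIXEL                     # height along the object
    return (x, y, z)
```

Running this over every row of every frame, with phi advancing one motor step per frame, accumulates the full point cloud.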
Loading the point cloud into CloudCompare, I was pretty happy to see that it at least looked like my scanned object. Here are two pictures of the result.
You can see that there is an alignment and scaling problem between the two sets of data here. That could come from one or both of the laser stripes not sitting at exactly 45 degrees from the center line, or perhaps from some effect that I don’t know about. I am doing this project intuitively, so there is a chance that I am making a mistake. When I figure it out, I will share it.
UPDATE 13 OCT 2013
My rotating table is not perfectly perpendicular to the axis of rotation. I have a plastic part that mounts to the stepper motor shaft and is epoxied to the rotation table. That part is apparently not perfectly aligned. This introduces errors in the laser offset position as a function of rotation angle. I know how to fix this problem. I think I will mount a second platform on top of the first, and use three bolts to adjust its height above the base. I should be able to flatten the surface this way.
I’m going to scan a cylinder to find out how bad the current problem is.
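Along those lines, here is one way the cylinder scan could quantify the wobble: since a misaligned table tilts once per revolution, fit a once-per-revolution sinusoid to measured radius versus rotation angle and subtract it. This is only a sketch of the idea, assuming uniformly spaced angle samples over a full revolution, not code I am running yet.

```python
# Sketch: estimate table wobble from a scanned cylinder by fitting
# r(phi) ~= r0 + a*cos(phi) + b*sin(phi). For a true cylinder, the
# once-per-revolution terms (a, b) capture the tilt error.
import math

def fit_wobble(radii, angles):
    """Least-squares fit of the DC and once-per-rev terms. The closed
    forms below are exact when angles are uniformly spaced over 2*pi."""
    n = len(radii)
    r0 = sum(radii) / n
    a = (2.0 / n) * sum(r * math.cos(p) for r, p in zip(radii, angles))
    b = (2.0 / n) * sum(r * math.sin(p) for r, p in zip(radii, angles))
    return r0, a, b

def corrected(radius, phi, a, b):
    """Remove the fitted wobble from one radius measurement."""
    return radius - (a * math.cos(phi) + b * math.sin(phi))
```

If the fitted amplitude comes out near zero after I add the adjustable second platform, the three-bolt leveling worked.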
If I use only one laser stripe, and position the webcam so that one edge of the image lies on the vertical stripe with a white card placed at the center of the platform, analysis and data organization are easier. The downside is that there is a lot of “shadowing” where the laser stripe never hits the object, which leaves blank spaces in the point cloud. This simple process does give me hope that I will ultimately be successful, though.
Here is a video showing collection of the single stripe data, and here are two images of the nice looking point cloud that results.
In this case, CloudCompare successfully removed “outlier” points and identified normal vectors for the points. Mesh generation is still, however, obviously problematic…
UPDATE! PROBLEM SOLVED! (One hour later.)
Thanks to this web site.
I am using the single line laser scan.
Import Mesh, Filename.asc
Do not select “Grid Triangulation”
Grab all the X, Y = 0,0 points
Filters / Normals / Compute Normals for Point Sets
Select 20 Neighbors
Filters / Point Set / Surface Reconstruction : Poisson
Octree 11, Apply, Wait a little while
Export Mesh As Name.ply
Select Binary Encoding, Select All
Origin / Geometry to Origin
Rotate and scale for camera view
F3: collect an image
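For reference, the .asc file imported in the steps above is just plain text with one "X Y Z" point per line. Here is a minimal writer sketch; the filename, the four-decimal precision, and the option to drop the X, Y = 0,0 axis points (my reading of the "Grab all the X, Y = 0,0 points" step) are assumptions.

```python
# Sketch: dump the collected point cloud to a plain-ASCII .asc file,
# one "X Y Z" point per line, optionally skipping points that sit on
# the rotation axis (X = Y = 0).
def write_asc(points, path, drop_axis_points=False):
    """Write (x, y, z) tuples to an .asc file; returns points written."""
    written = 0
    with open(path, "w") as f:
        for x, y, z in points:
            if drop_axis_points and x == 0.0 and y == 0.0:
                continue  # skip on-axis artifacts before meshing
            f.write(f"{x:.4f} {y:.4f} {z:.4f}\n")
            written += 1
    return written
```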
I have built a 3D Laser Scanner!!!!!