I have collected my first successful 3D scans. I used structured light and freeware from MakerBot Industries.
I am using a PC webcam, a regular projector focused as close as possible to the subject, and a laptop simply to capture the images from the webcam. After I have collected the three necessary images, I run free software to turn them into a 3D scan.
Here is the scanning arrangement. I found that it is important to minimize background reflections, so I projected through an open door into an empty room. On my next attempt I think I will set this up inside my garage and point out into the empty driveway.
Greg scan results
Next I tried a small LED-powered projector and the built-in webcam on my laptop. I used Blender to smooth the scan. Beginner instructions for Blender can also be found at the MakerBot blog that I linked above. I am still working out how to fill in holes and make a solid base so that I can make a 3D print next.
Here is a picture of the smaller, portable 3D scanning arrangement.
I switched to grayscale and used the internal Dell webcam. My settings were Backlight 0, Brightness 38, Contrast 0, Color 0, Gamma 0. The Radio Shack LED projector was mounted on a small tripod pressed against the back of the laptop monitor. The projector was focused at the plane of my face, and I moved back only until my entire face was covered by the horizontal scan pattern. I temporarily made the projector the primary monitor so that I could start a full-screen slide show of the three images. Then, while it was playing, I switched the laptop monitor back to primary and collected my three snapshots. The result was my best so far. It's tough to hold still while all three images go by!
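The MakerBot structured-light tutorial is based on three-phase scanning: the three slides are the same stripe pattern shifted in phase by 120 degrees. As a rough sketch of how such slides could be generated with NumPy (the function name, resolution, and stripe period here are my own choices, not taken from the original software):

```python
import numpy as np

def make_three_phase_patterns(width=1024, height=768, stripe_period=32):
    """Generate three horizontal sinusoidal stripe patterns, phase-shifted
    by 120 degrees, as 8-bit grayscale images (one array per slide)."""
    y = np.arange(height).reshape(-1, 1)       # stripes vary down the image
    patterns = []
    for k in range(3):
        phase_offset = 2 * np.pi * k / 3       # 0, 120, 240 degrees
        stripes = 0.5 + 0.5 * np.cos(2 * np.pi * y / stripe_period + phase_offset)
        img = np.broadcast_to(stripes, (height, width))
        patterns.append((img * 255).astype(np.uint8))
    return patterns
```

Each array can then be saved as an image (for example with Pillow) and dropped into the slide show that the projector plays.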
When the brightness and contrast are not set just right, the 3D scan is scalloped with horizontal stripes. I tried defocusing the projector, but a much smoother scan resulted from adjusting the camera settings and keeping the projector in sharp focus on my face. At least with this tiny projector, the output is dim enough that I need to sit as close as possible and keep it focused to get the brightest and darkest possible lines on my face.
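Getting "the brightest and darkest possible lines" can also be checked numerically instead of by eye. A hypothetical helper (my own sketch, not part of the scanning software) that scores a captured stripe image by its Michelson contrast:

```python
import numpy as np

def stripe_contrast(img):
    """Michelson contrast of a grayscale stripe image: 0 means flat,
    1 means a full swing between black and white. Handy for comparing
    camera brightness/contrast settings on test snapshots."""
    lo, hi = float(img.min()), float(img.max())
    return (hi - lo) / (hi + lo) if (hi + lo) > 0 else 0.0
```

A snapshot of the projected stripes that scores well below 1.0 suggests the camera settings are washing out the pattern.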
Here is the result of running the program “decoder” followed by smoothing in “Blender.” I plan to try to 3D print this. I closed most of the holes using Blender, but obviously missed a small hole under my nose. This image is a snapshot recorded using MeshLab.
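For anyone curious what "decoder" is doing under the hood: with three 120-degree phase-shifted images, the standard three-phase formula recovers a wrapped phase at every pixel, which (after unwrapping) maps to depth. A minimal sketch, assuming the images are ordered with shifts of −120°, 0°, +120° (the function name is mine):

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Per-pixel wrapped phase from three 120-degree phase-shifted
    intensity images (float arrays of the same shape)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

With ideal sinusoidal inputs this returns the underlying phase exactly, which is why uneven camera response shows up directly as the scalloped stripes mentioned above.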
Once I successfully extrude this image and put a base under it, I can export an STL file directly from Blender and open it in ReplicatorG. From there I will generate G-code for my 3D printer.
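Blender handles the STL export itself, but the format ReplicatorG reads is simple enough to show. A minimal sketch of an ASCII STL writer (the function name and details are mine, for illustration only):

```python
def write_ascii_stl(path, triangles, name="scan"):
    """Write triangles to an ASCII STL file. Each triangle is a sequence
    of three (x, y, z) vertices. Normals are written as 0 0 0; most tools
    recompute them from the counter-clockwise vertex order."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in triangles:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for x, y, z in tri:
                f.write(f"      vertex {x:.6f} {y:.6f} {z:.6f}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")
```

The whole file is just a flat list of triangles, which is why a scan has to be closed into a watertight solid with a base before slicing.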
The next big job will be to try to merge some 3D scans from different angles so that I can get more of a complete 3D object instead of having disconnected ears.
Update! 17 NOV 2012
I tried my Kinect for the Xbox 360 with a program called "ReconstructMe." This thing works so easily! I processed the raw output using Blender and produced a short video clip. I obviously need to move the scanner over the top of my head to fill in the hole where it could not see, but this looks pretty great for a few hours of effort!
It now looks very, very easy to scan an object and produce a 3D-printed part. Once I have the PLY mesh in Blender, I can smooth it and export to STL. I can use the techniques shown on the MakerBot site to make a flat surface on the bottom for the build platform and to reduce the number of vertices so that the software runs better.
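The PLY file that comes out of the scanner is also easy to inspect by hand, since its header is plain text. A small sketch (my own helper, assuming an ASCII-format header) that pulls out the vertex and face counts before importing into Blender:

```python
def parse_ply_header(lines):
    """Read a PLY header and return element counts,
    e.g. {'vertex': 1234, 'face': 2400}."""
    it = iter(lines)
    if next(it).strip() != "ply":
        raise ValueError("not a PLY file")
    counts = {}
    for line in it:
        parts = line.split()
        if parts and parts[0] == "element":
            counts[parts[1]] = int(parts[2])
        elif parts and parts[0] == "end_header":
            break
    return counts
```

The vertex count is a quick way to judge how aggressively a mesh needs to be decimated before Blender and the slicer can handle it comfortably.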
I thought I would need to make many meshes from different angles and merge them, but this quick process is just really easy. My next post should include some 3D prints!