I built a new PC for Christmas. The fast PC and great video card make ReconstructMe and the Kinect work very well together. I built a rotating platform so that I could scan objects automatically. The resolution is not as good as with my laser scanner, but the Kinect scanner running ReconstructMe can quickly be adjusted to scan much larger objects such as heads or whole people. Here are two pictures of the Kinect arrangement. I use a stepper motor to rotate the platform. I am currently building a rotating base that will let me spin a whole person for full-body scans. I found a blog post where someone used a rotisserie motor for this job, which I think is a great idea. A post on that build is hopefully coming soon.
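The stepper timing itself is simple: pick how long one revolution should take and space the step pulses evenly. Here is a minimal Python sketch of that arithmetic; the 200 steps/rev and 1/16 microstepping figures are just typical values for a small stepper with a step/direction driver, not measurements from my rig.

```python
# Minimal stepper-timing sketch. The motor and driver numbers below are
# typical illustrative values, not specific to any particular setup.
STEPS_PER_REV = 200      # full steps per revolution (common for small steppers)
MICROSTEPS = 16          # driver microstepping factor
SECONDS_PER_REV = 60.0   # a slow, constant spin helps ReconstructMe keep lock

def step_delay(seconds_per_rev=SECONDS_PER_REV):
    """Delay between step pulses for a constant rotation rate."""
    total_steps = STEPS_PER_REV * MICROSTEPS
    return seconds_per_rev / total_steps

if __name__ == "__main__":
    d = step_delay()
    print(f"{STEPS_PER_REV * MICROSTEPS} microsteps/rev, "
          f"{d * 1000:.3f} ms between pulses")
```

With these numbers you get 3200 microsteps per revolution, so a one-minute revolution works out to one pulse roughly every 19 ms, well within what any microcontroller can generate.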
Here is a gallery of the initial objects that I scanned. I recently purchased a foam mannequin head just for scanning. The foam head is much more cooperative than the people I have asked to sit for a scan.
I found that it is very important to make sure that the Kinect only sees objects that are all rotating together. You want to simulate moving around the object with the Kinect, so if some objects in the Kinect's view are stationary while others rotate, ReconstructMe gets confused. The cloth over the platform ensures that the Kinect's infrared dot pattern does not hit the tripod or motor below the platform. Since the Kinect uses infrared light, a black cloth, a white cloth, or any other color should work fine for this job. Choosing a cloth color different from your scanned object will make that part of the mesh easier to remove later.
Using the cloth and positioning the Kinect so that it saw only the object, I could spin the platform by hand as quickly as I wanted without the system losing lock. Losing lock is normally a very annoying aspect of ReconstructMe, but with the right setup the problem really does go away.
Unless you manage to move the Kinect all the way around the scanned object, there will always be holes left in the mesh. I found that running MeshLab's Filters > Point Set > Surface Reconstruction: Poisson with an Octree Depth of 11 fills in all of the holes. If you did not rotate the object or move around it with the Kinect, MeshLab will usually create a big bubble of mesh over the open parts. You can then go into NetFabb and slice off the bubble if you really just want a watertight mesh of one view of an object, e.g. a face. Here are some objects where I used this technique to close holes. The color map is ruined, but for 3D printing purposes this does not matter at all. Two images are from Blender, viewing the mesh captured by ReconstructMe and modified in MeshLab.
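If you end up doing this for a lot of scans, MeshLab can replay a saved filter script from the command line via meshlabserver instead of clicking through the menus each time. A script along these lines should do the Poisson step, though parameter names vary between MeshLab versions, so the safest route is to run the filter once in the GUI and save the script MeshLab generates (Filters > Show current filter script > Save Script):

```
<!DOCTYPE FilterScript>
<FilterScript>
 <!-- Poisson surface reconstruction with Octree Depth 11.
      Parameter names here are from the MeshLab version I checked;
      verify against a script saved from your own install. -->
 <filter name="Surface Reconstruction: Poisson">
  <Param type="RichInt" value="11" name="OctDepth"/>
  <Param type="RichInt" value="8" name="SolverDivide"/>
  <Param type="RichFloat" value="1" name="SamplesPerNode"/>
  <Param type="RichFloat" value="1" name="Offset"/>
 </filter>
</FilterScript>
```

Saved as something like poisson.mlx, it can then be applied in batch with meshlabserver -i scan.ply -o filled.ply -s poisson.mlx.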
In addition to the rotating base, I plan to build an apparatus that will let me move the Kinect vertically during a scan. I will put a hinge point off to the side of the scanned object and use a counterweight so that I can sweep the Kinect along a circular path in the vertical direction. This should close the holes in the mesh that typically occur at the top of the scanned object.
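To get a feel for the geometry of that sweep, here is a quick Python sketch of where the Kinect ends up as the counterweighted arm swings upward. The one-meter arm length is purely illustrative, and the hinge is assumed to sit level with the center of the scanned object.

```python
import math

# Sketch of the planned vertical sweep. The 1 m arm length is an
# illustrative assumption; the hinge is assumed level with the object.
ARM_LENGTH_M = 1.0

def kinect_position(angle_deg, arm=ARM_LENGTH_M):
    """Return (height above hinge, horizontal reach) of the Kinect
    when the counterweighted arm is raised by angle_deg."""
    a = math.radians(angle_deg)
    return arm * math.sin(a), arm * math.cos(a)

if __name__ == "__main__":
    for deg in (0, 30, 60, 90):
        h, r = kinect_position(deg)
        print(f"{deg:3d} deg: height {h:.2f} m, reach {r:.2f} m")
```

The useful takeaway is that the Kinect both rises and moves closer to the hinge as the arm swings up, so the sensor will need to tilt downward through the sweep to keep the object in view.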