Last Sunday was a new experience for me, and hopefully for everyone who attended the very first (as far as we’re aware) 3d Scan & 3d Print-off to take place in NZ (if not the globe!). The event was jointly organised by teams from the LoM (League of Makers)* and MoNZ (MakersOrgNZ), and opened up to the public via the MoNZ meetup page (located here).
The premise was pretty simple: use some of the latest “off-the-shelf” technologies to scan and print a 3d likeness of an existing object. The task proved to be anything but, as it involved some pretty convoluted steps: capturing scans of the object, post-processing the output into something a 3d printer could interpret, and then fudging the file through either a Makerbot Cupcake CNC or a Makerbot Thing-o-matic.
Approach A: The LoM team chose a combination of laser-scanning software, a gridded reference box, and a web camera (pretty low-res, but more on that later). The team got to work immediately, building the “box” and fitting it with the registration chart the software needs to build a profile of the object being scanned. Once the software had been calibrated, they used a hand-held laser to “paint” the object and collect data points via the webcam. They then manually rotated the object and collected a second scan from the rear, giving both the front and back of the object. So far so good; the results were looking pretty promising. The crew then had to merge the two sets of scans into one whole object, and I believe this is where we struggled as a group: merging 3d meshes with unfamiliar tools is not as straightforward as we might have hoped. In the end the team pulled a rabbit out of the hat (it could be a rabbit…) and printed a rendition of the scan file. See below.
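For anyone curious how the laser approach recovers shape, the underlying idea is triangulation: the camera sees where the laser stripe lands on the object, and the sideways shift of that stripe in the image reveals depth. Here is a minimal 2D sketch in Python of that geometry, assuming a simple pinhole camera; the function name, parameters, and simplified setup are my own illustration, not how the team’s software actually works internally:

```python
import math

def laser_depth(pixel_x, image_width, fov_deg, baseline_m, laser_tilt_deg):
    """Depth (along the camera axis) of a laser-lit point, simplified 2D model.

    The camera sits at the origin looking along +Z; the laser sits
    `baseline_m` to the side, tilted `laser_tilt_deg` back toward the
    camera's axis. All parameter names are illustrative assumptions.
    """
    half_fov = math.radians(fov_deg) / 2.0
    # Horizontal angle of the camera ray through this pixel column.
    ray = math.atan((2.0 * pixel_x / image_width - 1.0) * math.tan(half_fov))
    tilt = math.radians(laser_tilt_deg)
    # Camera ray:  x = z * tan(ray)
    # Laser line:  x = baseline - z * tan(tilt)
    # Intersecting the two gives the depth z of the lit point.
    return baseline_m / (math.tan(ray) + math.tan(tilt))
```

Sweeping the laser across the object and repeating this for every lit pixel yields a cloud of depth points, which is why a steady, calibrated rig matters so much: any wobble in the baseline or tilt corrupts every point.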
Approach B: The MoNZ team decided on (defaulted to) a Kinect and the KinectToStl software. Taking a more simplistic approach, we let KinectToStl do most of the hard work of scanning: we simply positioned the Kinect in the right spot relative to the object and used the software to capture STL files. This meant the results were a little underwhelming. The scan produces a rough estimation of the face of the object and then “surmises” the rest. We could have tried to clean the scan up, but this didn’t feel like the right time to undertake mesh-merging madness, not without more precise scans to work from as a base. So we ended up with a rather vague scan of one of our team. I reduced the size of the scan file in order to cut the print time, and in doing so uncovered one of this approach’s key failings: the quality of the scan is dismal on detailed objects and deteriorates even further when you scale them down! We had a lovely assistant, but the output ended up being referred to as the “slipknot print”, given the printer’s inability to create fine detail on such a small object. See below.
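Shrinking a scan before printing is, at its core, just a uniform scale of every vertex in the mesh. For the curious, here is a minimal stdlib-only sketch that scales an ASCII STL file; the function name and layout are my own (KinectToStl may well write binary STL, which would need `struct`-based parsing instead):

```python
def scale_ascii_stl(lines, factor):
    """Return a copy of ASCII STL lines with every vertex scaled by `factor`.

    Facet normals are left untouched: a positive uniform scale does not
    change surface orientation. Illustrative sketch, not a robust parser.
    """
    out = []
    for line in lines:
        stripped = line.strip()
        if stripped.startswith("vertex"):
            _, x, y, z = stripped.split()
            indent = line[: len(line) - len(line.lstrip())]
            out.append(
                f"{indent}vertex {float(x) * factor} "
                f"{float(y) * factor} {float(z) * factor}"
            )
        else:
            out.append(line)
    return out
```

Note that scaling the geometry does nothing to add detail: the print nozzle’s minimum feature size stays fixed, so fine features that barely survived the scan simply vanish at half size, which is exactly what we saw with the slipknot print.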
Measure of success: We pulled together some criteria to measure each approach against; essentially, we wanted to see which technology would give the best likeness of the object being scanned. In the absence of a solid scanning rig I’d expect the results to vary greatly, but what I will say is that Approach A (the laser-scanning process) is most certainly the victor. So much so that we have agreed to try and build a more solid (dare I say automated) rig for driving scans in future. I’ll update this post on our progress should we go down this path.
In the meantime feel free to read more on the results of our “rough-as-nails” battle of 3d scanning technology.
* League of Makers is a collective of Wellington design and architecture students from here in NZ.