3D machine vision – technical basics and challenges
The basic 3D technologies involved comprise the time-based ‘time of flight’ technique on the one hand and techniques based on geometric, angle-based processes on the other. The latter include laser triangulation, stereo vision, light stripe projection, shape from shading and white light interferometry.
In laser triangulation the object to be measured is typically probed by a line laser, which projects a precise line of light through which the object is passed. A camera arranged at a known angle to the laser emitter records images of the laser line, which is deflected according to the geometry of the object. This allows the deflected laser line to be measured at each moment at which the object passes through the laser beam. A series of profiles is generated during this process, from which a three-dimensional image is created: the deviation from the undeformed laser line at each point along the profile provides the height information used to generate the 3D image.
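The deflection-to-height conversion can be sketched in a few lines. This is a minimal illustration assuming a simplified setup in which the camera views the laser plane at a fixed, known angle, so that the observed line deflection converts to height by a single trigonometric factor; real systems require full calibration, and the function name and numbers are illustrative:

```python
import math

def profile_heights(deflections_mm, camera_angle_deg):
    """Convert the observed deflections of the laser line into heights
    above the reference plane. In this simplified (telecentric) model,
    height = deflection / tan(camera angle)."""
    t = math.tan(math.radians(camera_angle_deg))
    return [d / t for d in deflections_mm]

# One scan line: the laser line appears shifted where the object is raised.
# At 45 degrees, tan is 1, so the heights equal the deflections.
print(profile_heights([0.0, 0.5, 1.0, 0.5, 0.0], 45.0))
```

Stacking such profiles, one per transport step of the object, yields the height image described next.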
This height information is then encoded in a so-called 2.5D range map using corresponding grey values. Some 3D cameras calculate this information internally and deliver complete 3D images, thus saving processing on the host computer, whereas other systems convert the range maps on a connected PC.
Once the 2.5D range maps have been converted into real 3D point clouds, deviations in the position and rotation of the objects can be compensated in all six degrees of freedom. The objects therefore no longer need to be positioned or fed with mechanical precision, which significantly reduces the mechanical requirements for feeding and ensures a high throughput rate whilst maintaining 100% inspection of all objects.
A prime requisite for laser triangulation is that the object moves relative to the camera and laser emitter. So-called shadowing is one of the potential problems of laser triangulation (see figure Shadowing.jpg): depending on the shape of the surface, there is a risk of the laser line being blocked by higher parts of the object, so that exact height information on the structures behind them cannot be obtained. Any defects in these shadowed areas can thus go undetected.
One solution to this problem is the use of several cameras which track the laser line from different angles; the separate data sets are then merged into a single height profile. With this technique, data for an object point is only missing if it is absent from every one of the input data sets. Today, this merging of data from several cameras is a standard feature of modern software tools, such as Merge 3D in the Common Vision Blox (CVB) library by STEMMER IMAGING.
The geometric process of stereo vision is likewise based on two cameras. Much like a pair of human eyes, the two cameras record 2D images of an object. Using the triangulation technique it is then possible to calculate a three-dimensional image from the two 2D images.
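For a rectified camera pair the triangulation step reduces to a single formula: depth = focal length × baseline / disparity. A minimal sketch, with function name and numbers purely illustrative:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo triangulation for a rectified camera pair:
    Z = f * B / d, with the focal length f in pixels, the baseline B
    between the two cameras in metres and the disparity d in pixels
    (how far the object point shifts between the two 2D images)."""
    if disparity_px <= 0:
        raise ValueError("object point not matched in both images")
    return focal_px * baseline_m / disparity_px

# 800 px focal length, 10 cm baseline, 40 px disparity:
print(depth_from_disparity(800, 0.10, 40))  # 2.0 (metres)
```

The guard clause reflects the central practical limitation: a depth value only exists where the same object point can be found in both images.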
This technology also allows for movement of the objects to be measured during recording.
However, in order to unambiguously assign each point of the inspected object to a pixel in the two 2D images, reference markings or random patterns on the object are a prerequisite for stereo vision. Generally speaking it is therefore not well suited to use in a production environment. It is, however, often encountered in coordinate measurement, the 3D measurement of objects, in working areas with industrial, service or mobile robot applications, and in the 3D visualisation of working areas which are hazardous or inaccessible to people.
Light stripe projection
Light stripe projection is also based on the triangulation method. In contrast to laser triangulation or stereo vision, 3D image processing based on light stripe projection requires static objects. The measurements themselves, however, are extremely fast, with measurement times of a few seconds or even fractions of a second.
This technology operates with coded light, using DLP projectors, for example, to emit light onto the object. Light is projected in stripes onto the object, where the height structure distorts it into a pattern that is then recorded by a camera set at a known angle (see figure light stripe.jpg, please use with source reference: Source: GFMesstechnik, Dr. G. Frankowski, Teltow/Berlin). A 3D image can then be calculated from a sequence of 2D stripe projections.
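One common form of coded light is the binary Gray-code sequence: each projected stripe pattern contributes one bit, so the bright/dark sequence observed at a camera pixel identifies the projector column that illuminated it. A minimal sketch of the encoding and decoding; real systems add phase shifting and robustness against noise:

```python
def gray_code(n):
    """Binary-reflected Gray code of n: neighbouring stripe columns
    differ in exactly one bit, which keeps decoding errors local."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code back to a plain binary column index."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def pattern_bits(column, num_patterns):
    """Bit shown at a given projector column in each of the projected
    stripe patterns (1 = bright stripe, 0 = dark), most significant
    pattern first."""
    g = gray_code(column)
    return [(g >> (num_patterns - 1 - i)) & 1 for i in range(num_patterns)]

# A camera pixel observes one bright/dark bit per projected pattern
# and decodes the projector column from the full sequence:
bits = pattern_bits(column=5, num_patterns=4)
print(gray_decode(int("".join(map(str, bits)), 2)))  # 5
```

With the projector column known per pixel, the 3D position follows from triangulation just as in the laser case.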
Compared to a laser scanner, where the maximum light intensity is evaluated over several camera pixels, light stripe projection allows the light intensity to be evaluated for each individual camera pixel. This improves the maximum obtainable height resolution of such systems by a factor of 2 compared with laser scanners and, according to Dr. Gottfried Frankowski, managing director of GFMesstechnik GmbH and one of the fathers of this technology, allows height resolutions of better than 1 : 10,000 relative to the scanning length.
Thanks to its high speed while capturing large measurement volumes, light stripe projection is perfectly suited for industrial inspection tasks such as checking shape deviation, completeness and component position, as well as volume measurement.
Shape from shading
Using the shape from shading method, three or four grey-scale images of a surface are captured under illumination from different spatial directions. From the differences between the grey values (the shading), images of the surface can be calculated that reflect both its spatial structure and its texture. Shape from shading, however, does not deliver exact height values, only information on the slope (first spatial derivative of the height) and the curvature (second spatial derivative) of the surface. This is why the technique is mainly used for surface inspection.
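The slope information can be recovered per pixel by solving a small linear system: under a Lambertian reflection model, the grey value measured under each light equals the surface normal dotted with that light's direction. A minimal sketch for one pixel and three lights, with all numbers illustrative and no noise handling:

```python
def solve3(L, I):
    """Solve the 3x3 linear system L n = I by Cramer's rule: L holds
    the three light directions as rows, I the three measured grey
    values; the solution n is the (unnormalised) surface normal."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(L)
    return [det([[I[r] if c == k else L[r][c] for c in range(3)]
                 for r in range(3)]) / d for k in range(3)]

# Three light directions and, per the Lambertian model, the grey
# values they would produce at a pixel whose surface faces straight up:
L = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0],
     [-1.0, 0.0, 1.0]]
n_true = [0.0, 0.0, 1.0]
I = [sum(l[i] * n_true[i] for i in range(3)) for l in L]
print(solve3(L, I))  # [0.0, 0.0, 1.0] -- the normal is recovered
```

The recovered normal directly encodes the local slope; curvature follows from comparing neighbouring normals, which is why the method excels at surface inspection rather than absolute height measurement.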
The big advantage of shape from shading is that topology characteristics can be clearly separated from texture characteristics (brightness), even on highly reflective surfaces, and that high-resolution intensity images can be created. This allows the reliable identification of even the smallest defects.
The shape from shading method has so far been limited to static objects, but thanks to the use of line scan cameras it can now also be used for moving objects (linear feed or rotary motion).
White light interferometry
In white light interferometry an object is illuminated with white light. By means of a beam splitter this light is separated into a reference beam, which is reflected by a reference mirror, and an object beam, which strikes the object itself. The two beams are then reflected and recombined on a monochrome camera. Vertically scanning the object yields a 2D interferogram which forms the basis for calculating a 3D image.
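The per-pixel evaluation can be sketched as follows: during the vertical scan, the fringe contrast peaks at the height where the reference and object path lengths match, so the simplest height estimate is the scan position of maximum modulation. A toy simulation; envelope width, fringe period and scan step are illustrative:

```python
import math

def wli_height(z_positions_um, intensities):
    """Simplest envelope-peak estimate for vertical-scanning white
    light interferometry: return the scan position where the fringe
    signal deviates most from its mean, i.e. where contrast peaks."""
    mean = sum(intensities) / len(intensities)
    best = max(range(len(intensities)),
               key=lambda i: abs(intensities[i] - mean))
    return z_positions_um[best]

# Simulated interferogram for one pixel: a Gaussian coherence envelope
# centred at the surface height of 2.0 um modulating a cosine fringe.
zs = [i * 0.05 for i in range(81)]  # scan from 0 to 4 um
sig = [math.exp(-((z - 2.0) / 0.5) ** 2) * math.cos(2 * math.pi * z / 0.3)
       for z in zs]
print(wli_height(zs, sig))  # close to the simulated height of 2.0 um
```

The need to scan vertically through the whole measurement range, pixel by pixel, is exactly why the technique delivers very high resolution but not high speed.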
White light interferometry is suited to checking the roughness or topography of a surface. It can also be used to determine the thickness of transparent layers. It provides very high resolutions but cannot achieve high speeds, so for industrial 3D imaging (3D machine vision) it can be used only to a limited extent.
Time of flight
Time of flight cameras are 3D camera systems which measure distances using the time-of-flight (ToF) principle. The physical idea behind it: a light pulse illuminates the scene, and for each image point the camera measures the time the light takes to reach the object and return. As this time is directly proportional to the distance, the camera provides the distance of the imaged object for each image point.
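The proportionality is simply distance = speed of light × round-trip time / 2, which also shows why ToF cameras need picosecond-scale timing. A minimal sketch with illustrative numbers:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance_m(round_trip_time_s):
    """Distance from a time-of-flight measurement: the light pulse
    travels to the object and back, so the distance is half the
    round-trip time multiplied by the speed of light."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Resolving roughly 33 picoseconds of round-trip time corresponds to
# about 5 mm of distance resolution:
print(tof_distance_m(33e-12))  # about 0.005 (metres)
```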
This technology covers distances from a few metres up to approx. 40 metres with up to 100 images per second, with a distance resolution of roughly 5 to 10 mm. The lateral resolution is in the range of approx. 200 x 200 pixels. Higher resolutions up to 1.3 megapixels are already being developed.
Due to their relatively low resolution, ToF systems are only suitable for specific applications in an industrial environment: for instance, in logistics for depalletising or for level checks of shelving and pallets. In traffic and transportation they are used for traffic counting and identification.
The pixel-based range map is a raw data format in 3D imaging which does not yet contain calibrated data. It is also known as a 2.5D image, as it only shows relative height differences rather than metrically correct height values free of perspective distortion.
Through calibration, the range map can be converted into a point cloud, which makes the six-axis matching of master and test components possible in the first place and considerably simplifies the subsequent data analysis.
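In the simplest case — linear scale factors from calibration, no perspective or lens-distortion correction — this conversion can be sketched as follows; all scale values are illustrative:

```python
def range_map_to_points(range_map, x_scale, y_scale, z_scale):
    """Turn a calibrated 2.5D range map (rows of height values) into a
    list of metric (x, y, z) points. The scale factors come from
    calibration; a purely linear model is assumed here, whereas real
    calibrations also correct perspective and lens distortion."""
    return [(u * x_scale, v * y_scale, h * z_scale)
            for v, row in enumerate(range_map)
            for u, h in enumerate(row)]

# A 2x2 range map with grey-value heights, 0.1 mm pixel pitch and
# 0.01 mm of height per grey level:
cloud = range_map_to_points([[0, 10], [10, 20]],
                            x_scale=0.1, y_scale=0.1, z_scale=0.01)
print(cloud[3])  # (0.1, 0.1, 0.2)
```

Once the data is in metric (x, y, z) form, master and test components can be aligned in all six degrees of freedom before comparison.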
Components for 3D image processing
Over the past years the available range of hardware and software components for 3D image processing has increased considerably. Laser illumination units, such as those manufactured by the Freiburg-based company Z-Laser Optoelektronik, are the most commonly used illumination products for laser triangulation, while infrared lighting is typically used for time of flight solutions.
The choice of 3D cameras based on different technologies has also increased significantly. One example is the 3D cameras made by Automation Technology (AT), which are available as high-performance "eyes" for high-end laser triangulation systems.
An interesting solution for basic 3D measurement tasks is provided by the intelligent 3D sensors of the Canadian manufacturer LMI, which also operate on the laser triangulation principle. The Gocator product range offers easy entry into 3D measurement, as these products are simple to install, run via an intuitive web interface and do not require programming.
The LMI Gocator 3000 series is based on light stripe projection and allows 3D analysis directly within the smart camera.
Trevista Surface offers a complete vision system with dome-shaped illumination and an industrial-grade PC for shape from shading applications. Optical 3D shape measurement is based on topographic relief images and texture images, visualising defects even on shiny surfaces.
The demands on the quality of lenses used in 3D applications are generally higher than in 2D applications. Renowned manufacturers now include lenses in their product portfolios which meet these high requirements of 3D image processing.
Last but not least, successful 3D applications depend decisively on the software used. It must enable quick and precise real-time detection of even minute 3D deviations to allow a rapid good/bad decision on the inspected object. Common Vision Blox by STEMMER IMAGING is certainly one of the best known and most efficient tool libraries for 3D image processing.
Learn more on this topic with our Imaging & Vision Handbook! Order it now for FREE!