Colour cameras
However, in vision applications there is often no benefit in including colour information, as the algorithms used are frequently only affected by contrast. There can even be disadvantages in using a colour camera, as spatial resolution can be decreased whilst transmission bandwidth, CPU overhead, and costs can increase.
Even where colour information is crucial, such as in the printing industry, it may be better to use a suitably coloured filter or monochromatic illumination to obtain a stable solution. But of course there are many applications where colours need to be accurately measured or differentiated, and in these situations it is important to understand the methods the camera uses to generate the colour information.
Single-chip colour cameras
The most common type of colour camera used in vision has a single CCD or CMOS sensor overlaid with coloured filters, one covering each pixel. These are usually red, green and blue, arranged in the pattern shown below, known as a Bayer filter array. There are twice as many green pixels as red or blue ones, which mimics the human eye's greater sensitivity to, and resolving power for, green light.
To obtain a usable full colour image, the missing red, green and blue values at each pixel must be interpolated by an algorithm, a process known as demosaicing. A variety of algorithms is available, differing in how the missing values are estimated from neighbouring pixels and in the resulting trade-off between speed and image quality.
Although this gives a full colour image, it cannot fully compensate for the fact that only a quarter of the pixels are red, a quarter are blue and half are green. This can reduce the effective spatial resolution of the sensor.
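As an illustration, a minimal demosaicing sketch using simple bilinear averaging (a hypothetical example assuming an RGGB Bayer layout; real cameras typically use more sophisticated algorithms):

```python
import numpy as np

def demosaic_bilinear(raw):
    """Bilinear demosaic of a raw Bayer frame (RGGB layout assumed).

    raw: 2D float array straight off the sensor. Returns an HxWx3 RGB image.
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    # Masks marking which colour filter sits over each pixel (RGGB tiling:
    # red on even rows/even columns, blue on odd rows/odd columns).
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        total = np.zeros_like(raw)
        count = np.zeros_like(raw)
        # Sum each colour's samples over the 3x3 neighbourhood; np.roll
        # wraps around at the image borders, which is exactly the sort of
        # edge problem discussed below (a real implementation would handle
        # the borders explicitly).
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                total += np.roll(np.roll(np.where(mask, raw, 0.0), dy, 0), dx, 1)
                count += np.roll(np.roll(mask.astype(float), dy, 0), dx, 1)
        # Keep measured samples as-is; fill the rest from the neighbourhood mean.
        rgb[..., c] = np.where(mask, raw, total / np.maximum(count, 1))
    return rgb
```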
It is worth remembering that white light contains the full visible spectrum, so when it is used to illuminate a black dot on a white background, all of the pixels, whatever their colour filter, see the transition, and the full spatial resolution of the sensor is retained.
With a colour mosaic sensor, two of the three RGB values at each pixel have to be interpolated from neighbouring pixels. This can lead to problems at the borders of the sensor and along well-defined edges within the image, where insufficient colour information is available for the interpolation, and image artefacts are the result. The use of 3-chip prism cameras with suitable lenses can overcome this problem.
Bayer to RGB conversion can be performed in the camera, on a frame grabber or on the host PC. If it is handled onboard the camera, the bandwidth needed to transmit and store the data is tripled, or roughly doubled if a chroma-subsampled format such as YUV 4:2:2 is used. If the camera transmits the raw Bayer data to save bandwidth, the user retains the flexibility to decide which interpolation algorithm to use.
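To put rough numbers on this, a back-of-the-envelope comparison for a hypothetical configuration of 1024 x 1024 pixels at 8 bits per sample and 30 frames per second:

```python
# Rough bandwidth comparison for a hypothetical camera configuration:
# 1024 x 1024 pixels, 8 bits per sample, 30 frames per second.
width, height, fps = 1024, 1024, 30
pixels_per_s = width * height * fps

bayer_raw = pixels_per_s * 1   # 1 byte/pixel: one colour sample per pixel
rgb_full  = pixels_per_s * 3   # 3 bytes/pixel: full RGB after interpolation
yuv_422   = pixels_per_s * 2   # 2 bytes/pixel: full-resolution luma, chroma shared per pixel pair

for name, rate in (("raw Bayer", bayer_raw), ("RGB", rgb_full), ("YUV 4:2:2", yuv_422)):
    print(f"{name:10s}: {rate / 1e6:5.1f} MB/s")
# raw Bayer :  31.5 MB/s
# RGB       :  94.4 MB/s  (3x the raw data)
# YUV 4:2:2 :  62.9 MB/s  (2x the raw data)
```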
As Bayer to RGB conversion is very CPU intensive, the FPGA (Field Programmable Gate Array) of a frame grabber can be used for this task. The main advantage of a colour camera based on a single chip is that its electronics are identical to those of a monochrome camera; only the sensor has to be overlaid with colour filters, resulting in relatively low additional cost.
There are other types of colour filter array, such as CYGM filters (Cyan, Yellow, Green, Magenta), also called complementary colour filters. These are sometimes used on CCTV cameras, as the filters attenuate less light, which increases low-light sensitivity.
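As a sketch of the relationship between complementary and primary colours, assuming idealized responses (Cy = G + B, Ye = R + G, Mg = R + B; a real camera applies a calibrated conversion matrix instead):

```python
import numpy as np

# Idealized complementary responses: each CYGM sample expressed as a
# linear combination of the underlying R, G, B values.
M = np.array([
    [0.0, 1.0, 1.0],   # Cyan    = G + B
    [1.0, 1.0, 0.0],   # Yellow  = R + G
    [1.0, 0.0, 1.0],   # Magenta = R + B
    [0.0, 1.0, 0.0],   # Green   = G
])

def cygm_to_rgb(cygm):
    """Least-squares recovery of (R, G, B) from one (Cy, Ye, Mg, G) sample."""
    rgb, *_ = np.linalg.lstsq(M, cygm, rcond=None)
    return rgb

print(cygm_to_rgb(np.array([1.0, 1.0, 1.0, 0.5])))  # -> [0.5 0.5 0.5] for mid-grey
```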
Three-chip colour cameras
Another method of producing a full colour image is to use a prism that splits the incoming light into its red, green and blue components, with a separate sensor for each colour. If an application requires accurate colour measurement rather than a simple differentiation between colours, a 3-chip colour camera will be the best choice.
The obvious advantage of this method compared with the single-chip design is that the full spatial resolution of each colour plane is retained and the colour fidelity and accuracy are improved. The main disadvantage is that 3-chip colour cameras are inevitably larger and more expensive. The higher cost is due to the extra material needed, three sensors and a prism, and to the high precision required in aligning the sensors to the prism. In addition, these cameras require special colour-corrected lenses, as otherwise colour fringes will make the subsequent analysis more difficult.
Most 3-chip cameras for machine vision have sensors in which the pixels are 'co-site aligned', i.e. each pixel in each colour plane is in the same position relative to the incoming light. This is particularly important for high-precision measurement applications. Some cameras not specifically designed for machine vision have sensors that are spatially offset (typically in one axis only, by half a pixel), which gives the appearance of greater spatial resolution when viewed on a monitor or TV. For machine vision applications where subpixel accuracy is required, co-site aligned cameras should be used.
Multilayered colour imaging sensor technology
Multilayered colour imaging sensors use a different method of capturing colour images. This type of sensor uses a layered design, where each point on the sensor array has photosensitive receptors for all three primary colours, mounted on top of each other as shown in the diagram below.
However, multilayered sensor technology is not yet mature and there are a number of problems associated with it. There can be a degree of electron diffusion at the deepest (red) layer, with a corresponding loss of sharpness. Another potential problem is colour 'noise' caused by crosstalk between the photosensitive layers. Also, because the three colour-sensitive layers are stacked on top of each other, the output needs to be weighted in order to 'white balance' the image, and the colour balance tends to alter depending on the intensity of the incoming light.
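As an illustration of the per-channel weighting involved, a minimal 'grey world' white balance sketch (one common approach, used here purely for illustration; the gains a real camera applies depend on the sensor and the illumination):

```python
import numpy as np

def grey_world_balance(rgb):
    """Scale each colour channel so the three channel means are equal.

    rgb: HxWx3 float array with values in [0, 1]. Assumes the scene is
    colour-neutral on average ('grey world'); real cameras may derive
    their gains differently, e.g. from a reference white patch.
    """
    means = rgb.reshape(-1, 3).mean(axis=0)        # per-channel averages
    gains = means.mean() / np.maximum(means, 1e-6) # boost under-represented channels
    return np.clip(rgb * gains, 0.0, 1.0)          # apply gains and clamp
```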