Can Embedded Vision revolutionise imaging and machine vision?
Embedded Vision systems allow technical systems to "see" in an innovative way, improving their interactions with other systems, just as sight enhances human interaction with the surroundings. Examples include aiding communication between people, recognising danger in and around our surroundings, or monitoring very delicate tasks. In a similar way, mechanical sight enables technical systems to solve tasks that they would not otherwise be able to solve.
Although there is no generally accepted standard definition of the term "Embedded Vision", a common concept of this technology does exist: compact vision systems (based on adapted camera modules) are integrated directly into machines or devices, where, combined with bespoke computing platforms and low power consumption, they make intelligent image processing possible in the most diverse applications without requiring a classic industrial PC. However, there are different types of Embedded Vision systems.
According to Peter Keppler, Director of Corporate Sales at STEMMER IMAGING, the difference between Embedded Vision systems and classic vision systems is simple to state:
"The latter works on the basis of Industrial PCs (which are freely programmable via special image processing libraries) and the image capture is carried out by cameras fitted with suitable lenses. An essential element is the lighting, which should be optimised for the respective application and ensure adequate illumination of the test object to suit the application."
Following the image capture, the recorded camera data is forwarded via suitable interface cables to image acquisition cards, which, in turn, coordinate the actual image processing in the computer's CPU. On some of these cards, also known as frame grabbers, image preprocessing takes place in order to reduce the load on the host CPU. Finally, this kind of system delivers the results of the evaluation, which are mostly used for quality assurance of manufactured goods.
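The classic pipeline described above can be sketched in a few lines: capture, preprocessing (the kind of step a frame grabber might offload), and a pass/fail evaluation on the CPU. The frame values, threshold and defect limit below are illustrative assumptions, not parameters of any specific product.

```python
# Minimal sketch of a classic PC-based vision pipeline:
# capture -> preprocessing -> CPU evaluation for quality assurance.

def capture_frame():
    """Stand-in for a frame grabber delivering an 8-bit grayscale frame."""
    return [
        [200, 198, 201, 199],
        [197,  60,  55, 202],  # dark pixels: a possible surface defect
        [201,  58,  62, 198],
        [199, 200, 197, 203],
    ]

def preprocess(frame, threshold=128):
    """Binarise the frame: 1 marks a pixel darker than the threshold."""
    return [[1 if px < threshold else 0 for px in row] for row in frame]

def evaluate(binary, max_defect_pixels=2):
    """Quality decision: fail the part if too many defect pixels are found."""
    defects = sum(sum(row) for row in binary)
    return {"defect_pixels": defects, "pass": defects <= max_defect_pixels}

result = evaluate(preprocess(capture_frame()))
print(result)  # 4 dark pixels -> the part fails the check
```

In a real system the capture step would come from a camera SDK and the binarisation would typically run on the frame grabber itself; the structure of the decision, however, stays the same.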
Embedded PCs: Embedded and freely programmable
Embedded PCs differ from classic IPC systems in that the functionality of the image acquisition cards is already permanently integrated in the Embedded PC. Like classic Industrial PCs, they also enable the connection of standard external cameras with all the image sensors available on the market. The Embedded PCs are based on Windows Embedded™ operating systems and are also freely programmable, allowing for the flexible adaptation of the systems to the respective requirements (with the use of special libraries for industrial image processing).
The connection to the machine takes place via proprietary bus adaptors or special industrial Ethernet cards. According to Keppler, the Windows environments employed give the user all the well-known advantages and disadvantages. As examples of Embedded PC systems, he mentions the CVS Image Station Compact from STEMMER IMAGING, the IPD GV family from Teledyne Dalsa or the Matrix series from Adlink, among others.
Smart cameras and vision sensors: Simple to operate and intelligent
Smart cameras and vision sensors go one step further: with these systems the camera sensor, the image capture, the processor for the image evaluation and the I/O interfaces as well as in some cases the lighting and the lens are combined in what is usually a very compact and robust housing. Vision sensors usually feature a graphical "point and click" interface.
These systems also frequently work with integrated lighting and lenses, which keeps the application simple but at the same time reduces flexibility - a cause for concern for Keppler: "Such systems are usually optimised for certain applications and do not allow switching to a completely different one, for example from a pure presence check to measuring or reading tasks. A further restriction is the limited number of available image sensors that can be used in such products." There is no precise conceptual differentiation between smart cameras and vision sensors.
Deep Embedded Vision: specialised in a single task
For fully integrated vision systems that can also work without operating systems, Keppler suggests the designation Deep Embedded Vision - a term that has not yet established itself in the market. "Such systems are developed especially for a certain task and are not freely programmable. The communication options of such systems are firmly defined during the design process and can only be changed later with relatively high expenditure of time and effort." High initial costs are incurred during the system design of such Deep Embedded Vision systems, and these costs can only be amortised through production in large numbers. As a rule, such products are characterised by very low power consumption, allowing very long run times even when powered by a battery.
One current example of such Deep Embedded Vision systems is the RealSense technology from Intel®. These camera systems are based on the Intel® RealSense™ D4 vision processor, which features the most advanced algorithms for processing the raw image streams from the integrated image sensors, from which precise 3D depth information is calculated with a high resolution and at an impressive frame rate. These 3D images are then output as a result for further processing.
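Stereo depth systems of this kind typically recover depth from the disparity between two rectified image sensors via the standard relationship depth = focal length × baseline / disparity. A minimal sketch of that calculation follows; the focal length, baseline and disparity values are illustrative assumptions, not the parameters of the RealSense hardware.

```python
# Depth from stereo disparity for a rectified stereo pair:
# depth [m] = focal_length [px] * baseline [m] / disparity [px].

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Return the depth in metres corresponding to one pixel's disparity."""
    if disparity_px <= 0:
        return float("inf")  # no stereo match: point is effectively at infinity
    return focal_length_px * baseline_m / disparity_px

# Example: 640 px focal length, 50 mm baseline, 32 px disparity -> 1.0 m
print(depth_from_disparity(640, 0.05, 32))  # 1.0
```

A dedicated vision processor applies this per pixel across the whole image, which is why computing dense depth maps at high frame rates calls for specialised hardware rather than a general-purpose CPU.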
A further example of such Deep Embedded Vision systems comes from the field of text recognition: compact modules with integrated camera, OCR software and radio link are mounted directly on mechanical meters and enable inexpensive automatic recording without replacing the existing meters with electronic versions. The meter readings are forwarded to the master computer at defined time intervals, making manual reading unnecessary. Due to the extremely low power consumption and the short duty cycles, these modules achieve maintenance-free running times of the order of ten years.
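Running times of this order follow from simple duty-cycle arithmetic: the average current draw is the duty-weighted mix of the active and sleep currents. The capacity and current figures below are illustrative assumptions, not measurements of any specific module.

```python
# Battery-life estimate for a duty-cycled module. The module wakes briefly
# to image, recognise and transmit the reading, then sleeps the rest of the
# time; the average current is dominated by the sleep current.

def runtime_years(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Estimated runtime in years from the duty-weighted average current."""
    avg_ma = active_ma * duty_cycle + sleep_ma * (1 - duty_cycle)
    hours = capacity_mah / avg_ma
    return hours / (24 * 365)

# e.g. a 2600 mAh cell, 50 mA while active, 10 uA asleep,
# and roughly 10 seconds of activity per day:
duty = 10 / (24 * 3600)
print(round(runtime_years(2600, 50.0, 0.010, duty), 1))  # ~18.8 years
```

Even a modest cell therefore lasts well over a decade once the active window is kept short, which is exactly the design trade-off these meter-reading modules exploit.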
System on Chip: Extreme flexibility
According to Keppler, System on Chip (SoC) represents a new, extremely flexible Embedded Computer technology that has recently been in great demand. "SoCs make bespoke systems possible and enable the simple adaptation of the most diverse image sensors via standard cameras and numerous standard interfaces such as GigE Vision, USB3 Vision or MIPI. Through the integration of powerful hardware such as FPGAs, GPUs or DSPs, they make local pre-processing and data reduction available where necessary. In addition, standard-compliant image distribution for further processing and standard-compliant machine communication via OPC UA are possible."
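The local pre-processing and data reduction Keppler mentions can be as simple as binning the image on the device before transmission. The sketch below shows 2x2 averaging, which cuts the data volume by a factor of four; the frame contents are illustrative.

```python
# On-device data reduction as an SoC/FPGA stage might perform it:
# 2x2 binning averages each pixel block so only a quarter of the data
# leaves the device over the (bandwidth-limited) interface.

def bin_2x2(frame):
    """Average non-overlapping 2x2 blocks of an even-sized grayscale frame."""
    h, w = len(frame), len(frame[0])
    return [
        [
            (frame[r][c] + frame[r][c + 1]
             + frame[r + 1][c] + frame[r + 1][c + 1]) / 4
            for c in range(0, w, 2)
        ]
        for r in range(0, h, 2)
    ]

frame = [
    [10, 12, 20, 22],
    [14, 16, 24, 26],
    [30, 32, 40, 42],
    [34, 36, 44, 46],
]
print(bin_2x2(frame))  # [[13.0, 23.0], [33.0, 43.0]]
```

On a real SoC this step would run in FPGA or GPU fabric rather than in Python, but the principle is the same: reduce the data as close to the sensor as possible.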
Further advantages that Keppler says this kind of ARM-based system offers under Linux, given the right software environment, are source-code compatibility with PC systems, free programmability in C/C++ and access to image processing libraries with optimised algorithms. These systems are also characterised by compact designs, simple integration and low power consumption.
"Since SoCs require only low initial investment and system costs, in addition to which duplication is possible, this technology has the potential, in my view, to revolutionise imaging and machine vision", says Keppler.
Selecting the ideal system
The world of automation is becoming increasingly complex. Buzzwords such as Industry 4.0, Internet of Things (IoT) and its extension Industrial Internet of Things (IIoT), Cloud computing, distributed computing, artificial intelligence, machine learning and many other technologies are indicative of the many innovative developments that are presenting users and developers of vision systems with big challenges in the selection of the ideal system for their respective application.
Keppler is convinced: "Against this backdrop it will become increasingly important for users to be able to rely on advice from expert partners for these key technologies. STEMMER IMAGING has been focusing on machine vision for 30 years, has played a crucial role in the development of the industry and covers all the technologies described with its portfolio."
Keppler maintains that software is the essential key to the ideal vision system for the respective application: "In order to offer the necessary flexibility it should be independent of the hardware platform and the operating system, whilst at the same time being compatible with the common source codes and standards." Here, too, Keppler regards his company as being in an ideal position to accompany users along the way to a successful application development on account of the in-house software development, the long-established Common Vision Blox software platform and professional support.
However, Keppler doesn't envisage classic image processing becoming obsolete any time soon as a result of the rapid developments in the field of Embedded Vision: "Embedded Vision systems have experienced an enormous boom in the past years with regard to both their performance and variety of uses, and they offer their users a flexible range of options. Nevertheless, there will continue to be numerous applications in which the classic PC-based vision systems represent the ideal solution."