Machine Vision Technology Forum 2019 - Presentations
15 October 2019
|CAD-based 3D object recognition
Stefan Schwager, STEMMER IMAGING AG
|Tips for setting up an industrial embedded vision system
Christoph Noth, Allied Vision Technologies GmbH
|Lighting - a key enabler in machine vision applications
Matthias Dingjan, CCS Europe N.V.
|Hyperspectral imaging - future technology and applications
Jon Vickers, STEMMER IMAGING AG
|Accelerating deep learning with hardware acquisition and pre-processing
Sebastien Granatelli, Silicon Software GmbH
|Key features of a quality machine vision optical filter
Georgy Das, Midwest Optical Systems Inc.
|New method for surface inspection using structured illumination
Fernando Schneider, SAC Sirius Advanced Cybernetics GmbH
|Performance comparison of different embedded processors
Martin Kersting, STEMMER IMAGING AG
|Get the glare out! New polarized sensors paired with LED lighting solutions
James Gardiner, Metaphase Technologies Inc.
|Imaging trends in 2019 and beyond
Graham Brown, Teledyne DALSA Inc.
|Machine vision in challenging environments - which IP protection rating makes sense for my application?
Andreas Beetz, Autovimation GmbH
|What could be the reason? Troubleshooting machine vision applications
Maurice Lingenfelder, STEMMER IMAGING AG
11:00-11:25 Coffee Break
|Modular compact sensors: A new type of 3D laser triangulation sensor
Stephan Kieneke, Automation Technology GmbH
|Embedded vision cookbook
Rupert Stelz, STEMMER IMAGING AG
|Intrinsic calibration of light line sensors
Stefan Schwager, STEMMER IMAGING AG
|Optical 2D measurement using the example of connectors and pins
Lars Fermum, STEMMER IMAGING AG
|Exploring the advantages of AI-enabled machine vision in intelligent manufacturing
Xavier Serra, Adlink Technology Inc.
|Vision systems of the future - a combination of technologies
Patrick Menge, Teledyne DALSA Inc.
|Prism-based multispectral imaging for machine vision applications
Michael Lund, JAI A.S.
|Random bin picking: Last step for a complete factory automation process
Toni Ruiz, INFAIMON S.L.
|Lasers for embedded vision
Stephan Broche, Z-Laser GmbH
|Modern measurement technologies using the example of connectors and pins
Lars Fermum, STEMMER IMAGING AG
|CoaXPress state-of-the-art and new features brought by version 2.0
|Deep learning as part of modern machine vision applications
Adriano Biocchi, MVTec Software GmbH
12:30-13:55 Lunch Break
|Keynote: Greenhouse 4.0, New technologies to feed the world
Ton van Dijk, Global Head of Sales at LetsGrow.com
|Fabric recycling with NIR hyperspectral cameras
Dr. Mathieu Marmion, Specim Spectral Imaging Ltd.
|Bin picking from programming to CAD modeling
Pavol Cobirka, Photoneo s.r.o.
|Machine vision project approach
Harm Hannekamp, STEMMER IMAGING B.V.
|Next generation linescan imaging technologies
Andreas Lange, Teledyne DALSA Inc.
|Modern vision application development and rapid prototyping in CVB with C++, .Net and Python
Rupert Stelz, STEMMER IMAGING AG
|A deeper understanding of some of the complexities within LED lighting control
Gardasoft Vision Ltd.
|Practical aspects of time-of-flight imaging for machine vision
Ritchie Logan, Odos Imaging
|Developing cost-effective multi-camera systems with MIPI sensor modules
Roland Ackermann, The Imaging Source Europe GmbH
|Polarisation of light – making hidden things visible
Tobias Henzler, STEMMER IMAGING AG
|Vision system validation
Steve Mott, CEI Components Express Inc.
|High performance SWIR cameras in machine vision and process control
Guido Deutz, Xenics N.V.
|Machine learning basics - an introduction to the types of machine learning
Stefan Schwager, STEMMER IMAGING AG
15:00-15:25 Coffee Break
|Improving productivity with high-quality, eye-safe 3D machine vision
Moez Tahir, Zivid
|Imaging without processing - recording image streams
Jon Vickers, STEMMER IMAGING AG
|Standards, system benefits drive convergence of machine vision, industrial internet of things (IIoT)
Tony Carpenter, Smart Vision Lights
|Scanning glass and specular surfaces with smart 3D technology
Lucien Vleugels, LMI Technologies Inc.
|But this can be easily seen - solution strategies for the selection of the ideal illumination
Lars Fermum, STEMMER IMAGING AG
|Smart infrared cameras: a new technological approach for Industry 4.0
Michael Wandelt, Automation Technology GmbH
|Calibration in machine vision - methods and requirements
Maurice Lingenfelder, STEMMER IMAGING AG
|Influence of optical components on imaging performance
Jörg Blätz, Jos. Schneider Optische Werke GmbH
|sCMOS cameras - what is the difference over CMOS
Janice Lee, Vieworks Co. Ltd.
|Hyperspectral apps - vertical solutions for industry
Markus Burgstaller, Perception Park GmbH
|Shape-from-focus - an unexpected but powerful 3D imaging technology
Tobias Henzler, STEMMER IMAGING AG
|Neural networks - functionality and alternative approaches
Stefan Schwager, STEMMER IMAGING AG
Greenhouse 4.0, New technologies to feed the world
LetsGrow.com has been the leading data platform in high-tech greenhouse horticulture for almost 20 years. Starting as a software platform that made it possible to share data between production companies, LetsGrow.com has developed into a company that turns data into valuable information with which companies can optimize their cultivation processes and guarantee results. Where the data used to come from a single, manageable source, it now comes from many different sources: wireless temperature sensors, packaging machines, and increasingly vision techniques. Correctly interpreting the data, using AI solutions and entering into partnerships ensures that the resulting solutions actually add value for companies.
Personal: Ton van Dijk, Global Head of Sales at LetsGrow.com. Three years working for LetsGrow.com, but almost 25 years of experience in greenhouse horticulture. He worked for 15 years in the family business, a large cucumber and pepper nursery. After selling the company, he continued working in greenhouse horticulture as a consultant at various companies. "Being able to use my practical knowledge and combine it with new technology to really add value for companies is the best thing there is. And within a company like LetsGrow.com, that is more than possible."
Exploring the advantages of AI-enabled machine vision in intelligent manufacturing
Deep learning machine vision provides significant capability to migrate conventional AOI (Automated Optical Inspection) assets into intelligent, AI-enabled equipment with comprehensive modelling capability. Identifying the requisite IoT edge device specification is a critical consideration in next-generation AOI system design. Acquiring data, executing specific functions and fully utilizing the computational nodes across the entire IoT network is necessary to provide not only the training and execution of AI, but also the management of all AI-enabled AOI nodes (equipment) across the smart factory.
This presentation will discuss procedures for scaling entire vision systems from a single computing device to an entire IoT Edge network quickly and easily, and streaming the featured images to storage and analytic services, to achieve a true IoT solution.
Machine vision in challenging environments - which IP protection rating makes sense for my application?
More and more applications in machine vision require creative approaches to protect sensitive technology from mechanical, chemical or thermal stress. While dust is predominantly involved in wood processing, in food and pharmaceutical industries contamination of the inspected products by the camera system must also be avoided. The IP protection classes (International Protection Codes) precisely define the extent to which the undesirable ingress of foreign particles and moisture into the interior of the device is prevented. This presentation explains the selection of an application-specific enclosure and accessories for the vision system to enable it to be used in a wide range of harsh environments.
Standards, System Benefits Drive Convergence of Machine Vision, Industrial Internet of Things (IIoT)
While Industry 4.0 and the Internet of Things (IoT) are two of today’s coolest buzzwords, machine vision solutions have been at the forefront of machine-to-machine communications since the vision industry’s inception in the 1980s, mainly because there is no automated solution unless the machine vision result, whether an offset, a pass/fail judgement or other critical data, is communicated to nearby robots, manufacturing equipment and the staff that operate them for subsequent action.
In the past, machine vision solutions passed data along a variety of transport layers, whether consumer interfaces such as Gigabit Ethernet and USB, or dedicated industrial interfaces such as Camera Link. Supporting data interface, transport and library standards such as GigE Vision and GenICam further improved the ease with which machine vision solutions could communicate with nearby machines. Today, these standards extend further into the system, beyond just defining the camera component, through standards such as the Local Interconnect Network (LIN) and the Mobile Industry Processor Interface (MIPI), which enable cost-effective electronic device design and sub-assembly communication. At the same time, wired networks such as industrial Gigabit Ethernet are being complemented by wireless edge networks that will enable easy plug-and-play operation with all system peripherals, not just the camera-PC pipeline. This presentation will explore how these old and new standards are enabling new, cost-effective machine vision solution designs.
Tips for setting up an industrial embedded vision system
There are many advantages to embedded solutions, such as lower costs, lower energy consumption, compact design and embedded boards with ever-increasing performance, all of which make a migration from PC-based to embedded solutions interesting in the industrial sector. However, when does the use of an embedded solution really make sense? Which hardware and software architectures are suitable for the application? Which components and questions must be considered when setting up an embedded vision system?
In the evaluation phase for a new machine vision system, these aspects must be carefully understood and the requirements verified. Embedded systems bring users of industrial image processing who have previously worked PC-based not only advantages but also new challenges, such as a new hardware architecture, new interfaces, new data processing paradigms and open-source operating systems. This presentation provides an overview of the most important key factors and presents possible set-up scenarios for industrial embedded vision.
Creating robust machine vision systems with Windows IoT – Microsoft's embedded platform
Many people think of embedded as small ARM- and Linux-powered machines; however, there is an alternative closer to the traditional Windows-powered PC. Using Windows 7 or 10 Professional has always had issues when building an industrial vision system, including the risk of system corruption on sudden power loss, the risk of Windows updates affecting the system, the risk of viruses, and the ability of users to make changes to the machine.
With Windows IoT Enterprise, Microsoft provides the ability to overcome these issues without the need to change your software, by locking down and customising the PC for the embedded application it is intended for. This seminar explores the capabilities of this embedded version of Windows and shows how you can take advantage of these features in an easy, point-and-click way when creating a PC-based vision system.
Embedded Vision Cookbook
Embedded Vision covers a wide range of applications and solutions. It involves the right combination of hardware, camera and software. Getting started is often the hard part.
The goal of this presentation is to give a rough overview of our view of Embedded Vision. We present a recipe for the design steps of an example embedded vision system and show how Common Vision Blox (CVB) can help.
Performance comparison of different embedded processors
This presentation compares different processor platforms (ARM-based, NVIDIA Jetson, Intel Atom-based) and deals with restrictions regarding the acquisition of camera data. The results of benchmark tests show how efficiently the same application runs on different platforms. Furthermore, the execution of a CUDA-optimized algorithm on a TX1 system and on a Windows graphics card is compared with execution on a conventional Intel or ARM CPU.
Developing cost-effective multi-camera systems with MIPI sensor modules
Embedded vision enables compact, high-performance applications using cost-effective MIPI sensor modules for multi-camera systems with subsequent data processing via AI and deep learning. This lecture examines industrial camera interface options (USB3, GigE and MIPI) with respect to NVIDIA Jetson boards (Nano, TX2, Xavier) and compares their respective performance. Multi-camera control for these embedded systems, using MIPI CSI-2’s short internal connection or via FPD-Link III’s long (up to 15m) external connection, is explained in theory and practice. Additionally, bandwidth, maximum number of sensor heads and the advantages of direct control via the ISP are discussed. An overview of possible sensor modules and their respective feature sets (e.g. global shutter, rolling shutter etc.) concludes the presentation.
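As a back-of-the-envelope illustration of the bandwidth question raised above, the following sketch checks whether a CSI-2 link can carry a sensor's raw data rate. The per-lane rate and overhead factor are illustrative assumptions, not figures from the lecture:

```python
def sensor_data_rate_gbps(width, height, fps, bits_per_pixel):
    """Raw sensor data rate in Gbit/s (no protocol overhead)."""
    return width * height * fps * bits_per_pixel / 1e9

def csi2_capacity_gbps(lanes, gbps_per_lane=2.5, efficiency=0.8):
    """Usable CSI-2 bandwidth, with an assumed protocol-overhead factor."""
    return lanes * gbps_per_lane * efficiency

# Illustrative: 1920x1080 @ 60 fps, 10-bit raw, on a 2-lane link
rate = sensor_data_rate_gbps(1920, 1080, 60, 10)
cap = csi2_capacity_gbps(2)
print(f"sensor: {rate:.2f} Gbit/s, link: {cap:.2f} Gbit/s, fits: {rate < cap}")
```

The same arithmetic extends to multi-camera setups: multiply the sensor rate by the number of sensor heads and compare against the aggregate link capacity.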
Lasers for Embedded Vision
Machine vision lasers are becoming more deeply integrated into optical measurement systems such as 3D displacement sensors. The form factor and cost structure of the laser system can be reduced significantly; however, it is essential to preserve a high degree of flexibility to support the large number of variants typically required to cover all use cases of a product platform. Further considerations include the ability to exchange the laser in the field without re-calibration, while supporting all possible lasers without impact on system integration and laser safety. To further help the OEM reduce cost and form factor, the concept of integrating the driver electronics directly into the customer's PCB design is considered, with licensed software providing the intelligence, including the ability to aid predictive maintenance by flagging a calculated imminent EOL (end of life) situation.
Modular compact sensors: A new type of 3D laser triangulation sensor
3D laser triangulation sensors are being used more and more in the development of industrial inspection systems. They usually consist either of discrete setups with a camera and a line laser projector, or they are factory-assembled and calibrated devices with an integrated camera and line lasers.
Discrete setups have the advantage of flexibility towards the requirements of the application (FOV, triangulation angle, working distance), but they demand increased effort for engineering, component encapsulation, calibration and integration in the application. On the other hand, factory-calibrated 3D laser triangulation sensors enable easy integration and shorten the application development remarkably. However, their design cannot be customized to meet 100% of the needs of the application without significant effort and high NRE costs.
This lecture presents a new concept of 3D laser triangulation sensors which overcomes the aforementioned limitations. Thanks to their modular design, the Modular Compact Sensors (MCS) combine the design flexibility of discrete setups with the advantages of factory-calibrated 3D sensors.
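The basic geometry behind laser triangulation can be sketched in a few lines. The simple single-angle model below is an assumption for illustration; real sensors, whether discrete or factory-calibrated, apply a full calibration model instead:

```python
import math

def height_from_shift(pixel_shift, mm_per_pixel, triangulation_angle_deg):
    """Surface height change from the observed laser-line shift.

    Assumes the simple geometry where the laser sheet is vertical and the
    camera views it at the given triangulation angle.
    """
    shift_mm = pixel_shift * mm_per_pixel
    return shift_mm / math.tan(math.radians(triangulation_angle_deg))

# Illustrative: a 12-pixel line shift at 0.05 mm/pixel, 30 degree angle
print(f"height: {height_from_shift(12, 0.05, 30):.3f} mm")
```

The triangulation angle trades resolution against occlusion, which is exactly the kind of application-specific parameter a modular sensor design lets you choose.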
Random bin picking: Last step for a complete factory automation process
In modern industry, automation and the use of robots are essential parts of the production processes. A key element of the ‘factory of the future’ is the complete automation of processes and its adaptation to the more dynamic and flexible industrial environments.
Nowadays, in spite of the high degree of integration of robots in plants, some processes still involve operators manually picking randomly placed objects from containers.
Automation at this stage of the process requires a robot and a vision system that dynamically identifies the position of the objects inside the containers. This is what we know as bin picking. Bin picking consists of a hardware solution (vision + robot) and a software solution (image analysis + communication) that allows randomly placed parts to be extracted from containers.
Bin Picking provides the complete automation of processes with a series of advantages:
Reduction of heavy work and low-value-added tasks for operators
Maximization of space in the factory, thanks to being more compact than current mechanical solutions
Adaptation to flexible manufacturing processes
Reduction of cycle times, increasing machine productivity
Scanning glass and specular surfaces with smart 3D technology
This presentation will focus on using laser-based smart 3D technology to solve the challenges inherent in scanning glass and specular surfaces. Specifically, it will address cell phone glass assembly inspection, which is a common consumer electronics (CE) application in which the laser sensor scans the cell phone glass edge in its frame and generates high-resolution 3D data. The data is then used to extract edge and gap features, and measure flushness and offset in order to ensure tight assembly tolerances are met.
The presentation will explore how 3D smart sensors leverage an optimized optical design and specialized laser projection technology to achieve the best inspection results. Key sensor requirements will be discussed, including low sensitivity to the target angle; the ability to eliminate noise caused by laser scattering at the edge of the target surface; accurate measurement of different surface colors and surface types (e.g., coated, glossy, transparent); the need for scanning and inspection at speeds greater than 5 kHz in order to handle a continuous flow of production; and a low total cost of ownership to ensure maximum profitability.
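The gap-and-flush feature extraction described above can be caricatured on a single cross-section profile. This is a deliberately simplified stand-in, not the sensor vendor's algorithm; the profile values and region bounds are invented for illustration:

```python
import numpy as np

def gap_and_flush(x, z, gap_lo, gap_hi):
    """Estimate gap width and flushness from one cross-section profile.

    x, z: profile coordinates (mm); gap_lo/gap_hi bound the gap region.
    Flushness = median height difference between the two sides;
    gap = width of the region between them.
    """
    left = z[x < gap_lo]
    right = z[x > gap_hi]
    flush = float(np.median(left) - np.median(right))
    gap = float(gap_hi - gap_lo)
    return gap, flush

x = np.linspace(0, 10, 101)
z = np.where(x < 4.0, 0.50, 0.42)       # frame at 0.50 mm, glass at 0.42 mm
gap, flush = gap_and_flush(x, z, 4.0, 4.6)
print(f"gap: {gap:.2f} mm, flush: {flush:.2f} mm")
```

A real sensor would first locate the gap edges in the 3D data and fit the surfaces robustly; the point here is only that the assembly tolerances reduce to a few numbers per profile.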
Practical Aspects of Time-of-Flight Imaging for Machine Vision
Time-of-Flight (ToF) imaging is a well-known technology, yet remains relatively novel in machine vision. This talk will examine the practical aspects of ToF imaging and its applicability to general machine vision tasks.
The talk will look at the processing occurring on board a ToF imaging device and, through the use of application examples, will also look at the post-processing steps on the client PC for successful deployment.
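For continuous-wave ToF devices, the core on-board calculation is the phase-to-distance conversion, which can be sketched as follows (the 20 MHz modulation frequency is an illustrative assumption):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def distance_from_phase(phase_rad, mod_freq_hz):
    """Distance for a continuous-wave ToF camera: d = c * phi / (4 * pi * f)."""
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

def ambiguity_range(mod_freq_hz):
    """Range beyond which the phase wraps: c / (2 * f)."""
    return C / (2 * mod_freq_hz)

# Illustrative: 20 MHz modulation, quarter-turn measured phase shift
print(f"{distance_from_phase(math.pi / 2, 20e6):.3f} m of "
      f"{ambiguity_range(20e6):.3f} m unambiguous range")
```

The ambiguity range is one of the practical aspects such a talk has to address: a lower modulation frequency extends the range but reduces depth resolution.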
New method for surface inspection using structured illumination
A new approach to structured illumination opens up new possibilities for fast and efficient surface inspection. Dedicated illumination patterns help to find the finest 3D defects and can identify specular reflection properties. A flat area illumination provides these patterns, which are formed by electronic means. By increasing the illumination power as well as the control frequency over existing structured illuminators, effective inspection of both static and moving parts is possible. Process integration is simplified by easy interfacing to the sensor, and topographic images of the surface are used for automatic testing. The presentation finishes by comparing the new method to existing surface inspection methods: photometric stereo and advanced technologies for specular surfaces.
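Photometric stereo, one of the existing methods the presentation compares against, can be sketched as a per-pixel least-squares problem under a Lambertian reflectance assumption (the light directions below are invented for illustration):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Per-pixel surface normals from >= 3 images under known directional lights.

    images: (k, h, w) intensities; light_dirs: (k, 3) unit vectors.
    Solves L @ n = i per pixel in the least-squares sense (Lambertian model).
    """
    k, h, w = images.shape
    i = images.reshape(k, -1)                           # (k, h*w)
    g, *_ = np.linalg.lstsq(light_dirs, i, rcond=None)  # (3, h*w) albedo * normal
    albedo = np.linalg.norm(g, axis=0)
    n = g / np.maximum(albedo, 1e-12)
    return n.reshape(3, h, w), albedo.reshape(h, w)

# Illustrative: a flat surface facing +z, lit from three directions
lights = np.array([[0.5, 0.0, 0.866], [-0.5, 0.0, 0.866], [0.0, 0.5, 0.866]])
true_n = np.array([0.0, 0.0, 1.0])
imgs = (lights @ true_n).reshape(3, 1, 1) * np.ones((3, 4, 4))
normals, albedo = photometric_stereo(imgs, lights)
print(normals[:, 0, 0])   # approximately [0, 0, 1]
```

The recovered normal field is what makes tiny topographic defects visible even when they leave no trace in a single intensity image.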
Shape-from-Focus – an unexpected but powerful 3D imaging technology
An unexpected but powerful 3D imaging technology will be presented that uses an automated variation of the focus position of a telecentric lens system: shape-from-focus (SFF), also known as focus variation. Intelligent processing of the acquired image stack allows both 3D range maps and 2D intensity images with a significantly enhanced depth of focus to be calculated. The seminar discusses the principle of the SFF technology, details the calculations used in CVB and the mechanical layout of the lens system, and finishes with representative application examples.
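The core of SFF can be sketched with a per-pixel focus measure over the image stack. This toy version uses a modified-Laplacian measure and no sub-slice interpolation; it is a sketch of the principle, not the CVB implementation:

```python
import numpy as np

def shape_from_focus(stack, z_positions):
    """Depth map from a focus stack via a per-pixel focus measure.

    stack: (n, h, w) images taken at focus positions z_positions.
    The sharpest slice per pixel gives the depth; picking the sharpest
    pixel values also yields an all-in-focus intensity image.
    """
    # modified Laplacian: |2I - I(x-1) - I(x+1)| + |2I - I(y-1) - I(y+1)|
    lap = (np.abs(2 * stack - np.roll(stack, 1, axis=2) - np.roll(stack, -1, axis=2))
           + np.abs(2 * stack - np.roll(stack, 1, axis=1) - np.roll(stack, -1, axis=1)))
    best = np.argmax(lap, axis=0)               # (h, w) index of sharpest slice
    depth = np.asarray(z_positions)[best]
    all_in_focus = np.take_along_axis(stack, best[None], axis=0)[0]
    return depth, all_in_focus

# Illustrative stack: the middle slice is sharp (checkerboard), the others flat
z = [0.0, 0.1, 0.2]
sharp = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
stack = np.stack([np.full((8, 8), 0.5), sharp, np.full((8, 8), 0.5)])
depth, _ = shape_from_focus(stack, z)
print(depth[0, 0])   # 0.1: the checkerboard slice is the sharpest everywhere
```

Production implementations additionally interpolate the focus-measure curve between slices to obtain depth resolution finer than the focus step.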
Improving productivity with high-quality, eye-safe 3D machine vision
How do you stay ahead of the competition in a fast-paced machine vision and industrial automation market?
In this 20-minute lecture, we'll introduce you to a few crucial concepts required for implementing a flexible and scalable machine vision platform for everything from bin picking to manufacturing, inspection and assembly, and even logistics and e-commerce.
Most of today's industrial and collaborative robots are "visually impaired". Using one of STEMMER IMAGING's latest products, a unique 3D colour camera, forward-leaning customers gain a head start in their automation processes.
Attendees will learn how eye-safe, white structured-light hardware can reduce implementation time and solve more tasks over a flexible working distance, while accurately recognizing more objects.
Deep learning as part of modern machine vision applications
Machine vision is crucial to highly automated production processes, which increasingly rely on advances in artificial intelligence, such as deep learning. Besides higher automation levels, these technologies enable increased productivity, reliability, and robustness.
Since deep learning – at its core – is an approach to classifying data, many other machine vision technologies have to be considered as well. The presentation highlights the technology's role within industrial vision settings and shows the latest developments for solving machine vision applications.
Accelerating deep learning with hardware acquisition and pre-processing
As sensors and interfaces increase in resolution and speed, the complexity of image processing tasks such as deep learning reaches new boundaries. This seminar discusses how to solve these issues under real-world conditions, from the viewpoint of a field application engineer. Do not miss this interactive presentation using available development tools.
Machine learning basics - an introduction to the types of machine learning
Machine learning, and in particular deep learning with neural networks, is one of the most sought-after technologies in computer vision. Driven by astonishing results in recognition tasks and easy accessibility through free tools, it has gained widespread acknowledgement among many researchers and practitioners. Buzzwords such as neural nets, AI, learning rates, supervised learning and linear models are quite common nowadays. But what does all this mean and how does it actually work?
This presentation will briefly explain what machine learning is, how it works and what the most common buzzwords mean. After this talk we hope to have cleared up most of the confusion about this interesting topic, and if you want to learn more afterwards, please join the advanced talk “Neural networks - functionality and alternative approaches”.
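As a concrete taste of the supervised learning the talk introduces, here is a minimal k-nearest-neighbour classifier: a labelled-example learner with no training step beyond storing the data. The toy clusters are invented for illustration:

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Classify a query point by majority vote among its k nearest neighbours."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Two 2D clusters, labels 0 and 1
x = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [2.0, 2.0], [2.1, 1.9], [1.9, 2.2]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(x, y, np.array([1.8, 2.0])))   # -> 1
```

Supervised learning in one line: the labels of known examples decide the label of the unknown one.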
Neural networks - functionality and alternative approaches
Neural networks are currently among the most prominent methods for machine learning, if not the most prominent. While quite versatile and powerful, they are not the only means of machine learning. In this talk we will take a closer look at the functionality of neural networks. Their advantages and disadvantages compared to other machine learning methods, such as support vector machines, gradient-boosted trees and nearest neighbours, will be presented. If you are unfamiliar with machine learning, this talk might still be of interest, but we recommend attending our talk “Machine Learning Basics” first.
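As a toy illustration of the neuron model underlying neural networks, here is a single perceptron trained with the classic error-correction rule on a linearly separable problem. This is a sketch for intuition, not material from the talk:

```python
import numpy as np

def train_perceptron(x, y, epochs=20, lr=0.1):
    """Single artificial neuron: weighted sum, threshold, error-driven update."""
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(x, y):
            pred = 1.0 if xi @ w + b > 0 else 0.0
            w += lr * (yi - pred) * xi   # nudge weights towards the target
            b += lr * (yi - pred)
    return w, b

# Logical AND is linearly separable, so the perceptron converges
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
w, b = train_perceptron(x, y)
print([1 if xi @ w + b > 0 else 0 for xi in x])   # [0, 0, 0, 1]
```

Methods like support vector machines solve the same separating-boundary problem with very different machinery, which is exactly the comparison such a talk draws.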
Modern vision application development and rapid prototyping in CVB with C++, .Net and Python
Thanks to their object-oriented design and consistent integration into their respective runtime environments, the CVB++, CVB.Net and CVBpy APIs released with Common Vision Blox 2019 simplify the creation of complex image processing applications through high-level language features and proven design patterns, on the PC as well as on embedded platforms such as ROS. In addition, the various Python environments and LINQPad facilitate rapid development.
This seminar is a quick and playful exploration of different approaches to development in CVB. Alongside the design of the three new language-specific APIs in CVB, enhanced troubleshooting possibilities are discussed, along with the bridges to common runtime libraries such as Qt, WPF, Windows Forms and NumPy, all with the aid of practical examples.
Knowledge of at least one of the three languages is an advantage when attending this seminar.
Smart Infrared Cameras: A new technological approach for Industry 4.0
Although thermal imaging with infrared cameras has great potential, especially in industrial applications, it has made its way into automation and quality assurance only to a very limited extent. While the introduction of uncooled detectors more than 20 years ago provided the essential base technology for the design of industrial thermal cameras, many obstacles remain.
One important reason for the low uptake in industry is the lack of standard software for thermal imaging. Integrators have to use the SDKs provided by camera manufacturers to develop their own software solutions, which represents a significant barrier. Furthermore, the camera models available today are not designed for use in industrial applications: many manufacturers lack industrial application experience and do not see industry as a relevant, sizeable target market. With the rise of smart cameras, we are also seeing the decline of PC-based imaging systems for many applications. Reasons include the complexity of such systems, solution costs, stability, data safety, easy factory interfacing and maintenance effort.
The lecture presents a new device-related approach featuring smart thermal cameras to address the obstacles for practical applications and to make the potential of temperature imaging in industrial environments accessible.
Prism-based Multispectral Imaging for Machine Vision Applications
Despite the huge potential that hyperspectral imaging offers in quality and structural inspection of food, plant health and growth, environmental monitoring, pharmaceuticals, medical diagnosis, forensic sciences and thin-film analysis, its scope often seems limited in industrial environments. This is because the hyperspectral imaging technologies available today are slow, use low-resolution sensors, require complex image handling and are a costly investment to multiply. Furthermore, an application starting with a hyperspectral approach often comes to the conclusion that the number of relevant wavelength bands required is just 3 or 4. Today, hyperspectral applications are usually found in laboratories, where the main task is to identify the relevant bands to differentiate between two or more objects.
From an industrial perspective, multispectral imaging appears to have higher application potential: the complexity of data handling is much lower due to the reduced number of spectral bands, camera line/frame rates are higher and system costs are lower. Multispectral and hyperspectral imaging are not competitors; rather, they are complementary technologies when used in the right applications. Eventually, the information on the number and spectral nature of the bands identified with hyperspectral cameras can be used to design multispectral cameras, which can be used in real high-speed industrial environments. This is where the latest camera solutions find their applications.
Hyperspectral apps - vertical solutions for industry
Vibrational spectroscopy is based on the fact that molecules reflect, absorb or ignore electromagnetic waves of certain wavelengths. Hyperspectral sensors measure these responses and return a spectrum per spatial point, from which the chemical fingerprint of a material can be derived. This data requires extensive processing to be usable for vision systems.
Chemical Colour Imaging methods transform hyperspectral data into image streams, which can be configured to highlight chemical properties of interest and sent to image processing systems via protocols such as GigE Vision. Applications include recycling, food safety, quality assurance (e.g. pharma, food and packaging) and colour measurement.
The abstraction of hyperspectral cameras into purpose-specific vision cameras is enabled by software apps. Preconfigured chemical and/or physical material properties enable inspection tasks far beyond today's limits. Predefined, application-specific chemometric processing makes it possible to scale the delivery of a solution beyond that of general scientific methods.
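The idea of collapsing a hyperspectral cube into a colour image stream can be sketched as a weighted band reduction. The Gaussian band responses, wavelengths and scene below are invented for illustration and are not the presenter's method:

```python
import numpy as np

def chemical_colour_image(cube, wavelengths, band_weights):
    """Collapse a hyperspectral cube (h, w, bands) into a 3-channel image.

    band_weights maps an output channel (0..2) to a (centre_nm, width_nm)
    Gaussian response, so chemically meaningful bands become visible colours.
    """
    wl = np.asarray(wavelengths, dtype=float)
    out = np.zeros(cube.shape[:2] + (3,))
    for ch, (centre, width) in band_weights.items():
        w = np.exp(-0.5 * ((wl - centre) / width) ** 2)
        out[..., ch] = cube @ (w / w.sum())   # normalized weighted band average
    return out

# Illustrative: a 4x4 scene with 100 bands between 900 and 1700 nm (NIR)
wl = np.linspace(900, 1700, 100)
cube = np.ones((4, 4, 100))
rgb = chemical_colour_image(cube, wl, {0: (1200, 30), 1: (1450, 30), 2: (1660, 30)})
print(rgb.shape)   # (4, 4, 3)
```

Once reduced to a conventional 3-channel stream, any standard image processing pipeline, e.g. behind a GigE Vision interface, can consume the result.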
Fabric recycling with NIR hyperspectral cameras
Recycling is in the air, and textiles should also be recycled. To do so, however, a perfect identification of their fibre type is needed, and hyperspectral cameras offer new possibilities here. So far, the recycling of fabrics is done manually, with inherent and significant issues such as 1) repeatability, 2) reproducibility, 3) hygiene, 4) speed, 5) accuracy and 6) cost. Within this context, automated machine vision systems addressing all the previously mentioned issues would be very useful. Material identification requires contactless sensors able to measure the chemical composition from a distance, where NIR hyperspectral cameras excel. We measured different types of fabrics with such a camera, including synthetic fibres (e.g. polyester) and fibres of animal (e.g. wool) or plant (e.g. cotton) origin. The data were analyzed with a PLS-DA model. The results show that synthetic, animal and plant fibres could be sorted regardless of the colour of the textile, including dark ones. We believe these findings are of the utmost importance, opening a new industrial market driven by new EU laws.
Hyperspectral imaging - future technology and applications
Hyperspectral Imaging is one of the current trends in machine vision along with Industry 4.0, Embedded Vision and Deep Learning. The combination of spectroscopy and image processing opens up new fields of applications in machine vision. Chemical information that can be visualised using Chemical Colour Imaging (CCI) enables the acquisition of data that would not be possible with conventional image processing. Along with the hardware required for hyperspectral tasks, the talk will present applications and possibilities offered by this technology.
High performance SWIR cameras in machine vision and process control
Short-wave infrared (SWIR) imaging is used extensively in industrial markets. Supporting integrated machine vision systems, SWIR cameras enable efficient and reliable process control in applications where visible cameras cannot see. In this session we will introduce the use of SWIR imaging in the machine vision market. As technology advances, applications with demanding imaging requirements are emerging. We will discuss some of these requirements, helping you identify the specifications that are crucial or interesting for your application, together with the relevant SWIR cameras from Xenics able to meet them. Both area-scan and line-scan SWIR cameras will be covered, along with how they have evolved in recent years.
CoaXPress state-of-the-art and new features brought by version 2.0
The presentation introduces the CoaXPress standard and details its key features. For some of them, a comparison with the other machine vision standards is made.
In a second step, the improvements and new features introduced by the just-released version 2.0 of the standard are presented. Some indications about the future development of the standard will also be given.
Imaging trends in 2019 and beyond
Machine vision interfaces have not evolved much in the last decade, despite the emergence of USB and Ethernet ports in consumer PCs. The combination of high speeds with larger resolutions does, however, require high-bandwidth solutions to accommodate the demand in the industry. What are the current options, and which new ones will become available soon? 2D sensors are not only increasing in resolution and speed; polarization is one of the new trends. We will see how this is implemented and what new fields of application it opens up.
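On-chip polarized sensors typically capture four analyser orientations per 2x2 pixel block; the standard Stokes-parameter reduction behind such data can be sketched as follows (a generic formulation, not tied to any specific product):

```python
import numpy as np

def stokes_from_four_angles(i0, i45, i90, i135):
    """Stokes parameters and degree of linear polarisation (DoLP) from the
    four analyser orientations a polarised sensor captures per pixel block."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)      # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)          # angle of polarisation, rad
    return s0, dolp, aop

# Fully linearly polarised light at 0 degrees: Malus's law gives the intensities
s0, dolp, aop = stokes_from_four_angles(1.0, 0.5, 0.0, 0.5)
print(f"DoLP = {float(dolp):.2f}")   # 1.00 for fully polarised light
```

The DoLP and angle images are what reveal stress, coatings and other features invisible in plain intensity data.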
Vision systems of the future – a combination of technologies
The machine vision market is constantly driven by new innovations. The aim is to optimize and combine production processes with ever better machine vision hardware and software in order to continuously expand the range of possible applications. This seminar gives a short overview of possible scenarios.
Next generation linescan imaging technologies
Line scan technology is evolving to meet the ever more demanding application requirements in machine vision today. Multifield imaging, using either time division or spectral division, enables end users to capture multiple images, e.g. brightfield, darkfield and backlight, in a single scan.
Combined with advanced lighting, multifield imaging significantly improves detectability and takt time. Polarization imaging is also emerging for detecting birefringence, stress, surface morphology and material classification. In addition, a super-resolution 32k TDI line scan camera has been developed using pixel-offset technology to boost the signal-to-noise ratio.
sCMOS cameras - what is the difference over CMOS
Unlike existing CMOS and CCD sensors, sCMOS is uniquely capable of simultaneously providing a large field of view, high sensitivity and a wide dynamic range.
Because every pixel of a CCD sensor is exposed at once and the photoelectrons are converted into a signal at a common output port, the speed of image acquisition is limited: the more pixels that need to be transferred, the slower the camera's total frame rate. Instead of waiting for an entire frame to complete its readout, an sCMOS sensor can digitize rows as soon as they have been exposed, which allows rapid frame rates. Moreover, while other sensors suffer from image quality issues in low-light conditions, sCMOS sensors have improved sensitivity and enable the capture of high-quality images with very low noise even in poor conditions. With these features, sCMOS cameras have become the camera of choice for biometry, medical and scientific applications. The seminar looks at the technology and explains how it is different and the advantages it delivers.
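As a rough illustration of the readout difference described above, the following sketch compares frame readout times under a simplified model; all numbers are assumed for illustration and are not the specifications of any real camera.

```python
# Hedged sketch: back-of-envelope CCD vs sCMOS readout-time comparison.
# Pixel clock and row time below are illustrative assumptions only.

def ccd_readout_s(width, height, pixel_clock_hz):
    """CCD: every pixel is shifted serially through one common output port."""
    return (width * height) / pixel_clock_hz

def scmos_readout_s(height, row_time_s):
    """sCMOS: each row is digitised in parallel by per-column ADCs,
    so readout time scales with the number of rows, not pixels."""
    return height * row_time_s

w, h = 2048, 2048
t_ccd = ccd_readout_s(w, h, 40e6)    # assumed 40 MHz pixel clock
t_scmos = scmos_readout_s(h, 10e-6)  # assumed 10 us per row

print(f"CCD frame readout:   {t_ccd*1e3:.1f} ms")    # ~104.9 ms
print(f"sCMOS frame readout: {t_scmos*1e3:.1f} ms")  # ~20.5 ms
```

The parallel row readout is what lets sCMOS sustain high frame rates at full resolution in this simplified model.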
Vision System Validation
How confident are you that your vision system will operate without problems? The most underrated piece of the vision system ... the cable ... needs complete performance validation to ensure the user does not need to make a support call.
This presentation will outline how cables in each of the Vision standards should be validated to ensure consumer confidence.
Lighting - a key enabler in machine vision applications
Probably the most critical aspect of a machine vision application is lighting. Illuminating a target poorly causes loss of data and productivity, and results in lost profit. A professional approach involves the qualified selection of a light source and lighting technique, and their skilled placement with respect to the object to be inspected and the camera.
A deeper understanding of some of the complexities within LED lighting control
The majority of lighting control solutions within machine vision applications are ‘plug and play’. However, there are some instances where a deeper understanding of lighting control is required.
This presentation explores some of the complex areas within lighting control and explains the approach taken in providing the solutions.
Get the glare out! New polarized sensors paired with LED lighting solutions
Polarization has become a hot trend in machine vision with the launch of Sony’s polarized sensor series with many camera manufacturers embracing the technology. While polarized sensors and cameras can help make polarization easy, you need more than a polarized sensor or camera to have a perfect polarized image.
Polarized lighting can make or break a polarized image. Techniques such as cross-polarization and different lighting styles help a user produce the best polarized image. We will look at how polarized lighting works and how it interacts with Sony's polarized sensor.
In the presentation we will investigate Sony's polarized sensors in detail and look at best practices for pairing them with Metaphase polarized LED illumination. It finishes with an overview of some applications that can be solved using polarization technology.
Key features of a quality machine vision optical filter
Optical filters are critical components of machine vision systems. They’re used to maximize contrast, improve colour, enhance subject recognition and control the light that’s reflected from the object being inspected. Learn more about the different filter types, what applications they’re best used for and the most important design features to look for in each. Not all machine vision filters are the same.
Learn how to reduce the effects of angular short-shifting. Discover the benefits of filters that emulate the bell-shaped spectral output curve of the LED illumination being used. And find out more about the importance of a high-quality inspection process that limits the possibility for imperfections and enhances system performance.
Plus, learn more about the latest advances in machine vision filters. SWIR (short-wave infrared) filters are designed to enhance the image quality of InGaAs camera technology and are useful for applications imaging from 900-2300 nm. Wire-grid polarizers are effective in both visible and infrared ranges from 400-2000 nm and have an operating temperature of 100 °C per 1,000 hours.
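The angular short-shifting mentioned above can be estimated with the standard thin-film approximation: an interference filter's centre wavelength moves toward shorter wavelengths as the angle of incidence grows. The sketch below is illustrative only; the effective index is an assumed value, not a figure for any specific filter.

```python
import math

def shifted_cwl(cwl_nm, aoi_deg, n_eff=2.0):
    """Centre wavelength of an interference filter at a given angle of
    incidence (standard thin-film approximation; n_eff is an assumed
    effective refractive index, which varies by filter design)."""
    return cwl_nm * math.sqrt(1.0 - (math.sin(math.radians(aoi_deg)) / n_eff) ** 2)

# A 660 nm bandpass filter viewed 20 degrees off-axis:
print(round(shifted_cwl(660, 20), 1))  # ~650.3 nm: shifted toward shorter wavelengths
```

Filters that emulate the LED's bell-shaped output curve tolerate this shift better, because light at moderate angles still falls inside the passband.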
Influence of optical components on imaging performance
In addition to the lens, there are often other optical components in the optical path of a machine vision system. In many cases this is an optical filter attached to the front of the lens, more rarely a filter between the lens and the sensor. Another component may be a beam splitter, e.g. for a coaxial illumination. And last but not least, the sensor itself has crucial optical components such as the cover glass and micro lenses as part of every single pixel.
It is important that all components fit and work together in the right way. It can therefore be crucial whether the filter is positioned in front of the lens or in front of the sensor, whether the lens is designed for use with a beam splitter, and whether the beam characteristics of the lens harmonize with the micro lenses of the sensor.
Only when all components are carefully matched will the result meet the expectations for the entire imaging system.
Optical 2D measurement using the example of connectors and pins
Precise, metric measurement of components is a real challenge. Using the measurement of a connector's pin tips as an example, the talk covers some of the basic procedures and the problems that need to be solved along the way.
How should a two-dimensional camera system be set up, and which optical and illumination techniques are suitable for measuring with a front light? A recurring topic in this discussion is the choice of camera and resolution. Which software methods can be used? Difficulties such as depth of field, parallax effects and material properties, all of which can significantly reduce measurement accuracy, are also discussed.
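The resolution question raised above often starts with simple arithmetic: the object-space size of one pixel is the field of view divided by the sensor's pixel count. The numbers below are assumed for illustration, not taken from the talk.

```python
def pixel_resolution_mm(fov_mm, sensor_px):
    """Object-space size of one pixel: field of view divided by pixel count."""
    return fov_mm / sensor_px

# Assumed example: a 50 mm wide field of view on a 2448-pixel-wide sensor
res = pixel_resolution_mm(50.0, 2448)          # ~0.020 mm per pixel

# A common rule of thumb asks a feature or tolerance band to span several
# pixels, so subpixel algorithms have enough data to work with:
min_feature_mm = 3 * res
```

This first estimate says nothing about depth of field or parallax, which is exactly why those effects need the separate discussion the abstract promises.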
Modern measurement technologies using the example of connectors and pins
In addition to the classical measurements with area scan cameras and telecentric lenses, different modern measurement technologies can ease the life of a developer. How does measuring with "Shape-from-Shading" work and what are the limitations?
Very often, methods such as laser triangulation or structured light projection are used to measure pin tips. What are the advantages of these approaches when compared to 2D methods? What needs to be considered when it comes to software evaluation? What difficulties are to be expected? A further topic of the lecture is the generation of 3D data for measurement tasks using the "Depth-from-Focus" method and the required components with this approach.
Intrinsic calibration of light line sensors
We introduce a method for the intrinsic calibration of light line sensors. It is based on a collection of profiles, which are generated by randomly positioning a calibration target within the laser plane. The specific shape of the calibration target makes the method tolerant of tilts and rotations during profile generation. We show tools for the convenient generation of profiles and give statements about the achievable accuracy.
Calibration in machine vision - methods and requirements
Calibration is an important part of imaging and machine vision. It forms the basis for metrically correct measurements. In addition, suitable methods can be used to determine the relationship between several individual sensors. It is not always easy to keep track of one's own requirements.
The lecture describes the differences between intrinsic and extrinsic calibration and shows in which cases a calibration process is necessary. Examples are shown for 2D as well as 3D applications.
"But this can be easily seen" - Solution strategies for the selection of the ideal illumination
Selecting the right lighting is often underestimated, yet it is frequently the key to success. The aim is to create repeatable and reliable high contrasts to achieve a robust software evaluation. Unfortunately, image processing is not always that simple, since it is not the object itself that is evaluated, but the light reflected by the object. The material properties of components, however, can be tricky and cause many difficulties for the user.
Which illumination technologies are available, and which strategies help with shiny objects? What effect does the colour of the object have? In addition to the macroscopic shape properties of the objects, the microscopic shape properties are often forgotten. In particular, micrographs, textures and other surface variations such as coatings make life difficult for the user. What are the approaches here? Quick recipes from the cookbook of illumination can help you succeed.
The polarisation of light – making hidden things visible
Did you know that we humans perceive light mainly through its intensity and wavelength? Light, however, has a further, largely unknown property by which it can be characterised: its oscillation plane, or polarisation. While the human eye and common colour and monochrome cameras detect colour and intensity differences very well, polarisation is not directly visible. Fortunately, polarisation imaging has recently attracted attention and polarisation cameras have been brought to market. These cameras reveal the third "dimension" of light, making it usable for machine vision.
In this presentation, the following questions will be answered: What is polarisation? What types of polarisation exist and how are they described? Which sensor technologies are used to measure polarisation? What is the benefit of polarisation in industrial imaging and which inspection tasks can uniquely be solved by using it?
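As a hedged sketch of how such measurements are commonly described: division-of-focal-plane polarisation sensors expose each pixel group through micro-polarizers at 0, 45, 90 and 135 degrees, from which the linear Stokes parameters and the degree and angle of linear polarisation follow. The function below illustrates the textbook formulas; it is not the processing pipeline of any particular camera.

```python
import math

def stokes_from_quad(i0, i45, i90, i135):
    """Linear Stokes parameters from the four micro-polarizer orientations
    (0/45/90/135 degrees) of a division-of-focal-plane pixel group."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)             # total intensity
    s1 = i0 - i90                                   # horizontal vs vertical
    s2 = i45 - i135                                 # diagonal components
    dolp = math.hypot(s1, s2) / s0                  # degree of linear polarisation, 0..1
    aolp = 0.5 * math.degrees(math.atan2(s2, s1))   # angle of linear polarisation
    return dolp, aolp

# Fully horizontally polarised light: DoLP = 1.0, AoLP = 0 degrees
print(stokes_from_quad(1.0, 0.5, 0.0, 0.5))
```

High DoLP regions reveal glare, stress or birefringence that intensity images alone cannot separate.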
What could be the reason? Troubleshooting machine vision applications
What to do if nothing works as planned? Vision systems are becoming more and more complex and multi-layered. Problems can be difficult to classify because the cause of error and symptoms are often far apart. This lecture will show methods of troubleshooting and will explain how to avoid errors or how to recognize them at an early stage.
Machine vision project approach
Which level of technical project management and organisation management is needed for your machine vision project and how will it cope with your day to day automation challenges?
Based on years of experience with turnkey machine vision projects this presentation will give you insights into the do’s and don’ts of machine vision projects. If you want to avoid hidden pitfalls, join this session and learn about the preparation phase and the decision tree to solve your projects with less risk and obtain better results.
Imaging without processing – recording image streams
Throughout the history of machine vision, there has always been a demand to record images, and a vertical industry has grown up around it. With TV-standard cameras it was not unusual to see video recorded on videotape, but today's formats are much more varied and have far higher bandwidth, so we assume a PC-based recording solution.
The applications are many, from human training to offline inspection by machines and archiving of data, but the core specification is always bandwidth. Image size and format have an influence, but the real question is: “How much data?”
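The "how much data?" question reduces to simple arithmetic; the camera parameters below are assumed for illustration.

```python
def data_rate_mb_s(width, height, bits_per_pixel, fps):
    """Sustained data rate the recording system must absorb, in MB/s."""
    return width * height * bits_per_pixel / 8 * fps / 1e6

# Assumed example: one 5-megapixel, 8-bit camera at 100 fps
rate = data_rate_mb_s(2448, 2048, 8, 100)   # ~501 MB/s sustained

# Recording time available on 1 TB of storage at that rate, in hours:
hours_per_tb = 1e6 / rate / 3600            # roughly half an hour
```

At roughly 500 MB/s per camera, a multi-camera system quickly exceeds what a single disk can sustain, which is why the storage strategy matters as much as the interface.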
The machine vision world has always been a three-way balance between cameras, acquisition technology and PC performance, all of which impact a recording system. Newer technology raises performance, meaning that what was once a difficult, custom application is now relatively easy.
In this talk we will look at the limitations and possibilities, and how to create an efficient, high-speed and reliable recording system. There are strategies to help in high-speed, high-bandwidth and multi-camera recording systems and these will be explored through this presentation.