Machine Vision Technology Forum 2019 - STEMMER IMAGING




Machine Vision Technology Forum 2019 - Presentations

Here you will find an overview of all sessions offered at our Machine Vision Technology Forum 2019 in Stockholm, Sweden.

Registration for the event will start in mid-July 2019.

22 October 2019


CAD-based 3D object recognition
Tips for setting up an industrial embedded system
Allied Vision Technologies GmbH
Lighting as key in machine vision applications
CCS Europe N.V.
Hyperspectral imaging - technology, applications and future
Image acquisition and pre-processing focusing on deep learning in practice and hardware
Silicon Software GmbH


Key features of a quality machine vision filter
Midwest Optical Systems Inc.
New method for surface inspection using structured illumination
SAC Sirius Advanced Cybernetics GmbH
Performance comparison of different embedded processors
Get the glare out! New polarized sensors paired with LED lighting solutions
Metaphase Technologies Inc.
Imaging trends in 2019 and beyond
Teledyne DALSA Inc.

10:30-11:25 Coffee Break


Machine vision in challenging environments - Which IP protection makes sense for my application?
Autovimation GmbH
Calibration methods and requirements
Modular compact sensors: new type of 3D laser triangulation sensors
Automation Technology GmbH
What could be the reason? Troubleshooting machine vision applications
Embedded vision cookbook


But this can be easily seen - Solution strategies for the selection of the ideal illumination
Exploring the advantages of AI-enabled machine vision in intelligent manufacturing
Adlink Technology Inc.
Vision systems of the future - a combination of technologies
Teledyne DALSA Inc.
Prism-based multispectral imaging for machine vision applications
Random bin picking: Last step for a complete factory automation process
Infaimon S.L.


Lasers for Embedded Vision
Z-Laser Optoelektronik GmbH
Imaging without processing - recording image streams
Deep learning as part of modern machine vision applications
MVTec Software GmbH
Fabrics recycling with NIR hyperspectral cameras
Specim Spectral Imaging Ltd.
Influence of optical components on the imaging performance
Jos. Schneider Optische Werke GmbH

12:30-13:55 Lunch Break


Optical 2D measurement using the example of connectors and pins
Next generation linescan imaging technologies
Teledyne DALSA Inc.
Modern application development and rapid prototyping with CVB++, CVB.Net and CVBpy
A deeper understanding of some of the complexities within LED lighting control
Gardasoft Vision Ltd.
Practical aspects of time-of-flight imaging for machine vision
Odos Imaging


Modern measurement technologies using the example of connectors and pins
Developing cost-effective multi-camera systems using MIPI sensor modules
The Imaging Source Europe GmbH
High performance SWIR cameras in machine vision and process control
Xenics N.V.
The polarisation of light – making hidden things visible
Improving productivity with high-quality, eye-safe 3D machine vision

15:00-15:25 Coffee Break


Vision system validation
CEI Components Express Inc.
Standards, system benefits drive convergence of machine vision, Industrial Internet of Things (IIoT)
Smart Vision Lights
Machine learning basics
Scanning glass and specular surfaces with smart 3D technology
LMI Technologies Inc.
Smart infrared cameras: a new technological approach to Industry 4.0
Automation Technology GmbH


Intrinsic calibration of light line sensors
Shape-from-Focus - an unexpected but powerful 3D imaging technology
Neural networks - functionality and alternatives
Hyperspectral apps for industrial inspection
Perception Park GmbH




Exploring the advantages of AI-enabled machine vision in intelligent manufacturing


Deep learning machine vision provides significant capability to upgrade conventional AOI assets into intelligent, AI-enabled equipment with comprehensive modeling applications. Identifying the right optimized edge devices is a critical consideration in next-generation AOI system design. Acquiring data, executing specific applications, and fully utilizing the computational nodes across the entire IoT network are necessary to provide not only AI model training and provisioning capability, but also manageability of the AI-enabled AOI nodes (equipment) in the smart factory.

This presentation will discuss procedures for quickly and easily scaling entire vision systems from a single computing device to an entire IoT edge network, and for streaming the featured images to storage and analytics services, to achieve a true AIoT solution.


Machine Vision in challenging environments: What IP protection makes sense for my application?

autoVimation

More and more new applications in digital image processing require creative approaches to protect sensitive technology from mechanical, chemical or thermal stress. While dust is predominantly involved in wood processing, contamination of the products by the camera system must also be avoided in the food and pharmaceutical industries. The IP protection classes (International Protection Codes) precisely define the extent to which the undesirable ingress of foreign particles and moisture into the interior of the device is prevented.

This presentation explains the selection of an application-specific enclosure and accessories for the vision system to enable it to be used in a harsh environment.


Standards, System Benefits Drive Convergence of Machine Vision, Industrial Internet of Things (IIoT)

Smart Vision Lights

While IIoT is one of today’s coolest buzzwords, machine vision solutions have been at the forefront of machine-to-machine communications since the vision industry’s inception in the 1980s. This is mainly because there is no automated solution unless the machine vision data – whether an offset, a pass/fail judgement, or other critical data – is communicated to nearby robots, manufacturing equipment, and the engineers, technicians and managers who operate them for subsequent action.

In the past, machine vision solutions passed data along a variety of transport layers, whether consumer interfaces such as Gigabit Ethernet and USB, or dedicated industrial interfaces such as Camera Link. Supporting data interface, transport and library standards such as GigE Vision and GenICam further improved the ease with which machine vision solutions could communicate with nearby machines.

Today, these standards extend further into the system, beyond just defining the camera component, through standards such as the Local Interconnect Network (LIN) and the Mobile Industry Processor Interface (MIPI), which enable cost-effective electronic device design and sub-assembly communication. At the same time, wired networks such as industrial Gigabit Ethernet and coaxial are being complemented by wireless edge networks that will enable more plug-and-play operation and control of all system peripherals, not just the camera-PC pipeline. This presentation will explore how these old and new standards are enabling new, cost-effective machine vision solution designs.



How to set up an embedded system for industrial embedded vision - Requirements, components, and solutions

Allied Vision

Embedded solutions offer many advantages, such as lower costs, lower energy consumption and compact designs, and increasingly performant embedded boards make a migration from PC-based to embedded solutions attractive in the industrial sector. But when does the use of an embedded solution really make sense? Which hardware and software architectures are suitable for the application? Which components and questions must be considered when setting up an embedded vision system?

In the evaluation phase for a new image processing solution, these aspects must be carefully considered and the requirements verified. Embedded systems bring not only advantages to users of industrial image processing who have previously worked with PC-based systems, but also new challenges such as new hardware architectures, interfaces, new data processing paradigms, and open-source operating systems.

This presentation provides an overview of the most important key factors and presents possible set-up scenarios for Industrial Embedded Vision.


Performance comparison of different embedded processors


This presentation compares different processor platforms (ARM-based, NVIDIA Jetson, Intel Atom-based) and deals with restrictions regarding the acquisition of camera data. The results of benchmark tests show how efficiently the same application runs on the different platforms. Furthermore, a CUDA-optimized algorithm running on a TX1 system and on a Windows graphics card is compared with its execution on a conventional Intel or ARM CPU.


Embedded Vision Cookbook


Embedded Vision covers a fairly wide range of applications and solutions. It involves the right combination of hardware, camera and software. Getting started is often the hard part.

The goal of this presentation is to provide a rough overview of Embedded Vision from our perspective. We show a recipe for the design steps of an example embedded vision system and how Common Vision Blox (CVB) can help with that.


Developing cost-effective multi-camera systems using MIPI sensor modules with ...

The Imaging Source

... CSI-2/FPD-Link III (up to 15m) Connection to Embedded Boards

  • Foundations: MIPI CSI-2 and FPD-Link III
  • Advantages of MIPI sensor modules
  • Disadvantages of MIPI sensor modules
  • Multi-camera systems
  • Maximum number of usable sensor heads
  • Bandwidth considerations
  • Which sensors are currently available?
  • Aspects to be considered
  • Software development
  • (HALCON Embedded)
  • Hardware development
  • Existing CSI-2 systems and their limits
  • From the internal interface to external connection
  • FPD-Link III
  • Jetson TX2, Nano and Xavier Embedded Boards
  • Outlook
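The "Bandwidth considerations" bullet above boils down to a simple feasibility check: does the sensor's raw data rate fit into the CSI-2 link? Here is a minimal sketch (all figures below are illustrative assumptions, not values from the talk):

```python
# Rough CSI-2 bandwidth feasibility check (illustrative numbers only).

def required_bandwidth_gbps(width, height, fps, bits_per_pixel, overhead=1.1):
    """Raw sensor data rate in Gbit/s, with ~10% assumed protocol overhead."""
    return width * height * fps * bits_per_pixel * overhead / 1e9

def csi2_capacity_gbps(lanes, gbps_per_lane=2.5):
    """Aggregate capacity of a CSI-2 link (2.5 Gbit/s per lane assumed)."""
    return lanes * gbps_per_lane

need = required_bandwidth_gbps(1920, 1080, 60, 12)   # 12-bit Full HD @ 60 fps
have = csi2_capacity_gbps(lanes=2)
print(f"need {need:.2f} Gbit/s, have {have:.2f} Gbit/s, ok={need <= have}")
# → need 1.64 Gbit/s, have 5.00 Gbit/s, ok=True
```

The same arithmetic answers the "maximum number of usable sensor heads" question: divide the aggregate capacity by the per-sensor requirement.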


Lasers for Embedded Vision

Z-Laser

Machine vision lasers are increasingly becoming deeply integrated parts of optical measurement systems such as 3D displacement sensors. The form factor and cost structure of the laser system can be reduced significantly; however, it is essential to preserve a high degree of flexibility. Otherwise, the large number of variants typically required to cover all use cases of a product platform (wavelength, optics, laser power) cannot be provided. Further considerations include field exchange without recalibration of the laser, support of all possible laser types without impact on the system integration, and preserving laser safety.

In order to further aid the OEM in reducing cost and form factor, the driver electronics circuit can be integrated directly into the customer’s PCB design, with our software running under license. At the same time, the embedded laser system should provide means of predictive maintenance by flagging a calculated imminent EOL (end-of-life) situation.



Modular Compact Sensors (MCS): New type of 3D laser triangulation sensors

Automation Technology

3D laser triangulation sensors are being used more and more in the development of industrial inspection systems. They usually consist either of discrete setups with a camera and a line laser projector, or of factory-assembled and calibrated devices with integrated camera and line laser.

Discrete setups have the advantage of being customizable to the requirements of the application (FOV, triangulation angle, working distance), but they demand increased effort for engineering, component encapsulation, calibration and integration into the application. Factory-calibrated 3D laser triangulation sensors, on the other hand, enable easy integration and shorten application development remarkably. However, their design cannot be fully customized to the needs of the application without significant effort and high NRE costs.

This lecture presents a new concept of 3D laser triangulation sensors, which allows overcoming the aforementioned limitations. Thanks to their modular design the Modular Compact Sensors (MCS) combine the design flexibility of discrete setups with the advantages of factory calibrated 3D sensors.
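The triangulation-angle parameter mentioned above directly determines how a shift of the laser line maps to object height. A minimal geometric sketch (a simplified model; real sensors use full calibration, which is exactly what the factory-calibrated devices provide):

```python
import math

def height_from_line_shift(shift_mm, triangulation_angle_deg):
    """Convert the observed lateral shift of the laser line (already scaled
    to millimetres in the measurement plane) into an object height.
    Simplified geometric model: h = shift / tan(theta)."""
    theta = math.radians(triangulation_angle_deg)
    return shift_mm / math.tan(theta)

print(height_from_line_shift(2.0, 45.0))  # at 45°, height equals shift: ≈ 2.0 mm
print(height_from_line_shift(2.0, 30.0))  # shallower angle: ≈ 3.46 mm
```

The trade-off is visible in the numbers: a shallower triangulation angle gives more height sensitivity per pixel of line shift, at the cost of occlusion and a larger sensor footprint.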


Random bin picking: Last step for a complete factory automation process


In the current industry, automation and the use of robots are essential parts of production processes. A key element of the ‘factory of the future’ is the complete automation of processes and their adaptation to more dynamic and flexible industrial environments.

Nowadays, in spite of the high degree of integration of robots in plants, some processes still involve operators manually picking randomly placed objects from containers.

The automation at this stage of the process requires a robot and a vision system that identifies the position of the objects inside the containers dynamically. This is what we know as bin picking. Bin picking consists of a hardware solution (vision + robot) and software solution (image analysis + communication) that allows extracting random parts from containers.

Bin Picking provides the complete automation of processes with a series of advantages:

  • Reduction of heavy work and low-value added tasks for operators
  • Maximization of space in the factory thanks to being more compact than current mechanical solutions
  • Adaptation to flexible manufacturing processes
  • Reduction of cycle times increasing machine productivity


Scanning glass and specular surfaces with smart 3D technology

LMI Technologies

This presentation will focus on using laser-based smart 3D technology to solve the challenges inherent in scanning glass and specular surfaces. Specifically, it will address cell phone glass assembly inspection, which is a common consumer electronics (CE) application in which the laser sensor scans the cell phone glass edge in its frame and generates high-resolution 3D data. The data is then used to extract edge and gap features, and measure flushness and offset in order to ensure tight assembly tolerances are met.

The presentation will explore how 3D smart sensors leverage an optimized optical design and specialized laser projection technology to achieve the best inspection results. Key sensor requirements will be discussed, including low sensitivity to the target angle; the ability to eliminate noise caused by laser scattering at the edge of the target surface; accurate measurement of different surface colors and surface types (e.g., coated, glossy, transparent); the need for scanning and inspection at speeds greater than 5 kHz in order to handle a continuous flow of production; and a low total cost of ownership to ensure maximum profitability.
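The flushness measurement described above can be illustrated with a toy example (the function name, data and the median-based approach are illustrative assumptions, not LMI's algorithm):

```python
import numpy as np

def flushness_mm(profile_z, frame_mask, glass_mask):
    """Toy flushness measurement on a calibrated height profile across the
    frame-glass transition: median glass height minus median frame height.
    Medians are used so single laser-scatter outliers do not bias the result."""
    return float(np.median(profile_z[glass_mask]) - np.median(profile_z[frame_mask]))

# Hypothetical profile: three frame points, then three glass points (mm).
z = np.array([0.00, 0.01, -0.01, 0.12, 0.11, 0.13])
frame = np.array([True, True, True, False, False, False])
print(flushness_mm(z, frame_mask=frame, glass_mask=~frame))  # glass ~0.12 mm proud
```

A gap measurement works analogously in the lateral direction, locating the edge positions of the two segments instead of their heights.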


Practical Aspects of Time-of-Flight Imaging for Machine Vision

odos imaging

Time-of-Flight (ToF) imaging is a well known technology, yet remains relatively novel in machine vision. This talk will examine the practical aspects of ToF imaging and applicability for general machine vision tasks.

The talk will look at the processing occurring on board a ToF imaging device and through the use of application examples, will also look at the post-processing steps on the client PC for successful deployment.
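The core arithmetic of continuous-wave ToF imaging can be sketched as follows (a textbook phase-shift model; actual devices add multi-frequency unwrapping and per-pixel calibration, which is part of the on-board processing the talk covers):

```python
from math import pi

C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(phase_rad, mod_freq_hz):
    """Continuous-wave ToF: distance from the measured phase shift,
    d = c * phase / (4 * pi * f_mod)."""
    return C * phase_rad / (4 * pi * mod_freq_hz)

def ambiguity_range_m(mod_freq_hz):
    """Maximum unambiguous distance: c / (2 * f_mod)."""
    return C / (2 * mod_freq_hz)

print(ambiguity_range_m(20e6))        # ≈ 7.49 m at 20 MHz modulation
print(tof_distance_m(pi / 2, 20e6))   # quarter-cycle phase shift ≈ 1.87 m
```

The ambiguity range is why practical sensors combine several modulation frequencies: a higher frequency gives better depth resolution but wraps around sooner.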


Bin Picking - From Programming to CAD Modeling


  • Skill set required for Bin Picking
  • CAD simulation and modeling
  • Designed for robotic integrators
  • Sustainability of Bin Picking Studio
  • Benefits of CAD modeling
  • Challenges
  • Future


New method for surface inspection using structured illumination

SAC Sirius Advanced Cybernetics GmbH

Structured illumination opens up new possibilities for fast and efficient surface inspection. Dedicated illumination patterns help to find the finest 3D defects and can identify specular reflection properties. A flat area illumination generates these patterns electronically. Both the illumination power and the pattern frequency are far higher than with existing structured illumination systems, providing for effective inspection of static and moving parts.

Process integration is easy using the interface of the sensor. Topographic images of the surface are used for automatic testing. The new method is compared to existing surface inspection methods – photometric stereo and advanced technologies for specular surfaces.
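Photometric stereo, one of the comparison methods named above, can be sketched in a few lines of NumPy (a minimal textbook version assuming known light directions and a Lambertian surface, not SAC's method):

```python
import numpy as np

def photometric_stereo(L, I):
    """Classic photometric stereo: with k >= 3 known light directions L (k, 3)
    and per-pixel intensities I (k, n_pixels), solve I = L @ (albedo * n)
    for the scaled surface normal at each pixel."""
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # (3, n_pixels) scaled normals
    G = G.T
    albedo = np.linalg.norm(G, axis=1)
    normals = G / np.maximum(albedo, 1e-12)[:, None]
    return normals, albedo

# Sanity check: one pixel on a flat surface facing straight up, albedo 0.8.
L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
n_true = np.array([0.0, 0.0, 1.0])
I = (L @ n_true)[:, None] * 0.8
normals, albedo = photometric_stereo(L, I)
print(normals[0], albedo[0])   # ≈ [0 0 1], 0.8
```

Recovered normals can then be integrated into the topographic images the abstract mentions; specular surfaces violate the Lambertian assumption, which is exactly where the structured-illumination approach aims to do better.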


Shape-from-Focus – an unexpected but powerful 3D imaging technology


An unexpected but powerful 3D imaging technology will be presented that uses an automated variation of the focus position of a telecentric lens system: shape-from-focus (SFF), also known as focus variation. Intelligent processing of the acquired image stack allows calculating both 3D range maps and 2D intensity images with an enormously enhanced depth of focus.

Besides the principle of the SFF technology, details about the calculation in CVB, the mechanical layout of the lens system and representative application examples round off the presentation.
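The principle behind shape-from-focus can be sketched as follows (a minimal illustration using a squared-Laplacian focus measure; this is not the CVB implementation):

```python
import numpy as np

def focus_measure(img):
    """Squared discrete Laplacian as a simple per-pixel sharpness measure."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap ** 2

def shape_from_focus(stack):
    """stack: (n_slices, h, w) images at increasing focus positions.
    Returns an (h, w) integer depth map: the index of the sharpest slice
    per pixel, which maps to a height via the known focus step."""
    fm = np.stack([focus_measure(s.astype(float)) for s in stack])
    return np.argmax(fm, axis=0)

# Toy stack: only the middle slice contains sharp structure.
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
stack = np.stack([np.zeros((8, 8)), checker, np.zeros((8, 8))])
print(np.unique(shape_from_focus(stack)))   # every pixel is sharpest in slice 1
```

The all-in-focus 2D image mentioned in the abstract falls out of the same computation: for each pixel, take the intensity from the slice where its focus measure peaks.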


CAD-based 3D object recognition


This presentation introduces the new CVB tool DNC for CAD-based, fast 3D object recognition in calibrated point clouds. It shows details of the training process and the interpretation of output values, together with some workflow examples.


Improving productivity with high-quality, eye-safe 3D machine vision


How do you stay ahead of the competition in a fast-paced machine vision and industrial automation market?

In this 20-minute lecture, we'll introduce you to a few crucial concepts required for implementing a flexible and scalable machine vision platform for everything from bin picking to manufacturing, inspection and assembly, and even logistics and e-commerce.

Most of today's industrial and collaborative robots are "visually impaired", giving forward-leaning customers a head start when using STEMMER IMAGING's latest 3D color cameras in their automation processes.

Attendees will learn how eye-safe, white structured light hardware can reduce implementation time, solve more tasks over a flexible working distance, and accurately recognize more objects.



Deep learning as part of modern machine vision applications

MVTec GmbH

Machine vision is crucial to highly automated production processes, which increasingly rely on advances in artificial intelligence, such as deep learning. Besides higher automation levels, these technologies enable increased productivity, reliability, and robustness.

Since deep learning – at its core – is an approach to classifying data, many other machine vision technologies have to be considered as well. The presentation highlights the technology’s role within industrial vision settings and shows the latest developments for solving machine vision applications.


Image acquisition and pre-processing focusing on deep learning in practice and hardware

Silicon Software

Besides sensors and interfaces increasing in resolution and speed, the complexity of image processing tasks is reaching new boundaries. How to solve these tasks under real-world conditions is shown in this talk, given from the perspective of a field application engineer. Do not miss this interactive presentation based on development tools.


Machine Learning Basics


Machine learning, and in particular deep learning with neural networks, is one of the most sought-after technologies in computer vision. Driven by astonishing results in recognition and easy accessibility through free tools, it has gained widespread acknowledgement among researchers and practitioners. Buzzwords such as neural nets, AI, learning rates, supervised learning and linear models are quite common nowadays. But what does all this mean, and how does it actually work?

This presentation will briefly explain what machine learning is, how it works and what the most common buzzwords mean. After this talk we hope to have cleared up most of the confusion about this interesting topic; if you want to learn more afterwards, please join the advanced talk “Neural Networks – Functionality and Alternatives”.


Neural Networks – Functionality and Alternatives


Neural networks are currently among the most prominent, if not the most prominent, methods for machine learning. While quite versatile and powerful, they are not the only means of machine learning. In this talk we will have a closer look at the functionality of neural networks. Their advantages and disadvantages compared to other machine learning methods, such as support vector machines, gradient-boosted trees and nearest neighbours, will be presented as well.

If you are unfamiliar with machine learning, this talk might still be of interest but we recommend you attend our talk “Machine Learning Basics” first.


Modern Application Development and Rapid Prototyping with CVB++, CVB.Net and CVBpy


Thanks to their object oriented design and consistent integration into their respective runtime environment, the APIs for C++, the .Net languages and Python released in Common Vision Blox 2019 simplify the creation of complex image processing applications through high level language features and proven design patterns on the PC as well as on embedded platforms such as ROS or similar. In addition, the different Python environments and LINQPad facilitate rapid development – the quick and playful exploration of different solution approaches.

The design of the three new, language-specific APIs is presented comparatively - knowledge of at least one of the three languages is an advantage. The improved troubleshooting possibilities and the bridge to common runtime libraries (Qt, WPF, Windows Forms, NumPy) will be presented on the basis of practical examples.



Smart Infrared Cameras: A new technological approach for Industry 4.0

Automation Technology

Although thermal imaging with infrared cameras has great potential, especially in industrial applications, it has made its way into automation and quality assurance only to a very limited extent. While the introduction of uncooled detectors made the essential base technology for thermal industrial cameras available more than 20 years ago, many obstacles still remain.

One important reason for the low uptake in industry is the lack of standard software for thermal imaging. Integrators have to use the SDKs provided by the camera manufacturers to develop their own software solutions, which presents a high hurdle. Furthermore, the camera models available today are not consistently designed for industrial applications. Manufacturers lack application experience and still don‘t see industry as a relevant target market.

A third point is that the acceptance of computer-based imaging systems tends to decline; among the reasons are the complexity of such systems, cost, stability, data safety and maintenance effort.

The lecture presents a new device-related approach with smart thermal cameras to address the obstacles for practical applications and to make the potential of temperature imaging in industrial environments accessible.


Prism-based Multispectral Imaging for Machine Vision Applications


Despite the huge potential that hyperspectral imaging offers in quality and structural inspection of food, plant health and growth, environmental monitoring, pharmaceuticals, medical diagnosis, forensic sciences and thin-film analysis, its scope often seems limited in industrial environments. This is because the hyperspectral imaging technologies available today are slow, use low-resolution sensors, require complex image data handling and are a costly investment to multiply. Furthermore, an application starting with a hyperspectral approach often comes to the conclusion that the number of relevant wavelength bands required is just 3 or 4. Today, hyperspectral applications are usually found in laboratories where the main task is to identify the relevant bands to differentiate between two or more objects.

From an industrial perspective, multispectral imaging appears to have a higher application potential: the complexity of data handling is much lower due to the reduced number of spectral bands, camera line/frame rates are higher, and system costs are lower. Multispectral and hyperspectral imaging are not in competition with each other but are complementary technologies when used in the right applications. Eventually, the information on the number and spectral nature of bands identified with hyperspectral cameras can be used to design multispectral cameras that can be used in real high-speed industrial environments. This is where the latest camera solutions find applications.


Hyperspectral apps - ready to go

Perception Park

Vibrational spectroscopy is based on the fact that molecules reflect, absorb or ignore electromagnetic waves of certain wavelengths. Hyperspectral sensors measure those responses and return a spectrum per spatial point, from which the chemical fingerprint of a material can be derived. This data requires extensive processing to be usable for vision systems.

Chemical Colour Imaging methods transform hyperspectral data into image streams. These streams can be configured to highlight chemical properties of interest and are sent to image processing systems via protocols like GigE Vision. Applications: Recycling, food safety, Quality Assurance (e.g. Pharma, Food and Packaging), colour measurement etc.

The abstraction of hyperspectral cameras to purpose-specific vision cameras is enabled by software apps. Preconfigured chemical and/or physical material properties enable inspection tasks far beyond today's limits. Predefined chemometric processing achieves selectivity on the scale of scientific methods.
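The band-to-colour abstraction at the heart of Chemical Colour Imaging can be illustrated with a toy sketch (band indices and the normalisation are illustrative placeholders, not Perception Park's chemometric processing):

```python
import numpy as np

def cube_to_colour(cube, band_triplet):
    """Collapse a hyperspectral cube (h, w, bands) into a streamable
    3-channel image: pick three bands of interest and normalise each
    resulting channel to the 0..1 range."""
    img = cube[:, :, list(band_triplet)].astype(float)
    lo = img.min(axis=(0, 1), keepdims=True)
    hi = img.max(axis=(0, 1), keepdims=True)
    return (img - lo) / np.maximum(hi - lo, 1e-12)

cube = np.random.rand(4, 4, 100)          # toy 100-band cube
rgb = cube_to_colour(cube, (10, 50, 90))  # placeholder band indices
print(rgb.shape)                          # (4, 4, 3)
```

The resulting 3-channel stream is what can then be forwarded to a conventional image processing system over GigE Vision, as the text describes.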


Hyperspectral imaging – technology, applications and future


Hyperspectral Imaging is one of the current trends in machine vision along with Industry 4.0, Embedded Vision and Deep Learning. The combination of spectroscopy and image processing opens up new fields of applications in machine vision. Chemical information that can be visualised using Chemical Colour Imaging (CCI) enables the acquisition of data that would not be possible with conventional image processing. Along with the hardware required for hyperspectral tasks, the talk will present applications and possibilities offered by this technology.


Fabrics recycling with NIR hyperspectral cameras

Specim Spectral Imaging

Recycling is in the air. We hear about it everywhere, even for unexpected products. Who would expect that ten-year-old trousers and shirts, dirty and full of holes, would still have value? Since 2018, new EU environmental rules have been pushing toward the recycling of used fabrics and garments. Indeed, textile reuse and recycling reduce environmental impact compared to incineration and landfilling, the usual final fate of old textiles.

Looking at the raw materials used to make textiles, most of them could be recycled. However, to do so, a perfect identification of their fiber types is needed. Hyperspectral cameras offer new possibilities here.

So far, recycling of fabrics has been done manually, which has inherent and significant issues:

  • Repeatability: a person is not able to sort fabrics reliably during several hours of tedious work
  • Reproducibility: two employees do not necessarily sort fabrics in the same manner
  • Hygiene: textiles and fabrics may be dirty and could contain allergens or have been used in hazardous environments
  • Speed: humans do not perform as fast as automated systems
  • Accuracy: fabrics are difficult to identify just by their appearance, texture or color
  • Cost: in the long run, manual work is always costly

Within this context, automation would be very useful. A machine vision system would address all the previously mentioned issues related to manual work: it is repeatable, reproducible, contact-free, fast, accurate and cost-efficient. A machine vision system dedicated to sorting materials requires sensors able to measure chemical composition from a distance. This is a task where NIR hyperspectral cameras outperform all other vision technologies.

We measured different types of fabrics with a NIR hyperspectral camera. The samples, both woven and knitted, were made of different materials: synthetic fibers (acrylic and polyester), animal fibers (silk, wool, merino and alpaca) and plant fibers (linen and cotton).

All these fabrics had different colors and textures, some even being dark and black.

The data were normalized and analyzed with a PLS-DA model. The results show that synthetic, animal and plant-derived fibers could be sorted regardless of the color of the textile, including dark ones. We believe these findings are of the utmost importance, opening a new industrial market driven by new EU laws. Besides, we would like to highlight that most garments are based on cotton, a very demanding crop in terms of water, pesticides and insecticides. Recycling it can only be an asset for all of us.
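The normalize-then-classify pipeline described above can be sketched as follows (SNV is a common normalisation for NIR spectra; the PLS-DA model is replaced here by a nearest-centroid classifier purely for illustration, and the toy spectra are synthetic):

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: zero mean, unit std per spectrum (row),
    removing offset and scale effects before classification."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / np.maximum(sd, 1e-12)

def fit_centroids(X, y):
    """Mean spectrum per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def classify(X, centroids):
    """Assign each spectrum to the nearest class centroid."""
    labels = list(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array(labels)[np.argmin(d, axis=0)]

# Toy spectra: two "synthetic" samples share one shape, two "cotton" another,
# with different offsets and scales that SNV removes.
x = np.linspace(0, 2 * np.pi, 50)
spectra = np.stack([np.sin(x), 2 * np.sin(x) + 1, np.cos(x), 3 * np.cos(x) - 2])
y = np.array(["synthetic", "synthetic", "cotton", "cotton"])
Xn = snv(spectra)
pred = classify(Xn, fit_centroids(Xn, y))
print(pred)   # ['synthetic' 'synthetic' 'cotton' 'cotton']
```

A real PLS-DA model additionally projects the spectra onto a few latent variables before discriminating, which is what makes it robust on hundreds of correlated NIR bands.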


High performance SWIR cameras in machine vision and process control


Short-wave infrared (SWIR) imaging is used extensively in industrial markets today. Supporting integrated machine vision systems, SWIR cameras enable or facilitate process control with efficiency and reliability. In this session, we will introduce the use of SWIR imaging in the machine vision market.

The session will focus on two types of SWIR cameras: area scan and line scan. For each of these categories, we present our portfolio of cameras and its evolution over the years. We also provide examples of SWIR applications in industrial markets, particularly in machine vision and process control.

As technology advances, applications with demanding imaging requirements are quickly emerging. We will discuss some of these requirements, helping you identify the specifications that are crucial or interesting for your application. Also presented are the latest SWIR cameras from Xenics, that will be able to meet the requirements discussed.



Imaging trends in 2019 and beyond

Teledyne DALSA

Machine vision interfaces have not evolved much in the last decade despite the emergence of USB and Ethernet ports in consumer PCs. The combination of high speed with larger resolutions does, however, require high-bandwidth solutions to accommodate the demand in the industry. What are the current options, and which new ones will become available soon? 2D sensors are not only increasing in resolution and speed; polarization is one of the new trends. We will see how this is done and what new fields of application it opens.


Future vision systems - a combination of technologies

Teledyne DALSA

The machine vision market is constantly driven by new innovations. The aim is to optimize or combine production processes with ever better image processing hardware and software in order to be able to continuously expand the number of possible applications. A short overview about possible scenarios.


Next generation linescan imaging technologies

Teledyne DALSA

Linescan technology is evolving to meet ever more demanding application requirements in machine vision today. Multifield imaging using either time division or spectral division enables end users to capture multiple images, e.g. brightfield, darkfield and backlight, in a single scan.

Combined with advanced lighting, multifield significantly improves detectability and takt time. Polarization imaging is also emerging for detecting birefringence, stress, surface morphology, and material classification. In addition, a super-resolution 32k TDI linescan camera has been developed using pixel-offset technology to boost the signal-to-noise ratio.


Scientific CMOS (sCMOS) cameras


Unlike previously existing CMOS and CCD sensors, sCMOS is uniquely capable of simultaneously providing a large field of view, high sensitivity and a wide dynamic range.

Because every pixel of a CCD sensor is exposed at once and the photoelectrons are converted into a signal at a common output port, the speed of image acquisition is limited: the more pixels that need to be transferred, the slower the total frame rate of the camera. An sCMOS sensor, by contrast, does not wait for an entire frame to complete its readout; rows are exposed and digitized one after another in a rolling fashion, with the readout parallelized across the columns. This technology allows rapid frame rates.

Moreover, while other sensors suffer from image quality issues in low-light conditions, the sCMOS sensor offers improved sensitivity and captures high-quality, low-noise images even in poor conditions. With these features, sCMOS cameras are ideal for biometry, medical and scientific applications.



Lighting as key in machine vision applications!


Probably the most critical feature of a machine vision application is lighting. Illuminating a target poorly will cause the loss of data and productivity and result in lost profit. A professional lighting technique involves the qualified selection of a light source and lighting technique, and its skilled placement with respect to the object to be inspected and the camera.


Vision System Validation


How confident are you that your Vision system will operate without problems? The most underrated piece of the Vision System ... the cable ... needs complete performance validation to ensure the user does not need to make a support call.

This presentation will outline how cables in each of the Vision standards should be validated to ensure consumer confidence.


A deeper understanding of some of the complexities within LED lighting control


The majority of lighting control solutions within machine vision applications are ‘plug and play’. However, there are some instances where a deeper understanding of lighting control is required.

This presentation explores some of the complex areas within lighting control and explains the approach taken in providing the solutions.


Get the glare out! New polarized sensors paired with LED lighting solutions


Polarization has become a hot trend in machine vision since the launch of Sony's polarized sensor series, with many camera manufacturers embracing the technology. While polarized sensors and cameras can make polarization easy, you need more than a polarized sensor or camera to produce a perfect polarized image.

Polarized lighting can make or break a polarized image. Techniques such as cross polarization and different lighting styles help a user produce the best polarized image. We will look in depth at how polarized lighting works and how it interacts with Sony's polarized sensors.

The presentation covers Sony's polarized sensors in depth, best practices for pairing them with Metaphase polarized LED illumination, and the applications that can be solved using polarization technology.


Key Features of a Quality Machine Vision Filter


Optical filters are critical components of machine vision systems. They’re used to maximize contrast, improve color, enhance subject recognition and control the light that’s reflected from the object being inspected. Learn more about the different filter types, what applications they’re best used for and the most important design features to look for in each. Not all machine vision filters are the same.

Learn how to reduce the effects of angular short-shifting. Discover the benefits of filters that emulate the bell-shaped spectral output curve of the LED illumination being used. And find out more about the importance of a high-quality inspection process that limits the possibility for imperfections and enhances system performance.

Plus, learn more about the latest advances in machine vision filters. SWIR (short-wave infrared) filters are designed to enhance the image quality of InGaAs camera technology and are useful for applications imaging from 900-2300 nm. Wire-grid polarizers are effective in both the visible and infrared ranges from 400-2000 nm and can withstand an operating temperature of 100 °C for 1,000 hours.


Influence of optical components on the imaging performance

Schneider Kreuznach

In addition to the lens, there are often other optical components in the optical path of an image processing system. In many cases this is an optical filter attached in front of the lens, more rarely a filter between the lens and the sensor. Another component may be a beam splitter, e.g. for a coaxial illumination. And last but not least, the sensor itself has crucial optical components such as the cover glass and micro lenses as part of every single pixel.

It is important that all components fit and work together in the right way. It can therefore be crucial whether the filter is positioned in front of the lens or in front of the sensor, whether the lens is designed for use with a beam splitter, and whether the beam characteristic of the lens harmonizes with the micro lenses of the sensor.

Only when all components are carefully matched will the result meet the expectations of the entire imaging system.


Lecture Part 1: Optical 2D measurement using the example of connectors and pins


Precise, metric measurement of components is a real challenge. Using the measurement of a connector's pin tips as an example, some of the basic procedures are covered and various problems are solved along the way.

How should a two-dimensional camera system be set up and which optical and illumination techniques are suitable for measuring with a front light? A regular topic in this discussion is the choice of camera and the resolution. Which software methods can be used? Difficulties such as depth of field, parallax effects and material properties that can all significantly reduce measurement accuracy, are also discussed.


Lecture Part 2: Modern measurement technologies using the example of connectors and pins


In addition to the classical measurements with area scan cameras and telecentric lenses, different modern measurement technologies can ease the life of a developer. How does measuring with "Shape-from-Shading" work and what are the limitations?

Very often, methods such as laser triangulation or structured light projection are used to measure pin tips. What are the advantages of these approaches when compared to 2D methods? What needs to be considered when it comes to software evaluation? What difficulties are to be expected? A further topic of the lecture is the generation of 3D for measurement tasks using the "Depth-from-Focus" method and the required components with this approach.


Intrinsic calibration of light line sensors


We introduce a method for the intrinsic calibration of light line sensors. It is based on a collection of profiles generated by randomly positioning a calibration target within the laser plane. The specific shape of the calibration target compensates for erroneous tilts and rotations during profile generation. We show tools for the convenient generation of profiles and make statements about the achievable accuracy.


Calibration methods and their requirements


Calibrations are an important part of imaging and machine vision. They are the basis for metrically correct measurements. In addition, suitable methods can be used to determine the relationship between several individual sensors. It is not always easy to keep track of one's own requirements.

The lecture describes the differences between intrinsic and extrinsic calibration and shows in which cases a calibration is necessary. Examples are shown for 2D as well as 3D applications.


Imaging without processing – recording image streams


Throughout the history of machine vision, there has always been a demand to record images, and a vertical industry has grown up around this. With TV-standard cameras it was not unusual to see video recordings on videotape, but today's formats are much more varied and of higher bandwidth, so we assume a PC-based recording system.

The applications are many, from human training to offline inspection by machines and archiving of data, but the core specification is always bandwidth. Image size and format have an influence, but the real question is: “How much data?”
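
The "How much data?" question comes down to simple arithmetic: resolution × frame rate × bytes per pixel. A minimal sketch, using assumed example values rather than any specific camera:

```python
# Hedged back-of-the-envelope: sustained data rate for recording a camera
# stream. Resolution, frame rate and pixel format are assumed examples.

width, height = 4096, 3072      # pixels (assumed)
fps = 60                        # frames per second (assumed)
bytes_per_pixel = 1             # 8-bit monochrome (assumed)

frame_bytes = width * height * bytes_per_pixel
rate_bytes_per_s = frame_bytes * fps

print(f"Frame size:     {frame_bytes / 2**20:.1f} MiB")
print(f"Sustained rate: {rate_bytes_per_s / 2**20:.0f} MiB/s")
print(f"Per minute:     {rate_bytes_per_s * 60 / 2**30:.1f} GiB")
```

At these assumed settings the recorder must sustain about 720 MiB/s, which already dictates the choice of acquisition interface and storage (e.g. striped NVMe drives) before any software is written.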

The machine vision world has always been a three-way battle between cameras, acquisition technology and PC performance and these all impact on a recording system. Newer technology raises the performance and this means that what was once a difficult, custom application is now relatively easy.

In this talk we will look at the limitations and possibilities, and how to create an efficient, high-speed and reliable recording system. There are strategies to help in high-speed, high-bandwidth and multi-camera recording systems and these will be explored through this presentation.


"But this can be easily seen" - Solution strategies for the selection of the ideal illumination


Selecting the right lighting is often underestimated and yet often it‘s the key to success. The aim is to create repeatable and reliable high contrasts to achieve a robust software evaluation. Unfortunately, image processing is not always that simple, since it is not the object itself that is evaluated, but the light reflected by the object. The material properties of components, however, can be tricky and cause many difficulties for the user.

Which illumination technologies are available, which strategies help with shiny objects? What effect does the colour of the object have? In addition to the macroscopic shape properties of the objects, the microscopic shape properties are often forgotten. In particular, micrographs, textures and other surface variations such as coatings etc. make life difficult for the user. What are the approaches here? Quick recipes from the cookbook of illumination can help you to succeed.


The polarisation of light – making hidden things visible


Did you know that we humans perceive light mainly through its intensity and wavelength? However, light has a further, mostly unknown property by which it can be distinguished: the oscillation plane, or polarisation. While the human eye and common colour and monochrome cameras can detect colour and intensity differences very well, polarisation is not directly visible. Fortunately, polarisation imaging has recently attracted some attention and polarisation cameras have been brought to the market. These cameras reveal the third "dimension" of light, thus making it usable for machine vision.

In this presentation, the following questions will be answered: What is polarisation? What types of polarisation exist and how are they described? Which sensor technologies are used to measure polarisation? What is the benefit of polarisation in industrial imaging and which inspection tasks can uniquely be solved by using it?
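
How such a sensor "describes" polarisation can be illustrated with the standard linear Stokes-parameter formulas. On-sensor polariser arrays place four polarisers (0°, 45°, 90°, 135°) over neighbouring pixels; the sketch below, with made-up intensity values, shows how those four readings yield the degree and angle of linear polarisation:

```python
# Hedged sketch: linear Stokes parameters from the four pixel intensities
# of an on-sensor polariser array (0°, 45°, 90°, 135°). The input values
# below are illustrative, not real sensor data.
import math

def polarisation_state(i0, i45, i90, i135):
    """Return (DoLP, AoLP) from four polariser-orientation intensities."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical component
    s2 = i45 - i135                      # diagonal components
    dolp = math.hypot(s1, s2) / s0       # degree of linear polarisation, 0..1
    aolp = 0.5 * math.atan2(s2, s1)      # angle of linear polarisation, rad
    return dolp, aolp

# Fully polarised horizontal light: all intensity at 0°, none at 90°,
# half at each of the two diagonal orientations.
dolp, aolp = polarisation_state(i0=200, i45=100, i90=0, i135=100)
print(f"DoLP = {dolp:.2f}, AoLP = {math.degrees(aolp):.1f} deg")
```

Unpolarised light gives a DoLP near 0, fully polarised light a DoLP of 1; visualising DoLP or AoLP per pixel is what makes stress, coatings and reflections "hidden" from ordinary cameras visible.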


What could be the reason? Troubleshooting machine vision applications


What do you do if nothing works as planned? Vision systems are becoming more and more complex and multi-layered. Problems can be difficult to classify because the cause of an error and its symptoms are often far apart. This lecture will present troubleshooting methods and explain how to avoid errors or recognize them at an early stage.