Machine vision: Powerful core technology

Machine Vision Technology Forum 2019 - Presentations

Here you will find an overview of all sessions offered at our Machine Vision Technology Forum 2019 in Birmingham, UK.

**Registration for Day 1 is closed.**

**Last-minute registrations for Day 2 are still possible!**

**Your individual programme will be selected upon arrival. Please send your last-minute registration to uk.sales@stemmer-imaging.com or call +44 1252 78000.**

SCHEDULE for 13 November 2019 – In-Depth Day - limited to 50 people

On Day 1 choose from one of our European Imaging Academy training courses.

09:30-16:00 | 12:00-13:00 Lunch break | 19:30 gala dinner


SCHEDULE for 14 November 2019 – The Experience Day



08:45-09:55 Registration


10:00-10:25


 
sCMOS cameras - what is the difference compared to CMOS?
Janice Lee, Vieworks Co. Ltd.
 
Hyperspectral apps - vertical solutions for industry
Markus Burgstaller, Perception Park GmbH
 
Calibration in machine vision - methods and requirements
Lothar Burzan, STEMMER IMAGING AG
 
Shape-from-focus - an unexpected but powerful 3D imaging technology
Tobias Henzler, STEMMER IMAGING AG
 
A new bio-inspired vision category to reveal the unseen to machines
Davide Migliore, Prophesee
 
Hands-on with the LMI Gocator (double session part 1)
STEMMER IMAGING Ltd.

10:30-10:55


 
CAD-based 3D object recognition
Phil Gralla, STEMMER IMAGING Ltd.
 
Tips for setting up an industrial embedded system
Gion-Pitschen Gross, Allied Vision Technologies GmbH
 
Lighting - a key enabler in machine vision applications
Matthias Dingjan, CCS Europe N.V.
 
Hyperspectral imaging - future technology and applications
Jon Vickers, STEMMER IMAGING Ltd.
 
Key features of a quality machine vision optical filter
Mariann Kiraly, Midwest Optical Systems Inc.
 
Hands-on with the LMI Gocator (double session part 2)
STEMMER IMAGING Ltd.

11:00-11:25 Coffee Break


11:30-11:55


 
Machine vision in challenging environments - which IP protection rating makes sense for my application?
Autovimation GmbH
 
Modular compact sensors: A new type of 3D laser triangulation sensor
Athinodoros Klipfel, Automation Technology GmbH
 
What could be the reason? Troubleshooting machine vision applications
Lothar Burzan, STEMMER IMAGING AG
 
Embedded vision cookbook
Martin Kersting, STEMMER IMAGING AG
 
Improving colour camera picture quality with oversampling
Sony Corp.
 
Hands-on with Teledyne DALSA Sherlock (double session part 1)
STEMMER IMAGING Ltd.

12:00-12:25


 
Exploring the advantages of AI-enabled machine vision in intelligent manufacturing
Richard Allan, Adlink Technology Inc.
 
Prism-based multispectral imaging for machine vision applications
David Richards, JAI A/S
 
Intrinsic calibration of light line sensors
Phil Gralla, STEMMER IMAGING AG
 
Lasers for embedded vision
Stephan Broche, Z-Laser GmbH
 
Deep learning as part of modern machine vision applications
Adriano Biocchi, MVTec Software GmbH
 
Hands-on with Teledyne DALSA Sherlock (double session part 2)
STEMMER IMAGING Ltd.

12:30-13:40 Lunch Break


13:45-14:10


 
Fabric recycling with NIR hyperspectral cameras
Dr. Katja Lefevre, Specim Spectral Imaging Ltd.
 
Vision systems of the future - a combination of technologies
Patrick Menge, Teledyne DALSA Inc.
 
Optical 2D measurement using the example of connectors and pins
Lars Fermum, STEMMER IMAGING AG
 
Modern vision application development and rapid prototyping in CVB with C++, .Net and Python
Volker Gimple, STEMMER IMAGING AG
 
Practical aspects of time-of-flight imaging for machine vision
Ritchie Logan, Odos Imaging
 
Hands-on with MVTec Merlic (double session part 1)
STEMMER IMAGING Ltd.

14:15-14:40


 
Modern measurement technologies using the example of connectors and pins
Lars Fermum, STEMMER IMAGING AG
 
Improving productivity with high-quality, eye-safe 3D machine vision
Moez Tahir, Zivid
 
Developing cost-effective multi-camera systems with MIPI sensor modules
Roland Ackermann, The Imaging Source Europe GmbH
 
Megatrends for the digitalisation of the Internet of Things and Industry 4.0 (double session part 1)
Heinrich Munz, Munz Endeavors
 
Hands-on with MVTec Merlic (double session part 2)
STEMMER IMAGING Ltd.

14:45-15:10


 
Polarisation of light – making hidden things visible
Tobias Henzler, STEMMER IMAGING AG
 
Standards, system benefits drive convergence of machine vision, industrial internet of things (IIoT)
Matt Pinter, Smart Vision Lights
 
Machine learning basics - an introduction to the types of machine learning
Phil Gralla, STEMMER IMAGING AG
 
Scanning glass and specular surfaces with smart 3D technology
Christian Benderoth, LMI Technologies Inc.
 
Smart infrared cameras: a new technological approach for Industry 4.0
Michael Wandelt, Automation Technology GmbH
 
Current status of Industry 4.0 OPC UA TSN standardisation (double session part 2)
Heinrich Munz, Munz Endeavors

15:15-15:40 Coffee Break


15:45-16:10


 
But this can be easily seen - solution strategies for the selection of the ideal illumination
Lars Fermum, STEMMER IMAGING AG
 
Next generation linescan imaging technologies
Andreas Lange, Teledyne DALSA Inc.
 
Random bin picking: Last step for a complete factory automation process
Toni Ruiz, INFAIMON S.L.
 
Influence of optical components on imaging performance
Jörg Blätz, Jos. Schneider Optische Werke GmbH
 
Neural networks - functionality and alternative approaches
Phil Gralla, STEMMER IMAGING AG
 
High performance SWIR cameras in machine vision and process control
Mathew Vincent, Xenics N.V.

16:15-16:40


 
Get the glare out! New polarized sensors paired with LED lighting solutions
James Gardiner, Metaphase Technologies Inc.
 
Imaging trends in 2019 and beyond
Graham Brown, Teledyne DALSA Inc.
 
Creating robust machine vision systems with Windows IoT – Microsoft's embedded platform
Mark Williamson, STEMMER IMAGING Ltd.
 
New method for surface inspection using structured illumination
Fernando Schneider, SAC Sirius Advanced Cybernetics GmbH
 
Performance comparison of different embedded processors
Martin Kersting, STEMMER IMAGING AG
 
How small to large businesses can benefit from using cobots and vision
Mark Gray, Universal Robots

16:40-17:15 Last chance to visit the exhibition


TRAINING ABSTRACTS

European Imaging Academy

Planning and implementation of machine vision solutions

The choice of components and systems in machine vision is vast and quite overwhelming for many users. Based on practical application examples, calculations and live demonstrations, experienced trainers explain how complete imaging systems can be planned, taking all technical and economic details into account, so that you design a system that works.

Target audience: People new to machine vision or those that are looking to specify a project for implementation by a systems integrator.

European Imaging Academy

Optics and illumination systems for machine vision

An optimised optics and illumination set-up can significantly improve the performance of your machine vision system. During this one-day basic course, you will learn how to select the correct components and how to extend the service life of your illumination, helping you to decrease operational costs. The course offers in-depth theoretical knowledge with demonstrations on how the correct optics, filters and illumination can solve challenging applications.

Target audience: People new to machine vision or those with limited experience in setting up vision systems. No prior knowledge is required.

European Imaging Academy

Hyperspectral imaging – innovative inspection in machine vision

Hyperspectral imaging allows conclusions to be drawn about the chemical structure of organic objects for identification and separation. Sounds complicated? It's not! Our solution offers a graphical user interface and integrates seamlessly with third-party machine vision software.

During the live demonstrations, our experts will explain the technical background and all necessary hardware and software components, using application examples. An essential aspect of the training is the processing of image data using the graphical user interface "Perception Studio". With the help of this software, complex hyperspectral information can be distinguished using simple teaching tools, allowing different features to be separated and classified.

Target audience: People new to hyperspectral imaging as well as those with experience that are looking to simplify the implementation of solutions based on hyperspectral imaging.

European Imaging Academy

NEW COURSE

Machine learning theory and practice – CVB, Halcon and more

Machine learning is a branch of AI that covers deep learning and more – teaching classifiers by example rather than by parameterisation. This course covers the theory of convolutional neural networks (CNNs, aka Deep Learning) and some of the alternatives including Support Vector Machines and Ridge Regression. The aim is not to be a software-heavy training course, but to give the attendees the confidence to try machine learning in their own applications, by working through training strategies and the process of creating a successful machine learning solution. By using practical examples and training on real data, the attendee will become comfortable with the different approach that machine learning demands.

The intention is to give the attendee a feeling for the differences and similarities of the various approaches so that they can make an informed decision about whether machine learning is appropriate for a given problem and how they might approach it.

Target audience: People with an engineering background, ideally with some level of machine vision understanding.

European Imaging Academy

NEW COURSE

Introduction to MVTec HALCON – hands-on training

Join MVTec's experienced trainers to learn how to take your first steps with HALCON. Participants will learn how to use HDevelop (HALCON's IDE) to develop machine vision applications using technologies such as blob analysis, identification (OCR, barcode, data code), matching, or metrology. Attendees should bring their own laptop to the training to gain the most from the hands-on exercises.

Target audience: People with a basic understanding of using computers. Machine vision experience is not required; however, a little understanding will be of benefit.

PRESENTATION ABSTRACTS

HANDS-ON SESSIONS

 

Hands-On with the LMI Gocator

STEMMER IMAGING

A one-hour taster on how to set up and make measurements with an LMI Gocator 3D sensor. The session will be a mix of presentations and hands-on exercises so you can experience the ease of use and power of this world-class 3D measurement system. Ideal if you're interested in using 3D measurement sensors in the future.

 

Hands-On with Teledyne DALSA Sherlock

STEMMER IMAGING

A one-hour taster on how to set up and create an inspection solution using Teledyne DALSA's Sherlock machine vision rapid development environment. The session will be a mix of presentations and hands-on exercises so you can experience the ease of use and flexibility of one of our most popular multi-camera machine vision systems.

 

Hands-On with MVTec Merlic

STEMMER IMAGING

A one-hour taster on how to set up and create an inspection solution using MVTec Merlic's point-and-click development environment. The session will be a mix of presentations and hands-on exercises so you can experience the ease of use of Merlic.

IIOT - INDUSTRIAL INTERNET OF THINGS

 

Exploring the advantages of AI-enabled machine vision in intelligent manufacturing

Adlink

Deep learning machine vision provides significant capability to migrate conventional AOI (Automated Optical Inspection) assets to intelligent AI-enabled equipment with comprehensive modelling capability. Identifying the requisite IoT edge device specification is a critical consideration in next-generation AOI system design. Acquisition of data, execution of specific functions, and full utilisation of computational nodes across the entire IoT network are necessary to provide not only training and execution of AI, but also the management of all AI-enabled AOI nodes (equipment) across the Smart Factory.

This presentation will discuss procedures for scaling entire vision systems from a single computing device to an entire IoT Edge network quickly and easily, and streaming the featured images to storage and analytic services, to achieve a true IoT solution.

 

Machine vision in challenging environments - which IP protection rating makes sense for my application?

autoVimation

More and more applications in machine vision require creative approaches to protect sensitive technology from mechanical, chemical or thermal stress. While dust is the predominant concern in wood processing, in the food and pharmaceutical industries contamination of the inspected products by the camera system must also be avoided. The IP protection classes (International Protection Codes) precisely define the extent to which the undesirable ingress of foreign particles and moisture into the interior of the device is prevented. This presentation explains the selection of an application-specific enclosure and accessories for the vision system to enable it to be used in a wide range of harsh environments.

 

Megatrends for the digitalisation of the Internet of Things and Industry 4.0

Part 1 of this double session deals with current megatrends.

Abstract will follow shortly

 

Current status of Industry 4.0 OPC UA TSN standardisation

Part 2 explains current standardization activities for OPC UA

Abstract will follow shortly

 

Standards, System Benefits Drive Convergence of Machine Vision, Industrial Internet of Things (IIoT)

Smart Vision Lights

While Industry 4.0 and the Internet of Things (IoT) are two of today's coolest buzzwords, machine vision solutions have been at the forefront of machine-to-machine communications since the vision industry's inception in the 1980s. This is mainly because there is no automated solution unless the machine vision result data – whether it be an offset, a pass/fail judgement, or other critical data – is communicated to nearby robots, manufacturing equipment and the staff that operate them for subsequent action.

In the past, machine vision solutions passed data along a variety of transport layers, whether consumer interfaces such as Gigabit Ethernet and USB, or dedicated industrial interfaces such as Camera Link. Supporting data interface, transport and library standards such as GigE Vision and GenICam further improved the ease with which machine vision solutions could communicate with nearby machines. Today, these standards extend further into the system, beyond just defining the camera component, through standards such as the Local Interconnect Network (LIN) and the Mobile Industry Processor Interface (MIPI) that enable cost-effective electronic device design and sub-assembly communication. At the same time, wired networks such as industrial Gigabit Ethernet are being complemented by wireless edge networks that will enable easy plug-and-play operation with all system peripherals, not just the camera-PC pipeline. This presentation will explore how these old and new standards are enabling new, cost-effective machine vision solution designs.

EMBEDDED VISION

 

Tips for setting up an industrial embedded vision system

Allied Vision

There are many advantages to embedded solutions, such as lower costs, lower energy consumption, compact design, and embedded boards with ever-increasing performance, all of which make a migration from PC-based to embedded solutions interesting in the industrial sector. However, when does the use of an embedded solution really make sense? Which hardware and software architectures are suitable for the application? Which components and questions must be considered when setting up an embedded vision system?

In the evaluation phase for a new machine vision system, these aspects must be carefully understood and the requirements verified. Embedded systems bring not only advantages to users of industrial image processing who have previously worked with PC-based systems, but also new challenges such as new hardware architectures, interfaces, new data processing paradigms, and open-source operating systems. This presentation provides an overview of the most important key factors and presents possible set-up scenarios for industrial embedded vision.

 

Creating robust machine vision systems with Windows IoT – Microsoft's embedded platform

STEMMER IMAGING

Many people think of embedded systems as small ARM- and Linux-powered machines; however, there is an alternative closer to the traditional Windows-powered PC. The use of Windows 7 or 10 Professional has always had issues when building an industrial vision system. These include the risk of system corruption on sudden power loss, the risk of Windows updates affecting the system, the risk of viruses, as well as the ability for users to make changes to the machine.

With Windows IoT Enterprise, Microsoft provides the ability to overcome these issues without the need to change your software, by locking down and customising the PC for the embedded application it is intended for. This seminar explores the capabilities of this embedded version of Windows and shows how you can take advantage of these features in an easy point-and-click way when creating a PC-based vision system.

 

Embedded Vision Cookbook

STEMMER IMAGING

Embedded Vision covers a wide range of applications and solutions. It involves the right combination of hardware, camera and software. Getting started is often the hard part.

The goal of this presentation is to provide a rough overview of embedded vision as we see it. We present a recipe for the design steps of an example embedded vision system and show how Common Vision Blox (CVB) can help.

 

Performance comparison of different embedded processors

STEMMER IMAGING

This presentation compares different processor platforms (ARM-based, NVIDIA Jetson, Intel Atom-based) and deals with restrictions regarding the acquisition of camera data. The results of benchmark tests show how efficiently the same application runs on the different platforms. Furthermore, a CUDA-optimised algorithm running on a TX1 system and on a Windows graphics card is compared with execution on a conventional Intel or ARM CPU.

 

Developing cost-effective multi-camera systems with MIPI sensor modules

The Imaging Source


Embedded vision enables compact, high-performance applications using cost-effective MIPI sensor modules for multi-camera systems with subsequent data processing via AI and deep learning. This lecture examines industrial camera interface options (USB3, GigE and MIPI) with respect to NVIDIA Jetson boards (Nano, TX2, Xavier) and compares their respective performance. Multi-camera control for these embedded systems, using MIPI CSI-2’s short internal connection or via FPD-Link III’s long (up to 15m) external connection, is explained in theory and practice. Additionally, bandwidth, maximum number of sensor heads and the advantages of direct control via the ISP are discussed. An overview of possible sensor modules and their respective feature sets (e.g. global shutter, rolling shutter etc.) concludes the presentation.

 

Lasers for Embedded Vision

Z-Laser

Machine vision lasers are becoming more deeply integrated into optical measurement systems such as 3D displacement sensors. The form factor and cost structure of the laser system can be reduced significantly; however, it is essential to preserve a high degree of flexibility to support the large number of variants that are typically required to cover all use cases of a product platform. Further considerations include the ability to exchange the laser in the field without re-calibration, while supporting all possible lasers without impact on system integration and laser safety. To further aid the OEM in reducing cost and form factor, the concept of integrating the driver electronics directly into the customer's PCB design is considered, with licensed software providing the intelligence, including the ability to aid predictive maintenance by flagging a calculated imminent EOL (end of life) situation.

3D TECHNOLOGY

 

Modular compact sensors: A new type of 3D laser triangulation sensor

Automation Technology

3D laser triangulation sensors are being used more and more in the development of industrial inspection systems. They usually consist either of discrete setups with a camera and a line laser projector, or of factory-assembled and calibrated devices with integrated camera and line lasers.

Discrete setups have the advantage of flexibility regarding the requirements of the application (FOV, triangulation angle, working distance), but they demand increased effort for engineering, component encapsulation, calibration and integration into the application. On the other hand, factory-calibrated 3D laser triangulation sensors enable easy integration and shorten application development remarkably. However, their design cannot be customised to meet 100% of the needs of the application without significant effort and high NRE costs.

This lecture presents a new concept of 3D laser triangulation sensors which overcomes the aforementioned limitations. Thanks to their modular design, the Modular Compact Sensors (MCS) combine the design flexibility of discrete setups with the advantages of factory-calibrated 3D sensors.

 

Random bin picking: Last step for a complete factory automation process

Infaimon

In modern industry, automation and the use of robots are essential parts of the production process. A key element of the 'factory of the future' is the complete automation of processes and their adaptation to increasingly dynamic and flexible industrial environments.

Nowadays, in spite of the high degree of integration of robots in plants, some processes still involve operators manually picking randomly placed objects from containers.

Automating this stage of the process requires a robot and a vision system that dynamically identifies the position of the objects inside the containers. This is what we know as bin picking. Bin picking consists of a hardware solution (vision + robot) and a software solution (image analysis + communication) that allows random parts to be extracted from containers.

Bin Picking provides the complete automation of processes with a series of advantages:

- Reduction of heavy work and low-value-added tasks for operators
- Maximisation of space in the factory, thanks to being more compact than current mechanical solutions
- Adaptation to flexible manufacturing processes
- Reduction of cycle times, increasing machine productivity

 

Scanning glass and specular surfaces with smart 3D technology

LMI Technologies

This presentation will focus on using laser-based smart 3D technology to solve the challenges inherent in scanning glass and specular surfaces. Specifically, it will address cell phone glass assembly inspection, which is a common consumer electronics (CE) application in which the laser sensor scans the cell phone glass edge in its frame and generates high-resolution 3D data. The data is then used to extract edge and gap features, and measure flushness and offset in order to ensure tight assembly tolerances are met.

The presentation will explore how 3D smart sensors leverage an optimized optical design and specialized laser projection technology to achieve the best inspection results. Key sensor requirements will be discussed, including low sensitivity to the target angle; the ability to eliminate noise caused by laser scattering at the edge of the target surface; accurate measurement of different surface colors and surface types (e.g., coated, glossy, transparent); the need for scanning and inspection at speeds greater than 5 kHz in order to handle a continuous flow of production; and a low total cost of ownership to ensure maximum profitability.

 

Practical Aspects of Time-of-Flight Imaging for Machine Vision

odos imaging

Time-of-Flight (ToF) imaging is a well-known technology, yet it remains relatively novel in machine vision. This talk will examine the practical aspects of ToF imaging and its applicability to general machine vision tasks.

The talk will look at the processing occurring on board a ToF imaging device and, through the use of application examples, at the post-processing steps on the client PC required for successful deployment.
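As background, the distance calculation at the heart of ToF imaging can be sketched in a few lines. This is a generic illustration of the principle (function names and values are ours, not from the presentation), covering the direct pulsed form and the continuous-wave phase-shift form used by many ToF cameras.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_round_trip(dt_seconds):
    """Direct (pulsed) ToF: light travels out and back, so halve the path."""
    return C * dt_seconds / 2.0

def distance_from_phase(phase_rad, mod_freq_hz):
    """Indirect (continuous-wave) ToF: the phase shift of an amplitude-
    modulated signal encodes distance, unambiguous up to C / (2 * f)."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)
```

A 20 ns round trip, for example, corresponds to roughly 3 m of range; the phase-based variant trades maximum unambiguous range against modulation frequency.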

 

New method for surface inspection using structured illumination

SAC Sirius Advanced Cybernetics GmbH

A new approach to structured illumination opens up new possibilities for fast and efficient surface inspection. Dedicated illumination patterns help to find the finest 3D defects and can identify specular reflection properties. A flat area illumination provides these patterns, which are formed by electronic means. By increasing the illumination power as well as the control frequency compared with existing structured illuminators, effective inspection of both static and moving parts is possible. Process integration is simplified by easy interfacing to the sensor, and topographic images of the surface are used for automatic testing. The presentation finishes by comparing the new method with existing surface inspection methods – photometric stereo and advanced technologies for specular surfaces.

 

Shape-from-Focus – an unexpected but powerful 3D imaging technology

STEMMER IMAGING

This seminar presents an unexpected but powerful 3D imaging technology that uses an automated variation of the focus position of a telecentric lens system: shape-from-focus (SFF), also known as focus variation. Intelligent processing of the acquired image stack allows both 3D range maps and 2D intensity images with a significantly enhanced depth of focus to be calculated. The seminar discusses the principle of the SFF technology, details the calculations used in CVB and the mechanical layout of the lens system, and finishes off with representative application examples.
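The processing behind focus variation can be illustrated with a minimal NumPy sketch (our simplification, not the actual CVB implementation): score the sharpness of every frame in the focus stack per pixel, then keep the focus position at which each pixel scored highest.

```python
import numpy as np

def focus_measure(img):
    """Per-pixel sharpness: squared response of a discrete Laplacian.
    np.roll gives periodic borders, kept only for brevity."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap ** 2

def shape_from_focus(stack, z_positions):
    """stack: sequence of (H, W) images taken at the focus positions in
    z_positions. Returns an (H, W) range map holding, for each pixel,
    the z position at which it appeared sharpest."""
    scores = np.stack([focus_measure(np.asarray(img, dtype=float))
                       for img in stack])
    sharpest = np.argmax(scores, axis=0)   # index of best-focused frame
    return np.asarray(z_positions)[sharpest]
```

A real implementation would smooth the focus scores and interpolate between frames for sub-step depth resolution; the principle, however, is exactly this per-pixel argmax over the stack.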

 

CAD based 3D object recognition

STEMMER IMAGING

This presentation introduces DNC, the new CVB tool for CAD-based 3D object recognition, delivering fast object recognition in calibrated point clouds. It shows details of the training process and the interpretation of output values, together with some workflow examples.

 

Improving productivity with high-quality, eye-safe 3D machine vision

Zivid

How do you stay ahead of the competition in a fast-paced machine vision and industrial automation market?

In this 20-minute lecture, we'll introduce you to a few crucial concepts required for implementing a flexible and scalable machine vision platform for everything from bin picking to manufacturing, inspection and assembly, and even logistics and e-commerce.

Most of today's industrial and collaborative robots are "visually impaired". Using one of STEMMER IMAGING's latest products, a unique 3D colour camera, forward-thinking customers gain a head start in their automation processes.

Attendees will learn how eye-safe, white structured-light hardware can reduce implementation time and solve more tasks over a flexible working distance, while accurately recognising more objects.

MACHINE LEARNING

 

Deep learning as part of modern machine vision applications

MVTec GmbH

Machine vision is crucial to highly automated production processes, which increasingly rely on advances in artificial intelligence, such as deep learning. Besides higher automation levels, these technologies enable increased productivity, reliability, and robustness.

Since deep learning – at its core – is an approach to classifying data, many other machine vision technologies have to be considered as well. The presentation highlights the technology's role within industrial vision settings and shows the latest developments for solving a wide range of machine vision applications.

 

Machine learning basics - an introduction to the types of machine learning

STEMMER IMAGING

Machine learning, and in particular deep learning with neural networks, is one of the most sought-after technologies in computer vision. Driven by astonishing results in recognition and easy accessibility through free tools, it has gained widespread acknowledgement among researchers and practitioners. Buzzwords such as neural nets, AI, learning rates, supervised learning, linear models and more are quite common nowadays. But what does all this mean, and how does it actually work?

This presentation will briefly explain what machine learning is, how it works and what the most common buzzwords mean. After this talk we hope to have cleared up most of the confusion about this interesting topic; if you want to learn more afterwards, please join the advanced talk “Neural Networks – Functionality and Alternatives”.

 

Neural networks - functionality and alternative approaches

STEMMER IMAGING

Neural networks are currently among the most prominent methods for machine learning, if not the most prominent. While quite versatile and powerful, they are not the only means of machine learning. In this talk we will take a closer look at the functionality of neural networks. Their advantages and disadvantages compared with other machine learning methods, such as support vector machines, gradient-boosted trees and nearest neighbours, will be presented. If you are unfamiliar with machine learning, this talk might still be of interest, but we recommend you attend our talk “Machine Learning Basics” first.
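To make one of the alternatives mentioned above concrete, here is a minimal k-nearest-neighbours classifier in plain NumPy. This is an illustrative sketch, not material from the talk: each query point is classified by majority vote among its k closest training samples, with no training phase at all.

```python
import numpy as np

def knn_predict(train_X, train_y, query_X, k=3):
    """Classify each query point by majority vote among its k nearest
    training samples (Euclidean distance)."""
    train_X = np.asarray(train_X, dtype=float)
    train_y = np.asarray(train_y, dtype=int)
    preds = []
    for q in np.asarray(query_X, dtype=float):
        dists = np.linalg.norm(train_X - q, axis=1)   # distance to every sample
        nearest = train_y[np.argsort(dists)[:k]]      # labels of the k closest
        preds.append(int(np.bincount(nearest).argmax()))  # majority vote
    return np.array(preds)
```

Unlike a neural network, this method needs no gradient-based training, but prediction cost grows with the size of the training set, which is one of the trade-offs the talk compares.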

 

Modern vision application development and rapid prototyping in CVB with C++, .Net and Python

STEMMER IMAGING

Thanks to their object-oriented design and consistent integration into their respective runtime environments, the CVB++, CVB.Net and CVBpy APIs released in Common Vision Blox 2019 simplify the creation of complex image processing applications through high-level language features and proven design patterns, on the PC as well as on embedded platforms running ROS or similar. In addition, the various Python environments and LINQPad facilitate rapid development.

This seminar is a quick and playful exploration of different approaches to development in CVB. With the design of the three new, language-specific APIs in CVB, enhanced troubleshooting possibilities are discussed, along with the bridges to common runtime libraries such as Qt, WPF, Windows Forms and NumPy, all with the aid of practical examples.

Knowledge of at least one of the three languages is an advantage when attending this seminar.

SPECTRAL IMAGING

 

Smart Infrared Cameras: A new technological approach for Industry 4.0

Automation Technology

Although thermal imaging with infrared cameras has great potential, especially in industrial applications, it has only made its way into automation and quality assurance to a very limited extent. While the introduction of uncooled detectors more than 20 years ago provided the essential base technology for the design of industrial thermal cameras, many obstacles remain.

One important reason for the low uptake in industry is the lack of standard software for thermal imaging. Integrators have to use SDKs provided by the camera manufacturers to develop their own software solutions, which represents a significant barrier. Furthermore, the camera models available today are not designed for use in industrial applications: many manufacturers lack industrial application experience and don't see industry as a relevant, sizeable target market. With the rise of smart cameras, we are also seeing the decline of PC-based imaging systems for many applications. Reasons include the complexity of such systems, solution costs, stability, data safety, easy factory interfacing and maintenance effort.

The lecture presents a new device-related approach featuring smart thermal cameras to address the obstacles for practical applications and to make the potential of temperature imaging in industrial environments accessible.

 

Prism-based Multispectral Imaging for Machine Vision Applications

JAI

Despite the huge potential that hyperspectral imaging offers in quality and structural inspection of food, plant health and growth, environmental monitoring, pharmaceuticals, medical diagnosis, forensic sciences and thin-film analysis, its scope often seems limited in industrial environments. This is because the hyperspectral imaging technologies available today are slow, use low-resolution sensors, require complex image handling and are costly to deploy at scale. Furthermore, an application that starts with a hyperspectral approach often comes to the conclusion that only 3 or 4 wavelength bands are actually relevant. Today, hyperspectral applications are usually found in laboratories, where the main task is to identify the relevant bands to differentiate between two or more objects.

From an industrial perspective, multispectral imaging appears to have higher application potential: the complexity of data handling is much lower due to the reduced number of spectral bands, camera line/frame rates are higher and system costs are lower. Multispectral and hyperspectral imaging are not competitors; rather, they are complementary technologies when used in the right applications. Eventually, the number and spectral nature of the bands identified with hyperspectral cameras can be used to design multispectral cameras, which can then be deployed in genuinely high-speed industrial environments. This is where the latest camera solutions find applications.

 

Hyperspectral apps - vertical solutions for industry

Perception Park

Vibrational spectroscopy is based on the fact that molecules reflect, absorb or transmit electromagnetic waves of certain wavelengths. Hyperspectral sensors measure those responses and return a spectrum per spatial point, from which the chemical fingerprint of a material can be derived. This data requires extensive processing to be usable for vision systems.

Chemical Colour Imaging methods transform hyperspectral data into image streams. These streams can be configured to highlight chemical properties of interest and sent to image processing systems via protocols like GigE Vision. Applications include recycling, food safety, quality assurance (e.g. pharma, food and packaging), colour measurement and more.

The abstraction of hyperspectral cameras into purpose-specific vision cameras is enabled by software apps. Preconfigured chemical and/or physical material properties enable inspection tasks far beyond today's limits. Predefined, application-specific chemometric processing makes it possible to scale the delivery of a solution beyond that of general scientific methods.
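As a rough illustration of the idea (not Perception Park's actual processing), a hyperspectral cube can be collapsed into a three-channel "chemical colour" image by mapping selected spectral bands to RGB. The band indices and normalisation below are purely hypothetical:

```python
import numpy as np

def chemical_colour(cube, bands=(10, 45, 80)):
    """Map three chosen spectral bands of a (H, W, B) reflectance cube
    to an 8-bit RGB image. Band indices are purely illustrative."""
    selected = cube[:, :, list(bands)].astype(np.float64)
    # Normalise each channel independently to use the full 0-255 range.
    lo = selected.min(axis=(0, 1), keepdims=True)
    hi = selected.max(axis=(0, 1), keepdims=True)
    scaled = (selected - lo) / np.maximum(hi - lo, 1e-12)
    return (scaled * 255).astype(np.uint8)

# Synthetic 4x4 cube with 100 bands, reflectance values in [0, 1]
rng = np.random.default_rng(0)
cube = rng.random((4, 4, 100))
rgb = chemical_colour(cube)
print(rgb.shape)   # (4, 4, 3)
```

In a real Chemical Colour Imaging system the mapping is derived from chemometric models rather than fixed band picks, but the output is the same kind of ordinary RGB stream that any GigE Vision processing pipeline can consume.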

 

Fabric recycling with NIR hyperspectral cameras

Specim Spectral Imaging

Recycling is in the air, and textiles should also be recycled. To do so, however, their fibre types must be identified reliably, and hyperspectral cameras offer new possibilities here. So far, the sorting of fabrics for recycling has been done manually, with inherent and significant issues in 1) repeatability, 2) reproducibility, 3) hygiene, 4) speed, 5) accuracy and 6) cost. Within this context, automated machine vision systems addressing all of these issues would be very useful. Material identification requires contactless sensors able to measure chemical composition from a distance, a task at which NIR hyperspectral cameras excel. We measured different types of fabric with such a camera, including synthetic fibres (e.g. polyester) and fibres of animal (e.g. wool) or plant (e.g. cotton) origin. The data were analysed with a PLS-DA model. The results show that synthetic, animal and plant fibres could be sorted regardless of the colour of the textile, including dark ones. We believe these findings are of great importance, opening up a new industrial market driven by new EU laws.
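The PLS-DA model used in the study is beyond a short sketch, but the underlying idea, assigning each measured spectrum to the most similar known fibre class, can be illustrated with a simple spectral-angle classifier. The reference spectra below are synthetic stand-ins, not measured fabric data:

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; smaller means more similar."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(spectrum, references):
    """Return the label of the reference spectrum with the smallest
    spectral angle to the measured spectrum."""
    return min(references, key=lambda k: spectral_angle(spectrum, references[k]))

# Synthetic NIR "absorption feature" spectra (illustrative shapes only).
bands = np.linspace(1000, 2500, 64)            # wavelengths in nm
references = {
    "polyester": np.exp(-((bands - 1660) / 80) ** 2),
    "wool":      np.exp(-((bands - 2050) / 120) ** 2),
    "cotton":    np.exp(-((bands - 2100) / 60) ** 2),
}
# A noisy measurement of a wool-like spectrum is still assigned correctly.
noisy = references["wool"] + 0.02 * np.random.default_rng(1).standard_normal(64)
print(classify(noisy, references))   # wool
```

A spectral-angle measure is insensitive to overall brightness, which loosely mirrors why NIR sorting works regardless of textile colour; PLS-DA adds supervised dimensionality reduction on top of this kind of spectral comparison.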

 

Hyperspectral imaging - future technology and applications

STEMMER IMAGING

Hyperspectral imaging is one of the current trends in machine vision, along with Industry 4.0, embedded vision and deep learning. The combination of spectroscopy and image processing opens up new fields of application in machine vision. Chemical information visualised using Chemical Colour Imaging (CCI) enables the acquisition of data that would not be possible with conventional image processing. Along with the hardware required for hyperspectral tasks, the talk presents applications and possibilities offered by this technology.

 

High performance SWIR cameras in machine vision and process control

Xenics

Short-wave infrared (SWIR) imaging is used extensively in industrial markets. Integrated into machine vision systems, SWIR cameras enable efficient and reliable process control in applications that visible cameras cannot address. In this session we introduce the use of SWIR imaging in the machine vision market. As technology advances, applications with demanding imaging requirements are emerging. We will discuss some of these requirements, helping you identify the specifications that are crucial or interesting for your application, together with the relevant SWIR cameras from Xenics that can meet them. Both area-scan and line-scan SWIR cameras will be covered, along with how they have evolved in recent years.

FUTURE TRENDS

 

A new bio-inspired vision category to reveal the unseen to machines

Prophesee

Event-based sensing offers a new paradigm for machine perception, shifting the emphasis from frame-based analysis to a timed-event approach. This may make it possible to address current machine vision challenges more efficiently and create new ways to apply computer vision algorithms in Industry 4.0. For example, by exploiting the asynchronous nature of the sensor and its time accuracy, we can enable advanced applications such as visual vibration monitoring, accurate high-speed counting or fast 3D reconstruction of objects.

Machine vision is a well-established field. However, the fourth industrial revolution is demanding new artificial vision capabilities that can exploit new information. For years, this revolution has been a promise. Now it is a reality.

 

Imaging trends in 2019 and beyond

Teledyne DALSA

Machine vision interfaces have not evolved much in the last decade, despite the emergence of USB and Ethernet ports in consumer PCs. The combination of high speed with larger resolutions does, however, require high-bandwidth solutions to accommodate demand in the industry. What are the current options, and which new ones will become available soon? 2D sensors are not only increasing in resolution and speed; polarization is one of the new trends. We will see how this is implemented and what new fields of application it opens up.

 

Vision systems of the future – a combination of technologies

Teledyne DALSA

The machine vision market is constantly driven by new innovations. The aim is to optimize or combine production processes with ever better machine vision hardware and software in order to continuously expand the number of possible applications. This seminar gives a short overview of possible scenarios.

 

Next generation linescan imaging technologies

Teledyne DALSA

Linescan technology is evolving to meet ever more demanding application requirements in machine vision. Multifield imaging using either time division or spectral division enables end users to capture multiple images (e.g. brightfield, darkfield and backlight) in a single scan.

Combined with advanced lighting, multifield significantly improves detectability and takt time. Polarization imaging is also emerging for detecting birefringence, stress, surface morphology and material classification. In addition, a super-resolution 32k TDI linescan camera has been developed that uses pixel-offset technology to boost the signal-to-noise ratio.
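The SNR benefit of TDI can be made concrete: summing the charge of N stages grows the signal by a factor of N, while uncorrelated noise grows only by √N, so the SNR improves by √N. A minimal sketch of this idealised shot-noise model (not Teledyne DALSA's actual figures):

```python
import math

def tdi_snr_gain(stages):
    """Idealised SNR improvement of an N-stage TDI line sensor over a
    single-line sensor: signal scales with N, random noise with sqrt(N),
    so SNR improves by N / sqrt(N) = sqrt(N)."""
    return math.sqrt(stages)

# A 128-stage TDI sensor improves SNR by about 11.3x over a single line.
print(round(tdi_snr_gain(128), 1))   # 11.3
```

In practice fixed-pattern noise and synchronisation errors eat into this gain, but the square-root law explains why high stage counts pay off in light-starved, high-speed scanning.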

 

sCMOS cameras - what is the difference over CMOS

Vieworks

Unlike existing CMOS and CCD sensors, sCMOS is uniquely capable of simultaneously providing a large field of view, high sensitivity and wide dynamic range.

Because every pixel of a CCD sensor is exposed at once and the photoelectrons are converted into a signal at a common port, the speed of image acquisition is limited: the more pixels that need to be transferred, the slower the camera's total frame rate. Instead of waiting for an entire frame to complete its readout, however, an sCMOS sensor can begin digitising the first rows while later rows are still being exposed. This rolling readout allows rapid frame rates. Moreover, while other sensors suffer from image quality issues in low-light conditions, sCMOS sensors have improved sensitivity and enable the capture of high-quality images with very low noise even in poor conditions. With these features, sCMOS cameras have become ideal for biometry, medical and scientific applications. The seminar looks at the technology and explains how it is different and the advantages it delivers.
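The speed argument can be made concrete with a back-of-the-envelope calculation; the resolution and line time below are hypothetical figures, not Vieworks specifications:

```python
def max_frame_rate(rows, line_time_us):
    """Approximate maximum frame rate of a rolling-readout sensor:
    a frame is complete once every row has been read out, so the
    readout time is rows x line time."""
    readout_time_s = rows * line_time_us * 1e-6
    return 1.0 / readout_time_s

# 2048 rows at a 10 microsecond line time -> roughly 48.8 frames per second.
print(round(max_frame_rate(2048, 10.0), 1))   # 48.8
```

Because exposure of later rows overlaps with readout of earlier ones, the line time (not the exposure time) dominates the achievable frame rate, which is the key difference from the sequential expose-then-transfer cycle of a CCD.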

 

How small to large businesses can benefit from using cobots and vision

Universal Robots

Robots are a well-known automation technology for volume production. In recent years, however, a new kind of robot has seen significant market adoption. The collaborative robot is safe to work alongside people and opens up many more possibilities for both small and large companies.

This seminar touches on how this technology when coupled with machine vision can unlock new possibilities.

FUNDAMENTALS

 

Lighting - a key enabler in machine vision applications

CCS

Lighting is probably the most critical feature of a machine vision application. Illuminating a target poorly will cause the loss of data and productivity and result in lost profit. A professional approach involves the qualified selection of a light source and lighting technique, and their skilled placement with respect to the object being inspected and the camera.

 

A deeper understanding of some of the complexities within LED lighting control

Gardasoft

The majority of lighting control solutions within machine vision applications are ‘plug and play’. However, there are some instances where a deeper understanding of lighting control is required.

This presentation explores some of the complex areas within lighting control and explains the approach taken in providing the solutions.

 

Get the glare out! New polarized sensors paired with LED lighting solutions

Metaphase

Polarization has become a hot trend in machine vision since the launch of Sony's polarized sensor series, with many camera manufacturers embracing the technology. While polarized sensors and cameras can make polarization easy, you need more than a polarized sensor or camera to achieve a perfect polarized image.

Polarized lighting can make or break a polarized image. Techniques such as cross polarization and different lighting styles help a user produce the best polarized image. We will look at how polarized lighting works and how it interacts with Sony's polarized sensor.

The presentation investigates Sony's polarized sensors in detail and looks at best practices for pairing them with Metaphase polarized LED illumination. It finishes with an overview of some applications that can be solved using polarization technology.

 

Key features of a quality machine vision optical filter

Midopt

Optical filters are critical components of machine vision systems. They’re used to maximize contrast, improve colour, enhance subject recognition and control the light that’s reflected from the object being inspected. Learn more about the different filter types, what applications they’re best used for and the most important design features to look for in each. Not all machine vision filters are the same.

Learn how to reduce the effects of angular short-shifting. Discover the benefits of filters that emulate the bell-shaped spectral output curve of the LED illumination being used. And find out more about the importance of a high-quality inspection process that limits the possibility for imperfections and enhances system performance.

Plus, learn more about the latest advances in machine vision filters. SWIR (short-wave infrared) filters are designed to enhance the image quality of InGaAs camera technology and are useful for imaging applications from 900-2300 nm. Wire-grid polarizers are effective in both the visible and infrared ranges from 400-2000 nm and are rated for operation at 100 °C for 1,000 hours.

 

Influence of optical components on imaging performance

Schneider Kreuznach

In addition to the lens, there are often other optical components in the optical path of a machine vision system. In many cases this is an optical filter attached to the front of the lens, more rarely a filter between the lens and the sensor. Another component may be a beam splitter, e.g. for coaxial illumination. And last but not least, the sensor itself has crucial optical components: the cover glass and the micro lenses that form part of every single pixel.

It is important that all components fit and work together in the right way. It can be crucial whether the filter is positioned in front of the lens or in front of the sensor, whether the lens is designed for use with a beam splitter, and whether the beam characteristics of the lens harmonise with the micro lenses of the sensor.

Only when all components are carefully matched will the result meet the expectations placed on the entire imaging system.

 

Optical 2D measurement using the example of connectors and pins

STEMMER IMAGING

Precise, metric measurement of components is a real challenge. Using the measurement of a connector's pin tips as an example, some of the basic procedures are covered while solving the various problems.

How should a two-dimensional camera system be set up, and which optical and illumination techniques are suitable for measuring with a front light? A regular topic in this discussion is the choice of camera and resolution. Which software methods can be used? Difficulties such as depth of field, parallax effects and material properties, all of which can significantly reduce measurement accuracy, are also discussed.
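A recurring part of the camera/resolution discussion is simple arithmetic: the field of view divided by the pixel count gives the size of one pixel on the object, and subpixel edge detection refines it further. The figures below are illustrative, not from the talk:

```python
def object_resolution_mm(fov_mm, pixels, subpixel_factor=1.0):
    """Smallest resolvable step on the object in mm: field of view
    divided by the number of pixels across it, optionally refined by
    a subpixel edge-detection factor."""
    return fov_mm / (pixels * subpixel_factor)

# A 50 mm field of view imaged across 2448 pixels:
print(round(object_resolution_mm(50.0, 2448), 4))        # 0.0204 (mm per pixel)
# The same setup with 1/10-pixel edge interpolation:
print(round(object_resolution_mm(50.0, 2448, 10.0), 5))  # 0.00204
```

This is only a sampling bound; the depth-of-field, parallax and material effects mentioned above determine whether that theoretical resolution is actually achieved.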

 

Modern measurement technologies using the example of connectors and pins

STEMMER IMAGING

In addition to the classical measurements with area scan cameras and telecentric lenses, different modern measurement technologies can ease the life of a developer. How does measuring with "Shape-from-Shading" work and what are the limitations?

Very often, methods such as laser triangulation or structured light projection are used to measure pin tips. What are the advantages of these approaches when compared to 2D methods? What needs to be considered when it comes to software evaluation? What difficulties are to be expected? A further topic of the lecture is the generation of 3D data for measurement tasks using the "Depth-from-Focus" method and the required components with this approach.
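The core geometry of laser triangulation can be written in one line: if the camera views the laser plane at a triangulation angle θ, a lateral shift d of the laser line corresponds to a height change of d / tan(θ). This is a simplified model ignoring lens distortion and perspective, with illustrative numbers:

```python
import math

def height_from_shift(shift_mm, angle_deg):
    """Height change corresponding to a lateral shift of the laser line,
    for a camera viewing the laser plane at the given triangulation angle.
    Simplified pinhole geometry: h = shift / tan(angle)."""
    return shift_mm / math.tan(math.radians(angle_deg))

# A 0.5 mm line shift observed at a 30 degree triangulation angle:
print(round(height_from_shift(0.5, 30.0), 3))   # 0.866 (mm)
```

The same relation shows the classic trade-off: a larger triangulation angle gives better height resolution per pixel of shift, but worsens occlusion, which is one reason 2D and 3D methods complement each other.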

 

Intrinsic calibration of light line sensors

STEMMER IMAGING

We introduce a method for the intrinsic calibration of light line sensors. It is based on a collection of profiles generated by randomly positioning a calibration target within the laser plane. The specific shape of the calibration target tolerates erroneous tilts and rotations during profile generation. We show tools for the convenient generation of profiles and make statements about the achievable accuracy.

 

Calibration in machine vision - methods and requirements

STEMMER IMAGING

Calibration is an important part of imaging and machine vision. It forms the basis for metrically correct measurements. In addition, suitable methods can be used to determine the relationship between several individual sensors. It is not always easy to keep track of one's own requirements.

The lecture describes the differences between intrinsic and extrinsic calibration and shows in which cases a calibration process is necessary. Examples are shown for 2D as well as 3D applications.
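The intrinsic/extrinsic distinction can be made tangible with the standard pinhole model: intrinsic calibration estimates the camera matrix K (focal lengths and principal point in pixels), while extrinsic calibration estimates the rotation R and translation t relating world to camera coordinates. A minimal projection sketch with assumed, illustrative values:

```python
import numpy as np

# Intrinsics: focal lengths and principal point in pixels (assumed values).
K = np.array([[1200.0,    0.0, 640.0],
              [   0.0, 1200.0, 480.0],
              [   0.0,    0.0,   1.0]])

# Extrinsics: identity rotation, camera 0.5 m behind the world origin.
R = np.eye(3)
t = np.array([0.0, 0.0, 0.5])

def project(point_world):
    """Project a 3D world point (metres) to pixel coordinates using the
    pinhole model: u ~ K (R X + t), followed by the perspective divide."""
    p_cam = R @ point_world + t          # world frame -> camera frame
    uvw = K @ p_cam                      # camera frame -> homogeneous pixels
    return uvw[:2] / uvw[2]              # perspective divide

# The world origin lies on the optical axis, so it maps to the principal point.
print(project(np.array([0.0, 0.0, 0.0])))   # [640. 480.]
```

Calibration procedures (e.g. with a checkerboard target) solve the inverse problem: given many known world points and their observed pixels, recover K (intrinsic) and R, t per view (extrinsic).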

 

"But this can be easily seen" - Solution strategies for the selection of the ideal illumination

STEMMER IMAGING

Selecting the right lighting is often underestimated, and yet it is often the key to success. The aim is to create repeatable, reliable, high-contrast images as the basis for robust software evaluation. Unfortunately, image processing is not always that simple, since it is not the object itself that is evaluated but the light reflected by the object. The material properties of components, however, can be tricky and cause many difficulties for the user.

Which illumination technologies are available, and which strategies help with shiny objects? What effect does the colour of the object have? In addition to the macroscopic shape properties of objects, the microscopic ones are often forgotten. In particular, micro-structures, textures and other surface variations such as coatings make life difficult for the user. What are the approaches here? Quick recipes from the cookbook of illumination can help you succeed.

 

The polarisation of light – making hidden things visible

STEMMER IMAGING

Did you know that we humans perceive light mainly through its intensity and wavelength? Light has a further, mostly unknown property by which it can be distinguished: its oscillation plane, or polarisation. While the human eye and common colour and monochrome cameras can detect colour and intensity differences very well, polarisation is not directly visible. Fortunately, polarisation imaging has recently attracted attention, and polarisation cameras have been brought to market. These cameras reveal the third "dimension" of light, making it usable for machine vision.

In this presentation, the following questions will be answered: What is polarisation? What types of polarisation exist, and how are they described? Which sensor technologies are used to measure polarisation? What is the benefit of polarisation in industrial imaging, and which inspection tasks can be uniquely solved by using it?
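On-sensor polarisation cameras capture intensities behind polarizers oriented at 0°, 45°, 90° and 135°; from these four channels, the degree and angle of linear polarisation follow from the linear Stokes parameters. A minimal NumPy sketch of that computation (a generic textbook formulation, not tied to any specific camera):

```python
import numpy as np

def linear_polarisation(i0, i45, i90, i135):
    """Compute degree (DoLP, 0..1) and angle (AoLP, radians) of linear
    polarisation from four polarizer-orientation intensities or images."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # 0 deg / 90 deg difference
    s2 = i45 - i135                      # 45 deg / 135 deg difference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aolp = 0.5 * np.arctan2(s2, s1)
    return dolp, aolp

# Fully linearly polarised light at 0 degrees: all intensity in the 0 deg
# channel, half intensity in each diagonal channel, none at 90 degrees.
dolp, aolp = linear_polarisation(1.0, 0.5, 0.0, 0.5)
print(float(dolp), float(aolp))   # 1.0 0.0
```

The same function works element-wise on full image arrays, which is how a four-directional polarizer sensor's raw channels are turned into DoLP and AoLP maps for inspection.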

 

What could be the reason? Troubleshooting machine vision applications

STEMMER IMAGING

What should you do if nothing works as planned? Vision systems are becoming more and more complex and multi-layered. Problems can be difficult to classify because the cause of an error and its symptoms are often far apart. This lecture shows methods of troubleshooting and explains how to avoid errors or recognise them at an early stage.