Machine Vision Technology Forum 2019 - Presentations and schedule



Thursday, 17 October 2019


3D object recognition with CAD
Lighting - a key enabler in machine vision applications
CCS Europe N.V.
Tips for setting up an embedded industrial system
Benoit Ostiz, Allied Vision Technologies GmbH
Hyperspectral imaging - future technology and applications
What could be the reason? Troubleshooting machine vision applications
Maurice Lingenfelder, STEMMER IMAGING AG


New method for surface inspection using structured illumination
Fernando Schneider, SAC Sirius Advanced Cybernetics GmbH
Key features of a quality machine vision optical filter
Georgy Das, Midwest Optical Systems Inc.
Image acquisition and pre-processing on frame grabbers using artificial intelligence
Sebastien Granatelli, Silicon Software GmbH
Imaging trends in 2019 and beyond
Christian Loeb, Teledyne DALSA Inc.
But this can be easily seen - solution strategies for the selection of the ideal illumination


Machine vision in challenging environments - which IP protection rating makes sense for my application?
Andreas Beetz, Autovimation GmbH
Polarisation of light – How to use the third property of light and make the invisible visible?
Lasers for embedded vision
Machine learning basics - an introduction to the types of machine learning
Vision system validation
Steve Mott, CEI Components Express Inc.

10:30-10:55 Coffee break


Inspection of glass and reflective surfaces with smart 3D technology
Marion Zsitko, LMI Technologies Inc.
Get the glare out! New polarized sensors paired with LED lighting solutions
James Gardiner, Metaphase Technologies Inc.
Developing cost-effective multi-camera systems with MIPI sensor modules
Roland Ackermann, The Imaging Source Europe GmbH
Fabric recycling with NIR hyperspectral cameras
Dr. Mathieu Marmion, Specim Spectral Imaging Ltd.
Imaging without processing - recording image streams


Modular compact sensors: A new type of 3D laser triangulation sensors
Stephan Kieneke, Automation Technology GmbH
Calibration in machine vision - methods and requirements
Maurice Lingenfelder, STEMMER IMAGING AG
Embedded vision cookbook
Challenges and solutions for multi-camera video recording systems
Philippe Morin, IO Industries Inc.
Prism-based multispectral imaging for machine vision applications
Jochen Braun, JAI A.S.


Line scan camera technology - the next generation
Andreas Lange, Teledyne DALSA Inc.
A better understanding of the complexity of LED lighting control
Gardasoft Vision Ltd.
Introduction to the CoaXPress standard and what's new in version 2.0
Euresys S.A.
Neural networks - functionality and alternative approaches
Practical aspects of time-of-flight imaging for machine vision
Ritchie Logan, Odos Imaging

12:30-13:55 Lunch break


Classical 2D measurement using the example of connectors and pins
High-performance SWIR cameras in machine vision and process control
Guido Deutz, Xenics N.V.
Standards, system benefits drive convergence of machine vision, Industrial Internet of Things (IIoT)
Tony Carpenter, Smart Vision Lights
Improving productivity with high-quality, eye-safe 3D machine vision
Henrik Schumann-Olsen, Zivid
sCMOS cameras - what is the difference from CMOS?
Janice Lee, Vieworks Co. Ltd.


Modern measurement technologies using the example of connectors and pins
Bin picking from programming to CAD modeling
Pavol Cobirka, Photoneo s.r.o.
Deep learning as part of modern machine vision applications
Adriano Biocchi, MVTec Software GmbH
Industrial camera innovations beyond mainstream – solve applications more efficiently
Albert Schmidt, Baumer Optronic GmbH
Benefits from USB3 vision system with Toshiba Teli original IP core technology
Oka Shunsuke, Toshiba Teli Corporation

15:00-15:25 Coffee break


Intrinsic calibration of light line sensors
Vision systems of the future, or how to combine technologies
Andreas Lange, Teledyne DALSA Inc.
Modern vision application development and rapid prototyping in CVB with C++, .Net and Python
Shape-from-Focus - an unusual but powerful 3D imaging technology
Hyperspectral apps – vertical solutions for industry
Markus Burgstaller, Perception Park GmbH


Performance comparison of different embedded processors
Optical components in the beam path of an image processing system and their influence on performance
Jörg Blätz, Jos. Schneider Optische Werke GmbH
Smart infrared cameras: A new technological approach to Industry 4.0
Michael Wandelt, Automation Technology GmbH
Random bin picking: Last step for a complete factory automation process
Exploring the advantages of AI-enabled machine vision in intelligent manufacturing
Chia-Wei Yang, Adlink Technology Inc.




Exploring the advantages of AI-enabled machine vision in intelligent manufacturing


Deep learning gives machine vision the capability to upgrade conventional AOI assets into intelligent, AI-enabled equipment with comprehensive modelling applications. Selecting suitably optimized edge devices is a critical consideration in next-generation AOI system design. Acquiring data, executing specific applications and fully utilizing the computational nodes across the entire IoT network is necessary to provide not only AI model training and provisioning capability, but also manageability of the AI-enabled AOI nodes (equipment) in the smart factory.

This presentation will discuss procedures for quickly and easily scaling vision systems from a single computing device to an entire IoT edge network, and for streaming feature images to storage and analytics services, to achieve a true AIoT solution.


Machine vision in challenging environments: which IP protection rating makes sense for my application?

autoVimation

More and more new applications in digital image processing require creative approaches to protect sensitive technology from mechanical, chemical or thermal stress. While dust is the predominant concern in wood processing, in the food and pharmaceutical industries contamination of the products by the camera system must also be avoided. The IP protection classes (International Protection codes) precisely define the extent to which the undesirable ingress of foreign particles and moisture into the interior of the device is prevented.

This presentation explains the selection of an application-specific enclosure and accessories for the vision system to enable it to be used in a harsh environment.


Introduction to the CoaXPress standard and what's new in version 2.0

Euresys

This talk first presents, in detail, the CoaXPress standard, one of the most recent in machine vision. A comparison with other standards on certain key points helps identify the use cases likely to benefit from CoaXPress.

The performance improvements brought by the new version 2.0 of the standard will then be detailed. In addition, possible future evolutions of CoaXPress will be discussed.


Standards, System Benefits Drive Convergence of Machine Vision, Industrial Internet of Things (IIoT)

Smart Vision Lights

While IIoT is one of today's hottest buzzwords, machine vision solutions have been at the forefront of machine-to-machine communications since the vision industry's inception in the 1980s, mainly because there is no automated solution unless the machine vision data (whether an offset, a pass/fail judgement, or other critical data) is communicated to nearby robots, manufacturing equipment and the engineers, technicians and managers who operate them for subsequent action.

In the past, machine vision solutions passed data along a variety of transport layers, whether consumer interfaces such as Gigabit Ethernet and USB, or dedicated industrial interfaces such as Camera Link. Supporting data interface, transport and library standards such as GigE Vision and GenICam further improved the ease with which machine vision solutions could communicate with nearby machines.

Today, these standards extend further into the system, beyond just defining the camera component, through standards such as the Local Interconnect Network (LIN) and the Mobile Industry Processor Interface (MIPI) that enable cost-effective electronic device design and sub-assembly communication. At the same time, wired networks such as industrial Gigabit Ethernet, coaxial and others are being complemented by wireless edge networks that will enable more plug-and-play operation and control of all system peripherals, not just the camera-PC pipeline. This presentation will explore how these old and new standards are enabling new, cost-effective machine vision solution designs.



Tips for setting up an embedded industrial system

Allied Vision

There are several advantages to using embedded solutions: low cost, low power consumption, compact design, and embedded boards with ever-increasing performance. Together these make migrating from PC-based systems to embedded solutions increasingly attractive in the industrial sector. But when does it really make sense to use an embedded solution? Which hardware and software architectures are appropriate for your application? Which components and which questions should be considered before choosing an embedded vision system?

In the evaluation phase of a new image processing solution, all these aspects must be studied carefully and all requirements verified. For machine vision users who have been working with PCs, embedded systems bring not only advantages but also new challenges, such as new hardware architectures and interfaces, new data processing paradigms and open-source operating systems.

This presentation gives you an overview of the key considerations and presents several possible solutions for embedded machine vision.


Performance comparison of different embedded processors


This presentation compares different processor platforms (ARM-based, NVIDIA Jetson, Intel Atom-based) and deals with restrictions regarding the acquisition of camera data. The results of benchmark tests show how efficiently the same application runs on different platforms. Furthermore, a CUDA-optimized algorithm running on a TX1 system and on a Windows graphics card is compared with its execution on a conventional Intel or ARM CPU.


Embedded Vision Cookbook


Embedded Vision covers a fairly wide range of applications and solutions. It involves the right combination of hardware, camera and software. Getting started is often the hard part.

The goal of this presentation is to provide a rough overview of our view of embedded vision. We show a recipe with the design steps for an example embedded vision system and show how Common Vision Blox (CVB) can help with that.


Developing cost-effective multi-camera systems using MIPI sensor modules with CSI-2/FPD-Link III (up to 15 m) connection to embedded boards

The Imaging Source

  • Foundations: MIPI CSI-2 and FPD-Link III
  • Advantages of MIPI sensor modules
  • Disadvantages of MIPI sensor modules
  • Multi-camera systems
  • Maximum number of usable sensor heads
  • Bandwidth considerations
  • Which sensors are currently available?
  • Aspects to be considered
  • Software development
  • (HALCON Embedded)
  • Hardware development
  • Existing CSI-2 systems and their limits
  • From the internal interface to external connection
  • FPD-Link III
  • Jetson TX2, Nano and Xavier Embedded Boards
  • Outlook


Lasers for embedded vision

Z-Laser

Manufacturers of laser triangulation sensors increasingly integrate laser modules deep into their systems and no longer regard them as "mere" illumination components. This has significant advantages: the dimensions of the laser system can be reduced considerably and adapted to the structural conditions, and costs can be optimized as well. However, it is important to maintain a high degree of flexibility in order to cover the needs of as many applications as possible on a given platform (wavelength, optics, output power). Other considerations concern topics such as modifying the field without recalibrating the sensor, supporting all laser variants without impact on system integration, and maintaining laser safety.

To help OEMs reduce costs and dimensions, the driver electronics can be integrated directly into the PCB design using our licensed software. To avoid unplanned downtime, an imminent end of life (EOL) can be simulated.



Modular Compact Sensors (MCS): New type of 3D laser triangulation sensors

Automation Technology

3D laser triangulation sensors are increasingly used in the development of industrial inspection systems. They usually consist either of discrete setups with a camera and a line laser projector, or they are factory-assembled and calibrated devices with integrated camera and line laser.

Discrete setups have the advantage of customizability to the requirements of the application (FOV, triangulation angle, working distance), but they demand increased effort for engineering, component encapsulation, calibration and integration into the application. On the other hand, factory-calibrated 3D laser triangulation sensors enable easy integration and shorten application development remarkably. However, their design cannot be customized to fully meet the needs of the application without significant effort and high NRE costs.

This lecture presents a new concept of 3D laser triangulation sensors which overcomes the aforementioned limitations. Thanks to their modular design, the Modular Compact Sensors (MCS) combine the design flexibility of discrete setups with the advantages of factory-calibrated 3D sensors.
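Regardless of whether the setup is discrete or factory calibrated, the underlying measurement principle is the same: a height change shifts the imaged laser line sideways. A minimal sketch of that conversion, assuming a simplified geometry (camera perpendicular to the transport plane, laser projected at a known angle from the vertical); all numbers are illustrative:

```python
import numpy as np

def height_from_shift(shift_px, pixel_size_mm, magnification, laser_angle_deg):
    """Convert the lateral shift of a laser line on the sensor into a height.

    Simplified model: camera viewing axis perpendicular to the transport
    plane, line laser projected at `laser_angle_deg` from the vertical.
    A surface raised by h shifts the line by s = h * tan(angle) on the
    object, so h = s / tan(angle).
    """
    shift_obj = shift_px * pixel_size_mm / magnification  # shift on the object [mm]
    return shift_obj / np.tan(np.radians(laser_angle_deg))

# A line shifted by 12 px, 5 um pixels, 0.1x optics, 30 deg triangulation angle:
h = height_from_shift(12, 0.005, 0.1, 30.0)
```

In a real sensor the line position is first extracted per column with sub-pixel accuracy (e.g. centre of gravity), and a full calibration replaces the constant magnification with a per-pixel mapping.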


Random bin picking: Last step for a complete factory automation process


In today's industry, automation and the use of robots are essential parts of production processes. A key element of the "factory of the future" is the complete automation of processes and their adaptation to ever more dynamic and flexible industrial environments.

Nowadays, in spite of the high degree of integration of robots in plants, some processes still involve operators manually picking randomly placed objects from containers.

The automation of this stage of the process requires a robot and a vision system that dynamically identifies the position of the objects inside the containers. This is what we know as bin picking. Bin picking consists of a hardware solution (vision + robot) and a software solution (image analysis + communication) that allows extracting randomly placed parts from containers.

Bin Picking provides the complete automation of processes with a series of advantages:

  • Reduction of heavy work and low-value added tasks for operators
  • Maximization of space in the factory thanks to being more compact than current mechanical solutions
  • Adaptation to flexible manufacturing processes
  • Reduction of cycle times increasing machine productivity


Inspection of glass and reflective surfaces with smart 3D technology

LMI Technologies

This presentation highlights the use of laser-based smart 3D technology to solve the challenges inherent in scanning glass and reflective surfaces. Specifically, it covers the inspection of mobile phone glass assembly, a classic application in the consumer electronics (CE) field. The laser sensor scans the contours of a mobile phone's glass and generates high-resolution 3D data. The data is then used to extract contour and gap features, and to measure flushness and offset in order to ensure that assembly tolerances are met.

The presentation will show how smart 3D sensors use an optimized optical design and specialized laser projection technology to achieve the best inspection results. The main sensor requirements will be discussed, in particular: low sensitivity to the target angle; the ability to eliminate noise caused by laser scattering on the edges of the target surface; accurate measurement of different colours and surface types (e.g. coated, glossy or transparent); the need to scan and inspect at speeds above 5 kHz in order to sustain a continuous production flow; and low total cost to ensure maximum profitability.


Practical Aspects of Time-of-Flight Imaging for Machine Vision

odos imaging

Time-of-Flight (ToF) imaging is a well-known technology, yet remains relatively novel in machine vision. This talk will examine the practical aspects of ToF imaging and its applicability to general machine vision tasks.

The talk will look at the processing that occurs on board a ToF imaging device and, through application examples, at the post-processing steps on the client PC needed for successful deployment.
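As background for the on-board processing such a device performs, a continuous-wave ToF pixel typically recovers distance from four phase samples of the modulated return signal. A hedged sketch of that standard demodulation, using synthetic values rather than real sensor data:

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def cw_tof_distance(a0, a1, a2, a3, f_mod):
    """Distance from the four phase samples of a CW time-of-flight pixel.

    a0..a3 are correlation samples taken at 0/90/180/270 degrees; the phase
    of the returned light is atan2(a1 - a3, a0 - a2), and distance follows
    from d = c * phase / (4 * pi * f_mod). The unambiguous range is
    c / (2 * f_mod).
    """
    phase = np.arctan2(a1 - a3, a0 - a2) % (2 * np.pi)
    return C * phase / (4 * np.pi * f_mod)

# Synthetic pixel 2.5 m away, modulated at 20 MHz (unambiguous range ~7.5 m):
f = 20e6
true_d = 2.5
phi = 4 * np.pi * f * true_d / C
samples = [np.cos(phi - k * np.pi / 2) for k in range(4)]  # a0..a3
d = cw_tof_distance(*samples, f)
```

Real devices add the post-processing the talk covers: amplitude-based validity masks, phase unwrapping across modulation frequencies, and temporal/spatial filtering.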


Bin Picking - From Programming to CAD Modeling


  • Skill set required for Bin Picking
  • CAD simulation and modeling
  • Designed for robotic integrators
  • Sustainability of Bin Picking Studio
  • Benefits of CAD modeling
  • Challenges
  • Future


New method for surface inspection using structured illumination

SAC Sirius Advanced Cybernetics GmbH

Structured illumination opens up new possibilities for fast and efficient surface inspection. Dedicated illumination patterns help to find the finest 3D defects and can identify specular reflection properties. A flat area light generates these patterns electronically. Both the illumination power and the pattern frequency are far higher than with existing structured illumination, which allows effective inspection of static and moving parts.

Process integration is easy using the sensor's interface. Topographic images of the surface are used for automatic testing. The new method is compared to existing surface inspection methods: photometric stereo and advanced technologies for specular surfaces.


Shape-from-Focus – an unusual but powerful 3D imaging technology


This presentation introduces an unusual but very powerful 3D imaging technology called "shape-from-focus" (SFF), based on automatically varying the focus of a telecentric lens. Intelligent processing of the acquired image stack makes it possible to compute both a 3D height image and an intensity image with greatly extended depth of field. The principles of the technique are explained, along with how to use CVB for the computations and how to design the mechanics of your optical system, all accompanied by application examples.
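The core of the technique can be sketched independently of any toolkit: for every pixel, pick the slice of the focus stack where a local contrast measure peaks, and read off the corresponding focus position. A deliberately tiny NumPy illustration (the squared-Laplacian focus measure and the test data are simplistic stand-ins):

```python
import numpy as np

def shape_from_focus(stack, z_positions):
    """Toy shape-from-focus: per pixel, pick the slice with maximum local
    contrast (squared 4-neighbour Laplacian) and report its z position.

    stack: (n, h, w) images taken at increasing focus positions.
    Returns a height map (h, w) and an extended-depth-of-field image (h, w).
    """
    n, h, w = stack.shape
    p = np.pad(stack, ((0, 0), (1, 1), (1, 1)), mode="edge")
    lap = (p[:, :-2, 1:-1] + p[:, 2:, 1:-1] + p[:, 1:-1, :-2]
           + p[:, 1:-1, 2:] - 4 * stack)
    best = np.argmax(lap ** 2, axis=0)              # (h, w) winning slice index
    height = np.asarray(z_positions)[best]
    eDOF = np.take_along_axis(stack, best[None], axis=0)[0]
    return height, eDOF

# Synthetic stack: slice 2 is "in focus" (checkerboard), others are flat grey.
stack = np.full((4, 8, 8), 0.5)
yy, xx = np.indices((8, 8))
stack[2] = (yy + xx) % 2
height, eDOF = shape_from_focus(stack, z_positions=[0.0, 0.1, 0.2, 0.3])
```

Production implementations refine this with windowed focus measures and sub-slice interpolation of the peak to get height resolution finer than the focus step.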


3D object recognition with CAD


The main topic of this presentation is DNC, a new CVB tool. It enables fast 3D object recognition and creates calibrated point clouds from a CAD model. Using concrete examples, you will learn the details of the training process and see what results you can achieve.


Improving productivity with high-quality, eye-safe 3D machine vision


How do you stay ahead of the competition in a fast-paced machine vision and industrial automation market?

In this 20-minute lecture, we'll introduce you to a few crucial concepts required for implementing a flexible and scalable machine vision platform for everything from bin picking to manufacturing, inspection and assembly, and even logistics and e-commerce.

Most of today's industrial and collaborative robots are "visually impaired", giving forward-leaning customers a head start when using STEMMER IMAGING's latest 3D colour cameras in their automation processes.

Attendees will learn how eye-safe, white structured-light hardware can speed up implementation, solve more tasks over a flexible working distance, and accurately recognize more objects.



Deep learning as part of modern machine vision applications

MVTec Software GmbH

Machine vision is crucial to highly automated production processes, which increasingly rely on advances in artificial intelligence, such as deep learning. Besides higher automation levels, these technologies enable increased productivity, reliability, and robustness.

Since deep learning is, at its core, an approach to classifying data, many other machine vision technologies have to be considered as well. The presentation highlights the technology's role within industrial vision settings and shows the latest developments for solving machine vision applications.


Image acquisition and pre-processing on frame grabbers using artificial intelligence

Silicon Software

Sensors and interfaces keep increasing in resolution and speed, but the complexity of image processing tasks is reaching new limits. This talk demonstrates how this problem is solved under real-world conditions, from the critical point of view of a field application engineer. Don't miss this interactive presentation based on the development tools.


Machine Learning Basics


Machine learning, and in particular deep learning with neural networks, is one of the most sought-after technologies in computer vision. Driven by astonishing results in recognition and easy accessibility through free tools, it has gained widespread acceptance among researchers and practitioners. Buzzwords such as neural nets, AI, learning rates, supervised learning, linear models and more are quite common nowadays. But what does all this mean and how does it actually work?

This presentation will briefly explain what machine learning is, how it works and what the most common buzzwords mean. After this talk we hope to have cleared up most of the confusion about this interesting topic; if you want to learn more afterwards, please join the advanced talk "Neural Networks – Functionality and Alternatives".
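To make those buzzwords concrete, here is about the smallest possible supervised-learning example: a linear model (logistic regression) trained by gradient descent with an explicit learning rate, on synthetic data whose labels follow a known rule:

```python
import numpy as np

# Synthetic supervised-learning problem: 200 points, label 1 iff x0 + x1 > 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.1              # weights, bias, learning rate
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid prediction
    grad_w = X.T @ (p - y) / len(y)           # gradient of the log-loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                          # one gradient-descent step
    b -= lr * grad_b

accuracy = np.mean(((X @ w + b) > 0) == (y == 1))
```

Everything a deep network does is a scaled-up version of this loop: more parameters, a non-linear model, and a fancier optimizer, but the same "predict, measure error, step against the gradient" cycle.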


Neural Networks – Functionality and Alternatives


Neural networks are currently among the most prominent methods for machine learning, if not the most prominent. While quite versatile and powerful, they are not the only means of machine learning. In this talk we will take a closer look at the functionality of neural networks. Their advantages and disadvantages compared to other machine learning methods, such as support vector machines, gradient-boosted trees and nearest neighbours, will be presented as well.

If you are unfamiliar with machine learning, this talk may still be of interest, but we recommend you attend our talk "Machine Learning Basics" first.
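As a taste of the non-neural alternatives mentioned above, a k-nearest-neighbour classifier fits in a dozen lines of NumPy; it has no training phase at all, just a vote among the closest stored examples. The toy clusters below are illustrative only:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """k-nearest-neighbour classification: squared distances to all
    training points, keep the k closest, majority vote on their labels."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d2, axis=1)[:, :k]        # (n_test, k) indices
    votes = y_train[nearest]
    return np.array([np.bincount(row).argmax() for row in votes])

# Two well-separated Gaussian clusters as a toy problem:
rng = np.random.default_rng(1)
X0 = rng.normal(loc=-2.0, size=(20, 2))
X1 = rng.normal(loc=+2.0, size=(20, 2))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 20 + [1] * 20)
pred = knn_predict(X_train, y_train, np.array([[-2.0, -2.0], [2.0, 2.0]]))
```

The trade-off against neural networks is typical of the methods the talk compares: zero training cost and easy interpretability, but prediction cost grows with the dataset and high-dimensional raw images need hand-crafted features first.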



Smart Infrared Cameras: A new technological approach for Industry 4.0

Automation Technology

Although thermal imaging with infrared cameras has great potential, especially in industrial applications, it has made its way into automation and quality assurance only to a very limited extent. Although the introduction of uncooled detectors made the essential base technology for thermal industrial cameras available more than 20 years ago, many obstacles still remain.

One important reason for the low uptake in industry is the lack of standard software for thermal imaging. Integrators have to use SDKs provided by the camera manufacturers to develop their own software solutions, which is a high hurdle. Furthermore, the camera models available today are not consistently designed for industrial applications: manufacturers lack application experience and still don't see industry as a relevant target market.

A third point is that the acceptance of computer-based imaging systems tends to decline; reasons include the complexity of such systems, cost, stability, data safety and maintenance effort.

The lecture presents a new device-related approach with smart thermal cameras to address the obstacles for practical applications and to make the potential of temperature imaging in industrial environments accessible.


Prism-based Multispectral Imaging for Machine Vision Applications


Despite the huge potential that hyperspectral imaging offers in quality and structural inspection of food, plant health and growth, environmental monitoring, pharmaceuticals, medical diagnosis, forensic sciences and thin-film analysis, its scope often seems limited in industrial environments. This is because the hyperspectral imaging technologies available today are slow, use low-resolution sensors, require complex image data handling and are a costly investment to multiply. Furthermore, an application starting with a hyperspectral approach often comes to the conclusion that only 3 or 4 relevant wavelength bands are required. Today, hyperspectral applications are usually found in laboratories, where the main task is to identify the relevant bands to differentiate between two or more objects.

From an industrial perspective, multispectral imaging appears to have higher application potential: the complexity of data handling is much lower due to the reduced number of spectral bands, camera line/frame rates are higher and system costs are lower. Multispectral and hyperspectral imaging do not compete with each other; they are complementary technologies when used in the right applications. Eventually, the information on the number and spectral nature of the bands identified with hyperspectral cameras can be used to design multispectral cameras for real high-speed industrial environments. This is where the latest camera solutions find applications.


Hyperspectral apps - ready to go

Perception Park

Vibrational spectroscopy is based on the fact that molecules reflect, absorb or ignore electromagnetic waves of certain wavelengths. Hyperspectral sensors measure those responses and return a spectrum per spatial point, from which the chemical fingerprint of a material can be derived. This data requires extensive processing to be usable by vision systems.

Chemical Colour Imaging methods transform hyperspectral data into image streams. These streams can be configured to highlight chemical properties of interest and are sent to image processing systems via protocols such as GigE Vision. Applications: recycling, food safety, quality assurance (e.g. pharma, food and packaging), colour measurement, etc.

The abstraction of hyperspectral cameras into purpose-specific vision cameras is enabled by software apps. Preconfigured chemical and/or physical material properties enable inspection tasks far beyond today's limits. Predefined chemometric processing achieves selectivity on the scale of scientific methods.
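The core transformation behind such apps, collapsing the spectrum at each pixel into a few application-specific "chemical colour" channels, can be sketched as a simple band-weighting step (the weights and the absorption band below are hypothetical, standing in for the preconfigured chemometric model):

```python
import numpy as np

def to_colour_stream(cube, band_weights):
    """Collapse a hyperspectral cube (h, w, bands) into a few channels by
    weighting the spectral bands, e.g. to highlight an absorption feature.

    band_weights: (bands, channels) matrix chosen per application.
    """
    return np.einsum("hwb,bc->hwc", cube, band_weights)

# 4x4 scene, 5 bands; one output channel that responds to band 3 only
# (a hypothetical absorption feature of the material of interest):
cube = np.zeros((4, 4, 5))
cube[:2, :, 3] = 1.0                      # top half contains the material
weights = np.zeros((5, 1))
weights[3, 0] = 1.0
stream = to_colour_stream(cube, weights)
```

The resulting few-channel stream is what gets pushed over GigE Vision, so the downstream system can treat a chemical property like an ordinary grey or colour image.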


Hyperspectral imaging – technology, applications and future


Hyperspectral imaging is one of the current trends in machine vision, along with Industry 4.0, embedded vision and deep learning. The combination of spectroscopy and image processing opens up new fields of application in machine vision. Chemical information visualised using Chemical Colour Imaging (CCI) enables the acquisition of data that would not be possible with conventional image processing. Along with the hardware required for hyperspectral tasks, the talk will present applications and possibilities offered by this technology.


Fabric recycling with NIR hyperspectral cameras

Specim Spectral Imaging

Recycling is in the air. We hear about it everywhere, even for unexpected products. Who would expect that 10-year-old trousers and shirts, dirty and full of holes, would still have value? Since 2018, new environmentally friendly EU rules have been pushing towards the recycling of used fabrics and garments. Indeed, textile reuse and recycling reduce environmental impact compared to incineration and landfilling, the usual final fate of old textiles.

Looking at the raw materials used to make textiles, most of them could be recycled. To do so, however, a perfect identification of their fiber type is needed. Hyperspectral cameras offer new possibilities here.

So far, recycling of fabrics has been done manually, which has inherent and significant issues:

  • Repeatability: a person cannot sort fabrics reliably over several hours of tedious work
  • Reproducibility: two employees do not necessarily sort fabrics in the same manner
  • Hygiene: textiles and fabrics may be dirty, contain allergens or have been used in hazardous environments
  • Speed: humans do not perform as fast as automated systems
  • Accuracy: fabrics are difficult to identify just by their appearance, texture or color
  • Cost: in the long run, manual work is always costly

Within this context, automation would be very useful. A machine vision system would address all the previously mentioned issues related to manual work: it is repeatable, reproducible, contact-free, fast, accurate and cost-efficient. A machine vision system dedicated to sorting materials requires sensors able to measure chemical composition from a distance. This is a task where NIR hyperspectral cameras outperform all other vision technologies.

We measured different types of fabrics with a NIR hyperspectral camera. Samples were made of different materials and included both woven and knitted ones: synthetic (acrylic and polyester), animal fibers (silk, wool, merino and alpaca) and plant fibers (linen and cotton).

All these fabrics had different colors and textures, some even being dark and black.

The data were normalized and analyzed with a PLS-DA model. Results show that synthetic, animal and plant fibers could be sorted regardless of the color of the textile, including dark ones. We believe these findings are of the utmost importance, opening a new industrial market driven by new EU laws. Moreover, we would like to highlight that most garments are based on cotton, a very demanding crop in terms of water, pesticides and insecticides. Recycling it can only be an asset, for all of us.
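The study's actual analysis used a PLS-DA model; as a much simpler stand-in that still shows the pipeline shape (per-spectrum normalisation to remove brightness differences such as dark textiles, then classification), here is a standard-normal-variate plus nearest-centroid sketch on synthetic spectra:

```python
import numpy as np

def snv(spectra):
    """Standard Normal Variate: per-spectrum zero mean / unit variance,
    a common NIR pre-processing step that removes multiplicative gain."""
    m = spectra.mean(axis=1, keepdims=True)
    s = spectra.std(axis=1, keepdims=True)
    return (spectra - m) / np.maximum(s, 1e-12)

def nearest_centroid_fit(X, y):
    return np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])

def nearest_centroid_predict(centroids, X):
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

# Toy "NIR spectra": both classes share a baseline shape; class 1 has an
# extra absorption band, and each sample gets a random overall brightness
# (standing in for darker/lighter textiles) that SNV removes.
rng = np.random.default_rng(2)
base = np.sin(np.linspace(0, 3, 50))
X, y = [], []
for cls in (0, 1):
    for _ in range(30):
        s = base.copy()
        s[20:25] += 0.8 if cls else 0.0   # class-specific absorption band
        s *= rng.uniform(0.3, 1.5)        # random overall brightness
        X.append(s)
        y.append(cls)
X, y = snv(np.array(X)), np.array(y)
pred = nearest_centroid_predict(nearest_centroid_fit(X, y), X)
accuracy = np.mean(pred == y)
```

PLS-DA replaces the nearest-centroid step with a supervised latent-variable projection, which handles correlated bands and more than one discriminating feature, but the normalise-then-model structure is the same.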


High-performance SWIR cameras in machine vision and process control


Short-wave infrared (SWIR) imaging is now widely used in industrial markets. Supporting integrated machine vision systems, it enables or facilitates efficient and reliable process control. In this session we will present the use of SWIR imaging in the machine vision market.

The session will focus on two types of SWIR cameras: area scan and line scan. For each of these categories, we present our camera portfolio and its evolution over the years. We also provide examples of SWIR applications in industrial markets, in particular for machine vision and process control.

As the technology progresses, applications with demanding imaging requirements are rapidly emerging. We will discuss some of these requirements and help you identify the specifications that are crucial or of interest for your application. We will also present the latest SWIR cameras from Xenics that can meet the requirements discussed.



Modern vision application development and rapid prototyping in CVB with C++, .Net and Python


Thanks to their object-oriented design and consistent integration into their respective runtime environments, the CVB++, CVB.Net and CVBpy APIs released in Common Vision Blox 2019 simplify the creation of complex image processing applications through high-level language features and proven design patterns, on the PC as well as on embedded platforms such as ROS or similar. In addition, the various Python environments and LINQPad facilitate rapid development.

This seminar is a quick and playful exploration of different development approaches in CVB. Along with the design of the three new language-specific APIs in CVB, enhanced troubleshooting possibilities are discussed, as well as the bridges to common runtime libraries such as Qt, WPF, Windows Forms and NumPy, all with the aid of practical examples. Knowledge of at least one of the three languages is an advantage when attending this seminar.


Vision trends in 2019 and beyond

Teledyne DALSA

Machine vision interfaces have not evolved much over the last 10 years, despite the emergence of USB and Ethernet ports in consumer PCs. The combination of very high resolutions and high speeds requires greater bandwidth to keep up with the demands of industry. Which options are currently available, and which new ones will be available soon? 2D sensors are not only increasing in resolution and speed; polarization is also an emerging field.

We will take a look at these new technologies and at how they could open up new application areas for machine vision users.


The vision systems of the future, or how to combine technologies

Teledyne DALSA

The machine vision market is constantly driven by new innovations. The goal is to optimize or combine production processes with ever more powerful hardware- and software-based vision algorithms in order to increase the number of applications. Here is a brief overview of possible scenarios.


Line scan camera technology - the next generation

Teledyne DALSA

Line scan technology is evolving continuously to keep up with the growing demands of the machine vision world. "Multifield" technology, based either on time or on a spectral domain, allows the user to acquire several images, for example with direct, low-angle and back lighting, in a single pass. Combined with new lighting possibilities, the multifield approach improves detection and cycle time.

Polarization imaging is also an emerging technology for detecting birefringence, stress and surface morphology, or for classifying materials. Another new field is "super resolution" based on 32K TDI line scan cameras. Super resolution can be achieved using pixel-offset technology, which increases the signal-to-noise ratio while keeping a relatively large pixel so that existing lenses can still be used. Fiber-optic connectivity is another technological advance, enabling line rates of 300 kHz or even 600 kHz. New SWIR camera developments will also allow machine vision to play a key role in Industry 4.0.


Benefits from USB3 vision system with Toshiba Teli original IP core technology

Toshiba Teli

On the vision market you will find several popular interface standards, including USB 3.1 (Gen1), GigE, Camera Link and CoaXPress. Each of these interfaces has pros and cons and different ideal use cases.

This presentation will focus on USB 3.1 (Gen1), which has been established in industrial applications for years thanks to its cost efficiency, takt time improvement and high reliability. As a perfect add-on, Toshiba Teli uses advanced IP core technologies that enable real-time communication, intelligent proprietary error handling, minimized camera size and dual-tap speed to reach beyond 10G.


sCMOS cameras - what is the difference compared to CMOS?


Unlike existing CMOS and CCD sensors, sCMOS is uniquely capable of simultaneously providing a large field of view, high sensitivity and wide dynamic range.

Because every pixel of a CCD sensor is exposed at once and the photoelectrons are converted into a signal at a common output port, the speed of image acquisition is limited: the more pixels that need to be transferred, the slower the camera's total frame rate. Instead of waiting for an entire frame to complete its readout, an sCMOS sensor exposes and digitizes rows in a rolling fashion, so rows that finish exposure first are read out first. This allows rapid frame rates.
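The frame-rate argument above can be sketched numerically. The figures below (row count, per-row digitization time, exposure time) are assumed for illustration only; they are not specifications of any particular sensor.

```python
# Rough frame-rate arithmetic for a rolling-readout sensor (illustrative numbers).
rows = 2160                  # vertical resolution (assumed)
line_time_us = 10.0          # time to digitize one row, in microseconds (assumed)

# A full-frame readout must finish the whole frame before the next exposure:
frame_readout_ms = rows * line_time_us / 1000.0      # 21.6 ms per frame
ccd_like_fps = 1000.0 / frame_readout_ms

# With rolling readout, exposure of later rows overlaps readout of earlier
# ones, so the sustained rate is limited by the slower of exposure and
# readout, not by their sum:
exposure_ms = 20.0                                     # assumed exposure time
sequential_fps = 1000.0 / (exposure_ms + frame_readout_ms)    # no overlap
overlapped_fps = 1000.0 / max(exposure_ms, frame_readout_ms)  # rolling overlap
```

Under these assumptions the overlapped readout nearly doubles the sustained frame rate compared with a strictly sequential expose-then-read cycle.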

Moreover, while other sensors suffer from image quality issues in low-light conditions, the sCMOS sensor offers improved sensitivity and can capture high-quality, low-noise images even in poor conditions. With these features, sCMOS cameras are ideal for biometry, medical and scientific applications.



Industrial camera innovations beyond mainstream – solve applications more efficiently

Baumer Optronic

Due to impressive performance data, CMOS cameras dominate the mainstream segment of industrial cameras, with the industry-standard 29 x 29 mm form factor and up to 20 MP resolution from 1" sensors. However, CMOS sensor technology is rapidly evolving to even higher resolutions and speeds, and with the proven form factor it is a challenge to make full use of its capabilities. Furthermore, there is a trend towards more integrated vision solutions to reduce space requirements, complexity and, as a result, cost.

Using typical applications, the presentation shows how innovative camera features make optimum use of CMOS sensors with up to 48 megapixel resolution. See how applications can be solved more flexibly and efficiently with an integrated lighting controller, modular IP protection up to IP69K, image pre-processing, or the combination of a 10 GigE interface with flexible memory management of up to 8 GB.


Lighting as key in machine vision applications!


Probably the most critical feature of a machine vision application is lighting. Illuminating a target poorly causes loss of data and productivity, and results in lost profit. A professional lighting technique involves the qualified selection of a light source and lighting technique, and its skilled placement relative to the object being inspected and the camera.


Vision System Validation


How confident are you that your vision system will operate without problems? The most underrated piece of the vision system ... the cable ... needs complete performance validation to ensure the user does not need to make a support call.

This presentation will outline how cables in each of the Vision standards should be validated to ensure consumer confidence.


A better understanding of the complexity of LED lighting control


The majority of lighting control solutions in machine vision applications are "plug and play". In some cases, however, a deeper understanding of lighting control is required.

This presentation explores some of the complex areas of lighting control and explains the approach taken to provide solutions.


Challenges and solutions for multi-camera video recording systems

IO Industries

By its nature, a camera can only reproduce an image from a single angle, limited by its optics and its position. In some cases, however, it is advantageous to view an object from different angles, to obtain more data about its depth or shape, and in general to gather more information about the observed scene at a given moment (and over time).

Many applications therefore use an array of several cameras. The complexity of the resulting vision system grows in proportion to the number of cameras used. Whether the cameras face outward from a central point (for 360° vision, for example) or, on the contrary, all target the same point (to track motion or provide volume information), the challenges and requirements of these complex systems are the same, with a common goal: delivering quality results.

Camera synchronization, video transfer, multi-channel recording and data management must all be taken into account when designing a multi-camera video recording solution. These aspects will be detailed in this presentation, with examples of solutions that exist today and of developments to come.


Get the glare out! New polarized sensors paired with LED lighting solutions


Polarization has become a hot trend in machine vision since the launch of Sony's polarized sensor series, with many camera manufacturers embracing the technology. While polarized sensors and cameras can make polarization easy, you need more than a polarized sensor or camera to get a perfect polarized image.

Polarized lighting can make or break a polarized image. Techniques such as cross polarization and different lighting styles help a user produce the best polarized image. We will go in depth on how polarized lighting works and how it interacts with Sony's polarized sensor.

In the presentation we will go in depth on Sony's polarized sensors, on best practices for pairing them with Metaphase polarized LED illumination, and on the applications that can be solved using polarization technology.


Key Features of a Quality Machine Vision Filter


Optical filters are critical components of machine vision systems. They’re used to maximize contrast, improve color, enhance subject recognition and control the light that’s reflected from the object being inspected. Learn more about the different filter types, what applications they’re best used for and the most important design features to look for in each. Not all machine vision filters are the same.

Learn how to reduce the effects of angular short-shifting. Discover the benefits of filters that emulate the bell-shaped spectral output curve of the LED illumination being used. And find out more about the importance of a high-quality inspection process that limits the possibility of imperfections and enhances system performance.

Plus, learn more about the latest advances in machine vision filters. SWIR (short-wave infrared) filters are designed to enhance the image quality of InGaAs camera technology and are useful for applications imaging from 900-2300 nm. Wire-grid polarizers are effective in both the visible and infrared ranges from 400-2000 nm and are rated for operation at 100 °C for 1,000 hours.


Optical components in the beam path of an image processing system and their influence on performance

Schneider Kreuznach

In an image processing system, the lens is not the only optical component in the beam path. There is often also an optical filter mounted in front of the lens, more rarely between the lens and the sensor. Another component can be a prism or a beam splitter, for example for coaxial illumination. Finally, the sensor itself includes essential optical components such as the cover glass and the microlenses placed in front of each pixel.

It is important that all elements fit together and interact in the best possible way. It is therefore decisive whether the filter is placed in front of the lens or in front of the sensor, whether the lens is designed for use with a beam splitter, and whether the beam characteristics of the lens harmonize with the sensor's microlenses.


Optical 2D measurement using the example of "Connectors, pin tips and tumbling circle testing"


Precise, metric measurement of components is a real challenge. By measuring a connector's pin tips, some of the basic procedures are covered while the various problems are solved.

How should a two-dimensional camera system be set up, and which optical and illumination techniques are suitable for measuring with front light? A regular topic in this discussion is the choice of camera and resolution. Which software methods can be used? Difficulties such as depth of field, parallax effects and material properties, all of which can significantly reduce measurement accuracy, are also discussed.
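The camera and resolution question raised above comes down to simple arithmetic: the field of view divided by the sensor width gives the object-side sampling, and subpixel edge detection refines it further. The numbers below are assumed purely for illustration, and the 1/10-pixel subpixel figure is an optimistic rule of thumb, not a guarantee.

```python
# Back-of-the-envelope resolution for a 2D measurement setup (assumed numbers).
fov_mm = 30.0        # field of view across the connector (assumed)
pixels = 4096        # sensor width in pixels (assumed)
um_per_pixel = fov_mm * 1000.0 / pixels   # object-side sampling in µm per pixel

# Subpixel edge interpolation typically reaches a fraction of a pixel;
# 1/10 pixel is a common, optimistic rule of thumb under good contrast:
subpixel_factor = 0.1
measurement_res_um = um_per_pixel * subpixel_factor
```

Under these assumptions the raw sampling is about 7.3 µm per pixel, and the subpixel estimate lands below 1 µm, which is exactly the regime where the depth-of-field, parallax and material effects mentioned above start to dominate the error budget.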


Modern measurement technologies using the example of "connectors, pin tips and tumbling circle testing"


In addition to the classical measurements with area scan cameras and telecentric lenses, different modern measurement technologies can ease the life of a developer. How does measuring with "Shape-from-Shading" work and what are the limitations?

Very often, methods such as laser triangulation or structured light projection are used to measure pin tips. What are the advantages of these approaches when compared to 2D methods? What needs to be considered when it comes to software evaluation? What difficulties are to be expected? A further topic of the lecture is the generation of 3D for measurement tasks using the "Depth-from-Focus" method and the required components with this approach.
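As a rough illustration of the laser triangulation principle mentioned above: in one common configuration the camera looks straight down while the laser sheet is tilted by an angle theta, so a height step h displaces the imaged line laterally by d = h·tan(theta). The numbers below are assumed, and real systems calibrate this relationship rather than relying on nominal angles.

```python
import math

# Laser triangulation geometry sketch (all numbers assumed): the camera views
# the part perpendicularly, the laser sheet is tilted by theta, and a height
# step displaces the imaged laser line laterally by d = h * tan(theta).
theta_deg = 30.0                     # laser tilt angle (assumed)
shift_mm = 0.5                       # observed lateral line shift (assumed)
h_mm = shift_mm / math.tan(math.radians(theta_deg))   # reconstructed height
```

A steeper angle increases the lateral shift per unit height (better height resolution) at the cost of more occlusion, which is one of the trade-offs versus 2D methods that the talk addresses.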


"But this can be easily seen" - Solution strategies for the selection of the ideal illumination


Selecting the right lighting is often underestimated, and yet it is often the key to success. The aim is to create repeatable, reliable high contrast to achieve a robust software evaluation. Unfortunately, image processing is not always that simple, since it is not the object itself that is evaluated but the light reflected by the object. The material properties of components, however, can be tricky and cause many difficulties for the user.

Which illumination technologies are available, and which strategies help with shiny objects? What effect does the colour of the object have? In addition to the macroscopic shape properties of the objects, the microscopic shape properties are often forgotten. In particular, micrographs, textures and other surface variations such as coatings make life difficult for the user. What are the approaches here? Quick recipes from the illumination cookbook can help you succeed.


Calibration methods and their requirements


Calibrations are an important part of imaging and machine vision. They are the basis for metrically correct measurements. In addition, suitable methods can be used to determine the relationship between several individual sensors. It is not always easy to keep track of one's own requirements.

The lecture describes the differences between intrinsic and extrinsic calibration and shows in which cases a calibration is necessary. Examples are shown for 2D as well as 3D applications.
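The intrinsic/extrinsic distinction described above can be made concrete with the standard pinhole model: intrinsics (focal lengths, principal point) map camera-frame coordinates to pixels, extrinsics (rotation, translation) map world points into the camera frame. All numbers below are assumed for illustration.

```python
import numpy as np

# Pinhole model sketch: intrinsics K map camera coordinates to pixels,
# extrinsics [R | t] map world points into the camera frame (numbers assumed).
fx, fy = 1200.0, 1200.0      # focal lengths in pixels (intrinsic, assumed)
cx, cy = 640.0, 480.0        # principal point (intrinsic, assumed)
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

R = np.eye(3)                  # extrinsic rotation: camera aligned with world
t = np.array([0.0, 0.0, 2.0])  # extrinsic translation: world origin 2 m ahead

def project(K, R, t, X_world):
    """Project a 3D world point to pixel coordinates."""
    Xc = R @ X_world + t       # world -> camera frame (extrinsics)
    uvw = K @ Xc               # camera frame -> homogeneous pixels (intrinsics)
    return uvw[:2] / uvw[2]

uv = project(K, R, t, np.array([0.1, 0.0, 0.0]))
```

Intrinsic calibration estimates K (and lens distortion, omitted here); extrinsic calibration estimates R and t, which is also what relates several individual sensors to one another.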


Intrinsic calibration of light line sensors


We introduce a method for the intrinsic calibration of light line sensors. It is based on a collection of profiles generated by randomly positioning a calibration target within the laser plane. The specific shape of the calibration target tolerates tilts and rotations during profile generation. We show tools for the convenient generation of profiles and make statements about the achievable accuracy.


Imaging without processing – recording image streams


Throughout the history of machine vision there has always been a demand to record images, and a vertical industry has grown up around this. With TV-standard cameras it was not unusual to see recordings on videotape, but today's formats are far more varied and of much higher bandwidth, so we assume a PC-based recording system.

The applications are many, from human training to offline inspection by machines and archiving of data, but the core specification is always bandwidth. Image size and format have an influence, but the real question is: “How much data?”
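The "How much data?" question is answered by multiplying resolution, pixel depth and frame rate. The camera specs below are assumed for illustration; the point is how quickly a modest-sounding camera outruns a single storage device, and how multi-camera systems multiply the requirement.

```python
# "How much data?" — sustained bandwidth and storage for one camera
# (all specs assumed for illustration).
width, height = 2048, 1536      # resolution in pixels
bytes_per_pixel = 1             # 8-bit monochrome
fps = 200                       # frame rate

bytes_per_s = width * height * bytes_per_pixel * fps
mb_per_s = bytes_per_s / 1e6            # sustained write rate in MB/s
gb_per_min = bytes_per_s * 60 / 1e9     # storage consumed per minute in GB

cameras = 4                             # multi-camera systems multiply it all
total_mb_per_s = mb_per_s * cameras
```

Here a single camera already needs roughly 630 MB/s of sustained write bandwidth and fills about 38 GB per minute; four such cameras exceed 2.5 GB/s, which is why recording systems are sized around bandwidth first and everything else second.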

The machine vision world has always been a three-way balance between cameras, acquisition technology and PC performance, and all of these affect a recording system. Newer technology raises performance, which means that what was once a difficult, custom application is now relatively easy.

In this talk we will look at the limitations and possibilities, and how to create an efficient, high-speed and reliable recording system. There are strategies to help in high-speed, high-bandwidth and multi-camera recording systems and these will be explored through this presentation.


Polarization of light - how to use the third property of light and make the invisible visible


Did you know? We humans perceive light through its intensity and its wavelengths. Light, however, has another, often unknown property that makes it very distinctive: its plane of oscillation, or polarization. While the human eye and common color or monochrome cameras distinguish colors and intensity levels perfectly well, polarization is not directly visible. For this there are so-called polarization cameras, which reveal this third "dimension" of light and make it usable for machine vision. This presentation will answer the following questions: What is polarization? What are the different types of polarization and how can they be described? Which sensor technology can be used to measure polarization? What benefits does polarization bring to machine vision, and which inspection tasks does it now make possible?
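On-sensor polarizer arrays measure intensity behind analyzers at four orientations (0°, 45°, 90°, 135°), from which the linear Stokes parameters, degree of linear polarization (DoLP) and angle of linear polarization (AoLP) follow directly. The sketch below uses synthetic values from Malus' law (I(α) = I₀·cos²α for fully polarized light at 0°); it illustrates the standard formulas, not any particular camera's processing.

```python
import numpy as np

# Linear Stokes parameters from four analyzer orientations (0°, 45°, 90°, 135°),
# the layout used by on-sensor polarizer arrays. Input data here are synthetic.
def linear_stokes(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)    # total intensity
    s1 = i0 - i90                          # horizontal vs vertical preference
    s2 = i45 - i135                        # +45° vs -45° preference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree of linear pol.
    aolp = 0.5 * np.arctan2(s2, s1)                        # angle of linear pol.
    return s0, dolp, aolp

# Fully linearly polarized light at 0°, via Malus' law I(alpha) = I0 * cos²(alpha):
s0, dolp, aolp = linear_stokes(1.0, 0.5, 0.0, 0.5)
```

A DoLP of 1 marks fully polarized light and 0 unpolarized light; it is the DoLP and AoLP images, not the raw intensities, that make otherwise invisible stress, glare and material differences visible.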


What could be the reason? Troubleshooting machine vision applications


What do you do if nothing works as planned? Vision systems are becoming more and more complex and multi-layered. Problems can be difficult to classify because cause and symptom are often far apart. This lecture will show troubleshooting methods and explain how to avoid errors or recognize them at an early stage.