The MVTec experts have optimized a number of core technologies in HALCON 20.11.
DotCode reading
Support for a new 2D code type known as DotCode has been added. It is based on a matrix of dots and can be printed very quickly, which makes it especially suitable for high-speed applications, for instance in the tobacco industry.
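As an illustration, the following is a minimal sketch of how a DotCode symbol might be read through HALCON's generic 2D data code operators, using the Python interface introduced in this release. The module name halcon, the image file name, and the 'DotCode' symbol type string are assumptions for illustration rather than confirmed API details.

```python
# Hedged sketch: reading a DotCode symbol via HALCON's 2D data code operators.
# Module name, file name, and the 'DotCode' type string are assumptions.
import halcon as ha

image = ha.read_image('dotcode_sample.png')

# Create a 2D data code model for the DotCode symbology; other symbologies
# such as 'Data Matrix ECC 200' use the same operator set.
model = ha.create_data_code_2d_model('DotCode', [], [])

# Search the image and decode all symbols that are found.
symbol_xlds, result_handles, decoded_strings = ha.find_data_code_2d(
    image, model, [], [])
print(decoded_strings)
```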
OCR and deep learning
With another new feature called Deep OCR, MVTec introduces a holistic deep-learning-based approach to optical character recognition (OCR). Deep OCR localizes numbers and letters far more robustly than conventional approaches, regardless of their orientation, font type, and polarity. Its ability to group characters automatically allows whole words to be identified, which improves recognition performance significantly and avoids the misinterpretation of characters with similar appearances.
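A hedged sketch of what the Deep OCR workflow could look like in the Python interface is shown below; the operator names create_deep_ocr and apply_deep_ocr, the 'auto' mode, and the handling of the result dictionary are assumptions based on HALCON's deep-learning operator set and should be checked against the 20.11 reference.

```python
# Hedged sketch: localizing and reading words with Deep OCR.
# Operator names and parameters are assumptions, not confirmed signatures.
import halcon as ha

image = ha.read_image('label.png')

# Create a Deep OCR model with its default pretrained components.
ocr = ha.create_deep_ocr([], [])

# Detect and recognize words in one step; the result is returned as a
# dictionary handle whose keys can be inspected generically.
result = ha.apply_deep_ocr(image, ocr, 'auto')
print(ha.get_dict_param(result, 'keys', []))
```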
Improved user-friendliness and faster 3D matching
The core technology of shape-based matching was also optimized in HALCON 20.11. More parameters are now estimated automatically, which improves both user-friendliness and the matching rate in low-contrast and noisy situations. The new release also brings significant improvements in the 3D environment: edge-supported, surface-based 3D matching is now significantly faster for 3D scenes with many objects and edges, and usability has been improved by removing the need to set a viewpoint.
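To illustrate the automatic parameter estimation, the following sketch creates a classic shape model with 'auto' values via the Python interface; the ROI coordinates, image names, and the exact set of parameters that 20.11 estimates automatically are assumptions for illustration.

```python
# Hedged sketch: shape-based matching with automatically estimated parameters.
# Image names and ROI coordinates are placeholders.
import halcon as ha

image = ha.read_image('part.png')

# Restrict the template to a region of interest (here a fixed rectangle).
roi = ha.gen_rectangle1(100, 100, 300, 400)
template = ha.reduce_domain(image, roi)

# 'auto' lets HALCON estimate pyramid levels, angle step, contrast, etc.
model = ha.create_shape_model(
    template, 'auto', -0.39, 0.79, 'auto',
    'auto', 'use_polarity', 'auto', 'auto')

# Locate the model in a new image.
scene = ha.read_image('scene.png')
row, col, angle, score = ha.find_shape_model(
    scene, model, -0.39, 0.79, 0.5, 1, 0.5, 'least_squares', 0, 0.9)
```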
Improved functionality for developers
HALCON 20.11 makes things much easier not only for users but also for developers. A new language interface enables programmers who work with Python to seamlessly access HALCON's powerful operator set. In addition, the integrated development environment HDevelop has been given a facelift: it now offers more options for individual configuration, such as a modern window-docking concept, and themes are available to improve visual ergonomics and adapt HDevelop to personal preferences.
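A minimal sketch of how the new Python interface might be used is shown below; the package/module name halcon and the snake_case operator names reflect the Python binding as commonly documented, but the details should be verified against MVTec's official documentation.

```python
# Hedged sketch: calling HALCON operators from Python.
# Package name and exact API surface are assumptions.
import halcon as ha

# HALCON operators are exposed as ordinary Python functions.
image = ha.read_image('fabrik')             # example image shipped with HALCON
region = ha.threshold(image, 128, 255)      # segment bright pixels
blobs = ha.connection(region)               # split into connected components
area, row, col = ha.area_center(blobs)      # control tuples come back as lists
print(len(area), 'blobs found')
```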
Precise edge detection with deep learning
HALCON 20.11 includes a new and unique method for robustly extracting edges with the aid of deep learning. Especially for scenarios where a large number of edges are visible in an image, the deep-learning-based edge extraction function can be trained with only a few images to reliably extract only the desired edges. This greatly reduces the programming effort for processes of this type.
Out of the box, the pretrained network robustly detects edges in low-contrast and noisy situations, making it possible to extract edges that cannot be identified with conventional edge detection filters. In addition, "Pruning for Deep Learning" now enables users to subsequently optimize a fully trained deep learning network: they can control how the network is tuned with respect to speed, storage, and accuracy and thus adapt it precisely to application-specific requirements.
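As a rough illustration of how such a pretrained edge-extraction network could be applied through HALCON's generic deep learning operators, consider the sketch below; the model file name is hypothetical, and real applications would additionally preprocess the image (scaling, gray-value normalization) with HALCON's DL preprocessing procedures.

```python
# Hedged sketch: applying a pretrained deep-learning edge extractor via the
# generic DL operators. The model file name is hypothetical, and the
# preprocessing shown here is deliberately minimal.
import halcon as ha

# Load a pretrained edge-extraction model (hypothetical file name).
model = ha.read_dl_model('pretrained_dl_edge_extractor.hdl')

# Wrap the input image in a DL sample dictionary; production code would
# scale and normalize the image to the model's expected input first.
image = ha.read_image('workpiece.png')
sample = ha.create_dict()
ha.set_dict_object(image, sample, 'image')

# Run inference; the result batch contains the extracted edge information.
results = ha.apply_dl_model(model, [sample], [])
```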