HALCON 19.05 extends the deep learning functionality to a broader range of applications. Please note that this release is only available for the HALCON Progress edition.
Enhanced Object Detection
Users now have the option of aligning object detection boxes with the orientation of the object, which significantly improves the localization of the trained object classes.
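In HDevelop, this could look roughly as follows; the parameter name 'instance_type' and the value 'rectangle2' for oriented boxes reflect our reading of the deep learning API and should be verified against the HALCON operator reference:

```hdevelop
* Sketch: create a DL object detection model with oriented bounding boxes.
* The parameter name 'instance_type' and value 'rectangle2' are assumptions.
create_dict (DLModelDetectionParam)
set_dict_tuple (DLModelDetectionParam, 'instance_type', 'rectangle2')
* Backbone and number of classes are placeholders for this example.
create_dl_model_detection ('pretrained_dl_classifier_compact.hdl', 3, DLModelDetectionParam, DLModelHandle)
```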
Inference on Arm® Processors
Inference for all three deep learning technologies – image classification, object detection, and semantic segmentation – runs out-of-the-box on Arm® processors. As this removes the need for special components such as a powerful GPU or a desktop CPU, HALCON significantly broadens the range of possible deep learning applications. Execution times on Arm®-based platforms vary with model complexity and the type of hardware, but MVTec's benchmarks show that they are fast enough for many typical applications.
Shape-Based Matching
Shape-based matching is one of HALCON's most important core technologies and is considered one of the most powerful matching tools on the market. MVTec continuously improves this technology to widen its application area even further. With HALCON 19.05, users can, for example, explicitly define so-called "clutter" regions (marked above in orange): areas within a search model that must not contain any contours.
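A rough HDevelop sketch of the idea is shown below; the operator name set_shape_model_clutter and its argument order are assumptions based on this description and must be checked in the operator reference:

```hdevelop
* Sketch: create a shape model, then declare a clutter region that
* must stay free of contours in any accepted match.
read_image (ModelImage, 'model')
gen_rectangle1 (ModelROI, 100, 100, 300, 300)
reduce_domain (ModelImage, ModelROI, ModelTemplate)
create_shape_model (ModelTemplate, 'auto', 0, rad(360), 'auto', 'auto', 'use_polarity', 'auto', 'auto', ModelID)
* Hypothetical clutter region inside the model's reference frame.
gen_rectangle1 (ClutterRegion, 150, 150, 250, 250)
* Operator name and signature are assumptions - verify in the reference.
set_shape_model_clutter (ClutterRegion, ModelID, [], [])
```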
Surface-Based Matching
Edge-supported surface-based matching is now more robust against noisy point clouds: users can control the influence of surface and edge information via separate minimum scores. Additionally, if no XYZ images are available, a new parameter allows switching off 3D edge alignment entirely. This lets users eliminate the influence of insufficient 3D data on the matching result while keeping the valuable 2D information for surface and 2D edge alignment.
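One plausible mapping of this onto find_surface_model is sketched below; passing a tuple of minimum scores and the generic parameter name 'use_3d_edges' are assumptions derived from this description, not confirmed API details:

```hdevelop
* Sketch: separate minimum scores for surface and 3D edge information.
* The tuple form of MinScore is an assumption - verify in the reference.
find_surface_model (SurfaceModelID, ObjectModel3D, 0.05, 0.2, [0.3, 0.15], 'false', [], [], Pose, Score, SurfaceMatchingResultID)
* If no usable XYZ images exist, 3D edge alignment could be switched off
* entirely; the parameter name 'use_3d_edges' is a placeholder.
find_surface_model (SurfaceModelID, ObjectModel3D, 0.05, 0.2, 0.3, 'false', 'use_3d_edges', 'false', Pose, Score, SurfaceMatchingResultID)
```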
Speedups
Various operators in HALCON have been sped up. For example, depending on image type and settings, affine_trans_image is now up to 230% faster on AVX2 processors. Furthermore, polar_trans_image_ext runs up to 160% faster, depending on the interpolation method.