MVTec Deep Learning Tool

The easy way into Deep Learning

Labelling training data is the first crucial step towards any deep learning application. The quality of this labelled data plays a major role when it comes to the application's performance, accuracy, and robustness.

With the Deep Learning Tool, you can easily label your data thanks to the intuitive user interface – without any programming knowledge. This data can be seamlessly integrated into HALCON and MERLIC to perform deep-learning-based object detection, classification, and semantic/instance segmentation. For classification projects, you can also train and evaluate your model directly in the Deep Learning Tool.

The Deep Learning Tool is available for free download on the MVTec website.

The Deep Learning Tool offers

  • A fast path to the complete Deep Learning solution
  • An intuitive user interface
  • Active support for the optimisation of the trained networks
  • Easy integration into the MVTec portfolio
  • Full control over your own data

What's new in the Deep Learning Tool

  • Supports the entire workflow for labelling, training, and evaluating models for HALCON's Global Context Anomaly Detection

  • A new model for Global Context Anomaly Detection with impressive improvements in memory consumption and inference time

  • In the Evaluation table on the Evaluation > Overview page, the calculation of precision, recall, and F1-score has been improved.

  • When starting the Deep Learning Tool for the first time, users are now informed that the DLT connects to an MVTec server to check for available updates and news.

  • In classification projects, it is no longer possible to manually assign unlabelled images to a dataset split, because only labelled images can be used for training, validation, or evaluation.

  • After creating a project, the chosen method is now stored persistently. When the next project is created, the method type of the last project is preselected.

  • It is now possible to apply a quick filter based on the good/anomaly state of an image's label class.

  • New augmentation parameters for contrast and saturation variation have been added that are supported by HALCON 22.05. Note that training results might differ slightly from previous versions.

  • The filter criterion Image State has been added, for example, to quickly find images with broken paths. Note that this may take some time for large datasets.

  • The aspect ratio of labels can now be fixed during resize operations by pressing the Shift key.

  • The DLT is now based on HALCON 22.05 Progress.

  • On the Evaluation page, a context menu now allows navigating to the image on the other pages or in Windows Explorer.

  • On the Image page, it is now possible to show the bounding box for all labels.

  • Exporting datasets has been improved. Now, users can also choose whether and where to save a copy of the images belonging to the export. Further, it is possible to open File Explorer at the location of the export.

  • It is now possible to assign shortcuts manually via the edit label class dialogue.

  • It is now possible to change the opacity of label regions on the Image page for all project types.

  • A warning is now shown when the license will expire within 100 days.

  • The metadata dictionary of the DL model now stores additional information about the training (trained with DLT, used DLT version, used DL device, required time).

  • The Image page has been enhanced with an additional zoom step between 1:1 and 1:2. Now, zooming should work more smoothly.

  • When exporting semantic segmentation projects to an HDICT dataset file it is now possible to change the suffix of the label image filenames.

  • There is now an option to remove projects from the Recent Projects section on the Project page. The projects' context and dot menus have been extended accordingly.

  • After deleting a split and undoing this change, the sorting is now more consistent.

  • After resetting a training, the Settings tab card is now visible instead of the empty Results tab card, so that the user can easily adapt the training parameters.

  • From this version on, the DLT supports labelling for Deep OCR scenarios to further improve the Deep OCR models provided by HALCON.

  • Example projects for all supported deep learning methods.

  • Adjust the displayed contrast and brightness of images to ease the labelling and assessment of difficult images.

  • Undo and redo function allowing you to revert any operation that modifies the project.

  • Scale or rotate label regions or components in segmentation projects.

  • Set a project-wide (absolute or relative) image base path for the current project.

  • The Statistics dialogue has been improved for object detection and segmentation projects.

  • The speed of the dictionary export has been improved significantly for large datasets with many images and labels.

  • The DLT is now based on HALCON 20.11.2 Steady.

  • Convert bounding boxes imported from a HALCON Dictionary into polygon or mask regions.

  • Supports DirectX11 on Windows more reliably. To force software rendering in case of problems with the display, set the environment variable QSG_RHI_PREFER_SOFTWARE_RENDERER=1.

  • The help pages now offer the possibility to switch between the available languages (English, Chinese, Japanese).

  • The SOM package of the DLT has been split into four packages. Downloading and installing NVIDIA GPU support and example projects are now optional.

Working with the Deep Learning Tool

    Retraining has been shown to further increase the recognition rate of Deep OCR, making it the leading industrial OCR technology. The Deep Learning Tool supports the automatic recognition of labelled words, making it efficient to label even large datasets.

    With object detection, labelling is done by drawing rectangles around each relevant object and assigning these rectangles to the corresponding classes. Depending on the project requirements, users can label their data with either axis-parallel or oriented rectangles.
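The two rectangle types described above can be modelled as follows. This is an illustrative sketch of the geometry only, not the Deep Learning Tool's actual data format; the class and field names are hypothetical:

```python
from dataclasses import dataclass
import math


@dataclass
class AxisAlignedBox:
    """Axis-parallel rectangle given by two opposite corners."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float


@dataclass
class OrientedBox:
    """Oriented rectangle: center point, half-sizes, and rotation angle (radians)."""
    cx: float
    cy: float
    half_w: float
    half_h: float
    angle: float

    def corners(self) -> list[tuple[float, float]]:
        """Return the four corner points after rotating around the center."""
        c, s = math.cos(self.angle), math.sin(self.angle)
        offsets = [(-self.half_w, -self.half_h), (self.half_w, -self.half_h),
                   (self.half_w, self.half_h), (-self.half_w, self.half_h)]
        return [(self.cx + dx * c - dy * s, self.cy + dx * s + dy * c)
                for dx, dy in offsets]
```

With an angle of zero, an oriented box degenerates to the axis-parallel case, which is why tools typically treat the axis-parallel rectangle as the simpler default.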

    Labelling for classification is done by simply importing the images and assigning them to a class. If the images are stored in appropriately named folders, they can also be labelled automatically during import.

    Users can set all important parameters and perform training based on their labelled data.

    Users can evaluate and compare their trained networks directly in the tool. The evaluation section provides information on model accuracy, including a heatmap for the predicted classes of all processed images, as well as an interactive confusion matrix to help detect misclassifications. Users can also calculate the estimated inference time per image and export the evaluation results as a single HTML page for documentation purposes.
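For orientation, per-class precision, recall, and F1-score can be derived from a confusion matrix such as the interactive one shown on the Evaluation page. The sketch below uses the standard textbook definitions and is not necessarily the tool's exact calculation:

```python
def per_class_metrics(confusion: list[list[int]]) -> list[dict[str, float]]:
    """Compute precision, recall, and F1 per class from a square confusion
    matrix where confusion[true][pred] counts samples (standard definitions)."""
    n = len(confusion)
    metrics = []
    for c in range(n):
        tp = confusion[c][c]
        fp = sum(confusion[r][c] for r in range(n)) - tp  # predicted as c, but another class
        fn = sum(confusion[c]) - tp                       # class c, predicted as something else
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics.append({"precision": precision, "recall": recall, "f1": f1})
    return metrics
```

Reading the matrix row-by-row as true classes and column-by-column as predictions makes misclassifications (the off-diagonal cells) easy to spot, which is exactly what the interactive confusion matrix is for.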

Seamless integration into the MVTec product portfolio

The Deep Learning Tool seamlessly integrates into the MVTec product portfolio with HALCON and MERLIC and serves as the core of your Deep Learning application.

Acquire your images and preprocess them with HALCON or MERLIC if necessary. After labelling, training, and evaluation in the Deep Learning Tool, deploy your trained network in the respective runtime environment.