Machine Vision comprises scenarios in which the operational guidance of other devices is based on knowledge extracted from images. A machine vision system includes hardware (acquisition devices, lighting) and software (computer vision modules, communication software). In an industrial environment, the choice of the acquisition device and of the illumination apparatus has a great impact on the quality of the captured images, which in turn affects the performance of the algorithms. The software typically includes modules to drive the acquisition, check the data, process and analyse it, and communicate the results to a supervisor module or to actuators. In autonomous robotics, algorithms should be robust to illumination changes and to unexpected situations. In machine vision, real-time operation is usually a requirement.
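The module structure described above (drive the acquisition, check the data, analyse it, report the result) can be sketched as a minimal pipeline. This is an illustrative skeleton, not a real system: the function names are hypothetical, and acquisition is simulated with a synthetic image instead of a camera driver.

```python
import numpy as np

def acquire():
    """Simulate image acquisition; a real system would drive a camera here."""
    rng = np.random.default_rng(0)
    img = rng.integers(0, 50, size=(64, 64)).astype(np.uint8)
    img[20:30, 20:30] = 220  # a bright synthetic object
    return img

def check(img):
    """Basic data check: reject badly over- or under-exposed frames."""
    return 5 < img.mean() < 250

def analyse(img, thresh=128):
    """Toy analysis step: count pixels belonging to bright objects."""
    return int((img > thresh).sum())

def report(result):
    """Hand the result to a supervisor module or actuator (here: a dict)."""
    return {"bright_pixels": result, "ok": result > 0}

img = acquire()
out = report(analyse(img)) if check(img) else None
```

In a deployed system each stage would typically run under real-time constraints, and the check stage is what lets the pipeline discard frames corrupted by lighting or acquisition faults before they reach the analysis modules.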
Applications include guidance for autonomous navigation, guidance for robotic manipulation, industrial visual inspection for defect or damage detection, and process control.
Vision guidance systems aim to capture the three-dimensional location of objects and communicate it to a robotic system. To interact with objects, a manipulator robot needs to know where they are located and how they are oriented. To interact with the environment, a mobile robot has to recognize obstacles and landmarks for self-localization and navigation.
Determine the type and location of three-dimensional objects placed on a plane that the robot must pick up and move elsewhere.
Determine the first available free point where the robot can place a certain object.
Learn the associations between images and motor actions on a moving platform, in order to achieve autonomous navigation and obstacle avoidance.
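For the second task above, finding the first available free point where an object can be placed can be reduced, in its simplest form, to a row-major scan of an occupancy grid derived from the scene. This is a minimal sketch under that assumption; the function name and the grid representation are illustrative, not the group's actual method.

```python
import numpy as np

def first_free_spot(occupancy, h, w):
    """Scan the occupancy grid row-major and return the top-left cell of
    the first h x w free region where an object could be placed, or None."""
    H, W = occupancy.shape
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            if not occupancy[r:r + h, c:c + w].any():
                return (r, c)
    return None

grid = np.zeros((6, 8), dtype=bool)
grid[0:3, 0:5] = True               # cells already occupied by other objects
spot = first_free_spot(grid, 2, 3)  # need a 2x3 free region
```

A real system would build the occupancy grid from the detected object poses and would also account for the gripper footprint and placement margins.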
Visual inspection is the part of machine vision in which products are automatically inspected in order to find defects or evaluate their quality, so that they can be handled appropriately. Although deep learning is changing the field of automated inspection in manufacturing, for specific tasks traditional solutions are still preferred, or preferable, in the sense that explicit models are deployed instead of learned ones. It is often necessary to combine several techniques to solve the problem at hand: each visual inspection problem tends to be specific and quite different from the others.
Describe the wood vein patterns and determine the presence of cracks or weak points, in order to optimize the production of valuable wood objects.
Localize damage on a product depicted in a user's photos and then retrieve similar damage cases from a dedicated dataset to counter fraud.
Check whether the pins of a frame on which a robot has to hang objects are in good condition.
Detect and classify anomalies on the metal surfaces of high-precision mechanical components, where defects can be small scratches, scuffs, or dirt on highly reflective curved surfaces.
Check food quality before sale by visually analyzing macroscopic properties such as color, shape and texture, to provide a quality assessment that includes aesthetic characteristics.
Assess the quality of the cutting of porphyry slabs into strips and indicate whether a strip has broken into several parts.
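Many of the surface-inspection tasks above can be approached, in the traditional explicit-model style mentioned earlier, by flagging pixels whose intensity deviates strongly from the statistics of a defect-free reference. The sketch below illustrates this with a simple global z-score test on synthetic data; the function name, the threshold, and the data are all illustrative assumptions, not any deployed inspection algorithm.

```python
import numpy as np

def find_anomalies(surface, reference, k=4.0):
    """Flag pixels deviating more than k standard deviations from the
    intensity statistics of a defect-free reference patch."""
    mu, sigma = reference.mean(), reference.std()
    return np.abs(surface - mu) > k * sigma

rng = np.random.default_rng(1)
reference = rng.normal(100.0, 2.0, size=(64, 64))  # defect-free sample
surface = rng.normal(100.0, 2.0, size=(64, 64))
surface[10:13, 10:13] = 150.0                      # a small synthetic scratch
mask = find_anomalies(surface, reference)
```

On reflective or textured surfaces a global threshold is usually too crude: practical systems replace it with local statistics, template comparison, or learned models, but the underlying idea of measuring deviation from an expected appearance stays the same.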
Y. Zhou, G. Mei, Y. Wang, F. Poiesi and Y. Wan. Attentive Multimodal Fusion for Optical and Scene Flow. IEEE Robotics and Automation Letters, 2023 [arXiv]
Y. Zhou, Y. Wang, F. Poiesi, Q. Qin and Y. Wan. Loop closure detection using local 3D deep descriptors. IEEE Robotics and Automation Letters, 2022 [doi]
D. Boscaini, F. Poiesi, S. Messelodi, A. Younes and D. A. Grande. Localisation of defects in volumetric Computed Tomography scans of valuable wood logs. 1st Int. Workshop on Industrial Machine Learning, 25th International Conference on Pattern Recognition (ICPR), Milano, Italy, 2021 [pdf]
F. Cermelli, M. Mancini, E. Ricci and B. Caputo. The RGB-D Triathlon: Towards Agile Visual Toolboxes for Robots. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, November 2019
L. Porzi, S. Rota Bulò, A. Penate-Sanchez, E. Ricci and F. Moreno-Noguer. Learning Depth-aware Deep Representations for Robotic Perception. IEEE Robotics and Automation Letters, 2(2):468-475, 2017