Partner Spotlight: Politecnico di Torino Advances Real-Time Hand Configuration Recognition

Innovation in digital transformation often stems from cross-disciplinary research. While our project focuses on the industrial application of digital twins, our partners at Politecnico di Torino are pushing the boundaries of machine vision and human-machine interaction through their latest research in Sign Language recognition.

Breaking Communication Barriers with RGB-D Sensing

In a recently published study in Sensors, researchers from the Department of Management and Production Engineering (DIGEP) and the Biomedical Engineering department developed a framework for the real-time recognition of hand configurations in Italian Sign Language (LIS).

The research addresses a critical component of sign language: the “chereme,” the handshape component of a sign, analogous to a phoneme in spoken language. By focusing on these configurations as a standalone classification task, the team has created a more efficient way for computers to “read” hand shapes without needing to process a full, complex temporal gesture.

How it Works: From Landmarks to Machine Learning

The team’s approach combines low-cost RGB-D cameras (which capture both colour and depth information) with a lightweight processing pipeline:

  • MediaPipe Integration: The system extracts 21 3D hand landmarks per frame in real time.
  • Geometric Feature Extraction: These landmarks are converted into 3D geometric features, such as the distances between fingertips and the palm, which are then normalised to account for different hand sizes.
  • SVM Classification: A Support Vector Machine (SVM) classifier then identifies the specific configuration.
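To make the feature-extraction step concrete, here is a minimal sketch of scale-invariant geometric features computed from hand landmarks. It assumes MediaPipe Hands’ standard 21-landmark indexing (wrist at index 0, fingertips at 4, 8, 12, 16, 20); the specific choice of fingertip-to-wrist distances normalised by a wrist-to-middle-knuckle palm length is illustrative, not the authors’ exact feature set.

```python
import math

# MediaPipe Hands landmark indices (standard 21-point convention)
WRIST = 0
FINGERTIPS = [4, 8, 12, 16, 20]   # thumb, index, middle, ring, pinky tips
MIDDLE_MCP = 9                    # base knuckle of the middle finger

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def extract_features(landmarks):
    """Fingertip-to-wrist distances, divided by a palm-length proxy
    (wrist to middle-finger MCP) so the features do not depend on
    hand size or distance from the camera."""
    palm_size = dist(landmarks[WRIST], landmarks[MIDDLE_MCP])
    return [dist(landmarks[tip], landmarks[WRIST]) / palm_size
            for tip in FINGERTIPS]
```

A feature vector like this can then be fed to any off-the-shelf classifier (e.g. an SVM); because the distances are normalised, a small hand close to the camera and a large hand far from it produce the same features for the same configuration.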

The results are impressive, with the system achieving an accuracy of 96.8% across 24 distinct LIS hand configurations.

Beyond the Lab: Potential Applications

While the primary goal is to aid communication for the deaf community, this type of high-precision hand tracking has significant implications for our work in Smart Manufacturing. The ability for a system to accurately recognise complex hand configurations in real-time is essential for:

  • Advanced Human-Robot Collaboration: Allowing technicians to control industrial cobots through precise hand gestures.
  • Immersive Training: Enhancing Virtual Reality (VR) simulations where precise hand interactions are required for virtual assembly or maintenance.

We congratulate the team at Politecnico di Torino – including Luca Ulrich, Giorgia Marullo, and Enrico Vezzetti – on this contribution to the field of machine vision. This research underscores the technical excellence within our project consortium and the diverse ways in which digital modelling is shaping our future.


Full Publication: A 3D Camera-Based Approach for Real-Time Hand Configuration Recognition in Italian Sign Language, published in Sensors (2026).
