Abstract:
Sign language recognition is of great significance for bridging the hearing/speech-impaired and non-signing communities. Compared with isolated word recognition, sentence recognition is more practical in real-world scenarios, but it is also more challenging: continuous, high-quality sign data with distinct features must be collected, and the isolated signs within a sentence must be identified with high accuracy. Here, we propose a wearable sign language recognition system enabled by a convolutional neural network (CNN) that integrates stretchable strain sensors and inertial measurement units attached to the body to perceive hand postures and movement trajectories. Forty-eight Chinese sign language words commonly used in daily life were collected and used to train the CNN model, achieving an isolated word recognition accuracy of 95.85%. For sentence-level recognition, we propose a method that combines multiple sliding windows with correlation analysis to improve the CNN's recognition performance, achieving a correct rate of 84% on 50 sign language sentence samples and demonstrating good extensibility.
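The multi-window approach described above can be illustrated with a minimal sketch. All names here (`sliding_windows`, `toy_classifier`, the window lengths and stride) are hypothetical placeholders, not the paper's actual implementation: several window sizes slide over the continuous sensor stream, each window is classified, and the per-window class probabilities are fused by averaging (a simple stand-in for the paper's correlation-based combination).

```python
import numpy as np

def sliding_windows(signal, win, stride):
    """Yield (start, window) pairs of fixed length over a (T x C) sensor signal."""
    for start in range(0, len(signal) - win + 1, stride):
        yield start, signal[start:start + win]

# Toy stand-in for the trained CNN: returns a softmax-like distribution.
rng = np.random.default_rng(0)
n_classes = 48                               # 48 isolated sign words, as in the paper

def toy_classifier(window):
    logits = rng.normal(size=n_classes)
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Simulated continuous recording: 200 time steps, 6 sensor channels (assumed).
signal = rng.normal(size=(200, 6))

all_probs = []
for win in (40, 60, 80):                     # multiple sliding-window lengths
    for _, w in sliding_windows(signal, win, stride=20):
        all_probs.append(toy_classifier(w))

# Late fusion: average the class distributions from all windows.
fused = np.mean(np.stack(all_probs), axis=0)
predicted_word = int(np.argmax(fused))
print(fused.shape, predicted_word)
```

In practice the fusion step would use the paper's correlation analysis between window-level predictions to locate sign boundaries, rather than a flat average; the sketch only shows the overall data flow.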