Wi-Fi SIMO Radar for Deep Learning-Based Sign Language Recognition
This study focuses on leveraging Wi-Fi signals for sign language recognition, employing an advanced passive radar system based on an injection-locked quadrature receiver (ILQR). Configured in a single-input multiple-output (SIMO) setup, the ILQR-based Wi-Fi radar detects the 3-D motions of the two hands involved in signing, utilizing 2.4 GHz Wi-Fi signals. In processing the experimental data, several pairs of baseband I- and Q-channel signals are sampled to generate multiple output time series. These series are subsequently transformed into images using the Gramian angular field (GAF) method for deep learning applications. The resulting images capture the temporal and spatial information of the moving hands and aid feature extraction while suppressing noise interference. A deep learning model, combining a convolutional neural network (CNN) and a long short-term memory (LSTM) network, is employed to extract and learn features from 10,000 labeled samples, achieving a classification accuracy exceeding 90% for 10 Chinese sign language gestures.
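As a minimal sketch of the GAF encoding step described above, the snippet below converts a single I- or Q-channel time series into a Gramian angular field image using the standard polar-coordinate formulation. It is not the authors' implementation; the function name and the choice between the summation (GASF) and difference (GADF) variants are illustrative assumptions.

```python
import numpy as np

def gramian_angular_field(x, method="summation"):
    """Encode a 1-D time series as a GAF image (sketch of the standard method)."""
    x = np.asarray(x, dtype=float)
    # Rescale the series to [-1, 1] so that arccos is well defined.
    x_min, x_max = x.min(), x.max()
    x_scaled = 2.0 * (x - x_min) / (x_max - x_min) - 1.0
    x_scaled = np.clip(x_scaled, -1.0, 1.0)
    phi = np.arccos(x_scaled)  # polar-angle representation of each sample
    if method == "summation":
        # GASF: pairwise cos(phi_i + phi_j)
        return np.cos(phi[:, None] + phi[None, :])
    # GADF: pairwise sin(phi_i - phi_j)
    return np.sin(phi[:, None] - phi[None, :])
```

The combined CNN-LSTM classifier could be organized as in the following Keras sketch, in which a time-distributed CNN extracts spatial features from a sequence of GAF frames and an LSTM models their temporal evolution before a 10-way softmax. The sequence length, image size, filter counts, and layer sizes are assumptions for illustration, not the parameters reported in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(seq_len=8, img_size=64, n_channels=1, n_classes=10):
    # Hypothetical input: seq_len GAF frames of size img_size x img_size per gesture.
    inputs = layers.Input(shape=(seq_len, img_size, img_size, n_channels))
    # CNN applied to each frame independently (spatial features).
    x = layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu", padding="same"))(inputs)
    x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)
    x = layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu", padding="same"))(x)
    x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)
    x = layers.TimeDistributed(layers.Flatten())(x)
    # LSTM aggregates the per-frame features over time (temporal features).
    x = layers.LSTM(64)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```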