Gesture recognition (GR) aims to interpret human gestures and has an impact on a number of different application fields. This Special Issue is devoted to describing and examining up-to-date technologies for measuring gestures, algorithms for interpreting the data, and applications related to GR. The measurement technologies involve camera-based systems (e.g., ground truth systems, GTSs; the Azure Kinect), wearable sensors (e.g., inertial measurement units, IMUs; micro-electromechanical systems, MEMS; angular displacement sensors, ADSs; resistive flex sensors, RFSs), electromagnetic field measurements (e.g., the Leap Motion sensor), acoustic inputs (e.g., microphones, stethoscopes), radar systems (e.g., continuous-wave radar), and tactile sensors (e.g., pressure-sensitive transistors). Data interpretation is detailed by means of classifiers (e.g., neural networks, NNs; convolutional neural networks, CNNs; hidden Markov models, HMMs; and k-nearest neighbors, kNN). The applications cover medical purposes (e.g., physiotherapy solutions, the assessment of Parkinson's disease, and electrocardiogram detection), social inclusion (e.g., British, American, and Italian sign language recognition), sport activity scoring (e.g., taekwondo), machine interaction (e.g., controlling a holographic display), and safety (e.g., drowsiness recognition). This Special Issue is addressed to all researchers, professionals, and designers interested in GR, and to all users driven by curiosity and passion. The Guest Editors would like to acknowledge and express their gratitude to all of the authors involved.