Distance features are highly discriminative for recognizing facial emotion, and in affective computing, identifying accurate landmarks is both critical and difficult. An appearance-based model detects salient landmarks on the human face, and these landmarks form a grid. Distances between pairs of landmark points are computed within this grid and normalized to yield the distance signature. The shape signature is formed by normalizing measurements over the triangles that can be constructed within the grid. Texture characteristics around the landmark points are equally important for facial expression recognition: the appearance-based model locates the effective landmarks, the corresponding texture regions are extracted from the face image, and their texture features are computed with the Local Binary Pattern (LBP) and normalized to produce the texture signature. We also introduce the novel concept of stability indices, computed from each normalized distance, shape, and texture signature, which turn out to play an important role in facial expression recognition. To supplement the feature set, statistical measures such as range, moments, skewness, kurtosis, and entropy are computed from each individual distance, shape, and texture signature.
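To illustrate these features, the following minimal sketch computes a normalized distance signature, a triangle-based shape signature, an LBP texture signature, and the supplementary statistics from a set of detected landmarks. It is only a sketch: the number of landmarks, the patch size, the LBP parameters, and the use of triangle area as the shape measurement are assumptions for illustration, not the exact settings of the proposed method.

```python
import numpy as np
from itertools import combinations
from scipy.stats import skew, kurtosis, entropy
from skimage.feature import local_binary_pattern


def distance_signature(landmarks):
    """Pairwise distances within the landmark grid, normalized by the maximum distance."""
    pairs = combinations(range(len(landmarks)), 2)
    d = np.array([np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in pairs])
    return d / d.max()


def shape_signature(landmarks):
    """Normalized areas of the triangles formed by landmark triples (area is an assumed shape measure)."""
    areas = []
    for i, j, k in combinations(range(len(landmarks)), 3):
        a, b, c = landmarks[i], landmarks[j], landmarks[k]
        areas.append(0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])))
    areas = np.array(areas)
    return areas / areas.max()


def texture_signature(gray_face, landmarks, patch=8):
    """Uniform-LBP histograms of patches around each landmark, concatenated and normalized."""
    hists = []
    for x, y in landmarks.astype(int):
        region = gray_face[y - patch:y + patch, x - patch:x + patch]
        lbp = local_binary_pattern(region, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        hists.append(hist)
    return np.concatenate(hists)


def supplementary_stats(signature):
    """Range, second central moment, skewness, kurtosis, and entropy of a signature vector."""
    hist, _ = np.histogram(signature, bins=32, density=True)
    return np.array([
        signature.max() - signature.min(),
        np.mean((signature - signature.mean()) ** 2),
        skew(signature),
        kurtosis(signature),
        entropy(hist + 1e-12),
    ])


# Example with synthetic data standing in for a detected face and its landmarks.
face = (np.random.rand(128, 128) * 255).astype(np.uint8)
pts = np.random.uniform(16, 112, size=(20, 2))   # 20 landmarks; a real detector may return more
features = np.concatenate([
    distance_signature(pts),
    shape_signature(pts),
    texture_signature(face, pts),
    supplementary_stats(distance_signature(pts)),
])
```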
The enhanced distance signature feature set is fed into a Multilayer Perceptron (MLP) to classify expressions into categories such as anger, sadness, fear, disgust, surprise, and happiness. The proposed system is trained and tested on four benchmark datasets: Extended Cohn-Kanade (CK+), JAFFE, MMI, and MUG. To categorize the expressions, the shape signature feature set is likewise fed into an MLP and a Nonlinear AutoRegressive with eXogenous inputs (NARX) network, and on all four databases the system outperforms other state-of-the-art methods. The texture signature feature set is used as input to the NARX network for recognizing facial expressions on the same benchmark datasets, and the results confirm the effectiveness of the proposed procedure.
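A minimal classification sketch is given below, using scikit-learn's MLPClassifier as a stand-in for the MLP described here (the NARX network has no scikit-learn counterpart and is omitted). The hidden-layer size, the train/test split, and the synthetic feature matrix are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

EXPRESSIONS = ["anger", "sadness", "fear", "disgust", "surprise", "happiness"]

# Synthetic feature matrix standing in for the enhanced distance signatures
# extracted from a benchmark dataset such as CK+, JAFFE, MMI, or MUG.
rng = np.random.default_rng(0)
X = rng.random((300, 190))             # one signature vector per face image
y = rng.choice(EXPRESSIONS, size=300)  # one expression label per image

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

mlp = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, mlp.predict(X_test)))
```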
After recognizing expressions with the individual signature features, we investigate the combined distance-shape (D-S), distance-texture (D-T), and shape-texture (S-T) signature features. The combined D-S feature set is fed into an MLP to classify the expressions on the four databases, and the experiments establish its superiority over existing competitors. The combined D-T signature outperforms the distance and texture signatures used separately, and its highly encouraging performance relative to existing methods demonstrates the effectiveness of the proposed technique. In addition, the combined S-T features are fed into an MLP and a Deep Belief Network (DBN) to classify expressions on the CK+, JAFFE, MMI, MUG, and Wild face benchmark databases; extensive testing shows that the proposed methodology outperforms existing competitors.
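The pairwise fusion amounts to feature concatenation before classification; a short sketch follows, with vector lengths chosen arbitrarily for illustration rather than taken from the proposed method.

```python
import numpy as np

# Per-image signatures (lengths are illustrative placeholders).
d_sig = np.random.rand(190)    # distance signature
s_sig = np.random.rand(1140)   # shape signature
t_sig = np.random.rand(200)    # texture signature

ds_feature = np.concatenate([d_sig, s_sig])   # D-S combined feature
dt_feature = np.concatenate([d_sig, t_sig])   # D-T combined feature
st_feature = np.concatenate([s_sig, t_sig])   # S-T combined feature
```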
Finally, the distance, shape, and texture signatures are combined into a distance-shape-texture trio feature for recognizing facial expressions. The experimental results show that the trio signature also achieves a promising recognition rate compared with other existing methods.