Face emotion recognition using artificial intelligence techniques
Abstract
Recently, there has been tremendous growth in the area of Human-Computer Interaction (HCI). Many HCI applications have been documented, and among them Face Emotion Recognition (FER) is one of the best-known areas. Seven face emotions are considered universal in FER research: happy, sad, angry, fear, surprise, disgust and neutral. FER finds applications in hospitals and homes (for senior citizens, bedridden persons and severely injured patients) and in analyzing personal emotional psychology. Many approaches and methods have been proposed to build a good recognition system. However, such systems fail for various reasons, one of which is that facial features change with age, skin color, mental state and individual facial expression.
In this research, the focus is on personalized face emotion, and existing studies are extended toward better emotion recognition. FER is carried out in two parts: an image processing part and a classification part. The first part investigates a set of image processing methods suitable for recognizing face emotions. The acquired images undergo a few preprocessing steps. Edge detection has to succeed even when the lighting intensity is uneven, so, to overcome this difficulty, the histogram-equalized image is split into two regions of interest (ROI): the eye region and the lip region. The same preprocessing methods are applied to both regions, but with different threshold values. The human eye and lip contours are found to be close to ellipses.
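As an illustration only, the following Python/OpenCV sketch outlines a preprocessing chain of this kind; the image file name, ROI coordinates and threshold values are placeholders, not the settings used in this work.

    # Illustrative preprocessing sketch; ROI boxes and thresholds are placeholders.
    import cv2

    img = cv2.imread("face.jpg")                      # acquired face image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # convert to grayscale
    eq = cv2.equalizeHist(gray)                       # histogram equalization against uneven lighting

    # Split the equalized image into two regions of interest (ROI).
    eye_roi = eq[80:140, 60:200]                      # hypothetical eye-region bounding box
    lip_roi = eq[200:260, 90:170]                     # hypothetical lip-region bounding box

    # Same preprocessing chain on both ROIs, but with different threshold values.
    _, eye_bin = cv2.threshold(eye_roi, 70, 255, cv2.THRESH_BINARY)
    _, lip_bin = cv2.threshold(lip_roi, 110, 255, cv2.THRESH_BINARY)

    # Edge detection on each binarized ROI.
    eye_edges = cv2.Canny(eye_bin, 50, 150)
    lip_edges = cv2.Canny(lip_bin, 50, 150)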
With the objective of capturing changes in the eye and lip areas, a set of new ellipse fitness functions is proposed. The fitness functions track changes in the minor axes of the eye and lip images and are used by a genetic algorithm (GA) to find optimized minor-axis values. Three fitness functions are developed: one for the eye and two for the lip (top and bottom lip). These fitness functions are applied to eye and lip images of South East Asian (SEA), Japanese and Chinese subjects. Observing various emotions of the three subjects reveals characteristics unique to the eye and lip of each. The optimized values yield the ratios of the minor axes with respect to the neutral emotion for the SEA, Japanese and Chinese subjects. From the optimized data, it is found that there is no common pattern for recognizing emotions across the three subjects. The absence of a common pattern motivates studies of emotion recognition personalized to a particular ethnicity.
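The exact forms of the three fitness functions are not reproduced here. The sketch below is one plausible formulation, under the assumption that fitness rewards an ellipse of fixed major axis a and candidate minor axis b, centred on the ROI, for covering foreground pixels of the binarized eye or lip region while penalizing empty ellipse area; a simple real-coded GA then searches for the optimized minor axis.

    # Illustrative GA search for the minor axis of a fitted ellipse.
    # The fitness form below is an assumption, not the thesis' fitness functions.
    import numpy as np

    rng = np.random.default_rng(0)

    def ellipse_mask(shape, a, b):
        """Binary mask of an ellipse centred in an image of the given shape."""
        h, w = shape
        y, x = np.ogrid[:h, :w]
        cy, cx = h / 2.0, w / 2.0
        return ((x - cx) ** 2 / a ** 2 + (y - cy) ** 2 / b ** 2) <= 1.0

    def fitness(b, roi_bin, a):
        """Hypothetical fitness: reward covered foreground, penalize empty ellipse area."""
        mask = ellipse_mask(roi_bin.shape, a, b)
        covered = np.logical_and(mask, roi_bin > 0).sum()
        wasted = np.logical_and(mask, roi_bin == 0).sum()
        return covered - 0.5 * wasted

    def ga_minor_axis(roi_bin, a, pop_size=30, generations=50, mut_sigma=1.0):
        """Simple real-coded GA over candidate minor-axis values b."""
        h = roi_bin.shape[0]
        pop = rng.uniform(1.0, h / 2.0, size=pop_size)          # initial candidates
        for _ in range(generations):
            scores = np.array([fitness(b, roi_bin, a) for b in pop])
            parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                p1, p2 = rng.choice(parents, 2)
                child = 0.5 * (p1 + p2) + rng.normal(0, mut_sigma)     # crossover + mutation
                children.append(np.clip(child, 1.0, h / 2.0))
            pop = np.concatenate([parents, np.array(children)])
        scores = np.array([fitness(b, roi_bin, a) for b in pop])
        return pop[np.argmax(scores)]                            # optimized minor axis

    # Example: recover the minor axis of a synthetic elliptical blob.
    demo = ellipse_mask((60, 120), a=50, b=18).astype(np.uint8) * 255
    print(ga_minor_axis(demo, a=50))   # should be close to 18

The ratio of the minor axis optimized for a given emotion to that of the neutral face is then the quantity compared across emotions and subjects.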
To study personalized face emotion recognition, the developed fitness functions are applied to two SEA subjects. However, it is found that the range of some emotions overlaps with those of others. To circumvent this problem in recognizing emotions, two Artificial Intelligence (AI) classification techniques, namely neural networks and fuzzy clustering, are employed. Various forms of neural network are applied, and one of them performs well, achieving success rates of 91.42% for SEA1 and 89.76% for SEA2. For the second classification technique, two forms of fuzzy c-means clustering are considered and their performances are compared; the better of the two achieves a 90% success rate for both SEA1 and SEA2.
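As a sketch only, the following shows how the two classification stages could be driven by the minor-axis ratio features; the network topology, the fuzzy c-means variant and the placeholder data are assumptions, not the configurations evaluated in this work.

    # Illustrative classification of emotions from minor-axis ratio features.
    # Network topology, clustering variant and data below are placeholders.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    # X: one row per image, e.g. [eye ratio, top-lip ratio, bottom-lip ratio];
    # y: emotion labels (0..6 for the seven universal emotions).
    X = rng.random((70, 3))            # placeholder features
    y = np.repeat(np.arange(7), 10)    # placeholder labels

    # Neural-network classifier (one hidden layer as a stand-in topology).
    nn = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    nn.fit(X, y)
    print("training accuracy:", nn.score(X, y))

    # Basic fuzzy c-means clustering of the same features.
    def fuzzy_c_means(X, c=7, m=2.0, iters=100, seed=0):
        g = np.random.default_rng(seed)
        U = g.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)               # fuzzy memberships sum to 1
        for _ in range(iters):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            U = 1.0 / (d ** (2 / (m - 1)) * (1.0 / d ** (2 / (m - 1))).sum(axis=1, keepdims=True))
        return centers, U

    centers, U = fuzzy_c_means(X)
    print("hard assignments:", U.argmax(axis=1)[:10])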
It is concluded that analyzing personalized emotion through the facial features of two subjects yields a higher success rate than a general form of analysis applied to a variety of faces from several ethnicities.