ISSN 2394-5125
 


    A DEEP LEARNING MODEL FOR SPEECH AND FACIAL EXPRESSION BASED EMOTION DETECTION (2023)


    B. Suresh Kumar, Shaik. Shamik, Shaik. Mohammad Taj, Shaik. Muzamil, Shaik. Asif, Udayagiri Penchala Prasad
    JCR. 2023: 42-51

    Abstract

    In recent years, much advancement has been made in artificial intelligence, machine learning, and human-machine interaction. Voice interaction with a machine, or commanding it to perform a specific task, is increasingly popular, and many consumer electronics are integrated with assistants such as Siri, Alexa, Cortana, and Google Assistant. However, machines are limited in that they cannot interact with a person like a human conversational partner: they cannot recognize human emotions and react to them. Emotion recognition from speech is a cutting-edge research topic in the human-machine interaction field. As machines are indispensable to our lives, there is a demand for a more robust man-machine communication system, and many researchers are currently working on speech emotion recognition (SER) to improve man-machine interaction. To achieve this goal, a computer should be able to recognize emotional states and react to them in the same way we humans do. The effectiveness of an SER system depends on the quality of the extracted features and the type of classifier used. In this project we tried to identify four basic emotions from speech: anger, sadness, neutral, and happiness. We used audio files of short Manipuri speech taken from movies as the training and testing dataset. This work uses a convolutional neural network (CNN) to identify the different emotions, with Mel Frequency Cepstral Coefficients (MFCCs) as the feature extraction technique.
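    The abstract names MFCC extraction as the front end of the system but gives no implementation details. Below is a minimal NumPy sketch of the standard MFCC pipeline (framing, Hamming window, power spectrum, triangular mel filterbank, log, DCT-II); all parameter values (16 kHz sample rate, 512-point FFT, 10 ms hop, 26 mel filters, 13 coefficients) are common defaults assumed for illustration, not values taken from the paper.

    ```python
    import numpy as np

    def hz_to_mel(f):
        # standard mel-scale mapping
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_mfcc=13):
        # 1) slice the waveform into overlapping frames and window them
        frames = np.array([signal[s:s + n_fft]
                           for s in range(0, len(signal) - n_fft + 1, hop)])
        frames = frames * np.hamming(n_fft)

        # 2) power spectrum of each frame
        spec = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

        # 3) triangular filters spaced evenly on the mel scale
        mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
        bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
        fbank = np.zeros((n_mels, n_fft // 2 + 1))
        for m in range(1, n_mels + 1):
            lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
            for k in range(lo, c):
                fbank[m - 1, k] = (k - lo) / max(c - lo, 1)
            for k in range(c, hi):
                fbank[m - 1, k] = (hi - k) / max(hi - c, 1)

        # 4) log mel energies, then DCT-II to decorrelate -> cepstral coeffs
        log_mel = np.log(spec @ fbank.T + 1e-10)
        n = np.arange(n_mels)
        dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * n + 1) / (2.0 * n_mels)))
        return log_mel @ dct.T  # shape: (num_frames, n_mfcc)

    # usage: one second of a 440 Hz tone yields a (frames x 13) feature matrix,
    # which would be fed to the CNN classifier as a 2-D input
    sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
    feats = mfcc(sig)
    ```

    In a full system, each utterance's MFCC matrix would be padded or cropped to a fixed size and passed to a small 2-D CNN whose final softmax layer covers the four emotion classes (anger, sadness, neutral, happiness); the architecture itself is not specified in the abstract.
    
    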


    Volume & Issue

    Volume 10, Issue 3
