Please use this identifier to cite or link to this item:
https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4929
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Sathyanjana, A | - |
dc.date.accessioned | 2025-08-21T08:31:12Z | - |
dc.date.available | 2025-08-21T08:31:12Z | - |
dc.date.issued | 2025-06-30 | - |
dc.identifier.uri | https://dl.ucsc.cmb.ac.lk/jspui/handle/123456789/4929 | - |
dc.description.abstract | Human emotions are complex, personal, and often vary across individuals, situations, and cultures. Many existing emotion recognition systems focus only on identifying emotional states at a given moment. This research aims to address that limitation by developing a personalized multimodal emotion recognition framework that identifies and adapts to each user's emotional baseline over time. The framework combines facial and vocal signals using decision-level fusion, where Mean Squared Error (MSE) is used to assign personalized weights based on how close each modality's prediction is to user-reported emotions. A Kernel Density Estimation (KDE) method is introduced to estimate the initial emotional baseline in the arousal-valence space, and this baseline is further refined through reinforcement learning driven by user feedback collected via an emoji-based mechanism. Experiments were conducted across five emotional categories (Happy, Angry, Sad, Boredom, and Calm) with a group of 10 participants. The fused method yields an average improvement of 33.92% over the facial method and 6.52% over the vocal method. Emotionally enhanced responses generated from personalized emotional inputs showed improvements in Empathy (75.3%) and Emotional Alignment (69.5%), followed by Satisfaction (37.6%). Most participants (66.67%) agreed with the computed refined baseline values. This research makes three main contributions: a personalized emotion fusion method, baseline identification, and an iterative refinement process. While the system currently supports a limited set of emotions and uses only facial and vocal inputs, it opens pathways for including more emotional categories, physiological data, and advanced context-aware fusion techniques in future work. (An illustrative sketch of the fusion weighting and KDE baseline appears after the metadata record below.) | en_US |
dc.language.iso | en | en_US |
dc.title | Multimodal Emotional State Recognition for Personalized Responses | en_US |
dc.type | Thesis | en_US |
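The abstract describes MSE-weighted decision-level fusion of facial and vocal predictions and a KDE-estimated baseline in the arousal-valence space. As a rough, hedged illustration only, the sketch below shows one way such a scheme could look in Python; the function names, the inverse-MSE weighting form, and the use of SciPy's `gaussian_kde` are assumptions for illustration, not the thesis author's implementation.

```python
# Illustrative sketch only: assumed names and formulas, not the thesis code.
import numpy as np
from scipy.stats import gaussian_kde


def mse(pred, reported):
    """Mean squared error between a modality's arousal-valence predictions
    and the user-reported emotions (both arrays of shape [n_samples, 2])."""
    return float(np.mean((np.asarray(pred) - np.asarray(reported)) ** 2))


def fusion_weights(facial_preds, vocal_preds, reported):
    """Personalized decision-level fusion weights: the modality whose
    predictions lie closer (lower MSE) to the user-reported emotions gets
    the larger weight; inverse-MSE normalization is an assumption."""
    inv_err = np.array([
        1.0 / (mse(facial_preds, reported) + 1e-8),
        1.0 / (mse(vocal_preds, reported) + 1e-8),
    ])
    return inv_err / inv_err.sum()


def fuse(facial_pred, vocal_pred, weights):
    """Weighted decision-level fusion of the two modality predictions
    in the arousal-valence space."""
    return weights[0] * np.asarray(facial_pred) + weights[1] * np.asarray(vocal_pred)


def estimate_baseline(av_history):
    """Initial emotional baseline: the densest point of a Gaussian KDE
    fitted over a user's past arousal-valence observations ([n, 2])."""
    av = np.asarray(av_history).T          # gaussian_kde expects shape (dims, n)
    kde = gaussian_kde(av)
    densities = kde(av)                    # density at each observed point
    return av[:, np.argmax(densities)]     # take the densest observation
```

Under this reading, per-user histories of facial predictions, vocal predictions, and self-reports would drive `fusion_weights` as new feedback arrives, and `estimate_baseline` would provide the starting point that the abstract's reinforcement-learning step then refines.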
Appears in Collections: 2025
Files in This Item:
File | Description | Size | Format
---|---|---|---
20001681 - B A Sathyanjana - Avishka Devops.pdf | | 22.92 MB | Adobe PDF
Items in UCSC Digital Library are protected by copyright, with all rights reserved, unless otherwise indicated.