| dc.description.abstract |
Depression is a prevalent mental health disorder that significantly affects emotional well-being, daily functioning, and
quality of life. Although traditional treatments such as psychotherapy and medication are widely used, many individuals
face challenges including limited access to mental health professionals, high treatment costs, and social stigma. Music
therapy has emerged as an effective non-pharmacological approach for emotional regulation and psychological support.
However, most existing digital music platforms lack real-time emotional awareness and therapeutic intent. This research
proposes an AI-driven personalized music recommendation system that leverages facial emotion recognition to support
individuals experiencing depression. The system employs a Convolutional Neural Network (CNN) enhanced with
attention mechanisms to automatically detect emotional states from facial expressions using the RAF-DB dataset. The
detected emotions are then mapped to therapeutically appropriate music selections based on emotion-aware principles,
enabling real-time, personalized music recommendations. The proposed model was trained and evaluated using deep learning techniques with tuned hyperparameters. Experimental results demonstrate that the CNN model achieved a
testing accuracy of 94.87%, indicating strong generalization performance and robustness. Comparative analysis with
other architectures, including VGG16, ResNet50, and MobileNetV2, confirms the effectiveness of the proposed CNN-based approach. Overall, the findings highlight the potential of integrating facial emotion recognition with intelligent
music recommendation systems as a complementary tool for mental health support. The proposed system offers a non-invasive, accessible, and adaptive solution that enhances emotional regulation and user well-being through personalized
therapeutic music. |
en_US |