Journal of Northeastern University (Natural Science) ›› 2024, Vol. 45 ›› Issue (6): 776-785. DOI: 10.12068/j.issn.1005-3026.2024.06.003

• Information and Control •

Research on Emotion Recognition Method of Music Multimodal Data

Dong-hong HAN1, Yan-ru KONG2, Yi-meng ZHAN1, Yuan LIU1

  1. School of Computer Science & Engineering, Northeastern University, Shenyang 110169, China
    2. NARI Group Corporation, State Grid Electric Power Research Institute, Nanjing 211000, China
  • Received: 2023-02-09; Online: 2024-06-15; Published: 2024-09-18
  • Corresponding author: Yan-ru KONG
  • About author: HAN Dong-hong (b. 1968), female, born in Pingshan, Hebei Province; professor at Northeastern University.
  • Funding:
    National Natural Science Foundation of China (61672144); National Key Research and Development Program of China (2019YFB1405302)

Research on Emotion Recognition Method of Music Multimodal Data

Dong-hong HAN1, Yan-ru KONG2, Yi-meng ZHAN1, Yuan LIU1

  1. School of Computer Science & Engineering, Northeastern University, Shenyang 110169, China
    2. NARI Group Corporation, State Grid Electric Power Research Institute, Nanjing 211000, China
  • Received: 2023-02-09; Online: 2024-06-15; Published: 2024-09-18
  • Contact: Yan-ru KONG
  • About author: KONG Yan-ru, Email: kong19960103@163.com

Abstract:

Research on music emotion recognition has broad application prospects in fields such as intelligent music recommendation and music visualization. To address the limited effectiveness and poor interpretability of emotion recognition that relies only on low-level audio features, this work proceeds in three steps. First, an emotion recognition model based on musical instrument digital interface (MIDI) data, ERMSLM (emotion recognition model based on skip-gram and LSTM using MIDI data), is constructed to learn the semantic information of notes; its features are the concatenation of three parts: melodic features extracted with a skip-gram model and a long short-term memory (LSTM) network, tonal features extracted by a pre-trained multilayer perceptron (MLP), and manually constructed features. Second, an emotion recognition model based on text data, ERMBT (emotion recognition model based on BERT using text data), is constructed by fusing lyrics and social tags; its lyric features consist of emotional features extracted with BERT (bidirectional encoder representations from transformers), emotion-dictionary features built from the Affective Norms for English Words (ANEW) list, and term frequency-inverse document frequency (TF-IDF) features of the lyrics. Finally, two multimodal fusion models, feature-level fusion and decision-level fusion, are constructed around the MIDI and text data. Experimental results show that the ERMSLM and ERMBT models achieve accuracies of 56.93% and 72.62%, respectively, and that the decision-level multimodal fusion model performs better.
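A minimal PyTorch-style sketch of the ERMSLM feature layout described above: skip-gram note embeddings fed through an LSTM for the melody branch, a pre-trained MLP for the tonal branch, and hand-crafted features, all concatenated before a classifier. Every layer size, the 4-class output, and the linear classifier head are illustrative assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class ERMSLMSketch(nn.Module):
    """Illustrative sketch of the ERMSLM feature concatenation.
    Dimensions are assumptions, not the paper's values."""

    def __init__(self, vocab_size=128, embed_dim=64, lstm_hidden=128,
                 tonal_in=24, tonal_out=32, manual_dim=16, num_classes=4):
        super().__init__()
        # Note embeddings; in the paper these come from a skip-gram model,
        # so they would typically be loaded pre-trained rather than random.
        self.note_embed = nn.Embedding(vocab_size, embed_dim)
        self.melody_lstm = nn.LSTM(embed_dim, lstm_hidden, batch_first=True)
        # Stand-in for the pre-trained MLP that produces tonal features.
        self.tonal_mlp = nn.Sequential(
            nn.Linear(tonal_in, 64), nn.ReLU(), nn.Linear(64, tonal_out))
        self.classifier = nn.Linear(lstm_hidden + tonal_out + manual_dim,
                                    num_classes)

    def forward(self, note_ids, tonal_feats, manual_feats):
        emb = self.note_embed(note_ids)        # (B, T, embed_dim)
        _, (h_n, _) = self.melody_lstm(emb)    # h_n: (1, B, lstm_hidden)
        melody = h_n[-1]                       # melody branch features
        tonal = self.tonal_mlp(tonal_feats)    # tonal branch features
        # Concatenate melody, tonal, and manually constructed features.
        fused = torch.cat([melody, tonal, manual_feats], dim=-1)
        return self.classifier(fused)          # emotion logits

# Example usage with random stand-ins for a batch of two MIDI sequences.
model = ERMSLMSketch()
logits = model(torch.randint(0, 128, (2, 50)),
               torch.randn(2, 24), torch.randn(2, 16))
print(logits.shape)  # torch.Size([2, 4])
```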

Key words: music emotion recognition, deep learning, multimodal, long short-term memory

Abstract:

The research of music emotion recognition has broad application prospects in the fields of music intelligent recommendation and music visualization. To address the limited effectiveness and poor interpretability of emotion recognition that uses only low-level audio features, an emotion recognition model ERMSLM based on MIDI (musical instrument digital interface) data is first constructed, which can learn the semantic information of notes. The features of this model are composed of melodic features extracted with skip-gram and LSTM (long short-term memory), tonal features extracted by a pre-trained MLP, and manually constructed features. Secondly, an emotion recognition model ERMBT based on text data that integrates lyrics and social tags is constructed. The lyric features are composed of emotional features extracted with BERT, emotion-dictionary features constructed by using the ANEW list, and TF-IDF features of the lyrics. Finally, two multimodal fusion models, feature-level fusion and decision-level fusion, are constructed based on the MIDI and text data. The experimental results show that the ERMSLM and ERMBT models achieve accuracies of 56.93% and 72.62%, respectively, and that the decision-level multimodal fusion model is more effective.
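To make the ERMBT lyric features and the decision-level fusion step concrete, the sketch below concatenates a BERT sentence vector, ANEW-derived lexicon statistics, and TF-IDF weights, then combines the class probabilities of the two unimodal models by weighted averaging. The toy ANEW entries, the 768-dimensional stand-in for a BERT [CLS] vector, the fusion weight, and the 4-class probability vectors are all assumptions for illustration, not values from the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy valence/arousal lexicon; the real ANEW list is far larger.
ANEW = {"love": (8.7, 6.4), "happy": (8.2, 6.5),
        "sad": (1.6, 4.1), "alone": (2.4, 4.8)}

def anew_features(lyrics: str) -> np.ndarray:
    """Mean valence/arousal of lyric words found in the lexicon."""
    hits = [ANEW[w] for w in lyrics.lower().split() if w in ANEW]
    return np.mean(hits, axis=0) if hits else np.zeros(2)

def ermbt_features(lyrics, bert_cls, tfidf_vec) -> np.ndarray:
    """Concatenate the three lyric feature groups used by ERMBT:
    BERT sentence embedding, ANEW lexicon statistics, TF-IDF weights."""
    return np.concatenate([bert_cls, anew_features(lyrics), tfidf_vec])

def decision_level_fusion(p_midi, p_text, w_text=0.6) -> np.ndarray:
    """Weighted average of the two unimodal class-probability vectors;
    the weight is illustrative, not a value reported in the paper."""
    return (1.0 - w_text) * p_midi + w_text * p_text

# Example usage with stand-ins for the model outputs.
lyrics = "happy alone love"
tfidf = TfidfVectorizer().fit([lyrics])
feats = ermbt_features(lyrics,
                       bert_cls=np.random.randn(768),  # pretend BERT [CLS]
                       tfidf_vec=tfidf.transform([lyrics]).toarray()[0])
fused = decision_level_fusion(np.array([0.2, 0.3, 0.4, 0.1]),   # ERMSLM
                              np.array([0.1, 0.2, 0.6, 0.1]))   # ERMBT
print(feats.shape, fused)
```

Feature-level fusion would instead concatenate the two unimodal feature vectors (MIDI and text) and train a single classifier on the joint representation.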

Key words: music emotion recognition, deep learning, multimodal, LSTM
