 Title & Authors
Ranking Tag Pairs for Music Recommendation Using Acoustic Similarity
Lee, Jaesung; Kim, Dae-Won
 Abstract
The need to recognize emotion in music has become apparent in many music information retrieval applications. In addition to the large pool of techniques already developed in machine learning and data mining, emerging applications have led to a wealth of newly proposed methods. In the music information retrieval community, many studies and applications have concentrated on tag-based music recommendation. A key limitation of music emotion tags is their ambiguity: a single tag can cover too many subcategories. To overcome this, multiple tags can be used simultaneously to specify music clips more precisely. In this paper, we propose a novel technique for ranking tag combinations based on the acoustic similarity of music clips.
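The ranking idea described above can be illustrated with a minimal sketch: score each emotion-tag pair by the average acoustic similarity among the clips annotated with both tags, then rank pairs by that score. The function names, data layout, and the choice of cosine similarity below are illustrative assumptions, not the paper's actual implementation; acoustic feature extraction is assumed to have been done already.

```python
# Hypothetical sketch: rank tag pairs by acoustic cohesion of the clips
# that carry both tags. Cosine similarity is an assumed stand-in for
# whatever acoustic similarity measure the paper uses.
import math
from itertools import combinations

def cosine(u, v):
    """Cosine similarity between two acoustic feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def rank_tag_pairs(features, tags):
    """features: clip_id -> feature vector; tags: clip_id -> set of tags.
    Returns tag pairs sorted by the mean pairwise similarity of the clips
    annotated with both tags (most acoustically coherent pair first)."""
    all_tags = sorted({t for ts in tags.values() for t in ts})
    scores = {}
    for a, b in combinations(all_tags, 2):
        clips = [c for c in features if {a, b} <= tags[c]]
        if len(clips) < 2:  # need at least two clips to compare
            continue
        sims = [cosine(features[x], features[y])
                for x, y in combinations(clips, 2)]
        scores[(a, b)] = sum(sims) / len(sims)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy data: the "sad"+"calm" clips sound alike; the "sad"+"angry" ones do not.
features = {"c1": [1.0, 0.0], "c2": [1.0, 0.1],
            "c3": [1.0, 0.0], "c4": [0.0, 1.0]}
tags = {"c1": {"sad", "calm"}, "c2": {"sad", "calm"},
        "c3": {"sad", "angry"}, "c4": {"sad", "angry"}}
ranking = rank_tag_pairs(features, tags)
```

On this toy data the pair ("calm", "sad") ranks above ("angry", "sad"), since the clips sharing the former tags are acoustically closer; pairs shared by fewer than two clips are skipped.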
 Keywords
Music emotion annotation; Acoustic feature extraction; Music emotion recognition
 Language
English