MegsRadio: Promoting local music and events through adaptive recommendation algorithms
Andrew Horwitz
MUMT 621
3/26/2014

Overview
  • Look at previous research into playlist compilation/music recommendation
  • The MegsRadio project; our algorithms
  • Our goals
Playlist Recommendation: Fields (2011)
  • Playlist structure is (unconsciously) influenced by Tversky (1977): songs are treated as objects with properties, and similarity between songs is judged by comparing those properties.
  • MIREX has had an Audio Music Similarity and Retrieval (AMS) task to compare playlist algorithms.
    • “Given a collection of 7,000 30-second clips of audio… for each selected query track, find the 100 most similar tracks in the correct order and assigned the correct similarity score.”
Playlist Recommendation: Fields (2011)
  • Music Recommendation vs. Playlist Generation
    • Raw recommendation is based on similarity between users: it usually relies on correlating ratings (as Amazon does)
    • Playlist generation focuses more on similarity between songs
  • Focus areas within playlist generation:
    • Art of the Mix
    • Patterns over explicit similarity
    • Cultural and situational aspects
Automatic Playlist Construction: McFee and Lanckriet (2011)
  • Uniform shuffle
    • Songs selected randomly, without weight, from a given subset.
    • Refined by disallowing consecutive repetitions: for the current song x_t, the next song x_{t+1} is drawn from X \ {x_t} (both shuffle variants are sketched after this list)
  • Weighted shuffle
    • Songs weighted by various characteristics
    • Can be “steered” by tags/user feedback (Maillet 2009)
  • k-NN and random walks
    • Songs are organized into a neighborhood graph by some set of characteristics, and a random walk is performed over it
  • Markov chains
    • System intelligently determines weights for weighted shuffle/parameters for k-NN maps
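A minimal sketch of the two shuffle variants described above, assuming a simple song list and a per-song weight table; the function names, example songs, and weights are illustrative, not MegsRadio's actual code:

```python
import random

def uniform_shuffle_next(library, current=None):
    """Pick the next song uniformly at random from the library,
    disallowing an immediate repeat of the current song
    (i.e. draw from the set X minus {x_t})."""
    candidates = [s for s in library if s != current]
    return random.choice(candidates)

def weighted_shuffle_next(library, weights, current=None):
    """Pick the next song with probability proportional to its weight;
    the weights could be steered by tags or user feedback (Maillet 2009)."""
    candidates = [s for s in library if s != current]
    total = sum(weights[s] for s in candidates)
    r = random.uniform(0, total)
    acc = 0.0
    for s in candidates:
        acc += weights[s]
        if r <= acc:
            return s
    return candidates[-1]  # guard against floating-point round-off

# Illustrative usage with made-up songs and weights
songs = ["song_a", "song_b", "song_c"]
w = {"song_a": 0.5, "song_b": 2.0, "song_c": 1.0}
playlist = ["song_a"]
for _ in range(5):
    playlist.append(weighted_shuffle_next(songs, w, current=playlist[-1]))
print(playlist)
```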
Steerable Playlists: Maillet et al. (2009)
  • Built a 180-dimensional similarity web
    • Based on radio programs, song-level characteristics, and smaller-frame characteristics
  • Seed track is selected by the user; tracks most similar to that song (above a similarity threshold or up to a certain quantity) are set aside as candidates
  • User refines these similarities by putting weights on any of 360 tags (a simplified sketch follows this list)
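A simplified sketch of the steering idea, assuming each track is represented as a vector over a handful of tags and that the user's tag weights re-scale a dot-product similarity to the seed; the three-tag space and all numbers are illustrative stand-ins for the 180-dimensional similarity web:

```python
import numpy as np

def steered_similarity(seed_vec, track_vecs, tag_weights):
    """Score each candidate track against the seed; the user's per-tag
    weights emphasize or suppress dimensions of the similarity space."""
    w = np.asarray(tag_weights)
    weighted_seed = seed_vec * w
    return track_vecs @ weighted_seed

# Illustrative 3-tag space: ["soft", "hard", "acoustic"]
seed = np.array([0.2, 0.9, 0.1])      # the seed track's tag profile
candidates = np.array([
    [0.8, 0.1, 0.7],                  # a "soft"-leaning track
    [0.1, 0.9, 0.0],                  # a "hard"-leaning track
])
soft_cloud = [2.0, 0.5, 1.5]          # user boosts softer tags
hard_cloud = [0.5, 2.0, 0.5]          # user boosts harder tags
print(steered_similarity(seed, candidates, soft_cloud))
print(steered_similarity(seed, candidates, hard_cloud))
```

With the same seed, the two tag clouds rank the candidates differently, which is the effect illustrated by the Clumsy / Imagine / Hypnotize example on the next slide.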
Steerable Playlists: Maillet et al. (2009)
  • Different tags lead to different results; both playlists are seeded with the song Clumsy by Our Lady Peace
    • “Soft” tag cloud is made up of the tags for Imagine by John Lennon
    • “Hard” tag cloud with the tags for Hypnotize by System of a Down
What’s missing from internet radio
  • Increase awareness of local music
    • Contextualize with mainstream music
  • Focus on event promotion
    • Either tickets OR music, rarely linked
  • More tunable parameters
    • Feedback sometimes limited to only +/-
  • Craigslist model
Our playlist algorithm
  • Seeded by a combination of tags and artists
  • Gets a list of:
    • all songs that are by artists similar to the seed artists, or have positive correlations to the tags
    • all songs listened to by the user and all related feedback data
      • (dislike/WTF/discovery/like/ban)
    • user’s station preferences (Echonest characteristics and a few others)
  • Correlates the three lists, adding or subtracting value from songs or removing them accordingly (a scoring sketch follows this list)
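A hedged sketch of that correlation step, assuming dictionary-shaped inputs; the feedback adjustment values and field names are assumptions, since the slides do not give the actual numbers:

```python
# Hypothetical feedback adjustments; the real system's values are not given.
FEEDBACK_ADJUST = {"like": 1.0, "discovery": 0.5, "WTF": -0.25,
                   "dislike": -1.0, "ban": None}  # None => remove entirely

def score_candidates(candidates, feedback, preferences):
    """Combine the candidate list (songs matching seed artists/tags) with
    the user's listening feedback and station preferences."""
    scored = {}
    for song, base in candidates.items():
        fb = feedback.get(song)
        if fb is not None and FEEDBACK_ADJUST.get(fb) is None:
            continue                      # banned songs are dropped
        score = base
        if fb is not None:
            score += FEEDBACK_ADJUST[fb]
        # Nudge the score toward the user's station preferences,
        # e.g. Echonest-style characteristics (energy, tempo, ...)
        score += preferences.get(song, 0.0)
        scored[song] = score
    return scored

# Illustrative use
candidates = {"local_band_song": 1.0, "mainstream_song": 0.8, "banned_song": 0.9}
feedback = {"mainstream_song": "like", "banned_song": "ban"}
prefs = {"local_band_song": 0.2}
print(score_candidates(candidates, feedback, prefs))
```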
Our playlist algorithm
  • Master playlist vs. per-station (more | some | less) settings (sketched after this list):
    • Popular music: Echonest popularity statistics
    • Repeats: based on the user's listening history on the current station and on DMCA guidelines
    • Local music: different weights when selecting songs…
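One possible way the (more | some | less) settings could map to numeric multipliers applied during song weighting; the multiplier values and function name are hypothetical, not taken from the slides:

```python
# Hypothetical mapping from a station's (more | some | less) settings to
# numeric multipliers; MegsRadio's actual values are not given.
SETTING_MULTIPLIER = {"more": 1.5, "some": 1.0, "less": 0.5}

def station_weights(popularity="some", repeats="some", local="some"):
    """Turn per-station settings into multipliers used when weighting
    songs: Echonest popularity, repeat tolerance (bounded by DMCA
    guidelines), and local-artist emphasis."""
    return {
        "popularity": SETTING_MULTIPLIER[popularity],
        "repeats": SETTING_MULTIPLIER[repeats],
        "local": SETTING_MULTIPLIER[local],
    }

print(station_weights(popularity="less", local="more"))
```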
Our playlist algorithm
  • Makes a list of all songs by seeded artists and a list of all songs by local artists
  • Picks the highest-weighted song by a seeded artist, the highest-weighted local song, and the highest-weighted song that is neither of those
  • Randomly selects songs until the target length is reached; parameters control how likely a local song or a seed-artist song is to be chosen (see the sketch after this list)
  • Statistical noise is introduced at various parts in the process to ensure variety if the playlist needs to be reloaded for some reason
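A sketch of that selection loop under the assumptions above: each step probabilistically prefers a seed-artist or local song, and a small amount of noise is added to the weights so a reloaded playlist differs; the probabilities, noise level, and song names are illustrative:

```python
import random

def build_playlist(scored, seed_songs, local_songs, length=10,
                   p_seed=0.3, p_local=0.3, noise=0.05):
    """Fill a playlist to the target length. p_seed / p_local control how
    likely a seed-artist or local song is picked at each step; small
    random noise keeps reloads from producing identical playlists."""
    playlist = []
    pool = dict(scored)  # song -> weight

    def pick(candidates):
        if not candidates:
            return None
        # Perturb each weight slightly before choosing the best candidate.
        noisy = {s: pool[s] + random.uniform(-noise, noise) for s in candidates}
        return max(noisy, key=noisy.get)

    while len(playlist) < length and pool:
        r = random.random()
        if r < p_seed:
            choice = pick([s for s in pool if s in seed_songs])
        elif r < p_seed + p_local:
            choice = pick([s for s in pool if s in local_songs])
        else:
            choice = pick(list(pool))
        if choice is None:            # fall back if that category is exhausted
            choice = pick(list(pool))
        playlist.append(choice)
        del pool[choice]
    return playlist

# Illustrative use
scored = {"seed_hit": 2.0, "local_gem": 1.5, "other_a": 1.0, "other_b": 0.8}
print(build_playlist(scored, seed_songs={"seed_hit"},
                     local_songs={"local_gem"}, length=3))
```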
Event recommendation
  • “Approximately how many music events do you attend per month?”
    • We want to increase this – as users are exposed to more local bands, they should find more events they want to attend.
  • List of events pulled in via web crawlers tailored to local news sites and event promoters
  • Same list of all tracks listened to and of all feedback provided through all stations
Event recommendation
  • If the user has listened to or provided feedback on an artist that is performing at an event, their opinion is weighted by a flat amount
  • If the user has listened to or provided feedback on an artist similar to a performer, the opinion is weighted by how similar the artist is to the performers.
    • Added rather than averaged: we figured that if a user liked multiple artists at an event, they’d like the event even more
  • Highest-scoring artist (after similarity multipliers are calculated) is given as a “Why we think you’d like this event” reason (a scoring sketch follows this list)
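A minimal sketch of the event-scoring rule as described: direct listens or feedback contribute a flat amount, similar artists contribute proportionally to their similarity to a performer, contributions are summed rather than averaged, and the strongest contributor becomes the "why" reason. The flat weight, data shapes, and names are illustrative assumptions:

```python
FLAT_WEIGHT = 1.0  # hypothetical flat contribution for direct listens/feedback

def score_event(performers, user_artist_scores, similarity):
    """Score one event. `user_artist_scores` maps artists the user has
    listened to (or rated) to an opinion value; `similarity[a][b]` is an
    artist-to-artist similarity in [0, 1]. Contributions are added, not
    averaged, and the strongest contributor is returned as the reason."""
    total = 0.0
    best_reason, best_contrib = None, 0.0
    for performer in performers:
        for artist, opinion in user_artist_scores.items():
            if artist == performer:
                contrib = opinion * FLAT_WEIGHT
            else:
                contrib = opinion * similarity.get(artist, {}).get(performer, 0.0)
            total += contrib
            if contrib > best_contrib:
                best_reason, best_contrib = artist, contrib
    return total, best_reason

# Illustrative use: the user likes band_a, which is similar to the headliner
sims = {"band_a": {"headliner": 0.8}}
opinions = {"band_a": 1.0, "headliner": 0.5}
print(score_event(["headliner"], opinions, sims))
```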
Promoting local music and events
  • Both algorithms are linked to the same data
    • listens affect event recommendations
    • different events suggested using same similarity data in different cities
  • Visual interplay
    • radio interface has links to events: “This artist is playing in $CITY_NAME soon!”
    • event interface has “Start playlist” for each event
  • Local music interspersed with mainstream
    • recommendations can be based on both
    • events for both are shown
Future research
  • Plug-and-Playlist™
    • No Markov “smart” implementation yet
      • Testing different parameters on our recent revisions
    • API construction means we can dynamically switch between algorithms
  • “Approximately how many music events do you attend per month?”
    • Do we help increase this number? How can we?
References
  • Fields, B. 2011. Contextualize your listening: the playlist as recommendation engine. Ph.D. Dissertation. University of London.
  • Maillet, F., D. Eck, G. Desjardins, and P. Lamere. 2009. Steerable playlist generation by learning song similarity from radio station playlists. Proceedings of the International Society for Music Information Retrieval Conference. Kobe, Japan. 345-50.
  • McFee, B., L. Barrington, and G. Lanckriet. 2012. Learning content similarity for music recommendation. IEEE Transactions on Audio, Speech, and Language Processing 20(8). 2207-18.
  • McFee, B., and G. Lanckriet. 2011. The natural language of playlists. Proceedings of the International Society for Music Information Retrieval Conference. Miami, FL. 537-42.
  • Turnbull, D., L. Barrington, D. Torres, and G. Lanckriet. 2008. Semantic annotation and retrieval of music and sound effects. IEEE Transactions on Audio, Speech, and Language Processing 16(2). 467-76.
  • Tversky, A. 1977. Features of similarity. Psychological Review 84(4). 327-52.