
jSymbolic


Presentation Transcript


  1. jSymbolic Cedar Wingate MUMT 621 Professor Ichiro Fujinaga 22 October 2009

  2. Types of Features • Low Level • High Level • Cultural

  3. What are High Level Features? • Information that consists of musical abstractions that are meaningful to musically trained individuals. • Examples include instruments present, melodic contour, chord frequencies and rhythmic density.

  4. Why High Level Features? • Musicological and music theoretical value • Great deal of music already encoded in MIDI or Humdrum’s **kern format • Optical music recognition can provide even more symbolic musical data

  5. jSymbolic application • Application to extract high-level features from MIDI files • Many high-level features cannot be extracted from audio recordings • Open source • Designed so that new features can be added with basic Java and MIDI skills
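
The slide notes that extending jSymbolic requires only basic Java and MIDI skills. As a rough illustration of that baseline, the standalone sketch below (using the standard javax.sound.midi package, not jSymbolic's own feature API; the class name MidiNoteReader is made up for this example) reads a MIDI file and prints its note-on events, the raw material from which most symbolic features are computed.

    // Standalone sketch, not jSymbolic code: list the note-on events in a MIDI
    // file using the standard javax.sound.midi classes.
    import javax.sound.midi.*;
    import java.io.File;

    public class MidiNoteReader {
        public static void main(String[] args) throws Exception {
            Sequence sequence = MidiSystem.getSequence(new File(args[0]));
            for (Track track : sequence.getTracks()) {
                for (int i = 0; i < track.size(); i++) {
                    MidiEvent event = track.get(i);
                    if (event.getMessage() instanceof ShortMessage) {
                        ShortMessage sm = (ShortMessage) event.getMessage();
                        // A NOTE_ON with non-zero velocity marks the start of a sounding note
                        if (sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() > 0) {
                            System.out.println("tick " + event.getTick()
                                    + "  pitch " + sm.getData1()
                                    + "  velocity " + sm.getData2());
                        }
                    }
                }
            }
        }
    }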

  6. Building a feature set • Goals • Single software system that can be applied to classical music, jazz and a wide variety of popular and traditional musics • Usable without manual adjustments or adaptations for different types of music • Issues to consider • “Curse of dimensionality” • Many systems of analysis, many relying on intuitive, subjective judgment • jSymbolic solution • Large catalogue of general features • User can select which ones to include or exclude • Concentrate on features that can be represented by relatively simple statistics • Intermediate representations • Histograms
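
To make the "intermediate representations" point concrete, here is a minimal sketch of one histogram-style representation: a normalized pitch-class histogram built from note-on events. The class name PitchClassHistogram and the exact binning are assumptions for illustration; jSymbolic's own histogram features are defined in the cited papers (McKay and Fujinaga 2006, 2007).

    // Minimal sketch (assumed form, not jSymbolic's implementation):
    // a pitch-class histogram as an intermediate representation.
    import javax.sound.midi.*;
    import java.io.File;

    public class PitchClassHistogram {
        public static double[] build(File midiFile) throws Exception {
            double[] histogram = new double[12];   // one bin per pitch class
            int totalNotes = 0;
            Sequence seq = MidiSystem.getSequence(midiFile);
            for (Track track : seq.getTracks()) {
                for (int i = 0; i < track.size(); i++) {
                    MidiMessage msg = track.get(i).getMessage();
                    if (msg instanceof ShortMessage) {
                        ShortMessage sm = (ShortMessage) msg;
                        if (sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() > 0) {
                            histogram[sm.getData1() % 12]++;   // fold MIDI pitch into 12 classes
                            totalNotes++;
                        }
                    }
                }
            }
            // Normalize so the bins sum to 1, making the representation length-independent
            for (int i = 0; i < 12; i++) {
                histogram[i] = (totalNotes > 0) ? histogram[i] / totalNotes : 0.0;
            }
            return histogram;
        }
    }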

  7. Feature characteristics • Features that can be represented by simple numbers or small vectors • One-dimensional features • Means, standard deviations, true/false values • Multi-dimensional features • Histograms
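
For the one-dimensional features mentioned above (means, standard deviations, true/false values), a small sketch: mean pitch and pitch standard deviation computed from a list of MIDI pitch numbers. The class and method names are illustrative, not jSymbolic's.

    // Illustrative sketch of two one-dimensional features: mean pitch and
    // the standard deviation of pitch over all notes in a piece.
    import java.util.List;

    public class SimplePitchFeatures {
        /** Mean MIDI pitch: a single-valued (one-dimensional) feature. */
        public static double meanPitch(List<Integer> pitches) {
            if (pitches.isEmpty()) return 0.0;
            double sum = 0.0;
            for (int p : pitches) sum += p;
            return sum / pitches.size();
        }

        /** Standard deviation of pitch: another one-dimensional feature. */
        public static double pitchStdDev(List<Integer> pitches) {
            if (pitches.isEmpty()) return 0.0;
            double mean = meanPitch(pitches);
            double sumSq = 0.0;
            for (int p : pitches) sumSq += (p - mean) * (p - mean);
            return Math.sqrt(sumSq / pitches.size());
        }
    }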

  8. Example: Beat Histogram (McKay and Fujinaga 2007)
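
The beat histogram in the cited figure is computed by jSymbolic from rhythmic periodicities; the exact algorithm is described in the cited papers. As a much simplified stand-in for the idea of summarizing rhythm in a histogram, the sketch below counts inter-onset intervals expressed in quarter-note units (it assumes PPQ timing and is not the algorithm behind the figure).

    // Simplified stand-in for the beat-histogram idea (not the algorithm in the
    // figure): a histogram of inter-onset intervals in quarter-note units.
    // Assumes the sequence uses PPQ (ticks-per-quarter-note) timing.
    import javax.sound.midi.*;
    import java.io.File;
    import java.util.*;

    public class InterOnsetHistogram {
        public static Map<Double, Integer> build(File midiFile) throws Exception {
            Sequence seq = MidiSystem.getSequence(midiFile);
            TreeSet<Long> onsetTicks = new TreeSet<>();
            for (Track track : seq.getTracks()) {
                for (int i = 0; i < track.size(); i++) {
                    MidiEvent e = track.get(i);
                    if (e.getMessage() instanceof ShortMessage) {
                        ShortMessage sm = (ShortMessage) e.getMessage();
                        if (sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() > 0) {
                            onsetTicks.add(e.getTick());
                        }
                    }
                }
            }
            // Count the gaps between successive onsets, converted to quarter notes
            Map<Double, Integer> histogram = new TreeMap<>();
            Long previous = null;
            for (long tick : onsetTicks) {
                if (previous != null) {
                    double quarters = (tick - previous) / (double) seq.getResolution();
                    histogram.merge(quarters, 1, Integer::sum);
                }
                previous = tick;
            }
            return histogram;
        }
    }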

  9. The Features • Drawn from musical research • Music Theory (Julie Cumming), Ethnomusicology (Alan Lomax, Bruno Nettl), Music Cognition (Bret Aarden and David Huron) and Popular Musicology (Philip Tagg) • 160 Total Features (111 implemented) • Instrumentation • Pitched/Unpitched • Note and Time Prevalence and Variability of Note Prevalence • Fraction • Texture • Independent Voices • Voice equality • Range of Voices • Rhythm • Strength • Looseness • Polyrhythms • Density • Tempo, Meter

  10. The Features (continued) • Dynamics • Range • Variation • Pitch Statistics • Common Pitches • Variety • Range • Glissando • Vibrato • Melody • Intervals • Arpeggiation • Repetition • Chromaticism • Melodic Arc • Chords (not implemented)
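
As one concrete example from the melody and chromaticism group in the list above, the sketch below computes the fraction of melodic intervals of exactly one semitone in a monophonic line, a simple chromaticism measure. It is illustrative only; jSymbolic's actual feature definitions are given in the cited papers.

    // Illustrative sketch (not jSymbolic's implementation) of a melody feature:
    // the fraction of melodic intervals of one semitone in a monophonic line.
    public class ChromaticMotionFraction {
        public static double compute(int[] pitches) {
            if (pitches.length < 2) return 0.0;
            int semitoneSteps = 0;
            for (int i = 1; i < pitches.length; i++) {
                if (Math.abs(pitches[i] - pitches[i - 1]) == 1) {
                    semitoneSteps++;   // adjacent notes a semitone apart
                }
            }
            return semitoneSteps / (double) (pitches.length - 1);
        }
    }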

  11. More examples Twenty sample features extracted from the first two measures of Fryderyk Chopin’s Nocturne in B major, Op. 32, No. 1 (McKay and Fujinaga 2007).

  12. More examples Twenty sample features extracted from measures 10 and 11 of the first movement of Felix Mendelssohn’s Piano Trio No. 2 in C minor, Op. 66 (McKay and Fujinaga 2007).

  13. Application: Automatic Genre Classification • Automatic music classification and the importance of instrument identification (McKay and Fujinaga 2005) • Able to correctly classify MIDI recordings among 9 categories 90% of the time and among 38 categories 57% of the time • Root genre identified correctly 90% of the time for the 9-category taxonomy and 80% for the 38-category taxonomy • Better than audio-based classification systems (below 80% among 5 categories) • Found that features relating to instrumentation performed significantly better than other features in automatic genre classification.
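
To show how extracted feature vectors feed a genre classifier, here is a minimal nearest-neighbour sketch: each recording is reduced to a feature vector and labelled with the genre of its closest training example. The cited system used a considerably more elaborate classification scheme; this is only an illustration of the pipeline's final step, with made-up class and method names.

    // Minimal 1-nearest-neighbour sketch of genre classification over feature
    // vectors. Illustration only; not the classifier used in the cited work.
    import java.util.List;

    public class NearestNeighbourGenre {
        public static String classify(double[] query,
                                      List<double[]> trainingVectors,
                                      List<String> trainingGenres) {
            String bestGenre = null;
            double bestDistance = Double.MAX_VALUE;
            for (int i = 0; i < trainingVectors.size(); i++) {
                double[] t = trainingVectors.get(i);
                double distance = 0.0;
                for (int d = 0; d < t.length; d++) {
                    double diff = query[d] - t[d];
                    distance += diff * diff;   // squared Euclidean distance
                }
                if (distance < bestDistance) {
                    bestDistance = distance;
                    bestGenre = trainingGenres.get(i);
                }
            }
            return bestGenre;
        }
    }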

  14. Application: Automatic Genre Classification (continued) (McKay and Fujinaga 2005)

  15. Application: Automatic Genre Classification (continued, again…) (McKay and Fujinaga 2005)

  16. Bibliography
  McKay, C. 2004a. Automatic genre classification of MIDI recordings. M.A. thesis, McGill University, Canada.
  McKay, C. 2004b. Automatic genre classification as a study of the viability of high-level features for music classification. Proceedings of the International Computer Music Conference, 367-70.
  McKay, C., and I. Fujinaga. 2005. Automatic music classification and the importance of instrument identification. Proceedings of the Conference on Interdisciplinary Musicology.
  McKay, C., and I. Fujinaga. 2006. jSymbolic: A feature extractor for MIDI files. Proceedings of the International Computer Music Conference, 302-5.
  McKay, C., and I. Fujinaga. 2007. Style-independent computer-assisted exploratory analysis of large music collections. Journal of Interdisciplinary Music Studies 1 (1): 63-85.
