
Music Information Retrieval based on multi-label cascade classification system


Presentation Transcript


  1. www.kdd.uncc.edu Music Information Retrieval based on multi-label cascade classification system CCI, UNC-Charlotte http://www.mir.uncc.edu Research sponsored by NSF IIS-0414815, IIS-0968647 presented by Zbigniew W. Ras

  2. Collaborators: Alicja Wieczorkowska (Polish-Japanese Institute of IT, Warsaw, Poland), Krzysztof Marasek (Polish-Japanese Institute of IT, Warsaw, Poland). Former PhD students: Elzbieta Kubera (Maria Curie-Sklodowska University, Lublin, Poland), Rory Lewis (University of Colorado at Colorado Springs, USA), Wenxin Jiang (Fred Hutchinson Cancer Research Center in Seattle, USA), Xin Zhang (University of North Carolina, Pembroke, USA), Jacek Grekow (Bialystok University of Technology, Poland). Current PhD student: Amanda Cohen-Mostafavi (University of North Carolina, Charlotte, USA)

  3. MIRAI - Musical Database (mostly MUMS) [music pieces played by 57 different music instruments] Goal: Design and implement a system for automatic indexing of music by instruments (an objective task) and emotions (a subjective task). Outcome: A musical database represented as an FS-tree guaranteeing efficient storage and retrieval [music pieces indexed by instruments and emotions].

  4. MIRAI - Musical Database [music pieces played by 57+ different music instruments (see below) and described by over 910 attributes] Alto Flute, Bach-trumpet, bass-clarinet, bassoon, bass-trombone, Bb trumpet, b-flat clarinet, cello, cello-bowed, cello-martele, cello-muted, cello-pizzicato, contrabassclarinet, contrabassoon, crotales, c-trumpet, ctrumpet-harmonStemOut, doublebass-bowed, doublebass-martele, doublebass-muted, doublebass-pizzicato, eflatclarinet, electric-bass, electric-guitar, englishhorn, flute, frenchhorn, frenchHorn-muted, glockenspiel, marimba-crescendo, marimba-singlestroke, oboe, piano-9ft, piano-hamburg, piccolo, piccolo-flutter, saxophone-soprano, saxophone-tenor, steeldrums, symphonic, tenor-trombone, tenor-trombone-muted, tuba, tubular-bells, vibraphone-bowed, vibraphone-hardmallet, viola-bowed, viola-martele, viola-muted, viola-natural, viola-pizzicato, violin-artificial, violin-bowed, violin-ensemble, violin-muted, violin-natural-harmonics, xylophone.

  5. Automatic Indexing of Music What is needed? A database of monophonic and polyphonic music signals and their descriptions in terms of new features (including temporal ones) in addition to the standard MPEG7 features. These signals are labeled by instruments and emotions, forming additional features called decision features. Why is it needed? To build classifiers for automatic indexing of musical sound by instruments and emotions.

  6. MIRAI - Cooperative Music Information Retrieval System based on Automatic Indexing (system diagram: the user's query, constrained by instruments and durations, goes through a Query Adapter to the Indexed Audio Database; returned music objects are checked for an empty answer before being presented to the user)

  7. Challenges to applying KDD in MIR The nature and types of raw data

  8. Feature extraction (pipeline): Signal Data → Sampling (0.12 s frame size, 0.04 s hop size; lower-level raw data) → Feature Extraction (MATLAB) → higher-level representations → Feature Database, a manageable input for traditional pattern recognition: classification, clustering, regression.
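  As a rough illustration of the sampling step above, here is a minimal Python sketch (NumPy only; the function name and the 440 Hz test tone are illustrative) that cuts a signal into 0.12 s frames with a 0.04 s hop:

```python
import numpy as np

def frame_signal(signal, sr, frame_len=0.12, hop=0.04):
    """Split a 1-D audio signal into overlapping analysis frames.

    frame_len and hop follow the slide's parameters:
    0.12 s frames with a 0.04 s hop (i.e. 2/3 overlap).
    """
    n = int(frame_len * sr)   # samples per frame
    h = int(hop * sr)         # samples per hop
    count = 1 + max(0, (len(signal) - n) // h)
    return np.stack([signal[i * h : i * h + n] for i in range(count)])

# Example: 2 seconds of a 440 Hz sine at 44.1 kHz
sr = 44100
t = np.arange(2 * sr) / sr
frames = frame_signal(np.sin(2 * np.pi * 440 * t), sr)
print(frames.shape)  # (48, 5292): 48 frames of 5292 samples each
```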

  9. MPEG7 features (extraction diagram): from the signal envelope – Log Attack Time and Temporal Centroid; from the power spectrum (Hamming window, NFFT points, STFT) – Spectral Centroid; from harmonic peak detection with instantaneous fundamental frequency estimation (Hamming window, STFT) – Instantaneous Harmonic Spectral Centroid, Instantaneous Harmonic Spectral Spread, Instantaneous Harmonic Spectral Deviation, Instantaneous Harmonic Spectral Variation.
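  A minimal sketch of how two of these descriptors, spectral centroid and spread, can be computed from one frame (Hamming window → FFT → power spectrum → moments); the function name and the silence guard are assumptions, not the MPEG-7 reference implementation:

```python
import numpy as np

def centroid_and_spread(frame, sr, nfft=None):
    """Spectral centroid and spread of one frame, in Hz."""
    nfft = nfft or len(frame)
    spectrum = np.fft.rfft(frame * np.hamming(len(frame)), n=nfft)
    power = np.abs(spectrum) ** 2
    freqs = np.fft.rfftfreq(nfft, d=1.0 / sr)
    total = power.sum() or 1.0                       # guard against silence
    centroid = (freqs * power).sum() / total         # first spectral moment
    spread = np.sqrt(((freqs - centroid) ** 2 * power).sum() / total)
    return centroid, spread
```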

  10. Derived Database MPEG7 features Non-MPEG7 features & new temporal features

  11. New Temporal Features – S’(i), C’(i), S’’(i), C’’(i)
S’(i) = [S(i+1) – S(i)]/S(i); C’(i) = [C(i+1) – C(i)]/C(i)
where S(i+1), S(i) and C(i+1), C(i) are the spectral spread and spectral centroid of two consecutive frames, frame i+1 and frame i. The changing ratios of spectral spread and spectral centroid between two consecutive frames are treated as the first derivatives of the spectral spread and spectral centroid. Following the same method, we calculate the second derivatives:
S’’(i) = [S’(i+1) – S’(i)]/S’(i); C’’(i) = [C’(i+1) – C’(i)]/C’(i)
Remark: the sequence [S(i), S(i+1), S(i+2), …, S(i+k)] can be approximated by the polynomial p(x) = a0 + a1*x + a2*x^2 + a3*x^3 + …; the coefficients a0, a1, a2, a3, … become new features.
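  These changing ratios and the polynomial approximation are straightforward to compute; a short Python sketch (the sample values of S are made up for illustration):

```python
import numpy as np

def changing_ratio(x):
    """x'(i) = [x(i+1) - x(i)] / x(i), frame to frame.
    Apply once for S'/C', twice for S''/C''."""
    x = np.asarray(x, dtype=float)
    return (x[1:] - x[:-1]) / x[:-1]

# Illustrative per-frame spectral spread values S(i), ..., S(i+4)
S = np.array([510.0, 523.0, 540.0, 561.0, 580.0])
S1 = changing_ratio(S)    # S'(i)
S2 = changing_ratio(S1)   # S''(i)

# Polynomial approximation of the sequence; np.polyfit returns the
# highest-degree coefficient first, so unpack in reverse order.
a3, a2, a1, a0 = np.polyfit(np.arange(len(S)), S, deg=3)
```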

  12. Experiment with WEKA: 19 instruments [flute, piano, violin, saxophone, vibraphone, trumpet, marimba, french-horn, viola, bassoon, clarinet, cello, trombone, accordion, guitar, tuba, english-horn, oboe, double-bass]. J48 with a 0.25 confidence factor for tree pruning and a minimum of 10 instances per leaf; KNN with 3 neighbors and Euclidean distance as the similarity function. Classification confidence with temporal features.

  13. Confusion matrices: left is from Experiment 1, right is from Experiment 3. Correctly classified instances are highlighted in green; incorrectly classified instances are highlighted in yellow.

  14. Precision, recall, and F-score of the decision tree for each instrument.

  15. Polyphonic sounds – how to handle? • Single-label classification based on sound separation • Multi-labeled classifiers. Problems? Sound separation flowchart: polyphonic sound → segmentation → get frame → feature extraction → classifier → get instrument → sound separation (subtract the recognized signal and repeat). Information is lost during the signal subtraction.

  16. Timbre estimation in polyphonic sounds and designing multi-labeled classifiers. Timbre-relevant descriptors: • Spectrum Centroid and Spread • Spectrum Flatness Band Coefficients • Harmonic Peaks • Mel frequency cepstral coefficients (MFCC) • Tristimulus
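  For example, tristimulus summarizes the harmonic peaks in three numbers; this sketch uses one common definition (share of the 1st harmonic, of harmonics 2–4, and of harmonics 5 and above), which may differ in detail from the feature used in MIRAI:

```python
import numpy as np

def tristimulus(harmonic_amps):
    """T1, T2, T3 from harmonic peak amplitudes, ordered by harmonic number."""
    a = np.asarray(harmonic_amps, dtype=float)
    total = a.sum() or 1.0             # guard against an all-zero input
    return a[0] / total, a[1:4].sum() / total, a[4:].sum() / total

print(tristimulus([1.0, 0.6, 0.4, 0.2, 0.1, 0.05]))
```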

  17. Timbre estimation based on a multi-label classifier: 1-second window segmentation → get frame → feature extraction (timbre descriptors) → classifier.

  18. Timbre Estimation Results based on different methods [Instruments – 45; Training Data (TD) – 2917 single-instrument sounds from MUMS; testing on 308 mixed sounds randomly chosen from TD; window size – 1 s, frame size – 120 ms, hop size – 40 ms (~25 frames); Mel-frequency cepstral coefficients (MFCC) extracted from each frame]. A threshold of 0.4 controls the total number of estimations for each indexed window.
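  A hedged sketch of how such a threshold might gate the multi-label output per 1 s window; averaging the per-frame confidences is an assumption here, as the slide only states that the 0.4 threshold controls the number of estimations:

```python
def multilabel_estimate(frame_scores, threshold=0.4):
    """Keep every instrument whose average confidence over the ~25
    frames of a 1 s window reaches the threshold (0.4 on the slide).

    frame_scores: one {instrument: confidence} dict per frame.
    """
    totals = {}
    for scores in frame_scores:
        for instrument, conf in scores.items():
            totals[instrument] = totals.get(instrument, 0.0) + conf
    n = len(frame_scores)
    return sorted(i for i, s in totals.items() if s / n >= threshold)

frames = [{"flute": 0.7, "violin": 0.5}, {"flute": 0.8, "oboe": 0.3}]
print(multilabel_estimate(frames))  # ['flute']
```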

  19. Polyphonic sounds: polyphonic sound (window) → get frame → feature extraction → classifiers → multiple labels. Compressed representations of the signal: Harmonic Peaks, Mel Frequency Cepstral Coefficients (MFCC), Spectral Flatness, …. Irrelevant information (inharmonic frequencies, or partials) is removed. Violin and viola have similar MFCC patterns; the same holds for double bass and guitar. It is difficult to distinguish them in polyphonic sounds; more information from the raw signal is needed.

  20. Short Term Power Spectrum – a low-level representation of the signal (calculated by STFT). Spectrum slice – 0.12 seconds long. The power spectrum patterns of flute and trombone can be seen in the mixture.

  21. Experiment: middle C instrument sounds (pitch equal to C4 in MIDI notation, frequency 261.6 Hz). Training set: power spectra from 3323 frames, extracted by STFT from 26 single-instrument sounds: electric guitar, bassoon, oboe, B-flat clarinet, marimba, C trumpet, E-flat clarinet, tenor trombone, French horn, flute, viola, violin, English horn, vibraphone, accordion, electric bass, cello, tenor saxophone, B-flat trumpet, bass flute, double bass, alto flute, piano, Bach trumpet, tuba, and bass clarinet. Testing set: fifty-two audio files, each mixed (using Sound Forge) from two of these 26 single-instrument sounds. Classifiers – (1) KNN with Euclidean distance (spectrum-match-based classification); (2) Decision Tree (multi-label classification based on previously extracted features).

  22. Timbre Pattern Match Based on Power Spectrum. n – number of labels assigned to each frame; k – parameter for KNN.
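  A minimal sketch of spectrum-match classification along these lines: Euclidean KNN over power-spectrum vectors, returning the n most frequent labels among the k nearest neighbors (function and variable names are illustrative):

```python
import numpy as np

def knn_top_labels(test_spectrum, train_spectra, train_labels, k=3, n=2):
    """Match a frame's power spectrum against the training spectra with
    Euclidean KNN and return the n most frequent neighbor labels."""
    d = np.linalg.norm(np.asarray(train_spectra) - test_spectrum, axis=1)
    votes = {}
    for i in np.argsort(d)[:k]:           # indices of the k nearest spectra
        votes[train_labels[i]] = votes.get(train_labels[i], 0) + 1
    return sorted(votes, key=votes.get, reverse=True)[:n]
```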

  23. Schema I – Hornbostel-Sachs (classification tree): top-level families – Idiophone, Membranophone, Aerophone, Chordophone; aerophone subgroups – Lip Vibration, Single Reed, Free, Side, Whip; example leaves – C Trumpet, Tuba, Bassoon, Flute, French Horn, Oboe, Alto Flute.

  24. Schema II – Play Methods (classification tree): …, Blow, Bowed, Muted, Picked, Pizzicato, Shaken, …; example leaves – Alto Flute, Flute, Piccolo, Bassoon, ……

  25. Decision Table (slide by Xin Cynthia Zhang)

  26. Example (two-level decision table): Level I – classification attributes C[1], C[2] and decision attributes d[1], d[2], d[3]; Level II – C[2] refined into C[2,1], C[2,2] and d[3] refined into d[3,1], d[3,2].

  27. Instrument granularity: classifiers are trained at each level of the hierarchical Hornbostel/Sachs tree. We do not include membranophones, because instruments in this family usually do not produce harmonic sounds, so special techniques are needed to identify them.
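  A schematic two-level cascade along these lines, sketched with scikit-learn KNN classifiers as stand-ins (the class, parameters, and two-level depth are illustrative assumptions, not the MIRAI implementation):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

class CascadeClassifier:
    """Predict the instrument family first, then dispatch the sample
    to a classifier trained only on that family's instruments."""

    def __init__(self, families):
        self.family_clf = KNeighborsClassifier(n_neighbors=3)
        self.instrument_clfs = {f: KNeighborsClassifier(n_neighbors=3)
                                for f in families}

    def fit(self, X, family_labels, instrument_labels):
        X = np.asarray(X)
        fams = np.asarray(family_labels)
        insts = np.asarray(instrument_labels)
        self.family_clf.fit(X, fams)
        for fam, clf in self.instrument_clfs.items():
            mask = fams == fam                 # this family's samples only
            clf.fit(X[mask], insts[mask])
        return self

    def predict(self, X):
        X = np.asarray(X)
        fams = self.family_clf.predict(X)
        return [self.instrument_clfs[f].predict(x.reshape(1, -1))[0]
                for f, x in zip(fams, X)]
```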

  28. Modules of the cascade classifier for single instrument estimation – Hornbostel/Sachs, pitch 3B: 96.02% × 98.94% ≈ 95.00% > 91.80%.

  29. New Experiment: • Middle C instrument sounds (pitch equal to C4 in MIDI notation, frequency 261.6 Hz) • Training set: 2762 frames extracted from the following instrument sounds: electric guitar, bassoon, oboe, B-flat clarinet, marimba, C trumpet, E-flat clarinet, tenor trombone, French horn, flute, viola, violin, English horn, vibraphone, accordion, electric bass, cello, tenor saxophone, B-flat trumpet, bass flute, double bass, alto flute, piano, Bach trumpet, tuba, and bass clarinet • Classifiers – WEKA: (1) KNN with Euclidean distance (spectrum-match-based classification); (2) Decision Tree (classification based on previously extracted features) • Confidence – ratio of correctly classified instances over the total number of instances

  30. Classification on different Feature Groups

  31. Feature and classifier selection at each level of the cascade system: KNN + Band Coefficients.

  32. Classification on the combination of different feature groups: classification based on KNN; classification based on Decision Tree.

  33. From these two experiments, we see that: • The KNN classifier works better with feature vectors such as spectral flatness coefficients, projection coefficients, and MFCC. • The decision tree works better with harmonic peaks and statistical features. • Simply adding more features together does not improve the classifiers and sometimes even worsens classification results (e.g., adding harmonic peaks to other feature groups).

  34. HIERARCHICAL STRUCTURE BUILT BY CLUSTERING ANALYSIS. Seven common methods to calculate the distance or similarity between clusters: single linkage (nearest neighbor), complete linkage (furthest neighbor), unweighted pair-group method using arithmetic averages (UPGMA), weighted pair-group method using arithmetic averages (WPGMA), unweighted pair-group method using the centroid average (UPGMC), weighted pair-group method using the centroid average (WPGMC), and Ward's method. Six most common distance functions: Euclidean, Manhattan, Canberra (examines the sum of a series of fractional differences between coordinates of a pair of objects), Pearson correlation coefficient (PCC) – measures the degree of association between objects, Spearman's rank correlation coefficient, and Kendall's tau (counts the number of pairwise disagreements between two lists). Clustering algorithm – HCLUST (agglomerative hierarchical clustering), from the R package.
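  The slides use R's hclust; a rough Python equivalent with SciPy for one of the 7 × 6 linkage/distance combinations (random data stands in for the real feature matrix):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Stand-in feature matrix: one row per 0.12 s frame
# (e.g. flatness coefficients), random here for illustration.
X = np.random.rand(200, 32)

# One combination from the sweep: Ward linkage over Euclidean
# distances. (Slide 38's best run pairs Ward with the Pearson
# distance, metric="correlation" in SciPy.)
Z = linkage(pdist(X, metric="euclidean"), method="ward")

# Cut the tree into a chosen number of clusters and obtain one
# cluster ID per frame, as described on slide 35.
cluster_ids = fcluster(Z, t=46, criterion="maxclust")
```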

  35. Testing datasets (MFCC, flatness coefficients, harmonic peaks): the middle C pitch group, which contains 46 different musical sound objects. Each sound object is segmented into multiple 0.12 s frames, and each frame is stored as an instance in the testing dataset; there are 2884 frames in total. This dataset is represented by 3 different sets of features (MFCC, flatness coefficients, and harmonic peaks). Total number of experiments = 3 × 7 × 6 = 126 (feature sets × linkage methods × distance functions). Clustering: when the algorithm finishes the clustering process, a particular cluster ID is assigned to each single frame.

  36. Contingency Table derived from clustering result

  37. Evaluation result of the Hclust algorithm (the 14 results which yield the highest score among the 126 experiments); w – number of clusters, α – average clustering accuracy over all the instruments, score = α*w.

  38. Clustering result from the Hclust algorithm with the Ward linkage method and the Pearson distance measure; flatness coefficients are used as the selected feature. “ctrumpet” and “bachtrumpet” are clustered in the same group. “ctrumpet_harmonStemOut” is clustered in a single group instead of merging with “ctrumpet”. Bassoon appears as the sibling of the regular French horn. “French horn muted” is clustered in a different group together with “English horn” and “oboe”.

  39. Looking for the optimal [classification method × data representation] in polyphonic music [middle C pitch group – 46 different musical sound objects]. Testing data: 49 polyphonic sounds created by selecting three different single-instrument sounds from the training database and mixing them together. KNN (k=3) is used as the classifier for each experiment.

  40. Looking for the optimal [classification method × data representation] in polyphonic music. Testing data: 49 polyphonic sounds created by selecting three different single-instrument sounds from the training database and mixing them together. KNN (k=3) is used as the classifier for each experiment.

  41. WWW.MIR.UNCC.EDU • Auto-indexing system for musical instruments • Intelligent query answering system for music instruments

  42. Questions?

  43. The user enters a query; if the user is not satisfied, he enters a new query – Action Rules System.

  44. Action Rule. An action rule is defined as a term [(ω) ∧ (α→β)] → (ϕ→ψ), where ω is a conjunction of fixed condition features shared by both groups, (α→β) describes the proposed changes in the values of flexible features, and (ϕ→ψ) is the desired effect of the action in the information system.
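  A hypothetical encoding of this term as a data structure, purely to make its three parts concrete (the attribute and value names are invented):

```python
from dataclasses import dataclass

@dataclass
class ActionRule:
    stable: dict      # omega: fixed condition features, e.g. {"A3": "a3"}
    flexible: dict    # alpha -> beta per flexible attribute
    decision: tuple   # phi -> psi on the decision attribute d

rule = ActionRule(
    stable={"A3": "a3"},
    flexible={"A1": ("a1", "a1'"), "A2": ("a2", "a2'")},
    decision=("d1", "d1'"),
)
```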

  45. Action Rules Discovery. Meta-actions-based decision system S(d) = (X, A∪{d}, V), with A = {A1, A2, …, Am}. Influence Matrix: if E32 = [a2 → a2’], then E31 = [a1 → a1’] and E34 = [a4 → a4’]. Candidate action rule: r = [(A1, a1 → a1’) ∧ (A2, a2 → a2’) ∧ (A4, a4 → a4’)] ⇒ (d, d1 → d1’). Rule r is supported and covered by M3.

  46. "Action Rules Discovery without pre-existing classification rules", Z.W. Ras, A. Dardzinska, Proceedings of RSCTC 2008 Conference, in Akron, Ohio, LNAI 5306, Springer, 2008, 181-190 http://www.cs.uncc.edu/~ras/Papers/Ras-Aga-AKRON.pdf ROOT

  47. Since the window diminishes the signal at both edges, it leads to information loss due to the narrowing of the frequency spectrum. To preserve this information, consecutive analysis frames overlap in time. Empirical experiments show the best overlap is two-thirds of the window size. (Diagram: overlapping analysis windows along the time axis.)

  48. Windowing: a Hamming window is applied to each frame to reduce spectral leakage.
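  A small numerical demonstration of why this matters: a sinusoid whose frequency falls between FFT bins leaks energy across the rectangular-window spectrum, and a Hamming window suppresses that leakage (the sample rate and frequency are illustrative):

```python
import numpy as np

sr, n = 8000, 512
t = np.arange(n) / sr
x = np.sin(2 * np.pi * 440.5 * t)   # off-bin frequency -> leakage

rect = np.abs(np.fft.rfft(x))                  # no window (rectangular)
hamm = np.abs(np.fft.rfft(x * np.hamming(n)))  # Hamming-windowed

# Energy far from the spectral peak is much lower with the Hamming window.
peak = int(np.argmax(rect))
print(rect[peak + 50] / rect[peak])   # rectangular: relatively large
print(hamm[peak + 50] / hamm[peak])   # Hamming: orders of magnitude smaller
```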
