
Programming by playing and approaches for expressive robot performances

Angelica Lim, Takeshi Mizumoto, Toru Takahashi, Tetsuya Ogata, and Hiroshi G. Okuno (Kyoto University). Goal: a robot accompanist that plays with your musical expression, programmed by playing.

Goal
A robot accompanist that plays with your musical expression.

Programming by playing
• It is difficult to describe expression, but easy to demonstrate it. We therefore let the human program the accompanist by playing in their desired style.
• A continuous pitch and volume representation captures almost all aspects of expression in a piece.

Extraction from a recording
• Robot noise filtering
• Pitch extraction
• Power estimation

The robot then generates a performance (flute → theremin) in one of two ways:
1. Inter-instrument performance transfer: the extracted pitch and power scores are constrained based on player and instrument differences, then played by the thereminist robot.
2. Expression from a score: we use (a) prosody rules such as "the last note in a phrase is longer" and (b) note volume envelopes based on a human's articulation example, to suggest emotion.

Background
• Our ensemble thereminist robot [1] can respond to a flutist's cues to change tempo.
• But subtle, continuous changes and variations cannot be cued one by one. These minute changes in volume, tempo, pitch, etc. are known as "expression".
• Who would be the perfect musical partner? A player with the same expression as you would be in perfect sync.

[Figure: robot thereminist [1] performing with a human flutist]

Challenges
Expressiveness should take into account:
• note length
• intra- and inter-note volume changes
• vibrato
• articulation
• pitch bends
• timbre (if applicable)

A model for expression
How can we make robots sound more human? Based on music psychologist Juslin's GERMS model [2], we use five controllers:
• Prosody controller
• Emotion controller
• Humanness controller
• Motor smoothness controller
• Originality controller

Related work
• Expressive flutist robot, Solis et al., 2007: a neural network reproduces a human flutist's note length and vibrato (flute → flute).
• VocaListener, Nakano and Goto, 2008: a synthesized voice with the expressive pitch and volume of a human singer (singer → singer).

Watch our demo!

Future work
Implementing the full suite of controllers in the GERMS model, and learning parameters for prediction.

References
[1] Lim et al., "Robot Musical Accompaniment: Integrating Audio and Visual Cues for Real-time Synchronization with a Human Flutist," IROS, 2010.
[2] Juslin, "Five Facets of Musical Expression: A Psychologist's Perspective on Music Performance," Psychology of Music, 2003.
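The extraction stage above (pitch extraction and power estimation from a recording) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: it uses frame-wise autocorrelation for pitch and RMS for power on a synthetic signal, and omits the robot noise filtering step; all function and parameter names are our own.

```python
import numpy as np

def extract_pitch_power(signal, sr, frame_len=2048, hop=512,
                        fmin=80.0, fmax=1000.0):
    """Per-frame pitch (Hz, via autocorrelation peak) and power (RMS)."""
    lag_min, lag_max = int(sr / fmax), int(sr / fmin)
    pitches, powers = [], []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        powers.append(float(np.sqrt(np.mean(frame ** 2))))  # RMS power
        frame = frame - frame.mean()
        # Autocorrelation; keep non-negative lags only.
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        if ac[0] <= 0:  # silent frame: no pitch estimate
            pitches.append(0.0)
            continue
        # Strongest lag in the allowed range -> fundamental period.
        lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
        pitches.append(sr / lag)
    return np.array(pitches), np.array(powers)

# Synthetic check: a 440 Hz sine with a crescendo.
sr = 16000
t = np.arange(sr) / sr
audio = np.linspace(0.1, 1.0, sr) * np.sin(2 * np.pi * 440.0 * t)
pitch, power = extract_pitch_power(audio, sr)
# The median detected pitch lands near 440 Hz (within lag quantization),
# and the RMS curve rises across the crescendo.
```

The resulting continuous pitch and power curves are exactly the representation the poster argues for: they can be replayed on another instrument (performance transfer) or compared against a nominal score to recover expressive deviations.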
