
Compositional Technology Week 2

Musique concrète and elektronische Musik - theory and practice.


Presentation Transcript


  1. Compositional Technology Week 2 Musique concrète and elektronische Musik - theory and practice

  2. Musique concrète is a form of electroacoustic music that is made in part from acousmatic sound. It can feature sounds derived from recordings of musical instruments, voice, and the natural environment. It was originally contrasted with "pure" elektronische Musik, which is based solely on the production and manipulation of electronically produced sounds rather than recorded ones. The theoretical basis of musique concrète as a compositional practice was developed by Pierre Schaeffer, beginning in the early 1940s.

  3. Karlheinz Stockhausen worked briefly in Schaeffer's studio in 1952, and afterward for many years at the WDR Cologne Studio for Electronic Music. In his 1949 thesis Elektronische Klangerzeugung: Elektronische Musik und synthetische Sprache, Werner Meyer-Eppler conceived the idea of synthesizing music entirely from electronically produced signals; in this way, elektronische Musik was sharply differentiated from French musique concrète, which used sounds recorded from acoustical sources. • "With Stockhausen and Mauricio Kagel in residence, it became a year-round hive of charismatic avant-gardism..." Stockhausen stated that his listeners had told him his electronic music gave them an experience of 'outer space', sensations of flying, or of being in a 'fantastic dream world' (Lebrecht 1996, p. 75).

  4. More elegant approaches to additive synthesis in Max • Harmonics and partials • All sounds can be thought of as mixtures of sine tones. We call this the 'spectrum' of the sound, because it is a bit like splitting white light into its component colours. In theory, we can simulate any sound by adding sine tones together, although in practice we need very many sine tones (at least several hundred) and very complex control over their loudness over time. But we can still do quite a lot with just a few sine tones.
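Max itself is graphical, so as a supplement here is the spectrum idea as a minimal Python/NumPy sketch (the fundamental, the number of partials and the 1/k amplitudes are arbitrary example choices): summing whole-number harmonics with 1/k amplitudes approximates a sawtooth wave.

```python
import numpy as np

SR = 44100                      # sample rate in Hz
t = np.arange(SR) / SR          # one second of sample times

def sawtooth_approx(fundamental, n_partials):
    """Approximate a sawtooth by summing sine-tone partials:
    partial k sits at k * fundamental with amplitude 1/k."""
    out = np.zeros_like(t)
    for k in range(1, n_partials + 1):
        out += np.sin(2 * np.pi * k * fundamental * t) / k
    return out / np.max(np.abs(out))    # normalise to +/-1

wave = sawtooth_approx(220.0, 8)        # 8 partials already sound 'buzzy'
```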

  5. As we saw last time, adding sine tones together to build up a spectrum is called additive synthesis. Each sine tone is called a 'partial', with the lowest-frequency partial called the 'fundamental'. In harmonic sounds (like instruments or voices) the frequencies of the partials relate to that of the fundamental by whole numbers (2 × fundamental, 3 × fundamental, etc.). This is called a 'harmonic spectrum', and the partials are sometimes called 'harmonics'. If the multipliers are not whole numbers we have an 'inharmonic' spectrum, rather like a bell. We should not call the individual sine tones 'harmonics' in this case, because they are not. • Early experiments with recorded sound showed that when we hear real-world sounds it is not so much the mixture of partials that gives a sound its characteristic 'timbre', but the way those partials change over time. So to make realistic, vibrant, 'alive' sounds we need to control not just the frequency of each partial but also its amplitude over time.
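Extending the sketch above to this point (the decay rates are invented for illustration; the inharmonic multipliers are the ones suggested later in these slides): whole-number multipliers give a harmonic spectrum, non-integer multipliers an inharmonic, bell-like one, and a per-partial amplitude envelope gives the sound some life over time.

```python
import numpy as np

SR = 44100
t = np.arange(2 * SR) / SR      # two seconds of sample times

def spectrum(fundamental, multipliers, decay=2.0):
    """One sine per multiplier; each partial gets its own exponential
    decay, with higher partials dying away faster."""
    out = np.zeros_like(t)
    for i, m in enumerate(multipliers):
        env = np.exp(-decay * (i + 1) * t)   # per-partial amplitude envelope
        out += env * np.sin(2 * np.pi * m * fundamental * t)
    return out / len(multipliers)

harmonic_tone   = spectrum(220.0, [1, 2, 3, 4, 5, 6])      # instrument-like
inharmonic_tone = spectrum(220.0, [1, 2.3, 4.5, 6.6, 6.9]) # bell-like
```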

  6. Generating partials • We can make a subpatch and save it as an 'abstraction', which can take arguments and which we can reuse over and over again. Since we want to make sounds with many partials, it makes sense to design an abstraction that generates one partial, then re-use it for however many partials we want. (NOTE: when making this abstraction, it is fine to end the name with ~ if we want; this reminds us that it is a signal-based object.)

  7. The parameters we will supply to our abstraction are: • Fundamental frequency in Hz. • Multiplier (how many times higher our partial is than the fundamental; e.g. with a fundamental of 440 Hz, a multiplier of 2 gives a partial at 880 Hz). • Amplitude envelope (a series of breakpoint pairs for line~). The fundamental frequency will simply come from the note we want to play.

  8. The multiplier can be supplied as a variable via an inlet to the object. We can also give it a default argument, in this case 1, which is used until a new value is supplied. To achieve this we use a * object and give it the argument #1, but we also attach the inlet to the right inlet of the * object. The #1 will be replaced with whatever argument we supply, but if a new value arrives at the inlet later on, it overrides that argument. • The amplitude will come from an envelope generator which provides a list of value/time pairs. We can feed these into a line~ object. • The output of the abstraction will be a sine tone. We can send this out using a send~ object. If all abstractions use the same send~ and receive~ arguments, they will be mixed together (which is what we want). If we want to, we can make the abstraction stereo, with two sine generators inside; in that case we would send the outputs via two send~ objects with different names. • When we have made our abstraction, we can put however many copies we want in a parent patch and supply them with arguments. If we give them whole-number multipliers (say 1, 2, 3, 4, 5, 6), we generate a harmonic sound. If we use non-integer multipliers (say 1, 2.3, 4.5, 6.6, 6.9), we get an inharmonic sound, rather like a bell. (A code sketch of this design follows below.)
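A text analogue of this design, as a hedged Python/NumPy sketch (the function names and envelope values are invented for the example): one function plays the role of the abstraction, taking the same three parameters, and summing its outputs plays the role of mixing through a shared send~/receive~ pair.

```python
import numpy as np

SR = 44100

def render_env(breakpoints, n_samples):
    """Render a line~-style envelope: a start value followed by
    (target, milliseconds) pairs, linearly interpolated."""
    out = np.full(n_samples, float(breakpoints[0]))
    current, pos = float(breakpoints[0]), 0
    for target, ms in breakpoints[1:]:
        end = min(pos + int(SR * ms / 1000), n_samples)
        out[pos:end] = np.linspace(current, target, end - pos)
        current, pos = float(target), end
    out[pos:] = current
    return out

def partial(fundamental, multiplier, env_breakpoints, n_samples):
    """The 'abstraction': one enveloped sine at multiplier * fundamental."""
    t = np.arange(n_samples) / SR
    env = render_env(env_breakpoints, n_samples)
    return env * np.sin(2 * np.pi * multiplier * fundamental * t)

# The 'parent patch': six copies mixed into one signal, which is the
# role the shared send~/receive~ pair plays in Max.
n = SR  # one second
env = [0.0, (1.0, 20), (0.3, 300), (0.0, 680)]   # attack / decay / release
mix = sum(partial(220.0, m, env, n) for m in [1, 2, 3, 4, 5, 6]) / 6
```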

  9. Drawing envelopes • To control the amplitude, we can use a function object. This allows us to draw a trajectory and use it to send values to the line~ object in our partial generator. The x-axis of the function will be the length of our note in ms; we can set this with the setdomain $1 message ($1 being the variable for how many ms we want the function to run). Now if we bang the function object, it will send out x/y pairs to line~ and thus control the amplitude of our sine. • Now all we need to do is add the ability to 'play' the sounds. To do this simply, we can use notein, and the mtof object can convert MIDI note numbers into frequencies (see below). Functionality can be added by including an on-screen keyboard and MIDI device selection.
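The mtof conversion itself is just standard equal temperament, which a couple of lines of the same Python sketch style make explicit:

```python
def mtof(midi_note):
    """Equal-tempered MIDI-note-to-frequency, as Max's mtof object:
    f = 440 * 2^((m - 69) / 12), with A4 = MIDI note 69 = 440 Hz."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

print(mtof(69))   # 440.0 (A4)
print(mtof(60))   # ~261.63 (middle C)
```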

  10. FM synthesis • As we have seen, additive synthesis is achieved by adding sine tones together, but we need many sine tones for the result to be interesting. Slightly paradoxically, FM synthesis can create complex sounds in a much simpler way. The basic idea is to take a sine wave oscillator and use another sine wave to continuously change (or modulate) the frequency of the first (hence 'frequency modulation'). Because the controlling oscillator is also at audio frequency, we get 'interference patterns' between the two signals, which are heard as additional partials. The pattern of partials created depends on the ratio between the two frequencies: a simple ratio (e.g. 1:2) creates a simpler, harmonic sound; a complex ratio creates a complex, inharmonic sound.
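Here is that idea as a minimal Python/NumPy sketch (the specific frequencies and deviation are arbitrary example values): the modulator is added to the carrier's instantaneous frequency, which is then integrated into a phase.

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR

def fm(carrier_hz, modulator_hz, deviation_hz):
    """Frequency modulation as described above: the modulator
    continuously raises and lowers the carrier's instantaneous
    frequency, which is integrated to give the phase."""
    inst_freq = carrier_hz + deviation_hz * np.sin(2 * np.pi * modulator_hz * t)
    phase = 2 * np.pi * np.cumsum(inst_freq) / SR
    return np.sin(phase)

harmonic_fm   = fm(220.0, 440.0, 200.0)   # 1:2 ratio -> harmonic timbre
inharmonic_fm = fm(220.0, 317.0, 200.0)   # complex ratio -> inharmonic
```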

  11. FM terminology • There are two sine waves. The first is called the 'carrier' (the basic pitch of our note); the second is the 'modulator', which changes the frequency of the carrier wave and determines the timbre (or tone colour). The third parameter is the 'modulation index': the amount by which the modulator changes the frequency of the carrier.

  12. Making an FM patch • To make an FM patch, first we create a cycle~ for the carrier, with a gain~ to control the level and a dac~ for output. This is the basic frequency of our note. But we are going to modulate this frequency, which means adding to or subtracting from it a small, continuously varying amount. NOTE: we are not adding to the signal output by the cycle~; we are adding to the frequency of the cycle~, not its amplitude. • To do this we create a second cycle~: the modulator. Its output is summed with the frequency of the first, so the frequency of the first vibrates up and down. But we need to control how much of the modulator is added, because this makes a big difference. This is the 'modulation index' (essentially the input level of the modulator). To set it we put a *~ object at the output of the modulator cycle~, with a flonum to control it.

  13. When we change the carrier frequency, the 'note' changes, but we do not want the timbre to change: we want the ratio between the carrier and modulator to remain constant. We do this by multiplying the carrier frequency by a 'harmonicity ratio' to derive the modulator frequency. If we use (say) a harmonicity ratio of 2, we guarantee that the modulator is always double the frequency of the carrier, no matter what note the carrier is playing. • In order to change the values continuously without clicks or glitches, we need to use signals rather than numbers, so we can put sig~ after all the numbers to turn them into signals. • We may also need one more modification. To get an even timbre, we need more modulation at high frequencies and less at low, so we multiply the modulation index by the modulator frequency. This means we can use small numbers for the modulation index rather than huge ones, and we automatically get more modulation as we play higher notes. This is not strictly essential for FM synthesis, but it can be very useful. (These refinements are sketched in code below.)
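Putting the harmonicity ratio and the index scaling together in the same sketch style (parameter values are illustrative only): the modulator frequency is carrier × harmonicity, and the actual frequency deviation is index × modulator frequency, so a small dimensionless index works at any pitch.

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR

def fm_voice(carrier_hz, harmonicity, index):
    """FM with a constant carrier:modulator ratio.
    modulator = carrier * harmonicity  (timbre tracks the note)
    deviation = index * modulator      (index stays a small number)"""
    mod_hz = carrier_hz * harmonicity
    deviation = index * mod_hz
    inst_freq = carrier_hz + deviation * np.sin(2 * np.pi * mod_hz * t)
    return np.sin(2 * np.pi * np.cumsum(inst_freq) / SR)

note = fm_voice(220.0, 2.0, 1.5)   # ratio 2, index 1.5: bright but harmonic
```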

  14. Envelopes and shaping • We can use function and line~ here to create envelopes, just as before. With many real instruments the timbre also changes over time, so we can use another function to control the modulation index.
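Sketched the same way, with an invented exponential shape standing in for the drawn function: the modulation index follows its own envelope, so the note is bright at the attack and mellows as it decays.

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR

def fm_note(carrier_hz, harmonicity, peak_index):
    """FM voice whose modulation index decays over the note, so the
    spectrum is rich at the attack and purer towards the tail."""
    mod_hz = carrier_hz * harmonicity
    index_env = peak_index * np.exp(-4.0 * t)    # timbre envelope
    inst_freq = carrier_hz + index_env * mod_hz * np.sin(2 * np.pi * mod_hz * t)
    amp_env = np.exp(-2.0 * t)                   # overall loudness envelope
    return amp_env * np.sin(2 * np.pi * np.cumsum(inst_freq) / SR)

note = fm_note(220.0, 2.0, 3.0)
```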

  15. Task 1 - Build and demonstrate one of the synthesis methods above in Max. Variation and experimentation are encouraged.

  16. Musique concrète & l'objet sonore • Pierre Schaeffer, with whom the French term l'objet sonore is most associated, describes it as an acoustical "object for human perception and not a mathematical or electroacoustical object for synthesis." The sound object may be defined as the smallest self-contained element of a soundscape, analysable by the characteristics of its spectrum, loudness and envelope. • Though the sound object may be referential (e.g. a bell, a drum, etc.), it is to be considered primarily as a phenomenological sound formation, independent of its referential qualities as a sound event. Schaeffer: "The sound object must not be confused with the sounding body by which it is produced," for one sounding body "may supply a great variety of objects whose disparity cannot be reconciled by their common origin." Similarly, the sound object may be considered independently of the social or musical contexts in the soundscape from which it is isolated. Sound objects may be classified according to their morphology and typology.

  17. Money where my mouth is • https://soundcloud.com/virtual440/postcards-fom-home-low-bit-mp3-version • 'Written' in 8-channel surround sound, entirely from sound objects recorded along the North Wales coast. • Performed as part of the International Computer Music Conference 2012, Slovenia.

  18. Task 2 - Sound Objects • Part 1: Source recording and editing • Create TEN examples of original 'sound objects' you have recorded and properly edited ('topped and tailed'). • These need not be ten different physical sound-producing objects; you might produce ten interesting 'sound objects' from just one physical object. • They need not be ten very different sound objects. For example, you might submit three versions of striking the same physical metal object to produce similar but clearly different resonances with different distributions of partials ('harmonics'). These would be three different 'sound objects'. Essentially, the question to ask is whether including two or more versions of the same type of sound really increases the musical possibilities open to you when composing. • Part 2: Source development • Using just ONE of the ten sound objects submitted above, submit FIVE examples of that one sound object processed or transformed in different ways, PLUS FIVE further transformations of just ONE of the first five. In other words, you need to submit five '1st generation' plus five '2nd generation' transformations.

  19. Sample module submissions: • Submission of a portfolio of 3 stereo tracks totalling approximately 8 mins, comprising music made entirely from found sounds and processed sound objects. • Submission of a portfolio of 3 stereo tracks totalling approximately 8 mins, comprising music made from synthesised sounds. • A 3-minute demonstration of your own digital instrument built in Max, controlled by live triggering of your own sampled and processed audio. • A 4-minute demonstration of your own digital instrument built in Max, controlled by iPhone touch screen over OSC. • Generation of an 8-minute notated score by algorithmic methods. • A sound installation work capable of sustaining interest for a minimum of 8 minutes.
