Tiled Display Walls: Spatial Audio Integration



  1. Tiled Display Walls: Spatial Audio Integration Glenn Bresnahan

  2. Opening Remarks • A word from our sponsor: • Overview: • Sound: it’s more than meets the eye. • Implementation options • BU DAFFIE soundserver overview • Examples • “Hello, world” • Sharks

  3. Uses of computer sound • Warnings, system messages, alarms, etc. • Speech I/O • Networked telephony • Sound effects • Music

  4. Uses of computer sound (cont.) • Auralization: Realistic rendering of sonic environments (e.g., room acoustics simulation). Example (see www.netg.se/~catt/): • Audification: Direct sound playback (time/frequency scaling only). Example: plasma waves in outer magnetosphere (see www-istp.gsfc.nasa.gov/istp/polar/polar_pwi_descs.html): • Sonification: Data-driven sound playback. Example: stock data at time of 1987 crash mapped to pitch (see www.icad.org/websiteV2.0/Conferences/ICAD92/proceedings92.html):

  5. Spatial audio • Increases the credibility of computer/VR objects and environments. • Can be used for orienting in 3D space (e.g., sound 'beacon' for origin of model). • Can be used as another sonification 'axis'. • In telephony applications, makes multiple simultaneous voices easier to distinguish (e.g., localized avatar speech in VR).

  6. Localization methods: HRTF • “Head Related Transfer Function” • A measure of the filtering effect of head, torso, and ears on incoming sound • Using microphones placed in subject’s ear canals, measure sound impulses played from multiple locations around, above and below listener • Convolve filter pair (L/R) corresponding to a given location with “dry” sound to produce perception of sound at that location.
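The convolution step on this slide can be sketched in plain C. This is an illustrative direct-form FIR convolution, not the soundserver's implementation; `hrir_l`/`hrir_r` stand in for a measured left/right HRIR pair, and real systems typically use fast (FFT-based) convolution instead.

```c
#include <stddef.h>

/* Convolve a dry mono signal with a measured left/right impulse-response
 * pair to produce binaural output.  hrir_l/hrir_r are placeholders for
 * real measured HRTF data for one source location. */
void hrtf_render(const float *dry, size_t n,
                 const float *hrir_l, const float *hrir_r, size_t taps,
                 float *out_l, float *out_r)
{
    for (size_t i = 0; i < n; i++) {
        float l = 0.0f, r = 0.0f;
        /* direct-form convolution sum over the available history */
        for (size_t k = 0; k < taps && k <= i; k++) {
            l += hrir_l[k] * dry[i - k];
            r += hrir_r[k] * dry[i - k];
        }
        out_l[i] = l;
        out_r[i] = r;
    }
}
```

To move a source, the renderer swaps in the filter pair measured nearest the desired location (often interpolating between neighboring pairs).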

  7. Localization Methods: Amplitude Panning • Use amplitude differences to create phantom audio sources between loudspeakers • Computations for multiple loudspeaker arrays: • VBAP: Vector Base Amplitude Panning (see http://www.acoustics.hut.fi/research/abstracts/vbap.html) • Ambisonics (see Winter '95 Computer Music Journal)

  8. Localization methods: Ambisonics • Pros: • Low computational cost • Using external decoder, can produce 8-channel output with 4-channel audio hardware • Cons: • Small “sweet spot” (i.e., location in room where listener hears accurate localization) • Assumes regular (square, pentagonal, cubic) speaker layout.

  9. Localization methods: Ambisonics • For each sound source, generate 4-channel "B-format" signal: • W: omnidirectional = signal*0.707 • X: left-right = signal*cos(horizontal_angle)*cos(vertical_angle) • Y: front-back = signal*sin(horizontal_angle)*cos(vertical_angle) • Z: up-down = signal*sin(vertical_angle)

  10. Localization methods: Ambisonics • Decode combined (i.e., after mixing all sources) B-format signal to eight speaker signals: • All eight speakers get W. • Left speakers get +X; right speakers get -X • Front speakers get +Y; rear speakers get -Y • Upper speakers get +Z; lower speakers get -Z
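The decode rule above is just sign selection per speaker. A minimal sketch for a cube layout follows; the speaker index convention (bit 0 = right, bit 1 = rear, bit 2 = lower) is an assumption for illustration, not the soundserver's actual channel ordering.

```c
/* Decode a mixed B-format sample to eight cube-corner speaker feeds:
 * every speaker gets W, and X/Y/Z are added or subtracted according to
 * which side of the cube the speaker is on. */
void decode_cube(float w, float x, float y, float z, float out[8])
{
    for (int i = 0; i < 8; i++) {
        float sx = (i & 1) ? -x : x;  /* left speakers +X, right -X */
        float sy = (i & 2) ? -y : y;  /* front speakers +Y, rear -Y */
        float sz = (i & 4) ? -z : z;  /* upper speakers +Z, lower -Z */
        out[i] = w + sx + sy + sz;
    }
}
```

In practice each feed is also scaled to avoid clipping, but the sign pattern is the essence of the method.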

  11. Loudspeakers versus headphones • Advantages of loudspeakers • Lower encumbrance • Richer group experience • Computationally inexpensive localization method easily supports numerous simultaneous sources • Advantages of headphones • Minimal space requirements • Higher localization precision • No problems with echo in telephony

  12. BU DAFFIE Soundserver • Inspired by Robin Bargar’s VSS (see www.isl.uiuc.edu/software/vss_reference/vss3.0ref.html) • Soundserver runs on dedicated audio-enabled computer. • Soundserver scheme relieves host applications of the hardware, software, and computation burdens of loading, mixing, and panning digital sound effects, managing multiple telephony streams, etc.

  13. BU DAFFIE Soundserver • Features: • Spatial model built into API: application sets listener and sound position; soundserver computes relative distance and position. • Computationally inexpensive spatialization method permits many simultaneous localized sources • Many speaker configurations supported: mono, stereo, 4-channel (square), 8-channel (cube)

  14. BU DAFFIE Soundserver • Features (cont.) • Integrated localized IP telephony supports communication among navigators in distributed VR environments • Developed on SGI, ported to Windows, Linux

  15. Soundserver network I/O • Dedicated socket connection to host application • Typical use: connection to application which needs audio to be tightly coupled with graphics. • DAFFIE event server connection • Typical use: telephony streams from multiple sites in distributed VR.

  16. Er, what’s DAFFIE, Doc? • “Distributed Applications Framework For Immersive Environments” • Clients communicate via messages (or “events”) • Point-to-point • Broadcast • Selective subscription to event classes • Streaming data: telephony, video. • General communications package, easily adapted to non-VR applications.

  17. (Selected) Soundserver API • ds_connect_to_sound_server(char *server); • ds_read_sound_config_file(char *fname); • ds_set_listener_location(float loc[4][4]); • ds_set_sound_location_mono(int sound, float loc[4][4]); • ds_play_sound(int sound); • ds_set_sound_pitch_factor(int sound, float pitch_factor);

  18. Sound config file example

sound_name hello
sound_file ../sounds/hello.aiff
nominal_loudness_left 0.5
standard_sound_distance_left 15.0
pitch_factor 0.85
loop_n_times 2
loop_start 12537
loop_end 34516
fade_time 0.65

  19. Basic soundserver example

ds_connect_to_sound_server("jeckel");
handle = ds_read_sound_config_file("ribbet");
ds_set_listener_location(listener_loc);
ds_set_sound_location_mono(handle, sound_loc);
ds_play_sound(handle);

[Diagram: host application (heckel) connected to soundserver (jeckel)]

  20. Sharks example • SGI demo modified to run in stereo on Tiled Display Wall. • Sharks circulate, try to avoid collisions, sometimes “attack” (i.e., swim faster, open and close jaws).

  21. Sharks sound design • Play ambient water sounds around listener. • At each time step, for each shark: • Pan shark bubble sound to current shark location. • Relate pitch of shark bubble sound to current shark velocity -- when shark attacks, raised pitch/speed of bubble sound suggests greater water turbulence.

  22. Sharks sound software design considerations • Design goal: Add minimal sound-related code to host application • Facilitate independent development of graphics and sound code • Minimize performance hit to graphics application. • Implementation: graphics application broadcasts shark locations and velocities, leaving further processing, such as converting shark velocity to pitch, to a “sound agent” which communicates via socket to the soundserver.

  23. Sharks system diagram

[Diagram: the main sharks wall application and the shark tiles broadcast xyz,v events through the event server to the sound agent, which loads and plays sounds and sets location, pitch, etc. on the soundserver.]

  24. Sharks code • At startup, join event_server:

char *event_server;
int i, num_users, Send_events = 1;
event_server = argv[i];   /* server name from the command line */
if (!event_join(event_server, &num_users)) {
    fprintf(stderr, "ERROR: Could not join %s.\n", event_server);
    Send_events = 0;
}

  25. Sharks code • In main loop, for each shark, send XYZ, V:

struct _fishRec *fish;
EVENT_XYZV_DATA ev, *e = &ev;
if (Send_events) {
    e->tag = fish->id;
    e->data[0] = fish->x;
    e->data[1] = fish->y;
    e->data[2] = fish->z;
    e->data[3] = fish->v;
    event_send(EVENT_BCAST_NOTME, ET_XYZV_DATA,
               (EVENT *)e, sizeof(EVENT_XYZV_DATA));
}

  26. Sound agent code • Connect to soundserver, event server:

char sound_server[LEN], event_server[LEN];
if (!ds_connect_to_sound_server(sound_server)) {
    fprintf(stderr, "ERROR: Unable to connect to %s\n", sound_server);
    bail_out();
}
set_listener_location(0.0, 0.0, 0.0);
if (!event_join(event_server, &num_users)) {
    fprintf(stderr, "ERROR: Could not join %s\n", event_server);
    bail_out();
}

  27. Sound agent code • Load sound files into soundserver:

int S_sound_handles[NUMBER_OF_SHARKS];
int A_sound_handles[NUMBER_OF_AMBIENTS];
for (i = 0; i < NUMBER_OF_SHARKS; i++) {
    S_sound_handles[i] = ds_read_sound_config_file("scuba");
}
for (i = 0; i < NUMBER_OF_AMBIENTS; i++) {
    A_sound_handles[i] = ds_read_sound_config_file("scuba-ambient");
}

  28. Sound agent code • Initialize and trigger looping ambient sounds:

pfMatrix sound_location;
int h;
h = A_sound_handles[0];
pfMakeTransMat(sound_location, -2.0, 2.0, 0.0);
ds_set_instrument_location_mono(h, sound_location);
ds_set_instrument_pitch_factor(h, 0.41);
ds_play_sound(h);
h = A_sound_handles[1];
pfMakeTransMat(sound_location, 2.0, 2.0, 0.0);
ds_set_instrument_location_mono(h, sound_location);
ds_set_instrument_pitch_factor(h, 0.43);
ds_play_sound(h);
h = A_sound_handles[2];
pfMakeTransMat(sound_location, -2.0, -2.0, 0.0);
/* ... etc. ... */

  29. Sound agent code • Main loop:

while (1) {
    while (event_receive((EVENT *)e)) {
        switch (e->ev_head.type) {
        case ET_XYZV_DATA:
            process_shark_data((EVENT_XYZV_DATA *)e);
            break;
        case ET_LEAVE:
            process_leave((EVENT_LEAVE *)e);
            break;
        default:
            break;
        }
    }
}

  30. Sound agent code • XYZV event processing, part 1

/* scale to more intuitive units, roughly = feet */
#define X_SCALE 0.001
#define Y_SCALE 0.0007
#define Z_SCALE 0.01
/* don't go subsonic at slow swim speeds */
#define MINIMUM_PITCH_FACTOR 0.1
/* adjust ratio of shark velocity to pitch */
#define PITCH_SCALE_FACTOR 0.3

void process_shark_data(EVENT_XYZV_DATA *e)
{
    float x, y, z, v, pitch_factor;
    int i;
    int sharknum = e->tag;
    pfMatrix sound_location;
    ...

  31. Sound agent code • XYZV event processing, part 2

void process_shark_data(EVENT_XYZV_DATA *e)
{
    ...
    x = e->data[0] * X_SCALE;
    y = e->data[1] * Y_SCALE;
    z = e->data[2] * Z_SCALE;
    pfMakeTransMat(sound_location, x, y, z);
    ds_set_instrument_location_mono(S_sound_handles[sharknum],
                                    sound_location);
    ...

  32. Sound agent code • XYZV event processing, part 3

void process_shark_data(EVENT_XYZV_DATA *e)
{
    ...
    v = e->data[3];
    pitch_factor = (v - 1.0) * PITCH_SCALE_FACTOR;
    if (pitch_factor < MINIMUM_PITCH_FACTOR) {
        pitch_factor = MINIMUM_PITCH_FACTOR;
        /* quiet down those sharks which are not attacking */
        ds_set_instrument_amplitude_mono(S_sound_handles[sharknum],
                                         Slow_swim_amplitude);
    } else {
        ds_set_instrument_amplitude_mono(S_sound_handles[sharknum], 1.0);
    }
    ds_set_instrument_pitch_factor(S_sound_handles[sharknum], pitch_factor);
    ...

  33. Sound agent code • XYZV event processing, part 4

void process_shark_data(EVENT_XYZV_DATA *e)
{
    ...
    /* trigger looping sound only once */
    if (!Sharks[sharknum].triggered) {
        ds_play_instrument(S_sound_handles[sharknum]);
        Sharks[sharknum].triggered = 1;
    }
}

  34. Demonstration • Example 1

  35. Demonstration • Example 2 • Questions…?
