
SE in RT Audio Applications



  1. SE in RT Audio Applications Bert Schiettecatte [bschiett@vub.ac.be] Promotor: Prof. D. Vermeir Co-promotor: Prof. R. Lauwereins Advisors: S. Himpe & T. Vander Aa

  2. Introduction • Sound: infinite discrete sequence of normalized numbers = ‘samples’ • RT Audio Synthesis: real-time (RT) computation and playback of synthetic sound • Synthetic Sound: sound generated using a mathematical model • RT Deadlines: buffer scheduling, event handling

  3. Goals & Limitations • Not: • Detailed treatment of digital signal processing (DSP) arithmetic for audio processing • Experiments on the performance of OO in RT applications • Instead: • Realistic exposure to application of some SE techniques in a RT application • Roadmap for building a RT audio synthesizer

  4. Case Study • Real-time audio synthesizer on handheld device: StrongARM running PocketPC • To be used by musicians on the road • Goals: • Feasibility • Exposure to realistic RT application • Exposure to DSP • SE principles realistic in RT applications

  5. Implementation • Trade-offs: • Floating-point VS fixed-point • Double buffering, triple buffering or ring buffering of sound • Static VS dynamic networks • OO VS best-fit implementation

  6. Implementation • Strategy: benchmark & verify correctness of • Arithmetic • Synthesis Algorithm • Buffer Scheduling & OS Timing • Event Scheduling

  7. Fixed VS float arithmetic • Arithmetic: major impact on overall performance & sound quality (quantization noise) • Benchmark for comparing floating-point / fixed-point performance: LINPACK • Case study HW: factor 10 speedup from fixed-point (assembly-level analysis shows why: floating-point arithmetic is implemented as a software library on the FPU-less StrongARM)
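
  The slides do not show the arithmetic itself, so the snippet below is a minimal C++ sketch of the kind of fixed-point routine the comparison is about, assuming an illustrative Q16.16 format in a 32-bit word (the format and names are not taken from the dissertation):

      #include <cstdint>

      typedef int32_t fixed;   // Q16.16: 16 integer bits, 16 fraction bits

      inline fixed to_fixed(float f)  { return (fixed)(f * 65536.0f); }
      inline float to_float(fixed x)  { return x / 65536.0f; }

      inline fixed fixed_mul(fixed a, fixed b)
      {
          // Widen to 64 bits so the intermediate product cannot overflow,
          // then shift back down by the 16 fraction bits.
          return (fixed)(((int64_t)a * (int64_t)b) >> 16);
      }

  On a core without an FPU, the floating-point equivalent of fixed_mul compiles to a library call, which is where a speedup of this magnitude comes from.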

  8. Fixed-point arithmetic and GP • Floating-point arithmetic: sometimes regarded as ‘unlimited range’ by developers • Fixed-point arithmetic: limited by definition, requires detailed range analysis in some algorithms, resulting in confusing code • Solution: use generative programming to derive the best precision for a variable • Generalized: the explicit type of a variable can be replaced by a set of constraints
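
  As a hedged illustration of the generative idea (not the dissertation's code), a C++ template can carry the integer/fraction split in the type, so the compiler derives the format of a product instead of the programmer tracking ranges by hand; all names below are made up:

      #include <cstdint>

      template<int INT_BITS, int FRAC_BITS>
      struct Fixed {
          int32_t raw;                                   // stored value = real value * 2^FRAC_BITS
          static Fixed from_float(float f) {
              Fixed x; x.raw = (int32_t)(f * (1 << FRAC_BITS)); return x;
          }
          float to_float() const { return raw / (float)(1 << FRAC_BITS); }
      };

      // Multiplying Qa.b by Qc.d yields Q(a+c).(b+d) before any rescaling,
      // so the range analysis can be read straight off the result type.
      template<int IA, int FA, int IB, int FB>
      Fixed<IA + IB, FA + FB> operator*(Fixed<IA, FA> x, Fixed<IB, FB> y)
      {
          static_assert(IA + IB + FA + FB < 32, "product no longer fits in 32 bits");
          Fixed<IA + IB, FA + FB> r;
          r.raw = (int32_t)((int64_t)x.raw * y.raw);
          return r;
      }

      // Example: Fixed<2,13> gain; Fixed<1,14> sample;  gain * sample is a Fixed<3,27>.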

  9. Synthesis Algorithm • Additive synthesis: sum (harmonically related) weighted sine waves • Envelope: piecewise linear function changing weight over time • Requirements: • Oscillator: efficient sine algorithm (e.g. CORDIC) • Envelopes: linear interpolation
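
  For concreteness, here is a floating-point sketch of one render call of additive synthesis (the real engine uses fixed point; function and parameter names are illustrative only):

      #include <cmath>
      #include <cstddef>

      // Sum 'partials' harmonically related sine waves, each scaled by a
      // per-partial weight (which an envelope would ramp linearly between frames).
      void render_additive(float* out, std::size_t frames,
                           float f0, float sample_rate,
                           const float* weights, std::size_t partials,
                           float& phase)                    // normalized phase in [0,1)
      {
          const float two_pi = 6.28318530718f;
          for (std::size_t i = 0; i < frames; ++i) {
              float s = 0.0f;
              for (std::size_t k = 1; k <= partials; ++k)
                  s += weights[k - 1] * std::sin(two_pi * k * phase);
              out[i] = s;
              phase += f0 / sample_rate;
              if (phase >= 1.0f) phase -= 1.0f;
          }
      }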

  10. Synthesis Algorithm • Use fixed-point arithmetic to implement algorithm • Dynamic network: runtime sequencing of DSP blocks, buffer granularity • Static network: compile time sequencing of DSP blocks, sample granularity • Here: static network because of fixed-point arithmetic & inlining
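
  One way to picture the static-network choice (a sketch, not the actual class design): DSP blocks are plain structs whose per-sample tick() calls are composed through templates, so the whole chain is known at compile time and can be inlined into straight-line code.

      #include <cmath>

      struct SineOsc {
          float phase = 0.0f, inc = 0.01f;       // inc = frequency / sample rate
          float tick() {
              phase += inc;
              if (phase >= 1.0f) phase -= 1.0f;
              return std::sin(6.28318530718f * phase);
          }
      };

      struct Gain {
          float g = 0.5f;
          float tick(float in) { return g * in; }
      };

      // The network topology is a compile-time fact: Chain<SineOsc, Gain>.
      template<class Source, class Effect>
      struct Chain {
          Source src;
          Effect fx;
          float tick() { return fx.tick(src.tick()); }   // inlines end to end
      };

      // Usage: Chain<SineOsc, Gain> voice;  out[i] = voice.tick();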

  11. Buffer Scheduling: OS API • In general: • Check device capabilities • Open device • Prepare buffer • Queue buffer in the OS for playback • Wait until playback done • Goto 3 • Unprepare buffer • Close device
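
  On PocketPC these steps map onto the waveOut API; the skeleton below is a hedged illustration (error handling omitted, 22050 Hz mono 16-bit assumed, not the dissertation's code):

      #include <windows.h>
      #include <mmsystem.h>            // waveOut* (link winmm.lib on desktop Windows)

      void play_once(char* samples, DWORD bytes)
      {
          if (waveOutGetNumDevs() == 0) return;              // 1. check device capabilities

          WAVEFORMATEX fmt = {};
          fmt.wFormatTag      = WAVE_FORMAT_PCM;
          fmt.nChannels       = 1;
          fmt.nSamplesPerSec  = 22050;
          fmt.wBitsPerSample  = 16;
          fmt.nBlockAlign     = (WORD)(fmt.nChannels * fmt.wBitsPerSample / 8);
          fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

          HWAVEOUT dev;
          HANDLE   done = CreateEvent(NULL, FALSE, FALSE, NULL);
          waveOutOpen(&dev, WAVE_MAPPER, &fmt,               // 2. open device
                      (DWORD_PTR)done, 0, CALLBACK_EVENT);

          WAVEHDR hdr = {};
          hdr.lpData         = samples;
          hdr.dwBufferLength = bytes;
          waveOutPrepareHeader(dev, &hdr, sizeof(hdr));      // 3. prepare buffer
          waveOutWrite(dev, &hdr, sizeof(hdr));              // 4. queue buffer for playback
          while (!(hdr.dwFlags & WHDR_DONE))                 // 5. wait until playback done
              WaitForSingleObject(done, 100);
          // 6. a real scheduler loops back to step 3 with the next buffer here
          waveOutUnprepareHeader(dev, &hdr, sizeof(hdr));    // 7. unprepare buffer
          waveOutClose(dev);                                 // 8. close device
          CloseHandle(done);
      }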

  12. Buffer Scheduling • Algorithms: • Naive • Double/Triple buffering • Ring buffering

  13. Buffer Scheduling: Naive • Algorithm: • Prepare buffer • Queue buffer in the OS for playback • Wait until OS signals playback done • Goto 1 • Problems: • sound pauses (prepare + queue) • OS event is overhead

  14. Buffer Scheduling: Double/Triple • Algorithm: • Prepare buffer 1 • Queue buffer 1 in the OS for playback • Wait until OS signals buffer 2 done • Prepare buffer 2 • Queue buffer 2 • Wait until OS signals buffer 1 done • Goto 1 • Problems: • OS event overhead
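
  A sketch of the resulting loop, reusing the device and event handles from the waveOut skeleton above; the two prepared WAVEHDRs and the render() routine that fills a buffer with fresh samples are assumptions:

      #include <windows.h>
      #include <mmsystem.h>

      void schedule_double(HWAVEOUT dev, HANDLE done, WAVEHDR hdr[2],
                           void (*render)(char*))
      {
          render(hdr[0].lpData);                              // prepare buffer 1
          waveOutWrite(dev, &hdr[0], sizeof(WAVEHDR));        // queue buffer 1
          int cur = 0;
          for (;;) {
              int next = 1 - cur;
              render(hdr[next].lpData);                       // prepare the other buffer
              waveOutWrite(dev, &hdr[next], sizeof(WAVEHDR)); // queue it behind the playing one
              while (!(hdr[cur].dwFlags & WHDR_DONE))         // wait until the playing buffer drains
                  WaitForSingleObject(done, 10);
              hdr[cur].dwFlags &= ~WHDR_DONE;                 // recycle it on the next pass
              cur = next;
          }
      }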

  15. Buffer Scheduling: Ring Buffer • Algorithm: • Fill ½ & queue buffer for looping playback • Get playing position of the OS in buffer • Fill buffer from the last known rendering position up to the OS position • Update rendering position • Wait ¼ buffer size • Goto 2 • Problems: • Tricky: sleep accuracy (WinCE 3.0: 1 to 2 ms VS Win98: 20ms!) • Choosing wait time is hard
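
  A hedged sketch of that loop using waveOutGetPosition; the looping buffer setup (queued with WHDR_BEGINLOOP | WHDR_ENDLOOP), the render_into() routine and the sleep value are assumptions, not the dissertation's code:

      #include <windows.h>
      #include <mmsystem.h>

      // render_into(ring, offset, bytes) fills 'bytes' of audio starting at
      // 'offset' and wraps around the end of the ring buffer itself.
      void schedule_ring(HWAVEOUT dev, char* ring, DWORD ring_bytes,
                         void (*render_into)(char*, DWORD, DWORD))
      {
          DWORD write_pos = ring_bytes / 2;                   // 1. first half already filled
          for (;;) {
              MMTIME mmt;
              mmt.wType = TIME_BYTES;                         // a real implementation checks that
              waveOutGetPosition(dev, &mmt, sizeof(mmt));     //    the driver kept TIME_BYTES
              DWORD play_pos = mmt.u.cb % ring_bytes;         // 2. OS play position in the ring
              DWORD todo = (play_pos + ring_bytes - write_pos) % ring_bytes;
              render_into(ring, write_pos, todo);             // 3. fill up to the play cursor
              write_pos = play_pos;                           // 4. update rendering position
              Sleep(25);                                      // 5. roughly 1/4 buffer of play time;
          }                                                   //    sleep accuracy is the tricky part
      }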

  16. Buffer Scheduling: Reality • Scheduling: very hard • Requires inspiration and experimenting (e.g. buffer size problems in WinCE 3.0) • Buffer underrun detection/avoidance (e.g. MP3)

  17. Event Scheduling • Possibilities: • Separate event thread updating settings • Handle events in the synthesizer class: keep a song pointer & update when necessary • Separate thread: requires additional CPU time for scheduling, synchronization • In the synth class: better, keep a song pointer (in samples) and update settings based on the song’s tempo (e.g. 120 BPM = an update every 0.5 s, and 0.5 s × 22050 Hz = 11025 samples)
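
  The sample-counter arithmetic in the last bullet looks roughly like this; apply_pending_events() and synth_tick() are hypothetical names standing in for the real event queue and synthesis engine:

      void  apply_pending_events(int beat);   // hypothetical: applies queued note/controller events
      float synth_tick();                     // hypothetical: renders one sample of the synthesizer

      const int   kSampleRate     = 22050;
      const float kBPM            = 120.0f;
      const int   kSamplesPerBeat = (int)(60.0f / kBPM * kSampleRate);   // 0.5 s -> 11025 samples

      void render_with_events(float* out, int frames, int& song_pos)
      {
          for (int i = 0; i < frames; ++i) {
              if (song_pos % kSamplesPerBeat == 0)                    // a beat boundary falls on this sample
                  apply_pending_events(song_pos / kSamplesPerBeat);   // update settings inside the loop
              out[i] = synth_tick();
              ++song_pos;                                             // song pointer, counted in samples
          }
      }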

  18. Event Scheduling • Latency: the delay on events caused by the buffer scheduler • Latency: • In a separate event thread: because of synchronization, at least 1 sound buffer • In the synthesis loop: almost none, since events can be handled almost every sample

  19. Implementation: Best-fit • In general: mix paradigms to write clear code • OO not always the best choice: depends on levels of re-use (e.g. OS timer) • Machine-level inspection of code: crucial when using OO in RT applications • OO is not by definition a bad thing

  20. Implementation: Object-orientedness • Here: OO is the right paradigm when • It doesn’t keep the compiler from optimizing • It entirely disappears after compilation (e.g. inlining): no run-time OO • It simplifies code • In general: OO languages without real ‘compile-time’ tend to have performance problems • Meta-programming: welcome addition

  21. Contribution • This dissertation: one of the few detailed roadmaps available for RT audio synthesis implementation • Not much is new here (but it still required a large amount of research, since existing work is proprietary) • Application of generative programming to audio processing and fixed-point arithmetic

  22. Conclusion • Implementing a RT audio synthesizer is hard • Series of benchmarks & tests crucial • OO design has to be applied with common sense and is not always the best choice • Generative programming can be a powerful mechanism to abstract at little or no cost • A meta-programming level adds expressive power • Compiler features (inlining, templates, …) essential

  23. Synthesis engine demonstration • Sets up the additive synthesis engine, starts the sound buffer scheduler, loads events from a text file and feeds them to the additive synthesizer
