
  1. RMI: Remote Media Immersion Roger Zimmermann, Alexander A. Sawchuk, Cyrus Shahabi, Ulrich Neumann, Chris Kyriakakis, Tom Holman, Christos Papadopoulos. Integrated Media Systems Center, University of Southern California, Los Angeles, CA 90089. http://dmrl.usc.edu and http://imsc.usc.edu National Science Foundation Engineering Research Center

  2. Outline • IMSC Introduction • RMI Goals and Challenges • System Components • Experiments • Streaming Media Architecture: Yima • Research Challenges • Future Possibilities

  3. IMSC ERC Research Structure

  4. Integrated Media Systems Charter: Immersipresence [Diagram: 3 vision areas and 6 research areas, spanning Sensory Interfaces, Information Management, Media Communications, Communication, Education, Entertainment, Application Research, Media Immersion Environment, and User Centered Sciences]

  5. What is the RMI? “The goal of the Remote Media Immersion system is to build a testbed for the creation of immersive applications.” Immersive application aspects: • Multi-modal environment (aural, visual, haptic, …) • Shared space with virtual and real elements • High fidelity • Geographically distributed • Interactive

  6. Remote Media Immersion Goals • Reproduce the complete audio and video ambience, placing people in a virtual space • Experience events occurring at remote site(s) • Natural communication, interaction and collaboration • Application scenarios: • Unidirectional: off-line acquisition, processing and storage of immersidata, with synchronized display (rendering) • Real-time, two-way version • Stereoscopic visual display • Examples: immersed in a college football game; doctors assisting in a remote procedure; business people negotiating as if they are in the same room; students visiting an aquarium a thousand miles away

  7. RMI Challenges • Immersive, high-quality video acquisition and rendering • High Definition video 1080i and 720p (40 Mb/s) • Immersive, high-quality audio acquisition and rendering • 10.2 channels of uncompressed audio (12 Mb/s) • Storage and transmission of media streams across networks • Synchronization between streams (A/V, A/A, V/V)!

  8. RMI Architecture

  9. Remote Media Immersion Client [Diagram: the Yima client software receives uncompressed HD video and 10.2-channel digital audio over the network, driving HD video rendering and 10.2-channel audio rendering, synchronized via linear time code and word clock]

  10. ISI East IMSC Experimental Setup Synchronized combinations of • Immersive audio and HDTV streamed playback from Yima • Streaming of 16 channels of immersive audio, uncompressed at 12 Mb/s • Streaming of 1920x1080i HDTV content, MPEG-2 compressed at 40 Mb/s

  11. Internet2 Fall ’02 Member Meeting • Video: HDTV 1280x720p • Audio: 10.2-channel immersive sound system • New World Symphony, Miami, FL

  12. Internet2 Demonstration

  13. Storage, Streaming & Rendering • Focus: End-to-end streaming architecture • Server: storage, scheduling, scalability • Clients: multi-stream synchronized playback • Transmission: robust VBR flow control • Requirements: A streaming platform that can scale and handle synchronized, high-bandwidth streams

  14. Yima Architecture • Multi-node, multi-disk architecture • Scalable • Industry-standard network protocols: RTP, RTSP • Robust media transmission: adaptive flow control, selective retransmission • Clients: multi-stream synchronization

  15. Research Focus and Unique Approaches • Scalability (enable large-scale systems) • Multi-node architecture with distributed scheduler and distributed file system on commodity PCs [Computer ’02] • Incremental system growth: SCADDAR, an efficient randomized technique to reorganize continuous media blocks [ICDE 2002] • Robust stream delivery • Multi-threshold flow control between clients and server: avoids data starvation and overflow, supports variable bit rate media [MTAP 2003?] • Selective packet retransmission protocol [NOSSDAV ’96, MMCN ’03]

  16. Challenge: Real-Time Media • Bandwidth requirements for different media types [Chart: rates of 1, 4-6, 20, 31, 50, and 100 Mb/s]

  17. Yima Server S/W Architecture

  18. Challenge: Scalability • As continuous media (CM) repositories grow, the need for larger storage capacity arises • Multi-node, multi-disk support • Disk scaling (adding new and/or removing old disks) → SCADDAR • Quick access to data • Online 24/7 operation (i.e., no downtime) • Fault tolerance • Load balancing of the data before/after scaling to ensure maximum utilization of disk I/O and capacity

  19. Scalability: Multi-Node, Multi-Disk • Data and control network traffic can be routed with different logical topologies • Yima-1: single data path (high inter-node traffic) • Yima-2: multiple data paths (low inter-node traffic) [Diagram: Yima-1 vs. Yima-2 topologies]

  20. SCADDAR • Disk scaling (adding new and/or removing old disks) • [Diagram: example of adding one disk to an existing 4-disk storage system]
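The benefit of randomized block placement during disk scaling can be illustrated with a simpler, related idea. This sketch uses rendezvous (highest-random-weight) hashing rather than the actual SCADDAR remapping function, but it demonstrates the same property: adding a disk moves only a small fraction of the blocks instead of reshuffling everything.

```python
import hashlib

def place(block_id: int, disks: list[int]) -> int:
    """Rendezvous hashing: each block goes to the disk with the highest
    hash score. Adding a disk relocates only ~1/n of all blocks, and
    every relocated block moves onto the new disk."""
    def score(d: int) -> int:
        h = hashlib.sha256(f"{block_id}-{d}".encode()).hexdigest()
        return int(h, 16)
    return max(disks, key=score)

old_disks = [0, 1, 2, 3]
new_disks = [0, 1, 2, 3, 4]          # one disk added
moved = [b for b in range(10_000)
         if place(b, old_disks) != place(b, new_disks)]
# Roughly 1/5 of the blocks relocate, all of them onto the new disk.
print(f"fraction moved: {len(moved) / 10_000:.1%}")
```

The actual SCADDAR algorithm achieves a similar minimal-movement guarantee with a pseudo-random sequence that stays computable across repeated scaling operations, so blocks never need a full reshuffle to remain balanced.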

  21. Challenge: Robust Stream Delivery Motivation & Objectives • Variable bit rate (VBR) media encoders allocate more bits to complex scenes and fewer bits to simple ones • Smoothing of VBR media traffic has the following quality benefits: • Better resource utilization (less bursty) • More streams with the same network capacity • Multi-Threshold Flow Control (MTFC) algorithm objectives: • Online operation • Content independence • Minimizing feedback control signaling • Rate smoothing

  22. MTFC Buffer Management • Multiple thresholds: the goal is to keep the buffer level near the middle of the buffer • Send a rate-adjustment command to the server whenever a threshold is crossed
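The threshold-crossing logic can be sketched as follows (the function names and the zone policy here are illustrative, not the actual MTFC implementation): the client is silent while the buffer level stays within its current threshold zone, and issues a command only on a crossing.

```python
def mtfc_zone(level: float, thresholds: list[float]) -> int:
    """Zone index = number of thresholds at or below the buffer level."""
    return sum(level >= t for t in thresholds)

def mtfc_command(level: float, thresholds: list[float], last_zone: int):
    """Issue a rate-adjust command only when the buffer level crosses
    into a new threshold zone, steering it back toward the middle."""
    zone = mtfc_zone(level, thresholds)
    if zone == last_zone:
        return zone, None            # no threshold crossed: no signaling
    mid = len(thresholds) / 2
    return zone, ("slow down" if zone > mid else "speed up")

# Example: thresholds at 25%, 50%, 75% of capacity; buffer climbs from
# zone 2 past the 75% threshold, so the client asks the server to slow.
thresholds = [0.25, 0.50, 0.75]
zone, cmd = mtfc_command(0.80, thresholds, last_zone=2)
print(cmd)  # slow down
```

Because commands are sent only on zone crossings, feedback signaling stays minimal even for bursty VBR content, matching the objectives listed above.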

  23. Multi-Threshold Flow ControlResults

  24. Challenge: Packet Loss! • IP networks are based on “best effort” delivery • Client at USC, Los Angeles; server at ISI East, Arlington, VA • One aspect: high-bandwidth video and audio transmissions • HDTV @ 40-45 Mb/s • 16 channels of uncompressed PCM audio @ 11-22 Mb/s • RTP/UDP is the industry standard, but UDP loss creates problems • Tests on high-performance networks: Internet2 and DARPA NGI SuperNet (WAN) & Gigabit Ethernet (LAN) • On the order of 10 packets lost per 1 million (10⁻⁵) • Such low loss is still visible/audible! • Loss may result in synchronization problems

  25. Solution Space Tradeoffs • Reliability vs. BW; reliability vs. latency • No silver bullet (“one size fits all”) • FEC: fast; sacrifices BW for reliability; vulnerable to burst loss • Concealment: fast; optimal use of BW; quality may suffer; media dependent • Retransmission: one RTT to recover; optimal use of BW • We chose selective retransmissions

  26. Selective Retransmissions • Originally proposed in [NOSSDAV ’96] by Papadopoulos and Parulkar • Need: sender retransmission buffer and receiver playout buffer • Receiver-driven operation: ask for retransmissions only if the missing data will be consumed after the estimated RTT
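The receiver-driven rule above can be sketched in a few lines (a simplified illustration; the actual protocol's packet formats and timer handling are more involved): scan the playout buffer for gaps and NAK only those packets whose retransmission can still arrive before their playout deadline.

```python
def request_retransmissions(received: set, next_playout_seq: int,
                            now: float, deadlines: dict, rtt: float) -> list:
    """Return the sequence numbers to NAK: packets that are missing
    from the playout buffer AND whose playout deadline is more than
    one estimated RTT away, so a retransmission can still be useful."""
    naks = []
    last_expected = max(received)
    for seq in range(next_playout_seq, last_expected):
        if seq not in received and now + rtt < deadlines[seq]:
            naks.append(seq)
    return naks

# Example: packets 3 and 5 are missing; packet 3 plays out too soon for
# a retransmission to arrive in time, so only packet 5 is NAKed.
deadlines = {seq: seq * 0.1 for seq in range(10)}   # playout times in seconds
naks = request_retransmissions({1, 2, 4, 6}, 1,
                               now=0.25, deadlines=deadlines, rtt=0.1)
print(naks)  # [5]
```

Skipping retransmission requests for data that would arrive late is what keeps the scheme both bandwidth-optimal and latency-aware, per the tradeoff slide above.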

  27. 111101 110111 101011 Fast Error Recovery Receiver Sender read frame every T secs send new frame every T secs k j i i j k Retransmit buffer Play-out buffer New frame discard NAK

  28. Multi-node Server • Data and control network traffic can be routed with different logical topologies • Centralized: single data path (high inter-node traffic) • Bipartite: multiple data paths (low inter-node traffic) [Diagram: centralized vs. bipartite designs]

  29. Multi-node Server: RBEC • Challenge: if a packet does not arrive at the client side, how does the client know which node attempted to send it? • To improve scalability and online data reorganization, data blocks are randomly assigned to server nodes • Possible solutions: • 1. Broadcast retransmission requests • 2. Compute which node should have the data • 3. Introduce node-specific local sequence numbers (LSN) in addition to a global sequence number (GSN)

  30. Yima Approach • Assumption: 3 packets per storage block • Server Node 1, first block: (LSN 1, GSN 1) (LSN 2, GSN 2) (LSN 3, GSN 3) • Server Node 2, first block: (LSN 1, GSN 4) (LSN 2, GSN 5) (LSN 3, GSN 6) • Server Node 1, second block: (LSN 4, GSN 7) (LSN 5, GSN 8) (LSN 6, GSN 9)
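The numbering above can be reproduced for the simple round-robin case. (The real Yima assigns blocks to nodes pseudo-randomly, which is precisely why packets must carry an explicit LSN: the client cannot compute the sending node from the GSN alone. This helper is only an illustration of how the two sequence spaces relate.)

```python
PACKETS_PER_BLOCK = 3  # assumption from the slide

def node_and_lsn(gsn: int, num_nodes: int) -> tuple[int, int]:
    """Map a global sequence number to (server node, local sequence
    number), assuming blocks are handed out round-robin: block 1 to
    node 1, block 2 to node 2, block 3 back to node 1, and so on."""
    block = (gsn - 1) // PACKETS_PER_BLOCK       # 0-based block index
    node = block % num_nodes + 1                 # round-robin node choice
    local_block = block // num_nodes             # nth block on that node
    lsn = local_block * PACKETS_PER_BLOCK + (gsn - 1) % PACKETS_PER_BLOCK + 1
    return node, lsn

# Reproduces the slide's table for 2 nodes:
print(node_and_lsn(5, 2))  # (2, 2): GSN 5 is node 2's LSN 2
print(node_and_lsn(8, 2))  # (1, 5): GSN 8 is node 1's LSN 5
```

With per-node LSNs, a gap in one node's LSN stream immediately identifies which node the retransmission request should go to, avoiding both broadcast NAKs and client-side placement computation.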

  31. LSN Retransmission Operation

  32. Experiments • Multi-node server with 1, 2 and 4 nodes • LAN and WAN: DARPA NGI SuperNet, cross-continental link (4000 km) • Gilbert loss model with p = 0.0192 and q = 0.8454; therefore P_loss = p/(p+q) ≈ 2.2% • Media file “Twister” (MPEG-2): avg. BW of 698 kB/s, length 25 minutes, throughput std. dev. 308,283
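The quoted loss probability follows from the Gilbert model's stationary distribution: P_loss = p/(p+q) = 0.0192/0.8646 ≈ 2.2%. A quick simulation (a sketch, assuming the standard two-state formulation where p is the good-to-bad transition probability and q the bad-to-good) confirms it:

```python
import random

def gilbert_loss_rate(n: int, p: float, q: float, seed: int = 42) -> float:
    """Two-state Gilbert channel: packets are delivered in the good
    state and lost in the bad state; p = P(good -> bad) and
    q = P(bad -> good) per packet. Returns the empirical loss rate."""
    rng = random.Random(seed)
    bad, lost = False, 0
    for _ in range(n):
        bad = (rng.random() < p) if not bad else (rng.random() >= q)
        lost += bad
    return lost / n

rate = gilbert_loss_rate(1_000_000, p=0.0192, q=0.8454)
# Both values come out around 2.2%.
print(f"simulated: {rate:.2%}, analytic p/(p+q): {0.0192 / 0.8646:.2%}")
```

Unlike independent (Bernoulli) loss at the same rate, the Gilbert model produces bursts of consecutive losses, which is the harder case for the retransmission scheme being evaluated.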

  33. Results for WAN [Charts: packet loss (0-4%) over the 25-minute playback for 1-, 2-, and 4-node servers, showing raw losses and natural losses]

  34. Yima Client Features • Synchronization between multiple clients • Coarse-grained via flow & rate control • Fine-grained via hardware support (30 fps & 48,000 samples/s) • Media streams can come from different physical locations

  35. RMI & Yima Accomplishments • Publications: SCADDAR, ICDE, March 2002; IEEE Computer, June 2002; GMeN, IEEE TPDS, June 2002 • RMI Transcontinental Tests: server at ISI East, Arlington, VA; Internet2 Fall ’02 Meeting • RMI Press Coverage: New York Times, May 9, 2002; NBC-4, May 9, 2002; KTLA-5, May 9, 2002

  36. Future Possibilities • Distributed virtual social events • Immersive gaming • Large screen displays • Multiple cameras and microphones; 3-D scene description • Speech and gesture extraction • Face and body tracking • Wireless glasses or head-mounted displays • Stereo display without glasses (autostereoscopic)

  37. Distributed Immersive Performance • Outgrowth of Remote Media Immersion (RMI) • Create seamless immersive environment for distributed musicians, conductor (active) and audience (passive) • Compelling relevance for any human interaction scenario: education, journalism, communications • Scenario: • Orchestra not available in town • Famous soloist cannot fit travel into schedule • Multiple soloists in different places

  38. Challenge: Network Latency [Map: one-way network latencies between distributed sites, ranging from 10 ms to 60 ms]

  39. Technical Challenges • Latency (delay) • Multi-stream synchronization • Data rates, error characteristics • Processing power • Compression

  40. Yima Ongoing Work • Real-time recording of multiple streams • For example, from a panoramic camera with 5 individual camera heads: streams need to be recorded in sync and played back in sync • Statistical admission control algorithm for better utilization of the storage system • Interactive, live streaming

  41. Thank You! Questions? • More info at: • Data Management Research Lab • http://dmrl.usc.edu • Integrated Media Systems Center • http://imsc.usc.edu • Acknowledgments: • Kun Fu, Didi Shu-Yuen Yao, Beomjoo Seo, Shihua Liu, Mehrdad Jahangiri, Farnoush Banaei-Kashani, Nitin Nahata, Sahitya Gupta, Vasan N. Sundar, Rishi Sinha, Hong Zhu