SAHARA: A Revolutionary Service Architecture for Future Telecommunications Systems

  1. SAHARA: A Revolutionary Service Architecture for Future Telecommunications Systems Randy H. Katz, Anthony Joseph Computer Science Division Electrical Engineering and Computer Science Department University of California, Berkeley Berkeley, CA 94720-1776

  2. Project Goals • Delivery of end-to-end services with desirable properties (e.g., performance, reliability, “qualities”), provided by multiple potentially distrusting service providers • Architectural framework for • Economics-based resource allocation • Third-party mediators, such as Clearinghouses • Dynamic formation of service confederations • Support for diverse business models

  3. Presentation Outline • Motivation • Project SAHARA • Initial Investigations • Testbeds • Summary and Conclusions

  4. Presentation Outline • Motivation • Project SAHARA • Initial Investigations • Testbeds • Summary and Conclusions

  5. The Huge Expense of New Telecomms Infrastructures • European auctions for 3G spectrum: 50 billion ECU and counting • Capital outlays likely to match spectrum expenses, all before the first ECU of revenue! • Compelling motivation for collaborative deployment of wireless infrastructure

  6. Any Way to Build a Network? • Partitioning of frequencies independent of actual subscriber density • Successful operators oversubscribe resources, while less popular providers retain excess capacity • Different flavor of roaming: among collocated/competing service providers • Duplicate antenna sites • Serious problem given community resistance • Redundant backhaul networks • Limited economies of scale

  7. The Case for Horizontal Architectures “The new rules for success will be to provide one part of the puzzle and to cooperate with other suppliers to create the complete solutions that customers require. ... [V]ertical integration breaks down when innovation speeds up. The big telecoms firms that will win back investor confidence soonest will be those with the courage to rip apart their monolithic structure along functional layers, to swap size for speed and to embrace rather than fear disruptive technologies.” The Economist Magazine, 16 December 2000

  8. Horizontal Internet Service Business Model (layers, top to bottom) • Applications (Portals, E-Commerce, E-Tainment, Media) • Appl Infrastructure Services (Distribution, Caching, Searching, Hosting): AIP, ISV • Application-specific Servers (Streaming Media, Transformation): ASP, Internet Data Centers • Application-specific Overlay Networks (Multicast Tunnels, Mgmt Svcs): ISP, CLEC • Internetworking (Connectivity): Global Packet Network

  9. Feasible Alternative: Horizontal Competition vs. Vertical Integration • Service Operators “own” the customer, provide “brand”, issue/collect the bills • Independent Backhaul Operators • Independent Antenna Site Operators • Independent Owners of the Spectrum • Microscale auctions/leases of network resources • Emerging concept of Virtual Operators

  10. Virtual Operator • Local premise owner deploys own access infrastructure • Better coverage/more rapid build-out of network • Deployments in airports, hotels, conference centers, office buildings, campuses, … • Overlay service provider (e.g., PBMS) vs. organizational service provider (e.g., UCB IS&T) • Single bill/settlement with service participants • Support for confederated/virtual devices • Mini-BS for cellular/data + WLAN for high-rate data

  11. Presentation Outline • Motivation • Project SAHARA • Initial Investigations • Testbeds • Summary and Conclusions

  12. The “Sahara” Project • Service • Architecture for • Heterogeneous • Access, • Resources, and • Applications

  13. SAHARA Assumptions • Dynamic confederations to better share resources & deploy access/achieve regional coverage more rapidly • Scarce resources efficiently allocated using dynamic “market-driven” mechanisms • Trusted third parties manage the resource marketplace on a fair, unbiased, audited, and verifiable basis • Vertical stovepipes replaced by horizontally organized “multi-providers,” open to increased competition and more efficient allocation of resources

  14. Architectural Elements • “Open” service/resource allocation model • Independent service creation, establishment, and placement in overlapping domains • Resources, capabilities, status described/exchanged amongst confederates, via enhanced capability negotiation • Allocation based on economic methods, such as congestion pricing, dynamic marketplaces/auctions (a toy auction sketch follows below) • Trust management among participants, based on trusted third party monitors
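
The “dynamic marketplaces/auctions” bullet can be made concrete with a toy second-price (Vickrey) auction, a common market mechanism for allocating a scarce resource among distrusting bidders. This is our own minimal sketch; the provider names, bids, and single-item setting are hypothetical, not from the slides.

```python
# Toy second-price auction: the highest bidder wins the resource
# (e.g., one antenna-site time slot) but pays the second-highest bid,
# which makes truthful bidding the dominant strategy.

def second_price_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Return (winner, price) for a single-item sealed-bid auction."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Hypothetical confederates bidding for one backhaul trunk lease:
print(second_price_auction({"ISP-A": 12.0, "ISP-B": 9.5, "ISP-C": 11.0}))
# -> ('ISP-A', 11.0)
```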

  15. Architectural Elements • Forming dynamic confederations • Discovering potential confederates • Establishing trust relationships • Managing transitive trust relationships & levels of transparency • Not all confederates need be competitors: heterogeneous, collocated access networks can better support applications

  16. Architectural Elements • Alternative View: Service Brokering • Dynamically construct overlays on component services provided by underlying service providers • E.g., overlay network segments with desirable performance attributes • E.g., construct end-to-end multicast trees from subtrees in different service provider clouds • Redirect to alternative service instances • E.g., choose instance based on distance, network load, server load, trust relationships, resilience to network failure, … (see the selection sketch below)
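
As one possible reading of the redirection bullet above, the sketch below scores each candidate service instance by a weighted combination of the listed criteria and redirects to the cheapest. The field names, weights, and linear scoring form are our assumptions, not part of SAHARA.

```python
# Minimal instance-selection sketch: lower penalty is better; higher
# trust reduces the penalty, while distance and load increase it.
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    distance_ms: float   # network distance (e.g., RTT)
    net_load: float      # path utilization, 0..1
    server_load: float   # server utilization, 0..1
    trust: float         # trust level, 0..1

def choose(instances, w_dist=0.01, w_net=1.0, w_srv=1.0, w_trust=2.0):
    """Redirect to the instance with the lowest weighted penalty."""
    def penalty(i):
        return (w_dist * i.distance_ms + w_net * i.net_load
                + w_srv * i.server_load - w_trust * i.trust)
    return min(instances, key=penalty)

candidates = [Instance("A", 40, 0.3, 0.5, 0.9),
              Instance("B", 15, 0.8, 0.9, 0.4)]
print(choose(candidates).name)  # "A": low load/high trust beat B's shorter RTT
```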

  17. Deliverables • Architecture and Mechanisms for • Fine-grained market-driven resource allocation • Application awareness in decision making • Confederations and Trust Management • Dynamic marshalling, observation/verification of participant behaviors, dissolution of confederations • Mechanisms to “audit” third party resource allocations, ensuring fairness and freedom from bias in operation • New Handoff Concepts Based on Redirection • Not just network handoff for lower cost access • Also alternative service provider to balance loads

  18. Research Methodology (cycle: Analyze & Design → Prototype → Evaluate) • Evaluate existing system to discover bottlenecks • Analyze alternatives to select among approaches • Prototype selected alternatives to understand implementation complexities • Repeat

  19. Presentation Outline • Motivation • Project SAHARA • Initial Investigations • Testbeds • Summary and Conclusions

  20. Initial Investigations • Congestion-Based Pricing • Economics-based resource allocation • Clearinghouse Architecture • Trusted Resource Mediators • Measurement-based Admission Control with traffic policing • Service Composition • Achieving performance and reliability from multiple, distributed service instances

  21. Congestion-Based Pricing • Hypothesis: Dynamic pricing influences user behavior • E.g., shorten/defer call sessions; accept lower audio/video QoS • When a critical resource reaches congestion levels, modify prices to drive utilization back to “acceptable” levels (see the controller sketch below) • E.g., available bandwidth, time slots, number of simultaneous sessions
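
As an illustration of the price-adjustment loop just described, here is a minimal multiplicative controller in Python. The target utilization, adjustment factor, and price bounds are invented for the sketch.

```python
# Nudge the per-minute rate whenever utilization of the critical resource
# (e.g., PSTN gateway lines) departs from an "acceptable" target level.

TARGET_UTILIZATION = 0.8            # assumed acceptable load level
ADJUST_FACTOR = 0.1                 # how strongly price reacts to congestion
MIN_PRICE, MAX_PRICE = 1.0, 100.0   # tokens per minute

def update_price(price: float, in_use: int, capacity: int) -> float:
    """Return the new per-minute price given current resource utilization."""
    utilization = in_use / capacity
    # Raise the price under congestion, lower it when under-utilized.
    price *= 1.0 + ADJUST_FACTOR * (utilization - TARGET_UTILIZATION)
    return max(MIN_PRICE, min(MAX_PRICE, price))

print(update_price(10.0, 9, 10))   # 9 of 10 lines busy -> price rises to 10.1
```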

  22. Computer Telephony Services (CTS) Testbed [Figure: Internet callers reach the PSTN through Internet-to-PSTN gateways] • E.g., Dialpad.com & Net-to-Phone • Gateways as bottlenecks (limited PSTN access lines) • Use congestion pricing (CP) to entice users to • Talk shorter • Talk later • Accept lower quality

  23. Berkeley User Study • Goal: determine effectiveness of CP • Figures of merit • Maximize utilization (service not idling) • Reduce provisioning • Reduce congestion (reduced blocking probability) • User acceptance/reactions to CP • Talk shorter • Wait • Defer the call to another time • Use an alternative access device • Accept reduced connection quality

  24. Experiments • Vary Price, Quality, Interval of Price Changes • Experiments (illustrative policy functions follow below) • Congestion pricing: rate depends on current load • Flat rate pricing: same rate all the time • Time-of-day pricing: higher rate during peak hours • Call-duration pricing: higher rate for long-duration calls • Access-device pricing: higher rate for using a phone instead of a computer
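
The five policies compared in the experiments can be written as per-minute rate functions, as in the sketch below. The slide gives no concrete rates or thresholds, so all numbers here (including the 7-11pm peak window, echoed from a later slide) are illustrative.

```python
# Hypothetical per-minute rates for each pricing policy under comparison.

def flat_rate(minute, load, hour, device):
    return 10.0                                  # same rate all the time

def time_of_day(minute, load, hour, device):
    return 20.0 if 19 <= hour < 23 else 10.0     # higher during 7-11pm peak

def call_duration(minute, load, hour, device):
    return 10.0 if minute < 5 else 20.0          # long calls pay a higher rate

def access_device(minute, load, hour, device):
    return 20.0 if device == "phone" else 10.0   # phone costs more than computer

def congestion_pricing(minute, load, hour, device):
    return 10.0 * (1.0 + load)                   # rate rises with current load

# Example: the 10th minute of an 8pm phone call under time-of-day pricing.
print(time_of_day(minute=10, load=0.5, hour=20, device="phone"))  # 20.0
```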

  25. Experimental Setup & Limitations • Computers vs. phones to make/receive free phone calls • Different pricing policies; budget of 1000 tokens/week • Real-time pricing, connection quality & accounting information shown to users

  26. Flat Rate Versus Time-of-day • Peak hours from 7-11pm • Peak shifted! High bursts right before & right after peak hours

  27. Initial Results • Call-duration pricing • Hypothesis: fewer long-duration calls & more short-duration calls • Result: fewer long-duration calls, but no increase in short-duration calls • Congestion pricing • Congestion: two or more simultaneous users • Hypothesis: users talk less when they encounter CP • Result: each user used the service 8.44 minutes more (standard error 11.3); observed reduction in call session length when CP was encountered: 2.31 minutes less (2.68) • Not statistically significant (t-test) • Not enough users to cause much congestion

  28. Preliminary Findings • Feasible to implement/use CP in a real system • Pricing better utilizes existing resources, reduces congestion • CP is better than other pricing policies • Based on surveys, users prefer CP to flat rate pricing if its average rate is lower • Service providers can better utilize existing resources by giving users incentives to use CP • Limitations • Too few users • Results apply only to telecommunication services

  29. Clearinghouse [Figure: an IP-based core connecting GSM wireless phones, an H.323 gateway to the PSTN, VoIP clients (e.g., NetMeeting), web surfing/emails/TCP connections, and video conferencing/distance learning] • Vision: data, multimedia (video, voice, etc.) and mobile applications over one IP network • Question: How to regulate resource allocation within and across multiple domains in a scalable manner to achieve end-to-end QoS?

  30. Clearinghouse Goals • Design/build distributed control architecture for scalable resource provisioning • Predictive reservations across multiple domains • Admission control & traffic policing at the edge • Demonstrate architecture’s properties and performance • Achieve adequate performance w/o edge per-flow state • Robust against traffic fluctuations and misbehaving flows • Prototype proposed mechanisms • Minimal edge-router overhead for scalability/ease of deployment

  31. Clearinghouse Architecture • Clearinghouse distributed architecture: each CH-node serves as a resource manager • Functionalities • Monitors network performance on ingress & egress links • Estimates traffic demand distributions • Adapts trunk/aggregate reservations within & across domains based on traffic statistics • Performs admission control based on estimated traffic matrix • Coordinates traffic policing at ingress & egress points for detecting misbehaving flows

  32. Multiple-ISP Scenario [Figure: hosts connect through ingress routers (IR) and egress routers (ER) across ISP 1, ISP 2, … ISP n, … ISP m] • Hybrid of flat and hierarchical structures • Local hierarchy within large ISPs • Distributes network state to various CH-nodes and reduces the amount of state information maintained • Flat structure for peer-to-peer relationships across independent ISPs

  33. Illustration [Figure: within ISP1, hosts and edge routers sit in level-0 logical domains (LD0), each with a CH0 node, beneath a level-1 domain (LD1) with a CH1 node] • A hierarchy of Logical Domains (LDs) • E.g., LD0 can be a POP or a group of neighboring POPs • A CH-node is associated with each LD • Maintains resource allocations between ingress-egress pairs • Estimates traffic demand distributions & updates parent CH-nodes

  34. Peer-to-Peer Illustration [Figure: CH1 nodes of ISP1, … ISP n, … ISP m peer with one another at the top level; within each ISP, CH0 nodes manage the LD0 domains under LD1] • Parent CH-node • Adapts trunk reservations across LDs for aggregate traffic within an ISP • Appears flat at the top level • Coordinates peer-to-peer trunk reservations across multiple ISPs

  35. Key Design Decisions • Service model: ingress/egress routers as endpoints • IE-Pipe(s,d) = aggregate traffic entering an ISP domain at IR-s and exiting at ER-d • Reservations set up for aggregated flows on intra- and inter-domain links • Adapt dynamically to track traffic fluctuation • Core routers stateless; edge routers maintain aggregate state • Traffic monitoring, admission control, traffic policing for individual flows performed at the edge • Access routers have smaller routing tables and experience lower traffic aggregation relative to backbone routers • Most congestion (packet loss/delay) happens at the edges

  36. Traffic-Matrix Admission Control [Figure: host networks attach through ingress router IR-s (POP 1) and egress router ER-d (POP 2); a traffic monitor at the ingress reports to the CH, which Accepts or Rejects each new request Rnew] • Mods to edge routers • Traffic monitors passively measure aggregate rate of existing flows, M(s,d) • IR-s forwards control messages (Request/Accept/Reject) between CH and host/proxy • Estimates traffic demand distributions, D(s,:), and reports them to the CH • CH • Leverages knowledge of topology and traffic matrix to make admission decisions (see the sketch below)
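
A minimal Python skeleton of this decision logic, under our own assumptions (the class shape, reporting interface, and headroom factor are illustrative, not the authors' implementation): edge routers report measured aggregate rates M(s,d), and the CH admits a new flow only if its IE-pipe's measured load plus the request fits under the pipe's trunk reservation.

```python
from collections import defaultdict

class CHNode:
    """Sketch of a clearinghouse node managing IE-pipe reservations."""

    def __init__(self):
        self.measured = defaultdict(float)   # M[(s, d)]: measured rate, Mb/s
        self.reserved = defaultdict(float)   # trunk reservation per IE-pipe

    def report(self, ingress, egress, rate):
        """Edge routers periodically report aggregate per-pipe rates."""
        self.measured[(ingress, egress)] = rate

    def adapt(self, ingress, egress, headroom=1.2):
        """Track demand: re-reserve the measured rate plus some headroom."""
        pipe = (ingress, egress)
        self.reserved[pipe] = self.measured[pipe] * headroom

    def admit(self, ingress, egress, requested_rate):
        """Accept a new flow only if it fits in the pipe's reservation."""
        pipe = (ingress, egress)
        return self.measured[pipe] + requested_rate <= self.reserved[pipe]
```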

  37. Group Policing for Malicious Flow Detection [Figure: token bucket filters (TBFs) per flow group at IR-s (POP 1) and ER-d (POP 2); Request/Accept exchanges carry the assigned Fid, and the CH updates the TBFs] • CH assigns a Fid if the flow is admitted • Let FidIn = x, FidEg = y • Traffic Policer at IR-s aggregates flows based on FidIn for group policing • Traffic Policer at ER-d aggregates flows based on FidEg for group policing • Traffic Policer at IR or ER only maintains total allocated bandwidth for the group (aggregate state), not per-flow reservation status (a token-bucket sketch follows below)
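
The TBF is a standard token bucket; the compact Python version below keys one bucket per flow group (Fid), matching the slide's point that the policer holds only the group's total allocated bandwidth. Rates and burst sizes are illustrative.

```python
import time

class TokenBucket:
    """Token bucket filter sized to a flow group's total allocation."""

    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps          # group's total allocated bandwidth
        self.capacity = burst_bits    # maximum burst size
        self.tokens = burst_bits
        self.last = time.monotonic()

    def conforms(self, packet_bits: float) -> bool:
        """True if this packet stays within the group's profile."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + self.rate * (now - self.last))
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True
        return False   # out of profile: evidence of a misbehaving group

# One bucket per admitted group at the ingress/egress policer:
policers = {"x": TokenBucket(1e6, 1e5), "y": TokenBucket(2e6, 2e5)}
```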

  38. Service Composition • Assumptions • Providers deploy services throughout network • Portals constructed via service composition • Quickly enable new functionality on new devices • Possibly through SLAs • Code is initially non-mobile • Service placement managed: fixed locations, evolves slowly • New services created via composition • Across service providers in wide-area: service-level path

  39. Service Composition [Figure: composed service paths spanning Providers A, B, Q, and R, with replicated instances; example components include a video-on-demand server, transcoder, and cellular phone, and an email repository, text-to-speech engine, and thin client] • Reuse, Flexibility

  40. Architecture for Service Composition and Management (layered) • Application plane: composed services • Logical platform: peering relations, overlay network; service location; service-level path creation; network performance; handling failures (detection, recovery) • Hardware platform: service clusters

  41. Architecture [Figure: composed services carry traffic from source to destination across the Internet; the application plane rides on peering relations and an overlay network (logical platform) built from service clusters (hardware platform); peering: monitoring & cascading] • Service cluster: compute cluster capable of running services • Overlay nodes are clusters • Compute platform • Hierarchical monitoring • Overlay network provides context for service-level path creation & failure handling

  42. Service-Level Path Creation • Connection-oriented network • Explicit session setup plus state at intermediate nodes • Connection-less protocol for connection setup • Three levels of information exchange • Network path liveness • Low overhead, but very frequent • Performance Metrics: latency/bandwidth • Higher overhead, not so frequent • Bandwidth changes only once in several minutes • Latency changes appreciably only once an hour • Information about service location in clusters • Bulky, but does not change very often • Also use independent service location mechanism

  43. Service-Level Path Creation • Link-state algorithm for info exchange • Reduced measurement overhead: finer time-scales • Service-level path created at entry node (see the path-construction sketch below) • Allows all-pairs-shortest-path calculation in the graph • Path caching • Remember what previous clients used • Another use of clusters • Dynamic path optimization • Since session-transfer is a first-order feature • First path created need not be optimal
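
To make path creation concrete: the sketch below (our construction, not the authors' code) runs Dijkstra over the entry node's link-state view of the overlay and chains shortest-path legs through clusters hosting each required service. Its greedy per-leg choice can produce a suboptimal first path, which is exactly why dynamic path optimization is treated as a first-order feature.

```python
import heapq

def dijkstra(graph, src):
    """graph: {node: {neighbor: latency}}; returns {node: (cost, prev)}."""
    best = {src: (0.0, None)}
    heap = [(0.0, src)]
    while heap:
        cost, u = heapq.heappop(heap)
        if cost > best[u][0]:
            continue                      # stale heap entry
        for v, w in graph.get(u, {}).items():
            if v not in best or cost + w < best[v][0]:
                best[v] = (cost + w, u)
                heapq.heappush(heap, (cost + w, v))
    return best

def service_level_path(graph, src, dst, service_sites):
    """service_sites: ordered list of candidate-cluster sets, one per service.
    Returns the waypoint clusters of the composed path (greedy per leg)."""
    INF = float("inf")
    path, here = [src], src
    for candidates in service_sites + [{dst}]:
        best = dijkstra(graph, here)
        here = min(candidates, key=lambda n: best.get(n, (INF, None))[0])
        path.append(here)
    return path

overlay = {"entry": {"c1": 5, "c2": 8}, "c1": {"exit": 7},
           "c2": {"exit": 3}, "exit": {}}
# Greedy picks c1 (nearest candidate), though entry->c2->exit is cheaper overall.
print(service_level_path(overlay, "entry", "exit", [{"c1", "c2"}]))
# -> ['entry', 'c1', 'exit']
```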

  44. Session Recovery: Design Tradeoffs • End-to-end: • Pre-establishment possible • But failure information has to propagate • Performance of the alternate path could have changed • Local-link: • No need for information to propagate • But additional overhead

  45. The Overlay Topology: Design Factors • How many nodes? • Large number of nodes implies reduced latency overhead • But scaling concerns • Where to place nodes? • Close to edges so that hosts have points of entry and exit close to them • Close to backbone to take advantage of good connectivity • Who to peer with? • Nature of connectivity • Least sharing of physical links among overlay links

  46. Presentation Outline • Motivation • Project SAHARA • Initial Investigations • Testbeds • Summary and Conclusions

  47. Testbeds at Different Scale • Room-scale • Bluetooth devices working as ensembles, cooperatively sharing bandwidth within a microcell • Inherent trust, but finer-grained, intelligent, and active allocation as opposed to etiquette rules • How lightweight? Too heavyweight for Bluetooth? • Building-scale • Multiple wireless LAN “operators” in a building • Experiment with “evil operators”; third-party audit mechanisms to determine the offender • GoN offers alternative telephony, dynamic allocation of frequencies/time slots to competing/confederating providers

  48. Testbeds at Different Scale • Campus-scale • Departmental WLAN service providers with overlapping coverage outdoors • Regional-scale • Possible collaborations with AT&T Wireless (NTT DoCoMo), PBMS, Sprint?

  49. Presentation Outline • Motivation • Project SAHARA • Initial Investigations • Testbeds • Summary and Conclusions

  50. Summary • Congestion Pricing, Clearinghouse, Service Composition first attempts at service architecture components • Next steps • Generalization to multiple service providers • Introduction of market-based mechanisms: congestion pricing, auctions • Composition across confederated service providers • Trust management infrastructure • Understand peer-to-peer confederation formation vs. hierarchical overlay brokering
