
VTHD PROJECT (Very High Broadband Network Service): French NGI initiative


Presentation Transcript


  1. VTHD PROJECT (Very High Broadband Network Service): French NGI initiative • C. GUILLEMOT • FT / BD / FTR&D / RTA • christian.guillemot@francetelecom.com

  2. Presentation Overview • VTHD: French NGI initiative • project objectives • partnership • VTHD network • QoS engineering • rationale • service model • implementation issues • Provisioning & traffic engineering • dynamic provisioning with optical networks • interworking of IP and X-connected WDM networks • layer 2 traffic engineering • Conclusion

  3. VTHD Project objectives • To set up a strong partnership with higher education and research institutions within the framework of the French RNRT and European IST networking development programmes. • Open Internet R&D • To develop new applications and to ensure that they can be put into use in the broader global Internet. • To experiment with optical internetworking, with two joint technological objectives: • to assess scalable capacity-upgrading techniques • to assess the traffic management tools necessary to operate a QoS-capable test-bed. • To deploy and operate a high-performance network • that provides nationwide high-capacity interconnection facilities among laboratories at the IP level • that supports experiments with new designs for networking • with actual traffic levels consistent with the interconnection capacity.

  4. Partnership & Applications (1) • Partnership: • France Telecom / FTR&D • INRIA (the French national institute for research in computer science) & the Georges Pompidou European Hospital • telecommunications engineering schools: ENST; ENST-Br; INT • Institut EURECOM (ENST + EPFL, Switzerland) • Data applications: • Grid computing (INRIA) • middleware platform for distributed computing • high-performance simulation & monitoring • 3D virtual environments (INRIA) • Database recovery, data replication (FTR&D) • Distributed caching (Institut EURECOM)

  5. Partnership & Applications (2) • Video streaming • Video-on-demand, scheduled live transmission, TV broadcasting (FTR&D) • MPEG-1: ~1 Mb/s • MPEG-4: <~1 Mb/s (adaptive video streaming, multicast) • MPEG-2: ~6 Mb/s: high-quality video → TV over IP • Real-time applications • Tele-education (telecommunications engineering schools) • distance learning, educational cooperative environments, digital libraries • Tele-medicine (INRIA + Georges Pompidou hospital) • remote analysis & processing of high-definition medical images • surgery training under remote control • Voice over IP (FTR&D) • PABX interconnection: E1 2 Mb/s emulation • adaptive VoIP: hierarchical coding • Video conferencing (FTR&D)

  6. VTHD network • 8 points of presence • interconnected by an IP/WDM backbone • aggregating traffic from campuses • using Gigabit Ethernet point-to-point access links. • Transmission resources (access fibres, long-haul WDM optical channels) are supplied by the France Telecom Network Division out of spare resources. • VTHD network management is carried out by FT operational IP network staff in a « best effort » mode. • VTHD network usage • no survivability commitment (neither for link nor for router faults) • Acceptable Usage Policy: notified « experiments » • partners are committed to keeping a commercial Internet access

  7. Network Architecture • [Network map: POPs at Paris, Rouen, Caen, Rennes, Lannion, Nancy, Grenoble, Lyon and Sophia interconnected by WDM links, access routers at campus sites, a back-office in Paris and a link towards the Atrium network] • A weakly meshed topology moving towards • a larger POP connectivity • and peering with the IST Atrium network • 8 POPs connected to 18 campuses

  8. VTHD routers & DWDM systems: a multi-supplier infrastructure • [Diagram: Avici TSR, Juniper M40 and M20, Cisco 12000 and Cisco 6509 equipment serving FTR&D, FT/BD, INRIA, ENST, INT, EURECOM and HEGP sites; links are Gigabit Ethernet, STM-1/OC-3 and 2.5 Gb/s STM-16 POS, plus a 4-channel STM-16 ring on the DWDM systems]

  9. VTHD: Routing • [Diagram: IS-IS and I-BGP4 within AS VTHD; static routing and E-BGP4 towards the partner sites (FTR&D, INRIA, HEGP, INT, ENST, Eurécom) and RENATER; protection by IP rerouting (~10 s)]

  10. QoS engineering: rationale • Context • VTHD: an experimental & operational network • that encompasses the core network, the CPEs and the dedicated (V)LANs • that will progressively gain reachability to FTR&D operational hosts (VPN engineering permitting) • traffic: the VTHD network • interconnects distributed communities (FTR&D, INRIA, telecommunications engineering schools) • supports bandwidth-demanding applications for bulk traffic (metacomputing, web traffic, database backup) • supports applications that need QoS guarantees: VoIP, E1 virtual leased lines, 3D virtual environments, video conferencing • traffic load is expected to remain low in the VTHD core network, with occasional congestion events: a context indicative of actual ISP backbones. • Objective • to experiment with a differentiated, QoS-capable platform involving all architectural components, even if their functionalities are basic.

  11. Expected VTHD bulk traffic • [Diagram: 42 web clients and a grid cluster connected to web servers over 1 Gb/s links] • Bulk traffic is data traffic: • « web traffic »: the INRIA WAGON tool • WAGON is a software tool generating web requests • web-browsing user behaviour is simulated by a stochastic process, starting from data traces of actual web servers (a sketch of this kind of generator is given below) • web servers generate actual return traffic in response to the virtual users' requests • WAGON's primary objective is web-server architecture improvement • traffic per server: ≈ 160 Mbit/s (CPU limited), 7 servers • Grid computing (INRIA): • parallel computing using a Distributed Shared Memory between 16 (soon 32) PC clusters • processes (computing, data transfers) are synchronized by the grid middleware • data transfers are built on independent PC-to-PC file transfers • mean traffic level per cluster transfer: ≈ 500 Mbit/s • Database recovery (FTR&D) • 80 gigabyte transfers (~ a few 100 Mb/s ?)
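To make the stochastic web-traffic model concrete, here is a minimal, hypothetical Python sketch of a WAGON-style request generator (it is not INRIA's actual tool, and all parameter values are illustrative assumptions): browsing sessions arrive as a Poisson process, each session issues a random number of requests separated by exponential think times, and response sizes follow a heavy-tailed Pareto distribution.

    import random

    # Illustrative parameters -- assumptions, not WAGON's calibrated values
    SESSION_RATE = 5.0        # new browsing sessions per second
    MEAN_THINK_TIME = 7.0     # seconds between successive requests in a session
    MEAN_REQUESTS = 12.0      # average number of requests per session
    SIZE_ALPHA, SIZE_MIN = 1.3, 4_000   # Pareto tail index and minimum response size (bytes)

    def generate_requests(duration_s, seed=0):
        """Yield (request_time_s, response_size_bytes) events for a simulated trace."""
        rng = random.Random(seed)
        t = 0.0
        while t < duration_s:
            t += rng.expovariate(SESSION_RATE)                    # Poisson session arrivals
            n_requests = max(1, int(rng.expovariate(1.0 / MEAN_REQUESTS)))
            req_time = t
            for _ in range(n_requests):
                size = SIZE_MIN * rng.paretovariate(SIZE_ALPHA)   # heavy-tailed response size
                yield req_time, int(size)
                req_time += rng.expovariate(1.0 / MEAN_THINK_TIME)  # user think time

    if __name__ == "__main__":
        events = list(generate_requests(duration_s=60))
        total_bits = 8 * sum(size for _, size in events)
        print(f"{len(events)} requests, {total_bits / 60 / 1e6:.1f} Mbit/s on average")

In a WAGON-like setup the generated requests would be replayed by the client machines against the real web servers, so the measured back traffic is produced by actual server responses rather than by the model itself.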

  12. Actual VTHD bulk traffic

  13. QoS Architecture components • [Diagram: the VTHD backbone, its PEs and CPEs (Cisco 7206 routers, FE/GE switches), policy servers, the VTHD, FTR&D and back-office directories, DNS/DHCP, an OSS IP QoS manager (traffic matrix, modelling, correlation engine), an SLA repository, a PHB and admission-control engineering policy manager, measurement points and an operational interconnection facility] • Building blocks integral to the QoS engine: • VTHD service model (PHB, admission control) • performance metering (QoS parameter measurements) • modelling (traffic matrix, correlation engine) • policy-based management (policies, COPS protocol) • SLA

  14. VTHD backbone service model (1) • 3 service classes mapped to the EF and AF DiffServ classes, both for admission control and for service differentiation in the core network. • Scheme applied at the PEs' ingress interfaces • CPEs are in charge of flow classification, traffic conditioning and packet marking. • Class 1: Expedited Forwarding • intended for stream traffic • traffic descriptor: aggregated peak rate • QoS guarantees: bounded delay, low jitter, low packet loss rate • admission control: token bucket (peak rate, small bucket capacity); see the sketch below • suitable for high-speed links: each individual flow's peak rate is a small fraction of the link rate, so that variations in the combined input rate remain low • Class 3: Best Effort • intended for elastic traffic • no traffic descriptor, no admission control • best-effort delivery
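As a rough illustration of the Class 1 admission scheme above, the following Python sketch implements a peak-rate token-bucket check with a small bucket (illustrative only; the VTHD edge routers enforce this in their forwarding hardware, and the rate and bucket values here are assumptions):

    class TokenBucket:
        """Peak-rate policer with a small bucket, in the spirit of the EF admission check."""

        def __init__(self, rate_bps, bucket_bytes):
            self.rate = rate_bps / 8.0          # token refill rate in bytes per second
            self.capacity = bucket_bytes        # small bucket -> little tolerated burstiness
            self.tokens = bucket_bytes
            self.last = 0.0

        def conforms(self, packet_bytes, now_s):
            # Refill tokens at the contracted aggregated peak rate, capped at the bucket size.
            self.tokens = min(self.capacity, self.tokens + (now_s - self.last) * self.rate)
            self.last = now_s
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True                     # in profile: forwarded as EF
            return False                        # out of profile: dropped or remarked

    # Example (illustrative values): a 2 Mb/s aggregated peak rate with a 3 kB bucket
    policer = TokenBucket(rate_bps=2_000_000, bucket_bytes=3_000)
    print(policer.conforms(packet_bytes=1_500, now_s=0.01))

The small bucket is what makes this a peak-rate check: bursts longer than a couple of packets at line rate are immediately seen as non-conforming.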

  15. VTHD backbone service model (2) • [DiffServ VTHD node diagram: classifier, meter (conformance test) and remarking, feeding EF, AF1 and BE queues with absolute and feedback-driven algorithmic droppers, counters and a common scheduler] • Class 2: Assured Forwarding • intended for elastic traffic that needs a minimum throughput guarantee • traffic descriptor: ? • QoS guarantees: minimum throughput • admission control: based on the number of active flows & TCP (see the sketch below) • whatever the traffic profile, fair sharing of the dedicated bandwidth among flows ensures that flow throughput never decreases below some minimum acceptable level for admitted flows (after J.W. Roberts) • assumes that TCP flow control is a good approximation of fair sharing • a RED algorithm may improve fair sharing by penalizing aggressive flows. • Admission control should keep the cumulative EF & AF traffic load below congestion, and low enough for the closed-loop feedback to take effect properly.
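The flow-level admission idea for Class 2 can be sketched as follows (a minimal illustration of measurement-based admission in the spirit of the Roberts approach cited above, not the VTHD implementation; the capacity, target throughput and flow timeout are assumptions): a flow counts as active while packets keep arriving within a timeout, and a new flow is admitted only while the dedicated AF bandwidth divided by the number of active flows stays above the minimum throughput target.

    import time
    from collections import OrderedDict

    class AfAdmission:
        """Illustrative flow-count-based admission control for the AF class."""

        def __init__(self, af_capacity_bps, min_throughput_bps, flow_timeout_s=2.0):
            self.max_flows = int(af_capacity_bps // min_throughput_bps)
            self.timeout = flow_timeout_s
            self.flows = OrderedDict()           # flow_id -> last time a packet was seen

        def _expire(self, now):
            # Drop flows that have been silent for longer than the timeout.
            while self.flows:
                flow_id, seen = next(iter(self.flows.items()))
                if now - seen <= self.timeout:
                    break
                self.flows.popitem(last=False)

        def accept(self, flow_id, now=None):
            """Return True if this packet's flow is already admitted or can be admitted."""
            now = time.monotonic() if now is None else now
            self._expire(now)
            if flow_id in self.flows or len(self.flows) < self.max_flows:
                self.flows[flow_id] = now
                self.flows.move_to_end(flow_id)  # keep flows ordered by last activity
                return True
            return False                         # rejecting preserves the per-flow guarantee

    # Example: 100 Mb/s dedicated to AF, 1 Mb/s minimum per flow -> at most 100 active flows
    ac = AfAdmission(af_capacity_bps=100_000_000, min_throughput_bps=1_000_000)

With TCP approximating fair sharing, capping the number of active flows in this way is what keeps each admitted flow above the minimum acceptable throughput.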

  16. Closed-loop operation • loose traffic engineering • admission control: hose model - based on the local traffic profile and the per-interface SLA - not on the global network status - the local traffic profile per egress/destination is unknown • traffic dynamics - topology changes may require the admission control & service model to be re-engineered to meet the SLAs - the relevant time scales (minutes to hours) are not consistent with capacity planning.

  17. Implementation issues • Admission control: • EF class: PIRC is only supported on GE line cards on the Cisco GSR • PIRC is a lightweight CAR: no access-group, DSCP or qos-group matching is available; the rule matches *all* traffic inbound on that interface. • AF class: status information on active flows is not available (classification and filtering rules enforced at flow granularity with the Juniper Internet Processor II) • AF flow aggregates filtered on the basis of a token-bucket descriptor • appropriate token-bucket parameters? • Performance metering • off-the-shelf tools for passive measurements at the backbone border are not available at Gb/s rates • Policy-based management • the COPS protocol is not supported by the Cisco GSR, Juniper M40 or Avici TSR • & many other issues to be addressed: QoS policies, SLA/SLS definition, correlation engine, …

  18. Dynamic provisioning & optical networks • IP pervasiveness & WDM optical technologies are key drivers for: • high demand for bandwidth & lower transmission costs, which in turn lead to • exponential traffic growth and huge deployments of transport capacity. • The exponential nature of traffic growth shifts the network capacity-planning paradigm from: • fine network dimensioning to • coarse network dimensioning for pre-provisioned transport networks. • Coarse network dimensioning and an elastic demand for networking services shift the business model from demand-driven to supply-driven, which in turn calls for: • new service velocity: fast lambda provisioning • arbitrary transport architectures for scalability & flexibility: a shift from ring-based to meshed topologies • efficient and open management systems • wider SLA capability • rapid response to dynamic network traffic and failure conditions

  19. MP(Lambda)S optical networks • [Diagram: optical cross-connects interconnected in the data plane, with an out-of-band control channel forming the IP control network] • Software-centric architecture leveraging IP protocols • distributed link-state routing protocol: OSPF, (PNNI) • signalling: Multi-Protocol Label Switching (MPLS) / CR-LDP (RSVP-TE) • LDP queries OSPF for the optimal route; resources are checked prior to path set-up (see the constraint-based routing sketch below) • The IP control-plane interconnection facility is decoupled from the data plane. • One IP router address (control) + one “IP” switch address (data) per X-connect.
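The route computation hinted at above ("LDP queries OSPF for the optimal route, resources are checked prior to path set-up") amounts to a constraint-based SPF. Here is a minimal sketch of that idea, with a hypothetical topology and API rather than the routers' actual implementation: links without enough available bandwidth are pruned, then a shortest path is computed on what remains.

    import heapq

    def cspf(links, src, dst, bandwidth):
        """Constraint-based shortest path: prune links that cannot carry the
        requested bandwidth, then run Dijkstra on the remaining graph.

        links: iterable of (node_a, node_b, igp_cost, available_bandwidth)
        Returns the node list of a feasible shortest path, or None.
        """
        graph = {}
        for a, b, cost, avail in links:
            if avail >= bandwidth:                      # resource check before set-up
                graph.setdefault(a, []).append((b, cost))
                graph.setdefault(b, []).append((a, cost))

        best = {src: (0, None)}                         # node -> (distance, previous hop)
        heap = [(0, src)]
        while heap:
            dist, node = heapq.heappop(heap)
            if node == dst:
                path = [dst]
                while best[path[-1]][1] is not None:
                    path.append(best[path[-1]][1])
                return list(reversed(path))
            for nbr, cost in graph.get(node, ()):
                nd = dist + cost
                if nbr not in best or nd < best[nbr][0]:
                    best[nbr] = (nd, node)
                    heapq.heappush(heap, (nd, nbr))
        return None

    # Hypothetical topology: (a, b, IGP cost, available bandwidth in Gb/s)
    topology = [("Paris", "Rouen", 10, 2.5), ("Rouen", "Rennes", 10, 2.5),
                ("Paris", "Rennes", 30, 10.0)]
    print(cspf(topology, "Paris", "Rennes", bandwidth=5.0))   # -> ['Paris', 'Rennes']

The 5 Gb/s request cannot use the 2.5 Gb/s hops, so the computation falls back to the higher-cost direct link; the actual signalling (CR-LDP or RSVP-TE) then sets the path up explicitly.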

  20. VTHD Configuration • [Diagram: Avici TSRs at Rennes, Rouen, Paris AUB, Paris STL and Paris MSO interconnected through a Sycamore cross-connected network over trunks of 1 to 3 λ] • Sycamore opaque LSA features (summarized as a data structure below) • Switch Capability LSA • switch IP address • minimum grooming unit supported by the node • identified user groups that have reserved and available grooming resources • user-group resources that can be pre-empted • software revision • Trunk Group LSA • administrative cost of the trunk group • protection strategy for individual trunks within the trunk group • user-group assignment of the trunk group • conduit through which the trunks run • available bandwidth of the trunk group • trunks allocated for preemption
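To make the advertised information concrete, here is an illustrative data-structure view of the two LSA types listed above (field names and types are assumptions for readability; the real advertisements are OSPF opaque LSAs with vendor-specific encodings):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SwitchCapabilityLSA:
        # Per-node capabilities flooded into the optical control plane
        switch_ip: str
        min_grooming_unit: str                     # e.g. "STM-1" or "STM-16"
        user_groups: List[str] = field(default_factory=list)        # groups with reserved/available resources
        preemptable_groups: List[str] = field(default_factory=list) # user-group resources that can be pre-empted
        software_revision: str = ""

    @dataclass
    class TrunkGroupLSA:
        # Per-trunk-group attributes flooded into the optical control plane
        admin_cost: int
        protection_strategy: str                   # protection of individual trunks in the group
        user_group: str
        conduit_id: str                            # physical conduit the trunks run through
        available_bandwidth_gbps: float
        trunks_for_preemption: int

Flooding these attributes is what lets a constraint-based route computation, such as the one sketched under the previous slide, take grooming granularity, user groups and preemption into account rather than raw link cost alone.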

  21. Dynamic provisioning for λ trunks • TSR composite links: bundling of STM-16 links • a composite link is presented as a single PPP connection to IP and MPLS • IP traffic is load-balanced over the member links using a hash function (see the sketch below) • link failures are rerouted over the surviving member links in under 45 ms • may be faster than restoration at the optical level • Decoupling of the IP routing topology (software / control plane) from router throughput (hardware / data plane). • Relevant to IP/WDM backbone routers: the number of line cards scales with the number of λ × the number of fibres. • Dynamic λ provisioning for composite-link capacity upgrades • pre-provisioned transport network: a capacity pool • identically or diversely routed additional links (packet ordering preserved) • needs signalling between the router & the optical X-connect.
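A minimal sketch of the per-flow hashing described above (illustrative, not Avici's actual algorithm; the member-link names and hash fields are assumptions): hashing on the flow identifier keeps all packets of a flow on one member link, which preserves packet ordering, and removing a failed member simply rehashes its flows onto the survivors.

    import zlib

    class CompositeLink:
        """Illustrative per-flow load balancing over the member links of a composite link."""

        def __init__(self, member_links):
            self.members = list(member_links)       # e.g. bundled STM-16 member links

        def select_member(self, src, dst, proto, sport, dport):
            # Hash the flow 5-tuple so that one flow always uses the same member link.
            key = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
            return self.members[zlib.crc32(key) % len(self.members)]

        def member_failed(self, link):
            # The failed member's traffic is redistributed over the surviving links.
            self.members.remove(link)

    bundle = CompositeLink(["STM16-1", "STM16-2", "STM16-3", "STM16-4"])
    print(bundle.select_member("10.0.0.1", "10.0.1.1", "tcp", 1234, 80))
    bundle.member_failed("STM16-2")
    print(bundle.select_member("10.0.0.1", "10.0.1.1", "tcp", 1234, 80))

Note that a plain modulo rehash also moves some flows between surviving members; a production implementation would limit such reshuffling, but the per-flow ordering property is the same.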

  22. O-UNI signaling • [Diagram: client devices with UNI-C agents connected over the UNI to UNI-N agents on Optical Network Elements (ONE); UNI neighbour discovery (ND) runs across each UNI; the ONEs provide the internal connectivity of the optical network] • UNI signalling: • OIF draft: oif2000.125.3 • signalling protocols: RSVP-TE or CR-LDP • Avici & Sycamore first releases scheduled for next June • VTHD experiment: Avici / FTR&D / Sycamore partnership • UNI functions (illustrated by the sketch below): • connection creation, deletion, status enquiry • modification of connection properties • end points • service bandwidth • protection / restoration requirements • neighbour discovery • bootstrap the IP control channel • establish the basic configuration • discover port connectivity • address resolution • registration • query • client address types: IPv4, IPv6, ITU-T E.164, ANSI DCC ATM End System Address (NSAP) • COPS usage over the UNI for outsourcing policy provisioning within the optical domain
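As a purely illustrative sketch of the UNI-C side of the connection-management functions listed above (the class, message fields and identifiers are hypothetical; the real exchange is carried by RSVP-TE or CR-LDP signalling as per the OIF draft):

    from dataclasses import dataclass

    @dataclass
    class ConnectionRequest:
        # Attributes a UNI-C would signal to the UNI-N (names are illustrative)
        source_endpoint: str          # client-side end point
        dest_endpoint: str
        bandwidth: str                # e.g. "STM-16"
        protection: str               # e.g. "unprotected" or "protected"

    class UniClient:
        """Hypothetical UNI-C agent tracking the connections it has requested."""

        def __init__(self):
            self._connections = {}
            self._next_id = 1

        def create(self, request: ConnectionRequest) -> int:
            # In reality this triggers an RSVP-TE / CR-LDP exchange with the UNI-N.
            conn_id = self._next_id
            self._next_id += 1
            self._connections[conn_id] = request
            return conn_id

        def modify(self, conn_id: int, **changes) -> None:
            # Modification of connection properties (bandwidth, protection, ...).
            for name, value in changes.items():
                setattr(self._connections[conn_id], name, value)

        def delete(self, conn_id: int) -> None:
            self._connections.pop(conn_id, None)

        def status(self, conn_id: int):
            return self._connections.get(conn_id)

    uni = UniClient()
    cid = uni.create(ConnectionRequest("client-A:port1", "client-B:port4", "STM-16", "unprotected"))
    print(uni.status(cid))

Neighbour discovery, address registration and the COPS-based policy outsourcing mentioned above would sit underneath and alongside such an interface, but they are outside the scope of this sketch.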

  23. Conclusion • Where we stand now • a core French partnership • IP network deployment completed • partner usage and the related applications are ramping up • Sycamore platform lab tests • What's to come • VPN service provisioning (first IPsec-based, then MPLS-based) to enable secure usage from « regular » hosts • QoS-capable test-bed • IPv6 service provisioning • support for new applications/services within the RNRT / RNTL or IST frameworks?

  24. Thank you!
