AARNet 3: The Next Generation of AARNet

Presentation Transcript


  1. AARNet3: The Next Generation of AARNet. Status Report, 27 January 2004

  2. STOP NOW!! If you are displaying this page you aren’t using one of the custom slide shows :-)

  3. AARNet3: The Next Generation of AARNet. Techs in Paradise 2004, January 2004

  4. AARNet3: The Next Generation of AARNet

  5. Background to AARNet
  • AARNet Pty Ltd (APL) is a not-for-profit company owned by 37 Australian universities and the Commonwealth Scientific & Industrial Research Organisation (CSIRO)
  • Operates a national network providing commodity and research Internet access to members and clients
  • Clients include the Defence Science and Technology Organisation (DSTO), the National Library of Australia and the Australian Institute of Marine Science
  • Current network deployed in 1997, based on an ATM mesh between state and territory networks (Regional Network Organisations, RNOs)
  • Also operates an STM-1 ring to the USA (Hawai‘i and Seattle) on Southern Cross, primarily for research but with some commodity traffic via Pacific Wave
  • Currently buys commodity access at each RNO from Optus or Telstra

  6. Request For Proposal (RFP) Team
  • Mary Fleming (chair), Director - Business Development
  • Don Robertson, Deputy Executive Director
  • George McLaughlin, Director - International Developments
  • Steve Maddocks, Director - Operations
  • Mark Prior, Network Architect

  7. RFP Process
  • Issued RFP on 25 February 2003
  • RFP closed on 21 March 2003
  • Received 25 responses
  • Wide variety of responses
    • Some covered the whole of the RFP
    • Some very specialised
  • RFP Team divided the task
    • Domestic Transmission
    • International Transmission
    • Internet Transit
    • Other issues

  8. Design Issues
  • Redundancy & Resilience
  • Support for IPv4 and IPv6, unicast and multicast
  • Traffic Accounting and Monitoring
  • End-to-end performance measures
  • Support for QoS (DiffServ)
  • Support for large traffic flows, jumbo frames

  9. Redundancy & Resilience
  • Dual points of presence (POPs) in major capital cities
  • Diverse, dual unprotected national links
    • Will use MPLS Fast Reroute for protection
    • Provides the ability to burst above capacity
  • Use a single metro dark fibre pair to connect intra-city POP sites
    • Creates rings between cities
  • Provides an opportunity for members and customers to build diverse, redundant connections to AARNet (see the path check below)
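
A minimal sketch of the diversity goal on this slide, not AARNet tooling: given dual POPs per city joined by metro fibre and diverse inter-city trunks, check that a dual-homed member has two node-disjoint paths to a remote city, so no single site or link failure isolates it. The topology and names below are illustrative assumptions.

```python
# Illustrative only: verify path diversity in a dual-POP ring topology.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("member", "syd-pop-a"), ("member", "syd-pop-b"),  # dual-homed member
    ("syd-pop-a", "syd-pop-b"),                        # metro dark-fibre pair
    ("syd-pop-a", "mel-pop-a"),                        # inter-city trunk 1
    ("syd-pop-b", "mel-pop-b"),                        # diverse inter-city trunk 2
    ("mel-pop-a", "mel-pop-b"),
])

# Two node-disjoint paths means no single site or link failure cuts the member off.
paths = list(nx.node_disjoint_paths(g, "member", "mel-pop-a"))
print(len(paths), "node-disjoint paths")   # expect 2
for p in paths:
    print(" -> ".join(p))
```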

  10. IPv4 and IPv6
  • Native IPv4 and IPv6 (dual stack) network (see the sketch below)
  • Unicast and multicast for both IPv4 and IPv6
    • EFT (early field trial) IPv6 multicast, initially intra-domain only
  • Line-rate performance for IPv4 and IPv6
  • Peering to both the R&E and commodity Internet
  • Hexago IPv6 Migration Broker to aid member and client IPv6 deployment
  • DNS, the AARNet Mirror and Usenet News accessible over IPv4 and IPv6
  • Jumbo frames, 9000 bytes
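
One way to read "dual stack" from an application's point of view: the resolver returns both IPv6 (AAAA) and IPv4 (A) addresses, and the client tries each family in turn. A minimal standard-library sketch; the target host and port are illustrative:

```python
import socket

def connect_dual_stack(host: str, port: int) -> socket.socket:
    """Try every address family getaddrinfo returns (IPv6 first if listed)."""
    last_err = None
    # AF_UNSPEC asks the resolver for both IPv6 and IPv4 addresses.
    for family, type_, proto, _, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        try:
            s = socket.socket(family, type_, proto)
            s.connect(addr)
            return s                      # first family that works wins
        except OSError as err:
            last_err = err
    raise last_err or OSError("no addresses returned")

conn = connect_dual_stack("www.aarnet.edu.au", 80)  # illustrative target
print("connected over", "IPv6" if conn.family == socket.AF_INET6 else "IPv4")
conn.close()
```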

  11. Traffic Accounting and Monitoring
  • Flow-based accounting
  • Differentiate traffic into classes for billing (see the sketch below)
  • Scaling issues require the accounting function to be moved to the edge of the network
  • Use anycast addressing so data can be supplied to a central collector in an emergency
  • Centralise reporting to a POP-based server
  • Also provides AARNet with a measurement device on the network edge to improve performance monitoring
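
A minimal sketch of the flow-classification idea, not AARNet's billing system: classify NetFlow-style records into billing classes at the edge and aggregate byte counts per source. The class rule and all prefixes are hypothetical placeholders (RFC 5737 documentation addresses).

```python
# Illustrative edge accounting: bucket flow octets by (source, billing class).
from collections import defaultdict
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

RESEARCH_NETS = [ip_network("198.51.100.0/24")]   # placeholder R&E prefixes

@dataclass
class Flow:
    src: str
    dst: str
    octets: int

def classify(flow: Flow) -> str:
    dst = ip_address(flow.dst)
    if any(dst in net for net in RESEARCH_NETS):
        return "research"       # e.g. traffic to R&E peers
    return "commodity"          # everything else is billable transit

usage = defaultdict(int)        # (member source, class) -> bytes
for f in [Flow("203.0.113.5", "198.51.100.9", 1500),
          Flow("203.0.113.5", "192.0.2.80", 9000)]:
    usage[(f.src, classify(f))] += f.octets

for (src, cls), octets in sorted(usage.items()):
    print(f"{src} {cls}: {octets} bytes")
```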

  12. Other Issues
  • End-to-end performance measures
    • Desire to measure performance from the member site
    • Provide connectivity reports on core services
  • Support for QoS (DiffServ) (see the marking sketch below)
    • Need to support VoIP and video-over-IP traffic
    • Possibly introduce a scavenger service
  • Support for large traffic flows, jumbo frames
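
A minimal, Linux-oriented sketch of what DiffServ looks like at the sending host: set the DSCP bits so routers can place VoIP packets in an expedited class and bulk traffic in a lower-effort ("scavenger") class. The peer address is a placeholder.

```python
import socket

DSCP_EF  = 46   # Expedited Forwarding: low-latency class, typical for VoIP
DSCP_CS1 = 8    # commonly used for a "scavenger" / lower-effort class

def udp_socket_with_dscp(dscp: int) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The legacy TOS byte carries the 6-bit DSCP in its upper six bits.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return s

voip = udp_socket_with_dscp(DSCP_EF)
bulk = udp_socket_with_dscp(DSCP_CS1)
voip.sendto(b"rtp-like payload", ("192.0.2.10", 4000))  # placeholder peer
```

Marking at the host is only half the story; the network still has to map those DSCP values to queues, which is what the backbone QoS design addresses.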

  13. Australian Network

  14. 10Gbps Backbone
  • Provided on the “Nextgen Networks” network
  • Two fibre pairs on each path
    • STM-64 service provided on the first pair for inter-capital trunks
    • Second pair may be lit with CWDM to allow Gigabit Ethernet drop-off to regional members; other solutions to be considered
  • Member must provide the tail to the regional network

  15. Additional National Network Links
  • Need to provision a diverse East/West path (Melbourne/Adelaide/Perth), at least STM-4
  • Connectivity to Tasmania and the Northern Territory required
    • STM-1 (155Mbps) Melbourne to Hobart
    • E3 (34Mbps) Adelaide to Darwin via Alice Springs†
  † Subject to supplemental funding support

  16. Trans-Pacific Transmission
  • “SX TransPORT”: dual STM-64 (10Gbps) (see the rate arithmetic below)
    • Hawai‘i (Manoa) and Seattle (Abilene, CA*net 4)
    • Los Angeles (Abilene, CENIC, CUDI)
    • Look to add a Mauna Kea to Los Angeles path later
  • Dual STM-4 (622Mbps) for commodity Internet
    • PAIX Palo Alto (Silicon Valley)
    • Los Angeles
  • Add drop-offs to existing STM-1s (155Mbps)
    • University of the South Pacific, Fiji
    • Possibly Auckland, New Zealand
    • Connects to the 155Mbps path from Hawai‘i to Tokyo
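
The capacities quoted here all derive from the SDH base rate, so they are easy to sanity-check (these are line rates; usable payload is slightly lower):

```python
# STM-N line rate is N times the STM-1 base rate of 155.52 Mbit/s.
STM1 = 155.52  # Mbit/s
for n in (1, 4, 64):
    print(f"STM-{n}: {n * STM1:,.2f} Mbit/s")
# STM-1:    155.52 Mbit/s  (~155 Mbps as quoted)
# STM-4:    622.08 Mbit/s  (~622 Mbps)
# STM-64: 9,953.28 Mbit/s  (~10 Gbps)
```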

  17. AARNet’s Pacific Ocean links

  18. Services
  • DNS cache and secondary servers
  • Usenet News
  • Hexago IPv6 Migration Broker
  • DDoS detection and mitigation (see the sketch below)
    • Investigate appliances
    • Interest in automatic detection and filtering
    • Locate next to transit (and peering) links
  • AARNet Mirror
  • VoIP gateways
  • NLANR and/or RIPE Test Traffic Measurement
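
A minimal sketch of the "automatic detection" idea, not any particular appliance: watch per-source packet rates in flow records and flag sources that exceed a threshold as candidates for filtering. The threshold, records and addresses are illustrative assumptions.

```python
# Illustrative rate-based DDoS detection over (source, packet count) records.
from collections import Counter

PPS_THRESHOLD = 50_000          # hypothetical per-source packets/sec limit

def detect(flows, window_secs: float) -> list[str]:
    pkts = Counter()
    for src, packets in flows:  # flow records seen in the window
        pkts[src] += packets
    return [src for src, n in pkts.items() if n / window_secs > PPS_THRESHOLD]

sample = [("192.0.2.7", 4_000_000), ("203.0.113.5", 1_200)]
for src in detect(sample, window_secs=10.0):
    print("candidate for filtering:", src)   # here: 192.0.2.7 at 400k pps
```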

  19. Connections through the GigaPOPs

  20. National Rings
  • Inter-city trunks
    • Single, unprotected SDH circuit between backbone-class routers
  • Intra-city trunks
    • 10 Gigabit Ethernet connection between backbone-class switches
  • Backbone router and switch within a GigaPOP connected using 10 Gigabit Ethernet

  21. AARNet GigaPOP Requirements
  • Available for equipment and service installation in January 2004
  • Space for 4 consecutive 600x1000 45RU racks
  • Individually locked suite of racks, or located in a private, locked caged area
  • False floor with underfloor air-conditioning
  • VESDA
  • Fire suppression systems, such as FM-200 or Inergen
  • Dual, redundant AC power feeds to each rack
  • Backup AC power, with uninterrupted transition from mains to generator
  • Air-conditioning available on backup power
  • 24x7 secure access
  • 24x7 “remote hands” for basic hardware changes
  • 2 PSTN lines
  • Unencumbered access provided for any APL-nominated carrier
  • Access provided for other AARNet clients to install suitable communications equipment
  • Accessible via multiple carriers over diverse paths

  22. Intra-city POP Requirements
  • Power supply diversity: each POP fed by a different substation
  • Availability of diverse fibre paths between POP sites
  • Physical separation of at least 2km but no more than 20km

  23. Member Connections
  • Diverse connection to each POP
    • Two diverse, independent links, one to each POP
  • Dual connection connecting each POP
    • Two links over the same infrastructure to a single POP
    • AARNet trunks one link to the second POP through switches
  • AARNet-provided diversity
    • Single link to one POP; AARNet provides a LAN linking both AARNet POP sites and the member

  24. Member Connections
  • At least one AARNet-supplied and managed edge router
  • No firewall functionality; that is a member responsibility
  • Member provides the “last mile” link between institution and POP site
  • What technology will the members need?
    • Gigabit Ethernet over metro fibre is preferred
    • Managed Ethernet service
    • E3 microwave
  • Will members dual-home to both POP sites?

  25. Equipment
  • Core router
    • 40Gbps capable
    • Redundant power but not CPU
    • Packet over SDH to STM-64 (roadmap to STM-256)
    • Gigabit and 10 Gigabit Ethernet
  • Core switch
    • Pure L2 switching
    • Fast, Gigabit and 10 Gigabit Ethernet only
  • Member edge and POP-based “legacy” routers
    • 3 x Gigabit Ethernet (member, POP “A”, POP “B”)
    • 1 x Fast Ethernet dedicated to flow accounting
    • Capability to handle legacy (slow) interfaces

  26. Backbone Routers - Procket 8812
  • 22RU (95.3 x 44.2 x 64.8 cm)
  • 12 line cards, 48 media adapters (MAs)
  • Route processor
    • Procket-developed system control chip
    • 500MHz IBM PowerPC
    • 2GB main memory
    • 512MB Compact Flash (system program storage with redundant images)
    • 20GB hard disk drive (system log files)
  • 960Gbps, 1.2Bpps (see the check below)
  • Media adapters: 1-port STM-64, 1-port 10 Gigabit Ethernet, 10-port Gigabit Ethernet, 8-port STM-1/STM-4
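
Those two headline figures are consistent with each other: the quoted packet rate corresponds to full throughput down to an average packet size of about 100 bytes, as a quick check shows:

```python
# 960 Gbit/s divided by 1.2 billion packets/s, in bytes per packet.
print(960e9 / 1.2e9 / 8, "bytes/packet")   # 100.0
```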

  27. Core Backbone Switches - Cisco 6509
  • 20RU (84.4 x 43.7 x 46.0 cm)
  • 9-slot chassis
  • Supervisor 720
    • 720Gbps
    • 30Mpps centralized, up to 400Mpps for CEF720 interface modules equipped with dCEF (DFC3) or aCEF daughter cards
  • 4-port 10 Gigabit Ethernet
  • 48-port 10/100/1000 UTP-based Ethernet
  • 24-port SFP Gigabit Ethernet
  • Potential for service modules later

  28. Edge Routers - Cisco 7304
  • 4RU compact chassis
  • 4-slot modular system
  • Network Equipment Building Standards (NEBS) Level 3 compliance
  • NPE-G100 processor
    • Three onboard Gigabit Ethernet ports
    • 1GB of synchronous dynamic RAM (SDRAM)
    • 256MB of removable Compact Flash memory
    • Better than 1Mpps processing performance
  • Redundant power supplies
  • Front-to-back airflow for optimal cooling

  29. Current Status (1)
  • National Transmission
    • Confirmation of POP sites
    • Testing STM-64 circuits
    • Build new GigaPOP sites
    • Obtain fibre between GigaPOPs and COs
    • Solution for Tasmania and the Northern Territory
  • International Transmission
    • Planning progressing with US partner organisations on connecting “SX TransPORT”
    • STM-4 to Palo Alto should be enabled during February
    • Direct Asian links dependent on available funds and member demand

  30. Current Status (2)
  • Commodity Internet Transit
    • Access the commodity Internet in Palo Alto
      • Connected to the PAIX fabric
      • Obtain transit from MCI/UUNET and NTT/Verio
      • Peer with other organisations at PAIX
    • Add a second commodity POP in Los Angeles
      • Need to determine the data centre location and backhaul from Morro Bay (San Luis Obispo)
      • Will use the same transit providers as at Palo Alto

  31. Current Status (3)
  • Peering
    • Developing national and local (state) policies
    • A consideration for POP site location
  • Regional links
    • Investigate CWDM options
    • Possibly issue another RFP
    • Priorities are:
      • Inland Sydney/Brisbane via the telescopes
      • Coastal Sydney/Brisbane route
      • Sydney to Albury

  32. IPv6 Migration Broker (1)
  • What…
    • Hexago IPv6 Migration Broker, the tunnel broker used by FreeNet6
    • User setup of 6in4 tunnels via a web form (see the sketch below)
    • Can be used just for end systems, but can also assign a prefix for the local LAN
    • No routing functionality, static routing only
    • Open access, but targeted at the “local” community, not just AARNet members & clients
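
What the broker's web form ultimately automates on a Linux client is a protocol-41 (6in4) tunnel. A hedged sketch using standard iproute2 commands; every address below is a documentation placeholder, not a Hexago or AARNet value, and it needs root:

```python
# Illustrative 6in4 client setup via iproute2 (run as root).
import subprocess

BROKER_V4 = "192.0.2.1"        # broker tunnel endpoint (placeholder)
LOCAL_V4  = "203.0.113.9"      # client's public IPv4 address (placeholder)
CLIENT_V6 = "2001:db8::2/128"  # IPv6 address assigned by the broker (placeholder)

for cmd in (
    f"ip tunnel add tun6in4 mode sit remote {BROKER_V4} local {LOCAL_V4} ttl 64",
    "ip link set tun6in4 up",
    f"ip addr add {CLIENT_V6} dev tun6in4",
    "ip route add ::/0 dev tun6in4",   # static default IPv6 route, as the slide notes
):
    subprocess.run(cmd.split(), check=True)
```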

  33. IPv6 Migration Broker (2)
  • Why?
    • Members & clients are not ready to fully deploy IPv6 across their networks, but there is some interest within their organisations
    • Some common firewalls, e.g. PIX, don’t support IPv6
    • A tunnel allows traversal of firewalls, but doesn’t provide a firewall function unless the end point can do it

  34. IPv6 Migration Broker (3)
  • Experience…
    • Most configure an account but don’t configure a tunnel
    • Some set up a tunnel but, for whatever reason, only use it for a short time…
      • Perhaps just looking at the Kame :-) (the www.kame.net turtle only animates for IPv6 visitors)
      • Maybe forgot to add it to startup
    • A small number of users are a permanent fixture

  35. Transition Plan (1)
  • Next traffic peak in March/April 2004
  • The new network won’t be in place, so an interim plan is required to supplement the existing network
    • Build the Sydney (UTS) GigaPOP
    • Enable the STM-4 commodity link to Palo Alto
    • If necessary, migrate existing ATM STM-1s to a new 7304
    • Connect the 7304, GrangeNet and Sydney Basin to the GigaPOP

  36. Transition Plan (2)

  37. Transition Plan (3)
  • Add additional commodity capacity via the Optus Gigabit Ethernet solution in Melbourne and Brisbane
  • Add an Optus Gigabit Ethernet interface in Canberra; migrate the AARNet Mirror to this dedicated interface
  • Maximise ATM capacity in Adelaide
  • Use Amnet for commodity in Perth, and divert Adelaide commodity traffic to Perth if absolutely necessary
  • No changes necessary for Darwin and Hobart

  38. Deployment Summary (1)
  • Build GigaPOP at UTS
  • Connect UTS to PAIX for commodity Internet
  • Build second Sydney GigaPOP at Nextgen Networks (NXG) and link it to the UTS GigaPOP
  • Build NXG Melbourne GigaPOP
  • Link the NXG GigaPOPs in Sydney & Melbourne
  • Link NXG Melbourne to the 7304 at UniMelb, Thomas Cherry
  • Build Canberra GigaPOP at TransACT
  • Link TransACT to the UTS GigaPOP

  39. Deployment Summary (2)
  • Build GigaPOP at QUT and link to the UTS GigaPOP
  • Build GigaPOP at 10 Pulteney Street, Adelaide and link to NXG Melbourne
  • Build GigaPOP at the RBA Building, Perth and link to 10 Pulteney Street
  • Build and link the remaining GigaPOPs
    • UQ Prentice Centre
    • ANU
    • University of Melbourne
    • Hostworks
    • CSIRO ARRC

  40. Deployment Plan (Sydney)
  • Ensure fibre capacity between UTS and NXG for a 10 Gigabit Ethernet link
  • Build the first AARNet3 GigaPOP at UTS
  • Connect the existing GrangeNet and Sydney Basin routers, via a new backbone switch, to the GigaPOP
  • If necessary, retire the NSW RNO 7500 and move Optus ATM services to a “legacy” router
  • Build a connection to the Nextgen Networks Customer Connection Network (Nextgen CCN) for connectivity to Nextgen Networks-based interstate capacity
  • Acquire fibre for the NXG to Brookvale (SCCN) links

  41. Deployment Plan (Melbourne)
  • Deploy a “legacy” router in the Thomas Cherry building of the University of Melbourne to replace the VRNO 7500
  • Acquire additional Optus Gigabit Ethernet-based commodity capacity
  • Build new GigaPOP sites
    • Nextgen Networks, West Melbourne
    • Law Faculty building, University of Melbourne
  • Acquire fibre between UniMelb and the Nextgen Networks CO
  • Attempt to provide connectivity to Sydney via Nextgen Networks ASAP

  42. Deployment Plan (Canberra)
  • Build GigaPOP at TransACT
  • Replace the CARNO 7500 with a 7304
  • Provide a connection from the 7304 at ANU to the backbone router at the TransACT GigaPOP
  • Connect the TransACT GigaPOP to the Sydney UTS GigaPOP

  43. Deployment Plan (Brisbane)
  • Acquire supplemental commodity via the Optus Gigabit Ethernet service
  • Acquire transmission from both POP sites (UQ and QUT) to the Nextgen Networks CO
  • Build the first GigaPOP at whichever site gets transmission to the Nextgen Networks site first, and connect it to the UTS GigaPOP

  44. Deployment Plan (Adelaide)
  • A new GigaPOP needs to be built at 10 Pulteney Street (University of Adelaide, ITS)
  • Acquire diverse backhaul from the Nextgen Networks site to 10 Pulteney Street to handle the Adelaide/Melbourne and Adelaide/Perth circuits
  • Build/acquire an intra-POP link to the new Hostworks GigaPOP
  • Install a link between the GigaPOP at 10 Pulteney Street and the existing SAARDNet router in the Plaza building
  • Migrate existing member and client Ethernet services to the new equipment

  45. Deployment Plan (Perth)
  • Build AARNet3 GigaPOP at the RBA building
  • Connect the existing RBA-based PARNet router to the new GigaPOP
  • Build a connection from the GigaPOP into the Nextgen CCN to provide a connection to the STM-64 circuit to Adelaide
  • Build a second GigaPOP at ARRC and connect it to the first via 10 Gigabit Ethernet across existing fibre

  46. Deployment Plan (Services)
  • Deploy the first DNS cache in Sydney ASAP
  • Deploy the Usenet news system, possibly in PAIX
  • Deploy additional DNS cache systems and secondary DNS servers as GigaPOPs are linked
  • Migrate the IPv6 Migration Broker to the new network

  47. Indicative timeframe
  • Commodity link - PAIX to UTS - February 04
  • Intra-Sydney link - March 04
  • MEL/SYD link - March 04
  • CBR/SYD link - April 04†
  • BNE/SYD #1 link - May 04†
  • MEL/ADL/PER #1 link - May 04†
  • “SX TransPORT” - June 04
  • Commodity link - LA to SYD #2 - June 04
  † Dependent on POP site readiness and suitable CO/POP links

  48. Further Discussion

  49. www.aarnet.edu.au
