
PERN Network Analysis, 2010-2011



Presentation Transcript


  1. PERN Network Analysis, 2010-2011 Prepared by NUST SEECS in collaboration with SLAC, USA For full report please see: https://confluence.slac.stanford.edu/display/IEPM/Pakistani+Case+Study+2010-2011

  2. Focus of this presentation • Internet performance monitoring – motivation. • PERN network performance monitoring - progress. • PERN network analysis – (2010). • Outstanding issues that require attention. • Conclusion of analysis.

  3. Internet Performance Monitoring - Motivation

  4. World Internet Penetration in 2010

  5. Correlation of Internet Performance and the UNDP Human Development Index

  6. Role of network measurements in minimizing the Digital Divide • If the digital divide is not measured, there is no way to eliminate it. • SEECS-NUST, in collaboration with SLAC, provides: • A driving force to help minimize the digital divide. • Monitoring and tracking of bandwidth (BW) progress. • Raising awareness: locally, regionally and globally. • Technical help with modernizing the infrastructure: • Providing tools for effective use. • Designing, commissioning and development. • Encouraging and working on inter-regional projects: • Asia-Pacific: TEIN2, TEIN3 • US-Brazil: RedCLARA

  7. Overview and Introduction to PingER • Funded by HEC in Pakistan since 2008. • Measurements since 1995. • Reports link reliability and quality. • Complete overhaul by NUST researchers in 2003. • Complete update in progress. • Countries monitored: • Contain 98% of the world population. • 99% of the world's internet users. • 930 remote nodes at 786 sites in: • 164 nations; 55 monitoring nodes. • 169 nodes in 50 African countries. • Strong collaboration with ICTP/Trieste, Italy and NUST SEECS, Pakistan. • 35 monitoring nodes in Pakistan. • Excellent, vital work. • Countries: N. America (3), Latin America (21), Europe (30), Balkans (10), Africa (50), Middle East (13), Central Asia (9), South Asia (8), East Asia (4), SE Asia (10), Russia (1), China (1) and Oceania (4).

  8. How does PingER work? • Monitoring hosts ping remote hosts with 10 pings every 30 minutes. • From this data we measure: • minimum and average round trip times (RTT), • jitter (IPDV), • loss, • unreachability (all 10 pings fail), • and derive throughput and mean opinion score (MOS). • Data is gathered from monitoring sites on a daily basis by the archiving sites at NUST, SLAC and FNAL.
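The per-sample statistics above can be sketched in a few lines (a minimal illustration of the derivation, not PingER's actual code; the function name and example RTT values are made up):

```python
from statistics import mean

def summarize_pings(rtts_ms):
    """Summarize one PingER sample: the per-ping RTTs (ms) from a burst
    of 10 pings, with None marking a ping that got no reply."""
    replies = [r for r in rtts_ms if r is not None]
    return {
        "min_rtt": min(replies) if replies else None,   # minimum RTT (ms)
        "avg_rtt": mean(replies) if replies else None,  # average RTT (ms)
        "loss": 1 - len(replies) / len(rtts_ms),        # fraction of pings lost
        "unreachable": not replies,                     # all 10 pings failed
    }

# A sample with 2 of 10 pings lost (20% loss, host still reachable).
sample = [52.1, 50.4, None, 51.0, 50.2, 55.7, None, 50.9, 51.3, 50.6]
print(summarize_pings(sample))
```

Jitter (IPDV) would similarly be derived from the differences between successive reply RTTs.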

  9. PingER-Pakistan deployment

  10. Status of PingER-Pakistan deployment • 8 nodes till January 2009: • red = monitoring nodes • green = monitored nodes • RTT as seen from SLAC:

  11. Development and deployment in 2010 • Put together the PERN network monitoring infrastructure. • Possible because of: • PingER network administrators training workshops. • Site visits by the NUST SEECS team. • Strong collaboration between NUST and SLAC. • Installed PingER monitoring tools and started gathering data at 35 sites. • Working on an additional 25 monitoring sites. • Monitoring host – remote host pairs increased from 30 to over 500. • Deployment of the 3rd PingER archive site at NUST SEECS. • The other two being at SLAC, USA and FNAL, USA. • Pakistani data archived at NUST only. • Worldwide data also archived at NUST. • NUST manages the archive repository both at NUST and SLAC. • Deployment of visualization tools and aids. • Smokeping graphing utility. • Enhancement of PingER coverage maps. • Archival of traceroutes among all Pakistani universities.

  12. Issues during deployment • Difficulty varies from site to site. • Installation of the PingER software has not caused any delays. • 14+ years of development effort has gone into PingER. • NUST SEECS has been associated with it for 7+ years. • The delays (from installation to data gathering) have mostly been due to: • getting administrative approval within the university, • getting access to the concerned local people, • delays in making the DNS record entry (e.g. no DNS entry for the Lahore School of Economics; entries are required for the enhanced PingER tools). • Problems once a site starts taking data are: • poor power availability, • lack of backup power.

  13. Analysis from PingER-Pakistan data

  14. PERN2 Topology differences • Regions: • Peshawar • Islamabad • Lahore • Karachi • Quetta

  15. Unreachability • An unreachable host doesn’t reply to any pings. • We chose a reliable host at SLAC (pinger.slac.stanford.edu) and analyzed the unreachability of Pakistani hosts.

  16. RTT and packet loss (inter-city) • Pak-to-Pak RTT analysis. • The minimum RTT to Peshawar and Quetta (graph at left) appears to have dropped dramatically after April 2010. • Partly due to new hosts with lower RTTs coming online. • Also, most nodes shifted to the PERN network in April and May 2010. • Blue dots = median losses between pairs. • Red line = number of pairs. • Packet loss has increased over the last year. • This is due to the shift to the PERN network, which is running near maximum utilization.

  17. Intra-city, e.g. Islamabad • Large differences in minimum RTT. • PERN and NCP (N.E. Islamabad): • less than 10 ms (blue line, exactly 1.3 ms). • PERN and NUST (S.W. Islamabad): • 40-80 ms (red line, exactly 44 ms). • Presumably due to the public routing in the Islamabad region.

  18. Throughput • We derive the throughput from the loss and RTT measurements as: • throughput ≈ (1460 × 8) [bits] / (RTT [ms] × √loss) kbit/s • Note: this is not actual measured throughput. • Throughput has generally increased as the number of nodes has increased.
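The derived-throughput formula above (a Mathis-style estimate) is straightforward to compute; a minimal sketch, with illustrative RTT and loss figures:

```python
import math

def derived_throughput_kbps(rtt_ms, loss_fraction):
    """Derived TCP throughput (kbit/s): 1460*8 bits per full-size packet,
    divided by RTT (ms) times sqrt(loss). An estimate, not a measured rate."""
    return (1460 * 8) / (rtt_ms * math.sqrt(loss_fraction))

# e.g. 100 ms RTT and 1% loss -> 11680 / (100 * 0.1) = 1168 kbit/s
print(derived_throughput_kbps(100, 0.01))
```

The units work out because bits per millisecond equal kilobits per second; halving the loss (or the RTT at fixed loss) raises the estimate accordingly.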

  19. Mean Opinion Score (MOS) • The telecom industry uses MOS as a voice quality metric. • 1 = bad; 2 = poor; 3 = fair; 4 = good; 5 = perfect. • Typical range: 3.5 to 4.2. • Excellent connection: > 4.2. • The trend shows overall improvement (bottom graph) thanks to the PERN network. • If new hosts are included, performance drops a bit (left graph).
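For reference, MOS can be approximated from RTT, jitter and loss. The sketch below uses a widely cited simplified ITU-T E-model (an assumption on our part; the slides do not give PingER's exact formula):

```python
def mos_estimate(rtt_ms, jitter_ms, loss_pct):
    """Estimate MOS from RTT (ms), jitter (ms) and loss (%) via a
    simplified E-model: compute an R-factor, then map R to MOS."""
    # Effective one-way latency: half the RTT, a jitter penalty,
    # plus a fixed 10 ms codec/processing allowance.
    eff_latency = rtt_ms / 2 + 2 * jitter_ms + 10
    if eff_latency < 160:
        r = 93.2 - eff_latency / 40
    else:
        r = 93.2 - (eff_latency - 120) / 10
    r -= 2.5 * loss_pct                  # each 1% loss costs 2.5 R points
    r = max(0.0, min(100.0, r))
    # Standard R-to-MOS mapping (1 = bad ... ~4.4 = excellent).
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

print(round(mos_estimate(100, 5, 1), 2))   # good link
print(round(mos_estimate(400, 30, 5), 2))  # congested link scores worse
```

With 100 ms RTT, 5 ms jitter and 1% loss this lands above 4.2 (the "excellent" threshold quoted above), while a congested 400 ms / 5% loss path drops well below it.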

  20. Measuring Alpha • The speed of light in fibre is roughly 0.66*c. • 'c' = speed of light in vacuum, i.e. 299,792,458 m/s. • Using 300,000 km/s as 'c', light in fibre covers about 200 km of round-trip path per ms, i.e. about 100 km of one-way distance per ms of RTT, yielding: • Alpha = distance [km] / (100 [km/ms] × min RTT [ms]) • Alpha relates the distance between two hosts to the minimum RTT. • Values of alpha close to one indicate a direct path. • Small values usually indicate a very indirect path. • This assumes no queuing and minimal network device delays.
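The alpha calculation can be sketched directly from the formula above (the Karachi-Lahore distance used here is an assumed round figure, purely for illustration):

```python
def alpha(distance_km, min_rtt_ms):
    """Directness of a path: 1.0 means the min RTT is fully explained by
    light-in-fibre travel over the given distance. The 100 km/ms factor
    is 0.66c (~200,000 km/s) halved, since the RTT covers the path twice."""
    return distance_km / (100 * min_rtt_ms)

# Assumed ~1020 km between Karachi and Lahore: a 12 ms min RTT
# would give alpha = 0.85, i.e. a fairly direct route.
print(alpha(1020, 12))
```

A much longer min RTT over the same distance (queuing aside, an indirect physical route) drives alpha toward zero, which is how the indirect Islamabad-Quetta path shows up on the next slide.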

  21. Alpha • Direct links (alpha close to 1) between: • Karachi and Lahore • Karachi and Islamabad • Karachi and Peshawar • Very indirect link between Islamabad and Quetta (low alpha). • The route goes via Karachi in the south and then back northwards to Quetta. • More indirect links (lower alpha): • Islamabad and Lahore • Islamabad and Peshawar • Lahore and Peshawar • Islamabad is a common element in these links. • Islamabad's intra-city traffic experiences multiple hops (within a few square km). • Outbound Islamabad traffic also experiences a slightly indirect route (multiple hops). • Traffic passing between Peshawar and Lahore shows a much more direct route.

  22. UET Taxila Case Study

  23. The peculiar case of UET Taxila • UET Taxila shows unusual behaviour. • Monitoring of UET Taxila from SEECS started in September 2010. • Conclusions from the smokeping graph below: • The monthly average RTTs are typically 100 ms. • The min RTTs are under 10 ms. • Jitter/IPDV is typically quite large (> 20 ms). • Unreachability to UET Taxila is high from all over Pakistan. • Losses from SEECS are between 2.5% and 7%, which is high.

  24. UET Taxila - Congestion • Network congestion: • The smokeping plot shows min RTT < 50 ms and very large differences between min and max. • The region between Nov 18 and Dec 1 shows much variability. • At night the RTTs are low (since people are asleep). • RTTs increase as load goes up and links become congested. • Heavy queuing ensues, with losses and extended RTTs. • Could be a last mile problem: traceroutes reveal larger variation (congestion queuing) delays at the last few hops (Taxila routers). • Path: SEECS (Islamabad) → Rawalpindi Exchange → Router at Taxila → UET Taxila.

  25. UET Taxila – In-depth traffic analysis • The minimum RTT drops from 60 ms to 35 ms on or about November 10th, 2010. • To investigate further, we decided to archive pings from SEECS to UET Taxila. • Initial ping data analysis from Jan 14th to 27th: • Heavy network utilization from midnight to 4 am. • Possibly a data transfer between NCP and UET Taxila (research collaboration). • Some activity details are below.

  26. Concluding words ..

  27. Achievements • Extensive end-to-end (E2E) PERN network monitoring infrastructure. • In 2010, grew from 30 monitoring-remote node pairs to over 500, covering most of the major universities in the main regions of Pakistan. • Strong collaboration between NUST SEECS and SLAC. • Exchange visitors (NUST students/RAs) visit SLAC. • NUST manages the PingER project both at SLAC and NUST SEECS. • Students working on the PingER project invariably get fully funded PhD opportunities. • Development of new PingER tools. • Smokeping graphing utility. • MOS and Alpha incorporated. • All the tools have been replaced by NUST SEECS with newer versions. • Enhancement of existing PingER tools.

  28. Issues needing attention • Given the very good MOS, VoIP tools such as Skype and PERN conferencing should work well between PERN-connected hosts. • High variability in the reliability (unreachability) of hosts. • Loss of power and power shortages. • An effort needs to be made to understand and improve power reliability and the provision of backup for several sites. • Delays in installation and start-up of monitoring hosts. • Due to weak local support at some sites. • More work needs to be done to understand why Karachi looks bad. • Low values of alpha suggest that there may be a lot of indirect routing in the Islamabad region. • Further work with PERN is required to see if this can be remedied. • The PERN network configuration changes with time. • Archiving traceroutes between all Pakistani universities to record topology history. • PERN is encouraged to provide the addresses and locations of the routers and, if possible, the rough fibre routes or lengths between sites.

  29. Acknowledgements • Dr. Arshad Ali - arshad.ali@seecs.edu.pk, NUST • Dr. Les Cottrell - cottrell@slac.stanford.edu, Stanford University • Dr. Anjum Naveed - anjum.naveed@seecs.edu.pk, NUST • Dr. Adnan Khalid - adnan.khalid@seecs.edu.pk, NUST • Zafar Gilani – zafar@slac.stanford.edu, NUST/Stanford University • Fahad Satti – fahad@slac.stanford.edu, NUST/Stanford University • Muhammad Zeeshan - muhammad.zeeshan@seecs.edu.pk, NUST • Kashif Sattar - 08msitkashifsattar@seecs.edu.pk, NUST • Amber Zeb - 08mscseazeb@seecs.edu.pk, NUST • Sadia Rehman - 08mscsesrehman@seecs.edu.pk, NUST • Ajmal Farooq - ajmal.farooq@seecs.edu.pk, NUST • Imran Ashraf - imran.ashraf@seecs.edu.pk, NUST • SEECS Systems' Administration Team • Pakistani Universities collaborating with us. • We appreciate this but further assistance/input from HEC is requested. • Umar Kalim - umar.kalim@seecs.edu.pk, NUST/Virginia Tech

  30. Thank you! For full report please visit: https://confluence.slac.stanford.edu/display/IEPM/Pakistani+Case+Study+2010-2011
