
Can you green the academic institution?


Presentation Transcript


  1. Can you green the academic institution? David Wallom, Technical Manager, Oxford e-Research Centre, University of Oxford

  2. Overview • Academia in the UK and IT • Example greening solutions • Desktops • Research Computing • HTC • HPC

  3. HE/FE HAVE A LARGE FOOTPRINT • 760,000 PCs • 215,000 servers • 147,000 networked printers • 512,000 MWh of electricity • 275,000 tonnes of CO2 • Over £60 million in 2009

  4. Methods for driving change • Sticks: regulations; energy and other costs; stakeholder requirements • Carrots: financial and operational benefits; teaching and research

  5. Green desktop computing • Howard Noble (OUCS), Kang Tang (OeRC), David Wallom (OeRC) • Project supported by: Joint Information Systems Committee (JISC), Oxford University Estates Department, Oxford Environmental Change Institute (ECI), Oxford e-Research Centre (OeRC), Oxford University Computing Services (OUCS)

  6. PC Energy Report UK, 2009 (1E)

  7. Green IT Services in Oxford • Monitoring Service • Wake-on-LAN Service

  8. Monitoring Service • No installation • Single sign-on • Attribute-based authorisation (AuthZ) • Centrally managed data repository • Ping sweep (ICMP, network layer) vs. ARP scan (ARP, data-link layer), sketched below
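
A ping sweep works at the network layer (ICMP), so a host that firewalls echo requests looks switched off even when it is running; an ARP scan works at the data-link layer, where every live host on the local segment must answer. A minimal sketch of both, assuming Scapy is installed and the script has raw-socket privileges; the slides do not say how the Oxford service implements its scans internally:

```python
from scapy.all import ARP, Ether, ICMP, IP, sr1, srp

def arp_scan(subnet: str = "192.168.1.0/24") -> list[tuple[str, str]]:
    """Data-link layer: broadcast an ARP who-has for each address in the
    subnet; any live host must reply, even if it firewalls ICMP."""
    answered, _ = srp(Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet),
                      timeout=2, verbose=False)
    return [(rcv.psrc, rcv.hwsrc) for _, rcv in answered]

def ping(host: str) -> bool:
    """Network layer: one ICMP echo request; no reply may simply mean a
    host firewall, not that the machine is powered down."""
    return sr1(IP(dst=host) / ICMP(), timeout=1, verbose=False) is not None
```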

  9. Monitoring Service in Oxford [Architecture diagram: per-unit Gateways feed a central Monitoring Server; users sign in through WebAuth SSO; identity attributes come from the Oak LDAP service]

  10. History graph example

  11. Explore by category (cont’d) • Desktop • Server • Virtual Machine • Network device • Other

  12. Explore by category

  13. What do users need? A gateway server in the right place. • Option A: 1.2 GHz Marvell Sheeva CPU, 512 MB RAM, 512 MB flash memory, Gigabit LAN interface and USB 2.0 port • Option B: 1.6 GHz Atom CPU, 1 GB RAM, 80 GB SATA-2 HDD, Gigabit LAN interface and USB 2.0 ports

  14. Possible outcomes • Everybody turns off their computer • Nobody turns off their computer • Somewhere between

  15. Wake on LAN (WoL) Service • Encourage OFF by enabling ON • A decade-old standard (magic packet sketched below) • Supported by most motherboards • One gateway, two services
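
The magic-packet format behind WoL is simple and well documented: 6 bytes of 0xFF followed by the target MAC address repeated 16 times, sent as a UDP broadcast (conventionally to port 9). A minimal sender sketch; gateway placement matters because broadcasts do not cross subnet boundaries, which is why the service puts a gateway on each subnet:

```python
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

wake("00:1a:2b:3c:4d:5e")  # hypothetical MAC, sent from the target's subnet
```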

  16. Who can use the service? • Registered owner • Registered IT admin • Scheduled timer • Third party services

  17. Service interfaces • Machines and third-party services call the central server over SOAP, secured with WS-Security • Humans use a browser, sending HTTP requests authenticated with Kerberos

  18. WoL Service in Oxford [Architecture diagram: OUCS central WoL server and registration server, fronted by WebAuth SSO, driving per-subnet Gateways, with a link to the HFS service]

  19. Wake on LAN Service

  20. Secured communications: Central Server ↔ Gateway across the Internet, protected with X.509 signatures plus SSL encryption (sketched below)
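
One common way to realise "X.509 signature + SSL encryption" between a gateway and a central server is mutually authenticated TLS. A hedged sketch using Python's requests library; the URL and file names are hypothetical, and the real service may instead sign SOAP payloads via WS-Security as the service-interfaces slide suggests:

```python
import requests

# Hypothetical endpoint and credential files; the slides name the
# techniques (X.509 + SSL) but not the concrete API.
CENTRAL = "https://wol-central.example.ox.ac.uk/api/gateway/status"

resp = requests.get(
    CENTRAL,
    cert=("gateway.crt", "gateway.key"),  # gateway's X.509 cert + private key
    verify="central-ca.pem",              # trust only the service's own CA
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```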

  21. WoL outside Oxford [Architecture diagram: the central WoL server acts as a Shibboleth Service Provider; each participating institution's Identity Provider authenticates its own users against per-subnet Gateways, e.g. the WoL service at the University of Liverpool]

  22. Desktop computers and energy consumption: Cost = Power (kW) × Time (hours) × Number of devices × Price (£ per kWh) • Left on all year: 0.105 × 8,760 × 16,000 × 0.12 = £1,766,000 • Working hours only: 0.105 × 1,808 × 16,000 × 0.12 = £364,000

  23. Five steps: Estimate Scenario A: 100 computers (80W) and monitors (25W) left on all year will consume 92,000 kWh over the next year: • 49,400 kg CO2eq. • £11,000 (at 12p/kWh) Scenario B: Same stock switched off at the end of each working day (over night, weekends and 25 days of holiday) will consume 19,800 kWh over the next year: • 10,600 kg CO2eq. • £2,400 (at 12p/kWh)
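
The scenario arithmetic follows the formula on the previous slide. A small sketch that reproduces the figures; the carbon factor of ~0.537 kg CO2eq/kWh is inferred from the slide's own numbers (49,400 kg over 92,000 kWh), not an official conversion value:

```python
def fleet_energy(load_kw: float, hours_on: float, n_devices: int,
                 price_per_kwh: float = 0.12,
                 kg_co2_per_kwh: float = 0.537):
    """Annual kWh, cost (£) and emissions (kg CO2eq) for a fleet of
    identical devices running at a constant load."""
    kwh = load_kw * hours_on * n_devices
    return kwh, kwh * price_per_kwh, kwh * kg_co2_per_kwh

# Scenario A: 100 PCs (80 W) plus monitors (25 W), on all 8,760 h of the year
print(fleet_energy(0.105, 8760, 100))    # ~92,000 kWh, ~£11,000, ~49,400 kg
# Institution-wide figures from the previous slide (16,000 devices)
print(fleet_energy(0.105, 8760, 16000))  # ~£1,766,000 per year, always on
print(fleet_energy(0.105, 1808, 16000))  # ~£364,000 if off out of hours
```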

  24. Five steps: Research Energy Star has compiled a list of case studies (mostly for US organisations) and we have started to do the same at Oxford e.g. policy at OUCS: http://www.oucs.ox.ac.uk/greenit/oucs.xml

  25. Five steps: Implement Four tools: • Monitor and report • Switch computers on remotely • Automatically power down computers safely and reliably • Display real-time electricity meter data

  26. Five steps: Communicate • The Carbon Reduction Commitment league table • IT-related energy costs • Staff morale: It all comes down to protecting the brand of your group and the collegiate University as a whole

  27. Five steps: Share Write your approach up so others can learn from your experience. For more information about the 5 steps: http://www.oucs.ox.ac.uk/greenit/desktop.xml

  28. Participating Institutions • University of Liverpool, providing a location-independent national service • Institutions using the above service: Manchester, York, Southampton Solent

  29. Research Computing • Large contributor to institutional consumption • Crucial research facility with a significant user community from across the university constituency • Institutional HPC may consume ~4-5 MW • Utilisation is not always 100% • Therefore increasing efficiency is essential: every little step counts

  30. Possible solutions • Already available: • Virtualisation – OK for smaller services, not for large resource utilisation • Resource management – interface for starting and stopping workers within a task farm/Beowulf cluster

  31. Condor HTC Power Optimization • Integration between the Condor resource management system and power-control facilities • A separate daemon that manages which resources (worker nodes) are running, compared against the incoming task queue • A damping factor and 'round-robin' ordering of workers ensure systems aren't turned on and off too frequently (sketched below)
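
The slides describe the daemon's behaviour but not its code. A minimal sketch of one pass of the decision loop, under stated assumptions: Worker stands in for a Condor worker node, power_on/power_off stand in for the power-control facility (e.g. WoL or IPMI), and the damping factor caps how much of the apparent surplus or deficit is acted on per pass:

```python
import time
from dataclasses import dataclass

@dataclass
class Worker:
    """Stand-in for a Condor worker node with power-control hooks."""
    name: str
    powered: bool = True
    idle_since: float | None = None  # when the node last became idle
    last_change: float = 0.0         # time of last power-state change

    def power_on(self) -> None:   # would send a WoL packet / IPMI command
        self.powered = True

    def power_off(self) -> None:  # would trigger a clean shutdown
        self.powered = False

DAMPING = 0.5         # act on only half the apparent surplus/deficit
MIN_IDLE_SECS = 1800  # a node must idle this long before powering off

def balance(workers: list[Worker], queued_jobs: int,
            now: float | None = None) -> None:
    """Compare the incoming task queue with running workers, then wake
    or sleep a damped number of nodes."""
    now = time.time() if now is None else now
    running = [w for w in workers if w.powered]
    deficit = queued_jobs - len(running)
    if deficit > 0:
        # Round-robin: wake the nodes that have been off longest first,
        # spreading power cycles evenly across the pool.
        off = sorted((w for w in workers if not w.powered),
                     key=lambda w: w.last_change)
        for w in off[:max(1, int(deficit * DAMPING))]:
            w.power_on()
            w.last_change = now
    else:
        idle = [w for w in running if w.idle_since is not None
                and now - w.idle_since >= MIN_IDLE_SECS]
        for w in idle[:int(len(idle) * DAMPING)]:
            w.power_off()
            w.last_change = now
```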

  32. Powering down Supercomputers Dr Jon Lockley

  33. Project • 9-month JISC-funded demonstrator project • Oxford Supercomputing Centre and Streamline Computing • Possible to make 10-20% energy savings during normal operation

  34. Background • 25-30 UK HEIs with supercomputing resources • Energy use (directly and in associated facilities) is large • Any reduction in nominal consumption would result in a large saving • Resource utilisation is managed by a relatively small number of job schedulers • Little vendor drive to reduce consumption

  35. The plan • Actively control the compute nodes • Switch them on and off depending on load • Contain enough intelligence to power on the right number of nodes for the work queued • Job-scheduler-independent development to allow widespread use and integration by academic and commercial systems integrators in their management stacks (an adapter sketch follows) • Initial targets are PBSPro, Torque and SGE • Maui and other DRM/JS as resources allow
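
Scheduler independence suggests a thin adapter layer: the power controller talks to an abstract interface for queue depth and free nodes, and one subclass per scheduler (PBSPro, Torque, SGE, ...) does the translation. A sketch assuming Torque-style command-line tools; the parsing shown is illustrative, not the project's actual integration:

```python
import subprocess
from abc import ABC, abstractmethod

class Scheduler(ABC):
    """Abstract view of a job scheduler so the power controller never
    depends on PBSPro, Torque or SGE directly."""

    @abstractmethod
    def queued_jobs(self) -> int: ...

    @abstractmethod
    def free_nodes(self) -> list[str]: ...

class TorqueScheduler(Scheduler):
    """Illustrative parsing of Torque's command-line tools."""

    def queued_jobs(self) -> int:
        # Count jobs whose state column reads 'Q' in default qstat output.
        out = subprocess.run(["qstat"], capture_output=True, text=True).stdout
        return sum(1 for line in out.splitlines() if " Q " in line)

    def free_nodes(self) -> list[str]:
        # pbsnodes -a prints each node name followed by indented attributes.
        out = subprocess.run(["pbsnodes", "-a"],
                             capture_output=True, text=True).stdout
        nodes, current = [], None
        for line in out.splitlines():
            if line and not line[0].isspace():
                current = line.strip()
            elif "state = free" in line and current:
                nodes.append(current)
        return nodes
```

The balance() loop from the Condor slide would then take a Scheduler instance instead of Condor-specific calls, which is what lets one controller sit behind any of the target schedulers.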
