
Emerging Infrastructure and Data Center Architecture – Principles and Practice





Presentation Transcript


  1. Emerging Infrastructure and Data Center Architecture – Principles and Practice Richard Fichera Director, BladeSystems Strategy BladeSystem & Infrastructure Software

  2. Today’s Agenda
  • The problem – complexity and physics catch up with the data center
  • The building blocks – servers, storage and fabrics
  • Evolution in data center architectures
  • Infrastructure in motion – VMs, automation and orchestration
  • Infrastructure and data center transformation

  3. HP BladeSystem c-Class Server Blade Enclosure Background – Overwhelming Complexity and Increasing Scale

  4. Shifting Costs Define Future Investments Source: IDC, Virtualization and Multicore Innovations Disrupt the Worldwide Server Market, March 2007

  5. Infrastructure Building Blocks – Fundamental Physics and Trends

  6. Chef’s Special - Sautéed Data Center

  7. Legacy Thermal Management Was an Afterthought
  Preliminary studies suggest:
  • Overall PUE was often in the neighborhood of 2.0
  • More energy was used to remove heat than was used to do productive work
  • For decades the only real decisions were water or air, and how many CRACs
  [Chart: cooling loads dominate the data center – percentage of power used. Source: C.G. Malone & Uptime Institute]
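To make the PUE figure concrete, here is a minimal sketch of the metric: PUE is total facility power divided by IT equipment power, so a PUE of 2.0 means every kilowatt of compute drags along another kilowatt of cooling and distribution overhead. The wattage figures below are illustrative, not from the slide.

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """PUE (Power Usage Effectiveness) = total facility power / IT equipment power."""
    return total_facility_kw / it_kw

# Hypothetical legacy data center: 1 MW of IT load, 2 MW at the meter.
legacy = pue(total_facility_kw=2000.0, it_kw=1000.0)
print(legacy)  # 2.0 -> one kW of overhead for every kW of productive work
```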

  8. Power & Cooling Will Continue to Dominate Data Center Architecture
  [Chart: relative data center spending per server unit – power & cooling, compute, network, storage and admin – for 2007–2009, normalized so CY2008 = 100%. Message: collapse complexity and take cost out.]
  Data center spending based on IDC forecast and report: Datacenter of the Future II, January 2009.

  9. The Power & Cooling Chain is Complex – Optimizing from Chip to Facilities
  • Podular DC design: up to 45% cooling cost savings
  • Virtualization/consolidation: up to 40% reduction in power cost for data centers
  • Storage thin provisioning / dynamic capacity management: saves up to 45%
  • Advanced power management: 10%–20% (with group power management)
  • Basic blade enclosure: 25% cost savings to power & cool
  • Power-optimized servers: 18% less power
  • Power supplies: 90%+ efficient supplies
  • Low-power processors: up to half the power consumption
  • Power distribution: 3%
  • Disk drives: 2.5" at 9 W vs. 18 W for 3.5"
  • Up to 60% power savings overall
  Net-net: change the PUE from 2.0+ to 1.25 or less
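The individual percentages on this slide do not simply add up; assuming the savings are independent, each one multiplies the remaining power draw. The sketch below illustrates that compounding with a small subset of the slide's figures (the subset choice is mine, for illustration only).

```python
from functools import reduce

# Illustrative subset of the slide's fractional savings, treated as independent.
savings = {
    "blade enclosure": 0.25,
    "power-optimized servers": 0.18,
    "virtualization/consolidation": 0.40,
}

def remaining_fraction(reductions) -> float:
    """Compound independent fractional reductions into the fraction of power left."""
    return reduce(lambda acc, s: acc * (1.0 - s), reductions, 1.0)

rem = remaining_fraction(savings.values())
print(f"{rem:.3f}")  # 0.369 -> roughly 63% total reduction from just three measures
```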

  10. Servers – Market and Drivers
  • Market
    • The x86 server market represents approximately 8,000,000 servers per year, and will remain the center of innovation and investment
    • The market is split 35/50/15 among tower/rack/blade form factors, with blades and extreme scale-out as the fastest-growing segments
  • Key drivers
    • Acquisition cost will always be important
    • Energy consumption has become a priority, but focus will shift to larger aggregates as marginal gains on servers get smaller
    • Total infrastructure cost, including management, becomes a focus at a system/DC level
    • This is the jumping-off point for debates about unified fabrics, shared and virtualized I/O, new virtualization management models, etc.

  11. Server Performance
  • Server performance will continue to increase
    • By 2010, a 2-socket server will have approximately 4–6 times the performance of the same server in 2008
  • Continued improvements in architecture along with density
  • Niche architectures will have the freedom to embed other system elements on chip – comms, crypto, etc.

  12. Processor Trends
  • Silicon compaction continues (65 nm, 45 nm, 32 nm)
  • Higher levels of functional-block integration → large gate counts
    • Caches, memory controller(s), I/O, TPM
  • All server processors are moving to NUMA using processor links (no more FSB)
    • More efficient coherency protocols (Intel: Home Snooping; AMD: HT Assist)
  • More, and faster, interfaces → large pin-count packages
  • One or more processor links → more flexible designs
    • Intel QPI
    • AMD HT
  • Multiple memory links → flexible memory configurations
  • Integrated I/O links (PCIe 3, USB 3) → I/O closer to processor & memory
  • Core count increases continue (4, 6, 8, 10, 12, 16)
  • Core clock frequency increases are slowing down (topping out around 3 GHz)
  • More physical memory address bits (Intel: 46; AMD: 48)
  • Wide range of power (TDP) bins (Intel: 37–150 W; AMD: 45–140 W)
    • Depends on core count, cache size, coherent link count

  13. Memory Trends
  • Increasing DDR3 speeds, with tradeoffs on the number of DIMMs per channel (DPC)
  • DRAM chip capacity increases
  • DIMM capacity increases
    • 8 GB DIMMs will be linearly priced in 2010
  • Reduced DIMM power rail and consumption
  • DIMM interfaces (DDR, SMI/VMSE) changing to address DDR bus limitations
  • Non-volatile components will add to the memory/storage hierarchy

  14. Server Futures
  • Continued escalation of core count and memory
  • Expect differentiation in the choice of on-board peripherals and accelerators at both chip and board level
  • Continual pressure toward denser, higher-layer-count boards
    • "Communications radius" effects, SI and connector limits
  • Changing options for design
    • Link-based connections for more flexible design
    • More options for local and near storage
  • Design differentiation as requirements bi/trifurcate
    • GP, scale-out, virtualization designs
  • Value increasingly in packaging, rack-scale and larger integration

  15. Changing Focus for Server Design
  • Server design has been focused on the chip-to-chassis domain
  • Increased demand for scale-out is shifting the focus to rack, module and entire-DC-scale designs
  • Server design is increasingly merging with DC design for rack-level and larger aggregates
  • As designs become more aggregate, the optimizations become more complex

  16. Storage Density
  • Storage density will follow a pattern similar to server performance
    • By 2010–11, usable densities will exceed 1 PB/rack
  • Expect significant changes and differentiation in
    • Storage services
    • Packaging
    • Choices of connection fabric
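A quick back-of-the-envelope check makes the 1 PB/rack claim tangible: at the per-drive capacities the deck projects for 2010–11, how many drives does a rack need? The drive capacities below are the slide's projections; the calculation itself is just illustrative arithmetic using decimal (vendor-style) units.

```python
def drives_per_rack(rack_capacity_tb: float, drive_tb: float) -> float:
    """Number of drives needed to reach a given rack capacity (decimal units)."""
    return rack_capacity_tb / drive_tb

PB = 1000.0  # 1 petabyte expressed in terabytes

for drive_tb in (0.5, 1.0, 2.0):
    n = drives_per_rack(PB, drive_tb)
    print(f"{drive_tb} TB drives: {n:.0f} per rack")
# 0.5 TB drives need 2000 per rack; 1 TB drives, 1000; 2 TB drives, 500
```

At 2000 small-form-factor drives per rack, packaging density (not raw capacity) becomes the limiting factor, which is why the slide pairs the density claim with changes in packaging and fabric.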

  17. Block Storage Device Trends
  • Cost competitiveness drove HDD industry consolidation
  • HDD interfaces are moving to fast serial links: SAS/SATA
    • SAS is growing to be the interface of choice in the enterprise
    • FC HDD growth is flat or shrinking
  • Switched SAS also enables a storage fabric for shared block storage
    • But lots of things need to be developed for complete solutions
  • HDD capacities continue to increase, while rotational speed tops out at 15K rpm
    • HDD areal density grows ~30–40% annually [SFF 0.5 TB in ’10, 1 TB in ’11]
  • SFF dominates in the enterprise
    • Enterprise SFF 10K adoption is growing (largest segment) while LFF 15K volume shrinks
  • Flash storage is disruptive
    • SSD $/GB crosses over with SFF SAS 15K rpm in ’11–’12
    • 256 GB/512 GB in ’10, 1 TB in ’11
    • PCIe-based flash storage significantly improves storage I/O
    • New storage hierarchies and models, including memory cache, disk cache, I/O accelerators

  18. Storage – Virtualized Data Path & Services
  [Diagram: reference storage architecture – servers connect through a storage virtualization manager (control path) and data path modules to LUNs on physical media from IBM, Sun, EMC and HP. Services include snapshots, clones, migration, thin provisioning/dedup and mirroring.]

  19. Data Center Logical Architecture – Changing Resource Distribution Strategies
  • Changes in density and fabric are changing the approach to modularity of storage and servers
  • Converged fabrics allow more flexibility in location and reduce interconnect costs
  • Local "mini-SANs" such as switched SAS allow refactoring storage to bring it near consumers and producers – and away from the SAN team
  • Increasingly flexible storage services models
  [Diagram: WAN & campus core → data center core → distribution/aggregation (SLB, firewall) → access (server edge) → rack-mount server farms, blade server chassis and virtual machines, with SAN, SAN storage and fabric storage.]

  20. Physical Architecture – Is There a Podular DC in Your Future?
  • Lower TCO
    • Better PUE and higher power/cooling efficiency vs. a traditional DC
  • Geographic flexibility
    • Can deploy closer to customers, and in locales not suitable for brick & mortar
    • Controlled/hybrid co-lo environments
  • Faster time to revenue for customers
    • Brick & mortar: 18+ months to design and build vs. a container in under 6 months
  • Improved return on capital
    • "Pay as you go" vs. $millions of up-front investment for brick & mortar
    • More efficient procurement chunk size – a rack is too small, a data center takes too long
  • Scalable with enterprise architecture
    • Core / regional gateway / point-of-purchase

  21. Virtualization, Orchestration, Automation and Infrastructure Agility

  22. Virtualization – A Blessing & a Curse
  • Virtualization – of servers, storage, networks and I/O hardware – brings major benefits…
    • Capital resource efficiency (the initial sell)
    • Standardization and ease of migration
    • A gateway to adaptive architectures
  • …as well as significant burdens – management, management, management
    • Are you substituting one vendor lock-in for another?
    • How many more tools do you want to add to your environment?
    • How do you integrate the physical and virtual management layers?
  • Be prepared for major innovation and vendor conflict in this arena for the next five years
  • You need to have a strategy, metrics and a roadmap

  23. Enterprise Customers Continue to Be Challenged Managing Infrastructure
  • Server admin and management costs grow with the installed base of servers
  • Basic operations such as installing a server typically take weeks, requiring manual coordination across multiple customer organizations
  • Power, cooling and facilities limitations continue to loom as limits – the "$10 million server"
    • This will drive multiple deployment options, such as cloud, in an attempt to tap economies of scale
  • Virtualization helps some things, but potentially complicates the management environment
  • Expect continued experimentation in virtualization management models, and expanded virtualization options

  24. Typical Infrastructure Deployment – Built One Unit at a Time
  • Many people • Many manual steps • Many weeks • Human error
  Flow: line of business selects application → get purchase approvals → project planning meetings (and more meetings) → order server, facilities, storage, network → server delivery → move to test center, unpack, inventory → build process → move to production environment → change control approvals → re-cable and move into production

  25. The Goal – Automated Provisioning, Provisioned When Needed
  • Fewer people and steps • Guaranteed compliance • Integrated information • Same interface for virtual and physical resources
  Flow: line of business selects application → choose infrastructure/application template (right size? right app?) → tool determines available resources and when → verify resource allocation → push "go" → workflow starts automatically → a full application infrastructure up and running!
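The automated flow above can be sketched as a simple ordered workflow. This is a minimal illustration with hypothetical step names and a hypothetical `provision` function – in practice each stub would call out to a real orchestration engine, CMDB or resource pool.

```python
from dataclasses import dataclass, field

@dataclass
class ProvisionRequest:
    """Tracks one provisioning run; the log records each automated step."""
    template: str
    log: list = field(default_factory=list)

    def step(self, name: str) -> None:
        self.log.append(name)

def provision(template: str) -> ProvisionRequest:
    """Hypothetical end-to-end flow matching the slide's sequence of steps."""
    req = ProvisionRequest(template)
    req.step("choose infrastructure/application template")  # right size? right app?
    req.step("determine available resources and when")      # tool-driven discovery
    req.step("verify resource allocation")
    req.step("start workflow")                              # push "go"
    req.step("application infrastructure up and running")
    return req

r = provision("web-3tier")  # "web-3tier" is a made-up template name
print(len(r.log))  # 5 automated steps, no manual hand-offs between teams
```

The point of the sketch is the contrast with the previous slide: the same logical steps exist, but they run as one workflow against one interface for both virtual and physical resources.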

  26. What You Need to Add
  • Comprehensive VM management CONVERGED with physical management
    • Power-aware load placement and movement
    • Physical/logical discovery & visualization
    • Multi-tier provisioning of VMs, networks and applications
    • Lifecycle management of VMs
    • Resilience, changing how we do HA
  • And the good news is that you have at least 100 niche/startup vendors to choose from
    • As well as the feuding major vendors
    • We ALL want to be your management console of record

  27. Infrastructure Transformation – How to Get There From Here

  28. The Path to Infrastructure Transformation
  [Diagram: a staircase from the current state – Standardize → Virtualize (VMs, storage, networks) → Automate – leading to an open future: "the future is cloudy" – physical refresh? outsource? what next?]

  29. Some Essential Principles
  • Draconian standardization
    • It’s really amazing how simple you can make an enterprise environment if you just don’t let anyone complain (or at least stop listening to them)
  • Vendor simplification
    • Software is particularly important
    • You may want to maintain very coarse-grained hardware heterogeneity for vendor management
  • Almost always, fewer is better
    • Locations, software titles, options
  • Once standardization has been in place for a full dev cycle, requests for variations become few and far between

  30. Data Center Transformation – Workstream Approach
  • Define the optimal to-be architecture, migration approach, sourcing strategy and business case by workstream
  • Define dependencies and ordering between workstream items
  • Prioritize high-ROI opportunities
  • A holistic, total implementation provides the highest ROI

  31. Data Center & IT Transformation – What Can You Achieve?
  Reduce cost
  • Overall lower total IT costs – your mileage will vary
  • Up to 50% savings from IT consolidation and apps rationalization
  • Up to 60% energy savings from modern facilities
  • Up to 25% real estate and location savings
  Mitigate risk
  • Centralize & standardize IT and data center processes
  • Establish compliance with industry best practices
  • Protect company revenue, brand & reputation from outage or disaster
  Grow business
  • Timely response to new business initiatives (that old alignment thing)
  • Spend more time focusing on business value instead of fighting fires and managing MAC addresses

  32. Best Practices to Achieve the Vision
  • Simplify through standardization: standard & consistent data center architecture and design; standard hardware, tools and infrastructure
  • Establish a PMO for governance: provides a framework for how the effort will be structured and who will make decisions
  • Go modular: allows for fast build, flexibility, scalability and efficiencies; isolates and separates risk
  • Break the plan into bite-size chunks: divide into workstreams, engage the proper expertise, identify clear goals & deliverables by quarter
  • Synchronize – timing is everything: facilities must be ready to receive servers; servers must be ready to receive applications
  • Define one set of processes: a properly documented single set of processes aligned to the ITIL V3 model ensures desired outcomes and allows for automation
  • Actively manage and communicate change: change management and a well-executed communication strategy are critical for success

  33. What Lies Beyond: Cloud Computing – The Vision
  A pool of abstracted, highly scalable and managed compute infrastructure capable of hosting end-customer applications and billed by consumption.
  Cloud computing’s ecosystem will include Google-like public clouds as a platform for applications, and virtual private clouds – third-party clouds, or segments of the public cloud with additional features for security, compliance, etc. The data center of the future will also include private (internal) clouds, an extension of virtualization used primarily for their capital or operational efficiencies. For some applications, data just won’t leave the enterprise.
  “If managing a massive data center isn’t a core competency of your business, maybe you should get out of this business and pass the responsibility to someone who has.” – Amazon CTO Werner Vogels, 2007 Next Generation Data Center Conference

  34. Clouds – A Long Haul – The Reality
  • Good concept, great marketing buzz
  • Hey, where are the applications?
  • Welcome to the world of almost-consistent data
  • Where did you say my data is?
  • Did someone say standards?
  • Hi, I’m Coke. Am I sharing my cloud with Pepsi?
  • What’s the difference between a well-designed shared services platform and an internal cloud?
  • But it does have a future…

  35. Thank You Richard Fichera Director, BladeSystems Strategy BladeSystem & Infrastructure Software richard.fichera@hp.com

  36. Expanding on the Themes at NGDC
  • Beyond Power and Cooling: Improving Data Center Productivity – John Pflueger, Technology Strategist, Dell
  • How the Sustainable Data Center Will Reduce Costs and Improve IT – Doug Washburn, Forrester Research
  • Creating the Most Efficient, Resilient and Sustainable Data Centers – Patrick Leonard, Senior Manager, Strategic Initiatives, Equinix, Inc.
  • Working With Our Utilities: Getting What You Need When You Want It – Mark Bramfitt, Principal Program Manager, PG&E Corporation
  • From Monitoring to Management: Gaining Comprehensive Visibility into Data Center Operations – Traci Yarbrough, Product Marketing Manager, Aperture Technologies
