
Inside Windows Azure: the cloud operating system



Presentation Transcript


  1. SAC-853T Inside Windows Azure: the cloud operating system Mark Russinovich Technical Fellow Microsoft Corporation

  2. About Me • Technical Fellow, Windows Azure, Microsoft • Author of Windows Sysinternals tools • Coauthor of Windows Internals book series • With Dave Solomon and Alex Ionescu • Coauthor of Sysinternals Administrator’s Reference • With Aaron Margosis • Author of Zero Day: A Novel

  3. Agenda • Windows Azure Datacenter Architecture • Deploying Services • Maintaining Service Health • Developing and Operating Windows Azure

  4. Windows Azure Datacenter Architecture

  5. Datacenter Architecture [Diagram: datacenter routers feed aggregation routers and load balancers (LBs), which connect to top-of-rack (TOR) switches; each rack contains nodes fed by power distribution units (PDUs)]

  6. Windows Azure Datacenters

  7. Datacenter Clusters • Datacenters are divided into “clusters” • Approximately 1,000 rack-mounted servers (we call them “nodes”) • A cluster provides a unit of fault isolation • Each cluster is managed by a Fabric Controller (FC) • FC is responsible for: • Blade provisioning • Blade management • Service deployment and lifecycle [Diagram: the datacenter network connects clusters 1 through n, each managed by its own FC]

  8. The Fabric Controller (FC) • The “kernel” of the cloud operating system • Manages datacenter hardware • Manages Windows Azure services • Four main responsibilities: • Datacenter resource allocation • Datacenter resource provisioning • Service lifecycle management • Service health management • Inputs: • Description of the hardware and network resources it will control • Service model and binaries for cloud applications [Diagram: the kernel/Fabric Controller analogy: on a server, the Windows kernel runs processes such as Word and SQL Server; across the datacenter, the Fabric Controller runs services such as Exchange Online and SQL Azure]

  9. Cluster Resource Description • The Fabric Controller is bootstrapped with a Utility Fabric Controller (UFC) • Single-instance FC • Used for bootstrap and FC updates • UFC feeds the FC a description of the cluster’s physical and logical resources in Datacenter.xml • Server IP addresses • Pool of network IP addresses to assign to services • Network hardware and Power Distribution Unit addresses
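
To make the shape of this input concrete, here is a rough sketch of the kind of data Datacenter.xml describes, written as a Python structure for illustration; the field names and values are assumptions, not the actual schema.

```python
# Illustrative only: roughly what the cluster resource description covers.
# Field names and values are hypothetical, not the real Datacenter.xml schema.
cluster_description = {
    "nodes": [
        {"name": "N001", "ip": "10.100.0.10", "rack": "R01", "pdu": "PDU-01"},
        {"name": "N002", "ip": "10.100.0.11", "rack": "R01", "pdu": "PDU-01"},
    ],
    "service_ip_pool": ["65.52.1.0/24"],               # addresses handed out to services
    "network_devices": ["TOR-01", "AGG-01", "LB-01"],  # switches and load balancers
    "power_distribution_units": ["PDU-01", "PDU-02"],
}
```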

  10. Inside a Cluster • FC is a distributed, stateful application running on nodes (servers) spread across fault domains • Top blades are reserved for the FC • One FC instance is the primary, and all others keep their view of the world in sync • Supports rolling upgrade, and services continue to run even if the FC fails entirely [Diagram: five FC instances (FC1–FC5) spread across racks behind the aggregation switch and load balancers, alongside the nodes they manage]

  11. Provisioning a Node • Power on node • PXE-boot the Maintenance OS • Agent formats the disk and downloads the Host OS via Windows Deployment Services (WDS) • Host OS boots, runs Sysprep /specialize, reboots • FC connects with the “Host Agent” [Diagram: the Fabric Controller, Windows Deployment Server (PXE server), and image repository (maintenance OS, parent OS, role images) provision the node, which ends up running the Windows Azure OS, the FC Host Agent, and the Windows Azure hypervisor]

  12. Deploying Services

  13. Deploying a Service to the Cloud: The 10,000-Foot View • Package uploaded to portal • System Center Concero provides the IT Pro upload experience • Windows Azure portal provides the developer upload experience • Service package passed to RDFE • RDFE sends the service to a Fabric Controller (FC) based on target region and affinity group • FC stores the image in its repository and deploys the service [Diagram: System Center App Manager and the Windows Azure portal upload packages to RDFE, which hands the service to a Fabric Controller in the US-North Central datacenter]

  14. RDFE • RDFE serves as the front end for all Windows Azure services • Subscription management • Billing • User access • Service management • RDFE is responsible for picking the clusters where services and storage accounts are deployed • First by datacenter region • Then by affinity group or cluster load • Normalized VIP and core utilization A(h, g) = C(h, g) /
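
The slide's utilization formula is truncated in this transcript, so the following is only a sketch of the idea: rank candidate clusters in the target region (honoring any affinity group) by a normalized blend of VIP and core usage. The names and the exact scoring are assumptions, not RDFE's actual algorithm.

```python
# Sketch only: pick the least-loaded cluster in a region. The scoring below
# (max of core load and VIP load) is an assumed stand-in for the slide's
# truncated normalization formula.

def normalized_utilization(cluster):
    core_load = cluster["used_cores"] / cluster["total_cores"]
    vip_load = cluster["used_vips"] / cluster["total_vips"]
    return max(core_load, vip_load)  # a cluster is as full as its scarcest resource

def pick_cluster(clusters, region, affinity_group=None):
    candidates = [c for c in clusters if c["region"] == region]
    if affinity_group:
        pinned = [c for c in candidates if affinity_group in c["affinity_groups"]]
        candidates = pinned or candidates
    return min(candidates, key=normalized_utilization)

clusters = [
    {"region": "US-North Central", "affinity_groups": set(), "used_cores": 700,
     "total_cores": 1000, "used_vips": 40, "total_vips": 100},
    {"region": "US-North Central", "affinity_groups": set(), "used_cores": 300,
     "total_cores": 1000, "used_vips": 20, "total_vips": 100},
]
print(pick_cluster(clusters, "US-North Central"))  # returns the second, lighter cluster
```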

  15. The Several Service Models of Windows Azure • Service Model undergoes several translations on the way to a node • .CSDEF/.CSCFG deployed to RDFE • SDK schema for external use • Internal schema for infrastructure services (more Update Domains, Fault Domains, early access to features) • RDFE converts .CSDEF to .WAZ • Includes advanced service concepts • Native hardware • Role colocation • RDFE then converts .WAZ to .SVD/.TRD • Fabric Controller internal model

  16. Service Deployment Steps • Process service model files • Determine resource requirements • Create role images • Allocate compute and network resources • Prepare nodes • Place role images on nodes • Create virtual machines • Start virtual machines and roles • Configure networking • Dynamic IP addresses (DIPs) assigned to blades • Virtual IP addresses (VIPs) + ports allocated and mapped to sets of DIPs • Configure packet filter for VM-to-VM traffic • Program load balancers to allow traffic
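
As a small illustration of the final networking step, the sketch below keeps the kind of bookkeeping described above: each public VIP and port maps to the set of instance DIPs the load balancer may route to. The class, the VIP, and the ports are hypothetical; the DIPs reuse the examples from the deployment diagram later in the deck.

```python
from collections import defaultdict

class EndpointMap:
    """Toy model of the VIP:port -> {DIP:port} routing entries programmed into the LB."""

    def __init__(self):
        self._routes = defaultdict(set)  # (vip, public_port) -> {(dip, instance_port)}

    def map_endpoint(self, vip, public_port, dips, instance_port):
        for dip in dips:
            self._routes[(vip, public_port)].add((dip, instance_port))

    def routes_for(self, vip, public_port):
        return sorted(self._routes[(vip, public_port)])

lb = EndpointMap()
lb.map_endpoint("65.52.1.10", 80, ["10.100.0.185", "10.100.0.36", "10.100.0.122"], 80)
print(lb.routes_for("65.52.1.10", 80))
```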

  17. Service Resource Allocation • Goal: allocate service components to available resources while satisfying all hard constraints • HW requirements: CPU, Memory, Storage, Network • Fault domains • Secondary goal: Satisfy soft constraints • Prefer allocations which will simplify servicing the host OS/hypervisor • Optimize network proximity: pack nodes • Service allocation produces the goal state for the resources assigned to the service components • Node and VM configuration (OS, hosting environment) • Images and configuration files to deploy • Processes to start • Assign and configure network resources such as LB and VIPs
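
A minimal sketch of this two-tier scheme follows: hard constraints filter candidate nodes, then a soft score ranks what remains. It is simplified (for example, it forces every instance onto a distinct fault domain) and is not the FC's actual allocator.

```python
def satisfies_hard_constraints(node, role, used_fault_domains):
    """Hard constraints: enough CPU and memory, and a not-yet-used fault domain."""
    return (node["free_cores"] >= role["cores"]
            and node["free_memory_gb"] >= role["memory_gb"]
            and node["fault_domain"] not in used_fault_domains)

def soft_score(node):
    """Soft preference: pack partially used nodes for proximity and easier servicing."""
    return node["free_cores"]

def place_instances(role, nodes, count):
    placements, used_fds = [], set()
    for _ in range(count):
        candidates = [n for n in nodes if satisfies_hard_constraints(n, role, used_fds)]
        if not candidates:
            raise RuntimeError("no node satisfies the hard constraints")
        best = min(candidates, key=soft_score)
        best["free_cores"] -= role["cores"]
        best["free_memory_gb"] -= role["memory_gb"]
        used_fds.add(best["fault_domain"])
        placements.append(best["name"])
    return placements
```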

  18. Deploying a Service [Diagram: Role A, a web role (front end), Count: 3, Update Domains: 3, Size: Large; Role B, a worker role, Count: 2, Update Domains: 2, Size: Medium; the load balancer routes www.mycloudapp.net traffic to the Role A instances at 10.100.0.185, 10.100.0.36, and 10.100.0.122]

  19. Mapping Instances to Update/Fault Domains • Algorithm must round-robin across two axes: fault domains and update domains • APIs return the FD and UD for an instance, but the mapping is invariant [Diagram: nine instances, IN_0 through IN_8, laid out on a grid of fault domains by update domains]
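
The exact mapping function is not spelled out on the slide, so the modulo scheme below is only an assumed illustration of round-robin placement across the two axes; the point is that the assignment is deterministic and invariant for a deployment.

```python
def fd_ud_for_instance(index, fault_domains=2, update_domains=5):
    """Assumed round-robin mapping of a role instance to (fault domain, update domain)."""
    return index % fault_domains, index % update_domains

for i in range(6):
    fd, ud = fd_ud_for_instance(i)
    print(f"IN_{i}: fault domain {fd}, update domain {ud}")
```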

  20. Inside a Deployed Node [Diagram: a physical node runs a host partition containing the FC Host Agent and an image repository (OS VHDs, role ZIP files), plus several guest partitions, each holding a guest agent and a role instance; a trust boundary separates the host partition from the guest partitions, and the node talks to the Fabric Controller primary and its replicas]

  21. Deploying a Role Instance • FC pushes role files and configuration information to the target node’s host agent • Host agent creates three VHDs: • Differencing VHD for the OS image (D:\) • Host agent injects the FC guest agent into the VHD for Web/Worker roles • Resource VHD for temporary files (C:\) • Role VHD for role files (first available drive letter, e.g. E:\ or F:\) • Host agent creates the VM, attaches the VHDs, and starts the VM • Guest agent starts the role host, which calls the role entry point • Starts health heartbeats to, and gets commands from, the host agent • Load balancer only routes to an external endpoint when it responds to a simple HTTP GET (LB probe)
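
The LB probe mentioned in the last bullet is just a plain HTTP GET that must be answered while the instance is healthy. The sketch below shows the shape of such a responder; the port, handler, and health flag are illustrative, not Azure's actual probe endpoint.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

HEALTHY = True  # a real role would derive this from its own readiness checks

class ProbeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The load balancer keeps routing traffic here only while we answer 200.
        self.send_response(200 if HEALTHY else 503)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ProbeHandler).serve_forever()
```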

  22. Role Instance VHDs [Diagram: the role virtual machine attaches C:\ as the resource disk, D:\ as the Windows differencing disk backed by the Windows VHD, and E:\ or F:\ as the role image differencing disk backed by the role VHD]

  23. Inside a Role VM [Diagram: the VM contains the OS volume, resource volume, and role volume; the guest agent starts the role host, which calls the role entry point]

  24. Performance Improvements: JBOD • Today: blade disks are striped • VM VHDs share the multi-disk volume • I/O contention can be high during boot of multiple VMs • Failure of one disk affects all VMs • Improvement: create one volume per disk • VHDs span volumes only if necessary • Leads to higher I/O isolation • Failure of a disk affects only the VMs hosted on that disk [Diagram: striping places VHD1 and VHD2 on a single volume spanning disks 1–3, while JBOD gives each of disks 1–3 its own volume and its own VHDs]

  25. Performance Improvements: Preallocation and Prefetching • Today: differencing VHDs are expanded on demand • Causes seeks and zero-filling during expansion • Volume becomes fragmented over time • Improvement: preallocate and prefetch VHDs • Differencing and dynamic VHDs are preallocated and demand zero-filled • OS VHD is prefetched using a lab-generated prefetch file [Diagram: preallocation vs. dynamic allocation of VHD1 and VHD2 on the volume]

  26. Updating the Host OS • Initiated by the Windows Azure team • Typically no more than once per month • Goal: update all machines as quickly as possible • Constraint: must not violate the service SLA • A service needs at least two update domains and two role instances to qualify for the SLA • Can’t allow more than one update domain of any service to be offline at a time • Essentially a graph coloring problem (see the sketch below) • Edges exist between vertices (nodes) if the two nodes host instances of the same service role in different update domains • Nodes that don’t have edges between them can update in parallel • Note: your role instance keeps the same VM and VHDs, preserving cached data in the resource volume
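
Here is a toy version of that idea using greedy graph coloring: servers sharing a service role across different update domains get an edge and therefore different colors (update groups), while unconnected servers can reboot in parallel. This illustrates the concept only, not the FC's actual scheduler.

```python
def color_update_groups(servers, edges):
    """Greedy coloring: servers assigned the same group can be updated in parallel."""
    neighbors = {s: set() for s in servers}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    groups = {}
    for s in servers:
        taken = {groups[n] for n in neighbors[s] if n in groups}
        g = 0
        while g in taken:  # pick the lowest group no neighbor already uses
            g += 1
        groups[s] = g
    return groups

servers = ["N1", "N2", "N3", "N4"]
edges = [("N1", "N2"), ("N2", "N3")]  # pairs hosting the same role in different UDs
print(color_update_groups(servers, edges))  # {'N1': 0, 'N2': 1, 'N3': 0, 'N4': 0}
```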

  27. Allocation Constraints Service A Role A-1 UD 1 Service B Role A-1 UD 1 Service A Role A-1 UD 2 Service B Role A-1 UD 2 • Host OS upgrade rollout is 2x faster with allocation 1 • Both allocations are valid from the service’s point of view • Allocation 1 allows for 2 nodes rebooting simultaneously • Allocation 2 allows only one node to be down at any time • Allocation algorithm: • Prefer nodes hosting same UD as role instance’s UD Service A Role B-1 UD 1 Service B Role B-1 UD 1 Service A Role B-2 UD 2 Service B Role B-2 UD 2 Allocation 1 Service A Role A-1 UD 1 Service A Role A-1 UD 2 Service B Role B-1 UD 1 Service B Role A-1 UD 2 Service A Role B-1 UD 1 Service A Role B-2 UD 2 Service B Role B-2 UD 2 Service B Role A-1 UD 1 Allocation 2

  28. Updating a Cluster: The Long Tail • Cluster Host OS updates can take many hours because of the “long tail” • Some services have many UDs, serializing server updates • Because of utilization, sometimes UDs from different services must be placed on the same server, forcing serialization • Worst case: servers must be updated one-by-one Number of Servers Updated 3 6 12 0 9 Hours

  29. Maintaining Service Health

  30. Node and Role Health Maintenance • FC maintains service availability by monitoring software and hardware health • Based primarily on heartbeats • Automatically “heals” affected roles
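
A bare-bones sketch of heartbeat-based health tracking follows; the timeout value and the healing hook are assumptions used only to show the shape of the mechanism.

```python
import time

HEARTBEAT_TIMEOUT = 30.0  # assumed: seconds of silence before an instance is suspect

class HealthMonitor:
    def __init__(self):
        self._last_seen = {}

    def heartbeat(self, instance_id):
        """Record a heartbeat from a role instance or node agent."""
        self._last_seen[instance_id] = time.monotonic()

    def unhealthy_instances(self):
        """Instances whose heartbeats have gone quiet for too long."""
        now = time.monotonic()
        return [i for i, t in self._last_seen.items() if now - t > HEARTBEAT_TIMEOUT]

    def heal(self, instance_id):
        # Placeholder for service healing: restart the role or move it to a
        # healthy node, as described in the "Moving a Role Instance" slide.
        print(f"healing {instance_id}")
```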

  31. Hardware Issue Breakdown • Most hardware issues as clusters age are disk related • SMART errors are crucial to catching them before customers are affected • JBOD becomes more important as the number of disks in a server increases • Memory errors are DIMM related and can be detected at boot by a mismatch between available RAM and configured RAM • The system must diagnose and recover from hardware issues automatically

  32. Moving a Role Instance (Service Healing) • Moving a role instance is similar to a service update • On source node: • Role instances stopped • VMs stopped • Node reprovisioned • On destination node: • Same steps as initial role instance deployment • Warning: Resource VHD is not moved

  33. Service Healing [Diagram: Role A – V2, a VM role (front end), Count: 3, Update Domains: 3, Size: Large; Role B, a worker role, Count: 2, Update Domains: 2, Size: Medium; the load balancer directs www.mycloudapp.net traffic across instances at 10.100.0.36, 10.100.0.185, 10.100.0.191, and 10.100.0.122]

  34. Developing and Operating Windows Azure

  35. Windows Azure Service Development • Multiple teams in Windows Azure develop infrastructure services • Fabric Controller • Storage Location Service (XLS) and Storage • Monitor and Diagnostic Service (MDS) • Sydney Management (Windows Azure Connect) • Portal • … • Each team releases on its own cadence • Every day there’s at least one team deploying • Releases go through three stages: INT, STAGE, PROD • A release stops for a few days to a week at each stage

  36. Windows Azure Monitoring • All services emit events to Monitoring and Diagnostics (MDS) tables • MDS analysis tasks constantly aggregate and scan events • Some events trigger alarms and/or actions • MDS logs allow us to analyze performance and availability [Diagram: core infrastructure and services feed the monitoring data store (Windows Azure tables), which drives the analysis service, alarming, and reporting]
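
To illustrate the aggregate-and-alarm step, here is a tiny sketch that counts failure events per service and operation and flags anything over a threshold; the event shape, threshold, and alarm action are hypothetical, not the MDS schema.

```python
from collections import Counter

def scan_for_alarms(events, failure_threshold=2):
    """Aggregate failure events and return (service, operation) pairs over the threshold."""
    failures = Counter(
        (e["service"], e["operation"]) for e in events if e["status"] == "Failed"
    )
    return [key for key, count in failures.items() if count >= failure_threshold]

events = [
    {"service": "RDFE", "operation": "Deployment", "status": "Failed"},
    {"service": "RDFE", "operation": "Deployment", "status": "Succeeded"},
    {"service": "RDFE", "operation": "Deployment", "status": "Failed"},
]
for service, operation in scan_for_alarms(events):
    print(f"ALARM: {service} {operation} failures exceeded threshold")
```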

  37. Monitoring API Performance (RDFE) • MDS log analysis of RDFE Deployment operation performance

  38. Canary Display • The Canary display provides a real-time view of Canary health and status • Red instances are instances currently being upgraded

  39. Conclusion • Platform as a Service is all about reducing management and operations overhead • The Windows Azure Fabric Controller is the foundation for Windows Azure’s PaaS • Provisions machines • Deploys services • Configures hardware for services • Monitors service and hardware health • The Fabric Controller continues to evolve and improve

  40. Thank you • Feedback and questions: http://forums.dev.windows.com • Session feedback: http://bldw.in/SessionFeedback

  41. © 2011 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
