
CP5092 CLOUD COMPUTING TECHNOLOGIES




  1. CP5092 CLOUD COMPUTING TECHNOLOGIES Ms.M.ShanmugaPriya AP/CSE

  2. UNIT I VIRTUALIZATION 1.1 Basics of Virtual Machines: Virtualization refers to the act of creating a virtual version of something, including virtual computer hardware platforms, storage devices, and computer network resources. A virtual machine is an efficient, dynamic, secure, and portable facility that provides an interface to its users. The resources of the physical computer are shared to create the virtual machines: the virtual machine software runs in monitor mode, while the virtual machines themselves execute in user mode. Benefits • Complete protection of resources: there is no direct sharing of resources; all sharing is mediated by software. • Each virtual machine is completely isolated from all other virtual machines, so security problems are contained. • A virtual machine system is a perfect vehicle for operating-systems research and development. Virtual Machine Manager ■ A subsystem that manages all VMs ■ The system gives each VM a handle on creation ■ Handles are stored in the user's workspace
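The handle scheme above can be sketched as a small in-memory registry. This is an illustrative sketch only, not a real hypervisor API; the class and method names are assumptions made up for this example.

```python
# Minimal sketch of a VM manager that hands out a handle for each VM on
# creation. The registry is a plain dict; all names here are illustrative.
import itertools

class VirtualMachineManager:
    def __init__(self):
        self._next_handle = itertools.count(1)
        self._vms = {}                       # handle -> VM attribute dict

    def create_vm(self, name, memory_mb):
        handle = next(self._next_handle)     # system gives each VM a handle
        self._vms[handle] = {"name": name, "memory_mb": memory_mb}
        return handle                        # caller keeps the handle in its workspace

    def destroy_vm(self, handle):
        self._vms.pop(handle)

    def lookup(self, handle):
        return self._vms[handle]

vmm = VirtualMachineManager()
h = vmm.create_vm("guest1", memory_mb=512)
print(vmm.lookup(h)["name"])   # guest1
```

The point of the handle is indirection: user code never touches the VM record directly, so the manager retains complete control over every VM it created.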

  3. System Design and Implementation: ❖ Design Goals The design of the system affects the choice of hardware and the type of system: batch, time-shared, single-user, multi-user, distributed, real-time, or general purpose. Requirements can be divided into two basic groups. User Goals Users desire a system that is easy to learn, easy to use, reliable, safe, and fast. System Goals A similar set of requirements exists for the people who must design, create, maintain, and operate the system: the operating system should be easy to design, implement, and maintain, and it should be flexible, reliable, error-free, and efficient. ❖ Mechanisms and Policies Mechanisms determine how to do something; policies decide what will be done. For example, the timer construct is a mechanism for ensuring CPU protection, while the decision of how long the timer is set for a particular user is a policy decision. Policies are likely to change from place to place or time to time, and each change in policy may require a change in the underlying mechanism. Policy decisions are important for all resource-allocation and scheduling problems. ❖ Process Concepts A process is a program in execution, i.e., an instance of a program (or job) in execution. The terms job and process are used almost interchangeably, although process has largely superseded job. Major requirements of the operating system are to maximize processor use, minimize response time, avoid deadlocks, and support interprocess communication.

  4. ❖ Process States The principal function of a processor is to execute machine instructions residing in main memory. The execution of an individual program is referred to as a process or task. The life of a process is bounded by its creation and termination. Creation of a Process When a new process is to be added, the operating system immediately allocates its address space. Reasons for Process Creation o Submission of a new batch job. o Interactive log-on by a user at a terminal. o Creation by the operating system to provide a service. o Spawning by one process to cause the creation of another process. Termination of a Process A process terminates for reasons such as normal completion, a fatal error, or a request by its parent or the operating system. The collection of attributes related to a process is held in a block called the process control block or task
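The life cycle bounded by creation and termination can be sketched as a tiny state machine. The five states used here (New, Ready, Running, Blocked, Exit) follow the common textbook model and are an illustrative assumption, not taken from this text.

```python
# Sketch of a process life cycle as a state machine: a process is created
# in New and its life ends in Exit. The transition table is the usual
# five-state textbook model, used here purely for illustration.
ALLOWED = {
    "New": {"Ready"},
    "Ready": {"Running"},
    "Running": {"Ready", "Blocked", "Exit"},
    "Blocked": {"Ready"},
    "Exit": set(),                 # termination: no further transitions
}

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = "New"         # creation: OS allocates the entry

    def transition(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process(1)
for s in ("Ready", "Running", "Exit"):   # normal-completion path
    p.transition(s)
print(p.state)   # Exit
```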

  5. control block, process descriptor, or task descriptor. This collection of program, data, stack, and attributes is the process image, shown in the figure below (whose process control block includes fields such as memory limits and the list of open files). Elements of a Process Image • User data • User program • System stack • Process control block The process image is maintained as a contiguous block of memory, and this block is kept in secondary memory. For execution of a process, the entire process image must be loaded into main memory. In modern operating systems, these image blocks need not be stored contiguously; based on the technique used, the blocks may be of variable length (called segments), fixed length (called pages), or a combination of both.

  6. Process Attributes Different systems organize the process control block information in different ways: • Process number or identification Each process is assigned a unique numeric identifier, which may be an index into the primary process table; similar references appear in the input/output and file tables. Identifiers: o Identifier of this process o Identifier of the parent process o User identifier • Processor state information This contains the contents of the processor registers. o User-visible registers These registers are referenced by means of the machine language. o Control and status registers Program counter, condition codes (zero, sign, carry), status information (interrupt enable/disable). o Stack pointers A stack pointer points to the top of a last-in, first-out (LIFO) stack, which stores parameters and calling addresses. • Process control information This contains the additional information needed by the operating system to control and coordinate the various active processes: o Scheduling and state information o Data structuring o Interprocess communication o Process privileges o Memory management o Resource ownership and utilization
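The attribute categories above can be grouped into one record, as a process control block is in practice. The field names below are illustrative, not taken from any real kernel.

```python
# Sketch of a process control block: identification, processor state
# information, and process control information grouped into one record.
# Field names are hypothetical, chosen to mirror the categories above.
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    # --- identification ---
    pid: int
    parent_pid: int
    user_id: int
    # --- processor state information ---
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    stack_pointer: int = 0
    # --- process control information ---
    state: str = "New"
    priority: int = 0
    open_files: list = field(default_factory=list)
    memory_limits: tuple = (0, 0)

pcb = ProcessControlBlock(pid=42, parent_pid=1, user_id=1000)
print(pcb.pid, pcb.state)   # 42 New
```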

  7. 1.2 Process Virtual Machines: A process virtual machine, sometimes referred to as an application virtual machine, runs as a normal application process inside an operating system and supports a single process. A process virtual machine is created when that process is started and destroyed when it exits. The process virtual machine aims to provide a platform-independent programming environment that abstracts away the details of the underlying infrastructure. A process VM can thus provide a high-level abstraction: that of a high-level programming language. Process VMs are typically implemented using an interpreter; performance comparable to compiled programming languages is gained by the use of just-in-time compilation. Above the bare-metal hardware of the host machine, a host OS is installed, and on top of that a virtualizing software layer virtualizes the hardware resources into a run-time environment. As with a system VM, a particular VM is created according to the need of the user and resources are provisioned to it; when the task is completed, the virtual machine is destroyed or shut down. Fig: Process Virtual Machine

  8. 1.3 System Virtual Machine: A system virtual machine enables one computer to behave like two or more computers by sharing the host hardware's resources. Multiple virtual machines, each running its own operating system (called the guest operating system), are frequently utilized in server consolidation, where services that previously executed on separate physical machines are run together on one host. The use of virtual machines to support different guest OSes is also becoming popular in embedded systems; a typical use is to support a real-time operating system at the same time as a high-level OS such as Linux or Windows. Fig: System Virtual Machine

  9. Applications: • Implementing multiprogramming • Multiple single-application virtual machines • Multiple secure environments • Mixed-OS environments • Legacy applications • Multiplatform application development • Gradual migration to a new system • New system software development • Operating system training • Help desk support • Operating system instrumentation • Event monitoring • System encapsulation and checkpointing 1.4 Emulation: Emulation is the process of implementing the interface and functionality of one system (or subsystem) on a system (or subsystem) having a different interface and functionality. In virtualization, emulation is the process by which the virtualizing software mimics the portion of hardware that is presented to the guest operating system in the virtual machine. The emulated hardware presented in this way is independent of the

  10. underlying physical hardware. Emulation provides VM portability and a wide range of hardware compatibility: any virtual machine can execute on any hardware, because the guest operating system interacts only with the emulated hardware. In an emulated environment, both the application and the guest operating system in the virtual machine run in the user mode of the base operating system. In simple terms, the behavior of the hardware is produced by a software program. The emulation process covers the hardware components so that the user and the virtual machines do not see the underlying environment; only a CPU and memory are sufficient for a basic level of emulation. Typically, emulation is implemented using interpretation: the emulator takes each user-mode instruction and translates it to an equivalent instruction suitable for the underlying hardware. Emulation can be carried out using: 1. Interpretation 2. Binary translation 1.5 Interpretation: Interpreter state: An interpreter needs to maintain the complete architected state of the machine implementing the source ISA: - registers - memory: code, data, and stack Interpretation involves a 4-step cycle: 1. Fetching a source instruction 2. Analyzing it 3. Performing the required operation 4. Then fetching the next source instruction
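The 4-step cycle above can be sketched directly. The instruction set below (two registers, three opcodes) is a toy ISA invented for this example, not a real one.

```python
# Sketch of the interpreter cycle for a toy ISA: fetch a source
# instruction, analyze (decode) it, perform the operation, fetch the next.
def interpret(program):
    regs = {"r0": 0, "r1": 0}        # architected state: registers
    pc = 0
    while pc < len(program):
        instr = program[pc]          # 1. fetch a source instruction
        op, *args = instr            # 2. analyze (decode) it
        if op == "LOAD":             # 3. perform the required operation
            regs[args[0]] = args[1]
        elif op == "ADD":
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            break
        pc += 1                      # 4. fetch the next source instruction
    return regs

result = interpret([("LOAD", "r0", 2), ("LOAD", "r1", 3),
                    ("ADD", "r0", "r1"), ("HALT",)])
print(result["r0"])   # 5
```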

  11. Decode-and-dispatch interpreter: - steps through the source program one instruction at a time - decodes the current instruction - dispatches to the corresponding interpreter routine - very high interpretation cost Indirect Threaded Interpretation: • The high number of branches in decode-and-dispatch interpretation reduces performance - an overhead of roughly 5 branches per instruction • Threaded interpretation improves efficiency by reducing branch overhead - appends the dispatch code to each interpretation routine - removes 3 of the branches - threads the interpreter routines together • Dispatch occurs indirectly through a table - interpretation routines can be modified and relocated independently • Advantages - the binary intermediate code remains portable - improves efficiency over basic interpretation • Disadvantages - code replication increases interpreter size 1.6 Binary Translation: • Performance can be significantly enhanced by mapping each individual source binary instruction to its own customized target code. This process of converting the source binary program into a target binary program is referred to as binary translation.
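Indirect threaded interpretation can be sketched as follows: instead of returning to a central decode loop, each interpreter routine ends with its own copy of the dispatch code, which looks up the next routine indirectly through a table. The two-register toy ISA here is invented for illustration.

```python
# Indirect threaded interpretation sketch: each routine ends by dispatching
# the next instruction itself, indirectly through TABLE, rather than
# branching back to a central decode-and-dispatch loop.
def make_interpreter():
    regs = {"r0": 0, "r1": 0}

    def dispatch(program, pc):
        if pc >= len(program):
            return regs
        op, *args = program[pc]
        return TABLE[op](program, pc, args)   # indirect dispatch via table

    def do_load(program, pc, args):
        regs[args[0]] = args[1]
        return dispatch(program, pc + 1)      # appended dispatch code

    def do_add(program, pc, args):
        regs[args[0]] += regs[args[1]]
        return dispatch(program, pc + 1)      # appended dispatch code

    def do_halt(program, pc, args):
        return regs

    # Routines can be swapped in this table independently of one another.
    TABLE = {"LOAD": do_load, "ADD": do_add, "HALT": do_halt}
    return dispatch

run = make_interpreter()
result = run([("LOAD", "r0", 2), ("LOAD", "r1", 3),
              ("ADD", "r0", "r1"), ("HALT",)], 0)
print(result["r0"])   # 5
```

Note the trade-off stated above: the dispatch logic is replicated in every routine (larger interpreter), in exchange for fewer branches on the hot path.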

  12. Binary translation attempts to amortize the fetch and analysis costs by: 1. Translating a block of source instructions to a block of target instructions 2. Caching the translated code for repeated use ❖ Static Binary Translation: • It is possible to binary translate a program in its entirety before executing it • This approach is referred to as static binary translation • However, in real code using conventional ISAs, especially CISC ISAs, such a static approach can cause problems due to: - Variable-length instructions - Data interspersed with instructions - Pads to align instructions - Register-indirect jumps ❖ Dynamic Binary Translation: • A general solution is to translate the binary while the program is operating on actual input data (i.e., dynamically) and to translate new sections of code incrementally as the program reaches them • This scheme is referred to as dynamic binary translation.
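The amortization idea can be sketched with a translation cache: a block of toy source instructions is converted once into target code (here, a Python closure) and the cached result is reused on every later execution. The block boundaries and the two-register ISA are assumptions for illustration.

```python
# Dynamic binary translation sketch: translate a block of toy source
# instructions into one callable on first use, cache it, and reuse it,
# amortizing the per-instruction fetch and analysis costs.
translation_cache = {}        # block start pc -> translated target code

def translate_block(program, pc):
    """Translate source instructions from pc up to HALT into one function."""
    ops = []
    while pc < len(program):
        op, *args = program[pc]
        pc += 1
        if op == "LOAD":
            ops.append(lambda regs, a=args: regs.__setitem__(a[0], a[1]))
        elif op == "ADD":
            ops.append(lambda regs, a=args:
                       regs.__setitem__(a[0], regs[a[0]] + regs[a[1]]))
        elif op == "HALT":
            break
    def block(regs):          # the "target code" for this source block
        for fn in ops:
            fn(regs)
    return block

def run(program):
    regs = {"r0": 0, "r1": 0}
    if 0 not in translation_cache:          # translate once ...
        translation_cache[0] = translate_block(program, 0)
    translation_cache[0](regs)              # ... reuse on every later run
    return regs

prog = [("LOAD", "r0", 2), ("LOAD", "r1", 3), ("ADD", "r0", "r1"), ("HALT",)]
print(run(prog)["r0"])   # 5
```

A real translator would also handle the problems listed for static translation (register-indirect jumps, data mixed with code) by discovering block boundaries at run time; this sketch assumes a single straight-line block.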

  13. 1.7 Taxonomy of Virtual Machines: There are five main domains into which virtualization technologies can be categorized: server, application, desktop, storage, and network. In UML, the five domains are defined as objects called classes. In the second part of the analysis, two further types of virtualization technologies are introduced: management and security tools. Management and security tools are also added as classes in UML. They comprise a set of virtualization technologies that provide some form of management or security measures, and they must not be confused with other aspects of management and security, such as the making of policies and their execution. Looking at the relations of the management class, the security class, and the five domain classes, both the management and security classes are associated with all five domain classes. For example, management technologies can have a relation with every domain, as each domain can be managed. The same goes for security, where virtualization security technologies can be used to provide security for each domain. When this relationship is translated to UML, it can be visualized as follows.

  14. > Server: Server virtualization can be divided into three types or subclasses: para-virtualization, full virtualization, and OS partitioning. Furthermore, full virtualization technologies can be divided into two further subclasses: "Type 1" (bare-metal) and "Type 2" (hosted) hypervisors. Fig: Server Virtualization Classes

  15. Applications: There are two types of application virtualization technologies: sandbox and application streaming. Fig: Application Virtualization Classes

  16. Desktop virtualization There are two general types of desktop virtualization: client and server. Client desktop virtualization technologies are used to host virtual desktops (or virtual machines) locally on the client's computer. Server desktop virtualization can be divided into two types: personal and shared. Shared desktops are desktops that are shared among users, while personal desktops give each user a completely isolated desktop of their own. Personal desktops can further be divided into virtual or physical; physical desktops are equipped with additional graphics processing power for graphics-intensive applications. A new virtualization technology was also introduced that allows a personal virtual desktop to become available offline.

  17. Fig: Desktop Virtualization Classes (offline, virtual, and physical desktops)

  18. Storage virtualization Storage virtualization is the pooling of data from multiple storage devices. Examples of storage devices are the storage area network (SAN) and network-attached storage (NAS). While storage virtualization can be used with different storage devices or a combination of them, it can be broken up into two general classes: - Block virtualization - File virtualization Fig: Storage Virtualization Classes
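Block virtualization, as defined later in this unit, separates logical storage from physical storage. A minimal sketch, assuming a simple striped mapping across two hypothetical disks, shows how a client can address logical blocks without regard to the physical layout.

```python
# Block virtualization sketch: a logical volume maps logical block numbers
# onto (device, physical block) pairs across hypothetical disks, so clients
# address storage without knowing its physical or heterogeneous structure.
class LogicalVolume:
    def __init__(self, devices, blocks_per_device):
        # striped mapping: logical block i -> device i % n, physical block i // n
        self.devices = devices
        self.bpd = blocks_per_device

    def locate(self, logical_block):
        dev = self.devices[logical_block % len(self.devices)]
        phys = logical_block // len(self.devices)
        if phys >= self.bpd:
            raise IndexError("logical block out of range")
        return dev, phys

vol = LogicalVolume(["diskA", "diskB"], blocks_per_device=4)
print(vol.locate(0))   # ('diskA', 0)
print(vol.locate(5))   # ('diskB', 2)
```

Because the mapping lives in the `LogicalVolume` layer, an administrator could change it (add a disk, migrate blocks) without the client's logical addresses changing, which is the flexibility the text attributes to this separation.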

  19. Network virtualization Network virtualization is characterized by three types of technologies: Virtual LAN (VLAN), Virtual IP (VIP), and Virtual Private Network (VPN). Fig: Network Virtualization Classes 1.8 Management Virtualization: Virtualization management is the process of overseeing and administering the operations and processes of a virtualization environment. It is the part of IT management that includes the collective processes, tools, and technologies used to ensure governance and control over a virtualized infrastructure. Virtualization management is primarily done from a virtual machine manager (VMM) application or utility. The primary goal of virtualization management is to ensure that virtual machines deliver services and perform computing operations as expected.

  20. Typically, virtualization management may include processes such as: • Creating, deleting, and modifying virtual machines, virtual networks, and/or the entire virtualization infrastructure. • Ensuring that all virtual machine software / hypervisors are up to date, along with the installed OS and/or applications. • Establishing and maintaining network connectivity / interconnectivity across the virtualization environment. • Monitoring and managing the performance of each virtual machine and/or the virtualization environment as a whole. Virtualization management tools: Capacity Management Multi-processor, multi-core servers and acres of RAM made planning for server capacity almost moot. With virtual servers, however, the question isn't the power of the server; it's how that capacity is doled out to specific workloads on specific virtual machines, and how the performance of the VMs is monitored to make sure all the resource demands are satisfied. Performance Optimization Performance problems in physical servers are relatively easy to spot because most functions are associated with a specific component: swap it out and you're good to go. Hyper9, for example, offers a set of tools called Hyper9 VEO (Virtual Environment Optimization) designed to discover all the VMs in an infrastructure, all the applications running on them, and the relationships between the applications, VMs, and physical servers, and to collect data on performance, configuration, and capacity.

  21. Storage Management Companies have been able to plan their CPU and memory density and anticipate boot storms that generate a lot of I/O, but they haven't always been able to optimize tiered storage for virtual servers, or to do things like queue data locally so less data is pushed through the pipe. Desktop Virtualization Planning and Management Virtual server environments are an order of magnitude more complex than physical server environments because of the additional ecosystem they add to the physical one. Desktop virtualization adds yet another ecosystem, and more besides. Virtual desktops can also be delivered in more ways than virtual servers, ranging from full-on VDI, in which each user gets a dedicated VM with a single OS running on a backend server, to virtual applications that can be viewed from almost any machine. 1.9 Hardware Maximization:

  22. 1.10 Architectures: A virtualization architecture is a conceptual model specifying the arrangement and interrelationships of the particular components involved in delivering a virtual, rather than physical, version of something, such as an operating system (OS), a server, a storage device, or network resources. Fig: Traditional vs. Virtual Architecture Virtualization is commonly hypervisor-based. The hypervisor isolates operating systems and applications from the underlying computer hardware so the host machine can run multiple virtual machines (VMs) as guests that share the system's physical compute resources, such as processor cycles, memory space, network bandwidth, and so on.

  23. Type 1 hypervisors, sometimes called bare-metal hypervisors, run directly on top of the host system hardware. Bare-metal hypervisors offer high availability and resource management, and their direct access to system hardware enables better performance, scalability, and stability. Examples of Type 1 hypervisors include Microsoft Hyper-V, Citrix XenServer, and VMware ESXi. A Type 2 hypervisor, also known as a hosted hypervisor, is installed on top of the host operating system rather than sitting directly on top of the hardware as a Type 1 hypervisor does. Each guest OS or VM runs above the hypervisor. The convenience of a known host OS can ease system configuration and management tasks; however, the addition of a host OS layer can potentially limit performance and expose possible OS security flaws. Examples of Type 2 hypervisors include VMware Workstation, Virtual PC, and Oracle VM VirtualBox. The main alternative to hypervisor-based virtualization is containerization. Operating system virtualization, for example, is a container-based kernel virtualization method similar to partitioning. In this architecture, an operating system is adapted so it functions as multiple, discrete systems, making it possible to deploy and run distributed applications without launching an entire VM for each one. Instead, multiple isolated systems, called containers, run on a single control host and all access a single kernel. 1.11 Virtualization Management: Virtualization management is software that interfaces with virtual environments and the underlying physical hardware to simplify resource administration, enhance data analysis, and streamline operations. Each virtualization management system is unique, but most feature an uncomplicated user interface, streamline the VM creation process, monitor virtual environments, allocate resources, compile reports, and automatically enforce rules.

  24. Virtualization management tools: • RV Tools from Robware.net (free) For VMware environments, this handy little free application is written in Microsoft .NET and leverages the VMware SDKs to collect information from vCenter Servers and ESX/ESXi hosts. It supports both VI3 and vSphere and displays a wide variety of valuable information in a simple row-column, spreadsheet-like interface. • PowerShell from Microsoft (free) For VMware ESX/ESXi and Microsoft Hyper-V environments, PowerShell is a free, extensible command-line shell and associated scripting language developed by Microsoft. This virtualization management tool can be used to help automate common administration tasks and provide information about your Microsoft and VMware environments. 
• Citrix Essentials from Citrix (paid/free) For Citrix XenServer and Microsoft Hyper-V environments, Citrix Essentials is an application with separate versions for Hyper-V and XenServer that adds some powerful virtualization management capabilities and features to each. For both versions, it adds features like dynamic provisioning services, stage

  25. and lab management, workflow orchestration, and StorageLink technology for array integration. For XenServer, it also adds a high-availability feature and dynamic workload management. ❖ vControl from Vizioncore (paid) vControl is a multi-hypervisor, Web-based self-provisioning and VM virtualization management tool for Citrix XenServer, Microsoft Hyper-V, and VMware ESX/ESXi. It's a Windows application that uses open source software and consists of two components, a master server and a workflow server. ❖ VMC Management Console from Reflex Systems (paid) From its roots as a virtualization security product, Reflex VMC has evolved into a complete virtualization management product that provides monitoring, reporting, asset management, and automation for the whole VMware environment. Featuring a nice graphical interface, VMC consists of the main management console application with reporting, alerting, event correlation, and policy automation. 1.12 Storage Virtualization: Storage virtualization uses virtualization to enable better functionality and more advanced features in computer data storage systems. Within the context of a storage system, there are two primary types of virtualization that can occur: • Block virtualization in this context refers to the abstraction (separation) of logical storage (partitions) from physical storage so that it may be accessed without regard to physical storage or heterogeneous structure. 
This separation allows the administrators of the storage system greater flexibility in how they manage storage for end users. • File virtualization addresses the NAS challenges by eliminating the dependencies between the data accessed at the file level and the location where the files are physically stored. This provides opportunities to optimize storage use and server consolidation and to perform non-disruptive file migrations.

  26. Implementation • Using cloud services: simplicity, reliability, economies of scale, scalability, flexibility, etc. • Convergence of protocols: HTTP/DAV, etc. • APIs for data services: e.g., transfer, authentication and authorization, group membership, etc. Storage Virtualization: The Needs • Integrated global services: resources are aggregated and federated by cloud and distributed computing infrastructure - a data management platform and services for user communities across institute boundaries • Scalable infrastructure and analysis services: use any mountable storage system as a local data store

- Using standard POSIX access via a global mount point • Scientific gateway requirements: for both big-data and long-tail sciences - Data sharing, access, transmission, publication, discovery, archiving, etc. - Combined with the infrastructure and middleware, reducing the cost to create and maintain an ecosystem of integrated research applications • Simplify research data management: reuse and reproducibility - Agility in adding/removing storage - Geo-redirection - Load balancing / failover • Bridging to public clouds: e.g., Amazon S3 endpoints, OpenStack Ceph object storage - Performance improvements • The cost of storage is much higher than the cost of compute • Better storage resource efficiency • More easily deployed and shared by/with APAN communities 1.13 Network Virtualization: Network virtualization is a method of combining the available resources in a network by splitting up the available bandwidth into channels, each of which is independent from the others, and each of which can be assigned (or reassigned) to a particular server or device in real time. Each channel is independently secured. Every subscriber has shared access to all the resources on the network from a single computer. Network virtualization is defined by the ability to create logical, virtual networks that are decoupled from the underlying network hardware to ensure the network can better integrate with and support increasingly virtual environments. Over the past decade, organizations have been adopting virtualization technologies at an accelerated rate.
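The channel view of network virtualization described above can be sketched as a small allocator: the link's bandwidth is split into independent channels, each assignable (or reassignable) to a server in real time. The class, numbers, and server names are illustrative assumptions.

```python
# Sketch of network virtualization as bandwidth channels: a physical link
# is split into independent, equal channels that can be assigned and
# reassigned to servers at run time. All values here are illustrative.
class VirtualNetwork:
    def __init__(self, total_mbps, channels):
        self.per_channel = total_mbps // channels     # each channel independent
        self.assignment = {c: None for c in range(channels)}

    def assign(self, server):
        for c, owner in self.assignment.items():
            if owner is None:
                self.assignment[c] = server
                return c
        raise RuntimeError("no free channel")

    def reassign(self, channel, server):
        self.assignment[channel] = server             # real-time reassignment

net = VirtualNetwork(total_mbps=1000, channels=4)
c = net.assign("web01")
print(c, net.per_channel)   # 0 250
```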

  28. Network virtualization (NV) abstracts networking connectivity and services that have traditionally been delivered via hardware into a logical virtual network that is decoupled from, and runs independently on top of, a physical network in a hypervisor. Beyond L2-L3 services like switching and routing, NV typically incorporates virtualized L4-L7 services, including firewalling and server load balancing. NV solves many of the networking challenges in today's data centers, helping organizations centrally program and provision the network on demand without having to physically touch the underlying infrastructure. With NV, organizations can simplify how they roll out, scale, and adjust workloads and resources to meet evolving computing needs. Applying Virtualization to the Network When applied to a network, virtualization creates a logical, software-based view of the hardware and software networking resources (switches, routers, etc.). The physical networking devices are simply responsible for the forwarding of packets, while the virtual network (software) provides an intelligent abstraction that makes it easy to deploy and manage network services and underlying network resources. As a result, NV can align the network to better support virtualized environments. Fig: Network Virtualization (server virtualization decouples virtual machines from compute hardware; network virtualization decouples virtual networks, with logical switches, routers, load balancers, and firewalls, from packet-forwarding hardware)

  29. Virtualization technologies have recently moved from server virtualization to network virtualization. Server virtualization, also known as host or computer virtualization, allows multiple users to share the same server through virtual machines (VMs) by abstracting and decoupling the computing functionality from the underlying hardware. Server virtualization has played a pivotal role in cloud computing as one of its main enablers: through server virtualization, on-demand provisioning and flexible management of computing resources are made possible. Strictly speaking, server virtualization also includes the virtualization of network interfaces from the operating system's point of view; however, it does not involve any virtualization of the network fabric, such as switches and routers. Network virtualization, on the other hand, enables multiple isolated virtual networks to share the same physical network infrastructure, decoupling the network infrastructure from the services it provides. This paradigm shift allows virtual networks with truly differentiated services to coexist on the same infrastructure, maximizing its reusability. These virtual networks can be deployed on demand and dynamically allocated, just as VMs in server virtualization are. The functionalities that each virtual network can obtain from the virtualized infrastructure range from simple connectivity and performance guarantees to advanced support for new network protocols.
