
Cloud@Home: Building Open and Interoperable Clouds

This chapter introduces the Cloud@Home paradigm, which merges volunteer and cloud computing to create open and interoperable clouds. The chapter discusses the motivations behind cloud computing, the importance of virtualization, service-oriented architecture, and other key technologies. It also explores the different layers of the cloud architecture and their services.


Presentation Transcript


  1. Chapter 6 Open and Interoperable Clouds: The Cloud@Home Way Vincenzo D. Cunsolo, Salvatore Distefano, Antonio Puliafito, and Marco Scarpa

  2. Abstract Cloud computing focuses on the idea of service as the elementary unit for building any application. Even though Cloud computing was originally developed in commercial applications, the paradigm is quickly and widely spreading in open contexts such as scientific and academic communities. Two main research directions can thus be identified: provide an open Cloud infrastructure able to provide and share resources and services to the community;

  3. Abstract And implement an interoperable framework, allowing commercial and open Cloud infrastructures to interact and interoperate. In this chapter, we present the Cloud@Home paradigm that proposes to merge Volunteer and Cloud computing as an effective and feasible solution for building open and interoperable Clouds. In this new paradigm, users' hosts are no longer passive interfaces to Cloud services, but can interact (for free or for a charge) with other Clouds, which therefore must be able to interoperate.

  4. 6.1 Introduction and Motivation Cloud computing is a distributed computing paradigm that mixes aspects of Grid computing, Internet computing, Autonomic computing, Utility computing, and Green computing. Cloud computing is derived from the service-centric perspective that is quickly and widely spreading in the IT world. From this perspective, all capabilities and resources of a Cloud (usually geographically distributed) are provided to the users as a service, to be accessed through the Internet without any specific knowledge of, expertise with, or control over the underlying technology infrastructure that supports them.

  5. 6.1 Introduction and Motivation Cloud computing offers a user-centric interface that acts as a unique, user-friendly point of access for users' needs and requirements. Moreover, it provides on-demand service provision, a QoS-guaranteed offer, and autonomous management of hardware, software, and data, transparently to the users [25]. In order to achieve such goals, it is necessary to implement a level of abstraction of physical resources, uniforming their interfaces and providing means for managing them adaptively according to user requirements. This is done through virtualization, service mashups (Web 2.0), and service-oriented architectures (SOA). These factors make real Kleinrock's outlook of computing as the fifth utility [13], following gas, water, electricity, and telephone.

  6. 6.1 Introduction and Motivation Virtualization [4,23] allows execution of a software version of a hardware machine in a host system, in an isolated way. It “homogenizes” resources: compatibility problems are overcome by providing the heterogeneous hosts of a distributed computing environment (the Cloud) with the same virtual machine software. Web 2.0 [20] provides an interesting way to interface Cloud services, implementing service mashups. It is mainly based on an evolution of JavaScript with improved language constructs (late binding, closures, lambda functions, etc.) and AJAX interactions.
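To make the isolation property concrete, the following minimal sketch (not from the chapter) starts a transient virtual machine on a contributing host; it assumes a KVM/QEMU host with the libvirt Python bindings installed, and the guest name and disk image path are hypothetical:

```python
import libvirt  # Python bindings for the libvirt virtualization API

# Connect to the local hypervisor (KVM/QEMU assumed in this sketch).
conn = libvirt.open("qemu:///system")

# Minimal, hypothetical domain description: the same guest image can run
# unmodified on any contributing host, which is how virtualization
# "homogenizes" otherwise heterogeneous resources.
domain_xml = """
<domain type='kvm'>
  <name>cloudathome-worker</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/worker.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

# createXML starts a transient guest that disappears once it shuts down,
# leaving no trace on the volunteer's machine.
dom = conn.createXML(domain_xml, 0)
print(dom.name(), "running:", bool(dom.isActive()))
```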

  7. 6.1 Introduction and Motivation SOA (Service Oriented Architecture) is a paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains [14]. In SOA, services are the mechanism by which needs and capabilities are brought together. SOA defines standard interfaces and protocols that allow developers to encapsulate information tools as services that clients can access without knowledge of, or control over, their internal workings [8]. An interesting attempt to fix Cloud concepts and ideas is provided in [26] through an ontology that proposes a dissection of the Cloud into the five main layers shown in Fig. 6.1. In this ontology, higher-layer services can be composed from the services of the underlying layers, which are:

  8. 6.1 Introduction and Motivation [Fig. 6.1: the five main layers of the Cloud ontology]

  9. 6.1 Introduction and Motivation • Cloud Application Layer: provides interface and access-management tools (Web 2.0, authentication, billing, SLA, etc.), specific application services, service mashup tools, etc. to Cloud end users. This model is referred to as Software as a Service (SaaS). • Cloud Software Environment Layer: providers of the Cloud software environments supply users and Cloud application developers with a programming-language-level environment and a set of well-defined APIs. The services provided by this layer are referred to as Platform as a Service (PaaS).

  10. 6.1 Introduction and Motivation • Cloud Software Infrastructure Layer: provides fundamental resources to the higher-level layers. Services can be categorized into: • (a) Computational resources – provides computational resources (VMs) to Cloud end users. Often, such services are dubbed Infrastructure as a Service (IaaS). • (b) Data storage – allows users to store their data on remote disks and access them anytime from any place. These services are commonly known as Data-Storage as a Service (DaaS). • (c) Communications – provides communication capabilities that are service-oriented, configurable, schedulable, predictable, and reliable. The concept of Communication as a Service (CaaS) emerged toward this goal, to support such requirements.

  11. 6.1 Introduction and Motivation • SOAP and REST are examples of interface protocols used with some Cloud computational resources. • Software Kernel: provides the basic software management for the physical servers that comprise the Cloud: OS kernel, hypervisor, virtual machine monitor, clustering and grid middleware, etc. • Hardware and Firmware: form the backbone of the Cloud. End users directly interacting with the Cloud at this layer are those with huge IT requirements in need of subleasing Hardware as a Service (HaaS).
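To make the REST option concrete, here is a minimal sketch of how a client might ask an IaaS-style endpoint for a virtual machine; the endpoint URL, payload fields, and response shape are hypothetical illustrations, not an interface defined in the chapter:

```python
import requests  # third-party HTTP client

# Hypothetical IaaS endpoint; real providers define their own REST
# (or SOAP) interfaces for the same purpose.
IAAS_URL = "https://cloud.example.org/api/v1/instances"

request_body = {
    "image": "ubuntu-22.04",   # guest image to boot
    "vcpus": 2,                # computational resources (IaaS)
    "memory_mib": 2048,
    "storage_gib": 20,         # data storage requirement (DaaS-like)
}

response = requests.post(IAAS_URL, json=request_body, timeout=30)
response.raise_for_status()
instance = response.json()
print("Provisioned instance:", instance.get("id"))
```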

  12. 6.1 Introduction and Motivation Great interest in Cloud computing has been manifested by both academic and private research centers, and numerous projects from industry and academia have been proposed. In commercial contexts, among others, we highlight: Amazon Elastic Compute Cloud, IBM’s Blue Cloud, Sun Microsystems Network.com, Microsoft Azure Services Platform, Dell Cloud computing solutions, etc.

  13. 6.1 Introduction and Motivation There are also several scientific activities driving toward Open Cloud-computing middlewares and infrastructures, such as: Reservoir [18], Nimbus-Stratus-Wispy-Kupa [22], OpenNebula [7], Eucalyptus [17], etc. All of them support and provide an on-demand computing paradigm, in the sense that a user submits his/her requests to the Cloud, which remotely, in a distributed fashion, processes them and gives back the results.

  14. 6.1 Introduction and Motivation This client-server model fits the aims and scope of commercial Clouds: the business. But, on the other hand, it represents a restriction for open/scientific Clouds, requiring great amounts of computing-storage resources usually not available from a single open/scientific community. This suggests the necessity to collect such resources from different providers and/or contributors who could share their resources with the specific community, perhaps by making “symbiotic” federations. In fact, one of the most successful paradigms in such contexts is Volunteer computing.

  15. 6.1 Introduction and Motivation Volunteer computing (also called Peer-to-Peer computing, Global computing, or Public computing) uses computers volunteered by their owners as a source of computing power and storage to provide distributed scientific computing [2]. It is the basis of the “@home” philosophy of sharing/donating network connected resources for supporting distributed scientific computing.

  16. 6.1 Introduction and Motivation • We believe that the Cloud-computing paradigm is also applicable at lower scales, from the single contributing user who shares his/her desktop, to research groups, public administrations, social communities, and small and medium enterprises, who can make their distributed computing resources available to the Cloud. Both free sharing and pay-per-use models can be easily adopted in such scenarios.

  17. 6.1 Introduction and Motivation From the utility point of view, the rise of the “techno-utility complex” and the corresponding increase in computing resource demands, in some cases growing dramatically faster than Moore’s Law, as predicted by the Sun CTO Greg Papadopoulos in the red shift theory for IT [15], could take us, in the near future, toward an oligarchy, a lobby, or a trust of a few big companies controlling the whole computing resources market.

  18. 6.1 Introduction and Motivation To avoid such a pessimistic but achievable scenario, we suggest addressing the problem in a different way: instead of building costly private data centers that the Google CEO, Eric Schmidt, likes to compare with the prohibitively expensive cyclotrons [3], we propose a more “democratic” form of Cloud computing, in which the computing resources of single users accessing the Cloud can be shared with others in order to contribute to the elaboration of complex problems.

  19. 6.1 Introduction and Motivation As this paradigm is very similar to the Volunteer computing one, it can be named Cloud@Home. Both the hardware and software compatibility limitations and restrictions of Volunteer computing can be solved in Cloud computing environments, allowing both hardware and software resources and/or services to be shared.

  20. 6.1 Introduction and Motivation The Cloud@Home paradigm could also be applied to commercial Clouds, establishing an open computing-utility market where users can both buy and sell their services. Since the computing power can be described by a “long-tailed” distribution, in which a high-amplitude population (Cloud providers and commercial data centers) is followed by a low-amplitude population (small data centers and private users) that gradually “tails off” asymptotically, Cloud@Home can catch the Long Tail effect [1], providing similar or higher computing capabilities than commercial providers' data centers by grouping small computing resources from many single contributors.
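A purely illustrative back-of-the-envelope calculation of this aggregation effect follows; every figure in it is an assumption chosen for the example, not a measurement from the chapter:

```python
# Illustrative only: all figures are assumed, not measured.
contributors = 1_000_000           # volunteered desktops in the "long tail"
avg_gflops_per_host = 20           # assumed sustained GFLOPS per host
availability = 0.25                # assumed fraction of time a host is online

aggregate_pflops = contributors * avg_gflops_per_host * availability / 1e6
print(f"Aggregate capacity: {aggregate_pflops:.1f} PFLOPS")  # 5.0 PFLOPS
```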

  21. 6.1 Introduction and Motivation In the following, we demonstrate how it is possible to realize all these aims through the Cloud@Home paradigm. In Section 2, we describe the functional architecture of the Cloud@Home infrastructure, and in Section 3, we characterize the blocks implementing the functions previously identified into the Cloud@Home core structure. Section 4 concludes the chapter by recapitulating our work and discussing challenges and future work.

  22. 6.2 Cloud@Home Overview The idea behind Cloud@Home is to reuse “domestic” computing resources to build voluntary contributors' Clouds that are interoperable with each other and, moreover, with other foreign, and also commercial, Cloud infrastructures. With Cloud@Home, anyone can experience the power of Cloud computing, both actively, by providing his/her own resources and services, and passively, by submitting his/her applications and requirements.

  23. 6.2.1 Issues, Challenges, and Open Problems Ian Foster summarizes the computing paradigm of the future as follows [9]: “…we will need to support on-demand provisioning and configuration of integrated “virtual systems” providing the precise capabilities needed by an end user. We will need to define protocols that allow users and service providers to discover and hand off demands to other providers, to monitor and manage their reservations, and arrange payment.

  24. 6.2.1 Issues, Challenges, and Open Problems • We will need tools for managing both the underlying resources and the resulting distributed computations. We will need the centralized scale of today’s Cloud utilities, and the distribution and interoperability of today’s Grid facilities….”

  25. 6.2.1 Issues, Challenges, and Open Problems We share all these requirements, but in a slightly different way: we want to actively involve users into such a new form of computing, allowing them to create their own interoperable Clouds. In other words, we believe that it is possible to export, apply, and adapt the “@home” philosophy to the Cloud-computing paradigm. In this way, by merging Volunteer and Cloud computing, a new paradigm can be created: Cloud@Home.

  26. 6.2.1 Issues, Challenges, and Open Problems This new computing paradigm gives back the power and control to users, who can decide how to manage their resources/services in a global, geographically distributed context. They can voluntarily sustain scientific projects by freely placing their resources/services at the scientific research centers' disposal, or can earn money by selling their resources to Cloud-computing providers in a pay-per-use/share context.

  27. 6.2.1 Issues, Challenges, and Open Problems Therefore, in Cloud@Home, both the commercial/business and Volunteer/scientific viewpoints coexist: in the former case, the end-user orientation of Cloud is extended to a collaborative two-way Cloud in which users can buy and/or sell their resources/services; in the latter case, the Grid philosophy of few but large computing requests is extended and enhanced to open Virtual Organizations. In both cases, QoS requirements could be specified, introducing into the Grid and Volunteer philosophy (best effort) the concept of quality.

  28. 6.2.1 Issues, Challenges, and Open Problems • Cloud@Home can also be considered as a generalization and a maturation of the @home philosophy: a context in which users voluntarily share their resources without compatibility problems.

  29. 6.2.1 Issues, Challenges, and Open Problems This allows knocking down both the hardware (processor bits, endianness, architecture, and network) and software (operating systems, libraries, compilers, applications, and middleware) barriers of Grid and Volunteer computing. Moreover, Cloud@Home allows users to share not only physical resources, as in @home projects or Grid environments, but any kind of service. The flexibility and extensibility of Cloud@Home allow significant computing resources (greater than those in Clouds, Grids, and/or @home environments) to be easily arranged, managed, and made available to everyone who owns a computer.

  30. 6.2.1 Issues, Challenges, and Open Problems • Another significant improvement of Cloud@Home with regard to Volunteer computing paradigms is the QoS/SLA management: starting from the credit management system and other similar experiments on QoS, a mechanism for adequately monitoring, ensuring, negotiating, accounting, billing, and managing, in general, QoS and SLA will be implemented.
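As a minimal sketch of what such bookkeeping might look like, the record below is a hypothetical illustration, loosely inspired by credit systems such as BOINC's, and not the chapter's design; every field and method name is an assumption:

```python
from dataclasses import dataclass

@dataclass
class ContributionRecord:
    """Hypothetical per-contributor record for QoS/SLA accounting."""
    host_id: str
    cpu_hours: float = 0.0          # resources actually delivered
    credits: float = 0.0            # credits earned (or billed to consumers)
    sla_violations: int = 0         # e.g. missed availability windows

    def charge(self, cpu_hours: float, rate_per_hour: float) -> None:
        # Accounting/billing: convert delivered CPU time into credits.
        self.cpu_hours += cpu_hours
        self.credits += cpu_hours * rate_per_hour

    def meets_sla(self, max_violations: int = 3) -> bool:
        # Crude SLA check: too many violations disqualify the host.
        return self.sla_violations <= max_violations

record = ContributionRecord(host_id="volunteer-42")
record.charge(cpu_hours=12.5, rate_per_hour=0.8)
print(record.credits, record.meets_sla())   # 10.0 True
```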

  31. 6.2.1 Issues, Challenges, and Open Problems On the other hand, Cloud@Home can be considered as the enhancement of the Grid-Utility vision of Cloud computing. In this new paradigm, users' hosts are not passive interfaces to Cloud services, but can be actively involved in computing. At worst, single nodes and services could be enrolled by the Cloud@Home middleware to build their own private Cloud infrastructures that can interact with other Clouds.

  32. 6.2.1 Issues, Challenges, and Open Problems The Cloud@Home motto is: heterogeneous hardware for homogeneous Clouds. Thus, the scenario we prefigure is composed of several coexisting and interoperable Clouds, as depicted in Fig. 6.2. Open Clouds (yellow) identify open VOs operating for free Volunteer computing;

  33. 6.2.1 Issues, Challenges, and Open Problems [Fig. 6.2: coexisting and interoperable Open, Commercial, and Hybrid Clouds]

  34. 6.2.1 Issues, Challenges, and Open Problems Commercial Clouds (blue) characterize entities or companies selling their computing resources for business; and Hybrid Clouds (green) can both sell and give their services for free. Both Open and Hybrid Clouds can interoperate with any other Cloud, including Commercial ones, while the latter can interoperate only if the Commercial Clouds mutually recognize each other. In this way, it is possible to make federations of heterogeneous Clouds that can work together on the same project. Such a scenario has to be implemented transparently to users, who do not need to know whether their applications are running in homogeneous or heterogeneous Clouds.

  35. 6.2.1 Issues, Challenges, and Open Problems The differences among homogeneous and heterogeneous Clouds are only concerned with implementation issues, mainly affecting the resource management: in the former case, resources are managed locally to the Cloud; in heterogeneous Clouds, interoperable services have to be implemented in order to support discovery, connectivity, translation, and negotiation requirements amongst Clouds.
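A minimal sketch of what such an interoperability layer could look like follows; the interface and method names are hypothetical illustrations of the four requirements just listed, not an API defined in the chapter:

```python
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    """Hypothetical per-Cloud adapter covering the four interoperability needs."""

    @abstractmethod
    def discover(self) -> list[dict]:
        """Return the resources/services the foreign Cloud currently offers."""

    @abstractmethod
    def connect(self) -> None:
        """Establish connectivity (authentication, channels) with the foreign Cloud."""

    @abstractmethod
    def translate(self, request: dict) -> dict:
        """Map a local resource request onto the foreign Cloud's own format."""

    @abstractmethod
    def negotiate(self, request: dict) -> dict:
        """Agree on QoS/price terms and return the accepted offer."""
```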

  36. 6.2.1 Issues, Challenges, and Open Problems The overall infrastructure must deal with the high dynamism of its nodes/resources, allowing data, tasks, and jobs to be moved and reallocated. It is therefore necessary to implement a lightweight middleware, specifically designed to optimize migrations. The choice of developing such middleware on top of existing technologies (as done in Nimbus-Stratus, starting from Globus) could be limiting, inefficient, or inadequate from this point of view.

  37. 6.2.1 Issues, Challenges, and Open Problems This represents another significant enhancement of Cloud@Home over Grid: a lightweight middleware allows devices with limited resources to be involved in the Cloud, mainly as consumer hosts accessing the Cloud through a “thin client” but also, in some specific applications, as contributing hosts implementing (light) services according to their availabilities.

  38. 6.2.1 Issues, Challenges, and Open Problems Moreover, the Cloud@Home middleware does not influence code writing as the Grid and Volunteer computing paradigms do. Another important goal of Cloud@Home is security. Volunteer computing has security concerns, while the Grid paradigm implements complex security mechanisms. Virtualization in Clouds implements isolation of services, but does not provide any protection from local access. With regard to security, the specific goal of Cloud@Home is to extend the security mechanisms of Clouds to the protection of data from local access.
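One way to illustrate protecting stored data from local access on the host that physically holds it is client-side encryption before the data ever reaches a contributed disk; the sketch below uses the third-party cryptography package and is an assumption about one possible mechanism, not the chapter's design:

```python
from cryptography.fernet import Fernet

# The key stays with the data owner (or the Cloud@Home middleware),
# never with the volunteer host that stores the bytes.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"experiment results to be stored on a volunteer's disk"
ciphertext = cipher.encrypt(plaintext)

# The contributing host only ever sees ciphertext; local (even root)
# access to its disk does not reveal the data without the key.
store_on_contributed_disk = ciphertext

# The owner can later fetch the ciphertext and decrypt it.
assert cipher.decrypt(store_on_contributed_disk) == plaintext
```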

  39. 6.2.1 Issues, Challenges, and Open Problems As Cloud@Home is composed of an amount of resources potentially larger than that of commercial or proprietary Cloud solutions, its reliability can be compared with that of Grid or Volunteer computing and should be greater than that of other Clouds.

  40. 6.2.1 Issues, Challenges, and Open Problems • Lastly, interoperability is one of the most important goals of Cloud@Home. This is an open problem in Grid, Volunteer, and Cloud computing, which we want to address in Cloud@Home. • The most important issues that should be taken into account in order to implement such a form of computing can be listed as follows:

  41. 6.2.1 Issues, Challenges, and Open Problems • Resources and Services management – a mechanism for managing the resources and services offered by Clouds is mandatory. This must be able to enroll, discover, index, assign and reassign, monitor, and coordinate resources and services (a minimal registry sketch is given after this list of issues). A problem to face at this level is the compatibility among resources and services, and their portability.

  42. 6.2.1 Issues, Challenges, and Open Problems • Frontend – abstraction is needed in order to provide users with a high-level service-oriented point of view of the computing system. The frontend provides a unique, uniform access point to the Cloud. It must allow users to submit functional computing requests, only providing requirements and specifications, without any knowledge of the system-resources deployment.

  43. 6.2.1 Issues, Challenges, and Open Problems • The system evaluates such requirements and specifications, and translates them into physical resource demands, deploying the elaboration process. Another aspect concerning the frontend is the capability of customizing Cloud services and applications.

  44. 6.2.1 Issues, Challenges, and Open Problems • Security – effective mechanisms are required to provide authentication, resource and data protection, data confidentiality, and integrity. • Resource and service accessibility, reliability, and data consistency – it is necessary to implement redundancy of resources and services, and host recovery policies, because users voluntarily contribute to the computing and, therefore, can asynchronously, at any time, log out or disconnect from the Cloud.

  45. 6.2.1 Issues, Challenges, and Open Problems • Interoperability among Clouds – it should be possible for Clouds to interoperate. • Business models – for selling Cloud computing, it is mandatory to provide QoS and SLA management for both commercial and open-volunteer Clouds (traditionally best effort) to discriminate among the applications to be run.
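To close, here is the minimal registry sketch referred to under the resources-and-services management issue above; the class and its methods are hypothetical illustrations of the enroll/discover/assign/monitor cycle, not the chapter's implementation:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    host_id: str
    kind: str                       # e.g. "vm" or "storage"
    capacity: int                   # e.g. vCPUs or GiB, depending on kind
    online: bool = True
    assigned_to: str | None = None  # job currently using the resource

class ResourceRegistry:
    """Hypothetical registry: enroll, discover, assign, and monitor resources."""

    def __init__(self) -> None:
        self._resources: dict[str, Resource] = {}

    def enroll(self, resource: Resource) -> None:
        self._resources[resource.host_id] = resource

    def discover(self, kind: str, min_capacity: int) -> list[Resource]:
        return [r for r in self._resources.values()
                if r.online and r.assigned_to is None
                and r.kind == kind and r.capacity >= min_capacity]

    def assign(self, host_id: str, job_id: str) -> None:
        self._resources[host_id].assigned_to = job_id

    def monitor(self, host_id: str, online: bool) -> None:
        # Volunteers may disconnect at any time; mark the host so its
        # work can later be reassigned elsewhere.
        self._resources[host_id].online = online

registry = ResourceRegistry()
registry.enroll(Resource(host_id="volunteer-1", kind="vm", capacity=4))
candidates = registry.discover(kind="vm", min_capacity=2)
registry.assign(candidates[0].host_id, job_id="job-17")
```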
