A Collaborative Monitoring Mechanism for Making a Multitenant Platform Accountable


Presentation Transcript


  1. A Collaborative Monitoring Mechanism for Making a Multitenant Platform Accountable HotCloud '10 By Xuanran Zong

  2. Background • Applications are moving to the cloud • Pay-as-you-go basis • Resource multiplexing • Reduced over-provisioning cost • Cloud service uncertainty • How do clients know whether the cloud provider handles their data and logic correctly? • Logic correctness • Consistency constraints • Performance

  3. Service level agreement (SLA) • To ensure data and logic are handled correctly, the service provider offers a service level agreement to clients • Performance • e.g., one EC2 compute unit has the computation power of a 1–1.2 GHz CPU • Availability • e.g., the service will be up 99.9% of the time

  4. SLA • Problems • Few means are provided for clients to hold an SLA accountable when problems occur • Accountable means we know who is responsible when things go wrong • Monitoring is provided by the provider itself • Clients are often required to furnish evidence entirely by themselves to be eligible to claim credit for an SLA violation

  5. EC2 SLA Reference: http://usenix.org/events/hotcloud10/tech/slides/wangc.pdf

  6. Accountability service • Provided by a third party • Responsibilities • Collect evidence based on the SLA • Runtime compliance checking and problem detection

  7. Problem description • The client has a set of end-points {ep_0, ep_1, …, ep_{n-1}} that operate on data stored in a multitenant environment • Many things can go wrong • Data is modified without the owner's permission • A consistency requirement is broken • The accountability service should detect these issues and provide evidence

  8. System architecture • A wrapper provided by the third party • The wrapper captures the input/output of each ep_i and sends it to the accountability service
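As a concrete illustration of the wrapper idea, here is a minimal Python sketch; the names (accountability_wrapper, send_log) are illustrative stand-ins, not the paper's API:

```python
import json
import time
from functools import wraps

def accountability_wrapper(endpoint_id, send_log):
    """Wrap an end-point so every call's input/output is captured
    and forwarded to the accountability service W."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)  # run the real end-point
            send_log({
                "endpoint": endpoint_id,
                "timestamp": time.time(),   # used later for log ordering
                "input": json.dumps({"args": args, "kwargs": kwargs}, default=str),
                "output": json.dumps(result, default=str),
            })
            return result
        return wrapper
    return decorator

# Usage: wrap a tenant end-point ep_0; `print` stands in for the
# transport that ships the log message to W.
@accountability_wrapper("ep_0", send_log=print)
def update_record(key, value):
    return {"key": key, "value": value, "status": "ok"}

update_record("k1", "v1")  # prints the captured log message
```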

  9. Accountability service • The accountability service maintains a view of the data state • Reflects what the data should be from the users' perspective • Aggregates users' data-update requests to compute the data state • Authenticates query results against the computed data state

  10. Evidence collection and processing • The logging service w_ep extracts operation information and sends a log message to the accountability service W • If it is an update operation, W updates the MB-tree • If it is a query operation, W authenticates the result against the MB-tree, ensuring correctness and completeness • The MB-tree maintains the data state

  11. Data state calculation • Use a Merkle B-tree (MB-tree) to calculate the data state • By combining the items in the verification object (VO), we can recompute the root of the MB-tree and compare it with the trusted root to reveal the correctness and completeness of the query result
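The paper uses a Merkle B-tree; the sketch below substitutes a plain binary Merkle tree to keep the verification step visible (a simplification: a real MB-tree VO also carries boundary keys and fan-out greater than two). Combining the queried leaf with the sibling hashes in the VO must reproduce the trusted root:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_vo(leaf_value: bytes, vo, trusted_root: bytes) -> bool:
    """Recompute the root from a leaf and its VO: a list of
    (sibling_hash, sibling_is_left) pairs ordered leaf-to-root."""
    node = h(leaf_value)
    for sibling, sibling_is_left in vo:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == trusted_root

# Tiny 4-leaf tree built by hand.
leaves = [h(v) for v in (b"a", b"b", b"c", b"d")]
n01, n23 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
root = h(n01 + n23)

# VO for leaf b"c": its right sibling's hash, then the left subtree's hash.
assert verify_vo(b"c", [(leaves[3], False), (n01, True)], root)
```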

  12. Consistency issue • What if log messages arrive out of order? • Assume eventual consistency • Clocks are synchronized • Maintain a sliding window of log messages sorted by timestamp • The window size is determined by the maximum delay of passing a log message from a client to W
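A minimal sketch of the sliding-window idea, assuming only what the slide states: synchronized clocks and a known bound max_delay on client-to-W delivery. Messages sit in a min-heap keyed by timestamp and are released in order once no earlier message can still arrive:

```python
import heapq
import time

class LogReorderBuffer:
    """Reorder out-of-order log messages within a time window."""

    def __init__(self, max_delay: float):
        self.max_delay = max_delay  # assumed bound on client-to-W delay
        self._heap = []             # min-heap of (timestamp, seq, msg)
        self._seq = 0               # tie-breaker for equal timestamps

    def add(self, msg: dict):
        heapq.heappush(self._heap, (msg["timestamp"], self._seq, msg))
        self._seq += 1

    def release_ready(self, now: float = None):
        """Yield, in timestamp order, messages older than now - max_delay."""
        now = time.time() if now is None else now
        while self._heap and self._heap[0][0] <= now - self.max_delay:
            yield heapq.heappop(self._heap)[2]

# Messages arriving out of order are emitted sorted by timestamp.
buf = LogReorderBuffer(max_delay=2.0)
buf.add({"timestamp": 10.0, "op": "update"})
buf.add({"timestamp": 9.5, "op": "query"})
print([m["op"] for m in buf.release_ready(now=13.0)])  # ['query', 'update']
```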

  13. Collaborative monitoring mechanism • The current approach is centralized, raising availability, scalability, and trustworthiness concerns • Let's make it distributed • The data state is maintained by a set of services • Each service maintains a view of the data state

  14. Design choice I • A log is sent to one data-state service, which then propagates it to the other services synchronously • Pros • Strong consistency • A request can be answered by any service • Cons • Large overhead due to synchronous communication

  15. Design choice II • A log is sent to one service, which propagates it asynchronously • Pros • Better logging performance • Cons • Uncertainty in answering an authentication request
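The trade-off between the two design choices can be made concrete with a small sketch; Replica and apply are hypothetical stand-ins for a data-state service:

```python
from concurrent.futures import ThreadPoolExecutor

class Replica:
    """Stand-in for one data-state service."""
    def __init__(self):
        self.log = []
    def apply(self, msg):
        self.log.append(msg)  # commit the log message at this replica

def propagate_sync(msg, replicas):
    """Design I: return only after every replica has applied the log,
    so any replica can answer the next authentication request."""
    for r in replicas:
        r.apply(msg)

def propagate_async(msg, replicas, pool):
    """Design II: fire-and-forget; logging returns immediately, but a
    replica queried right away may not reflect this log yet."""
    for r in replicas:
        pool.submit(r.apply, msg)

replicas = [Replica() for _ in range(3)]
propagate_sync({"op": "update"}, replicas)
with ThreadPoolExecutor() as pool:
    propagate_async({"op": "query"}, replicas, pool)
```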

  16. Their design • Somewhere between the two extremes • Partition the key range into a few disjoint regions • A log message is sent only to its designated region • Log messages propagate synchronously within a region and asynchronously across regions • An authentication request is directed to the service whose region overlaps most with the request range • Answer with certainty if the request range falls inside the service's region • Wait, if not
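A sketch of the hybrid design under stated assumptions (region edges fixed up front; apply_sync and apply_async as stand-ins for intra- and inter-region propagation): a log goes synchronously to the region owning its key and asynchronously everywhere else, and an authentication request is answered with certainty only when its range lies inside a single region:

```python
from bisect import bisect_right

class RegionService:
    """One data-state service owning a key region (illustrative stub)."""
    def __init__(self):
        self.state = []
    def apply_sync(self, msg):   # blocks until the log is applied
        self.state.append(msg)
    def apply_async(self, msg):  # queued; applied eventually
        self.state.append(msg)

class PartitionedAccountability:
    def __init__(self, edges):
        # edges = [k0, k1, ..., kn]; region i covers [edges[i], edges[i+1])
        self.edges = edges
        self.services = [RegionService() for _ in range(len(edges) - 1)]

    def region_of(self, key):
        return bisect_right(self.edges, key) - 1

    def log(self, key, msg):
        owner = self.region_of(key)
        self.services[owner].apply_sync(msg)   # synchronous within the region
        for i, s in enumerate(self.services):
            if i != owner:
                s.apply_async(msg)             # asynchronous across regions

    def authenticate(self, lo, hi):
        # Pick the service whose region overlaps the query range most.
        overlaps = [max(0, min(hi, self.edges[i + 1]) - max(lo, self.edges[i]))
                    for i in range(len(self.services))]
        i = max(range(len(self.services)), key=overlaps.__getitem__)
        # A certain answer requires [lo, hi) to lie wholly inside region i;
        # otherwise the service must wait for async logs to catch up.
        certain = self.edges[i] <= lo and hi <= self.edges[i + 1]
        return i, ("answer" if certain else "wait")

p = PartitionedAccountability([0, 25, 50, 75, 100])
p.log(30, {"op": "update", "key": 30})
print(p.authenticate(26, 49))  # (1, 'answer'): range inside region [25, 50)
print(p.authenticate(26, 60))  # (1, 'wait'): range spans two regions
```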

  17. Evaluation • Overhead • Centralized design • Where does the overhead come from?

  18. Evaluation • VO calculation overhead

  19. Evaluation • Performance improvement with multiple data state services

  20. Discussion • The paper articulates the problem clearly and shows one solution that employs a third party to make the data state accountable • Which part is the main overhead? • Communication? VO calculation? • The distributed design does not help much when the query range is large • Will people sacrifice performance (at least doubling response time) to make the service accountable? • Can a similar design make other aspects accountable? For instance, performance?