
Service-Oriented Computing: Service Discovery and Composition



  1. Service-Oriented Computing: Service Discovery and Composition

  2. Service Discovery
  • UDDI
    • Discovery based on WSDL information
  • WS-Discovery
    • Provides an interface for service discovery
    • Defines a multicast discovery protocol
    • Limitations: no service liveness information, limited service description
  • Universal Plug and Play (UPnP)
    • Enables dynamic networking of intelligent appliances, wireless devices, and PCs
    • Not exactly web services, but another form of standardization for physical systems

  3. WS-Discovery
  • Extends UDDI-style discovery to make it distributed
  • WS-Discovery multicast message types (see the sketch below):
    • Hello: sent by a Target Service when it joins a network
    • Bye: sent by a Target Service when it leaves a network
    • Probe: sent by a Client searching for a Target Service
      • Search by Type and/or Scope
    • Resolve: sent by a Client searching for a Target Service by name
      • The Client already knows the target service by name, but may not know the communication details
  • Response unicast message types:
    • Probe Match: a Target Service matches a “Probe”
    • Resolve Match: a Target Service matches a “Resolve”
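A minimal Python sketch of the message types and the probe/resolve matching logic above. It is an in-memory approximation only: the real protocol exchanges SOAP-over-UDP multicast messages, and all class, field, and example names here are illustrative assumptions.

# Minimal sketch of WS-Discovery message handling (illustrative only;
# the real protocol uses SOAP-over-UDP multicast messages).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TargetService:
    name: str                      # logical service name
    types: List[str]               # port types implemented
    scopes: List[str]              # administrative scopes
    xaddrs: List[str]              # transport addresses (communication details)

class DiscoveryProxy:
    """In-memory registry that reacts to Hello/Bye and answers Probe/Resolve."""
    def __init__(self):
        self.services: List[TargetService] = []

    def hello(self, svc: TargetService) -> None:          # Target Service joins
        self.services.append(svc)

    def bye(self, svc_name: str) -> None:                 # Target Service leaves
        self.services = [s for s in self.services if s.name != svc_name]

    def probe(self, types=None, scopes=None) -> List[TargetService]:
        """Probe: search by Type and/or Scope; returns Probe Match candidates."""
        def matches(s):
            ok_type = not types or any(t in s.types for t in types)
            ok_scope = not scopes or any(sc in s.scopes for sc in scopes)
            return ok_type and ok_scope
        return [s for s in self.services if matches(s)]

    def resolve(self, name: str) -> Optional[TargetService]:
        """Resolve: already know the service by name, ask for its addresses."""
        return next((s for s in self.services if s.name == name), None)

# Usage sketch (hypothetical service)
proxy = DiscoveryProxy()
proxy.hello(TargetService("Printer42", ["PrintService"], ["floor3"], ["http://10.0.0.5/print"]))
print([s.name for s in proxy.probe(types=["PrintService"])])   # Probe Match
print(proxy.resolve("Printer42").xaddrs)                        # Resolve Match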

  4. UPnP
  • Devices
    • When entering a network, register with the control point
    • When leaving the network, inform the control point
  • Control point
    • When entering the network, searches for devices

  5. UPnP
  • Device specification
    • Device name, vendor name, URL for the device, etc.
    • Commands, actions, parameters, etc.
    • Current state information
  • Interactions
    • Control: the control point sends commands/actions (in XML) to activate the device
    • Event:
      • The control point can request the device to send updates when some variables (specified in the event) are updated
      • The device can accept the request and respond with an event duration
    • Presentation:
      • If the device has a URL, the control point can request and fetch it
      • Some devices can be controlled through the URL interface

  6. UPnP
  • Compared to UDDI
    • Similar, but has an additional interaction feature
    • Lacks semantics, so it is hard to compose the devices together to achieve a client goal
  • Can we wrap devices into high-level services and use the OWL-S technologies to add semantics to devices?

  7. Semantic Web Service Discovery
  • Semantic search template
    • The user’s service requirements
  • Semantic web service descriptions
  • A similarity-based matching scheme (just an example; see the sketch below)
    • The search template is matched against a set of candidate Web services in a registry and the match scores are calculated
    • Overall similarity = weighted average of syntactic similarity and functional similarity (normalized sum of operation similarity)
    • Syntactic similarity = weighted average of the name similarity and description similarity
    • Operation similarity = weighted average of syntactic similarity, conceptual similarity, and I/O similarity
    • … (further decompose the similarity definitions)
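A small Python sketch of the weighted-average scheme above. The weights, the string-based name/description similarity, and the precomputed conceptual and I/O similarities are placeholder assumptions; a real matcher would use ontology reasoning for the latter.

# Sketch of the similarity-based matching scheme (weights and similarity
# measures are illustrative assumptions).
from difflib import SequenceMatcher

def text_sim(a: str, b: str) -> float:
    """Placeholder syntactic similarity (string similarity in [0, 1])."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def syntactic_sim(tmpl, cand, w_name=0.6, w_desc=0.4) -> float:
    # Weighted average of name similarity and description similarity
    return (w_name * text_sim(tmpl["name"], cand["name"])
            + w_desc * text_sim(tmpl["description"], cand["description"]))

def operation_sim(op_t, op_c, w_syn=0.4, w_con=0.3, w_io=0.3) -> float:
    # Weighted average of syntactic, conceptual, and I/O similarity;
    # conceptual and I/O similarity are assumed precomputed placeholders here.
    syn = text_sim(op_t["name"], op_c["name"])
    con = op_t.get("concept_sim", 0.0)
    io = op_t.get("io_sim", 0.0)
    return w_syn * syn + w_con * con + w_io * io

def functional_sim(tmpl, cand) -> float:
    # For each desired operation, keep the best-matching candidate operation,
    # then normalize the sum over all desired operations.
    scores = [max(operation_sim(ot, oc) for oc in cand["operations"])
              for ot in tmpl["operations"]]
    return sum(scores) / len(scores) if scores else 0.0

def overall_sim(tmpl, cand, w_syn=0.3, w_fun=0.7) -> float:
    # Overall similarity = weighted average of syntactic and functional similarity
    return w_syn * syntactic_sim(tmpl, cand) + w_fun * functional_sim(tmpl, cand)

# Usage sketch with invented template/candidate descriptions
template = {"name": "WeatherLookup", "description": "get weather by city",
            "operations": [{"name": "getForecast", "concept_sim": 0.8, "io_sim": 0.7}]}
candidate = {"name": "WeatherService", "description": "weather forecast by city name",
             "operations": [{"name": "forecastByCity"}, {"name": "ping"}]}
print(round(overall_sim(template, candidate), 3))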

  8. Semantic Web Service Discovery • Example search template

  9. Semantic Web Service Discovery • Example candidate service

  10. Semantic Web Service Discovery
  • Similarity definitions (figure)
    • Compare a desired service name with the names of several candidate services and keep the best match
    • Compare a desired operation with all the operations of a candidate service and keep the best match

  11. Service Composition
  • How to put services together to achieve the desired goal
    • Same old problem
    • Design patterns have been shown to be effective in software design
    • Many industrial efforts on SOA design patterns
  • What is a pattern?
    • "A solution to a problem in a context"?
    • Each pattern describes a problem which occurs over and over again ... and then describes the core of the solution to that problem, in such a way that you can use this solution over and over again
  • Research
    • Pattern-based research considers how to specify patterns, i.e., how to specify the problem, solution, and effects
    • The semantic Web service composition community considers AI planning techniques for composition reasoning

  12. AI Planning for Service Composition
  • Planning action: move(x,y)
    • Pre-condition: clear(x) and clear(y)
    • Effect: on(x,y) and clear(x)
    • Delete effects: on(x,?), clear(y)
  • clear(table) always holds
    • Does not conflict with on(x,table)
  • Blocks-world example (figure; see the planner sketch below)
    • Initial state: on(c,table), on(b,table), on(a,b), clear(a), clear(c)
    • Goal state: on(a,table), on(c,a), on(b,c), clear(b)
    • Composition (plan): move(a,table), move(c,a), move(b,c)
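A minimal forward-search, STRIPS-style planner for the blocks example on this slide. It is a sketch under simplifying assumptions (breadth-first search over sets of facts, hard-coded blocks); real composition planners are far more sophisticated.

# Minimal forward-search STRIPS-style planner for the blocks example
# (illustrative sketch only).
from itertools import permutations

BLOCKS = ["a", "b", "c"]
PLACES = BLOCKS + ["table"]

def moves(state):
    """Yield applicable move(x, y) actions and their successor states."""
    for x, y in permutations(PLACES, 2):
        if x == "table":
            continue
        # Pre-condition: clear(x) and clear(y); clear(table) always holds.
        if ("clear", x) not in state or (y != "table" and ("clear", y) not in state):
            continue
        new = set(state)
        # Delete effects: on(x, ?) and clear(y)
        new -= {f for f in new if f[0] == "on" and f[1] == x}
        new.discard(("clear", y))
        # Effects: on(x, y); whatever was under x becomes clear.
        old_support = next(f[2] for f in state if f[0] == "on" and f[1] == x)
        if old_support != "table":
            new.add(("clear", old_support))
        new |= {("on", x, y), ("clear", x)}
        yield ("move", x, y), frozenset(new)

def plan(initial, goal, depth=6):
    """Breadth-first search for a sequence of moves that reaches the goal."""
    frontier, seen = [(frozenset(initial), [])], set()
    while frontier:
        state, path = frontier.pop(0)
        if goal <= state:
            return path
        if state in seen or len(path) >= depth:
            continue
        seen.add(state)
        for action, nxt in moves(state):
            frontier.append((nxt, path + [action]))
    return None

initial = {("on", "c", "table"), ("on", "b", "table"), ("on", "a", "b"),
           ("clear", "a"), ("clear", "c")}
goal = {("on", "a", "table"), ("on", "c", "a"), ("on", "b", "c"), ("clear", "b")}
print(plan(initial, goal))  # e.g. [('move','a','table'), ('move','c','a'), ('move','b','c')]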

  13. AI Planning for Service Composition
  • (Figure: a search tree that expands the initial state on(c,table), on(b,table), on(a,b), clear(a), clear(c), clear(table) by applying move(a,c), move(a,table), move(c,a), ..., producing successor states)
  • Planning is different from other search algorithms, which are generally based on a quantitative measure
  • A planner involves state computation and maintenance

  14. AI Planning for Service Composition
  • Map the semantic web to the planning domain
  • Definitions for web services
    • Syntactic definition: I/O parameters
    • Semantic definition: pre-conditions and effects
    • Supported in OWL-S and WSMO
  • Map services to actions in the planning domain
    • Pre-conditions/effects of the services become the pre-conditions/effects of the actions
    • I/O definitions are translated to the pre-conditions/effects
  • Map the problem to the planning domain
    • Define the goal for the problem
    • Define the initial facts

  15. AI Planning for Service Composition
  • Limitations of traditional planners for service composition
    • Atomic actions with deterministic effects; only able to generate sequential plans
      • Conditional, iterative planning: construct a plan with branches, taking all possible nondeterministic effects and contingencies into account
    • Complete knowledge of the world, full observability
      • Conformant planning: find a plan which works in any initial situation, even with incomplete knowledge
      • Contingency planning: consider all possible nondeterministic effects, or replan when an unexpected situation occurs

  16. AI Planning for Service Composition
  • General limitations for service composition
    • Pre-conditions and effects, initial states, and goals are mostly simple conjunctions of propositions
      • Can real-world web services be easily specified based on these?
      • This has been a core problem in SE for 20+ years!!! Will it work now?
    • Scaling issues
      • There may be thousands of services, each with multiple ports
      • Even worse in cyber-physical systems: a lot of devices with similar functionalities
      • Solutions
        • Hierarchical search: first use keyword-based search to filter out unlikely actions, then use the planner to explore the possible actions
        • Service ontology: categorize services and specify service relations using an ontology

  17. AI Planning for Service Composition
  • General limitations for service composition
    • Assume a static and finite set of actions
      • In SE, a problem can be decomposed and the corresponding components can then be found
      • Research work is trying to handle partial planning with missing actions
    • Interaction with users
      • The planner had better be more mixed-initiative
    • Knowledge engineering issues
      • Efficiently and effectively interacting with XML-based information
      • This should be the simplest problem among the many issues

  18. QoS in Service Composition
  • QoS (quality of service)
    • Nonfunctional properties to be satisfied
    • E.g., availability, performance, price, reliability
  • Service composition with QoS considerations: find the best services to meet user QoS requirements (see the selection sketch below)
    • Need to specify client QoS requirements
      • First, need to define what QoS is (the properties of concern)
    • Need to know the QoS properties of the services
      • For a single service, QoS properties can be measured
      • For a composite service, how to derive the properties of the composed service?
      • For some properties, property aggregation can be very difficult
      • Also need to understand the interaction behaviors among services
    • Decision making: which services to select?
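A toy sketch of QoS-driven selection of a single service: hard constraints filter the candidates, then a weighted score picks the best. The attribute names, example numbers, weights, and scoring function are all illustrative assumptions.

# Toy sketch of QoS-driven single-service selection (all values are invented).
candidates = [
    {"name": "S1", "availability": 0.999, "response_time": 0.8, "price": 0.05},
    {"name": "S2", "availability": 0.990, "response_time": 0.3, "price": 0.02},
    {"name": "S3", "availability": 0.950, "response_time": 0.2, "price": 0.01},
]

requirements = {"availability": 0.98, "response_time": 1.0}   # hard constraints
weights = {"availability": 0.5, "response_time": 0.3, "price": 0.2}

def feasible(svc):
    # Client QoS requirements as hard constraints
    return (svc["availability"] >= requirements["availability"]
            and svc["response_time"] <= requirements["response_time"])

def score(svc):
    # Higher is better: reward availability, penalize time and price.
    return (weights["availability"] * svc["availability"]
            - weights["response_time"] * svc["response_time"]
            - weights["price"] * svc["price"])

best = max(filter(feasible, candidates), key=score)
print(best["name"])   # "S2" under these example numbers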

  19. QoS in Service Composition
  • Need to have
    • A formal process to do this systematically, from QoS specification to negotiation, to finalizing the selection
    • An agreement between the involved entities to ensure that the negotiated QoS terms are exercised → Service Level Agreement (SLA)
  • WS-Agreement
    • Provides the specification standards for SLAs between the client and the service providers
    • Dynamically established and dynamically managed QoS
    • Uses a QoS ontology for QoS specification, negotiation, and management

  20. WS-Agreement
  • Three layers (figure): Negotiation Layer, Agreement Layer, Service Layer
    • Factory: creates the instance
    • (The figure shows a Negotiation factory and Negotiator instance and an Agreement factory and Manager instance on the provider side, each created via create(), exposing operations such as terminate(limits), negotiate(...), and inspect(query), plus service description elements for Terms, Status, and Related Agreements; the Service Layer holds the application instance, e.g., foo(), invoked by the Consumer under a Policy.)

  21. WS-Agreement
  • Negotiation layer
    • Provides a Web service-based generic negotiation capability
    • Newly added (the original specification had only two layers)
  • Negotiation state transitions (figure)

  22. WS-Agreement
  • Agreement structure (figure): Name, Context, Terms (Service Terms and Guarantee Terms)
  • Context
    • Agreement initiator, responder, expiration time, etc.
  • Service Terms
    • Identify the specific services to be provided
  • Guarantee Terms
    • The service levels that the parties are agreeing upon
    • Can be used for monitoring

  23. WS-Agreement
  • OWL ontology for guarantee terms (figure)
    • A GuaranteeTerm hasScope, e.g., an operation of a service
    • A guarantee term may have a collection of ServiceLevelObjectives (hasObjective), e.g., responseTime < 2 seconds
    • A guarantee term may have a QualifyingCondition (hasCondition) for the SLOs to hold, e.g., numRequests < 100
    • There may be BusinessValues (hasBusinessValue) associated with each guarantee term; business values include importance, confidence, penalty, and reward, e.g., penalty 5 USD
    • Supporting concepts in the ontology: Predicate, Parameter, Value, ValueUnit, ValueExpression, AssessmentInterval (TimeInterval or Count), Penalty, Reward, Importance

  24. WS-Agreement – Agreement Schema
  • WS-Agreement lacks formalism for requirement specification
  • This can make the negotiation and selection process difficult
  • SWAPS provides these specifics

  25. Agreement Reasoning
  • Semantic WS-Agreement Partner Selection (SWAPS)
  • WS-Agreement
  • Temporal concepts: time.owl
    • OWL version of time (http://www.isi.edu/~pan/damltime/time.owl)
    • Example concepts: seconds, dayOfWeek, ends
  • QoS ontology
    • E.g., Ont-Qos (IBM)
    • E.g., the QoS ontology in METEOR-S
    • Example concepts: responseTime, failurePerDay
  • Domain ontology
    • Represents the domain knowledge
    • Semantics of predicates and rules, such as <, =, etc.
  • User-defined rules
    • Allow users to customize matchmaking declaratively

  26. Example QoS Ontology

  27. Example Domain Rules
  • Consumer requirement: availability is greater than 95%
  • Provider:
    • Mean time to recover (MTTR) = 5 minutes
    • Mean time between failures (MTBF) = 15 hours
  • Domain rule: Availability = MTBF / (MTBF + MTTR)
  • Reasoning: availability of the provider ≈ 99.4% (computed below)
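The domain rule on this slide, worked out with the provider's figures:

# Applying the domain rule Availability = MTBF / (MTBF + MTTR)
MTTR = 5 / 60          # mean time to recover, in hours
MTBF = 15.0            # mean time between failures, in hours

availability = MTBF / (MTBF + MTTR)
print(f"{availability:.3%}")   # ~99.448%, i.e. greater than 95%, so the requirement is met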

  28. Agreement Reasoning
  • An alternative alt1 is a suitable match for alt2 if (see the sketch below):
    • (∀Gi) such that Gi ∈ alt1 ∧ requirement(alt1, Gi), (∃Gj) such that Gj ∈ alt2 ∧ capability(alt2, Gj) ∧ scope(Gi) = scope(Gj) ∧ obligation(Gi) = obligation(Gj) ∧ satisfies(Gj, Gi)
  • G: a term in the agreement
  • requirement(alt, G): true if G is a requirement of alt
  • capability(alt, G): true if G is a capability of alt
  • scope(G): the scope (the service operation) of G
  • obligation(G): the obligated party of G
  • satisfies(Gj, Gi): true if the SLO of Gj is equivalent to or stronger than the SLO of Gi
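A direct Python transcription of the matching rule above over a toy term representation. The term structure and the satisfies() check (upper bounds on the same metric) are simplified assumptions; SWAPS would use ontology and domain-rule reasoning instead.

# Sketch of the matching rule: every requirement term of alt1 must be covered
# by some capability term of alt2 with the same scope and obligated party and
# an SLO that is at least as strong.
def satisfies(gj, gi):
    """True if gj's SLO is equivalent to or stronger than gi's SLO.
    Here SLOs are modeled as upper bounds on the same metric."""
    return gj["metric"] == gi["metric"] and gj["bound"] <= gi["bound"]

def suitable_match(alt1, alt2):
    return all(
        any(gj["scope"] == gi["scope"]
            and gj["obligation"] == gi["obligation"]
            and satisfies(gj, gi)
            for gj in alt2["capabilities"])
        for gi in alt1["requirements"]
    )

# Usage sketch with invented terms
consumer = {"requirements": [
    {"scope": "getQuote", "obligation": "provider", "metric": "responseTime", "bound": 14.0},
]}
provider = {"capabilities": [
    {"scope": "getQuote", "obligation": "provider", "metric": "responseTime", "bound": 9.0},
]}
print(suitable_match(consumer, provider))   # True: 9 s is stronger than 14 s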

  29. Agreement Reasoning
  • Example (figure): matching a consumer's guarantee terms against two providers
    • Consumer requires (provider-obligated): responseTime < 14 s (QC: day of week = weekday; penalty: 15 USD) and failurePerWeek < 10
    • Provider1 offers: 99% of responseTimes < 14 s (isEquivalent to the response-time requirement), and failurePerWeek < 7 with penalty 10 USD (isStronger than failurePerWeek < 10)
    • Provider2 offers: transmitTime < 4 s (QC: maxNumUsers < 1000; penalty: 1 USD), processTime < 5 s (QC: numRequests < 500; penalty: 1 USD), and failurePerWeek < 7 with penalty 2 USD

  30. Agreement Reasoning
  • Example continued (figure): applying the domain-specific rule responseTime = processTime + transmitTime
    • Provider2's processTime < 5 s and transmitTime < 4 s combine into responseTime < 9 s (QC: maxNumUsers < 1000, numRequests < 500; penalty: 1 USD), which isStronger than the consumer's responseTime < 14 s
    • Provider2's failurePerWeek < 7 (penalty: 2 USD) isStronger than the consumer's failurePerWeek < 10

  31. Agreement Reasoning
  • Problem: there may be no matching providers
    • The specification can be optimization based
      • E.g., minimize responseTime → choose the provider that yields the best fit
    • How to combine different terms?
      • Some non-exactly-matching terms, each with a satisfaction level
      • Users define a weighted function to combine them
      • Multi-objective optimization
    • When there are a large number of choices
      • Use some filtering mechanisms first
      • E.g., consider one term first

  32. Agreement Reasoning
  • Problem: the above only considers selection of a single service
    • Requirements are frequently specified as end-to-end requirements
      • E.g., the response time of the composite service is < 1 sec
      • But the response times are specified only for the atomic services
    • How to aggregate the properties?
      • For some QoS aspects, this can be very difficult
    • When there are a large number of choices
      • Each customer-required service may have many providers
      • The number of combinations can be extensive → use efficient search algorithms to find the best match
      • Linear programming, genetic algorithms, etc. have been used

  33. QoS Property Aggregation
  • Time
    • Generally the simplest property to handle
    • Sum for sequential composition, max for parallel composition, min for choices (see the sketch below)
    • Is this accurate? Some works consider multi-tier queuing networks for time estimation
    • How about communication cost?
      • Keeping a table of pair-wise communication costs takes O(N^2) space
      • Where to maintain the cost?
      • Infeasible in case each service may be able to compose with many other services
  • Cost
    • Sum for sequential/parallel composition, min for choices
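A short sketch of the aggregation rules stated on this slide (including treating choices as min, as the slide does), applied over a small composition tree. The tree representation and the numbers are illustrative assumptions.

# Sketch of the slide's aggregation rules over a composition tree of
# sequences, parallel branches, and choices (structure and numbers invented).
def aggregate_time(node):
    kind, children = node[0], node[1:]
    if kind == "service":
        return children[0]["time"]
    parts = [aggregate_time(c) for c in children]
    if kind == "seq":                 # sequential: sum
        return sum(parts)
    if kind == "par":                 # parallel: max
        return max(parts)
    if kind == "choice":              # choice: min (as stated on the slide)
        return min(parts)
    raise ValueError(kind)

def aggregate_cost(node):
    kind, children = node[0], node[1:]
    if kind == "service":
        return children[0]["cost"]
    parts = [aggregate_cost(c) for c in children]
    if kind in ("seq", "par"):        # sequential/parallel: sum
        return sum(parts)
    if kind == "choice":              # choice: min
        return min(parts)
    raise ValueError(kind)

composition = ("seq",
               ("service", {"time": 0.2, "cost": 1.0}),
               ("par",
                ("service", {"time": 0.5, "cost": 2.0}),
                ("service", {"time": 0.3, "cost": 0.5})),
               ("choice",
                ("service", {"time": 0.4, "cost": 3.0}),
                ("service", {"time": 0.9, "cost": 1.0})))

print(aggregate_time(composition))   # 0.2 + max(0.5, 0.3) + min(0.4, 0.9) ≈ 1.1
print(aggregate_cost(composition))   # 1.0 + (2.0 + 0.5) + min(3.0, 1.0) = 4.5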

  34. QoS Property Aggregation
  • Power consumption
    • Sum for sequential composition, min for choices
    • Is this accurate? How about cumulative heat, etc.?
    • Complicated aggregation for parallel composition
  • Availability
    • MTTF / (MTTF + MTTR)
      • Mean time to failure (MTTF)
      • Mean time to repair (MTTR)
    • How to aggregate the availability of the software, hardware, and communication?
    • Large versus small MTTF & MTTR
      • Pacemaker: MTTF = 1 year, MTTR = 5 min, A = 0.99999
      • Pacemaker: MTTF = 3 hours, MTTR = 0.01 sec, A = 0.99999
      • Need to consider request frequency and failure consequences

  35. QoS Property Aggregation
  • Reliability
    • Software reliability
      • A: reliability = 1, B: reliability = 0.99; A+B: reliability = ?
        • A always invokes B's incorrect path
        • A never invokes B's incorrect path
      • Need to consider the operational profile for software reliability estimation
        • Determine the probability of each type of input, mostly partitioned based on a group of program paths
        • Determine the reliability according to the probability of failure for each input type (testing results) and the probability of the input
      • So aggregation needs to consider mapping of the operational profiles: from A's operational profile, estimate B's operational profile (see the sketch below)
    • System reliability
      • Combining software and hardware reliability: how?
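A toy illustration of why A+B's reliability depends on how A's operational profile maps onto B's input classes. The input classes, failure probabilities, and the assumption that A itself never fails are all invented for illustration.

# Toy illustration: the reliability of A+B depends on which of B's input
# classes A actually exercises (all numbers are invented).

# B's failure probability per input class (e.g., from testing); under B's own
# profile B is around 0.99 reliable, but that says little about A+B.
b_failure_prob = {"common": 0.0, "edge-case": 0.10}

def composed_reliability(a_profile):
    """Reliability of A+B given the probability that A sends B each input
    class (A itself is assumed perfectly reliable, as on the slide)."""
    return 1.0 - sum(p * b_failure_prob[cls] for cls, p in a_profile.items())

# A never drives B into its failing class  -> composed reliability 1.0
print(composed_reliability({"common": 1.0, "edge-case": 0.0}))
# A always drives B into its failing class -> composed reliability 0.9
print(composed_reliability({"common": 0.0, "edge-case": 1.0}))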

  36. QoS Property Aggregation
  • Quality
    • Difficult to measure quality even for a single service
    • Use fuzzy logic to express quality
    • How to compose the measures?
  • Security
    • Focus is on policy-based composition, not quantitative measures
      • No established method to quantitatively measure security yet
      • Too many factors: policy issues, attacks, etc.
    • Generally only the policies are considered, but not the attacks
      • Even for policies, the focus is mostly on the policies for individual services, not the information flow in the composite service
      • Attacks may compromise the system, making policies useless

  37. QoS Property Aggregation
  • Simulation approach: obtain the aggregate QoS properties based on simulation
    • Time: rapidly compose the system and obtain the cumulative time
    • Reliability: run test cases with the actual composite service
    • If there are many different combinations, simulating each case is infeasible

  38. Search for a Composition Solution
  • Genetic algorithm
    • Selection, e.g., a roulette-wheel selector (see the sketch below)
      • Probability of selecting Si = E(Si) / Σk E(Sk)
    • Mutation, crossover
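A small sketch of the roulette-wheel selector implementing the selection probability above. The population and the fitness values E() are arbitrary placeholders; mutation and crossover are omitted.

# Roulette-wheel selection: P(select Si) = E(Si) / sum_k E(Sk)
# (fitness values here are arbitrary placeholders).
import random

def roulette_select(population, fitness):
    total = sum(fitness(s) for s in population)
    r = random.uniform(0.0, total)
    acc = 0.0
    for s in population:
        acc += fitness(s)
        if acc >= r:
            return s
    return population[-1]     # guard against floating-point round-off

population = ["S1", "S2", "S3"]
fitness = {"S1": 1.0, "S2": 3.0, "S3": 6.0}.get
picks = [roulette_select(population, fitness) for _ in range(10000)]
print({s: picks.count(s) / len(picks) for s in population})  # roughly 0.1 / 0.3 / 0.6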

  39. Search for a Composition Solution
  • Multi-objective problem and Pareto-optimal solutions
    • Separate considerations of the objectives
    • Comparison problem (which is better?)
    • Dominating solution: one is definitely better than the other
    • Pareto-optimal solutions: the non-dominated solutions (see the sketch below)
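A minimal sketch of the dominance check and Pareto-front filtering, assuming all objectives are to be minimized; the objective vectors are placeholders.

# Dominance check and Pareto-front filtering (all objectives minimized;
# objective vectors are invented).
def dominates(a, b):
    """a dominates b: a is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep the non-dominated solutions."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# (response_time, price) of candidate compositions
candidates = [(1.0, 5.0), (2.0, 2.0), (1.5, 4.0), (2.5, 2.5), (1.0, 6.0)]
print(pareto_front(candidates))   # [(1.0, 5.0), (2.0, 2.0), (1.5, 4.0)]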

  40. Search for a Composition Solution
  • Problem: services may be QoS reconfigurable
    • E.g., a security service may offer many different security configurations, providing security/time tradeoffs
    • E.g., an AI search algorithm may offer different thresholds, providing quality/time tradeoffs
  • Need to consider not only which provider, but also the specific configurations of the provider services
    • Greatly increases the search space
  • Consider the paper: QoS-Reconfigurable Web Services and Compositions for High-Assurance Systems

  41. Search for a Composition Solution • Example: Media security gateway for VoIP

  42. Search for a Composition Solution
  • Example:
    • Reconfigurable SRTP service
      • Implements secure, real-time audio packet transmission
      • Configurable parameters: different encryption key sizes, encryption algorithms, authentication key sizes, and authentication algorithms
    • Reconfigurable AMR codec service
      • Implements the AMR codec standards
      • Configurable parameters: different encoding (compression) rates
      • Tradeoffs between voice quality and communication/execution times
    • Reconfigurable packetization service
      • Puts audio frames into packets and adds error correction and retransmission control codes
      • Configurable parameters: different error correction codes, retransmission windows, etc.
      • Tradeoffs between the packet loss rate (which impacts voice quality) and the communication/execution times

  43. Search for a Composition Solution
  • Compositional approach
    • First find the Pareto-optimal solutions within the services, then compose the Pareto-optimal solutions
    • Compose the Pareto-optimal solutions as the initial population if the Pareto-optimal solutions of the components do not guarantee Pareto-optimal solutions of the composed system
    • Otherwise, compose only Pareto-optimal solutions as the final solutions
    • Much more efficient due to the greatly reduced search space

  44. Search for a Composition Solution
  • Problem: independent admission control in each domain
  • (Figure: QoS-driven service composition across multiple service domains; an example domain contains a life detection service and a terrain search service realized by a swarm of robots; grounding selections map the composition onto concrete services, VMs, platforms, and cores, and the QoS properties of the composed system are derived.)
  • Execution environment in each domain:
    • Each service runs on a VM
    • How to allocate cores to VMs (or other resources to services)?
    • Manage resources and power such that the QoS goals of all clients are best satisfied and power consumption is minimized (e.g., via dynamic voltage scaling)
  • Need to perform admission control: to ensure that the admitted workloads will have the agreed-upon levels of QoS and will not violate power constraints
  • Correspondingly, service composition should consider:
    • The execution environment (e.g., admission control and energy awareness)
    • Dynamic adaptation (e.g., selecting configurable services rather than those that provide the most satisfactory QoS for the time being)
    • System QoS prediction from the QoS of the individual constituent services
    • All of the above may have many potential candidates to consider → efficiency concerns

  45. Search for a Composition Solution
  • Major issues:
    • Each domain requires admission control in order to achieve the guaranteed service agreement
    • But at service selection time, the execution environment (whether the service can be admitted) is not specifically considered
    • If we do consider admission at selection time:
      • There may be many matching services
      • Interacting with each matching service to find out the feasibility can be overkill
      • When considering composition, this can be even worse

  46. Search for a Composition Solution
  • Reference: Service Composition for Real-Time Assurance
  • Multi-level agreement approach
    • First: find a set of top-choice compositions based on published provider information, without considering admission
      • When deciding the top choices, it is better to ensure that there is no service common to all choices; otherwise, if admission for that service fails, no composition is valid
    • Then proceed to check admission and proceed with the agreement
      • When to proceed with the agreement? In a composite service, if one of the selected atomic services cannot admit the customer, then all the agreements should be voided
      • If a provider agrees to provide a service and reserves the resources for customer x, and because of this it denies another customer y, what can be done when customer x revokes all the agreements?
      • Need a pre-agreement phase

  47. Search for a Composition Solution
  • Multi-level agreement approach
    • At the first level, use simple QoS estimation methods (see the sketch below)
      • There may be a large number of compositions to be estimated
      • Time: simply add the times together
      • Reliability: simply multiply the reliabilities together
      • Security: only evaluate the important policies
    • At the higher levels, predict QoS with more sophisticated methods
      • Time: consider queuing analysis
      • Reliability: consider sophisticated mapping of operational profiles
    • In the final stage
      • Simulation to validate the correctness of the composition (and the estimation)
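A sketch of the first-level screening described above: cheap estimates (sum the times, multiply the reliabilities) shortlist candidate compositions before the more expensive queuing and operational-profile analysis. The candidate plans, service figures, and thresholds are illustrative assumptions.

# First-level screening: coarse time/reliability estimates to shortlist
# candidate compositions (all data invented).
from math import prod

def coarse_estimate(composition):
    """composition: list of the atomic services chosen for each step."""
    total_time = sum(s["time"] for s in composition)
    reliability = prod(s["reliability"] for s in composition)
    return total_time, reliability

candidates = {
    "plan A": [{"time": 0.3, "reliability": 0.999}, {"time": 0.5, "reliability": 0.99}],
    "plan B": [{"time": 0.2, "reliability": 0.97},  {"time": 0.4, "reliability": 0.99}],
    "plan C": [{"time": 0.9, "reliability": 0.999}, {"time": 0.6, "reliability": 0.999}],
}

# Keep the candidates that pass a 1-second / 0.98-reliability first cut; the
# surviving shortlist then goes to the higher-level analysis and agreement steps.
shortlist = {name: coarse_estimate(c) for name, c in candidates.items()
             if coarse_estimate(c)[0] <= 1.0 and coarse_estimate(c)[1] >= 0.98}
print(shortlist)   # plan A passes; plan B fails reliability; plan C fails time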

  48. QoS in Service Composition
  • Decision algorithms
    • Have been the focus of many papers
    • But they are not the major difficulty in QoS-driven service composition
  • QoS property aggregation
    • Highly challenging
  • Admission issues
    • No perfect solution
    • A multi-level approach would help
    • Not always going for the best choice would help diversify the selections of different customers
