
Model Driven Techniques for Evaluating QoS of Middleware Configurations

Arvind S. Krishna, Emre Turkay, Andy Gokhale, & Douglas C. Schmidt
Institute for Software Integrated Systems (ISIS), Vanderbilt University, Nashville, TN 37203
Real-time Application Symposium (RTAS 2005)


  1. Model Driven Techniques for Evaluating QoS of Middleware Configurations
  Arvind S. Krishna, Emre Turkay, Andy Gokhale, & Douglas C. Schmidt
  Institute for Software Integrated Systems (ISIS), Vanderbilt University, Nashville, TN 37203
  Real-time Application Symposium (RTAS 2005), San Francisco, California

  2. Presentation Summary
  • Component middleware technologies
    • Focus developers on business logic
    • Automate the "plumbing" code needed to configure & deploy middleware
    • Components encapsulate business logic
    • Provisioning & deployment remain difficult, e.g., the error-prone task of handcrafting XML descriptors
  • Model Driven Development (MDD) generative technologies
    • Focus is on:
      • Modeling – system composition techniques
      • Validating – "correct-by-construction" models
      • Generating – deployment & configuration information for multiple layers of middleware
    • Support configuring, provisioning, & deploying quality-of-service (QoS)-enabled middleware
  This presentation addresses key configuration & QoS evaluation challenges of middleware for DRE applications

  3. Motivating DRE Application: Robot Assembly Application
  • Human Machine Interface (HMI) component – human accepts/rejects each watch
  • Management Work Instructions (MWI) component – decides what action to perform on a watch, e.g., setting the appropriate time
  • Watch Setting Manager (WSM) component – executes the action on every watch
  • Pallet Conveyor Manager (PCM) component – watch assembly line that moves watches from source to destination
  • Robot Manager component – robotic arm that moves the watches
  • Goal
    • Increase the number of items processed by minimizing end-to-end latency

  4. Robot Assembly Challenges (1/2)
  Configuration Challenges
  • Map component-level features & requirements to middleware configurations
    • e.g., the WSM component interacts with both the HMI & Pallet Conveyor Manager components
  • Configuring component properties
  • Configuring package properties
  • Configuring the underlying middleware, which exposes hooks for:
    • the request demuxing strategy
    • the marshaling strategy
    • the event demuxing strategy
    • the concurrency strategy
    • the connection management strategy
    • the underlying transport strategy

  5. Robot Assembly Challenges (2/2)
  Configuration Evaluation Challenges
  [Figure: critical flow path through the Management Work Instructions, Pallet Conveyor Manager, Human Machine Interface, Watch Setting Manager, & Robot Manager components]
  • How do we ensure the chosen middleware configurations serve the overall goal of the system, i.e., minimizing its end-to-end latency?
  • Which configuration of the middleware hosting the HMI & WSM components leads to the best end-to-end latency?

  6. Research Challenges
  • Ensuring syntactically & semantically valid middleware configurations
  • Understanding the consequences of deployment decisions on overall QoS
  • Alleviating accidental complexities in evaluating/benchmarking QoS
  www.dre.vanderbilt.edu/cosmic

  7. Resolving Configuration Challenges (1/2)
  Context
  • Different middleware implementations provide different mechanisms for configuring the middleware
  • CIAO provides service configuration options to tune middleware performance (www.dre.vanderbilt.edu/CIAO.html)
  Problem
  • Hand-writing these configurations is error prone, since developers:
    • need to know the syntax
    • need to remember the names of the strategies
    • need to know which strategies are compatible
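
  For concreteness, TAO/CIAO reads these options from a service configurator (svc.conf) file. The fragment below is our illustration of such a hand-written file; the factory & option names are real TAO options, but the values chosen are examples rather than configurations from the presentation:

    # Hand-written TAO/CIAO svc.conf (illustrative values)
    static Resource_Factory "-ORBReactorType select_mt"
    static Server_Strategy_Factory "-ORBConcurrency thread-per-connection"
    static Client_Strategy_Factory "-ORBClientConnectionHandler RW -ORBTransportMuxStrategy EXCLUSIVE"

  A typo in any of these strings, a misremembered strategy name, or an incompatible pairing surfaces no earlier than when the file is parsed at run time, which is precisely the error-proneness OCML targets.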

  8. Resolving Configuration Challenges (2/2)
  Solution
  • Developed a domain-specific modeling language for TAO/CIAO called the Options Configuration Modeling Language (OCML)
  • OCML is used by:
    • middleware developers to design the configuration model
    • application developers to configure the middleware for a specific application
  • The OCML metamodel is platform-independent; OCML models are platform-specific
  • Generates a wizard for setting configuration options, with documentation for each option
  • OCML ensures the syntactic & semantic validity of middleware configurations, detecting errors at model construction time
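
  The semantic checks OCML enforces are constraints among option values. As a hedged sketch (our C++ rendering, not OCML-generated code), one well-known TAO constraint, that the RW client connection handler only works with an exclusive transport multiplexing strategy, could be expressed as:

    #include <stdexcept>
    #include <string>

    // Sketch of a semantic-validity rule of the kind OCML checks at
    // model construction time (illustrative, not actual OCML output).
    struct ClientStrategyModel
    {
      std::string connection_handler;  // "MT", "ST", or "RW"
      std::string transport_mux;       // "EXCLUSIVE" or "MUXED"
    };

    void validate (const ClientStrategyModel &m)
    {
      // In TAO, the RW client connection handler requires an EXCLUSIVE
      // transport multiplexing strategy.
      if (m.connection_handler == "RW" && m.transport_mux != "EXCLUSIVE")
        throw std::invalid_argument (
            "-ORBClientConnectionHandler RW requires "
            "-ORBTransportMuxStrategy EXCLUSIVE");
    }

  Because the rule lives in the model, the error surfaces while the configuration is being composed rather than when the application is launched.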

  9. Resolving Evaluation Challenges (1/3)
  Context
  • Component integrators must make appropriate deployment decisions, including identifying the entities (e.g., CPUs) of the target environment where the packages will be deployed
  [Figure: deployment of the Pallet Conveyor Manager, Human Machine Interface, Watch Setting Manager, & Robot Manager components onto target nodes]
  Problem
  • How do we ensure a particular deployment configuration meets QoS requirements?
  • How do we simulate load & background load for benchmarking?
  • How do we measure & monitor QoS for a given deployment?

  10. Resolving Evaluation Challenges (2/3)
  Solution
  • Provide a model-driven tool suite, the Benchmark Generation Modeling Language (BGML), to empirically evaluate & refine configurations to maximize application QoS
  BGML workflow
  1. The end-user composes the scenario in the BGML modeling paradigm
  2. QoS properties, such as latency, throughput, or jitter, are associated with this scenario
  3. BGML synthesizes the appropriate test code to run the experiment & measure the QoS
  4. Metrics are fed back into the models to verify at design time whether the system meets its QoS requirements
  • The tool synthesizes all the scaffolding code required to set up, run, & tear down the experiment; using BGML it is possible to synthesize:
    • benchmarking code
    • component implementation code
    • build files & component IDL files

  11. Resolving Evaluation Challenges (3/3)

    template <typename T> void
    Benchmark_AcceptWorkOrderResponse<T>::svc (void)
    {
      ACE_Sample_History history (5000);
      ACE_hrtime_t test_start = ACE_OS::gethrtime ();
      ACE_UINT32 gsf = ACE_High_Res_Timer::global_scale_factor ();

      for (int i = 0; i < 5000; i++)  // 'int' added; the slide omits the declaration
      {
        // Time-stamp the two-way invocation on the remote component
        ACE_hrtime_t start = ACE_OS::gethrtime ();
        (void) this->remote_ref_->AcceptWorkOrderResponse (arg0, arg1);
        ACE_CHECK;
        ACE_hrtime_t now = ACE_OS::gethrtime ();

        // Record the per-call latency; test_start & gsf are used by the
        // (elided) reporting code that scales samples to microseconds
        history.sample (now - start);
      }
    }

  • BGML allows composition of the target interaction scenario & auto-generates the benchmarking code above
  • Each configuration option can then be tested to identify the configuration that maximizes QoS for the scenario
  • These empirically refined configurations can be reused across applications in similar or identical application domains
  • Such configurations can be viewed as Configuration & Customization (C&C) patterns
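
  The slide elides the reporting step. Following the usual ACE/TAO performance-test idiom (our sketch, not the generated code on the slide), the recorded history would be reduced to latency statistics & scaled to microseconds using the global scale factor captured above:

    // Hedged sketch of the elided reporting code.
    ACE_Basic_Stats stats;
    history.collect_basic_stats (stats);  // min/max/mean over the 5000 samples
    stats.dump_results (ACE_TEXT ("AcceptWorkOrderResponse latency"), gsf);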

  12. Need for Tool Integration (MDD Process) (1/2)
  Context
  • The OCML tool resolves accidental complexity in configuring components
  • The BGML tool resolves accidental complexity in evaluating QoS
  Problem
  • Using each tool in isolation does not provide complete information:
    • OCML does not know about performance (OCML → correct configuration)
    • BGML does not know what the configuration is (BGML → measures critical flow path latency)

  13. Need for Tool Integration (MDD Process) (2/2)
  Solution → an MDD process leveraging PICML, OCML, & BGML
  • PICML → models the interaction scenario, deployment, & component configuration
  • OCML → models the middleware hosting the individual components
  • BGML → captures the evaluation concerns
  The process yields the candidate configuration(s) with the least latency
  • Apply the MDD process to the DRE application scenario to answer:
    • How does middleware configuration affect QoS?
    • How do deployment decisions affect QoS?

  14. MDD Process (1/3)
  Step 1: PICML tool
  • PICML is used to generate the deployment plan information, i.e., the mapping of components to virtual nodes, processes, & collocation groups
  Step 2: Middleware configuration
  • OCML models are associated with the implementation artifacts
  • OCML provides a wizard, with an option-selection view & a documentation pane, to configure each artifact
  • This configures the middleware that hosts the "executors", a.k.a. servants in CORBA 2.0
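
  The deployment plan itself is an XML descriptor following the OMG Deployment & Configuration (D&C) schema. The heavily abridged sketch below is ours, not actual PICML output; the element names follow the D&C descriptor format, & the node names are hypothetical:

    <!-- Abridged, illustrative D&C deployment plan -->
    <Deployment:DeploymentPlan xmlns:Deployment="http://www.omg.org/Deployment">
      <label>RobotAssembly</label>
      <instance>
        <name>WatchSettingManager</name>
        <node>Node1</node>  <!-- hypothetical component-to-node mapping -->
      </instance>
      <instance>
        <name>HumanMachineInterface</name>
        <node>Node2</node>
      </instance>
    </Deployment:DeploymentPlan>

  Re-generating this descriptor with a different component-to-node mapping is all that is needed to evaluate an alternative deployment (see slide 19).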

  15. MDD Process (2/3)
  Step 2 (continued) → choosing configurations
  • How best to configure the middleware hosting the HMI & WSM components to minimize end-to-end latency?
  • Component roles:
    • HMI (display) component – pure client
    • Watch Setting Manager component – "peer" role; does not need concurrency
  • For each component (e.g., the display), narrow down the selected configurations in the configuration space:
    • fixed part – determined a priori from the component's role
    • dynamic part – cannot be determined without testing
  Step 3 → capturing QoS concerns
  • Profile & generate: multiple work orders are exchanged between the Watch Setting Manager component & the human for acceptance/rejection
  • Timers measure the end-to-end critical path latency in the scenario
  • The same code can be used to evaluate different combinations of configurations, e.g., via a sweep like the sketch below
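
  As a hedged illustration of such a sweep (the concluding remarks note that the evaluation script was hand-written rather than generated, & this C++ rendering is ours), the dynamic options can be enumerated & one svc.conf emitted per combination:

    #include <fstream>
    #include <string>
    #include <vector>

    // Enumerate the "dynamic" part of the configuration space & emit one
    // svc.conf per combination for benchmarking.  The option names are
    // real TAO options; the candidate values are examples, not the
    // paper's 64-experiment configuration space.
    int main ()
    {
      const std::vector<std::string> concurrency =
        { "reactive", "thread-per-connection" };
      const std::vector<std::string> handler = { "MT", "RW" };

      int id = 0;
      for (const std::string &c : concurrency)
        for (const std::string &h : handler)
        {
          std::ofstream out ("svc.conf." + std::to_string (id++));
          out << "static Server_Strategy_Factory \"-ORBConcurrency "
              << c << "\"\n";
          out << "static Client_Strategy_Factory \"-ORBClientConnectionHandler "
              << h
              // RW handlers require an exclusive transport mux (see slide 8)
              << (h == "RW" ? " -ORBTransportMuxStrategy EXCLUSIVE" : "")
              << "\"\n";
        }
    }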

  16. MDD Process (3/3)
  Solution: workspace & glue generation
  • Create a workspace & projects to generate the build files for the scenario
  • To enact a scenario, this process automates generation of:
    • Deployment plan – XML deployment information
    • svc.conf – configuration for each component implementation
    • Benchmark code – source code for executing the benchmarks, including the load generator for the accept operation & the time-stamping of sends & receives
    • IDL & CIDL files
    • Build files – MPC files (www.ociweb.com)
  • The generated MPC workspace lists the projects holding these artifacts:

    workspace {
      RobotManager
      WatchSettingManager
      PalletteConveyorManager
      HumanMachineInterface
      ManagementWorkInstructions
    }

  17. Experimental Results / Highlights (1/3)
  Automation / code generation & experiment execution
  • In total we conducted 64 experiments for different combinations of the Human Machine Interface & Watch Setting Manager component configurations
  • The latency measures were tabulated to find the configuration that minimized latency
  • The corresponding end-to-end measures were also checked
  • Experiment execution was automated: scripts were used to set up & tear down the experiments

  18. Experimental Results / Highlights (2/3)
  Observations
  • Similar configurations affected QoS similarly
    • In both cases, we observed that the configuration (G1, H1, I2, J2) minimized latency the most
    • Both cases showed that option G is the most important configuration: the penalty for not setting G to G1 is ~4 µsec in BasicSP & ~60 µsec in RobotAssembly
    • The other options are not important, i.e., setting them or leaving them at their defaults leads to the same behavior
  • The figure visualizes the configuration space
    • circles represent points in the configuration space
    • edges represent the performance degradation incurred by moving from one point to another
  Defining operating regions enables setting the more important configurations while allowing flexibility in the others

  19. Experimental Results / Highlights (3/3)
  • How does the platform affect QoS? Provide feedback on the deployment plan, i.e., the component-to-node mappings
  • BasicSP scenario: tried the two node mappings shown in the table
  • Process:
    • No changes were required from the earlier experiment; capture the same end-to-end latency
    • Change the component-to-node mapping & re-generate the deployment plan
    • Observe & tabulate the latency changes
  • In real-time systems, component placement is decided a priori, with software tied to the hardware
  • During failure it is important to decide where to place components to ensure QoS; this process aids in making that decision

  20. Concluding Remarks
  • The MDD process provides a flexible model-based approach for evaluating the QoS of middleware configurations
    • Auto-generates most of the code required to run the experiments
    • OCML does not automatically generate the configuration space
    • The script for automatically evaluating different configurations was not generated
    • Feedback to the "planner" allows refinement of configurations during the testing phase
    • Identifying patterns in configurations allows mapping features directly onto middleware configurations (scoreboard → map features to configurations → identify configuration patterns → patterns database)
  • Future work:
    • EMULab ns-style script generation for easy simulation
    • Strategies for interfacing with higher-level performance-monitoring tools

  21. Downloading the Middleware & Tools
  • Beta & stable releases can be accessed from http://www.dre.vanderbilt.edu/Download.html
  • OCML & BGML are part of the CoSMIC MDD tool suite: http://www.dre.vanderbilt.edu/cosmic
