Presentation Transcript


1. Pollux & RACE R&D Challenges: Design Time
• Key design time challenges
  • Convert commander's intent, along with the static/dynamic environment, into QoS policies
  • Quantitatively evaluate & explore complex & dynamic QoS problem & solution spaces to evolve effective solutions
  • Assure QoS in the face of interactive and/or autonomous adaptation to a fluid environment
• Goal: Significantly ease the task of creating new QoS-enabled information management TSoS & integrating them with existing artifacts in new/larger contexts/constraints
• (Slide figure: an Artifact Generator and a Code Analysis Tool driven by a Configuration Specification; the code under analysis is a Java servlet-session fragment like the following)

```java
if (inactiveInterval != -1) {
    int thisInterval =
        (int) (System.currentTimeMillis() - lastAccessed) / 1000;
    if (thisInterval > inactiveInterval) {
        invalidate();
        ServerSessionManager ssm = ServerSessionManager.getManager();
        ssm.removeSession(this);
    }
}

private long lastAccessedTime = creationTime;

/**
 * Return the last time the client sent a request associated with this
 * session, as the number of milliseconds since midnight, January 1, 1970
 * GMT.  Actions that your application takes, such as getting or setting
 * a value associated with the session, do not affect the access time.
 */
public long getLastAccessedTime() {
    return (this.lastAccessedTime);
}

// Fragment of the corresponding setter, as shown on the slide:
// this.lastAccessedTime = time;
```

2. Pollux & RACE R&D Challenges: Run Time
• Key run time challenges
  • Convert commander's intent, along with static/dynamic environment, into QoS policies
  • Enforce integrated QoS policies at all layers (e.g., application, middleware, OS, transport, network) to support COIs within multiple domains
  • Manage resources in the face of intermittent communication connectivity
    • e.g., power, mission, environments, silence/chatter
  • Compensate for limited resources in tactical environments
    • e.g., bandwidth, compute cycles, primary/secondary storage
• Goal: Regulating & adapting to (dis)continuous changes in difficult runtime environments

3. Resource Allocation & Control Engine (RACE)
• Resource management framework atop CORBA Component Model (CCM) middleware (CIAO/DAnCE)
• Motivating applications:
  • Total Ship Computing Environment (TSCE)
    • ~1000 nodes, ~5000 applications
    • Task (re)distribution
    • Switching modes of operation
    • Adaptation to loss of resources & changing task priorities
  • NASA's Magnetospheric Multi-scale (MMS) mission
    • Spacecraft constellation
    • Adaptation to varying regions of interest (ROI) & modes of operation

4. RACE MDD Tools – Design Time Challenges
• Carry out commander's intent by
  • focusing on generic functionality in a Platform Independent Model (PIM)
  • using a transformation engine to generate a detailed Platform Specific Model (PSM)
• (Slide figure: a platform-independent real-time policies model transformed into a platform-specific (CCM) real-time policies model)

5. RACE MDD Tools – Design Time Challenges
• Carry out commander's intent by
  • focusing on generic functionality in a Platform Independent Model (PIM)
  • using a transformation engine to generate a detailed Platform Specific Model (PSM)
• Explore the problem & solution space by
  • easily modifying the visual model
  • passing generated artifacts to the Bogor model checker
  • getting all possible valid & invalid states

6. RACE MDD Tools – Design Time Challenges
• Carry out commander's intent by
  • focusing on generic functionality in a Platform Independent Model (PIM)
  • using a transformation engine to generate a detailed Platform Specific Model (PSM)
• Explore the problem & solution space by
  • easily modifying the visual model
  • passing generated artifacts to the Bogor model checker
  • getting all possible valid & invalid states
• Assure QoS by
  • performing safety, validity & behavioral checks
  • passing the set of valid states to the RACE middleware
• (Slide figure: a distributed application (components A, B, C across subsystems SS1–SS5) with a real-time configuration description fed into the Bogor Model Checker with RT extensions)

7. RACE Middleware – Run Time Challenges
• Carry out commander's intent by executing the deployment plan from the mission planner
• Enforce QoS policies (at the middleware level) with
  • multiple, pluggable algorithms for allocation & control
  • automation of deployment & configuration (D&C) with uniform interfaces
• Manage resources by monitoring & adapting component resource allocation
• Compensate for limited resources by
  • migrating/swapping components
  • adjusting application parameters (QoS settings)

8. Architectural Overview of RACE
• Input Adapter
  • Application metadata can be represented in various formats
    • XML descriptors
    • In-memory C++/IDL structures
  • Converts application metadata into the in-memory IDL structure used by RACE
• Orchestrator
  • Type & requirement of resources may vary for different applications
  • Resource utilization overhead may be associated with the allocation/control algorithms themselves
  • Examines application metadata & selects appropriate allocation & control algorithms based on application characteristics & resource availability

9. Architectural Overview of RACE
• Allocators
  • Implementations of resource allocation algorithms
    • Simple bin-packing algorithms
    • Resource-constraint partitioning bin-packing
  • Time & space overhead
• Controllers
  • Implementations of run-time resource management algorithms
    • EUCON – rate adaptation
    • FMUF – flexible maximum urgency first
  • Adapt system behavior in response to varying operating conditions
• Configurators
  • Configure middleware, operating system, and network parameters
    • Middleware threading model
    • Priority policy
    • OS priority
    • Network DiffServ priority
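
To make the flavor of the allocation algorithms concrete, the sketch below shows a first-fit-decreasing bin-packing heuristic in C++. It is a minimal illustration under assumed types (`Component`, `Node`, `allocate`); it is not RACE's actual Allocator interface, which is CCM/CORBA-based.

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// Illustration only: a first-fit-decreasing bin-packing allocator in the spirit
// of RACE's simple bin-packing Allocators.  Types and names are hypothetical.
struct Component { std::string name; double cpu_demand; };   // demand as a CPU fraction
struct Node      { std::string name; double cpu_capacity; double cpu_used; };

// Returns (component, node) placements, or an empty plan if some component
// cannot be placed without exceeding a node's capacity.
std::vector<std::pair<std::string, std::string>>
allocate(std::vector<Component> components, std::vector<Node>& nodes) {
  // First-fit decreasing: place the most demanding components first.
  std::sort(components.begin(), components.end(),
            [](const Component& a, const Component& b) {
              return a.cpu_demand > b.cpu_demand;
            });

  std::vector<std::pair<std::string, std::string>> plan;
  for (const Component& c : components) {
    bool placed = false;
    for (Node& n : nodes) {                    // first node with enough headroom
      if (n.cpu_used + c.cpu_demand <= n.cpu_capacity) {
        n.cpu_used += c.cpu_demand;
        plan.emplace_back(c.name, n.name);
        placed = true;
        break;
      }
    }
    if (!placed) return {};                    // infeasible under this heuristic
  }
  return plan;
}
```

Resource-constraint-partitioning variants extend the same idea by packing against several resource dimensions (e.g., CPU, memory, bandwidth) at once, trading extra time and space overhead for better placements.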

10. RACE's Monitoring and Control Framework
• Monitors
  • Resource utilization monitors
    • Measure CPU, memory, & network bandwidth utilization
  • QoS monitors
    • Measure application end-to-end latency
  • Other application-specific monitors can be "hooked in"
• Controller
  • Responds to variations in resource utilization and application QoS
  • Computes system-wide adaptation
• Effectors
  • Centralized effector
    • Decomposes system-wide adaptation decisions into per-node adaptation decisions
  • Nodal effectors
    • Modify nodal parameters based on per-node adaptation decisions
    • Modify the OS priority of processes hosting application components
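
To clarify how these pieces fit together, here is a hedged C++ sketch of a monitor/controller/effector feedback loop. The interfaces are hypothetical illustrations of the roles described above, not RACE's actual CORBA/CCM interfaces.

```cpp
#include <map>
#include <string>

// Hypothetical interfaces sketching the monitoring & control loop described
// above; RACE's real framework realizes these roles as CORBA/CCM components.
struct Monitor {                 // e.g., CPU/memory/network utilization, end-to-end latency
  virtual ~Monitor() = default;
  virtual double sample() = 0;   // one utilization or QoS measurement
};

struct Controller {              // e.g., EUCON rate adaptation or FMUF
  virtual ~Controller() = default;
  // Map named observations to a system-wide adaptation decision, expressed here
  // as per-application parameter adjustments (e.g., invocation rates).
  virtual std::map<std::string, double>
  adapt(const std::map<std::string, double>& observations) = 0;
};

struct Effector {                // a centralized effector decomposes decisions per node;
  virtual ~Effector() = default; // nodal effectors apply them (e.g., OS priorities)
  virtual void apply(const std::map<std::string, double>& decisions) = 0;
};

// One control-loop iteration: sample, compute the adaptation, enact it.
inline void control_step(Monitor& m, Controller& c, Effector& e,
                         const std::string& metric_name) {
  std::map<std::string, double> obs{{metric_name, m.sample()}};
  e.apply(c.adapt(obs));
}
```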

11. Performance Evaluation of RACE
• Overhead of the RACE framework:
  • Monitoring overhead: 37.97 microseconds
  • Control overhead: 800 nanoseconds
• (Slide figures: system performance with RACE vs. baseline system performance without RACE)

12. Applying RACE to DDS-Based DRE Systems
• All DRE systems have architectural features in common
  • Adapting RACE to DDS-based DRE systems won't require major modifications
• DDS will make some of RACE's tasks simpler
  • QoS validation & matching
  • QoS enforcement

13. DDS Implementation Architectures
• Decentralized architecture
  • embedded threads handle communication, reliability, QoS, etc.
• (Slide figure: two nodes communicating directly over the network)

14. DDS Implementation Architectures
• Decentralized architecture
  • embedded threads handle communication, reliability, QoS, etc.
• Federated architecture
  • a separate daemon process handles communication, reliability, QoS, etc.
• (Slide figures: nodes communicating directly over the network vs. nodes communicating through per-host daemons)

15. DDS Implementation Architectures
• Decentralized architecture
  • embedded threads handle communication, reliability, QoS, etc.
• Federated architecture
  • a separate daemon process handles communication, reliability, QoS, etc.
• Centralized architecture
  • one single daemon process per domain
• (Slide figures: direct node-to-node communication; per-host daemons; and a single daemon mediating control traffic while data flows between nodes)

16. Pub/Sub Benchmarking Lessons Learned
• DDS performance is significantly better than that of other pub/sub architectures
  • Even the slowest DDS implementation was 2x faster than other pub/sub services
• DDS scales better to larger payloads, especially for simple data types

17. Pub/Sub Benchmarking Lessons Learned
• DDS performance is significantly better than that of other pub/sub architectures
  • Even the slowest DDS implementation was 2x faster than other pub/sub services
• DDS scales better to larger payloads, especially for simple data types
• DDS implementations are optimized for different use cases & design spaces
  • payload size
  • # of subscribers
  • collocation
• http://www.dre.vanderbilt.edu/DDS/DDS_RTWS06.pdf

18. Configuration Aspect Problems
• Application developers
  • Must understand middleware constraints & semantics
  • Increases accidental complexity
  • Different middleware uses different configuration mechanisms
• Middleware developers
  • Documentation & capability synchronization
  • Semantic constraints & QoS evaluation of specific configurations
• CIAO/CCM provides ~500 configuration options
• 21 interrelated QoS policies
• (Slide figure: XML configuration files & XML property files feeding the middleware)

19. QoS Policies Supported by DDS
• DCPS entities (e.g., topics, data readers/writers) are configurable via QoS policies
• QoS tailored to data distribution in tactical information systems
• Request/offered compatibility is checked by DDS at run time
• Consistency is checked by DDS at run time
• DEADLINE: establishes a contract regarding the rate at which periodic data is refreshed
• LATENCY_BUDGET: establishes guidelines for acceptable end-to-end delays
• TIME_BASED_FILTER: mediates exchanges between slow consumers & fast producers
• RESOURCE_LIMITS: controls resources utilized by the service
• RELIABILITY (BEST_EFFORT, RELIABLE): enables use of real-time transports for data
• HISTORY (KEEP_LAST, KEEP_ALL): controls which (of multiple) data values are delivered
• DURABILITY (VOLATILE, TRANSIENT, PERSISTENT): determines whether data outlives the time when it is written
• … and 15 more …
• Implications for trustworthiness
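
To show how a few of these policies surface in application code, here is a hedged sketch against the standard OMG DDS API. The `publisher` and `topic` variables are assumed to already exist, and exact headers, status-mask constants, and nil-listener conventions vary by vendor (e.g., OpenDDS, RTI Connext).

```cpp
// Sketch only: setting several DDS QoS policies on a DataWriter.
// Identifiers follow the OMG DDS spec IDL; vendor specifics may differ.
DDS::DataWriterQos qos;
publisher->get_default_datawriter_qos(qos);

qos.reliability.kind = DDS::RELIABLE_RELIABILITY_QOS;     // RELIABILITY
qos.history.kind     = DDS::KEEP_LAST_HISTORY_QOS;        // HISTORY
qos.history.depth    = 1;
qos.durability.kind  = DDS::TRANSIENT_DURABILITY_QOS;     // DURABILITY
qos.deadline.period.sec     = 0;                          // DEADLINE: refresh at least
qos.deadline.period.nanosec = 10 * 1000 * 1000;           //   every 10 ms

DDS::DataWriter_var writer =
  publisher->create_datawriter(topic, qos,
                               0 /* no listener */, 0 /* status mask */);
```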

20. DDS QoS Policies
• Interactions of QoS policies have implications for:
  • Consistency/Validity
    • e.g., DEADLINE period < TIME_BASED_FILTER minimum separation (for a DataReader)
  • Compatibility/Connectivity
    • e.g., best-effort communication offered (by a DataWriter), reliable communication requested (by a DataReader)
• (Slide figure: DataWriters and DataReaders on a Topic with mismatched settings, such as Durability Volatile vs. Transient, Deadline 10 ms vs. 20 ms with a 15 ms time-based filter, Liveliness Manual-by-Topic vs. Automatic, and Reliability Best Effort vs. Reliable, prompting "Will data flow?" and "Will settings be consistent?" or will the QoS settings need updating?)
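
The two example checks on this slide can be written down directly against the DDS QoS structures. The sketch below illustrates the rules DDS applies at run time (and DQML checks at design time); field names follow the DDS spec, while the helper-function names are illustrative assumptions.

```cpp
// Sketch only: the consistency and compatibility rules illustrated above,
// written against the OMG DDS QoS structures.  Helper names are illustrative.

// Consistency (within one DataReader): the QoS is invalid if the DEADLINE
// period is shorter than the TIME_BASED_FILTER minimum separation.
bool reader_qos_consistent(const DDS::DataReaderQos& qos) {
  const DDS::Duration_t& d = qos.deadline.period;
  const DDS::Duration_t& s = qos.time_based_filter.minimum_separation;
  return d.sec > s.sec || (d.sec == s.sec && d.nanosec >= s.nanosec);
}

// Compatibility (request/offered, across a match): a DataReader requesting
// RELIABLE cannot be matched with a DataWriter offering only BEST_EFFORT.
bool reliability_compatible(const DDS::DataWriterQos& offered,
                            const DDS::DataReaderQos& requested) {
  return !(offered.reliability.kind   == DDS::BEST_EFFORT_RELIABILITY_QOS &&
           requested.reliability.kind == DDS::RELIABLE_RELIABILITY_QOS);
}
```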

21. DDS Trustworthiness Needs (1/2)
• Compatibility and consistency of QoS settings
  • Data needs to flow as intended
  • Close software loopholes that might be maliciously exploited
• Fixing at run time untenable
  • Updating QoS settings on the fly introduces inherent complexity
  • Unacceptable for certain systems (e.g., RT, mission critical, provable properties)
• Fixing at code time untenable
  • Implies long turn-around times (code, compile, run, check status, iterate)
  • Introduces accidental complexity
• The DDS QoS Modeling Language (DQML) models QoS configurations and allows checking at design/modeling time
  • Supports quick and easy fixes by "sharing" QoS policies
  • Supports correct-by-construction configurations

22. DDS Trustworthiness Needs (2/2)
• QoS configurations generated automatically
  • Eliminate accidental complexities
  • Close configuration loopholes for malicious exploitation
• Decouple configurations from application logic
  • Refinement of configuration separate from refinement of code
• DQML generates QoS settings files for DDS applications
  • Creates consistent configurations
  • Promotes separation of concerns
    • Configuration changes unentangled with business logic changes
  • Increases confidence

23. Typical DDS Application Development
• Business/application logic mixed with QoS configuration code
  • Accidental complexity
  • Obfuscation of configuration concerns
• DQML decouples QoS configuration from business logic
  • Facilitates configuration analysis
  • Reduces accidental complexity
  • Result: a higher-confidence DDS application
• (Slide figure: application code interleaving business logic with publisher QoS configuration & publisher creation and DataWriter QoS configuration & DataWriter creation)

24. DQML Design Decisions
• No abortive errors
  • User can ignore constraint errors
  • Useful for developing pieces of a distributed application
  • Initially focused on flexibility
• QoS associations vs. containment
  • Entities and QoS policies associated via connections rather than containment
  • Provides flexibility & reusability
  • Eases resolution of constraint violations

25. Use Case: DDS Benchmark Environment (DBE)
• Part of the Real-Time DDS Examination & Evaluation Project (RT-DEEP)
  • http://www.dre.vanderbilt.edu/DDS
  • Developed by the DRE Group at ISIS
• DBE runs Perl scripts to deploy DataReaders and DataWriters onto nodes
  • Passes QoS settings files (generated by hand)
• Requirement for testing and evaluating non-trivial QoS configurations
• (Slide figure: multiple DataWriters and DataReaders, each with its own QoS settings)

26. DBE Interpreter
• Model the desired QoS policies via DQML
• Invoke the DBE Interpreter
  • Generates one QoS settings file for each DBE DataReader and DataWriter to use
  • No manual intervention
• Have DBE launch the DataReaders and DataWriters with the generated QoS settings files

27. DQML Demonstration
• Create DDS entities, QoS policies, and connections
• Run constraint checking
  • consistency check
  • compatibility check
  • fix at design time
• Invoke the DBE Interpreter
  • automatically generate QoS settings files

28. Future Work
• Incorporate with TRUST Trustworthy Systems
• Combine QoS policies and patterns to provide higher-level services
  • Build on DDS patterns [1]
    • Continuous data, state data, alarm/event data, hot-swap and failover, controlled data access, filtered by data content
  • Fault-tolerance service (e.g., using ownership/ownership strength & durability policies, multiple readers and writers, hot-swap and failover pattern)
  • Security service (e.g., using time-based filter & liveliness policies, controlled data access pattern)
  • Real-time data service (e.g., using deadline, transport priority & latency budget policies, continuous data pattern)
• Incorporate into larger-scale tool chains
  • e.g., Deployment and Configuration Engine (DAnCE) in the CoSMIC tool chain
• (Slide figure: DQML feeding DAnCE)
[1] Gordon Hunt, OMG Workshop Presentation, 10-13 July, 2006

29. Component QoS Modeling
• Platform Independent Component Modeling Language (PICML)
  • Captures the CCM application development lifecycle
    • e.g., design, assembly, packaging, deployment, etc.
• Security QoS Modeling Language (SQML)
  • Leverage and enhance to capture security requirements for eDRE applications
  • (Slide figure: CORBA's Secure Invocation Model)
• Component QoS Modeling Language (CQML)
  • Enhances PICML (uses it as a library)
  • Captures component QoS requirements
  • Defines 4 different types of QoS: for Port, Component, Connection & Component Assembly (Application)
  • Any new type of QoS should conform to these QoS types, i.e., to CQML in general

30. CORBA Security Model v1.8
• Security protection is based upon policy
  • Policy may be domain specific
  • Policy is enforced by the ORB
• The ORB enforces
  • Access control
  • Message protection
  • Audit policy
• The ORB implements
  • PEPs (Policy Enforcement Points)
  • PDPs (Policy Decision Points)
• The ORB Services implement
  • Policy repository
  • Security protocols
  • Authentication methods
  • Cryptographic algorithms
• CCM Security adopts the EJB Security Specification

31. CCM QoS Levels for Security
• Addresses three CCM QoS levels: ports, components and assemblies
• SQML provides fine-grained as well as coarse-grained access control and security guarantees
• (Slide figures: the CORBA Component Model; configuring security QoS properties on a CCM assembly)

32. Access Control Granularity
• Fine-grained:
  • Interface operation
  • Assembly property
  • Component attribute
• Coarse-grained:
  • Interface
  • Set of operations
  • Class of operations (based on required rights [corba:gsum])
  • Inter-component execution flow (path in an assembly)

33. User-Role-Rights Mapping (Effective Rights)
• Responsibility of the system administrator; defined at the application server deployment site through access control policies
• The roles can be application specific or platform (CCM) specific
  • CCM-specific roles: Designer, Developer, Implementer, Assembler, Packager, Deployer, End-User
  • Application-specific roles: Administrator, User, Director, Programmer, Manager, etc.
• (Slide note: users & groups are shown for completeness; User/Group → Role mappings are defined in the application access policies)

34. Operation/Interface Classification (Required Rights)
• Responsibility of the component developer
• Operations/interfaces are classified according to the standard CORBA family rights [corba:gsum]
• Well-defined component interfaces
  • Allows for coarse-grained control over operation access
  • Used underneath in the container to determine access decisions to the operations
• Effective rights vs. required rights
• (Slide figures: rights assignment on a two-way method; rights assignment on an interface)

35. Policy Definition Rules
• Two-level evaluation: operation name & required rights
• Critical path in the system (part of a functionality/workflow)
• Allow/deny access to all operations with the same rights
• Component attributes have implicit get/set rights
• (Slide figure: example policy rules, several annotated with the two-level evaluation of operation name & required rights)

36. Security QoS Interpreter
• Generate the User → Role → Granted-Rights mappings defined by the system administrator
• Generate the Operation → Required-Rights mappings for an interface, as determined by the component designer
• Generate security policy definition files
  • Generate method permissions based on the mappings and policy rules
• Generate additional metadata to configure the container

37. Benefits of Security QoS Modeling
• Expresses cross-cutting concerns that can be implemented at the interface level, the component level, and the component assembly (application) level
• Sheds some responsibility from the ORB through definition of well-formed policies, rule combining, and conflict resolution
• Provides a higher-level tool for declarative security specification for the deployment of large-scale component-based systems
• Incorporates security into the QoS aspects of component systems, an important step towards complete QoS modeling of such systems and, in turn, their trustworthiness
• Allows modeling of security QoS with much more generality and flexibility than existing solutions (e.g., OpenPMF)

38. Future Work
• Define efficient rule and policy validation and rule-combining algorithms
• Extend the critical-path functionality to provide business process & workflow security
• Provide middleware infrastructure support for security in the CCM container through container portable interceptors, leveraging the facilities of the CORBA Security Service implementation available with TAO
• Enable D&C tools like DAnCE to integrate security QoS properties with application deployment and configure the CCM middleware to enforce them
• Unified QoS modeling through CQML
  • FT, RT, Security, Network QoS, and Event Channel configuration conform to CQML
  • Any new QoS requirement model should conform to CQML
  • DQML can conform to CQML, enabling different platforms to be …

39. MDD Solutions for Configuration
• The Options Configuration Modeling Language (OCML) ensures semantic consistency of option configurations
• OCML is used by
  • Middleware developers to design the configuration model
  • Application developers to configure the middleware for a specific application
• The configuration model validates the application model
• The OCML metamodel is platform-independent
• OCML models are platform-specific

40. Applying OCML
• Middleware developers specify
  • Configuration space
  • Constraints
• OCML generates the configuration model

41. Applying OCML
• Middleware developers specify
  • Configuration space
  • Constraints
• OCML generates the configuration model
• Application developers provide a model of desired options & their values, e.g.,
  • Network resources
  • Concurrency & connection management strategies

42. Applying OCML
• Middleware developers specify
  • Configuration space
  • Constraints
• OCML generates the configuration model
• Application developers provide a model of desired options & their values, e.g.,
  • Network resources
  • Concurrency & connection management strategies
• The OCML constraint checker flags incompatible options & then
  • Synthesizes XML descriptors for middleware configuration
  • Generates documentation for middleware configuration
  • Validates the configurations

43. Supporting DDS QoS Modeling With OCML
• Integrate OCML with DRE system modeling languages
• Enable association of option sets with system model elements
  • PICML
    • ORB/POA/Container
    • Ports using DDS (proposed DDS-4-LWCCM spec)
  • DDS-specific modeling language
    • DDS entities
• More generation options
  • Other config file formats
  • Parameters for simulations
  • Code blocks (C++)
• (Slide figure: a CIAO pub port associated with a DDS option set, generating XML)

44. Modeling QoS With Design Patterns
• Continuous Data
  • constant updates
  • many-to-many
  • last value is best
  • seamless failover
• Corresponding QoS settings:
  • Reliability = BEST_EFFORT
  • Time-Based Filter = X
  • Use keys & multicast
  • History = KEEP_LAST, 1
  • Ownership = EXCLUSIVE
  • Deadline = X
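
As a hedged illustration of how such a pattern maps onto code (the state-data and alarm/event patterns on the next two slides map the same way), here is a DataReader configured for the continuous-data settings above using the standard OMG DDS API. The `subscriber` and `topic` variables are assumed to exist, the 10 ms/100 ms values stand in for the slide's unspecified "X", and vendor identifiers may differ.

```cpp
// Sketch only: DataReader QoS for the "continuous data" pattern above.
// Identifiers follow the DDS spec IDL; exact types/headers vary by vendor.
DDS::DataReaderQos qos;
subscriber->get_default_datareader_qos(qos);

qos.reliability.kind = DDS::BEST_EFFORT_RELIABILITY_QOS;   // Reliability = BEST_EFFORT
qos.history.kind     = DDS::KEEP_LAST_HISTORY_QOS;         // History = KEEP_LAST, 1
qos.history.depth    = 1;                                  //   (last value is best)
qos.ownership.kind   = DDS::EXCLUSIVE_OWNERSHIP_QOS;       // Ownership = EXCLUSIVE (failover)

// Deadline = X and Time-Based Filter = X: placeholder values for illustration,
// chosen so that deadline period >= minimum separation (the consistency rule).
qos.deadline.period.sec                          = 0;
qos.deadline.period.nanosec                      = 100 * 1000 * 1000;  // 100 ms
qos.time_based_filter.minimum_separation.sec     = 0;
qos.time_based_filter.minimum_separation.nanosec = 10 * 1000 * 1000;   // 10 ms

DDS::DataReader_var reader =
  subscriber->create_datareader(topic, qos, 0 /* no listener */, 0 /* status mask */);
```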

45. Modeling QoS With Design Patterns
• State Information
  • persistent data
  • occasional mods
  • latest & greatest
  • must deliver
  • must process
• Corresponding QoS settings:
  • Durability = PERSISTENT
  • Lifespan = X
  • Reliability = RELIABLE
  • Pub History = KEEP_ALL
  • Sub History = KEEP_LAST, n

46. Modeling QoS With Design Patterns
• Alarms & Events
  • asynchronous
  • must deliver
  • authorized sender
• Corresponding QoS settings:
  • Liveliness = MANUAL
  • Reliability = RELIABLE
  • Pub History = KEEP_ALL
  • Ownership = EXCLUSIVE

47. Pollux MDD Tools – Design Time Challenges
• Carry out commander's intent by automated mapping of familiar scenarios to models
• Explore the problem & solution space with
  • an easily grokable/modifiable visual language
  • multiple artifact generators
• Assure QoS by
  • explicit representation in the model
  • automatic consistency checks

48. Pollux Perf. Eval. – Run Time Challenges
• Carry out commander's intent by having DDS get the right information to the right place at the right time
• Enforce QoS policies: built into DDS implementations
• Manage resources with
  • Resource Limits policy
  • Time-Based Filter policy
  • Lifespan policy
  • History policy
  • filter migration to the source
• Compensate for limited resources by
  • leveraging mutable QoS policies
  • detecting & acting on meta-events (built-in QoS policies)
