Presentation Transcript

  1. Ph.D. Defense by Julien Bigot, December 6th 2010
Generic Support of Composition Operators in Software Component Models, Application to Scientific Computing
Under the supervision of Christian Pérez, INSA de Rennes
INRIA Team-Projects PARIS (IRISA) / GRAAL (LIP)

  2. Context: Scientific Applications
• Complex applications
  • Coupling of independent codes developed by distinct teams
  • Long life-cycle, longer than the hardware's
  • Computationally intensive (HPC) => requires complex hardware: supercomputers, computing clusters, computing grids, clouds
• Which programming models?
[Diagram: satellite modeling coupling Structural Mechanics, Optics, Dynamics and Thermal codes over supercomputers, clusters (SAN/LAN) and grids (WAN)]

  3. Context: Parallel/Distributed Computing Paradigms
• Shared memory (DSM)
  • Implementations: OS (node level), kDDM (cluster level), Juxmem (grid level), …

  4. Context: Parallel/Distributed Computing Paradigms
• Shared memory
  • Implementations: OS (node level), kDDM (cluster level), Juxmem (grid level)
• Message passing
  • Implementations: MPI, PVM, Charm++, …
  • Variations: collective operations (barrier, broadcast, scatter, …)

  5. Context: Parallel/Distributed Computing Paradigms
• Shared memory
  • Implementations: OS (node level), kDDM (cluster level), Juxmem (grid level)
• Message passing
  • Implementations: MPI, PVM, Charm++
  • Variations: collective operations
• Remote procedure/method calls
  • Implementations: ONC RPC, CORBA, Java RMI, …
  • Variations: parallel procedure call (PaCO++), master/worker (GridRPC)

  6. Context: Parallel/Distributed Computing Paradigms
• Shared memory
  • Implementations: OS (node level), kDDM (cluster level), Juxmem (grid level)
• Message passing
  • Implementations: MPI, PVM, Charm++
  • Variations: collective operations
• Remote procedure/method calls
  • Implementations: ONC RPC, CORBA, Java RMI
  • Variations: parallel procedure call (PaCO++), master/worker (GridRPC)
• Increased abstraction level: easier programming, various implementations for various hardware
• But centered on the communication medium: the application architecture is not dealt with, which makes reuse difficult

  7. Context: Composition-Based Programming
• Components: black boxes; interactions through well-defined interaction points
• Application: an assembly of component instances; high-level view of the architecture
• Implementation
  • Primitive: C++, Fortran, Java
  • Composite (assembly)
• Dimension
  • Spatial (classical models): CCA, CCM, Fractal, SCA
  • Temporal (workflow/dataflow): Kepler, Triana
[Diagram: composite C3 containing instances C1 and C2 wired through ports a-f; temporal chain T1 → T2]

  8. Context: Composition-Based Programming
• Algorithmic skeletons: a predefined set of composition patterns
  • Farm (master/worker), pipeline, map, reduce, …
• Focus: decoupling
  • User: application-related concerns (what)
  • Skeleton developer: hardware-related concerns (how)
• Implementations: multiple libraries (P3L, ASSIST, …)
[Diagram: farm skeleton]
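The what/how split of the farm skeleton can be sketched in plain Java: the user supplies only the per-task function, while the worker pool and scheduling stay behind the skeleton's interface. This is an illustrative sketch with invented names (FarmSketch, farm), not the P3L or ASSIST API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

// Minimal task-farm sketch: a fixed pool of workers applies the same
// function (the user's "what") to every input; scheduling across workers
// (the skeleton developer's "how") is hidden inside the executor.
public class FarmSketch {
    public static <I, O> List<O> farm(Function<I, O> worker, List<I> inputs, int nWorkers)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nWorkers);
        try {
            List<Future<O>> futures = new ArrayList<>();
            for (I in : inputs)
                futures.add(pool.submit(() -> worker.apply(in))); // dispatch
            List<O> results = new ArrayList<>();
            for (Future<O> f : futures)
                results.add(f.get()); // collect, preserving input order
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Usage: the caller only states the computation, not the parallelism.
        System.out.println(farm(x -> x * x, List.of(1, 2, 3, 4), 4));
    }
}
```

Collecting through the futures list keeps results in input order even though workers finish out of order, which mirrors the dispatcher/collector pair of the farm diagrams later in the talk.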

  9. Context: Components for Parallel Computing
• Parallel/distributed programming paradigms
  + Hardware resource abstraction, efficiency on various hardware
  − Hidden architecture, complex reuse
• Composition-based models
  + High-level view of the architecture, eased reuse
  − Hardware-specific assemblies
• => A parallelism-oriented composition-based model

  10. Context: Components for Parallel Computing
• Memory sharing between components: CCA & CCM extensions
• Parallel components: CCA, SCIRun2, GridCCM
• Collective communications: CCM extension
• Parallel method calls: SCIRun2, GridCCM
• Master/worker support: CCA & CCM extensions
• Some algorithmic skeletons in assemblies: STKM
• Two types of features
  • Component implementations ≈ skeletons
  • Component interactions

  11. Context: Limitations of Existing Models
• Limited set of features in each model => combine them in a single model
• Unlimited set of features (including application-specific ones) => make composition patterns extensible
[Diagram: satellite modeling assembly using CollComm interactions between the Optics, Structural Mechanics, Dynamics and Thermal components]

  12. Context: Problem Statement
• Goals for an extensible component model
  • User code reuse, reusable composition patterns, reusable inter-component interactions
  • Efficiency on various hardware
• Required concepts
  • Components, user-definable skeletons, user-definable connectors
  • Multiple implementations with hardware-dependent choice
• Problems
  • How to support user-defined skeleton implementations?
  • How do all these concepts behave when combined?

  13. Outline • Context of the Study • Software Skeletons as Generic Composites • HLCM: a Model Extensible with Composition Patterns • HLCM in Practice • Conclusion

  14. Software Skeletons with Components: Motivating Example
• Goal: generating pictures of the Mandelbrot set, an embarrassingly parallel problem
  • C = (x, y); Z0 = 0, Zn+1 = Zn² + C
  • Bounded → black, unbounded → blue
• Hardware: quad-core processor
• Task farm with 4 workers; input: a coordinate, output: a pixel color
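The per-pixel computation each worker performs follows directly from the slide's rule. A minimal sketch, using the standard escape radius 2 and an iteration cap (both conventional choices, not stated on the slide):

```java
// Mandelbrot membership test from the slide's rule: Z0 = 0, Zn+1 = Zn² + C.
// A point is drawn black if the orbit stays bounded, blue otherwise.
public class Mandelbrot {
    public static boolean bounded(double cx, double cy, int maxIter) {
        double zx = 0, zy = 0; // Z0 = 0
        for (int n = 0; n < maxIter; n++) {
            double nx = zx * zx - zy * zy + cx; // real part of Z² + C
            double ny = 2 * zx * zy + cy;       // imaginary part of Z² + C
            zx = nx;
            zy = ny;
            if (zx * zx + zy * zy > 4) // |Z| > 2: the orbit provably diverges
                return false;
        }
        return true; // did not escape within maxIter: treated as bounded (black)
    }
}
```

Each farm task maps one coordinate C = (cx, cy) to a color via this predicate, which is why the workers are fully independent.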

  15. Software Skeletons with Components: A Component-Based Farm
[Diagram: MandelbrotFarm composite: a CoordDisp dispatcher distributes Coord inputs to four MandelWorker instances, whose Pixel outputs are gathered by a PixColl collector]

  16. Software Skeletons with Components: Limitations of the Component-Based Farm
• Hard-coded in the composite
  • The type of the worker
  • The type of the interfaces
  • The number of workers
• Limited reuse: neither for a distinct calculation nor for distinct hardware
[Diagram: MandelbrotFarm composite with CoordDisp, four MandelWorker instances and PixColl]

  17. Software Skeletons with Components: A Component-Based Farm
[Diagram: MandelbrotFarm composite: a CoordDisp dispatcher, four MandelWorker instances, a PixColl collector]

  18. Software Skeletons with Components: Introducing Parameters
[Diagram: Farm<W>: the worker component becomes a parameter W; the Coord dispatcher and Pixel collector remain fixed]

  19. Software Skeletons with Components: Introducing Parameters
[Diagram: Farm<W, I, O>: the input and output interfaces become parameters, used by Disp<I> and Coll<O>]

  20. Software Skeletons with Components: Introducing Parameters
[Diagram: Farm<W, I, O, N>: the number of workers becomes a parameter; W is instantiated N times between Disp<I> and Coll<O>. This is genericity.]

  21. Software Skeletons with Components: Concepts for Genericity
• Generic artifacts
  • Accept 2nd-order parameters
  • Use the values of parameters in their implementation
• Java:
  public class GenClass<T> {
    private T _val;
    public T getVal() { return _val; }
    …
  }
• C++:
  template<typename T>
  T increment(T val) {
    T result = val;
    result += 1;
    return result;
  }
[Diagram: generic composite GenCmp<C> containing an instance of its component parameter C alongside CmpA]

  22. Software Skeletons with Components: Concepts for Genericity
• Specializations
  • Use of generic artifacts
  • Arguments bind parameters to actual values
• Java:
  GenClass<String> gs = new GenClass<String>();
  String s = gs.getVal();
  GenClass<Integer> gi = new GenClass<Integer>();
  …
• C++:
  int i = 42;
  i = increment<int>(i);
  float f = 35.69f;
  f = increment<float>(f);
  …
[Diagram: composite MyComposite instantiating GenCmp<MyCmp>; another composite instantiating it with a different argument]

  23. Software Skeletons with Components: Metamodel-Based Transformation
• GenericCM metamodel: generic, unsupported; CM metamodel: non-generic, supported
• Pipeline: S (GenericCM source files) → parsing → S model → transformation → D model → dump → D (CM source files)
• The generic source and its transformed result are semantically equivalent
• Example assembly:
  component App {
    content {
      composite {
        decl {
          Decrement_Composite D;
          Viewer v;
          Decrement d1;
          d1.h_log_port -- v.v_log_port;
          D.h_log_port -- v.v_log_port;
          d1.dp_dec_port -- D.du_dec_port;
          d1.du_dec_port -- D.dp_dec_port;
          set d1.Number 3;
        }
        service run { seq { exec d1.export; exec D; } }
      }
    }
  }
[Diagram: generic composite B<T> with elements B_0, T, B_1, instantiated as B<A> and B<D> inside composite C]

  24. Software Skeletons with Components: Deriving a Metamodel with Genericity
• Example
  • Generic artifact: ComponentType
  • Artifact usable as parameter: PortType
• Additional modifications
  • Default values for parameters
  • Constraints on parameter values
  • Explicit specializations
  • Meta-programming
[Diagram: metamodel excerpt: ComponentType, ComponentInstance, Port, PortType, PortTypeParam, PortTypeArg]

  25. Software Skeletons with Components: Transformation
• Specialized compilation (the C++ approach): makes template meta-programming possible
• Algorithm
  • Copy non-generic elements
  • For each generic reference
    • Create a context (parameter → value)
    • Copy the generic element in that context
[Diagram: composite C with instances A, B<A> and B<D>, where B<T> contains B_0, T, B_1; instantiation yields the copies B_0/A/B_1 and B_0/D/B_1]
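The instantiation algorithm above can be sketched on a toy model where a generic "component" is just a list of element names, some of which are parameters. The class and method names (GenericComponent, instantiate) are ours, not from the thesis; the logic is the slide's: build a parameter → value context, then copy the generic element substituting through that context.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy instantiation by substitution, mirroring the slide's diagram:
// B<T> with elements [B_0, T, B_1] instantiated as B<A> gives [B_0, A, B_1].
public class Instantiation {
    record GenericComponent(String name, List<String> params, List<String> elements) {}

    static List<String> instantiate(GenericComponent g, List<String> args) {
        Map<String, String> context = new HashMap<>(); // parameter -> value
        for (int i = 0; i < g.params().size(); i++)
            context.put(g.params().get(i), args.get(i));
        List<String> copy = new ArrayList<>();
        for (String e : g.elements())
            copy.add(context.getOrDefault(e, e)); // substitute parameters, copy the rest
        return copy;
    }

    public static void main(String[] args) {
        GenericComponent b =
            new GenericComponent("B", List.of("T"), List.of("B_0", "T", "B_1"));
        System.out.println(instantiate(b, List.of("A"))); // [B_0, A, B_1]
        System.out.println(instantiate(b, List.of("D"))); // [B_0, D, B_1]
    }
}
```

A real metamodel transformation copies typed model elements rather than strings, but the context-and-copy structure is the same.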

  26. Software Skeletons with Components: Summary
• Defined genericity for component models
  • Straightforward metamodel extension, independent of the component model
  • SCA (100 classes) → GenericSCA (+20 classes)
  • ULCM (71 classes) → GULCM (+24 classes)
• Proposed an approach for genericity execution support
  • Transformation: generic model → non-generic, executable equivalent
  • Relies on a model-driven engineering approach
  • Implemented using the Eclipse Modeling Framework (EMF)
• Makes skeleton implementation possible
  • Implemented skeletons: task farm (pixel, tile), …
  • Computed pictures of the Mandelbrot set

  27. Outline • Context of the Study • Software Skeletons as Generic Composites • HLCM: a Model Extensible with Composition Patterns • HLCM in Practice • Conclusion

  28. Combining All Concepts in HLCM: Aims & Approach
• Aims
  • Support user-defined extensions: hierarchy, genericity
  • Interactions: connectors
  • Adaptation to hardware: implementation choice
  • Maximize reuse of existing user code => component models
• Approach
  • Abstract model: primitive concepts taken from a backend model
  • HLCM specializations (HLCM + CCM => HLCM/CCM)
  • Deployment-time transformation
    • Hardware resources taken into account
    • Falls back to the backend
  • Skeletons:

  29. Combining All Concepts in HLCM: Component Implementation Choice
• Distinguish component type & component implementation
  • Component type: generic; a list of ports
  • Component implementation: generic; implements a component-type specialization; content description: primitive or composite
• Choice at deployment time
  • Match the used specialization & the implemented specialization
  • Choose amongst the possible implementations
• Example:
  component CmpA<component T> exposes { provide<Itf> p; }
  primitive A1 implements CmpA<B> { … }
  composite A3<component X> implements CmpA<X> { … }
[Diagram: a use of CmpA<B> resolved to implementation A1 (T=B), A2, or A3<X> instantiated as A3<B>]
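Deployment-time choice can be sketched as matching over a list of candidate implementations, each guarded by an applicability predicate standing in for an HLCM "when" clause. All names here (Impl, Context, choose) are illustrative, not HLCM API:

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

// Sketch of implementation choice: each implementation of a component type
// carries a predicate over the deployment context; the first implementation
// whose predicate holds is selected.
public class ImplChoice {
    record Context(boolean sameHost) {}
    record Impl(String name, Predicate<Context> when) {}

    static Optional<String> choose(List<Impl> impls, Context ctx) {
        return impls.stream()
                .filter(i -> i.when().test(ctx)) // keep applicable candidates
                .map(Impl::name)
                .findFirst();                    // pick one (here: the first)
    }

    public static void main(String[] args) {
        List<Impl> impls = List.of(
            new Impl("A1_shared_memory", Context::sameHost), // only if co-located
            new Impl("A3_sockets", c -> true));              // always applicable
        System.out.println(choose(impls, new Context(true)));
        System.out.println(choose(impls, new Context(false)));
    }
}
```

A real framework would rank candidates using resource descriptions rather than taking the first match; the slide notes that fully automated choice is future work.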

  30. Combining All Concepts in HLCM: Introducing Connectors from ADLs
• Without connectors: direct connection between ports; predefined set of supported interactions
• Connectors: a concept from ADLs
  • Connectors reify interaction types: a name and a set of named roles
  • Instances are called connections; each role is fulfilled by a set of ports
  • First-class entities
[Diagram: CmpA/CmpB connected directly through ports vs. CmpA (user role) and CmpB (provider role) through a UP connector; CmpC and CmpD through an Event connector with roles]

  31. Combining All Concepts in HLCM: Generators
• Generator = connector implementation
• 1 connector → multiple generators, with distinct constraints: port types, component placement
• Two kinds of generators
  • Primitive
  • User-defined composite
• Intrinsically generic: the types of the ports fulfilling the roles are generic parameters
• Example:
  connector UP < role user, role provider >;
  generator IPC < interface UT, interface PT >
  implements UP<user={UT}, provider={PT}>
  when UT super PT && user.host == provider.host {
    Stub<UT> stub;
    Skel<PT> skel;
    …
  }
[Diagram: the IPC generator links user and provider through Stub<UT>, a Unix socket, and Skel<PT>, applicable when PT is a subtype of UT and user.host == provider.host]

  32. Combining All Concepts in HLCM: Concept Interactions
[Diagram: components SeqA and SeqB interacting through a Shared Data connection]

  33. Combining All Concepts in HLCM: Concept Interactions
[Diagram: ParallelA (M MPI processes C0…CM) and ParallelB (N MPI processes D0…DN) interacting through Shared Data via Adapt components]

  34. Combining All Concepts in HLCM: Concept Interactions
[Diagram: ParallelA (M MPI processes C0…CM) and ParallelB (N MPI processes D0…DN) interacting through Shared Data]

  35. Combining All Concepts in HLCM: Introducing Open Connections
[Diagram: the M processes of ParallelA and the N processes of ParallelB each expose a Shared Data connection; Merge components combine them into a single Shared Data connection]

  36. Combining All Concepts in HLCM: Introducing Open Connections
• Connection: a connector + a mapping role → set(ports)
• Merge({ ConnA, ConnB })
  • Pre-condition: ConnectorA == ConnectorB
  • For each role r: mapping r → set(Port)A ∪ set(Port)B
• Components expose named connections (open connections)
  • Composite: exposes an internal connection
  • Primitive: fills the roles of its exposed connections
[Diagram: merging two UP connections unions the ports fulfilling each role]
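The merge operation defined above can be sketched directly from its definition: a connection is a connector name plus a role → port-set mapping, and merging requires equal connectors and unions the port sets role by role. Class and method names are ours:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of the slide's merge: same connector required; for each role,
// the resulting port set is the union of both connections' port sets.
public class MergeSketch {
    record Connection(String connector, Map<String, Set<String>> roles) {}

    static Connection merge(Connection a, Connection b) {
        if (!a.connector().equals(b.connector())) // pre-condition from the slide
            throw new IllegalArgumentException("cannot merge different connectors");
        Map<String, Set<String>> merged = new HashMap<>();
        for (Connection c : List.of(a, b))
            c.roles().forEach((role, ports) ->
                merged.computeIfAbsent(role, r -> new HashSet<>()).addAll(ports));
        return new Connection(a.connector(), merged);
    }

    public static void main(String[] args) {
        Connection u = new Connection("UP", Map.of("user", Set.of("C.p")));
        Connection p = new Connection("UP", Map.of("provider", Set.of("D.q")));
        System.out.println(merge(u, p).roles()); // both roles, each with its ports
    }
}
```

This is how an open connection exposed by a composite gets fused with one exposed by its peer: the roles left unfilled on one side are completed by the ports of the other.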

  37. Combining All Concepts in HLCM: Introducing Open Connections
[Diagram: ParallelA exposes "User: multiply<Matrix>", merged from the M per-process "User: multiply<MatrixBlock>" UP/MPI connections; ParallelB exposes "Provider: multiply<Matrix>", merged from the N per-process "Provider: multiply<MatrixLine>" connections]

  38. Combining All Concepts in HLCM: Connection Adaptors & Bundles
• Bundle: regroups multiple open connections to fulfill a role
• Connection adaptor: supports open-connection polymorphism
  • Declares the supported connection and defines the behavior
  • Two views: a connection implementation, and an open-connection exposer
  • Implemented by an assembly; used only if necessary
• Example:
  adaptor PushPull
  supports UseProvide < user={Receptacle<Push>}, provider={} > //< supported
  as UseProvide < user={}, provider={Facet<Pull>} > //< this
  {
    BufferComponent buffer;
    merge({ buffer.pushSide, supported });
    merge({ this, buffer.pullSide });
  }
[Diagram: a Receptacle<Push> user connected to a Facet<Pull> provider through a buffering queue Q]

  39. Combining All Concepts in HLCM: Summary
• Approach
  • Abstract model: primitive elements from a backend, e.g. HLCM + CCM → HLCM/CCM
  • Transformation: HLCM specialization → backend, e.g. HLCM/CCM → pure CCM
• Source model (PIM)
  • Four main concepts: hierarchy, genericity, implementation choice, connectors
  • Additional concepts: open connections, merge, connection adaptors, bundles
  • Core: 127 Ecore classes; CCM specialization: 3 Ecore classes
• Transformation
  • Merge connections (using adaptors if required)
  • Choose implementations
  • Expose composite contents
• Destination model (PSM): direct map to the backend; 41 Ecore classes

  40. Outline • Context of the Study • Software Skeletons as Generic Composites • HLCM: a Model Extensible with Composition Patterns • HLCM in Practice • Conclusion

  41. Experimental Validation: A Framework for HLCM Implementation
• Model-transformation based, on Eclipse Modeling Tools
• HLCM/CCM source model (PIM)
  • 490 Emfatic lines (130 Ecore classes) → 25 000 generated Java lines
  • 2 000 utility Java lines
• HLCM destination model (PSM)
  • 160 Emfatic lines (41 Ecore classes) → 1 500 generated Java lines
  • 800 utility Java lines
• Transformation engine: 4 000 Java lines
• Already implemented connectors (HLCM/CCM)
  • Shared data
  • Collective communications
  • Parallel method calls

  42. Experimental Validation: Implementing Shared Memory
• Local implementation:
  generator LocalSharedMem<Integer N>
  implements SharedMem<
    access = each(i:[1..N]) { LocalReceptacle<DataAccess> }
  > {
    LocalMemoryStore<N> store;
    each(i:[1..N]) {
      store.access[i].user += this.access[i];
    }
  }
[Diagram: C1, C2 and C3 fulfill the user roles of a SharedMem connection, implemented by local UseProvide connections to a LocalMemoryStore]

  43. Experimental Validation: Implementing Shared Memory
• POSIX implementation, with a locality constraint («same process»):
  generator PosixSharedMem<Integer N>
  implements SharedMem<
    access = each(i:[1..N]) { LocalReceptacle<DataAccess> }
  >
  when samesystem ( each(i:[1..N]) { this.access[i] } ) {
    each(i:[1..N]) {
      PosixSharer node[i];
      node[i].access.user += this.access[i];
    }
  }
[Diagram: C1, C2 and C3 each connected via UseProvide to their own PosixSharer]

  44. Experimental Validation: Implementing Shared Memory
• JuxMem implementation:
  generator JuxMem<Integer N>
  implements SharedMem <
    access = each(i:[1..N]) { LocalReceptacle<DataAccess> }
  > {
    JuxMemManager<N> manager;
    each(i:[1..N]) {
      JuxMemPeer peer[i];
      peer[i].access.user += access[i];
      merge ({ peer[i].internal, manager.internal[i] });
    }
  }
[Diagram: C1, C2 and C3 connected via UseProvide to JuxMemPeer instances coordinated by a JuxMemManager]

  45. Experimental Validation: Parallel Method Calls
• Example:
  component ServiceProvider exposes {
    UseProvide<provider={Facet<Service>}> s;
  }
  bundletype ParallelFacet<Integer N, interface I> {
    each(i:[1..N]) {
      UseProvide<provider={Facet<ServicePart>}> part[i];
    }
  }
  composite ParallelServiceProvider<Integer N> implements ServiceProvider {
    each(i:[1..N]) { PartialServiceProvider p[i]; }
    this.s.provider += ParallelServiceFacet<N> {
      each(i:[1..N]) { part[i] = providerPart[i].s; }
    }
  }
[Diagram: ParallelServiceUser<2> (C0, C1) connected through a Merge to ParallelServiceProvider<3> (S0, S1, S2) exposing a ParallelServiceFacet<N>]

  46. Experimental Validation: Parallel Method Calls Result
[Diagram: resulting assembly: C0 and C1 each go through a UsersideRedistributor<2,3>; S0, S1 and S2 are each fronted by a ServersideRedistributor<2,3> before their PartialServiceProvider]

  47. Experimental Validation: Parallel Method Calls Performance
• Comparison: HLCM/CCM vs. PaCO++
• Single cluster, 1 Gb Ethernet
• Parallelism: 3 clients, 4 servers
[Plot: bandwidth per caller (Mb/s) vs. message size (bytes)]

  48. Experimental Validation: Conclusion
• HLCM implementation developed
  • HLCMi: a framework for implementing HLCM specializations
  • HLCM/CCM: a CCM-based specialization
  • Other specializations: plain Java, CCM + plain C++, Charm++
• Validated
  + Good expressiveness
  + Good performance
  − No real automated choice yet

  49. Outline • Context of the Study • Software Skeletons as Generic Composites • HLCM: a Model Extensible with Composition Patterns • HLCM in Practice • Conclusion

  50. Conclusion
• Identified the need for an extensible component model
  • User-defined inter-component interactions (composite generators)
  • User-defined parameterized component implementations (skeletons)
• Introduced genericity in component models
  • Supports user-defined skeleton implementations
  • Applicable to existing component models
  • Model-transformation approach
• Described an extensible component model: HLCM
  • Abstract model that relies on a backend for primitive concepts
  • Concepts combined in HLCM: hierarchy, genericity, implementation choice, connectors with open connections & connection adaptors
• Implemented & used
  • A model-based implementation framework using EMF: HLCMi
  • Used for synthetic example implementations: memory sharing, parallel method calls