
Message-based MVC and High Performance Multi-core Runtime


Presentation Transcript


  1. Message-based MVC and High Performance Multi-core Runtime. Xiaohong Qiu, xqiu@indiana.edu, December 21, 2006

  2. Session Outline • My Brief Background: Education and Work Experiences • Ph.D. Thesis Research: Message-based MVC Architecture for Distributed and Desktop Applications • Recent Research Project: High Performance Multi-core Runtime

  3. My Brief Background I • 1987 ─ 1991 Computer Science program at Beihang University • CS was viewed as a promising field to get into at the time • Four years of foundation courses, computer hardware and software courses, labs, projects, and an internship. Programming languages used included assembly language, Basic, Pascal, Fortran 77, Prolog, Lisp, and C; programming environments included DOS, Unix, Windows, and Macintosh. • 1995 ─ 1998 Computer Science graduate program at Beihang University • Graduate Research Assistant at the National Lab of Software Development Environment • Participated in the team project SNOW (shared memory network of workstations), working on an improved parallel I/O subsystem algorithm based on the two-phase method and MPI-IO. • 1991 ─ 1998 Faculty at Beihang University • Assistant Lecturer and Lecturer, teaching Database and Introduction to Computing courses.

  4. My Brief Background II • 1998 ─ 2000 M.S., Computer Information Science program at Syracuse University • 2000 ─ 2005 Ph.D., Computer Information Science program at Syracuse University • The thesis project involved surveying, designing, and evaluating a new paradigm for the next generation of rich media software applications, one that unifies legacy desktop and Internet applications with automatic collaboration and universal access capabilities. Attended conferences to present research papers and exhibit projects. • Awarded a Syracuse University Fellowship from 1998 to 2001 and named Outstanding Graduate Student of the College of Electrical Engineering and Computer Science in 2005 • May 2005 ─ present Visiting Researcher at the Community Grids Lab, Indiana University • June ─ November 2006 Software Project Lead at Anabas Inc. • Analysis of Concurrency and Coordination Runtime (CCR) and Dynamic Secure Services (DSS) for Parallel and Distributed Computing

  5. Message-based MVC (M-MVC) • Research Background • Architecture of Message-based MVC • Collaboration Paradigms • SVG Experiments • Performance Analysis • Summary of Thesis Research

  6. Research Background • Motivations • CPU speed (Moore's law) and network bandwidth (Gilder's law) continue to improve, bringing fundamental changes • Internet and Web technologies have evolved into a global information infrastructure for sharing resources • Applications are getting increasingly sophisticated • Internet collaboration is enabling virtual enterprises • Large-scale distributed computing • These trends require a new application architecture that is adaptable to fast technology change, with properties such as simplicity, reusability, scalability, reliability, and performance • The general area is technology support for synchronous and asynchronous resource sharing • e-learning (e.g., video/audio conferencing) • e-science (e.g., large-scale distributed computing) • e-business (e.g., virtual organizations) • e-entertainment (e.g., online games) • Research on a generic model for building applications • Application domains • Distributed (Web): Service Oriented Architecture and Web Services • Desktop (client): the Model-View-Controller (MVC) paradigm • Internet collaboration: the hierarchical Web Service pipeline model

  7. Architecture of Message-based MVC [Figure: decomposition of the SVG browser, comparing (a) the MVC model with (b) a three-stage pipeline of semantic Model, Controller, and View connected through input and output ports, with UI events and rendering carried as messages that contain control information] • A comparison of MVC, the Web Service pipeline, and Message-based MVC • Features of the Message-based MVC paradigm: • M-MVC is a general approach to building applications with a message-based paradigm • It emphasizes a universal, modularized service model with messaging linkage • It converges desktop applications, Web applications, and Internet collaboration • MVC and Web Services are the fundamental architectures for desktop and Web applications • The Web Service pipeline model provides the general collaboration architecture for distributed applications • M-MVC is a uniform architecture integrating the above models • M-MVC allows automatic collaboration, which simplifies the architecture design
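
To make the "events as messages, rendering as messages" decomposition concrete, here is a minimal sketch, in C#, of a View and a Model that interact only through message queues. It is my own illustration rather than the thesis code: an in-process BlockingCollection stands in for the NaradaBrokering publish/subscribe broker, and the UiEvent and RenderMsg types are hypothetical.

```csharp
// Minimal sketch (not the thesis implementation): View and Model decoupled by message
// queues that stand in for the publish/subscribe broker used in M-MVC.
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class UiEvent { public string Kind; public int X, Y; }    // "events as messages"
class RenderMsg { public string Drawing; }                 // "rendering as messages"

class MMvcSketch
{
    static void Main()
    {
        var toModel = new BlockingCollection<UiEvent>();   // View -> Model channel
        var toView = new BlockingCollection<RenderMsg>();  // Model -> View channel

        // Model: consumes UI events, updates its semantic state, publishes rendering messages.
        Task.Run(() =>
        {
            foreach (var ev in toModel.GetConsumingEnumerable())
                toView.Add(new RenderMsg { Drawing = "redraw after " + ev.Kind +
                                                     " at (" + ev.X + "," + ev.Y + ")" });
            toView.CompleteAdding();
        });

        // View: publishes raw UI events, consumes rendering messages, and displays them.
        toModel.Add(new UiEvent { Kind = "mousedown", X = 10, Y = 20 });
        toModel.CompleteAdding();
        foreach (var msg in toView.GetConsumingEnumerable())
            Console.WriteLine(msg.Drawing);
    }
}
```

Because each half touches only messages, either side can be placed behind a broker or replicated without changing the other, which is what makes the automatic collaboration described above possible.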

  8. Collaboration Paradigms I [Figure: (a) Single Model Multiple View, one Model serving Views 1 through n; (b) Multiple Model Multiple View, Models 1 through m paired with Views 1 through n] • SMMV vs. MMMV as MVC interactive patterns • Flynn's Taxonomy classifies parallel computing platforms into four types: SISD, MISD, SIMD, and MIMD • SIMD: a single control unit dispatches instructions to each processing unit • MIMD: each processor is capable of executing a different program independent of the other processors, enabling asynchronous processing • SMMV generalizes the concept of SIMD • MMMV generalizes the concept of MIMD • In practice, SMMV and MMMV patterns can be applied in both asynchronous and synchronous applications, and thus form general collaboration paradigms

  9. Collaboration Paradigms II [Figure: SMMV and MMMV deployments over NaradaBrokering, with the Model exposed as a Web Service and master/other client Views running identical programs that receive identical events through the brokers] • Monolithic collaboration • CGL applications: PowerPoint, OpenOffice, and data visualization • Collaboration paradigms deployed with the M-MVC model • SMMV (e.g., instructor-led learning) • MMMV (e.g., participatory learning)

  10. SVG Experiments I [Figure: screenshots of the collaborative SVG chess game, showing players and observers] • Monolithic SVG experiments • Collaborative SVG browser • Collaborative SVG chess game

  11. SVG Experiments II [Figure: the SVG browser decomposed across Machines A, B, and C; the View (client) holds the JavaScript input, mirrored DOM tree, GVT tree, and their event processors; the Model (service) holds the DOM tree before and after mutation; the two halves communicate through the NaradaBrokering notification service, with timestamps T0 through T4 marking the pipeline stages] • The SVG browser is decomposed into the stages of a pipeline • T0: the time a user event, such as a mouse click, is sent from the View to the Model • T1: a single user event can generate multiple associated DOM change events transmitted from the Model back to the View; T1 is the arrival time at the View of the first of these • T2: the arrival time of the last of these events from the Model and the start of processing the set of events in the GVT tree • T3: the start of the rendering stage • T4: the end of the rendering stage
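
The performance analysis below works with differences of these timestamps, such as T1-T0. As a small illustration only (written in C# for consistency with the later CCR sketches, not the Java/Batik browser code that was actually instrumented, and with hypothetical names), stage marks can be recorded and differenced like this:

```csharp
// Minimal sketch, not the thesis instrumentation: record a timestamp at each pipeline
// stage boundary and report the deltas (e.g., T1-T0) used in the performance analysis.
using System;
using System.Collections.Generic;
using System.Diagnostics;

class StageTimer
{
    static readonly Stopwatch Clock = Stopwatch.StartNew();
    static readonly Dictionary<string, double> Marks = new Dictionary<string, double>();

    static void Mark(string stage)
    {
        Marks[stage] = Clock.Elapsed.TotalMilliseconds;    // e.g. Mark("T0") when the event leaves the View
    }

    static double Delta(string from, string to)
    {
        return Marks[to] - Marks[from];
    }

    static void Main()
    {
        Mark("T0");    // user event sent from the View to the Model
        // ... Model processing, broker transit, first DOM change event arrives at the View ...
        Mark("T1");
        Console.WriteLine("T1-T0 = {0:F3} ms", Delta("T0", "T1"));
    }
}
```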

  12. Performance Analysis I • Average Performance of Mouse Events

  13. Performance Analysis II • Immediate bouncing back event

  14. Performance Analysis III • Basic NB performance in 2 hops and 4 hops

  15. Comparison of performance results highlighting the importance of the client [Figure: histograms of events per 5 ms bin against time T1-T0 in milliseconds, shown for all events, mousedown, mouseup, and mousemove] Two configurations are compared: (a) NB on the View, with Model and View on two desktop PCs with “high-end” graphics, a 3 GHz Pentium Dell for the View and a 1.5 GHz Dell for the Model, connected through a local switch; (b) NB on the Model, with Model and View on two 1.5 GHz desktop PCs connected through a local switch.

  16. Comparison of performance results with local and remote NB locations [Figure: histograms of events per 5 ms bin against time T1-T0 in milliseconds, shown for all events, mousedown, mouseup, and mousemove] Two configurations are compared: (a) NB on the 8-processor Solaris server ripvanwinkle, with Model and View on two 1.5 GHz desktop PCs connected remotely through routers; (b) NB on a local 2-processor Linux server, with Model and View on two 1.5 GHz desktop PCs connected through a local switch.

  17. Observations • The client-to-server-and-back transit time is only 20% of the total processing time in the local examples • The overhead of the Web Service decomposition is not directly measured in the tests shown in these tables • The changes in T1-T0 in each row reflect the different network transit times as we move the server from local to organization-wide locations • The overhead of NaradaBrokering itself is 5-15 milliseconds, depending on the operating mode of the broker, in simple stand-alone measurements; it consists of forming message objects, serialization, and network transit time over four hops (client to broker, broker to server, server to broker, broker to client) • The contribution of NaradaBrokering to T1-T0 is about 30 milliseconds in preliminary measurements, due to the extra thread scheduling inside the operating system and the interfacing with the complex SVG application • We expect the main impacts to be the algorithmic effect of breaking the code into two parts, the network and broker overhead, and thread scheduling by the OS • We expect our architecture to work dramatically better on multi-core chips • Further, the Java runtime has poor thread performance and could be made much faster

  18. Summary of Thesis Research • Proposing an “explicit Message-based MVC” paradigm (M-MVC) as the general architecture of Web applications • Demonstrating an approach to building “collaboration as a Web service” through monolithic SVG experiments • Bridging the gap between desktop and Web applications by giving an existing desktop application a Web service interface through “M-MVC in a publish/subscribe scheme” • As an experiment, we converted a desktop application into a distributed system by changing the architecture from method-based MVC to message-based MVC • Proposing Multiple Model Multiple View (MMMV) and Single Model Multiple View (SMMV) collaboration as the general architecture of the “collaboration as a Web service” model • Identifying some of the key factors that influence the performance of message-based Web applications, especially those with rich Web content, high client interactivity, and complex rendering

  19. High Performance Multi-core Runtime • Multi-core architectures are expected to be the future of “Moore's Law”, with single-chip performance coming from parallelism across multiple cores rather than from increased clock speed and sequential architecture improvements • This implies parallelism should be used in all applications, not just the familiar scientific and engineering areas • The runtime could be message passing in all cases; it is interesting to compare and try to unify the runtimes for MPI (the classic scientific technology), objects, and services, which are all message based • We have finished an analysis of the Concurrency and Coordination Runtime (CCR) and the DSS service runtime

  20. Research Question: what is the “core” multicore runtime and what is its performance? • Many parallel and/or distributed programming models are supported by a runtime consisting of long-running or dynamic threads exchanging messages • Those coming from distributed computing often have overheads of a millisecond or more when ported to multicore (see the M-MVC thesis results earlier) • We need microsecond-level performance on all models, like the best MPI • Examination of Microsoft CCR suggests this will be possible • The current CCR, spawning threads in MPI mode, has a 2-4 microsecond overhead • Two-way service-style messages take around 30 microseconds • What are the messaging primitives (adding to MPI) and what is their performance?
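
As a minimal sketch of the kind of primitive these numbers refer to, assuming the Microsoft.Ccr.Core library from the CCR/DSS release (this is my own illustration, not the benchmark code): a message posted to a CCR Port activates, in effect spawns, a handler task on one of the dispatcher's threads, and it is the cost of this post/activate/handle path that the microsecond figures describe.

```csharp
// Minimal CCR sketch: posting to a Port activates a one-shot handler on a Dispatcher thread.
using System;
using System.Threading;
using Microsoft.Ccr.Core;

class CcrReceiveSketch
{
    static void Main()
    {
        using (var dispatcher = new Dispatcher(4, "worker pool"))     // 4 threads, one per core
        {
            var queue = new DispatcherQueue("tasks", dispatcher);
            var port = new Port<int>();
            var done = new AutoResetEvent(false);

            // Non-persistent receive: fires once for the next item posted to the port.
            Arbiter.Activate(queue, Arbiter.Receive(false, port, (int value) =>
            {
                Console.WriteLine("handled message " + value);
                done.Set();
            }));

            port.Post(42);    // spawns the handler as a task on the dispatcher
            done.WaitOne();
        }
    }
}
```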

  21. [Figures: Intel Fall 2005 multicore roadmap; the March 2006 Sun T1000 8-core server; a December 2006 Dell Intel-based machine with 2 processors, each with 4 cores]

  22. Summary of CCR and DSS Project • CCR is a message-based runtime supporting interacting concurrent threads with high efficiency • It replaces the CLR thread pool with iteration • DSS is a service (not a Web Service) environment designed for robotics, which has many control and analysis modules implemented as services and linked by workflow • DSS is built on CCR and released by Microsoft • We used a 2-processor, 2-core AMD Opteron and a 2-processor, 2-core Intel Xeon and looked at CCR and DSS performance • For CCR we chose message patterns similar to those used in MPI • For DSS we chose simple one-way and two-way message exchanges between 2 services • This is a first step in examining the possibility of linking scientific and more general runtimes and seeing whether we can get very high performance in all cases • We see, for example, about 50 times better performance than the Java runtime used in the thesis

  23. Implementing CCR Performance Measurements • CCR is written in C# and we built a suite of test programs in this language • Multi-threaded performance analysis tools: • On the AMD machine there is the free CodeAnalyst Performance Analyzer; it lets one see how work is assigned to threads, but it cannot provide the microsecond resolution needed for this work • The Intel thread analyzer (VTune) does not currently support C# or Java • The Microsoft Visual Studio 2005 Team Suite Performance Analyzer has no support for WOW64 or x64 yet • We looked at several thread message exchange patterns similar to the basic Exchange and Shift in MPI • We took a basic computation whose smallest unit took about 1.4 (AMD) to 1.5 (Intel) microseconds • We typically ran 10^7 such units on each core, taking 14 or 15 seconds • We divided this run into from 1 to 10^7 stages, where at the end of each stage the threads sent messages (in various patterns) to the threads that continued the computation • We measured total execution time as a function of the number of stages used, with the 1-stage run having no messaging overhead
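
The sketch below shows the measurement idea, again assuming the Microsoft.Ccr.Core API and using a stand-in compute kernel of my own. The real tests ran the pattern on all four cores with the Pipeline, Shift, Two Shifts, and Exchange patterns shown in the following slides; this single-handoff version only illustrates how the per-stage overhead is extracted by comparing a staged run against the 1-stage run.

```csharp
// Minimal sketch (not the report's benchmark): run a fixed amount of computation split
// into S stages with a CCR message handoff between stages, and estimate the per-stage
// overhead by comparison with the 1-stage run.
using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.Ccr.Core;

class StageOverheadSketch
{
    const long TotalUnits = 10000000;    // each unit ~1.4-1.5 microseconds in the report's kernel
    static double Sink;                  // keeps results live so the compute loop is not removed

    static double ComputeUnit(double x)  // stand-in for the basic computation unit
    {
        for (int i = 0; i < 100; i++) x = Math.Sqrt(x + 1.0);
        return x;
    }

    static double RunWithStages(long stages, DispatcherQueue queue)
    {
        var port = new Port<int>();
        var stageDone = new AutoResetEvent(false);
        // Persistent receive: one handler activation per stage-completion message.
        Arbiter.Activate(queue, Arbiter.Receive(true, port, (int s) => stageDone.Set()));

        var clock = Stopwatch.StartNew();
        long unitsPerStage = TotalUnits / stages;
        double x = 2.0;
        for (long s = 0; s < stages; s++)
        {
            for (long u = 0; u < unitsPerStage; u++) x = ComputeUnit(x);
            port.Post((int)s);           // the message handoff that separates stages
            stageDone.WaitOne();
        }
        Sink = x;
        return clock.Elapsed.TotalSeconds;
    }

    static void Main()
    {
        using (var dispatcher = new Dispatcher(4, "bench"))
        {
            var queue = new DispatcherQueue("bench queue", dispatcher);
            double baseline = RunWithStages(1, queue);      // effectively no messaging overhead
            long stages = 500000;
            double staged = RunWithStages(stages, queue);
            Console.WriteLine("overhead per stage ~ {0:F2} microseconds",
                (staged - baseline) / stages * 1e6);
        }
    }
}
```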

  24. Typical Thread Analysis Data View

  25. [Figure: the Pipeline pattern in CCR; at the end of each stage every thread posts a message to the next thread's port and receives from its own port before the next stage] Pipeline is the simplest loosely synchronous execution in CCR. Note that CCR supports a thread-spawning model; MPI usually uses fixed threads with message rendezvous.

  26. [Figure: Threads 0 through 3 post messages from their ports to a shared EndPort, which gathers them before the next stage] An idealized loosely synchronous endpoint (broadcast) in CCR; an example of an MPI-style collective in CCR.
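
A minimal sketch of such a collective endpoint, under the assumption that CCR's Arbiter.MultipleItemReceive primitive is used as described in the report: the handler fires only after all four workers have posted to the shared port, giving the rendezvous-like behavior of an MPI collective.

```csharp
// Minimal CCR sketch of a collective-style endpoint: N workers post to one shared port,
// and a MultipleItemReceive handler fires only after all N items have arrived.
using System;
using System.Threading;
using Microsoft.Ccr.Core;

class CollectiveSketch
{
    static void Main()
    {
        const int workers = 4;
        using (var dispatcher = new Dispatcher(workers, "cores"))
        {
            var queue = new DispatcherQueue("collective", dispatcher);
            var endPort = new Port<double>();
            var done = new AutoResetEvent(false);

            // Fires once, when exactly 'workers' items have been posted to endPort.
            Arbiter.Activate(queue, Arbiter.MultipleItemReceive(false, endPort, workers,
                (double[] partials) =>
                {
                    double sum = 0;
                    foreach (var p in partials) sum += p;
                    Console.WriteLine("combined result = " + sum);
                    done.Set();
                }));

            // Each "core" contributes a partial result to the shared endpoint.
            for (int i = 0; i < workers; i++)
                endPort.Post(i * 1.5);

            done.WaitOne();
        }
    }
}
```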

  27. [Figure: each of Threads 0 through 3 writes exchanged messages to its neighbors' ports and then reads the messages arriving on its own port] Exchanging messages with a 1D torus Exchange topology for loosely synchronous execution in CCR.

  28. [Figure: the four communication patterns, (a) Pipeline, (b) Shift, (c) Two Shifts, and (d) Exchange, drawn as Threads 0 through 3 posting to Ports 0 through 3] Four communication patterns used in the CCR tests. (a) and (b) use CCR Receive, while (c) and (d) use CCR Multiple Item Receive.

  29. [Chart: run time in seconds against the number of stages in millions for the 4-way Pipeline pattern with 4 dispatcher threads on the HP Opteron; the gap above the computation component that would remain with no overhead gives an overhead of 8.04 microseconds per stage, averaged from 1 to 10 million stages] A fixed amount of computation (4 × 10^7 units) divided across 4 cores and into from 1 to 10^7 stages on the HP Opteron multicore; each stage is separated by reading and writing CCR ports in Pipeline mode.

  30. [Chart: run time in seconds against the number of stages in millions for the 4-way Pipeline pattern with 4 dispatcher threads on the Dell Xeon; the gap above the computation component that would remain with no overhead gives an overhead of 12.40 microseconds per stage, averaged from 1 to 10 million stages] A fixed amount of computation (4 × 10^7 units) divided across 4 cores and into from 1 to 10^7 stages on the Dell Xeon multicore; each stage is separated by reading and writing CCR ports in Pipeline mode.

  31. Summary of Stage Overheads for the AMD Machine [table not reproduced in the transcript] These are stage-switching overheads for a set of runs with different levels of parallelism and different message patterns; each stage takes about 28 microseconds (500,000 stages).

  32. Summary of Stage Overheads for the Intel Machine [table not reproduced in the transcript] These are stage-switching overheads for a set of runs with different levels of parallelism and different message patterns; each stage takes about 30 microseconds, with the AMD overheads shown in parentheses. These measurements are equivalent to MPI latencies.

  33. AMD Bandwidth Measurements • The previous measurements used small messages and therefore measured latency; we did a further set of bandwidth measurements by exchanging larger messages of different sizes between threads • We used three types of data structures for receiving data: • An array inside the thread equal to the message size • An array outside the thread equal to the message size • Data stored sequentially in a large array (a “stepped” array) • For both AMD and Intel, the total bandwidth is 1 to 2 gigabytes/second
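
As a rough sketch of the bandwidth measurement idea (my own illustration, not the report's code) for the “stepped” array case: fixed-size message payloads are copied into successive segments of one large preallocated array, and the bandwidth follows from the copy time.

```csharp
// Minimal sketch: copy fixed-size message payloads into successive "steps" of a large
// array and derive an approximate copy bandwidth in gigabytes/second.
using System;
using System.Diagnostics;

class BandwidthSketch
{
    static void Main()
    {
        int messageWords = 100000;                    // 10^5 double words per message ("small" case)
        int messages = 100;
        double[] payload = new double[messageWords];  // fixed payload, created outside the timed loop
        double[] stepped = new double[messageWords * messages];

        var clock = Stopwatch.StartNew();
        for (int m = 0; m < messages; m++)
            Array.Copy(payload, 0, stepped, m * messageWords, messageWords);
        clock.Stop();

        double bytes = (double)messageWords * messages * sizeof(double);
        Console.WriteLine("copy bandwidth ~ {0:F2} gigabytes/second",
            bytes / clock.Elapsed.TotalSeconds / 1e9);
    }
}
```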

  34. Intel Bandwidth Measurements • For bandwidth, the Intel did better than the AMD, especially when one exploited the on-chip cache with small transfers • For both AMD and Intel, each stage executed a computational task after copying data arrays of size 10^5 (labeled small), 10^6 (labeled large), or 10^7 double words; the last column is an approximate value in microseconds of the compute time for each stage • Note that copying 100,000 double-precision words on each of the four cores at a combined bandwidth of a gigabyte/second takes 3200 µs • The data to be copied (the message payload in CCR) is fixed, and its creation time is outside the timed process

  35. [Chart: run time in seconds against array size in millions of double words for the 4-way Pipeline pattern with 4 dispatcher threads on the Dell Xeon, showing a slope change (cache effect); the total bandwidth is 1.0 gigabytes/second up to one million double words and 1.75 gigabytes/second up to 100,000 double words] Typical bandwidth measurements showing the effect of cache as a slope change: 5,000 stages, with run time plotted against the size of the double array copied in each stage from the thread to stepped locations in a large array on the Dell Xeon multicore.

  36. DSS Service Measurements [Chart: timing on the HP Opteron multicore as a function of the number of simultaneous two-way service messages processed (November 2006 DSS release)] • CGL measurements of Axis 2 show about 500 microseconds per message; DSS is about 10 times better.
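
For scale, here is a minimal sketch of how a two-way message latency of this kind can be measured between two in-process endpoints built directly on CCR ports. This is not DSS code (DSS layers its service model on top of CCR), and the port and variable names are my own.

```csharp
// Minimal sketch (not DSS): average round-trip time of two-way request/response
// messages between two in-process "services" built on CCR ports.
using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.Ccr.Core;

class TwoWayMessageSketch
{
    static void Main()
    {
        using (var dispatcher = new Dispatcher(2, "services"))
        {
            var queue = new DispatcherQueue("two-way", dispatcher);
            var requestPort = new Port<int>();
            var responsePort = new Port<int>();

            // "Service" side: persistently answer each request with a response message.
            Arbiter.Activate(queue, Arbiter.Receive(true, requestPort,
                (int req) => responsePort.Post(req + 1)));

            // "Client" side: signal after each response arrives.
            var gotResponse = new AutoResetEvent(false);
            Arbiter.Activate(queue, Arbiter.Receive(true, responsePort,
                (int resp) => gotResponse.Set()));

            const int rounds = 100000;
            var clock = Stopwatch.StartNew();
            for (int i = 0; i < rounds; i++)
            {
                requestPort.Post(i);
                gotResponse.WaitOne();    // wait for the matching response
            }
            clock.Stop();
            Console.WriteLine("average two-way latency ~ {0:F1} microseconds",
                clock.Elapsed.TotalMilliseconds * 1000.0 / rounds);
        }
    }
}
```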

  37. References • Thesis for download: http://grids.ucs.indiana.edu/~xqiu/dissertation.html • Thesis project: http://grids.ucs.indiana.edu/~xqiu/research.html • Publications and presentations: http://grids.ucs.indiana.edu/~xqiu/publication.html • NaradaBrokering open source messaging system: http://www.naradabrokering.org • Community Grids Lab projects and publications: http://grids.ucs.indiana.edu/ptliupages/ • Xiaohong Qiu, Geoffrey Fox, and Alex Ho, "Analysis of Concurrency and Coordination Runtime CCR and DSS for Parallel and Distributed Computing", technical report, November 2006 • Shameem Akhter and Jason Roberts, Multi-Core Programming, Intel Press, April 2006
