
University of Wisconsin at Madison, November 29, 2007

The Dataflow Interchange Format (DIF): A Framework for Specifying, Analyzing, and Integrating Dataflow Representations of Signal Processing Systems.



Presentation Transcript


1. The Dataflow Interchange Format (DIF): A Framework for Specifying, Analyzing, and Integrating Dataflow Representations of Signal Processing Systems. Shuvra S. Bhattacharyya, University of Maryland at College Park, with contributions from Chia-Jui Hsu, Ivan Corretjer, Ming-Yung Ko, and William Plishker. University of Wisconsin at Madison, November 29, 2007

2. Motivation
• Implementation of digital signal processing (DSP) applications on systems-on-chip (SoCs) is a multi-faceted problem.
• A typical design flow consists of several complex steps.
• Implementation constraints and objectives are multidimensional and complex.
• Dataflow-based analysis and optimization provide an effective class of techniques in the design flow for deriving more efficient solutions.

3. DSP Hardware and Software Implementation
• Wide range of established and emerging processing architectures: programmable DSPs, FPGAs, embedded multiprocessors, ASICs, etc.
• Multi-dimensional implementation constraints:
  • Performance (latency, throughput, ...)
  • Energy and power consumption
  • Buffering constraints and memory requirements
  • Cost
  • Accuracy
• DIF provides a framework for designing, optimizing, integrating, and evolving software/firmware code in designs involving heterogeneous components and constraints.

4. Dataflow Models of Computation
• Used widely in design tools for DSP system design.
• The application is modeled as a directed graph:
  • Nodes (actors, components) represent functions.
  • Edges represent communication channels between functions.
  • Nodes produce data onto and consume data from edges.
  • Edges buffer data in FIFO (first-in, first-out) fashion.
• Data-driven execution model:
  • A node can execute whenever it has sufficient data on its input edges.
  • The order in which nodes execute is not part of the specification; it is typically determined by the compiler, the hardware, or both.
• Iterative execution:
  • The loop body is iterated a large or infinite number of times.
  • Static periodic schedules are often used: a schedule is constructed once and then executed over and over, for as many samples as need to be processed.
  • Bounded-memory verification is crucial for such schedules because they operate on large, sometimes unbounded, volumes of input data.
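To make the firing rule and FIFO buffering concrete, here is a minimal, self-contained Java sketch; the class and field names are illustrative only, not taken from the DIF package. It models a single edge whose sink may fire only while enough tokens are available:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of the data-driven firing rule on one FIFO edge.
final class Edge {
    final Deque<Integer> fifo = new ArrayDeque<>(); // FIFO buffer on the edge
    final int produced;  // tokens written per firing of the source node
    final int consumed;  // tokens read per firing of the sink node
    Edge(int produced, int consumed) { this.produced = produced; this.consumed = consumed; }
}

public class SdfDemo {
    public static void main(String[] args) {
        Edge e = new Edge(2, 3); // source produces 2 tokens/firing; sink consumes 3

        // Fire the source three times: 3 * 2 = 6 tokens accumulate on the edge.
        for (int firing = 0; firing < 3; firing++)
            for (int i = 0; i < e.produced; i++) e.fifo.add(firing);

        // Firing rule: the sink may execute only while it has sufficient input data.
        while (e.fifo.size() >= e.consumed) {
            for (int i = 0; i < e.consumed; i++) e.fifo.remove();
            System.out.println("sink fired; tokens remaining: " + e.fifo.size());
        }
        // A static periodic schedule would repeat this (3 source, 2 sink) pattern
        // indefinitely with a provably bounded buffer on the edge.
    }
}
```

Running this prints two sink firings and leaves zero tokens buffered, which is exactly why the (3 source, 2 sink) pattern can repeat forever in bounded memory.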

5. Example: Dataflow-Based Design for DSP. (Figure: example from the Agilent ADS tool.)

6. Dataflow Example: QAM Transmitter in National Instruments LabVIEW. (Figure: block diagram with Rate Control, QAM Encoder, Transmit Filters, and Passband Signal stages. Source: [Evans 2005].)

7. Dataflow Features and Advantages
• We emphasize that we employ dataflow primarily as a programming model.
• Exposes high-level structure that facilitates analysis, verification, and optimization.
• Captures multi-rate behavior.
• Encourages desirable software engineering practices: modularity and code reuse.
• Intuitive to DSP algorithm designers: signal flow graphs.
• A wide range of programming languages can be derived from the dataflow framework, depending on the specific form of dataflow (the dataflow model) chosen as the semantic basis for the language.

8. DSP Modeling Design Space: Co-design of Dataflow Models and Transformations. (Figure: dataflow models, including SDF, CSDF, SSDF, MDSDF, PSDF, PCSDF, BDF, WBDF, DDF, and C, positioned along axes of expressive power versus simplicity/intuitive appeal and amenability to verification/synthesis.)

9. DSP Modeling Co-design: Key Challenges
• Providing increasing expressive power while leveraging techniques and properties from more restricted dataflow models, for example, synchronous dataflow (SDF).
• Designing static, dynamic, and hybrid scheduling techniques: a common bridge between abstract application models and concrete implementations.
• Handling large-scale and repetitive graph structures.
• Providing common representations and analysis techniques across related dataflow models and for meta-modeling techniques.
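For intuition on the kind of SDF property being leveraged: in a consistent SDF graph, every edge (u, v) with per-firing production rate p and consumption rate c must satisfy the balance equation q(u) * p = q(v) * c, where q is the repetitions vector of minimal firing counts per schedule period. The following Java sketch (illustrative rates, not DIF package code) solves these equations for a three-actor chain A -> B -> C:

```java
// Sketch: repetitions vector for an SDF chain A -> B -> C.
// Edge i carries production rate p[i] and consumption rate c[i];
// each balance equation is q[i] * p[i] == q[i+1] * c[i].
public class Repetitions {
    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }

    public static void main(String[] args) {
        long[] p = {2, 3};  // A produces 2/firing onto edge 0; B produces 3 onto edge 1
        long[] c = {3, 4};  // B consumes 3/firing from edge 0; C consumes 4 from edge 1
        int n = p.length + 1;
        long[] num = new long[n], den = new long[n];
        num[0] = 1; den[0] = 1;            // start from q[0] = 1, track fractions
        for (int i = 0; i < p.length; i++) {   // q[i+1] = q[i] * p[i] / c[i]
            num[i + 1] = num[i] * p[i];
            den[i + 1] = den[i] * c[i];
            long g = gcd(num[i + 1], den[i + 1]);
            num[i + 1] /= g; den[i + 1] /= g;
        }
        long lcm = 1;                       // scale by LCM of denominators
        for (int i = 0; i < n; i++) lcm = lcm / gcd(lcm, den[i]) * den[i];
        for (int i = 0; i < n; i++)
            System.out.println("q[" + i + "] = " + num[i] * (lcm / den[i]));
    }
}
```

The sketch prints q = [6, 4, 3]: firing A six times, B four, and C three returns every buffer to its initial state, which is what makes a bounded static periodic schedule possible.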

10. High-Level Dataflow Transformations
• A well-designed dataflow representation exposes opportunities for high-level algorithm and architecture transformations.
• High level of abstraction → high implementation impact.
• Dataflow representations are suitable for behavior-level modeling, structural modeling, and mixed behavior-structure modeling.
  • Transformations can be applied to all three types of representations to focus subsequent steps of the design flow on more favorable solutions.
• Complementary to advances in:
  • C compiler technology (intra-actor functionality)
  • Object-oriented methods (library management, application service management)
  • Hardware (HDL) synthesis (intra-actor functionality)

11. Representative Classes of Dataflow Transformations
• Clustering of actors into atomic scheduling units, to incrementally constrain the design space
• Buffer minimization: minimize communication cost
• Multirate loop scheduling: optimize the code/data trade-off
• Parallel scheduling and pipeline configuration
• Heterogeneous task mapping and co-synthesis
• Quasi-static scheduling: minimize run-time overhead
• Probabilistic design: adapt system resources and exploit slack
• Data partitioning: exploit parallel data memories
• Vectorization: improve context switching and pipelining
• Synchronization optimization: efficient self-timed implementation
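As a flavor of the buffer-minimization and code/data trade-offs listed above, this Java sketch (hypothetical rates and schedules, not DIF package code) compares the buffer one SDF edge needs under a single-appearance schedule versus an interleaved schedule:

```java
// Sketch: for an edge with production 2 and consumption 3 (repetitions 3 and 2),
// compare the buffer demanded by schedule AAABB against interleaved AABAB.
public class BufferDemo {
    static int maxTokens(String schedule, int prod, int cons) {
        int tokens = 0, max = 0;
        for (char actor : schedule.toCharArray()) {
            if (actor == 'A') tokens += prod;   // source firing writes tokens
            else tokens -= cons;                // sink firing reads tokens
            max = Math.max(max, tokens);
        }
        return max;                             // peak edge occupancy
    }

    public static void main(String[] args) {
        System.out.println("AAABB needs buffer " + maxTokens("AAABB", 2, 3)); // 6
        System.out.println("AABAB needs buffer " + maxTokens("AABAB", 2, 3)); // 4
    }
}
```

Both schedules fire A three times and B twice, but the interleaved one caps the edge at 4 tokens instead of 6, at the cost of a longer schedule body; multirate loop scheduling navigates exactly this code-size versus buffer-size trade-off.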

12. Dataflow Interchange Format (DIF) Project
• Designing a standard, textual language for specifying dataflow semantics:
  • Dataflow Interchange Format [5, 6]
  • DIF support for configuration of complex graphs [1]
• Developing a software package for working with, developing, and integrating dataflow models and techniques:
  • The DIF package [5]
• Porting DSP applications across design tools:
  • DIF-based porting methodology [4]
• Automated derivation of implementations from dataflow modeling specifications:
  • DIF-to-C software synthesis framework [3]
• Integrating VSIPL support in DIF:
  • Intermediate actor library [2]
  • DIF-to-VSIPL software synthesis [2]

13. The DIF Package
• DIF representation: Java classes for representing and manipulating dataflow graphs.
• DIF front end (language parser): translates between DIF specifications and DIF representations.
• Algorithm implementations: dataflow-based analysis, scheduling, and optimization.
• Infrastructure: porting and software synthesis.
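To give a feel for what "Java classes for representing and manipulating dataflow graphs" can look like, here is a hedged sketch. Every identifier in it (DataflowGraph, Node, Edge, setRates) is an illustrative placeholder, not the actual DIF package API:

```java
import java.util.ArrayList;
import java.util.List;

// Placeholder classes mimicking the structure described on this slide.
class Node { final String name; Node(String n) { name = n; } }
class Channel {
    final Node src, snk; int prod = 1, cons = 1;
    Channel(Node s, Node t) { src = s; snk = t; }
    Channel setRates(int p, int c) { prod = p; cons = c; return this; }
}
class DataflowGraph {
    final String name;
    final List<Node> nodes = new ArrayList<>();
    final List<Channel> edges = new ArrayList<>();
    DataflowGraph(String n) { name = n; }
    Node addNode(String n) { Node v = new Node(n); nodes.add(v); return v; }
    Channel addEdge(Node s, Node t) { Channel e = new Channel(s, t); edges.add(e); return e; }
}

public class DifStyleDemo {
    public static void main(String[] args) {
        // A front-end parser would translate a textual DIF specification into
        // such a graph; here we build the representation programmatically.
        DataflowGraph g = new DataflowGraph("qam_tx");
        Node src = g.addNode("source"), enc = g.addNode("qam_encoder");
        g.addEdge(src, enc).setRates(1, 4);  // 1 token produced, 4 consumed per firing
        System.out.println(g.name + ": " + g.nodes.size() + " nodes, "
                + g.edges.size() + " edge(s)");
        // The "algorithm implementations" layer (analysis, scheduling,
        // optimization) would operate on this in-memory representation.
    }
}
```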

14. DIF-Based Design Flow for Hardware Synthesis
• Inputs: HDL-based module libraries, encapsulated and characterized as dataflow components; application constraints and optimization objectives.
• Flow: DIF-based application model → dataflow graph analysis and transformations → code generation → HDL implementation (Verilog/VHDL) → conventional HDL-based synthesis flow.
• The DIF-based application model can be derived automatically from another dataflow environment, or written directly in DIF.
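The code-generation step of such a flow can be pictured as a graph traversal that instantiates one HDL module per actor and one channel per edge. The Java sketch below is illustrative only; the module and signal names are invented, whereas a real flow would draw them from the characterized module libraries:

```java
// Hedged sketch of code generation: emit a skeletal Verilog top module
// for a three-actor chain, with one 16-bit channel wire per edge.
public class HdlCodeGen {
    public static void main(String[] args) {
        String[] actors = {"src", "fir", "snk"};        // a linear actor chain
        StringBuilder v = new StringBuilder("module top(input clk);\n");
        for (int i = 0; i + 1 < actors.length; i++)     // one channel per edge
            v.append("  wire [15:0] ch").append(i).append(";\n");
        for (int i = 0; i < actors.length; i++) {       // one instance per actor
            String in  = i > 0 ? ".din(ch" + (i - 1) + "), " : "";
            String out = i + 1 < actors.length ? ".dout(ch" + i + "), " : "";
            v.append("  ").append(actors[i]).append(" u").append(i)
             .append("(").append(in).append(out).append(".clk(clk));\n");
        }
        v.append("endmodule\n");
        System.out.println(v);                          // hand off to HDL synthesis
    }
}
```

The printed text is a top-level Verilog module wiring src, fir, and snk through ch0 and ch1, the kind of artifact that would then enter the conventional HDL-based synthesis flow.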

15. Trends in FPGA-Based System Design
• Heterogeneous resources:
  • Hard vs. soft processor cores
  • Fabric for implementing random logic
  • Domain-specific accelerators (e.g., "DSP slices")
  • Different kinds of memory resources
  • Potential to customize the interprocessor communication network
• Optimized IP (intellectual property) modules are highly specialized for specific platforms.
• Performance/cost trade-offs are highly parameter- and platform-specific.

16. DIF-Based Design Flow for FPGA Implementation
• Inputs: HDL-based and black-box IP module libraries; constraints on performance and FPGA resource utilization; characterization of performance and resource costs; platform-specific resource-sharing models.
• Flow: DIF-based application model → dataflow graph analysis and transformations → code generation → HDL implementation → conventional HDL-based synthesis flow.

17. Next Steps
• Starting with the current trigger system, model components and subsystems as DIF actors:
  • Determine a suitable dataflow model for each module.
  • Revise the interface of the module implementation so that it is consistent with this dataflow model (a hedged sketch of such an interface follows below).
  • The module can then be represented as a DIF actor in any DIF program, regardless of how the module is implemented internally.
• Model system-level interfaces and performance constraints in terms of DIF graph properties.
• Continue our ongoing work on CAL-to-DIF translation:
  • Lays a foundation for integrating DIF-based analysis and transformation techniques with the CAL language and Xilinx back-end capabilities.
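As a concrete picture of revising a module interface to match a dataflow model, the following Java sketch wraps a module behind fixed per-firing consumption and production rates, so a scheduler can treat it as an actor without seeing its internals. The SdfActor interface and the ThresholdTrigger module are invented for illustration and are not part of DIF or the trigger system:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hedged sketch: an SDF-style actor interface for wrapping existing modules.
interface SdfActor {
    int tokensConsumed();   // fixed per-firing consumption rate
    int tokensProduced();   // fixed per-firing production rate
    void fire(Deque<Integer> in, Deque<Integer> out);
}

// A hypothetical trigger-system module, wrapped so that a scheduler can
// treat it as an actor regardless of how it is implemented internally.
class ThresholdTrigger implements SdfActor {
    public int tokensConsumed() { return 2; }
    public int tokensProduced() { return 1; }
    public void fire(Deque<Integer> in, Deque<Integer> out) {
        int a = in.remove(), b = in.remove();  // consume exactly 2 tokens
        out.add(a + b > 100 ? 1 : 0);          // produce exactly 1 token
    }
}

public class ActorDemo {
    public static void main(String[] args) {
        Deque<Integer> in = new ArrayDeque<>(), out = new ArrayDeque<>();
        in.add(60); in.add(70);
        SdfActor t = new ThresholdTrigger();
        if (in.size() >= t.tokensConsumed()) t.fire(in, out);  // firing rule
        System.out.println("trigger output: " + out.peek());   // prints 1
    }
}
```

Because the scheduler interacts only with the declared rates and the fire method, the same wrapped module can appear as an actor in any DIF program, which is the point of the interface revision proposed above.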
