Presentation Transcript


  1. SYNTHESIS of PIPELINED SYSTEMS for the CONTEMPORANEOUS EXECUTION of PERIODIC and APERIODIC TASKS with HARD REAL-TIME CONSTRAINTS Paolo Palazzari Luca Baldini Moreno Coli ENEA – Computing and Modeling Unit University “La Sapienza” – Electronic Engineering Dep’t

  2. Outline of Presentation • Problem statement • Asynchronous events • Mapping methodology • Searching space • Optimization by RT-PSA Algorithm • Results • Conclusions

  3. Outline of Presentation • Problem statement • Asynchronous events • Mapping methodology • Searching space • Optimization by RT-PSA Algorithm • Results • Conclusions

  4. Problem Statement • We want to synthesize a synchronous pipelined system which executes both the task PSy, sustaining its throughput, and m mutually exclusive tasks PAs1, PAs2, …, PAsm, whose activations are randomly triggered and whose results must be produced within a prefixed time.

  5. Problem Statement • We represent the tasks as Control Data Flow Graphs (CDFG) G = (N, E) • N = {n1, n2, …, nN}: the operations of the task • E ⊆ N × N: the data and control dependencies

  6. Problem Statement • Aperiodic tasks, characterized by random execution requests and called asynchronous to mark the difference with the synchronous nature of periodic tasks, are subjected to Real-Time constraints (RTC), collected in the set RTCAs = {RTCAs1, RTCAs2, ..., RTCAsm}, where RTCAsi contains the RTC on the ith aperiodic task. • Input data for the synchronous task PSy arrive with frequency fi = 1/Δt, where Δt is the period characterizing PSy.

  7. Problem Statement We present a method to determine • The target architecture: a (nearly) minimum set of HW devices to execute all the tasks (synchronous and asynchronous); • The feasible mapping onto the architecture: the allocation and the scheduling on the HW resources so that PSy is executed sustaining its prefixed throughput and all the mutually exclusive asynchronous tasks PAs1, PAs2, …, PAsm satisfy the constraints in RTCAs.

  8. Problem Statement • The adoption of a parallel system can be mandatory when Real-Time Constraints are computationally demanding • The periodic arrival of input data makes pipelined systems a very suitable solution for the problem.

  9. Problem Statement • Example of a pipeline serving the synchronous task PSy

  10. Problem Statement • Stage Sk starts at time (k−1)·Tck and ends at time k·Tck • In a pipeline with L stages, SL denotes the last stage. • DII = τ/Tck, where τ is the interval between two successive data introductions (DII: Data Introduction Interval)

  11. Problem Statement • We assume the absence of synchronization delays due to control or data dependencies: the throughput of the pipelined system is 1/DII results per clock cycle, i.e. 1/(DII·Tck) results per second.
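The timing relations above can be sketched numerically. The function and parameter names are illustrative, and the completion-time formula for n inputs is the standard pipeline relation, not taken from the slides:

```python
def stage_interval(k, Tck):
    """Stage S_k occupies the time interval [(k-1)*Tck, k*Tck)."""
    return ((k - 1) * Tck, k * Tck)

def completion_time(n_inputs, L, DII, Tck):
    """An L-stage pipeline with no synchronization delays emits its first
    result after L cycles and one result every DII cycles thereafter."""
    return (L + (n_inputs - 1) * DII) * Tck

def throughput(DII, Tck):
    """Sustained rate: one result every DII clock cycles."""
    return 1.0 / (DII * Tck)
```

With DII = 1 the pipeline accepts one new input per clock cycle; larger DII values trade throughput for fewer HW resources.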

  12. Outline of Presentation • Problem statement • Asynchronous events • Mapping methodology • Searching space • Optimization by RT-PSA Algorithm • Results • Conclusions

  13. Asynchronous events • We assume the asynchronous tasks to be mutually exclusive, i.e. the activation of only one asynchronous task can be requested between two successive activations of the periodic task.

  14. Asynchronous events Asynchronous service requests (shown in red) in a pipelined system.

  15. Asynchronous events • As for the periodic task, we represent the asynchronous tasks {PAs1, PAs2, ..., PAsm} through a set of CDFGs ASG = {AsG1(NAs1, EAs1), ... , AsGm(NAsm, EAsm)}

  16. Asynchronous events • We consider a unique CDFG made up by composing the graph of the periodic task with the m graphs of the aperiodic tasks: G(N, E) = SyG(NSy, ESy) ∪ AsG1(NAs1, EAs1) ∪ AsG2(NAs2, EAs2) ∪ … ∪ AsGm(NAsm, EAsm)
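A minimal sketch of this composition, treating each CDFG as a (nodes, edges) pair. The encoding is hypothetical; node names are assumed unique across tasks so that a plain set union is meaningful:

```python
def compose_cdfgs(sy_graph, as_graphs):
    """Build G(N, E) as the union of the periodic task's CDFG with the
    CDFGs of the m aperiodic tasks."""
    nodes, edges = set(sy_graph[0]), set(sy_graph[1])
    for as_nodes, as_edges in as_graphs:
        nodes |= set(as_nodes)
        edges |= set(as_edges)
    return nodes, edges
```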

  17. Asynchronous events • Aperiodic tasks are subjected to Real-Time constraints (RTC): • As all the RTC must be respected, the mapping function M has to define a scheduling such that ti − Di ≤ 0  ∀ RTCAsi ∈ RTCAs, where ti is the completion time of task PAsi and Di its deadline.

  18. Outline of Presentation • Problem statement • Asynchronous events • Mapping methodology • Searching space • Optimization by RT-PSA Algorithm • Results • Conclusions

  19. Mapping methodology In order to develop a pipeline system implementing G • a HW resource rj = D(nj) and • a time step Sk must be associated with each nj ∈ N

  20. Mapping methodology • We must determine the mapping function M: N → UR × S • UR is the set of the used HW resources (each rj is replicated kj times) • S is the set of time steps

  21. Mapping methodology • D(ni) is the HW resource on which ni will be executed • S(ni) is the stage of the pipeline, or the time step, in which ni will be executed

  22. Mapping methodology We search for the mapping function M’ which, for a given DII: • Respects all the RTC • Uses a minimum number ur of resources • Gives the minimum pipeline length for the periodic task

  23. Mapping methodology • The mapping is determined by solving the minimization problem min M C(M), where the cost C(M) combines three terms: • C1(M) is responsible for the fulfillment of all the RTC • C2(M) minimizes the used silicon area • C3(M) minimizes the length of the pipeline.
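The slide names the three cost terms but not how they are combined; a common choice is a weighted sum in which RTC fulfilment dominates area, which in turn dominates pipeline length. The weights below are purely illustrative:

```python
def cost(rtc_violation, used_resources, pipeline_length,
         w1=1e6, w2=1e3, w3=1.0):
    """C(M) = w1*C1(M) + w2*C2(M) + w3*C3(M), with illustrative weights.
    rtc_violation   (C1): total deadline overrun, 0 when all RTC are met
    used_resources  (C2): number ur of used HW resources (area proxy)
    pipeline_length (C3): last time step SL of the periodic task"""
    return w1 * rtc_violation + w2 * used_resources + w3 * pipeline_length
```

With w1 >> w2 >> w3, any mapping violating an RTC costs more than any feasible one, so the annealing is driven toward feasibility first.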

  24. Mapping methodology • While searching for a mapping of G, we force the response to aperiodic tasks to be synchronous with the periodic task • The execution of an aperiodic task, requested at a generic time instant t0, is delayed until the next start of the pipeline of the periodic task.

  25. Mapping methodology

  26. Mapping methodology

  27. Mapping methodology In a pipelined system with DII=1 • the used resource set is maximum • the execution time of each AsGi on the pipeline is minimum • A lower bound for the execution time of AsGi is given by the execution time of the longest path of AsGi: LpAsi denotes this lower bound, expressed in number of clock cycles
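LpAsi can be computed as the delay-weighted longest path of the task's graph. A sketch, assuming an acyclic CDFG with per-node delays given in clock cycles:

```python
from functools import lru_cache

def longest_path(nodes, edges, delay):
    """Return Lp: the length in clock cycles of the longest (critical)
    path, with delay[n] the execution time of node n in cycles."""
    succ = {n: [] for n in nodes}
    for u, v in edges:
        succ[u].append(v)

    @lru_cache(maxsize=None)
    def lp(n):  # longest path starting at node n (memoized; DAG assumed)
        return delay[n] + max((lp(s) for s in succ[n]), default=0)

    return max(lp(n) for n in nodes)
```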

  28. Mapping methodology Maximum value allowed for DII, compatible with all the RTCAsi ∈ RTCAs: • LpAsi · Tck gives the minimal execution time for AsGi • The deadline associated with AsGi is Di.

  29. Mapping methodology Maximum value allowed for DII, compatible with all the RTCAsi ∈ RTCAs (continued): • Since the request for an aperiodic task can be sensed immediately after a pipeline start, in the worst case the aperiodic task begins to be executed DII·Tck seconds after the request: at the next start of the pipeline.

  30. Mapping methodology Maximum value allowed for DII, compatible with all the RTCAsi ∈ RTCAs (continued): • A necessary condition to match all the RTCAsi ∈ RTCAs is that the lower bound of the execution time of each asynchronous task must be smaller than the associated deadline diminished by the DII, i.e. Di ≥ DII · Tck + LpAsi · Tck,  ∀ i = 1, 2, ..., m

  31. Mapping methodology • Combining the previous relations with a congruence condition between the period of the synchronous task (Δt) and the clock period (Tck), we obtain the set DIIp which contains all the admissible DII values.
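A sketch of how DIIp could be enumerated. The congruence condition is assumed here to mean that DII must divide the number of clock cycles in the period (so pipeline starts stay aligned with input arrivals); the slide does not spell it out:

```python
def admissible_dii(period_cycles, Tck, deadlines, lp):
    """Return DIIp: DII values that (i) divide the input period, expressed
    in clock cycles, and (ii) satisfy Di >= (DII + LpAsi) * Tck for every
    aperiodic task (the necessary condition of the previous slide)."""
    diip = []
    for dii in range(1, period_cycles + 1):
        if period_cycles % dii:
            continue  # assumed congruence with the period Dt = period_cycles * Tck
        if all(d >= (dii + lp_i) * Tck for d, lp_i in zip(deadlines, lp)):
            diip.append(dii)
    return diip
```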

  32. Mapping methodology Steps of the Mapping methodology: • A set of allowed values of DII is determined • A sufficient HW resource set UR0 is determined • At the end of the optimization process the number of used resources ur can be less than ur0 if mutually exclusive nodes are contained in the graph

  33. Mapping methodology Steps of the Mapping methodology (continued): • An initial feasible mapping M0 is determined; SL0 is the last time step needed to execute P by using M0. • Starting from M0, we use the Simulated Annealing algorithm to solve the minimization problem

  34. Outline of Presentation • Problem statement • Asynchronous events • Mapping methodology • Searching space • Optimization by RT-PSA Algorithm • Results • Conclusions

  35. Searching space • In order to represent a mapping function M we adopt the formalism based on the Allocation Tables t(M) • t(M) is a table with ur horizontal lines (one per used HW resource) and DII vertical sectors Osi, i = 1, 2, ..., DII • Each Osi contains the time steps Si+k·DII (k = 0, 1, 2, ...) which will be overlapped during the execution of P

  36. Searching space • Each node is assigned to a cell of t(M), i.e. it is associated with an HW resource and a time step. • For example, we consider the 23-node graph AsG1

  37. Searching space

  38. Searching space • For DII=3, a possible mapping M is described through the following t(M)
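A minimal data structure for t(M) under these conventions. The cell addressing is an assumption; the slides only fix rows = used HW resources and DII vertical sectors of overlapped time steps:

```python
class AllocationTable:
    """t(M): one row per used HW resource; time steps are grouped into DII
    vertical sectors, sector O_si holding steps S_{i + k*DII}, k = 0, 1, ..."""
    def __init__(self, resources, DII):
        self.resources, self.DII = list(resources), DII
        self.cells = {}  # (resource, time_step) -> node placed in that cell

    def place(self, node, resource, step):
        self.cells[(resource, step)] = node

    def sector(self, step):
        """1-based index of the vertical sector containing time step `step`."""
        return (step - 1) % self.DII + 1
```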

  39. Searching space An allocation table t(M) must respect both 1. the Causality condition and 2. the Overlapping condition
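A sketch of the two checks under assumed formal readings: causality (every node is scheduled in a strictly later time step than each of its predecessors) and overlapping (two nodes on the same resource must not fall in the same vertical sector, since steps Si+k·DII execute simultaneously). Mutually exclusive nodes, which may legitimately share a cell, are ignored here:

```python
def is_feasible(assign, edges, DII):
    """assign maps node -> (resource, time_step); edges are (producer, consumer)."""
    # 1. Causality condition: consumers run strictly after their producers.
    for u, v in edges:
        if assign[u][1] >= assign[v][1]:
            return False
    # 2. Overlapping condition: no two nodes may share a resource in time
    #    steps that are congruent modulo DII (same vertical sector).
    occupied = set()
    for resource, step in assign.values():
        key = (resource, step % DII)
        if key in occupied:
            return False
        occupied.add(key)
    return True
```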

  40. Searching space • We define the Ω searching space over which the minimization of C(M) must be performed. • Ω is the space containing all the feasible allocation tables: Ω = {t(M) | t(M) is a feasible mapping}; • any t(M) ∉ Ω is not feasible.

  41. Searching space • We can write the minimization problem in terms of the cost associated with the mapping M represented by the allocation table: min t(M) ∈ Ω C(t(M))

  42. Searching space • We solve the problem by using a Simulated Annealing (SA) algorithm • SA requires the generation of a sequence of points belonging to the searching space; each point of the sequence must be close, according to a given measure criterion, to its predecessor and to its successor.

  43. Searching space • As Ω consists of allocation tables, we have to generate a sequence of allocation tables t(Mi) ∈ Neigh[t(Mi−1)], where Neigh[t(M)] is the set of the allocation tables adjacent to t(M) according to some adjacency criteria
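One possible move set realizing Neigh[t(M)]. The actual adjacency criteria are defined in the paper; the move below (shift a node to an adjacent time step on its resource, or to another resource in the same step) is an assumption for illustration:

```python
import random

def neighbour(assign, resources, n_steps, rng):
    """Return a table adjacent to `assign`: one node moved to a nearby cell."""
    node = rng.choice(sorted(assign))
    resource, step = assign[node]
    moved = dict(assign)
    if rng.random() < 0.5:  # move along the row: adjacent time step
        moved[node] = (resource, max(1, min(n_steps, step + rng.choice((-1, 1)))))
    else:                   # move along the column: another resource, same step
        moved[node] = (rng.choice(resources), step)
    return moved
```

Feasibility of the moved table would still have to be checked against the causality and overlapping conditions.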

  44. Searching space • Searching space connection: Theorem 2. The Ω searching space is connected under the adopted adjacency conditions.

  45. Outline of Presentation • Problem statement • Asynchronous events • Mapping methodology • Searching space • Optimization by RT-PSA Algorithm • Results • Conclusions

  46. Optimization by RT-PSA Algorithm • We start from a feasible allocation table t(M0) ∈ Ω • We rely on the optimization algorithm to find the desired mapping M

  47. Optimization by RT-PSA Algorithm • We iterate over all the allowed values of DII • The final result of the whole optimization process will be the allocation table characterized by minimal cost.
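The overall flow described above can be sketched as one annealing run per admissible DII value, keeping the cheapest result. The cooling schedule and acceptance rule are the textbook Simulated Annealing choices, not the paper's exact RT-PSA parameters:

```python
import math
import random

def rt_psa(dii_values, initial, cost, neighbour, T0=10.0, alpha=0.95,
           iters=300, seed=0):
    """For each admissible DII: anneal from a feasible starting point;
    return the (DII, solution) pair with minimal final cost."""
    rng = random.Random(seed)
    best, best_cost = None, float('inf')
    for dii in dii_values:
        x = initial(dii)
        c, T = cost(x, dii), T0
        for _ in range(iters):
            y = neighbour(x, dii, rng)
            cy = cost(y, dii)
            # accept improvements always, worsenings with Boltzmann probability
            if cy <= c or rng.random() < math.exp((c - cy) / T):
                x, c = y, cy
            T *= alpha  # geometric cooling
        if c < best_cost:
            best, best_cost = (dii, x), c
    return best, best_cost
```

A toy run on a one-dimensional cost surface shows the shape of the interface; in the real flow `initial`, `cost` and `neighbour` would operate on allocation tables.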

  48. Outline of Presentation • Problem statement • Asynchronous events • Mapping methodology • Searching space • Optimization by RT-PSA Algorithm • Results • Conclusions

  49. Results • In order to illustrate the results achievable through the presented RT-PSA algorithm, we consider the following graphs

  50. Results
