Paper Report

Presentation Transcript


  1. Paper Report: Analysis of shared-link AXI. N.Y.-C. Chang, Y.-Z. Liao, T.-S. Chang, Computers & Digital Techniques, IET. Presenter: Houng Wei-Zhuang

  2. Abstract Shared-link AXI provides decent communication performance at half the cost of its crossbar counterpart. The authors analysed the performance impact of several factors in a shared-link AXI system: interface buffer size, arbitration combination and task access setting (transfer mode mapping). A hybrid data locked transfer mode was also proposed to recover the performance lost to AXI’s extra transition cycle. The analysis is carried out by simulating a multi-core platform with a shared-link AXI backbone running a video phone application. Performance is evaluated in terms of bandwidth utilisation, average transaction latency and system task completion time. The analysis showed that channel-independent arbitration could account for up to a 23.2% difference in bandwidth utilisation and completion time.

  3. Abstract (cont.) Moreover, the analysis suggests that the proposed hybrid data locked mode should be used only by long access latency devices. Such a setting resulted in up to a 21.1% completion time reduction compared with the setting without the hybrid data locked mode. The design options in a shared-link AXI bus are also discussed.

  4. Introduction • What are the problems? A pipeline-based bus suffers from bus contention and an inherent blocking characteristic due to the protocol. The contention can be eased with a multi-layer bus structure or proper arbitration policies, but the blocking characteristic still reduces bus bandwidth utilisation when accessing long-latency devices. • Proposed method A hybrid data locked transfer mode is proposed. This work focuses on the performance analysis of a shared-link AXI.

  5. Related work • [12] Automated throughput driven synthesis of bus-based communication architectures: a communication architecture synthesis framework applied to AXI. • [19] A new multi-channel on-chip bus architecture for system-on-chips: built a crossbar AXI platform and a single-layered shared-link AHB platform. • [18] High level design space exploration of shared bus communication architectures: compared the performance of a shared-link AXI and a single-layered AHB. • [20] Scalability analysis of evolving SoC interconnect protocols: studied the scalability of AHB. • This paper: performance analysis of a shared-link AXI bus.

  6. Transfer modes • Proposed data locked mode Unlike the interleaved mode, which can be applied to both request and data channels, the proposed data locked mode supports only burst data transfer.

  7. Transfer modes • Proposed hybrid data locked mode The hybrid data locked mode is proposed to allow additional data locked mode transaction requests to be transferred using the normal or interleaved mode when the data locked mode buffer is full.
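A minimal sketch of the mode-selection idea described above, assuming a fixed-depth data locked buffer; the class name, buffer depth and fallback choice are illustrative assumptions, not the paper's implementation:

```python
from collections import deque
from enum import Enum, auto


class Mode(Enum):
    NORMAL = auto()        # single transfers
    INTERLEAVED = auto()   # beats of different bursts may interleave
    DATA_LOCKED = auto()   # a burst holds the data channel until it completes


class AxiSlaveInterface:
    """Illustrative interface model; the buffer depth is an assumption."""

    def __init__(self, locked_buffer_depth=4):
        self.locked_buffer = deque(maxlen=locked_buffer_depth)

    def choose_mode(self, wants_locked: bool) -> Mode:
        # Hybrid data locked mode: use the data locked buffer while it has
        # room, but when it is full, fall back to the normal or interleaved
        # mode instead of stalling the request.
        if wants_locked:
            if len(self.locked_buffer) < self.locked_buffer.maxlen:
                return Mode.DATA_LOCKED
            return Mode.INTERLEAVED  # fallback keeps the transaction moving
        return Mode.NORMAL


if __name__ == "__main__":
    iface = AxiSlaveInterface(locked_buffer_depth=2)
    for tid in range(4):
        mode = iface.choose_mode(wants_locked=True)
        if mode is Mode.DATA_LOCKED:
            iface.locked_buffer.append(tid)
        print(f"transaction {tid}: {mode.name}")
```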

  8. Arbitration framework for a shared-link AXI bus • Address channel arbitration

  9. Arbitration framework for a shared-link AXI bus • Data channel arbitration
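The two slides above only name the address and data channel arbiters, so the sketch below is a guess at the general shape: channel-independent arbitration gives each channel its own arbiter, and the round-robin policy here is just one of the possible arbitration choices, used for illustration only:

```python
class RoundRobinArbiter:
    """Simple round-robin arbiter; the policy itself is an assumption,
    since the paper evaluates several arbitration combinations."""

    def __init__(self, n_masters: int):
        self.n = n_masters
        self.last = self.n - 1  # so master 0 is considered first

    def grant(self, requests):
        # requests: list of booleans, one entry per master
        for offset in range(1, self.n + 1):
            candidate = (self.last + offset) % self.n
            if requests[candidate]:
                self.last = candidate
                return candidate
        return None  # no master is requesting


if __name__ == "__main__":
    # Channel-independent arbitration: the address and data channels each
    # get their own arbiter, so a long burst occupying the data channel
    # does not stop new addresses from being granted.
    addr_arb = RoundRobinArbiter(n_masters=3)
    data_arb = RoundRobinArbiter(n_masters=3)

    addr_requests = [True, False, True]
    data_requests = [False, True, True]
    print("address grant:", addr_arb.grant(addr_requests))  # -> master 0
    print("data grant:   ", data_arb.grant(data_requests))  # -> master 1
```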

  10. Block diagram of the target platform • Using AXI • Using AHB

  11. AXI interface buffer size and bus arbitration impact The bandwidth utilisation (BWU) stops increasing once the buffer size exceeds 8 because the required bandwidth limit has been reached. The average transaction latency is also roughly proportional to the interface buffer size: with a larger buffer, a transaction spends more time pending.
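A back-of-the-envelope illustration of that trend, assuming a saturated buffer and a fixed per-transaction service time (neither number comes from the paper): each extra buffer slot lets one more transaction queue up, so pending time grows roughly linearly while the delivered bandwidth stays capped at the required limit.

```python
def average_pending_cycles(buffer_size: int, service_cycles: int = 4) -> int:
    """Rough illustration, not the paper's model: if the interface buffer
    stays full, a newly accepted transaction waits behind buffer_size - 1
    earlier ones, each taking service_cycles to drain."""
    queued_ahead = max(buffer_size - 1, 0)
    return queued_ahead * service_cycles


if __name__ == "__main__":
    for depth in (1, 2, 4, 8, 16):
        print(f"buffer size {depth:2d}: ~{average_pending_cycles(depth)} pending cycles")
```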

  12. Single-layer shared-link AXI against five-layer AHB-Lite • Bandwidth utilisation • Average transaction latency • System task completion time

  13. Task access settings • HN was the best task access setting in most cases in terms of BWU and completion time. • HH achieved the shortest transaction latency.

  14. Conclusion • Conclusions When the hybrid data locked mode is adopted only for long access latency devices, the simulation showed up to a 21.1% completion time reduction and a 14.3% transaction latency reduction with respect to the setting without the hybrid data locked mode. Although the analysis was conducted using AXI, we believe that the experience can be extended and applied to other shared-link packet-based buses as well. • My comments The report helps in understanding the AXI bus; some doubts remain about the experimental data in this paper.
