Presentation Transcript


  1. Paper Presentation #2 Evaluation of Hardware Performance for the SHA-3 Candidates Using SASEBO-GII CS548_ADVANCED INFORMATION SECURITY 20103272 Jong Heon, Park / 20103616 Hyun Woo, Cho

  2. Contents • Introduction • Before the paper • Evaluation Platform • Design Strategy • Evaluation Criteria • Conclusion • References

  3. Paper Introduction • Evaluation of Hardware Performance for the SHA-3 Candidates Using SASEBO-GII, Jan. 2010 • Authors: Kazuyuki Kobayashi, Jun Ikegami, Shin’ichiro Matsuo, Kazuo Sakiyama, Kazuo Ohta • They propose the following issues for a fair evaluation: • Evaluation environment (platform) • Implementation method (design strategy) • Performance comparison method (criteria)

  4. Before the paper • NIST SHA-3 competition: http://csrc.nist.gov/groups/ST/hash/sha-3/index.html

  5. Evaluation Platform • SASEBO (Side-channel Attack Standard Evaluation Board) • Intended for side-channel attack experiments on a single cryptographic circuit • SASEBO-GII • Intended for additional experiments that evaluate the security of a comprehensive cryptographic system

  6. Evaluation Platform Fig 1. SASEBO-GII

  7. Evaluation Platform Fig 2. Evaluation Environment Using SASEBO-GII

  8. Evaluation Platform • Protocol between the two FPGAs (a sketch follows below) • Init: initialize a hash function in the cryptographic FPGA • Load & Fetch: transmit and receive the message data and the hash value • Ack: response signal for the Load & Fetch signals • idata / odata: input / output data (16-bit) • EoM: end-of-message signal
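To make the handshake easier to follow, here is a minimal sketch of the protocol as a Python event trace. The signal names (Init, Load, Fetch, Ack, EoM, idata/odata) come from the slide; the exact ordering of Ack relative to each transfer and the 16-word digest length are assumptions for illustration, not the board's actual RTL behavior.

```python
# Hypothetical sketch of the inter-FPGA protocol described on the slide.
# It models only the order of control signals, not real cycle-accurate timing.

def protocol_trace(message_words, digest_words=16):
    """Yield (signal, payload) events for one hash operation."""
    yield ("init", None)                          # Init: initialize the hash core
    for i, word in enumerate(message_words):      # 16-bit words driven on idata
        yield ("load", word)                      # Load: send message data
        if i == len(message_words) - 1:
            yield ("eom", None)                   # EoM: end-of-message marker
        yield ("ack", None)                       # Ack: core accepted the word
    for _ in range(digest_words):                 # e.g. 16 words for a 256-bit hash
        yield ("fetch", None)                     # Fetch: request 16 bits of output
        yield ("ack", None)                       # Ack: odata now holds valid data

# Example: a two-word message and a 256-bit digest
for event in protocol_trace([0x0123, 0x4567]):
    print(event)
```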

  9. Evaluation Platform • Performance depends on the communication overhead between the two FPGAs • A practical interface is used that transfers 16 bits of data in 3 cycles • It therefore takes 3 × (256 / 16) = 48 cycles for 256-bit data • However, this overhead is ignored when evaluating the hash core itself
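Under the slide's assumption of 3 cycles per 16-bit word, the interface cost scales linearly with the data size; a small helper (my own, for illustration) makes the arithmetic explicit:

```python
# Interface overhead under the slide's assumption: 3 cycles per 16-bit word.
def interface_cycles(data_bits, word_bits=16, cycles_per_word=3):
    assert data_bits % word_bits == 0, "data must be a whole number of words"
    return cycles_per_word * (data_bits // word_bits)

print(interface_cycles(256))  # 3 * (256 / 16) = 48 cycles, as on the slide
print(interface_cycles(512))  # 96 cycles for a 512-bit block
```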

  10. Design Strategy • Specification of Data Input to the Cryptographic FPGA • Message Padding • Input data must be a multiple of the block size • EoM (End of Message) • Some candidates need to know where the data ends • Bit Length of Message • The bit length is included in idata → candidates that need this information take more cycles than the others
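As an illustration of the padding requirement only (each SHA-3 candidate defines its own padding rule), the sketch below pads a message to a multiple of the block size using the classic append-a-1-bit-then-zeros-then-length scheme; the 64-byte block and 8-byte length field are assumptions, not taken from any candidate's specification.

```python
# Generic illustration of message padding to a multiple of the block size.
# Not the padding rule of any particular SHA-3 candidate.
def pad_message(message: bytes, block_bytes: int = 64) -> bytes:
    bit_len = 8 * len(message)
    padded = message + b"\x80"                   # append a single 1 bit (0x80 byte)
    while (len(padded) + 8) % block_bytes != 0:  # leave room for the length field
        padded += b"\x00"
    padded += bit_len.to_bytes(8, "big")         # append the message bit length
    return padded

assert len(pad_message(b"abc")) % 64 == 0
```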

  11. Design Strategy • Architectures of the Cryptographic FPGA • Fully Autonomous • Stores all of the intermediate values in registers • An implementation assumed to be used in a real system

  12. Design Strategy • Architectures of the Cryptographic FPGA • External Memory • Only the data necessary for execution are stored in registers; the other data are stored in external memory • Low cost, but it introduces overhead cycles • Not suitable for high-speed implementation

  13. Design Strategy • Architectures of the Cryptographic FPGA • Core Functionality • Only the core part of a hash function • Used for estimating performance under an ideal interface, where the overhead of data access is ignored

  14. Design Strategy • Performance: Throughput • Input Block Size = the size of the input data • Number of Clock Cycles = the cycles necessary to hash the data • Max Clock Frequency = 1 / critical path delay • Increasing the max clock frequency or decreasing the number of clock cycles improves throughput! (see the formula below)
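Combining the three quantities on the slide (the formula layout is mine, but it follows directly from the definitions above):

```latex
\mathrm{Throughput}
  = \frac{\text{Input Block Size} \times \text{Max Clock Frequency}}
         {\text{Number of Clock Cycles per block}}
  = \frac{\text{Input Block Size}}
         {\text{Number of Clock Cycles per block} \times \text{Critical Path Delay}}
```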

  15. Design Strategy • Technique to Improve Throughput • The Retiming Transformation holds down the critical path delay by balancing the processing time across register stages • After the transformation, the critical path consists of two adders, so the maximum clock frequency improves (Before / After diagrams)
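A rough sketch of why this raises the clock frequency, assuming (for illustration) that the critical path contained three adders before retiming; the slide only states that two adders remain afterwards:

```latex
% Before retiming: critical path spans three adders (assumed for illustration)
f_{\max}^{\text{before}} = \frac{1}{3\,t_{\text{add}}}
\qquad
% After retiming: registers are redistributed, only two adders remain on the path
f_{\max}^{\text{after}} = \frac{1}{2\,t_{\text{add}}} = 1.5\, f_{\max}^{\text{before}}
```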

  16. Design Strategy • Technique to Improve Throughput • The Unfolding Transformation decreases the total number of clock cycles • After the transformation, the DFG performs the operations of multiple iterations in one cycle; although the maximum clock frequency becomes lower, throughput improves (Before / After diagrams)
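The trade-off, sketched for an assumed unfolding factor of 2 (block size $B$, original cycle count $N$, original and unfolded maximum frequencies $f_{\max}$ and $f'_{\max}$):

```latex
\mathrm{Throughput}_{\text{unfolded}}
  = \frac{B \cdot f'_{\max}}{N/2}
  \;>\;
  \frac{B \cdot f_{\max}}{N}
  = \mathrm{Throughput}_{\text{original}}
  \quad\text{whenever } f'_{\max} > \tfrac{1}{2} f_{\max}
```

So even though the longer combinational path lowers $f'_{\max}$, throughput improves as long as the frequency drops by less than the factor by which the cycle count shrinks.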

  17. Design Strategy • How to deal with these optimization techniques: • Applying the Unfolding Transformation

  18. Evaluation Criteria • Evaluation Items • Eight SHA-3 hash candidates on the cryptographic FPGA • Check the hardware performance (speed) and cost • Speed performance • Latency, throughput • Cost • Number of slices, registers, LUTs, and the size of RAM • Desired: high throughput with a low hardware cost

  19. Evaluation Criteria • Evaluation Metrics • Hashing process for data of each input block size • Uses the result as the next input data • Clock cycles for hashing |M|-bit data, and the number of hash-core operations

  20. Evaluation Criteria • Evaluation Metrics • Number of clock cycles used to input the data • Number of clock cycles used to execute the hashing process in the core • Number of clock cycles used to perform the final calculation process • Number of clock cycles used to output the hash result

  21. Evaluation Criteria • Evaluation Metrics • Number of clock cycles used to input the data • Number of clock cycles used to execute the hashing process in the core • Number of clock cycles used to perform the final calculation process • Number of clock cycles used to output the hash result (only spent when the result is actually output)
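The slide's original symbols for these four quantities did not survive the transcript, so the placeholder names below ($C_{\text{in}}$, $C_{\text{core}}$, $C_{\text{final}}$, $C_{\text{out}}$) are mine. Assuming the input and core cycles are spent once per message block while the final calculation and output are spent once per message, the total cycle count for hashing an $|M|$-bit message split into $n$ blocks is:

```latex
\text{Cycles}(|M|) \;=\; n \left( C_{\text{in}} + C_{\text{core}} \right)
                       \;+\; C_{\text{final}} \;+\; C_{\text{out}}
```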

  22. Evaluation Criteria • Evaluation Metrics • Throughput • Latency (see the equation below) • Latency is an important metric for a short message
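With the placeholder notation above, a hedged reconstruction of the latency equation the slide refers to:

```latex
\mathrm{Latency}
  = \frac{\text{Cycles}(|M|)}{f_{\max}}
  = \frac{n \left( C_{\text{in}} + C_{\text{core}} \right) + C_{\text{final}} + C_{\text{out}}}{f_{\max}}
```

For a short message $n$ is small, so the fixed $C_{\text{final}} + C_{\text{out}}$ terms dominate, which is why latency matters most there.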

  23. Evaluation Criteria • Evaluation Metrics • Throughput • When |Mp| is sufficiently large, the throughput approaches its asymptotic value (see below) • Short messages (e.g., for authentication) vs. long messages
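In the same placeholder notation, with $|M_p|$ the padded message length and $B$ the input block size, the per-message terms vanish as the message grows:

```latex
\mathrm{Throughput}
  = \frac{|M|}{\mathrm{Latency}}
  \;\xrightarrow{\;|M_p| \to \infty\;}\;
  \frac{B \cdot f_{\max}}{C_{\text{in}} + C_{\text{core}}}
```

This matches the slide's contrast: short authentication-sized messages are governed by latency, long messages by this asymptotic throughput.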

  24. Evaluation Criteria • Evaluation Metrics

  25. Evaluation Criteria • Result for Eight SHA-3 Candidates, Interface overhead

  26. Evaluation Criteria • Result for Eight SHA-3 Candidates, Core Function Block

  27. Evaluation Criteria • Performance Results of the SHA-3 Candidates

  28. Evaluation Criteria • Hardware Costs of the SHA-3 Candidates on Virtex5

  29. Evaluation Criteria • Latency of Hash Function including interface

  30. Evaluation Criteria • Latency of Core Function Block for a Short Message • The interface is likely to be a bottleneck • The performance of the interface is an important part

  31. Conclusions • Propose consistent evaluation criteria • Basic design of an evaluation environment using SASEBO-GII (interface spec, architecture, …) • Propose evaluation items (speed, cost, …) • Implement eight SHA-3 candidates • Future work • The rest of the SHA-3 candidates • Evaluation for low-power devices (e.g., RFID tags)

  32. References 1. National Institute of Standards and Technology (NIST), “Cryptographic Hash Algorithm Competition,” http://csrc.nist.gov/groups/ST/hash/sha-3/index.html. 2. S. Tillich, M. Feldhofer, M. Kirschbaum, T. Plos, J.-M. Schmidt, and A. Szekely, “High-Speed Hardware Implementations of BLAKE, Blue Midnight Wish, CubeHash, ECHO, Fugue, Grøstl, Hamsi, JH, Keccak, Luffa, Shabal, SHAvite-3, SIMD and Skein,” Cryptology ePrint Archive, Report 2009/510, 2009. 3. A. H. Namin and M. A. Hasan, “Hardware Implementation of the Compression Function for Selected SHA-3 Candidates,” CACR 2009-28, 2009. 4. B. Baldwin, A. Byrne, M. Hamilton, N. Hanley, R. P. McEvoy, W. Pan, and W. P. Marnane, “FPGA Implementations of SHA-3 Candidates: CubeHash, Grøstl, Lane, Shabal and Spectral Hash,” Cryptology ePrint Archive, Report 2009/342, 2009.

  33. References 5. B. Jungk, S. Reith, and J. Apfelbeck, “On Optimized FPGA Implementations of the SHA-3 Candidate Grøstl,” Cryptology ePrint Archive, Report 2009/206, 2009. 6. National Institute of Advanced Industrial Science and Technology (AIST), Research Center for Information Security (RCIS), “Side-channel Attack Standard Evaluation Board (SASEBO),” http://www.rcis.aist.go.jp/special/SASEBO/SASEBO-GII-ja.html. 7. Z. Chen, S. Morozov, and P. Schaumont, “A Hardware Interface for Hashing Algorithms,” Cryptology ePrint Archive, Report 2008/529, 2008. 8. ECRYPT II, “SHA-3 Hardware Implementations,” http://ehash.iaik.tugraz.at/wiki/SHA-3_Hardware_Implementations. 9. Y. K. Lee, H. Chan, and I. Verbauwhede, “Iteration Bound Analysis and Throughput Optimum Architecture of SHA-256 (384, 512) for Hardware Implementations,” in Information Security Applications, 8th International Workshop, WISA 2007, vol. 4867 of LNCS, pp. 102-114, Springer, 2007.

  34. Thank You • Korex527 at gmail.com / Betelgs at chol.com
