
Measurement Based Intelligent Prefetch and Cache Technique & Intelligent Proxy Techniques in Plasma Physics Laboratories

Yantai Shu, Gang Zhang, Zheng Zhao, Jie Yang, Song Wang

1999 IEEE Canadian Conference on Electrical and Computer Engineering, Vol. 1, 1999, pp. 168-173


Presentation Transcript


  1. Measurement Based Intelligent Prefetch and Cache Technique & Intelligent Proxy Techniques in Plasma Physics Laboratories. Yantai Shu, Gang Zhang, Zheng Zhao, Jie Yang, Song Wang. 1999 IEEE Canadian Conference on Electrical and Computer Engineering, Vol. 1, 1999, pp. 168-173; 1999 11th IEEE NPSS Real Time Conference, Santa Fe, 1999, pp. 338-341. Presented by Mike Tien, Syslab, Yuan Ze University, Miketien@syslab.cse.yzu.edu.tw

  2. Outline • 1. Introduction • 2. Improvement to Cache Algorithm • 3. The Prediction Algorithm • 4. Implementation of Intelligent Proxy • 5. Modification in Cache Algorithm • 6. Conclusion

  3. 1. Introduction • The prefetch technique has previously been combined with the client browser, but it has never been implemented in a proxy. • This paper has two parts: --Improvement to the Cache Algorithm. --The Prediction Algorithm.

  4. 2. Improvement to Cache Algorithm • Cache Replacement Policy --LRU --LRU-MIN: (1) Set T to S. (2) Set L to all documents equal to or larger than T (L may be empty). (3) Remove the LRU documents of list L until the list is empty or the free cache space is at least T. (4) If the free cache space is not at least S, set T to T/2 and go to (2). --LRU-THOLD: no document larger than a threshold size is cached.
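The LRU-MIN steps above can be sketched in Python as follows. This is an illustrative sketch, not the authors' code: the function name `lru_min_evict` and the use of an `OrderedDict` (least- to most-recently used) are assumptions made here.

```python
from collections import OrderedDict

def lru_min_evict(cache, capacity, s):
    """Free at least `s` bytes using the LRU-MIN steps from slide 4.

    `cache` is an OrderedDict {doc: size} ordered from least- to
    most-recently used.  Illustrative sketch under assumed names.
    """
    free = capacity - sum(cache.values())
    t = s                                    # (1) set T to S
    while free < s:
        # (2) L: documents of size >= T, in LRU order (may be empty)
        l = [d for d in cache if cache[d] >= t]
        # (3) evict LRU documents of L until enough space is free
        for d in l:
            if free >= s:
                break
            free += cache.pop(d)
        # (4) still not enough: halve T and retry with smaller docs
        if free < s:
            if t <= 1:
                break                        # nothing left to evict
            t //= 2
    return free
```

For example, with a 400-byte cache holding documents of 100, 50, and 200 bytes, freeing 150 bytes evicts only the single large document, sparing the smaller (and more numerous) ones — the point of LRU-MIN.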

  5. 2. Improvement to Cache Algorithm (cont.) • Simulation Result

  6. 2. Improvement to Cache Algorithm (cont.) • Our approach: use LRU-MIN until the cache size approaches 100% of the available disk size, then change to LRU-THOLD with a threshold that is gradually reduced until the cache size reaches a low-water mark.
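The hybrid scheme above can be sketched as a small state machine. The class name, the halving step, and the 0.8 low-water default are assumptions for illustration; the slide only says the threshold is "gradually reduced".

```python
class HybridReplacer:
    """Slide 6's hybrid scheme as a state machine (sketch; names
    and the threshold-halving rule are assumptions, not the
    authors' specification)."""

    def __init__(self, thold, min_thold):
        self.mode = "LRU-MIN"
        self.thold = thold
        self.min_thold = min_thold

    def update(self, used_frac, low_water=0.8):
        # switch to LRU-THOLD once the cache fills the disk
        if self.mode == "LRU-MIN" and used_frac >= 1.0:
            self.mode = "LRU-THOLD"
        elif self.mode == "LRU-THOLD":
            if used_frac <= low_water:
                self.mode = "LRU-MIN"   # drained to low water: revert
            else:
                # gradually reduce the threshold while draining
                self.thold = max(self.min_thold, self.thold // 2)
        return self.mode
```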

  7. 3. The Prediction Algorithm • Maintain two kinds of counters: --page counter C_A --link counter C(A,B) • P(B|A) is the probability that "a user accesses page B right after he or she accesses page A": P(B|A) = C(A,B) / C_A • If there are k members in the group, the group access probability of B_i is defined as: P(B_i|A) = Σ_{j=1..k} C_j(A,B_i) / Σ_{j=1..k} C_j(A)
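The two counters and both probabilities can be sketched as below; the class name `LinkPredictor` and helper `group_probability` are illustrative, not from the paper.

```python
from collections import defaultdict

class LinkPredictor:
    """Per-user page/link counters from slide 7 (sketch).

    page_count[A] is C_A; link_count[(A, B)] is C(A, B).
    """
    def __init__(self):
        self.page_count = defaultdict(int)
        self.link_count = defaultdict(int)
        self.last_page = None

    def visit(self, page):
        self.page_count[page] += 1
        if self.last_page is not None:
            self.link_count[(self.last_page, page)] += 1
        self.last_page = page

    def p(self, a, b):
        """Personal probability P(B|A) = C(A,B) / C_A."""
        if self.page_count[a] == 0:
            return 0.0
        return self.link_count[(a, b)] / self.page_count[a]

def group_probability(users, a, b):
    """Group probability: sum_j C_j(A,B) / sum_j C_j(A)."""
    num = sum(u.link_count[(a, b)] for u in users)
    den = sum(u.page_count[a] for u in users)
    return num / den if den else 0.0
```

After the access sequence A, B, A, C, page A has been visited twice with one A-to-B transition, so P(B|A) = 1/2.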

  8. 3. The Prediction Algorithm (cont.) • Personal access probability P_u, group access probability P_g: P = βP_u + (1-β)P_g • Prefetch with threshold H --prefetching all files with access probability P >= H minimizes the cost, where H = 1 - (1-ρ)·r / ((1-ρ)²·b + r) ρ: system load b: system capacity r = α_T / α_B (α_T: delay cost, α_B: system resource cost)
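Slide 8's formulas translate directly into code. This is a straightforward transcription of the two equations; the function names are assumptions made here.

```python
def prefetch_threshold(rho, b, r):
    """H = 1 - (1-rho)*r / ((1-rho)**2 * b + r), from slide 8.

    rho: system load, b: system capacity,
    r = alpha_T / alpha_B (delay cost over resource cost).
    """
    return 1.0 - ((1.0 - rho) * r) / ((1.0 - rho) ** 2 * b + r)

def combined_probability(p_user, p_group, beta):
    """P = beta*P_u + (1-beta)*P_g: blend personal and group stats."""
    return beta * p_user + (1.0 - beta) * p_group

def should_prefetch(p, h):
    """Prefetch every file whose access probability meets H."""
    return p >= h
```

Note how H behaves sensibly at the extremes: as load ρ approaches 1 the threshold approaches 1, so a busy system prefetches almost nothing.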

  9. 3. The Prediction Algorithm (cont.) • Modified threshold H_R: H_R = f_n · H, where f_n is a network performance factor

  10. 4. Implementation of Intelligent Proxy • Concurrent: a single client is not allowed to hold all resources. • Multithreaded: separate threads read and handle console commands, accept client connection requests, and relay data from the web server to the proxy and from the proxy to the client. • H' = H_R + 0.1·H_R·U, where U is the utilization of computer resources, computed approximately from the number of current threads divided by a given number.
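The load-adjusted threshold H' can be written as a one-liner; here U is taken as a ready-made utilization value in [0, 1], since the slide's description of how U is derived from the thread count is ambiguous.

```python
def adjusted_threshold(h_r, utilization):
    """H' = H_R + 0.1 * H_R * U, from slide 10.

    `utilization` is U in [0, 1]; how U is obtained from the current
    thread count is left out here because the slide is ambiguous.
    """
    return h_r * (1.0 + 0.1 * utilization)
```

The effect is to raise the prefetch bar by up to 10% when the proxy itself is busy, so prefetching backs off under load.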

  11. 5. Modification in Cache Algorithm

  12. 5. Modification in Cache Algorithm (cont.) • LFU*-Aging maintains a reference counter for each document in the cache, and only files with a reference count of one are candidates for replacement. --M_refs: the maximum reference count that one file can acquire --A_max: used to age the reference counts of files in the cache. Whenever the average number of references per file exceeds A_max, the reference count of each file in the cache is reduced.
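One aging pass of LFU*-Aging can be sketched as below. The transcript cuts off before stating the exact reduction formula, so the halving rule used here is an assumption borrowed from the usual LFU-Aging description, and the function name is illustrative.

```python
def lfu_star_aging_step(ref_counts, amax, mrefs):
    """One pass of LFU*-Aging over slide 12's counters (sketch).

    ref_counts: {doc: reference count}.  Counts are capped at Mrefs;
    when the average count exceeds Amax, every count is reduced --
    halved here, which is an ASSUMPTION since the slide's formula
    is truncated.  Returns the eviction candidates (count == 1).
    """
    if not ref_counts:
        return []
    # cap each counter at Mrefs
    for d in ref_counts:
        ref_counts[d] = min(ref_counts[d], mrefs)
    # age all counters when the average reference count exceeds Amax
    avg = sum(ref_counts.values()) / len(ref_counts)
    if avg > amax:
        for d in ref_counts:
            ref_counts[d] = max(1, ref_counts[d] // 2)
    # LFU*: only documents referenced once are replacement candidates
    return [d for d in ref_counts if ref_counts[d] == 1]
```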

  13. 6. Conclusion • LFU*-Aging is used to guard against one-timers, retain popular objects for long time periods, and age the object set to prevent cache pollution.
