
Transport-layer optimization for thin-client systems

2007 International CQR Workshop, May 15-17, 2007. Yukio OGAWA, Systems Development Laboratory, Hitachi, Ltd. (E-mail: yukio.ogawa.xq@hitachi.com); Go HASEGAWA and Masayuki MURATA, Osaka University.


Presentation Transcript


  1. 2007 International CQR Workshop, May 15-17, 2007. Transport-layer optimization for thin-client systems. Yukio OGAWA, Systems Development Laboratory, Hitachi, Ltd. (E-mail: yukio.ogawa.xq@hitachi.com); Go HASEGAWA and Masayuki MURATA, Osaka University.

  2. Overview of thin-client systems
  - Computer resources are isolated from users (for resource management and user mobility): the thin client holds no data or applications; it sends user events to a desktop-service server in the data center and receives screen updates back.
  - Clients in the office reach the servers over the intranet; clients in satellite offices, homes, etc. reach them over the Internet through a VPN gateway and a TCP proxy. (VPN: Virtual Private Network)
  - System performance therefore depends on network performance.

  3. Research objective and our approach
  Research objective:
  - System performance (usability) depends on network performance:
    - intranet performance: designed in advance, controllable
    - Internet performance: uncontrollable
  - Improve the performance of thin-client traffic, especially of flows traversing the Internet:
    - thin-client traffic = long-lived interactive TCP data flows
    - affected by TCP's Nagle algorithm and delayed ACK
    - affected by buffering of TCP segments and slow-start restart (SSR)
  Our approach:
  - Transport-layer optimization on the basis of actual traffic observations:
    - observation of Hitachi SDL's prototype system, Dec. 20, 2006 to Jan. 25, 2007
    - 168 server/thin-client pairs; several dozen co-existing sessions during office hours
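As a concrete illustration of the Nagle/delayed-ACK interaction mentioned in slide 3, interactive applications conventionally disable Nagle's algorithm with the standard `TCP_NODELAY` socket option. This minimal Python sketch is not part of the talk's proposal (the proposed methods appear on slides 7 and 8); it only shows the common mitigation:

```python
import socket

def disable_nagle(sock: socket.socket) -> None:
    # Nagle's algorithm holds back small segments until outstanding data
    # is ACKed; combined with the receiver's delayed ACK this can stall
    # small interactive writes, so the option is often turned off.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
disable_nagle(sock)
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0)  # -> True
sock.close()
```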

  4. Characteristics of thin-client traffic: traffic patterns
  - Interactive data flow (character information): single request/response exchanges between client and server, with large intervals between responses.
  - Bulk data flow (screen update information): a response burst of up to ~10^2 packets (n·MSS + a bytes, mostly MSS-sized segments; MSS: Maximum Segment Size) arriving at short intervals.
  - The two flow types are distinguished by the interarrival time of response packets.
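The interarrival-time criterion above can be sketched as a tiny classifier. The 10^-2.2 s (≈ 6.3 ms) threshold is the value slide 6 reads off the observed distribution; the function name and structure are illustrative, not from the talk:

```python
# Threshold separating packets inside a bulk burst from interactive
# packets, read off the observed distribution on slide 6 (10^-2.2 s).
THRESHOLD_S = 10 ** -2.2  # ~6.3 ms

def classify(interarrival_s: float) -> str:
    # Illustrative rule: response packets arriving back-to-back belong to
    # a bulk (screen update) burst; long gaps indicate interactive
    # (character) traffic.
    return "bulk" if interarrival_s < THRESHOLD_S else "interactive"

print(classify(0.001))  # -> bulk (gap inside an nMSS+a burst)
print(classify(0.5))    # -> interactive (e.g., a keystroke echo)
```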

  5. Characteristics of thin-client traffic: interarrival-time distribution of request packets
  [Scatter plot (access from the Internet): data segment size (log10 bytes, 0.0-4.0) vs. interarrival time of request packets (log10 sec, -6 to 3).]

  6. Characteristics of thin-client traffic: interarrival-time distribution of response packets
  [Scatter plot (access from the Internet): data segment size (log10 bytes, 0.0-4.0) vs. interarrival time of response packets (log10 sec, -6 to 3). Three clusters are visible: MSS-sized packets at the head of a bulk burst ("head of nMSS+a"), packets inside a bulk burst ("inside of nMSS+a"), and interactive packets. A threshold of 10^-2.2 sec (≈ 6.3 ms) separates packets inside bulk bursts from interactive packets.]

  7. Proposed method for improving performance: interactive data flow
  - The gateway (TCP proxy) sends a copy of each data packet after a pause.
  - Sending interval: t_i = min(RTT − RTT_min, T_i / 2), where T_i is the interval between the client's request packets.
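Under the formula above, the gateway's copy interval could be computed as follows. This is a sketch of my reading of the slide: t_i = min(RTT − RTT_min, T_i / 2), with RTT_min the minimum observed round-trip time and T_i the request interval; the function and variable names are mine, not the authors':

```python
def copy_interval(rtt: float, rtt_min: float, request_interval: float) -> float:
    # t_i = min(RTT - RTT_min, T_i / 2): pause roughly for the queueing
    # component of the RTT, but never longer than half the request
    # interval, so the copy still arrives before the next exchange.
    return min(rtt - rtt_min, request_interval / 2.0)

# e.g. RTT 120 ms, minimum RTT 80 ms, requests every 200 ms
print(copy_interval(0.120, 0.080, 0.200))
```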

  8. Proposed methods for improving performance: bulk data flow
  - The gateway (TCP proxy) pauses a response burst (n·MSS + a bytes) for buffering and resegments the TCP data segments into MSS-sized segments before forwarding.
  - Slow-start restart is disabled ("no SSR"), so the congestion window is not reduced after an idle period. (SSR: Slow-Start Restart; MSS: Maximum Segment Size)
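The resegmentation step can be sketched as splitting a buffered screen-update burst of n·MSS + a bytes into MSS-sized pieces before forwarding. The 1460-byte MSS and the function below are assumptions for illustration, not the talk's implementation:

```python
MSS = 1460  # typical Ethernet TCP maximum segment size (assumed)

def resegment(payload: bytes, mss: int = MSS) -> list[bytes]:
    # Split a buffered n*MSS + a byte burst into MSS-sized segments,
    # as the gateway proxy does before forwarding over the Internet.
    return [payload[i:i + mss] for i in range(0, len(payload), mss)]

segments = resegment(b"x" * (3 * MSS + 100))
print([len(s) for s in segments])  # -> [1460, 1460, 1460, 100]
```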

  9. Simulation model: system model
  - Sender hosts (servers) connect to the gateway (TCP proxy) over the intranet: 20 Mbps, 5 msec links carrying the thin-client traffic plus background traffic (UDP: 64-byte packets, 128 Kbps × n flows).
  - Gateway to tail-drop router: 20 Mbps, 0.1 msec; router buffer size: 50 or 1024 packets.
  - Bottleneck Internet link: 1 Mbps, 30-300 msec, packet drop ratio 0 or 3%.
  - Router to receiver hosts (clients): 100 Mbps, 0.1 msec.

  10. Simulation model: thin-client traffic for evaluation
  [Scatter plot (access from the Internet): average interarrival time of response packets vs. average interarrival time of response data flows (contiguous packets), both in log10 sec. Interactive flows cluster around (-0.6, -0.6); bulk flows around (-0.6, -1.3), where -1.3 = mean − 2·std.]
  - Evaluation traffic: 30 flows, 60 sec duration each.
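The mean − 2·std cutoff used for the bulk-flow point above can be reproduced as follows (a sketch in log10 seconds; the sample values are made up to show the arithmetic, not taken from the measurement):

```python
import math

def log_threshold(samples_log10_s: list[float]) -> float:
    # Cutoff at mean - 2 * (population) standard deviation of the
    # log10 interarrival times, as annotated for the bulk cluster
    # on slide 10.
    n = len(samples_log10_s)
    mean = sum(samples_log10_s) / n
    var = sum((x - mean) ** 2 for x in samples_log10_s) / n
    return mean - 2.0 * math.sqrt(var)

# Hypothetical samples with mean -0.9 and std 0.2 give -1.3,
# matching the -1.3 = mean - 2 std annotation on the slide.
print(round(log_threshold([-0.7, -1.1, -0.7, -1.1]), 6))  # -> -1.3
```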

  11. Simulation results: interactive data flow, packet drops
  [Three panels (background UDP at 1024, 1152, and 1280 Kbps): average number of packet drops (log10) vs. transmission delay of the bottleneck link (log10 msec). Conditions: bottleneck link 1 Mbps with 3% drop ratio; router buffer 50 packets. Compared schemes: send no copies, send a copy without pause, send a copy with pause; drops are split into drops from the tail-drop router and random drops from the bottleneck link. (bg: background)]

  12. Simulation results: bulk data flow, transfer time
  [Two panels (router buffer size 50 and 1024 packets): median transfer time (log10 sec) vs. number of packets in the bulk data flow (log10). Conditions: bottleneck link 1 Mbps, 80 msec, 0% drop ratio; background: 3 UDP flows (= 384 Kbps). Compared schemes: SSR, SSR with resegmentation, no-SSR, no-SSR with resegmentation.]

  13. Simulation results: bulk data flow, drops from the tail-drop router
  [Two panels (background UDP at 384 and 768 Kbps): average number of packet drops (log10) vs. transmission delay of the bottleneck link (log10 msec). Conditions: bottleneck link 1 Mbps, 0% drop ratio; router buffer 50 packets. Compared schemes: SSR, SSR with resegmentation, no-SSR, no-SSR with resegmentation. (bg: background)]

  14. Conclusion
  TCP optimization for improving the performance of thin-client traffic:
  - For interactive data flows (transferring character information):
    - send a packet copy with a pause ⇒ increases tolerance for packet drops
  - For bulk data flows (transferring screen update information):
    - disable TCP slow-start restart ⇒ increases the packet sending rate, but also increases the burstiness of the traffic
    - resegment TCP data segments ⇒ reduces the burstiness of the traffic
