CATNIP – Context Aware Transport/Network Internet Protocol
Carey Williamson, Qian Wu
Department of Computer Science, University of Calgary
Why CATNIP
• Layered protocol stacks (Application / Transport / Network / Link / Physical)
• Good design: provides a unifying framework
• Bad design: compromises performance
Why CATNIP (Cont’d)
• Observations on Web data transfer using TCP/IP:
• Poor protocol interactions;
• TCP’s window-based flow control mechanism produces data bursts;
• Not all packet losses are created equal: packet losses are especially costly for small document transfers;
• A TCP source has limited control over packet loss effects;
• An IP router has significant control over packet loss effects.
Design of CATNIP
• Can we make the TCP/IP protocols “smarter” about the specific job?
• Convey application-layer context information (document size, packet priority) to the TCP and IP layers
[Diagram: document size and packet priority passed from the Application layer down to the Transport and Network layers]
Design of CATNIP (Cont’d) • Adding context-awareness to TCP: • Rate-Based Pacing of the Last Window (RBPLW) • Early Congestion Avoidance (ECA) • Selective Packet Marking (SPM): Use the reserved high-order bit in the TCP header to convey packet priority information
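The SPM mechanism lends itself to a small sketch. The Python function below sets or clears a reserved bit on a raw TCP header buffer; the slides only say that SPM uses the reserved high-order bit of the TCP header, so the exact bit position chosen here (0x08 in byte 12, adjacent to the data-offset nibble) is an assumption, not the paper's confirmed encoding.

```python
def mark_tcp_priority(tcp_header: bytes, high_priority: bool) -> bytes:
    """Set or clear a reserved bit in a raw TCP header to carry SPM priority.

    Byte 12 of the TCP header holds the 4-bit data offset in its high
    nibble, followed by reserved bits. The bit used here (0x08) is an
    illustrative assumption for the "reserved high-order bit".
    """
    hdr = bytearray(tcp_header)
    if high_priority:
        hdr[12] |= 0x08            # mark the packet as high priority
    else:
        hdr[12] &= ~0x08 & 0xFF    # clear the priority mark
    return bytes(hdr)
```

A context-aware router would then test this bit when choosing which packet to drop.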
Design of CATNIP (Cont’d)
• Adding context-awareness to IP:
• CATNIP-Good: preferentially drop low-priority packets when the queue overflows
• CATNIP-Bad: preferentially drop high-priority packets (a control case)
• CATNIP-RED: RED + CATNIP-Good
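As a rough illustration of how such a drop policy could work (the slides name the policies but give no implementation details, so the class below is an assumed sketch): on overflow, a CATNIP-Good-style queue evicts a queued low-priority packet in favour of the arrival when one exists, and otherwise falls back to plain drop-tail.

```python
class CatnipGoodQueue:
    """Assumed sketch of a CATNIP-Good-style router queue: when the
    queue is full, prefer to drop a packet whose SPM priority bit is
    clear rather than a high-priority one."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pkts = []  # list of (high_priority: bool, payload)

    def enqueue(self, pkt) -> bool:
        """Return True if pkt was queued, False if it was dropped."""
        if len(self.pkts) < self.capacity:
            self.pkts.append(pkt)
            return True
        # Queue full: evict a queued low-priority packet if possible.
        for i, (prio, _) in enumerate(self.pkts):
            if not prio:
                del self.pkts[i]
                self.pkts.append(pkt)
                return True
        # No low-priority victim: plain drop-tail on the arrival.
        return False
```

Inverting the `not prio` test gives the CATNIP-Bad control case, which shifts losses onto high-priority packets instead.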
Evaluation of CATNIP
• Simulation: ns-2
• Emulation: use WAN emulation to test a prototype implementation of CATNIP in the Linux kernel of an Apache Web server
Evaluation using simulation
• Network model: 100 clients and 10 servers; each client and server attaches via a 10 Mbps, 5 ms link; RouterC and RouterS share a 1.5 Mbps, 5 ms bottleneck link
Evaluation using simulation (Cont’d)
• Web workload model:
• 10 Web pages
• Use empirically-observed distributions to determine the page size and the number of embedded images
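Heavy-tailed distributions are the usual basis for empirically-derived Web workload models. The sketch below uses Pareto draws purely as placeholders, since the slides do not state which distributions or parameters the study actually used.

```python
import random

def sample_page(rng: random.Random):
    """Draw one synthetic Web page: a base HTML size plus embedded
    image sizes. The Pareto parameters are illustrative placeholders,
    not the empirical distributions from the study."""
    base_size = int(rng.paretovariate(1.2) * 1000)    # HTML bytes, heavy-tailed
    n_images = min(int(rng.paretovariate(1.5)), 20)   # number of embedded images
    image_sizes = [int(rng.paretovariate(1.2) * 500) for _ in range(n_images)]
    return base_size, image_sizes
```

Sampling a fixed set of 10 such pages once, then requesting them repeatedly, reproduces the slide's workload structure.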
Evaluation using simulation (Cont’d)
• Factors and levels: [table omitted]
• Performance metrics:
• the transfer time for each Web page
• the average packet loss
Simulation results
• DropTail routers: mean and standard deviation of transfer times
[Plot: Reno, ECA, Reno/RBPLW, ECA/RBPLW]
• Packet loss: [plot omitted]
• Observations:
• TCP endpoint control algorithms have little advantage to offer.
Simulation results (Cont’d)
• CATNIP-Good routers: mean and standard deviation of transfer times
[Plot: Reno/DropTail, Reno/SPM/Good, ECA/SPM/Good, Reno/SPM/RBPLW/Good, ECA/SPM/RBPLW/Good]
• Packet loss: [plot omitted]
• Observations:
• Adding context-awareness at the IP routers improves both the mean and the standard deviation of Web page transfer times.
• The average packet loss rates with CATNIP-Good are higher than with DropTail routers.
Simulation results (Cont’d)
• CATNIP-Bad routers: mean and standard deviation of transfer times
[Plot: Reno/DropTail, Reno/SPM/Bad, ECA/SPM/Bad]
• Packet loss: [plot omitted]
• Observations:
• Packet losses are shifted to the high-priority TCP packets; that is, CATNIP-Bad throws away the “wrong” packets at the “wrong” time, therefore making matters worse.
Simulation results (Cont’d)
• CATNIP-RED routers: mean and standard deviation of transfer times
[Plot: Reno/DropTail, Reno/RED, ECA/RED, Reno/SPM/CATNIP-RED, ECA/SPM/CATNIP-RED]
Observations: • Reno and ECA perform similarly in almost all cases. • The effect of CATNIP-RED is greater than the effect of ECA.
Experimental Implementation and Evaluation
• Experimental environment:
• WAN emulator: IP-TNE (Internet Protocol Traffic and Network Emulator)
• Web server: Apache Web server (version 1.3.19-5) running on top of a modified Linux 2.4.16 kernel
• The implementation focused on the SPM feature only
• Network model: 100 clients emulated by the WAN emulator, each with a 10 Mbps, 5 ms access link; a 1.5 Mbps, 5 ms bottleneck link between RouterC and RouterS; the Apache Web server as the real endpoint
• Primary factor: buffer size of the bottleneck link (64 KB -- 512 KB)
Conclusions • Not all packet losses are created equal; • A TCP source alone has limited control over Web data transfer performance, even with application-layer information; • The IP layer has a significant influence on Web data transfer performance, particularly when application-layer context information is available; • A simple change to the TCP/IP stack implementation can provide the context information; • Changes to the queue management at routers can provide significant performance advantages for the context-aware TCP/IP.