

A Protocol Offload Framework Applied to a TCP Offload Engine and Web Traffic
Juan M. Solá-Sloan, Ph.D. Candidate in Computer and Information Science and Engineering (CISE), CISE Lab
Advisor: Dr. Isidoro Couvertier






[Figure 1. Tandem queues: Request → Offload Engine → CPU]

ABSTRACT
Protocol offload engines have been proposed as a way to relieve the host CPU of the cost of processing the protocols used in end-to-end communication. However, an increase in performance is not guaranteed once such an engine is installed: on some occasions the operating system of the host can process the protocols faster than the engine, so claiming that the inclusion of an offload engine upgrades the system is elusive. An analytical model that addresses these issues is investigated. The model was validated using an emulator that reproduces the behavior of a TCP Offload Engine (TOE) installed as part of a Web server running Apache 2.2.

PERFORMANCE MEASURES
The utilization of the CPU of a host without an offload engine (OE) depends on Xa, the random variable (r.v.) of the time the Web server (WS) application takes to initiate a session, and on Xp, the r.v. of the time it takes to convert the requested objects into packets of protocol P. A host with the support of an OE has a utilization that depends instead on Xo, the r.v. of the time it takes to process the communication between the CPU and the OE at the CPU of the host. If E[Xo] > E[Xp], then the OE degrades the system. The first queue also contributes a delay of its own.

PERFORMANCE ANALYSIS
Figure 3 presents the utilization estimated by the probabilistic model and the actual utilization measured on the test setup; the non-TOE case is also shown.

PROBLEM DESCRIPTION
With the advent of multi-core, multi-threaded processors and new technologies in outboard processors, a new debate has arisen over where protocol processing should be performed [1].
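The equation images from the PERFORMANCE MEASURES section did not survive this transcript. The following is a plausible reconstruction, not the poster's own typesetting: it assumes Poisson request arrivals at rate λ (consistent with the M/M/1 first queue of the model overview) and uses Xq and Xc, defined later in the transcript, for the engine-side service times.

```latex
% CPU utilization without an offload engine (rho = lambda E[S]):
\[ \rho_{\mathrm{cpu}} = \lambda \left( E[X_a] + E[X_p] \right) \]
% With an OE, packetization time Xp is replaced by the host-side
% communication overhead Xo:
\[ \rho_{\mathrm{cpu}}^{\mathrm{OE}} = \lambda \left( E[X_a] + E[X_o] \right) \]
% hence the OE degrades the host exactly when E[Xo] > E[Xp].
% Delay of the first (M/M/1) queue, with mu_1 = 1 / (E[Xa] + E[Xo]):
\[ T_{\mathrm{cpu}} = \frac{1}{\mu_1 - \lambda} \]
% Delay of the second (M/G/1) queue via Pollaczek-Khinchine,
% with engine service S = Xq + Xc:
\[ T_{\mathrm{oe}} = E[S] + \frac{\lambda \, E[S^2]}{2\left(1 - \lambda E[S]\right)},
   \qquad S = X_q + X_c \]
% The poster takes the overall delay as the slower of the two stages:
\[ T = \max\!\left(T_{\mathrm{cpu}},\, T_{\mathrm{oe}}\right) \]
```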
Interactions between the host and the offload engine (OE) are independent of the underlying network technology, but a poorly designed interface between the host and the OE is likely to squander most of the performance benefit brought by protocol offload. The research presented in this poster contributes by abstracting this characteristic for a proper protocol offload implementation: our approach captures the interface overheads by folding them into the analytical model. In our case study, the model is validated against a mixture of a real implementation on the Web server and an emulation of an OE on an external PC. The experimental setup includes a study of the behavior of the host CPU when confronted with different file classes, using an approach similar to [2]. This research presents novel findings about the pros and cons of protocol offloading and the environment for its implementation.

Figure 3. Utilization of the CPU (files 10-100K)

Figure 4 presents the delay estimated by the model and the response time obtained by the benchmark. The delay in the second queue depends on Xq, a r.v. of the time it takes to process the protocols at the OE, and on Xc, the r.v. of the time it takes to process the communication between the CPU and the OE at the OE. The overall delay is then max(Tcpu, Toe). If the utilization of the CPU is lower than before and the delay is similar or lower, then the inclusion of the OE is beneficial to the system.

Figure 4. Delay vs. response time (files 10-100K)

FUTURE WORK
Further analysis of the results is needed. A Kolmogorov-Smirnov analysis will be used to formalize our findings and make our results more robust, and a Q-Q plot is being studied as a candidate for further analysis. An article is also being written and will be submitted, to disseminate our investigation farther than previous works [6,7].

EXPERIMENTAL SETUP
To validate our model, a test bed was set up.
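Alongside the physical test bed, the tandem model itself can be sanity-checked with a short discrete-event simulation. The sketch below is illustrative only: the function name, the rates, and the exponential second stage are assumptions (the poster's second queue is M/G/1, so any service distribution could be plugged in).

```python
import random

def simulate_tandem(lam, mean_s1, mean_s2, n=20000, seed=1):
    """Simulate the poster's tandem queues: an M/M/1 first queue
    (host CPU) feeding a single-server second queue (offload engine).
    Returns (first-stage utilization, mean end-to-end delay)."""
    rng = random.Random(seed)
    t = 0.0           # arrival clock
    done1 = 0.0       # departure time of the previous job from queue 1
    done2 = 0.0       # departure time of the previous job from queue 2
    busy1 = 0.0       # total service time accumulated at queue 1
    total_delay = 0.0
    for _ in range(n):
        t += rng.expovariate(lam)            # Poisson arrivals
        s1 = rng.expovariate(1.0 / mean_s1)  # exponential service (M/M/1)
        done1 = max(t, done1) + s1           # FIFO single server
        busy1 += s1
        s2 = rng.expovariate(1.0 / mean_s2)  # second-stage service
        done2 = max(done1, done2) + s2
        total_delay += done2 - t             # sojourn through both queues
    return busy1 / done1, total_delay / n
```

For example, with λ = 50 requests/s, E[Xa] ≈ 10 ms of host service and ≈ 5 ms of engine service, the first-stage utilization settles near λ·E[S] = 0.5, matching the analytical form.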
The test bed is composed of two PCs that emulate a single host. One machine acts as the WS and the other as the TOE device (see Figure 2). The PC that acts as the TOE device is called the front-end PC (FEPC). These two nodes are connected via a dedicated line, and the WS is connected to the Internet through the FEPC, which emulates the TOE with a TCP Offload Engine Emulator (TOE-Em). Two more computer nodes were used to generate the workload and the requests sent to the test bed.

PROBABILISTIC MODEL OVERVIEW
Our system is modeled as a network of queues (see Figure 1). The first queue is modeled as an M/M/1 queue, while the second queue is modeled as an M/G/1 queue. By contrast, a host without an offload engine has been modeled in previous works as a single M/G/1 queue [3,4,5]. The main performance metrics on which this investigation centers are the utilization of the CPU of the host acting as a Web server and the delay of the whole system. Examining the utilization alone does not guarantee a performance increase, even when the utilization of the CPU has been reduced.

REFERENCES
[1] HyongYoub Kim and Scott Rixner. Connection handoff policies for TCP offload network interfaces. Operating System Design and Implementation, 1(1):293-306, November 2006.
[2] Yiming Hu, Ashwini Nanda, and Qing Yang. Measurements, analysis and performance improvements of the Apache Web server. In Proceedings of the IPCCC'99 Performance Computing and Communications Conference, pages 261-267, February 1999.
[3] Jianhua Cao, Mikael Andersson, Christian Nyberg, and Maria Kihl. Web server performance modeling using an M/G/1/K*PS queue. In ICT 2003: 10th International Conference on Telecommunications, volume 2, pages 1501-1506, March 2003.
[4] Mikael Andersson, Jianhua Cao, Christian Nyberg, and Maria Kihl. Performance modeling of an Apache Web server with bursty arrival traffic. In IC '03: Proceedings of the International Conference on Internet Computing, pages 508-514, June 2003.
[5] Chang Heng Foh, Moshe Zukerman, and Juki Wirawan Tantra. A Markovian framework for performance evaluation of IEEE 802.11. IEEE Transactions on Wireless Communications, 6(4):1276-1285, April 2007.
[6] J. M. Solá-Sloan and Isidoro Couvertier. A parallel TCP/IP offload framework for TOE devices. In Proceedings of the Applied Telecommunications Symposium (SCS ATS '03), pages 115-121, Orlando, Florida (USA), April 2003.
[7] J. M. Solá-Sloan and Isidoro Couvertier. A parallel TCP/IP offloading framework for a TCP/IP offloading implementation. In Proceedings of IP Based System on Chip Design, Grenoble (France), October 2002.

An efficient offload engine reduces CPU utilization while maintaining a similar or lower response time for the clients in service; if the delay of the system is greater once the offload engine is added, its inclusion degrades the whole system.

Figure 2. Test setup

This research combines notions from different benchmarks to stress our test bed in different ways. Our target is to study how an offloaded host behaves for different types of requested objects. Webstone 2.5 was used to test our system; since Webstone 2.5 is bundled with a limited file set, SURGE was used to generate the set of files for our test bed.
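The Kolmogorov-Smirnov analysis proposed under FUTURE WORK would compare the empirical distribution of measured response times against the delays predicted by the model. A minimal sketch of the two-sample K-S statistic in pure Python (the function name and the samples are illustrative, not part of the poster):

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest vertical
    gap between the empirical CDFs of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)
    i = j = 0
    gap = 0.0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1
        elif b[j] < a[i]:
            j += 1
        else:            # tie: both empirical CDFs step together
            i += 1
            j += 1
        gap = max(gap, abs(i / len(a) - j / len(b)))
    return gap
```

A small statistic means the model tracks the measurements closely; in practice, scipy.stats.ks_2samp computes the same statistic and additionally returns a p-value to formalize the comparison.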
