
TPACT: the Transparent Proxy Agent Control proTocol

  1. TPACT: the Transparent Proxy Agent Control proTocol Presented to CS558 May 7, 1999 Alberto Cerpa & Jeremy Elson

  2. outline of our talk • Motivations • Related Work • Design & Standardization Effort • Implementation Details • Future Work

  3. Motivations for TPACT

  4. transparent caches [Diagram: clients on a congested, slow, and/or expensive ISP network reach the Internet (a large backbone ISP) and server farms; transparent caches sit in the path.] Transparent Caches: Reduce traffic without client configuration

  5. typical cache setup [Diagram: an ISP user network connects through a router and an L4 switch to the Internet (large backbone ISP); the proxy caches hang off the L4 switch.] Critical observation: the cache usually knows what should be intercepted, but the switch is the interceptor

  6. the role of TPACT [Diagram: proxy caches (or any other type of transparent proxy) connected to an L4 switch.] TPACT allows the cache and switch to exchange control traffic

  7. what control traffic? • When caches come up, they can tell the switch: “add me to your cache group” • Periodic KEEPALIVEs let switches stop sending work to dead caches immediately • Caches (and switches) can say “I’m busy -- stop sending work to me.” • Caches can tell switches to allow direct connections for clients (e.g., on auth failure)
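A minimal sketch, in C, of how these control messages could be modeled; the enum and struct names are illustrative assumptions, not identifiers from the TPACT draft.

    /* Hypothetical representation of TPACT control messages; names and
     * fields are illustrative only, not taken from the Internet-Draft. */
    #include <stdint.h>

    enum tpact_msg_type {
        TPACT_HERE_I_AM,   /* cache -> switch: "add me to your cache group"     */
        TPACT_KEEPALIVE,   /* periodic liveness probe, both directions          */
        TPACT_BUSY,        /* cache -> switch: "I'm busy -- stop sending work"  */
        TPACT_GO_DIRECT    /* cache -> switch: let this client connect directly */
    };

    struct tpact_msg {
        enum tpact_msg_type type;
        uint32_t client_addr;    /* which client to pass through (TPACT_GO_DIRECT) */
        uint16_t service_port;   /* which redirected service this applies to       */
    };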

  8. why write a standard? • There are many switch vendors and many cache vendors -- no one makes both (except Cisco) • An open standard with a good, open-source reference implementation promotes use • Standards are subject to peer review • Doing it right, once, will (hopefully) prevent others from needing to reinvent the wheel

  9. Related Work

  10. wpad: web proxy autodiscovery • Lets clients automatically discover proxies • We’d prefer clients use proxies this way • Transparent Proxies are “just a hack” until clients that understand proxies are deployed • Once proxy-aware clients are around, using WPAD (or similar), transparent proxies -- and TPACT -- will be obsolete • Until then, TPACT and WPAD will probably both be used on network elements

  11. snmp: why not use it? • Initially, it seemed perfect to us -- it’s a generic way for net devices to interoperate • But, we found ourselves redesigning things that were already in TCP. We use TCP’s: • stream demultiplexing • retransmission policy • segmentation & reassembly of large messages • flow control • congestion control • Like ICP designers: “Is SNMP too heavy?”

  12. other related protocols • ICP (Internet Cache Protocol) • Used by caches to create hierarchies • Complementary to TPACT; will run in parallel • Proxied app-level protocols: HTTP, RTSP, etc. • Big influence on design of a proxy, of course • But, not directly related to TPACT

  13. Design and Standardization

  14. the basic design • Turns out that many design parameters come from switch implementation details. • TCP • Hard state • Our own data format (very simple)
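Because TPACT runs over a TCP byte stream, the "very simple" data format presumably needs some framing. A hypothetical header layout, purely as a sketch (the draft's real format may differ):

    /* Hypothetical fixed header preceding each TPACT message on the TCP
     * stream; the field choices are assumptions for illustration. */
    #include <stdint.h>

    struct tpact_header {
        uint8_t  version;  /* protocol version                                  */
        uint8_t  type;     /* message type (e.g. KEEPALIVE)                     */
        uint16_t length;   /* body length in bytes, network byte order          */
        uint32_t seq;      /* sequence number, also used for replay protection  */
    };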

  15. redirection semantics • If you ask the switch to allow a client through, do existing flows break? • We assume that all switches keep per-flow state, and can redirect new connections without breaking old ones. • Multiple services -- Redirection occurs per service.

  16. NAT and GRE • Earlier versions of the protocol included complex NAT queries in case the original IP dest addr was lost. • Why not encapsulate? • Generic Routing Encapsulation to tunnel application packets from proxy to cache • Now - no NAT problems; reduces complexity of design and implementation
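For reference, the base GRE header (RFC 1701) is only four bytes when the optional checksum, key, and sequence fields are absent; a minimal C view of it:

    /* Base GRE header (RFC 1701) with optional fields omitted. */
    #include <stdint.h>

    struct gre_header {
        uint16_t flags_version;  /* flag bits (checksum/key/sequence present) and version */
        uint16_t protocol;       /* EtherType of the payload, 0x0800 for IPv4             */
    };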

  17. authentication • Both sides share a secret (say, a password) • Sender: • appends the secret to its message • calculates an SHA-1 hash • replaces the secret with the SHA-1 • Receiver: • Saves the SHA-1 • Replaces the SHA-1 with the secret • Calculates the SHA-1 (should match) • Sequence numbers to prevent replay attacks • Note: this is authentication, not encryption
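A sketch of the sender side of this scheme using OpenSSL's SHA-1; the buffer handling and the function name are assumptions, not the reference implementation.

    /* Sketch: compute SHA-1(message || secret); the digest then takes the
     * secret's place in the transmitted message. Bounds checks omitted. */
    #include <openssl/sha.h>
    #include <string.h>

    static void tpact_sign(const unsigned char *msg, size_t msg_len,
                           const unsigned char *secret, size_t secret_len,
                           unsigned char digest[SHA_DIGEST_LENGTH])
    {
        unsigned char buf[2048];                   /* assume the message fits  */
        memcpy(buf, msg, msg_len);                 /* message ...              */
        memcpy(buf + msg_len, secret, secret_len); /* ... with secret appended */
        SHA1(buf, msg_len + secret_len, digest);   /* 20-byte SHA-1 digest     */
    }

The receiver reverses the swap: it saves the received digest, puts the shared secret back in its place, recomputes the hash, and compares.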

  18. current status • Internet-Draft done • Major issues decided • Lots of details have been sorted out. • Skeleton reference implementation of the TPACT library done. • Draft will be reviewed by people at NetApp and will be submitted to IETF soon (hopefully)

  19. Implementation of TPACT

  20. library design issues • Who is in charge of checking for ready fds? • give all fds to the application, which will call the appropriate message-handling library routines. • Or, we can do all the fd processing as long as the application calls us periodically. • Or, fork another thread from the library to do the work. (RTOS in the switches may not have a multithreaded model) • We chose the periodic call; best tradeoff between ease-of-use and OS overhead
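A toy illustration of the periodic-call model; the function names are hypothetical, not the library's actual API.

    /* The application owns the event loop and gives the library a time
     * slice each iteration; tpact_periodic() would poll the TPACT fds,
     * run timers, and send KEEPALIVEs. Both functions are stubs here. */
    static void tpact_periodic(void) { /* poll fds, run timers */ }

    static int app_do_work(void)
    {
        static int iterations = 0;
        return ++iterations < 100;      /* pretend work; stop after 100 loops */
    }

    int main(void)
    {
        while (app_do_work())
            tpact_periodic();           /* library gets its slice each pass */
        return 0;
    }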

  21. library functions • All functionality built into the library. • read, write • message parsing • SHA-1 operations • keepalive messages • initialization procedure (with exponential backoff)
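For the initialization procedure, the exponential backoff could look roughly like this sketch; the retry cap and the connect stub are assumptions:

    /* Reconnect with exponential backoff: double the delay after each
     * failed attempt, capped at 64 seconds. tpact_try_connect() is a stub. */
    #include <unistd.h>

    static int tpact_try_connect(void)
    {
        static int attempts = 0;
        return (++attempts < 4) ? -1 : 0;  /* stub: fails three times, then "connects" */
    }

    static void tpact_connect_with_backoff(void)
    {
        unsigned int delay = 1;            /* start at one second */
        while (tpact_try_connect() < 0) {
            sleep(delay);
            if (delay < 64)
                delay *= 2;                /* 1, 2, 4, ... 64 s   */
        }
    }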

  22. issues, issues... • Library is non-blocking (following squid design) • multiple callback functions to pass TPACT-predigested info to the application. • Implementation is not thread safe. If used in such an environment, we need to add synchronization to it. • Message-Chaining API; Incremental Hash
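One way to picture the callback interface; the struct and handler names below are hypothetical, not the library's actual types.

    /* Hypothetical callback registration: the application supplies handlers
     * and the library invokes them with pre-digested TPACT events. */
    struct tpact_callbacks {
        void (*on_direct_ack)(void *cookie, unsigned long client_addr); /* switch ACKed a "go direct" request */
        void (*on_peer_dead)(void *cookie);                             /* KEEPALIVEs stopped arriving        */
    };

    void tpact_set_callbacks(const struct tpact_callbacks *cb, void *cookie);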

  23. Future Work

  24. our best laid plans • Submit draft to IETF and shepherd it through the standardization process • Use our library to build a working prototype system based around TPACT: • Modify Squid to use the client side • Modify FreeBSD to use the server side • Possibly work with vendors to help implementation, or revise the standard

  25. squid implementation • “I am here” upon Squid initialization. • New Event in charge of sending KEEPALIVEs and flow control (based on the number of fds opened). • “Direct” this client to the Internet. • Send HTTP redirect after ACK from the NE (network element).

  26. FreeBSD switch • Use same library • Use the IP filter package to turn FreeBSD into an L4 switch • Use the server-side piece of the TPACT library to have it listen, modify its filter lists appropriately • Slow but useful
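As an example of modifying the filter lists, an IP Filter (ipnat) redirect rule roughly like the one below could send port-80 traffic arriving on an interface to a cache; the interface name, cache address, and ports are placeholders, and the rule would be loaded with ipnat(8).

    # /etc/ipnat.conf -- placeholder interface (fxp0), cache address, and ports
    rdr fxp0 0.0.0.0/0 port 80 -> 10.0.0.2 port 3128 tcp

The server-side piece of the TPACT library would then add or remove rules of this kind as caches announce themselves or report that they are busy.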

  27. that’s all, folks! Thank you!
