Evaluation of Fluke IPC Semantics for Remote IPC
Linus Kamb
Master’s Thesis Defense
December 16th, 1997

IPC
• Inter-process communication: communication between two applications.
[Diagram: client and server communicating through the kernel]

Motivation
• Distributed computing
• Components
• Client-server applications
• Communication is fundamental!
[Diagram: clients communicating with multiple servers]

Motivation
• Communication issues:
  • simple
  • efficient
  • transparent
  • optimized for client-server architectures
[Diagram: clients communicating with multiple servers]

Outline
• Fluke architecture and IPC semantics.
• Implementation of a network IPC system for Fluke.
• Evaluation of issues affecting remote IPC.
• Performance numbers.
• Future research and other issues.

Client-Server IPC
• Communication between client and server.
[Diagram: client and server communicating through the kernel]

Remote IPC
• How is this different from local IPC?
[Diagram: client holding a reference on one node, server port on another; the two kernels are joined by the network]

Thesis Work
• Implementation of remote IPC for Fluke.
• Analysis of the issues affecting remote IPC:
  • Fluke architecture
  • Fluke IPC semantics and mechanisms
  • Identify elements suited to a distributed environment and those which complicate the remote IPC implementation.

Fluke
• Modular architecture; IPC is critical.
• Capability-based:
  • Ports
  • Opaque references
  • Interposition
[Diagram: Fluke IPC through the kernel]

Fluke IPC
• Client connects to server.
• Thread-to-thread connection established.
• Server replies.
• Client can repeat, if desired, and so on.
[Diagram: client’s reference to the server’s port, connected through the kernel]

Remote IPC
• Network IPC system: NetIPC.
• Proxy ports.
[Diagram: client’s reference points to a local proxy port; the NetIPC systems on each node forward across the network to the server’s port]

NetIPC Architecture
[Diagram: on each node, the NetIPC system holds proxy ports and local references; proxy server threads face the client, proxy client threads face the server; the kernels exchange network messages]

Proxy Ports
• Server sends a reference to its port.
• The NetIPC system keeps this local reference.
• The NetIPC system sends a remote reference; the remote NetIPC system creates a proxy port.
• A reference to the proxy port is sent to the client.
[Diagram: the server’s port is exported through the two NetIPC systems; the client ends up with a reference to the local proxy]

Bootstrapping Communication
• Server mounts its port in the file system.
• Lookup IPC returns a reference to the server port.
• NetIPC system exports the local file system.
• A remote lookup returns the reference.
• This creates a local proxy for the server’s port.

Evaluation
• Ports and references.
• General use of references.
• IPC flavors.
• Connections.
• Buffers.

Fluke IPC
• Capability-based messaging:
  • Ports: receive points.
  • Port references: the ability to send to a port.
• Three “flavors” of IPC with different semantics:
  • Fully reliable connection, exactly-once delivery.
  • At-least-once delivery of request.
  • Connectionless, at-most-once delivery.
• Thread-to-thread connections.
• Persistent connections.

Ports and References
• The proxy port mechanism worked for NetIPC.
• References are “opaque”.
• Interposition.
[Diagram: NetIPC interposes a proxy port between the client’s reference and the server’s port]

Capability and Port Transfer
• No explicit port migration in Fluke: good.
  • Port migration is difficult for a distributed system.
  • But its absence makes process migration difficult.
• Reference counting:
  • Needed for garbage collection of ports.
  • Difficult in a distributed system.

Remote File Lookup
• An IPC lookup for each component in the path.
• NetIPC creates a proxy for each lookup.
  • Each proxy is used only for the next lookup, except the last.
[Diagram: client lookups forwarded through NetIPC to the server’s port]

Use of References
• Most Fluke kernel objects can be referenced.
• References of all types are passed in IPC.
• Requires additional external servicing to implement in a distributed environment.
  • The NetIPC system cannot do it alone.
• An inherent limitation to semantically equivalent remote IPC.

Fluke’s IPC Flavors
• Narrow interfaces.
  • Not “option-based”.
• Separate code paths.
  • Cleaner implementation of each path.
• Still provides flexibility.

Connections
• Between specific threads.
  • Required additional demultiplexing of packets.
• Persistent connections.
  • Considered as an optimization.
  • Added a lot of complication.
[Diagram: connected proxy client and proxy server threads exchanging IPC messages; the NetIPC systems are linked by network messages]

IPC Buffer Management
• Scatter/gather support.
  • Avoids a copy for marshaling/unmarshaling.
[Diagram: client send buffers and server receive buffers gathered, with headers, into a single network packet]

Summary

Pluses:
• Ports
• No port migration
• Narrow interfaces
• Buffer management

Minuses:
• General references
• Long connections
• No reference counting

• The port-based model works well for transparent remote IPC.
• The reference counting issue is not resolved.
• Fluke’s generalized references do not extend through NetIPC.
• Keep to a simple interface geared to client-server requirements.

Future Research
• Handling non-port reference transfer:
  • Work in conjunction with external servers.
  • Call-out mechanism.
• Location transparency:
  • Process migration.
• Reference counting.
• Optimizations:
  • Network access.
  • Reliable packet protocol.

Conclusions
• Implemented a remote IPC system extending Fluke local IPC into a distributed environment.
• Evaluated the mechanisms and semantics of Fluke and its IPC system as to their effects on the implementation of remote IPC.

Contributions
• Implemented a remote IPC system extending Fluke local IPC into a distributed environment.
• Evaluated the mechanisms and semantics of Fluke and its IPC system as to their effects on the implementation of remote IPC.
• Designing and implementing a network protocol is not trivial.