XenSocket: VM-to-VM IPC


Presentation Transcript
XenSocket: VM-to-VM IPC (VM: virtual machine; IPC: inter-process communication)

Presented at ACM Middleware: November 28, 2007

John Linwood Griffin (Jagged Technology)
Suzanne McIntosh, Pankaj Rohatgi, Xiaolan Zhang (IBM Research)


What we did: Reduce work on the critical path

Before XenSocket (VM 1 → Domain-0 → VM 2, mediated by Xen): put the packet into a page; ask Xen to remap the page; route the packet in Domain-0; ask Xen to remap the page again.

With XenSocket (VM 1 ↔ VM 2): allocate a pool of pages (once); ask Xen to share the pages (once); then the sender writes into the pool and the receiver reads from the pool.


The standard outline

  • What we did

  • (Why) we did what we did

  • (How) we did what we did

  • What we did (again)


IBM building a stream processing system with high-throughput requirements

Enormous volume of data enters the system

Independent nodes process and forward data objects

Design for isolated, audited, and profiled execution environments


x86 virtualization technology provides isolation in our security architecture

[Diagram: four processing nodes (Node 1 through Node 4), each isolated in its own VM (VM 1 through VM 4) on Xen, exchanging data objects with one another and with other physical nodes.]


Using the Xen virtual network resulted in low throughput at max CPU usage

UNIX socket (Process 1 ↔ Process 2 within one Linux instance): 14 Gbit/s.

TCP socket over the Xen virtual network (VM 1 → Domain-0 → VM 2): 0.14 Gbit/s, with VM 1 and VM 2 each at 100% CPU and Domain-0 at 20% CPU.


Our belief: root causes are Xen hypercalls and network stack

Before XenSocket (VM 1 → Domain-0 → VM 2, via Xen): put the packet into a page; ask Xen to swap pages; the packet is routed in Domain-0; ask Xen to swap pages again.

  • Victim pages must be zeroed
  • Uses 1.5 KB of each 4 KB page
  • May invoke a Xen hypercall after only 1 packet is queued
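In other words, a 1.5 KB packet in a 4 KB page fills only about 37% of each remapped page (1.5 / 4 = 0.375), so well over half of every page-remapping operation moves no payload at all.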


The standard outline

  • What we did

  • (Why) we did what we did

  • (How) we did what we did

  • What we did (again)


XenSocket hypothesis: Cooperative memory buffer improves throughput

With XenSocket (VM 1 ↔ VM 2 on Xen): allocate a 128 KB pool of pages; ask Xen to share the pages; the pages are reused as a circular buffer.

  • Writes are visible immediately
  • No per-packet processing
  • Still requires hypercalls for signaling (but fewer)
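To make the circular-buffer idea concrete, here is a minimal single-producer/single-consumer sketch in C. It is illustrative only, not the released XVMSocket code: the 128 KB size matches the slide, but the struct layout, field names, and the byte-at-a-time copy are assumptions made for clarity.

    /* Illustrative ring buffer over a shared region. In XenSocket the
     * region would be the 128 KB pool of pages shared through Xen grants;
     * here it is a plain byte array so the data path is easy to follow. */
    #include <stddef.h>

    #define POOL_SIZE (128 * 1024)        /* 128 KB shared pool */

    struct ring {
        char   buf[POOL_SIZE];            /* the shared pages */
        size_t head;                      /* next byte the sender writes */
        size_t tail;                      /* next byte the receiver reads */
    };

    /* Sender: copy into the pool. Because the memory itself is shared,
     * the write is visible to the peer immediately, with no per-packet
     * hypercall on the data path. */
    static size_t ring_write(struct ring *r, const char *src, size_t len)
    {
        size_t used  = (r->head + POOL_SIZE - r->tail) % POOL_SIZE;
        size_t space = POOL_SIZE - 1 - used;   /* keep one byte unused */
        size_t n     = len < space ? len : space;

        for (size_t i = 0; i < n; i++)
            r->buf[(r->head + i) % POOL_SIZE] = src[i];
        r->head = (r->head + n) % POOL_SIZE;
        return n;                              /* bytes accepted */
    }

    /* Receiver: copy out of the pool and advance the tail, freeing space
     * for the sender's future writes to wrap around into. */
    static size_t ring_read(struct ring *r, char *dst, size_t len)
    {
        size_t avail = (r->head + POOL_SIZE - r->tail) % POOL_SIZE;
        size_t n     = len < avail ? len : avail;

        for (size_t i = 0; i < n; i++)
            dst[i] = r->buf[(r->tail + i) % POOL_SIZE];
        r->tail = (r->tail + n) % POOL_SIZE;
        return n;
    }

Signaling (waking a blocked peer) would still need a hypercall, e.g. a Xen event channel, as the slide notes, but only when a side actually blocks rather than once per packet.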


Caveat emptor

  • We used Xen 3.0; the latest is Xen 3.1

    • Xen networking is reportedly improved

  • Shared-memory concepts remain valid

  • Released under GPL as XVMSocket: http://sourceforge.net/projects/xvmsocket/

Community is porting to Xen 3.1


Sockets interface; new socket family used to set up shared memory

Server:
    socket(); bind(sockaddr_inet);      [local port #]
    listen(); accept();

    socket(); bind(sockaddr_xen);       [remote VM #]

Client:
    socket(); connect(sockaddr_inet);   [remote address, remote port #]

    socket(); connect(sockaddr_xen);    [remote VM #, remote grant #]

System returns grant # for client
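As a hedged illustration of that setup flow: the sketch below uses a made-up address-family constant AF_XEN and made-up sockaddr_xen field names (remote_domid, remote_grant); the released XVMSocket definitions may differ, and error handling is omitted.

    /* Illustrative only: AF_XEN and struct sockaddr_xen are placeholders
     * standing in for the XenSocket/XVMSocket address family. */
    #include <string.h>
    #include <sys/socket.h>

    #define AF_XEN 40                     /* hypothetical family number */

    struct sockaddr_xen {
        unsigned short sxen_family;       /* AF_XEN */
        unsigned int   remote_domid;      /* peer VM (domain) number */
        unsigned int   remote_grant;      /* grant #, used by the client */
    };

    /* Server side: bind() names the peer domain. Conceptually this is
     * where the shared pool is allocated and a grant # is produced that
     * must reach the client out of band. */
    int xensocket_server(unsigned int client_domid)
    {
        struct sockaddr_xen sa;
        int s = socket(AF_XEN, SOCK_STREAM, 0);

        memset(&sa, 0, sizeof(sa));
        sa.sxen_family  = AF_XEN;
        sa.remote_domid = client_domid;
        bind(s, (struct sockaddr *)&sa, sizeof(sa));
        return s;
    }

    /* Client side: connect() supplies the server's VM # and the grant #
     * received out of band, which lets it map the shared pool. */
    int xensocket_client(unsigned int server_domid, unsigned int grant)
    {
        struct sockaddr_xen sa;
        int s = socket(AF_XEN, SOCK_STREAM, 0);

        memset(&sa, 0, sizeof(sa));
        sa.sxen_family  = AF_XEN;
        sa.remote_domid = server_domid;
        sa.remote_grant = grant;
        connect(s, (struct sockaddr *)&sa, sizeof(sa));
        return s;
    }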


After setup, steady-state operation needs little (if any) synchronization

[Diagram: VM 1 calls write("XenSocket"); the bytes X-e-n-S-o-c-k-e-t land in the shared buffer; VM 2 calls read(3) and gets back "Xen".]

If receiver is blocked, send signal via Xen
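Continuing the illustrative descriptors from the setup sketch above, the steady-state data path is just ordinary write() and read() calls; the strings mirror the slide. (In reality the two descriptors live in different VMs; they are combined in one function here only to keep the example self-contained.)

    #include <stdio.h>
    #include <unistd.h>

    /* sender_fd and receiver_fd are the sockets from the sketch above. */
    void demo(int sender_fd, int receiver_fd)
    {
        char buf[4];

        write(sender_fd, "XenSocket", 9);   /* bytes land in the shared pool */
        read(receiver_fd, buf, 3);          /* copies "Xen" out of the pool  */
        buf[3] = '\0';
        printf("%s\n", buf);                /* prints: Xen */
    }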


Design goal (future work): Support for efficient local multicast

Future writes wrap around; block on first unread page

[Diagram: VM 1 writes "XenSocket" once into the shared buffer; VM 2 calls read(3) and gets "Xen" while VM 3 calls read(5) and gets "XenSo": each receiver consumes the same buffer at its own pace.]
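One way to read "block on first unread page" in ring-buffer terms: with several readers over one pool, the writer's free space is bounded by the slowest reader. The helper below is a sketch under that assumption; per-reader tail pointers are not described on the slide and are introduced here only for illustration.

    #include <stddef.h>

    #define POOL_SIZE (128 * 1024)

    /* Free space available to the multicast writer: limited by the reader
     * with the most unread data (the "first unread page" the writer must
     * not overwrite). head and tails[] are offsets into the shared pool. */
    static size_t multicast_free_space(size_t head, const size_t *tails, int nreaders)
    {
        size_t max_unread = 0;
        for (int i = 0; i < nreaders; i++) {
            size_t unread = (head + POOL_SIZE - tails[i]) % POOL_SIZE;
            if (unread > max_unread)
                max_unread = unread;       /* slowest reader so far */
        }
        return POOL_SIZE - 1 - max_unread; /* keep one byte unused */
    }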


The standard outline

  • What we did

  • (Why) we did what we did

  • (How) we did what we did

  • What we did (again)


Figure 5: Pretty good performance

[Figure 5: Bandwidth (Gbit/s) versus message size (KB, log scale, roughly 0.5 KB to 16 KB). UNIX socket: about 14 Gbit/s; XenSocket: about 9 Gbit/s; INET socket: about 0.14 Gbit/s.]


Figure 6: Interesting cache effects

[Figure 6: Bandwidth (Gbit/s) versus message size (MB, log scale, 0.01 MB to 100 MB) for the UNIX socket, XenSocket, and INET socket.]


Throughput limited by CPU usage; Advantageous to offload Domain-0

TCP socket over the Xen virtual network (VM 1 → Domain-0 → VM 2): 0.14 Gbit/s, with VM 1 and VM 2 each at 100% CPU and Domain-0 at 20% CPU.

XenSocket (VM 1 ↔ VM 2): 9 Gbit/s, with VM 1 and VM 2 each at 100% CPU and Domain-0 at only 1% CPU.


Adjusted communications integrity and relaxing of pure VM isolation

Possible solution: Use a proxy for pointer updates along the reverse path

But now this path is bidirectional(?)

[Diagram: VM 1 sharing one multicast buffer with VM 2 and VM 3.]

Any master's students looking for a project?


Potential memory leak: Xen didn’t (doesn’t?) support page revocation

Setup: VM 1 shares pages with VM 2.

Scenario #1: VM 2 releases the pages.

Scenario #2: VM 2 never releases them, and VM 1 cannot safely reuse the pages (Xen offers no way to revoke the grant).


Xen shared memory: Hot topic!

  • XenSocket: Middleware'07 | make a better virtual network
  • MVAPICH-ivc, Huang and colleagues (Ohio State, USA): SC'07 | What we did, but with a custom HPC API
  • XWay, Kim and colleagues (ETRI, Korea): '07 | What we did, but hidden behind TCP sockets
  • Menon and colleagues (HP, USA): VEE'05, USENIX'06 | Make the virtual network better


Conclusion: XenSocket is awesome

Shared memory enables high-throughput VM-to-VM communication in Xen

(a broadly applicable result?)

John Linwood Griffin, John.Griffin @ JaggedTechnology.com
Also here at Middleware: Sue McIntosh

