
Quake Data Distribution System


Presentation Transcript


  1. Quake Data Distribution System Alan Jones, Stephen Jacobs, David Oppenheimer

  2. QDDS • Goal • Distribute rapid notification of earthquake parametric information from all regional networks and NEIC to both sophisticated earthquake recipients and the general public. • Blame • Designed and written by Stephen Jacobs in 1998 • Design input from SM, DO, RS, WE • Supported by Alan Jones since 2000 • Retired IBM'er, USGS volunteer • Affiliated with SUNY Binghamton • Administered by DO and SM

  3. Clients/Contributors • Delivers information to • QDM (Recenteqs systems) • CISN Display • Software at RSNs for teleseismic waveform triggers • EqInTheNews • PG&E GIS system • CBS News (Seismic/Eruption) • 3 distribution hubs (2 at USGS and 1 at IRIS) • 13 permanent leaves, 21+ transient

  4. Design Strategy • Based on an expanded client-server model. • In the classic client-server model, the client makes requests to a server, and the server merely responds to each request. Here, the server also initiates contact when it has data to send, as well as responding to requests. • "Hub" is the term for the server-like systems at the center of the model; "leaf" for the client-like systems. • Each seismic network wishing to participate in the data exchange runs a "leaf". • Leaves, in general, talk to two or more hubs. • A hub does not care about data content (usually)

  5. Strategy continued • Subscription to QDDS is self-initiated for users who only need to receive information • Requires no intervention by maintainers of QDDS distribution systems. • Information delivery is designed to be robust in the presence of network outages and support large numbers of clients. • Written in Java and therefore cross-platform portable

  6. Strategy continued • When a seismic network has data on an event to share, it puts the data in a file in a spool directory, which the QDDS software polls every second. • The leaf picks up the contents of the file in the spool directory and uploads it to every hub over a TCP connection. • When a hub receives the message, it • stores it in its output and storage directory • assigns it a unique message number, independently of other hubs' numbering • initiates distribution of the message to all leaves • Logs everything
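The leaf-side spool handoff described above can be sketched as a single polling pass: read each file dropped into the spool directory, hand its contents back for upload, and delete the consumed file. This is a minimal sketch; the class and method names are illustrative, not from the QDDS source.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.*;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the leaf's spool poll (QDDS runs something
// like this roughly once per second).
class SpoolPoller {
    private final Path spoolDir;

    SpoolPoller(Path spoolDir) {
        this.spoolDir = spoolDir;
    }

    // One polling pass: collect each spooled message body for upload,
    // then remove the file so it is not picked up again.
    List<String> poll() {
        List<String> messages = new ArrayList<>();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(spoolDir)) {
            for (Path f : files) {
                if (Files.isRegularFile(f)) {
                    messages.add(Files.readString(f)); // message body to send to hubs
                    Files.delete(f);                   // consumed; clear from spool
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return messages;
    }
}
```

Each returned message would then be uploaded to every hub over TCP, as the slide describes.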

  7. Permanent leaves • Known to the hub through an entry in the hub's comm.lst • Keeps trying to upload for a period of time if it does not get an immediate connection. • Can stop running for any length of time with no problem. When it starts back up, it will again receive events; any events lost during its downtime are requested when the leaf comes back up. • A permanent leaf receives "alive" messages but does not send them. When it receives an alive message, it responds with an "Alive Reply" message containing the Version and Level of the copy of QDDS it is running.
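The "keeps trying to upload for a period of time" behavior amounts to a bounded retry loop. A minimal sketch, assuming a hypothetical attempt callback and illustrative timing parameters (not from the QDDS source):

```java
import java.util.function.BooleanSupplier;

// Hypothetical sketch of a permanent leaf's bounded upload retry.
class RetryUpload {
    // Repeatedly invokes the upload attempt until it succeeds or the
    // retry budget is exhausted. Returns true on success.
    static boolean uploadWithRetry(BooleanSupplier attempt,
                                   int maxAttempts,
                                   long delayMillis) throws InterruptedException {
        for (int i = 0; i < maxAttempts; i++) {
            if (attempt.getAsBoolean()) {
                return true;          // connected and uploaded
            }
            Thread.sleep(delayMillis); // back off before the next try
        }
        return false; // give up for now; recovery catches any gap later
    }
}
```

A real implementation would bound by wall-clock time rather than attempt count, but the structure is the same.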

  8. Transient leaves • Can only receive messages • Can self-(re)register with hubs • Not known to the hub when the hub starts. When the transient leaf starts, it issues a "request to register" command to the hub. When the hub accepts the request, it creates a new file called "comm.lst.trans" and writes the leaf's data into it. • Sends alive messages periodically to each hub to which it is attached. If a period of time goes by with no alive messages from a transient leaf, it is removed from the hub's in-memory list of leaves, and the comm.lst.trans file is rewritten with the leaf removed.
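The hub-side bookkeeping above can be sketched as a timestamp table: each alive message refreshes the sender's entry, and a periodic prune drops leaves that have been silent past the timeout (after which comm.lst.trans would be rewritten). Class, method names, and the timeout value are illustrative assumptions.

```java
import java.util.*;

// Hypothetical sketch of the hub's in-memory transient-leaf list.
class TransientLeafTable {
    private final Map<String, Long> lastAlive = new HashMap<>();
    private final long timeoutMillis;

    TransientLeafTable(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // "Request to register": start tracking the leaf.
    void register(String leafId, long nowMillis) {
        lastAlive.put(leafId, nowMillis);
    }

    // Periodic alive message refreshes the leaf's timestamp.
    void onAlive(String leafId, long nowMillis) {
        lastAlive.put(leafId, nowMillis);
    }

    // Drop leaves silent longer than the timeout; returns what was removed
    // so the caller can rewrite comm.lst.trans accordingly.
    Set<String> prune(long nowMillis) {
        Set<String> removed = new HashSet<>();
        lastAlive.entrySet().removeIf(e -> {
            if (nowMillis - e.getValue() > timeoutMillis) {
                removed.add(e.getKey());
                return true;
            }
            return false;
        });
        return removed;
    }

    Set<String> activeLeaves() {
        return lastAlive.keySet();
    }
}
```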

  9. Transmission • If a message is < 64 KB, the hub sends UDP datagrams • Minimizes use of CPU and I/O • Low connection overhead • UDP datagrams are not guaranteed to reach all leaves • Larger files are sent via TCP • TCP socket connections are only attempted once during message distribution
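The size-based transport choice reduces to a single threshold check. A sketch of that decision, with the 64 KB limit taken from the slide and the enum and method names as illustrative assumptions:

```java
// Hypothetical sketch of the hub's transport selection.
class TransportChooser {
    enum Transport { UDP, TCP }

    static final int UDP_LIMIT = 64 * 1024; // bytes, per the slide

    static Transport choose(int messageBytes) {
        // Small messages go out as UDP datagrams: cheap CPU/IO, low
        // connection overhead, but unreliable, so the ID-gap recovery
        // mechanism fills in any losses. Larger messages use TCP.
        return messageBytes < UDP_LIMIT ? Transport.UDP : Transport.TCP;
    }
}
```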

  10. Error Detection/Recovery • A leaf determines that it has missed messages in one of two ways • It receives, for example, message 3 followed by message 5 (a gap in the message IDs). • Hubs send out an "alive"/heartbeat message if no events have been dispatched in the last few minutes; the "alive" message contains the ID of the last message dispatched. • The leaves keep track of which messages have been missed and, every few minutes, send out a request for each currently missing message by its message number (ID). • The hub responds either with the data message corresponding to that ID, or with a message saying "I received your request, but I do not have a message with that ID".
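The gap-detection side of this can be sketched as an ID tracker: every incoming ID (from a data message or from the last-ID field of an alive message) marks any skipped IDs as missing, and a re-sent message clears its gap. A minimal sketch with illustrative names:

```java
import java.util.*;

// Hypothetical sketch of a leaf's missed-message tracking.
class GapTracker {
    private long lastSeen = -1;
    private final SortedSet<Long> missing = new TreeSet<>();

    // Call for every message ID observed, whether from a data message
    // or from the last-dispatched ID carried in an "alive" message.
    void onMessageId(long id) {
        if (lastSeen >= 0) {
            for (long m = lastSeen + 1; m < id; m++) {
                missing.add(m); // IDs skipped over form the gap
            }
        }
        missing.remove(id); // a re-sent message fills its own gap
        lastSeen = Math.max(lastSeen, id);
    }

    // IDs to re-request from a hub on the next periodic pass.
    SortedSet<Long> missingIds() {
        return new TreeSet<>(missing);
    }
}
```

On the periodic pass, the leaf would send one request per ID in `missingIds()` and drop an ID permanently if the hub replies that it no longer holds that message.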

  11. Limitations • Cannot pass through firewalls without explicit configuration. • Message delivery is not guaranteed • Primitive security, though Alan Jones has added public/private keys to the system. • No subscriptions (clients get everything) • No installation package • 1-sec polling, but supports a callback method to receive messages directly • Protocol largely undefined
