
SenseIT Integration & Experimentation SenseIT PI Meeting October 8, 1999



Presentation Transcript


  1. SenseIT Integration & Experimentation SenseIT PI Meeting October 8, 1999

  2. Agenda • What Is Unique About SenseIT • System Architecture • Integration Process • FY00 Experimentation

  3. What Is Unique About SenseIT [diagram] SenseIT provides a wide range of user benefits through integrated information technologies; the diagram pairs “Unique User Benefits Provided by SenseIT” with the “Enabling Information Technology” behind each.

  4. System Architecture

  5. Sensor Node Architecture [block diagram] • Hardware components: sensor HW, GPS, tamper sensor, communications hardware • Software modules: Data Acq. Module (Data Acq. API), SigP & InfoP processing, Function Manager, Mobile Code, Security Manager, data-interest subscriptions • Repositories behind the Data Management Interface: Time Series, Detect/Classify, High Level Info, Device Status (Local & Neighbor) • Platform & O/S independent distributed services: message handling, network routing, flow control services, experiment support and control • Communications API

  6. Data Management Interface Layer [block diagram] • Data repositories and functional managers: Node Data Repository Access [TS, DC, HI, DS], Data Sharing & Trigger Manager, Data Subscribe Manager [DI], Mobile Code Manager [MC], Query Engine/Proxy, Function Request Handler (read and write access) • Security services span the layer • The layer sits above the network and communications layers
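The repository-plus-subscription pattern this layer describes can be sketched as a minimal publish/subscribe store: writes to a named repository trigger callbacks registered by subscribers. The class and method names below are illustrative assumptions, not the actual SenseIT interface.

```python
from collections import defaultdict


class NodeDataRepository:
    """Minimal sketch of node data repositories with data-interest
    subscriptions. Repository names follow the slide's codes, e.g.
    'TS' (time series), 'DC' (detect/classify), 'HI' (high level
    info), 'DS' (device status). Hypothetical interface for
    illustration only.
    """

    def __init__(self):
        self._data = defaultdict(list)         # repo name -> records
        self._subscribers = defaultdict(list)  # repo name -> callbacks

    def subscribe(self, repo, callback):
        """Register a data-interest subscription on a repository."""
        self._subscribers[repo].append(callback)

    def write(self, repo, record):
        """Store a record and trigger all subscriptions on the repo."""
        self._data[repo].append(record)
        for callback in self._subscribers[repo]:
            callback(record)

    def read(self, repo):
        """Return a copy of all records held in the repository."""
        return list(self._data[repo])
```

A query engine or trigger manager would sit on top of `write`/`read`; the point of the sketch is only the separation between stored repositories and the subscription path.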

  7. User Platform Architecture [block diagram] • User platform: Operator GUI, Experiment Monitor & Control GUI, DB Language Query Generator, DB Proxy Manager, Data Access & Store (Time Series, Detect/Classify, High Level Info, Device Status), Function Request Generator, Mobile Code Manager, Security Manager and security services, User Platform Communications API • Ethernet connection carries TCP/IP-wrapped sensor net messages to the gateway node • Gateway node: gateway passthrough, message handling, mobile code, network routing, flow control services, communications hardware

  8. Integration Process

  9. Philosophy • Need to keep a working system at all times • Facilitates research and experimentation • Strive towards hardware independence • Avoid limiting choices, hardware will evolve over time • Ability to juggle between research and experiments • Developers always invested in the process

  10. Schedule • Integration schedule driven by experiment schedule • Experiments are critical to the success of the program • Detailed schedule in-progress • Identify dependencies and requirements now • Need resource estimates (CPU, memory) ASAP • Dependency specifications (at least) for December • Staggered schedule to reduce risk, ease development and integration • Integration task impossible if everyone delivers at once • Developers’ access to previously tested working code reduces problems

  11. Communications • Weekly Telecons • Tuesdays at 2P Eastern, 11A Pacific (tent.) • Starts October 12 (tent.) • Raise issues to the community • Identify schedule problems early • Quarterly Meetings • Jan 2000 • Preparation for first experiments • Use senseit_all, senseit_pi, senseit_bbn • Integration website for documentation, other information (http://javamap.bbn.com:4840)

  12. Deliveries • Software • Needs to meet predefined system requirements to support experiments • May only deliver a subset of research development - research not completely driven by operational requirements • Upload software to website to begin integration testing process • Website holds repository of tested software available for download • Hardware • Delivered hardware catalogued and a sample set aside for configuration management

  13. Testing • Test procedures collaboratively developed ahead of delivery • Must address experiment needs • Can address research, other issues • Developers involved with testing process • Only people with complete understanding of delivered software • Testing complete and software available to community faster • Please, no software development during testing • Of course, bug fixes are OK • Tests run over two-week period

  14. Testing (continued) • Successful test completion • Interfaces with other delivered software • Meets predefined system functionality • Then released to others on website • Documented test results also available on test site • Debugging libraries • Need developers to contribute to and use debugging libraries • Lots of players in a small box • Test with and without debug

  15. FY00 Experimentation

  16. Experimentation Overview • Experimentation Provides • Data for on-going development • Opportunities to showcase technologies in operational setting • Schedule Framework • Notional Plan • 1 field experiment / FY • 1 or more lab experiments / FY • Lab experiment is precursor to field experiment • FY00 Experiment • Initial plan in place • FY01&02 Experiments • Depend on achievable development schedule

  17. FY00 Experiment Goals • Wring out basic end-to-end functionality & operability • Establish performance baseline • e.g. sensing performance, network traffic, latency, scaling, survivability, etc. • Highlight unique features (expand as development permits) • User Benefits • Multiple users/tasking, dynamic (re)tasking, basic collaborative processing • Enabling Technologies • Declarative languages, mobile code, advanced routing techniques, collaborative processing, tactical user interface • Gather data to aid PIs in further development efforts. The program must balance experiment “reach” vs. “risk”, which requires prioritization.

  18. FY00 Experiment Scenario - Overview • Transporter Erector Launchers (TELs) are on the move • Command needs to determine when and where they are moving • Plan • Deploy sensor groups over a wide area • Use sensors to determine TEL traffic patterns • Send in Special Operations Force (SOF) to confirm identification and destroy TELs

  19. FY00 Experiment Scenario - Geography [map: roads] The scenario centers on a road intersection and a nearby “chokepoint”, e.g. a valley or village.

  20. FY00 Experiment Scenario - Sensor Groups • Sensors deployed and tasked by command with TEL/convoy surveillance • Single function • Single task [map: sensor fields Group #1 and Group #2 (10-15 nodes each) with surveillance coverage]

  21. FY00 Experiment Scenario - Detected TEL Traffic [map: apparent TEL traffic] Surveillance reveals that TELs frequently pass through the intersection to the “chokepoint”.

  22. FY00 Experiment Scenario - SOF Insertion Sensors continue TEL convoy surveillance for command. • SOF team inserts at chokepoint and tasks both sensor groups for full surveillance (truck, small vehicles, personnel) • Mobile code • Multi-tasked sensors

  23. FY00 Experiment Scenario - Ambush • Group 1 sensors detect TELs moving towards the chokepoint and alert the SOF team • Team achieves positive identification, destroys the targets, and extracts

  24. Baseline Scenario Provides • A test of basic “end-to-end” operation in a realistic setting • A showcase of unique SenseIT benefits & technologies • Multiple users/tasking, dynamic (re)tasking, basic collaborative processing • Declarative languages, mobile code, advanced routing techniques, collaborative processing, tactical user interface • Experimental data & performance to aid PIs, e.g. • Vehicle & foot traffic signatures • Detection/localization/tracking performance with different sensor combinations, target types and densities, and noise events • Statistics on network traffic loading and latency • Low-risk expansion of scope as development permits

  25. Execution • Potential Locations • 29 Palms • Aberdeen Proving Grounds • Other • Traffic “Targets” • Heavy trucks, tanks, light vehicles (e.g. HMMWVs), dismounted personnel, other • Target Timeframe • August 2000 • Experimentation Practicalities • Use battery eliminators (i.e. nodes are AC powered) • Use Ethernet or other hard-wired connection to collect data • Some level of experiment monitoring resides on each node

  26. FY00 - Notional Surveillance Processing [diagram] • Each node (#1 through #N) runs single-node detection & identification over its sensors (#1 through #M) • Node outputs feed multi-node collaborative localization • Localization results feed tracking

  27. Single Node Detection & Identification [processing diagram] • Assume target signatures are dominated by unique tonals or lines • Acoustic time-series (mic) and seismic time-series (geo-phone) each undergo narrowband processing: FFT, normalization & peak picking, producing a local “intensity” and an acoustic/seismic line list • Line lists are compared to a library of known targets to yield a target ID • Observations are exchanged with neighboring nodes (e.g. “Detect Target X, Level = Y”) • Independent process implemented on each node
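The narrowband chain on this slide (FFT, normalization & peak picking, then comparison against a library of known targets) might look roughly like the sketch below. The window choice, SNR threshold, peak count, and matching tolerance are illustrative assumptions, not the program's actual parameters.

```python
import numpy as np


def narrowband_line_list(time_series, fs, n_peaks=5, snr_db=6.0):
    """Extract a 'line list' of dominant tonal frequencies (Hz) with
    their SNR in dB. Hypothetical sketch of the FFT -> normalization
    -> peak-picking step; threshold and window are assumptions."""
    windowed = time_series * np.hanning(len(time_series))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(time_series), d=1.0 / fs)
    # Normalize against the median background level.
    background = np.median(spectrum) + 1e-12
    snr = 20.0 * np.log10(spectrum / background + 1e-12)
    # Keep local maxima that exceed the SNR threshold.
    peaks = [i for i in range(1, len(snr) - 1)
             if snr[i] > snr[i - 1] and snr[i] > snr[i + 1]
             and snr[i] > snr_db]
    peaks.sort(key=lambda i: snr[i], reverse=True)
    return [(freqs[i], snr[i]) for i in peaks[:n_peaks]]


def identify_target(line_list, library, tol_hz=2.0):
    """Compare detected lines to a library of known target tonals and
    return the best-matching target ID (library format assumed)."""
    best_id, best_score = None, 0
    for target_id, tonals in library.items():
        score = sum(1 for f, _ in line_list
                    if any(abs(f - t) < tol_hz for t in tonals))
        if score > best_score:
            best_id, best_score = target_id, score
    return best_id
```

The local “intensity” exchanged with neighbors could simply be the summed power of the detected lines; the same two functions would run independently on the acoustic and seismic channels.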

  28. Multi-Node Collaborative Localization • Nodes share detection/intensity messages • All nodes that hold the target estimate target location • Candidate localization schemes • Location of maximum observed intensity • Geographically weighted sum of measured intensities • “Best fit” of measured intensities to very simple source/propagation model
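One of the candidate schemes above, the geographically weighted sum of measured intensities, reduces to an intensity-weighted centroid of the reporting node positions. The observation format below is an assumption for illustration.

```python
def weighted_location_estimate(observations):
    """Estimate target location as the intensity-weighted centroid of
    node positions. `observations` maps a node position (x, y) to its
    measured intensity; this data format is a hypothetical sketch of
    the shared detection/intensity messages, not the actual protocol.
    """
    total = sum(observations.values())
    x = sum(px * i for (px, py), i in observations.items()) / total
    y = sum(py * i for (px, py), i in observations.items()) / total
    return (x, y)
```

Because every node holding the target runs the same computation on the same shared messages, all of them arrive at the same location estimate without a central fusion node.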

  29. Tracking [plot: target positions over time] • Automated track formation performed using simple data association algorithm • Locations plotted to visualize target position versus time
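A “simple data association algorithm” of the kind this slide mentions could be a greedy nearest-neighbor assignment with a gating distance; the gate value and the greedy order below are illustrative assumptions, not the experiment's actual algorithm.

```python
import math


def associate(tracks, detections, gate=5.0):
    """Assign each new detection to the nearest existing track within
    `gate` distance units, else start a new track. `tracks` is a list
    of track histories; each history is a list of (x, y) positions
    whose last entry is the current track position. Hypothetical
    greedy nearest-neighbor sketch.
    """
    for det in detections:
        best, best_d = None, gate
        for track in tracks:
            d = math.dist(track[-1], det)  # distance to track's last fix
            if d < best_d:
                best, best_d = track, d
        if best is not None:
            best.append(det)        # extend the nearest track
        else:
            tracks.append([det])    # no track within the gate: new track
    return tracks
```

Plotting each track history then gives the position-versus-time visualization the slide describes.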
