
M4 Technical Committee Review of the Final M4 Design and Features

M4 Design and Features Review January 2007

Meeting Agenda

M4 Project Objectives

The M4 application will replace the legacy MiniMon and MultiMon applications and will run on current Microsoft Windows platforms, with the following objectives:

All interrogation data collected by this system will be 100% valid.

Interrogation data will be provided to PTAGIS in “near-real” time.

99.9% uptime of all system components.

SxC functionality must be at least as efficient as MultiMon.

Interface with G2 readers and all legacy hardware.

Interface with PTAGIS data management systems.

Ease of use.

Common application for all deployment scenarios.

Monitoring will take precedence over SxC control operations.

Provide, as an option, continuous operation with automated fail-over from system or application faults.


M4 Project Milestones

Project Milestones from M4 Delivery on PTAGIS Wiki

Project Status

Large portion of project already implemented as M4 alpha release

Decision to drop Marathon platform and provide custom failover solution

Development put on hold as of 10/2006 per failover decision

Architecture revised to include requested features for software failover

Completion of PLC communication evaluation

Presentation of revised architecture and design to M4 Committee for approval

Review and finalize SxC requirements

Continue production of M4 development


Project Schedule Proposal

Jan. 2007: M4 Committee approves the revised design of M4; SxC requirements are reviewed by the SxC Subcommittee

Feb. 2007: SxC requirements are complete and approved by SxC Subcommittee

June 2007: Delivery of a beta M4 that performs basic monitoring and data-submission

Some basic regression testing on SxC alpha releases throughout summer

Sept. 2007: Delivery of a beta M4 that performs all functions including SxC and fail-over

Thorough regression testing and tuning are performed on the beta

Live fish test is scheduled to evaluate performance of tuned SxC

Sept. 2007: Delivery of MobileMonitor 2.0 that interfaces with M4 beta

Dec. 2007: M4 Committee approves production release of M4


Meeting Objective

To begin production development, the M4 Committee will need to approve the following:

Project Schedule Proposal

Development Tools and Target Platforms

Revised System Architecture

Topology Configuration Features

M4 Client Features

Data Submission Design and Features

Legacy Data Migration Design and Features

Failover Cluster Design and Features


Development Toolset

The following development tools will be used to develop M4:

.NET Framework 2.0 and C# Language

Use of .NET Framework 3.0 is being considered so that Windows Communication Foundation can replace the 2.0 Remoting features; it would also add Windows Vista support.

SQL Server Express (client database)

Free, lightweight and popular

Ease of integration and management in .NET; XML support

File-based deployment; automatic tuning and patching

Powerful, reliable, secure and scalable

Reporting and Replication Services

SQL Server Standard (data submission staging database)

Low TCO: licensing favors deployments with few connections and large data volumes

Hosts native XML Web Services without need for IIS

Processor license: $6K; server + 5 CALs: $2K, plus $162 per additional CAL

Parijet PLC Communication Library


Target Platforms

M4 will support the following platforms:

Windows XP SP2 or better

Windows 2003 Server

Windows Vista

Windows 2000*

*Windows 2000 will not be supported if .NET 3.0 is used

Failover Cluster Requirements:

Windows 2003 Server

Redundant NICs supporting failover (Private Network)

Single NIC (Public Network)

High Performance System (dual-core, 2GB, RAID)


Supported Devices

Revised System Architecture

Online Revised M4 System Architecture

M4 Data

Advantages of storing data in a structured database instead of a text file:

Data is relational in nature (status reports linked to topology version)

Data retrieval for robust reporting and viewing on the client

Database can be secured

Adaptable: new types of messages can be easily added to system

Reliable: data is stored immediately in the database rather than buffered


Message Data

M4 collects a variety of messages (data) from various sources:


System (application and OS)

Separation-by-Code Operations

Failover Operations

Each message has the following attributes (sketched in code after this list):

Timestamp in Local and PST time (with millisecond resolution)

Source information (machine, topology, device, SxC, application type)

Message Type:

Real-time or buffered tag

Device (alarms, status, noise, GPS coordinates)

Monitor operation (start, stop, pause, system status, pulse)

Error (system and device)

SxC operation data

Failover (planned, system fault)
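
To make the message model concrete, the attributes above could map onto a record type like this minimal C# sketch; the type and field names are illustrative, not the actual M4 schema:

```csharp
using System;

// Hypothetical sketch of an M4 message record; names are illustrative only.
public enum MessageType
{
    RealTimeTag, BufferedTag, DeviceStatus, MonitorOperation,
    SystemError, DeviceError, SxCOperation, Failover
}

public class M4Message
{
    public DateTime LocalTimestamp;   // millisecond resolution
    public DateTime PstTimestamp;
    public string MachineName;        // source: machine
    public string TopologyVersion;    // source: topology
    public string DeviceId;           // source: device
    public string ApplicationType;    // source: application type
    public MessageType Type;
    public string Body;               // raw message payload
}
```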


Topology Configuration

Topology configuration describes a set of physical devices and their topologic relationships that provide instrumented monitoring at one or more interrogation sites over a period of time

It provides location-specific context to interrogation data

A version of a topology configuration has a one-to-one relationship with collected data; M4 maintains this historical relationship between topology version and data

Topology information is submitted to PTAGIS and integrated into SiteConfig data table

Topology configurations are managed by a version number


Topology Component Relationships

Topology Component Features

Site Component Features

Antenna Group Component Features

Device Component Features

Device Component Features (continued)

Mux Antenna Component Features

Gate Component Features

Creating or Modifying Topology Configuration

M4 distinguishes between two types of topology changes:

Major Changes

Adding or removing a topology component (device, antenna, gate)

Renaming a device id, mux-antenna identifier or site code

Changing the relationship between any of the components, e.g., moving a device from one antenna group to another

Changing the type of a device

Minor Changes

Changing a serial port or other serial or Ethernet setting

Changing data protocol or port type for a device

Changing the description of a component

Changing any of the gate settings.


Topology Versioning Rules

Major changes require a new topology version

Topology version number will increment (example 1.0 to 2.0)

A new topology can be created while monitoring with the New Topology Manager

User activates a new topology and restarts the monitor to use the new topology version

User can import a new topology version from a file.

Minor changes can be made to the active topology version

Topology version number will increment (example 1.0 to 1.1)

Monitor must be stopped to make minor changes

User performs minor changes from the Topology Viewer
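
A minimal sketch of these increment rules, assuming a simple major.minor scheme as in the examples above:

```csharp
// Sketch of topology version increments; the struct is illustrative.
public struct TopologyVersion
{
    public int Major;
    public int Minor;

    // Major change (add/remove/rename/re-parent a component): 1.1 -> 2.0
    public TopologyVersion NextMajor()
    {
        TopologyVersion v;
        v.Major = Major + 1;
        v.Minor = 0;
        return v;
    }

    // Minor change (port, protocol, description, gate settings): 1.0 -> 1.1
    public TopologyVersion NextMinor()
    {
        TopologyVersion v;
        v.Major = Major;
        v.Minor = Minor + 1;
        return v;
    }

    public override string ToString()
    {
        return Major + "." + Minor;
    }
}
```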


General Topology Rules

A valid topology version must exist before the monitor can start

M4 will be installed with a default, empty topology version (0.0)

Any start actions will be disabled if version is not valid

Starting monitor from Service Control Manager will fail and generate an error

A valid topology version has:

At least one site defined

At least one reader device defined for a site

Any antenna-groups must contain two or more readers

All mandatory settings for each component are specified and valid

Only one device can be enabled for a single port address

Importing a topology version will create a new topology version

Topology configuration will override any device id transmitted in data

Clustered machines must run the same topology version

Any changes to topology take effect the next time the monitor is started
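
Taken together, the validity checks above could be implemented along these lines; the Topology model and the helper predicates are hypothetical, not the actual M4 implementation:

```csharp
// Illustrative validation of the rules above; Topology and the helper
// predicates are assumed types, not the actual M4 code.
public static bool IsValidTopology(Topology t, out string error)
{
    if (t.Sites.Count == 0)
        error = "At least one site must be defined.";
    else if (!HasReaderDevice(t))
        error = "At least one reader device must be defined for a site.";
    else if (HasAntennaGroupWithFewerThanTwoReaders(t))
        error = "Antenna groups must contain two or more readers.";
    else if (!AllMandatorySettingsValid(t))
        error = "All mandatory settings must be specified and valid.";
    else if (HasMultipleEnabledDevicesOnOnePort(t))
        error = "Only one device can be enabled for a single port address.";
    else
        error = null;
    return error == null;
}
```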


M4 Client

Behind the Scenes of M4 Client

M4 Client Features

[Screenshots of the M4 Client in the original slides, including the Other Topologies pane and the Data Viewer]

Additional M4 Client Features

Data Viewer displays pages of data

Rows per page is user-defined

Sorting Data (TBD)

Set Data Viewer to Auto-Refresh

Any filters apply

Cannot scroll (only data that fits in the viewer is displayed)

Right-click device to enable or disable

Right-click device or component to generate context-sensitive reports

SxC Operations

Start/Stop/Refresh SxC Operations independently from stopping/restarting Monitor

Access SxC Configuration (from topology component or menu)

Export message data in a variety of formats (XML, CSV)

Import data from MobileMonitor 2.0 or other M4 installation

User-initiated data submissions to PTAGIS

Reporting: Device Diagnostics, Noise, Tag Hits, Site Operations, Antenna-Group Efficiency (TBD), SxC Gate Efficiency (TBD)

Manage Application Settings (schedules for upload, trigger devices, pulse records; failover configuration, upload settings, time zone)

Download Wizard to download stored data from remote readers

Supports single serial port for multiple remote readers (maps to existing topology)

Converts buffered tags to real-time tags if timestamps are available


Creating a New Topology

The New Topology Manager is used to create new topology versions

Accessed from M4 Client menu

Available even if monitor is running

Create new topology:

Existing Topology
Provides validation tool

User must activate a new topology version to be used on restart of monitor

New topology version can be saved, closed, and updated at a later time


M4 Data Submission to PTAGIS

  • Upload process is initiated either automatically or by user

  • Upload Manager reads the configuration file (user, connection)

  • Upload Manager connects to WS-PDS web service at PTAGIS

  • Authentication and Authorization with WS-PDS based upon evidence supplied by the client

  • Upload outstanding Topology Versions and Message Data

  • Upload Manager reports feedback from WS-PDS service

Data Submission Step 1: Initiating the Upload

Two ways to start an upload:

Manual Upload

End-user initiates the upload manually by selecting the UploadData command from the M4 Client menu

This upload can be initiated independent of the state of the monitor

Feature will allow a user to reset data for resubmission to PTAGIS

User will be provided feedback during the upload process with the ability to cancel the process

Automatic Upload

Data is uploaded to PTAGIS on a user-defined schedule

Monitor must be running

Data will be uploaded on the next scheduled interval when monitor is started (will not perform a make-up)

Uploading data should not impact performance of the system

Feedback from upload sessions can be viewed from a system report or the Data Viewer


Data Submission Step 2: Read the Configuration File

Before an M4 installation can upload, these settings must be configured from the M4 Client:

The M4 Client configuration manager will have a Test command to validate the configuration settings with the WS-PDS web service.

Data Submission Step 3: Connecting to the Web Service

M4 Upload Manager on client computer queries PTAGIS host server over the network for the existence of the WS-PDS service.

If service is disabled or network connection fails, upload session is terminated and condition is logged

For M4 installations within Commission network

Use VLAN setting is true

A faster, more reliable TCP connection is used

For M4 installations outside of Commission network

An HTTPS connection is used instead

Less prone to Firewall issues

Note: Windows Communication Foundation in .NET 3.0 simplifies building this web service and communicating with it over various network bindings, as sketched below
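
If WCF is adopted, the binding choice could look like this sketch; the service contract and endpoint addresses are placeholders, not the real WS-PDS interface:

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;

// Placeholder contract; the actual WS-PDS operations are not shown here.
[ServiceContract]
public interface IWsPds
{
    [OperationContract]
    bool Ping();
}

public static class WsPdsConnector
{
    public static IWsPds Connect(bool useVlan)
    {
        Binding binding;
        string address;
        if (useVlan)
        {
            // Inside the Commission network: faster, more reliable TCP.
            binding = new NetTcpBinding();
            address = "net.tcp://ptagis-host/ws-pds";   // placeholder
        }
        else
        {
            // Outside the Commission network: HTTPS, less prone to
            // firewall issues.
            binding = new WSHttpBinding(SecurityMode.Transport);
            address = "https://ptagis-host/ws-pds";     // placeholder
        }
        ChannelFactory<IWsPds> factory =
            new ChannelFactory<IWsPds>(binding, new EndpointAddress(address));
        return factory.CreateChannel();
    }
}
```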


Data Submission Step 4: Authentication and Authorization

Once a connection is established, the client requests authentication and authorization from the WS-PDS web service:

The PTAGIS user name and encrypted password are sent to WS-PDS.

WS-PDS queries the PTAGIS LDAP server with credentials for an authorization role (Data Coordinator)

If authenticated and authorized, upload session will continue

If not authenticated or authorized, upload session is terminated at both service and client. Condition is logged on both client and server

PTAGIS personnel can be alerted to any failed connection attempts


Data Submission Step 5: Upload Outstanding Data to PTAGIS

Once the Upload Manager is connected and authorized, it must determine what data to submit:

Each data message and topology version has a status flag indicating if it has been previously uploaded

All new topology and message data are packaged together into an XML file (preserving referential integrity) and transferred to the WS-PDS service

The WS-PDS service verifies the XML file integrity with a file hash (sketched after this list):

If the file is not valid, it requests a resubmission from the client

If the file is valid, both client and server consider the transfer a success (so the upload is not repeated if the connection later breaks or the database goes offline)

WS-PDS loads data in XML file into staging database on server

XML file is preserved on server for integrity
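
The integrity check could look like this sketch; the hash algorithm (SHA-1 here) and the exchange details are assumptions, since the design does not specify them:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

// Compute a hash of the packaged XML file before transfer; WS-PDS would
// recompute the hash on its copy and request a resubmission on mismatch.
public static string ComputeFileHash(string xmlPath)
{
    using (FileStream fs = File.OpenRead(xmlPath))
    using (SHA1 sha = SHA1.Create())
    {
        byte[] hash = sha.ComputeHash(fs);
        return BitConverter.ToString(hash).Replace("-", "");
    }
}
```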


Data Submission Step 6: Upload Session Feedback

The WS-PDS service provides asynchronous feedback to the Upload Manager residing on the client indicating any exceptions or the success of the loaded data

If success is reported, Upload Manager updates the status field for all records in the client database that were uploaded in the session.

The Upload Manager records the session’s success or failure in the client database and the Windows Event Log

Custom alerts can be configured to notify users via email that an upload session failed.
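
On a reported success, the status update might look like this sketch; the table and column names are assumptions about the client database schema:

```csharp
using System.Data.SqlClient;

// Mark every record uploaded in this session so it is not submitted again.
public static void MarkSessionUploaded(string connectionString, int sessionId)
{
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        conn.Open();
        SqlCommand cmd = new SqlCommand(
            "UPDATE MessageData SET Status = 'Uploaded' " +
            "WHERE UploadSessionId = @session", conn);   // assumed schema
        cmd.Parameters.AddWithValue("@session", sessionId);
        cmd.ExecuteNonQuery();
    }
}
```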


Legacy Data Migration

Initiate Migration

Load New Topology Data (alert PTAGIS personnel)

Load New Message Data

Update Staging Data State

Compact Staging Data


Data Migration Step 1: Initiate Migration

The staging database hosts a custom SQL Server Integration Services (SSIS) package called the PIT Data Migration Service (PDMS)

PDMS can be configured and maintained from SQL Server Management Studio or custom application interface

The PDMS service is initiated automatically on a user-defined schedule, timed to correspond with the existing IDL service for optimum processing of data

PDMS service can be initiated manually by PTAGIS personnel


Data Migration Step 2: Load New Topology Data

PDMS service performs a query within staging database to determine if any new topology data needs to be migrated

A report is generated providing a summary of topology changes at each site and emailed to target PTAGIS personnel

PDMS connects to the PTAGIS3 database and inserts the new topology data directly into SiteConfig schema

PDMS will alert PTAGIS personnel to any errors or faults

If PDMS cannot migrate topology data, the migration session is aborted (no data will be uploaded until the problem is resolved)


Data Migration Step 3: Load New Message Data

PDMS generates an in-memory dataset of all new message data that corresponds to PTAGIS interrogation data specifications with these configurable options:

Generate real-time tag records only (this can be set on a site-by-site basis)

Sites to exclude (can be set for period of time)

Limit number of real-time tags per second (Unique Off)

PDMS transforms dataset into standard PTAGIS interrogation data files with these configurable options:

Allow interrogation data files to span multiple days (generates fewer files to load)

Suppress interrogation files that do not contain interrogation records

PDMS submits interrogation records for traditional PTAGIS loading:

Generates XML headers for PTTP loading and places the files in a staging directory

Submits them directly to IDL

This method of loading data will ease deployment of M4 alongside the existing client applications (MiniMon/MultiMon)
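
Grouped into a configuration object, the options above might read like this sketch; the property names are illustrative:

```csharp
using System.Collections.Generic;

// Hypothetical grouping of the PDMS options described above.
public class PdmsOptions
{
    public bool RealTimeTagsOnly;          // can be set site-by-site
    public List<string> ExcludedSites;     // can be set for a period of time
    public int MaxRealTimeTagsPerSecond;   // "Unique Off" rate limiting
    public bool SpanMultipleDays;          // fewer interrogation files to load
    public bool SuppressEmptyFiles;        // skip files without records
    public bool SubmitDirectlyToIdl;       // vs. staging directory for PTTP
}
```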


Data Migration Step 4: Update Staging Data

PDMS service updates data in Staging Database to prevent it from being migrated when the service runs again

PDMS service can provide a utility to allow a manual reset of select data in Staging Database for reloading to PTAGIS3

Each data record has a status field used to indicate its state, sketched as an enum below:

New: (default) generated and stored in M4 Client Database

Uploaded: transferred from M4 Client Database to Staging Database

Migrated: migrated into PTAGIS3 database

Compact: record is compacted in the Staging Database

The PDMS service logs the success or failure of the migration session to be used for administrative reporting
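
As an enum, the lifecycle reads like this minimal sketch; the names mirror the list above:

```csharp
// Record status lifecycle in the submission and migration pipeline.
public enum RecordStatus
{
    New,       // generated and stored in the M4 Client Database
    Uploaded,  // transferred from the client database to the Staging Database
    Migrated,  // migrated into the PTAGIS3 database
    Compact    // compacted; only keys and state kept to prevent duplication
}
```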


Data Migration Step 5: Compacting the Staging Database

PDMS service will initiate a sub-service, either scheduled or user-driven, to compact the M4 data in the Staging Database:

All message and topology data older than a designated period of time will lose all of their ancillary data, retaining only the minimum data (keys and state) to prevent duplication.

Before data is compacted, PDMS will generate an XML file representing M4/MobileMonitor 2.0 data for possible future use.

The staging database is designed to be a temporary store to facilitate data submission and migration. It is not intended to serve data to a web application.

If end-users want to use M4/MobileMonitor 2.0 data, they should maintain it on the M4 Client Database.


M4 Failover Services

To meet continuous operational requirements, M4 can provide automatic failover with a redundant (clustered) server

Supports two types of failover conditions:

System or application fault

Planned failover for server maintenance

Failover service has specific use case scenarios:

Interrogation sites that perform Separation-by-Code operations

Interrogation sites that collect a large segment of PTAGIS data and require operational redundancy

Failover is integrated into M4 as a configurable option:

Does not require overhead of maintaining multiple software versions of the same application

By default, Failover Services are disabled in M4 to reduce complexity for casual end-user

Failover Services should not impact system performance


Failover Service Architecture and Features

Two redundant systems host independent M4 monitoring services

Data is duplicated in separate local databases

Both process SxC requests

Both systems capture the same serial I/O via Ethernet using DeviceMaster

Only one system communicates to a PLC device to provide SxC gate control

Two monitoring services communicate health status via heartbeat channel

Uses private network with redundant NICs

M4 Client provides management of failover configuration to end-user

NTP synchronizes system time between two servers

Provides coarse synchronization of the two data sets via the TimeStamp field


M4 Failover Assumptions

To reduce complexity and avoid impacting system performance, the following assumptions apply to M4 Failover Services:

System platforms should be identical and configured for high-performance:

Dual or Quad Core, 2GB RAM, RAID

Install transaction log of M4 Client Database on separate partition

Data is not mirrored between two systems:

Data events are not synchronized and may not be recorded in same order

Data recovery from a failover requires manual user intervention

Data is synchronized with scheduled checkpoints to facilitate data recovery

Separation-by-Code counters are computed independently on two systems

Counters could be synchronized if necessary

Heartbeat communication channel represents the single point of failure

No guarantee of failover or gate control if this channel fails

Topology Versions and SxC Configuration must be identical on both machines

M4 will detect a topology version mismatch and abort starting the monitor

M4 will provide utilities to push configuration changes between two machines


M4 Failover Service States and Cluster Roles

M4 Failover Service Operational States

Active: monitoring service is controlling separation-by-code gates

Standby: monitoring service is computing separation-by-code operations but is not controlling the gates

M4 Failover Cluster Roles

Two redundant monitoring services hosted on separate machines will be configured to start as one of two types:

Primary: service attempts to start in the Active state

Secondary: service attempts to start in the Standby state and will promote itself to Active if primary system does not respond
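
The states and roles map directly onto two small enums; this is a sketch, not the actual type definitions:

```csharp
// Operational states and cluster roles as defined above.
public enum ServiceState { Active, Standby }
public enum ClusterRole  { Primary, Secondary }
```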


M4 Failover Service: Primary System Startup Procedures

The Primary server follows this startup procedure:

Sends a heartbeat message proposing it is the Active service

Monitors heartbeat channel for messages from redundant service for a specified period of time (Discovery Period)

If it does not receive a heartbeat message from an already Active service, it promotes itself to the Active service.

If it does receive a heartbeat message from an already Active service, it demotes itself to Standby and continues operating in this state.

If the communication channel fails or no heartbeat message is received at all, it will report the failure, send an alert, and continue operating in the Active state.
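
A sketch of this discovery logic, using the ServiceState enum from the previous slide; HeartbeatMessage and the helper methods are assumptions, not the actual M4 API:

```csharp
using System;

// Primary startup: propose Active, listen for the Discovery Period, then
// resolve the state. SendHeartbeat, WaitForPeer, and the alert helper are
// hypothetical.
public ServiceState StartAsPrimary(TimeSpan discoveryPeriod)
{
    SendHeartbeat(ServiceState.Active);                    // propose Active
    HeartbeatMessage peer = WaitForPeer(discoveryPeriod);  // null if silent/failed
    if (peer == null)
    {
        // Channel failure or no message at all: report, alert, and
        // continue operating in the Active state anyway.
        ReportFailureAndSendAlert();
        return ServiceState.Active;
    }
    if (peer.State == ServiceState.Active)
        return ServiceState.Standby;  // demote: an Active service already exists
    return ServiceState.Active;       // promote: no Active service discovered
}
```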


M4 Failover Service: Secondary System Startup Procedures

The Secondary server follows this startup procedure:

  • Sends a heartbeat message proposing it is the Standby service

  • Monitors heartbeat channel for messages from redundant service for a specified period of time (Discovery Period)

  • If it receives a heartbeat message from an Active service, it resumes operations in Standby state.

  • If it receives a heartbeat message from a Standby service, or receives no message at all, it promotes itself to the Active service.

  • If the heartbeat communication channel fails, it will report the failure and send an alert, however, it will remain in the Standby state.

M4 Failover Service: Failover System Procedures

When an Active service fails, the following occurs:

The failed active service stops sending heartbeat messages

The standby service notices that the active service has not sent a heartbeat message within a specified amount of time (Failover Interval), promotes itself to active, and takes control of the PLC.

The new active service reports the error and sends any alerts indicating the condition.

When a Standby service fails, the following occurs:

The standby service stops sending heartbeat messages

The active service notices that the standby service has not sent a heartbeat message within a specified amount of time (Failover Interval), then reports the error and sends any alerts indicating the condition.
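
Both detection paths reduce to the same watchdog check, sketched here with assumed names:

```csharp
using System;

// Periodic watchdog run on each service; promotes the Standby side when the
// Active side goes silent for longer than the Failover Interval.
public void CheckPeer(DateTime lastPeerHeartbeatUtc, TimeSpan failoverInterval)
{
    if (DateTime.UtcNow - lastPeerHeartbeatUtc > failoverInterval)
    {
        if (state == ServiceState.Standby)
        {
            state = ServiceState.Active;  // promote
            TakeControlOfPlc();           // assume SxC gate control (hypothetical)
        }
        ReportErrorAndSendAlerts();       // both sides log and alert
    }
}
```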


M4 Failover Service: Configuration

The M4 Client provides the end-user with management of the following Failover Service Configuration Settings:

M4 Failover Service: Operational Control

M4 Client provides user with simultaneous operational control of both clustered monitors (start, stop, pause)

User selects the control commands from the File menu:

Prevents failover contention

Users can enable/disable failover on both servers

Topology Viewer displays both monitors

Identifies Active and Standby

Remote monitor has limited display features

Right-clicking a monitor in Topology Viewer provides independent operational control with a context-menu:

Allows for planned shutdown

Issues an immediate checkpoint

Failover will not report failure or send any alerts

Context-menu provides hint: ‘Stop this Monitor’


M4 Failover Service: Standard Operating Procedures

User must take care to provide the same topology version to each server

M4 Client will provide utility to push topology and SxC changes to both servers

User must configure Failover Service correctly

Servers must be identified with the correct Primary and Secondary roles

A heartbeat communication test utility will be provided in the configuration manager

Perform a manual failover for planned shutdown of an Active server

Reporting performed concurrently with SxC processing should be run on the failover (Standby) server to avoid impacting performance

Only the Primary server should be configured to automatically upload data to PTAGIS

Submitting data from the Secondary can result in duplicate data

The Secondary server should be used for patching data after a failover event

Normal startup and shutdown of systems should use dual operational controls from menu to start/stop both monitors simultaneously


M4 Failover Service: Data Recovery

M4 Client will provide a Data Recovery tool to facilitate manual patching of data after a failover event

The Data Recovery tool will provide side-by-side viewers of both databases

Viewers will be aligned by checkpoints and timestamps

User will select patch data from viewer and upload it to PTAGIS

