AWIPS Product Improvement Architecture Review 24 June 2004



  1. AWIPS Product Improvement Architecture Review 24 June 2004

  2. Overview
  • Joint discussions led to an analysis of technology and development of a “to-be” architecture, which was briefed to the SET on 2/3/04.
  • The to-be architecture provides the framework to complete individual product improvement tasks while maintaining a common goal:
    - X-terminals
    - Redundant LDAD firewalls
    - Router upgrade
    - Redundant LDAD servers
    - DS replacement
    - Full DVB deployment
    - Serial mux upgrade
  • The to-be architecture facilitates development of a roadmap for schedule, budget, and deployment planning.
  • The roadmap assists in issue and dependency identification and resource planning for risk reduction.

  3. AWIPS To-Be Architecture
  • Hardware
    - Utilize Network Attached Storage (NAS) technology
    - Deploy commodity servers on a GbE LAN, incrementally deployed and activated
    - Promote reuse of select hardware
    - Remove limitations of direct-attached storage
  • Software
    - For availability, move from a COTS solution to public-domain utilities:
      · Some experience with NCF and REP
      · Can be decoupled from operating system upgrades
      · Supports NAS environments
      · Can be augmented for load balancing if required
    - Deploy a low-cost Linux database engine
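As a concrete illustration of the public-domain availability approach above, the following is a minimal Python sketch of a heartbeat-style peer check for an active/active pair. The host name, port, and failover hook are hypothetical, not part of the baseline.

    import socket
    import time

    PEER_HOST = "dx2"     # hypothetical peer in an active/active pair
    PEER_PORT = 5405      # hypothetical heartbeat port
    CHECK_INTERVAL = 5    # seconds between checks
    MAX_MISSES = 3        # consecutive failures before declaring the peer down

    def peer_alive(host, port, timeout=2.0):
        """Return True if a TCP connection to the peer's heartbeat port succeeds."""
        try:
            socket.create_connection((host, port), timeout).close()
            return True
        except OSError:
            return False

    def monitor():
        misses = 0
        while True:
            misses = 0 if peer_alive(PEER_HOST, PEER_PORT) else misses + 1
            if misses >= MAX_MISSES:
                # A real framework would restart the peer's packages here;
                # this sketch only reports the failure.
                print("peer %s unreachable; initiating failover" % PEER_HOST)
                break
            time.sleep(CHECK_INTERVAL)

    if __name__ == "__main__":
        monitor()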

  4. AWIPS To-Be Architecture
  • Methodology
    - Stage hardware, move software when ready
    - Reuse hardware to ease the transition and allow planned decommissioning
    - Deploy a flexible availability framework
    - Deploy the Linux database engine
  • Goals
    - Decommission the AS, AdvancedServer, DS, and FDDI
    - Support universal deployment of a single Linux distribution (currently RHE3.0 with OB6)
    - Provide a dedicated resource for local applications
    - Expandable within the framework (easy to add servers)

  5. Step 1 – Initial Hardware Deployment
  • Release OB4 is a prerequisite
  • Initial staging of hardware
    - New rack
    - NAS with LTO-2 tape (~400 GB storage and backup)
    - 2 commodity servers (DX1/DX2)
    - 2 GbE switches and associated cables, etc.
    - 2 8-port serial mux replacements (installed in PX1/2)
  • NAS is key to incremental deployment and activation
  • Serial mux replacements installed but not activated
  • LTO-2 drive for site backup
  • New hardware on the GbE LAN
  • LDAD firewall upgrade deployed independently

  6. Step 1 and Release OB4
  [Architecture diagram; red text indicates newly ported or moved processes]

  7. Step 1 – Incremental Activation
  • Activate NAS
  • Point DS to NAS
    - Allows DS1/DS2 to be used as an active/active pair using the existing MC/ServiceGuard infrastructure
    - Decommission DS mass storage and autoloader
  • Point PX1/PX2 to NAS
    - Maintain the active/active pair
    - Deliver the new availability mechanism
    - Deactivate the cluster-management portion of AS2.1
    - PowerVault (PV) deactivated for the near term
  • Cooked data goes to the NAS; temporary files continue to be written to local disk
  • Short (2-week) OAT to verify DS1/DS2 and PX1/PX2 failover and NAS data
  • OAT in early October to allow a full-deployment decision and complete deployment to the initial 41 sites by 1 Feb 05
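Since Step 1 hinges on pointing DS1/DS2 and PX1/PX2 at the NAS, a quick mount check is useful during the OAT. A minimal sketch, using the export list from the action items on the next slide; the probe-file approach is illustrative.

    import os

    # NAS mount points, per the /data review in the Step 1 action items.
    NAS_MOUNTS = ["/data/fxa", "/data/fxa_local", "/home"]

    def verify_mount(path):
        """Confirm a path is a live mount point and writable (a coarse NFS health check)."""
        if not os.path.ismount(path):
            return "%s: not mounted" % path
        probe = os.path.join(path, ".nas_probe")
        try:
            open(probe, "w").close()
            os.remove(probe)
        except OSError:
            return "%s: mounted but not writable" % path
        return "%s: OK" % path

    for mount in NAS_MOUNTS:
        print(verify_mount(mount))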

  8. Step 1 – Current Action Items
  • Coordinate serial mux upgrade hardware delivery to sites with the DS replacement. This reduces the number of hardware installs to be monitored by the NCF, but since the APS software will not yet be deployed, the final cable connections will be delayed until Step 2. (NGIT - L. Dominy)
  • Required infrastructure changes include consolidating data from the DS mass storage and PX PowerVault, verifying MC/SG on DS1/2, deactivating the cluster-manager portion of AS2.1 on PX1/2, installing the new availability software on PX1/2, and deactivating the PowerVault. (NGIT - L. Dominy)
  • Verify that only cooked data is on the NAS (currently all of /data/fxa, /data/fxa_local, and /home). Review the /data/fxa structure. (SEC - E. Mandel / FSL - D. Davis)
  • Review the IFP file system to determine which directories go to the NAS for data storage (all binaries should be on a server(s); temporary file directories stay on PX1/2). (FSL - D. Davis)
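To support the "only cooked data on the NAS" verification above, one possible check is a sweep for temporary-looking files. A sketch, assuming hypothetical temp-file naming patterns; the real criteria would come from the /data/fxa structure review.

    import fnmatch
    import os

    # Hypothetical patterns for scratch files that should stay on local disk.
    TEMP_PATTERNS = ["*.tmp", "*.lock", "core.*"]

    def find_temp_files(root):
        """Walk a NAS-resident tree and flag files that look like temporaries."""
        hits = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if any(fnmatch.fnmatch(name, pat) for pat in TEMP_PATTERNS):
                    hits.append(os.path.join(dirpath, name))
        return hits

    for path in find_temp_files("/data/fxa"):
        print("temporary file on NAS:", path)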

  9. Step 2 – Decommission AS1/AS2
  • An OB4 maintenance release (OB4.x) of stable OB5 software is required for AS decommissioning
  • Activate DX1/DX2
    - Infrastructure/decoders move to DX1
    - IFP/GFE to DX2
    - Newly ported functionality to DX1
  • Reuse PX1 and PX2 as PX1 and SX1
    - PX1 for applications/processes using “cooked” data
    - SX1 for the Web server, local applications, and eventually LDAD
  • Activate the serial mux replacement and the ported APS
  • Non-ported software moves from AS1 to DS2
  • Hosts remain on RH 7.2 as risk reduction

  10. Step 2 and OB4.x/OB5 (with RH 7.2)
  [Architecture diagram; red text indicates newly ported processes, green text moved processes]

  11. Step 2 – Incremental Activation
  • Failover scheme
    - DX1 to DX2
    - DX2 to DX1
    - PX1 applications (less LAPS, which does not fail over) and servers to DX2
    - PX1 processes APS and NWWSProduct to SX1 (they require serial mux access)
    - SX1 baseline software to PX1
  • Decommission AS1/AS2 and the excess rack
  • Linux SMTP MTA deployed as the start of the migration from X.400 (required for DS decommissioning)
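The failover scheme above is essentially a static lookup table. Expressed as a Python sketch (the group names are paraphrased from the slide, not actual package names):

    # Step 2 failover targets, keyed by paraphrased process-group names.
    FAILOVER_TARGETS = {
        "dx1": "dx2",            # DX1 packages fail over to DX2
        "dx2": "dx1",            # and vice versa
        "px1-apps": "dx2",       # PX1 applications/servers (LAPS excluded)
        "px1-serial": "sx1",     # APS and NWWSProduct need serial mux access
        "sx1-baseline": "px1",   # SX1 baseline software
    }

    def failover_target(group):
        """Return the host that should pick up a failed process group, if known."""
        return FAILOVER_TARGETS.get(group)

    print(failover_target("px1-serial"))  # -> sx1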

  12. Step 2 – Current Action Items
  • Ensure OB5 tasking with priority check-in for the Linux AsyncProductScheduler and NWWS Scheduler. Does NWWSProduct need to be ported at the same time? (SEC - E. Mandel)
  • Ensure OB5 tasking with priority check-in for Linux LAPS, notificationServer, Redbook, and radarStorage. If prototype testing is successful, check in Linux routerStoreNetcdf. (FSL - D. Davis)
  • Activate the serial mux upgrade and move/connect cables. (NGIT - L. Dominy)
  • Step 2 requires movement of processes between servers; review the architecture roadmap and verify no dependencies have been overlooked. (SEC, FSL, MDL, OH, and NGIT)

  13. Step 3 – Deploy Database and OS
  • Deploy the Linux POSTGRES database engine to DX1
  • Move the existing PV to the new rack and connect it to DX1/2
    - Reconfigure the PV (possibly into 2 separate direct-attached disk farms, one for each DX)
  • Database availability via mirrored or replicated databases is TBD at this time
  • Migrate ported databases
  • Upgrade the operating system (currently RHE3.0) on all applicable hosts:
    - LX/XT
    - CP (if full DVB deployment is complete)
    - DX
    - AX (Linux Informix migration?)
    - PX/SX
    - RP?
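Once the POSTGRES engine is running on DX1, a coarse post-migration sanity check is to compare row counts between the old and new engines. A minimal sketch, assuming the psycopg2 driver and hypothetical database/table names; the actual schemas come from the ported databases.

    import psycopg2  # assumed PostgreSQL client driver; any would do

    def table_row_count(dbname, table, host="dx1"):
        """Count rows in a migrated table as a coarse migration check."""
        conn = psycopg2.connect(host=host, dbname=dbname)
        try:
            cur = conn.cursor()
            # Table names cannot be parameterized; 'table' is trusted input here.
            cur.execute("SELECT count(*) FROM %s" % table)
            return cur.fetchone()[0]
        finally:
            conn.close()

    if __name__ == "__main__":
        # Hypothetical hydro database and table, for illustration only.
        print(table_row_count("hd_ob6", "location"))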

  14. Step 3 and OB6 (RHE3.0)
  [Architecture diagram; red text indicates newly ported processes]

  15. Step 3 – Incremental Activation
  • Decommission AS 2.1 and HP Informix
    - The HP Informix engine can remain on the DS for site use (though if all databases are ported, the reason for keeping it is unclear)
  • Transition to SMTP and decommission X.400
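For the SMTP transition, a loopback-style test message through the new Linux MTA is an easy confidence check. A sketch using only the Python standard library; the MTA host and addresses are placeholders.

    import smtplib
    from email.mime.text import MIMEText

    # Placeholder addresses for the X.400-to-SMTP transition test.
    msg = MIMEText("SMTP MTA transition test message")
    msg["Subject"] = "AWIPS SMTP test"
    msg["From"] = "awips@wfo.example.gov"
    msg["To"] = "ops@wfo.example.gov"

    server = smtplib.SMTP("dx1")  # hypothetical host running the new Linux MTA
    try:
        server.sendmail(msg["From"], [msg["To"]], msg.as_string())
    finally:
        server.quit()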

  16. Step 3 – Action Items/Questions
  • Verify whether RHE3.0 is to be procured for all hosts in the OB6 timeframe. (OST - Ed Mandel)
  • Can/should POSTGRES be delivered early (OB5.x) to the RH 7.2 DXs?
    - Most sites run software against GFS databases, not their own databases.
    - The database and software will be tested with RHE3 only, as part of OB6.
    - If databases are delivered early, how/when does parallel ingest get developed and tested?
    - Should RFCs be handled differently?
  • Determine plans for the RFC archive server's use of Informix.
  • Do the RP servers get RHE3.0 with OB6?

  17. Step 4 – Deploy LDAD Upgrade
  • Deploy LS1 and LS2
    - Deploy a redundant server pair (requirements still TBD)
    - Could reuse PX1/SX1 as LS1/LS2 and use new-generation hardware for PX1/SX1
  • Activate LS1/LS2
    - Migrate internal LDAD processing to SX1
    - Some internal and external LDAD processing must transition at the same time
  • Reuse the existing HP LS on the internal LAN
    - Existing 10/100 Mb LAN card
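Because the LDAD upgrade moves processing across a firewall boundary, a one-shot reachability report can confirm the expected paths are open. A sketch with illustrative hosts and ports; the actual rules follow the (still TBD) LDAD requirements.

    import socket

    # Illustrative host/port pairs spanning the LDAD firewall.
    CHECKS = [
        ("ls1", 22),    # hypothetical external server port
        ("ls2", 22),
        ("sx1", 8080),  # hypothetical internal LDAD processing port
    ]

    def reachable(host, port, timeout=2.0):
        """Attempt a TCP connection and report success."""
        try:
            socket.create_connection((host, port), timeout).close()
            return True
        except OSError:
            return False

    for host, port in CHECKS:
        state = "open" if reachable(host, port) else "blocked/unreachable"
        print("%s:%d %s" % (host, port, state))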

  18. Step 4 and OBx
  [Architecture diagram; red text indicates newly ported processes]

  19. Step 5 – Decommission DS1/DS2
  • Continue to move ported software/databases to the Linux servers
  • Combine with Step 6 if additional DXs are required
    - DX1 for the database server and infrastructure
    - DX2 for IFP/GFE
    - DXn for decoders
  • All Linux devices on the GbE LAN
  • Remaining functionality on the LS on the 100 Mb LAN
    - DialRadar/wfoAPI (tied to FAA/DoD requirements) may be OBE at this point; if so, the Simpacts and LS can be decommissioned at all non-hub sites
    - Netmetrix (required at hub sites only)

  20. Step 5 and OBx
  [Architecture diagram; red text indicates newly ported processes]

  21. Step 6 – Process Loading
  • Incrementally add DX hosts for load balancing for new functionality and data sets
    - DX1 for the database server and infrastructure
    - DX2 for IFP/GFE
    - DXn for decoders
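The "DXn for decoders" pattern amounts to picking the least-loaded DX host when a new decoder is started. A sketch; the hosts and load figures are illustrative, and a real implementation would read load from the availability framework.

    # Hypothetical current load averages per decoder host.
    DECODER_HOSTS = {"dx3": 0.42, "dx4": 0.17}

    def pick_host(loads):
        """Choose the DX host with the lowest reported load."""
        return min(loads, key=loads.get)

    print(pick_host(DECODER_HOSTS))  # -> dx4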

  22. Step 6 and OBx
  [Architecture diagram; green text indicates moved processes]

  23. High-Level Schedule
  • Key dates
    - notificationServer port and enhancements
    - Step 1 OAT (must complete before initial deployment)
    - Step 2 OB4.x OAT and full deployment
    - OB6 check-in
