System Wide Information Management (SWIM)
FTI Enterprise Solutions for NAS DNS and IP-based NTP
SOA Brown Bag #15
Presented by: The SWIM Governance Team and Jerry Hancock, Enterprise Engineering Services
Date: June 20, 2012
Overview
• Current Needs
• Programs Requesting the Service
• DNS Architecture
• Planned Capabilities
• Impact to National Airspace System (NAS) Programs
• Provisioning Process
• DNS and Internet Protocol (IP) Address Assignment Processes
• Implementation Schedule
• Next Steps
• DNS Solutions Architecture Review
• Concept of Operations (ConOps)
• DNS Portal Demonstrations
• Review of IP-Network-Based Solutions Status
Current Needs
• Deployment of SWIM services and Enterprise Identity and Key Management (IKM) depends on Domain Name resolution services
• NAS program requirements call for DNS-based fail-over and load-balancing
• Hard-coding of IP addresses into applications can be avoided with an Enterprise DNS service
• Cost savings from the economies of scale of an Enterprise approach versus individual DNS efforts by NAS programs
• Improved security and operational efficiency with a single Enterprise solution
• Increased interoperability among NAS programs
DNS Architecture
[Figure: DNS architecture diagram. Harris-provided DNS servers are shown in colored boxes, one color per domain: Consortium Internet, R&D External DMZ, NESG External DMZ, FNTB, R&D Internal DMZ, NESG Internal DMZ, R&D Mission Support, Operational IP (R&D DNS and NAS Ops IP DNS), and Public External/Internal DMZ.]
Legend: DMZ – De-Militarized Zone; FNTB – FAA Telecommunications Infrastructure (FTI) National Test Bed; NESG – NAS Enterprise Security Gateway
Planned Capabilities
• The Enterprise DNS service will be managed by FTI and will offer NAS Programs name resolution with Level 3 Reliability, Maintainability, and Availability (RMA3) service (0.9998478 availability and 8-minute restoral)
• FTI has deployed DNS services in the NAS, NESG, FNTB, public Internal DMZ, and parts of the Research and Development (R&D) Domain
• NAS DNS services will offer programs the ability to fail-over and load-balance between end-points
  • Load-balancing with traffic routing via Round-Robin, Sequential, or Random patterns
  • DNS fail-over and limited application-aware fail-over
• The NAS DNS implementation will support traditional DNS as well as DNS Security Extensions (DNSSEC), authentication, and integrity-validation security controls consistent with Federal security requirements
• The Enterprise FTI NAS DNS has a defined naming architecture and can integrate with existing program solutions (engineering reviews and some technical conflicts may need to be resolved on a program-by-program basis)
• NESG DNS services shall be available to authorized NESG external users
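As a back-of-the-envelope check, the RMA3 availability figure above translates directly into an annual downtime budget. The 0.9998478 value comes from the slide; the conversion below is standard arithmetic, not an FTI-published number:

```python
# Downtime budget implied by the RMA3 availability figure (illustrative math).
AVAILABILITY = 0.9998478                 # from the service specification
MINUTES_PER_YEAR = 365.25 * 24 * 60      # = 525,960 minutes

downtime_min_per_year = (1 - AVAILABILITY) * MINUTES_PER_YEAR
print(f"Allowed downtime: {downtime_min_per_year:.0f} minutes/year")  # ~80
```

Roughly 80 minutes of unavailability per year, which is consistent with the 8-minute restoral target allowing on the order of ten restoral events annually.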
Impact to NAS Programs
• The FTI Enterprise DNS is a subscription service
• Users of this service MUST be connected to the FTI Operations (Ops) IP network or other defined IP domains
• NAS Programs that do not want DNS are not required to use this service
• NAS Programs that want DNS and do not currently have a DNS solution will follow our provisioning process
• NAS Programs that currently have DNS will be integrated into the Enterprise DNS service
• Through the FTI DNS Portal, this will be a self-managed service: once the initial architecture and engineering review have been completed, Programs will be able to make additions, deletions, and modifications to all of their DNS functions through the portal
• DNS-based load-balancing and fail-over for NAS Program end systems can be achieved through name assignments at either Facility- or Program-level zones, based on geographical requirements
• NAS Programs will not be charged for the use of DNS services
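The DNS-based load-balancing and fail-over described above can be sketched from the client's point of view. This is an illustrative sketch only: the addresses and the health-check mechanism are hypothetical, not part of the FTI service definition:

```python
class RoundRobinResolver:
    """Client-side sketch of DNS round-robin with fail-over skipping.
    Address values below are invented, not real NAS end-points."""

    def __init__(self, addresses):
        self.addresses = list(addresses)
        self._next = 0

    def pick(self, is_healthy=lambda addr: True):
        # Try each resolved address once, starting where the last pick left off.
        for _ in range(len(self.addresses)):
            addr = self.addresses[self._next]
            self._next = (self._next + 1) % len(self.addresses)
            if is_healthy(addr):
                return addr
        return None  # every end-point failed the health check

# Hypothetical A-record set returned for a load-balanced service name.
rr = RoundRobinResolver(["10.1.1.10", "10.1.2.10", "10.1.3.10"])
print(rr.pick())  # first call returns 10.1.1.10, then 10.1.2.10, ...
```

In the real service the rotation happens in the DNS answers themselves; the class above only models the observable behavior.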
NAS DNS Namespace
• The NAS DNS namespace will implement its own root zone and will not use the Internet root zone; internal NAS DNS records will not be routable on the Internet
• Root-level zones (root, gov, faa, nas) are managed by FTI and will not contain host or service records
• Program-level zones (e.g., eram, stars) are managed by FTI; legacy zones can be transferred into the NAS Enterprise DNS via zone delegation
• Facility- and Service-level zones (e.g., zlc, svc) contain records managed by the NAS user or Program Office
• Typical facility zone names are based on the facility LID and contain server and device host records, for example: a1.zlc.eram.nas.faa.gov
• Example service host record: flightinfo.svc.eram.nas.faa.gov
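A short sketch of how names compose under this hierarchy. The zone labels come from the slide's own examples (a1, zlc, eram); the helper functions and validation rule are assumptions for illustration, not part of the published naming architecture:

```python
# Internal NAS root suffix from the naming hierarchy (nas.faa.gov).
NAS_SUFFIX = ("nas", "faa", "gov")

def facility_host_name(host, facility_lid, program):
    """Compose a facility-level host record name,
    e.g. ('a1', 'zlc', 'eram') -> 'a1.zlc.eram.nas.faa.gov'."""
    return ".".join((host, facility_lid, program) + NAS_SUFFIX)

def in_nas_namespace(fqdn):
    """True if the name falls under the internal NAS root zone."""
    labels = tuple(fqdn.lower().rstrip(".").split("."))
    return labels[-3:] == NAS_SUFFIX

print(facility_host_name("a1", "zlc", "eram"))              # a1.zlc.eram.nas.faa.gov
print(in_nas_namespace("flightinfo.svc.eram.nas.faa.gov"))  # True
```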
DNS ConOps
• For initial configuration, Harris will establish domains/zones, IP addresses, and users based on data from the FAA
• FAA needs to send Harris the following data for initial loading:
  • Domain/zone tree structure (format: figure)
  • IP addresses for each zone (format: spreadsheet)
  • People authorized to make changes to each zone (format: spreadsheet)
• Proposed template
Portal Screens: Main Screen
• Features
  • Gray = user not authorized to change
  • Green = user authorized to change
• ConOps Questions
  • What tasks should FAA users be allowed to perform after the initial configuration?
  • Add/change/delete records for a zone?
  • Add/change/delete zones?
  • Others?
• Comments from previous discussion
  • Users can add/change/delete records and zones, but permissions need to be set up for each level of authorization (this will also affect the template on the previous slide)
Zone Records
• Records for the selected zone are displayed
• For each record, the user can choose to:
  • Edit record
  • Inspect record (Dig)
  • Remove record
• The user can also choose to add a new record
Add Record
• ConOps Question: What types of records should be available for the user to create?
  • A, AAAA, CNAME, DNAME, HOST, MX, NAPTR, NS, PTR, SRV, TXT
  • DNSSEC-related: DNSKEY, DS, NSEC, NSEC3, RRSIG
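For illustration, several of the record types above might appear as follows in a BIND-style zone file for a facility zone. This fragment is hypothetical — the host names, addresses, and TTL are invented, and the portal abstracts this syntax away from users:

```text
; Hypothetical facility zone: zlc.eram.nas.faa.gov (illustrative values only)
$ORIGIN zlc.eram.nas.faa.gov.
$TTL 3600
a1          IN A      10.20.30.41       ; IPv4 host record
a1          IN AAAA   fd00::41          ; IPv6 host record
www         IN CNAME  a1                ; alias for the host above
_svc._tcp   IN SRV    0 5 8080 a1       ; service locator record
```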
Process for Updates Prior to Portal Availability
• This process may also cover add/change/delete actions after the portal is available, for changes that FAA users are not authorized to complete themselves
Next Steps
• Continue customer outreach – the DNS solution has been vetted with major NAS Programs: NAIMES, TFM, ERAM (completed)
• The NAS DNS solution has been presented at:
  • SWIMposium – September 2011 (completed)
  • SWIM TIM – October 2011 (completed)
  • SWIM TIM – March 1, 2012 (completed)
  • TRB and Systems Engineering – March 8, 2012 (completed)
• Get feedback from NAS Programs regarding the NAS DNS deployment architecture, ConOps, and naming hierarchy (completed; ongoing lifecycle effort)
• Develop a NAS DNS User Guide providing in-depth guidance on the NAS DNS service offering, capabilities, and architectures (in process)
• Develop a transition plan for legacy DNS services by gathering information on existing DNS implementations proprietary to the NAS Programs and developing a timeframe to integrate those zones into the Enterprise DNS solution (TBD/program-based)
• For programs that need DNS, use of the Enterprise DNS services will be a requirement under NAS RD2011 and NAS RD2025 (in process, working with the NextGen Office (ANG))
• Create a NAS DNS policy that will mandate the use of Enterprise DNS services, set compliance requirements, and dictate ConOps (in process, working with ANG)
Overview • NTP and PTP Overview • Description of Needs • Current Capabilities • NAS NTP Architecture • Proposed Deployment • Timeframes
NTP and PTP Overview
• NTP – an IP-network-based distributed protocol for synchronizing the clocks of computer systems over data networks (LAN/WAN)
  • Compensates for network jitter and latency
  • Allows for accuracy up to 200 microseconds
• Precision Time Protocol (PTP) – a much more precise timing protocol than NTP
  • Also an IP-network-based distributed protocol (LAN/WAN)
  • Allows for accuracy in the sub-microsecond (nanosecond) range
  • More sensitive to the number of hops between the authoritative time source and the end-client
• High Availability
  • An alternative time source is automatically selected if the primary fails
  • All Network Clock servers will have rubidium oscillators as backup for the GPS signal; the rubidium clock will provide timing services for 140 days after a GPS failure
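NTP's compensation for network latency rests on a four-timestamp exchange between client and server. A minimal sketch of the standard offset and round-trip delay calculation (the timestamp values are invented example numbers):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP clock offset and round-trip delay (seconds).
    t1: client transmit, t2: server receive,
    t3: server transmit, t4: client receive."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Example: client clock 50 ms behind the server over a symmetric 20 ms path.
offset, delay = ntp_offset_delay(t1=100.000, t2=100.060, t3=100.061, t4=100.021)
print(offset, delay)  # offset 0.05 s, round-trip delay 0.02 s
```

The offset formula assumes symmetric path delay; asymmetric routes are one source of the residual error that bounds NTP's accuracy.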
Sources of Time and Frequency (chart provided by Mitch Naris)
• WWVB: 0.1–15 ms time accuracy; 1 x 10^-10 to 1 x 10^-12 frequency stability
• GPS: 10 ns time accuracy; 1 x 10^-13 frequency stability
• ITS* (NTP): 10 ms time accuracy; 1 x 10^-7 frequency stability
• ITS* (PTP): 0.1 ms time accuracy; 1 x 10^-9 frequency stability
• Rubidium (Rb) Clock: 10 μs time accuracy; 5 x 10^-11 frequency stability
• Cesium (Cs) Clock: 10 ns time accuracy; 1 x 10^-13 frequency stability
*Internet Time Service
Description of Needs
• As the NAS operational environment becomes more cohesive through federated data distribution, telecommunications, and boundary protection, ensuring consistent common time across the NAS will become increasingly important
• Accurate time is fundamental to any IP network, system, or communications infrastructure and is normally provided as an Enterprise function on which many other services rely:
  • Network management and monitoring systems
  • Application servers; centralized correlation, tracking, and mosaic overlays; various operational needs
  • Synchronization of network, messaging, and telecommunications infrastructure
Current Capabilities
• The lack of authoritative timing sources has led NAS Programs to implement their own time services. This is problematic because:
  • It is expensive
  • It causes consistency problems across the Enterprise
  • There are no sync or fail-over capabilities between clocks
  • Conflicting time-stamps may cause operational issues
  • Operational example – Digital Audio Legal Recorder (DALR)
• Some NAS Programs use a Mission Support (MS) program as a source of time
  • This is risky, since an MS program does not adhere to the same availability standards as the NAS
NTP/PTP High-Level Requirements
• NTP requirements:
  • An odd number of NTP servers implemented
  • NTP synchronization at the 10 ms level on WAN links of 2,000 km
  • NAS Stratum-1 time servers shall be available to users
  • Time servers need to be connected through diverse paths
  • Security and integrity of time transmissions shall support Access Control List (ACL) and encryption mechanisms
• PTP requirements are similar to NTP
  • Redundancy: the NAS/NESG/IAG design follows Symmetricom PTP deployment best practices
  • Servers are distributed within X hops of each other for implementation
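On an end client, subscribing to three regional servers (an odd count, consistent with the requirement above) might look like the following ntpd configuration. The server names are hypothetical placeholders, not actual FTI hostnames:

```text
# /etc/ntp.conf -- illustrative client configuration (hypothetical names)
server ntp1.region.example iburst    # primary regional server
server ntp2.region.example iburst
server ntp3.region.example iburst    # odd server count aids clock selection
restrict default nomodify nopeer noquery   # basic access-control hardening
```

An odd number of sources helps the selection algorithm out-vote a single "falseticker" whose time disagrees with the majority.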
OPS Enterprise NTP/PTP Design
• 8 NTP/PTP regions throughout the country
• 3 NTP/PTP servers assigned to each region (requirement)
• NTP/PTP servers located at all major FAA facilities
• NTP/PTP clocks share the antenna with the LARUS GPS server
• NTP/PTP servers have a rubidium backup oscillator
• Servers are peered with each other in a partial mesh (requirement)
• Each server peers with at least 3 other NTP/PTP servers (failover requirement)
• PTP deployment is based on Symmetricom best-practice recommendations
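The partial-mesh peering requirement (each server peers with at least three others) can be checked mechanically. A sketch using an invented peer map — the server names and topology are hypothetical, not the deployed design:

```python
def meets_peering_requirement(peers, minimum=3):
    """Check that every server has at least `minimum` peers and that
    peering is symmetric (one-way entries are configuration errors)."""
    for server, neighbors in peers.items():
        if len(neighbors) < minimum:
            return False
        if any(server not in peers[n] for n in neighbors):
            return False
    return True

# Hypothetical four-server region: each server peers with the other three.
peer_map = {
    "clk-a": {"clk-b", "clk-c", "clk-d"},
    "clk-b": {"clk-a", "clk-c", "clk-d"},
    "clk-c": {"clk-a", "clk-b", "clk-d"},
    "clk-d": {"clk-a", "clk-b", "clk-c"},
}
print(meets_peering_requirement(peer_map))  # True
```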
Enterprise NTP/PTP Service ConOps
• Once deployed, NTP/PTP services will require minimal effort for NAS programs to use
• As long as the program has existing FTI IP services, NTP/PTP will be available
• To start using NTP/PTP, a NAS program simply points its systems to the correct multicast/unicast server; no additional setup or paperwork is required
• The additional bandwidth requirement is negligible
• NAS programs will not be charged for the use of NTP services
Deployment Overview
• NAS deployment
  • Deployed at 23 ARTCCs
  • 8 NTP regions, with three Network Clock servers per region
  • All NAS users will be able to access a minimum of 3 Network Clock servers
  • The proposed design places Network Clock servers at 3 NESG sites: ATL, SLC, ACY
  • High availability – Network Clock servers are peered between the NESG sites
• IAP deployment
  • ACY will be the initial deployment site
  • Both ACY and OEX will sync clocks across the IAP Matrix network (future)
Provisioning Process
• NAS Programs request a new IP address range or DNS name records by sending an email request to a dedicated FTI mailbox:
  • 9-AJW-NAS-DNS-Request/AWA/FAA
  • 9-AJW-NAS-IPAddressRequest/AWA/FAA
• FTI validates the request and schedules a Technical Information meeting with the program to review and define all requirements
• Once the engineering team has jointly finalized the initial request, the basic information will be populated within the portal and user accounts will be created on the DNS User Portal so programs can self-manage
  • NAS Programs can create, delete, and modify name records
  • The User Portal is a Web-based GUI (requires an FTI security token)
• The DNS portal will enforce compliance with the NAS DNS Naming Hierarchy (or an approved solution) and will allow a NAS Program to change only its own name records, within the naming-convention rules
• The NAS Enterprise DNS deployment will support a variety of methods to migrate existing DNS implementations with minimal operational impact
DNS and IP Address Assignment Processes
• 9-AJW-NAS-DNSRequest/AWA/FAA
• 9-AJW-NAS-IPAddressRequest/AWA/FAA
Infoblox Grid HA – High Availability
• Grid availability
  • Member is disconnected from the Grid Master: changes are maintained by the member and then synchronized with the master upon reconnection
  • Catastrophic failure of the Master (both devices fail): an administrator may promote any 'Grid Master Candidate' to Master – members re-synch automatically
• Individual device failover: plug in a new device and it instantly inherits all attributes of the previously deployed device via the master
• Device failure in an HA pair: failover to the secondary device via VRRP – applies to members and the master
Legend: VRRP – Virtual Router Redundancy Protocol
Overview of DNS Structure
• DNS Cache servers can be ordered on a site-by-site basis for critical services, at a cost to the program
ConOps
• The Network Enterprise Management Center (NEMC) will be the central coordination point for operational support to NAS Programs
• NEMC will identify whether a problem is network- or infrastructure-related and work with the appropriate entity to resolve the issue
• NEMC will work with Harris to resolve any issues with the DNS service
• Harris will provide DNS service status visibility to the NEMC and NEMS support staff via the IP Monitoring Toolset (IPMT)