SMOS


Presentation Transcript


  1. SMOS DPGS at KP0
  • Production
  • Product Quality
  • Problems Identified
  • Positives

  2. Production - 1
  Key events:
  • First acquisition on the evening of November 17; first sensed data at 20091117T125113
  • Switch from dual to full at 20091120T103000
  • SODAP dumps starting from November 23 (big dumps, strange time sequences, large overlaps between data in passes, …)
  • L1PP “mirroring” problem corrected in the operational chain on November 24 using an updated G/J Matrix (all data reprocessed to at least this baseline)
  • Level 2 OS activated on December 1
  • Level 2 SM activated on December 3
  • Switch from full to dual at 20091203T040000
  • L1 operational processor updated December 3-7 to align with the L1PP, including the updated G/J Matrix (may eventually reprocess to this baseline if time permits)

  3. Production - 2
  Status:
  • All processing is active except FWF/G/J Matrix processing (this will be activated this week, but the results will not be used in the processing chains – the L1PP results are used until the operational results are verified)
  • Level 1 usage of the generated CRSD1A and FTT is inhibited (pre-launch results are used until the operational results can be verified)
  • The Level 2 SM post-processor is active, but its results are not being fed to the L2 SM processor (default tables are used until the operational results can be verified)
  • NRT is processing most of the data it has received, but little of the data arriving at the processor can be processed in NRT
  • Few problems detected processing from Level 0 to Level 2
  • Almost all acquired data has been processed – see MF “DPGS Production” for details – but since November 30 production has been delayed

  4. Product Quality
  • The latest Level 1 operational chain baselines (both nominal and NRT) are generating products compatible with L1PP 2.2 plus the most important L1PP SPRs known by November 13. Generally few problems, but the “orchestration” so far is “easy” (pre-launch calibration data rather than data generated in the operational chain).
  • The Level 2 processors in use are the launch baseline. Thus far no significant problems have been detected.

  5. Problems - 1
  The DPGS worked well for the first two weeks (equal to the longest duration of a pre-launch rehearsal test), but since November 30 it has been performing poorly, and the continuous presence of INSA is needed to address these shortcomings. Production has slowed but not stopped.
  Major problems:
  • PDPC Core database performance is now poor (a massive queue to access the database). This results in delayed file movements (in many cases delayed by 12 hours). Where this is critical today is the movement of the MPL_XBDOWN (pass planning) to Svalbard and the availability of pass-related files to the RF SODAP teams. Indra is working on the issue; the mitigation strategy for the moment is to have INSA manually move the files that are critical to the commissioning activities.
  • Level 0 is not operating systematically after each acquisition and must often be re-started to resume production. In addition, the L0 appears to be a fragile design that shows little basic understanding of the mission downlink scenarios (many passes have only been processed successfully by manually removing the “bad” parts of the input telemetry). Short-term solutions from GMV should make the system operable again, but in the long term it would likely be best to find a more robust implementation.

  6. Problems - 2
  Secondary problems:
  • The DAS has worked, with a few exceptions that were quickly recovered (2 ESAC passes were problematic), but support for the reported problems has been poor. SMP/ELTA/Areva has provided insufficient support and there is every sign the situation will get worse. With the FEP-AIT due back at ESAC before Christmas, there is enough redundancy to keep ESAC going, with work-arounds available to address the odd failure case. In the longer term, a replacement with the Cortex system (used at Svalbard) is probably advisable.
  • Link from Svalbard to ESAC: although not systematic, the data flow from Svalbard (telemetry after the passes) to ESAC is seen to be very slow (~1 Mbps compared to the expected 5 Mbps); the problem has been localised to the KSAT infrastructure.

  7. Positives
  • XBAS and Svalbard acquisitions have thus far performed well
  • The INSA team has shown the ability to keep the data flowing despite the PDPC Core and Level 0 problems
  • The Level 1 update last week went smoothly
  • The operational chains (nominal and NRT) are both performing well when triggered and are generating products of comparable quality to the L1PP
  • We are well ahead of schedule in terms of activating the full processing chain
  • LTA is active and all data received has been archived
  The major short-term objectives are:
  1) To resolve the current set of problems and ensure that the system works smoothly (production and distribution) over the holiday period. INSA will be present, but hopefully by mid-January their presence can be reduced to normal shifts.
  2) To take on board the next set of Level 1 and Level 2 changes coming from the results of the instrument and algorithm commissioning activities.
