
Ifremer Planning of Cal/Val Activities during the In-Orbit Commissioning Phase



  1. Ifremer Planning of Cal/Val Activities during the In-Orbit Commissioning Phase. N. Reul, J. Tenerelli, S. Brachet, F. Paul & F. Gaillard, ESL & GLOSCAL teams. In-orbit commissioning phase meeting, ESAC, 10/2009.

  2. Context:
  • There is no history of L-band Tbs over the global ocean.
  • Previous and currently orbiting EOS low-frequency microwave radiometers (TMI, AMSR, WindSat) are (1) classical radiometers and (2) operating at higher frequencies (C and X bands) => low-noise Tb data (0.1-0.5 K), but very low sensitivity to SSS (10-40 times less sensitive than at L-band).
  SMOS is:
  • an interferometric radiometer, so the accuracy of the reconstruction process is key to the success of the SSS retrieval;
  • much more sensitive to SSS than previous sensors;
  • BUT 20 times noisier than previous sensors (2-5 K).
  At the beginning of the mission, although they might be in error, we can only rely on predictions from forward emissivity models applied to the a priori « best known » geophysical conditions viewed by the instrument, AND we must account for instrumental imaging features. (A back-of-the-envelope illustration of the noise problem follows.)
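
A rough worked example of why heavy averaging is unavoidable. The ~0.5 K/psu sensitivity figure is an assumption typical of L-band over warm water, not a number from this presentation; a minimal sketch:

```python
# Back-of-the-envelope: single-snapshot SSS noise and averaging needs.
# Assumed (typical L-band, warm SST) sensitivity -- not from this talk.
dTb_dSSS = 0.5            # K per psu
noise_per_snapshot = 3.0  # K, within the 2-5 K range quoted for SMOS

# Single-snapshot SSS uncertainty: K divided by (K/psu) gives psu.
sigma_sss = noise_per_snapshot / dTb_dSSS      # ~6 psu

# Independent samples needed to reach 0.2 psu, assuming uncorrelated
# radiometric noise (standard error falls as 1/sqrt(N)).
target = 0.2                                   # psu
n_needed = round((sigma_sss / target) ** 2)    # ~900 samples
print(sigma_sss, n_needed)
```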

  3. Main activities during commissioning:
  => First priority: validate « calibrated » L1c Tbs and the accuracy of the reconstruction process over the ocean, based on forward model estimates. Focus on sunglint & land effects.
  => Second priority: conduct Level 2 validation by comparison with in situ data & reference fields.

  4. How to reach our first-priority aims? First, look at the averaged measured dependencies (incidence angle, polarization, SSS, SST, wind speed) and compare them to those expected from the forward models, separating by pass type (ascending/descending) and polarization mode. Ideally, we would like to compare globally averaged SMOS data against the expected mean from the forward model for the period and spatial domain considered. (A sketch of this averaging step follows.)
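
A minimal sketch of that averaging step, assuming the co-located L1C and forward-model values have already been loaded into a pandas DataFrame; the column names (incidence, pol, pass_dir, tb_meas, tb_model) are hypothetical:

```python
import pandas as pd

def mean_dependency(df: pd.DataFrame) -> pd.DataFrame:
    """Average measured and forward-modeled Tb in 1-degree incidence-angle
    bins, separated by pass direction (asc/desc) and polarization mode."""
    df = df.copy()
    df["inc_bin"] = df["incidence"].round()  # 1-degree bins
    out = (df.groupby(["pass_dir", "pol", "inc_bin"])[["tb_meas", "tb_model"]]
             .mean())
    out["bias"] = out["tb_meas"] - out["tb_model"]  # measured minus modeled
    return out
```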

  5. In practice, however: (1) we cannot do that in the surface frame but only in the antenna frame (because of polarization mixing and singularities when going from the antenna frame to the surface frame), and (2) we have to take into account and monitor the impact of the image reconstruction process on the Tbs. This definitely requires an end-to-end simulation strategy. (An illustrative form of the polarization mixing follows.)
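
To make the polarization-mixing point concrete, here is one illustrative rotation between surface (Th, Tv, U) and antenna (Tx, Ty) polarizations; the sign convention and function are assumptions for illustration, not the SMOS L1 processor's actual transform:

```python
import numpy as np

def surface_to_antenna(tb_h, tb_v, stokes3, alpha):
    """Rotate (Th, Tv, U) from the surface frame into the antenna frame by
    a polarization mixing angle alpha (radians). Illustrative convention."""
    c2, s2 = np.cos(alpha) ** 2, np.sin(alpha) ** 2
    sc = np.sin(alpha) * np.cos(alpha)
    tx = c2 * tb_h + s2 * tb_v + sc * stokes3
    ty = s2 * tb_h + c2 * tb_v - sc * stokes3
    u_ant = (tb_v - tb_h) * np.sin(2 * alpha) + stokes3 * np.cos(2 * alpha)
    return tx, ty, u_ant

# When only Tx and Ty are available (dual-polarization mode) and alpha is
# near 45 degrees, tx and ty both tend to (Th + Tv)/2 +/- U/2, so Th and
# Tv cannot be separated: the inverse rotation is singular. Hence the
# comparison is performed in the antenna frame.
```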

  6. TRAP-SMOS software flow: sets of SMOS L1C data + geophysical auxiliary data (SMOS ECMWF, or external data if not yet available) + L1C auxiliary data → Forward Model Package → modeled calibrated visibilities → image reconstruction software. Outputs: modeled Tx, Ty, Uxy, Vxy without reconstruction, and modeled T'x, T'y, U'xy, V'xy including reconstruction effects.

  7. TRAP-SMOS:
  • The software is an extension of the aircraft analysis package, TRAP, that will enable some basic analysis of SMOS data down to Level 1a.
  • It is based on a set of C++ classes representing the calibrated visibilities, the G matrix, the correlations, and the J+ matrix.
  • Currently a basic MEX interface provides a means of computing calibrated visibilities and then inverting them to obtain a reconstructed scene, for an arbitrary scene provided by the user. This can be done with just a few lines of MATLAB code. (A condensed sketch of this chain follows this list.)
  • For the G matrix, unix memory mapping is used to enhance efficiency, allowing the entire G matrix to reside in memory even on a 32-bit machine (with sufficient memory). As only one sub-block of G is mapped into the process at any one time, 64-bit addressability is not needed. Multiple processes can also share the same G matrix.
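
A condensed numpy sketch of the simulate-then-reconstruct chain that the MEX interface exposes. The array sizes, file name, and real-valued visibilities are simplifications for illustration (actual visibilities are complex); this is not TRAP-SMOS's API:

```python
import numpy as np

n_vis, n_pix = 4692, 16384  # illustrative sizes only

# Memory-map the (potentially huge) G matrix so that only the sub-block
# actually touched is paged in -- the same idea as the unix mmap trick
# described above. The file name is hypothetical.
G = np.memmap("G_matrix.dat", dtype=np.float32, mode="r",
              shape=(n_vis, n_pix))

def simulate_and_reconstruct(scene_tb, J_plus):
    """Forward-model a user-supplied scene into calibrated visibilities,
    then invert with the pseudo-inverse J+ to get the reconstructed scene."""
    vis = G @ scene_tb               # modeled calibrated visibilities
    tb_rec = J_plus @ vis            # reconstruction step
    return vis, tb_rec

# tb_rec - scene_tb isolates the imprint of the imaging/reconstruction
# process itself, independent of geophysical model error.
```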

  8. TRAP-SMOS: two examples of what we will do.
  • Sun glint contamination: the current L1 processor uses a fixed wind speed to compute glint contamination. Is this a serious source of error?
  • Land contamination: how might the spatially varying (in each snapshot) bias evolve with a changing land distribution?

  9. Possible Sunglint Contamination (Kirchhoff scattering model; 7 m/s wind speed)

  10. Possible Sunglint Contamination (Kirchhoff scattering model; 20 m/s wind speed)

  11. Possible Sunglint Contamination, Instrument Reconstruction (Kirchhoff scattering model; 7 m/s wind speed)

  12. Possible Sunglint Contamination, Instrument Reconstruction Error (7 m/s wind speed)

  13. Possible Sunglint Contamination, Instrument Reconstruction Error (20 m/s wind speed)

  14. Land Contamination

  15. Possible Land Contamination

  16. Possible Land Contamination

  17. Decomposition of Reconstruction

  18. Decomposition of Reconstruction. Here much of the spatially varying error is associated with brightness beyond the fundamental hexagon.

  19. IFREMER OS ESL: plan of action up to KP1.
  • Once we obtain calibrated visibilities after launch, we can begin to compare modeled and instrument-reconstructed scene brightness in a variety of circumstances. For example, over a uniform sea, do we observe a bias pattern consistent with that implied by the measured antenna patterns? Can we detect the distant impact of land in the measurements?
  • It may be very difficult to extract the important biases from the noise, and we are still considering how to achieve this.
  • We would like to be able to interact with Deimos and the Level 1 ESLs as we examine the visibilities. We do not expect to address issues below Level 1a, and therefore such interaction will be important if we cannot reconcile the forward models and the instrument at a higher level.
  • => J. Tenerelli will visit the C-EC at least up to KP1-KP2.

  20. IFREMER OS ESL: plan of action after KP1. J. Tenerelli => continue the previous ocean-scene bias activities. N. Reul, S. Brachet & J. Tenerelli => look at local, expected « strong » or « weakly variable » patterns in the tropical ocean, and => start L2 validation, if possible with in situ data in these zones.

  21. We will need to perform Tb space-time averaging to detect stable geophysical and noise-generated features on local scales. In practice, we will implement this analysis to assess our ability to detect the expected large-scale hot and cold spots over the ocean over monthly periods. (A binning sketch follows.)
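
A minimal sketch of the monthly space-time binning, assuming the swath samples for one month have been flattened into latitude, longitude, and Tb arrays; the 1-degree grid resolution is an assumption:

```python
import numpy as np

def monthly_mean_map(lats, lons, tbs, res_deg=1.0):
    """Accumulate one month of overpass Tb samples onto a regular
    lat/lon grid and return the per-cell mean and sample count."""
    ny, nx = int(180 / res_deg), int(360 / res_deg)
    total = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    iy = ((lats + 90.0) / res_deg).astype(int).clip(0, ny - 1)
    ix = ((lons + 180.0) / res_deg).astype(int).clip(0, nx - 1)
    np.add.at(total, (iy, ix), tbs)
    np.add.at(count, (iy, ix), 1)
    mean = np.divide(total, count, out=np.full_like(total, np.nan),
                     where=count > 0)
    return mean, count

# With ~3 K radiometric noise, a cell holding N samples has a standard
# error of ~3/sqrt(N) K, so a few hundred samples per month push the map
# noise below the K-level amplitude of the expected hot/cold spots.
```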

  22. Expected monthly averaged surface Tbs at L-band, at nadir, for January (showing hot spots, cold spots and the expected range).

  23. To study local Tb behaviour, analyze the simplest conditions by selecting data for which:
  • we minimize the anticipated dominant image reconstruction problems: land, sunglint;
  • we avoid strong geophysical contaminations and difficult situations: galactic noise & sunglint, low & high winds, low SST (i.e. stay in the tropics), rain.
  (A sketch of such a selection follows this list.)
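
A sketch of that selection as boolean masks over a match-up table; every flag/column name and threshold below is hypothetical:

```python
import pandas as pd

def select_clean_ocean(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only grid points free of the anticipated reconstruction and
    geophysical contaminations. Names and thresholds are illustrative."""
    mask = (
        (df["dist_to_coast_km"] > 1000)      # far from land contamination
        & ~df["sunglint_flag"]               # outside modeled sunglint zones
        & ~df["galactic_flag"]               # away from strong galactic noise
        & df["wind_speed"].between(3, 12)    # avoid low- and high-wind regimes
        & (df["sst_celsius"] > 15)           # tropics: good SSS sensitivity
        & (df["rain_rate"] == 0)             # no rain
    )
    return df[mask]
```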

  24. Another local « strong source » validation site: the dipole of the northern Indian Ocean.

  25. Build-up of a validation database, including:
  • available SMOS L1C & L2 products;
  • co-located geophysical products: SMOS ECMWF, climatologies of SSS & AMSR-E SSS over the tropics, satellite sea surface temperature, satellite & blended winds, wave model products, ...;
  • plus qualified in situ data;
  => match-up database analysis of L1C, L2 & L3. (A co-location sketch follows this list.)
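
A minimal co-location sketch for the match-up database, pairing each in situ observation with a space-time window of SMOS retrievals; the column names, the 0.5-degree radius, and the 10-day window are assumptions:

```python
import pandas as pd

def matchup(smos: pd.DataFrame, insitu: pd.DataFrame,
            radius_deg=0.5, window_days=10):
    """Pair each in situ SSS observation with the mean of all SMOS L2
    retrievals within radius_deg and +/- window_days. In high-variability
    zones, this window implements the 1- to 10-day SMOS averaging."""
    rows = []
    for obs in insitu.itertuples():
        near = smos[
            (smos["lat"].sub(obs.lat).abs() < radius_deg)
            & (smos["lon"].sub(obs.lon).abs() < radius_deg)
            & (smos["time"].sub(obs.time).abs()
               < pd.Timedelta(days=window_days))
        ]
        if len(near):
            rows.append({"sss_insitu": obs.sss,
                         "sss_smos": near["sss"].mean(),
                         "n_smos": len(near),
                         "sensor": obs.sensor})
    return pd.DataFrame(rows)
```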

  26. Check the large-scale behaviour of the L2 products against reference climatological fields over identified ocean basins.

  27. Match-up with in situ data: accounting for the natural variability. In high-variability zones, in situ data need to be compared to 1- to 10-day-averaged SMOS data (the temporal window in the co-location sketch above plays this role).

  28. Data that will fill the match-up database and feed the L2 validation process during commissioning.

  29. Statistics from the match-up per in situ sensor type: SMOS versus Argo profilers.

  30. Statistics from the match-up per in situ sensor type: AMSR versus moorings.

  31. Statistics from the match-up per in situ sensor type: SMOS versus PIRATA moorings.

  32. Merged statistics from all the match-ups: SMOS versus all in situ data.

  33. Merged statistics from all the match-ups: SMOS versus all in situ data, RMS error per 1-psu SSS bin. (A binned-RMS sketch follows.)
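
The binned statistic of this last slide reduces to a short groupby; a sketch assuming the match-up table built by the co-location function above:

```python
import pandas as pd

def rms_per_sss_bin(mdb: pd.DataFrame, bin_width=1.0) -> pd.Series:
    """RMS of (SMOS - in situ) SSS differences in 1-psu bins of in situ SSS."""
    diff = mdb["sss_smos"] - mdb["sss_insitu"]
    bins = (mdb["sss_insitu"] / bin_width).round() * bin_width
    return (diff.pow(2)
                .groupby(bins)
                .mean()
                .pow(0.5)
                .rename("rms_psu"))

# Usage: print(rms_per_sss_bin(matchup(smos, insitu)))
```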
