slide2
Outline
  • PRISM:
    • goals & benefits
    • FP5 project and the Support Initiative
    • organisation
    • the PRISM Areas of Expertise
  • OASIS:
    • historical background
    • the community today
    • some key notes
  • The OASIS4 coupler:
    • model adaptation
    • component model description
    • coupled model configuration
    • communication
    • regridding/transformations
    • grids supported
    • future developments and conclusions
slide3
PRISM: the goals

[Diagram: software layers beneath "Science" — today, groups share only Fortran & C compilers and libraries such as MPI and LAPACK; tomorrow, they would also share coupling & I/O, compiling + running environments, source management, and metadata tools.]

  • Increase what Earth system modellers have in common today (compilers, message passing libraries, algebra libraries, etc.)
  • Share the development, maintenance and support of a wider set of Earth System Modelling software tools and standards.
slide4
PRISM: the benefits
Help climate modellers spend more time on science:

  • reduce the technical development effort of each individual research team
  • facilitate the assembly, running, monitoring, and post-processing of ESMs based on state-of-the-art component models
  • promote key scientific diversity
  • increase scientific collaboration
  • stimulate computer manufacturer contributions (tool portability, optimisation of the next generation of platforms for ESM needs)
slide5
PRISM: FP5 project and Support Initiative

http://prism.enes.org

  • 2001-2004: the PRISM EU project
    • a European project funded for 4.8 M€ by the EC
    • 22 partners
  • 2005-2008: the PRISM Support Initiative:
    • 7 partners: CERFACS, CGAM, CNRS, ECMWF, MPI-M&D, UK MetOffice, NEC-CCRLE
    • 9 associate partners: CSC, CRAY, IPSL, Météo-France, MPI-M, NEC-HPCE, SGI, SMHI, SUN
    • 8 person-years per year for 3 years
slide6
PRISM: the organisation

  • PRISM Steering Board: CERFACS, CGAM, CNRS, ECMWF, MPI-M&D, UK MetOffice, NEC-CCRLE
    • Chair: P. Bougeault (ECMWF)
  • PRISM Coordinators: E. Guilyardi (CNRS), S. Valcke (CERFACS)
  • PRISM Core Group (7 people)
  • PRISM User Group
    • PUG chair: R. Budich (MPI)
  • PRISM Areas of Expertise (PAEs):
    • Code Coupling & I/O
    • Integration & modelling environments
    • Data processing, visualisation and management
    • Metadata
    • Computing

slide7
PRISM: the Areas of Expertise

PRISM is organised around 5 “PRISM Areas of Expertise”:

  • Promotion and, if needed, development of software tools for ESM
  • Organisation of a related network of experts
  • Technology watch
  • Promotion of community standards
  • Coordination with other international efforts
slide8
PAE Code Coupling and IO

Leader: S. Valcke (CERFACS)

  • development and support of tools for coupling climate modelling component codes:
    • OASIS3 and OASIS4 couplers
  • technology watch on coupling tools developed outside PRISM:
    • PALM coupler (CERFACS)
    • Bespoke Framework Generator (U. of Manchester)
    • CCSM (NCAR), …
  • relations with projects involving code coupling:
    • UK Met Office FLUME project
    • US ESMF project, GENIE project
    • ACCESS
slide9
PAE Integration and modelling environments

Leader: M. Carter (UK MetOffice)

  • source version control for software development
  • code extraction and compilation
  • job configuration & running
  • Subversion for source version control (PRISM SVN server in Hamburg)
  • Standard Compiling and Running Environments (SCE/SRE, MPI-M&D):
    • integrated environment to compile and run different coupled models based on different component models (standard directory structure, coding rules)
  • Flexible Configuration Management (FCM, UK Met Office):
    • version control, compilation
  • prepIFS (ECMWF):
    • tailored graphical interface for model configuration (Web services tech.)
    • prepOASIS4: GUI to configure a coupled model using OASIS4
  • Supervisor Monitor Scheduler (SMS, ECMWF):
    • management of networks of jobs across a number of platforms
slide10
PAE Data processing, visualisation and management

Leader: M. Lautenschlager (MPI-M&D)

  • data processing, visualisation, archiving and exchange for Earth system research
  • standards and infrastructure for networking between geographically distributed archives
  • CDO (Climate Data Operators, from MPI-M)
  • CDAT (Climate Data Analysis Tools, from PCMDI)
  • CERA-2 data model (World Climate Data Centre):
    • geo-referenced climate data detection, browse and use
  • MARS (ECMWF):
    • meteorological data access and manipulation (for major NWP sites)
slide11
PAE Meta-data

Leader: L. Steenman-Clark (CGAM)

Meta-data (data about data, models, runs, ...) have been a hot topic in the last few years, key for:

  • the exchange and use of data
  • the interchangeability of Earth system models or modelling components
  • forum to discuss, develop, and coordinate metadata issues:
    • Numerical Model Metadata (U. of Reading): numerical code bases, simulations
    • CURATOR project (USA) : data, codes, simulations
    • Numerical grid metadata (GFDL, USA): grid
    • netCDF CF convention (PCMDI and BADC): climate and forecast data files
    • OASIS4 metadata coupling and IO interface
    • UK Met Office FLUME project: management of model configuration
slide12
PAE Computing

Leader: M.-A. Foujols (IPSL), R. Redler (NEC-CCRLE)

  • Computing aspects are highly important for Earth system modelling
  • Computer vendors have to be kept informed about requirements emerging from the climate modelling community
  • Earth system modellers have to be informed about computing issues to anticipate difficulties and evolutions
    • file IO and data storage
    • algorithmic development
    • portable software to fit the needs of parallel and vector systems
    • sharing of experience (e.g. work on the Earth Simulator)
    • establishment of links with computing projects (e.g. DEISA)
    • information about important conferences and workshops.
slide13
OASIS: historical background

OASIS: developed since 1991 to couple existing GCMs

Timeline: 1991 ──► 2001 ──► PRISM

OASIS1 → OASIS2 → OASIS3, then OASIS4 (developed within PRISM)

OASIS1, OASIS2, OASIS3:

  • low resolution, low number of 2D fields, low coupling frequency:
    • flexibility very important, efficiency not so much!

OASIS4:

  • higher resolution parallel models, massively parallel platforms, 3D fields:
    • need to optimise and parallelise the coupler
slide14
OASIS: the community today
  • CERFACS (France): ARPEGE3 - ORCA2-LIM; ARPEGE4 - NEMO-LIM - TRIP
  • METEO-FRANCE (France): ARPEGE4 - ORCA2; ARPEGE medias - OPAmed; ARPEGE3 - OPA8.1-GELATO
  • IPSL - LODYC, LMD, LSCE (France): LMDz - ORCA2-LIM; LMDz - ORCA4
  • MERCATOR (France): (for interpolation only)
  • MPI-M&D (Germany): ECHAM5 - MPI-OM; ECHAM5 - C-HOPE; PUMA - C-HOPE; EMAD - E-HOPE; ECHAM5 - E-HOPE; ECHAM4 - E-HOPE
  • ECMWF: IFS - CTM; IFS - ORCA2
slide15
OASIS: the community today
  • IFM-GEOMAR (Germany): ECHAM5 - NEMO (OPA9-LIM)
  • NCAS / U. Reading (UK): ECHAM4 - ORCA2; HADAM3 - ORCA2
  • SMHI (Sweden): RCA (regional) – RCO (regional)
  • NERSC (Norway): ARPEGE - MICOM
  • KNMI (Netherlands): ECHAM5 - TM5/MPI-OM
  • INGV (Italy): ECHAM5 – MPI-OM
  • ENEA (Italy): MITgcm - REGgcm
  • JAMSTEC (Japan): ECHAM5 (T106) - ORCA ½ deg
  • IAP-CAS (China): AGCM - LSM
  • BMRC (Australia): TCLAPS - MOM
  • U. of Tasmania (Australia): Sea Ice code - MOM4
  • RPN-Environment Canada (Canada): MEC - GOM
  • UQAM (Canada): GEM - RCO
  • U. Mississippi (USA): MM5 - HYCOM
  • IRI (USA): ECHAM5 - MOM3
  • JPL (USA): UCLA-QTCM - Trident-Ind4-Atlantic
slide16
OASIS: Some key notes
  • Developers: CERFACS, NEC CCRLE, CNRS, SGI, (NEC HPCE)
  • Public domain; open source license (LGPL)
  • Programming language: Fortran 90 and C
  • Public domain libraries; vendor optimized versions may exist:
    • MPI1 and/or MPI2; NetCDF/parallel NetCDF; libXML
    • mpp_io; SCRIP
  • Static coupling
slide17
The OASIS3 coupler
  • Coupler developed for more than 15 years at CERFACS
  • Stable, well-debugged, but limited
  • Latest version: oasis3_prism_2-5, delivered in September 2006
  • User support is provided, but most development effort goes into OASIS4
  • Mono-process coupler + parallel coupling library (PSMILe)
      • synchronisation of the component models
      • coupling fields exchange
      • I/O actions
      • mono-process interpolation
  • Platforms:
      • Fujitsu VPP5000, NEC SX5-6-8, Linux PC, IBM Power4, CRAY XD1, Compaq, SGI Origin, SGI O3400
slide18
OASIS3: model adaptation

PRISM System Model Interface Library (PSMILe) API:

  • Initialization
  • Global grid definition (master process only)
  • Local partition definition
  • Coupling or I/O field declaration
  • Coupling or I/O field sending and receiving:
    • in model time stepping loop
    • depending on the user’s specifications in namcouple:
      • user-defined source or target (end-point communication)
      • coupling or I/O sending or receiving at appropriate times
      • automatic averaging/accumulation
      • automatic writing of coupling restart file at end of run
    • call prism_put (var_id, time, var_array, ierr)
    • call prism_get (var_id, time, var_array, ierr)
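
As a rough sketch, the calls above slot into a model’s time-stepping loop as follows. Only prism_put/prism_get and their argument lists come from this slide; the declarations, loop bounds and time handling are assumptions added to make the fragment self-contained.

! Minimal OASIS3 PSMILe exchange sketch (assumed context: the
! initialisation, grid/partition definition and field declaration
! steps listed above have already been done).
integer, parameter :: nlon = 96, nlat = 72    ! hypothetical local field size
integer, parameter :: nsteps = 48, dt = 1800  ! hypothetical run length, time step (s)
integer :: var_id_sst, var_id_flux            ! ids from the field declaration step
integer :: step, time, ierr
real    :: sst(nlon, nlat), flux(nlon, nlat)

do step = 0, nsteps - 1
   time = step * dt   ! model time in seconds since the start of the run
   ! Receive a coupling or I/O field; the PSMILe acts only at the dates
   ! prescribed in namcouple and returns without action otherwise.
   call prism_get (var_id_flux, time, flux, ierr)
   ! ... model dynamics and physics for this time step ...
   ! Send a coupling or I/O field; averaging/accumulation and restart
   ! writing are handled automatically according to namcouple.
   call prism_put (var_id_sst, time, sst, ierr)
end do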
slide19
OASIS3: coupled model configuration
  • In the text file namcouple, read by the OASIS3 main process and distributed to the component model PSMILes at the beginning of the run:
    • total run time
    • component models
    • number of coupling fields
    • for each coupling field:
      • source and target names (end-point communication) (var_name)
      • grid acronym (grid_name)
      • coupling and/or I/O status
      • coupling or I/O period
      • transformations/interpolations
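
For flavour, a heavily abridged namcouple sketch is shown below. The $-keyword blocks ($NBMODEL, $RUNTIME, $NFIELDS, $STRINGS) exist in OASIS3, but the per-field line syntax differs between versions, so every value here is a placeholder rather than a working configuration.

 $NBMODEL
   2  ocean  atmos
 $END
 $RUNTIME
   86400              # total run time in seconds
 $END
 $NFIELDS
   1                  # number of coupling fields
 $END
 $STRINGS
 # one block per coupling field: source symbolic name, target symbolic
 # name, coupling period, transformations, ... (placeholder values)
 SOSSTSST SISUTESU ... 3600 ...
 $END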
slide20
OASIS3: communication

O

A

O

  • Parallel communication between parallel models and Oasis3 interpolation process

A

Oasis3

O

A

O

B

A

B

A

  • Direct communication between models with same grid and partitioning

B

A

  • I/O functionality (automatic switch between coupled and forced mode): GFDL mpp_io library

A

A

file

A

PSMILe based on MPI1 or MPI2 message passing

slide21
OASIS3: interpolations/transformations

[Diagram: coupling fields pass from one model through the single Oasis3 process, which interpolates them, to the other model.]
  • separate sequential process
    • neighbourhood search
    • weight calculation
    • interpolation per se during the run
  • on 2D scalar or vector fields
    • Interfacing with RPN Fast Scalar INTerpolator package
      • nearest-neighbour, bilinear, bicubic for regular Lat-Lon grids
    • Interfacing with SCRIP1.4 library
      • nearest-neighbour, 1st and 2nd order conservative remapping for all grids
      • bilinear and bicubic interpolation for “logically-rectangular” grids
    • Bilinear and bicubic interpolation for reduced atmospheric grids
    • Other spatial transformations: flux correction, merging, etc.
    • General algebraic operations
slide22
The OASIS4 coupler
  • A parallel central Driver/Transformer:
    • launches models at the beginning of the run (MPI2)
    • reads the user-defined configuration information and distributes it to the component PSMILes
    • performs parallel transformations of the coupling fields during the run
  • A parallel model interface library (PSMILe) that performs:
    • weight-and-address calculation for the coupling field interpolations
    • MPI-based coupling exchanges between components
    • component I/O (GFDL mpp_io library)
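
The slide says the Driver launches the models at the beginning of the run via MPI2; the underlying mechanism is MPI-2 dynamic process management. Below is a generic sketch of that mechanism, not OASIS4 source code: the executable name and process count are invented.

program driver_sketch
   use mpi
   implicit none
   integer :: intercomm, errcodes(4), ierr
   call MPI_Init(ierr)
   ! Spawn four processes of a component executable; OASIS4's Driver
   ! would take executable names and counts from the user configuration.
   call MPI_Comm_spawn('ocean.exe', MPI_ARGV_NULL, 4, MPI_INFO_NULL, &
                       0, MPI_COMM_SELF, intercomm, errcodes, ierr)
   call MPI_Finalize(ierr)
end program driver_sketch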
slide23
OASIS4: model adaptation (1/3)
  • Initialization:

call prism_init_comp(comp_id, comp_name, ierr)

call prism_get_localcomm(comp_id, local_comm, ierr)

  • Definition of grid (3D)

call prism_def_grid(grd_id, grd_name, comp_id, grd_shape, type, ierr)

call prism_set_corners(grd_id, nbr_crnr, crnr_shape, crnr_array, ierr)

  • Placement of scalar points and mask on the grid:

call prism_set_points (pt_id, pt_name, grd_id, pt_shape, pt_lon, pt_lat, pt_vert, ierr)

call prism_set_mask(msk_id, grd_id, msk_shape, msk_array, ierr)

  • Function overloading to keep the interface concise and flexible
slide24
OASIS4: model adaptation (2/3)

[Diagram: example grid definition — prism_def_grid defines the grid, prism_set_corners its cell corners, prism_set_points the point locations (three sets), and prism_set_mask the mask.]

slide25
OASIS4: model adaptation (3/3)
  • Coupling or I/O field declaration

call prism_def_var (var_id, var_name, grd_id, pt_id, msk_id, var_nbrdims, var_shape, var_type, ierr)

  • End of definition

call prism_enddef(ierr)

  • Coupling or I/O field sending and receiving:
    • in model time stepping loop
    • depending on the user’s specifications in the SMIOC:
      • user-defined source or target, component or file (end-point communication)
      • coupling or I/O sending or receiving at appropriate times
      • averaging/accumulation

call prism_put (var_id, date, date_bounds, var_array, info, ierr)

call prism_get (var_id, date, date_bounds, var_array, info, ierr)

  • Termination

call prism_terminate(ierr)
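
Read together, the three model-adaptation slides give the complete PSMILe calling sequence of a component. The sketch below strings those calls together; the prism_* names and argument orders are as quoted above, while the declarations and actual argument values are placeholders.

! OASIS4 PSMILe calling sequence, assembled from the calls quoted on
! the three 'model adaptation' slides; declarations of the grid, point,
! mask and field arrays are placeholders.
integer :: comp_id, local_comm, grd_id, pt_id, msk_id, var_id, ierr

call prism_init_comp(comp_id, comp_name, ierr)        ! join the coupled system
call prism_get_localcomm(comp_id, local_comm, ierr)   ! model-internal MPI communicator

! Definition phase: 3D grid, cell corners, point placement, mask
call prism_def_grid(grd_id, grd_name, comp_id, grd_shape, type, ierr)
call prism_set_corners(grd_id, nbr_crnr, crnr_shape, crnr_array, ierr)
call prism_set_points(pt_id, pt_name, grd_id, pt_shape, pt_lon, pt_lat, pt_vert, ierr)
call prism_set_mask(msk_id, grd_id, msk_shape, msk_array, ierr)

! Declare the coupling/I/O fields, then close the definition phase
call prism_def_var(var_id, var_name, grd_id, pt_id, msk_id, var_nbrdims, var_shape, var_type, ierr)
call prism_enddef(ierr)

! Exchange phase, inside the model time-stepping loop; the SMIOC decides
! whether each call results in coupling, I/O or no action
do step = 1, nsteps
   call prism_get(var_id, date, date_bounds, var_array, info, ierr)
   ! ... time step ...
   call prism_put(var_id, date, date_bounds, var_array, info, ierr)
end do

call prism_terminate(ierr)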

slide26
OASIS4: component model description

Application and component description (XML files):

  • For each application (code): one Application Description (AD):
      • possible number of processes
      • components included
  • For each component in the application: one Potential Model Input and Output Description (PMIOD):

      • component general characteristics: name, component simulated, …
      • grid information: domain, resolution(s), grid type, …
      • potential I/O or coupling variables:
        • local name, standard name
        • units, valid min and max
        • numerical type
        • associated grid and points
        • intent: input and/or output
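
The slides do not show the XML itself; purely to illustrate the kind of information a PMIOD variable entry carries, here is a hypothetical fragment. The element and attribute names are invented for illustration and do not follow the actual OASIS4 schema.

<!-- hypothetical PMIOD variable entry; tag names are illustrative only -->
<variable local_name="sst" standard_name="sea_surface_temperature"
          units="K" valid_min="250.0" valid_max="350.0"
          numerical_type="real" grid="ocn_grid" points="t_points"
          intent="output"/>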
slide27
OASIS4: coupled model configuration

Through a GUI, the user produces:

  • a Specific Coupling Configuration (SCC):
    • experiment and run start date and end date
    • applications, components for each application
    • host(s), number of processes per host, ranks for each component
  • For each component: a Specific Model Input and Output Configuration (SMIOC):

      • grid information: chosen resolution, …
      • I/O or coupling variables:
        • name, units, valid min max, numerical type, grid
        • activated intent: input and/or output
        • source and/or target (component and/or file)
        • coupling or I/O dates
        • transformations/interpolations/combinations
slide28
[Diagram: OASIS4 configuration phases for a coupled ATM-LAND + OCE system.

Definition phase: each application provides its AD (ATM-LAND AD, OCE AD) and one PMIOD per component listing its potential coupling/I/O variables with metadata and intent — ATM PMIOD: Va1 (in), Va2 (out), Va3 (out); OCE PMIOD: Vo1 (out), Vo2 (in); LAND PMIOD: Vl1 (in), Vl2 (in).

Composition phase: from the PMIODs, the user produces the SCC (entries for ATM, OCE, LAND) and one SMIOC per component — OCE SMIOC: Vo1 to ATM Va1 with transformation T1 and to File1, Vo2 from ATM Va2; ATM SMIOC: Va1 from OCE Vo1, Va2 to OCE Vo2 with transformation T2, Va3 to LAND Vl1; LAND SMIOC: Vl1 from ATM Va3, Vl2 from file File2.

Deployment phase: the Driver reads the configuration and the exchanges take place at run time, through the Transformer (T) where transformations are required: Vo1 → Va1, Va2 → Vo2, Va3 → Vl1, File2 → Vl2, Vo1 → File1.]

slide29
OASIS4: communication (1/2)

[Diagram: source processes (O1, O2) exchanging fields with target component processes (C), either through the parallel Transformer (T) or directly.]

  • Different grid and decomposition: via the parallel Transformer; for each target point, parallel calculation of the source interpolation neighbours in the source PSMILe
  • Same grid, different decomposition: direct repartitioning; for each target point, parallel calculation of the matching source point in the source PSMILe
  • Model interface library: PSMILe based on MPI1 or MPI2
  • Parallel communication including repartitioning:
    • based on the geographical description of the partitions
    • parallel calculation of communication patterns in the source PSMILe
slide30
OASIS4: communication (2/2)

[Diagram: parallel I/O modes — a parallel component accessing a single file, distributed files (file1, file2, file3), or one file in parallel.]

  • end-point communication (the source doesn’t know the target and vice versa)
  • parallel 3D neighbourhood search, based on an efficient multigrid algorithm, in each source process PSMILe
  • extraction of the useful part of the source field only
  • one-to-one, one-to-many
  • parallel I/O (vector, bundles, vector bundles): GFDL mpp_io, parNetCDF
slide31
OASIS4: regridding/transformations
  • source time transformations (prism_put):
    • average, accumulation
  • target time transformations (prism_get)
    • time interpolation (for I/O only)
  • statistics
  • local transformations:
    • addition/multiplication by scalar
  • interpolation/regridding (3D):
    • nearest-neighbour 2D in the horizontal, “none” in the vertical
    • nearest-neighbour 3D
    • bilinear in the horizontal, “none” in the vertical
    • bicubic (gradient, 16 nghbrs) in the horizontal, “none” in the vertical
    • trilinear
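
As a reminder of what the horizontal bilinear option computes, here is a generic bilinear-weight function on a logically rectangular grid (illustration only, not OASIS4 code):

real function bilinear(f, i, j, x, y)
   ! f: source field; (i,j): lower-left source point of the enclosing
   ! cell; (x,y): target position within the unit cell, each in [0,1]
   real,    intent(in) :: f(:,:)
   integer, intent(in) :: i, j
   real,    intent(in) :: x, y
   bilinear = (1.0-x)*(1.0-y)*f(i,j)   + x*(1.0-y)*f(i+1,j) &
            + (1.0-x)*y      *f(i,j+1) + x*y      *f(i+1,j+1)
end function bilinear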
slide32
Grids supported by OASIS4
  • Regridding, repartitioning, I/O:
    • Regular in lon, lat, vert (“Reglonlatvrt”):
      • lon(i), lat(j), height(k)
    • Irregular in lon and lat, regular in the vert (“irrlonlat_regvrt”):
      • lon(i,j), lat(i,j), height(k)
    • Irregular in lon, lat, and vert (“irrlonlatvrt”) (not fully tested)
      • lon(i,j,k), lat(i,j,k), height(i,j,k)
    • Gaussian Reduced in lon and lat, regular in the vert (“Gaussreduced_regvrt”)
      • lon(nbr_pt_hor), lat(nbr_pt_hor), height(k)
  • Repartitioning and I/O only:
    • “Non-geographical” fields
      • no geographical information attached
      • local partitions described in the global index space (prism_def_partition)
  • I/O only:
    • Unstructured grids (“unstructlonlatvrt”)
      • lon(npt_tot), lat(npt_tot), height(npt_tot)
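
The index shapes listed above translate directly into coordinate-array declarations; a sketch with assumed dimension names and sizes:

! Coordinate array shapes implied by each OASIS4 grid type
! (ni, nj, nk, nbr_pt_hor, npt_tot are hypothetical sizes)
integer, parameter :: ni=128, nj=64, nk=31, nbr_pt_hor=6232, npt_tot=8000
real :: lon1(ni), lat1(nj), hgt1(nk)                      ! "Reglonlatvrt"
real :: lon2(ni,nj), lat2(ni,nj), hgt2(nk)                ! "irrlonlat_regvrt"
real :: lon3(ni,nj,nk), lat3(ni,nj,nk), hgt3(ni,nj,nk)    ! "irrlonlatvrt"
real :: lon4(nbr_pt_hor), lat4(nbr_pt_hor), hgt4(nk)      ! "Gaussreduced_regvrt"
real :: lon5(npt_tot), lat5(npt_tot), hgt5(npt_tot)       ! "unstructlonlatvrt"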
slide33
OASIS4: developments & perspectives
  • OASIS4 tested and run with toy examples on:
    • NEC SX6 and SX8
    • IBM Power4
    • SGI O3000/2000
    • SGI IA64 Linux server Altix 3000/4000
    • Intel® Xeon™ Infiniband and Myrinet Clusters
    • Linux PC DELL
  • OASIS4 is now being used in a small number of real applications:
    • EU project GEMS: atmospheric dynamic and chemistry coupling
    • SMHI: ocean-atmosphere regional coupling
    • UK Met Office: global ocean-atmosphere coupling (currently prototyping)
    • IFM-GEOMAR (Kiel) in pseudo-models to interpolate high-resolution fields.
  • Current developments:
    • 2D conservative remapping
    • Parallel global search for the interpolation
    • Transformer efficiency
    • Full validation of current transformations
  • Full public release planned for the beginning of 2007.
slide34
Conclusions

PRISM provides:

  • framework promoting common software tools for Earth system modelling
  • some (more or less) standard tools (OASIS, source management, compiling, …)
  • network allowing ESM developers to share expertise and ideas
  • visible entry point for international coordination
      • metadata definition
      • WCRP white paper with ESMF on “Common Modeling Infrastructure for the International Community”
  • PRISM’s current decentralised organisation (bottom-up approach):
    • allows “best of breed” tools to naturally emerge
    • relies on the developments done in the different partner groups
  • Additional funding needed:
    • more networking and coordination activities
    • specific technical developments within the partner groups
  • Additional contributors are most welcome to join!