
HIRLAM Use of the Hirlam NWP Model at Met Éireann (Irish Meteorological Service)


Presentation Transcript


  1. HIRLAM Use of the Hirlam NWP Model at Met Éireann (Irish Meteorological Service) (James Hamilton -- Met Éireann)

  2. COMPUTER SYSTEM and VERSIONS of HIRLAM • IBM RS/6000 SP : 9 nodes, each with 4 CPUs • Operational Hirlam : 438x284 grid pts; 31 levels • Nested Hirlam : 222x210 grid pts; 40 levels • LINUX PC : Twin Xeon Processors [2.0GHz] • Backup Hirlam : 218x144 grid pts; 31 levels • LINUX PC : Twin Xeon Processors [500MHz] • Hourly Hirlam : 97x98 grid pts; 24 levels • LINUX Cluster : Nine Twin Xeon Processors [3.2GHz] • Operational Hirlam : Test mode
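The four configurations differ mainly in horizontal domain size and number of vertical levels. A minimal sketch (Python), using only the grid figures quoted on this slide, compares their total 3-D grid sizes:

```python
# Rough comparison of the total 3-D grid size of the four HIRLAM
# configurations listed above (figures taken directly from the slide).
configs = {
    "Operational (IBM RS/6000 SP)": (438, 284, 31),
    "Nested (IBM RS/6000 SP)":      (222, 210, 40),
    "Backup (Linux PC)":            (218, 144, 31),
    "Hourly (Linux PC)":            (97,   98, 24),
}

for name, (nx, ny, nlev) in configs.items():
    print(f"{name:30s} {nx}x{ny}x{nlev:2d} = {nx*ny*nlev/1e6:4.2f} M grid points")
```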

  3. OPERATIONAL HIRLAM … IBM RS/6000 SP • HIRLAM 5.0.1 with 3DVAR • 3-hour assimilation cycle with 48-hour forecasts every 6 hours • Rotated lat/long 0.15x0.15 grid with 438x284 grid points • Hybrid [eta] coordinates with 31 levels • CBR vertical diffusion scheme; Sundqvist condensation scheme • STRACO cloud scheme; Savijärvi radiation scheme • Digital filter initialisation • Two-time-level semi-Lagrangian semi-implicit scheme • Use of ‘frame’ boundaries from ECMWF
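For a sense of scale, the horizontal domain implied by the 0.15 x 0.15 degree rotated grid with 438x284 points can be estimated directly. A minimal sketch, assuming uniform spacing in the rotated coordinates:

```python
# Approximate extent of the operational domain in the rotated lat/long
# system: 438 x 284 points at 0.15 degree spacing (values from the slide).
nx, ny = 438, 284
dlon = dlat = 0.15              # degrees in the rotated coordinate system

width_deg  = (nx - 1) * dlon    # ~65.6 degrees west-east
height_deg = (ny - 1) * dlat    # ~42.5 degrees south-north
print(f"Domain spans roughly {width_deg:.1f} x {height_deg:.1f} degrees (rotated)")
```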

  4. BACKUP HIRLAM … LINUX-PC [Dual 2.0GHz CPUs] • HIRLAM 4.3.5 with OI analysis • 3-hour assimilation cycle with 48-hour forecasts every 6 hours • Rotated lat/long 0.30x0.30 grid with 218x144 grid points • Hybrid [eta] coordinates with 31 levels • SAME AREA AS OPERATIONAL HIRLAM

  5. NESTED HIRLAM … IBM RS/6000 SP • HIRLAM 6.0.0 with 3DVAR • 3-hour assimilation cycle with 24-hour forecasts every 6 hours • Model runs at intermediate hours … 03Z, 09Z, 15Z, 21Z • Rotated lat/long 0.12x0.12 grid with 222x210 grid points • Hybrid [eta] coordinates with 40 levels • Kain-Fritsch convection and Rasch-Kristjansson condensation schemes • ISBA surface scheme • Output post-processed with MOS
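The slide does not give the MOS equations used at Met Éireann. Purely as a generic illustration, MOS (Model Output Statistics) post-processing typically fits a linear regression from direct model output predictors to an observed quantity; the predictor names and numbers below are hypothetical:

```python
import numpy as np

# Hypothetical direct model output predictors for one station:
# columns = model 2 m temperature (K), 10 m wind speed (m/s), cloud fraction
X = np.array([[283.1, 5.2, 0.8],
              [285.4, 3.1, 0.3],
              [281.9, 7.8, 0.9],
              [284.0, 4.4, 0.5]])
y = np.array([282.5, 285.9, 280.8, 284.2])     # observed 2 m temperature (K)

# Fit the MOS regression coefficients (with intercept) by least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Apply the fitted equation to a new forecast
new_forecast = np.array([1.0, 284.5, 6.0, 0.6])
print("MOS-corrected 2 m temperature:", new_forecast @ coef)
```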

  6. HOURLY HIRLAM ... LINUX-PC [Dual 500MHz CPUs] • HIRLAM 4.3 with OI analysis • Hourly assimilation cycle with hourly analysis and a 3-hour forecast • Rotated lat/long 0.15x0.15 grid with 97x98 grid points • Hybrid [eta] coordinates with 24 levels • Analysis shown on the public website

  7. OPERATIONAL USES of HIRLAM • General Forecasting • Forecast guidance out to 48 hours • WAM wave model • Forecast 10-metre winds used to drive the wave model • Roadice Prediction System • Forecast parameters used as a first guess for the [human] forecaster • SATREP • Overlay on satellite plots [ZAMG SATREP analysis scheme]

  8. RESEARCH ACTIVITIES • Better specification of boundary conditions for NWP models • Operational implementation of the nested system • Regional Climate Analysis Modelling & Prediction Centre / Community Climate Change Consortium for Ireland [RCAMPC and C4I]

  9. EXPERIMENTS with LINUX CLUSTER … • Purchased small experimental Linux cluster • Dell PowerEdge 1750 Twin Xeon nodes – rack mounted • 1 x Master node [2.8GHz, 4 Gbytes] • 9 x Compute nodes [3.2GHz, 2 Gbytes] • Compute nodes in 3x3 2-dimensional torus [Dolphin cards] • RedHat ES-3/WS-3; PGI compilers; Intel compilers • Scali MPI Connect, Scali TCP Connect, Scali Manage • Original cluster had 6 nodes; upgraded to 9 nodes
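The 3x3 torus of compute nodes can be pictured as a periodic two-dimensional process grid. The sketch below expresses that layout as an MPI Cartesian topology using mpi4py; this is illustrative only, since the cluster itself ran Scali MPI Connect over the Dolphin interconnect cards.

```python
# Run with 9 MPI processes (one per compute node), e.g. mpirun -np 9 ...
from mpi4py import MPI

comm = MPI.COMM_WORLD
# 3 x 3 process grid, periodic in both dimensions -> a 2-D torus
cart = comm.Create_cart(dims=[3, 3], periods=[True, True], reorder=True)

rank = cart.Get_rank()
coords = cart.Get_coords(rank)
west, east = cart.Shift(direction=1, disp=1)   # neighbours along the second dimension
print(f"rank {rank} at {coords}: west={west}, east={east}")
```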

  10. EXPERIMENTS with LINUX CLUSTER … • Running Operational Hirlam on Cluster [9-node] • Hirlam 5.0.1 with 3DVAR – identical to operational system • Forecast takes 62 mins on IBM/SP; 71 mins on cluster • 3DVAR takes 15 mins on IBM/SP; 9 mins on cluster • Full run takes 77 mins on IBM/SP; 80 mins on cluster • Therefore … the 9-node cluster is roughly as fast as the IBM/SP • Experience with cluster generally positive
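The quoted totals follow directly from the component timings; a short check of the arithmetic and the cluster/IBM ratio:

```python
# Timings from the slide, in minutes: (IBM RS/6000 SP, Linux cluster)
timings = {"forecast": (62, 71), "3DVAR": (15, 9)}

ibm_total     = sum(t[0] for t in timings.values())   # 62 + 15 = 77 mins
cluster_total = sum(t[1] for t in timings.values())   # 71 +  9 = 80 mins
print(f"IBM/SP {ibm_total} mins, cluster {cluster_total} mins "
      f"(ratio {cluster_total / ibm_total:.2f})")     # ratio ~1.04
```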

  11. FUTURE PLANS for CLUSTER …. • Investigate Use of Intel Compilers and Libraries • Should be faster for running Hirlam • Use Cluster as test-bed to implement latest Hirlam 6.4 • Use Cluster to prepare benchmark for new mainframe • Replace Mainframe end of 2006 / start of 2007

  12. FUTURE PLANS … • Investigate options to replace mainframe in 2006 • Get Hirlam 6.4 running on cluster • Investigate MOS/neural nets for post-processing • Use Hirlam as input to point database

  13. Thank You
