
Use of the HIRLAM NWP Model at Met Éireann (Irish Meteorological Service)

This article discusses the computer systems and versions of the HIRLAM NWP model used at Met Éireann. It includes information on the operational, nested, backup, and hourly HIRLAM models, as well as their specifications and use cases. The article also mentions future plans for the HIRLAM model and the use of a Linux cluster for experiments.


Presentation Transcript


  1. HIRLAM: Use of the HIRLAM NWP Model at Met Éireann (Irish Meteorological Service) (James Hamilton, Met Éireann)

  2. COMPUTER SYSTEM and VERSIONS of HIRLAM
  • IBM RS/6000 SP: 9 nodes, each with 4 CPUs
    • Operational HIRLAM: 438x284 grid points; 31 levels
    • Nested HIRLAM: 222x210 grid points; 40 levels
  • Linux PC: twin Xeon processors [2.0 GHz]
    • Backup HIRLAM: 218x144 grid points; 31 levels
  • Linux PC: twin Xeon processors [500 MHz]
    • Hourly HIRLAM: 97x98 grid points; 24 levels
  • Linux cluster: six twin-Xeon nodes [3.2 GHz]
    • Operational HIRLAM: test mode
  (The relative sizes of these grids are compared in the sketch below.)
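For a sense of relative problem size, the four grids can be compared directly from the figures on this slide. A small Python sketch (numbers taken verbatim from the slide):

    # Grid dimensions (nx, ny, levels) as listed on this slide.
    grids = {
        "Operational (IBM SP)": (438, 284, 31),
        "Nested (IBM SP)":      (222, 210, 40),
        "Backup (Linux PC)":    (218, 144, 31),
        "Hourly (Linux PC)":    (97,  98,  24),
    }
    for name, (nx, ny, nlev) in grids.items():
        print(f"{name:22s} {nx*ny:7d} columns  {nx*ny*nlev:9d} points")

The operational domain works out at roughly 3.9 million grid points, about four times the backup domain and seventeen times the hourly one.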

  3. OPERATIONAL HIRLAM … IBM RS/6000 SP
  • HIRLAM 5.0.1 with 3DVAR
  • 3-hour assimilation cycle with 48-hour forecasts every 6 hours
  • Rotated lat/long 0.15x0.15 grid with 438x284 grid points
  • Hybrid [eta] coordinates with 31 levels
  • CBR vertical diffusion scheme; Sundqvist condensation scheme
  • STRACO cloud scheme; Savijarvi radiation scheme
  • Digital filter initialisation (see the sketch below)
  • Two-time-level semi-Lagrangian semi-implicit scheme
  • Use of 'frame' boundaries from ECMWF
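Digital filter initialisation suppresses spurious high-frequency gravity-wave noise in the analysed state. The technique, developed in large part at Met Éireann, replaces the raw initial state with a low-pass-filtered weighted sum of model states spanning a short initialisation window. The sketch below uses a simple Lanczos-windowed sinc filter as a stand-in for the Dolph-Chebyshev window used in practice; the step and cut-off values are illustrative only:

    import numpy as np

    def dfi_weights(n_steps, dt, tau_c):
        """Low-pass weights for 2*n_steps+1 model states spaced dt
        seconds apart, cutting off periods shorter than tau_c seconds.
        Lanczos-windowed sinc; operational DFI uses a Dolph-Chebyshev
        window instead."""
        k = np.arange(-n_steps, n_steps + 1)
        theta_c = 2.0 * np.pi * dt / tau_c                  # cut-off frequency x dt
        h = np.sinc(k * theta_c / np.pi) * theta_c / np.pi  # ideal low-pass response
        w = h * np.sinc(k / (n_steps + 1))                  # Lanczos window
        return w / w.sum()                                  # preserve the mean state

    # e.g. states every 300 s over +/-30 minutes, 3-hour cut-off period
    w = dfi_weights(n_steps=6, dt=300.0, tau_c=3 * 3600.0)
    # filtered initial state = sum over k of w[k] * x(t_k) across the window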

  4. BACKUP HIRLAM … LINUX PC [dual 2.0 GHz CPUs]
  • HIRLAM 4.3.5 with OI analysis
  • 3-hour assimilation cycle with 48-hour forecasts every 6 hours
  • Rotated lat/long 0.30x0.30 grid with 218x144 grid points
  • Hybrid [eta] coordinates with 31 levels
  • SAME AREA AS OPERATIONAL HIRLAM (checked in the sketch below)
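The "same area" claim follows directly from the grid figures: the backup model doubles the grid spacing and roughly halves the point count in each direction, so the domain extent is almost unchanged. A quick check:

    # Domain extent = (points - 1) * spacing, in degrees
    ops = ((438 - 1) * 0.15, (284 - 1) * 0.15)   # (65.55, 42.45)
    bak = ((218 - 1) * 0.30, (144 - 1) * 0.30)   # (65.10, 42.90)
    print(f"operational: {ops[0]:.2f} x {ops[1]:.2f} deg")
    print(f"backup:      {bak[0]:.2f} x {bak[1]:.2f} deg")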

  5. NESTED HIRLAM … IBM RS/6000 SP
  • HIRLAM 6.0.0 with 3DVAR
  • 3-hour assimilation cycle with 24-hour forecasts every 6 hours
  • Model runs at intermediate hours … 03Z, 09Z, 15Z, 21Z
  • Rotated lat/long 0.12x0.12 grid with 222x210 grid points
  • Hybrid [eta] coordinates with 40 levels
  • Kain-Fritsch / Rasch-Kristjansson convection/condensation scheme
  • ISBA surface scheme
  • Output post-processed with MOS (see the sketch below)
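MOS (Model Output Statistics) post-processing removes systematic model biases by regressing observed values on model forecast fields at each station. A minimal sketch of the idea; the predictors and training values here are placeholders, not the operational predictor set:

    import numpy as np

    # Hypothetical training data: model predictors vs observed 2 m temperature.
    # Predictor columns: forecast T2m (K), 10 m wind speed (m/s), cloud cover (0-1)
    X = np.array([[285.1, 5.2, 0.8],
                  [290.3, 3.1, 0.2],
                  [283.7, 8.0, 1.0],
                  [288.0, 4.4, 0.5]])
    y = np.array([284.2, 291.0, 282.5, 288.4])   # observed T2m (K)

    # Least-squares fit with an intercept term
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Corrected forecast for a new model output
    new = np.array([286.0, 6.1, 0.7])
    t2m_mos = coef[0] + coef[1:] @ new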

  6. HOURLY HIRLAM ... LINUX PC [dual 500 MHz CPUs]
  • HIRLAM 4.3 with OI analysis (see the sketch below)
  • Hourly assimilation cycle with hourly analysis and 3-hour forecast
  • Rotated lat/long 0.15x0.15 grid with 97x98 grid points
  • Hybrid [eta] coordinates with 24 levels
  • Analysis shown on public website
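Optimal interpolation (OI) blends the model background with observations, weighted by their respective error covariances. A minimal numpy sketch of the standard update equations, with toy covariances rather than the operational error statistics:

    import numpy as np

    xb = np.array([280.0, 282.0, 284.0])   # background (first guess)
    y  = np.array([281.0, 283.5])          # two observations
    H  = np.array([[1.0, 0.0, 0.0],        # obs operator: obs located
                   [0.0, 0.0, 1.0]])       # at grid points 1 and 3
    B  = np.exp(-np.abs(np.subtract.outer(np.arange(3), np.arange(3))))  # background error cov
    R  = 0.5 * np.eye(2)                   # observation error cov

    # Gain K = B H^T (H B H^T + R)^-1 ; analysis xa = xb + K (y - H xb)
    K  = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    xa = xb + K @ (y - H @ xb)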

  7. OPERATIONAL USES of HIRLAM
  • General forecasting: forecast guidance out to 48 hours
  • WAM wave model: forecast 10-metre winds used to drive the model (see the sketch below)
  • Road-ice prediction system: forecast parameters used as a first guess for the [human] forecaster
  • SATREP: overlay on satellite plots [ZAMG SATREP analysis scheme]
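Driving a wave model from forecast 10-metre winds amounts to supplying a surface momentum flux at each sea point. A bulk-formula sketch with a constant drag coefficient, purely for illustration; WAM derives the stress from its own sea-state-dependent physics:

    import numpy as np

    def wind_stress(u10, v10, rho_air=1.225, cd=1.2e-3):
        """Bulk surface wind stress (N/m^2) from 10 m wind components.
        Constant drag coefficient is a simplification."""
        speed = np.hypot(u10, v10)
        return rho_air * cd * speed * u10, rho_air * cd * speed * v10

    taux, tauy = wind_stress(u10=12.0, v10=-5.0)   # one grid point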

  8. RESEARCH ACTIVITIES
  • Better specification of boundary conditions for NWP models
  • Operational implementation of the nested system
  • Regional Climate Analysis, Modelling & Prediction Centre [RCAMPC]
  • Community Climate Change Consortium for Ireland [C4I]

  9. EXPERIMENTS with LINUX CLUSTER …
  • Purchased a small experimental Linux cluster
  • Dell PowerEdge 1750 twin-Xeon nodes, rack mounted
    • 1 x master node [2.8 GHz, 4 GB]
    • 6 x compute nodes [3.2 GHz, 2 GB]
  • Compute nodes in a 3x2 two-dimensional torus [Dolphin cards] (see the sketch below)
  • Red Hat ES-3/WS-3; PGI compilers; Intel compilers
  • Scali MPI Connect, Scali TCP Connect, Scali Manage
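The 3x2 torus maps naturally onto an MPI Cartesian topology, which the MPI library (Scali MPI Connect here) can align with the physical Dolphin interconnect. A sketch using mpi4py purely for brevity; the cluster itself ran Fortran HIRLAM over Scali MPI:

    from mpi4py import MPI

    # run as: mpiexec -n 6 python torus.py
    comm = MPI.COMM_WORLD
    cart = comm.Create_cart(dims=[3, 2], periods=[True, True], reorder=True)

    rank = cart.Get_rank()
    coords = cart.Get_coords(rank)
    # wrap-around neighbours in each dimension (periods=True makes it a torus)
    left, right = cart.Shift(0, 1)
    down, up = cart.Shift(1, 1)
    print(f"rank {rank} at {coords}: x-nbrs ({left},{right}), y-nbrs ({down},{up})")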

  10. EXPERIMENTS with LINUX CLUSTER …
  • Running the operational HIRLAM on the cluster [6 nodes]
  • HIRLAM 5.0.1 with 3DVAR, identical to the operational system
  • Forecast takes 62 mins on the IBM/SP; 81 mins on the cluster
  • 3DVAR takes 15 mins on the IBM/SP; 30 mins on the cluster
  • Full run takes 77 mins on the IBM/SP; 111 mins on the cluster
  • [3DVAR is highly optimised for the IBM/SP, e.g. the FFT routines]
  • Expect a 9-node cluster to be as fast as the IBM/SP
  • Experience with the cluster has been very positive
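Reducing the timings to ratios shows where the gap comes from:

    # cluster time / IBM/SP time, from the figures above
    for phase, sp, cl in [("forecast", 62, 81), ("3DVAR", 15, 30), ("full run", 77, 111)]:
        print(f"{phase:9s}: cluster {cl / sp:.2f}x slower")

The forecast itself is only about 1.3x slower on the six-node cluster, while 3DVAR takes twice as long; together these give the overall 1.44x slowdown.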

  11. FUTURE PLANS for CLUSTER …
  • Investigate use of Intel compilers and libraries
    • Should be faster for running HIRLAM
  • Investigate other FFT packages
    • 3DVAR has an option to use 'fftw', which should be faster (see the sketch below)
  • Use the cluster as a test-bed to implement the latest HIRLAM
  • Upgrade the cluster by adding 3 more nodes
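The likely gain from 'fftw' can be gauged offline by timing transforms of a representative field size with different backends. A hypothetical comparison pitting numpy's built-in FFT against FFTW via the pyfftw wrapper (neither of which is part of HIRLAM itself):

    import timeit
    import numpy as np

    x = np.random.rand(438, 284)    # a field on the operational grid

    t_np = timeit.timeit(lambda: np.fft.rfft2(x), number=100)
    print(f"numpy.fft: {t_np:.3f} s / 100 transforms")

    try:
        from pyfftw.interfaces import numpy_fft   # drop-in FFTW-backed API
        t_fw = timeit.timeit(lambda: numpy_fft.rfft2(x), number=100)
        print(f"pyfftw   : {t_fw:.3f} s / 100 transforms")
    except ImportError:
        print("pyfftw not installed")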

  12. FUTURE PLANS …
  • Investigate options to replace the mainframe in 2006
  • Get HIRLAM 6.2.3 running on the cluster
  • Investigate MOS / neural nets for post-processing
  • Use HIRLAM as input to a point database (see the sketch below)
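Feeding HIRLAM output into a point database is essentially an indexing problem: map each station to its nearest grid point (or interpolation stencil) once, then extract values field by field. A minimal sketch, assuming station coordinates have already been transformed into the model's rotated lat/long system; the grid-origin values below are hypothetical:

    import numpy as np

    def nearest_index(lon, lat, lon0, lat0, dlon=0.15, dlat=0.15):
        """Nearest (i, j) grid indices for a point given in the model's
        rotated lat/long coordinates."""
        return int(round((lon - lon0) / dlon)), int(round((lat - lat0) / dlat))

    # hypothetical rotated-grid origin and station position
    i, j = nearest_index(lon=-2.40, lat=1.95, lon0=-32.0, lat0=-20.0)
    field = np.zeros((284, 438))   # e.g. T2m on the operational 438x284 grid
    value = field[j, i]            # value served to the point database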

  13. Thank You
