Journal Article FZJ-2015-00813

Implementation and scaling of the fully coupled Terrestrial Systems Modeling Platform (TerrSysMP v1.0) in a massively parallel supercomputing environment – a case study on JUQUEEN (IBM Blue Gene/Q)


2014
Copernicus, Katlenburg-Lindau


Abstract: Continental-scale hyper-resolution simulations constitute a grand challenge in characterizing nonlinear feedbacks of states and fluxes of the coupled water, energy, and biogeochemical cycles of terrestrial systems. Tackling this challenge requires advanced coupling and supercomputing technologies for earth system models that are discussed in this study, utilizing the example of the implementation of the newly developed Terrestrial Systems Modeling Platform (TerrSysMP v1.0) on JUQUEEN (IBM Blue Gene/Q) of the Jülich Supercomputing Centre, Germany. The applied coupling strategies rely on the Multiple Program Multiple Data (MPMD) paradigm using the OASIS suite of external couplers, and require memory and load balancing considerations in the exchange of the coupling fields between different component models and the allocation of computational resources, respectively. Using the advanced profiling and tracing tool Scalasca to determine an optimum load balancing leads to a 19% speedup. In massively parallel supercomputer environments, the coupler OASIS-MCT is recommended, which resolves memory limitations that may be significant in case of very large computational domains and exchange fields as they occur in these specific test cases and in many applications in terrestrial research. However, model I/O and initialization in the petascale range still require major attention, as they constitute true big data challenges in light of future exascale computing resources. Based on a factor-two speedup due to compiler optimizations, a refactored coupling interface using OASIS-MCT and an optimum load balancing, the problem size in a weak scaling study can be increased by a factor of 64 from 512 to 32 768 processes while maintaining parallel efficiencies above 80% for the component models.
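The abstract's weak-scaling claim can be made concrete with the standard definition of weak-scaling parallel efficiency, E = T(base)/T(scaled), where the problem size grows in proportion to the process count so the ideal runtime stays constant. A minimal sketch (the runtimes below are illustrative placeholders, not measured values from the paper):

```python
def weak_scaling_efficiency(t_base: float, t_scaled: float) -> float:
    """Weak-scaling parallel efficiency: with problem size growing in
    step with the process count, ideal runtime is constant, so
    E = T_base / T_scaled."""
    return t_base / t_scaled

# Process counts from the study: 512 up to 32768, a factor-64 increase.
assert 32768 // 512 == 64

# Hypothetical runtimes (seconds) illustrating the >= 80% criterion
# reported for the component models.
t_512 = 100.0
t_32768 = 120.0
eff = weak_scaling_efficiency(t_512, t_32768)
print(f"parallel efficiency: {eff:.0%}")  # prints "parallel efficiency: 83%"
```

An efficiency above 0.8 at the largest process count is the criterion the abstract refers to when it reports parallel efficiencies above 80% for the component models.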

Contributing Institute(s):
  1. Agrosphäre (IBG-3)
  2. John von Neumann - Institut für Computing (NIC)
  3. JARA - HPC (JARA-HPC)
Research Program(s):
  1. 246 - Modelling and Monitoring Terrestrial Systems: Methods and Technologies (POF2-246)
  2. 255 - Terrestrial Systems: From Observation to Prediction (POF3-255)
  3. Scalable Performance Analysis of Large-Scale Parallel Applications (jzam11_20091101)

Appears in the scientific report 2014
Database coverage:
Creative Commons Attribution CC BY 3.0 ; DOAJ ; OpenAccess ; Current Contents - Physical, Chemical and Earth Sciences ; IF >= 5 ; JCR ; Science Citation Index Expanded ; Thomson Reuters Master Journal List ; Web of Science Core Collection

The record appears in these collections:
Document types > Articles > Journal Article
JARA > JARA-HPC
Institute Collections > IBG > IBG-3
Workflow collections > Public records
Publications database
Open Access
NIC

 Record created 2015-01-26, last modified 2022-09-08


OpenAccess:
Download fulltext PDF
External link:
Download fulltext from OpenAccess repository