Interface tracking simulations of bubbly flows in PWR relevant geometries

Authors: Jun Fang, Igor A. Bolotnov, Michel Rasquin
Year of publication: 2017
Subject:
Source: Nuclear Engineering and Design. 312:205-213
ISSN: 0029-5493
Description: Advances in high performance computing (HPC) have allowed the direct numerical simulation (DNS) approach, coupled with interface tracking methods (ITM), to perform high-fidelity simulations of turbulent bubbly flows in various complex geometries. In this work, we have chosen the geometry of a pressurized water reactor (PWR) core subchannel to perform a set of interface tracking simulations (ITS) with fully resolved liquid turbulence. The presented research utilizes a massively parallel finite-element-based code, PHASTA, for the subchannel-geometry simulations of bubbly flow turbulence. The main objective of this research is to demonstrate the ITS capabilities in gaining new insight into bubble/turbulence interactions and assisting the development of improved closure laws for multiphase computational fluid dynamics (M-CFD). Both single- and two-phase turbulent flows were studied within a single PWR subchannel. The analysis of the numerical results includes the mean gas and liquid velocity profiles, the void fraction distribution, and the turbulent kinetic energy profiles. Two sets of flow rates and bubble sizes were used in the simulations. The chosen flow rates corresponded to Reynolds numbers of 29,079 and 80,775 based on the channel hydraulic diameter (D_h) and mean velocity. The finite-element unstructured grids utilized for these simulations include 53.8 million and 1.11 billion elements, respectively. This has allowed us to fully resolve all the turbulence scales and the deformable interfaces of individual bubbles. For the two-phase flow simulations, a 1% bubble volume fraction was used, which resulted in 17 bubbles in the smaller case and 262 bubbles in the larger case. In the larger simulation case the resolved bubbles are 0.65 mm in diameter, and the bulk mesh cell size is about 30 microns.
These large-scale simulations provide a level of detail previously unavailable and were enabled by the excellent scaling performance of our two-phase flow solver and access to state-of-the-art supercomputing resources. The presented simulations used up to 256 thousand processing threads on the IBM BG/Q supercomputer "Mira" (Argonne National Laboratory).
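As an illustrative sanity check (not part of the original record), the reported 1% void fraction, 262 bubbles, and 0.65 mm bubble diameter for the larger case together imply a particular domain volume, assuming spherical bubbles. A minimal sketch of that arithmetic:

```python
import math

# Reported parameters for the larger simulation case
void_fraction = 0.01   # 1% bubble volume fraction
n_bubbles = 262        # number of resolved bubbles
d_bubble = 0.65e-3     # bubble diameter in meters

# Volume of a single bubble, assumed spherical for this estimate
v_bubble = math.pi / 6.0 * d_bubble**3

# Total gas volume and the domain volume it implies at 1% void fraction
v_gas = n_bubbles * v_bubble
v_domain_m3 = v_gas / void_fraction

print(f"Implied domain volume: {v_domain_m3 * 1e6:.2f} cm^3")
# -> Implied domain volume: 3.77 cm^3
```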
Database: OpenAIRE