Incompressible Turbulent Flows

DNS pseudospectral codes

The codes solve the incompressible Navier-Stokes equations, or the Boussinesq equations in the case of stratified flows, together with the passive scalar transport equation, in a parallelepiped domain with periodic boundary conditions. They implement a pseudospectral Fourier-Galerkin spatial discretization and an explicit low-storage fourth-order Runge-Kutta time integration scheme. In the MPI implementation, only one direction is distributed among the processors. This limits the maximum number of cores that can be used to the number of mesh points in the smallest dimension of the parallelepiped domain. The most computationally expensive part of the code is the pseudospectral computation of the nonlinear convective terms (on average, between 70% and 85% of the total time), which requires inverse transforming the data to physical space and then transforming the convective terms back to wavenumber space. The code uses one-dimensional real-to-real FFT subroutines and a data transposition method which is integrated with the dealiasing procedure (see CPC 2001).
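For illustration only, the following NumPy sketch shows the idea of the pseudospectral evaluation of the convective term with 2/3-rule dealiasing on a small two-dimensional periodic grid. The production code is three-dimensional Fortran with MPI and FFTW and integrates the dealiasing with the data transposition, so none of the names below come from the actual sources.

    import numpy as np

    def dealiased_convective_term(u, v, Lx=2*np.pi, Ly=2*np.pi):
        """Pseudospectral (u.grad)u on a 2-D periodic grid, dealiased with
        the 2/3 rule (illustrative sketch, not the production code)."""
        ny, nx = u.shape
        kx = 2*np.pi*np.fft.fftfreq(nx, d=Lx/nx)      # wavenumbers in x
        ky = 2*np.pi*np.fft.fftfreq(ny, d=Ly/ny)      # wavenumbers in y
        KX, KY = np.meshgrid(kx, ky)

        uh, vh = np.fft.fft2(u), np.fft.fft2(v)

        # spectral derivatives, inverse transformed to physical space
        ux = np.real(np.fft.ifft2(1j*KX*uh)); uy = np.real(np.fft.ifft2(1j*KY*uh))
        vx = np.real(np.fft.ifft2(1j*KX*vh)); vy = np.real(np.fft.ifft2(1j*KY*vh))

        # nonlinear products formed in physical space
        Nu = u*ux + v*uy
        Nv = u*vx + v*vy

        # transform back to wavenumber space and drop the aliased modes
        mask = (np.abs(KX) < (2.0/3.0)*np.abs(kx).max()) & \
               (np.abs(KY) < (2.0/3.0)*np.abs(ky).max())
        return np.fft.fft2(Nu)*mask, np.fft.fft2(Nv)*mask

    # Example: Taylor-Green-like velocity field on a 64x64 grid
    n = 64
    x = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x)
    u, v = np.cos(X)*np.sin(Y), -np.sin(X)*np.cos(Y)
    Nu_hat, Nv_hat = dealiased_convective_term(u, v)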
Most of the development work has been performed on the IBM Power series machines at Cineca.
The code has been under constant improvement over the last ten years and has been optimized in collaboration with Cineca. It has reached a good level of maturity, with good strong scaling up to 2048 cores on the IBM Power 6. Similar scaling is expected on fat-node systems with an InfiniBand interconnect.
A licence has not yet been defined. However, all codes are available from the developers on request for scientific collaboration.


Code Development and Versions

All the codes have been developed and maintained by our group:

  • Version 1.0 (2001): parallelization of the pseudospectral Navier-Stokes code. The code solves the Navier-Stokes equations in a cube with N grid points in each direction (see M. Iovieno, C. Cavazzoni, D. Tordella, A new technique for a parallel dealiased pseudospectral Navier-Stokes code, Comp. Phys. Comm., 141, 365-374, (2001)).
  • Version 1.2 (2005): extension to parallelepiped domains with N×N×N3 points.
  • Version 1.4 (2008-2016, see here for the model equations):
    • Version 1.4 (2008): optimization of the computation of the nonlinear convective terms. The modification reduced the total number of FFTs/inverse FFTs at the expense of an increased memory usage.
    • Version 1.4scal (2009): the code has been extended with the transport equations for passive scalars. The transport of up to six scalar quantities with different Schmidt numbers or initial conditions can be simulated.
    • Version 1.4strat (2011-2012): the code has been extended to solve stratified flows within the Boussinesq approximation.
    • Version 1.4p (2016): the particle/droplet module has been added to version 1.4scal. This version is able to track small inertial particles/droplets transported by the fluid (see here for the droplet model and its parallelization).
  • Version 1.5 (2012): parallelization over two directions with a "pencil" data distribution, aimed at larger numbers of cores, as in the IBM Blue Gene architecture ([PDF]). Replaced by the more efficient version 1.7.
  • Version 1.6 (2013): mixed MPI/OpenMP parallelization.
  • Version 1.7 (2014-2015):
    • Version 1.7 (2014-2015): two-direction "pencil" parallelization. This version includes code optimizations based on newer standards and libraries such as MPI 3.0, the Fortran 2008 standard, and FFTW 3.3. The new data distribution reduces the number of required FFTs by 30%. Note: the main code is complete, but the pre- and post-processing routines are still being converted. See here for information about the parallelization.
    • Version 1.7p: version 1.7 with the inertial particle/droplet module. Under construction.

 

Documentation

  • User manual for the scalar code. [PDF]
  • User manual for the parallel (MPI) code, version 1.4. [PDF]
  • User manual for the parallel (MPI) code, version 1.7. [PDF]
  • M. Iovieno, C. Cavazzoni, D. Tordella, A new technique for a parallel dealiased pseudospectral Navier-Stokes code, Comp. Phys. Comm., 141, 365-374, (2001). [PDF]
  • Scaling of the code.

 

Production codes available for download (version 1.4):

Note: the MPI parallel codes available for download are version 1.4, which uses a "slab" parallelization: only one direction is distributed among the cores. Version 1.7, which uses a "pencil" decomposition, is available but does not yet contain all the post-processing of version 1.4.
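As a rough illustration of the difference between the two decompositions (not taken from the code; all names are ours), the following sketch shows how the mesh planes of an N1 x N2 x N3 grid are assigned to MPI ranks, and why the slab decomposition caps the number of usable cores at the number of points in one direction while the pencil decomposition allows up to the product of two directions.

    def slab_decomposition(n, n_ranks):
        """Slab ("one-direction") decomposition: each rank owns a contiguous
        block of planes along a single axis, so at most n ranks can be used
        (hypothetical helper, not from the production code)."""
        if n_ranks > n:
            raise ValueError("slab decomposition: at most n ranks can be used")
        base, extra = divmod(n, n_ranks)
        sizes = [base + (1 if r < extra else 0) for r in range(n_ranks)]
        starts = [sum(sizes[:r]) for r in range(n_ranks)]
        return list(zip(starts, sizes))   # (first plane, number of planes) per rank

    def pencil_decomposition(n2, n3, p_rows, p_cols):
        """Pencil ("two-direction") decomposition: ranks form a p_rows x p_cols
        grid and each owns a pencil spanning the full first direction, so up
        to n2*n3 ranks can be used."""
        return [(r2, r3) for r2 in slab_decomposition(n2, p_rows)
                         for r3 in slab_decomposition(n3, p_cols)]

    # On a 1024 x 1024 x 256 mesh the slab code (v1.4) cannot use more than
    # 256 cores, while a 32 x 32 pencil grid (v1.7) uses 1024 cores.
    print(slab_decomposition(256, 256)[:2])               # [(0, 1), (1, 1)]
    print(len(pencil_decomposition(1024, 256, 32, 32)))   # 1024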

When you fill in the request form, please specify which version you are interested in. The form will send an email to the software maintainers.


DNS - Spectral code (scalar code) [DOWNLOAD]

  • Homogeneous and isotropic turbulence
  • Shearless turbulent mixing

The code solves the incompressible Navier-Stokes equations in a parallelepiped domain with periodic boundary conditions. It implements a pseudospectral Fourier-Galerkin spatial discretization and an explicit low-storage fourth-order Runge-Kutta time integration scheme.
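The exact low-storage Runge-Kutta coefficients used by the code are not reproduced here. Purely to illustrate the storage pattern of this family of schemes, the sketch below advances a field with a classical 2N-register stage loop, using Williamson's third-order coefficients as a stand-in (the production scheme is fourth order).

    import numpy as np

    # 2N-storage Runge-Kutta coefficients (Williamson 1980, third order),
    # used here only as an example of the low-storage stage loop.
    A = np.array([0.0, -5.0/9.0, -153.0/128.0])
    B = np.array([1.0/3.0, 15.0/16.0, 8.0/15.0])

    def low_storage_rk_step(u, rhs, dt):
        """Advance u by one time step using a single extra register S."""
        S = np.zeros_like(u)
        for a, b in zip(A, B):
            S = a*S + dt*rhs(u)   # accumulate the stage increment
            u = u + b*S           # update the solution
        return u

    # Example: du/dt = -u, whose exact solution at t = 1 is exp(-1)
    u, dt = np.array([1.0]), 0.1
    for _ in range(10):
        u = low_storage_rk_step(u, lambda q: -q, dt)
    print(u[0], np.exp(-1.0))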

 

DNS - Spectral code (parallel, MPI) [DOWNLOAD]

  • Homogeneous and isotropic turbulence
  • Shearless turbulent mixing

The code solves the incompressible Navier-Stokes equations in a parallelepiped domain with periodic boundary conditions. It implements a pseudospectral Fourier-Galerkin spatial discretization and an explicit low-storage fourth-order Runge-Kutta time integration scheme.
In the MPI parallelization of version 1.4, only one direction is distributed among the processors, which limits the maximum number of cores that can be used to the number of mesh points in the smallest dimension of the parallelepiped domain [CPC 2001]. In version 1.7 two directions are distributed.

 


DNS - Passive scalar transport (parallel, MPI) [DOWNLOAD]

  • Homogeneous and isotropic turbulence
  • Shearless Turbulent Mixing
  • Arbitrary passive scalar initial conditions (e.g. uniform shear, scalar step)

In addition to the incompressible Navier-Stokes equations, the code solves the advection-diffusion equations for up to six passive scalars in a parallelepiped domain with periodic boundary conditions. It implements a pseudospectral Fourier-Galerkin spatial discretization and an explicit low-storage fourth-order Runge-Kutta time integration scheme.
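For reference, the transport equation solved for each scalar is the standard advection-diffusion equation; in the usual non-dimensional form (the code's exact non-dimensionalization may differ) it reads

    \[
    \frac{\partial \theta_i}{\partial t} + \mathbf{u}\cdot\nabla\theta_i
      = \frac{1}{\mathrm{Re}\,\mathrm{Sc}_i}\,\nabla^2 \theta_i ,
      \qquad i = 1,\dots,6,
    \]

where Sc_i is the Schmidt number of the i-th scalar, which may differ from scalar to scalar.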

In the MPI parallelization, only one direction is distributed among the processors in version 1.4 (which limits the maximum number of cores that can be used to the number of mesh points in the smallest dimension of the parallelepiped domain), while two directions are distributed in version 1.7.


DNS - Stratified flows (parallel, MPI)

  • Homogeneous stratification
  • Passive scalar transport

The code solves the Boussinesq equations for a stratified flow, and can also solve the transport of up to five passive scalars, in a parallelepiped domain with periodic boundary conditions. The density stratification is in the z direction. It implements a pseudospectral Fourier-Galerkin spatial discretization and an explicit low-storage fourth-order Runge-Kutta time integration scheme. In the MPI parallelization, only one direction is distributed among the processors, which limits the maximum number of cores that can be used to the number of mesh points in the smallest dimension of the parallelepiped domain.
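The model equations are the standard Boussinesq set; in dimensional, textbook form (the sign conventions and non-dimensionalization used in the code may differ) they read

    \[
    \nabla\cdot\mathbf{u} = 0, \qquad
    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
      = -\frac{1}{\rho_0}\nabla p + \nu\,\nabla^2\mathbf{u}
        - \frac{\rho'}{\rho_0}\, g\,\hat{\mathbf{z}},
    \]
    \[
    \frac{\partial \rho'}{\partial t} + \mathbf{u}\cdot\nabla\rho'
      + w\,\frac{d\bar{\rho}}{dz} = \kappa\,\nabla^2\rho',
    \]

where \rho' is the density fluctuation about the mean stratification \bar{\rho}(z), \rho_0 is a reference density, w the vertical velocity and \kappa the density diffusivity.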


Post-processing (scalar and parallel, MPI)

One-point statistics post-processing codes are available. They compute the moments up to fourth order of the velocity and scalar fields, and of the velocity and scalar derivatives, by averaging over planes at constant z the data obtained from the four codes above. They also compute 1D and 3D spectra.
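As an illustration of the two operations described above (plane averages of the moments and shell-integrated spectra), the following NumPy sketch reproduces them for a single field on a cubic grid; it is not the actual post-processing code and the function names are ours.

    import numpy as np

    def plane_moments(f):
        """Mean, variance, skewness and flatness of f(x, y, z), averaged
        over z = const planes (z is assumed to be the last array axis)."""
        mean = f.mean(axis=(0, 1))
        fluc = f - mean                     # fluctuation about the plane average
        var  = (fluc**2).mean(axis=(0, 1))
        skew = (fluc**3).mean(axis=(0, 1)) / var**1.5
        flat = (fluc**4).mean(axis=(0, 1)) / var**2
        return np.stack([mean, var, skew, flat])    # shape (4, nz)

    def spectrum_3d(f):
        """Shell-integrated 3-D spectrum of a periodic field on a cubic grid."""
        n = f.shape[0]
        fh = np.fft.fftn(f) / f.size
        k = np.fft.fftfreq(n) * n           # integer wavenumbers
        KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
        kmag = np.sqrt(KX**2 + KY**2 + KZ**2)
        shells = np.arange(1.0, n // 2)
        E = np.array([np.sum(np.abs(fh[(kmag >= s - 0.5) & (kmag < s + 0.5)])**2)
                      for s in shells])
        return shells, E

    # Example on random data, standing in for one velocity component
    f = np.random.default_rng(0).standard_normal((64, 64, 64))
    print(plane_moments(f).shape)           # (4, 64)
    k, E = spectrum_3d(f)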

 

Credits