The Open FUSION Toolkit 1.0.0-8905cc5
Modeling tools for plasma and fusion research and engineering
This example outlines the basic elements of a program which uses the Open FUSION Toolkit (OFT) to solve a PDE, in this case computing the lowest eigenvalue of the scalar Helmholtz equation \( ( \nabla^2 - \lambda ) u = 0 \).
The code consists of three basic sections: required imports and variable definitions, finite element setup, and system creation and solution.
The first step is to include the OFT modules which contain the required functions and variables. It is good practice to restrict the included elements to only those needed. This is done using the ONLY
clause to specifically include only certain definitions. The exceptions to this practice are the local and oft_base modules, which contain a controlled set of commonly used elements that can be safely imported as a whole.
The first two modules import runtime and helper functions. The tetmesh_local module contains the main global mesh object mesh, and multigrid_build contains the grid construction driver. The three oft_lag_*
modules contain, respectively, the finite element basis and setup routines, vector definitions, and operator routines for a Lagrange finite element representation. oft_cg contains the native Conjugate-Gradient solvers, and oft_io contains output routines for producing plot files.
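A minimal sketch of this import block is shown below. Only the module and routine names given in the text are taken from the description above; the oft_lag_* module names and the symbols pulled from them (oft_lag_create, oft_lag_getlop, oft_lag_getmop), as well as the program name, are assumptions and may differ from the actual OFT source.

PROGRAM helmholtz_eig
! Import block (items marked "assumed" are illustrative, not the exact OFT API)
USE oft_base                                    ! common runtime/helper definitions, imported whole
                                                ! (the "local" module noted above would also be imported whole)
USE tetmesh_local, ONLY: mesh                   ! global mesh object
USE multigrid_build, ONLY: multigrid_construct  ! grid construction driver
USE oft_lag_basis, ONLY: oft_lag_setup          ! Lagrange FE setup (module name assumed)
USE oft_lag_fields, ONLY: oft_lag_create        ! Lagrange vector creation (assumed)
USE oft_lag_operators, ONLY: oft_lag_getlop, oft_lag_getmop ! operator construction (assumed)
USE oft_cg, ONLY: oft_cg_eigsolver              ! native CG eigensolver
USE oft_io, ONLY: hdf5_spdata                   ! HDF5 plot output
IMPLICIT NONE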
The variable declarations define the vector object used to compute the eigenvalue/eigenvector pair, as well as the matrix objects for the Laplace and mass operators. An eigensolver is also instantiated, along with a preconditioner that uses diagonal scaling.
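A sketch of the corresponding declarations is shown below, assuming abstract vector and matrix classes named oft_vector and oft_matrix and a diagonal-scaling preconditioner type; all of these type names are assumptions.

! Solution vector, operators, eigensolver, and preconditioner
CLASS(oft_vector), POINTER :: u => NULL()   ! eigenvector (type name assumed)
CLASS(oft_matrix), POINTER :: lop => NULL() ! Laplace operator A (type name assumed)
CLASS(oft_matrix), POINTER :: mop => NULL() ! mass matrix M (type name assumed)
TYPE(oft_cg_eigsolver) :: eig_solver        ! native CG eigensolver
TYPE(oft_diag_scale) :: pre                 ! diagonal-scaling preconditioner (type name assumed)
REAL(8) :: lambda                           ! computed eigenvalue
REAL(8), POINTER :: vals(:) => NULL()       ! local field values for plot output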
This call sets up the basic OFT run environment, including initializing MPI and PETSc if applicable.
This call, multigrid_construct, constructs the grid levels through hierarchical refinement.
This call sets up metadata files for I/O and saves the mesh for use with the solution fields output later in the program. The plotting grid handles high-order fields by tessellating the mesh to produce additional tets using the new node points. The level of tessellation is set by the first argument to mesh%setup_io.
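Taken together, the runtime, grid, and I/O setup might look like the following sketch; the initialization routine name oft_init and the literal tessellation level are assumptions.

CALL oft_init               ! start MPI/PETSc and the OFT run environment (name assumed)
CALL multigrid_construct    ! build grid levels by hierarchical refinement
CALL mesh%setup_io(2)       ! I/O metadata and mesh output; the first argument
                            ! sets the tessellation level for high-order plotting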
oft_lag_setup builds the Lagrange finite elements on each grid and polynomial level. This creates element interaction lists as well as boundary and seam information. All FE index fields are encapsulated in the oft_fem_type structure, see fem_setup.
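For example (the argument list here, a single polynomial order, is an assumption about the interface):

CALL oft_lag_setup(2)   ! build Lagrange FE on every grid level; the argument is shown
                        ! as the polynomial order (exact interface assumed)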
Solving the Helmholtz eigensystem requires the operators corresponding to the generalized eigenvalue problem \( Ax = \lambda Mx \). These operators are the discretized Laplace operator \( A = \int \nabla u_i^T \cdot \nabla u_j \, dV \) and the finite element mass matrix \( M = \int u_i^T u_j \, dV \), where \( u \)
and \( u^T \) are the Lagrange basis and test functions respectively. A vector defining the solution space is also required, both to store the solution and to create worker vectors. In this case a Dirichlet boundary condition is used, where \( u \)
is 0 everywhere on the boundary, denoted by the BC flag 'zerob' used in matrix construction.
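A sketch of the vector and operator construction using the 'zerob' flag named above; the routine names oft_lag_create, oft_lag_getlop, and oft_lag_getmop are assumptions.

CALL oft_lag_create(u)             ! solution vector spanning the Lagrange FE space
CALL oft_lag_getlop(lop,'zerob')   ! A = int grad(u_i^T).grad(u_j) dV, zero Dirichlet BC
CALL oft_lag_getmop(mop,'zerob')   ! M = int u_i^T u_j dV, same BC flag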
This section assembles the solver object by filling in the required references. The solver object used here is the native non-linear Conjugate-Gradient iterative method. It requires the left- and right-hand side matrices for the generalized eigenvalue problem, \( A \) and \( M \). The convergence tolerance is also specified to be double precision convergence of the eigenvalue, see oft_cg_eigsolver. Preconditioning is supplied by diagonal scaling, which requires identifying the preconditioner matrix, in this case \( A \).
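The assembly might look like the following sketch; the component names (A, M, pre, tol) are assumptions about the layout of the oft_cg_eigsolver type.

eig_solver%A   => lop      ! left-hand side operator of A x = lambda M x
eig_solver%M   => mop      ! right-hand side (mass) matrix
eig_solver%tol = 1.d-14    ! converge the eigenvalue to double precision
pre%A => lop               ! diagonal scaling constructed from A (field name assumed)
eig_solver%pre => pre      ! attach the preconditioner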
The vector is then initialized with a guess solution, and the boundary condition is applied to ensure the initial guess is consistent with it. Finally, the assembled solver is used to compute the lowest eigenvalue/eigenvector pair.
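A sketch of this final solve step; the set, boundary-condition, and apply method names are assumptions.

CALL u%set(1.d0)                 ! uniform initial guess (method name assumed)
CALL u%zero_boundary             ! hypothetical helper enforcing the 'zerob' condition on the guess
CALL eig_solver%apply(u,lambda)  ! lowest eigenvalue returned in lambda, eigenvector in u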
The solution is output to the HDF5 dump files for plotting at a later time in VisIt. In order to save the field, the local representation must first be retrieved from the vector object using get_local. These values may then be saved using the hdf5_spdata subroutine. When the program has completed, oft_finalize is called to clean up the runtime environment and terminate the process; this subroutine calls any required MPI and PETSc finalize subroutines as well. After completing the run, the build_xdmf
script may be used to construct VisIt files and view the solution field, saved as tag T.
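A sketch of the output and shutdown steps; the get_local calling convention and the hdf5_spdata argument list are assumptions.

CALL u%get_local(vals)       ! copy the local representation out of the vector object
CALL hdf5_spdata(vals,'T')   ! write the field to the HDF5 dump files under tag 'T'
CALL oft_finalize            ! MPI/PETSc finalize and clean shutdown
END PROGRAM helmholtz_eig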
Below is an input file which can be used with this example in a serial environment. For this example we have disabled additional debug output, debug=0, and are using the built-in cube mesh type, cad_type=92, with 4 grid refinements, nbase=nlevels=4.
&runtime_options
 ppn=1
 debug=0
/

&mesh_options
 meshname='cube'
 cad_type=92
 nlevels=4
 nbase=4
/
In order to run the code in a distributed memory environment with MPI, the mesh must be decomposed. This requires that nlevels>nbase, as explained in Mesh settings. To perform a parallel run that is analogous to the serial run shown above, the input file must be modified as below. The choice of nbase is somewhat arbitrary here, but generally is a function of the number of MPI tasks and the size of the base mesh. nlevels, however, must be incremented by 1 in order to account for the additional transfer level created during decomposition.
&runtime_options
 ppn=1
 debug=0
/

&mesh_options
 meshname='cube'
 cad_type=92
 nlevels=5
 nbase=3
/