Tips & Tricks¶
We introduced this section to share useful tips drawn from our own and our users’ modeling experience, and we aim to update the list successively.
Please give us a hint if information is missing that would have helped you avoid modeling bugs or save time when designing scripts.
Feel free to contact us in case of any issue, question, or request for functionality that does not yet exist.
Many useful features, such as automated Rx/Tx handling and interpolation or utility functions, are not represented in the provided examples. Again, feel free to ask if setting up problems feels too tricky.
Unless you are interested in comparing or self-validating results with different custEM approaches, we strongly recommend using the total electric-field formulation in the frequency domain. Of course, for flat-surface models, the secondary electric- or magnetic-field formulations are also very useful. Time-domain approaches and natural-source modeling can, for now, only use the generally applicable total electric-field formulation.
All approaches (except A-V nodal, which supports only VTI) support generally anisotropic conductivities and magnetic permeabilities. Modeling induced polarization (IP) effects and electric permittivities is only supported in the frequency domain with the total electric-field formulation.
Airspace conductivities are usually set to 1e-10 or 1e-12 S/m. For most CSEM setups, we found a contrast of 3-4 orders of magnitude between airspace and subsurface conductivities to be entirely sufficient for accurate solutions. Higher contrasts worsen the conditioning of the system matrix and slightly increase computation times, but do not significantly improve solution accuracy (the effect is << 1 % in relative errors).
The first assembly, in particular for p2 function spaces, can take a while due to the initial pre-compilation tasks of dolfin.
Set the recently implemented debug_levels to your needs:
- 10: verbose
- 20: standard output, might be further reduced in future versions
- 30: almost quiet, as there are only a few warnings
- 40: not used internally
- 50: only print critical errors
Uncaught exceptions are always raised with traceback information.
With TetGen 1.6, a new tetrahedralization algorithm was introduced. If TetGen crashes unexpectedly or freezes, try the old algorithm by adding the “-D” option to the TetGen call.
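TetGen options are combined into a single switch string, so “adding -D” means appending the letter D to the existing switches. A minimal sketch, assuming illustrative base switches (the helper name and defaults are hypothetical, not custEM API):

```python
# Hypothetical helper: build a TetGen switch string and optionally
# append "-D" to fall back to the old tetrahedralization algorithm.
def tetgen_switches(quality=1.4, use_old_algorithm=False):
    switches = f"-pq{quality}aA"   # base switches, illustrative only
    if use_old_algorithm:
        switches += "D"            # appended letter == adding "-D"
    return switches

print(tetgen_switches(1.6, use_old_algorithm=True))  # -pq1.6aAD
```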
In custEM, we always use a right-handed coordinate system:
- x: pointing positive towards geographic East
- y: pointing positive towards geographic North
- z: pointing positive upwards (height values positive above the surface and negative below)
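A one-line sanity check that the East/North/Up convention indeed forms a right-handed system (x × y = z):

```python
import numpy as np

# Unit vectors of the custEM axes: x = East, y = North, z = up.
east, north, up = np.eye(3)

# Right-handedness: the cross product of x and y must give z.
assert np.allclose(np.cross(east, north), up)
```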
Useful values for the TetGen quality parameter: 1.2 for p1 and 1.4 - 2.0 for p2 (1.6 is often a good compromise between accuracy and mesh size).
It is strongly recommended to use the refine_rx method to refine all receiver positions with surrounding triangles of a certain radius. Generally useful values: r between 0.1 and 10 m for p1 and r between 1 and 100 m for p2.
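The refinement itself is done by custEM's refine_rx method; the sketch below only illustrates the underlying geometric idea of surrounding a receiver with nodes at radius r (function name and node count are hypothetical):

```python
import numpy as np

def surrounding_nodes(rx, r=1.0, n=6):
    """Illustrative only: place n nodes on a horizontal circle of
    radius r around a receiver position rx = (x, y, z), mimicking
    the local refinement that custEM's refine_rx method provides."""
    rx = np.asarray(rx, dtype=float)
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    ring = np.column_stack((np.cos(phi), np.sin(phi), np.zeros(n)))
    return rx + r * ring

# Six nodes at r = 1 m around a receiver at (100, 0, 0)
nodes = surrounding_nodes((100.0, 0.0, 0.0), r=1.0)
```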
Include Tx either on the surface with the build_surface method (in case of topography, any z-values will be ignored and shifted to the surface height from the elevation model) or anywhere in the mesh by using the add_tx method before calling TetGen.
The same holds for Rx. Important: since the optimal refinement is achieved with the surrounding triangle discretization explained above, the refined shapes must be included as paths in the mesh. Consequently, the mesh does not contain the original Rx coordinates as nodes. The original coordinate positions of single or multiple Rx paths can be stored in the mesh-parameter file (for automated parallel interpolation during the simulation) by using the add_rx method.
If you are working with digital elevation models (DEM) in a gridded .asc format, check whether the ordering of the gridded data matches the definition of the x/y/z axes in custEM (right-handed coordinate system).
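In the common ESRI ASCII grid (.asc) layout, the first data row is the northern edge, so y decreases with increasing row index, which is opposite to custEM's y-North-positive convention. A minimal sketch of the check and fix (the sample grid values are made up):

```python
import io
import numpy as np

# Minimal ESRI ASCII grid: the FIRST data row is the NORTHERN edge.
asc = """ncols 3
nrows 2
xllcorner 0.0
yllcorner 0.0
cellsize 10.0
NODATA_value -9999
5.0 5.0 5.0
1.0 1.0 1.0
"""
z = np.loadtxt(io.StringIO(asc), skiprows=6)

# Flip the rows so that y (North) increases with the row index,
# matching custEM's right-handed x-East / y-North axes.
z_custem = np.flipud(z)
```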
It is recommended to avoid very short or very long segments in a transmitter path description.
Increase the TetGen tolerance in case of crashes caused by the combination of very large domains and very short segments.
Appending a boundary mesh for halfspace models is not required; it is equivalent to increasing the original domain dimensions by the same factors.
Even though described separately in other instructions, we want to point out that, for implementation reasons, the mpirun syntax currently needs to be used for serial computations as well. For instance, instead of python run_example_1.py, use:
--> mpirun -n 1 python run_example_1.py
Mesh-generation or visualization scripts always have to be called in serial with the usual command-line syntax or from a Python editor, for instance:
--> python mesh_example_1.py
--> python plot_example_1.py
In very few cases, the FE simulation crashes during the magnetic-field conversion with one of the errors:
--> ERROR: hdgraphFold2: out of memory
--> PETSc reason DIVERGED_PC_FAILED, residual norm ||r|| = 0.000000e+00
These errors are caused by MUMPS. The best chance to avoid them is to set the MOD-class keyword argument serial_ordering to True, or to slightly change the number of mpirun processes for the simulation of the specific setup.
The common workflow so far is to provide separate mesh, run, and visualization scripts, but it is also possible to combine all tasks in a single script, performing the meshing and plotting tasks in serial while embedded in the mpirun environment. It can also be useful to start any combination of custEM scripts with the help of a master bash file. In case of related questions, feel free to ask the developers for further advice.
Syntax change advice¶
General anisotropic conductivities and magnetic permeabilities (except for the A-V nodal approach) are now set as a list containing, for each domain marker, either 1 value or a scalar (isotropic), 3 values (VTI-anisotropic, main diagonal of the conductivity tensor), or 6 values (generally anisotropic, upper triangle of the conductivity tensor). Example: [1e-2, [1e-1, 1e-1, 1e-2]] specifies one isotropic value for domain marker 1 and a VTI conductivity for domain marker 2; domain marker 0 always belongs to the airspace with vacuum properties.
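custEM builds the tensors internally; the hypothetical helper below only mirrors the 1/3/6-value convention to make it concrete (the ordering of the six upper-triangle entries is an assumption):

```python
import numpy as np

def to_tensor(sigma):
    """Hypothetical sketch of the 1/3/6-value convention for one
    domain marker; NOT custEM API, for illustration only."""
    s = np.atleast_1d(np.asarray(sigma, dtype=float))
    if s.size == 1:                     # isotropic
        return s[0] * np.eye(3)
    if s.size == 3:                     # VTI: main diagonal
        return np.diag(s)
    if s.size == 6:                     # general: upper triangle
        xx, yy, zz, xy, xz, yz = s      # assumed entry ordering
        return np.array([[xx, xy, xz],
                         [xy, yy, yz],
                         [xz, yz, zz]])
    raise ValueError("expected 1, 3, or 6 values per domain")

# The example from the text: markers 1 (isotropic) and 2 (VTI)
sigma = [1e-2, [1e-1, 1e-1, 1e-2]]
tensors = [to_tensor(s) for s in sigma]
```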
The overwrite flag was replaced by two different flags: overwrite_results (alias for the old overwrite flag) and overwrite_mesh. Their usage is described in the MOD class.
To support more general setups for secondary-field computations, the new syntax requires specifying sigma_ground (containing all sigma values except the airspace, corresponding to domain markers 1-N) and a background conductivity vector sigma_0 of equal length. Hence, any domain can now be treated as an anomaly, in contrast to the old syntax, which supported only anomalies in layers (the old arguments sigma_anom and anomaly_layer_markers are not allowed anymore).
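A minimal sketch of the idea, with made-up conductivity values: every domain marker whose sigma_ground entry differs from the corresponding sigma_0 background entry acts as an anomaly.

```python
# Illustrative values only; both lists correspond to markers 1..N.
sigma_ground = [1e-2, 1e-2, 1e-1]   # actual model, markers 1, 2, 3
sigma_0      = [1e-2, 1e-2, 1e-2]   # layered 1D background

# Any marker deviating from the background is anomalous
# (here: marker 3, something the old layer-based syntax could not
# express for arbitrary domains).
anomalous = [m + 1 for m, (s, s0)
             in enumerate(zip(sigma_ground, sigma_0)) if s != s0]
```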
Spawning additional processes for interpolation was disabled in favor of an optimized serial interpolation routine. It can now take a list of interpolation meshes and quantities at once, which is both simpler and faster.