# README - libMesh Example
## Description
The libMesh library provides a framework for the numerical simulation of partial differential equations using arbitrary unstructured discretizations on serial and parallel platforms. (http://libmesh.github.io/index.html)
This example is a slightly modified version of the official libMesh example ["fem_system_ex4"](https://github.com/libMesh/libmesh/tree/master/examples/fem_system/fem_system_ex4). In this example, a heat transfer equation is solved with an FEM System.
The example demonstrates:
* Initialization of libMesh
* Definition of an equation system
* Usage of adaptive mesh refinement
* IO: reading and exporting meshes in different file formats
The example is structured as follows:
* `fem_system_ex4.C`: The main program: initialization, IO, refinement
* `heatsystem.C|h`: Heat system with the Laplace heat equation
* `meshes/bridge.e|xda`: FEM-Mesh of a bridge
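The overall flow of the main program can be sketched as follows. This is a minimal, hypothetical outline using the libMesh API; the actual `fem_system_ex4.C` additionally sets up the `HeatSystem`, performs adaptive refinement, and runs the nonlinear solve:

```cpp
#include "libmesh/libmesh.h"
#include "libmesh/mesh.h"
#include "libmesh/equation_systems.h"
#include "libmesh/exodusII_io.h"

using namespace libMesh;

int main(int argc, char** argv) {
  // Initialize libMesh (and MPI/PETSc, if enabled).
  LibMeshInit init(argc, argv);

  // Read the bridge mesh from an Exodus file.
  Mesh mesh(init.comm());
  mesh.read("meshes/bridge.e");

  // Set up the equation systems on the mesh; the real example
  // adds a HeatSystem (an FEMSystem subclass) here and solves it.
  EquationSystems equation_systems(mesh);
  equation_systems.init();

  // Print the mesh and system summaries shown under "Running" below.
  mesh.print_info();
  equation_systems.print_info();

  // Export the result, e.g. in Exodus format.
  ExodusII_IO(mesh).write_equation_systems("out.e", equation_systems);

  return 0;
}
```

Compiling and running this sketch requires a libMesh installation; see the compilation instructions referenced below.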
Screenshot of the result:
![Screenshot of Result](hpc_kernel_samples/unstructured_grids/libmesh/meshes/bridge_screenshot.PNG)
Further reading: [PRACE Summer School 2013 Slides](http://www.training.prace-ri.eu/uploads/tx_pracetmo/libmesh.pdf)
## Release Date
2016-09-08
## Version History
* 2016-09-08 Initial Release on PRACE CodeVault repository
## Contributors
* Thomas Steinreiter - [thomas.steinreiter@risc-software.at](mailto:thomas.steinreiter@risc-software.at)
## Copyright
This code is available under LGPL, Version 2.1 - see also the license file in the CodeVault root directory.
## Languages
This sample is written in C++11.
## Parallelisation
This sample uses PETSc's internal parallelisation.
## Level of the code sample complexity
Advanced
## Compiling
Follow the compilation instructions given in the main directory of the kernel samples directory (`/hpc_kernel_samples/README.md`).
Note: libMesh needs to be built against the correct version of MPI. PETSc is needed for parallelism.
## Running
To run the program, use something similar to
```
mpiexec -n [nprocs] ./8_unstructured_libmesh
```
either on the command line or in your batch script, where `nprocs` specifies the number of processes used. Note: if `nprocs` > 1, libMesh must be built with PETSc enabled.
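For batch execution, a job script might look like the following. This is a hypothetical SLURM sketch; scheduler directives, module names, and partition settings vary by site:

```shell
#!/bin/bash
#SBATCH --job-name=libmesh_ex4   # hypothetical job name
#SBATCH --ntasks=8               # number of MPI processes
#SBATCH --time=00:10:00          # wall-clock limit

# Launch the sample with one MPI rank per allocated task.
mpiexec -n $SLURM_NTASKS ./8_unstructured_libmesh
```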
### Example
If you run
```
mpiexec -n 8 ./8_unstructured_libmesh
```
the output should look similar to
```
Mesh Information:
  elem_dimensions()={2, 3}
  spatial_dimension()=3
  n_nodes()=25227
    n_local_nodes()=3322
  n_elem()=127818
    n_local_elem()=15972
    n_active_elem()=127818
  n_subdomains()=2
  n_partitions()=8
  n_processors()=8
  n_threads()=1
  processor_id()=0

EquationSystems
  n_systems()=1
   System #0, "Heat"
    Type "Implicit"
    Variables="T"
    Finite Element Types="LAGRANGE"
    Approximation Orders="FIRST"
    n_dofs()=25227
    n_local_dofs()=3322
    n_constrained_dofs()=1887
    n_local_constrained_dofs()=122
    n_vectors()=1
    n_matrices()=1
    DofMap Sparsity
      Average On-Processor Bandwidth <= 13.6686
      Average Off-Processor Bandwidth <= 0.533635
      Maximum On-Processor Bandwidth <= 27
      Maximum Off-Processor Bandwidth <= 15
    DofMap Constraints
      Number of DoF Constraints = 1887
      Average DoF Constraint Length= 0

Assembling the System
*** Warning, This code is deprecated, and likely to be removed in future library versions! /usr/local/include/libmesh/libmesh_common.h, line 497, compiled Sep 1 2016 at 15:28:09 ***
Nonlinear Residual: 8495.66
Linear solve starting, tolerance 0.001
Linear solve finished, step 55, residual 3.10027
Trying full Newton step
  Current Residual: 14.2871
  Nonlinear step: |du|/|u| = 1, |du| = 510394
Assembling the System
Nonlinear Residual: 14.2871
Linear solve starting, tolerance 0.001
Linear solve finished, step 58, residual 0.00263574
Trying full Newton step
  Current Residual: 0.012002
  Nonlinear step: |du|/|u| = 0.000564056, |du| = 287.935
Assembling the System
Nonlinear Residual: 0.012002
Linear solve starting, tolerance 1.2002e-05
Linear solve finished, step 94, residual 2.75721e-08
Trying full Newton step
  Current Residual: 1.29858e-07
  Nonlinear solver converged, step 2, residual reduction 1.52852e-11 < 1e-07
  Nonlinear solver relative step size 5.66919e-07 > 1e-07
L2-Error is: 59155
```