Installation of FLARE

Installation using pip

Pip can automatically fetch the package from PyPI and install it.

$ pip install mir-flare

For non-admin users, install into the user site-packages:

$ pip install --upgrade --user mir-flare
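
To verify the installation, check that pip registered the package and that the module imports (assuming the mir-flare distribution provides the flare module, as the repository below suggests):

$ pip show mir-flare
$ python -c "import flare"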

Manual Installation with Git

First, clone the source code from https://github.com/mir-group/flare

$ git clone https://github.com/mir-group/flare.git

Then add the repository path to your PYTHONPATH

$ cd flare; export PYTHONPATH=$(pwd):$PYTHONPATH
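
As a sanity check, confirm that Python now resolves the flare package from the cloned repository:

$ python -c "import flare; print(flare.__file__)"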

Acceleration with multiprocessing and MKL

If you have access to a high-performance computer, we recommend setting up multiprocessing and the MKL library to accelerate training and prediction. The speedup can be significant when the GP training set is large. This can be done in the following steps.

First, make sure NumPy is linked against MKL, or against OpenBLAS and LAPACK.

$ python -c "import numpy as np; print(np.__config__.show())"

If no such libraries are linked, reinstall NumPy. Detailed steps can be found in the Conda manual.
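
For example, NumPy from the conda defaults channel ships linked against MKL (an illustration; adjust to your own environment):

$ conda install numpy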

Second, in the initialization of the GP class and the OTF class, turn on the GP parallelization and turn off the OTF par option.

# GP: parallelize over 2 processes; per-atom parallelism off to avoid nesting with MKL threads
gp_model = GaussianProcess(..., parallel=True, per_atom_par=False, n_cpus=2)
# OTF: keep its own parallelization (par) turned off
otf_instance = OTF(..., par=False, n_cpus=2)

Third, set the number of threads for MKL before running your Python script.

export OMP_NUM_THREADS=2
python training.py
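
Alternatively, the thread count can be set from inside the training script. A minimal sketch; the assignment must happen before NumPy is first imported, since MKL and OpenBLAS read the variable when they load:

import os

# Set the MKL/OpenMP thread count; must run before NumPy is imported.
# Keep the value consistent with n_cpus above.
os.environ["OMP_NUM_THREADS"] = "2"

import numpy as np  # the linked BLAS now uses 2 threads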

Note

The n_cpus and OMP_NUM_THREADS values should be less than or equal to the number of CPUs available on the machine. If they exceed the actual number of CPUs, the machine can become overloaded.
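
To check how many CPUs are available before choosing these values:

$ python -c "import os; print(os.cpu_count())"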

Note

If gp_model.per_atom_par=True and OMP_NUM_THREADS>1, the run effectively uses OMP_NUM_THREADS * otf.n_cpus threads, because the MKL calls are nested inside the multiprocessing code. For example, OMP_NUM_THREADS=2 with otf.n_cpus=4 can spawn up to 8 threads.

The current version of FLARE supports parallel calculations only within a single compute node. An MPI interface for multiple nodes is still under development.

If you encounter unusually slow FLARE training or prediction, please file a GitHub issue.

Environment variables (optional)

FLARE uses a few environment variables in its tests of the DFT and MD interfaces. These variables are not needed for active-learning runs.

# the path and filename of the Quantum ESPRESSO executable
export PWSCF_COMMAND=$(which pw.x)
# the path and filename of the CP2K executable
export CP2K_COMMAND=$(which cp2k.popt)
# the path and filename of the LAMMPS executable
export lmp=$(which lmp_mpi)
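
To confirm that the variables point at actual executables, a quick shell check (nothing FLARE-specific):

$ echo $PWSCF_COMMAND $CP2K_COMMAND $lmp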