ADF
Introduction
ADF (Amsterdam Density Functional) is an accurate, parallelized, powerful computational chemistry program for understanding and predicting chemical structure and reactivity with density functional theory. Heavy elements and transition metals are accurately modeled with ADF's reliable relativistic ZORA approach and all-electron basis sets for the whole periodic table (H-Uuo). A vast range of spectroscopic properties and comprehensive analysis tools yield invaluable insight into chemical structure and reactivity. DFT calculations are easily prepared and analyzed with the ADF GUI.
License
ADF is licensed software. The license file is installed on all nodes of the mendoza_q partition. Currently, only members of Professor Jose Mendoza Cortes' research group have access to this software.
Note. The ADF GUI tools are not installed on the compute nodes of the HPC.
Usage of ADF at the RCC
There are two versions of ADF installed on the HPC cluster: 2014.07 and 2016.101.
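To see which ADF modules are available on the cluster, you can list them with the standard module command:
$ module avail adf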
To use ADF, first load the module for the version you want to use. To use the latest 2016 version (the default):
$ module load adf
To use the older 2014 version instead:
$ module load adf/2014.07
Note. The two versions should NOT be used together; only one ADF module should be loaded at a time. To unload a module, use a command like the following:
$ module unload adf/2014.07
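Alternatively, if the module system on the cluster supports it, you can switch versions in a single step with module swap (here assuming the 2016 module is named adf/2016.101):
$ module swap adf/2014.07 adf/2016.101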
To check that the module has been loaded successfully:
$ module list
(You should find adf in the output of the above command.)
To check that your environment is set up correctly:
$ which dirac
(If dirac cannot be found, then something is wrong.)
To check the license information:
$ dirac check
The last line should say License Ready.
Scratch Space
Some ADF jobs need a large amount of scratch disk space. By default, the scratch directory is set to /gpfs/research/mendozagroup/scratch/adf/$USER/2014 for the 2014 version and /gpfs/research/mendozagroup/scratch/adf/$USER/2016 for the 2016 version, where $USER is your RCC user name.
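You can confirm the default scratch location for the version you loaded by printing the SCM_TMPDIR environment variable, which the ADF module sets:
$ module load adf
$ echo $SCM_TMPDIR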
If you want a separate scratch directory for each SLURM job, you can redefine the scratch directory as follows:
module load adf
export SCM_TMPDIR=${SCM_TMPDIR}/${SLURM_JOBID}
echo $SCM_TMPDIR
if [ ! -d $SCM_TMPDIR ]; then
mkdir -p $SCM_TMPDIR
fi
Note. In the above, the new job-specific scratch directory (named after ${SLURM_JOBID}) must be created explicitly, since it does not exist yet.
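If you use a per-job scratch directory, you may also want to clean it up at the end of your job script so that the scratch file system does not fill up; a minimal sketch, assuming no files left in scratch are needed after the job finishes:
# Remove the job-specific scratch directory once the calculation is done.
rm -rf ${SCM_TMPDIR}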
Example of an ADF Job
The following is a simple example SLURM job submission script (requesting 3 cores):
#!/bin/bash
#SBATCH -J "adf_test"
#SBATCH -n 3
#SBATCH -N 1
#SBATCH -o test-%J.o
#SBATCH -e test-%J.e
#SBATCH -p mendoza_q
#SBATCH --mail-type=ALL
#SBATCH -t 15:00
cd $SLURM_SUBMIT_DIR
module purge
module load adf
export SCM_TMPDIR=${SCM_TMPDIR}/${SLURM_JOBID}
echo $SCM_TMPDIR
if [ ! -d $SCM_TMPDIR ]; then
mkdir -p $SCM_TMPDIR
fi
if [ ! -d $SCM_TMPDIR ]; then
echo "temporary scratch directory could not be created"
exit
fi
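# NSCM tells the ADF start script how many parallel processes to use; keep it consistent with the 3 cores requested above.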
export NSCM=3
echo $NSCM
which adf
adf -n 3 < HCN_4P.inp > HCN_4P.out
The input data file HCN_4P.inp is:
Title HCN Linear Transit, first part
NoPrint SFO, Frag, Functions, Computation
Atoms Internal
1 C 0 0 0 0 0 0
2 N 1 0 0 1.3 0 0
3 H 1 2 0 1.0 th 0
End
Basis
Type DZP
End
Symmetry NOSYM
Integration 6.0 6.0
Geometry
Branch Old
LinearTransit 10
Iterations 30 4
Converge Grad=3e-2, Rad=3e-2, Angle=2
END
Geovar
th 180 0
End
End Input
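To run this example, save the SLURM script above to a file (for example adf_test.sub; the name is arbitrary) and submit it with:
$ sbatch adf_test.sub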
Upon a successful run, you will see the following in the SLURM error output file test-280037.e:
NORMAL TERMINATION
NORMAL TERMINATION
NORMAL TERMINATION
NORMAL TERMINATION
NORMAL TERMINATION
NORMAL TERMINATION
The SLURM standard output file test-280037.o will contain something like:
/gpfs/research/mendozagroup/scratch/adf/bchen3/2521776
3
/gpfs/research/mendozagroup/ADF/2016/openmpi/adf2016.101/bin/adf
...
To check whether adf has run in parallel:
$ cat HCN_4P.out | grep Nodes
You will find at least one line like the following:
ADF 2016 RunTime: Apr11-2016 11:19:24 Nodes: 1 Procs: 3
Note. Only the CPU-intensive parts of the job run in parallel, so you will also see some lines like the following:
ADF 2016 RunTime: Apr11-2016 11:19:22 Nodes: 1 Procs: 1
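As an additional check, you can count how many NORMAL TERMINATION messages appear in the SLURM error file (the job ID in the file name will differ for your run):
$ grep -c "NORMAL TERMINATION" test-280037.e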
Example of a Band Job
First, copy the example to your working directory:
$ cp -r /gpfs/research/mendozagroup/ADF/2016/openmpi/adf2016.101/examples/band/BasisDefaults .
$ cd BasisDefaults
$ ls
BasisDefaults_orig.out BasisDefaults.run
Next, create a SLURM script for the band job, band.sub:
#!/bin/bash
#SBATCH -J "bandjob"
#SBATCH -n 1
#SBATCH -N 1
#SBATCH -o test-%J.oe
#SBATCH -p mendoza_q
#SBATCH --mail-type=ALL
#SBATCH -t 15:00
cd $SLURM_SUBMIT_DIR
module purge
module load adf
export SCM_TMPDIR=${SCM_TMPDIR}/${SLURM_JOBID}
echo $SCM_TMPDIR
if [ ! -d $SCM_TMPDIR ]; then
mkdir -p $SCM_TMPDIR
fi
if [ ! -d $SCM_TMPDIR ]; then
echo "temporary scratch directory could not be created"
exit
fi
dirac check
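# BasisDefaults.run is a self-contained shell script from the ADF examples directory; it runs the band calculation directly.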
./BasisDefaults.run
Submit the job using:
$ sbatch band.sub
The SLURM output file will look like:
/gpfs/research/mendozagroup/scratch/adf/bchen3/2016/2522139
Checked:
/gpfs/research/mendozagroup/ADF/2016/openmpi/adf2016.101/license.txt
License termination date (mm/dd/yyyy): 4/ 1/2017
....
<Apr11-2016> <11:03:41> NORMAL TERMINATION
<Apr11-2016> <11:03:41> END
NORMAL TERMINATION
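As with the ADF job, you can confirm a clean finish by searching the SLURM output file for the termination message (replace <jobid> with your actual job ID):
$ grep "NORMAL TERMINATION" test-<jobid>.oe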