ADF

Amsterdam Density Functional

Introduction

ADF (Amsterdam Density Functional) is an accurate, parallelized, and powerful computational chemistry program for understanding and predicting chemical structure and reactivity with density functional theory (DFT). Heavy elements and transition metals are accurately modeled with ADF's reliable relativistic ZORA approach and all-electron basis sets for the whole periodic table (H-Uuo). A vast range of spectroscopic properties and comprehensive analysis tools yield invaluable insight into chemical structure and reactivity. DFT calculations are easily prepared and analyzed with the ADF GUI.

License

ADF is licensed software. The license file is installed on all nodes of the mendoza_q partition. Currently, only members of Professor Jose Mendoza Cortes' research group have access to this software.

Note. The ADF GUI is not installed on the compute nodes of the HPC.

Usage of ADF at the RCC

There are two versions of ADF installed on the HPC cluster: 2014.07 and 2016.101.
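
To list the ADF modules that are available on the cluster, you can query the module system, for example,

 $ module avail adf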

To use ADF, first load the module for the version you want. To use the latest 2016 version (the default),

$ module load adf

To use the older 2014 version instead,

$ module load adf/2014.07

Note. The two versions should NOT be used together; consequently, only one module file should be loaded at a time. To unload a module, use a syntax like the following,

 $ module unload adf/2014.07

To check if this module is successfully loaded,

$ module list

(You should find adf in the output of the above command.)
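
The exact output depends on the module system, but it should look roughly like the following,

 Currently Loaded Modulefiles:
   1) adf/2016.101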

To check if you have the correct environment set up,

$ which dirac

(If you cannot find dirac, then something is wrong.)
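
With the 2016 module loaded, the printed path should point into the corresponding ADF installation tree, for example (the exact path may differ),

 /gpfs/research/mendozagroup/ADF/2016/openmpi/adf2016.101/bin/dirac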

To check license information,

$ dirac check

The last line should say License Ready.
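
The output should resemble the following (the license path and dates will differ),

 Checked:
 /gpfs/research/mendozagroup/ADF/2016/openmpi/adf2016.101/license.txt
 License termination date (mm/dd/yyyy): 4/ 1/2017
 ...
 License Ready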

Scratch Space

Some ADF jobs need large scratch disk space. By default, the scratch directory is set to

     /gpfs/research/mendozagroup/scratch/adf/$USER/2014

for the 2014 version, and

    /gpfs/research/mendozagroup/scratch/adf/$USER/2016

for the 2016 version, respectively. In the above, $USER is your RCC username.
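
After loading one of the ADF modules, you can confirm which scratch directory is in effect by printing the SCM_TMPDIR environment variable that ADF uses for scratch files,

 $ echo $SCM_TMPDIR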

If you want a specific scratch directory for each Slurm job, you can redefine the scratch directory like the following,

  module load adf
  # append the Slurm job ID to get a unique scratch directory for this job
  export SCM_TMPDIR=${SCM_TMPDIR}/${SLURM_JOBID}
  echo $SCM_TMPDIR
  # create the per-job scratch directory if it does not exist
  if [ ! -d "$SCM_TMPDIR" ]; then
      mkdir -p "$SCM_TMPDIR"
  fi

Note. In the above, we need to create the new per-job scratch directory (named after ${SLURM_JOBID}) since it does not exist yet.
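
If you use per-job scratch directories like this, you may also want to remove them at the end of your job script once ADF has finished, so that old job directories do not accumulate. A minimal sketch, to be placed after the ADF run has completed,

  # optional: remove the per-job scratch directory created above
  rm -rf "$SCM_TMPDIR"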

Example of ADF job

The following is a simple example SLURM job submission script (asking for 3 cores):

  #!/bin/bash
  #SBATCH -J "adf_test"
  #SBATCH -n 3
  #SBATCH -N 1
  #SBATCH -o test-%J.o
  #SBATCH -e test-%J.e
  #SBATCH -p mendoza_q
  #SBATCH --mail-type=ALL
  #SBATCH -t 15:00

  cd $SLURM_SUBMIT_DIR
  module purge
  module load adf

  # set up a per-job scratch directory
  export SCM_TMPDIR=${SCM_TMPDIR}/${SLURM_JOBID}
  echo $SCM_TMPDIR
  if [ ! -d "$SCM_TMPDIR" ]; then
      mkdir -p "$SCM_TMPDIR"
  fi
  if [ ! -d "$SCM_TMPDIR" ]; then
      echo "temporary scratch directory could not be created"
      exit 1
  fi

  # run ADF on 3 cores
  export NSCM=3
  echo $NSCM
  which adf
  adf -n 3 < HCN_4P.inp > HCN_4P.out

The input data file HCN_4P.inp is

  Title HCN Linear Transit, first part
  NoPrint SFO, Frag, Functions, Computation
  Atoms Internal
    1 C 0 0 0 0 0 0
    2 N 1 0 0 1.3 0 0
    3 H 1 2 0 1.0 th 0
  End
  Basis
    Type DZP
  End
  Symmetry NOSYM
  Integration 6.0 6.0
  Geometry
    Branch Old
    LinearTransit 10
    Iterations 30 4
    Converge Grad=3e-2, Rad=3e-2, Angle=2
  END
  Geovar
    th 180 0
  End
  End Input
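
Assuming the job script above is saved as, say, adf_test.sub in the same directory as HCN_4P.inp, the job can be submitted with

 $ sbatch adf_test.sub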

Upon a successful run, the Slurm error output file (test-280037.e in this example) will contain

  NORMAL TERMINATION
  NORMAL TERMINATION
  NORMAL TERMINATION
  NORMAL TERMINATION
  NORMAL TERMINATION
  NORMAL TERMINATION

The Slurm standard output file (test-280037.o in this example) will look like

  /gpfs/research/mendozagroup/scratch/adf/bchen3/2521776
  3
  /gpfs/research/mendozagroup/ADF/2016/openmpi/adf2016.101/bin/adf
  ...

To check whether adf has run in parallel,

 $ cat HCN_4P.out | grep Nodes

You will find at least one line like the following:

 ADF 2016  RunTime: Apr11-2016 11:19:24  Nodes: 1  Procs: 3

Note. Only the CPU-intensive part of the job will run in parallel, so you will also see some lines like the following

ADF 2016  RunTime: Apr11-2016 11:19:22  Nodes: 1  Procs: 1
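
If you want a quick count of how many steps ran on 3 processes versus a single process, something like the following should work,

 $ grep -c "Procs: 3" HCN_4P.out
 $ grep -c "Procs: 1" HCN_4P.out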

Example of a BAND Job

First, copy the example to your directory

  $ cp -r /gpfs/research/mendozagroup/ADF/2016/openmpi/adf2016.101/examples/band/BasisDefaults .
  $ cd BasisDefaults
  $ ls
  BasisDefaults_orig.out BasisDefaults.run

Next, create a BAND job Slurm script, band.sub,

  #!/bin/bash
  #SBATCH -J "bandjob"
  #SBATCH -n 1
  #SBATCH -N 1
  #SBATCH -o test-%J.oe
  #SBATCH -p mendoza_q
  #SBATCH --mail-type=ALL
  #SBATCH -t 15:00

  cd $SLURM_SUBMIT_DIR
  module purge
  module load adf

  # set up a per-job scratch directory
  export SCM_TMPDIR=${SCM_TMPDIR}/${SLURM_JOBID}
  echo $SCM_TMPDIR
  if [ ! -d "$SCM_TMPDIR" ]; then
      mkdir -p "$SCM_TMPDIR"
  fi
  if [ ! -d "$SCM_TMPDIR" ]; then
      echo "temporary scratch directory could not be created"
      exit 1
  fi

  # verify the license, then run the example
  dirac check
  ./BasisDefaults.run

Submit the job using

 $ sbatch band.sub

The Slurm output file will look like

  /gpfs/research/mendozagroup/scratch/adf/bchen3/2016/2522139
  Checked:
  /gpfs/research/mendozagroup/ADF/2016/openmpi/adf2016.101/license.txt
  License termination date (mm/dd/yyyy): 4/ 1/2017
  ....
  <Apr11-2016> <11:03:41> NORMAL TERMINATION
  <Apr11-2016> <11:03:41> END
  NORMAL TERMINATION
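
As with the ADF example, you can quickly confirm that the BAND job finished cleanly by searching the Slurm output file for the termination message, for example,

 $ grep "NORMAL TERMINATION" test-*.oe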