QNANO
Installation

Requirements

PETSc configuration

The configuration of PETSc offers many options, and it takes some effort to understand which of them matter. To help the user, we suggest example configurations that have worked for us.

First of all, PETSc must be configured with the complex scalar type (needed because of spin-orbit coupling). It is also important that PETSc is configured with a working MPI implementation. For best parallel performance, MUMPS and ScaLAPACK are recommended.

An example configure line for PETSc is the following (download and unpack PETSc, then run the command from within the unpacked directory):

./configure   --with-scalar-type=complex --download-mumps --download-scalapack --with-blas-lapack-dir=/opt/intel/mkl/  --download-mpich; make 

This will download and install MUMPS and ScaLAPACK as well as MPICH. The BLAS and LAPACK routines of MKL will be used. Alternatively, if MKL is not available, use the configure option "--download-fblaslapack". The use of gcc can be enforced with "--with-cc=gcc --with-cxx=g++ --with-fc=gfortran"; see the combined example below.
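
Putting these options together, a configuration that enforces gcc and does not rely on MKL might look like the following (a sketch only, assuming gcc, g++, and gfortran are installed):

./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --with-scalar-type=complex --download-fblaslapack --download-mumps --download-scalapack --download-mpich; make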

An example using a pre-installed MPI on a cluster with the Intel compiler:

module load intel_parallel_studio_xe_2019; ./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpifort --with-scalar-type=complex --download-mumps=downloads/v5.1.2-p1.tar.gz --download-scalapack --with-blas-lapack-dir=$MKLROOT/lib/intel64/ ; make 

On Compute Canada's Graham cluster, the following command line was successful:

module load intel/2016.4 openmpi/2.1.1; ./configure --with-cc=/cvmfs/soft.computecanada.ca/easybuild/software/2017/avx2/Compiler/intel2016.4/openmpi/2.1.1/bin/mpicc --with-cxx=/cvmfs/soft.computecanada.ca/easybuild/software/2017/avx2/Compiler/intel2016.4/openmpi/2.1.1/bin/mpicxx --with-fc=/cvmfs/soft.computecanada.ca/easybuild/software/2017/avx2/Compiler/intel2016.4/openmpi/2.1.1/bin/mpifort --with-scalar-type=complex --with-blas-lapack-dir=$MKLROOT/lib/intel64/ --download-mumps --download-scalapack --download-ptscotch ; make 

Configuring and compiling QNANO

We have tried to build a tool that follows the typical setup of Unix/Linux packages. In particular, compilation is done using Makefiles.

The configuration is done by setting a few environment variables before running make (for example, in bash: "export VARIABLE=VALUE", preferably set in the .bashrc file in the user's home directory).

If these environment variables are not set, only those parts of QNANO that do not need the respective libraries will be compiled.
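
For illustration, such settings could look as follows in .bashrc. The variable names PETSC_DIR, PETSC_ARCH, and SLEPC_DIR shown here are only the conventional PETSc/SLEPc names and the paths are placeholders; consult the QNANO Makefile for the variable names it actually reads:

export PETSC_DIR=$HOME/petsc
export PETSC_ARCH=arch-linux-c-opt
export SLEPC_DIR=$HOME/slepc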

After having set the environment variables, run

make

If you want to specify the C++ compiler, e.g., to use the Intel compiler, run:

CXX=icpc make

Execution

After successful compilation, the different tools of the QNANO framework can be found in the "bin" directory. For all of our examples, we assume that the user has set the environment variable "QNANO_DIR" to the base directory of QNANO (see the example export below), so that a binary can be executed as, e.g.,

$QNANO_DIR/bin/generate_structure -materialfile $QNANO_DIR/resources/wzInP_v9.dat

(This generates a 1x1x1 unit cell of wurtzite InP; cf. the tutorial.)
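
If QNANO_DIR is not yet set, it can be exported beforehand; the path below is only a placeholder for wherever QNANO was unpacked:

export QNANO_DIR=$HOME/qnano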

In general, parameters are specified on the command line

$QNANO_DIR/bin/TOOL -KEY1 VALUE1 VALUE2 -KEY2 VALUE3 ...

where the allowed parameters (KEY1, KEY2, ...) are specific to the respective tool.

If many parameters are specified, it can be more convenient to use a driver file instead by calling

$QNANO_DIR/bin/TOOL -driver driverfile

where "driverfile" has the form

KEY1 VALUE1 VALUE2 
KEY2 VALUE3 ...

Note, however, that driver files do not resolve environment variables. Some tools need resources from $QNANO_DIR/resources; in that case, this specific parameter (e.g., "-resources_dir $QNANO_DIR/resources" for "$QNANO_DIR/bin/tightbinding_slepc") can be specified on the command line while the remaining parameters are conveniently stored in the driver file. Similarly, PETSc and SLEPc parameters have to be provided directly on the command line.
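
As a sketch of this mixed usage (the keys and the driver file name are hypothetical placeholders; only "-driver" and "-resources_dir" are taken from the text above), a call could look like

$QNANO_DIR/bin/tightbinding_slepc -driver my_driver.dat -resources_dir $QNANO_DIR/resources

with a driver file "my_driver.dat" of the form

KEY1 VALUE1 VALUE2
KEY2 VALUE3

Since the driver file does not resolve environment variables, the path involving $QNANO_DIR is passed on the command line here.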

Examples

We suggest that the user look at the examples contained in the "examples" directory. The respective subdirectories contain bash scripts named "run_example.sh". Each script performs the operations demonstrated in the respective example and writes its results to the directory from which it is called. Thus, we suggest creating a temporary directory outside of the QNANO package and running from there, e.g.,

$QNANO_DIR/examples/graphene/01_monolayer/run_example.sh
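
A complete sequence could look like this (the name of the scratch directory is an arbitrary choice):

mkdir ~/qnano_run
cd ~/qnano_run
$QNANO_DIR/examples/graphene/01_monolayer/run_example.sh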

Furthermore, we suggest going through the first couple of examples in the graphene directory, because graphene is the computationally least expensive material supported in QNANO.

The CdSe/CdS/ZnS nanoplatelets are an example of semiconductors described with the full spds* model. They are small enough to be run on a laptop.

For the InAsP quantum dots, a computational cluster is required as we have to deal with hundreds of thousands of atoms.