
Simplified and Biased Introduction
to Density Functional Approaches in Chemistry
THIS WAS WRITTEN IN 1996 - IT IS OBSOLETE!!!
There are many typos in this paper. Igor Vilfan from the J. Stefan Institute (igor.vilfan@ijs.si) found a score of them, but I am sure there are more. Thank you, Igor!!!
Jan K. Labanowski (jkl@ccl.net)
Ohio Supercomputer Center, 1224 Kinnear Rd, Columbus, OH 43221-1153
(Unreviewed, uncorrected, and unfinished draft currently in preparation)

There are many approaches in computational chemistry which are popular in molecular modeling in the biological sciences.

There is a justified interest in Density Functional approaches among chemists. While they have been a domain of physicists for a long time, these methods have now found their way into mainstream chemistry. The traditional ab initio approaches, which are the workhorse of Quantum Chemistry, offer a prescription for calculating the truth. In principle, one can calculate chemical properties with any desired accuracy. The methods are known and proven to work. The only problem is that for systems larger than hydrogen or light H$_{n}$X molecules, the calculations involved in obtaining accurate results are frequently impractical. We know how to compute it, but we do not have the computational power to do it (and probably will not for many years to come, even with the spectacular progress in computer technology). For this reason, contemporary research in traditional ab initio methods is concerned mainly with better approximations to the Full CI method, or infinite order perturbational expansions, which would give reliable, good quality results with reasonable computational effort. Still, the most promising Coupled Cluster (CC) and Complete Active Space SCF (CASSCF) calculations scale worse than the 5th power of molecular size, and are impractical for molecules containing tens of atoms.

Density Functional Theory does not provide a prescription for calculating the truth. It only provides an existence proof that accurate results are possible, but no prescription for systematic improvement. DFT would be accurate if we knew how to derive the necessary relations between density and energy. Unfortunately, the energy functionals relating electronic density to energy are unknown, and there is no general way to improve them besides trying new ones and judging their quality by the results. As Prof. Perdew once said: "We are like a blind person who wants to find out what an elephant looks like by touching its legs" (sorry, the quotation is from memory, and may not be accurate). But Density Functional Theory provides a hope for a method which scales with size as $N^3$ in the worst case, and possibly linearly for larger molecules (Zhou, 1995; Yang, 1991). So it is used in the spirit of the late Prof. Slater's saying (about his X$\alpha$ method): Do you want to calculate it, or do you want it to be accurate? The DFT results are in many cases surprisingly good if one takes into account the crude approximations on which some of them are based. For example, Local Spin Density calculations yield results for many molecular properties which are of quality comparable to higher order ab initio correlated methods, yet LSD assumes that electrons in molecules behave like electrons in a uniform electron gas. Moreover, DFT methods frequently provide reliable answers for cases which are especially difficult to address with conventional ab initio methods, like, e.g., transition metals. On the other hand, they frequently fail miserably, e.g., for charge-transfer complexes.

The fact that more and more ab initio packages provide options for DFT calculations is a sign of changing times, and an indication that even the most vigorous opponents of this method see that it is just another way of doing things. On the other hand, DFT is in principle only applicable to the ground state, and there is little hope that it will be extended in a practical, straightforward way to excited states any time soon.

Wave Functions
Since the inception of quantum mechanics by Heisenberg, Born, and Jordan in 1925, and Schrödinger in 1926, there have been basically two competing approaches to finding the energy of a system of electrons. One was rooted in statistical mechanics, and its fundamental variable was the total electron density $\rho({\bf r})$, i.e., the number of electrons per unit volume at a given point in space (e.g., in cartesian coordinates: ${\bf r} = (x, y, z)$). In this approach, electrons were treated as particles forming a special gas, called the electron gas. The special case, the uniform electron gas, corresponds to $\rho({\bf r})=const$.

Another approach was to derive the many particle wave function $\Psi({\bf r}_1, {\bf r}_2, \cdots {\bf r}_N, t)$ (where the ${\bf r}_1$ denotes the coordinates of the 1st electron, ${\bf r}_2$ the 2nd electron, and so on, and $t$ is time) and solve the stationary (time-independent) Schrödinger equation for the system:

\begin{displaymath}
\hat{H}\Psi_k({\bf r}_1, {\bf r}_2, \cdots {\bf r}_N) =
E_k \Psi_k({\bf r}_1, {\bf r}_2, \cdots {\bf r}_N)
\end{displaymath} (1)

(where $\hat{H}$ is the hamiltonian, i.e., the operator of the total energy for the system), and calculate the set of possible wave functions (called eigenfunctions) $\Psi_k$ and corresponding energies (eigenvalues) $E_k$. The eigenfunctions have to be physically acceptable, and for finite systems:
1.
they should be continuous functions,
2.
they should be at least doubly differentiable,
3.
their square should be integrable,
4.
they should vanish at infinity (for finite systems).
When the Schrödinger equation is solved exactly (e.g., for the hydrogen atom), the resulting eigenfunctions $\Psi_k$ form a complete set of functions, i.e., there is an infinite number of them, they are orthogonal to each other (or can be easily made orthogonal through a linear transformation), and any function which is of ``physical quality'' can be expressed through a combination of these eigenfunctions. Orthogonal means that:
\begin{displaymath}
\int \Psi_k^\ast \Psi_l d^N{\bf r} = 0 \quad {\rm if}\ k\ne l
\end{displaymath} (2)

The eigenfunction $\Psi_0$ corresponding to the lowest energy $E_0$, describes the ground state of the system, and the higher energy values correspond to excited states. Physicists like to call $\Psi$'s the states (since they contain all possible information about the state), while chemists usually call them wave functions.

Once the function $\Psi$ is known (or its approximation, i.e., when the Schrödinger equation is solved only approximately), the corresponding energy of the system can be calculated as an expectation value of the hamiltonian $\hat{H}$:

\begin{displaymath}
E = \frac{\int \int \cdots \int \Psi^\ast({\bf r}_1, {\bf r}_2, \cdots {\bf r}_N) \hat{H} \Psi({\bf r}_1, {\bf r}_2, \cdots {\bf r}_N) d {\bf r}_1 d {\bf r}_2 \cdots d {\bf r}_N}{\int \int \cdots \int \Psi^\ast({\bf r}_1, {\bf r}_2, \cdots {\bf r}_N) \Psi({\bf r}_1, {\bf r}_2, \cdots {\bf r}_N) d {\bf r}_1 d {\bf r}_2 \cdots d {\bf r}_N}
\end{displaymath} (3)

where $\Psi^\ast$ denotes the complex conjugate of $\Psi$, since in general these functions may produce complex numbers. This is needed since the operator in this case represents a physical observable, and the result has to be a real number. This equation is frequently written using Dirac bra and ket notation to save space (and to confuse the innocent):
\begin{displaymath}
E=\frac{\left<\Psi\vert H\vert\Psi\right>}{\left<\Psi\vert\Psi\right>}
\end{displaymath} (4)

And if $\left<\Psi\vert\Psi\right> = 1$, i.e., the wave function is normalized, the equation looks even simpler:
\begin{displaymath}
E=\left<\Psi\vert H\vert\Psi\right>
\end{displaymath} (5)

Figure 1: Volume element for a particle
Once we know the wave function $\Psi$ for a given state of our system, we can calculate the expectation value for any quantity for which we can write down the operator. The wave function itself does not correspond to any physical quantity, but its square represents the probability density. In other words:
\begin{displaymath}
\vert\Psi({\bf r}_1, {\bf r}_2, \cdots {\bf r}_N)\vert^2 d{\bf r}_1 d{\bf r}_2 \cdots d{\bf r}_N
\end{displaymath} (6)

or
\begin{displaymath}
\Psi^\ast({\bf r}_1, {\bf r}_2, \cdots {\bf r}_N)\Psi({\bf r}_1, {\bf r}_2, \cdots {\bf r}_N)
d{\bf r}_1 d{\bf r}_2 \cdots d{\bf r}_N
\end{displaymath} (7)

or
\begin{displaymath}
\left.\vert\Psi\right>\left<\Psi\vert\right. d V
\end{displaymath} (8)

represents the probability that electron 1 is in the volume element $d{\bf r}_1$ around point ${\bf r}_1$, electron 2 is in the volume element of the size $d{\bf r}_2$ around point ${\bf r}_2$, and so on. If $\Psi$ describes a system containing only a single electron, then $\vert\Psi({\bf r})\vert^2 d{\bf r}$ simply represents the probability of finding the electron in the volume element of size $d{\bf r}$ centered around point ${\bf r}$. If you use cartesian coordinates, then $d {\bf r} = dx dy dz$ and the volume element would be a brick (rectangular parallelepiped) with dimensions $dx\times dy \times dz$ whose vertex closest to the origin of the coordinate system is located at $(x, y, z)$. Now, if we integrate $\vert\Psi\vert^2$ over all space for all the variables (i.e., sum up the probabilities in all the elements $d{\bf r}_i$), we should get the probability of finding our electrons anywhere in the Universe, i.e., 1. This is why it is a good idea to normalize the function $\Psi$. If it is not normalized, it can easily be done by multiplying it by a normalization constant:
\begin{displaymath}
\Psi_{normalized} = \overbrace{\frac{1}{\sqrt{\left<\Psi_{unnormalized}\vert\Psi_{unnormalized}\right>}}}^{\rm Normalization\ constant}
\Psi_{unnormalized}
\end{displaymath} (9)
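For concreteness, here is a minimal numerical sketch of equation (9) for a one-electron wave function in one dimension, sampled on a grid (the grid, the Gaussian trial function, and the Python/numpy setting are illustrative assumptions, not part of the original presentation):

\begin{verbatim}
import numpy as np

# Minimal sketch: normalizing a wave function sampled on a grid.
# The Gaussian below is an arbitrary unnormalized trial function.
x = np.linspace(-10.0, 10.0, 2001)
psi = np.exp(-x**2)

norm2 = np.trapz(np.abs(psi)**2, x)        # <psi|psi> by the trapezoid rule
psi_normalized = psi / np.sqrt(norm2)      # equation (9)

print(np.trapz(np.abs(psi_normalized)**2, x))   # ~1.0
\end{verbatim}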

Since the square of $\Psi$ represents the probability density of finding electrons, one may suspect that it should be easy to calculate the total electron density from it. And actually it is:
\begin{displaymath}
\rho({\bf r}) = N\left<\Psi\vert\delta({\bf r} - {\bf r}_i)\vert\Psi\right>
\end{displaymath} (10)

where $N$ is the total number of electrons, and $\delta({\bf r} - {\bf r}_i)$ is the famous Dirac delta function. In cartesians, it simply amounts to integrating over all electron position vectors ${\bf r}_i$ but one. Which one is not important, since electrons are indistinguishable, and a proper wave function has to reflect this:
\begin{displaymath}
\rho({\bf r}_1) = N \underbrace{\int \int \cdots \int}_{N-1} \vert\Psi({\bf r}_1, {\bf r}_2, \cdots {\bf r}_N)\vert^2
\underbrace{d{\bf r}_2 d{\bf r}_3 \cdots d{\bf r}_N}_{N-1}
\end{displaymath} (11)

It is interesting to note that for a wave function which describes a system containing only a single electron (but only then!!!):
\begin{displaymath}
\rho({\bf r}) = \Psi^\ast({\bf r})\Psi({\bf r}) = \vert\Psi({\bf r})\vert^2 = \left.\vert\Psi\right>\left<\Psi\right.\vert
\end{displaymath} (12)

i.e., logically, the electron density and the probability density of finding the single electron are the same thing.

Function, Operator, Functional
Before we move any further, let us introduce a few definitions.

FUNCTION
A function is a prescription which maps one or more numbers to another number. For example: take a number and multiply it by itself: $y = f(x) = x^2$, or take two numbers and add them together: $z = g(x, y) = x + y$. Sometimes a function does not have a value for some numbers, and only certain numbers can be used as arguments. E.g., the square root is only defined for nonnegative numbers (if you want to have a real number as a result).
OPERATOR
An operator (usually written with a hat, e.g., $ \hat{F} $, or in calligraphic style, $\cal F$) is a prescription which maps one function to another function. For example: take a function and square its value: $\hat{F} = {}^2$, e.g., $\hat{F}\sin(x) = \sin^2(x)$. Or calculate the second derivative of a function with respect to $x$: $\hat{F} = \frac{\partial^2}{\partial x^2}$, and $\hat{F} f(x) = \frac{\partial^2}{\partial x^2} f(x) = \frac{\partial^2 f(x)}{\partial x^2}$. Nabla is the popular differential operator in 3 dimensions. In cartesians it is:

\begin{displaymath}
\nabla = \vec{\bf i}\frac{\partial}{\partial x} + \vec{\bf j}\frac{\partial}{\partial y} + \vec{\bf k}\frac{\partial}{\partial z}
\end{displaymath}

It is used to calculate forces (which are vectors), i.e., gradients of the potential energy: ${\rm Force} = -{\rm grad}\ V = -\nabla V$. Its square is called the laplacian, and represents the sum of second derivatives:

\begin{displaymath}
\Delta = \nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}
\end{displaymath}

The $\nabla^2$ or $\Delta$ appears in the kinetic energy operator. The prescriptions for forming quantum mechanical operators are called Jordan rules. For the cartesian coordinate representation (coordinate space), they are as follows:
1.
write a classical expression for the physical quantity and rearrange it in such a way, that everything depends either on coordinates, or momenta (e.g., if something depends on the component of velocity, $v_{x_i}$, change it to $p_{x_i}/m_i$).
2.
replace coordinates with the operators of multiplying by the coordinate:

\begin{displaymath}x_i \rightarrow \hat{x_i} = x_i \cdot\end{displaymath}


\begin{displaymath}y_i \rightarrow \hat{y_i} = y_i \cdot\end{displaymath}


\begin{displaymath}z_i \rightarrow \hat{z_i} = z_i \cdot\end{displaymath}

3.
replace components of momenta with their operators:

\begin{displaymath}p_{x_i} \rightarrow \hat{p}_{x_i} = -i\hbar\frac{\partial}{\partial x}\end{displaymath}


\begin{displaymath}p_{y_i} \rightarrow \hat{p}_{y_i} = -i\hbar\frac{\partial}{\partial y}\end{displaymath}


\begin{displaymath}p_{z_i} \rightarrow \hat{p}_{z_i} = -i\hbar\frac{\partial}{\partial z}\end{displaymath}

The operators can also be obtained in momentum space - physicists like them this way very much. Chemists are more interested in where the electron is than in how fast it moves, so they use the coordinate space representation as a rule.
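To make the idea of an operator acting on a function concrete, here is a small sketch (all numerical choices are assumptions of this illustration) in which the coordinate-space kinetic energy operator $-\frac{1}{2}\frac{d^2}{dx^2}$ is represented as a finite-difference matrix acting on a function sampled on a grid:

\begin{verbatim}
import numpy as np

# Sketch: the kinetic energy operator -1/2 d^2/dx^2 (atomic units)
# as a finite-difference matrix acting on sampled functions.
n, L = 400, 20.0
x = np.linspace(-L/2, L/2, n)
h = x[1] - x[0]

# three-point stencil for the second derivative
D2 = (np.diag(np.ones(n-1), -1) - 2.0*np.eye(n) + np.diag(np.ones(n-1), 1)) / h**2
T = -0.5 * D2

f = np.sin(x)                              # test function; -1/2 f'' = +1/2 sin(x)
err = (T @ f - 0.5*np.sin(x))[1:-1]        # ignore the two boundary points
print(np.max(np.abs(err)))                 # small (~1e-4 for this grid)
\end{verbatim}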

The Schrödinger equation (1) is an example of an eigenproblem, i.e., an equation in which an operator acts on a function and as a result returns the same function multiplied by a constant. For some operators there are no nontrivial solutions (trivial means: $\hat{F}0 = C\cdot 0$), but for operators which correspond to some physical quantity of some physical system, these equations have solutions in principle, i.e., one can find a set of functions and corresponding constants which satisfy them. While these eigenproblems have solutions in principle, the equations may not be easy to solve. They frequently have the form of partial second order differential equations, or integro-differential equations, and in general they cannot be solved analytically; some special cases do have an analytic solution (e.g., one particle, or two particles which interact in a special way).

FUNCTIONAL
A functional takes a function and provides a number. It is usually written with the function in square brackets as $F[f] = a$. For example: take a function and integrate it from $-\infty$ to $+\infty$: $F[f] = \int\limits_{-\infty}^{+\infty} f(x) dx$. Note that the formula for the expectation value (3) is the total energy functional $E[\Psi]$, since it takes some function $\Psi$ and returns the value of energy for this $\Psi$.
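The distinction between the three notions can be made concrete in a short sketch (the Python setting and all names here are purely illustrative):

\begin{verbatim}
import numpy as np

# function:   number   -> number
# operator:   function -> function
# functional: function -> number

def f(x):                      # a function: f(x) = x^2
    return x**2

def derivative(g):             # an operator: returns a new function
    def dg(x, h=1e-6):
        return (g(x + h) - g(x - h)) / (2.0*h)
    return dg

def integral_functional(g):    # a functional: returns a number
    x = np.linspace(-5.0, 5.0, 10001)
    return np.trapz(g(x), x)   # F[g] = integral of g over [-5, 5]

print(f(3.0))                       # 9.0
print(derivative(f)(3.0))           # ~6.0
print(integral_functional(f))       # ~83.33 (the integral of x^2 from -5 to 5)
\end{verbatim}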

Functionals can also have derivatives, which behave similarly to traditional derivatives for functions. The differential of the functional is defined as:

\begin{displaymath}
\delta F[f] = F[f + \delta f] - F[f] = \int \frac{\delta F}{\delta f(x)}
\delta f(x) dx
\end{displaymath} (13)

The functional derivatives have properties similar to traditional function derivatives, e.g.:
\begin{displaymath}
\frac{\delta}{\delta f(x)}(C_1F_1 + C_2F_2) =
C_1\frac{\delta F_1}{\delta f(x)} + C_2 \frac{\delta F_2}{\delta f(x)}
\end{displaymath} (14)


\begin{displaymath}
\frac{\delta}{\delta f(x)}(F_1F_2) = \frac{\delta F_1}{\delta f(x)}F_2 +
\frac{\delta F_2}{\delta f(x)}F_1
\end{displaymath} (15)
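As a worked example of definition (13), take the functional $F[f] = \int f^{\frac{4}{3}}(x) dx$ (a form which reappears later in the Dirac and X$\alpha$ exchange functionals). Expanding to first order in $\delta f$:

\begin{displaymath}
F[f + \delta f] - F[f] = \int \left[(f + \delta f)^{\frac{4}{3}} - f^{\frac{4}{3}}\right] dx
\approx \int \frac{4}{3} f^{\frac{1}{3}}(x) \delta f(x) dx
\end{displaymath}

so that, comparing with equation (13), $\frac{\delta F}{\delta f(x)} = \frac{4}{3}f^{\frac{1}{3}}(x)$.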

Hamiltonian, variational principle, Hartree and Hartree-Fock method
Let us now say more about quantum mechanical approaches to calculating the energy of a system of electrons. One very important principle of quantum mechanics is called the variational principle or variational theorem, and it leads to a method (the variational method or variation method). It was discovered very early in the history of quantum mechanics. The variational principle states that if we take some wave function $\Psi$ (e.g., some approximate one) for a system and calculate the expectation value of energy $E$, this energy will be either higher than or equal to the ground state energy $E_0$ for this system.
\begin{displaymath}
E = \frac{\left<\Psi\vert H\vert\Psi\right>}{\left<\Psi\vert\Psi\right>} \ge
E_0 = \frac{\left<\Psi_0\vert H\vert\Psi_0\right>}{\left<\Psi_0\vert\Psi_0\right>}
\end{displaymath} (16)

The $E = E_0$ occurs only if $\Psi$ is equivalent to $\Psi_0$; otherwise $E > E_0$. This theorem can be easily proven (see for example: Levine, 1983; Slater, 1968; or Szabo & Ostlund, 1989). What is more important, however, is that this theorem provides us with a prescription for finding a good wave function: try to go as low with the expectation value of energy as you can (you cannot go lower than the true value). And the lower you go, the better you are. A word of caution here... The principle is true only if we are applying the true hamiltonian in the expression for the expectation energy. Of course, if your hamiltonian does not represent a physical system, you can get whatever you wish, $-\infty$ included!!!
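The variational principle is easy to watch at work numerically. The sketch below (an illustrative one-parameter family of Gaussian trial functions for the hydrogen atom on a radial grid; all numerical choices are assumptions of this example) never drops below the exact ground state energy of $-0.5$ hartree:

\begin{verbatim}
import numpy as np

# Sketch: variational principle for the hydrogen atom (atomic units).
# Trial functions exp(-a r^2); the exact ground state energy is -0.5.
r = np.linspace(1e-6, 20.0, 20000)

def energy(a):
    psi = np.exp(-a * r**2)
    dpsi = -2.0*a*r*psi                      # d(psi)/dr
    norm = np.trapz(psi**2 * r**2, r)        # radial volume element r^2 dr
    T = 0.5*np.trapz(dpsi**2 * r**2, r)      # kinetic energy
    V = -np.trapz(psi**2 * r, r)             # potential energy <-1/r>
    return (T + V) / norm

E = [energy(a) for a in np.linspace(0.05, 1.0, 200)]
print(min(E))    # about -0.424: above the exact -0.5, as the theorem demands
\end{verbatim}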

As said before, we can rarely solve the Schrödinger equation exactly. And we have to use approximations.

The first successful attempt at deriving approximate wave functions for atoms was devised by Hartree (1928). In this approach the many-electron wave function $\Psi$ is approximated by the product of one-electron functions $\phi$ for each of the $N$ electrons:

\begin{displaymath}
\Psi({\bf r}_1, {\bf r}_2, \cdots {\bf r}_N) = \phi_1({\bf r}_1)
\phi_2({\bf r}_2) \cdots \phi_N({\bf r}_N)
\end{displaymath} (17)

In this equation, ${\bf r}_i$ denotes the positional coordinates and the spin coordinate of the $i$-th electron. For example, in cartesians ${\bf r}_i = (x_i, y_i, z_i, m_{s_i})$, though to solve this equation for atoms spherical coordinates are more useful: ${\bf r}_i = (r_i, \theta_i, \varphi_i, m_{s_i})$, where $m_{s_i}$ can adopt only 2 values: $+\frac{1}{2}$ (spin up, $\alpha$, or $\uparrow$) or $-\frac{1}{2}$ (spin down, $\beta$, or $\downarrow$). The individual one-electron functions $\phi_i$ are called orbitals (or spinorbitals) and describe each electron in the atom (or molecule). Is this a reasonable approximation? Not really...
1.
It assumes that the electrons in the atom can be described independently, i.e., their movements do not depend upon each other, and their interaction is not pairwise; each electron interacts with some averaged field of the other electrons. This is a neat idea; the problem is that it is not true. Electrons have to avoid one another (correlate their movements), since they repel each other, being of the same charge.
2.
The function does not have the proper symmetry for interchanging particle indices for fermions. It was discovered a long time ago that the many electron wave function has to be antisymmetric with respect to the exchange of particle indices, i.e., change sign:
\begin{displaymath}
\Psi({\bf r}_1, {\bf r}_2, \cdots {\bf r}_i, {\bf r}_{i+1}, \cdots {\bf r}_N) =
- \Psi({\bf r}_1, {\bf r}_2, \cdots {\bf r}_{i+1}, {\bf r}_i, \cdots {\bf r}_N)
\end{displaymath} (18)

e.g., $\Psi({\bf r}_1, {\bf r}_2, {\bf r}_3) = - \Psi({\bf r}_1, {\bf r}_3, {\bf r}_2)
= - \Psi({\bf r}_2, {\bf r}_1, {\bf r}_3)$.
Does the Hartree method work? Yes, it works quite well, at least for atoms... Now, let us describe how it works. For an atom or molecule, the hamiltonian operator can be written as:
\begin{displaymath}
\hat{H}_{tot} = \hat{T}_{nucl} + \hat{T}_{e} + \hat{U}_{nucl} + \hat{V}_{ext} + \hat{U}_{ee}
\end{displaymath} (19)

where $\hat{T}_{nucl}$ is the kinetic energy operator for the nuclei, $\hat{T}_{e}$ is the kinetic energy operator for the electrons, $\hat{U}_{nucl}$ is the internuclear repulsion energy, $\hat{V}_{ext}$ is the electron-nucleus attraction (the external potential acting on the electrons), and $\hat{U}_{ee}$ is the electron-electron repulsion energy.

Since nuclei are much heavier than electrons (a proton is 1836 times heavier than an electron), they have much larger inertia than electrons, and we can consider that the electrons will adjust their positions instantly whenever the nuclei move. Prof. Rod Bartlett once used the analogy of flies around a wedding cake - you move the cake, and the flies move with it. Without much error, we can therefore separate the movement of electrons and nuclei, and assume that the movement of electrons depends on the positions of the nuclei in a parametric way. This is the content of the Born-Oppenheimer (1927) approximation. It allows us to use the electronic hamiltonian $\hat{H}_{el}$ to study electrons, and add the components of energy coming from the nuclei (i.e., their kinetic energy, $T_{nucl}$, and the internuclear repulsion energy, $U_{nucl}$) at the end of the calculations.

\begin{displaymath}
\hat{H}_{el} = \hat{T}_{e} + \hat{V}_{ext} + \hat{U}_{ee}
\end{displaymath} (20)

The form of these operators will be given below. Atomic units are used here, i.e., the mass of the electron $m_e = 1$; $\hbar = 1$; length is expressed in bohrs (1 bohr = 0.529177249 Å; it is the radius of the first Bohr orbit in the hydrogen atom); energy is in hartrees (1 hartree is 2 rydbergs, and the ground state energy of the hydrogen atom is $-1$ rydberg, or $-0.5$ hartree; 1 hartree = 627.51 kcal/mol = 2625.5 kJ/mol); the unit of charge is the charge of a proton, and the gaussian electrostatic system is used (the unit of permittivity is $4\pi\epsilon_0$ and it does not appear in the Coulomb law for vacuum).

\begin{displaymath}
\hat{T}_{e} = \sum_{i=1}^{N}-\frac{1}{2}\nabla^2_i
\end{displaymath} (21)


\begin{displaymath}
\hat{V}_{ext} = \sum_{i=1}^{N}\hat{v}_i =
\sum_{i=1}^{N}\underbrace{\left(\sum_{\alpha=1}^{N_{nucl}}\frac{-Z_\alpha}{\vert{\bf r}_i - {\bf R}_\alpha\vert}\right)}_{\hat{v}_i}
\end{displaymath} (22)


\begin{displaymath}
\hat{U}_{ee} = \sum_{i=1}^{N-1}\sum_{j=i+1}^{N} \frac{1}{\vert{\bf r}_i - {\bf r}_j\vert}
\end{displaymath} (23)

Here, $Z_\alpha$ is the charge of the $\alpha$-th nucleus (its atomic number), $\vert{\bf r}_i - {\bf R}_\alpha\vert$ is the distance between electron $i$ and nucleus $\alpha$, $N_{nucl}$ is the total number of nuclei in the molecule (obviously equal to 1 for an atom), and $\vert{\bf r}_i - {\bf r}_j\vert$ is the distance between electrons $i$ and $j$. Note that $\vert{\bf r}_i - {\bf R}_\alpha\vert$ and $\vert{\bf r}_i - {\bf r}_j\vert$ can be expressed in cartesian coordinates:

\begin{displaymath}\vert{\bf r}_i - {\bf R}_\alpha\vert = \sqrt{(x_i - X_\alpha)^2 + (y_i - Y_\alpha)^2 + (z_i - Z_\alpha)^2}\end{displaymath}

(note that nuclear coordinates for each of the $N_{nucl}$ nuclei: $X_\alpha$, $Y_\alpha$, $Z_\alpha$ are constants/parameters rather than variables in the electronic hamiltonian $\hat{H}_{el}$),

\begin{displaymath}\vert{\bf r}_i - {\bf r}_j\vert = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2}\end{displaymath}

We can now rewrite the electronic hamiltonian as:

\begin{displaymath}
\hat{H}_{el} = \sum_{i=1}^{N} \hat{h}_i + \hat{U}_{ee}
\end{displaymath} (24)

where the operator
\begin{displaymath}
\hat{h}_i = -\frac{1}{2}\nabla^2_i + \hat{v}_i
\end{displaymath} (25)

and $\hat{h}_i$ only depends on the coordinates ${\bf r}_i$ of a given $i$-th electron. In the product function from equation (17), $\hat{h}_i$ would only affect the function for the $i$-th electron. Unfortunately, there is still the $\hat{U}_{ee}$ term, which depends on pairs of electrons, and we cannot separate variables in the Schrödinger equation (a second order differential equation). So Hartree found an approximation, in which an electron does not interact with other electrons one by one, but with the averaged density of the electrons. If we assume for the moment that we know the one-electron functions $\phi_i({\bf r})$ for each electron (they make up the total wave function in equation (17)), we can calculate the density corresponding to each electron using equation (12):
\begin{displaymath}
\rho_i({\bf r}) = \vert\phi_i({\bf r})\vert^2
\end{displaymath} (26)

And the total density of the electrons will be a sum of individual electron densities:


\begin{displaymath}
\rho_{tot}({\bf r}) = \sum_{i=1}^{N}\rho_i({\bf r}) = \sum_{i=1}^{N}\vert\phi_i({\bf r})\vert^2
\end{displaymath} (27)

However, the $k$-th electron does not interact with the whole density $\rho_{tot}$, since it is itself a part of it. An electron cannot interact with itself! It is not a human being!!! This would be a self-interaction. So, if we want to find the correct density with which the $k$-th electron interacts (let us denote it by $\rho^{(k)}({\bf r})$), we need to subtract its own density from $\rho_{tot}$:
\begin{displaymath}
\rho^{(k)}({\bf r}) = \rho_{tot}({\bf r}) - \rho_k({\bf r}) =
\sum_{i=1}^{N}\vert\phi_i({\bf r})\vert^2 - \vert\phi_k({\bf r})\vert^2 = \sum_{i=1 \atop i \ne k}^{N} \vert\phi_i({\bf r})\vert^2
\end{displaymath} (28)

Now, let us assume that we know some approximate functions for the orbitals. And we want to calculate the energy of interaction of a point charge (our electron) located at position ${\bf r}$ with the other electrons represented as a smeared electron density. Our point charge is $-e$, which in atomic units is $-1$, and the electron density is also negative, so we get a positive contribution to the energy:
\begin{displaymath}
\hat{g}_k({\bf r}) = \int \rho^{(k)}({\bf r^\prime})\frac{1}{\vert{\bf r} - {\bf r}^{\prime}\vert} d{\bf r}^\prime
\end{displaymath} (29)

Now, if we make such an approximation, we can write:
\begin{displaymath}
\hat{U}_{ee} \approx \sum_{i=1}^N \hat{g}_i({\bf r})
\end{displaymath} (30)

(it is not entirely true, as we will see later, since we double count interactions, but let us leave it at that for a moment), and our $\hat{H}_{el}$ consists of a sum of one-electron operators:
\begin{displaymath}
\hat{H}_{el} \approx \sum_{i=1}^{N} (-\frac{1}{2}\nabla_i^2 + \hat{v}_i + \hat{g}_i)
\end{displaymath} (31)

and the many electron Schrödinger equation can be solved as $N$ independent one electron equations:
\begin{displaymath}
(-\frac{1}{2}\nabla_i^2 + \hat{v}_i + \hat{g}_i)\phi_i({\bf r}) = \epsilon_i \phi_i({\bf r})
\end{displaymath} (32)

The $\epsilon_i$ is the energy of the $i$-th electron. In practice, we start with some approximate orbitals $\phi_i$ (e.g., from the hydrogen atom). We need them, since $\hat{g}_i$ depends on them. We solve all $N$ equations and obtain $N$ new $\phi^\prime_i$'s. The new $\phi^\prime_i$'s are different from the old $\phi_i$'s, and we believe that they are better. Now we have better approximations for the orbitals, so we use the new $\phi_i^\prime$'s as a starting point and repeat the process. At some point, the $\phi_i^\prime$'s no longer change from iteration to iteration, and we have obtained the self-consistent field orbitals. From these orbitals we can form the many electron wave function $\Psi$, and then calculate the total energy $E$ of the ground state. Note that the total energy is not equal to the sum of orbital energies $\epsilon_i$. We calculate the expectation value of energy using the accurate hamiltonian $\hat{H}_{el}$ from equation (20). When we solved the equation for $\phi_1$ and $\epsilon_1$, we included the Coulomb interactions between electrons (1, 2), (1, 3), (1, 4), etc. When we solved the second equation, we included interactions (2, 1), (2, 3), (2, 4), etc. But the interaction (1, 2) is the same as (2, 1), i.e., by adding the orbital energies we would count each interaction twice. For this reason, the correct total energy can be represented as:
\begin{displaymath}
E = \sum_{i=1}^{N} \epsilon_i - \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} J_{ij}
\end{displaymath} (33)

where $J_{ij}$'s represent Coulomb interaction of electron $i$ and $j$. They are called Coulomb integrals and are defined as:
\begin{displaymath}
J_{ij} = \int \int \frac{\rho_i({\bf r}_1) \rho_j({\bf r}_2)}{\vert{\bf r}_1 - {\bf r}_2\vert} d {\bf r}_1 d {\bf r}_2 =
\int \int \vert\phi_i({\bf r}_1)\vert^2 \frac{1}{\vert{\bf r}_1 - {\bf r}_2\vert} \vert\phi_j({\bf r}_2)\vert^2 d {\bf r}_1 d {\bf r}_2
\end{displaymath} (34)


\begin{displaymath}
J_{ij} = \int \int \phi_i^\ast({\bf r}_1) \phi_j^\ast({\bf r}_2) \frac{1}{\vert{\bf r}_1 - {\bf r}_2\vert}
\phi_i({\bf r}_1) \phi_j({\bf r}_2) d {\bf r}_1 d {\bf r}_2
\end{displaymath} (35)
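The iterative scheme just described can be sketched in a few lines for a toy model (one dimension, softened Coulomb interactions, two electrons sharing one orbital; everything here is an illustrative stand-in, not a real atomic calculation):

\begin{verbatim}
import numpy as np

# Toy sketch of the self-consistent field idea: two electrons in one
# dimension with softened Coulomb interactions (not a real atom).
n, L = 200, 20.0
x = np.linspace(-L/2, L/2, n)
h = x[1] - x[0]
D2 = (np.diag(np.ones(n-1), -1) - 2.0*np.eye(n) + np.diag(np.ones(n-1), 1)) / h**2
T = -0.5 * D2
v_ext = -2.0 / np.sqrt(x**2 + 1.0)           # softened "nucleus" of charge 2
W = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)  # softened 1/|x - x'|

phi = np.exp(-x**2)
phi /= np.sqrt(h * np.sum(phi**2))           # normalized starting guess
for it in range(200):
    g = h * (W @ phi**2)                     # potential from the *other* electron
    eps, U = np.linalg.eigh(T + np.diag(v_ext + g))
    phi_new = U[:, 0] / np.sqrt(h)           # lowest orbital, grid-normalized
    if np.max(np.abs(np.abs(phi_new) - np.abs(phi))) < 1e-10:
        break                                # self-consistency reached
    phi = phi_new
print(it, eps[0])                            # iterations used, orbital energy
\end{verbatim}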

The Hartree approximation works well for atoms. The one-electron functions are quite good, and allow us to produce an approximate many-electron function for the whole atom. But the form of the function adopted by Hartree was basically wrong. It was known at that time that interchanging electron labels in the wave function for a system of electrons has to change its sign. This is not something which can be derived. It is simply a law of nature. And it is important, since without this property the function will not correctly describe the system. Electrons (fermions) have this property, and we should use it. In a system of fermions, no two particles can be described by the same one particle function. So, a few years later, Fock (1930), and independently Slater (1930), proposed a fix to the Hartree method. They also used one-electron functions, but the total wave function for the system was not a simple product of orbitals, but an antisymmetrized sum of all the products which can be obtained by interchanging electron labels. It is conveniently represented as a determinant (called a Slater determinant):

\begin{displaymath}
\Psi({\bf r}_1, {\bf r}_2, \cdots {\bf r}_N) = \frac{1}{\sqrt{N!}}\left\vert \begin{array}{cccc}
\phi_1({\bf r}_1) & \phi_2({\bf r}_1) & \cdots & \phi_N({\bf r}_1)\\
\phi_1({\bf r}_2) & \phi_2({\bf r}_2) & \cdots & \phi_N({\bf r}_2)\\
\vdots & \vdots & \ddots & \vdots\\
\phi_1({\bf r}_N) & \phi_2({\bf r}_N) & \cdots & \phi_N({\bf r}_N)\\
\end{array} \right\vert
\end{displaymath} (36)

Let us examine some properties of this function taking a 2-electron case:

\begin{displaymath}
\Psi({\bf r}_1, {\bf r}_2) = \frac{1}{\sqrt{2}}\left\vert \begin{array}{cc}
\phi_1({\bf r}_1) & \phi_2({\bf r}_1)\\
\phi_1({\bf r}_2) & \phi_2({\bf r}_2)\\
\end{array} \right\vert
= \frac{1}{\sqrt{2}}\left[\phi_1({\bf r}_1) \phi_2({\bf r}_2) - \phi_2({\bf r}_1) \phi_1({\bf r}_2)\right]
\end{displaymath} (37)

If we change electron labels $1 \rightarrow 2$ and $2 \rightarrow 1$ we get

\begin{displaymath}
\Psi({\bf r}_2, {\bf r}_1) = \frac{1}{\sqrt{2}}\left\vert \begin{array}{cc}
\phi_1({\bf r}_2) & \phi_2({\bf r}_2)\\
\phi_1({\bf r}_1) & \phi_2({\bf r}_1)\\
\end{array} \right\vert
= \frac{1}{\sqrt{2}}\left[\phi_1({\bf r}_2) \phi_2({\bf r}_1) - \phi_2({\bf r}_2) \phi_1({\bf r}_1)\right]
\end{displaymath} (38)

that is: $\Psi({\bf r}_1, {\bf r}_2) = - \Psi({\bf r}_2, {\bf r}_1)$. Moreover, if we assume that two electrons are described by the same spinorbital: $\phi_1 = \phi_2 = \phi$ we get:
\begin{displaymath}
\Psi({\bf r}_1, {\bf r}_2) = \frac{1}{\sqrt{2}}\left\vert \begin{array}{cc}
\phi({\bf r}_1) & \phi({\bf r}_1)\\
\phi({\bf r}_2) & \phi({\bf r}_2)\\
\end{array} \right\vert
= \frac{1}{\sqrt{2}}\left[\phi({\bf r}_1) \phi({\bf r}_2) - \phi({\bf r}_1) \phi({\bf r}_2)\right]
\equiv 0
\end{displaymath} (39)

i.e., such a wave function would be zero everywhere, and hence the probability of finding such electrons is zero. So the Slater determinant satisfies the Pauli exclusion principle: each electron has to be described by a different one-electron function. It was originally formulated by Pauli (1925) for electrons in atoms: no two electrons can have the same values of all four quantum numbers $n$, $l$, $m_l$, and $m_s$. The single determinant wave function is much better than the Hartree product function, as it naturally includes some basic fermion characteristics.
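These properties are easy to verify numerically, since equation (36) is an ordinary determinant. In the sketch below, random numbers stand in for the orbital values $\phi_j({\bf r}_i)$ (the setting is purely illustrative):

\begin{verbatim}
import math
import numpy as np

# Sketch: antisymmetry of the Slater determinant, equation (36).
# A[i, j] stands in for phi_j(r_i); random numbers suffice for the check.
rng = np.random.default_rng(0)
N = 3
A = rng.normal(size=(N, N))

norm = 1.0 / math.sqrt(math.factorial(N))
print(norm * np.linalg.det(A))                # Psi(r1, r2, r3)
print(norm * np.linalg.det(A[[1, 0, 2], :]))  # swap electrons 1 and 2: sign flips

A[:, 1] = A[:, 0]                             # two electrons in the same spinorbital
print(np.linalg.det(A))                       # ~0: the Pauli exclusion principle
\end{verbatim}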

However, it complicates the equations compared to the Hartree method, and introduces a new term, electron exchange. The method of finding the best single determinant wave function for the system is called the Hartree-Fock method.

The expectation value of total energy for the single determinant wave function $\Psi$ is given by:

\begin{displaymath}
E = \left<\Psi\vert H\vert\Psi\right> = \sum_{i=1}^{N} H_i + \frac{1}{2} \sum_{i=1}^N \sum_{j=1}^N (J_{ij} - K_{ij})
\end{displaymath} (40)

where:
\begin{displaymath}
H_i = \int \phi_i^\ast({\bf r})[-\frac{1}{2}\nabla_i^2 + \hat{v}_i]\phi_i({\bf r})d {\bf r}
\end{displaymath} (41)

is an element of one-electron operator $\hat{h}_i$ which was defined by equation (25), the $J_{ij}$ is the Coulomb integral defined by equation (34), and $K_{ij}$ is the new term, called exchange integral, which is defined by
\begin{displaymath}
K_{ij} = \int \int \phi_i^\ast({\bf r}_1) \phi_j({\bf r}_1) \frac{1}{\vert{\bf r}_1 - {\bf r}_2\vert}
\phi_i({\bf r}_2) \phi_j^\ast({\bf r}_2) d{\bf r}_1 d{\bf r}_2
\end{displaymath} (42)

Note that $K_{ij}$ is similar in form to $J_{ij}$, but the functions $\phi_i$ and $\phi_j$ have been exchanged. Also, electrons $i$ and $j$ have to be of the same spin for $K_{ij}$ to be nonzero, due to the orthogonality of their spin parts. It does not have a simple physical interpretation like $J_{ij}$ (i.e., a purely electrostatic interaction of the two charge densities of electrons $i$ and $j$). The exchange integral comes as a result of the determinantal form of $\Psi$, which sums all possible products of permutations of electrons among orbitals. $J_{ij}$ and $K_{ij}$ are equal only when $i=j$. This is very important, since it means there is no contribution to the total energy coming from electron self-interaction. It can be shown that the $J_{ij}$'s are larger than (or equal to) the $K_{ij}$'s, and that they are positive numbers.

There are essentially two approaches to the HF method: a purely numerical one, and one in which the orbitals $\phi$ are represented as combinations of basis functions. Numerical HF was used at some point to derive orbitals for many-electron atoms. There are also some numerical HF programs for molecules. However, these methods are quite computationally intensive. It is much more popular to represent the orbitals $\phi$ as a linear expansion in a set of basis functions $\chi$:

\begin{displaymath}
\phi_i({\bf r}) = \sum_{k=1}^n C_{ki} \chi_k({\bf r})
\end{displaymath} (43)

and optimizing coefficients $C_{ki}$ rather than basis functions.

The derivation of the specific equations for the Hartree-Fock method will not be given here, but can be found in many books (see, e.g.: McWeeny & Sutcliffe, 1969; Parr, 1963; Pilar, 1968; Slater, 1968; Szabo & Ostlund, 1989) and the landmark papers of Roothaan (1951, 1960). In the SCF LCAO MO (Self Consistent Field - Linear Combination of Atomic Orbitals, Molecular Orbitals) method, we are looking for the coefficients $C_{ki}$ which minimize the energy.

They are derived using the variational principle, i.e., the goal here is to find a single determinant function $\Psi$, as represented by equation (36), which corresponds to the lowest possible value of energy. This is done by variational calculus. The condition of minimum energy is:

\begin{displaymath}
\delta E_0 \equiv \delta \left<\Psi_0\vert H\vert\Psi_0\right> = 0
\end{displaymath} (44)

Another condition is that orbitals $\phi$ have to be orthonormal, i.e.:
\begin{displaymath}
\left<\phi_i\vert\phi_j\right> = \delta_{ij}
\end{displaymath} (45)

where $\delta_{ij}$ is called a Kronecker delta, and is equal to 1 only if $i=j$, but is zero otherwise.

The problem of minimization of a function with some subsidiary conditions is done efficiently with the Lagrange's method of undetermined multipliers. In this method, the constraints are converted into expressions whose value is zero when the constraint is met. Such expressions are then multiplied by an undetermined constant and added (or subtracted, according to taste) to the minimized function. The resulting sum is then minimized:

\begin{displaymath}
\delta \left[ \left<\Psi_0\vert H\vert\Psi_0\right> - E\left(\left<\Psi_0\vert\Psi_0\right> - 1\right)\right] = 0
\end{displaymath} (46)

As a result, one gets Hartree-Fock eigenequations.


\begin{displaymath}
\hat{f}({\bf r}_1)\phi_i({\bf r}_1) = \epsilon_i \phi_i({\bf r}_1)
\end{displaymath} (47)

where the Fock operator is defined as:


\begin{displaymath}
\hat{f}({\bf r}_1) = \hat{h}({\bf r}_1) + \sum_{a=1}^N\left[\hat{j}_a({\bf r}_1) - \hat{k}_a({\bf r}_1)\right]
\end{displaymath} (48)

In these equations, the fact that operators act on certain coordinates is stressed by writing explicitly their dependence on coordinates of electron 1 (it could be any electron number, since it does not matter how we number them). The $\hat{h}({\bf r}_1)$ was defined in equation (25). The Coulomb operator,

\begin{displaymath}
\hat{j}_a({\bf r}_1)\phi_b({\bf r}_1) = \phi_b({\bf r}_1) \int
\phi_a^\ast({\bf r}_2)\phi_a({\bf r}_2) \frac{1}{\vert{\bf r}_1 - {\bf r}_2\vert}d {\bf r}_2
\end{displaymath} (49)

and the exchange operator
\begin{displaymath}
\hat{k}_a({\bf r}_1)\phi_b({\bf r}_1) = \phi_a({\bf r}_1) \int
\phi_a^\ast({\bf r}_2)\phi_b({\bf r}_2)\frac{1}{\vert{\bf r}_1 - {\bf r}_2\vert}d {\bf r}_2
\end{displaymath} (50)

are defined by the result of their action on a function.

If we now introduce a basis, i.e., replace the $\phi_i$'s with their expansions into basis functions according to equation (43), the eigenproblems become a set of algebraic equations. Substituting equation (43) into (47) one gets:

\begin{displaymath}
\hat{f}({\bf r}_1)\sum_{k=1}^n C_{ki} \chi_k({\bf r}_1) = \epsilon_i \sum_{k=1}^n C_{ki} \chi_k({\bf r}_1)
\end{displaymath} (51)

and multiplying on the left by $\chi_l^\ast({\bf r}_1)$ and integrating over ${\bf r}_1$ one gets:
\begin{displaymath}
\sum_{k=1}^n C_{ki} \int \chi_l^\ast({\bf r}_1) \hat{f}({\bf r}_1) \chi_k({\bf r}_1) d {\bf r}_1 =
\epsilon_i \sum_{k=1}^n C_{ki} \int \chi_l^\ast({\bf r}_1)\chi_k({\bf r}_1) d {\bf r}_1
\end{displaymath} (52)

Integrals are further denoted as:
\begin{displaymath}
({\bf F})_{lk} = F_{lk} = \int \chi_l^\ast({\bf r}_1) \hat{f}({\bf r}_1) \chi_k({\bf r}_1) d {\bf r}_1
\end{displaymath} (53)

which is an element of a $n\times n$ Fock matrix ${\bf F}$, and
\begin{displaymath}
({\bf S})_{lk} = S_{lk} = \int \chi_l^\ast({\bf r}_1)\chi_k({\bf r}_1) d {\bf r}_1
\end{displaymath} (54)

which is an element of the $n\times n$ matrix ${\bf S}$ of overlap integrals. We can obtain $n$ such equations (i.e., as many as there are basis functions), and it is convenient to write them in matrix form as:
\begin{displaymath}
{\bf FC = SC\epsilon}
\end{displaymath} (55)

The problem now is to find a matrix ${\bf C}$ which diagonalizes ${\bf F}$, which is a standard procedure in linear algebra. The first step is to find a matrix ${\bf X}$ which diagonalizes the matrix of overlap integrals ${\bf S}$; this is the step which orthogonalizes the basis set (i.e., the basis set is converted to linear combinations of the original basis functions which are mutually orthogonal).

\begin{displaymath}
{\bf X}^\dagger{\bf S}{\bf X} = {\bf 1}
\end{displaymath} (56)

Applying this matrix to equation (55) and rearranging it we get
\begin{displaymath}
{\bf C}^{\prime\dagger}{\bf F}^{\prime}{\bf C}^\prime = {\bf\epsilon}
\end{displaymath} (57)

where ${\bf C} = {\bf X C}^\prime$ and ${\bf F}^\prime = {\bf X}^\dagger {\bf F} {\bf X}$. Efficient routines for matrix diagonalization (i.e., for obtaining ${\bf C}^\prime$ in this case) are widely available.
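Equation (55) is the generalized symmetric eigenproblem of linear algebra, and standard library routines solve it directly, performing the orthogonalization of equations (56)-(57) internally. A sketch with random stand-in matrices (not Fock or overlap matrices from any real basis set):

\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Sketch: the Roothaan equations FC = SCe as a generalized symmetric
# eigenproblem. F is symmetric; S is symmetric positive definite.
rng = np.random.default_rng(1)
n = 5
F = rng.normal(size=(n, n)); F = 0.5*(F + F.T)
B = rng.normal(size=(n, n)); S = B @ B.T + n*np.eye(n)

eps, C = eigh(F, S)                               # energies and coefficients
print(np.allclose(F @ C, S @ C @ np.diag(eps)))   # True: FC = SCe
print(np.allclose(C.T @ S @ C, np.eye(n)))        # True: orthonormal in the S metric
\end{verbatim}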

While these equations seem simple, the problem is that the elements of ${\bf F}$ depend on the orbitals $\phi$, i.e., one can solve them only through an iterative process, as in Hartree's method. Moreover, due to the Coulomb and exchange operators, the Fock matrix elements involve a massive number of two-electron integrals of the type:

\begin{displaymath}
\left<ij\vert kl\right> = \int \int \chi_i^\ast({\bf r}_1) \chi_j^\ast({\bf r}_2) \frac{1}{\vert{\bf r}_1 - {\bf r}_2\vert}
\chi_k({\bf r}_1) \chi_l({\bf r}_2) d {\bf r}_1 d {\bf r}_2
\end{displaymath} (58)

To escape these difficulties and complexities, people tried for many years to describe electron systems using the density rather than the wave function. In the HF approach, the two-electron integrals $\left<ij\vert kl\right>$ dominate the computational effort. Moreover, HF is an approximation, as it does not account for dynamic correlation due to the rigid form of the single determinant wave function. To solve the HF equations, the assumption has to be made that electrons interact with the averaged potential coming from the other electrons, while in fact the interactions between electrons are pairwise. In reality, electrons correlate their movements, trying to avoid each other so that there is the least amount of electrostatic repulsion. To account for dynamic correlation, one has to go to correlated methods, which use multideterminant wave functions, and these scale as the fifth or even greater powers of the size of the system. While the HF method is quite successful for geometries, it fails miserably to describe bond breaking or forming (for a review of correlated ab initio methods see: Bartlett & Stanton, 1994).

Density Functional Theory - early developments
For many years, the use of the electron density as a fundamental description of the system was based on intuition rather than hard proof that this can be done. The electron density is much more attractive: it depends only on $x,y,z$ (and possibly there are two densities for spin polarized systems, one for spin up electrons, $\rho_\uparrow({\bf r})$, and one for spin down electrons, $\rho_\downarrow({\bf r})$), as opposed to the many particle wave function, which depends on all coordinates of all particles, i.e., for $N$ electrons it depends on $3N$ variables (or $4N$ if you count spin). The fact that the ground state properties are functionals of the electron density $\rho({\bf r})$ was proved by Hohenberg and Kohn (1964), and this provides the basic framework for modern Density Functional methods.

More specifically, according to the theorem proved by them, the total ground state energy of an electron system can be written as a functional of the electronic density, and this energy is at a minimum if the density is the exact density of the ground state. The theorem of HK is an existence proof of such a functional, but there is no prescription for how to construct it. If we knew the form of this functional accurately, and if it were not complicated, quantum chemistry would be a done deal. Unfortunately we do not know the exact form of the energy functional. It is necessary to use approximations for the parts of the functional dealing with kinetic energy and the exchange and correlation energies of the system of electrons.

The simplest approximation is the local density approximation (LDA), which leads to the Thomas-Fermi (Fermi, 1928; Thomas, 1927) term for kinetic energy (for a review see, e.g., Jones & Gunnarsson, 1989; Slater, 1968; March, 1957) and the Dirac (1930) term for the exchange energy. The corresponding functional is called the Thomas-Fermi-Dirac energy. As you see, these developments are not recent and were parallel to the work done in the wave function approaches. These functionals can be further improved, but the results are not that encouraging for molecular systems. On the other hand, the Thomas-Fermi-Dirac method with improvements is a true density functional method, since all components of the energy are expressed via the density alone, without using many particle wave functions. However, for the time being, it seems that there is no way to avoid wave functions in molecular calculations; for accurate calculations they have to be used as a mapping step between the energy and the density. For example, Thomas-Fermi theory does not predict chemical bonds. While ``pure'' density functional theories are very useful in studying the solid phase (e.g., conductivity), they fail to provide meaningful results for molecular systems.
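As an illustration of such a ``pure'' density functional, the sketch below evaluates the Thomas-Fermi kinetic energy $C_F\int\rho^{5/3}({\bf r})d{\bf r}$ and the Dirac exchange energy $-C_X\int\rho^{4/3}({\bf r})d{\bf r}$ for the exact hydrogen 1s density (the grid and the comparison are assumptions of this example). The local functionals land rather far from the exact values, in line with the remarks above:

\begin{verbatim}
import numpy as np

# Sketch: Thomas-Fermi kinetic and Dirac exchange functionals for the
# exact hydrogen 1s density (atomic units). Exact hydrogen values:
# T = 0.5 hartree, Ex = -5/16 = -0.3125 hartree.
r = np.linspace(1e-6, 30.0, 100000)
rho = np.exp(-2.0*r) / np.pi                 # 1s density; integrates to 1

C_F = 0.3 * (3.0*np.pi**2)**(2.0/3.0)        # Thomas-Fermi constant, ~2.871
C_X = 0.75 * (3.0/np.pi)**(1.0/3.0)          # Dirac exchange constant, ~0.7386
w = 4.0*np.pi*r**2                           # radial volume element

print(np.trapz(w*rho, r))                    # ~1.0 electron
print(C_F * np.trapz(w*rho**(5.0/3.0), r))   # ~0.29, vs the exact T = 0.5
print(-C_X * np.trapz(w*rho**(4.0/3.0), r))  # ~-0.21, vs the exact Ex = -0.3125
\end{verbatim}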

The predecessor of the modern chemical approaches to DFT was undoubtedly Slater's X$\alpha$ method, formulated in 1951 (Slater, 1951 & 1974; for review see: Johnson 1973 & 1975). It was developed as an approximate solution to the HF equations. In this method, the HF exchange was approximated by:

\begin{displaymath}
E_{X\alpha}[\rho_\uparrow, \rho_\downarrow]
= -\frac{9}{4}\alpha \left(\frac{3}{4\pi}\right)^{\frac{1}{3}} \int [\rho_\uparrow^\frac{4}{3}({\bf r})
+ \rho_\downarrow^\frac{4}{3}({\bf r})]d {\bf r}
\end{displaymath} (59)

The exchange energy $E_{X\alpha}$ is given here as a functional of the densities for spin up ($\uparrow$) and spin down ($\downarrow$) electrons, and contains an adjustable parameter $\alpha$. This parameter was empirically optimized for each atom of the periodic table (see, e.g., Slater, 1974; Schwartz, 1972 & 1974), and its value was between 0.7 and 0.8 for most atoms. For the special case of the homogeneous electron gas, its value is exactly $\frac{2}{3}$ (Gáspár, 1954).

Hohenberg and Kohn theorems
The field of rigorous density functional theory was born in 1964 with the publication of the Hohenberg and Kohn paper (1964). They proved the following:
I.
Every observable of a stationary quantum mechanical system (including energy), can be calculated, in principle exactly, from the ground-state density alone, i.e., every observable can be written as a functional of the ground-state density.
II.
The ground state density can be calculated, in principle exactly, using the variational method involving only the density.
The original theorems refer to the time independent (stationary) ground state, but are being extended to excited states and time-dependent potentials (for review see, e.g., Gross & Kurth, 1994).

How were these theorems derived? By quite original logic. Within the Born-Oppenheimer approximation, the ground state of the system of electrons is a result of the positions of the nuclei. This is obvious if we look at the $\hat{H}_{el}$ hamiltonian in equation (20). In this hamiltonian, the kinetic energy of the electrons ($\hat{T}_{e}$) and the electron-electron interaction ($\hat{U}_{ee}$) ``adjust'' themselves to the external (i.e., coming from the nuclei) potential $\hat{V}_{ext}$. Once the $\hat{V}_{ext}$ is in place, everything else is, including the electron density, which simply adjusts itself to give the lowest possible total energy of the system. The external potential $\hat{V}_{ext}$ is the only variable term in this equation, and everything else depends indirectly on it.

Hohenberg and Kohn posed a more interesting question, which is quite opposite to the traditional logic: Is $\hat{V}_{ext}$ uniquely determined from the knowledge of the electron density $\rho({\bf r})$? Can we find out (in principle, though it need not be easy) where and what the nuclei are, if we know the density $\rho({\bf r})$ in the ground state? Is there a precise mapping from $\rho({\bf r})$ to $\hat{V}_{ext}$? The answer to this question is: Yes! Actually, the mapping is accurate up to a constant, which would not change anything, since the Schrödinger equations with $\hat{H}_{el}$ and $\hat{H}_{el} + const$ yield exactly the same eigenfunctions, i.e., states (this is easy to prove based on the linearity of the hamiltonian), and the energies will simply be shifted by the value of this $const$. Note that all energies are known only up to some constant, which establishes the frame of reference (e.g., we do not include the electron-Mars gravitational attraction in most calculations).

Why was this question so important? Because, if it is true, the knowledge of the density provides total information about the system, and formally, if we know the density, we know everything there is to know. Since $\rho({\bf r})$ determines the number of electrons, $N$:

\begin{displaymath}
N = \int \rho({\bf r}) d {\bf r}
\end{displaymath} (60)

and $\rho$ determines the $\hat{V}_{ext}$, the knowledge of the total density is as good as the knowledge of $\Psi$, i.e., the wave function describing the state of the system. They proved this through a contradiction:
1.
Assume that we have an exact ground state density $\rho({\bf r})$,
2.
Assume that the ground state is nondegenerate (i.e., there is only one wave function $\Psi$ for this ground state (though HK theorems can be easily extended for degenerate ground states, see, e.g., Dreizler & Gross, 1990; Kohn, 1985),
3.
Assume that for the density $\rho({\bf r})$ there are two possible external potentials: $\hat{V}_{ext}$ and $\hat{V}_{ext}^\prime$, which obviously produce two different hamiltonians, $\hat{H}_{el}$ and $\hat{H}_{el}^\prime$, respectively. These in turn produce two different wave functions for the ground state, $\Psi$ and $\Psi^\prime$, respectively. They correspond to the energies $E_0 = \left<\Psi\vert H\vert\Psi\right>$ and $E_0^\prime = \left<\Psi^\prime\vert H^\prime\vert\Psi^\prime\right>$, respectively.
4.
Now, let us calculate the expectation value of energy for $\Psi^\prime$ with the hamiltonian $\hat{H}$ and use the variational theorem:
\begin{displaymath}
E_0 < \left<\Psi^\prime\vert H\vert\Psi^\prime\right> =
\overbrace{\left<\Psi^\prime\vert H^\prime\vert\Psi^\prime\right>}^{E_0^\prime}
+ \int \rho({\bf r})[\hat{V}_{ext} - \hat{V}_{ext}^\prime]d {\bf r}
\end{displaymath} (61)

5.
Now let us calculate the expectation value of energy for $\Psi$ with the hamiltonian $\hat{H}^\prime$ and use the variational theorem:
\begin{displaymath}
E_0^\prime < \left<\Psi\vert H^\prime\vert\Psi\right> =
\underbrace{\left<\Psi\vert H\vert\Psi\right>}_{E_0}
- \int \rho({\bf r})[\hat{V}_{ext} - \hat{V}_{ext}^\prime]d {\bf r}
\end{displaymath} (62)

6.
By adding equations (61) and (62) side by side we obtain:
\begin{displaymath}
E_0 + E_0^\prime < E_0^\prime + E_0
\end{displaymath} (63)

and it leads to a contradiction.
Since we now know that $\rho({\bf r})$ determines $N$ and $\hat{V}_{ext}$, it also determines all properties of the ground state, including the kinetic energy of the electrons $T_e$ and the energy of interaction among electrons $U_{ee}$, i.e., the total ground state energy is a functional of the density with the following components:
\begin{displaymath}
E[\rho] = T_e[\rho] + V_{ext}[\rho] + U_{ee}[\rho]
\end{displaymath} (64)

Additionally, HK grouped together all functionals which are secondary (i.e., which are responses) to the $V_{ext}[\rho]$:

\begin{displaymath}
E[\rho] = V_{ext}[\rho] + F_{HK}[\rho] = \int \rho({\bf r})\hat{V}_{ext}({\bf r})d{\bf r} + F_{HK}[\rho]
\end{displaymath} (65)

The $F_{HK}$ functional operates only on the density and is universal, i.e., its form does not depend on the particular system under consideration (note that $N$-representable densities integrate to $N$, so the information about the number of electrons can be easily obtained from the density itself).

The second HK theorem provides the variational principle in the electron density representation $\rho({\bf r})$. For a trial density $\tilde{\rho}({\bf r})$ such that $\tilde{\rho}({\bf r}) \ge 0$ and for which $\int \tilde{\rho}({\bf r}) d {\bf r} = N$,

\begin{displaymath}
E_0 \le E[\tilde{\rho}]
\end{displaymath} (66)

where $E[\tilde{\rho}]$ is the energy functional. In other words, if some density represents the correct number of electrons $N$, the total energy calculated from this density cannot be lower than the true energy of the ground state.

As to the necessary conditions for this theorem, there is still some controversy concerning the so-called representability of the density. The $N$-representability, i.e., the fact that the trial $\tilde{\rho}$ has to sum up to $N$ electrons, is easy to achieve by simple rescaling. It is automatically ensured if $\rho({\bf r})$ can be mapped to some wave function (for further discussion see: Parr & Yang, 1989; Gilbert, 1975; Lieb, 1982; and Harriman, 1980). Assuring that the trial density also has $V_{ext}$-representability (usually denoted in the literature as $v$-representability) is not that easy. Levy (1982) and Lieb (1983) have shown that there are some ``reasonable'' trial densities which are not the ground state densities for any possible $V_{ext}$ potential, i.e., they do not map to any external potential. Such densities do not correspond, therefore, to any ground state, and their optimization will not lead to a ground state. Moreover, during energy minimization we may take a wrong turn, get stuck in some non-$v$-representable density, and never be able to converge to a physically relevant ground state density. For an interesting discussion, see Hohenberg et al. (1990). Assuming that we restrict ourselves only to trial densities which are both $N$- and $v$-representable, the variational principle for the density is easily proven, since each trial density $\tilde{\rho}$ defines a hamiltonian $\hat{\tilde{H}}_{el}$. From the hamiltonian we can derive the corresponding wave function $\tilde{\Psi}$ for the ground state represented by this hamiltonian. And according to the traditional variational principle, this wave function $\tilde{\Psi}$ will not be a ground state for the hamiltonian of the real system $\hat{H}_{el}$:

\begin{displaymath}
\tilde{\rho} \rightarrow \hat{\tilde{H}}_{el} \rightarrow \tilde{\Psi} \rightarrow
\left<\tilde{\Psi}\vert\hat{H}_{el}\vert\tilde{\Psi}\right>
= E[\tilde{\rho}] \ge E[\rho_0] \equiv E_0
\end{displaymath} (67)

where $\rho_0({\bf r})$ is the true ground state density of the real system.

The condition of minimum for the energy functional, $\delta E[\rho({\bf r})] = 0$, needs to be constrained by the $N$-representability of the density which is optimized. Lagrange's method of undetermined multipliers is a very convenient approach for constrained minimization problems. In this method we represent the constraints in such a way that their value is exactly zero when they are satisfied. In our case, the $N$-representability constraint can be represented as:

\begin{displaymath}
constraint = \int \rho({\bf r}) d {\bf r} - N = 0
\end{displaymath} (68)

These constraints are then multiplied by undetermined constants and added to the minimized function or functional:
\begin{displaymath}
E[\rho({\bf r})] - \mu \left[\int \rho({\bf r}) d {\bf r} - N\right]
\end{displaymath} (69)

where $\mu$ is an as yet undetermined Lagrange multiplier. Now we look for the minimum of this expression by requiring that its differential be equal to zero (a necessary condition of a minimum):
\begin{displaymath}
\delta \left\{ E[\rho({\bf r})] - \mu \left[\int \rho({\bf r}) d {\bf r} - N\right]\right\} = 0
\end{displaymath} (70)

Solving this differential equation will provide us with a prescription for finding a minimum which satisfies the constraint. In our case it leads to:
\begin{displaymath}
\delta E[\rho({\bf r})] - \mu \delta \left\{ \int \rho({\bf r}) d {\bf r}\right\} = 0
\end{displaymath} (71)

since $\mu$ and $N$ are constants. Using the definition of the differential of the functional (see, e.g., Parr & Yang, 1989):
\begin{displaymath}
F[f + \delta f] - F[f] = \delta F= \int \frac{\delta F}{\delta f(x)}\delta f(x) dx
\end{displaymath} (72)

and the fact that differential and integral signs may be interchanged, we obtain
\begin{displaymath}
\int \frac{\delta E[\rho({\bf r})]}{\delta \rho({\bf r})}\delta \rho({\bf r}) d {\bf r}
- \mu \int \delta \rho({\bf r}) d {\bf r} = 0
\end{displaymath} (73)

Since integration runs over the same variable and has the same limits, we can write both expressions under the same integral:


\begin{displaymath}
\int \left\{ \frac{\delta E[\rho({\bf r})]}{\delta \rho({\bf r})}
- \mu \right\} \delta \rho({\bf r}) d {\bf r} = 0
\end{displaymath} (74)

which provides the condition for constrained minimization and defines the value of the Lagrange multiplier at the minimum. It is also expressed here via the external potential from equation (65):
\begin{displaymath}
\mu = \frac{\delta E[\rho({\bf r})]}{\delta \rho({\bf r})}
= \hat{V}_{ext}({\bf r}) + \frac{\delta F_{HK}[\rho({\bf r})]}{\delta \rho({\bf r})}
\end{displaymath} (75)
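A toy example may make the machinery of equations (68)-(75) concrete. For the artificial functional $E[\rho]=\int [\rho v + \rho^2] dx$ (chosen purely for illustration), the condition $\delta E/\delta\rho = v(x) + 2\rho(x) = \mu$ can be solved in closed form, with $\mu$ fixed by the normalization constraint:

\begin{verbatim}
import numpy as np

# Toy sketch of constrained minimization with a Lagrange multiplier:
# minimize E[rho] = int (rho*v + rho^2) dx subject to int rho dx = N.
# Euler-Lagrange condition: v(x) + 2 rho(x) = mu (compare equation (75)).
x = np.linspace(-1.0, 1.0, 2001)
v = x**2                                     # an arbitrary "external potential"
N = 1.0

# the constraint fixes the multiplier: int (mu - v)/2 dx = N
mu = (2.0*N + np.trapz(v, x)) / np.trapz(np.ones_like(x), x)
rho = 0.5*(mu - v)                           # the constrained minimizer

print(mu)                                    # ~1.333, the Lagrange multiplier
print(np.trapz(rho, x))                      # ~1.0: constraint satisfied
print(np.allclose(v + 2.0*rho, mu))          # True: dE/drho = mu everywhere
\end{verbatim}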

Density functional theory gives a firm definition of the chemical potential $\mu$, and leads to several important general conclusions. For a review, please refer to Parr & Yang (1989), chapters 4 and 5.

Kohn and Sham Method
The above equations provide a prescription for minimizing the energy by changing the corresponding density. Unfortunately, the expression relating the kinetic energy to the density is not known with satisfactory accuracy. The current expressions, even those improved upon from the original Thomas-Fermi theory, are quite crude and unsatisfactory for atoms and molecules in particular. On the other hand, the kinetic energy is easily calculated from the wave function, provided that it is known. For that reason, Kohn & Sham (1965) proposed an ingenious method of marrying the wave function and density approaches. They repartitioned the total energy functional into the following parts:
\begin{displaymath}
E[\rho] = T_0[\rho] + \int \left[\hat{V}_{ext}({\bf r}) + \hat{U}_{cl}({\bf r})\right]\rho({\bf r})d{\bf r} + E_{xc}[\rho]
\end{displaymath} (76)

where $T_0[\rho]$ is the kinetic energy of electrons in a system which has the same density $\rho$ as the real system, but in which there are no electron-electron interactions. This is frequently called a system of noninteracting electrons, though that term may be misunderstood, since the electrons still interact with the nuclei.
\begin{displaymath}
\hat{U}_{cl}({\bf r})=\int \frac{\rho({\bf r}^\prime)}{\vert{\bf r}^\prime - {\bf r}\vert}d{\bf r}^\prime
\end{displaymath} (77)

is a pure Coulomb (``classical'') interaction between electrons. It includes electron self-interaction explicitly, since the corresponding energy is
\begin{displaymath}
E_{cl}[\rho] = \int \int \frac{\rho({\bf r}^\prime) \rho({\bf r})}{\vert{\bf r}^\prime - {\bf r}\vert}d {\bf r} d{\bf r}^\prime
\end{displaymath} (78)

and it represents the interaction of $\rho$ with itself. $\hat{V}_{ext}({\bf r})$ is the external potential, i.e., the potential coming from the nuclei:
\begin{displaymath}
\hat{V}_{ext} = \sum_{\alpha} \frac{-Z_\alpha}{\vert{\bf R}_\alpha - {\bf r}\vert}
\end{displaymath} (79)

The last functional, $E_{xc}[\rho]$, called the exchange-correlation energy, is defined by this equation. $E_{xc}[\rho]$ includes all the energy contributions which were not accounted for by the previous terms, i.e., the electron exchange, the electron correlation, the portion of the kinetic energy needed to correct $T_0[\rho]$ to the true kinetic energy of the real system, and the correction for the self-interaction introduced by the classical Coulomb term. In fact, all the difficult things were ``swept under the carpet'' in this functional. However, better and better approximations for this functional are being published. To finish the derivation of the Kohn-Sham equations, let us assume for a while that we know this energy functional reasonably well. In a similar fashion as was done for the equation defining the chemical potential (75), we may apply the variational principle and obtain:
\begin{displaymath}
\mu = \frac{\delta E[\rho({\bf r})]}{\delta \rho({\bf r})} =
\frac{\delta T_0[\rho({\bf r})]}{\delta \rho({\bf r})}
+ \hat{V}_{ext}({\bf r}) + \hat{U}_{cl}({\bf r}) + \frac{\delta E_{xc}[\rho({\bf r})]}{\delta \rho({\bf r})}
\end{displaymath} (80)

There is another trick which can be done with this equation, namely:
\begin{displaymath}
\mu = \frac{\delta E[\rho({\bf r})]}{\delta \rho({\bf r})} =
\frac{\delta T_0[\rho({\bf r})]}{\delta \rho({\bf r})} +
\hat{V}_{eff}({\bf r})
\end{displaymath} (81)

where we lumped together all terms, except our noninteracting electron kinetic energy, into an effective potential depending upon ${\bf r}$:
\begin{displaymath}
\hat{V}_{eff}({\bf r}) = \hat{V}_{ext}({\bf r}) + \hat{U}_{cl}({\bf r})
+ \hat{V}_{xc}({\bf r})
\end{displaymath} (82)

where the exchange correlation potential is defined as a functional derivative of exchange correlation energy:
\begin{displaymath}
\hat{V}_{xc}({\bf r}) = \frac{\delta E_{xc}[\rho({\bf r})]}{\delta \rho({\bf r})}
\end{displaymath} (83)

The form of equation (81) invites a solution in the form of a Schrödinger equation for noninteracting particles:
\begin{displaymath}
\left[ - \frac{1}{2}\nabla^2_i + \hat{V}_{eff}({\bf r})\right]\phi_i^{KS}({\bf r}) = \epsilon_i \phi_i^{KS}({\bf r})
\end{displaymath} (84)

Yes!!! It is very similar to the eigenequation of the Hartree-Fock method, with one difference: it is much simpler. The Fock operator in equation (48) contains a potential which is nonlocal, i.e., different for each electron. The Kohn-Sham operator depends only on ${\bf r}$, and not upon the index of the electron; it is the same for all electrons. The Kohn-Sham orbitals, $\phi_i^{KS}({\bf r})$, which are quite easily obtained from this equation, can be used immediately to compute the total density:
\begin{displaymath}
\rho({\bf r}) = \sum_{i=1}^N \vert\phi_i^{KS}({\bf r})\vert^2
\end{displaymath} (85)

which can be used to calculate an improved potential $\hat{V}_{eff}({\bf r})$, i.e., to start a new cycle of the self-consistent field procedure. The density can also be used to calculate the total energy from equation (76), in which the kinetic energy $T_0[\rho]$ is calculated from the corresponding orbitals, rather than from the density itself:
\begin{displaymath}
T_0[\rho] = -\frac{1}{2} \sum_{i=1}^N \left<\phi_i^{KS}\vert\nabla^2\vert\phi_i^{KS}\right>
\end{displaymath} (86)

and the rest of the total energy as:
\begin{displaymath}
V_{eff}[\rho] = \int \hat{V}_{eff}({\bf r})\rho({\bf r})d {\bf r}
\end{displaymath} (87)

In practice, total energy is calculated more economically using orbital energies $\epsilon_i$ as:
\begin{displaymath}
E[\rho] = \sum_{i=1}^{N}\epsilon_i - \frac{1}{2} \int \int
\frac{\rho({\bf r}^\prime)\rho({\bf r})}{\vert{\bf r}^\prime - {\bf r}\vert} d {\bf r} d{\bf r}^\prime
- \int \hat{V}_{xc}({\bf r})\rho({\bf r}) d {\bf r} + E_{xc}[\rho]
\end{displaymath} (88)
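
To make equations (84)-(86) concrete, here is a toy one-dimensional sketch: it diagonalizes a finite-difference Hamiltonian for a given effective potential, fills the lowest orbitals, and forms the density and the noninteracting kinetic energy. The harmonic $\hat{V}_{eff}$ and the two-electron filling are arbitrary choices made only for the demonstration; this is not a self-consistent calculation.

\begin{verbatim}
# Toy 1D analogue of the Kohn-Sham eigenproblem (84):
# [-1/2 d^2/dx^2 + V_eff(x)] phi_i = eps_i phi_i, by finite differences.
import numpy as np

n, L = 400, 20.0
x = np.linspace(-L/2, L/2, n)
h = x[1] - x[0]
v_eff = 0.5 * x**2                        # placeholder effective potential

# -1/2 * second-difference operator as a matrix
T = -0.5 * (np.diag(np.ones(n-1), 1) - 2.0*np.eye(n)
            + np.diag(np.ones(n-1), -1)) / h**2
eps, phi = np.linalg.eigh(T + np.diag(v_eff))
phi /= np.sqrt(h)                         # normalize: sum |phi|^2 h = 1

n_occ = 2                                 # assume two occupied orbitals
rho = (phi[:, :n_occ]**2).sum(axis=1)                          # eq. (85)
t0 = sum(phi[:, i] @ T @ phi[:, i] * h for i in range(n_occ))  # eq. (86)
\end{verbatim}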

It is a popular misconception to look at this method as describing noninteracting electrons moving in a potential given by the nuclei. In fact, they move in an effective potential $\hat{V}_{eff}({\bf r})$ which includes the electron interaction, though in an artificial way. While this is more philosophical than physical: when you look at the form of $\hat{V}_{eff}({\bf r})$, which in the Kohn-Sham equations takes the role of the ``external potential'', the electron-electron interaction is replaced by an interaction with some medium which mimics it. This medium actually, in a way, exaggerates the interaction between electrons, which is evidenced by the fact that the correction which needs to be added to $T_0$ is positive, i.e., the ``noninteracting electrons'' move slower than the real, interacting ones.

Implementations of Kohn and Sham method
The derivation of the Kohn-Sham equations presented here is close to the original presentation, in which a nonpolarized electron density is used and the occupation numbers of the Kohn-Sham orbitals are assumed to be one. However, extensions exist both for polarized spin densities (i.e., different orbitals for spin-up and spin-down electrons) and for nonintegral occupation numbers in the range $(0,1)$. For a review see Parr & Yang (1989) and Salahub et al (1994). Several uneasy questions can be asked here. What are the Kohn-Sham orbitals and energies? Formally, they are artifacts with no real physical significance. However, they are being used, and it seems that they are quite close to the Hartree-Fock orbitals (Zhao & Parr, 1993). Also, the Kohn-Sham formalism can be extended to fractional occupation numbers $0 \le n_i \le 1$ (Janak, 1978; Perdew & Zunger, 1981; Parr & Yang, 1989). The orbital energies $\epsilon_i$ are in this case:
\begin{displaymath}
\epsilon_i = \frac{\partial E}{\partial n_i}
\end{displaymath} (89)

and one application may be to integrate the energy from $N-1$ to $N$ electrons and calculate the ionization potential. The derivatives of the energy versus occupation numbers also provide other response functions and can provide rigorous definitions of chemical potential, electronegativity, softness, and hardness (Parr, 1994; Parr & Yang, 1989; Neshev & Proynov, 1993).
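
For example, since along the path from the cation to the neutral molecule the derivative of equation (89) for the highest occupied orbital is $\epsilon_{HOMO}(n) = \partial E/\partial n$, the ionization potential can be written (only as an illustration of this use of equation (89); $n$ is the occupation of the highest orbital):
\begin{displaymath}
IP = E(N-1) - E(N) = - \int_0^1 \epsilon_{HOMO}(n)\, dn
\end{displaymath}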

The first implementations of the Kohn-Sham method used local approximations to the exchange-correlation energy. The appropriate functionals were taken from data on the homogeneous electron gas. There were two variants of the method: spin unpolarized (LDF/LDA - Local Density Functional/Approximation) and spin polarized (LSD - Local Spin Density), whose arguments are both the $\alpha$ and $\beta$ electron densities, rather than the total density.

For historical reasons, the exchange-correlation energy was partitioned into two parts:

\begin{displaymath}
E_{xc}[\rho] = E_x[\rho] + E_c[\rho]
\end{displaymath} (90)

the exchange energy and the correlation energy. This partition is quite arbitrary, since exchange and correlation have slightly different meanings than in ab initio approaches. The exchange energy in LDF/LSD was approximated with the homogeneous gas exchange result given by equation (59) with $\alpha = 2/3$. The correlation energy was expressed as:
\begin{displaymath}
E_c[\rho] = \int \rho({\bf r}) \epsilon_c[\rho_\uparrow({\bf r}),\rho_\downarrow({\bf r})]d {\bf r}
\end{displaymath} (91)

where $\epsilon_c[\rho_\uparrow({\bf r}),\rho_\downarrow({\bf r})]$ is the correlation energy per electron in a gas with spin densities $\rho_\uparrow({\bf r})$ and $\rho_\downarrow({\bf r})$. This function is not known analytically, but it is constantly improved on the basis of quantum Monte Carlo simulations and fitted to analytical expansions (Vosko et al, 1980; von Barth & Hedin, 1979). The local functionals derived from electron gas data work surprisingly well, considering that they substantially underestimate the exchange energy (by as much as 15%) and grossly overestimate the correlation energy, sometimes by as much as 100%. The error in exchange is, however, larger than the correlation error in absolute values. LSD/LDF is known to overbind normal atomic bonds; on the other hand, it produces hydrogen bonds which are too weak.
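
As an illustration of how such local functionals are actually evaluated, here is a small numpy sketch integrating local exchange and correlation energies over a grid for the spin-compensated case. The Dirac form used for exchange corresponds to $\alpha = 2/3$; the simple Wigner formula is only a crude stand-in for the fitted $\epsilon_c$ (real parametrizations such as VWN are considerably more elaborate). The grid and density arrays are assumed given.

\begin{verbatim}
# Sketch: local (LDA) exchange and correlation energies by quadrature,
# spin-compensated case; rho and weights assumed given on a grid.
import numpy as np

def lda_exchange(rho, weights):
    # Dirac exchange: E_x = -(3/4)(3/pi)^(1/3) * integral rho^(4/3) dr
    return -0.75 * (3.0/np.pi)**(1.0/3.0) * np.sum(weights * rho**(4.0/3.0))

def wigner_correlation(rho, weights):
    # Wigner form eps_c(r_s) = -0.44/(r_s + 7.8), a crude stand-in
    # for fitted electron-gas parametrizations such as VWN (cf. eq. 91)
    r_s = (3.0 / (4.0*np.pi*np.maximum(rho, 1e-30)))**(1.0/3.0)
    return np.sum(weights * rho * (-0.44 / (r_s + 7.8)))
\end{verbatim}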

Early attempts to improve functionals by the GEA (Gradient Expansion Approximation), in which $E_{xc}[\rho]$ was expanded in a Taylor series versus $\rho$ and truncated at a linear term, did not improve results much (see, e.g., Langreth & Vosko, 1990). Only the GGA (Generalized Gradient Approximation) provided notable improvements. The expansion here is not a simple Taylor expansion, but tries to find the right asymptotic behaviour and the right scaling for the usually nonlinear expansion. These enhanced functionals are frequently called nonlocal or gradient corrections, since they depend not only upon the density, but also upon the magnitude of the gradient (i.e., first derivative) of the density at a given point. While the term corrections has a historical meaning, from when the corrections were added on top of local density functionals, it is probably no longer correct, since in modern nonlocal functionals the value of the density and its gradient are integral parts of the formula. There are many different corrections, and it will take some time before their respective merits, accuracy, and best domains of application are established. For a review of the existing functionals see, e.g.: Clementi (1994), Johnson et al (1993), Seminario & Politzer (1995), Ziegler (1991). The most widely used local potentials are: Slater for exchange (Slater, 1974) and VWN for correlation (Vosko et al, 1980). Probably the most frequently used today are the gradient-corrected exchange functional of Becke and the gradient-corrected correlation functionals of Perdew and of Lee, Yang & Parr.

Besides different exchange and correlation functionals, there is recent interest in the ACM (Adiabatic Connection Method) (see: Harris, 1984; Becke, 1993), which includes at least partially the exact exchange from a Hartree-Fock type calculation, or exact exchange computed from the Kohn-Sham orbitals. This method looks very promising, but its application is not without cost. In the worst case, the ACM calculation of exact exchange via exchange integrals (see equation 42) scales as $N^4$.
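
Schematically, and only as an illustration of the idea (the precise coupling-strength integrand is defined in Harris, 1984, and Becke, 1993), the adiabatic connection expresses $E_{xc}$ as an integral over the interelectronic coupling strength $\lambda$, and Becke's original ``half-and-half'' model approximates this integral by its two endpoints:
\begin{displaymath}
E_{xc} = \int_0^1 E_{xc}^{\lambda}\, d\lambda \approx
\frac{1}{2} E_{xc}^{\lambda=0} + \frac{1}{2} E_{xc}^{\lambda=1}
= \frac{1}{2} E_{x}^{exact} + \frac{1}{2} E_{xc}^{LSD}
\end{displaymath}
since at $\lambda=0$ the electrons do not interact, and the exchange-correlation energy reduces to the exact exchange of the Kohn-Sham determinant.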

Most molecular DFT codes use basis functions, with the notable exception of the totally basis-free NUMOL (Becke & Dickinson, 1990) program. A quite impressive variety of basis sets is used. Since most energy components have to be calculated via numerical integration, the use of contracted gaussians as a basis is not that essential. But most programs use them, as this allows substantial code reuse from the well developed traditional ab initio techniques. The well known deMon (Salahub et al, 1992), DGauss (Andzelm & Wimmer, 1992), and DeFT (a program similar to deMon written by Alain St-Amant et al. and available at
ftp://www.ccl.net/pub/chemistry/software/SOURCES/FORTRAN/DeFT)
all use gaussian basis functions. Some traditional ab initio packages also include options to perform DFT calculations: ACES2 (Bartlett & Stanton, 1994), Cadpac5 (Cambridge Analytic Derivative Package, e.g., Murray et al, 1993), Gaussian (Gaussian, Inc.), Spartan (Wavefunction, Inc.), and, I hear, many others.

AMOL, recently renamed ADF (Amsterdam Density Functional) (Boerrigter et al, 1988), uses a Slater-type basis.

The DVM (Ellis & Painter, 1970) and DMol (Delley, 1995) programs use numerical basis sets, in which the radial part is given as spline functions (the angular part is taken as the appropriate spherical harmonics).

There are also programs for extended systems, like Corning (Teter et al, 1989), Crystal94 (Univ. Turin & SERC Daresbury Lab), and Wien95 (Blaha et al, 1995), which use LAPW (Linearized Augmented Plane Waves) as basis functions. NOTE: This stuff was written in 1996 - there are many new codes and new versions out there.

Typical Program Organization for SCF-KS equations
The single geometry SCF cycle or geometry optimization involves the following steps (a schematic skeleton in code follows the list):

1. Start with a density (for the 1st iteration, a superposition of atomic densities is typically used).
2. Establish a grid for the charge density and the exchange-correlation potential.
3. Compute the KS matrix elements (equivalent to the ${\bf F}$ matrix of the Hartree-Fock method in equation (57)) and the overlap integral matrix.
4. Solve the equations for the expansion coefficients to obtain the KS orbitals.
5. Calculate the new density $\rho = \sum_{i=occ} \vert\phi_i({\bf r})\vert^2$.
6. If the density or energy changed substantially, go to step 1.
7. If the SCF cycle converged and geometry optimization is not requested, go to step 10.
8. Calculate the derivatives of the energy vs. atom coordinates, and update the atom coordinates. This may require denser integration grids and recomputing of the Coulomb and exchange-correlation potentials.
9. If the gradients are still large, or the positions of the nuclei moved appreciably, go to step 1.
10. Calculate properties and print results.
Of course, there may be other variants of this method (e.g., when one computes vibrational frequencies from the knowledge of gradients and energies only).
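
The steps above can be summarized in a schematic skeleton. Every routine named here is a placeholder for the machinery described in the text, not the API of any particular program:

\begin{verbatim}
# Skeleton of the SCF-KS cycle listed above; all called routines are
# placeholders for the corresponding steps, not real library functions.
def scf_kohn_sham(molecule, basis, conv=1.0e-6, max_iter=100):
    rho = superpose_atomic_densities(molecule)               # step 1
    grid = make_integration_grid(molecule)                   # step 2
    e_old = None
    for iteration in range(max_iter):
        ks, s = build_ks_and_overlap(rho, grid, basis)       # step 3
        eps, c = solve_secular_equations(ks, s)              # step 4
        rho = density_from_orbitals(c, molecule.n_electrons) # step 5
        e_new = total_energy(rho, eps, grid)
        if e_old is not None and abs(e_new - e_old) < conv:  # step 6
            return e_new, rho                                # converged
        e_old = e_new
    raise RuntimeError("SCF did not converge")
\end{verbatim}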

It is quite popular to limit the expense of numerical integration during the SCF cycle. This is frequently done by fitting auxiliary functions to the charge density and the exchange-correlation potential, which allows much faster integral evaluation. These auxiliary fitting functions are usually uncontracted gaussians (though quite different from the atomic basis sets) for which the integrals required for the KS matrix can be calculated analytically. Different auxiliary sets are used for fitting the charge density and the exchange-correlation potential (see e.g., Dunlap & Rösch, 1990). The need for fitting has recently been questioned (see e.g., Johnson, 1995), since it scales as $N^3$ even for very large systems; however, it is still very popular in DFT codes. The fitting procedures are in general not sparse, while for large molecules many contributions coming from distant portions of the molecule may be neglected, leading to less steep scaling with molecular size.
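
A least-squares fit of the density to auxiliary functions can be sketched as follows. The simple grid-based overlap metric is an idealization (actual codes fit in the Coulomb metric with analytically evaluated integrals), and all arrays are assumed given:

\begin{verbatim}
# Sketch: fit rho(r) ~ sum_k c_k g_k(r) to auxiliary functions g_k
# by linear least squares in a grid-based overlap metric.
import numpy as np

def fit_density(rho, aux, weights):
    """rho: (n,) density on the grid; aux: (m,n) auxiliary functions
    evaluated on the grid; weights: (n,) quadrature weights."""
    A = aux @ (aux * weights).T      # overlap matrix <g_j | g_k>
    b = aux @ (weights * rho)        # right-hand side <g_j | rho>
    return np.linalg.solve(A, b)     # fitting coefficients c_k
\end{verbatim}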

Early DFT codes were impaired by the lack of analytical gradients. Currently, expressions for first and second derivatives exist (see e.g.: Dunlap & Andzelm, Komornicki & Fitzgerald, 1993) and are implemented in many programs, thus facilitating geometry optimization and vibrational frequency calculations.

Performance of DFT
This will be a very short list of DFT applications, as there are many excellent reviews on this topic (see e.g.: Labanowski & Andzelm, 1991; Parr & Yang, 1989; Seminario & Politzer, 1995; Ziegler, 1991; Bartolotti & Flurchick, 1996; St-Amant, 1996).

The G1 database of Pople and coworkers is a benchmark for the accuracy of traditional ab initio methods. The database contains 55 molecules for which experimental values of atomization energies are known to within 1 kcal/mol. Curtiss et al (1991) achieved a 1.2 kcal/mol mean absolute error for these 55 atomization energies using the G2 procedure, which is a quite involved prescription incorporating higher order correlated methods. Becke (1992) was able to reproduce the values in this database with a mean absolute error of 3.7 kcal/mol using his NUMOL program with gradient-corrected functionals. This result was further improved by Becke (1993) to 2.4 kcal/mol when a portion of the electron exchange entering the exchange-correlation energy $E_{xc}$ was calculated exactly from the Kohn-Sham orbitals. While the error in DFT is still considered too big, these results were obtained with a method which is substantially less computationally demanding than the correlated ab initio procedures used by Pople and coworkers. Obviously, the errors refer to absolute atomization energies, which in general are very difficult to calculate with good accuracy (for a review see, e.g., Hehre et al, 1986). Relative differences are usually reproduced much better by DFT methods.

Even without gradient corrections, DFT results for bond dissociation energies are usually much better than the Hartree-Fock results (which routinely underbind), though they have an overbinding tendency. The LDA results are approximately of MP2 quality. The inclusion of gradient corrections provides bond dissociation energies which rival MP4 and CC results in accuracy.

Molecular geometries, even with LSD, are much better than the corresponding HF results and are of MP2 quality. However, LSD fails to correctly treat hydrogen bonding. This deficiency is removed when one uses gradient-corrected DFT approaches. Quite astonishingly, DFT provides excellent results for molecules which are notoriously difficult for traditional ab initio approaches, like FOOF, FON, and metalorganic or inorganic moieties. There seems to be a funny regularity: ``If something does not work with ab initio, try it with DFT, and vice versa''.

Transition states of organic molecules are frequently not reproduced well with ``pure'' DFT. However, it seems that an admixture of exact exchange (see Becke, 1993) via the ACM dramatically improves the problem cases. DFT is, however, a method of choice for transition states in metalorganic reactions. These systems are notoriously difficult to treat even with high quality ab initio methods, which also frequently have convergence problems.

Vibrational frequencies are well reproduced even by LSD, though gradient corrections improve agreement with experiment even further.

Ionization potentials, electron affinities, and proton affinities are reproduced fairly well within gradient corrected DFT.

Recently, there has been much interest in using DFT for high spin species, since preliminary results are very promising. On the other hand, the good performance of DFT in this field comes as a surprise, since high multiplets are poorly described by a single determinant wave function. Even if for all the wrong reasons, with a KS wave function of broken symmetry, the multiplet splittings are quite well reproduced by DFT.

The scope of applications for DFT grows rapidly, with calculations of new molecular properties being added to actively developed software. Recent extensions include parameters for NMR and ESR spectroscopy, diamagnetic properties, polarizabilities, relativistic calculations, and others.

The DFT methodology is still far from mature. It is encouraging that several groups which were primarily focused on the development of traditional ab initio approaches are now actively working in the DFT area. This will bring new methodological developments, as well as more careful assessment of applicability. With all the hopes which DFT brings, one has to keep in mind that it is primarily a ground-state oriented method, and cannot compete with semiempirical and correlated ab initio methods in calculations concerning excited states.

Advantages of DFT in biological chemistry



