
From:  Christoph.Niedermeier@physik.uni-muenchen.de
Date:  Mon, 7 Jun 93 19:07:44 MET DST
Subject:  Summary of hierarchical multipole methods


Dear netters:

In my original posting, I asked for comments/references on
hierarchical methods for efficiently computing long range
electrostatic interactions (mainly for protein simulations):

> Hi everybody,
>
> I'm working as a PhD student in the field of MD simulations
> of proteins with special interest in electrostatic interactions.
> Currently I am developing a method for efficient computation
> of long range electrostatic interactions in MD simulations
> of proteins.
>
> The most efficient methods existing so far are, to my knowledge,
> hierarchical multipole algorithms which scale with O(N log N)
> (N being the number of atoms in the system). A special variant
> of this type of algorithm, the so-called Fast Multipole Method (FMM)
> (Greengard & Rokhlin), even scales with O(N).
>
> My question to the list is the following:
> Has anybody worked with this type of algorithm and/or knows of
> other people who did? If so, could you please share your
> experiences with / opinions on these methods and, if available,
> supply a list of references?
>
> I will post a summary of responses to the list.
>
> Thank you a lot
>
>    Chris
 
I received quite a few responses, which showed that many people
are interested in this field. I will therefore try to give a short
overview of the variety of methods and algorithms. This overview
consists mainly of information taken from the responses, because
I have not yet studied the literature myself.

Many people gave references which I consider a valuable source of
information in themselves, since some of them include an abstract.
I append a complete (and duplicate-free) list of references at the
end of this posting.

In my summary I will refer to statements and contributions from various
people. Since some readers may want to contact them directly, I append
a list of participants at the end of the summary with e-mail addresses,
phone numbers etc., as far as I know them.

Now for the summary:

As mentioned by Roger E. Critchlow Jr., conventional
MD codes treat long range electrostatic interactions by truncation
at a certain cut-off radius. This is partly due to uncertainties in
the dielectric `constant', i.e. the screening behaviour of the system.
However, it is also due to practical considerations, because the
computational effort for evaluating all pairwise electrostatic
interactions grows very rapidly with the size of the system
(order O(N^2)). This problem is less critical for Van der Waals
interactions, because those decrease rapidly with increasing
distance and can therefore be neglected at large distances.
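For reference, the direct evaluation that these methods aim to replace can be sketched in a few lines (illustrative only; the function and variable names are mine, not from any particular MD code):

```python
import numpy as np

def coulomb_energy_direct(pos, q, cutoff=None):
    """Direct pairwise Coulomb energy (units with 1/(4 pi eps0) = 1).

    pos : (N, 3) array of positions; q : (N,) array of partial charges.
    Every pair is visited once, so the cost grows as O(N^2).
    If cutoff is given, pairs beyond that distance are discarded,
    as in conventional truncation schemes.
    """
    n = len(q)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(pos[i] - pos[j])
            if cutoff is None or r <= cutoff:
                energy += q[i] * q[j] / r
    return energy
```

The double loop over N(N-1)/2 pairs is exactly the quadratic cost that the hierarchical methods below avoid.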

Efficient algorithms have been developed which take into account
all long range contributions but avoid the rapid increase in
computational effort. As I understand it, the first algorithm of
this kind was developed by Barnes & Hut (see references)
for N-body problems in astrophysics (like stellar dynamics).
The algorithm uses a hierarchy of cubic grids to subdivide the
system in a tree-like manner. In each cubic cell a multipole
expansion of the Coulombic or gravitational interactions is
performed. These multipole expansions are used to calculate the
interactions of all particles with each other with an efficiency
of order O(N log(N)).
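A minimal sketch of this tree scheme (gravitational, monopole-only, as in the original Barnes-Hut paper) may make the idea concrete. This is illustrative only; it assumes positive masses, non-coincident particles, and the usual opening-angle criterion size/distance < theta:

```python
import numpy as np

class Cell:
    """One cubic cell of the Barnes-Hut octree."""
    def __init__(self, center, size):
        self.center, self.size = center, size
        self.m = 0.0              # total mass in the cell
        self.com = np.zeros(3)    # center of mass
        self.n = 0                # number of particles in the cell
        self.children = []        # eight subcells, once subdivided
        self.particle = None      # (pos, mass) if leaf with one particle

def insert(cell, pos, m):
    """Insert a particle, subdividing occupied leaves on the fly."""
    if cell.n == 0:
        cell.particle = (pos, m)
    else:
        if cell.particle is not None:      # split a singly occupied leaf
            p0, m0 = cell.particle
            cell.particle = None
            _push_down(cell, p0, m0)
        _push_down(cell, pos, m)
    cell.com = (cell.com * cell.m + pos * m) / (cell.m + m)
    cell.m += m
    cell.n += 1

def _push_down(cell, pos, m):
    if not cell.children:                  # create the eight subcells
        h = cell.size / 4
        for dx in (-h, h):
            for dy in (-h, h):
                for dz in (-h, h):
                    c = cell.center + np.array([dx, dy, dz])
                    cell.children.append(Cell(c, cell.size / 2))
    octant = int(4 * (pos[0] > cell.center[0])
                 + 2 * (pos[1] > cell.center[1])
                 + (pos[2] > cell.center[2]))
    insert(cell.children[octant], pos, m)

def accel(cell, x, theta=0.5):
    """Gravitational acceleration at x (G = 1).  Cells whose angular
    size exceeds theta are opened; distant cells are replaced by their
    monopole (center-of-mass) contribution."""
    if cell.n == 0:
        return np.zeros(3)
    if cell.particle is not None:          # leaf: exact contribution
        p, m = cell.particle
        d = p - x
        r = np.linalg.norm(d)
        return np.zeros(3) if r == 0.0 else m * d / r**3
    d = cell.com - x
    r = np.linalg.norm(d)
    if cell.size / r < theta:              # far away: monopole suffices
        return cell.m * d / r**3
    return sum((accel(c, x, theta) for c in cell.children), np.zeros(3))
```

Each force evaluation touches only O(log N) cells for reasonably uniform distributions, which is where the overall O(N log N) scaling comes from; the production algorithms keep higher multipole moments per cell rather than just the center of mass.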

Greengard & Rokhlin (see references) developed the so-called Fast Multipole
Algorithm (FMA) or Fast Multipole Method (FMM). This method uses a
sophisticated scheme for computing the multipole expansions and gathering
up interaction contributions, which reduces the effort to order O(N).
This algorithm exists in a variety of implementations on single processor
and vector machines as well as massively parallel machines. I will mention
just a few:

- Steve Lustig extended the FMA for Morse and Yukawa potentials and
  implemented it on Connection Machines CM-200 and CM-5

- Bill Goddard (see references) and his group at Caltech
  developed the Cell Multipole Method, a variant of the FMA,
  which also includes London-type interactions in the hierarchical
  evaluation scheme.

- Francisco Figueirido inserted an implementation of the Barnes-Hut algorithm
  into the MD package IMPACT

- Andreas Windemuth (see references) inserted an implementation of the FMA
  into his MD simulation package MD, of which a parallel version running
  on a Connection Machine CM-5 also exists. He also cooperated with the
  group of

- John Board (see references) who is  working on implementations
  of the Fast Multipole Algorithm on different parallel platforms.

- Mike Lee implemented the FMA as a Fortran subroutine which will be made
  available in the public domain.

Michael Schaefer pointed out that the FMA becomes more efficient than
direct evaluation of the electrostatic interactions once the system size
exceeds about 2000 particles.

In my own work in this field I try to further reduce the computational
effort by sacrificing some accuracy in forces and energies. We use
a very simple electrostatic description of a protein which considers only
charges and dipoles. The method starts from so-called structural (chemical)
groups which are charged or dipolar, and builds up a hierarchy of interacting
objects from these structural groups by structure-adapted partitioning of
the protein. The error in the electrostatic forces is about 1%, while for
a system of 2000 particles the algorithm is faster by a factor of seven
compared to direct evaluation of all interactions.
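The charge/dipole level of description can be illustrated with a toy sketch (my own illustration, not our actual code): the far field of a structural group is approximated by its total charge and dipole moment about the group center, and for a small dipolar group the truncation error a few group diameters away is indeed well below 1%.

```python
import numpy as np

def group_moments(pos, q):
    """Monopole (total charge) and dipole moment of a structural group,
    taken about the group's geometric center."""
    center = pos.mean(axis=0)
    Q = q.sum()
    p = ((pos - center) * q[:, None]).sum(axis=0)
    return center, Q, p

def potential_far(center, Q, p, x):
    """Potential at x from the group, truncated after the dipole term:
    phi(x) ~ Q/r + (p . d)/r^3, with d = x - center, r = |d|."""
    d = x - center
    r = np.linalg.norm(d)
    return Q / r + np.dot(p, d) / r**3

def potential_direct(pos, q, x):
    """Exact potential: sum over the individual partial charges."""
    return sum(qi / np.linalg.norm(x - pi) for pi, qi in zip(pos, q))
```

For example, a +1/-1 charge pair separated by 0.2 length units, evaluated 5 units away, agrees with the exact sum to a relative error of order 0.1%.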

My personal opinion is that the very high accuracy obtained by the FMA
is not necessary, given the uncertainties in partial charges and
dielectric screening. To capture the basic effects, an accuracy of about
1% may be sufficient for MD simulations. Tests of this hypothesis are
under way.


I want to thank everyone who contributed comments and references to this
discussion and hope it was useful to some of you.


PARTICIPANTS
=============

The following people participated in the discussion:
(random order, e-mail address in parentheses)

Steve Lustig (lustigsr@esvax.dnet.dupont.com)
Polymer Physics, Central Science Division
E.I. du Pont de Nemours & Co, Inc.
Experimental Station, Route 141
Wilmington, DE 19880-0356
(302) 695 - 3899

Andreas Windemuth (windemut@cumbne.bioc.columbia.edu)
Columbia University

Michael Schaefer (schaefer@tammy.harvard.edu)
Harvard University

Roger E. Critchlow Jr. (rec@arris.com),
ARRIS Pharmaceutical Corporation, South San Francisco, CA
415.737.1650, 415.737.8590 (fax)

Bruce Bush (Bruce_Bush@merck.com)
Merck & Co., Inc.
Rahway NJ 07065 USA

Teerakiat Kerdcharoen (?)

Graham Hurst (hurst@hyper.com)
Hypercube Inc, 7-419 Phillip St,
Waterloo, Ont, Canada N2L 3X2 (519)725-4040

Michael A. Lee (?)

Tom Simonson (simonson@zinfandel.u-strasbg.fr)

John Nicholas (jb_nicholas@pnl.gov or d3g359@rahman.pnl.gov)
Pacific Northwest Laboratory
Richland, WA 99352, USA

Dr. Robert Q. Topper, PRA (topper@haydn.chm.uri.edu)
Department of Chemistry
University of Rhode Island
Kingston, RI 02881 USA
(401) 792-2597 [office]
(401) 792-5072 [FAX]

Alan M. Mathiowetz (amm@kodak.com)
Sterling Winthrop, Inc.

Francisco Figueirido (figuei@lutece.rutgers.edu)

Thomas C. Bishop (bishop@lisboa.ks.uiuc.edu)
Theoretical Biophysics
Beckman Institute
University of Illinois
405 N Mathews, Urbana, IL 61801
Tel: (217)-244-1851

Christoph Niedermeier (Christoph.Niedermeier@Physik.Uni-Muenchen.DE)
Theoretische Biophysik
Institut fuer medizinische Optik
Ludwigs-Maximilian-Universitaet Muenchen
Theresienstrasse 37
80333 Muenchen, Germany
phone: ++49-89/2394-4580, fax: ++49-89/2805248


REFERENCES
===========

The references given are collected from responses of different people
and from my own BibTeX database. I did not try to bring them into any
particular order. Most of them are in BibTeX style but I did not make
the cite keys unique. However, the references themselves should be
unique. Some of the references include an abstract which might be useful
to potential readers.


@article{Greengard87a,
   author = {L. Greengard and V. Rokhlin},
   journal = {J.\ Comp.\ Phys.},
   pages = {325-348},
   title = {A Fast Algorithm for Particle Simulations},
   volume = {73},
   year = {1987},
}

@techreport{Greengard87b,
   author = {L. Greengard and V. Rokhlin},
   address = {Yale University, New Haven},
   institution = {YALEU/DCS},
   number = {RR-515},
   title = {Rapid Evaluation of Potential Fields in Three Dimensions},
   type = {Research Report},
   year = {1987},
}

@techreport{Greengard88,
   author = 	{L. Greengard and V. Rokhlin},
   title = 	{On the Efficient Implementation of the Fast Multipole Algorithm},
   institution= {Yale University, Department of Computer Science},
   address = 	{New Haven},
   type = 	{Research Report},
   number = 	{RR-602},
   month = 	{Feb.},
   year = 	1988,
   keywords = 	{fast multipole method, n-body problem, efficient implementation
of the translation operator of the multipole expansion},
}

@article{Greengard89,
   author = {L. Greengard and V. Rokhlin},
   journal = {Chem. Scripta},
   pages = {139-144},
   title = {On the Evaluation of Electrostatic Interactions in Molecular
Modeling},
   volume = {29A},
   year = {1989},
}

@article{Saito92,
	author = 	{M. Saito},
	title = 	{Molecular Dynamics Simulations of Proteins in Water without the
Truncation of Long-Range Coulomb Interactions},
	journal = 	{Molecular Simulation},
	volume = 	8,
	pages = 	{321-333},
	year = 		1992,
	keywords = 	{FMM, hierarchical multipole algorithm, molecular dynamics, protein
in water},
}

@article{Kuwajima88,
   author = {S. Kuwajima and A. Warshel},
   journal = {J.\ Chem.\ Phys.},
   pages = {3751-3759},
   title = {The Extended Ewald Method: A General Treatment of Long-Range
Electrostatic Interactions in Microscopic Simulations},
   volume = {89},
   year = {1988},
}


@Article{appel,
  author =       "Andrew W. Appel",
  title =        "An efficient program for many body simulations",
  journal=	 "SIAM J. Sci. Stat. Comput.",
  volume =       "6",
  pages =        "85--103",
  year =         "1985",
  abstract =     "The simulation of $N$ particles interacting in a
		 gravitational force field is useful in astrophysics, but such
		 simulations become costly for large $N$. Representing the
		 universe as a tree structure with the particles at the leaves
		 and internal nodes labeled with the centers of mass of their
		 descendants allows several simultaneous attacks on the
		 computation time required by the problem. These approaches
		 range from algorithmic changes (replacing an $O(N^2)$
		 algorithm with an algorithm whose time-complexity is believed
		 to be $O(N\log N)$) to data structure modifications,
		 code-tuning, and hardware modifications. The changes reduced
		 the running time of a large problem ($N=10000$) by a factor
		 of four hundred. This paper describes both the particular
		 program and the methodology underlying such speedups.",
}

@Article{barnes:hut,
  author =       "Josh Barnes and Piet Hut",
  title =        "A hierarchical ${O}({N}\log {N})$ force-calculation
                 algorithm",
  journal =      "Nature",
  volume =       "324",
  pages =        "446--449",
  year =         "1986",
  abstract = "Until recently the gravitational $N$-body problem has been
modelled numerically either by direct integration, in which the computation
needed increases as $N^2$, or by an iterative potential method in which the
number of operations grows as $N\,\log N$. Here we describe a novel method of
directly calculating the forces on $N$ bodies that grows only as $N\,\log N$.
The technique uses a tree-structured hierarchical subdivision of space into
cubic cells, each of which is recursively divided into eight subcells whenever
more than one particle is found to occupy the same cell. This tree is
constructed anew at every time step, avoiding ambiguity and tangling.
Advantages over potential-solving codes are: accurate local interactions;
freedom from geometrical assumptions and restrictions; and applicability to a
wide class of systems, including (proto-)planetary, stellar, galactic and
cosmological ones. Advantages over previous hierarchical tree-codes include
simplicity and the possibility of rigorous analysis of error. Although we
concentrate here on stellar dynamical applications, our techniques of
efficiently handling a large number of long-range interactions and
concentrating computational effort where most needed have potential
applications in other areas of astrophysics as well."
}

@PhDThesis{draghicescu,
  author =      "Draghicescu, Cristina I.",
  title =       "Efficient Algorithms for Particle Methods",
  school =      "The Pennsylvania State University",
  year =        "1991",
  abstract =    "A fast algorithm is presented, which reduces the amount of
work necessary for computing pairwise interactions in a system of $n$
particles from $O(n^2)$ to $O(n(\log n)^p)$, where $p$ depends on the
problem in question. Error and work estimates are given.\par I
illustrate its application to the approximation of the Euler equations
in fluid dynamical simulations using the point vortex method. The
algorithm can be applied for both two- and three-dimensional
simulations; in the first case I show that, with a proper choice of
parameters, the accuracy and stability of the direct method are
preserved.\par Also discussed is the application of the algorithm to the
problem of evaluating interactions in molecular simulations.  A slightly
modified version can be used to reduce the complexity of the integral
equation method for boundary value problems. I implemented the algorithm
for such a problem and provide the numerical results. On a SUN 4 the
algorithm reduces the CPU time required for a calculation with 500,000
points from a month to 15 minutes and is three times faster than the
direct method for as few as 128 particles.",
}

@PhDThesis{salmon,
  author =       "John K. Salmon",
  title =        "Parallel Hierarchical ${N}$-body Methods",
  school =       "California Institute of Technology",
  year =         "1991",
  abstract = "Recent algorithmic advances utilizing hierarchical data
structures have resulted in a dramatic reduction in the time required
for computer simulation of $N$-body systems with long-range
interactions.  Computations which required $O(N^2)$ operations can now
be done in $O(N\,\log N)$ or $O(N)$. We review these tree methods and
find that they may be distinguished based on a few simple features. \par
The Barnes-Hut (BH) algorithm has received a great deal of attention,
and is the subject of the remainder of the dissertation. We present a
generalization of the BH tree and analyze the statistical properties of
such trees in detail.  We also consider the expected number of
operations entailed by an execution of the BH algorithm. We find an
optimal number for $m$, the maximum number of bodies in a terminal cell,
and confirm that the number of operations is $O(N\,\log N)$, even if the
distribution of bodies is not uniform. \par The mathematical basis of
all hierarchical methods is the multipole approximation. We discuss
multipole approximations, for the case of arbitrary, spherically
symmetric, and Newtonian Green's functions. We describe methods for
computing multipoles and evaluating multipole approximations in each of
these cases, emphasizing the tradeoff between generality and algorithmic
complexity. \par $N$-body simulations in computational astrophysics can
require $10^6$ or even more bodies. Algorithmic advances are not
sufficient, in and of themselves, to make computations of this size
feasible. Parallel computation offers, {\em a priori\/}, the necessary
computational power in terms of speed and memory. We show how the BH
algorithm can be adapted to execute in parallel. We use orthogonal
recursive bisection to partition space. The logical communication
structure that emerges is that of a hypercube. A local version of the BH
tree is constructed in each processor by iteratively exchanging data
along each edge of the logical hypercube. We obtain speedups in excess
of 380 on a 512 processor system for simulations of galaxy mergers with
180,000 bodies. We analyze the performance of the parallel version of
the algorithm and find that the overhead is due primarily to
interprocessor synchronization delays and redundant computation.
Communication is not a significant factor."
}

@Article{SL:JStatPhys:91,
  author =      "K. E. Schmidt and Michael A. Lee",
  title =       "Implementing the Fast Multipole Method in Three Dimensions",
  journal =     "J. Stat.\ Phys.{}",
  volume =      "63",
  pages =       "1223--1235",
  year =        "1991",
  abstract =    "The Rokhlin-Greengard fast multipole algorithm for
evaluating Coulomb and multipole potentials has been implemented and
analyzed in three dimensions. The implementation is presented for
bounded charged systems and systems with periodic boundary conditions.
The results include timings and error characterizations.",
}

@Article{Hernquist:JCP:87,
  author =      "Lars Hernquist",
  title =       "Vectorization of Tree Traversals",
  journal =     jcompphys,
  volume =      "87",
  pages =       "137--147",
  year =        "1990",
  abstract =    "A simple method for vectorizing tree searches,
which operates by processing all relevant nodes at the same depth in the
tree simultaneously, is described. This procedure appears to be general,
assuming that gather-scatter operations are vectorizable, but is most
efficient if the traversals proceed monotonically from the root to the
leaves, or {\em vice versa\/}. Particular application is made to the
hierarchical tree approach for computing the self-consistent interaction
of $N$ bodies. It is demonstrated that full vectorization of the
requisite tree searches is feasible, resulting in a factor $\approx$
4--5 improvement in cpu efficiency in the traversals on a CRAY X-MP. The
overall gain in the case of the Barnes-Hut tree code algorithm is a
factor $\approx$ 2--3, implying a net speed-up of $\approx$ 400-500 on a
CRAY X-MP over a VAX 11/780 or SUN 3/50.",
}

@Article{Makino:JCP:87,
  author =      "Junichiro Makino",
  title =       "Vectorization of a Treecode",
  journal =     jcompphys,
  volume =      "87",
  pages =       "148--160",
  year =        "1990",
  abstract =    "Vectorized algorithms for the force calculation
and tree construction in the Barnes-Hut tree algorithm are described.
The basic idea for the vectorization of the force calculation is to
vectorize the tree traversal across particles, so that all particles in
the system traverse the tree simultaneously. The tree construction
algorithm also makes use of the fact that particles can be treated in
parallel. Thus these algorithms take advantage of the internal
parallelism in the $N$-body system and the tree algorithm most
effectively. As a natural result, these algorithms can be used on a wide
range of vector/parallel architectures, including current supercomputers
and highly parallel architectures such as the Connection Machine. The
vectorized code runs about five times faster than the non-vector code on
a Cyber 205 for an $N$-body system with $N=8192$.",
}

@Article{Barnes:JCP:87,
  author =      "Joshua E. Barnes",
  title =       "A Modified Tree Code: Don't Laugh; It Runs",
  journal =     jcompphys,
  volume =      "87",
  pages =       "161--170",
  year =        "1990",
  abstract =    "I describe a modification of the Barnes-Hut tree
algorithm together with a series of numerical tests of this method. The
basic idea is to improve the performance of the code on heavily
vector-oriented machines such as the Cyber 205 by exploiting the fact
that nearby particles tend to have very similar interaction lists. By
building an interaction list good everywhere within a cell containing a
modest number of particles and reusing this interaction list for each
particle in the cell in turn, the balance of computation can be shifted
from recursive descent to force summation. Instead of vectorizing tree
descent, this scheme simply avoids it in favor of force summation, which
is quite easy to vectorize. A welcome side-effect of this modification
is that the force calculation, which now treats a larger fraction of the
local interactions exactly, is significantly more accurate than the
unmodified method.",
}

@Article{Makino:JCP:88,
  author =      "Junichiro Makino",
  title =       "Comparison of Two Different Tree Algorithms",
  journal =     jcompphys,
  volume =      "88",
  pages =       "393--408",
  year =        "1990",
  abstract =    "The efficiency of two different algorithms of
hierarchical force calculation is discussed. Both algorithms utilize the
tree structure to reduce the cost of the force calculation from $O(N^2)$
to $O(N\log N)$. The only difference lies in the method of the
construction of the tree. One algorithm uses the oct-tree, which is the
recursive division of a cube into eight subcubes. The other method makes
the tree by repeatedly replacing a mutually nearest pair in the system
by a super-particle. Numerical experiments showed that the cost of the
force calculation using these two schemes is quite similar for the same
relative accuracy of the obtained force. The construction of the
mutual-nearest-neighbor tree is more expensive than the construction of
the oct-tree roughly by a factor of 10. On the conventional mainframes
this difference is not important because the cost of the tree
construction is only a small fraction of the total calculation cost. On
vector processors, the oct-tree scheme is currently faster because the
tree construction is relatively more expensive on the vector
processors.",
}

@Book{greengardThesis,
	Author= "Leslie Greengard",
	Title= "The Rapid Evaluation of Potential Fields in Particle
Systems",
	Publisher= "MIT Press",
	Address=  "Cambridge, MA",
	Year = "1988", }


@Inproceedings{jab:ASME,
	Author="J. A. {Board, Jr.} and R. R. Batchelor and J. F. {Leathrum, Jr.}",
	Title="High Performance Implementations of the Fast Multipole
Algorithm",
	Booktitle="Symposium on Parallel and Vector Computation in
Heat Transfer, Proc. 1990 AIAA/ASME Thermophysics and Heat Transfer
Conference",
	Year=1990,
}

@Inproceedings{jab:NATUG3,
	Author="J. A. {Board, Jr.} and J. F. {Leathrum, Jr.}",
	Title="The Fast Multipole Algorithm on Transputer Networks",
	Editor="Alan S. Wagner",
	Booktitle="Proceedings, Third North American Transputer Users
Group Meeting, April 1990",
	Publisher="IOS Press",
	Address="Washington, DC",
	Year=1990,
}

@Inproceedings{jab:WOTUG,
	Author="J. F. {Leathrum, Jr.} and J. A. {Board, Jr.}",
	Title="Parallelization of the Fast Multipole Algorithm using
the {B012} Transputer Network",
	Booktitle="Transputing '91",
	Publisher="IOS Press",
	Address="Washington, DC",
	Year=1991,
}

@Article{GRAPE,
	Author="D. Sugimoto and others",
	Title=" ",
	Journal="Nature",
	Year=1990,
	Volume=345,
	Pages="33",

}

@Article{Delft,
	Author="D. J. Auerbach and W. Paul and A. F. Bakkers",
	Title="A special purpose computer for molecular dynamics:
motivation, design, and application",
	Journal="J. Phys. Chem.",
	Year=1987,
	Volume=91,
	Pages="4881",

}

@Article{HierarchyTimesteps,
	Author="H. Grubmuller and H. Heller and A. Windemuth and K. Schulten",
	Title="Generalized Verlet algorithm for efficient molecular dynamics",
	Journal="Mol. Sim.",
	Year=1991,
	Volume=6,
	Pages="121",

}

@incollection{GreengardParallel,
	Author="L. Greengard and W. Gropp",
	Title="A parallel version of the fast multipole algorithm",
	Editor="G. Rodrigue",
	Booktitle="Parallel Processing for Scientific Computing",
	Publisher="SIAM",
	Address="Philadelphia",
	Year=1989,
	Pages="213-222",

}

@incollection{ICASE,
	Author="James F. {Leathrum, Jr.} and John A. {Board, Jr.}",
	Title="Mapping the adaptive fast multipole algorithm onto
MIMD systems",
	Editor="P. Mehrotra and J. Saltz and R. Voight",
	Booktitle="Unstructured Scientific Computation on Scalable
		Multiprocessors",
	Publisher="MIT Press",
	Address="Cambridge, MA",
	Pages="161-178",
	Year=1992,
}


@Article{ZhaoArticle,
	Author="F. Zhao and S. Lennart Johnsson",
	Title="The Parallel Multipole Method on the Connection
Machine",
	Journal="SIAM J. Sci. Stat. Comp.",
	Year=1991,
	Volume=12,
	Pages=1420,
}

@article{GRAPE2,
	Author="T. Ito and T. Ebisuzaki and J. Makino and D. Sugimoto",
	Title="A Special-Purpose Computer for Gravitational Many-Body
		Systems: GRAPE-2",
	Journal="Publ. Astron. Soc. Japan",
	Volume=43,
	Pages="547-555",
	Year=1991,
}


@Article{CPL,
	Author="J. A. {Board, Jr.} and J. W. Causey and J. F. {Leathrum, Jr.}
	and A. Windemuth and K. Schulten",
	Title="Accelerated Molecular Dynamics Simulation with the
	Fast Multipole Algorithm",
	Journal="Chem. Phys. Lett.",
	Year=1992,
	Volume=198,
	Pages="89",
}


@Article{ReifFaster,
	Author="Reif and Tate",
	Title="",
	Journal="",
	Year="",
	Volume="",
	Pages="",
}

@Article{jab:HPCCBME,
	Author="John A. {Board, Jr.}",
	Title="Grand Challenges in Biomedical Computing",
	Journal="Crit. Rev. Biomed. Eng.",
	Volume=20,
	Pages=1,
	Year=1992,
}

Non-BibTeX References:
======================

   Ding, H.-Q. Karasawa, N., and Goddard, W.A. III.
   "Atomic level simulations on a million particles - the Cell Multipole Method
   for Coulomb and London nonbond interactions."
   J. Chem. Phys. 97:4309-4315, 1992.

   Ding, H.-Q., Karasawa, N., Goddard, W.A. III.
   "The reduced cell multipole method for Coulomb interactions in
   periodic systems with million-atom unit cells."
   Chem. Phys. Lett. 196:6-10, 1992.

--
Christoph Niedermeier -- Theoretische Biophysik --
Institut fuer medizinische Optik --
Ludwigs-Maximilian-Universitaet Muenchen --                          __o
Theresienstrasse 37 -- 8000 Muenchen 2 -- Germany                  _`\<,_
phone: ++49-89/2394-4580, fax: ++49-89/2805248                    (_)/ (_)
email: Christoph.Niedermeier@Physik.Uni-Muenchen.DE
~~~~~~~~~~~


