
From:  Andreas Windemuth <windemut@cumbnd.bioc.columbia.edu>
Date:  Tue, 28 Feb 95 13:37:54 -0500
Subject:  Parallel Molecular Dynamics with full Coulomb interactions




For those interested in parallel and scalable molecular dynamics simulation
of biological macromolecules: Version 0.9 of the program PMD has been
made available. This is an experimental program that uses the
Greengard/Rokhlin fast multipole algorithm (FMA) in conjunction with a
constant-force multiple timestep method to permit the efficient
simulation of large biological macromolecules without cutting off the
long-range forces. For more details, see
"http://tincan.bioc.columbia.edu/pmd/" or the README file from the
distribution, reproduced below.

---
Andreas Windemuth

+--------------------------------------------------------------------
|Columbia University, Department of Biochemistry and Biophysics
|630 West 168th St. BB-221 | tel: (212)-305-6884, fax: 6926, NeXTmail
|New York, NY 10032        | email: windemut@cumbnd.bioc.columbia.edu
+--------------------------------------------------------------------



This is an experimental version of PMD,
Version 0.9, released February 28, 1995. All rights reserved.

PMD is a scalable, parallel program for the simulation of the
dynamics of biological macromolecules. PMD utilizes the
Greengard/Rokhlin Fast Multipole Algorithm to allow the
simulation of very large biological macromolecular systems
without sacrificing the important long-range Coulomb
interactions.


The force field implemented by PMD is compatible with programs
such as CHARMM, X-PLOR, GROMOS, Discover and others. Residue
topology and parameter files suitable for X-PLOR can be used
with PMD. In particular, PMD can fully implement the CHARMM19
and CHARMM22 force fields. PMD is also intrinsically and
transparently parallel and suitable for running on a wide
variety of parallel architectures, both shared-memory and
message-passing.


The most salient features of PMD are:

- Use of the Fast Multipole Algorithm allows calculation of the
  full long-range electrostatic interactions in linear (order N) time.

- The Distance Class Algorithm reduces the calculation time
  further, making full-range calculations faster than conventional
  cutoff methods (a rough sketch of the idea follows this list).

- PMD is designed to be completely scalable, i.e. arbitrarily large systems
  (millions of atoms) can be simulated as long as enough processing nodes
  are available. Memory use is minimal compared to that of other programs.

- PMD runs without changes on a large number of UNIX workstations and
  can easily be adapted to others. Parallel implementations exist for
  the CM-5, the Intel Paragon, the Cray T3D and workstation networks.

- Parallel instructions are limited to a small set of commands that are
  easily implemented in any machine specific or portable (TCGMSG, PVM,
  Linda) parallel processing interface.
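
To illustrate the constant-force multiple timestep idea behind the
distance class scheme, here is a minimal sketch in C. It is not taken
from the PMD source: the routines, constants and the simple
one-dimensional update are hypothetical placeholders, and in PMD the
distant-class forces would be supplied by the fast multipole algorithm.

    /*
     * Sketch only: forces from nearby atoms are recomputed every step,
     * while forces from more distant atoms are recomputed only every
     * few steps and held constant in between.
     */

    #define N_ATOMS     1000
    #define N_STEPS     100
    #define FAR_EVERY   4        /* recompute distant forces every 4 steps */
    #define DT          0.001    /* integration timestep                   */

    static double x[N_ATOMS], v[N_ATOMS];
    static double f_near[N_ATOMS], f_far[N_ATOMS];

    void near_forces(const double *pos, double *f)
    {
        /* stub: a real code would sum over nearby atom pairs */
        for (int i = 0; i < N_ATOMS; i++) f[i] = 0.0;
        (void)pos;
    }

    void far_forces(const double *pos, double *f)
    {
        /* stub: in PMD this contribution comes from the FMA */
        for (int i = 0; i < N_ATOMS; i++) f[i] = 0.0;
        (void)pos;
    }

    void integrate(void)
    {
        for (int step = 0; step < N_STEPS; step++) {
            near_forces(x, f_near);              /* updated every step    */
            if (step % FAR_EVERY == 0)
                far_forces(x, f_far);            /* updated occasionally  */

            for (int i = 0; i < N_ATOMS; i++) {
                double f = f_near[i] + f_far[i]; /* far part held constant */
                v[i] += DT * f;                  /* unit masses for brevity */
                x[i] += DT * v[i];               /* simple Euler-style step */
            }
        }
    }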

PMD is a work in progress. Expect plenty of changes of all
kinds in future versions. This program is made available to
encourage researchers to add features they need, in the
hope that some of these improvements find their way back into
future versions. A mailing list has been established to foster
discussion among users and contributors. To join the mailing
list, please send a request to
"pmd-request@cumbnd.bioc.columbia.edu".

The source code is being made available under the condition
that any additions, improvements or changes will be sent to
the author (windemut@cumbnd.bioc.columbia.edu) for inclusion
into the distribution. No restrictions different from those on
PMD itself may be put on such contributions. PMD or parts of
it may not be used or distributed for non-disclosed corporate
research or commercially without prior consent of the author.


If either the program or ideas from its code are used in a
publication, please cite the following references:

J.A. Board Jr., J.W. Causey, J.F. Leathrum Jr., A. Windemuth, and K. Schulten.
Accelerated molecular dynamics simulation with the parallel fast multipole
algorithm.
Chem. Phys. Lett. 198:89--94, 1992.

A. Windemuth and K. Schulten.
Molecular dynamics on the Connection Machine.
Molecular Simulation, 5:353--361, 1991.

A. Windemuth.
Advanced Algorithms for Molecular Dynamics Simulation: The Program PMD.
in "Parallel Computing in Computational Chemistry"
(Timothy G. Mattson, ed.), ACS Books, 1995, in press.


Installation:
    cd
    zcat pmd.tar.Z | tar xvf -

Demo run:
    cd pmd/demo
    make SYS=<system>  (one of: next, hpux, dec, iris, aix, sun, paragon)

Parallel demo run:
    [install TCGMSG and customize bin/par.]
    cd demo
    make SYS=<system> PSYS=tcgmsg

This will start a simulation of Pancreatic Trypsin Inhibitor
(PTI), starting from the original PDB file, adding hydrogens,
equilibrating with harmonic constraints and minimizing. Look
at "demo/makefile" and "src/make.sim" to see how the run is
controlled and how you might modify it to run your own
molecules. First, try changing the line "SYS=next" to reflect
the machine you are actually using. Then, try "make MOL=mb" to
run the solvated myoglobin setup that is frequently used as a
benchmark for CHARMM. Finally, try "make mutant" to generate
an Ile 3 -> Tyr mutant of T4 phage lysozyme (2lzm). Some of these
simulations will take quite some time if not interrupted. It is
suggested to run them in the background using the ampersand ("&")
character, i.e. "make mutant &". The link "demo/log" will always
point to the newest simulation log. Try "tail -f log" to check on
the simulation progress.

Some other sample molecules are available in "demo". Their
topology is described in files "*.str", their atom coordinates
in files "*.pdb". It is quite straightforward to make your own
"*.str" files, once you study the sample files and "src/make.sim".

If you wish to run PMD in parallel, make sure that you have
TCGMSG or PVM installed and that there is a directory or link
"tcgmsg" or "pvm" in your home directory pointing to your
system's TCGMSG or PVM root directory. Edit "makefile" and
change the line "PSYS=serial" to one of the supported parallel
interfaces (tcgmsg, pvm, pvm-t3d). You also will have to
edit a startup script that sets up processes on the network.
The script has the name "par-$(SYS)-$(PSYS)" and it is located
in "bin". Some examples are provided, but you will almost
certainly have to adapt one of them to your own environment.
TCGMSG can be obtained from "ftp.tcg.anl.gov". See the file
"README.TCGMSG" for more information. Information on PVM can
be obtained at "http://www.epm.ornl.gov/pvm/pvm_home.html" on
the World-Wide Web. For information on the Cray T3D refer to
"http://pscinfo.psc.edu/machines/cray/t3d/t3d.html".


Documentation:
    This file, some scattered comments in makefiles and scripts,
    and the World Wide Web pages.

Main features:
    Fast multipole algorithm
	Linear scaling enables simulation of extremely large systems
	Full long range interactions with no cut-off
    Distance class algorithm
	Accounts for full long-range interaction while providing
	performance better than conventional cut-off calculations
    Scalable parallel implementation on
    	Workstation networks (with TCGMSG or PVM)
	Cray T3D (PVM)
	Convex Exemplar (TCGMSG)
    	Intel Paragon (TCGMSG)
	Thinking Machines CM-5 (slow, no vector units, no longer supported)
	Parsytec GC  (initial implementation, no longer supported)
    Now implemented:
	Growing of hydrogens
	Mutation (Growing of sidechains)
	Building solvation shells
	Superposition and RMS-values
	Solvent accessible surface (no forces, yet)
	Harmonic constraints
	Stochastic boundary (friction and random fluctuations)
	Restart files and DCD trajectories
	
Main limitations:
    FMA and atom reassignment not fully scalable (yet)
    No vectorization
    Not enough features

Forthcoming: (no guarantees, of course :-) )
    Advanced solvent treatment
	Generalized Born potential
	Hydrophobic forces and continuum electrostatics with forces
    FMA and atom reassignment
	Improved scalable versions
	Periodic boundary conditions
    Other features
	Containment fields for closed boundary
	Pretty pictures (ray-tracing and movies)

Adapting PMD to other parallel systems:

All machine-specific parallel communication commands have
been isolated into one file, called the adaptor. The adaptors
in the current release are "tcgmsg.c" for the TCGMSG public
domain parallel programming interface, "pvm.c" for the PVM
public domain parallel programming interface, "cm5.c" for
the Connection Machine CM-5 with CMMD-3.0, "pvm-t3d.c" for the
restricted set of PVM used on the Cray T3D, and "serial.c" for
non-parallel workstation implementations. With TCGMSG or PVM,
PMD can be run on a wide variety of platforms, such as workstation
networks, the Intel Paragon and the Convex Exemplar. Other
adaptors are expected to become available as work progresses.
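
As a rough illustration of what an adaptor provides, the sketch below
shows what a serial adaptor might look like. The function names and
signatures are hypothetical, not PMD's actual interface; the real
"serial.c" in the distribution is the authoritative reference.

    /*
     * Hypothetical serial adaptor: the small set of communication calls
     * the rest of the program relies on, implemented trivially for a
     * single process.
     */

    void par_init(int *argc, char ***argv) { (void)argc; (void)argv; }
    void par_end(void)                     { }

    int  par_nodes(void)  { return 1; }    /* number of processing nodes */
    int  par_mynode(void) { return 0; }    /* rank of the calling node   */

    /* Point-to-point messages and barriers are no-ops with one node. */
    void par_send(int dest, const void *buf, int len)
    {
        (void)dest; (void)buf; (void)len;
    }

    void par_recv(int src, void *buf, int len)
    {
        (void)src; (void)buf; (void)len;
    }

    void par_barrier(void) { }

    /* Global reduction, e.g. summing partial energies over nodes; with
       a single node the data already holds the global sum. */
    void par_sum_double(double *data, int n)
    {
        (void)data; (void)n;
    }

A TCGMSG or PVM adaptor would provide the same handful of calls using
that library's primitives (for example, PVM's pvm_send and pvm_recv),
which is why porting PMD to a new parallel system amounts to writing
one such file.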

Changes from 0.8:

- Much improved makefiles and compilation
- Simplified parallel execution
- Parallel output is now much faster and should always work
- New trajectory format, with PDB and DCD conversion utility
- PVM and the Cray T3D are now supported
- Numerous bugs corrected
- Everything else also improved

Have fun!

Andreas Windemuth

+--------------------------------------------------------------------
|Columbia University, Dept. of Biochemistry and Biophysics, BB-221
|630 West 168th St.     |   tel: (212)-305-6884, fax: 6926, NeXTmail
|New York, NY 10032     |   email: windemut@cumbnd.bioc.columbia.edu
+--------------------------------------------------------------------


