From kdb-: at :-oddjob.uchicago.edu Mon Dec 12 05:19:18 1994
Received: from oddjob.uchicago.edu for kdb(+ at +)oddjob.uchicago.edu by www.ccl.net (8.6.9/930601.1506) id EAA05724; Mon, 12 Dec 1994 04:29:59 -0500
Received: by oddjob.uchicago.edu Mon, 12 Dec 94 03:29:58 CST
Date: Mon, 12 Dec 94 03:29:58 CST
From: "Keith Ball"
Message-Id: <9412120929.AA17111 # - at - # oddjob.uchicago.edu>
To: chemistry %-% at %-% ccl.net
Subject: Fast Diag. Routines: RESPONSES

Here is a summary of the responses to my message about possible fast
diagonalization methods, posted to the Computational Chemistry List and to
sci.math, sci.numerical-analysis, sci.physics, and sci.chem.

Concerning 'fast' algorithms (i.e., faster than O(n^3)), there don't seem to
be any for non-sparse matrices.  The routines most recommended are tred2
(Householder reduction of a symmetric matrix to a tridiagonal matrix)
followed by tqli (finding eigenvalues, and eigenvectors if desired, of the
tridiagonal matrix).  These routines are given in Press et al.,
_Numerical Recipes_ (2nd ed.), available in FORTRAN, C, and Pascal versions
to my knowledge.  One can purchase the programs on disk, and they may already
be on whatever machine you currently use (but check whether they are the
1st-ed. versions and whether there have been any problems with the 1st-ed.
programs; I have no idea.  You could probably just type in the necessary
changes from the 2nd-ed. book).  I especially recommend reading Chapter 11 on
Eigensystems, if you have not already done so.  The EISPACK library has
versions of these routines, and so does LAPACK.  For access to these
libraries, the full reference for Press et al., and other quips and
references, see below.

One or two routines were also offered to me, which I have included.  I have
not tried any of these, so please check with the person who sent the code
(name and e-mail are provided).

Enjoy!!

Keith Ball
kdb-!at!-cloister.uchicago.edu
------------------------------------------------------------
From: "M. J. Saltzman"

In sci.math you write:

> I am wondering if anyone knows of any references to fast algorithms
> for diagonalization (i.e. finding the eigenvalues and eigenvectors)
> of a symmetric matrix.
>   The problem I am using this for consists of finding stationary points
> (specifically saddle points) on the 3*N-dimensional Cartesian potential
> surface for a system of N particles interacting via pairwise interactions.
> The matrix in question is the Hessian second-derivative matrix.
>   Any references or suggestions would be greatly appreciated!
> Please send your responses directly by e-mail.

Check out LAPACK at Netlib.  Send mail to netlib#* at *#ornl.gov with the
message "send index".
--
Matthew Saltzman
Clemson University Math Sciences
mjs*- at -*clemson.edu
----------------------------------------
From: David Heisterberg

For the general case I don't know of anything that beats the old workhorse
of Householder reduction followed by QL with implicit shifts, as implemented
in Eispack (and now Lapack).  This is assuming you want all eigenvalues and
vectors.  If you are repeatedly solving for eigenvalues and vectors of
matrices that change only a small amount at each step, then maybe an inverse
iteration technique would work.
--
David J. Heisterberg (djh ^at^ ccl.net)    Gee, it's so beautiful, I gotta
The Ohio Supercomputer Center              give somebody a sock in the jaw.
Columbus, Ohio                             -- Little Skippy (Percy Crosby)
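
For illustration only (this sketch is not part of any response): the tred2/tqli
pair recommended above is typically chained as follows, assuming the
Numerical Recipes (2nd ed.) Fortran interfaces tred2(a,n,np,d,e) and
tqli(d,e,n,np,z).  The routines as printed in the book are single precision;
check your copy for the exact argument lists before relying on this.

      PROGRAM NRDIAG
C     Illustration only.  Diagonalizes a small symmetric matrix with
C     the Numerical Recipes routines TRED2 (Householder reduction to
C     tridiagonal form) and TQLI (QL with implicit shifts).  Link
C     against your copy of the NR library.
      INTEGER NP, N, I, J
      PARAMETER (NP=3)
      REAL A(NP,NP), D(NP), E(NP)
C     An arbitrary 3x3 symmetric test matrix.
      DATA A / 2.0, -1.0,  0.0,
     &        -1.0,  2.0, -1.0,
     &         0.0, -1.0,  2.0 /
      N = NP
C     TRED2 overwrites A with the orthogonal transformation matrix.
      CALL TRED2(A,N,NP,D,E)
C     TQLI returns the eigenvalues in D and the eigenvectors as the
C     columns of A.
      CALL TQLI(D,E,N,NP,A)
      DO 10 I = 1, N
         WRITE(*,*) 'eigenvalue', D(I), ' vector', (A(J,I), J=1,N)
   10 CONTINUE
      END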
----------------------------------------
From: sling ^at^ euclid.chem.washington.edu (Song Ling)

Hi Keith.  I suggest that you look into existing program packages; one is
called IMSL and another NAG.  I think they are commercial.  If you want to
try it on your own, there is some Fortran code in a book called "Numerical
Recipes" by Press et al.  It is a very popular book, and I suggest that you
find it at the bookstore of U of Chicago.

Best,
Song Ling
----------------------------------------
From: hinsenk.,at,.ERE.UMontreal.CA (Hinsen Konrad)

All linear algebra libraries provide routines for that, and unless your
matrix has some exotic properties and/or you are looking for only a small
subset of eigenvalues, there aren't any faster routines than the standard
ones.  My personal preference is the set of routines from the LAPACK
library, which are free and in my experience more stable and faster than
many others.  You can get them from many sources, e.g. from AT&T's netlib
(ftp to netlib.att.com:netlib/lapack).
-------------------------------------------------------------------------------
Konrad Hinsen                     | E-Mail: hinsenk ^%at%^ ere.umontreal.ca
Departement de Chimie             | Tel.: +1-514-343-6111 ext. 3953
Universite de Montreal            | Fax:  +1-514-343-7586
C.P. 6128, succ. A                | Deutsch/Esperanto/English/Nederlands/
Montreal (QC) H3C 3J7             | Francais (phase experimentale)
-------------------------------------------------------------------------------
----------------------------------------
From: mike%!at!%mycenae.CChem.Berkeley.EDU (Mike L. Greenfield)

For calculating eigenvalues and eigenvectors, I recommend the combination of
tred2 and tql2 from the EISPACK library.  You can obtain the Fortran source
code by anonymous ftp to netlib.att.com.  There are probably also LAPACK
routines to do this; I don't know their names.  I think these two routines
are also discussed in the eigenvalues section of Numerical Recipes.

I assume you've looked at Jon Baker's paper (and references therein) on
finding transition states?  He discusses a Cerjan-Miller type algorithm in
detail:

-x- at -x- Article{Baker860,
  author  = "Jon Baker",
  title   = "An Algorithm for the Location of Transition States",
  journal = "J. Comput. Chem.",
  year    = 1986,
  volume  = 7,
  pages   = "385--395"
}

Good luck with your calculations.

Mike Greenfield
Dept. of Chemical Engineering
UC Berkeley
mike /at\mycenae.cchem.berkeley.edu
----------------------------------------
From: cooksj;at;ttown.apci.com (Stephen J. Cook)

Keith,

I'm not sure how fast they are, but have you looked at the netlib archives?
These can be found at either research.att.com or netlib.att.com.  One of
these names is obsolete, but I know that one of them works.  These machines
hold linpack, lapack, eispack and other numerical algorithm suites that may
be helpful to you.  They are all available in Fortran.

Good luck.

Steve Cook
--
**********************************************************************
* Steve Cook                              cooksj ^at^ ttown.apci.com *
* Air Products and Chemicals, Inc.        Tel. (610) 481-2135        *
* 7201 Hamilton Blvd.                     FAX  (610) 481-2446        *
* Allentown, PA 18195                                                *
* USA                                                                *
**********************************************************************
* Emacs - the choice of a GNU generation                             *
**********************************************************************
* Disclaimer: The opinions expressed here are those of the author.   *
*             Any resemblance between my opinions and those of Air   *
*             Products is purely coincidental...                     *
**********************************************************************
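
For illustration only (again, not from any of the responses): the EISPACK
combination Mike Greenfield recommends above is normally called as in the
hypothetical wrapper below (SYMEIG is my name, not an EISPACK routine),
assuming the standard EISPACK argument lists tred2(nm,n,a,d,e,z) and
tql2(nm,n,d,e,z,ierr).  Consult the EISPACK documentation from netlib before
relying on it.

      SUBROUTINE SYMEIG(NM,N,A,D,Z,E,IERR)
C     Illustration only: a thin wrapper around the EISPACK pair.
C     A(1:N,1:N) holds the symmetric input matrix, D returns the
C     eigenvalues in ascending order, the columns of Z the
C     corresponding eigenvectors, and E is scratch space.
      INTEGER NM, N, IERR
      REAL*8 A(NM,N), D(N), Z(NM,N), E(N)
C     Householder reduction to tridiagonal form; the orthogonal
C     transformations are accumulated in Z.
      CALL TRED2(NM,N,A,D,E,Z)
C     QL iteration with implicit shifts; IERR .NE. 0 on failure.
      CALL TQL2(NM,N,D,E,Z,IERR)
      RETURN
      END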
----------------------------------------
From: bittner- at -hpcf.cc.utexas.edu (bittner)

I use the LAPACK equivalents of the EISPACK tred2 and tql2 routines.  As far
as I know, all of them are on rainbow.uchicago.  BTW, will there be a 2nd
Keith Ball symposium for mathematical physics?  I forgot the subject of the
first symposium... but it was a great idea.
----------------------------------------------------------------------
Eric R. Bittner                          phone: (512)-471-1092
Dept. of Chemistry                       fax:   (512)-471-8696
Univ. of Texas (formerly U of C)
-----------------------------------------------------------------------
Wenn is das Nunstuck git und Slottermeyer?  Ja! ...Beiherhund das Oder
die Flipperwaldt gersput!
----------------------------------------
From: Ryszard Czerminski X 217

Hi Keith,

I am not quite sure if this one will be fast enough for you, but it has
served me well over the past ~10 years.

Ryszard Czerminski

      SUBROUTINE HOUSE (Z,NZ,N,D,E,ERROR)
      IMPLICIT REAL*8 (A-H,O-Z)
C
C     DIAGONALIZES A REAL, SYMMETRIC MATRIX USING THE HOUSEHOLDER METHOD.
C
C     INPUT:
C       Z  - matrix to be diagonalized
C       NZ - first dimension of Z in the calling program
C       N  - order of Z
C
C     OUTPUT:
C       D - eigenvalues in decreasing order
C       Z - eigenvectors
C       ERROR = 0 - normal solution
C       ERROR = J - failure in the J-th eigenvalue after 30 iterations
C
C     MACHINE DEPENDENT PARAMETERS:
C       MACHEP - relative machine precision
C       SMALL  - smallest floating point number
C
      INTEGER ERROR
      REAL*8 MACHEP
C     DIMENSION D(N),E(N),Z(NZ,N)
      real*8 D(N),E(N),Z(NZ,N)
      real*8 b,c,f,g,h,p,r,s,small,tol
c     DATA SMALL/5.4D-79/        IBM 370
c     MACHEP=2.D0**(-56)         IBM 370
C     DATA SMALL/2.225D-308/     titan, IBM PC real*8 ms-fortran 4.0
C                                = 2**(-1022)
C     MACHEP=2.D0**(-52)         titan, IBM pc real*8 ms-fortran 4.0
C     DATA SMALL/0.495D-323/     sun : 2**(-1074)
      DATA SMALL/0.495D-323/
C     DSQRT(X)=DDSQRT(X)
C     DSIGN(X,Y)=DDSIGN(X,Y)
C     DABS(X)=DDABS(X)
      MACHEP=2.D0**(-52)
C
      TOL=SMALL/MACHEP
      IF (N .EQ. 1) GO TO 320
      DO 300 II = 2, N
      I = N + 2 - II
      L = I - 2
      F = Z(I-1,I)
      G = 0.0E0
      IF (L .LT. 1) GO TO 140
      DO 120 K = 1, L
  120 G = G + Z(K,I) * Z(K,I)
  140 H = G + F * F
      IF (G .GT. TOL) GO TO 160
      E(I) = F
      H = 0.0E0
      GO TO 280
  160 L = L + 1
      G = -DSIGN(DSQRT(H),F)
      E(I) = G
      H = H - F * G
      Z(I-1,I) = F - G
      F = 0.0E0
      DO 240 J = 1, L
      Z(I,J) = Z(J,I)
      Z(J,I) = Z(J,I) / H
      G = 0.0E0
      DO 180 K = 1, J
  180 G = G + Z(K,J) * Z(I,K)
      JP1 = J + 1
      IF (L .LT. JP1) GO TO 220
      DO 200 K = JP1, L
  200 G = G + Z(J,K) * Z(K,I)
  220 E(J) = G / H
      F = F + G * Z(J,I)
  240 CONTINUE
      HH = F / (H + H)
      DO 260 J = 1, L
      F = Z(I,J)
      G = E(J) - HH * F
      E(J) = G
      DO 260 K = 1, J
      Z(K,J) = Z(K,J) - F * E(K) - G * Z(I,K)
  260 CONTINUE
  280 D(I) = H
  300 CONTINUE
  320 D(1) = 0.0E0
      E(1) = 0.0E0
      DO 500 I = 1, N
      L = I - 1
      IF (D(I) .EQ. 0.0E0) GO TO 380
      DO 360 J = 1, L
      G = 0.0E0
      DO 340 K = 1, L
  340 G = G + Z(I,K) * Z(K,J)
      DO 360 K = 1, L
      Z(K,J) = Z(K,J) - G * Z(K,I)
  360 CONTINUE
  380 D(I) = Z(I,I)
      Z(I,I) = 1.0E0
      IF (L .LT. 1) GO TO 500
      DO 400 J = 1, L
      Z(I,J) = 0.0E0
      Z(J,I) = 0.0E0
  400 CONTINUE
  500 CONTINUE
      ERROR = 0
      IF (N .EQ. 1) GO TO 1500
      DO 1100 I = 2, N
 1100 E(I-1) = E(I)
      F = 0.0E0
      B = 0.0E0
      E(N) = 0.0E0
      DO 1240 L = 1, N
      J = 0
      H = MACHEP * (DABS(D(L)) + DABS(E(L)))
      IF (B .LT. H) B = H
      DO 1110 M = L, N
      IF (DABS(E(M)) .LE. B) GO TO 1120
 1110 CONTINUE
 1120 IF (M .EQ. L) GO TO 1220
 1130 IF (J .EQ. 30) GO TO 1400
      J = J + 1
      P = (D(L+1) - D(L)) / (2.0E0 * E(L))
      R = DSQRT(P * P + 1.0E0)
      H = D(L) - E(L) / (P + DSIGN(R,P))
      DO 1140 I = L, N
 1140 D(I) = D(I) - H
      F = F + H
      P = D(M)
      C = 1.0E0
      S = 0.0E0
      MML = M - L
      DO 1200 II = 1, MML
      I = M - II
      G = C * E(I)
      H = C * P
      IF (DABS(P) .LT. DABS(E(I))) GO TO 1150
      C = E(I) / P
      R = DSQRT(C * C + 1.0E0)
      E(I+1) = S * P * R
      S = C / R
      C = 1.0E0 / R
      GO TO 1160
 1150 C = P / E(I)
      R = DSQRT(C * C + 1.0E0)
      E(I+1) = S * E(I) * R
      S = 1.0E0 / R
      C = C / R
 1160 P = C * D(I) - S * G
      D(I+1) = H + S * (C * G + S * D(I))
      DO 1180 K = 1, N
      H = Z(K,I+1)
      Z(K,I+1) = S * Z(K,I) + C * H
      Z(K,I) = C * Z(K,I) - S * H
 1180 CONTINUE
 1200 CONTINUE
      E(L) = S * P
      D(L) = C * P
      IF (DABS(E(L)) .GT. B) GO TO 1130
 1220 D(L) = D(L) + F
 1240 CONTINUE
      NM1 = N - 1
      DO 1300 I = 1, NM1
      K = I
      P = D(I)
      IP1 = I + 1
      DO 1260 J = IP1, N
      IF (D(J) .LE. P) GO TO 1260
      K = J
      P = D(J)
 1260 CONTINUE
      IF (K .EQ. I) GO TO 1300
      D(K) = D(I)
      D(I) = P
      DO 1280 J = 1, N
      P = Z(J,I)
      Z(J,I) = Z(J,K)
      Z(J,K) = P
 1280 CONTINUE
 1300 CONTINUE
      GO TO 1500
 1400 ERROR = L
 1500 RETURN
      END
c------------------------------------------------
      subroutine horder(a,e,n,ch)
      character*1 ch
      real*8 a(n,n),e(n),x
c
c     ch = 'i' - increasing order of eigenvalues
c     ch = 'd' - decreasing order of eigenvalues
c
      do 30 i=1,n-1
      do 20 j=i+1,n
      if(e(i).gt.e(j)) then
         if(ch.eq.'i') then
            do 10 k=1,n
            x = a(k,i)
            a(k,i) = a(k,j)
            a(k,j) = x
   10       continue
            x = e(i)
            e(i) = e(j)
            e(j) = x
         endif
      endif
   20 continue
   30 continue
      return
      end
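
For illustration only (this driver is not part of Ryszard's code): a
hypothetical calling sequence for the two routines above, with the argument
conventions taken from the comments in HOUSE itself; please verify them with
the author before use.

      PROGRAM TSTHSE
C     Illustration only: diagonalize a small symmetric matrix with
C     HOUSE, then re-sort the spectrum in increasing order with HORDER.
      INTEGER NZ, N, IERR, I, J
      PARAMETER (NZ=3)
      REAL*8 Z(NZ,NZ), D(NZ), E(NZ)
C     An arbitrary symmetric test matrix.
      DATA Z / 2.D0, -1.D0,  0.D0,
     &        -1.D0,  2.D0, -1.D0,
     &         0.D0, -1.D0,  2.D0 /
      N = NZ
C     On return D holds the eigenvalues (decreasing order), the columns
C     of Z the eigenvectors, and E is scratch space.
      CALL HOUSE(Z,NZ,N,D,E,IERR)
      IF (IERR .NE. 0) STOP 'HOUSE failed to converge'
C     Optionally re-sort into increasing order of eigenvalues.
      CALL HORDER(Z,D,N,'i')
      DO 10 I = 1, N
         WRITE(*,*) D(I), (Z(J,I), J=1,N)
   10 CONTINUE
      END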
----------------------------------------
From: dickson #at# zinc.chem.ucalgary.ca (Ross M. Dickson)

Have you looked into EISPACK or its successor, LAPACK?  They're
public-domain FORTRAN libraries for eigenproblems.  Although I'm not
intimately familiar with the algorithms available therein for real symmetric
matrices, I know there is code for that special case.  The developers tried
to choose the most efficient yet reliable algorithms possible for each
matrix type covered.

They can be obtained as a whole or in pieces through NETLIB.  For info, send
a message consisting solely of the word 'help' to netlib $#at#$ ornl.gov.

Ross Dickson, dickson[ AT ]zinc.chem.ucalgary.ca
Department of Chemistry, The University of Calgary, Alberta, Canada
----------------------------------------
From: ucacsam

The Householder method is among the best you can find.  See texts by
J. H. Wilkinson, L. Fox and many others for descriptions.  Basically the
method calculates orthogonal matrices that are used to transform your
original matrix to a (symmetric) tridiagonal form, which for an n x n matrix
can be done in about 2/3 of the cost of one complete matrix multiplication
(of two n x n matrices - that's very clever!!).  From then on the process
uses Laguerre's method for finding the eigenvalues, and you can get the
vectors as well.  All rapidly convergent and numerically stable.

A simpler method, which takes rather longer, is known as Jacobi's method.

Have fun.

Paul Samet
----------------------------------------
From: saj(+ at +)chinet.chinet.com (Stephen Jacobs)

How big?  Up to about order 1000, brute force (and maybe some sparse storage
techniques) seems adequate.  Have you looked at "Numerical Recipes"?  It
gives references to the stuff it leaves out.  In the most general case
you're screwed anyway, because the problem gets so badly ill-conditioned.
My personal favorite matrix book is Golub and Van Loan (title "Matrix
Computations"), but it doesn't go into extreme practical details.
Steve
----------------------------------------
From: rossi ^%at%^ t1.chem.umn.edu (Ivan Rossi)

Keith,

I think the absolute best around are the LAPACK routines from J. Dongarra.
They use BLAS level 3 wherever possible, and if you use a workstation or a
Cray the BLAS come optimized for maximum performance (libblas on IBM and
SGI; libsci on the Cray C90).  You can get the source from netlib: send a
message to netlib %-% at %-% ornl.gov or netlib.att.com saying "send index
from lapack", or search the WWW site at http://gams.nist.gov/

happy computing

Ivan

Ivan Rossi                                  |
Department of Chemistry                     | EXPERT : (n)
University of Minnesota                     |   someone who avoids all the
207 Pleasant St. SE, Minneapolis, MN 55455  |   little errors, going straight
Tel. +1-612-624-6099                        |   towards the catastrophe
e-mail : rossi |-at-| t1.chem.umn.edu       |
                                            | Intel Inside : the warning label
----------------------------------------
From: Bob Funchess

Dear Keith,

There's a fairly fast method for sparse matrices called the "Lanczos
algorithm"; I'm not at my reference papers right now, so I can't give you
the exact reference.  The original method had some numerical instabilities,
but those are correctable...  I know that the book "Biological Magnetic
Resonance, vol. 8: Spin Labeling: Theory and Applications" from Academic
Press (L. Berliner, ed.; pub. date 1988 or 1989) contains an EPR simulation
program which uses this method.  The book comes with a DOS disk that has the
source code to the program, and the chapter which discusses the program
contains the full references to the original algorithm and the correction
techniques.

Regards,
Bob Funchess
--
Dr. Robert B. Funchess                  bobf |-at-| msi.com
Senior Support Scientist                Tel (617) 229-9800 x202
Molecular Simulations Inc.              FAX (617) 229-9899
16 New England Executive Park           http://www.msi.com/~bobf/bobf.html
Burlington, MA 01803
----------------------------------------
From: Robert Fraczkiewicz

Keith,

I use the Fortran routines DSYEV.F and DSPEV.F from the LAPACK package
available from NETLIB (a repository of mathematical software).  If you have
a WWW client, try the following URL:

    ftp://netlib.att.com/netlib/master/readme.html.Z

Alternatively, you can ftp anonymously to netlib.att.com, directory
netlib/lapack.  Some machines have a specifically optimized LAPACK library;
using it is much better than transferring source code.  On a UNIX machine
you would type the command "man lapack" to see if your sysadmin installed
one.  I am also interested in high-speed diagonalization; if you get better
answers, please let me know and/or post the summary to CCL.

Thanks in advance!

Happy computing,
--
+[]={}-()~|+[]={}-()~|+[]={}-()~|+[]={}-()~|+[]={}-()~|+[]={}-()~|+[]={}-()~|+
+                           |\+_+/\+_+/|                                      +
+ just Robert Fraczkiewicz  |=\ /||\ /=|  robert # - at - # roman.chem.uh.edu  +
+ Department of Chemistry   |--\-||-\--|  chem86 "-at-" jetson.uh.edu         +
+ University of Houston     |=/_\||/_\=|  (713) 743-3236                      +
+                           |/+ +\/+ +\|                                      +
+[]={}-()~|+[]={}-()~|+[]={}-()~|+[]={}-()~|+[]={}-()~|+[]={}-()~|+[]={}-()~|+
+                                                                             +
+  "It is also a good rule not to put too much confidence in experimental    +
+   results until they have been confirmed by Theory"                         +
+                                                                             +
+                                                 Sir Arthur Eddington        +
+                                                                             +
+[]={}-()~|+[]={}-()~|+[]={}-()~|+[]={}-()~|+[]={}-()~|+[]={}-()~|+[]={}-()~|+
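
For illustration only (not from any of the responses): a minimal calling
sketch for the LAPACK driver DSYEV that several people above recommend,
assuming the standard argument list DSYEV(JOBZ,UPLO,N,A,LDA,W,WORK,LWORK,INFO)
and a generously sized fixed workspace; check "man lapack" or the netlib
documentation for the details.

      PROGRAM LPDIAG
C     Illustration only: all eigenvalues and eigenvectors of a real
C     symmetric matrix via the LAPACK driver DSYEV.
      INTEGER NMAX, LWORK
      PARAMETER (NMAX=100, LWORK=3*NMAX)
      INTEGER N, INFO, I, J
      REAL*8 A(NMAX,NMAX), W(NMAX), WORK(LWORK)
C     Fill a small symmetric test matrix (only the upper triangle is
C     referenced when UPLO = 'U').
      N = 3
      DO 20 I = 1, N
         DO 10 J = 1, N
            A(I,J) = 1.D0 / DBLE(I+J-1)
   10    CONTINUE
   20 CONTINUE
C     'V' = also compute eigenvectors, 'U' = upper triangle supplied.
C     On return W holds the eigenvalues in ascending order and the
C     columns of A the corresponding eigenvectors.
      CALL DSYEV('V','U',N,A,NMAX,W,WORK,LWORK,INFO)
      IF (INFO .NE. 0) STOP 'DSYEV failed'
      WRITE(*,*) (W(I), I=1,N)
      END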