From chemistry-request -8 at 8- ccl.net Thu Jun 4 20:27:19 1992
Date: Thu, 4 Jun 1992 16:08 EST
From: "Lt. Saavik"
Subject: benchmarks
To: chemistry-!at!-ccl.net
Status: R

My original request was for benchmarks for Gaussian 9X.  I got a variety
of information about the performance of various ab initio programs (not
just G92) on various platforms, and some information about benchmarks
and machine speed in general.  The raw, uncut (and unedited) replies
follow.

Pam Seida

******************

From: IN%"PAMELA "-at-" CC.BRYNMAWR.EDU" "Lt. Saavik" 15-MAY-1992 17:54:35.99
To: IN%"chemistry#* at *#ccl.net"
CC:
Subj: benchmarks for g92

Date: Fri, 15 May 1992 09:26 EST
From: "Lt. Saavik"
Subject: benchmarks for g92
To: chemistry%!at!%ccl.net

From: BMC::ANN 14-MAY-1992 18:34:13.31
To: MMF PAMELA

From: IN%"MOSES&$at$&CMCHEM.CHEM.CMU.EDU" 11-MAY-1992 16:18:06.08
To: IN%"ANN # - at - # CC.BRYNMAWR.EDU"

Date: Mon, 11 May 1992 15:40:37 EDT
From: MOSES ^at^ CMCHEM.CHEM.CMU.EDU (D.J. Moses / 412-621-2050)
Subject: RE: G92 benchmarks
To: ANN.,at,.CC.BRYNMAWR.EDU

From: IN%"virtual %-% at %-% quantum.larc.nasa.gov" "Don H. Phillips" 15-MAY-1992 13:58:23.58
To: IN%"pamela |-at-| CC.BRYNMAWR.EDU"

Date: Fri, 15 May 92 13:54:31 -0400
From: "Don H. Phillips"
Subject: performance on quantum codes
To: pamela: at :CC.BRYNMAWR.EDU

Date: Mon, 19 Aug 1991 14:56:15 -0400
To: senese {*at*} schug.larc.nasa.gov, virtual {*at*} schug.larc.nasa.gov,
Subject: 730 benchmarks
Status: R

Sharon,

I've added a column for the 9000/730 to these benchmarks; also, I
normalized the times with respect to the YMP rather than the cobra.
I didn't have the disk space for some of the benchmarks missing from
the 730 column.  How much longer will the 730 be available?

-Fred

--------------------------------------------------------------------

Here are a few timings for some quantum chemical codes ported to the
cobra.
The benchmarks represent "typical" quantum chemical computations; they
are floating-point intensive and also do a fair amount of disk I/O.
Identical codes were used on all machines, but native blas routines and
the highest level of compiler optimization were used whenever possible.
Times are given as user/system times in seconds.  The number in
parentheses is the total time normalized to the performance of the
Cray YMP.

      Sun Sparcstation     Dec        IBM      HP 9000  HP 9000   CRAY
Test    SS1       SS2     DS5000     RS6000      720      730      YMP
     --------  --------  --------  ---------  -------- -------  --------
 0   4792/303  1099/145     ---       ---      737/ 39   ---     720/27
      (6.8)     (1.7)                           (1.0)            (1.0)
 1    253/ 32   104/ 16    80/ 21    46/  7     36/  4  32/ 3     18/ 2
     (14.3)     (6.0)     (5.1)     (2.7)      (2.0)   (1.8)     (1.0)
 2    260/ 20   112/ 11   103/ 17    99/  8     72/  4  56/ 2     22/ 2
     (11.7)     (5.1)     (5.0)     (4.5)      (3.1)   (2.4)     (1.0)
 3    794/168   354/ 92   266/108   210/ 36    145/ 22 139/17     78/ 7
     (11.3)     (5.2)     (4.4)     (2.9)      (2.0)   (1.8)     (1.0)
 4    936/146   402/ 84   295/ 98   169/ 48    128/ 20 124/16     72/ 6
     (13.9)     (6.2)     (5.0)     (2.8)      (1.9)   (1.8)     (1.0)
 5  12708/736  5612/424  3917/395  1737/191   2102/100 1592/82   316/47
     (37.0)    (16.6)    (11.9)     (5.3)      (6.1)   (4.6)     (1.0)
 6    431/ 95   198/ 55   158/ 46   136/ 27     96/ 16  88/13     38/11
     (10.7)     (5.1)     (4.2)     (3.3)      (2.3)   (2.1)     (1.0)
 7   5256/378  2203/268  1963/243  1124/117   1110/ 62 857/52    267/31
     (18.9)     (8.3)     (7.4)     (4.2)      (3.9)   (3.1)     (1.0)
 8 22250/3148 9322/1775     ---    2903/816   2864/384   ---    1790/110
     (13.4)     (5.8)               (2.0)      (1.7)             (1.0)
10   1015/ 68   431/ 37   332/ 41   185/ 14    134/  8 110/ 6     69/ 3
     (15.0)     (6.5)     (5.2)     (2.8)      (2.0)   (1.6)     (1.0)
12   7546/995  3338/572     ---    1137/255    942/110   ---     475/32
     (16.8)     (7.7)               (2.7)      (2.1)             (1.0)

MACHINES

SS1,SS2   Sun Sparcstation 1, 2 (quantum, mermaid)
          SunOS 4.1.1, f77 1.3.1 / cc 1.0, -cg89 -dalign -Bstatic -O3 -libmil
DS5000    DecStation 5000 (ultra)
          Ultrix 4.1, MIPS f77 2.10 -O2 -G 0.
RS6000    IBM RS6000 m530 (ibmr6000)
          AIX 3.1, xlf 1.01, -O -L/lib -lblas.
          GAMESS modules scflib, gamess, statpt, hss2b, and inputb gave
          incorrect results with xlf -O; these modules were compiled
          without optimization.
HP 9000/720  HPUX 8.01, f77 -O
HP 9000/730  HPUX 8.05, f77 -O
CRAY YMP  Cray YMP/332 (sabre)
          UniCOS 5.1, cft77 3.1.1 -O full,nozeroinc -Zp,
          cc -O -h intrinsics,olevel_3.  libsci blas were used.
          System load was such that these jobs received 5-10% of a
          3-cpu machine.
          [N.B. These jobs were run on a single cpu (test 0 is a
          possible exception).  I think Fred meant 5-10% of one cpu on
          a 3-cpu machine, i.e. the machine was heavily loaded.  -DHP]

BENCHMARKS

 0. FOMMAR all-integral bootstrapped frozen orbital calculation on a
    linear (HF)_9 chain, with a 99 AO basis set.  About 95% scalar.
 1. GAMESS RHF SiC_2H_6, with a 61 AO basis set.  Mostly scalar.
    Uses 14 Mb of disk.
 2. GAMESS MCSCF, SiH_2, with a 29 AO basis set and 51 CSF's.
    Vectorizable.  Uses 5 Mb of disk.
 3. GAMESS second order CI, Si_2H_4, with a 46 AO basis and 4600 CSF's.
    The calculation involves out-of-core sorts on large disk files.
    Uses 64 Mb of disk.
 4. GAMESS RHF SiC_3H_8, with an 80 AO basis set.  Mostly scalar.
    Uses 38 Mb of disk.
 5. GAMESS MCSCF + gradient, C_3H_4, with a 53 AO basis and 20 CSF's.
    MCSCF is vectorizable, gradient is mostly scalar.  Uses 84 Mb of
    disk.
 6. GAMESS CI transition, O2+, 60 AO basis, 504 CSF's.  Vectorizable.
    Uses 43 Mb of disk.
 7. GAMESS MCSCF OHBr, with 49 AO's and 110 CSF's, requiring 55 Mb of
    disk.  Vectorizable.
 8. GAMESS GVB-PP, SnC_5H_6, with 96 AO's and 6 CSF's.  Uses 105 Mb of
    disk.
10. GAMESS ROHF gradient calculation on P_2H_4+, with a 56 AO basis
    set, requiring 14 Mb of disk.
12. GAMESS RHF + gradient, SbC_4H_4NO_2, with 110 AO's, requiring
    111 Mb of disk.  Mostly scalar.
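[The parenthesized ratios in the table appear to be derived as total
(user + system) time divided by the Cray YMP's total for the same test.
A minimal sketch in Python, using figures from the table; the function
name is ours, not part of the benchmark codes:]

```python
# Sketch of how the parenthesized ratios appear to be derived:
# total (user + system) seconds for a machine, divided by the
# Cray YMP's total for the same test.  Data are from the table;
# the function name is hypothetical.

def normalized_to_ymp(user, system, ymp_user, ymp_system):
    """Total time relative to the Cray YMP total for the same test."""
    return (user + system) / (ymp_user + ymp_system)

# Test 2 on the SparcStation 1: 260/20 s versus the YMP's 22/2 s.
ratio = normalized_to_ymp(260, 20, 22, 2)
print(round(ratio, 1))   # 11.7, matching the (11.7) entry in the table
```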
-----
Fred Senese, MS 234  (804) 864-4777 | senese ^at^ schug.larc.nasa.gov (128.155.22.47)
Speaking from (but not for) NASA-LaRC, Hampton VA 23665-5225

>From virtual -8 at 8- quantum.larc.nasa.gov Fri May 10 08:42:56 1991
Date: Fri, 10 May 91 08:47:30 EDT
To: senese-: at :-quantum.larc.nasa.gov
Subject: dsmc benchmarks
Status: R

>From wilmoth ":at:" abcfd01.larc.nasa.gov Fri May 10 08:27:05 1991
Date: Fri, 10 May 91 08:26:27 -0400
To: virtual- at -quantum.larc.nasa.gov
Subject: Re: hp9000/720_benchmarks
Status: R

> Harris said you didn't get very good performance from the new HP
> machine on your dsmc code.  Our performance varies, but it is
> relatively good.  I'll attach the results in case you haven't seen
> them.  The cases that exercise vectorizable code are probably no
> more than 60% vectorizable at best.  Does your code exercise
> primarily the random number generator and exponentiation, like the
> Metropolis Monte Carlo?  What are your results?

I ran two codes on the HP 9000/720.  One is a 1-D Monte Carlo problem
and the other is a 2-D Monte Carlo benchmark.  Both spend a significant
amount of time in the random number routine, and both have negligible
amounts that can be vectorized.  Here are the results:

1D Rayleigh Problem:
--------------------

CPU Time, s    Machine            Compiler
-----------    -------            --------
   54.5        IBM RS6000/320     xlf -O
   50.6        IRIS 4D/340        f77 -O3
   47.3        Mips Magnum 3000   f77 -O2
   40.9        SparcStation 2     f77 -fast -O3 (f77 v.1.4)
   40.0        Taurus i860        fc860 -OLM (GreenHills 1.8.5)
   27.0        Cray 2             cf77
   26.0        HP 9000/720        f77 -O
   22.0        Cray 2s            cf77
   19.0        Cray YMP           cf77

2D Equilibrium Benchmark:
-------------------------

CPU Time, s    Machine            Compiler
-----------    -------            --------
  145.0        IBM RS6000/320     xlf -O
  100.9        SparcStation 2     f77 -fast -O3 (f77 v.1.4)
   47.5        HP 9000/720        f77 -O

Didier Rault from our group also ran a 3-D code on the HP9000/720 and
on an Iris 4D/320.  He ran the code for a fixed amount of time on both
machines and measured the work.
The HP9000/720 performed about 40 percent more time steps than the
Iris.  While the HP definitely was faster (by 40-100%), it certainly
doesn't come anywhere near the advertised speed on our tests.  That,
combined with the relatively high cost of memory (about $250/MB),
doesn't make it a very strong candidate for switching from
SparcStations.

-Dick-

>From root Fri Nov 22 17:29 EST 1991
Date: Fri, 22 Nov 91 08:14:57 PST
To: chemistry /at\ccl.net
Subject: IBM and HP timing comparisons
Status: R

I am posting in response to Mark Murcko's question about the relative
performance of the IBM RS6000/550 and the HP-9000-730.  We at Chiron
used these two benchmarks to evaluate the machines:

SPASMS is a general molecular dynamics package developed at UCSF as
AMBER5.  SPASMS implements the AMBER force fields.  (Authors:
D. Spellmeyer, W. Swope, E. Evensen, P. Kollman, R. Langridge)

The QCPE version of DGEOM, Jeff Blaney's distance geometry package,
was also tested.  (QCPE #590: Authors: J.M. Blaney, G.M. Crippen,
A. Dearing, J. Scott Dixon, 1984-1990.)

Both benchmarks are included as validation tests in the program
distributions.  I have seen consistent relative speeds between the IBM
550 (or 540) and the SGI 340 for varied input choices for both
programs.

David Spellmeyer

==============
SPASMS timings
==============

Crambin (all atom ff) with 1259 waters (tip3p) in a periodic box.
100 steps at 0.001 ps timestep and 8 Ang cutoff.  All times are for
double precision, non-parallelized code.
                  CPU time   Relative (to SGI-340)
CRAY/Y-MP            83.5    11.5    (1 processor only)
IBM RS/6000 550     268.6     3.60
IBM RS/6000 540     382.4     2.52
IBM RS/6000 530     445.8     2.17
HP 9000 730         264.4     3.65
HP 9000 720         353.7     2.73
SGI 340/GTXB        967.2     1.00   (1 processor only)

=============
DGEOM timings
=============

Cyclosporin test case, 100 structures, fixed random seed (last column
shows speed relative to a single CPU on a SGI4D/320 in double
precision):

              sum    sum    funct            funct    #bits
             Errfn   Dist   Calls   #convg    cpu   Calls/s  prcs  Rel
             -----   ----   -----   ------   -----  -------  ----  ---
CRAY/XMP      209    2846   98223     36       623 s  157.6   64   4.9
HP9000/730    195    2731   97949     49      1055     92.8   64   2.9
HP9000/720    195    2731   97949     49      1452     67.5   64   2.1
IBM6000/550   202    2803  100691     43      1313     76.6   64   2.4
IBM6000/540   202    2803  100691     43      1852     54.3   64   1.6
IBM6000/530   202    2803  100691     43      2418     41.6   64   1.3
IBM6000/530   203    2815   99735     34      2593     38.5   32   1.2
IBM6000/520   202    2803  100691     43      3094     32.5   64   1.0
IBM6000/520   203    2815   99735     34      3258     30.6   32   0.9
SGI4D/320*    202    2794   97097     43      3053     31.8   64   1.0
SGI4D/240*    202    2794   97097     43      3807     25.5   64   0.8
SGI4D/240*    210    2858   98731     36      2820     35.0   32   1.1
VAX/8800*     191    2723  100050     47     18040      5.5   64   0.2
VAX/8800*     215    2872   98668     32     11559      8.5   32   0.3

CRAY is an X-MP running COS; DGEOM was optimized specifically for the
CRAY with a special vectorized routine.
* 1 CPU

This example is included as a standard example in the QCPE release.
Reference: "Calculating three-dimensional molecular structure from
atom-atom distance information: cyclosporin-A", J. Lautz, H. Kessler,
J.M. Blaney, R.M. Scheek, W.F. van Gunsteren, Int. J. Peptide Protein
Res., 33, 281-288 (1989).

==============================================================================

Neither program was tuned specially for the HP or the IBM.  However,
SPASMS was developed at least partly on IBM equipment through a joint
study with the IBM Palo Alto Science Center.  Info on the specific
configurations for each machine listed in the tables above is not
available.
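[The two derived columns in the DGEOM table can be checked directly:
Calls/s is the function-call count divided by CPU seconds, and Rel is
that rate divided by the SGI4D/320 single-CPU, 64-bit rate.  A small
Python sketch using figures from the table; the function name is ours:]

```python
# Sketch of the derived columns in the DGEOM table: Calls/s is funct
# Calls / cpu seconds; Rel is that rate over the SGI4D/320 baseline.
# Data are from the table; the function name is hypothetical.

def calls_per_sec(funct_calls, cpu_seconds):
    """Function-call throughput in calls per second."""
    return funct_calls / cpu_seconds

SGI4D_320_RATE = calls_per_sec(97097, 3053)   # the 31.8 Calls/s baseline

# HP9000/730 row: 97949 function calls in 1055 s of CPU time.
hp730_rate = calls_per_sec(97949, 1055)
print(round(hp730_rate, 1))                   # 92.8, the Calls/s column
print(round(hp730_rate / SGI4D_320_RATE, 1))  # 2.9, the Rel column
```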
==============================================================================

>From root Tue Oct 29 14:52 EST 1991
Date: Tue, 29 Oct 1991 14:46:15 -0500
To: virtual %-% at %-% schug.larc.nasa.gov
Status: R

>From news.larc.nasa.gov!elroy.jpl.nasa.gov!sdd.hp.com!cs.utexas.edu!sun-barr!ccut!s.u-tokyo!riksun!rik835!ichihara Tue Oct 29 13:15:25 EST 1991
Article: 5857 of comp.sys.hp
Xref: news.larc.nasa.gov comp.unix.aix:7222 comp.sys.hp:5857
Path: news.larc.nasa.gov!elroy.jpl.nasa.gov!sdd.hp.com!cs.utexas.edu!sun-barr!ccut!s.u-tokyo!riksun!rik835!ichihara
>From: ichihara:~at~:rik835.riken.go.jp (Takashi Ichihara)
Newsgroups: comp.unix.aix,comp.sys.hp
Subject: HP/9000 vs RS/6000
Message-ID: <1991Oct29.191753.1 -A_T- rik835.riken.go.jp>
Date: 29 Oct 91 10:17:53 GMT
Sender: news # - at - # riksun.riken.go.jp
Organization: RIKEN (The Institute of Physical and Chemical Research)
              Wako, Saitama, 351-01, Japan
Lines: 115
Nntp-Posting-Host: rikvs2

Performance evaluation of ** numerical calculations ** on the recent
RISC workstations, VAX/VMS machines, and mainframes, using some typical
Fortran programs for nuclear physics research.

NB: The following data are only some examples of performance
evaluations; they depend strongly on the application programs and the
environment (workload level, real memory size, program size, version of
the OS and Fortran compiler, compiler options, linker options, etc.)

Ver 2.1  29-Oct-1991  by Takashi Ichihara (RIKEN)

(1) Result of some benchmark tests measured at RIKEN.
--------------------------------------------------------------------------
                | Linpack (*1)      Linpack (*1)      Dhrystone V1.1 (*2)
                | Single Precision  Double Precision  (MIPS) (dhrystones/s)
----------------+---------------------------------------------------------
(RISC UNIX)     |
HP 9000/730     | 19.5  MFLOPS      13.7  MFLOPS      74.0 MIPS (130200)
PowerSt550(3006)| 21.3  MFLOPS      18.7  MFLOPS      62.3 MIPS (109650)
PowerSt530(3006)| 11.5  MFLOPS      11.2  MFLOPS      37.5 MIPS ( 65900)
DEC St.5000/200 |  6.2  MFLOPS       3.2  MFLOPS      22.9 MIPS ( 40400)
SG IRIS 4D/220  |  6.4  MFLOPS       3.8  MFLOPS      19.5 MIPS ( 34300)
SparcStation 2  |  4.4  MFLOPS       2.7  MFLOPS      21.3 MIPS ( 37400)
SparcStation 330|  2.5  MFLOPS       1.47 MFLOPS      14.4 MIPS ( 25300)
SparcStation 1  |  1.9  MFLOPS       1.1  MFLOPS      10.8 MIPS ( 18900)
                |
(VAX/VMS)       |
VAX-6510        |  3.09 MFLOPS       2.14 MFLOPS      19.6 MIPS ( 34500)
VAX-6410        |  1.68 MFLOPS       1.12 MFLOPS      10.6 MIPS ( 18700)
VAX St.3100/M76 |  1.52 MFLOPS       0.88 MFLOPS      10.5 MIPS ( 18500)
VAX St.3100/M38 |  0.90 MFLOPS       0.54 MFLOPS       5.1 MIPS (  8900)
VAX-8250        |  0.27 MFLOPS       0.16 MFLOPS       1.8 MIPS (  3100)
Micro VAX II    |  0.17 MFLOPS       0.12 MFLOPS       1.0 MIPS (  1800)
                |
FACOM M780 (MSP)| 19.7  MFLOPS      14.9  MFLOPS      (not measured)
FACOM M380 (MSP)|  8.0  MFLOPS       7.6  MFLOPS      (not measured)
----------------------------------------------------------------------------

(2) Result of the performance evaluation by some Fortran programs for
    numerical calculations of nuclear physics, measured at RIKEN.

------------------------------------------------------------------------------
                | CPU time in seconds (*3)   | Relative performance (*4)|
                |                            |                     (*5) |
                |CALC-1 CALC-2 CALC-3 CALC-4 |CALC-1 CALC-2 CALC-3 CALC-4  Avr
----------------+----------------------------+--------------------------+-----
(RISC UNIX)     |                            |                          |
HP 9000/730     |  4.8    5.5    0.9   15.6  | 93.   101.   621.   202. |185.
PowerSt550(3006)|  2.66   2.40   2.19  28.8  |168.   232.   252.   109.7|181.
PowerSt530(3006)|  4.53   4.13   3.71  48.0  |101.   135.   149.    66.2|122.
DEC St.5000/200 | 18.0   25.2    5.9   61.4  | 24.7   22.1  93.5   51.4 | 40.2
SG IRIS 4D/220  | 19.3   26.0    6.3   60.3  | 23.1   21.4  87.6   52.4 | 38.8
SparcStation 2  | 18.2   33.4    6.2   56.4  | 24.5   16.6  89.0   56.0 | 37.7
SparcStation 330| 32.5   51.8    9.8  104.5  | 13.7   10.7  56.3   30.2 | 22.3
SparcStation 1  | 47.2   89.6   13.4  168.8  |  9.4    6.2  41.2   18.7 | 14.6
                |                            |                          |
(VAX/VMS)       |                            |                          |
VAX-6510        | 17.74  28.14   7.99  69.8  | 25.1   19.7  69.1   45.3 | 35.3
VAX-6410        | 31.36  48.26  14.27 122.8  | 14.2   11.5  38.6   25.7 | 20.1
VAX St.3100/M76 | 34.99  52.57  16.78 141.6  | 12.7   10.6  30.4   22.3 | 17.3
VAX St.3100/M38 | 62.31 105.96  29.71 254.4  |  7.2    5.2  18.5   12.4 |  9.6
VAX-8250        |168.77 320.33  87.75 754.4  |  2.6    1.7   6.3    4.2 |  3.3
Micro VAX II    |320.40 485.16 138.65 1049.8 |  1.4    1.1   4.0    3.0 |  2.1
                |                            |                          |
FACOM M780 (MSP)|  2.14   2.74   2.72  16.8  |208.   203.  203.   188.  |200.
FACOM M380 (MSP)|  4.46   5.56   5.52  31.6  |100.   100.  100.   100.  |100.
------------------------------------------------------------------------------

[Main part of the calculation]

CALC-1,2,3,4 are CPU-bound application programs (I/O is negligibly
small).

CALC-1  simple single-precision (SP) floating point calculation
CALC-2  simple double-precision (DP) floating point calculation
        (source code is identical to CALC-1 except that the variables
        are DP)
CALC-3  matrix calculation (many subroutine calls + single-precision
        FP calc.)
CALC-4  many nested conditional branches + double-precision FP calc.

Fortran compiler options:

  RS/6000 530,550       xlf -O
  HP 9000/730           f77 +O3,-Wl,-a,archive (+OS +OP4,-WP,..)
  DEC Station 5000/200  f77 -O4 (-static)
  IRIS 4d/220           f77 -O3
  Sparc Station 1,2     f77 -O4 (-Dalign -Bstatic)
  Sparc Station 330     f77 -O
  VAX/VMS               fort/opt
  FACOM MSP             FORT7CLG,OPT=3

Version of OS and Fortran compiler:

  IBM Power Station     AIX 3.1(3006)
  HP 9000/730           HP-UX A.B8.05, Fortran/9000
  DEC Station           Ultrix 4.1, Fortran 2.0 (200)
  IRIS                  IRIX System V Release 3.3
  Sparc Station 1,2     SunOS 4.1.1, Fortran 1.4
  Sparc Station 330     SunOS 4.0.3, Fortran 1.2
  VAX/VMS               VAX/VMS 5.4 or VAX/VMS 5.3, Fortran V5.5-98

(*1) Linpack benchmark test for a 100x100 system of linear equations.
(*2) C compiler option in compiling the Dhrystone V1.1 program:
     UNIX machine: cc -O -DREG=register (+ something)
     VAX/VMS:      cc/opt
(*3) CPU time for executing the load module.
(*4) Relative performance (FACOM M380 = 100).
(*5) Geometric average of the relative performance for the 4
     calculations.

---------------------------------------------------------------------------
Takashi Ichihara
RIKEN (The Institute of Physical and Chemical Research)
2-1, Hirosawa, Wako, Saitama, 351-01, Japan
(Internet) Ichihara(+ at +)rik835.riken.go.jp
(Hepnet)   RIK835::Ichihara (41911::Ichihara)

From: IN%"srahman-!at!-alleg.edu" 15-MAY-1992 13:58:54.90
To: IN%"PAMELA -A_T- CC.BRYNMAWR.EDU" "Lt. Saavik"
CC: IN%"srahman -8 at 8- alleg.edu"
Subj: RE: benchmarks for g92

Date: Fri, 15 May 92 13:37:40 EDT
From: srahman[ AT ]alleg.edu
Subject: RE: benchmarks for g92
To: "Lt. Saavik"

From: IN%"fredvc#* at *#esvax.dnet.dupont.com" 15-MAY-1992 14:24:55.78
To: IN%"'pamela- at -cc.brynmawr.edu'- at -esds01.dnet.dupont.com"

Date: Fri, 15 May 92 13:16:24 -0400
From: fredvc %-% at %-% esvax.dnet.dupont.com
Subject: RE: benchmarks for g92
To: "pamela |-at-| cc.brynmawr.edu" |-at-| esds01.dnet.dupont.com

From: IN%"FOX \\at// CMCHEM.CHEM.CMU.EDU" 15-MAY-1992 16:30:51.62
To: IN%"pamela:~at~:CC.BRYNMAWR.EDU"

Date: Fri, 15 May 1992 16:30:49 EDT
From: FOX |-at-| CMCHEM.CHEM.CMU.EDU
Subject: G92 benchmarks
To: pamela(+ at +)CC.BRYNMAWR.EDU

From: IN%"JWTESCH- at -DFWVM04.VNET.IBM.COM" 15-MAY-1992 18:06:01.73
To: IN%"pamela $#at#$ CC.BRYNMAWR.EDU"

Date: Fri, 15 May 92 16:35:12 CST
From: JWTESCH(+ at +)DFWVM04.VNET.IBM.COM
Subject: Gaussian
To: pamela {*at*} CC.BRYNMAWR.EDU

From: IN%"jle-!at!-world.std.com" 15-MAY-1992 20:09:41.43
To: IN%"PAMELA /at\CC.BRYNMAWR.EDU"

Date: Fri, 15 May 92 20:09:25 -0400
From: jle $#at#$ world.std.com (Joe M Leonard)
Subject: RE: benchmarks for g92
To: PAMELA /at\CC.BRYNMAWR.EDU

From: IN%"adfernan /at\undergrad.math.waterloo.edu" "Andrew D. Fernandes" 22-MAY-1992 13:25:04.48
To: IN%"PAMELA -8 at 8- CC.BRYNMAWR.EDU"

Date: Fri, 22 May 1992 13:25:42 -0400
From: "Andrew D. Fernandes"
Subject: benchmarks
To: PAMELA&$at$&CC.BRYNMAWR.EDU

Date: Thu, 4 Jun 1992 20:49 EST
From: Rosalyn Strauss
Subject: Geometry of an aromatic hydroxylamine nitrogen - for AMBER
To: chemistry \\at// ccl.net
Status: R

In preparing a file for AMBER, I am encountering a problem deciding
whether to designate a particular nitrogen as planar or tetrahedral.
The nitrogen is directly bonded to an O, an H, and a naphthyl group.
Has anyone done calculations on compounds analogous to this one (i.e.,
with any aromatic group in place of the naphthyl)?  I have yet to come
across the geometry of such a structure in my search of the literature.

Thanks,
Rosalyn Strauss
STRSSROS.,at,.ACFcluster.NYU.edu