
Saddle Points

GAMESS locates minima by geometry optimization, as RUNTYP=OPTIMIZE, and transition states by saddle point searches, as RUNTYP=SADPOINT. In many ways these are similar, and in fact nearly identical FORTRAN code is used for both. The term "geometry search" is used here to describe features which are common to both procedures. The input to control both RUNTYPs is found in the $STATPT group.
As will be noted in the symmetry section below, an OPTIMIZE run does not always find a minimum, and a SADPOINT run may not find a transition state, even though the gradient is brought to zero. You can prove you have located a minimum or saddle point only by examining the local curvatures of the potential energy surface. This can be done by following the geometry search with a RUNTYP=HESSIAN job, which should be a matter of routine.

Finding minima is relatively easy. There are large tables of bond lengths and angles, so guessing starting geometries is pretty straightforward. Very nasty cases may require computation of an exact hessian, but the location of most minima is straightforward. In contrast, finding saddle points is a black art. The diagonal guess hessian will never work, so you must provide a computed one. The hessian should be computed at your best guess as to what the transition state (T.S.) should be. It is safer to do this in two steps as outlined above, rather than HESS=CALC. This lets you verify you have guessed a structure with one and only one negative curvature. Guessing a good trial structure is the hardest part of a RUNTYP=SADPOINT!
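As an illustration of this two-step procedure, the pair of input skeletons below is a minimal sketch only: the title, symmetry, coordinates, and step counts are placeholders, and groups such as $BASIS that your system needs are omitted.

 !  step 1: compute the hessian at the guessed transition state structure
  $CONTRL SCFTYP=RHF RUNTYP=HESSIAN $END
  $DATA
 guessed T.S. structure (placeholder title)
 C1
    ...guessed T.S. coordinates go here...
  $END

 !  step 2: the saddle point search itself, started from the same structure,
 !          reading the hessian punched by step 1
  $CONTRL SCFTYP=RHF RUNTYP=SADPOINT $END
  $STATPT HESS=READ NSTEP=50 $END
  $DATA
 guessed T.S. structure (placeholder title)
 C1
    ...same coordinates as in step 1...
  $END
  $HESS
    ...copied from the PUNCH file of step 1...
  $END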
This point is worth reiterating: even with sophisticated step size control such as is offered by the QA/TRIM or RFO methods, it is in general very difficult to move correctly from a region with incorrect curvatures towards a saddle point. Even procedures such as CONOPT or RUNTYP=GRADEXTR will not replace your own chemical intuition about where saddle points may be located.
The RUNTYP=HESSIAN's normal coordinate analysis is rigorously valid only at stationary points on the surface. This means the frequencies from the hessian at your trial geometry are untrustworthy; in particular, the six "zero" frequencies corresponding to translational and rotational (T&R) degrees of freedom will usually be 300-500 cm**-1, and possibly imaginary.
The Sayvetz conditions on the printout will help you distinguish the T&R "contaminants" from the real vibrational modes. If you have defined a $ZMAT, the PURIFY option within $STATPT will help zap out these T&R contaminants.
If the hessian at your assumed geometry does not have one and only one imaginary frequency (taking into account that the "zero" frequencies can sometimes be 300i!), then it will probably be difficult to find the saddle point. Instead you need to compute a hessian at a better guess for the initial geometry, or read about mode following below.
If you need to restart your run, do so with the coordinates which have the smallest RMS gradient. Note that the energy does not necessarily have to decrease in a SADPOINT run, in contrast to an OPTIMIZE run. It is often necessary to do several restarts, involving recomputation of the hessian, before actually locating the saddle point.
Assuming you do find the T.S., it is always a good idea to recompute the hessian at this structure. As
described in the discussion of symmetry, only totally symmetric vibrational modes are probed in a geometry search. Thus it is fairly common to find that at your "T.S." there is a second imaginary frequency, which corresponds to a non-totally symmetric vibration. This means you haven't found the correct T.S., and are back to the drawing board. The proper procedure is to lower the point group symmetry by distorting along the symmetry breaking "extra" imaginary mode, by a reasonable amount. Don't be overly timid in the amount of distortion, or the next run will come back to the invalid structure.
The real trick here is to find a good guess for the transition structure. The closer you are, the better. It is often difficult to guess these structures. One way around this is to compute a linear least motion (LLM) path. This connects the reactant structure to the product structure by linearly varying each coordinate. If you generate about ten structures intermediate to reactants and products, and compute the energy at each point, you will in general find that the
energy first goes up, and then down. The maximum energy structure is a "good" guess for the true T.S. structure. Actually, the success of this method depends on how curved the reaction path is.
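Written out, the interpolation is simply the following (a sketch, where q is any nuclear coordinate, N is the number of intermediate structures, and k indexes them):

   q(k)  =  q(reactants)  +  [k/(N+1)] * [ q(products) - q(reactants) ],    k = 1, 2, ..., N

With N of about 10, the energy is computed at each interpolated structure, and the highest-energy one is taken as the initial guess for the T.S.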
A particularly good paper on the symmetry which a saddle point (and reaction path) can possess is by
P.Pechukas, J.Chem.Phys. 64, 1516-1521 (1976)
* * * Mode Following * * *
In certain circumstances, METHOD=RFO and QA can walk from a region of all positive curvatures (i.e. near a minimum) to a transition state. The criterion for whether this will work is that the mode being followed should be only weakly coupled to other close-lying Hessian modes. In particular, the coupling to lower modes should be almost zero. In practice this means that the mode being followed should be the lowest of a given symmetry, or spatially far away from lower modes (for example, rotation of methyl groups at different ends of the molecule). It is certainly possible to also follow modes which do not obey these criteria, but the resulting walk (and possibly the TS location) will be extremely sensitive to small details such as the stepsize.
This sensitivity also explains why TS searches often fail, even when starting in a region where the Hessian has the required one negative eigenvalue. If the TS mode is strongly coupled to other modes, the direction of the mode is incorrect, and the maximization of the energy along that direction is not really what you want (but is what you get).
Mode following is really not a substitute for the ability to intuit regions of the PES with a single local negative curvature. When you start near a minimum, it matters a great deal which side of the minimum you start from, as the direction of the search depends on the sign of the gradient. We strongly urge that you read the papers by Frank Jensen and Jon Baker mentioned above before trying to use IFOLOW, and see also Figure 3 of
C.J.Tsai, K.D.Jordan, J.Phys.Chem. 97, 11227-11237 (1993), which is quite illuminating on the sensitivity of mode following to the initial geometry point.
Note that GAMESS retains all degrees of freedom in its hessian, and thus there is no reason to suppose the lowest mode is totally symmetric. Remember to lower the symmetry in the input deck if you want to follow non-symmetric modes. You can get a printout of the modes in internal coordinate space by an EXETYP=CHECK run, which will help you decide on the value of IFOLOW.
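For example, a preliminary check run along the following lines prints the modes so that IFOLOW can be chosen; this is only a sketch, with the NZVAR value, $ZMAT contents, and geometry as placeholders for your own molecule. Once IFOLOW is selected, EXETYP is changed back to RUN for the actual search.

  $CONTRL SCFTYP=RHF RUNTYP=SADPOINT EXETYP=CHECK NZVAR=12 $END
  $STATPT METHOD=RFO HESS=READ IFOLOW=1 $END
  $ZMAT   IZMAT(1)= ...internal coordinate definitions... $END
  $DATA
 starting structure (placeholder title)
 C1
    ...coordinates...
  $END
  $HESS
    ...hessian at the starting structure, from its PUNCH file...
  $END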
* * *
CONOPT is a different sort of saddle point search procedure. This "CONstrained OPTimization" may be considered another mode following method. The idea is to start from a minimum, and then perform a series of optimizations on hyperspheres of increasingly larger radii. The initial step is taken along one of the Hessian modes, chosen by IFOLOW, and the geometry is optimized subject to the constraint that the distance to the minimum is constant.
The convergence criterion for the gradient norm perpendicular to the constraint is taken as 10*OPTTOL, and that for the corresponding steplength as 100*OPTTOL.
After such a hypersphere optimization has converged, a step is taken along the line connecting the two previous optimized points to get an estimate of the next hypersphere geometry. The stepsize is DXMAX, and the radius of successive hyperspheres is thus increased by an amount close (but not equal) to DXMAX. Once the pure NR step size falls below DXMAX/2 or 0.10 (whichever is larger), the algorithm switches to a straight NR iterate to (hopefully)
converge on the stationary point.
The current implementation always conducts the search in cartesian coordinates, but internal coordinates may be printed by the usual specification of NZVAR and ZMAT. At present there is no restart option programmed.
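A minimal sketch of the groups most relevant to a CONOPT search might look as follows, assuming the starting structure (a minimum) and its hessian are supplied in $DATA and $HESS; the numerical values are only illustrative.

  $CONTRL SCFTYP=RHF RUNTYP=SADPOINT $END
  $STATPT METHOD=CONOPT IFOLOW=1 HESS=READ DXMAX=0.05 OPTTOL=0.0001 $END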
CONOPT is based on the following papers, but the actual implementation is the modified equations presented in Frank Jensen's paper mentioned above.
Y. Abashkin, N. Russo, J.Chem.Phys. 100, 4477-4483 (1994).
Y. Abashkin, N. Russo, M. Toscano, Int.J.Quant.Chem. 52, 695-704 (1994).
There is little experience with how this method works in practice; experiment with it at your own risk!
IRC Methods
The Intrinsic Reaction Coordinate (IRC) is defined as the minimum energy path connecting the reactants to products via the transition state. In practice, the IRC is found by first locating the transition state for the reaction. The IRC is then found in halves, going forward and backwards from the saddle point, down the steepest descent path in mass weighted Cartesian coordinates. This is accomplished by numerical integration of the IRC equations, by a variety of
methods to be described below.
The IRC is becoming an important part of polyatomic dynamics research, as it is hoped that only knowledge of the PES in the vicinity of the IRC is needed for prediction of reaction rates, at least at threshold energies. The IRC has a number of uses for electronic structure purposes as well. These include the proof that a certain transition structure does indeed connect a particular set of reactants and products, as the structure and imaginary frequency normal mode at the saddle point do not always unambiguously identify the reactants and products. The study of the electronic and geometric structure along the IRC is also of interest. For example, one can obtain localized orbitals along the path to determine when bonds break or form. The accuracy to which the IRC is determined is dictated by the use one intends for it.
Dynamical calculations require a very accurate determination of the path, as derivative information (second derivatives of the PES at various IRC points, and path curvature) is required later. Thus, a sophisticated integration method (such as AMPC4 or RK4), and small step sizes (STRIDE=0.05, 0.01, or even smaller) may be needed. In addition to this, care should be taken to locate the transition state carefully (perhaps decreasing OPTTOL by a factor
of 10), and in the initiation of the IRC run. The latter might require a hessian matrix obtained by double differencing; certainly the hessian should be PURIFY'd. Note also that EVIB must be chosen carefully, as described below.
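For such a dynamics-quality path, the relevant settings might be tightened along the following lines; this is a sketch with illustrative values, and the $FORCE keywords shown (requesting a double-differenced, purified hessian) are assumptions to be checked against the input description.

 !  tighter saddle point location (OPTTOL reduced tenfold)
  $STATPT OPTTOL=0.00001 HESS=READ $END
 !  hessian used to initiate the IRC: double differencing, purified
  $FORCE  NVIB=2 PURIFY=.TRUE. $END
 !  small steps and a high-order integrator in the IRC run itself
  $IRC    PACE=RK4 STRIDE=0.05 $END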
On the other hand, identification of reactants and products allows for much larger step sizes, and cruder integration methods. In this type of IRC one might want to be careful in leaving the saddle point (perhaps STRIDE should be reduced to 0.10 or 0.05 for the first few steps away from the transition state), but once a few points have been taken, larger step sizes can be employed. In general, the defaults in the $IRC group are set up for this latter, cruder quality IRC. The STRIDE value for the GS2 method can usually be safely larger than for other methods, no matter what your interest in accuracy is.
The simplest method of determining an IRC is linear gradient following, PACE=LINEAR. This method is also known as Euler's method. If you are employing PACE=LINEAR, you can select "stabilization" of the reaction path by the Ishida, Morokuma, Komornicki method. This type of corrector has no apparent mathematical basis, but works rather well since the bisector usually intersects the reaction path at right angles (for small step sizes). The ELBOW variable allows for a method intermediate to LINEAR and stabilized LINEAR, in that the stabilization will be
skipped if the gradients at the original IRC point, and at the result of a linear prediction step form an angle greater than ELBOW. Set ELBOW=180 to always perform the stabilization.
A closely related method is PACE=QUAD, which fits a quadratic polynomial to the gradient at the current and immediately previous IRC point to predict the next point. This pace has the same computational requirement as LINEAR, and is slightly more accurate due to the reuse of the old gradient. However, stabilization is not possible for this pace, thus a stabilized LINEAR path is usually more accurate than QUAD.
Two rather more sophisticated methods for integrating the IRC equations are the fourth order Adams-Moulton predictor-corrector (PACE=AMPC4) and fourth order Runge-Kutta (PACE=RK4). AMPC4 takes a step towards the next IRC point (prediction), and based on the gradient found at this point (in the near vicinity of the next IRC point) obtains a modified step to the desired IRC point (correction). AMPC4 uses variable step sizes, based on the input STRIDE. RK4 takes several steps part way toward the next IRC point, and uses the gradient at these points to predict the next IRC point. RK4 is the most accurate integration method implemented in GAMESS, and is also the most time consuming.
The Gonzalez-Schlegel 2nd order method finds the next IRC point by a constrained optimization on the surface of a hypersphere, centered at 1/2 STRIDE along the gradient vector leading from the previous IRC point. By construction, the reaction path between two successive IRC points is thus an arc of a circle tangent to the two gradient vectors. The algorithm is much more robust for large steps than the other methods, so it has been chosen as the default method. Thus, the default for STRIDE is too large for the other methods. The number of energy and gradient evaluations needed to find the next point varies with the difficulty of the constrained optimization, but is normally not very many. Be sure to provide the updated hessian from the previous run when restarting PACE=GS2.
The number of wavefunction evaluations and energy gradients needed to jump from one point on the IRC to the next is summarized in the following table:
   PACE                # energies   # gradients
   LINEAR                   1            1
   stabilized LINEAR        3            2
   QUAD                     1            1    (+ reuse of historical gradient)
   AMPC4                    2            2    (see note)
   RK4                      4            4
   GS2                     2-4          2-4   (equal numbers)
Note that the AMPC4 method sometimes does more than one correction step, with each such correction adding one more energy and gradient to the calculation. You get what you pay for in IRC calculations: the more energies and gradients which are used, the more accurate the path found.
A description of these methods, as well as some others that were found to be not as good, is given by Kim Baldridge and Lisa Pederson, Pi Mu Epsilon Journal 9, 513-521 (1993).
* * *
All methods are initiated by jumping from the saddle point, parallel to the normal mode (CMODE) which has an imaginary frequency. The jump taken is designed to lower the energy by an amount EVIB. The actual distance taken is thus a function of the imaginary frequency, as a smaller FREQ will produce a larger initial jump. You can simply provide a $HESS group instead of CMODE and FREQ, which involves less typing. To find out the actual step taken for a given EVIB, use EXETYP=CHECK. The direction of the jump (towards reactants or products) is governed by FORWRD. Note that if you have decided to use small step sizes, you must employ a smaller EVIB to ensure a small first step. The GS2 method begins by following the normal mode by one half of STRIDE, and then performing a hypersphere minimization about that point, so EVIB is irrelevant to this PACE.
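Putting the pieces together, a typical input skeleton for launching an IRC from a converged saddle point might be the following; this is a sketch, with the geometry and NPOINT as placeholders, and since the default GS2 pace is used, EVIB is not needed.

  $CONTRL SCFTYP=RHF RUNTYP=IRC $END
  $IRC    SADDLE=.TRUE. FORWRD=.TRUE. NPOINT=10 $END
  $DATA
 saddle point structure (placeholder title), in the symmetry of the reaction path
 C1
    ...saddle point coordinates...
  $END
  $HESS
    ...hessian computed at the saddle point, copied from its PUNCH file...
  $END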
The only way to prove that a properly converged IRC has been obtained is to regenerate it with a smaller step size, and check that the IRC is unchanged. Again, note that the care with which an IRC must be obtained is highly dependent on what use it is intended for.
Some key IRC references are:
K.Ishida, K.Morokuma, A.Komornicki, J.Chem.Phys. 66, 2153-2156 (1977)
K.Muller, Angew.Chem. Int.Ed.Engl. 19, 1-13 (1980)
M.W.Schmidt, M.S.Gordon, M.Dupuis, J.Am.Chem.Soc. 107, 2585-2589 (1985)
B.C.Garrett, M.J.Redmon, R.Steckler, D.G.Truhlar, K.K.Baldridge, D.Bartol, M.W.Schmidt, M.S.Gordon, J.Phys.Chem. 92, 1476-1488 (1988)
K.K.Baldridge, M.S.Gordon, R.Steckler, D.G.Truhlar, J.Phys.Chem. 93, 5107-5119 (1989)
C.Gonzalez, H.B.Schlegel, J.Chem.Phys. 90, 2154-2161 (1989)
The IRC discussion closes with some practical tips:
The $IRC group has a confusing array of variables, but fortunately very little thought need be given to most of them. An IRC run is restarted by moving the coordinates of the next predicted IRC point into $DATA, and inserting the new $IRC group into your input file. You must select the desired value for NPOINT. Thus, only the first job which initiates the IRC requires much thought about $IRC.

The symmetry specified in the $DATA deck should be the symmetry of the reaction path. If a saddle point happens to have higher symmetry, use only the lower symmetry in the $DATA deck when initiating the IRC. The reaction path will have a lower symmetry than the saddle point whenever the normal mode with imaginary frequency is not totally symmetric. Be careful that the order and orientation of the atoms correspond to those used in the run which generated the hessian matrix.
If you wish to follow an IRC for a different isotope, use the $MASS group. If you wish to follow the IRC in regular Cartesian coordinates, just enter unit masses for each atom. Note that CMODE and FREQ are a function of the atomic masses, so either regenerate FREQ and CMODE, or more simply, provide the correct $HESS group.
Gradient Extremals
This section of the manual, as well as the source code to trace gradient extremals, was written by Frank Jensen of Odense University.
A Gradient Extremal (GE) curve consists of points where the gradient norm on a constant energy surface is stationary. This is equivalent to the condition that the gradient is an eigenvector of the Hessian. Such GE curves radiate along all normal modes from a stationary point, and the GE leaving along the lowest normal mode from a minimum is the gentlest ascent curve. This is not the same as the IRC curve connecting a minimum and a TS, but may in some cases be close.
GEs may be divided into three groups: those leading to dissociation, those leading to atoms colliding, and those which connect stationary points. The latter class allows a determination of many (all?) stationary points on a PES by tracing out all the GEs. Following GEs is thus a semi-systematic way of mapping out stationary points. The disadvantages are:
i)   There are many (but finitely many!) GEs for a large molecule.
ii)  Following GEs is computationally expensive.
iii) There is no control over what type of stationary point (if any) a GE will lead to.
Normally one is only interested in minima and TSs, but many higher order saddle points will also be found. Furthermore, it appears necessary to follow GEs radiating not only from minima but also from TSs and second (and possibly higher) order saddle points in order to find all the TSs.
A rather complete map of the extremals for the H2CO potential surface is available in a paper which explains the points just raised in greater detail:
K.Bondensgaard, F.Jensen, J.Chem.Phys. 104, 8025-8031 (1996).
An earlier paper gives some of the properties of GEs:
D.K.Hoffman, R.S.Nord, K.Ruedenberg, Theor.Chim.Acta 69, 265-279 (1986).
There are two GE algorithms in GAMESS, one due to Sun and Ruedenberg (METHOD=SR), which has been extended to include the capability of locating bifurcation points and turning points, and another due to Jorgensen, Jensen, and Helgaker (METHOD=JJH):
J. Sun, K. Ruedenberg, J.Chem.Phys. 98, 9707-9714 (1993)
P. Jorgensen, H. J. Aa. Jensen, T. Helgaker, Theor.Chim.Acta 73, 55 (1988).
The Sun and Ruedenberg method consists of a predictor step taken along the tangent to the GE curve, followed by one or more corrector steps to bring the geometry back to the GE.
Construction of the GE tangent and the corrector step requires elements of the third derivative of the energy, which is obtained by a numerical differentiation of two Hessians. This puts some limitations on which systems the GE algorithm can be used for. First, the numerical differentiation of the Hessian to produce third derivatives means that the Hessian should be calculated by analytical methods, thus only those types of wavefunctions where this is possible can be used. Second, each predictor/corrector step requires at least two Hessians, but often more. Maybe 20-50 such steps are necessary for tracing a GE from one stationary point to the next. A systematic study of all the GEs radiating from a stationary point increases the work by a factor of ~2*(3N-6). One should thus be prepared to invest at least hundreds, and more likely thousands, of Hessian calculations. In other words: small systems, small basis sets, and simple wavefunctions.
The Jorgensen, Jensen, and Helgaker method consists of taking a step in the direction of the chosen Hessian eigenvector, and then a pure NR step in the perpendicular modes. This requires (only) one Hessian calculation for each step. It is not suitable for following GEs where the GE tangent forms a large angle with the gradient, and it is incapable of locating GE bifurcations.
Although experience is limited at present, the JJH method does not appear to be suitable for following GEs in general (at least not in the current implementation). Experiment with it at your own risk!
The flow of the SR algorithm is as follows: A predictor geometry is produced, either by jumping away from a stationary point, or from a step in the tangent direction from the previous point on the GE. At the predictor geometry, we need the gradient, the Hessian, and the third derivative in the gradient direction. Depending on HSDFDB, this can be done in two ways. If HSDFDB=.TRUE., the gradient is calculated, and two Hessians are calculated at SNUMH distance to each side in the gradient direction. The Hessian at the geometry is formed as the average of the two displaced Hessians. This corresponds to a double-sided differentiation, and is the numerically most stable method for getting the partial third derivative matrix. If HSDFDB=.FALSE., the gradient and Hessian are calculated at the current geometry, and one additional Hessian is calculated at SNUMH distance in the gradient direction. This corresponds to a single-sided differentiation. In both cases, two full Hessian calculations are necessary, but HSDFDB=.TRUE. requires one additional wavefunction and gradient calculation. This is usually a fairly small price compared to two Hessians, and the numerically better double-sided differentiation has therefore been made the default.
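In terms of the displaced Hessians, the two choices amount to the following first-order estimates (a sketch, with g the normalized gradient direction and h = SNUMH):

   HSDFDB=.TRUE.  (double-sided):   dH/dg = [ H(x + h*g) - H(x - h*g) ] / (2h)
                                     H(x) = [ H(x + h*g) + H(x - h*g) ] / 2
   HSDFDB=.FALSE. (single-sided):   dH/dg = [ H(x + h*g) - H(x) ] / h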
Once the gradient, Hessian, and third derivative are available, the corrector step and the new GE tangent are constructed. If the corrector step is below a threshold, a new predictor step is taken along the tangent vector. If the corrector step is larger than the threshold, the correction step is taken, and a new micro iteration is performed. DELCOR thus determines how closely the GE will be followed, and DPRED determines how closely the GE path will be sampled.
The construction of the GE tangent and corrector step involve solution of a set of linear equations, which in matrix notation can be written as Ax=B. The A-matrix is also the second derivative of the gradient norm on the constant energy surface.
After each corrector step, various things are printed to monitor the behavior: the projection of the gradient along the Hessian eigenvectors (the gradient is parallel to an eigenvector on the GE), the projection of the GE tangent along the Hessian eigenvectors, and the overlap of the Hessian eigenvectors with the mode being followed from the previous (optimized) geometry. The sign of these overlaps is not significant; it merely reflects an arbitrary phase of the Hessian eigenvectors.
After the micro iterations have converged, the Hessian eigenvector curvatures are also displayed; these are an indication of the coupling between the normal modes. The number of negative eigenvalues in the A-matrix is denoted the GE index. If it changes, one of the eigenvalues must have passed through zero. Such points may either be GE bifurcations (where two GEs cross) or may just be "turning points", normally where the GE switches from going uphill in energy to downhill, or vice versa. The distinction is made based on the B-element corresponding to the A-matrix eigenvalue = 0. If the B-element = 0, it is a bifurcation, otherwise it is a turning point.
If the GE index changes, a linear interpolation is performed between the last two points to locate the point where the A-matrix is singular, and the corresponding B-element is determined. The linear interpolation points will in general be off the GE, and thus the evaluation of whether the B-element is 0 is not always easy. The program additionally evaluates the two limiting vectors which are solutions to the linear sets of equations; these are also used for testing whether the singular point is a bifurcation point or a turning point.
Very close to a GE bifurcation, the corrector step becomes numerically unstable, but this is rarely a problem in practice. It is a priori expected that GE bifurcations will occur only in symmetric systems, and the crossing GE will break the symmetry. Equivalently, a crossing GE may be encountered when a symmetry element is formed; however, such crossings are much harder to detect, since the GE index does not change, as one of the A-matrix eigenvalues merely touches zero. The program prints a message if the absolute value of an A-matrix eigenvalue reaches a minimum near zero, as such points may indicate the passage of a bifurcation where a higher symmetry GE crosses. Run a movie of the geometries to see if a more symmetric structure is passed during the run.
An estimate of the possible crossing GE direction is made at all points where the A-matrix is singular, and two perturbed geometries in the + and - direction are written out. These may be used as predictor geometries for following a crossing GE. If the singular geometry is a turning point, the + and - geometries are just predictor geometries on the GE being followed.
In any case, a new predictor step can be taken to trace a different GE from the newly discovered singular point, using the direction determined by interpolation from the two end point tangents (the GE tangent cannot be uniquely determined at a bifurcation point). It is not possible to determine what the sign of IFOLOW should be when starting off along a crossing GE at a bifurcation; one will have to try a step to see if it returns to the bifurcation point or not.
In order to determine whether the GE index changes, it is necessary to keep track of the order of the A-matrix eigenvalues. The overlaps between successive eigenvectors are shown as "Alpha mode overlaps".
Things to watch out for:
1) The numerical differentiation to get third derivatives requires more accuracy than usual.
The SCF convergence should be at least 100 times smaller than SNUMH, and preferably better. With the default SNUMH of 10**(-4) the SCF convergence should be at least 10**(-6). Since the last few SCF cycles are inexpensive, it is a good idea to tighten the SCF convergence as much as possible, to maybe 10**(-8) or better. You may also want to increase the integral accuracy by reducing the cutoffs (ITOL and ICUT) and possibly also try
more accurate integrals (INTTYP=HONDO). The CUTOFF in $TRNSFM may also be reduced to produce more accurate Hessians. Don't attempt to use a value for SNUMH below 10**(-6), as you simply can't get enough accuracy. Since experience is limited at present, it is recommended that some test runs are made to learn the sensitivity of these factors for your system. (An input sketch illustrating these settings follows this list.)
2) GEs can be followed in both directions, uphill or downhill. When starting from a stationary point, the direction is implicitly given as away from the stationary point. When starting from a non-stationary point, the "+" and "-" directions (as chosen by the sign of IFOLOW) refer to the gradient direction. The "+" direction is along the gradient (energy increases) and "-" is opposite to the gradient (energy decreases).
3) A switch from one GE to another may be seen when two GEs come close together. This is especially troublesome near bifurcation points where two GEs actually cross. In such cases a switch to a GE with -higher- symmetry may occur without any indication that this has happened, except possibly that a very large GE curvature suddenly shows up. Avoid running the calculation with less symmetry than the system actually has, as this increases the likelihood of such switches occurring. Fix: alter DPRED to avoid having the predictor step close to the crossing GE.
4) "Off track" error message: The Hessian eigenvector which is parallel to the gradient is not the same as the one with the largest overlap to the previous Hessian mode. This usually indicate that a GE switch has occurred (note that a switch may occur without this error message), or a wrong value for IFOLOW when starting from a non-stationary point. Fix: check IFOLOW, if it is correct then reduce DPRED, and possibly also DELCOR.
5) Low overlaps of A-matrix eigenvectors. Small overlaps may give wrong assignments, and wrong conclusions about GE index changes. Fix: reduce DPRED.
6) The interpolation for locating a point where one of the A-matrix eigenvalues is zero fails to converge. Fix: reduce DPRED (and possibly also DELCOR) to get a shorter (and better) interpolation line.
7) The GE index changes by more than 1. A GE switch may have occurred, or more than one GE index change is located between the last and current point. Fix: reduce DPRED to sample the GE path more closely.
8) If SNRMAX is too large the algorithm may try to locate stationary points which are not actually on the GE being followed. Since GEs often pass quite near a stationary point, SNRMAX should only be increased above the default 0.10 after some consideration.
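As an illustration of the accuracy settings suggested in point 1) of this list, the relevant groups might be tightened along these lines (a sketch with illustrative, tighter-than-default values):

  $CONTRL ICUT=11 ITOL=22 INTTYP=HONDO $END
  $SCF    CONV=1.0D-08 $END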

Example: rotation about the O-O bond in H2O2
