Friday, April 5, 2019

Comparison Of Rate Of Convergence Of Iterative Methods Philosophy Essay

The term iterative method refers to a wide range of techniques that use successive approximations to obtain more accurate solutions to a linear system at each step. In numerical analysis an iterative method attempts to solve a problem by finding successive approximations to the solution, starting from an initial guess. This approach is in contrast to direct methods, which attempt to solve the problem by a finite sequence of operations and, in the absence of rounding errors, would deliver an exact solution. Iterative methods are usually the only choice for nonlinear equations. However, iterative methods are often useful even for linear problems involving a large number of variables (sometimes of the order of millions), where direct methods would be prohibitively expensive (and in some cases impossible) even with the best available computing power.

Stationary methods are older and simpler to understand and implement, but usually not as effective. Stationary iterative methods are iterative methods that perform the same operations on the current iteration vectors in every iteration. They solve a linear system with an operator approximating the original one and, based on a measurement of the error in the result, form a correction equation; this process is then repeated. While these methods are simple to derive, implement, and analyze, convergence is only guaranteed for a limited class of matrices. Examples of stationary iterative methods are the Jacobi method, the Gauss-Seidel method, and the successive overrelaxation method.

Nonstationary methods are based on the idea of sequences of orthogonal vectors. They are a relatively recent development; their analysis is usually harder to understand, but they can be highly effective. These are iterative methods whose coefficients depend on the iteration.

Two kinds of matrices are relevant here. A dense matrix is a matrix for which the number of zero elements is too small to warrant specialized algorithms. A sparse matrix is a matrix for which the number of zero elements is large enough that algorithms avoiding operations on zero elements pay off. Matrices derived from partial differential equations typically have a number of nonzero elements that is proportional to the matrix size, while the total number of matrix elements is the square of the matrix size.

The rate at which an iterative method converges depends greatly on the spectrum of the coefficient matrix. Hence, iterative methods usually involve a second matrix that transforms the coefficient matrix into one with a more favorable spectrum. The transformation matrix is called a preconditioner. A good preconditioner improves the convergence of the iterative method enough to overcome the extra cost of constructing and applying it. Indeed, without a preconditioner the iterative method may even fail to converge.
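To make the role of the preconditioner concrete, here is a minimal Python/NumPy sketch (added for illustration, not part of the original essay) of a preconditioned Richardson iteration x_{k+1} = x_k + M^{-1}(b - A x_k). The function name, the small test matrix, and the choice M = diag(A) are assumptions made purely for the example.

import numpy as np

def preconditioned_richardson(A, b, M_inv, tol=1e-10, max_iter=500):
    """Stationary iteration x_{k+1} = x_k + M_inv @ (b - A @ x_k).
    M_inv stands in for the inverse of the preconditioner M; the better
    M approximates A, the faster the residual shrinks."""
    x = np.zeros_like(b, dtype=float)
    for k in range(max_iter):
        r = b - A @ x                  # current residual
        if np.linalg.norm(r) < tol:
            return x, k
        x = x + M_inv @ r              # preconditioned correction
    return x, max_iter

# Small, assumed test system (diagonally dominant, so the iteration converges).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
M_inv = np.diag(1.0 / np.diag(A))      # Jacobi-style preconditioner M = diag(A)

x, iters = preconditioned_richardson(A, b, M_inv)
print(iters, x, np.allclose(A @ x, b))

With M = diag(A) this particular iteration coincides with the Jacobi method discussed below; a better preconditioner would reach the same residual in fewer steps.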
Rate of Convergence

In numerical analysis, the speed at which a convergent sequence approaches its limit is called the rate of convergence. Although, strictly speaking, a limit does not give information about any finite first part of the sequence, this concept is of practical importance when we deal with a sequence of successive approximations produced by an iterative method, since typically fewer iterations are needed to yield a useful approximation if the rate of convergence is higher. This may even make the difference between needing ten iterations or a million. Similar concepts are used for discretization methods: the solution of the discretized problem converges to the solution of the continuous problem as the grid size goes to zero, and the speed of convergence is one of the factors determining the efficiency of the method. However, the terminology in that case is different from the terminology for iterative methods.

The rate of convergence of an iterative method is represented by mu (μ) and is defined as follows. Suppose the sequence {x_n} (generated by an iterative method to find an approximation to a fixed point) converges to a point x. Then

    lim(n→∞) |x_{n+1} − x| / |x_n − x|^α = μ,

where μ ≥ 0 and α is the order of convergence. In cases where α = 2 or 3 the sequence is said to have quadratic and cubic convergence respectively. In the linear case, i.e. when α = 1, for the sequence to converge μ must lie in the interval (0, 1). The idea behind this is that, for errors satisfying |E_{n+1}| ≈ μ|E_n| to shrink, the absolute error must decrease with each approximation, and to guarantee this we must have 0 < μ < 1. In cases where α = 1 and μ = 1, and the sequence is known to converge (since μ = 1 alone does not tell us whether it converges or diverges), the sequence {x_n} is said to converge sublinearly, i.e. the order of convergence is less than one. If μ > 1 then the sequence diverges. If μ = 0, the sequence is said to converge superlinearly, i.e. its order of convergence is higher than 1; in such cases one raises α to a higher value to find what the order of convergence is.

Stationary iterative methods

Stationary iterative methods are methods for solving a linear system of equations Ax = b, where A is a given matrix and b is a given vector. They can be expressed in the simple form x^(k+1) = B x^(k) + c, where neither B nor c depends upon the iteration count k. The four main stationary methods are the Jacobi method, the Gauss-Seidel method, the successive overrelaxation method (SOR), and the symmetric successive overrelaxation method (SSOR).

1. Jacobi method - The Jacobi method is based on solving for every variable locally with respect to the other variables; one iteration of the method corresponds to solving for every variable once. The resulting method is easy to understand and implement, but convergence is slow. The Jacobi method solves a matrix equation for a matrix that has no zeros along its main diagonal: each diagonal element is solved for, an approximate value is plugged in, and the process is repeated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The Jacobi method is easily derived by examining each of the n equations in the linear system in isolation. If, in the i-th equation, we solve for the value of x_i while assuming the other entries of x remain fixed, we obtain

    x_i^(k+1) = ( b_i − Σ_{j≠i} a_ij x_j^(k) ) / a_ii,

which is the Jacobi method. In this method the order in which the equations are examined is irrelevant, since the Jacobi method treats them independently. In matrix terms, the Jacobi method can be written as x^(k+1) = D^{-1} ( b − (L + U) x^(k) ), where D, L, and U are the diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively.

Convergence - The standard convergence condition (for any iterative method) is that the spectral radius of the iteration matrix be less than 1; for Jacobi this matrix is D^{-1}R, where D is the diagonal part of A and R = A − D is the remainder. The method is guaranteed to converge if the matrix A is strictly or irreducibly diagonally dominant.
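The following NumPy sketch of the Jacobi iteration is an added illustration (not from the original text); the 3-by-3 diagonally dominant matrix is an assumed example, and the printed error ratios are meant to show the linear rate μ approaching the spectral radius of D^{-1}(L + U).

import numpy as np

def jacobi(A, b, max_iter=50):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (L + U) x_k)."""
    D = np.diag(A)                      # diagonal entries of A
    R = A - np.diagflat(D)              # remainder L + U
    x = np.zeros_like(b, dtype=float)
    history = [x.copy()]
    for _ in range(max_iter):
        x = (b - R @ x) / D             # solve each equation for its own unknown
        history.append(x.copy())
    return x, history

# Assumed diagonally dominant test system.
A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])

x, history = jacobi(A, b)
x_exact = np.linalg.solve(A, b)
errors = [np.linalg.norm(h - x_exact) for h in history]
for k in range(3):
    print("error ratio:", errors[k + 1] / errors[k])   # settles near mu

D = np.diag(A)
R = A - np.diagflat(D)
print("spectral radius of D^-1(L+U):", max(abs(np.linalg.eigvals(R / D[:, None]))))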
Strict row diagonal dominance means that, for each row, the absolute value of the diagonal term is greater than the sum of the absolute values of the other terms. The Jacobi method sometimes converges even if these conditions are not satisfied.

2. Gauss-Seidel method - The Gauss-Seidel method is like the Jacobi method, except that it uses updated values as soon as they are available. In general, if the Jacobi method converges, the Gauss-Seidel method will converge faster than the Jacobi method, though still relatively slowly. The Gauss-Seidel method is a technique for solving the n equations of the linear system one at a time in sequence, using previously computed results as soon as they are available. Two important characteristics of the Gauss-Seidel method should be noted. Firstly, the computations appear to be serial: since each component of the new iterate depends upon all previously computed components, the updates cannot be done simultaneously as in the Jacobi method. Secondly, the new iterate depends upon the order in which the equations are examined; if this ordering is changed, the components of the new iterates (and not just their order) will also change. In matrix terms, the Gauss-Seidel method can be written as x^(k+1) = (D + L)^{-1} ( b − U x^(k) ), where D, L, and U are the diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively. The Gauss-Seidel method is applicable to strictly diagonally dominant or symmetric positive definite matrices A.

Convergence - Given a square system of n linear equations Ax = b with unknown x, the convergence properties of the Gauss-Seidel method depend on the matrix A. Namely, the procedure is known to converge if either A is symmetric positive definite, or A is strictly or irreducibly diagonally dominant. The Gauss-Seidel method sometimes converges even if these conditions are not satisfied.
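For comparison with the Jacobi sketch above, here is a similarly hedged NumPy sketch of the Gauss-Seidel sweep (again an added illustration reusing the assumed test matrix from the Jacobi example); note how each unknown is updated using components that have just been recomputed.

import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Gauss-Seidel sweep: each unknown is updated using the newest
    available values of the other unknowns."""
    n = len(b)
    x = np.zeros(n)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds updated entries, x[i+1:] still the old ones.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            return x, k + 1
    return x, max_iter

# Same assumed test system as in the Jacobi sketch above.
A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])

x, sweeps = gauss_seidel(A, b)
print(sweeps, x)

On this diagonally dominant example Gauss-Seidel typically needs noticeably fewer sweeps than the Jacobi iteration, in line with the remark above.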
3. Successive Overrelaxation method - The successive overrelaxation method (SOR) is a method of solving a linear system of equations derived by extrapolating the Gauss-Seidel method. This extrapolation takes the form of a weighted average between the previous iterate and the computed Gauss-Seidel iterate, successively for each component:

    x_i^(k+1) = (1 − ω) x_i^(k) + ω x̄_i^(k+1),

where x̄_i^(k+1) denotes a Gauss-Seidel iterate and ω is the extrapolation (relaxation) factor. The idea is to choose a value of ω that will accelerate the rate of convergence of the iterates to the solution. In matrix terms, the SOR algorithm can be written as x^(k+1) = (D + ωL)^{-1} ( ωb − (ωU + (ω − 1)D) x^(k) ), where D, L, and U are the diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively. If ω = 1, the SOR method simplifies to the Gauss-Seidel method. A theorem due to Kahan shows that SOR fails to converge if ω is outside the interval (0, 2). In general, it is not possible to compute in advance the value of ω that will maximize the rate of convergence of SOR. Frequently, some heuristic estimate is used, such as ω = 2 − O(h), where h is the mesh spacing of the discretization of the underlying physical domain.

Convergence - SOR may converge faster than Gauss-Seidel by an order of magnitude. We seek the solution of a set of linear equations Ax = b. The matrix expression above is not usually used to program the method; an element-based expression is used instead. As with the Gauss-Seidel method, the computation may be done in place, and the iteration is continued until the changes made by an iteration are below some tolerance. The choice of the relaxation factor ω is not necessarily easy and depends upon the properties of the coefficient matrix. For symmetric positive definite matrices it can be proven that any 0 < ω < 2 will lead to convergence, but we are generally interested in faster convergence rather than just convergence.

4. Symmetric Successive Overrelaxation - Symmetric successive overrelaxation (SSOR) has no advantage over SOR as a stand-alone iterative method; however, it is useful as a preconditioner for nonstationary methods. The SSOR method combines two successive overrelaxation (SOR) sweeps in such a way that the resulting iteration matrix is similar to a symmetric matrix, in the case that the coefficient matrix A of the linear system is symmetric. SSOR is a forward SOR sweep followed by a backward SOR sweep in which the unknowns are updated in the reverse order. The similarity of the SSOR iteration matrix to a symmetric matrix permits the application of SSOR as a preconditioner for other iterative schemes for symmetric matrices. This is the primary motivation for SSOR, since its convergence rate is usually slower than the convergence rate of SOR with optimal ω.
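A minimal SOR sketch in the same spirit (an added illustration; the 4-by-4 symmetric positive definite matrix and the relaxation factors tried are assumptions, not values taken from the text):

import numpy as np

def sor(A, b, omega, tol=1e-10, max_iter=1000):
    """SOR sweep: blend the Gauss-Seidel update with the previous iterate,
    x_i <- (1 - omega) * x_i + omega * x_i^(Gauss-Seidel)."""
    n = len(b)
    x = np.zeros(n)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            gs_value = (b[i] - s) / A[i, i]
            x[i] = (1.0 - omega) * x_old[i] + omega * gs_value
        if np.linalg.norm(x - x_old) < tol:
            return x, k + 1
    return x, max_iter

# Assumed symmetric positive definite test system.
A = np.array([[ 4.0, -1.0,  0.0, -1.0],
              [-1.0,  4.0, -1.0,  0.0],
              [ 0.0, -1.0,  4.0, -1.0],
              [-1.0,  0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 0.0, 1.0])

# omega = 1 reproduces Gauss-Seidel; Kahan's theorem restricts omega to (0, 2).
for omega in (1.0, 1.2, 1.5):
    x, sweeps = sor(A, b, omega)
    print("omega =", omega, "->", sweeps, "sweeps")

Because this matrix is symmetric positive definite, any ω in (0, 2) converges, as stated above; scanning a few values of ω is a crude stand-in for the heuristic estimates mentioned earlier.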
Non-Stationary Iterative Methods

1. Conjugate Gradient method - The conjugate gradient method derives its name from the fact that it generates a sequence of conjugate (or orthogonal) vectors. These vectors are the residuals of the iterates; they are also the gradients of a quadratic functional, the minimization of which is equivalent to solving the linear system. CG is an extremely effective method when the coefficient matrix is symmetric positive definite, since storage for only a limited number of vectors is required. Suppose we want to solve the system of linear equations Ax = b, where the n-by-n matrix A is symmetric (i.e., A^T = A), positive definite (i.e., x^T A x > 0 for all non-zero vectors x in R^n), and real. We denote the unique solution of this system by x*. We say that two non-zero vectors u and v are conjugate (with respect to A) if u^T A v = 0. Since A is symmetric and positive definite, the left-hand side defines an inner product ⟨u, v⟩_A = u^T A v; so two vectors are conjugate if they are orthogonal with respect to this inner product. Being conjugate is a symmetric relation: if u is conjugate to v, then v is conjugate to u.

Convergence - Accurate predictions of the convergence of iterative methods are difficult to make, but useful bounds can often be obtained. For the conjugate gradient method, the error can be bounded in terms of the spectral condition number κ of the matrix. (If λ_max and λ_min are the largest and smallest eigenvalues of a symmetric positive definite matrix B, then the spectral condition number of B is κ(B) = λ_max / λ_min.) If x̂ is the exact solution of the linear system Ax = b, with symmetric positive definite matrix A, then for CG with a symmetric positive definite preconditioner M it can be shown that

    ‖x^(i) − x̂‖_A ≤ 2 α^i ‖x^(0) − x̂‖_A,

where α = (√κ − 1) / (√κ + 1) and κ = κ(M^{-1}A). From this relation we see that the number of iterations needed to reach a relative reduction of ε in the error is proportional to √κ. In some cases, practical application of this error bound is straightforward. For example, elliptic second-order partial differential equations typically give rise to coefficient matrices A with κ(A) = O(h^{-2}) (where h is the discretization mesh width), independent of the order of the finite elements or differences used and of the number of space dimensions of the problem. Thus, without preconditioning, we expect a number of iterations proportional to h^{-1} for the conjugate gradient method.

Other results concerning the behavior of the conjugate gradient algorithm have also been obtained. If the extremal eigenvalues of the matrix M^{-1}A are well separated, then one often observes so-called superlinear convergence, that is, convergence at a rate that increases per iteration. This phenomenon is explained by the fact that CG tends to eliminate components of the error in the direction of eigenvectors associated with extremal eigenvalues first. After these have been eliminated, the method proceeds as if these eigenvalues did not exist in the given system, i.e., the convergence rate depends on a reduced system with a smaller condition number. The effectiveness of the preconditioner in reducing the condition number and in separating extremal eigenvalues can be deduced by studying the approximated eigenvalues of the related Lanczos process.
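As an added illustration (not part of the essay), here is a compact NumPy sketch of the unpreconditioned conjugate gradient recurrence; the small symmetric positive definite test matrix is an assumption.

import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Unpreconditioned CG for symmetric positive definite A.
    Successive search directions are A-conjugate: p_i^T A p_j = 0 for i != j."""
    n = len(b)
    max_iter = n if max_iter is None else max_iter
    x = np.zeros(n)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # first search direction
    rs_old = r @ r
    for k in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, k + 1
        p = r + (rs_new / rs_old) * p  # new direction, A-conjugate to the old ones
        rs_old = rs_new
    return x, max_iter

# Assumed symmetric positive definite test matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

x, iters = conjugate_gradient(A, b)
print(iters, x, np.allclose(A @ x, b))

In exact arithmetic CG terminates in at most n steps; in practice it is run as an iterative method and stopped once the residual norm is small enough.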
2. Biconjugate Gradient method - The biconjugate gradient method generates two CG-like sequences of vectors, one based on a system with the original coefficient matrix A and one based on A^T. Instead of orthogonalizing each sequence, they are made mutually orthogonal, or bi-orthogonal. This method, like CG, uses limited storage. It is useful when the matrix is nonsymmetric and nonsingular; however, convergence may be irregular, and there is a possibility that the method will break down. BiCG requires a multiplication with the coefficient matrix and with its transpose at each iteration.

Convergence - Few theoretical results are known about the convergence of BiCG. For symmetric positive definite systems the method delivers the same results as CG, but at twice the cost per iteration. For nonsymmetric matrices it has been shown that, in phases of the process where there is significant reduction of the norm of the residual, the method is more or less comparable to full GMRES (in terms of numbers of iterations). In practice this is often confirmed, but it is also observed that the convergence behavior may be quite irregular, and the method may even break down. The breakdown situation due to an inner product in the underlying recurrences becoming (nearly) zero can be circumvented by so-called look-ahead strategies, although this leads to complicated codes. The other breakdown situation, which occurs when the implicit LU decomposition fails, can be repaired by using another decomposition. Sometimes breakdown or near-breakdown situations can be satisfactorily avoided by a restart at the iteration step immediately before the breakdown step. Another possibility is to switch to a more robust method, like GMRES.

3. Conjugate Gradient Squared (CGS) - The conjugate gradient squared method is a variant of BiCG that applies the updating operations for the A-sequence and the A^T-sequence both to the same vectors. Ideally, this would double the convergence rate, but in practice convergence may be much more irregular than for BiCG, which may sometimes lead to unreliable results. A practical advantage is that the method does not need multiplications with the transpose of the coefficient matrix.

Convergence - Often one observes a speed of convergence for CGS that is about twice as fast as for BiCG, which is in agreement with the observation that the same contraction operator is applied twice. However, there is no reason that the contraction operator, even if it really reduces the initial residual, should also reduce the once-reduced vector. This is evidenced by the often highly irregular convergence behavior of CGS. One should be aware of the fact that local corrections to the current solution may be so large that cancellation effects occur. This may lead to a less accurate solution than suggested by the updated residual. The method tends to diverge if the starting guess is close to the solution.

4. Biconjugate Gradient Stabilized (Bi-CGSTAB) - The biconjugate gradient stabilized method is a variant of BiCG, like CGS, but it uses different updates for the A^T-sequence in order to obtain smoother convergence than CGS. Bi-CGSTAB often converges about as fast as CGS, sometimes faster and sometimes not. CGS can be viewed as a method in which the BiCG contraction operator is applied twice; Bi-CGSTAB can be interpreted as the product of BiCG and repeatedly applied GMRES(1). At least locally, a residual vector is minimized, which leads to a considerably smoother convergence behavior. On the other hand, if the local GMRES step stagnates, then the Krylov subspace is not expanded and Bi-CGSTAB will break down. This is a breakdown situation that can occur in addition to the other breakdown possibilities in the underlying BiCG algorithm. This type of breakdown may be avoided by combining BiCG with other methods, i.e., by selecting other values for the local minimization parameter. One such alternative is Bi-CGSTAB2; more general approaches have been suggested by Sleijpen and Fokkema.

5. Chebyshev Iteration - The Chebyshev iteration recursively determines polynomials with coefficients chosen to minimize the norm of the residual in a min-max sense. The coefficient matrix must be positive definite, and knowledge of the extremal eigenvalues is required. This method has the advantage of requiring no inner products. Chebyshev iteration is another method for solving nonsymmetric problems. It avoids the computation of inner products, which is necessary for the other nonstationary methods; for some distributed-memory architectures these inner products are a bottleneck with respect to efficiency. The price one pays for avoiding inner products is that the method requires enough knowledge about the spectrum of the coefficient matrix A that an ellipse enveloping the spectrum can be identified; however, this difficulty can be overcome via an adaptive construction developed by Manteuffel and implemented by Ashby. Chebyshev iteration is suitable for any nonsymmetric linear system for which the enveloping ellipse does not include the origin.

Convergence - In the symmetric case (where A and the preconditioner M are both symmetric), the Chebyshev iteration has the same upper bound as the conjugate gradient method, provided the ellipse parameters are computed from λ_min and λ_max, the extremal eigenvalues of the preconditioned matrix M^{-1}A. There is a severe penalty for overestimating or underestimating the field of values. For example, if in the symmetric case λ_max is underestimated, the method may diverge; if it is overestimated, the result may be very slow convergence. Similar statements can be made for the nonsymmetric case. This implies that one needs fairly accurate bounds on the spectrum of M^{-1}A for the method to be effective (in comparison with CG or GMRES).
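To see the nonsymmetric Krylov solvers just discussed side by side, one can lean on SciPy's implementations. The sketch below is an added illustration: the random, well-shifted test matrix is an assumption, SciPy's default stopping tolerances are used, and iteration counts are obtained through the callback hook (keyword names can differ slightly between SciPy versions).

import numpy as np
from scipy.sparse.linalg import bicg, cgs, bicgstab

# Assumed test system: a random matrix shifted so its spectrum stays
# well away from the origin (so all three solvers should converge).
rng = np.random.default_rng(0)
n = 200
A = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

def run(solver):
    count = {"iters": 0}
    def callback(xk):                  # called once per iteration
        count["iters"] += 1
    x, info = solver(A, b, callback=callback)
    return count["iters"], info, np.linalg.norm(b - A @ x)

for name, solver in (("BiCG", bicg), ("CGS", cgs), ("Bi-CGSTAB", bicgstab)):
    iters, info, resid = run(solver)
    print(name, "iterations:", iters, "info:", info, "residual:", resid)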
Such techniqu es are in general known as series acceleration. The goal of the transformed sequence is to be much less expensive to calculate than the original sequence. One example of series acceleration is Aitkens delta -squared process.
