Least squares computations and the condition of the matrix
Title:
Least squares computations and the condition of the matrix
Author:
Longley, James Wildon
Published in:
Communications in Statistics
Pagination:
Volume 10 (1981), no. 6, pages 593-615
Year:
1981
Abstract:
In the days of von Neumann and Goldstine (1947), when matrix inversion was in vogue, the condition of the regression matrix in least squares problems was highly relevant both to the sensitivity of its inverse to relatively small perturbations in its coefficients and to the number of accurate decimal digits obtainable in the solution of the system of equations. With the advent of least squares programs that give exact answers, the condition of the matrix is no longer a limiting factor on accuracy; with such programs, the source of variation in the results from sample to sample is also no longer in doubt. Although approximation methods, such as Gram-Schmidt orthogonalization, produce answers that are sometimes far from exact, tests with over one hundred problems indicate little or no association between the condition of the matrix, however defined, and the average decimal digit accuracy of the solution. The object here is to explain, in part, why this association is weak. The relevance of bounds on the solution vector that can be established from condition numbers is also subject to qualification. Various techniques for avoiding some of the effects of ill-conditioning within machine capability are evaluated, including choice of computing algorithm, scaling, normalizing, pivoting, and iterative refinement. If increased accuracy or reliability of results is required, there is no substitute for increased precision. Programs that give exact answers would eliminate many of the uncertainties associated with least squares problems.
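To make the condition-versus-accuracy question above concrete, the following is a minimal sketch, not taken from the paper: it builds a deliberately ill-conditioned design matrix (a Vandermonde matrix, chosen here purely for illustration), reports its spectral condition number, and compares a single-precision normal-equations solution against a double-precision QR-based reference by counting matching decimal digits. The data, the choice of solver, and the digit-counting scheme are all assumptions made for this example; it only illustrates why increased precision, rather than the condition number alone, governs the accuracy obtained.

```python
import numpy as np

# Hypothetical ill-conditioned design: a Vandermonde (polynomial) matrix,
# chosen only to illustrate the ideas in the abstract.
t = np.linspace(0.0, 1.0, 21)
X = np.vander(t, 6, increasing=True)          # columns 1, t, ..., t^5
beta_true = np.array([1.0, -2.0, 3.0, -4.0, 5.0, -6.0])
y = X @ beta_true

# Spectral condition number of the design matrix.
print("cond(X) =", np.linalg.cond(X))

# Reference solution in double precision via a QR/SVD-based solver.
beta_ref, *_ = np.linalg.lstsq(X, y, rcond=None)

# Lower-precision solution via the normal equations in single precision;
# forming X'X squares the condition number and costs decimal digits.
X32, y32 = X.astype(np.float32), y.astype(np.float32)
beta_ne = np.linalg.solve(X32.T @ X32, X32.T @ y32)

# Average number of correct decimal digits relative to the reference.
rel_err = np.abs(beta_ne - beta_ref) / np.abs(beta_ref)
digits = -np.log10(np.maximum(rel_err, np.finfo(float).tiny))
print("average correct decimal digits:", digits.mean())
```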