Solves sparse least-squares using the LSMR algorithm.
LSMR solves the system of linear equations A*X = B. If the system is inconsistent, it solves
the least-squares problem min ||B - A*X||_2. A is a rectangular matrix of dimension m-by-n, where
all cases are allowed: m = n, m > n, or m < n. B is a vector of length m. The matrix A may be
dense or sparse (usually sparse).
Some additional configurable properties adjust the behavior of the algorithm.
If you set lambda to a non-zero value, LSMR solves the regularized least-squares problem

    min || ( B )   (    A     )   ||
        || ( 0 ) - ( lambda*I ) X ||_2

where lambda is a scalar. If lambda is not set, the system is solved without regularization.
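For intuition, the regularized problem above is just an ordinary least-squares problem on the
stacked matrix [A; lambda*I] with right-hand side [B; 0]. A minimal sketch in plain Python (not
the solver's API; a tiny n = 1 case so the normal equations reduce to scalars):

```python
# Regularized least squares: min ||B - A*X||^2 + lambda^2 ||X||^2
# equals ordinary least squares on the augmented system
#   [ A        ]       [ B ]
#   [ lambda*I ] X  =  [ 0 ].

def regularized_solution(A_col, b, lam):
    """Closed form for n = 1: x = (A'b) / (A'A + lambda^2)."""
    ata = sum(a * a for a in A_col)
    atb = sum(a * bi for a, bi in zip(A_col, b))
    return atb / (ata + lam * lam)

def stacked_solution(A_col, b, lam):
    """Same answer via the augmented matrix [A; lambda*I], rhs [B; 0]."""
    A_aug = A_col + [lam]      # the single row of lambda*I when n = 1
    b_aug = b + [0.0]
    ata = sum(a * a for a in A_aug)
    atb = sum(a * bi for a, bi in zip(A_aug, b_aug))
    return atb / ata

A_col = [1.0, 2.0, 3.0]        # a 3-by-1 matrix A
b = [1.0, 1.0, 1.0]
lam = 0.5
x_reg = regularized_solution(A_col, b, lam)
x_stacked = stacked_solution(A_col, b, lam)   # identical to x_reg
```

Both paths give the same x, which is why LSMR can handle lambda internally without the caller
forming the augmented system.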
You can also set aTolerance (ATOL) and bTolerance (BTOL). These cause LSMR to iterate until an
estimate of the backward error is smaller than a quantity depending on ATOL and BTOL. Let RES = B - A*X be
the residual vector for the current approximate solution X. If A*X = B seems to be consistent,
LSMR terminates when NORM(RES) <= ATOL*NORM(A)*NORM(X) + BTOL*NORM(B). Otherwise, LSMR terminates
when NORM(A'*RES) <= ATOL*NORM(A)*NORM(RES). If both tolerances are 1.0e-6 (say), the final
NORM(RES) should be accurate to about 6 digits. (The final X will usually have fewer correct
digits, depending on cond(A) and the size of LAMBDA.)
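The two stopping tests can be written down directly. A sketch in plain Python (norm_A stands in
for the estimate of NORM(A) that the algorithm maintains internally; dense matrices are used here
only to keep the example self-contained):

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def matvec(A, x):            # A*x for a dense row-major matrix
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matvec_t(A, y):          # A'*y
    n = len(A[0])
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(n)]

def converged(A, x, b, atol, btol, norm_A):
    """LSMR-style stopping tests on the current approximate solution x."""
    res = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    # Test for consistent systems: residual small relative to the data.
    test1 = norm(res) <= atol * norm_A * norm(x) + btol * norm(b)
    # Test for least-squares problems: normal-equation residual A'*RES small.
    nr = norm(res)
    test2 = nr == 0 or norm(matvec_t(A, res)) <= atol * norm_A * nr
    return test1 or test2
```

For the exact least-squares solution of an inconsistent system, the second test fires (A'*RES = 0)
even though the plain residual stays large.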
The default value for ATOL and BTOL is 1e-6.
Ideally, they should be estimates of the relative error in the entries of A and B respectively.
For example, if the entries of A have 7 correct digits, set ATOL = 1e-7. This prevents the
algorithm from doing unnecessary work beyond the uncertainty of the input data.
You can also set conditionLimit. In that case, LSMR terminates if an estimate of cond(A) exceeds
conditionLimit. For compatible systems Ax = b, conditionLimit could be as large as 1.0e+12 (say).
For least-squares problems, conditionLimit should be less than 1.0e+8. If conditionLimit is not
set, the default value is 1e+8. Maximum precision can be obtained by setting aTolerance =
bTolerance = conditionLimit = 0, but the number of iterations may then be excessive.
Setting iterationLimit causes LSMR to terminate if the number of iterations reaches
iterationLimit. The default is iterationLimit = min(m,n). For ill-conditioned systems, a
larger value of iterationLimit may be needed.
Setting localSize causes LSMR to run with reorthogonalization on the last localSize v_k's
(the v-vectors generated by the Golub-Kahan bidiagonalization). If localSize is not set, LSMR
runs without reorthogonalization. Setting localSize > max(m,n) performs reorthogonalization on
all v_k's. Reorthogonalizing only the u_k's, or both the u_k's and v_k's, is not an option here.
Details are discussed in the SIAM paper.
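The reorthogonalization step itself is modified Gram-Schmidt against the most recent vectors. A
sketch of that inner step in plain Python (the surrounding bidiagonalization loop is omitted, and
the bounded deque of stored v_k's is an assumption about how a localSize-length history might be
kept):

```python
import math
from collections import deque

def local_reorthogonalize(v, history):
    """Purge v of components along each stored v_k (modified Gram-Schmidt)."""
    for u in history:
        dot = sum(a * b for a, b in zip(v, u))
        v = [a - dot * b for a, b in zip(v, u)]
    return v

def normalize(v):
    nrm = math.sqrt(sum(x * x for x in v))
    return [x / nrm for x in v]

local_size = 2
history = deque(maxlen=local_size)   # keeps only the last localSize v_k's

# Each new direction is orthogonalized against the stored history, then
# normalized and stored itself; the oldest vector falls out of the deque.
for raw in ([1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [1.0, 1.0, 1.0]):
    v = normalize(local_reorthogonalize(raw, history))
    history.append(v)
```

With localSize large enough to hold every v_k, this degenerates to full reorthogonalization.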
getTerminationReason() gives the reason for termination (ISTOP):
  0 - X = 0 is a solution.
  1 - X is an approximate solution to A*X = B, according to ATOL and BTOL.
  2 - X approximately solves the least-squares problem according to ATOL.
  3 - COND(A) seems to be greater than conditionLimit.
  4 - same as 1, with ATOL = BTOL = EPS.
  5 - same as 2, with ATOL = EPS.
  6 - same as 3, with conditionLimit = 1/EPS.
  7 - ITN reached iterationLimit before the other stopping conditions were satisfied.
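Client code that logs or branches on the code might use a lookup like the following (a
hypothetical helper sketched in Python; the numeric codes are the ones listed above, everything
else is an assumption):

```python
# Human-readable descriptions for the ISTOP termination codes listed above.
TERMINATION_REASONS = {
    0: "X = 0 is a solution",
    1: "A*X = B solved to within ATOL and BTOL",
    2: "least-squares problem solved to within ATOL",
    3: "estimate of COND(A) exceeded conditionLimit",
    4: "A*X = B solved to machine precision",
    5: "least-squares problem solved to machine precision",
    6: "COND(A) estimate exceeded 1/EPS",
    7: "iteration limit reached before convergence",
}

def describe(istop):
    """Map an ISTOP code to a short description; flag unknown codes."""
    return TERMINATION_REASONS.get(istop, "unknown code %d" % istop)
```

Codes 1, 2, 4 and 5 all indicate success; 3, 6 and 7 usually call for loosening the limits or
reconsidering the conditioning of A.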
getIterationCount() gives ITN = the number of LSMR iterations.
getResidualNorm() gives an estimate of the residual norm: NORMR = norm(B-A*X).
getNormalEquationResidual() gives an estimate of the residual for the normal equation: NORMAR =
NORM(A'*(B-A*X)).
getANorm() gives an estimate of the Frobenius norm of A.
getCondition() gives an estimate of the condition number of A.
getXNorm() gives an estimate of NORM(X).
LSMR uses an iterative method to approximate the solution. For further information, see:
D. C.-L. Fong and M. A. Saunders, "LSMR: An iterative algorithm for least-squares problems",
draft of 03 Apr 2010, to be submitted to SISC.
David Chin-lung Fong (clfong@stanford.edu)
Institute for Computational and Mathematical Engineering, Stanford University

Michael Saunders (saunders@stanford.edu)
Systems Optimization Laboratory, Dept of MS&E, Stanford University