solving linear systems

several applications require solving a linear system of the form AX = B. the recommended method can vary depending on the properties of A, and the desired numerical accuracy.

A is triangular

in this case, one can use A and B directly to find X, using the functions provided in faer::linalg::triangular_solve.

use faer::{Mat, Par};
use faer::linalg::triangular_solve::solve_lower_triangular_in_place;

let A = Mat::from_fn(4, 4, |i, j| if i >= j { 1.0 } else { 0.0 });
let B = Mat::from_fn(4, 2, |i, j| (i as f64) - (j as f64));

let mut X = Mat::<f64>::zeros(4, 2);

X.copy_from(&B);
solve_lower_triangular_in_place(A.as_ref(), X.as_mut(), Par::Seq);

// X now contains the approximate solution

in the case where A has a unit diagonal, one can use solve_unit_lower_triangular_in_place, which avoids reading the diagonal, and instead implicitly uses the value 1.0 as a replacement.
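
for example, here is a minimal sketch of the unit-diagonal variant. it reuses the setup from the previous example, and assumes solve_unit_lower_triangular_in_place takes the same arguments as solve_lower_triangular_in_place.

use faer::{Mat, Par};
use faer::linalg::triangular_solve::solve_unit_lower_triangular_in_place;

// the diagonal of A is never read: it is implicitly treated as 1.0,
// so only the strictly lower triangular part matters.
let A = Mat::from_fn(4, 4, |i, j| if i > j { 2.0 } else { 0.0 });
let B = Mat::from_fn(4, 2, |i, j| (i as f64) + (j as f64));

let mut X = Mat::<f64>::zeros(4, 2);

X.copy_from(&B);
solve_unit_lower_triangular_in_place(A.as_ref(), X.as_mut(), Par::Seq);

// X now contains the approximate solution of (I + L) X = B,
// where L is the strictly lower triangular part of A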

A is real-symmetric/hermitian

A is positive definite

if A is hermitian and positive definite, users can use the cholesky decomposition.

use faer::{mat, Side};
use faer::prelude::*;

let A = mat![
    [10.0, 2.0],
    [2.0, 10.0],
];
let B = col![15.0, -3.0];

// compute the cholesky decomposition,
// reading only the lower triangular half of the matrix.
// returns an `Err` if A is not positive definite
let llt = A.llt(Side::Lower).unwrap();

let X = llt.solve(&B);

low-level api

alternatively, a lower-level api could be used to avoid temporary allocations. the corresponding code for other decompositions follows the same pattern, so we will skip similar examples for the remainder of this page.

use faer::prelude::*;
use faer::{Par, Conj};
use faer::linalg::cholesky::llt;
use dyn_stack::{MemStack, MemBuffer};

let A = mat![
    [10.0, 2.0],
    [2.0, 10.0],
];
let mut b = col![15.0, -3.0];

let mut llt = Mat::<f64>::zeros(2, 2);

// compute the size and alignment of the required scratch space
let cholesky_memory = llt::factor::cholesky_in_place_scratch::<f64>(
    A.nrows(),
    Par::Seq,
    // default parameters
    default(),
);
let solve_memory = llt::solve::solve_in_place_scratch::<f64>(
    A.nrows(),
    b.ncols(),
    Par::Seq,
);

// allocate the scratch space, sized to fit either the factorization or the solve
let mut memory = MemBuffer::new(cholesky_memory.or(solve_memory));
let stack = MemStack::new(&mut memory);

// compute the decomposition
llt.copy_from(&A);
llt::factor::cholesky_in_place(
    llt.as_mut(),
    // no regularization
    llt::factor::LltRegularization::default(),
    // no multithreading
    Par::Seq,
    // scratch space
    stack,
    // default settings
    default(),
);
// solve the linear system
llt::solve::solve_in_place_with_conj(
    llt.as_ref(),
    // do not implicitly conjugate A
    Conj::No,
    b.as_mat_mut(),
    Par::Seq,
    stack,
);

if A is not positive definite, the bunch-kaufman decomposition is recommended instead.

use faer::{mat, Side};
use faer::prelude::*;

let A = mat![
    [10.0, 2.0],
    [2.0, -10.0],
];
let b = col![15.0, -3.0];

// compute the bunch-kaufman lblt decomposition,
// reading only the lower triangular half of the matrix.
let lblt = A.lblt(Side::Lower);

let x = lblt.solve(&b);

A is square

for a square matrix A, we can use the LU decomposition with partial pivoting, or the full pivoting variant, which is slower but can be more accurate when the matrix is nearly singular.

use faer::prelude::*;

let A = mat![
    [10.0, 3.0],
    [2.0, -10.0],
];
let b = col![15.0, -3.0];

// compute the lu decomposition with partial pivoting,
let plu = A.partial_piv_lu();
let x1 = plu.solve(&b);

// or the lu decomposition with full pivoting.
let flu = A.full_piv_lu();
let x2 = flu.solve(&b);

A is a tall matrix

when the linear system is over-determined, an exact solution may not necessarily exist, in which case we can get a best-effort result by computing the least squares solution. that is, the solution that minimizes ||AX - B||.

this can be done using the qr decomposition.

use faer::prelude::*;

let A = mat![
    [10.0, 3.0],
    [2.0, -10.0],
    [3.0, -45.0],
];
let b = col![15.0, -3.0, 13.1];

// compute the qr decomposition.
let qr = A.qr();
let x = qr.solve_lstsq(&b);

singular value decomposition

use faer::prelude::*;

let A = mat![
    [10.0, 3.0],
    [2.0, -10.0],
    [3.0, -45.0],
];

// compute the svd decomposition.
let svd = A.svd();
// compute the thin svd decomposition.
let svd = A.thin_svd();
// compute the singular values.
let svd = A.singular_values();

eigenvalue decomposition

use faer::prelude::*;
use faer::Side;

let A = mat![
    [10.0, 3.0],
    [2.0, -10.0],
];

// compute the eigendecomposition.
let evd = A.eigendecomposition::<c64>();

// compute the eigenvalues.
let evd = A.eigenvalues::<c64>();

// compute the eigendecomposition assuming `A` is hermitian.
let evd = A.selfadjoint_eigendecomposition(Side::Lower);

// compute the eigenvalues assuming `A` is hermitian.
let evd = A.selfadjoint_eigenvalues(Side::Lower);