least-squares-cpp

A single header-only library for least squares optimization.

least-squares-cpp is a header-only C++ library for unconstrained non-linear least squares optimization using the Eigen3 library.

It provides convenient and configurable access to the following fitting algorithms:

- Gradient Descent
- Gauss Newton
- Levenberg Marquardt

Convergence and runtime behaviour can be parametrized using advanced Newton step refinement operators, such as line search or trust region methods.
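For example, the step refinement strategy is selected as a template parameter of the optimizer. The following is a minimal sketch using the refiner names that appear in the Usage section below; MyObjective is a hypothetical placeholder functor:

#include <lsqcpp/lsqcpp.hpp>

// MyObjective is a hypothetical placeholder, only here to make the sketch
// self-contained; see the Usage section for a realistic functor.
struct MyObjective
{
    static constexpr bool ComputesJacobian = false;

    template<typename Scalar, int Inputs, int Outputs>
    void operator()(const Eigen::Matrix<Scalar, Inputs, 1> &xval,
                    Eigen::Matrix<Scalar, Outputs, 1> &fval) const
    {
        fval = xval;
    }
};

// The step refiner is swapped purely by template parameter.
lsqcpp::GaussNewtonX<double, MyObjective, lsqcpp::ArmijoBacktracking> armijoOptimizer;
lsqcpp::GaussNewtonX<double, MyObjective, lsqcpp::WolfeBacktracking> wolfeOptimizer;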

The following animation shows a pointcloud registration using Gauss Newton and Armijo Backtracking, converging in five iterations without an initial guess.

[animation: pointcloud_registration_armijo]

Install

Simply copy the single header file into your project, add it to your include path, and you are good to go.
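For a plain compiler build, the invocation might look like this (the paths are examples; the Eigen include directory matches the Debian package mentioned below):

g++ -std=c++14 -I path/to/lsqcpp/include -I /usr/include/eigen3 main.cpp -o myproject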

least-squares-cpp now also supports the build2 build system. Simply add least-squares-cpp to the dependency list of your manifest file:

depends: least-squares-cpp ^0.3.0

You can then import least-squares-cpp in your build files:

import libs = least-squares-cpp%lib{least-squares-cpp}

If CMake is your preferred tooling, you can install the library with the following commands:

cd path/to/repo
cmake -B out/ -S .
cmake --install out/

You can then use the library in your CMake projects by finding the package:

find_package(lsqcpp REQUIRED)
add_executable(myproject main.cpp)
target_link_libraries(myproject lsqcpp::lsqcpp)

The library requires Eigen3 to be installed on your system. On Debian-based systems you can install it with

apt-get install libeigen3-dev

Make sure Eigen3 can be found by your build system.

Usage

For examples of how to use the different algorithms and stop criteria, please have a look at the examples/ directory.

There are three major steps to use least-squares-cpp:

1. Implement an objective functor which computes the error vector (and optionally its jacobian).
2. Instantiate an optimizer with the desired algorithm and step refiner as template parameters, and configure its stop criteria.
3. Run the optimization on an initial guess.

The following example walks through all three steps:

#include <lsqcpp/lsqcpp.hpp>
#include <iostream>

// Implement an objective functor.
struct ParabolicError
{
    static constexpr bool ComputesJacobian = false;

    template<typename Scalar, int Inputs, int Outputs>
    void operator()(const Eigen::Matrix<Scalar, Inputs, 1> &xval,
                    Eigen::Matrix<Scalar, Outputs, 1> &fval) const
    {
        // omit calculation of jacobian, so finite differences will be used
        // to estimate jacobian numerically
        fval.resize(xval.size() / 2);
        for(lsqcpp::Index i = 0; i < fval.size(); ++i)
            fval(i) = xval(i*2) * xval(i*2) + xval(i*2+1) * xval(i*2+1);
    }
};

int main()
{
    // Create GaussNewton optimizer object with ParabolicError functor as objective.
    // There are GradientDescent, GaussNewton and LevenbergMarquardt available.
    //
    // You can specify a StepSize functor as template parameter.
    // There are ConstantStepSize, BarzilaiBorwein, ArmijoBacktracking and
    // WolfeBacktracking available. (Default for GaussNewton is ArmijoBacktracking.)
    //
    // You can additionally specify a Callback functor as template parameter.
    //
    // You can additionally specify a FiniteDifferences functor as template
    // parameter. There are Forward-, Backward- and CentralDifferences
    // available. (Default is CentralDifferences)
    //
    // For GaussNewton and LevenbergMarquardt you can additionally specify a
    // linear equation system solver.
    // There are DenseSVDSolver and DenseCholeskySolver available.
    lsqcpp::GaussNewtonX<double, ParabolicError, lsqcpp::ArmijoBacktracking> optimizer;

    // Set number of iterations as stop criterion.
    // Set it to 0 or negative for infinite iterations (default is 0).
    optimizer.setMaximumIterations(100);

    // Set the minimum length of the gradient.
    // The optimizer stops minimizing if the gradient length falls below this
    // value.
    // Set it to 0 or negative to disable this stop criterion (default is 1e-9).
    optimizer.setMinimumGradientLength(1e-6);

    // Set the minimum length of the step.
    // The optimizer stops minimizing if the step length falls below this
    // value.
    // Set it to 0 or negative to disable this stop criterion (default is 1e-9).
    optimizer.setMinimumStepLength(1e-6);

    // Set the minimum least squares error.
    // The optimizer stops minimizing if the error falls below this
    // value.
    // Set it to 0 or negative to disable this stop criterion (default is 0).
    optimizer.setMinimumError(0);

    // Set the parameters of the step refiner (Armijo Backtracking).
    optimizer.setRefinementParameters({0.8, 1e-4, 1e-10, 1.0, 0});

    // Turn verbosity on, so the optimizer prints status updates after each
    // iteration.
    optimizer.setVerbosity(4);

    // Set initial guess.
    Eigen::VectorXd initialGuess(4);
    initialGuess << 1, 2, 3, 4;

    // Start the optimization.
    auto result = optimizer.minimize(initialGuess);

    std::cout << "Done! Converged: " << (result.converged ? "true" : "false")
        << " Iterations: " << result.iterations << std::endl;

    // do something with final function value
    std::cout << "Final fval: " << result.fval.transpose() << std::endl;

    // do something with final x-value
    std::cout << "Final xval: " << result.xval.transpose() << std::endl;

    return 0;
}
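If the jacobian of the objective is known analytically, the functor can compute it directly, so no finite differences are needed. Below is a sketch of this variant for the parabolic error above, assuming that a functor with ComputesJacobian = true receives the jacobian as an additional output parameter:

// Variant of ParabolicError which also computes its jacobian analytically.
// ComputesJacobian = true signals that no finite differences are required.
struct ParabolicErrorWithJacobian
{
    static constexpr bool ComputesJacobian = true;

    template<typename Scalar, int Inputs, int Outputs>
    void operator()(const Eigen::Matrix<Scalar, Inputs, 1> &xval,
                    Eigen::Matrix<Scalar, Outputs, 1> &fval,
                    Eigen::Matrix<Scalar, Outputs, Inputs> &jacobian) const
    {
        fval.resize(xval.size() / 2);
        jacobian.setZero(fval.size(), xval.size());
        for(lsqcpp::Index i = 0; i < fval.size(); ++i)
        {
            fval(i) = xval(i*2) * xval(i*2) + xval(i*2+1) * xval(i*2+1);
            // Each residual depends only on its own pair of inputs, so the
            // jacobian has exactly two non-zero entries per row.
            jacobian(i, i*2) = 2 * xval(i*2);
            jacobian(i, i*2+1) = 2 * xval(i*2+1);
        }
    }
};

Such a functor is plugged into the optimizer in exactly the same way, e.g. as lsqcpp::GaussNewtonX<double, ParabolicErrorWithJacobian, lsqcpp::ArmijoBacktracking>.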

Performance Considerations

The performance for solving an optimization problem with least-squares-cpp can be significantly influenced by the following factors:

- whether the objective functor computes its jacobian analytically (ComputesJacobian = true) or leaves the estimation to finite differences, which costs additional objective evaluations in every iteration
- the choice of finite differences operator (Forward-, Backward- or CentralDifferences) when the jacobian is estimated numerically
- the choice of the linear equation system solver (DenseSVDSolver or DenseCholeskySolver) for GaussNewton and LevenbergMarquardt
- whether the input and output vector sizes are known at compile time, as illustrated in the sketch below
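Because the functor's operator() is templated on Inputs and Outputs, it accepts fixed-size Eigen vectors without changes, which lets Eigen resolve all dimensions at compile time (whether a particular optimizer instantiation takes advantage of fixed sizes is not shown here). A small illustration, calling the ParabolicError functor from the Usage section directly:

// Uses the ParabolicError functor from the Usage section above.
// With fixed-size vectors Eigen knows all dimensions at compile time
// and can avoid heap allocations.
Eigen::Matrix<double, 4, 1> xval;
xval << 1, 2, 3, 4;
Eigen::Matrix<double, 2, 1> fval;
ParabolicError{}(xval, fval); // deduces Scalar=double, Inputs=4, Outputs=2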

