Few algorithms are used as frequently as the Kalman filter. Whether helping astronauts navigate through space, estimating the trajectory of a drone from noisy sensor data, or serving countless other applications, the Kalman filter is a technique every engineer should know how to use.
This post is not a guide to the theory of Kalman filtering; there are many excellent introductions available elsewhere. Instead, it assumes you have a basic understanding of the Kalman filter and want to write a high-performance C++ implementation of the algorithm.
This implementation will use Eigen for linear algebra, so make sure you have this installed before starting to be able to run the code. Let’s jump in!
The Kalman Filter Class
The code for the entire Kalman Filter class is given below. Since the process and measurement matrices are passed as arguments to the public member functions, no linearization occurs within the class itself; linearization is left to the user and handled upstream.
This means the code generalizes to both linear and nonlinear (extended) Kalman filter applications. Subsequent sections of this post break down specific portions of the code.
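To make the idea of upstream linearization concrete, here is a minimal sketch of one way a user could produce the required matrices for a nonlinear model. The helper function, its name, and the step size below are illustrative assumptions rather than part of the filter class, and an analytically derived Jacobian works just as well.
#include <Eigen/Dense>
#include <functional>

// Sketch: approximate the Jacobian of a nonlinear function f at the point x
// using central differences. For an extended filter, a Jacobian like this,
// evaluated at the current state estimate, is what would be passed to the
// class below as Phi (for the dynamics) or H (for the measurement model).
Eigen::MatrixXd numericalJacobian(
    const std::function<Eigen::VectorXd(const Eigen::VectorXd&)>& f,
    const Eigen::VectorXd& x, double eps = 1e-6) {
    Eigen::MatrixXd J(f(x).size(), x.size());
    for (Eigen::Index j = 0; j < x.size(); ++j) {
        Eigen::VectorXd xPlus = x, xMinus = x;
        xPlus(j) += eps;
        xMinus(j) -= eps;
        J.col(j) = (f(xPlus) - f(xMinus)) / (2.0 * eps);
    }
    return J;
}
Keep in mind that predict() (shown below) propagates the state as Phi * x, so for strongly nonlinear dynamics the state propagation itself may also need to be handled upstream. With that aside, here is the full class: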
/**
* A C++ Kalman Filter Implementation
*/
#include <iostream>
#include <Eigen/Dense>
#include <random>
/**
* The Kalman Filter Class templated on the number of states
*/
template <int N>
class KalmanFilter {
public:
/**
* Filter Constructor
*
* @param x0 The initial filter state. Must have dimension Nx1.
* @param P0 The initial filter covariance. Must have dimension NxN
*
* @brief Sets the initial state and covariance of the filter. If the
* initialization is successful, the filterInitialized_ private member
* flag will be set to true, otherwise it will be set to false.
*/
KalmanFilter(const Eigen::VectorXd& x0, const Eigen::MatrixXd& P0) {
// Check size of initial state
if (x0.size() != N) {
std::cout << "Incorrect state size. Failed to initialize filter." << std::endl;
filterInitialized_ = false;
return;
}
// Check size of initial covariance
if (P0.rows() != N || P0.cols() != N) {
std::cout << "Incorrect covariance size. Failed to initialize filter." << std::endl;
filterInitialized_ = false;
return;
}
// Set state and covariance
x_ = x0;
P_ = P0;
filterInitialized_ = true;
}
/**
* Check if this filter is initialized
*
* @return True if the filter is initialized, false otherwise.
*/
bool filterInitialized() const { return filterInitialized_; }
/**
* Get the filter state
*
* @return x_
*/
Eigen::Vector<double, N> getState() const { return x_; }
/**
* Get the filter covariance
*
* @return P_
*/
Eigen::Matrix<double, N, N> getCovariance() const { return P_; }
/**
* Filter prediction
*
* @param Phi: The NxN state transition matrix
* @param Q: The NxN process noise matrix
* @return True if the filter was predicted successfully, False otherwise.
*
* @brief This function performs the Kalman filter state and covariance
* prediction. The state is predicted by x(k+1) = Phi * x(k), and the
* covariance is predicted by P(k+1) = Phi * P(k) * Phi' + Q.
*
*/
bool predict(const Eigen::MatrixXd& Phi, const Eigen::MatrixXd& Q) {
// Verify the filter has been initialized
if (!filterInitialized_) {
std::cout << "Unable to predict because filter not initialized" << std::endl;
return false;
}
// Check size of state transition matrix
if (Phi.rows() != N || Phi.cols() != N) {
std::cout << "Unable to predict because Phi has incorrect dimensions" << std::endl;
return false;
}
// Check size of process noise matrix
if (Q.rows() != N || Q.cols() != N) {
std::cout << "Unable to predict because Q has incorrect dimensions" << std::endl;
return false;
}
// Perform the prediction
x_ = Phi * x_;
P_ = Phi * P_ * Phi.transpose() + Q;
return true;
}
/**
* Filter update
*
* @param z: The Mx1 measurement
* @param H: The MxN measurement matrix which maps state space to measurement space
* @param R: The MxM measurement noise matrix
* @return True if the filter was updated successfully, False otherwise.
*
* @brief This function performs the Kalman filter state and covariance
* update. First, the Kalman Gain is computed by K = P * H' * inv(H * P * H' + R).
* Next, the state is updated by x(k+1) = x(k) + K * (z - H * x(k)). Finally, the
* covariance is updated using: P(k+1) = (I - K * H) * P(k) * (I - K * H)' + K * R * K'.
*/
bool update(const Eigen::VectorXd& z, const Eigen::MatrixXd& H, const Eigen::MatrixXd& R) {
// Verify the filter has been initialized
if (!filterInitialized_) {
std::cout << "Unable to update because filter not initialized" << std::endl;
return false;
}
// Get measurement dimension
int M = z.size();
// Check size of the measurement matrix
if (H.rows() != M || H.cols() != N) {
std::cout << "Unable to update because H has incorrect dimensions" << std::endl;
return false;
}
// Check size of measurement noise matrix
if (R.rows() != M || R.cols() != M) {
std::cout << "Unable to update because R has incorrect dimensions" << std::endl;
return false;
}
// Compute Kalman Gain
Eigen::MatrixXd K = P_ * H.transpose() * (H * P_ * H.transpose() + R).inverse();
// Update state and covariance
Eigen::MatrixXd IKH = Eigen::MatrixXd::Identity(N, N) - K * H;
x_ = x_ + K * (z - H * x_);
P_ = IKH * P_ * IKH.transpose() + K * R * K.transpose();
return true;
}
private:
// Filter State
Eigen::Vector<double, N> x_;
// Filter Covariance
Eigen::Matrix<double, N, N> P_;
// Filter initialized boolean
bool filterInitialized_;
};
Initialization
The filter initialization occurs in the class constructor. The goal of initialization is to set the initial state and covariance values (which are stored as private member variables) and to set the private filter-initialized flag to true. This flag is checked before predicting or updating the filter to guard against operating on an uninitialized filter.
The key part of the initialization routine is checking that the provided state and covariance have the correct dimensions. If they do not, the filter will not initialize, which prevents any future operations from running. A short usage sketch follows the constructor listing below.
/**
* Filter Constructor
*
* @param x0 The initial filter state. Must have dimension Nx1.
* @param P0 The initial filter covariance. Must have dimension NxN
*
* @brief Sets the initial state and covariance of the filter. If the
* initialization is successful, the filterInitialized_ private member
* flag will be set to true, otherwise it will be set to false.
*/
KalmanFilter(const Eigen::VectorXd& x0, const Eigen::MatrixXd& P0) {
// Check size of initial state
if (x0.size() != N) {
std::cout << "Incorrect state size. Failed to initialize filter." << std::endl;
filterInitialized_ = false;
return;
}
// Check size of initial covariance
if (P0.rows() != N || P0.cols() != N) {
std::cout << "Incorrect covariance size. Failed to initialize filter." << std::endl;
filterInitialized_ = false;
return;
}
// Set state and covariance
x_ = x0;
P_ = P0;
filterInitialized_ = true;
}
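As a quick usage sketch, assuming the KalmanFilter class above is defined in the same file (all numeric values here are arbitrary placeholders), a caller can detect a failed initialization through filterInitialized():
// Construct a 2-state filter with matching dimensions
Eigen::VectorXd x0(2);
x0 << 0.0, 1.0;
Eigen::MatrixXd P0 = 10.0 * Eigen::MatrixXd::Identity(2, 2);
KalmanFilter<2> kf(x0, P0);
if (!kf.filterInitialized()) {
    // handle the failed initialization
}

// A mismatched covariance size leaves the filter uninitialized, and any
// subsequent predict()/update() calls will refuse to run.
KalmanFilter<2> bad(x0, Eigen::MatrixXd::Identity(3, 3));
std::cout << std::boolalpha << bad.filterInitialized() << std::endl;  // prints "false"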
Prediction
The filter prediction function propagates the state and covariance forward in time, taking a state transition matrix and a process noise matrix as inputs. It applies the standard Kalman filter prediction equations and, before running, checks that the filter is initialized and that the input matrices have the correct dimensions.
/**
* Filter prediction
*
* @param Phi: The NxN state transition matrix
* @param Q: The NxN process noise matrix
* @return True if the filter was predicted successfully, False otherwise.
*
* @brief This function performs the Kalman filter state and covariance
* prediction. The state is predicted by x(k+1) = Phi * x(k), and the
* covariance is predicted by P(k+1) = Phi * P(k) * Phi' + Q.
*
*/
bool predict(const Eigen::MatrixXd& Phi, const Eigen::MatrixXd& Q) {
// Verify the filter has been initialized
if (!filterInitialized_) {
std::cout << "Unable to predict because filter not initialized" << std::endl;
return false;
}
// Check size of state transition matrix
if (Phi.rows() != N || Phi.cols() != N) {
std::cout << "Unable to predict because Phi has incorrect dimensions" << std::endl;
return false;
}
// Check size of process noise matrix
if (Q.rows() != N || Q.cols() != N) {
std::cout << "Unable to predict because Q has incorrect dimensions" << std::endl;
return false;
}
// Perform the prediction
x_ = Phi * x_;
P_ = Phi * P_ * Phi.transpose() + Q;
return true;
}
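Continuing the small two-state sketch from the previous section, a single prediction step for a 1-D constant-velocity model might look like the following (dt and the noise level q are arbitrary illustration values):
// Build the constant-velocity transition and process noise matrices
double dt = 0.1;
double q = 0.5;
Eigen::MatrixXd Phi(2, 2);
Phi << 1.0, dt,
       0.0, 1.0;
Eigen::MatrixXd Q(2, 2);
Q << dt*dt*dt/3.0, dt*dt/2.0,
     dt*dt/2.0,    dt;
Q *= q;

// Propagate the filter one step forward in time
if (!kf.predict(Phi, Q)) {
    // prediction rejected: filter not initialized or bad matrix dimensions
}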
Measurement Updates
The measurement update function updates the state and covariance stored in the class by computing the Kalman gain and then applying the filter update equations. The Joseph form of the covariance update is used because it better preserves the symmetry and positive semi-definiteness of the covariance in the presence of round-off error. The update function takes the measurement, the (possibly linearized) measurement matrix, and the measurement noise matrix as inputs.
/**
* Filter update
*
* @param z: The Mx1 measurement
* @param H: The MxN measurement matrix which maps state space to measurement space
* @param R: The MxM measurement noise matrix
* @return True if the filter was updated successfully, False otherwise.
*
* @brief This function performs the Kalman filter state and covariance
* update. First, the Kalman Gain is computed by K = P * H' * inv(H * P * H' + R).
* Next, the state is updated by x(k+1) = x(k) + K * (z - H * x(k)). Finally, the
* covariance is updated using: P(k+1) = (I - K * H) * P(k) * (I - K * H)' + K * R * K'.
*/
bool update(const Eigen::VectorXd& z, const Eigen::MatrixXd& H, const Eigen::MatrixXd& R) {
// Verify the filter has been initialized
if (!filterInitialized_) {
std::cout << "Unable to update because filter not initialized" << std::endl;
return false;
}
// Get measurement dimension
int M = z.size();
// Check size of the measurement matrix
if (H.rows() != M || H.cols() != N) {
std::cout << "Unable to update because H has incorrect dimensions" << std::endl;
return false;
}
// Check size of measurement noise matrix
if (R.rows() != M || R.cols() != M) {
std::cout << "Unable to update because R has incorrect dimensions" << std::endl;
return false;
}
// Compute Kalman Gain
Eigen::MatrixXd K = P_ * H.transpose() * (H * P_ * H.transpose() + R).inverse();
// Update state and covariance
Eigen::MatrixXd IKH = Eigen::MatrixXd::Identity(N, N) - K * H;
x_ = x_ + K * (z - H * x_);
P_ = IKH * P_ * IKH.transpose() + K * R * K.transpose();
return true;
}
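And continuing the same sketch once more, a position-only measurement update for the two-state filter could look like this (the measurement value and noise level are arbitrary):
// Build a measurement model that observes position only, not velocity
Eigen::VectorXd z(1);
z << 1.3;                           // measured position
Eigen::MatrixXd H(1, 2);
H << 1.0, 0.0;
Eigen::MatrixXd R = 0.25 * Eigen::MatrixXd::Identity(1, 1);

// Apply the measurement update and inspect the result
if (kf.update(z, H, R)) {
    std::cout << "Updated state:\n" << kf.getState() << std::endl;
}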
An Example
Now that we have our class, let’s test it out! The following simulation code can be used:
/**
* Main Execution of the Kalman Filter Simulation
*/
int main() {
// Define model parameters
double dt = 0.1;
double simTime = 30.0;
double processNoiseVar = 0.1;
double measurementNoiseVar = 5.0;
// Initialize a standard normal distribution for generating noise samples.
// The unit-variance samples are scaled later by the Cholesky factors of Q
// and R, which already contain the process and measurement noise variances.
std::default_random_engine generator;
std::normal_distribution<double> standardNormalDist(0.0, 1.0);
// Define process model and noise covariance
Eigen::Matrix<double, 4, 4> Phi = Eigen::Matrix<double, 4, 4>::Identity();
Phi.block(0, 2, 2, 2) = dt * Eigen::Matrix<double, 2, 2>::Identity();
Eigen::Matrix<double, 4, 4> Q;
Q << dt*dt*dt/3.0, 0, dt*dt/2.0, 0,
0, dt*dt*dt/3.0, 0, dt*dt/2.0,
dt*dt/2.0, 0, dt, 0,
0, dt*dt/2.0, 0, dt;
Q = Q * processNoiseVar;
// Define measurement model and noise covariance
Eigen::Matrix<double, 2, 4> H;
H << 1, 0, 0, 0,
0, 1, 0, 0;
Eigen::Matrix<double, 2, 2> R = measurementNoiseVar * Eigen::Matrix<double, 2, 2>::Identity();
// Set an initial truth state
Eigen::Vector<double, 4> truthState0{0.0, 0.0, 1.6, -2.1};
Eigen::Vector<double, 4> truthState = truthState0;
// Loop over sim time and generate truth and measurements
std::vector<Eigen::Vector<double, 4>> truthHistory;
std::vector<Eigen::Vector<double, 2>> measurementHistory;
std::vector<double> tHistory;
double tCurr = 0.0;
while (tCurr <= simTime) {
// Update time
tCurr += dt;
// Sample process noise: draw unit-variance Gaussian samples and scale by chol(Q)
Eigen::VectorXd vP = Eigen::VectorXd::NullaryExpr(Q.rows(), [&]() { return standardNormalDist(generator); });
vP = Q.llt().matrixL() * vP;
// Propagate the dynamics
truthState = Phi * truthState + vP;
// Sample measurement noise: draw unit-variance Gaussian samples and scale by chol(R)
Eigen::VectorXd vM = Eigen::VectorXd::NullaryExpr(R.rows(), [&]() { return standardNormalDist(generator); });
vM = R.llt().matrixL() * vM;
// Generate a noisy measurement at the truth state
Eigen::VectorXd z = H * truthState + vM;
// Add data to output
tHistory.push_back(tCurr);
truthHistory.push_back(truthState);
measurementHistory.push_back(z);
}
// Define initial state and covariance for Kalman Filter
Eigen::Vector<double, 4> x0{0.0, 0.0, 0.0, 0.0};
Eigen::Matrix<double, 4, 4> P0 = 100.0 * Eigen::Matrix<double, 4, 4>::Identity();
// Create Kalman Filter Object
KalmanFilter<4> kf(x0, P0);
// Loop over data and run filter
// Prints output in form: t, x1, x2, x3, x4, sig1, sig2, sig3, sig4, e1, e2, e3, e4
// These quantities represent the state and the square root of the diagonal of the
// covariance matrix after the filter update at each step. The state error is also printed.
for (size_t i = 0; i < tHistory.size(); i++) {
// Print initial filter state
if (i == 0) {
Eigen::VectorXd x = kf.getState();
Eigen::MatrixXd P = kf.getCovariance();
Eigen::VectorXd e = x - truthState0;
std::cout << 0.0 << ", " << x(0) << ", " << x(1) << ", " << x(2) <<
", " << x(3) << ", " << std::sqrt(P(0,0)) << ", " << std::sqrt(P(1,1)) <<
", " << std::sqrt(P(2,2)) << ", " << std::sqrt(P(3,3)) << ", " << e(0) <<
", " << e(1) << ", " << e(2) << ", " << e(3) << std::endl;
}
// Predict state and covariance
bool predictSuccess = kf.predict(Phi, Q);
// Perform the measurement update only if the prediction succeeded
bool updateSuccess = predictSuccess && kf.update(measurementHistory[i], H, R);
// Print output at time
if (updateSuccess) {
Eigen::VectorXd x = kf.getState();
Eigen::MatrixXd P = kf.getCovariance();
Eigen::VectorXd e = x - truthHistory[i];
std::cout << tHistory[i] << ", " << x(0) << ", " << x(1) << ", " << x(2) <<
", " << x(3) << ", " << std::sqrt(P(0,0)) << ", " << std::sqrt(P(1,1)) <<
", " << std::sqrt(P(2,2)) << ", " << std::sqrt(P(3,3)) << ", " << e(0) <<
", " << e(1) << ", " << e(2) << ", " << e(3) << std::endl;
}
}
return 0;
}
The code starts by defining several model parameters, such as the simulation time, time step and noise variances. A standard normal distribution is also set up; per the Kalman filter's Gaussian noise assumptions, samples drawn from it are later scaled by the Cholesky factors of the process and measurement noise covariances to generate the simulated noise.
Next, we define the process model (the state transition matrix and process noise matrix). This example assumes a 4-dimensional state (x/y position and x/y velocity). The process noise matrix comes from the standard constant-velocity motion model; a short sketch of its per-axis structure follows below. We also define the measurement model, which observes the x and y positions directly, along with the associated measurement noise.
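For reference, the 4x4 Q used in the code follows the white-noise-acceleration (constant velocity) form, in which each axis contributes a 2x2 position/velocity block scaled by the noise variance. Here is a small sketch of how the same matrix could be assembled per axis (variable names are illustrative; the state is ordered [x, y, vx, vy]):
// Per-axis white-noise-acceleration block for time step dt
double dt = 0.1;
double processNoiseVar = 0.1;
Eigen::Matrix2d Qaxis;
Qaxis << dt*dt*dt/3.0, dt*dt/2.0,
         dt*dt/2.0,    dt;
Qaxis *= processNoiseVar;

// Place the block into the 4x4 matrix for the state ordering [x, y, vx, vy]
Eigen::Matrix4d Q = Eigen::Matrix4d::Zero();
Q(0,0) = Qaxis(0,0); Q(0,2) = Qaxis(0,1);   // x position / x velocity
Q(2,0) = Qaxis(1,0); Q(2,2) = Qaxis(1,1);
Q(1,1) = Qaxis(0,0); Q(1,3) = Qaxis(0,1);   // y position / y velocity
Q(3,1) = Qaxis(1,0); Q(3,3) = Qaxis(1,1);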
With the system dynamics defined, the truth trajectory and the associated measurement history are generated. Given an initial state, the truth trajectory is computed recursively by propagating the previous truth state forward through the constant-velocity motion model and adding process noise sampled from the process noise distribution. Likewise, a measurement is generated at each time step by passing the truth state through the measurement model and adding noise sampled from the measurement noise distribution.
Finally, the Kalman filter simulation is run. At each step, the filter performs a prediction followed by an update, and several quantities are printed to the console for post-processing: the current time, the state, the standard deviation of each state, and the error with respect to the truth.
The following figure shows the state errors over time, along with the standard deviations.
As we would expect, the filter 1-sigmas quickly converge to a steady state, along with the error in the state estimate. Visually, the filter also appears to be statistically consistent, though verifying that assertion is left as an exercise for the reader. To generate this plot, save the filter's terminal output to a file named "output.txt" and run the following Python script:
# Plot Kalman Filter Results
import numpy as np
import matplotlib.pyplot as plt
# Load the data from output.txt
data = np.loadtxt('output.txt', delimiter=',', skiprows=0)
# Extract data
errColumns = data[:, 9:13]
sigColumns = data[:, 5:9]
# Plot results
fig, axs = plt.subplots(4, 1, figsize=(10, 8), sharex=True)
for i in range(4):
    axs[i].plot(data[:, 0], errColumns[:, i], label=f'err{i+1}', color='b')
    axs[i].plot(data[:, 0], sigColumns[:, i], label=f'sig{i+1}', color='r')
    axs[i].plot(data[:, 0], -sigColumns[:, i], color='r')
    axs[i].set_ylabel('Error')
    axs[i].legend(loc='upper right')
    axs[i].grid(True)
plt.xlabel('Time')
plt.tight_layout()
plt.suptitle('Kalman Filter Performance', fontsize=16)
plt.show()
You can also pull more data out of the filter, print it, and create your own plots! I recommend starting by tweaking the model parameters; then, once you feel confident in your understanding, try creating your own simulation scenario with a different process or measurement model.
Conclusion
I hope you enjoyed this C++ Kalman filter tutorial. We covered the class design and the main simulation loop for generating data and running the filter. These two tools should give you the flexibility you need to apply this code to your own specific problem. Try it out on some of your own data and let me know in the comments how it works. Happy programming!
-Parker