Did you know Automatic Differentiation (AD) can be used to increase speed and accuracy in lap time simulation? AD increases performance by up to 100x, meaning more lap simulation iterations over the same amount of time. It also improves robustness by providing more accurate first and second-order derivatives. More accuracy can help the optimizer to find an optimal solution more reliably.

Why is this important? Well, lap simulation is widely used in racing and can be vital to a team’s competitiveness. We have also seen the demand for lap simulation tools growing in e-sports, as it becomes more lucrative. Ultimately, the value of lap simulation is in its accuracy and the amount of data it can produce.

NAG’s experience within competitive motorsports tells us a successful and performant simulation depends on a range of mathematical models, representing the physical behaviour of the vehicle, as well as efficient algorithms and tools.

This blog goes into a little more detail about how AD can be used to improve lap simulation, and looks at a case study based on a collaboration with RWTH Aachen University and the Formula Student Team Ecurie Aix. The blog is technical, but we have added some visuals; watch the simulation opposite to see an example lap time simulation for the Silverstone, UK racetrack.

The models in this case study are less complex than the ones in cutting-edge implementations NAG has been part of, but the core numerical aspects are similar, and the methods and the benefits AD brings carry over readily to other models. This is especially true since AD computes derivatives efficiently and accurately regardless of the particular model implementation. As we will demonstrate with numbers later, using AD not only preserves flexibility during model development, but also yields the quickest run times and makes the optimizer more robust through accurate derivatives.

For this case study, we took a bicycle vehicle model, a tire model, the governing energy equations, and set up a nonlinear optimization problem. This problem is defined by an objective function as well as linear (box) and nonlinear constraints. To give you an impression of the underlying models, let’s have a look at their definition.

The racetrack is discretized at N evaluation points based on the x and y coordinates of the centerline, the track width, and friction coefficients. Currently, height information is not included. The track is assumed to be closed; however, an open track can be implemented with minor changes to the discretization. A time-to-arclength transformation is applied, so that the integration over the trajectory is independent of the achieved lap time.
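To illustrate the arclength parametrization, here is a minimal sketch (not the case study's actual code) that accumulates the arclength along a closed centerline from its discretized x/y coordinates; the function name and interface are hypothetical:

```cpp
#include <cmath>
#include <vector>

// Hypothetical sketch: cumulative arclength s_i along a closed centerline
// given N centerline points (x_i, y_i). The last segment wraps back to the
// first point, so s_N is the total track length.
std::vector<double> arclength(const std::vector<double>& x,
                              const std::vector<double>& y) {
    const std::size_t n = x.size();
    std::vector<double> s(n + 1, 0.0);
    for (std::size_t i = 0; i < n; ++i) {
        const std::size_t j = (i + 1) % n;     // wrap around: closed track
        const double dx = x[j] - x[i];
        const double dy = y[j] - y[i];
        s[i + 1] = s[i] + std::hypot(dx, dy);  // Euclidean segment length
    }
    return s;
}
```

For an open track, the wrap-around segment would simply be dropped, which is the "minor change in the discretization" mentioned above.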

The parameter space consists of 8*N variables, all of which are subject to optimization. The optimization objective is a scalar cost function, with the lap time as the primary objective and optional secondary objectives added to achieve smooth actuator inputs. All variables are subject to box constraints, and in addition the optimization is subject to 9*N linear and nonlinear constraints:

- 5*N constraints enforce the governing system ODEs (equality constraint on residual = 0).
- 2*N constraints enforce that the maximal tire forces on the front and rear axle are not exceeded (Circle of Forces).
- 2*N optional constraints enforce that the slip angle stays reasonable (should be satisfied at the solution anyway but can increase the performance of the optimizer).
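As an example of one of these nonlinear constraints, the Circle of Forces can be written as a smooth residual that a constraint callback returns to the optimizer. The following is a simplified sketch, with a hypothetical function name and signature, not the case study's actual tire model:

```cpp
// Hypothetical sketch of the "Circle of Forces" inequality for one axle:
// the combined longitudinal/lateral tire force must stay inside the
// friction circle, |F| <= mu * Fz. Written as a residual g <= 0, which is
// the form a nonlinear constraint callback would return to the optimizer.
// Squaring both sides keeps the residual smooth at F = 0.
double friction_circle_residual(double Fx, double Fy, double Fz, double mu) {
    return Fx * Fx + Fy * Fy - (mu * Fz) * (mu * Fz);
}
```

The optimizer then enforces `friction_circle_residual(...) <= 0` at every discretization point, for both axles.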

To perform efficient nonlinear optimization, a derivative-based algorithm needs the following derivative information:

- The gradient of the objective function (8*N entries),
- the Jacobian of the constraint function (sparse 9*N x 8*N matrix, band structure w/ 16 bands),
- and the Hessian of the Lagrangian (sparse 8*N x 8*N matrix, band structure w/ 28 bands).
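For reference, the Lagrangian whose Hessian the solver needs takes the standard form for a problem with objective f and constraint function c (sign conventions for the multipliers may differ between solvers):

```latex
\mathcal{L}(x, \lambda) = f(x) + \lambda^{T} c(x),
\qquad
\nabla_{xx}^{2} \mathcal{L}(x, \lambda)
  = \nabla^{2} f(x) + \sum_{i=1}^{9N} \lambda_i \, \nabla^{2} c_i(x)
```

with x the 8*N optimization variables and lambda the 9*N constraint multipliers, which is where the sparse 8*N x 8*N band structure above comes from.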

We are using a C++ implementation of the model embedded into NAG’s optimization suite. For running the optimizer, we need to provide functions which evaluate the objective, the constraints, as well as their first and second derivatives. Depending on the method used for computing the derivatives, coding effort, efficiency, and accuracy properties widely vary:

**Dense finite differences:**

This is the simplest method at first sight, until one realizes that accuracy and performance issues emerge very quickly. Dense finite differences can be used for the full set of derivative information needed and can be coded with little effort. Solvers like the one we use from the NAG Library implement finite differences internally; the user is only required to pass callbacks that evaluate the objective and the constraints, respectively. Unfortunately, especially for higher-order derivative information, the questionable accuracy may become a showstopper: as we see later, the optimizer might simply not find a solution.
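A minimal sketch of a dense central-difference gradient shows both the appeal and the cost: it is a few lines of code, but needs 2*n function evaluations per gradient and carries a truncation/round-off error of order h^2 + eps/h (the function name and interface here are illustrative, not NAG Library API):

```cpp
#include <functional>
#include <vector>

// Minimal sketch of a dense central-difference gradient:
// 2*n evaluations of f per gradient, accuracy limited by the step size h.
std::vector<double> fd_gradient(
    const std::function<double(const std::vector<double>&)>& f,
    std::vector<double> x, double h = 1e-6) {
    std::vector<double> g(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) {
        const double xi = x[i];
        x[i] = xi + h; const double fp = f(x);
        x[i] = xi - h; const double fm = f(x);
        x[i] = xi;                       // restore the perturbed component
        g[i] = (fp - fm) / (2.0 * h);    // central difference
    }
    return g;
}
```

For the 8*N variables of the lap time problem, every gradient already costs 16*N objective evaluations, before the Jacobian and Hessian are even considered.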

**Sparse finite differences:**

Most derivative tensors have a sparse structure; exploiting this sparsity brings huge performance benefits when computing derivatives. Though this doesn't help for the gradient (which is usually dense), it can and should be exploited when computing the Jacobian of the constraint function and the Hessian of the Lagrangian. This requires knowledge of the sparsity structure of the problem, which is either known in advance or can be retrieved computationally (via AD). Sparsity-exploiting finite differences are a little more effort to implement, but far more efficient. Solvers like the NAG ones also exploit the sparsity structure. The accuracy issue persists, though.
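The key idea is column compression (often called coloring): columns that never share a nonzero row can be perturbed together in a single function evaluation. The following sketch, assuming a tridiagonal Jacobian for illustration (the band structure of the real problem is wider), recovers the whole Jacobian from 3 extra evaluations instead of n; the function name and interface are hypothetical:

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Hypothetical sketch of compressed ("colored") finite differences for a
// tridiagonal Jacobian: columns i, i+3, i+6, ... never share a row, so they
// can be perturbed together. 3 extra evaluations of F recover the whole
// Jacobian instead of n.
std::vector<std::vector<double>> sparse_fd_jacobian(
    const std::function<std::vector<double>(const std::vector<double>&)>& F,
    const std::vector<double>& x, double h = 1e-6) {
    const std::size_t n = x.size();
    const std::vector<double> F0 = F(x);
    std::vector<std::vector<double>> J(n, std::vector<double>(n, 0.0));
    for (std::size_t color = 0; color < 3; ++color) {  // 3 colors suffice
        std::vector<double> xp = x;
        for (std::size_t i = color; i < n; i += 3) xp[i] += h;
        const std::vector<double> Fp = F(xp);
        for (std::size_t i = color; i < n; i += 3)     // scatter per column
            for (std::size_t r = (i > 0 ? i - 1 : 0);
                 r <= std::min(i + 1, n - 1); ++r)
                J[r][i] = (Fp[r] - F0[r]) / h;
    }
    return J;
}
```

For the constraint Jacobian above (16 bands), the same argument bounds the number of extra evaluations by a small constant independent of N.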

**Manual differentiation / hand-writing derivative code:**

If the models used have a mathematical representation, the corresponding sensitivity models can be derived symbolically. This approach might lead to the most efficient implementation possible, since it can exploit every little mathematical and computational detail optimally. Unfortunately, the skills required to derive these models and the effort involved in implementing them are usually quite large. In addition, this effort is not a one-off investment: each time the model or the set of parameters changes, the sensitivity models and their implementation must change as well. Provided the derivation and implementation are correct and numerically well behaved, the accuracy properties are usually good.
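To make the maintenance burden concrete, here is a toy example in the spirit of a simplified "magic formula" tire model (illustrative only, not the model used in the case study) together with its hand-derived derivative. Any change to the model function must be mirrored manually in the derivative code:

```cpp
#include <cmath>

// Toy hand-differentiated model: F(a) = D * sin(C * atan(B * a)),
// a simplified magic-formula-style lateral tire force over slip angle a.
double tire_force(double a, double B, double C, double D) {
    return D * std::sin(C * std::atan(B * a));
}

// Manually derived sensitivity dF/da via the chain rule:
// d/da sin(C*atan(B*a)) = cos(C*atan(B*a)) * C * B / (1 + (B*a)^2).
// Every model change must be propagated here by hand.
double tire_force_da(double a, double B, double C, double D) {
    const double Ba = B * a;
    return D * std::cos(C * std::atan(Ba)) * C * B / (1.0 + Ba * Ba);
}
```

Even for this one-liner the derivative code is longer than the model; for a full vehicle model the sensitivity code grows accordingly.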

**Automatic differentiation:**

AD combines the upsides of sparse finite differences and manual coding. It computes exact derivatives (up to machine precision), comes with very good performance (especially by using the adjoint mode, also called reverse mode) for first and higher derivatives, and is flexible regarding model changes. In short, with AD there is a one-off investment for implementing it, but in return, it comes with a very low maintenance cost, good efficiency, and high accuracy. The initial integration of AD into a reasonably big C++ code is not trivial. There are challenges related to program design, the use of third-party libraries, compute architecture, parallelization, and other aspects. With these caveats, it becomes apparent that AD is not a silver bullet. But with the correct tools and reliable support, these risks can be limited, and the benefits unfold quickly.
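The core mechanism of forward-mode AD can be sketched in a few lines. The following toy dual-number type is a minimal stand-in for a production tool like dco/c++ (which adds adjoint mode, higher-order modes, and much more): each value carries a derivative along one tangent direction, and overloaded operators propagate both exactly.

```cpp
#include <cmath>

// Toy forward-mode AD: a dual number carries the value v and the derivative
// d along one tangent direction; operators apply the product/chain rules.
struct Dual {
    double v, d;
};
Dual operator+(Dual a, Dual b) { return {a.v + b.v, a.d + b.d}; }
Dual operator*(Dual a, Dual b) { return {a.v * b.v, a.d * b.v + a.v * b.d}; }
Dual sin(Dual a) { return {std::sin(a.v), std::cos(a.v) * a.d}; }

// f(x) = x*x + sin(x); seeding d = 1 yields f'(x) = 2x + cos(x)
// to machine precision, with no step-size parameter to tune.
Dual f(Dual x) { return x * x + sin(x); }
```

No finite-difference step size appears anywhere, which is exactly why the accuracy issues of the FD variants disappear.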

The implementation is done in C++. Eigen is used for the model implementation, the NAG Library for the optimization, and dco/c++ for computing the required derivatives. The NAG Library provides a wide variety of numerical algorithms, ranging from optimization and linear algebra to interpolation and much more. dco/c++ is NAG's AD tool, which comes with a rich set of features and best-in-class performance.

With the following run time numbers, we want to give an impression of the potential gains from using the NAG Library in combination with AD.

The results we get from the simulation are visualized above. On the left is the optimal race line, on the right the various parameters over the racetrack.

Note that the results presented here are obtained by a model geared towards the Formula Student competition rules, i.e., light cars, slow speeds, electric powertrain, high drag for high downforce, which would typically run on much smaller and tighter tracks than Silverstone.

We use the e04stc routine from the NAG Library, an interior point optimization solver, to perform the optimization.

The following picture shows the sparsity pattern for the Jacobian of the constraints — it has quite some regularity (as expected), but might change when changing constraints or discretization schemes. The sparsity is exploited in the sense that only non-zero elements of the Jacobian are computed and stored. The Hessian has even fewer non-zero entries.

When computing the derivatives with dco/c++, we use the first order adjoint mode (dco::ga1s<double>) for computing the gradient, first order tangent vector mode (dco::gt1v<double, 8>) for the Jacobian, and second order tangent vector over adjoint mode (dco::ga1s<dco::gt1v<double, 8>>) for the Hessian computation. For increasing the performance, explicit vectorization (AVX2) has been implemented for all involved AD vector modes, gaining another 40% speedup on the given code compared to the basic set of features in dco/c++.
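The tangent *vector* mode can be pictured as a dual number whose derivative part is an array: one function evaluation propagates 8 Jacobian columns at once, and the 8 tangents form a natural target for SIMD/AVX2 vectorization. The following toy type sketches the idea in the spirit of dco::gt1v<double, 8> (it is not the dco/c++ implementation):

```cpp
#include <array>

// Toy tangent vector mode: one value carries 8 tangent components, so a
// single evaluation propagates 8 directional derivatives (Jacobian columns)
// simultaneously. The inner loop is a natural SIMD target.
struct DualV {
    double v;
    std::array<double, 8> d;
};
DualV operator*(const DualV& a, const DualV& b) {
    DualV r{a.v * b.v, {}};
    for (int i = 0; i < 8; ++i)  // product rule, applied per tangent
        r.d[i] = a.d[i] * b.v + a.v * b.d[i];
    return r;
}
```

Seeding the 8 tangent components with unit vectors along 8 different inputs recovers 8 Jacobian columns per sweep, which pairs naturally with the band structure of the constraint Jacobian.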

For the non-linear optimization routine e04stc we compare run times and convergence behavior for sparse finite differences and sparse AD. We run an optimization test case with 1000 discretization points over the Silverstone racetrack (as discretized by TU Munich).

For this specific model we observe that the full AD version (AD for both first and second derivatives) is the most reliable. In terms of run time, computing only the first derivatives with AD and approximating the Hessian with a BFGS approach can be competitive; however, whether this approach converges to a solution at all, and in how many iterations, is highly sensitive to even minor changes in the formulation of the nonlinear objective.

A full FD implementation is not competitive when the gradient of the nonlinear objective is dense, and it suffers from the same reliability issues as the AD + BFGS version.

| | Iterations | Runtime [s] | Gradients [s] | Jacobian [s] | Hessian [s] |
| --- | --- | --- | --- | --- | --- |
| e04stc FULL AD | 1001 | 83 | 0.76 | 2.29 | 54.52 |
| e04stc AD + BFGS | 2678 | 100 | 3.43 | 6.23 | – (BFGS) |
| e04stc FD + BFGS | 1486 | 584 | 536.40 | 5.53 | – (BFGS) |

AD brings many benefits; it comes with a lot of flexibility while preserving a very low maintenance cost, good efficiency, and high accuracy. With NAG, you get access to the most efficient tools as well as targeted support — not only for AD. With NAG as a partner, you also have access to specialists in numerical and high-performance computing.

We hope you enjoyed the blog. Please don’t hesitate to get in touch if you want to know more about the NAG Library in general, the included optimizers, or NAG’s AD Solutions.

Blog authors: Markus Towara and Johannes Lotz