Did you know Automatic Differentiation (AD) can be used to increase both speed and accuracy in lap time simulation? AD increases performance by up to 100x, meaning more simulation iterations in the same amount of time. It also improves robustness by providing more accurate first- and second-order derivatives, which helps the optimizer find an optimal solution more reliably.
Why is this important? Well, lap simulation is widely used in racing and can be vital to a team’s competitiveness. We have also seen the demand for lap simulation tools growing in e-sports, as it becomes more lucrative. Ultimately, the value of lap simulation is in its accuracy and the amount of data it can produce.
nAG’s experience within competitive motorsports tells us that a successful and performant simulation depends on a range of mathematical models representing the physical behaviour of the vehicle, as well as on efficient algorithms and tools.
This blog goes into a little more detail about how AD can be used to improve lap simulation, and looks at a case study based on a collaboration with RWTH Aachen University and the Formula Student Team Ecurie Aix. The blog is technical, but we have added some visuals; the accompanying animation shows an example lap time simulation for the Silverstone, UK racetrack.
The models in this case study are less complex than the ones in cutting-edge implementations nAG has been a part of, but the core numerical aspects are similar, and these methods and the benefits AD brings carry over easily to other models. This is especially true since AD computes derivatives efficiently and accurately regardless of the specific model implementation. As we will demonstrate with numbers later, using AD not only preserves flexibility in model development, but also delivers the quickest run time and makes the optimizer more robust thanks to the accuracy of the derivatives.
For this case study, we took a bicycle vehicle model, a tire model, and the governing energy equations, and set up a nonlinear optimization problem. This problem is defined by an objective function as well as linear (box) and nonlinear constraints. To give you an impression of the underlying models, let’s have a look at their definition.
The racetrack is discretized at N evaluation points based on the x and y coordinates of the centerline, the track width, and friction coefficients. Currently, height information is not included. The track is assumed to be closed; however, an open track can be handled with minor changes to the discretization. A time-to-arclength transformation is applied so that the integration along the trajectory is independent of the achieved lap time.
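To illustrate the discretization and the arclength parameterization, here is a minimal Eigen-based sketch that accumulates arclength along a sampled centerline. The function and variable names are ours for illustration only and are not taken from the actual implementation.

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <iostream>

// Cumulative arclength s_i along a sampled centerline given x/y coordinates.
// Illustrative helper only: it demonstrates the idea of parameterizing the
// lap by distance travelled instead of by time.
Eigen::VectorXd cumulative_arclength(const Eigen::VectorXd& x,
                                     const Eigen::VectorXd& y) {
    const Eigen::Index N = x.size();
    Eigen::VectorXd s(N);
    s(0) = 0.0;
    for (Eigen::Index i = 1; i < N; ++i) {
        s(i) = s(i - 1) + std::hypot(x(i) - x(i - 1), y(i) - y(i - 1));
    }
    return s;
}

int main() {
    // Toy circular "track" sampled at 100 centerline points.
    const int N = 100;
    const double pi = std::acos(-1.0);
    Eigen::VectorXd x(N), y(N);
    for (int i = 0; i < N; ++i) {
        const double phi = 2.0 * pi * i / N;
        x(i) = 100.0 * std::cos(phi);  // radius 100 m
        y(i) = 100.0 * std::sin(phi);
    }
    std::cout << "arclength of the sampled centerline: "
              << cumulative_arclength(x, y)(N - 1) << " m\n";
}
```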
The parameter space is defined by 8*N variables, all of which are subject to optimization. The optimization objective is described by a scalar cost function, with the lap time as the primary objective and optional secondary objectives added to achieve smooth actuator inputs. All variables are subject to box constraints, and in addition the optimization is subject to 9*N (non)linear constraints.
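To make the dimensions concrete, the sketch below shows one possible layout of the eight optimization variables per discretization point. The specific state and control names are assumptions for illustration and may not match the model used in this case study.

```cpp
#include <cstddef>

// One possible layout of the eight per-point optimization variables.
// The names are illustrative assumptions only; the real model may use a
// different set of states and controls.
enum VarIndex : std::size_t {
    kLateralOffset = 0,  // distance from the centerline
    kHeading,            // heading relative to the centerline
    kVelocity,           // longitudinal velocity
    kYawRate,            // yaw rate
    kSteering,           // steering angle
    kThrottle,           // drive command
    kBrake,              // brake command
    kTime,               // local time variable
    kVarsPerPoint        // == 8
};

// Flattened index of variable v at discretization point i
// in an optimization vector of size kVarsPerPoint * N.
inline std::size_t flat_index(std::size_t i, VarIndex v) {
    return i * kVarsPerPoint + v;
}
```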
To perform efficient nonlinear optimization with a derivative-based algorithm, the solver needs the following derivative information: the gradient of the objective, the Jacobian of the constraints, and the Hessian of the Lagrangian (or suitable approximations of them).
We are using a C++ implementation of the model embedded into nAG’s optimization suite. To run the optimizer, we need to provide functions which evaluate the objective and the constraints, as well as their first and second derivatives. Depending on the method used to compute these derivatives (e.g., finite differences or AD), the coding effort, efficiency, and accuracy vary widely.
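As a baseline for comparison, here is a minimal central finite-difference gradient: simple to code, but it costs 2n objective evaluations per gradient, and its accuracy is limited by the choice of step size. This is exactly the trade-off AD avoids; the code is a generic sketch, not part of the case-study implementation.

```cpp
#include <Eigen/Dense>
#include <functional>

// Central finite-difference approximation of the gradient of f at x.
// Cost: 2n evaluations of f. Accuracy is limited by the step size h
// (truncation vs. round-off error), whereas AD gives derivatives that are
// exact up to machine precision.
Eigen::VectorXd fd_gradient(const std::function<double(const Eigen::VectorXd&)>& f,
                            Eigen::VectorXd x, double h = 1e-6) {
    Eigen::VectorXd g(x.size());
    for (Eigen::Index i = 0; i < x.size(); ++i) {
        const double xi = x(i);
        x(i) = xi + h;
        const double fp = f(x);
        x(i) = xi - h;
        const double fm = f(x);
        x(i) = xi;  // restore
        g(i) = (fp - fm) / (2.0 * h);
    }
    return g;
}
```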
The implementation is done in C++. Eigen is used for the model implementation, the nAG Library for the optimization, and dco/c++ for computing the required derivatives. The nAG Library provides a wide variety of numerical algorithms, ranging from optimization and linear algebra to interpolation, and much more. dco/c++ is nAG’s AD tool, which comes with a rich set of features and best-in-class performance.
With the following run-time numbers, we want to give an impression of the potential gains from using the nAG Library in combination with AD.
The results we get from the simulation are visualized above: on the left, the optimal racing line; on the right, the various parameters along the racetrack.
Note that the results presented here are obtained by a model geared towards the Formula Student competition rules, i.e., light cars, slow speeds, electric powertrain, high drag for high downforce, which would typically run on much smaller and tighter tracks than Silverstone.
We use the e04stc routine from the nAG Library to perform the optimization; it is an interior-point optimization solver.
The following picture shows the sparsity pattern of the Jacobian of the constraints. It shows considerable regularity (as expected), but may change when the constraints or the discretization scheme change. The sparsity is exploited in the sense that only the non-zero elements of the Jacobian are computed and stored. The Hessian has even fewer non-zero entries.
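As a generic illustration of exploiting sparsity, the sketch below assembles only the structurally non-zero Jacobian entries from triplets into an Eigen::SparseMatrix. It is an illustration of the idea only; the actual interface used with the nAG solver may expect a different storage format.

```cpp
#include <Eigen/Sparse>
#include <vector>

// Assemble a sparse constraint Jacobian from (row, col, value) triplets,
// storing only the structurally non-zero entries. Illustrative only; the
// solver in the case study may require, e.g., compressed row storage instead.
Eigen::SparseMatrix<double>
assemble_jacobian(int n_constraints, int n_vars,
                  const std::vector<Eigen::Triplet<double>>& entries) {
    Eigen::SparseMatrix<double> J(n_constraints, n_vars);
    J.setFromTriplets(entries.begin(), entries.end());
    return J;
}
```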
When computing the derivatives with dco/c++, we use the first-order adjoint mode (dco::ga1s&lt;double&gt;) for the gradient, the first-order tangent vector mode (dco::gt1v&lt;double, 8&gt;) for the Jacobian, and the second-order tangent vector over adjoint mode (dco::ga1s&lt;dco::gt1v&lt;double, 8&gt;&gt;) for the Hessian. To increase performance further, explicit vectorization (AVX2) has been implemented for all involved AD vector modes, gaining another 40% speedup on this code compared to the basic feature set of dco/c++.
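The sketch below shows the corresponding mode type definitions and a minimal first-order adjoint driver following the usual dco/c++ tape pattern. Treat it as an assumption-laden sketch rather than the code used in the case study; the exact spelling of nested types and tape-management calls can differ between dco/c++ versions.

```cpp
#include <dco.hpp>

// AD mode types as named in the text (sketch; nesting via ::type may vary):
using adjoint_mode        = dco::ga1s<double>;                      // gradient
using tangent_vector_mode = dco::gt1v<double, 8>;                   // Jacobian
using second_order_mode   = dco::ga1s<dco::gt1v<double, 8>::type>;  // Hessian

// Minimal first-order adjoint driver for df/dx of a scalar function f,
// following the usual dco/c++ pattern: create a tape, record the function
// evaluation, seed the output adjoint, interpret, harvest the input adjoint.
template <typename F>
double adjoint_derivative(F f, double x0) {
    using mode  = adjoint_mode;
    using atype = mode::type;
    mode::global_tape = mode::tape_t::create();
    atype x = x0;
    mode::global_tape->register_variable(x);
    atype y = f(x);
    mode::global_tape->register_output_variable(y);
    dco::derivative(y) = 1.0;                // seed adjoint of the output
    mode::global_tape->interpret_adjoint();  // reverse propagation
    const double dfdx = dco::derivative(x);
    mode::tape_t::remove(mode::global_tape);
    return dfdx;
}
```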
For the nonlinear optimization routine e04stc, we compare run times and convergence behavior for sparse finite differences and sparse AD. We run an optimization test case with 1000 discretization points over the Silverstone racetrack (as discretized by TU Munich).
For this specific model, we observe that the full AD version (AD for both first and second derivatives) is the most reliable. In terms of run time, computing only the first derivatives with AD and approximating the Hessian with a BFGS approach can be competitive; however, whether this approach converges to a solution at all, and in how many iterations, is highly sensitive to even minor changes in the formulation of the nonlinear objective.
A full FD implementation is not competitive if the gradient of the nonlinear objective is dense, and it suffers from the same reliability issues as the AD + BFGS version.
| Configuration | Iterations | Runtime [s] | Gradients [s] | Jacobian [s] | Hessian [s] |
|---|---|---|---|---|---|
| e04stc FULL AD | 1001 | 83 | 0.76 | 2.29 | 54.52 |
| e04stc AD + BFGS | 2678 | 100 | 3.43 | 6.23 | – (BFGS) |
| e04stc FD + BFGS | 1486 | 584 | 536.40 | 5.53 | – (BFGS) |
AD brings many benefits: a lot of flexibility combined with very low maintenance cost, good efficiency, and high accuracy. With nAG, you get access to the most efficient tools as well as targeted support, and not only for AD. With nAG as a partner, you also have access to specialists in numerical and high-performance computing.
We hope you enjoyed the blog. Please don’t hesitate to get in touch if you want to know more about the nAG Library in general, the included optimizers, or nAG’s AD Solutions.
Blog authors: Markus Towara and Johannes Lotz