
AD Myths Debunked: It’s Hard to Maintain Code With AD Embedded


Published 09/06/2022 by Johannes Lotz


    This post is part of the AD Myths Debunked series.

    You’ve decided you want your C++ library or application to benefit from the advantages of AD. So, let’s further assume you have successfully integrated it. At nAG, we understand that things change and new code is always being developed. This presents you and your development team with another challenge: maintaining the AD in your source code. Is maintaining AD code an expensive process that requires the whole development team to learn and understand AD in detail? Or is that a myth? In this article, we look at the potential maintenance costs and pitfalls of each of the following approaches to AD.

    • Writing derivative code by hand, not using any AD tools.

    Writing derivative code by hand is not only difficult and error-prone; it also leaves you with two separate codebases: the primal code, i.e. the original model, and the derivative code, the adjoint. There are a few important rules to remember when working this way. First, whenever a change is made in the primal code, the derivative code needs to reflect that change. Every developer in your team needs to be aware of this and must be able to write the corresponding derivative code. If the primal and derivative code fall out of sync, the effects can be costly. Furthermore, a seemingly tiny change in the primal code may require global changes in the derivative code. And if, for example, you need Hessians as well as first-order sensitivities, the workload grows again. In summary, maintaining hand-written derivative code usually comes with huge costs and time delays.
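
    To make the two-codebase problem concrete, here is a minimal, hypothetical sketch (the model and all names are invented for illustration): a primal function and its hand-written adjoint, which must be kept in sync by hand.

        #include <cmath>

        // Primal code: y = sin(x1 * x2)
        double f(double x1, double x2) {
            return std::sin(x1 * x2);
        }

        // Hand-written adjoint: given the adjoint y_a of the output,
        // increment the adjoints x1_a and x2_a of the inputs.
        // Every change to f() must be mirrored here manually.
        void f_adjoint(double x1, double x2, double y_a,
                       double& x1_a, double& x2_a) {
            const double t = x1 * x2;        // recomputed intermediate
            x1_a += std::cos(t) * x2 * y_a;  // dy/dx1 = cos(x1*x2) * x2
            x2_a += std::cos(t) * x1 * y_a;  // dy/dx2 = cos(x1*x2) * x1
        }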

    • Using automatic source transformation. 

    Assuming you’ve managed to apply a source transformation tool to your code successfully, you end up with two separate codebases for the primal and the sensitivities, similar to writing the derivatives by hand. You are, though, in a much better position, since you can rerun the source-to-source compiler whenever you change your primal code; done smartly, this regeneration can be part of your build system. However, experience shows that today’s source-to-source compilers are not as robust as you’d like them to be. This usually means that the primal code needs some massaging to be digestible and that the generated code needs some manual postprocessing, and these steps are not easily automatable. Furthermore, depending on the user-friendliness of the tool, cryptic compilation errors can be expected. In summary, this is a much better approach than writing AD code by hand, but it still comes with many uncertainties and pitfalls.
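
    The exact output varies from tool to tool, but generated derivative code typically lives in a separate source file and uses mechanically generated names, which is part of what makes manual postprocessing awkward. Purely as an impression (no particular tool; all names invented), reverse-mode generated code for the f() above might look like this:

        // Hypothetical output of a reverse-mode source-to-source tool
        // for f() above; real tools differ in naming and structure.
        #include <cmath>

        void f_b(double x1, double x2, double* x1b, double* x2b, double yb) {
            double t = x1 * x2;
            double tb = std::cos(t) * yb;  // adjoint of intermediate t
            *x1b = *x1b + x2 * tb;
            *x2b = *x2b + x1 * tb;
        }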

    • Using operator overloading techniques.

    As described in previous Myth Busting posts, integrating AD with operator overloading techniques using a robust tool like dco/c++ is the easiest approach, and this holds true for maintaining the code as well. The main benefit is that you have a single codebase: every change in the primal code directly results in the corresponding change to the derivative code. When using templates in C++, all incarnations of your problem (primal, first-, and higher-order derivative code) coincide in the same source. Although compiler messages are likely to be more complex in the case of an error, it is standard C++, and developers will be familiar with the types of errors reported. In summary, the approach with the lowest maintenance costs is operator overloading.
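
    As a sketch of the single-codebase idea (using a hand-rolled forward-mode dual number purely for illustration, not dco/c++’s actual API), the primal is written once as a template and instantiated with either double or an AD type:

        #include <cmath>
        #include <iostream>

        // Minimal forward-mode "dual number" for illustration only;
        // a production tool such as dco/c++ offers far more.
        struct Dual {
            double v;  // value
            double d;  // derivative
        };
        Dual operator*(Dual a, Dual b) { return {a.v * b.v, a.d * b.v + a.v * b.d}; }
        Dual sin(Dual a) { return {std::sin(a.v), std::cos(a.v) * a.d}; }

        // Single codebase: the primal is written once as a template.
        template <typename T>
        T f(T x1, T x2) {
            using std::sin;
            return sin(x1 * x2);
        }

        int main() {
            double y  = f(2.0, 3.0);                // primal run
            Dual   dy = f(Dual{2.0, 1.0},           // seed dx1 = 1
                          Dual{3.0, 0.0});          // dx2 = 0
            std::cout << y << " " << dy.d << "\n";  // value and dy/dx1
        }

    A change to f() here changes the primal and all derivative instantiations at once; there is no second codebase to keep in sync.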

    Maintaining hand-written or source-to-source compiled derivative code is by no means impossible. However, if you rely on the additional performance you might achieve through these approaches, you should try to make the generation and build process as robust as possible. Targeted testing and smart coupling with overloading tools can help with this, but considerably more effort will be involved.
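
    One common form of targeted testing (a sketch, reusing the hypothetical f() and f_adjoint() from above) is to check the maintained adjoint against a central finite-difference approximation:

        #include <cmath>

        // Check f_adjoint() against central finite differences.
        // Assumes the hypothetical f() and f_adjoint() sketched earlier.
        bool adjoint_matches_fd(double x1, double x2,
                                double h = 1e-6, double tol = 1e-4) {
            double x1_a = 0.0, x2_a = 0.0;
            f_adjoint(x1, x2, /*y_a=*/1.0, x1_a, x2_a);

            double fd_x1 = (f(x1 + h, x2) - f(x1 - h, x2)) / (2.0 * h);
            double fd_x2 = (f(x1, x2 + h) - f(x1, x2 - h)) / (2.0 * h);

            return std::fabs(x1_a - fd_x1) <= tol && std::fabs(x2_a - fd_x2) <= tol;
        }

    A handful of such checks wired into continuous integration catches a primal/derivative mismatch as soon as it is introduced.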

    nAG’s AD toolset has been developed and enhanced over the last 12 years, and it builds upon a further 10 years of AD R&D experience. We know that details matter. Myths are narratives that might sound like truths, but by talking through them in some detail and sharing our experiences, we hope to help businesses navigate these issues. Results matter; myths should not.
