This is where you will find the full catalogue of NAG's published blog posts, ranging from technical information to thought leadership.
Designed from the outset for modern hardware, the Zetta Toolkit achieves a
10x performance improvement over reference 2D PDE calculations on real-world finance problems.
Tags: PDE, Quant Finance, Zetta Toolkit
With Monte Carlo methods, exploiting the full parallelism of modern hardware is crucial to achieving the best performance and accuracy.
Tags: Derivatives, Monte Carlo, Zetta Toolkit
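As a minimal sketch of the parallelism the post above refers to (illustrative only, not Zetta Toolkit code), here is a European call priced by Monte Carlo with all paths simulated as one vectorised NumPy batch, so the hardware's SIMD and parallel units stay busy; the function name and parameters are our own for illustration.

```python
import numpy as np

# Illustrative sketch: vectorised Monte Carlo pricing of a European call.
# All n_paths terminal prices are computed in one batched NumPy operation
# instead of a scalar loop, exposing the data parallelism to the hardware.
def mc_call_price(s0, k, r, sigma, t, n_paths, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)               # one normal draw per path
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)               # call payoff, elementwise
    return np.exp(-r * t) * payoff.mean()          # discounted sample mean

price = mc_call_price(100.0, 100.0, 0.05, 0.2, 1.0, 1_000_000)
```

With a million paths the estimate lands close to the Black-Scholes value of about 10.45 for these parameters; the vectorised form is typically orders of magnitude faster than an equivalent Python loop.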
Trading desks can hedge more effectively and gain a competitive edge in the market as our algorithmic differentiation technology, dco/c++, gives them rich, cheap and accurate intra-day risk. Faster, more accurate risk data at lower cost means more profit for traders and for the business.
Tags: Automatic Differentiation, Finance, Risk Calculation
An approximate correlation matrix is one that is not positive semidefinite, and so is not a true correlation matrix.
In this post we consider an application from finance.
Tags: Correlation Matrices, NAG Library
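To illustrate the problem above, here is a crude one-shot spectral repair of an indefinite "approximate correlation matrix" (clip negative eigenvalues, restore the unit diagonal). This is only a sketch with names of our own choosing; the NAG Library's nearest-correlation-matrix routines solve the underlying optimisation problem properly.

```python
import numpy as np

# Crude illustrative repair of an indefinite approximate correlation matrix:
# project the spectrum onto the PSD cone, then rescale to a unit diagonal.
# (Not the NAG Library algorithm; a one-shot projection for illustration.)
def repair_correlation(a, eps=0.0):
    w, v = np.linalg.eigh(a)              # a is assumed symmetric
    w = np.maximum(w, eps)                # clip negative eigenvalues
    b = (v * w) @ v.T                     # rebuild: V diag(w) V^T
    d = np.sqrt(np.diag(b))
    return b / np.outer(d, d)             # restore unit diagonal

# Pairwise correlations that are mutually inconsistent: this matrix has a
# negative eigenvalue, so it is not a true correlation matrix.
a = np.array([[1.0, 0.9, 0.2],
              [0.9, 1.0, 0.9],
              [0.2, 0.9, 1.0]])
c = repair_correlation(a)                  # positive semidefinite, unit diagonal
```

The rescaling step preserves positive semidefiniteness because it is a congruence transform; the result is a valid, though not nearest, correlation matrix.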
Second-order cone programming (SOCP) is a class of convex optimization that extends linear programming (LP) with second-order (Lorentz, or "ice cream") cones.
NAG has developed a CVA demonstration code to show how the NAG Library and the Algorithmic Differentiation (AD) tool dco/c++ can be combined with Origami to solve large-scale CVA computations.
Tags: Cloud, CVA
An important problem in finance is computing implied volatility; typically, volatilities must be computed for large vectors of input data.
Tags: Chebyshev Interpolation
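The idea behind Chebyshev acceleration can be sketched as follows (illustrative only, not the blog's actual code): fit a low-degree Chebyshev proxy to an expensive smooth function once, then evaluate the cheap polynomial on a large vector of inputs. Here the smooth function is a Black-Scholes call price as a function of volatility, with S=K=100, r=0, T=1 as assumed example parameters.

```python
import math
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# "Expensive" smooth function: Black-Scholes call price vs volatility.
# For S = K = 100, r = 0, T = 1 the formulas simplify to d1 = sigma/2, d2 = -sigma/2.
def bs_call(sigma):
    sigma = np.asarray(sigma, dtype=float)
    cdf = lambda z: 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    return 100.0 * (cdf(0.5 * sigma) - cdf(-0.5 * sigma))

# Fit a degree-12 Chebyshev proxy once over a volatility domain of interest...
proxy = Chebyshev.interpolate(bs_call, deg=12, domain=[0.05, 1.0])

# ...then evaluate it cheaply on a large vector of volatilities.
sigmas = np.linspace(0.1, 0.9, 10_001)
err = float(np.max(np.abs(proxy(sigmas) - bs_call(sigmas))))
```

Because the function is analytic, the Chebyshev coefficients decay rapidly and a low degree already reproduces the function to high accuracy; the same pattern applies to inverting for implied volatility over a grid.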
Calculating XVA in a timely manner poses a major performance challenge
for financial institutions.
Tags: Adjoints, Automatic Differentiation, CVA
GS2 is an open-source gyrokinetic simulation code used to study turbulence in plasma; one application is fusion experiments. It is a gyrokinetic flux-tube initial-value and eigenvalue solver, written in Fortran and parallelised with MPI.
This poster describes work performed on OpenFOAM, focusing on
performance as well as on running OpenFOAM in the Cloud.
Tags: HPC, OpenFOAM
Adjoints are sophisticated numerical techniques for computing large
numbers of gradients quickly. To compute an adjoint, your computer
program must, in effect, be run backwards.
Tags: Adjoints, Automatic Differentiation, GPU
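The "program run backwards" idea above can be made concrete with a hand-written adjoint of a two-statement program (an illustrative sketch with our own names, not the post's code): the forward sweep computes the value, then the adjoint sweep revisits the statements in reverse order, propagating derivatives back to the inputs.

```python
import math

# Hand-written adjoint sketch: forward program  u = x*y;  z = sin(u).
# The reverse sweep runs the statements backwards, carrying the seed
# z_bar = dz/dz = 1 back to x_bar = dz/dx and y_bar = dz/dy.
def f_and_adjoint(x, y):
    # forward (primal) sweep, keeping the intermediates the reverse sweep needs
    u = x * y
    z = math.sin(u)
    # reverse (adjoint) sweep, statements in reverse order
    z_bar = 1.0
    u_bar = z_bar * math.cos(u)    # from z = sin(u)
    x_bar = u_bar * y              # from u = x * y
    y_bar = u_bar * x
    return z, x_bar, y_bar

z, dx, dy = f_and_adjoint(1.2, 0.7)
```

One forward-plus-reverse pass yields every partial derivative at once, which is why adjoints scale so well when the gradient has many components.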
The C++ language is so complex that no source-transformation AD compiler can handle it fully.
To get an adjoint, we must either write it by hand or use an operator-overloading tool.
Tags: Adjoints, Automatic Differentiation, Tape Free
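To show what the operator-overloading route looks like, here is a minimal taped reverse-mode sketch in Python (the post's tape-free approach differs, and dco/c++ is far more sophisticated; all names here are our own). Overloaded operations record themselves on a tape, and the adjoint is obtained by replaying the tape backwards.

```python
import math

# Minimal operator-overloading reverse-mode AD (illustrative only).
class Var:
    def __init__(self, value, tape):
        self.value = value
        self.grad = 0.0
        self.tape = tape          # shared recording of all operations

    def __mul__(self, other):
        out = Var(self.value * other.value, self.tape)
        # local partials: d(ab)/da = b, d(ab)/db = a
        self.tape.append((out, [(self, other.value), (other, self.value)]))
        return out

def v_sin(a):
    out = Var(math.sin(a.value), a.tape)
    a.tape.append((out, [(a, math.cos(a.value))]))
    return out

def backward(out):
    out.grad = 1.0
    for node, parents in reversed(out.tape):   # replay the tape backwards
        for parent, partial in parents:
            parent.grad += node.grad * partial

tape = []
x, y = Var(1.2, tape), Var(0.7, tape)
z = v_sin(x * y)
backward(z)        # now x.grad = dz/dx and y.grad = dz/dy
```

The overloads mean the user writes ordinary-looking arithmetic while the tool captures the computation; this is the core trick that makes AD practical for a language as complex as C++.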