NAGnews 144

Posted on 1 Dec 2016

In this issue:

  • Mark 26 Algorithm Spotlight: NEW Interior Point Method for Nonlinear Optimization in the NAG Library
  • Investment Company Utilizes NAG Optimization Solvers to Calibrate Nonlinear Least-Squares Problem
  • NAG joins STAC® to assist with risk technology benchmarks
  • Learning Opportunity: 'Improving Application Performance on the Intel® Xeon Phi™ Processor' Webinar Series
  • Best of the Blog: Calling NAG Routines from Julia
  • Out & About with NAG

Mark 26 Algorithm Spotlight: NEW Interior Point Method for Nonlinear Optimization in the NAG Library


New to the NAG Library at Mark 26 is an Interior Point Method for Nonlinear Optimization (also called nonlinear programming, or NLP). NLP problems arise in a plethora of applications across fields such as finance, engineering and operational research, so it is important to have the right solver for your needs. At Mark 26 we have added to our offering of nonlinear optimization solvers: a solver based on an interior point method (IPM) for large-scale problems. The new solver does not supersede the existing ones; rather, it complements them, as the underlying algorithms are fundamentally different. Some key features, along with advice on which solver to choose, are the subject of a new mini-article. You can read it here.
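For readers new to the terminology, a general NLP of the kind such solvers target can be written (in a generic form, not tied to any particular NAG interface) as

    \min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad l_g \le g(x) \le u_g, \quad l_x \le x \le u_x

where the objective f and the constraint functions g may be nonlinear. An interior point method handles the inequalities by following a path through the interior of the feasible region rather than tracking its boundary, which typically keeps the number of iterations low even for large-scale problems.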

How can I start using Mark 26?

Many readers of NAGnews will be entitled to use Mark 26 as part of their supported NAG Software Agreement. If you currently use the NAG Library and would like us to see if you are eligible for an upgrade to the new Mark, email us and we'll do the checking. If you're interested in using the routines in the Library, do get in touch or visit our website for more information.


Investment Company Utilizes NAG Optimization Solvers to Calibrate Nonlinear Least-Squares Problem


Global investment company Exane specializes in three finance areas: Cash Equities, Derivatives and Asset Management. It was within the Equity Derivatives function that Exane benefitted from using NAG's superior optimization solvers to calibrate parametric arbitrage-free volatility surfaces effectively.

The Exane Quant Team for Equity Derivatives needed to solve, quickly, efficiently and on a continuous basis, a constrained nonlinear least-squares optimization problem with approximately 50 parameters, 100 linear constraints and 100 nonlinear constraints.
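As a rough sketch of the problem class (the exact model Exane calibrates is not described here), such a calibration can be posed as

    \min_{x \in \mathbb{R}^{n}} \; \tfrac{1}{2} \sum_{i=1}^{m} r_i(x)^2 \quad \text{subject to} \quad l_A \le A x \le u_A, \quad l_c \le c(x) \le u_c

with n ≈ 50 parameters, roughly 100 rows in the linear constraint matrix A and roughly 100 components in the nonlinear constraint function c, where the residuals r_i typically measure the misfit between the parametric surface and observed market data.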

They selected NAG Library optimization routines and conducted an extensive test phase, pitching them against other numerical libraries and several open-source routines. During testing, NAG experts helped the Exane team achieve a proof of concept, overcoming the initial complexity challenges. At the end of the evaluation NAG was chosen to supply its solvers for a host of reasons, including the extensive algorithmic coverage found in the NAG Library. The Library offers numerous algorithms for the same class of problems, which means the user can choose exactly the right solver for the problem at hand.


NAG joins STAC® to assist with risk technology benchmarks


NAG is delighted to join the STAC Benchmark Council™ and to play a leading role in the code inspection for STAC-A2™ benchmark implementations. STAC® (the Securities Technology Analysis Center) has turned to NAG for help with code reviews in order to leverage our expertise in HPC and computational finance.

STAC provides technology research and testing tools based on community-source standards. This accelerates technology selection at user firms while reducing the sales cycle for vendors. The standards are developed by the STAC Benchmark Council, a group of major financial firms and other "algorithmic enterprises" as well as leading technology vendors.

STAC-A2 is the technology benchmark standard based on financial market risk analysis, designed by quants and technologists from some of the world's largest banks. To learn more, visit www.STACresearch.com/a2.


Learning Opportunity: 'Improving Application Performance on the Intel® Xeon Phi™ Processor' Webinar Series


5-13 December 2016: 2-hour webinar sessions held over seven days

NAG and Intel are partnering to present a highly valuable set of learning opportunities designed to teach the fundamental skills needed to achieve optimum application performance on the Intel® Xeon Phi™ Processor architecture. Teaching is delivered via 2-hour webinar sessions held over seven days, combining targeted instruction in theory with practical work.

Delegates will:

  • Increase their knowledge of the Intel® Xeon Phi™ Processor architecture and which applications can best leverage it
  • Learn how to use OpenMP to exploit both multicore parallelism and vectorization (see the short sketch after this list)
  • Learn how to further optimize already-parallel applications to utilize the Intel® Xeon Phi™ Processor even more effectively and maximize performance
  • Take an initial application and optimize it for excellent performance on the Intel® Xeon Phi™ Processor
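To give a flavour of the second bullet above (a minimal illustrative sketch, not material taken from the course itself), a single OpenMP directive in C can both share a loop across cores and ask the compiler to vectorize each thread's chunk:

    #include <stdio.h>

    #define N 1000000

    /* Build with OpenMP enabled, e.g. gcc -fopenmp or icc -qopenmp */
    int main(void)
    {
        static double a[N], b[N], c[N];

        /* Set up some input data */
        for (int i = 0; i < N; i++) {
            a[i] = (double) i;
            b[i] = 1.0 / (i + 1.0);
        }

        /* "parallel for" distributes iterations across threads (multicore);
           "simd" requests vectorization of each thread's portion. */
        #pragma omp parallel for simd
        for (int i = 0; i < N; i++)
            c[i] = a[i] * b[i] + a[i];

        printf("c[N-1] = %f\n", c[N - 1]);
        return 0;
    }

On the Intel® Xeon Phi™ Processor the vectorization half of this pairing matters as much as the threading, since much of the chip's performance comes from its wide SIMD units.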

Click here for more information and to register your interest.


Best of the Blog


Calling NAG Routines from Julia

Julia Computing was founded in 2015 by the co-authors of the Julia programming language to help private businesses, government agencies and others develop and implement Julia-based solutions to their big data and analytics problems.

Julia is an open-source language for high-performance technical computing, created by some of the best minds in mathematical and statistical computing.

Reid Atcheson, Accelerator Software Engineer, NAG, and Andy Greenwell, Senior Application Engineer, Julia Computing, have teamed up to ensure that NAG Library routines can be called from the Julia language. Read their piece here.


Out & About with NAG


Come and see us at various conferences and events over the next few months.

Computing Insight UK 2016
14-15 December 2016, Manchester

NAG and Intel Webinar Series - Improving Application Performance on the Intel Xeon Phi Processor
5-13 December 2016

Rice University Oil & Gas HPC Conference
15-16 March 2017