Error estimates of floating-point numbers and Jacobian matrix computation in Clad

Authors: Vassilev, Vassil; Penev, Alexander; Shakhov, Roman
Year of publication: 2020
DOI: 10.5281/zenodo.4134097
Description: Automatic differentiation (AD) is a computer-algebra technique for evaluating the derivative of a function defined by source code; it is an alternative to symbolic and numerical differentiation. AD exploits the fact that any algorithm can be decomposed into differentiable elementary operations and can therefore be differentiated using the chain rule. Clad is a Clang plugin based on source-code transformation. Given the C++ source code of a mathematical function, it automatically generates C++ code that computes the function's derivatives. It supports both forward-mode and reverse-mode AD. Our talk covers two features of Clad. The first is Jacobian matrix computation using the forward and reverse modes; Jacobian matrices have applications in fields such as machine learning and computational physics. The second is error estimation for floating-point numbers, which makes it possible to monitor the estimated relative error during a computation that involves a function differentiated by Clad. This is achieved through source-code transformation: Clad can produce code for error estimation alongside the AD code.
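The chain-rule decomposition and column-by-column Jacobian computation described above can be illustrated with a small hand-written forward-mode sketch. Note that this dual-number example is an independent illustration of the AD principle, not the code Clad actually generates (Clad works by transforming the source of the original function):

```cpp
#include <array>
#include <cmath>

// A dual number carries a value and its derivative with respect to
// one chosen input; elementary operations propagate both.
struct Dual { double val, dot; };

Dual operator*(Dual a, Dual b) {            // product rule
    return {a.val * b.val, a.dot * b.val + a.val * b.dot};
}
Dual operator+(Dual a, Dual b) {            // sum rule
    return {a.val + b.val, a.dot + b.dot};
}
Dual sin_d(Dual a) {                        // chain rule through sin
    return {std::sin(a.val), std::cos(a.val) * a.dot};
}

// Example function f : R^2 -> R^2, f(x, y) = (x*y, sin(x) + y).
std::array<Dual, 2> f(Dual x, Dual y) { return {x * y, sin_d(x) + y}; }

// One forward-mode pass per input produces one Jacobian column.
std::array<std::array<double, 2>, 2> jacobian(double x, double y) {
    std::array<std::array<double, 2>, 2> J{};
    for (int col = 0; col < 2; ++col) {
        Dual dx{x, col == 0 ? 1.0 : 0.0};   // seed derivative of x
        Dual dy{y, col == 1 ? 1.0 : 0.0};   // seed derivative of y
        auto out = f(dx, dy);
        J[0][col] = out[0].dot;
        J[1][col] = out[1].dot;
    }
    return J;                               // J = [[y, x], [cos(x), 1]]
}
```

Reverse mode, which Clad also supports, instead obtains one Jacobian *row* per pass, which is more efficient when a function has many inputs and few outputs.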
Database: OpenAIRE