Even as available computational power increases, Nathan Myhrvold's Laws of Software continue to apply: New opportunities enable new applications with increased needs, which subsequently become constrained by the hardware that used to be "modern" at adoption time. This trend hasn't escaped finance: Whether backtesting trading strategies on large data sets, pricing assets with increasingly complex models, or applying sophisticated risk management techniques, the need for high-performance computing hasn't diminished.
The R ecosystem offers a solution -- the Rcpp package -- which enables access to the C++ programming language and provides integration with R. C++ itself opens access to high-quality optimizing compilers and a wide ecosystem of high-performance libraries.
At the same time, simply rewriting the "hot spots" in C++ is not going to automagically yield the highest performance -- i.e., the lowest execution time. The reasons are twofold: Algorithms can differ in their theoretical performance -- and their implementations can differ even more so in practice.
Modern CPU architecture has continued to yield increases in performance through advances in microarchitecture, such as pipelining, multiple-issue (superscalar) and out-of-order execution, branch prediction, SIMD-within-a-register (SWAR) vector units, and chip multi-processor (CMP, also known as multi-core) architecture. All of these developments have provided us with the opportunities associated with a higher peak performance -- while at the same time resulting in optimization challenges when actually trying to reach that peak.
In this talk we'll consider the properties of code that can make it either friendly -- or hostile -- to a modern microprocessor. We will offer advice on achieving the highest performance: from ways of analyzing it beyond algorithmic complexity, through recognizing the aspects we can entrust to the compiler, to practical optimization of existing code.