Title: Perturbation Methods in Credit Derivatives
Author: Colin Turfus
Publisher: John Wiley & Sons Limited
Genre: Securities, investments
ISBN: 9781119609599
Another argument that is not infrequently heard against the introduction of new analytic results is that it is just too much trouble to integrate them into pricing libraries which are already quite mature. An accompanying argument may be that, since the libraries of financial institutions are already written in highly optimised C++ code, any gains that might be made are only likely to be marginal.
There is also a suspicion concerning the utility of perturbation methods insofar as, while the most interesting and challenging problems in derivatives pricing occur where stochastic effects have a significant impact on the pricing, most perturbation approaches rely in some way on the smallness of a volatility parameter, usually a term variance.1 But, for this parameter to have a significant impact on pricing, it cannot be too “small”, so we are led to expect that a large number of terms will be needed in any approximating series to secure adequate convergence in many cases of importance.
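As a minimal numerical illustration of this concern, consider the undiscounted at‐the‐money‐forward Black–Scholes call price, whose exact value per unit forward is 2Φ(σ√T/2) − 1 and which admits an expansion in powers of the term variance s = σ²T. The short Python sketch below (the helper names are purely illustrative) compares successive truncations of that expansion with the exact value; as the output suggests, convergence is rapid for moderate term variances, while genuinely large variances do indeed call for further terms.

```python
# Illustrative sketch (helper names are hypothetical): truncations of the expansion of
# the undiscounted at-the-money-forward Black-Scholes call price in powers of the term
# variance s = sigma^2 * T, compared with the exact value 2*Phi(sigma*sqrt(T)/2) - 1.
import math

def atm_call_exact(sigma, T):
    """Exact undiscounted ATM-forward call price per unit forward."""
    x = 0.5 * sigma * math.sqrt(T)
    return math.erf(x / math.sqrt(2.0))  # 2*Phi(x) - 1 = erf(x/sqrt(2))

def atm_call_series(sigma, T, n_corrections):
    """Series truncated after n_corrections correction terms:
    price ~ sqrt(s/(2*pi)) * (1 - s/24 + s^2/640 - ...), with s = sigma^2 * T."""
    s = sigma * sigma * T
    corrections = [1.0, -s / 24.0, s * s / 640.0]
    return math.sqrt(s / (2.0 * math.pi)) * sum(corrections[:n_corrections + 1])

for sigma, T in [(0.2, 1.0), (0.4, 5.0), (0.8, 10.0)]:
    exact = atm_call_exact(sigma, T)
    for n in (0, 1, 2):
        rel_err = abs(atm_call_series(sigma, T, n) - exact) / exact
        print(f"sigma={sigma}, T={T}, corrections={n}: relative error {rel_err:.2e}")
```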
A more recent argument which the author has encountered in a number of conversations with fellow researchers is that, insofar as more efficient ways are sought to carry out repetitive execution of pricing algorithms, the strategy adopted in the future will increasingly be to replace the time-consuming solution of SDEs and PDEs not with analytic formulae but with machine-learned algorithms which can execute orders of magnitude faster (see for example Horvath et al. [2019]). The cost of adopting this approach is a large amount of up-front computational effort in the training phase, where the full numerical algorithm is run multiple times over many market data configurations and product specifications to allow the machine-learning algorithm to learn what the “right answer” looks like so that it might replicate it. There will also be a concomitant loss of accuracy. But if, as is often the case, the requirement is to calculate prices for a given portfolio, or the CVA associated with a given “netting set” of trades with a given counterparty, over multiple scenarios for risk management or other regulatory purposes, the up-front cost can be amortised against a huge amount of subsequent usage of the machine-learned algorithm. And since machine-learning approaches are a fairly blunt instrument, there is no need to customise the approach to the particular problem addressed, as would be necessary if perturbation methods were used instead as a speed-up strategy in which some accuracy is traded for speed.
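To make the amortisation argument concrete, the following Python sketch is deliberately schematic and is not a reproduction of the approach of Horvath et al. [2019]: a regressor is trained once, off-line, on prices generated by a “slow” reference pricer over many market data and product configurations, after which the surrogate is evaluated cheaply across risk scenarios. The Black–Scholes formula stands in for the expensive pricer purely to keep the example self-contained, and the function and variable names are illustrative only.

```python
# Schematic sketch of the amortisation argument: train a surrogate once, off-line, on
# many (market data, product spec) -> price examples from a "slow" pricer, then reuse
# the fast surrogate across scenarios. Accuracy is limited (no feature scaling or
# architecture tuning is attempted here), echoing the loss of accuracy noted above.
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

def slow_pricer(S, K, T, sigma, r=0.02):
    """Stand-in for an expensive PDE/Monte Carlo pricer (here: a Black-Scholes call)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(0)

# Up-front training phase: many market data / product configurations.
n_train = 10_000
X_train = np.column_stack([
    rng.uniform(50, 150, n_train),    # spot S
    rng.uniform(50, 150, n_train),    # strike K
    rng.uniform(0.1, 5.0, n_train),   # maturity T
    rng.uniform(0.05, 0.8, n_train),  # volatility sigma
])
y_train = slow_pricer(*X_train.T)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
surrogate.fit(X_train, y_train)

# Subsequent usage: the surrogate is evaluated cheaply across many risk scenarios.
n_scen = 1_000
X_scenarios = np.column_stack([
    rng.uniform(60, 140, n_scen),
    rng.uniform(60, 140, n_scen),
    rng.uniform(0.2, 4.0, n_scen),
    rng.uniform(0.1, 0.7, n_scen),
])
approx = surrogate.predict(X_scenarios)
exact = slow_pricer(*X_scenarios.T)
print("mean absolute pricing error of the surrogate:", np.abs(approx - exact).mean())
```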
Finally, there is not uncommonly a perception that, unlike with earlier analytic options pricing formulae which were deduced using suitable application of the Girsanov theorem, with which financial engineers tend to be familiar, perturbation‐based methods are by comparison something of a dark art. Many of the results are derived using Malliavin calculus or Lie theory, with which relatively few financial engineers are familiar, and are often presented in published research papers in notation which is relatively opaque and quite closely tied to the method of derivation. Other derivations are performed using methodologies and notations borrowed from quantum mechanics or other areas of theoretical physics, to which a contemporary financial engineer is unlikely to have been exposed. There is, furthermore, not a clearly defined body of theory on which practitioners of perturbation analysis can rely; books which offer a unified approach to perturbation methods applicable to a range of problems in derivatives pricing, such as Fouque et al. [2000], Fouque et al. [2011] and Antonov et al. [2019], are few and far between.
1.2 IN DEFENCE OF PERTURBATION METHODS
Although the arguments presented above challenging the merit of attempts to extend the range of analytic formulae available for derivatives pricing by means of perturbation expansion techniques may appear compelling, we suggest that, when they are unpicked a little, their apparent validity starts to unravel. More specifically, they are seen to be premised on a view of what is possible with perturbation methods which is challengeable in the light of recent theoretical developments, in particular those set out in this book. They depend, furthermore, on a view of the practical purposes option pricing methods need to address in the industry, and of the constraints they must consequently satisfy, which is likewise challengeable and not altogether up to date.
While the development of derivatives pricing methods was based on the concept of risk‐neutral pricing to guarantee the absence of arbitrage opportunities through which market makers could systematically lose money, pricing models are in practice increasingly used for risk management purposes rather than for the calculation of prices for market‐making purposes. So, even if it is the case that an approximation method might technically give rise to arbitrage opportunities in a small number of extreme cases, provided no trading takes place at these prices this is not necessarily a problem. Indeed, in a risk management context we are often more interested in real‐world probabilities than in their risk‐neutral counterparts, on account of the fact that it is extreme real‐world events and their frequency of occurrence in practice which can lead to the destabilisation or demise of a financial institution. For example, a report by Fintegral and IACPM [2015] surveying 37 global and regional financial institutions concludes that calculation of counterparty credit risk (CCR) tends to operate under “real‐world” assumptions, using historical volatilities to calibrate the Monte Carlo simulation.

Also, since risk management is generally about portfolio aggregates rather than individual trades, and typically involves computing prices under hypothetical future scenarios, it is not so important to be able to estimate the size of errors associated with the pricing of individual trades as the expected aggregate error, which can often be estimated to a sufficient degree of accuracy by fairly heuristic methods. This is recognised in the Basel IV (FRTB) regulatory framework which has been proposed to replace VaR: internal models used for risk management purposes do not have to be validated in terms of their ability to price individual trades accurately; rather, the aggregate risk numbers produced need to be sufficiently close to those obtained using end‐of‐day pricing models in a back‐testing exercise.

Another factor is that, whereas the main criterion pricing models have to satisfy is accurate calculation of the first moment of a distribution, risk models are much more focussed on the distribution of prices, typically in the extreme quantiles where the greatest risk is usually deemed to lie, so their ability to give an accurate assessment of second and higher moments tends to be at least as important, if not more so. While the prices of a large number of traded financial securities can be considered known in the market to a reasonable degree of accuracy (the bid–offer spread), this is not the case if one asks about the distribution of those prices in the future, market information about which is typically much scarcer and the uncertainty about which is correspondingly much greater.
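The relative unimportance of individual pricing errors can be illustrated schematically. Under the strong, purely illustrative assumption that per‐trade approximation errors are independent with zero mean, the relative error of the aggregate portfolio value is smaller than the relative per‐trade error by a factor of order 1/√N, while the risk numbers themselves are read off the scenario distribution rather than off any individual price; the short Python sketch below, with hypothetical numbers throughout, makes both points.

```python
# Minimal sketch, assuming independent zero-mean per-trade approximation errors (a strong,
# purely illustrative assumption): errors largely cancel on aggregation, and the risk
# measure is a quantile of the scenario distribution, not any single price.
import numpy as np

rng = np.random.default_rng(1)
n_trades, n_scenarios = 500, 10_000

# Hypothetical per-trade prices under each scenario, plus independent approximation
# errors of roughly 5% of a typical trade price.
true_prices = rng.uniform(0.5, 1.5, size=(n_scenarios, n_trades))
errors = rng.normal(0.0, 0.05, size=(n_scenarios, n_trades))

portfolio_true = true_prices.sum(axis=1)
portfolio_approx = (true_prices + errors).sum(axis=1)

rel_trade_error = np.abs(errors).mean() / true_prices.mean()
rel_aggregate_error = np.abs(portfolio_approx - portfolio_true).mean() / portfolio_true.mean()
print(f"relative per-trade error:  {rel_trade_error:.3%}")
print(f"relative aggregate error:  {rel_aggregate_error:.3%}")  # ~ per-trade error / sqrt(n_trades)

# Tail risk numbers are computed from the scenario distribution of the portfolio value:
q_true = np.quantile(portfolio_true, 0.01)    # 1% quantile of portfolio value
q_approx = np.quantile(portfolio_approx, 0.01)
print(f"1% quantile of portfolio value, exact vs approximate: {q_true:.2f} vs {q_approx:.2f}")
```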
One of the issues with the way in which perturbation expansions were derived and presented historically is that they were deduced as particular solutions associated with a specific payoff structure: often this was a vanilla European‐style payoff. Such results could not therefore be used for related problems like, say, forward‐starting options. Likewise, restrictive assumptions were often made about market data, such as volatilities being constant, without clarification of how results could be generalised. We shall refer to such approaches to perturbation analysis as first generation. The last five years or so have seen the focus shift more and more to deriving instead pricing kernels; in other words, general solutions to the pricing equation which can be used relatively straightforwardly to derive solutions for multiple payoff configurations. This approach to perturbation analysis we shall refer to as second generation. A good introduction to this subject is provided by Pagliarani and Pascucci [2012], where pricing kernels are referred to as transition densities.

The approach we shall take below is robustly second generation, seeking from the outset a pricing kernel associated with a given model before applying it to calculate derivative prices.
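Schematically, and in notation intended here only to be indicative, if $G(t, x; T, y)$ denotes the pricing kernel of a given model, the time-$t$ price of a European‐style claim paying $P(x_T)$ at maturity $T$ is obtained by integrating the payoff against the kernel,
\[
V(t, x) \;=\; \int_{-\infty}^{\infty} G(t, x; T, y)\, P(y)\, \mathrm{d}y ,
\]
so that a perturbation expansion of $G$ itself furnishes, through the same integration, approximate prices for arbitrary payoffs $P$ rather than for one specific contract.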