As simple as possible;
but no simpler

P&L Attribution – Judging the weathermen

“The storm starts, when the drops start dropping
When the drops stop dropping then the storm starts stopping.”
― Dr. Seuss, Oh Say Can You Say?

“Pray don’t talk to me about the weather, Mr. Worthing. Whenever people talk to me about the weather, I always feel quite certain that they mean something else. And that makes me so nervous.”
– Oscar Wilde, The Importance of Being Earnest, Act 1

We will talk about weathermen and the predictions they make. And we will mean something entirely different. By weathermen, we will mean the models in a bank and the predictions they make or the hypotheses they form. And for the realism of Dr. Seuss’ drops dropping, we will substitute the realism of P&L. More specifically, we will talk about P&L attribution (PLA) and the role it plays in helping us use the realism of P&L to test the hypotheses posed by our various risk models – which actually is its primary purpose in life.

We will focus specifically on 3 hypotheses formulated by a bank’s risk models: its sensitivities model, its VAR model and its CVA/EPE model respectively. Namely, for a given bank:

I.         Changes in the mark-to-market value of its positions are materially determined by changes to a specified set of variables and parameters (i.e. risk factors), and the expected change is quantified by the sensitivities to these risk factors obtained from its models;

II.         There is a specified % probability that its positions will lose more in value than its VAR number over any given interval equal to the VAR holding period;

III.         The cost of insuring its aggregate positions against the risk of counterparty Z defaulting is not expected to exceed the cumulative sum of the CVA fees charged to its trading desks for originating exposure to counterparty Z.

We do have a light-hearted Prezi that talks about much the same thing, here:

Carrying on…

PLA provides a critical product control function of decomposing and analyzing actual booked Profit & Loss (P&L) and its variance, especially in the context of testing and falsifying our three hypotheses above.

We argue that laying out the hypotheses implicit in risk models in falsifiable form, and then thinking of PLA as a function whose primary motivation is to construct, from actual P&L, the observation data set with which to test those risk hypotheses, provides the right intuition about how the PLA function needs to be organized to be effective and what its relationship to the risk functions should be.

Borrowing from a 1996 paper by Emanuel Derman, models in finance may be loosely organized into 3 classes:

1. A fundamental model: a system of postulates and data, together with a means of drawing dynamical inferences from them;

2. A phenomenological model: a description or analogy to help visualize something that cannot be directly observed; and

3. A statistical model: a regression or best-fit between different data sets.

The first two model classes embody some form of cause and effect where a causal relationship is established between value and some variables and parameters. The third class describes rather than explains a correlated value relationship and makes probabilistic statements from this.

In both cases a falsifiable hypothesis can be formulated as either a statement quantifying the causal relationship by direction, size and completeness; or as a statement quantifying the frequency of a precisely described probabilistic occurrence. Our risk sensitivities and CVA hypotheses are of the first form, while the VAR hypothesis is of the second. As predictions of a value function, all three are refutable after the fact by P&L observations – we just need to make sure that we slice the P&L just right so that the observation matches the prediction.

And that is where PLA comes in…in the slicing.

Slicing P&L

P&L Attribution


The slices of attribution can be broadly categorized to include the following:

  • Market-driven PLA  (Sensitivities based PLA, Revaluation PLA)
  • Non-market event-driven PLA (Lifecycle Events PLA, New Trade PLA, CVA, Current Day Cash, Fees and Commissions)
  • Cost of Carry PLA (Carry not including Theta)
  • Prudence PLA (IPV Adjustments, Reserves PLA)
  • Accounting PLA (Transfer Pricing, Joint Venture charges etc)

With P&L carved up and attributed, the various slices may then be recombined to form the observation sets with which to start testing our three hypotheses.
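
To make the recombination concrete, here is a minimal sketch (the slice names, amounts and helper below are our own illustration, not a standard schema) of one day’s attribution for a position and how slices recombine into an observation set:

```python
# One day's P&L attribution for a single position, keyed by slice.
# Slice names follow the categories above; amounts are purely illustrative.
daily_pla = {
    "sensitivities_pla":      -120_000.0,  # market-driven, sensitivities based
    "revaluation_pla":          -8_000.0,  # market-driven, revaluation residual
    "lifecycle_events_pla":     15_000.0,  # non-market event-driven
    "new_trade_pla":            40_000.0,
    "cva_pla":                  -5_000.0,
    "cash_fees_commissions":     2_500.0,
    "carry_pla":                 3_000.0,  # cost of carry, not including theta
    "prudence_pla":             -1_000.0,  # IPV adjustments, reserves
    "accounting_pla":              500.0,  # transfer pricing, JV charges etc.
}

def observation_set(pla: dict, slices: list) -> float:
    """Recombine a chosen subset of slices into a single observation."""
    return sum(pla[s] for s in slices)

# e.g. a market-movement-only observation, of the kind needed for hypothesis I:
market_only = observation_set(daily_pla, ["sensitivities_pla", "revaluation_pla"])
print(market_only)  # -128000.0
```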

Unexplained P&L

For a given position i made up of n trades that are risk homogeneous, with P&L attribution sub-items calculated consistently across them:

Buy & Hold (B&H) P&L is the P&L due to holding a bought start-of-day (SOD) position. It is the subset of actual P&L comprising only the components due to (fair value) market movements of the risk factors, plus certain trade events affecting the SOD position. Unexplained P&L is then the residual left once the P&L explained by the model’s risk factor sensitivities has been subtracted from the Buy & Hold P&L.

Bank’s risk Hypothesis I

I.         Changes in the mark-to-market value of its positions are materially determined by changes to a specified set of variables and parameters (i.e. risk factors), and the expected change is quantified by the sensitivities to these risk factors obtained from its models;

If Unexplained P&L is material then this falsifies risk hypothesis I and, all else being equal (and consistent), suggests that either the sensitivities calculated and/or the models used are incorrect, or that there are material risk factors or risk factor sensitivities missing from the risk calculation.
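
As a rough sketch of how the test might look for a single day (the sensitivities, risk factor moves, threshold and P&L figures below are made up, and only first-order terms are used):

```python
# Sensitivities from the risk model, and the observed moves of the same risk
# factors, each expressed in the units the sensitivities were computed against.
sensitivities     = {"usd_ir_delta": 25_000.0, "eurusd_fx_delta": -80_000.0, "vega": 12_000.0}
risk_factor_moves = {"usd_ir_delta": -0.5, "eurusd_fx_delta": 1.2, "vega": 0.3}

# P&L the risk model predicts from its sensitivities alone (first order only).
explained_pnl = sum(sensitivities[rf] * risk_factor_moves[rf] for rf in sensitivities)

buy_and_hold_pnl = -105_000.0                      # from the attribution slices
unexplained_pnl  = buy_and_hold_pnl - explained_pnl

materiality_threshold = 50_000.0                   # illustrative, set by policy
print(f"Unexplained P&L: {unexplained_pnl:,.0f}")
if abs(unexplained_pnl) > materiality_threshold:
    print("Hypothesis I is challenged: sensitivities, models or risk factor coverage need review")
```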

Buy & Hold P&L and VAR Back-testing

To prep Buy & Hold P&L as an observation set for risk hypothesis II, it needs to be normalized to a position consistent with the start of the holding period for the VAR number it is going to be compared to, i.e. the VAR baseline date. So to produce a normalized or clean Buy & Hold P&L, the cumulative P&L attributed to Trade Lifecycle events from the VAR baseline date + 1 onwards must be backed out of the Buy & Hold P&L, so that what remains approximates only the portion of actual P&L due to the VAR baseline date position.
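
In sketch form (the dates, the simple container used for the daily lifecycle attribution, and all amounts are hypothetical):

```python
from datetime import date

# Daily P&L attributed to Trade Lifecycle events, by business day (illustrative).
lifecycle_pla_by_day = {
    date(2015, 6, 2): 10_000.0,
    date(2015, 6, 3): -4_000.0,
    date(2015, 6, 4):  6_500.0,
}

var_baseline_date = date(2015, 6, 1)
buy_and_hold_pnl  = -105_000.0   # raw Buy & Hold P&L for the day being cleaned

# Back out lifecycle-event P&L accumulated from baseline date + 1 onwards, so the
# result approximates only the P&L of the position held on the VAR baseline date.
cumulative_lifecycle = sum(pnl for d, pnl in lifecycle_pla_by_day.items() if d > var_baseline_date)
clean_buy_and_hold_pnl = buy_and_hold_pnl - cumulative_lifecycle
print(clean_buy_and_hold_pnl)    # -117500.0
```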

Bank’s risk Hypothesis II

II.         There is a specified % probability that its positions will lose more in value than its VAR number over any given interval equal to the VAR holding period;

Daily clean Buy & Hold P&L is tracked over a period, e.g. a year, and compared to the baseline VAR number. Exceptions occur on days when the clean Buy & Hold P&L shows a loss greater than the baseline VAR. The VAR back-test compares the frequency of exceptions over the period to the % probability predicted by VAR. If the frequency is materially more or less than predicted then this falsifies the hypothesis and constitutes a VAR back-test breach.
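
A minimal sketch of the exception count, assuming a 99% one-day VAR so that roughly 1% of days are expected to breach (the P&L series and VAR number are invented):

```python
# Daily clean Buy & Hold P&L over the test period (losses negative); illustrative
# values only – in practice this would cover roughly 250 business days.
clean_pnl_series = [-120_000.0, 35_000.0, -310_000.0, 80_000.0, -15_000.0]

baseline_var  = 250_000.0   # 99% one-day VAR, quoted as a positive loss amount
expected_rate = 0.01        # implied by the 99% confidence level

exceptions = sum(1 for pnl in clean_pnl_series if pnl < -baseline_var)
observed_rate = exceptions / len(clean_pnl_series)

print(f"{exceptions} exception(s): observed rate {observed_rate:.1%} vs expected {expected_rate:.0%}")
# Whether the difference is material would be judged by a statistical test
# (e.g. Kupiec's proportion-of-failures test) rather than by eye.
```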

Cumulative CVA PLA

Bank’s risk Hypothesis III

III.         The cost of insuring its aggregate positions against the risk of counterparty Z defaulting is not expected to exceed the cumulative sum of the CVA fees charged to its trading desks for originating exposure to counterparty Z.

Cumulative CVA PLA for positions facing a specific counterparty over an appropriate accounting period, e.g. a month, can be compared with the CVA desk’s cumulative P&L on trades with the same counterparty over the same period (less cash balances received from CVA fees charged to originating desks). If these amounts do not net to zero, or close to zero, then this may be considered a CVA back-test breach and a falsification of the hypothesis.
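
In sketch form (all counterparty Z figures, the sign conventions and the tolerance below are assumed for illustration):

```python
# One month of figures for counterparty Z (all amounts illustrative).
cumulative_cva_pla      = -180_000.0  # cumulative CVA P&L attributed on positions facing Z
cva_desk_cumulative_pnl =  230_000.0  # CVA desk's cumulative P&L on trades facing Z
cva_fees_received       =   60_000.0  # cash received from CVA fees charged to originating desks

# Net the attributed CVA cost against the CVA desk's P&L less the fees it received;
# the sign convention assumed here is that a well-hedged, fairly charged book nets to ~zero.
net = cumulative_cva_pla + (cva_desk_cumulative_pnl - cva_fees_received)

tolerance = 10_000.0                  # illustrative materiality threshold
if abs(net) > tolerance:
    print(f"Possible CVA back-test breach: net of {net:,.0f} is not close to zero")
else:
    print(f"Net of {net:,.0f} is within tolerance")
```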

Implications for the risk and PLA process?

  • Predictions must be stable over the test period to be falsifiable – so for risk hypothesis I, the sensitivities and their dependencies must be immutable w.r.t. the PLA used to test it.
  • The objects and agents in predictions must be rigorously defined so they can be observed – products, portfolios, risk factors, risk factor sensitivities, VAR baseline date, counterparties etc all must have unambiguous definitions.
  • Any relationships (causal or correlated) inferred by a prediction between its objects and agents must be stable over the test period – similar consistency implications as above, i.e. the risk classification mapping a product to risk factors must remain immutable w.r.t. PLA.
  • Objects and agents in a prediction (and their dependencies) must be consistent with the objects and agents observed – products, portfolios, risk factors, market data, reference data, portfolio hierarchies etc used in the models must be consistent with what is used in P&L. Likewise any transformations made must be consistent to both the prediction and the observation e.g. adjustments made to P&L must be reflected in the risk via re-attribution.

This provides an intuitive context in which to think of the role of PLA.
Our whitepaper on PLA builds on this same theme.
