As simple as possible, but no simpler

The Volcker metric known as inventory aging… and thoughts of Whisky

Inventory Aging is a rather innocuous-looking member of the band of (now) seven metrics that, under the Volcker rule, banking entities with significant trading assets and liabilities are required to calculate daily and report monthly.

As written, the metric description seems straightforward enough:

Inventory Aging generally describes a schedule of the trading desk’s aggregate assets and liabilities and the amount of time that those assets and liabilities have been held. [It] should measure the age profile of the trading desk’s assets and liabilities and must include two schedules, an asset-aging schedule and a liability-aging schedule.

The graphic below broadly outlines the processes of asset/liability tagging, matching, sorting and netting of trades involved in generating an inventory aging schedule.
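To make those mechanics a little more concrete, here is a minimal Python sketch of the bucketing step alone. The age buckets and position fields are our own assumptions (the rule prescribes neither), and the trade-level tagging, matching and netting steps shown in the graphic are omitted.

```python
from collections import defaultdict
from datetime import date

# Hypothetical age buckets in days held; the rule does not prescribe these.
BUCKETS = [(0, 30), (31, 60), (61, 90), (91, 180), (181, 360), (361, None)]

def bucket_label(days_held):
    """Map a holding period in days to its aging-bucket label."""
    for lo, hi in BUCKETS:
        if hi is None or days_held <= hi:
            return f"{lo}+d" if hi is None else f"{lo}-{hi}d"

def aging_schedule(positions, as_of):
    """Aggregate market values into asset- and liability-aging schedules.

    positions: iterable of dicts with hypothetical fields 'acquired' (date),
    'market_value' (float) and 'side' ('asset' or 'liability').
    """
    schedule = {"asset": defaultdict(float), "liability": defaultdict(float)}
    for p in positions:
        label = bucket_label((as_of - p["acquired"]).days)
        schedule[p["side"]][label] += p["market_value"]
    return {side: dict(buckets) for side, buckets in schedule.items()}

positions = [
    {"acquired": date(2013, 1, 2),   "market_value": 5.0e6, "side": "asset"},
    {"acquired": date(2012, 11, 15), "market_value": 2.5e6, "side": "asset"},
    {"acquired": date(2013, 3, 1),   "market_value": 1.0e6, "side": "liability"},
]
print(aging_schedule(positions, as_of=date(2013, 3, 15)))
# {'asset': {'61-90d': 5000000.0, '91-180d': 2500000.0},
#  'liability': {'0-30d': 1000000.0}}
```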

(more…)

Senate committee hearing on JPM loss – possible industry implications for derivatives infrastructure

Today’s Senate sub-committee hearing on last year’s credit derivatives trading loss at JP Morgan’s CIO office makes, in some segments, for riveting Q&A. The Senate sub-committee report released yesterday (as well as JPM’s own internal report) also makes for very compelling reading.

Both reports, and the sub-committee hearing, highlight some very specific control and reporting issues that are unlikely to be unique to JPM. The hearing was also (somewhat) critical of the Office of the Comptroller of the Currency’s (OCC) oversight. It seems more likely than not that regulatory oversight of these issues will see increased focus and scrutiny across the industry.

Below, we list nine (9) possible implications. Using our schematic of key post-DFA process and data flows within OTC derivatives infrastructure, we also highlight the functional areas we believe may see such increased regulatory scrutiny as a consequence.

Dealer firms will be well served to consider conducting current state analyses and (more…)

Unraveling Proprietary Trading – implementing the Volcker metrics

The Volcker Rule mandates that banking entities cease proprietary trading, subject to certain exceptions for “permitted activities” including market making and risk-mitigating hedging.

The currently proposed implementation of the rule includes recommendations for a framework of 17 quantitative metrics to be calculated and analyzed daily, and reported to regulators monthly.

The 17 quantitative metrics are grouped into 5 metrics groups (as listed to the left).

Each metrics group variously seeks to establish that a bank’s risk-taking appetite, revenue profile, trading inventory and origination are all consistent with those of a market maker providing liquidity and hedging any residual risks incurred in the provision of this service.

Risk Management: the 4 metrics in this group try to establish that the bank’s trading units retain risk no greater, in size or type, than is required to provide intermediation/market-making services to customers.

Sources of Revenue: the 5 metrics in this group try to establish that the bank’s trading units’ revenues derive primarily from customer business (fees, commissions and bid-offer spreads) and not from price movements of retained principal positions.
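As a stylized illustration of the Sources of Revenue idea (the rule’s actual metric definitions are more involved), daily desk revenue could be decomposed along the following lines; every field name below is an assumption of ours, not the rule’s.

```python
def revenue_decomposition(daily_records):
    """Split desk revenue into customer-derived and principal-derived parts."""
    customer = sum(r["fees"] + r["commissions"] + r["spread_capture"]
                   for r in daily_records)
    principal = sum(r["inventory_pnl"] for r in daily_records)
    total = customer + principal
    return {"customer_revenue": customer,
            "principal_revenue": principal,
            "customer_share": customer / total if total else float("nan")}

# One hypothetical trading day: mostly fees, commissions and spread capture,
# with a small P&L from price moves on retained inventory.
day = [{"fees": 0.1e6, "commissions": 0.2e6, "spread_capture": 0.9e6,
        "inventory_pnl": 0.15e6}]
print(revenue_decomposition(day))
# {'customer_revenue': 1200000.0, 'principal_revenue': 150000.0,
#  'customer_share': 0.888...}
```

A market-making desk would be expected to show a persistently high customer share; a desk living off price moves on retained positions would not.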

(more…)

Whitepaper on P&L Attribution now available for download

Our whitepaper on P&L attribution (PLA) is now available for download.

The paper examines the practice of PLA production, analysis and reporting within banks. Given the recent regulatory focus on PLA and banks’ capacity to produce it, the paper also examines areas of potential interest, i.e. policy, governance, process capacity and metrics that can be used to benchmark a bank’s capacity to produce, analyze, monitor and report PLA.

P&L Attribution – Judging the weathermen

“The storm starts, when the drops start dropping
When the drops stop dropping then the storm starts stopping.”
― Dr. Seuss, Oh Say Can You Say?

“Pray don’t talk to me about the weather, Mr. Worthing. Whenever people talk to me about the weather, I always feel quite certain that they mean something else. And that makes me so nervous.”
– Oscar Wilde, The Importance of Being Earnest, Act 1

We will talk about weathermen and the predictions they make. And we will mean something entirely different. By weathermen, we will mean the models in a bank and the predictions they make or the hypotheses they form. And for the realism of Dr. Seuss’ drops dropping, we will substitute the realism of P&L. More specifically, we will talk about P&L attribution (PLA) and the role it plays in helping us use the realism of P&L to test the hypotheses posed by our various risk models – which actually is its primary purpose in life.

We will focus specifically on 3 hypotheses, formulated respectively by a bank’s sensitivity-based risk models, its VAR model and its CVA/EPE model. Namely, for a given bank:

I. Changes in the mark-to-market value of its positions are materially determined by changes to a specified set of variables and parameters (i.e. risk factors), and the expected change is quantified by the sensitivities to these risk factors obtained from its models (a minimal sketch of testing this hypothesis follows the list);

II. There is a specified % probability that its positions will lose more than its VAR number in value over any given interval equal to the VAR holding period;

III. The cost of insuring its aggregate positions against the risk of counterparty Z defaulting is not expected to exceed the cumulative sum of the CVA fees charged to its trading desks for originating exposure to counterparty Z.
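As promised above, here is a minimal numerical sketch of how hypothesis I might be tested: first-order predicted P&L is the sum of sensitivities times realized risk-factor moves, and the residual (the “unexplained P&L”) is what the hypothesis says should be small. The factor names and numbers below are hypothetical.

```python
def attribution(actual_pnl, sensitivities, factor_moves):
    """Attribute actual P&L to risk factors; the residual tests hypothesis I."""
    explained = {f: s * factor_moves[f] for f, s in sensitivities.items()}
    unexplained = actual_pnl - sum(explained.values())
    return explained, unexplained

# Hypothetical one-day example: sensitivities are P&L per +1bp move.
sens  = {"usd_10y_rate": -25_000.0, "credit_spread": -8_000.0}
moves = {"usd_10y_rate": 3.0, "credit_spread": -1.5}   # realized moves, in bp
explained, unexplained = attribution(-60_000.0, sens, moves)
print(explained)    # {'usd_10y_rate': -75000.0, 'credit_spread': 12000.0}
print(unexplained)  # 3000.0: the slice of P&L the sensitivities fail to explain
```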

(more…)

Risk Factor classification example spreadsheet available for download

This example Excel spreadsheet illustrates the classification of derivative products into risk factor taxonomies. This could serve purposes such as evaluating their risk profiles, e.g. for P&L attribution.
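For readers who want the idea without opening the spreadsheet, a minimal sketch of such a classification might look as follows; the taxonomy below is our own toy example and may differ from the spreadsheet’s actual scheme.

```python
from collections import defaultdict

# Hypothetical taxonomy: product type -> risk-factor classification.
TAXONOMY = {
    "ir_swap":       {"asset_class": "rates",  "factors": ["ir_delta"]},
    "swaption":      {"asset_class": "rates",  "factors": ["ir_delta", "ir_gamma", "ir_vega"]},
    "cds":           {"asset_class": "credit", "factors": ["cs_delta", "jump_to_default"]},
    "equity_option": {"asset_class": "equity", "factors": ["eq_delta", "eq_gamma", "eq_vega"]},
}

def classify(product_type):
    """Look up a product's classification, flagging anything unknown."""
    return TAXONOMY.get(product_type,
                        {"asset_class": "unclassified", "factors": []})

def group_by_asset_class(trades):
    """Group trade ids by asset class, e.g. to scope a PLA policy."""
    groups = defaultdict(list)
    for t in trades:
        groups[classify(t["product_type"])["asset_class"]].append(t["trade_id"])
    return dict(groups)

print(group_by_asset_class([
    {"trade_id": "T1", "product_type": "ir_swap"},
    {"trade_id": "T2", "product_type": "swaption"},
    {"trade_id": "T3", "product_type": "cds"},
]))
# {'rates': ['T1', 'T2'], 'credit': ['T3']}
```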

A Practical Example of Classification

Of herding 1 million hissing cats onto a carousel somewhere a few blocks north of Bryant Park in New York. It must be said, though, that the music on this particular carousel had stopped (and Edith Piaf had most definitely left the building).

As part of the unwind of Lehman Brothers’ derivatives portfolio for the post-bankruptcy Estate (a portfolio of over 1 million derivatives trades), the team conducted a classification exercise of the products in the portfolio (with underlyings covering all major asset classes, and instruments running the gamut of complexity from vanilla single-factor, single-asset-class flow products to highly exotic structured multi-factor hybrid products).

The objective was to value them in the shortest possible time, in the most efficient manner possible (given limited to no infrastructure), and in the most defensible manner possible (given their eventual day in bankruptcy court), over the days in September/October 2008 when they were unwound.

This classification exercise was essential to developing and driving the valuation strategy for the portfolio, covering:

  • Team selection;
  • Valuation platform, model and method selection;
  • Computational resource provisioning;
  • Market data requirements definition;
  • Developing the netting and hedging assumptions needed to take a view on reasonable bid/offer and transaction costs;
  • Conducting self-consistency checks of the valuations.

A complex and gargantuan valuation exercise that could only be accomplished by intelligent abstraction of product commonality through classification.
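As a sketch of what “classification driving the valuation strategy” can mean in practice (the classes, routes and names below are illustrative assumptions of ours, not the actual strategy used for the Lehman portfolio), consider routing each product class to a valuation approach and sizing resources from the resulting counts:

```python
# Hypothetical routing of product classes to valuation approaches.
VALUATION_ROUTE = {
    ("flow", "single_factor"):   "closed_form",       # vanilla analytic pricing
    ("flow", "multi_factor"):    "lattice_or_pde",
    ("exotic", "single_factor"): "monte_carlo",
    ("exotic", "multi_factor"):  "monte_carlo_heavy", # provision extra compute
}

def route(trade):
    """Pick a valuation approach from a trade's (complexity, factor) class."""
    return VALUATION_ROUTE.get(
        (trade["complexity"], trade["factor_profile"]), "manual_review")

def resource_plan(trades):
    """Count trades per route, to size teams, models and compute up front."""
    counts = {}
    for t in trades:
        counts[route(t)] = counts.get(route(t), 0) + 1
    return counts

print(resource_plan([
    {"complexity": "flow",   "factor_profile": "single_factor"},
    {"complexity": "exotic", "factor_profile": "multi_factor"},
]))
# {'closed_form': 1, 'monte_carlo_heavy': 1}
```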

Classifying Derivatives (or herding cats onto a carousel)

It is always easy to find fault with a classification. There are a hundred ways of arranging any set of objects, and something may almost always be said against the best, and in favour of the worst of them. But the merits of a classification depend on the purposes to which it is instrumental.

John Stuart Mill
Auguste Comte and Positivism

Classification as used here attempts to arrange traded financial derivatives into product classes or groups based on similar or related properties, where the properties are identified within a defined scheme of taxonomies, and similarity of properties is meaningful within some context.

The motivation for classification here is not much different from classification in biology, in that the focus is not so much on the naming of things as on arriving at the best possible ordering of our knowledge base about the properties of the objects being classified, such that the ordering gives us the greatest contextual command of the knowledge already acquired about the objects, and also leads us in the most direct way to the acquisition of more.

In plain English, and within the context of classification for a risk-based P&L attribution policy as an example, we want to think about how to order the properties of financial derivative contracts in such a way that we can group them around the types of risk-sensitive behavior they are likely to exhibit, and thus how their P&L behavior may best be explained. Additionally, a fundamentally intuitive grouping helps shed light on further risk-sensitive properties that may be applicable within groups.
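As a toy illustration of that ordering (the groups and terms below are our assumptions, not a prescription), one might map each behavioral group to the attribution terms its P&L should decompose into:

```python
# Hypothetical grouping: which P&L-explanation terms a product group is
# expected to need, given its risk-sensitive behavior.
EXPLANATION_TERMS = {
    "linear":     ["delta", "carry"],                   # e.g. forwards, swaps
    "convex":     ["delta", "gamma", "carry"],          # e.g. bonds
    "volatility": ["delta", "gamma", "vega", "carry"],  # e.g. vanilla options
    "hybrid":     ["delta", "gamma", "vega", "cross_gamma", "carry"],
}

def required_terms(group):
    """Return the attribution terms a group's P&L should decompose into."""
    # Anything outside the scheme falls back to full revaluation.
    return EXPLANATION_TERMS.get(group, ["full_revaluation"])
```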

(more…)