What happened in bank risk last year?

Risky Finance has crunched the year-end regulatory filings for the 11 biggest global banks. For those who aren’t subscribers to the full database, here are some of the key trends we noticed.

1) Is reducing risk increasing it?

Troubled Deutsche Bank is on a mission to cut its complex balance sheet and reduce risk, and at first sight, the numbers bear that out. The bank shrank its derivative assets by more than 15 per cent in 2018, and its total assets by more than 12 per cent.

Screenshot of interactive visualisation available to subscribers

However, Deutsche’s regulatory capital requirement – its risk-weighted assets – barely went down, despite credit and operational RWA reductions of five per cent apiece. The culprit? An unexpected 15 per cent increase in market RWAs, caused by the balance sheet reduction itself.

It sounds like a contradiction but a paragraph buried in the bank’s annual report explains why.

“The increase was primarily driven by stressed value-at-risk”, Deutsche said, “coming from a reduction in diversification benefit due to changes in the composition of interest rate and equity related exposures”.

So according to Deutsche’s calculations, that bloated, complex derivatives portfolio would actually be helpful in a 2008-style crisis when correlations all went to one, because the exposures offset one another. The idea is not totally far-fetched: remember how Greg Lippmann’s famous ‘big short’ helped save Deutsche in 2008?

It’s a bit like saying that nuclear power stations don’t melt down when all the components are aligned properly. But what happens when things go wrong? As Deutsche Bank is telling us, when the portfolio is being dismantled, it becomes even more dangerous.
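
To make the mechanics concrete, here is a minimal sketch of how a VaR-style charge can rise when an offsetting position is cut back. The exposures, volatilities and correlation are invented for illustration and bear no relation to Deutsche’s actual model; the point is only that a hedged pair can carry less measured risk than its surviving leg.

```python
import numpy as np

# Minimal sketch: a simple variance-covariance VaR for a two-position book.
# The exposures, volatilities and correlation are invented for illustration;
# this is not Deutsche Bank's internal model.

Z_99 = 2.33  # one-sided 99% normal quantile used in a basic parametric VaR

def parametric_var(exposures, vols, corr):
    """VaR = z * sqrt(w' * Sigma * w) for exposure vector w."""
    w = np.asarray(exposures, dtype=float)
    sigma = np.outer(vols, vols) * corr
    return Z_99 * np.sqrt(w @ sigma @ w)

# Interest-rate and equity legs of opposite sign whose risk factors move
# together in stress (correlation 0.9), so the legs largely offset.
corr = np.array([[1.0, 0.9],
                 [0.9, 1.0]])
vols = np.array([0.02, 0.02])  # stressed daily volatilities

full_book = parametric_var([100, -90], vols, corr)    # before the wind-down
shrunk_book = parametric_var([100, -30], vols, corr)  # equity leg cut back

print(f"VaR of full book:   {full_book:.2f}")    # ~2.0
print(f"VaR of shrunk book: {shrunk_book:.2f}")  # ~3.5, despite the smaller book
```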

2) Steinhoff really spooked JP Morgan

It was the big story at the beginning of last year: a South African retailer came unstuck amid accounting irregularities, ensnaring global banks such as Bank of America and HSBC that had extended margin loans to its largest shareholder. It was that rare thing these days: sudden, hefty impairments on investment-grade corporate loans.

Screenshot of interactive visualisation available to subscribers

When we published a story about it in March, we noticed how JP Morgan and Citigroup did things a bit differently, booking the loans through their securities financing businesses and incurring mark-to-market losses when the equity collateral plunged in value.

A few months later, JP Morgan did something quite dramatic: it reduced its securities financing exposure by $130 billion. This wasn’t reported by any news organisation, perhaps because the disclosure to the Federal Financial Institutions Examination Council never appeared in an SEC filing.

The disclosure reports two types of securities financing exposure, and the second type – in which margin loan collateral is reflected in JP Morgan’s loss-given-default (LGD) model – saw the $130 billion decrease. The other type of exposure stayed constant, but saw an equally dramatic rise in its capital requirement.

One might detect the hand of the New York Fed here. Until Steinhoff, JP Morgan appears to have run its collateralised margin lending on the tiniest sliver of capital. Without that advantage, the bank saw no point in operating in this market, and made a quiet exit.
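
To see why collateralised margin lending can run on such a thin sliver of capital, consider that under the Basel advanced (IRB) approach the capital charge scales linearly with loss-given-default. The sketch below uses the standard corporate IRB formula with invented PD, LGD and exposure numbers – it illustrates the mechanism, not JP Morgan’s actual parameters.

```python
from math import exp, log, sqrt
from scipy.stats import norm

# Sketch of the Basel corporate IRB capital formula, to show that the charge
# scales linearly with LGD. PD, LGD and exposure are invented, illustrative
# numbers -- not JP Morgan's actual inputs.

def irb_capital(pd, lgd, ead, maturity=1.0):
    """Approximate IRB capital requirement (K * EAD) for a wholesale exposure."""
    # Supervisory asset correlation for corporate exposures
    r = (0.12 * (1 - exp(-50 * pd)) / (1 - exp(-50))
         + 0.24 * (1 - (1 - exp(-50 * pd)) / (1 - exp(-50))))
    # Maturity adjustment
    b = (0.11852 - 0.05478 * log(pd)) ** 2
    ma = (1 + (maturity - 2.5) * b) / (1 - 1.5 * b)
    # Unexpected loss at the 99.9% confidence level
    k = lgd * (norm.cdf((norm.ppf(pd) + sqrt(r) * norm.ppf(0.999))
                        / sqrt(1 - r)) - pd) * ma
    return k * ead

ead = 1_000  # $1bn of margin loans, purely illustrative

# Over-collateralisation reflected as a very low LGD...
print(irb_capital(pd=0.005, lgd=0.05, ead=ead))  # ~4.6: a sliver of capital
# ...versus an unsecured-style LGD on the same loan
print(irb_capital(pd=0.005, lgd=0.45, ead=ead))  # ~41.8: nine times as much
```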

3) Securitisation is back

Of course it never completely went away – the 11 banks in the Risky Finance database have $800 billion of securitisation exposure between them. But after years of slow decline after the financial crisis, 2018 was the year that securitisation bounced back.

Screenshot of interactive visualisation available to subscribers

$48 billion of new exposures appeared on the balance sheets of BNP Paribas, Barclays, Citigroup and Deutsche Bank, offset by about $20 billion of reductions at the likes of Goldman, HSBC and Wells Fargo, where legacy portfolios continue to run off.

When you look at the contribution of new securitisation to credit RWAs at the banks, you can see why they are doing it – the impact on capital requirements is tiny. And that’s the whole point of securitisation in the first place: the tranching of exposures in a waterfall of losses, which gives huge risk-transfer benefits to originating or sponsoring banks.
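
The waterfall itself is simple to sketch: losses on the underlying pool are absorbed bottom-up, hitting the first-loss piece before anything reaches the senior tranche a bank typically retains. The tranche sizes and loss scenario below are invented for illustration.

```python
# Minimal sketch of a securitisation loss waterfall. Tranche sizes and the
# loss scenario are invented for illustration only.

def allocate_losses(pool_loss, tranches):
    """Allocate pool losses bottom-up through the capital structure.

    `tranches` is a list of (name, size) ordered from most junior to most
    senior. Returns the loss borne by each tranche.
    """
    allocation = {}
    remaining = pool_loss
    for name, size in tranches:
        hit = min(remaining, size)
        allocation[name] = hit
        remaining -= hit
    return allocation

tranches = [
    ("equity/first loss", 5),   # retained or sold to yield-hungry investors
    ("mezzanine",         10),
    ("senior",            85),  # the piece the originating bank typically keeps
]

# A 6-unit loss on a 100-unit pool wipes out the first-loss piece, barely
# touches the mezzanine, and leaves the senior tranche untouched.
print(allocate_losses(6, tranches))
# {'equity/first loss': 5, 'mezzanine': 1, 'senior': 0}
```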

The financial crisis cast a long shadow over this market. The most toxic parts of it, like correlation trading or asset-backed CDOs, have all but disappeared. But collateralised loan obligations are in rude health, with BNP Paribas having done $20 billion of them last year, while Barclays structured $6.7 billion of synthetic CLOs. Mortgage-backed securities origination also saw a comeback, with Barclays doing $11.5 billion of European MBS, while Citi ramped up its private label MBS and asset-backed financing to the tune of $16 billion.

These are modest numbers compared with pre-2008 but are significant nonetheless.

Where would the market be without share buybacks?

The decline in equity markets seen in the last few months would have been worse without the countervailing effect of buybacks. But by how much?

Share buybacks are an enduring part of market practice. Warren Buffett loves them. And the numbers are huge: aggregate annual spending on buybacks by S&P 500 companies is approaching $1 trillion, according to S&P Dow Jones Indices.

Yet despite numerous headlines suggesting that buybacks have been propping up the market, it isn’t clear how much of an impact they have had on share prices. Or, to express this as a counterfactual: where would the market be without buybacks? Risky Finance has conducted some analysis to shed light on this.

Is size becoming a risk for S&P 500 stocks?

During the two months before Thanksgiving, more than two trillion dollars were wiped off the S&P 500, dragged down by the technology giants whose stocks saw declines of 20 per cent or more. Until this week’s rebound, the index itself came close to being in ‘correction’ territory.

Screenshot of interactive chart available to subscribers

Although by many indicators a full-scale market rout may be overdue, October and November’s decline was something different. Correlations didn’t converge to one, and winners and losers could be categorised in several ways.

First of all, consider the sector story. Amid the $2 trillion of wealth destruction, some sectors of the index performed well. Consumer stocks like McDonald’s or Starbucks have enjoyed double-digit returns since September. Healthcare and utilities are other sectors with decent returns over this period.

The Risky Finance equity visualisation tool shows the cumulative cap-weighted returns for stocks in each sector over the last two months, and the effect is easy to see.

We can also see the same result as a histogram of returns, where the columns are the number of S&P 500 stocks with a return in a specific range. Here each sector is assigned a different colour. Technology stocks (in orange) are clustered over on the left (negative return) side of the chart, while consumer staples are mostly on the right.
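
For readers who want to build a similar chart from their own data, here is a rough sketch of the histogram. It assumes a DataFrame with one row per index member and columns named return_pct and sector – those names are ours, not the Risky Finance tool’s.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Sketch of a sector-coloured histogram of two-month returns. Assumes a
# DataFrame `members` with one row per S&P 500 member and columns
# 'return_pct' and 'sector'; the column names are ours, not the tool's.

def sector_histogram(members: pd.DataFrame, bin_width: float = 2.5):
    bins = np.arange(-60, 60 + bin_width, bin_width)
    sectors = sorted(members['sector'].unique())
    # One array of returns per sector, stacked so each column shows the
    # number of stocks in that return range, coloured by sector.
    data = [members.loc[members['sector'] == s, 'return_pct'] for s in sectors]
    plt.hist(data, bins=bins, stacked=True, label=sectors)
    plt.xlabel('Return over the period (%)')
    plt.ylabel('Number of S&P 500 stocks')
    plt.legend(fontsize='small')
    plt.show()
```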

There’s not only a sector story – there’s also a size story. We have been writing about gigantism in the S&P 500 for some time now, exploring theories and evidence on how size defeats everything in its path as an investing strategy. The last couple of months suggest an interesting reversal.

We’ve created a scatter chart plotting returns versus the log of market cap at the start of the period for S&P index members. Taking the returns from the start of the year, there is a small positive relationship between size and return. Taking the returns from the start of September to Thanksgiving, the relationship becomes modestly negative.
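
The slope behind that statement can be reproduced with a simple least-squares fit. The sketch below assumes a DataFrame holding each member’s period return and market cap at the start of the window; the column names are our own invention.

```python
import numpy as np
import pandas as pd
from scipy.stats import linregress

# Sketch of the size-versus-return fit. Assumes a DataFrame `members` with a
# 'return_pct' column for the chosen window and a 'mcap_start' column holding
# market cap at the start of the period; column names are our own.

def size_return_slope(members: pd.DataFrame):
    x = np.log10(members['mcap_start'])
    y = members['return_pct']
    fit = linregress(x, y)
    # A positive slope means bigger stocks did better over the window;
    # a negative slope, as in the Sept-to-Thanksgiving window, means size hurt.
    return fit.slope, fit.pvalue

# Example: compare the two windows discussed in the text
# ytd_slope, _ = size_return_slope(members_ytd)
# autumn_slope, _ = size_return_slope(members_sept_to_thanksgiving)
```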

Of course one must be cautious about small statistical effects buried in noisy data. But it chimes with the feeling that the climate is changing for once-charmed mega-cap tech stocks: either because of a regulatory backlash as with Facebook and Google, or the fear that consumer appetite has peaked, as with Apple.


When we plot the cumulative year-to-date returns of the index members, the striking dominance of giants like Apple is tempered compared with a couple of months ago, although you would still do better by holding just the four largest members of the index, stopping at Amazon. Adding the fifth-largest stock at the start of the year, Facebook, would have dragged your return down to 6 per cent. For active funds benchmarked to the S&P, the choice between these two stocks as the largest position in their portfolio makes all the difference. For those whose pensions are invested in such funds, these distinctions are worth bearing in mind.
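
The ‘largest four members’ comparison is easy to reproduce: rank the index by market cap at the start of the year and compute the cap-weighted return of the top N names. A minimal sketch, again with our own column names:

```python
import pandas as pd

# Sketch of the top-N comparison: cap-weighted year-to-date return of the N
# largest index members, ranked by market cap at the start of the year.
# Column names ('mcap_start', 'return_ytd_pct') are ours, not the tool's.

def top_n_return(members: pd.DataFrame, n: int) -> float:
    top = members.nlargest(n, 'mcap_start')
    weights = top['mcap_start'] / top['mcap_start'].sum()
    return float((weights * top['return_ytd_pct']).sum())

# e.g. compare holding the four largest names with adding the fifth:
# print(top_n_return(members, 4), top_n_return(members, 5))
```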

Collins wedge pushes internal credit models towards irrelevance

Senator Susan Collins is a well-known figure in US politics. Attacked from the left for not blocking the Supreme Court nomination of controversial judge Brett Kavanaugh, and attacked from the right for being a RINO (Republican in name only), Collins is less well known for her impact on the capital strategies of large US banks.

The amendment that bears her name was tacked onto the 2010 Dodd-Frank Act, the omnibus post-crisis regulation that transformed supervision of US banks. Under Dodd-Frank, Basel III internal models – known as advanced approaches in the US – would be applied to all US banks with more than $250 billion in assets. For the first time, bespoke credit models used by large European banks would cross the Atlantic.

Then came the Collins Amendment. This imposed a backstop using standardised risk models, with much less freedom for banks to optimise loan loss and recovery assumptions. If the capital requirement under the standardised approach was greater than the one under the advanced approach, then the standardised figure would be used instead in the denominator of the key CET1 ratio.
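
In ratio terms, the amendment simply forces the larger RWA figure into the denominator. A toy example, with invented numbers:

```python
# Toy illustration of the Collins Amendment floor: the CET1 ratio is computed
# against whichever RWA figure is larger. All numbers are invented.

cet1_capital = 180        # $bn of common equity tier 1
rwa_advanced = 1_400      # internal-models (advanced approaches) RWAs, $bn
rwa_standardised = 1_550  # standardised RWAs, $bn

binding_rwa = max(rwa_advanced, rwa_standardised)
cet1_ratio = cet1_capital / binding_rwa

print(f"Advanced-only ratio:   {cet1_capital / rwa_advanced:.1%}")  # 12.9%
print(f"Collins-floored ratio: {cet1_ratio:.1%}")                   # 11.6%
```

With the standardised figure binding, the reported CET1 ratio drops by more than a percentage point in this example – which is why the wedge between the two measures matters so much to capital planning.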

The Book of Why

by Judea Pearl with Dana MacKenzie
Allen Lane, 2018


There are several reasons to read this book. A curious non-scientist will learn a lot about how today’s experts think about concepts like causation and correlation. Those with a technical or science background, who absorbed statistics as part of a university education, will find it useful to retool their thinking, which might be contaminated with outdated concepts that this book will sweep away. And then there are those who work or invest in artificial intelligence, who might benefit from a wake-up call to incorporate thinking that is largely missing from their field.

I met Judea Pearl in 2001 while researching my unpublished probability book. Pearl was an affable Israeli-American professor who had been recommended to me as a key player in Bayesian probability, particularly for its application to complex chains of inference known as Bayesian networks. These models of reasoning had emerged in the late 1980s as the best way to create intelligent agents – such as self-driving cars – that combined background knowledge with new data from their environment to make decisions.

The idea had evolved from neural networks, an earlier formalism that combined data without the superstructure of probability theory. Although Pearl modestly said “my contribution is very tiny”, he was the one who added probability theory to the formalism.

During my interview with him, Pearl was almost self-effacing. He answered my questions in detail and was generous with his historical anecdotes. He had a copy of Thomas Bayes’ original 1764 paper and argued that this paper was as much about causality as making inferences from data. Bayes’ theorem, Pearl said, provided a window into causality that had been forgotten for 250 years.

I soon learned that there was something more to Pearl. A couple of days later in Redmond, I spoke to David Heckerman, a senior researcher at Microsoft who had qualified as a medical doctor before getting his computer science PhD. Heckerman told me that Pearl’s ideas on causality were so powerful that they could end up abolishing the need for randomised experiments – because his networks could infer the same result. “It has a potential for profound impact on society”, Heckerman said.

But then my other interviewees – who praised Pearl for his Bayesian insights – warned me off. Cambridge professor David Spiegelhalter summed up the view of mainstream Bayesian statisticians with his comment that Pearl’s causal inference was “deeply controversial. Frankly I don’t like it either”.

Seventeen years after my meeting, Bayesian networks are ubiquitous in society. From error-correcting codes in mobile phones to self-driving cars, they power the most transformational technologies around today. Yet Pearl’s causal thinking is only just getting a foothold in social science, and has largely been sidelined by the artificial intelligence community.

This book, which is co-authored by Pearl and science writer Dana MacKenzie, explains why. The resistance to causality comes from the deeply ingrained – or one might say ossified – teachings of statistical founding fathers like Karl Pearson and Ronald Fisher a hundred or so years ago. We owe to them catchphrases like “correlation is not causation”, which have been conveyed in stats courses to millions of undergraduates over the decades.

As Pearl recounts, early pioneers of causal thinking, such as US biologist Sewall Wright, were dismissed out of hand by the powerful cliques of statisticians these founding fathers built up around themselves. The only legitimate route to causality in science was the randomised controlled trial, invented by Fisher, now accepted as the gold standard of statistical inference.

This force of history also explains why so many of the Bayesians I spoke to seventeen years ago were wary of Pearl. Their approach of looking at probability as a degree of belief (whether for a human or AI agent) was fiercely opposed by the schools of Pearson and Fisher, who countered with a theory of probability based purely on frequencies of observed events.

Having their approach accepted as equivalent was a hard-won victory for Bayesians. It might have been easier for Pearl, with a tenured research position at UCLA, to keep pushing the foundations. Not so much for the likes of Spiegelhalter, who helped expose abnormal death rates in UK hospitals and had to defend his methods against traditionalists.

“Most statisticians, as soon as there’s any idea of learning causality from non-randomised data, they will almost universally refuse to step over that mark”, Spiegelhalter told me in a 2003 interview. “You look over your shoulder, and all your colleagues will come down on you like a ton of bricks. You could say there’s a vested interest, but the fact is that there’s a fantastic tradition of doing things with randomised trials. There is a huge industry – the pharmaceutical industry – that literally relies on it.”

However, this resistance may be on the cusp of change. In his book, Pearl puts both Bayesians and frequentists at the bottom rung of a ‘ladder of causation’. Consider a canonical example that Pearl examines in detail: whether smoking causes cancer. If you observe that smokers have a greater probability of getting cancer than non-smokers, you might argue that smoking caused cancer.

Wrong, argued the likes of Fisher, pointing out that there might be a third variable – such as an unseen ‘cancer gene’ – that was more common in smokers and also happened to cause cancer. Because you can’t see this gene you can’t control for it in the population and therefore you can’t reach any conclusions based solely on observations of smoking and cancer. Such a variable is called a ‘confounder’.

Pearl’s response is that we have to climb up his ladder of causation from the bottom rung, which is only about observations, to the next level, where one is allowed to make interventions. Observing X (which might be controlled by Z) is different from doing X (blocking any influence of Z).

Armed with diagrams, or graphical models of causal mechanisms, Pearl shows how to defeat confounders such as the mythical ‘cancer gene’. RCTs are one form of intervention that emerges from his theory (and in his relaxed way he assures fans of RCTs that they can carry on using them as if his theory didn’t exist). But what about those situations where it is unethical or impractical to conduct experiments?

As Heckerman told me in 2001, Pearl solved the problem. Using the causal calculus devised by him and his students, you can indeed infer causal relationships from observations without performing experiments, stepping over the mark that Spiegelhalter refused to step across.
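
A small sketch shows the flavour of this in the simplest case, where the confounder can actually be measured: Pearl’s back-door adjustment recovers P(cancer | do(smoking)) from purely observational frequencies by averaging over the confounder instead of ignoring it. The numbers below are invented, and for a genuinely unobservable gene the book deploys heavier machinery such as the front-door criterion.

```python
import pandas as pd

# Toy illustration of Pearl's back-door adjustment, for the simple case where
# the confounder can be measured. All numbers are invented: a 'gene' that makes
# people both more likely to smoke and more likely to get cancer.

# columns: gene, smoker, P(cancer | gene, smoker), share of population
population = [
    (1, 1, 0.30, 0.15),
    (1, 0, 0.20, 0.05),
    (0, 1, 0.10, 0.15),
    (0, 0, 0.05, 0.65),
]
df = pd.DataFrame(population, columns=['gene', 'smoker', 'p_cancer', 'weight'])

# Rung one: what we observe among smokers, P(cancer | smoker)
smokers = df[df.smoker == 1]
p_observed = (smokers.p_cancer * smokers.weight).sum() / smokers.weight.sum()

# Rung two: the back-door adjustment,
# P(cancer | do(smoke)) = sum over gene of P(cancer | smoker, gene) * P(gene)
p_gene = df.groupby('gene').weight.sum()
p_do = sum(p_gene[g] * smokers.loc[smokers.gene == g, 'p_cancer'].iloc[0]
           for g in p_gene.index)

print(f"P(cancer | smoker observed) = {p_observed:.2f}")  # 0.20
print(f"P(cancer | do(smoke))       = {p_do:.2f}")        # 0.14: part of the
# observed association was the gene, not smoking itself
```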

The book contains plenty of examples of how this works, and we can now see the broader community in public health or economics beginning to use these methods. For example, the UK Medical Research Council is now offering funding for causal research on health interventions, citing Pearl’s research as a reference.

In the book, Pearl goes further, climbing to the next rung of his causation ladder: counterfactuals. These are ubiquitous in areas such as the law, as a way of apportioning blame or liability. Pearl shows once again how to use observational data to answer ‘would-have-been’ questions.

There’s one final chapter in the book where Pearl comes full circle back to his earlier work in computer science. He muses on why artificial intelligence has passed on using causal modelling. Advances like deep learning are flawed, he argues, because they are stuck on the bottom rung of the ladder of causation, searching for connections between data without interventions or counterfactuals. One is inclined to agree with Pearl that AI will never display human-like intelligence until it overcomes this deficiency.

Year of the MAGA stock

Last week the market cap of Amazon briefly crossed the $1 trillion barrier, putting it in the rarefied company of Apple. It isn’t just the size that makes these stocks special, but also the growth.

Nine months ago we looked at how size had been a winning bet in the S&P 500 index. Since then, evidence for this thesis has only got stronger. Shares of Microsoft, Apple, Google and Amazon – let’s call them MAGA stocks – have gained $1 trillion of market capitalisation this year. That’s 55% of the year-to-date gain for the entire S&P 500.

Screenshot of interactive equity tool available to subscribers

If we accept that the valuation of these companies is something more than a bubble, we need to understand what makes them so special. Given their market power and social importance as well as their size, answering this question is an urgent issue in public policy.

Which fundamental variable should be considered? PE isn’t much help here. Trailing price-to-earnings ratios are big for Amazon, modest for Apple, and have grown dramatically for Microsoft and Google/Alphabet. The companies are generating a lot of revenue but, Apple excepted, shareholders are paying a lot to own it.

Volatility is more interesting. Investors that use a risk budgeting or risk parity approach might underweight stocks with high volatility and overweight those with low vol. Historical volatility (using a 90-day window) used to regularly breach 40% for Amazon and seldom dropped below 25% for Alphabet/Google.
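
For reference, the historical volatility quoted here is conventionally computed as the annualised standard deviation of daily log returns over a rolling window. A short sketch, assuming a pandas Series of daily closing prices:

```python
import numpy as np
import pandas as pd

# Sketch of the 90-day historical volatility quoted above: annualised standard
# deviation of daily log returns over a rolling window. `prices` is assumed to
# be a pandas Series of daily closes indexed by date.

def rolling_vol(prices: pd.Series, window: int = 90) -> pd.Series:
    log_returns = np.log(prices / prices.shift(1))
    # Annualise with sqrt(252) trading days; quote as a percentage
    return log_returns.rolling(window).std() * np.sqrt(252) * 100

# e.g. vol_amzn = rolling_vol(amzn_closes); vol_amzn.plot()
```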

Five years ago, the relatively high volatility of US technology stocks was a puzzle for some analysts. A trio of academics, Söhnke Bartram, Gregory Brown and René Stulz, showed in a 2011 paper that US stocks were more volatile than those listed in other countries, and moreover this volatility was not systematically connected to the index, but was rather an idiosyncratic feature of individual stocks.

This was a puzzle because stocks with the higher idiosyncratic volatility tended to have lower average returns. Orthodox finance theory had long equated volatility with risk, and treated it as something to be minimised (in order to increase portfolio returns). Not true, said Bartram, Brown and Stulz, arguing that US stocks somehow had ‘good volatility’ that increased their returns above the index average.

By controlling for other variables, the three authors went on to show that ‘good volatility’ was a product of greater research & development expenditure (as a percentage of revenues) at US-based companies. In other words, investors were ignoring what finance theory said and buying US companies with outsized R&D expenditure in order to profit from innovation.

Now fast forward from 2011. Although their study was a regression of data available at the time, it turns out that Bartram, Brown and Stulz made an inadvertent prophecy. ‘Good volatility’ and R&D expenditure may explain the gigantism in the S&P 500 that we have seen since.

Orthodox finance theory has been debunked: with ‘good volatility’ you don’t need to hold the index in order to diversify away idiosyncratic risk. You just want to capture the innovation and, as our chart above shows, you only need to own the four or five biggest names to do it.

This squares with what some of these companies have said. Google, when it reorganised itself into Alphabet, made clear that the cash-rich search engine business was only part of its value. The rest was a series of ‘alpha bets’ on things like self-driving cars or deep learning. Amazon and Microsoft are similar in the way that they combine cash generation with a portfolio of options on R&D innovation.

Indeed, there is some evidence that these companies are in an arms race to increase R&D spending as a percentage of revenue, and thus maintain this option value. In the chart below, we see that Amazon increased R&D from 7.5% of revenue in 2010 to 17% today (unlike other companies, Amazon reports this item as ‘research & content’). Even Apple, which makes so much cash that R&D long seemed like an afterthought, doubled its spending from 3% to 6.6% of revenues today.


But there is now a more sinister feel to this arms race than the innovation-must-be-good argument of 2011. Earlier this year Bartram, Brown and Stulz got together again for a follow-up paper. They found that since their previous study, the idiosyncratic volatility of US stocks had fallen to a 50-year low. This was not connected to macroeconomic effects, the new study showed.

Two factors were behind the phenomenon, the authors found. First, many of the most innovative young companies were avoiding public markets and staying private, which lowered the average volatility of those that remained public. And what about those still-public companies? Their volatility was lowered further because competition had declined.

Indeed, the historical volatility of Microsoft fell below that of the S&P 500 index itself this year, while Amazon, whose volatility averaged 30% from 2010 to 2015, now has volatility below 20%. In the last couple of years, the MAGA stocks have reduced competition either by buying up rivals (as Microsoft has done) or putting them out of business as Amazon is freely doing in retail.

Arguably, the MAGA stocks’ value is no longer just about R&D optionality, but also about their unfettered power as competition killers. That is something policymakers are only just waking up to. Or perhaps this is what is really meant by ‘make America great again’.

Looking in SEC disclosures for the next LTCM

This September will see two important financial crisis anniversaries. Not only will it be ten years since Lehman Brothers filed for bankruptcy, but also twenty years since the near-collapse of the hedge fund Long-Term Capital Management. Amid all the reforms to the global banking system and trading infrastructure since 2008, it’s worth asking the question: could an outsized hedge fund threaten the financial system today in the same way that LTCM did in 1998?

Turkey’s crisis in six charts

Markets have turned against Turkey, as they see an escalating spat between the country’s authoritarian President Erdogan and US President Trump derail a credit-fuelled economy. Risky Finance has prepared six charts that illustrate the severity of Turkey’s problems.

The Turkish lira has declined precipitously against the dollar, more than any other currency, including the Argentine peso. The currency has weakened by 45% since the start of the year, and by 60% since the end of 2015. If it persists, this decline will have serious consequences for Turkey.

To investigate this, we use the Risky Finance sovereign tool, which shows iBoxx data for Turkish sovereign and quasi-sovereign debt. There is $120 billion outstanding of this liquid debt, $55 billion of which is in external currency, mostly dollars. A more interesting way to view this is to compare the exposure with other countries, scaled as a percentage of gross domestic product.

Outstanding debt of non-investment grade emerging market sovereigns, scaled by GDP converted to dollars on 13 August. Screenshot of interactive visualisation available to subscribers.

Which GDP figure do we use? We start with the dollar current-prices GDP published by the International Monetary Fund in April. iBoxx debt accounts for 14% of Turkish GDP, a fairly modest amount compared with other non-investment grade EM sovereigns.

Next we take the IMF’s April local currency GDP, and convert to dollars using the 13 August exchange rate. This time the ratio of iBoxx debt to GDP stands at 23%, making Turkey stand out much more against other EM issuers. Clearly the decline in Turkey’s currency is making its sovereign debt much less sustainable.
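
The two ratios come from the same arithmetic applied to different GDP conversions. In the sketch below, the $120 billion debt stock is the figure quoted above, while the GDP and exchange-rate inputs are rough placeholders chosen to reproduce the orders of magnitude, not the exact IMF or 13 August market numbers.

```python
# Sketch of the two debt-to-GDP ratios. The $120bn iBoxx debt stock is the
# figure quoted above; the GDP and FX inputs are rough placeholders, not the
# exact IMF or 13 August market numbers.

iboxx_debt_usd = 120e9

# 1) IMF April forecast, already expressed in dollars
gdp_usd_imf_april = 850e9                             # placeholder
print(f"{iboxx_debt_usd / gdp_usd_imf_april:.0%}")    # ~14%

# 2) IMF April local-currency GDP, converted at the depreciated August rate
gdp_try = 3_650e9                                     # placeholder, in lira
try_per_usd_13_aug = 7.0                              # placeholder spot rate
gdp_usd_at_aug_rate = gdp_try / try_per_usd_13_aug
print(f"{iboxx_debt_usd / gdp_usd_at_aug_rate:.0%}")  # ~23%
```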

Investors have reacted by selling the bonds, and this can be seen by the cluster of red squares for Turkey in the previous chart. To get a closer look, consider the yield curve plot below. This chart combines local currency bonds with foreign currency debt, comparing yields on 10 August (green dots) with those at the start of the year (pink dots).

Yield curves for Turkish bonds, 13 August 2018 and end December 2017.


The local bonds are clustered at the shorter maturities and have higher yields to reflect the currency risk and inflation risk for non-Turkish investors. Their yields have risen to as high as 25%, double the amount at the end of 2017. The foreign currency bonds yielded around 5% at the start of the year, and now yield twice that.

One reason that markets are so intolerant of Turkey is its financing requirements. Following the IMF’s methodology, we add maturing debt and interest coupons to the forecast current account deficit to show the country’s so-called gross refinancing requirement (GFR).

Gross refinancing requirement for Turkey, as percent of GDP converted to USD on 13 August

For Turkey, this refinancing requirement is about $50-60 billion annually for the next five years (the maximum horizon for IMF current account forecasts). If we express that as a percentage of GDP (converted from local currency at the 13 August exchange rate), then Turkey has to refinance more than 12% of GDP per year, putting the country in the top bracket for emerging market sovereigns, along with Brazil. Then again, Brazil has a proportionately much lower foreign currency debt burden.
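
The GFR arithmetic for a single year follows the same recipe. The inputs below are illustrative placeholders in line with the ballpark figures above, not the precise iBoxx or IMF numbers.

```python
# Sketch of the gross refinancing requirement (GFR) for one year. All inputs
# are illustrative placeholders, not the precise iBoxx or IMF figures.

maturing_debt = 27e9            # principal falling due within the year
interest_coupons = 5e9          # coupons payable on outstanding debt
current_account_deficit = 28e9  # forecast deficit that must be financed

gfr = maturing_debt + interest_coupons + current_account_deficit

gdp_try = 3_650e9               # placeholder local-currency GDP
try_per_usd_13_aug = 7.0        # placeholder spot rate
gdp_usd = gdp_try / try_per_usd_13_aug

print(f"GFR: ${gfr / 1e9:.0f}bn, about {gfr / gdp_usd:.0%} of converted GDP")
```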

Turkey’s problems don’t stem only from sovereign borrowing. The country’s private sector has also borrowed heavily in recent years. The Risky Finance corporate debt tool displays $35 billion of foreign currency borrowing (in USD, EUR and GBP) tracked by iBoxx. The chart shows bonds in red and equity market cap in pink.

Turkish corporate debt outstanding, with issuer market caps in pink.

The lion’s share of the debt is bank borrowing, led by domestic players such as Turkiye Is Bankasi, Yapi ve Kredi Bankasi, and Garanti Bankasi. These three banks and others in the sector have seen their share prices hammered such that their market caps are now less than half of their outstanding foreign currency debt.

With earnings denominated in Turkish lira, the ten banks tracked by iBoxx will have to collectively pay about $5 billion annually in hard currency principal repayments and bond coupons in the next five years. Some may face questions about their solvency, even though Turkey’s politically-controlled central bank has pledged to provide liquidity.


This takes us to our final chart, which shows credit exposures of EU banks to Turkey, compiled by the European Banking Authority in June 2017. These total €35 billion, led by BBVA and UniCredit. These banks take their exposure in the form of controlling equity stakes in Turkish banks, which Basel rules require to be treated as credit exposure.

The rationale for that is that banks are likely to bail out these investments rather than walk away and benefit from shareholder limited liability. A full-fledged Turkish banking crisis will test this rationale to the limit.


How Goldman punished its FICC failures

After being pulled down by lacklustre fixed income trading results in 2017, Goldman Sachs has been in recovery mode, delivering two solid quarters of revenues, led by its underwriting and advisory business. That chimes with the retirement of CEO and trading veteran Lloyd Blankfein and his replacement by David Solomon, whose background is in the advisory side.

Amid challenges to its old business model, Goldman is rejigging its trading business and has replaced a number of senior staff, notably securities division co-heads Pablo Salame and Isabelle Ealet. Others left of their own accord. Six months after the year-end, we now have a window into how that process may have played out at bonus time, at least in London. Risky Finance has created a bonus disclosure visualisation tool for subscribers, which we have tried out on Goldman Sachs.