European Commission
Rue de Spa 2
1000 Brussels
7th April, 2014
Dear Sirs,
Prime Collateralised Securities (“PCS”) is an independent, not-for-profit
initiative set up to help revive the European securitisation market on a sound
and robust basis.
We are writing this document following the public hearing held by the
European Commission on 10th March 2014 on the liquidity coverage ratios in
general and on the definition of “high-quality liquid assets” in particular for the
purposes of implementing the liquidity requirements mandated in CRD4.
This paper sets out our views on the EBA’s report on the definitions of
“extremely HQLA” and “HQLA” published in December 2013.
In view of our mandate, we will comment only on aspects of the proposed
report that are relevant to the securitisation market.
1. Deep concerns over methodology
PCS would like to draw attention to two methodological aspects where it
believes the EBA’s approach to the definition of “high-quality liquid assets”
(“HQLA”) is deeply flawed.
[A] The limitations of forward liquidity models in stressed situations
One of the most salient lessons of the financial crisis must surely be the
awareness of the limits of sophisticated modeling of past data to generate
accurate predictions about the future. This was the basis on which rating
agencies and banks built their analyses of credit risks, including for US subprime
mortgages and CDO-squareds.
These limits, widely recognised today, affect many of the changes that have
taken place, and are taking place, in the regulation of financial institutions.
However, the difficulties inherent in using past data to predict future credit
performance are insignificant when compared to the difficulties of using them
to predict future liquidity. A credit default almost invariably occurs when the
debtor does not have funds available to pay the creditor. The analysis of the
likelihood of this event is a financial analysis of future cash flows and
economic predictions. A failure of liquidity occurs when all individuals that
participate in a market decide they no longer want to purchase a particular
asset. The analysis of the likelihood of that event has to take into account
psychological motivations. Predictions including psychological motivations are
much, much less certain than the already uncertain predictions of financial
analysis.
This fact can easily be ascertained from even a cursory glance at liquidity
crises, from the Dutch tulip mania and the British South Sea Bubble to the US
equity bubble that ended in 1929. Individuals participating in markets flee for
rational and irrational reasons. Even when their reasons are rational, they
may not be connected to any financial analysis: the most common rational
reason for individual market participants suddenly to exit a market is the
belief that, even if the product itself is deemed robust, other market
participants have withdrawn, or are about to withdraw, leading to illiquidity
and a mark-to-market loss. It is the famous “rush for the door”.
PCS accepts that it is entirely possible to calculate the liquidity of any given
instrument today or in the past through accepted quantitative methods –
assuming the data is available. It is also probably possible to calculate
relative liquidity rankings of different products in times of financial normality,
based on the products’ inherent characteristics. But this is not what the EBA
is seeking in defining HQLAs. The methodology, by focusing on a crisis
period (2008-2012) and trying to define instruments that can be used by
banks in times of stress, seeks to determine, through quantitative analysis of
past data, what instruments can remain liquid in times of future crisis.
This, we believe, is fundamentally flawed. Liquidity collapses are
fundamentally crisis contingent. The assets that become illiquid are the
assets relevant to the crisis at hand: equities in the US in 1929, emerging
market sovereign debt in Asia in 1998, securitisations globally in 2008. Even
outside of systemic bubbles, an analysis over the long duration of smaller
liquidity difficulties will demonstrate that these were intimately linked to the
fears in a given market at a given time.
By using only data from the last crisis as a universal indicator of liquidity, the
EBA analysis becomes a classic case of “generals fighting the last war”.
This does not mean that PCS believes that no rules can be framed to capture
overarching aspects of liquidity. These broad rules, which do not focus on
asset types, may certainly be tested using long span past data.
These broad rules are well known. All participants in markets and their
regulators are aware of them. They include for bonds the following, all other
things being equal:
i. Large bond issues are more liquid than small bond issues;
ii. Bonds issued in standard forms are more liquid than bonds with
unusual, bespoke terms;
iii. Bonds denominated in widely traded currencies are more liquid than
bonds denominated in rarely traded currencies;
iv. Bonds that are part of a large market of similar bonds are more liquid
than bonds that are part of a small market;
v. Bonds of high credit quality are more liquid than bonds of lower credit
quality.
These rules can be tested quantitatively using data from multiple periods of
financial history.
But, as we have seen, liquidity can dry up in any market for irrational reasons.
Therefore, the second key to liquidity for the LCRs will be diversification, so
that no single illiquidity incident in any market can call into question the
liquidity position of financial institutions. However, as we will mention later in
the document, effective systemic liquidity buffers, to be credible, require a
sufficient range of HQLA-defined instruments.
In conclusion, PCS believes that the attempt to use past data from a
single crisis to generate universally relevant definitions of HQLA is
flawed. Because it is presented in a quantitative guise, it is also
potentially dangerously misleading. It can easily appear that the
conclusions such a quantitative approach leads to are “scientifically
accurate”. This, in turn, can lead to a perilous over-confidence in their
prudential value. This overconfidence, in our view, is not fundamentally
different from the confidence such quantitative analysis produced
before 2008.
We would strongly recommend simpler rules based on an approach that
blends the qualitative analysis of the impact on liquidity of broad
overarching rules and quantitative work based on long span data. Such
analysis should also allow, to the greatest extent reasonable, a strong
diversification of the LCRs.
[B] A puzzling division between “qualitative” and “quantitative”
During the public hearing held by the EBA in October 2013, the authority was
asked whether it had sought to incorporate a qualitative component in its
analysis. In response the EBA indicated that it would only use a “qualitative
approach” where a “quantitative approach” was not possible.
We find this response both puzzling and worrying. The response posits that
the two approaches are dichotomous and to be used in the alternative. It also
implies the primacy of the “quantitative approach” over the “qualitative” with
the former presumably “objective” rather than “subjective”, “scientific” rather
than “anecdotal”.
In reality, not only are the two approaches complementary in the analysis of
social facts (including credit risk), but the qualitative approach is a logically
necessary component of the quantitative one. In other words, there is no
quantitative approach that does not include qualitative assumptions, whether
these are intended or not and whether they are understood or not. The only
choice quantitative analysts have is either to acknowledge and make explicit
the qualitative assumptions embedded in their work or to proceed from the
erroneous position that no such assumptions exist. Following the first course allows for
an informed methodological debate over the proposed approach. The second
risks hiding flawed results behind a veneer of supposed “scientific accuracy”.
In this respect, we believe the point was never better made than by Professor
Hayek in his Nobel Prize acceptance speech:
“[The desire for a scientific approach] is sometimes carried to the point where
it is demanded that our theories must be formulated in such terms that they
refer only to measurable magnitudes.
It can hardly be denied that such a demand quite arbitrarily limits the facts
which are to be admitted as possible causes of the events which occur in the
real world. This view, which is often quite naively accepted as required by
scientific procedure, has some rather paradoxical consequences. We know, of
course, with regard to the market and similar social structures, a great many
facts which we cannot measure and on which indeed we have only some very
imprecise and general information. And because the effects of these facts in
any particular instance cannot be confirmed by quantitative evidence, they are
simply disregarded by those sworn to admit only what they regard as scientific
evidence: they thereupon happily proceed on the fiction that the factors which
they can measure are the only ones that are relevant.”1
The reason we find the EBA’s approach here particularly worrying is that this
false dichotomy is very similar to the model based risk analysis that was
prevalent prior to 2007. When examined carefully, the models used and
abused prior to the credit crisis were not mathematically incorrect. But they
embedded “qualitative” assumptions (for example on correlation) that were
entirely inaccurate. These assumptions, though, were not mathematically
derived but “qualitatively” derived. They represented the spoken or unspoken
views of analysts about how markets worked “in the real world”.
Where are the qualitative assumptions embedded in quantitative processes to
be found? They are found in the choice of data to be processed, in the way of
identifying the data and bucketing it (the question of granularity), in the choice
of periods for which the data is chosen, in identifying what data is relevant to
the analysis – and therefore analysed – and what is irrelevant – and therefore
ignored – and in the choice of proxies when key data is not available. They are
also found in the interpretation of the results of the analysis.
What we mean by the preceding fairly academic considerations is more easily
conveyed by a simple and trivial example. It would be entirely possible to test
the liquidity of bond issues during the crisis by classifying them according to
the letter of the alphabet with which their name begins. Having done this, it is
equally possible that a pattern appears. It could be that bonds whose first
letter is a vowel are much less liquid than bonds starting with consonants.
With unexamined assumptions, we would conclude that the pure “quantitative”
approach has proven that bonds beginning with consonants are “extremely high
quality liquid assets” whereas those beginning with vowels cannot be part of the
LCR buffers.
Of course, looking with a “qualitative” filter, one might realise that the names
of bonds are correlated to the language of the country in which they are
issued. Understanding this fact may lead one further to realise that words in
the Latin languages of southern Europe have a greater probability of starting
with a vowel, whereas words in the Saxon and Scandinavian languages of the
north more frequently start with a consonant. The liquidity gap is then
revealed as the result of the different liquidity characteristics of northern and
southern European bond issues during the crisis, not the letters making up the
name of the bonds.
This example does not show that the quantitative approach is wrong. On the
contrary, the qualitative perception that the difference is the result of
geography and not vocabulary itself needs to be tested quantitatively to have
any credibility.
This silly example does, however, demonstrate that a properly conducted
quantitative analysis needs to make its qualitative assumptions open and
explicit – here, that the letters making up the name of a fixed income instrument
influence its liquidity. Only then can these be challenged. Also, the validity of
a quantitative conclusion needs to be based on a qualitative understanding of
exactly what has happened during the crisis. The data, in and of itself, cannot
differentiate between geography and initial letter as the reason for poor
liquidity. Only a qualitative analysis grounded on a broader understanding of
events can do that.
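The confounding in this deliberately silly example can even be sketched numerically. The sketch below uses entirely invented figures (the probabilities, spread levels and sample sizes are our own assumptions, chosen only to make the mechanism visible):

```python
import random

random.seed(42)

# Invented figures, for illustration only: in the "south", bond names
# are more often vowel-initial (a language effect) AND crisis bid-offer
# spreads are wider (the true, geographic driver of illiquidity).
def make_bond(region):
    p_vowel = 0.6 if region == "south" else 0.2
    letter = "vowel" if random.random() < p_vowel else "consonant"
    spread = random.gauss(150 if region == "south" else 60, 20)  # bps
    return {"region": region, "letter": letter, "spread": spread}

bonds = [make_bond(r) for r in ["south"] * 500 + ["north"] * 500]

def mean_spread(subset):
    return sum(b["spread"] for b in subset) / len(subset)

# The naive "quantitative" cut: vowel-initial bonds look far less liquid.
by_letter = {
    k: mean_spread([b for b in bonds if b["letter"] == k])
    for k in ("vowel", "consonant")
}
print(by_letter)

# Controlling for the confounder: within each region, vowel and
# consonant spreads are essentially identical.
for region in ("south", "north"):
    v = mean_spread([b for b in bonds
                     if b["region"] == region and b["letter"] == "vowel"])
    c = mean_spread([b for b in bonds
                     if b["region"] == region and b["letter"] == "consonant"])
    print(region, round(v), round(c))
```

Grouped by first letter alone, vowel-initial bonds appear markedly less liquid; grouped by region first, the letter “effect” disappears.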
Before we succumb to the criticism that so trivial an example is surely
not relevant to the matter at hand, we would draw attention to the case of
covered bonds. We have a very good analysis of the behaviour of the bid-offer
spread in covered bonds during the crisis in the report partially
commissioned by PCS from William Peraudin and Risk Control (the “Peraudin
Paper”)2. The work done by the EBA concludes, based on the averaged data,
that covered bonds are, as a general matter, a highly liquid instrument.
However, the data (as seen in the Peraudin Paper) shows three very distinct
phases to the behaviour of covered bond spreads during the crisis. The first
phase was during the “financial crisis”, when covered bonds were issued
primarily by banks in countries where these were believed to benefit from the
implicit guarantee of the sovereign. During this phase, the bid-offer spread
remains tight and indicative of good liquidity. The second phase covers the
period in 2011 when the “financial crisis” gives way to the “sovereign crisis”.
Then, as the implicit guarantee to banks issuing covered bonds begins to look
less robust to the markets, the bid-offer spread of these bonds begins an
inexorable climb. This indicates a declining liquidity for this asset class. Then,
and very suddenly, the bid-offer spread comes in and stays tighter. This third
phase begins precisely at the moment – in December 2011 – the European
Central Bank announces the LTRO. This central bank purchasing provided
the support to liquidity in the covered bond market that is clearly visible in the
data.3
In addition, we must not lose sight of the fact that the largest part, we assume,
of the covered bond data set will be made up of German Pfandbrief. During
the period examined by the EBA, German Pfandbrief were issued by banks
widely believed to benefit from the implicit guarantee of one of the few
European countries whose financial soundness was beyond question.
So what does the data tell us? That bonds which benefit from a payment
covenant from a bank and asset coverage are very liquid solely by virtue of
their nature as covered bonds or that bonds believed to be backed by the
strongest economy in Europe are liquid? That covered bonds are intrinsically
liquid by virtue of their double covenant structure or that bonds which a central
bank has agreed to purchase will find liquidity? That covered bonds are liquid
in crisis or that covered bonds are liquid in crises that do not affect the credit
perception of sovereigns?
Whether the quantitative approach provides a sound measure of covered
bonds’ liquidity depends entirely on the qualitative assumptions built into the
analysis: do you run the numbers on all covered bonds versus all bank
unsecured debt or do you run the numbers on all German bank debt (covered
and uncovered) versus all non-German bank debt (covered and uncovered)?
And how do you then take into account the special effect of ECB purchases?
We should make it clear, as should be obvious from our general comments on
the theoretical problems of modeling liquidity, that PCS believes that covered
bonds should clearly be allowed as HQLAs and we are not suggesting that
they are illiquid. We are suggesting, though, that whether they should be seen
as liquid in totality, or how liquid they are compared to other asset classes,
may not be apparent from the nature of the EBA analysis.
2 The paper may be found here:
3 We draw attention to the commentary found in the Peraudin Paper on page 12 as to the
academic analysis of the impact of this scheme.
Based on the above, we believe that, by stipulating an artificial
dichotomy between the “quantitative approach” and the “qualitative
approach”, the EBA is leaving opaque and unexamined the qualitative
assumptions that are embedded in its conclusions. We believe further
that only by bringing a qualitative understanding of markets, of how
they operate and of the actual events that unfolded during the period for
which the EBA examined data, can you reach a sound basis for a
quantitative approach.
2. Unexamined qualitative assumptions of the EBA analysis in relation
to securitisation
The reason for the theoretical need to make explicit the qualitative
assumptions underlying the quantitative approach, so that they are open to
challenge, is that in the case of the EBA’s work on the definition of HQLA and
securitisation, key unexamined assumptions appear to have been made. We
believe that a number of these effectively invalidate the conclusions. (We
also acknowledge that some of these assumptions are the result of the
specific mandate under which the EBA was required to operate. But, if this is
so, it behooves the EBA to draw attention to the limited value of conclusions
drawn from incorrect assumptions.)
Time irrelevance
The first qualitative assumption made in the EBA’s work is that it is possible to
derive a universal view of the liquidity of securitisation instruments based on
an analysis of a single specific crisis, namely 2008-2012. This is
compounded by the specific assumption, in the case of securitisations, that
their comparative liquidity behaviour during the crisis was unaffected by the
fact that the crisis in 2008 started exactly in a segment of the securitisation
market.
This, in itself, might be defensible if the calibration of other asset classes had
been done for periods where those asset classes had been under stress.
Crises of varying depth can be found for equities, gold and covered bonds.
This would have suggested that an attempt had been made to compare like
with like: in other words, what happens to any asset class’s liquidity when it hits
a crisis.
This, however, was not done. It follows from this that the EBA analysis for
securitisation is only valid if the LCR buffers are needed in a crisis that has
near identical beginnings to that of 2008. This is rare in human history, but it
is made all the more unlikely when one examines the second incorrect
assumption apparently made by the EBA.
What’s in a name?
By attempting to derive an understanding of the liquidity behaviour of future
securitisations based solely on the behaviour of past securitisations, the EBA
has assumed that the instruments issued and traded under the name of ABS
presently and in the future are and will be, qualitatively and in the market’s
perception, the same type of instruments that were issued and traded prior to
and during 2008.
Since the crisis, regulators and legislators globally have introduced a myriad
of changes with the explicit intention of changing the characteristics of ABS.
Indeed, the Commission and the EU generally took a decisive leadership
position in this process. These measures include the regulation of rating
agencies, the requirements for originator retention (“skin in the game”),
increased capital penalties or outright prohibitions on entities purchasing
re-securitisations, and increased disclosure and investor due diligence
requirements. These official rules have been complemented in Europe by the
action of central banks whose repo rules now require substantially increased
disclosures for securitisations. To this can be added market initiatives such
as the PCS quality label.
The assumption of the EBA’s work must be that none of these have been of
any use when it comes to liquidity.
The assumption made here is that instruments that have the same name will
behave in the same way. Therefore, we can model their future performance
based on data for similarly named instruments. This, it is further assumed,
can be done without having to perform any analysis to determine whether,
notwithstanding the similarity of names, these really are similar products. This
is the assumption on which rating agencies based their modeling of US
sub-prime: namely, that the US sub-prime market of 2005-6 was the same as,
and could be expected to display the same credit dynamics as, the US
sub-prime market of 1999-2001.
We believe that one of the crucial lessons of the crisis (and the subprime
crisis in particular) is that a long and hard qualitative look needs to be given to
products before we model their future behaviour based on the past behaviour
of similarly named products, so as to ensure we really are modeling the same
products.
Granularity
Another unexamined and, in our view, flawed assumption made by the EBA
relates to the granularity of the data: in other words, how they chose to divide
the data into categories to be analysed.
As far as one can tell, the only filter the EBA appears willing to use on the
ABS data is to divide it by underlying asset class, making a distinction
between RMBS, ABS, CMBS, etc. Beyond using a ratings criterion for “high
quality”, no attempt appears to have been made to distinguish between high
quality securitisations (simple, transparent pass-throughs in traditional and
simple asset classes with no originate-to-distribute bias) and other complex,
opaque ABS generated outside of the traditional European banking model.
As for the credit quality filter, it does not need retelling that a trigger for the
financial crisis of 2008 was the award by the rating agencies of high ratings to
flawed ABS products. As a result, no investor was likely to pay much heed to
ratings of ABS at that point. Therefore, the use of credit ratings for ABS as a
filter for the 2008-2012 period was exceedingly unlikely to do anything but
depress the comparative result of ABS against other asset classes. Again, the
insight that a ratings filter was of limited value for ABS for that particular
period can only come from a proper qualitative analysis of the unfolding of the
crisis. And the conclusion (based on the events of 2008) that the high credit
quality of ABS will have little impact on their liquidity in the future can only be
based on the implausible belief that neither the new criteria for ABS issued by
the rating agencies, nor the lesser reliance on ratings by investors (mandated
and common sensical), nor the regulation of the agencies, will have any
positive impact.
The general lack of filters is consistent with the EBA’s rejection of qualitative
considerations wherever it feels that quantitative analysis alone should prevail.
We believe the data used by the EBA came divided into asset class and
geographic sets. Since geographic sets cannot be used due to the political
constraints of setting global standards, the only distinctions left in the analysis
are those based on asset class. Only if the EBA were prepared explicitly to
involve itself with a deeper qualitative understanding of both the ABS market
and the events of the financial crisis, would they be able to devise a set of
more accurate and relevant filters.
In particular, PCS believes that a definition of high quality securitisation can
be crafted and that such a definition will be relevant to the types of
securitisations that will be issued in the future. We also believe that a
quantitative analysis of the
behaviour of such securitisations from a liquidity point of view would show
significantly better results, even during the 2008-2012 crisis.4 In fact, PCS
has commissioned some work on precisely this question. We hope to have
the results very soon and will circulate them as soon as they are available.
When combined with an understanding of the negative bias for securitisation
from limiting the sample to the crisis period, this should lead to a much higher
estimation of the liquidity potential of ABS.
We also believe this type of more granular approach is consistent with the
work done by the EBA in other asset classes, such as sovereigns, where
clearly the sovereign debt of Malta or Slovenia cannot be compared, from a
liquidity point of view, with that of Germany. A consistent approach to all the
examined asset classes would require that the EBA at least attempt to
devise a better conceptual analytical framework for ABS than that which they
have apparently chosen. We note, of course, that this was the course that
EIOPA chose, at the suggestion of the Commission, when looking at capital
calibrations for Solvency II.
4 A definition of high quality securitisation can be found in the PCS response to the EBA
Questionnaire on Securitisation, which we will be sending together with this report and which
we understand the Commission has already seen.
Overall, we believe that the implicit qualitative assumptions contained in
the EBA analysis of ABS for HQLA purposes are incorrect in key
elements: the assumption that the recent crisis is a valid proxy for
liquidity behaviour in crises generally; the assumption that past data on
securities generically labeled “ABS” is a robust basis for the analysis of
ABS in the future; and the assumption that only asset class distinctions
affect liquidity in ABS.
The joint effect of (i) doubts about the true value of quantitative analysis
of the type conducted by the EBA to model liquidity behaviour, and (ii) the
shortcomings likely to flow from the belief that quantitative analytics
contains no qualitative assumptions – with the resulting absence of a
qualitative analysis of what actually happened during the crisis and the
consequent use of flawed and unexamined qualitative assumptions – has
resulted in a deeply inaccurate methodology for analysing the
relative liquidity value of ABS for the purposes of defining HQLA. This
has unfairly penalized ABS compared to other asset classes.
3. Methodological issues within the chosen approach
As noted above, PCS has fundamental concerns about the methodology
selected for the EBA report. However, even within the confines of that
methodology, we are concerned by certain aspects of the EBA’s work.
[A] Data set choice
The data set for ABS selected by the EBA is the MIFID data set. PCS
understands that this data set only contains about 1,000 data points. So,
whereas the slides speak of 9,000,000 trades and 13,000 bonds having been
looked at, the set used for a segment of the capital markets that may be a key
to mitigating the impact of the deleveraging of banks on the European
economy is statistically minuscule.
The choice of the MIFID data set is especially puzzling as we are aware of
other much more abundant sets, such as the Bloomberg ABS data set with
about seven times more data. Of course, it is possible that the EBA had good
reasons to select the much smaller set. However, such reasons should be
clearly disclosed so that policy makers can judge whether this was a wise
choice.
We note, for example, in the context of gold as a potential HQLA candidate,
that the EBA forsook the use of the MIFID data set as being too limited.
[B] Granularity
We have discussed above the granularity concerns over the lack of a “high
quality securitisation” filter when analyzing the data.
However, even within the data that is available to the EBA – ie without
requiring the creation of additional filters – we remain concerned over some
choices that appear to have been made.
A key choice is granularity within the selected time period of 2008 to 2012.
Based on work done by the industry and the available MIFID data, we would
enjoin the EBA to divide the crisis period into shorter sub-periods. This is
logical since the LCRs are created to deal with short-term liquidity stresses
and so HQLAs should remain liquid at all times.
The reason we would enjoin such division is that the work of William
Peraudin, already cited, covering the period 2010-2012 and using similar
actual bid-offer spread data (rather than, as the EBA did, extrapolated data),
clearly shows that some types of covered bonds were less liquid than high
quality ABS. The overall worse performance of ABS over the longer period of
2008-2012 appears to be the result of the catastrophic performance of ABS
during the ABS crisis. (And, as we have written above, unless one believes
that the next and all future crises will be ABS crises, this is not a good
assumption.)
If this is the case, this would suggest that for 50% of the examined
period ABS performed better than most covered bonds.
Also, PCS would enjoin the EBA to publish a breakdown of relative liquidity
behaviour by geography, not just for ABS but also for covered bonds.
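A toy calculation illustrates why sub-period granularity matters. The spread levels below are invented placeholders, not the MIFID or Peraudin data; they show only how a single 2008-2012 average can mask a ranking that reverses in the later sub-period:

```python
from statistics import mean

# Invented monthly average bid-offer spreads (basis points), keyed by
# (year, month), constant within each of two phases for simplicity.
def flat_series(level_2008_09, level_2010_12):
    return {(y, m): (level_2008_09 if y < 2010 else level_2010_12)
            for y in range(2008, 2013) for m in range(1, 13)}

covered = flat_series(40, 90)   # tight early, widens in the sovereign crisis
hq_abs = flat_series(200, 60)   # catastrophic in the ABS crisis, tight after

def period_mean(series, first_year, last_year):
    # Average spread over the calendar years [first_year, last_year].
    return mean(v for (y, _), v in series.items()
                if first_year <= y <= last_year)

# Averaged over the whole window, ABS looks the less liquid asset...
print(period_mean(covered, 2008, 2012), period_mean(hq_abs, 2008, 2012))
# ...but over the 2010-2012 sub-period the ranking reverses.
print(period_mean(covered, 2010, 2012), period_mean(hq_abs, 2010, 2012))
```

With these invented numbers, the full-period averages are 70 bps (covered) versus 116 bps (ABS), while the 2010-2012 averages are 90 bps versus 60 bps: the single average conceals the reversal.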
Therefore, even within the flawed methodology used by the EBA, we are
concerned that not enough appears to have been done to ensure an
appropriate degree of robustness.
[C] The use of proxies and absent sets
We hope that the EBA will also explain why they have elected to use
academic proxies, such as the Roll test for bid-offer spreads, rather than
actual data. We understand the EBA’s statement that these are widely used
in academic circles but considering the importance for Europe’s economy of
the conclusions of this report, we cannot but wonder why more was not done
to find the actual data.
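For reference, the Roll measure infers an effective bid-offer spread from the first-order serial covariance of observed price changes, as roughly 2·sqrt(−cov(Δp_t, Δp_t−1)). A minimal sketch on simulated prices with a known spread (the simulation itself is our own illustrative assumption, not the EBA’s procedure):

```python
import math
import random

def roll_spread(prices):
    """Roll (1984) proxy: effective spread ~ 2 * sqrt(-cov(dp_t, dp_{t-1})).
    Returns None when the serial covariance is non-negative, in which
    case the estimator is undefined."""
    dp = [b - a for a, b in zip(prices, prices[1:])]
    mu = sum(dp) / len(dp)
    cov = sum((dp[i] - mu) * (dp[i + 1] - mu)
              for i in range(len(dp) - 1)) / (len(dp) - 1)
    return 2 * math.sqrt(-cov) if cov < 0 else None

# Simulated trade prices: a small mid-price random walk plus bid-ask
# "bounce" around a known, constant spread of 1.0.
random.seed(0)
true_spread, mid, prices = 1.0, 100.0, []
for _ in range(20000):
    mid += random.gauss(0, 0.05)                    # mid-price random walk
    prices.append(mid + random.choice((-1, 1)) * true_spread / 2)

print(roll_spread(prices))  # close to the true spread of 1.0
```

When trade prices do not exhibit the bid-ask bounce the estimator presumes, as with thinly traded ABS, the covariance can turn non-negative and the proxy breaks down entirely, which is one more reason actual bid-offer data would be preferable.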
[D] The definition of “liquidity”
The LCR buffers are designed to contain assets that may be sold in a crisis
quickly and without suffering substantial price erosion. However, the EBA’s
approach has focused enormously on indicators of trading volumes. In other
words, the actual definition of HQLA selected by the EBA appears to be: “How
often is this asset traded?” rather than “Can I sell this asset at par if I need
to?”.
Traditionally, ABS has not traded extensively. The reasons for this are not
related to its intrinsically illiquid nature but to the lack of reasons for an
extensive secondary market to arise. Basically, for the whole period of ABS’
growth, issuance has tended to be greater than redemptions year on year.
This meant that one could always buy new issues rather than have to seek
old issues. More importantly, until the crisis, most ABS shared a number of
characteristics:
• Because of the existence of credit enhancement calculated by the
credit rating agencies to absorb macro-economic shocks, the ratings of
high quality ABS were extremely stable;
• Most European ABS were floating rate and so their price did not
fluctuate with interest rate movements;
• Many ABS issues were of fairly short duration and average life.
Because of these reasons, the prices of ABS were extremely stable. When
assets have extremely stable prices over long periods and continuous new
supply, there is no incentive to create a trading market. No money can be
made in such a market by traders, and investors seeking ABS could always turn
to the new supply.
This is why looking at trading volumes unfairly penalizes ABS. Yet
qualitative evidence, including many discussions with investors, underlines
the fact that there was never a problem in disposing of an ABS position at or
about par.5
Another quantitative element that demonstrates this proposition is that during
the financial crisis, automotive manufacturers were able to issue ABS without
difficulty or price spike. This is indicative of the existence of a ready market
for this paper.
This issue also calls into question the assertion of the EBA that their data is
more robust because they selected seven data sets rather than one or two.
We believe that some of those data sets (volumes traded and number of
no-trading days), although not irrelevant, do not capture the key element of
liquidity which the LCR buffers are meant to address. At best, these are very
imperfect proxies. In the case of securitisations, where there are good
reasons (as set out above) for the lack of a secondary market – reasons not
connected to a difficulty in selling the securities – they profoundly distort the
results.
5 This was not, of course, the case during the height of the “securitisation crisis”. However,
that period was extremely short, and reflects the point made earlier in our paper that liquidity
will always drain from any asset class perceived to be at the fulcrum of the crisis. On that
basis and by definition, no assets could ever qualify for HQLA status.
By selecting many data points relating to the existence of a high-volume
secondary market rather than the capacity to sell swiftly without loss (as
indicated, for example, by bid-offer spreads), the EBA work distorts the
real liquidity strengths of high quality securitisations and focuses on the
wrong definition of liquidity for the LCR buffers.
4. Inconsistency with the Solvency II approach
In dealing with the calibration of capital requirements for insurance companies
in the context of Solvency II, EIOPA created a definition of high quality
securitisations. It is important to remember in this context that the EIOPA
analysis is fundamentally similar to the analysis that is required to be
conducted for the definition of HQLAs.
Both EIOPA and the EBA were mandated to look at the issue of what would
be the likely result of a sale of a security or other asset in a market in some
distress. We fully acknowledge that, at a highly technical level, there were
distinctions of detail in the exact methodology required to be used by both
authorities. (For example, EIOPA was required to examine the possible price
variation within a VaR analysis at a 99.5% level of confidence.) However, it
would be more than unfortunate if very small technical differences in the
specifications of what are fundamentally similar tasks led to radically different
outcomes.
It does not, of course, follow necessarily that, because EIOPA has identified a
group of high quality securitisations highly likely to have much better
liquidity characteristics than the average securitisation, such more liquid
securitisations should be granted HQLA status. However, by excluding them,
the European Union would be creating a discontinuous regulatory structure
and fragmenting the potential future securitisation market. This should only
be done, in our view, if there are indeed very good reasons to do so. As our
paper has attempted to demonstrate, we do not believe this is the case.
In our view, a similar treatment in the European regulatory schemes of similar
assets when the targeted prudential outcome is the same (as with Solvency II
and the definition of HQLA) would not only provide a consistent framework but
allow for a deep market for high quality securitisation as all potential investors
would be able to deal in the same way with the same securities.
5. The importance of reaching the right definition of HQLA
Although the EBA report and our objections may appear highly technical,
there is a lot at stake in the outcome.
[A] Re-creating systemic risk
Although no one wants illiquid securities to be part of the liquidity buffers,
it is important to understand a paradox at the heart of the EBA’s work in the
definition of HQLAs. On the surface, it may appear that by being very strict in
rejecting asset classes as illiquid, the EBA is following a conservative
approach. However, there is a point at which limiting the eligible asset
classes tips from conservatism in avoiding systemic risk to re-creating
systemic risk on a large scale.
This phenomenon occurs because, if the list of HQLA is too small, all the
banks' liquidity buffers will be invested in the same assets. This pooling effect
would be made worse by the propensity of banks to invest in the highest
yielding asset in any given set. This propensity can be countered by
regulatory action demanding diversification. But if the eligible set is too small
and the available pool of any type of HQLA too limited, such regulatory
requirements can be quite limited in scope.
If the banks' liquidity buffers are invested in the same assets, at times of
banking crisis, all the banks will find themselves trying to liquidate the same
assets in the markets. And the market will be aware that they are doing so.
This is likely to result in a steep drop in price, forcing the banks to sell even
more, leading to a greater drop and ultimately a collapse in the monetary and
systemic value of the LCR buffers.
If you take into account the difficulties of assessing, before a crisis, what will
and will not be liquid (see 1[A] above), it follows that, within reason, a
portfolio approach should also prevail in defining HQLAs and therefore
the counsel of wisdom and safety is to allow more asset classes in the
definition of HQLAs rather than fewer.
Put in other words, the financial system is likely to be a lot safer if the policy
approach is to see what to exclude based on good analysis rather than what
to include based on a restrictive interpretation.
[B] The European economy
The case for the importance of securitisation to the European economy has been
well and decisively made by the Commission in its Communication on Long-Term
Financing, as well as by the European Parliament in its response to the
Commission Green Paper on Long-Term Financing and the European Central
Bank on many occasions.
[C] One time decision
During the hearing, the EBA indicated that the definition of HQLAs would be
monitored over time. This, at first blush, could appear to be the answer to
those who warn that the choice of a particularly bad period for ABS (2008-
2012) will unfairly taint the asset class. In time, the counter argument runs, if
ABS shows that it is liquid, it can gain a place in the HQLAs definition.
We believe that this reasoning is deeply flawed. Especially when one takes
into account the key role of banks in liquefying markets, the absence of ABS
within the HQLA definition will dramatically reduce its liquidity. This is a
simple circle: in order to be liquid enough to become a HQLA, you need
already to be a HQLA. No amount of future monitoring is going to change that.
[D] Limited Risk
For the reasons set out above, PCS believes that high quality ABS, benefiting
from the legislative and market changes that have improved the market both
objectively and in the perception of investors, will be as liquid as other asset
classes destined for HQLA treatment.
We acknowledge that quantitative data supporting this proposition is light. We
have explained why this is so and why, in the absence of an abundance of
relevant data, one should not be tempted to resort to irrelevant data. When
data is thin, policy makers and regulators have little option but to make rules
based on what relevant data they do have, whatever relevant proxies they can
find and then use intelligent qualitative analysis to fill in the gaps.
However, in the case of the definition of HQLA, if the data is light the
downside can be easily controlled by placing an appropriate cap on the
proportion of the LCR buffers that can be made up of high quality
securitisations. By limiting the amount of securitisations in any institution’s
buffer to 20% of the total, policy makers can both support the growth of a
strong high quality securitisation market yet compensate for the thinness of
the relevant data.
Later on, as high quality securitisations demonstrate robust liquidity, such
limits could be revised upwards.
Conclusions and Proposals
PCS believes that at a fundamental level a quantitative approach cannot
deliver genuinely robust predictions as to what will be HQLAs in real
market conditions unless it is supported and accompanied by a
qualitative and general approach, both being necessary to deliver much
better outcomes for European financial stability.
In addition, when using a quantitative approach, you need to
acknowledge the qualitative assumptions that are necessary. By not
doing so, the EBA has allowed a number of erroneous qualitative
assumptions to undermine its conclusions regarding ABS:
universalization of a limited time period, failure to see relevant
differences and lack of granularity.
Finally, even within the confines of the EBA’s chosen quantitative
approach, a number of puzzling choices need to be explained, as, on the
surface, they indicate an inconsistent and weak basis for the analytical
conclusions.
These cumulative problems have resulted in an unfair and unjustified
treatment of ABS which will increase overall systemic financial risk and
hurt the prospects of growth for the European economy.
In response, PCS would argue that:
• The need to diversify, within reason, the number of asset classes
belonging to the HQLA definition for sound prudential reasons (coupled
with strong diversification requirements for each LCR buffer to
limit the “all eggs in the same basket” risk for individual
institutions and, in aggregate, for the financial system);
• The good performance of high quality ABS during the crisis
compared to many other asset classes (as shown by the work on
bid-offer in the Peraudin Paper);
• The link shown by this work between better liquidity and high
quality ABS;
• The methodological weaknesses of the EBA work in concluding
that high quality ABS is not sufficiently liquid for HQLA definition
(outside a limited RMBS category);
• The benefits of a consistent regulatory approach across
regulators (such as the EBA and EIOPA);
• The capacity to limit the downside of the limited data by placing a
cap on the amount of high quality ABS in any institution’s LCR buffer;
all argue for a definition of high quality securitisation to be used in
approaching the issue of the definition of HQLA, based on the work
already done by EIOPA, and for such high quality securitisations to be
included in the definition of HQLA with some appropriate cap which we
suggest could lie at 20% of the total LCR buffer of any institution.
Yours sincerely
Ian Bell
Head of the PCS Secretariat