Decision Making in Banking — Michael Mainelli

Martin C. W. Walker
Feb 24, 2021

--

What’s needed is more challenging advice. Back to asking hard questions and generating hypotheses that might be tested — how could we build a bank that never had a run? Never went bust? Never dissatisfied a customer?

Michael Mainelli is Chairman of Z/Yen, a commercial think-tank, which he co-founded in 1994. He is Emeritus Gresham Professor of Commerce at Gresham College in London, an Alderman of the City of London (Broad Street Ward), and founder of the Long Finance initiative. He is the co-author of The Price of Fish: A New Approach to Wicked Economics and Better Decisions.

Q. Senior management in banks make many types of decisions: decisions regarding business strategy, risk appetite, investment in new systems, HR policies, and so on. In general, what types of management decisions do you think banks are good at?

A tough question. Banks are generally quite prudent in the adoption of new ideas. Obviously bad ideas generally do not get that far. They are also not bad at making decisions related to credit, but you would hope that were the case, given that making decisions about credit is one of the main things banks do. Still, even credit decisions get out of control on occasion, as we well know.

Q. What types of management decisions do you think banks could be better at?

Pretty much everything else…

Banks are in a troublesome spot: there are high barriers to entry and they work within a highly regulated environment. They do not want to lose their licences, and competitors are not snapping at their heels in core products. This creates a bias against innovative decisions unless their regulators push them.

Banks have particular problems with decisions that require ethical evaluation. Many such decisions are about ‘long term’ products, for example the decisions that led to PPI (1) mis-selling, problems with endowment mortgages (2), pension products, or actively managed funds versus trackers. In the UK, banks have long favoured newer customers over existing ones by offering preferential rates. Is that fair? Before banks launch new products, perhaps they should ask themselves, “Are we doing right by our existing customers?”

Q. How good are banks at defining their problems before they attempt to solve them?

The major banking centres do not have a monopoly on clear thinking. You find banks around the world that are very good at thinking objectively about some of the problems they face. Banks are generally good at dealing with operational research problems. For example, most banks face tough network decisions: what to do with their real estate, where to open branches, where to close them, and how technology will change branches. These they often define and analyse well.

In contrast, I find banks are not good at undefined problems that are not amenable to operational research. One symptom of this is their constant overuse of consultants. If banks were trust-based organisations, they would trust their internal experts more, rather than so frequently bringing in outsiders who often lack relevant expertise. Another symptom of banks’ struggles to define their problems is the endless meetings they hold on a topic, meetings that generally reach no conclusion and frequently have no agenda.

Q. Do you think banks pay sufficient attention to the quality and relevance of the information they have available for the decision-making process?

It’s not just about data. Decision makers need to put the science back into data science. I have a hypothesis and data, so I use the data to test the hypothesis and I draw conclusions. Big data needs big hypotheses. And big hypotheses need testing. If anything, banks lack those big, daring hypotheses. And we’re back to high barriers to entry within a highly regulated environment.

There is one particular aspect of data that banks need to pay attention to: variance (3). The extent to which a measure varies can tell you an enormous amount about the operation of a business or process. PPI was an example of where the variance was disturbingly low. PPI was an insurance product, so you would expect a varying pattern of claims under the policies; instead, the data showed an unvarying picture of few or no claims.
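
To make the variance point concrete, here is a minimal sketch with invented monthly claim counts (the figures are illustrative assumptions, not real PPI data):

```python
# Illustrative only: invented monthly claim counts, not real PPI data.
import statistics

# A normal insurance book: claims fluctuate from month to month.
motor_claims = [41, 35, 52, 38, 47, 44, 39, 55, 42, 36, 49, 43]
# A suspiciously flat book: almost nobody ever claims.
ppi_like_claims = [0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0]

for name, claims in [("motor", motor_claims), ("ppi-like", ppi_like_claims)]:
    mean = statistics.mean(claims)
    var = statistics.variance(claims)  # sample variance, as in footnote 3
    print(f"{name}: mean claims = {mean:.1f}, variance = {var:.1f}")

# A variance near zero on a product that ought to show a varying pattern
# of claims is exactly the warning sign described above.
```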

Which takes us to a key piece of information: how a business makes money. Senior management in banks really need to understand how they are making money. As much as they need to understand why a decision resulted in a loss, they need to understand why some products are outrageously profitable. Outrageous profits often indicate lack of competition, information asymmetries, agency issues, or externalities. High profits pose questions as disturbing as losses.

Q. In general do you think banks make sufficient effort to fully assess the outcome of major decisions?

Post-decision analysis is poor to non-existent.

Where it does exist, it is too dependent on numbers provided by the finance department. To validate outcomes, banks need to triangulate more, in other words validate the outcome from two or more perspectives or sources of data. The amount of business may have increased after the decision, based on the numbers from finance, but is that really reflected in the numbers from Risk or in the amount of effort required in the Operations department?

In general, banks need to put more thought into measures of success beyond simple profit. They could also greatly benefit from greater use of “naïve predictive tools” such as Bayes’ Theorem (4). This may sound complicated but simply means making better use of prior knowledge of conditions that might be related to the event. The classic example is in medical diagnosis: the chance that a positive cancer test is a false positive will generally be much higher in younger than in older subjects, because the underlying rate of disease is lower, something that needs to be taken into consideration if you design a screening programme.
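
As a worked illustration of that screening point, the sketch below applies Bayes’ theorem (4) with assumed sensitivity, specificity and prevalence figures; the numbers are chosen purely to show the effect, not taken from clinical data:

```python
# Bayes' theorem: P(disease | positive) =
#     P(positive | disease) * P(disease) / P(positive)
# All figures below are illustrative assumptions, not clinical data.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a positive screening result is a true positive."""
    p_positive = (sensitivity * prevalence
                  + (1 - specificity) * (1 - prevalence))
    return sensitivity * prevalence / p_positive

sensitivity = 0.90   # assumed P(test positive | disease present)
specificity = 0.95   # assumed P(test negative | disease absent)

# The same test applied to groups with different prior prevalence.
for group, prevalence in [("younger cohort", 0.001), ("older cohort", 0.02)]:
    ppv = positive_predictive_value(prevalence, sensitivity, specificity)
    print(f"{group}: P(disease | positive) = {ppv:.1%}, "
          f"so {1 - ppv:.1%} of positives are false alarms")
```

The lower the prior prevalence, the larger the share of positive results that are false alarms, which is why the design of a screening programme depends on who is being screened.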

However, the structure of banks in general makes it very difficult to assess outcomes. Matrix management, diffused accountability, and organisational complexity can make it very hard to judge who made a decision, as well as what the outcome was.

Q. Do you think banks are good at learning the lessons from their decisions, whether the outcomes were successful or unsuccessful?

No. Far too rarely, going back to our scientific approach point, do banks set out in advance how their hypotheses will be tested. You cannot learn unless you have appropriate metrics.

A major problem is “punctuated baselines”, by which I mean the baseline you measure success against is continually being changed. I have seen the example of a global credit department that was continually changing its procedures. They held a weekly meeting to discuss changes and made fortnightly changes to the written procedures. The end result was many pages of procedures to which nobody did or could pay any attention, and a complete inability to measure what was working or not. I recommended that the first step to solving their problems was to stop making changes, but they refused. Nobody wanted to be seen to stop the machine. In the midst of the confusion, they incurred enormous credit losses and fines for sanctions-busting.

To learn the lessons from decisions, banks also need to make better use of experimental controls (5). Banks are large enough to make a change in one or more teams while leaving the others alone, allowing you to minimise the effects of other variables and truly learn what works.
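
A minimal sketch of such a controlled comparison, using invented team-level figures and a simple permutation test as the evaluation method (both the metric and the data are assumptions for illustration):

```python
# Compare teams given a process change against teams left alone (the control).
# All figures are invented for illustration; the metric could be any
# operational measure agreed before the change is made.
import random

random.seed(0)

treated = [0.92, 0.88, 0.95, 0.90]   # e.g. weekly on-time completion rates
control = [0.84, 0.86, 0.83, 0.87]

observed = sum(treated) / len(treated) - sum(control) / len(control)

# Permutation test: how often would a gap this large arise by chance
# if the change had no effect?
pooled = treated + control
n_treated = len(treated)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = (sum(pooled[:n_treated]) / n_treated
            - sum(pooled[n_treated:]) / (len(pooled) - n_treated))
    if diff >= observed:
        extreme += 1

print(f"observed difference: {observed:.3f}")
print(f"approximate p-value: {extreme / trials:.3f}")
```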

Sometimes the top level does not even want any meaningful operational metrics, which makes it even harder to assess the outcomes of decisions.

Q. Are there any tools or techniques that could improve decision making, e.g. Big Data or Artificial Intelligence?

People talk about data as an asset. However, it is only an asset if it generates a return; otherwise it is a liability, particularly, for example, if you are a bank holding personal data subject to GDPR (6). Why collect data if it is not going to add any value but has costs and regulatory complications?

As before, Big Data requires Big Hypotheses.

People get far too preoccupied with big maps of data or colourful dashboards. Sometimes you may see an anomaly in data presented this way (which may help decision making), though often this simply reflects an error in the data itself. If the map of data does not drive decision making, it has little value beyond the aesthetic.

There are some specific techniques that should be used more. For example, despite Artificial Intelligence’s popularity, people should look more at robust statistical techniques such as Support Vector Machines (7), rather than Neural Networks (8), for their replicable results: science again. We also push banks hard to ‘predict’ outcomes as a key control. Predictive analytics (9) is popular, but we push using it for things like predicting daily profit from operational activity numbers, or determining the trading losses that will flow from today using today’s activity numbers. If you can predict results from activities, for example using support vector machines, you have proven that you have a grip on the big picture.
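
As an illustration of predicting results from activities, the sketch below fits a support vector machine regression (7) on synthetic daily activity numbers; the feature names, the invented relationship and the use of scikit-learn are assumptions made for the example, not a description of any bank’s actual control:

```python
# Predict daily P&L (in £ thousands) from operational activity numbers
# with a support vector machine for regression (SVR).
# The data and the relationship below are synthetic; in practice the value
# of such a control comes from comparing predictions against actual daily
# results and investigating the days where they diverge sharply.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(42)
n_days = 250

trades = rng.poisson(1_000, n_days)        # trades processed per day
exceptions = rng.poisson(25, n_days)       # failed or repaired trades
client_orders = rng.poisson(400, n_days)   # new client orders

# Invented relationship between activity and daily P&L, plus noise.
pnl = (0.05 * trades - 2.0 * exceptions + 0.12 * client_orders
       + rng.normal(0, 5, n_days))

X = np.column_stack([trades, exceptions, client_orders])
X_train, X_test, y_train, y_test = train_test_split(
    X, pnl, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100, epsilon=1.0))
model.fit(X_train, y_train)

print("R^2 on held-out days:", round(model.score(X_test, y_test), 3))
# Days with a large gap between predicted and actual P&L are the ones
# worth investigating.
```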

Q. What do you think of the quality of external advice banks receive, to help them make big decisions?

Broadly speaking, there are three main areas in which banks seek the help of consultancies: strategic advice (markets, networks, and products), operations advice (personnel, compliance, and ICT), and presentational advice (marketing, PR, and CSR).

Strategy consultancies have it easy. Banking is itself defined by regulation, so the same generic strategic advice can be re-used over and over again. Banks are poor at procuring operations advice because consultancies indulge the banks’ belief that each of them is special and that the industry as a whole has little to learn from other sectors. Why do banks resist proven operations techniques such as Six Sigma (10) unless there is a bank-specific version? It makes some consultancies a lot of money but it doesn’t help the banks. Presentational advice is dominated by showmanship, though I’ve noticed the better banks paying more attention recently to sitting through some uncomfortable videos of how they are perceived by customers.

What’s needed is more challenging advice. Back to asking hard questions and generating hypotheses that might be tested — how could we build a bank that never had a run? Never went bust? Never dissatisfied a customer? Was able to recommend a pension product to a 20-year-old today with no fear of mis-advising? The missing external advice is the challenging advice. Sadly, banks are not structures disposed to collegiate discussion and challenge.

Q. What is the single most important thing banks could do to improve the quality of their decision making?

You cannot have good quality decisions without freedom of speech. In too many banks, bad decisions arise from an inability of staff to ‘speak truth to power’. None of the other improvements mentioned matter if you do not have the freedom to ask the questions that create testable hypotheses you can evaluate and use to improve yourself.

First published 3rd July 2017

1. Payment Protection Insurance — a form of retail credit insurance, typically against loss of income, that was mis-sold to many UK banking customers, leading to billions of pounds being paid by banks in fines, settlements and customer compensation.

2. Endowment mortgages — a formerly popular form of property financing consisting of an interest-only mortgage and an endowment policy intended to repay the principal. This product also led to large-scale compensation of bank customers.

3. Variance is the statistical concept that measures the extent to which data varies from the mean value. The technical definition is “the expectation of the squared deviation of a random variable from its mean.”

4. https://en.wikipedia.org/wiki/Bayes%27_theorem

5. A control is a common feature of experimental design. It aims to minimise the effects of factors other than the main variable being changed.

6. The European Union’s General Data Protection Regulation

7. https://en.wikipedia.org/wiki/Support_vector_machine

8. Computational models, loosely inspired by the way the brain processes information, that learn patterns from data.

9. Using historical data, statistical algorithms and machine learning techniques to identify the likelihood of potential outcomes.

10. Operational management techniques for process improvement, introduced by Motorola and popularised by General Electric, that focus on reducing process variation and errors. Six Sigma equates to 99.99966% on-target delivery.
