Archive for August, 2009

Sense and nonsense in the UK-US health debate

August 20, 2009

Goodness, Obama’s health plans have stirred up a hornet’s nest of nonsense on both sides of the Atlantic. In the US, the plans are apparently being portrayed as an attempt to bring UK-style healthcare to America, and a mass of misinformation about the NHS is appearing alongside, including claims that old people’s care would be limited by ‘death panels’. In the UK there is the claim that somewhere between 40 and 50 million people in the US have no access to healthcare at all as they lack insurance, and that those mean Americans should stop being rude about our health system. Things get worse when Conservative MEPs start talking about the NHS as a 60-year-old mistake, and their leader feels the need to leap to the health service’s defence and try to reclaim his political party as the best guardian of the NHS. I’ve even seen claims from Labour politicians that to insult the NHS is to be unpatriotic.

How much sense is there in all of this? Well, to start with, we all have to be a bit more honest about our healthcare. The NHS is good at providing (relatively) cheap healthcare, which is free (for the most part), universal (so long as you’re British), and comprehensive (so long as you don’t mind waiting and don’t want to choose the latest care). The US is good at providing the very latest healthcare in expensive-looking healthcare facilities, provided that you have the right insurance package, and you don’t mind America, as a whole, paying getting on for a third more than most other nations (in terms of healthcare as a % of GDP). Both systems have their merits and their problems.

The problems with the NHS are traditionally expressed in terms of poor facilities and longer waiting times. More recently they have been expressed in terms of delays in getting access to the latest drugs and technology. The Labour government have attempted to address the first two problems, poor facilities and longer waiting times, through the PFI (private finance initiative) programme and the NHS Plan. PFI has led to a rash of new hospitals being built – this in itself is a pretty good thing as a great deal of the UK’s health infrastructure was getting rather old, and the last round of hospital building on any scale occurred in the 1960s and 1970s. However, PFI has been questioned because it may not offer terribly good value for money for the taxpayer long-term, and the private sector building and maintenance contractors who have been involved have often made extremely high returns from building for the government, with little attached risk. The NHS Plan was about improving UK healthcare in return for increased healthcare investment, raising healthcare expenditure as a proportion of GDP to European averages, but demanding particularly that waiting times come down. And they have – dramatically so. Waits are not as short as those available through private healthcare in the US, but are much, much better than they used to be across most treatments. And of course, there is still the option of purchasing private healthcare in the UK if you wish to be seen quicker – critics of the NHS from across the Atlantic often seem to forget that we have a private healthcare system too, should people wish to take out insurance or pay for it directly.

The US system also has problems. The uninsured aren’t left completely without access to care, but their options clearly are limited. I wouldn’t want to have a long-term condition of any kind (and around 1/3 of people are generally estimated to have a ‘chronic’ condition of some kind or another) without insurance in the US. Those lacking insurance have access to emergency rooms and to some basic primary care, but not a lot else. So why, given that such a lot of people have access to basic healthcare only, does the US system cost so much? Well, they seem to pay a lot more for drugs over there because those with insurance demand access to the very latest treatments, and the very latest treatments tend to be only incremental improvements (if improvements at all) on established, cheaper drugs. There are big breakthroughs from time to time, but they are few and far between. Equally, those with private cover often want access to the latest technologies, which again don’t come cheap. In the UK decisions about whether to pay for access to these drugs and technologies would be far more closely monitored and controlled, and this might mean that particular drugs and technologies would be denied. But this doesn’t mean that patients would be denied treatment – only that they wouldn’t necessarily get the treatment that they might be demanding. There is a big difference between no treatment and cost-effective treatment. This isn’t to say that the NHS’s approach is correct – but if you don’t like it you can still take out private health insurance and try to get the treatments you want that way instead.

Equally, there are other, well-documented problems in the US. Doctors have incentives to over-prescribe and over-treat that tend to be minimised in the UK (although not removed completely), and getting treated, even if you have insurance, can be complicated because of the sheer complexity of many insurance policies and programmes, which may have opt-outs for the insurer for a range of reasons. In comparison the UK provides comprehensive care – it will provide some kind of treatment – but again there may be a wait and it may not be the treatment you wanted. So, a bad knee might lead to you receiving physiotherapy after a wait rather than chiropractic immediately – the latter is not widely available in the UK because it is not regarded as having proven itself adequately in clinical trials.

So both systems have their advantages and disadvantages. I think the US can learn a thing or two from the UK about providing more treatment at lower cost, and the UK can certainly learn something from the US about providing better healthcare service. Neither is intrinsically a better system, but the UK system does at least treat everyone, and so to dismiss it without thought is a significant error.


Banking pay

August 12, 2009

As I write this, the UK FSA are publishing decisions about guidelines and rules in respect of bankers’ pay, an issue which has come to the fore during the financial crisis after bubbling away in the background for a number of years.

The arguments in favour of bankers being paid substantial sums in terms of bonuses are well-worn ones. First, there is the argument that banks must pay the market rate for ‘talent’ or it will go somewhere else, and either that individual institution, or the economy as a whole, will lose out. We can call this the ‘market rate’ argument. Second, there is the argument that, in order to incentivise bankers to innovate and come up with new financial products and services, we have to offer them lucrative bonuses. We can call this the innovation argument. Third, there is the argument that certain bankers create incredible returns on their activities, and so it is only fair that they receive a share of that return. We can call that the ‘fair share’ argument. Do these arguments stand up to any scrutiny?

First, the ‘market rate’ argument. This depends on labour markets working efficiently, and on money being a substantial incentive for those working in banks. If markets don’t work efficiently then rewards go to the wrong people, and if money doesn’t incentivise bankers then paying them more is a waste of money – they would have done the same work for less.

The argument about efficient labour markets in banking presumes we are able to identify good performers from bad ones, and that good performers should command a premium. However, there are disquieting claims that good financial performers might just be lucky rather than brilliant. Taleb’s ‘The Black Swan’ and ‘Fooled by Randomness’ are the most popular versions of this claim, suggesting simply that, as there are a very large number of traders, some are going to be very successful period after period not because of their ability, but because of blind luck. It might well be that traders receiving extraordinary rewards are just extremely fortunate. Equally, if bankers’ pay is bid up in periods where high rewards are being made, what about in periods where losses occur? In such periods should bankers receive substantial cuts in pay, especially where their organisations have been bailed out by taxpayers’ money? In such times, we are told, it is more important than ever to pay high rates to make sure that the best bankers remain in firms to make good losses and to ensure public debts are repaid. This seems rather like trying to have the argument both ways to me – that in boom times bankers should be paid highly, and in bad times they should be too. This isn’t any kind of market logic – that would suggest that in bad economic times bankers take substantial pay cuts. Until such a time as this happens, it’s hard to take this argument seriously.

The second part of this logic is that money incentivises bankers. This seems, on the surface, to be an obvious thing – of course it does. However, I’m not sure how far it extends as bonuses get bigger and bigger. The law of diminishing returns would suggest that bonuses have a limit beyond which they achieve nothing. If you pay me a £2 million bonus rather than £1.5 million, am I really going to work proportionately harder? There may be a collective action problem here – that you pay me £2 million because you think I might leave if you don’t, and so everyone is forced to pay this much as a bonus. But this goes back to the first part of the market rate argument, and is less to do with the market than with an inherent belief that labour markets in finance are inherently efficient. If there is good reason to doubt this is the case, then the argument doesn’t work. Equally, it seems to assume that bankers are only interested in financial incentives, and pay little regard to the quality of their experience at work, their colleagues, or any of the other reasons why we work for who we work for. This surely doesn’t stand up to much scrutiny.

Next, there is the innovation argument, that bankers need to be paid substantial bonuses in order to come up with brilliant new financial products and services. This seems to suggest that banking is such a terrible job to have that only by incentivising people through the use of blunt cash can we get them to do their job well. Is this really the case? This would seem to make the case that bankers aren’t terribly professional, and don’t take much pride in what they do. I hope this isn’t the case – and if it is, then perhaps we ought to be thinking again about how we organise banking. Either way, I don’t think the argument works.

Next there is the ‘fair share’ argument. This one doesn’t really work either. This is because bonuses are often linked to performance, and what we’ve learned over the last couple of years is that bank ‘performance’ is a far from obvious concept. Banks seem to have exposed themselves to considerable risk, leveraging themselves far beyond any sensible level, with considerable implications for us all since the credit crunch of 2007. In the years prior to the ‘crunch’ bankers appeared to be generating substantial profits on opaque financial instruments, and paid themselves massive bonuses on that basis. But these profits have often proven to be rather illusory, paper-based rather than adding any long-term value to the economy as a whole. They may even have been wholly illusory – think of the trillions that have been wiped out over the last two years. Banking bonuses, it seems to me, should be linked to the long-term value of assets such as shares rather than short-term profits. This might also have the effect of making bankers more loyal to banks, and so giving them less of an incentive to want to change banks on the grounds that their bonus is only £2 million and not £3 million – it might bring an end to this kind of nonsensical thinking.

Equally, the ‘fair share’ argument should apply both ways if it is to apply at all. If bankers are to get shares of the profits they generate, should they also get shares in any losses? It seems unreasonable for them to gain only the upside of the risks they take without having to participate in any downside. Only by linking bonuses to long-term performance can both upside and downside be taken into account, and bankers encouraged to create tradable assets of real value rather than those designed only to achieve short-term gain.

To whom should the NHS be accountable?

August 12, 2009

In the UK healthcare is primarily funded through taxation, and so must be held to account for the way that money is spent. Over time the way that we have attempted to ensure accountability has changed quite considerably, but perhaps without widespread awareness of the implications of changing the means of accountability. This issue is regarded as less glamorous than whichever system-level reforms are presently being proposed or implemented, but is a key element of them that often gets rather overlooked.

At the creation of the NHS, accountability for the NHS was meant to reside firmly at its centre, with the Department of Health. Hence Bevan’s aphorism that every bedpan dropped in the corridors of the health service rang through the corridors of Whitehall. However, this was never really the case. People might have been content to blame the government (which is not quite the same thing as the Department of Health) for the NHS’s failings, but whether health services were accountable through this means is doubtful. If people wanted a change in the stewardship or colour of the government, they could certainly, once every five years or so, vote to try and achieve this, and it is notable that the government that created the NHS, Attlee’s Labour government of 1945-1951, was voted out remarkably quickly after the founding of the NHS in 1948. But this isn’t really an accountability mechanism – it is more a means of changing government, with bigger implications than reforming healthcare, and of little use between elections, or for dealing with the day-to-day problems of making sure healthcare is accountable. Of course, elections also don’t just depend on our views on healthcare, but on the government’s performance as a whole. They are rather a blunt weapon for ensuring accountability for any individual public service.

The model of Parliamentary accountability tries to deal with the issue by opening healthcare decisions to scrutiny and question through open debate. As such, Parliamentary accountability can work more frequently than electoral accountability, but depends on whether the opposition of the day are engaging and challenging the government appropriately, and whether the government are prepared to change things if their approach to healthcare is being found wanting. We would hope both are the case, with the ultimate sanction again being that a government seen to be running the NHS poorly would leave itself open to attack from the opposition and so be more likely to lose a general election.

The issue of elections links to the idea that health services are accountable to Parliament on behalf of the citizens of the country. This implies a citizenry paying attention to Parliamentary debates and making informed decisions about who to vote for at the next election based upon their view of what is happening there. If citizens are paying no attention to debates on healthcare, then their vote can’t really make any political party accountable for their decisions in that area. It is possible that, during a general election, claims and counter-claims can be made by political parties and assessed by the citizenry, but this, again, reduces accountability to a once-every-five-years event. Unless the citizenry are paying attention in between these dates as well, they are unlikely to make good voting decisions.

Extending the notion of citizenry gives other means by which health services might be made more accountable. Individual members of the public can get in touch with their local hospitals or Primary Care Trusts and become members of them, so attempting to become representatives and affect decisions ‘on the ground’. Health organisations should then become more responsive and accountable to local people. However, again, the NHS has not traditionally been very good at achieving local accountability through these sorts of means. The membership schemes are still fairly new, but have been subject to strong criticism over whether they achieve much additional accountability of any kind at all.

If citizen-type means of accountability have struggled in the NHS, how about the alternatives? Two have been tried, accountability through performance management and accountability through consumer mechanisms.

Accountability through performance management is a relatively recent invention. Performance indicators for healthcare have existed in various forms since the 1970s, but only in the 2000s were formal league tables constructed of performance and sanctions imposed upon poorly performing organisations. In this system, data is collected from NHS organisations and assessed against both benchmark standards and against other NHS organisations. Ratings (originally based on stars and now on normative statements) can then be issued based on benchmarks, and comparisons made with other healthcare organisations to see where yours ranks.

The problems of performance management have become well documented. Managers have been accused of adopting a target focus rather than looking to systemically improve performance as a whole, of game-playing to try and find ways of making performance measures improve without necessarily achieving anything new, or even of simple misrepresentation and lying in their performance returns. There is a great deal of sense in measuring health service outcomes to see if they are getting better, but always with the risk that it skews patient care towards the measurable at the expense of less tangible aspects. A danger of seeking quantity over quality. However, achieving accountability through performance management is now with us to stay, even if it does carry risks of managers losing sight of why such systems were introduced in the first place, and even though such systems need to be supplemented by more discursive systems to allow managers to explain what is happening, how and why, as well as the ‘what’ that numbers are likely to be able to provide.

Finally, there is the consumer route of accountability. This attempts to create markets in healthcare, with funds following patient choices, and so automatically giving greater resources to the highest performing organisations. Accountability comes not through ‘voice’ mechanisms, as it does in citizen-type mechanisms, but instead through ‘choice’. This is very much the vogue in the NHS at present.

However, choice mechanisms resemble voice mechanisms in a number of ways that are often overlooked. If expecting citizens to follow Parliamentary debates to make health services accountable seems to ask a lot, it is equally asking a lot to expect sick people to choose the best healthcare provider to meet their needs. Most people want to go to a good local service rather than choosing between potential providers. Equally, evidence from Barry Schwartz suggests that whereas most people say before they are sick that they want to choose their healthcare provider, when they actually fall sick, they would prefer their doctor chose instead. Now, it is certainly possible that doctors might be better choosers than their patients, and for health services to be accountable to local doctors through this mechanism, but the evidence of the 1990s, of the Conservative internal market, is rather ambiguous as to whether Primary Care doctors were prepared to take this role on.

So accountability hasn’t really worked through voice mechanisms, or through performance management, or through choice. Are health services doomed to be unaccountable? I hope this isn’t the case. It seems to me that the NHS is a collective good that must be accountable to people collectively. When we try to treat it as consumers, big mistakes get made. We demand, on behalf of our families, that the NHS pay for new drug breakthroughs without thought as to which part of its budget will no longer be available if that decision is made. The NHS is a public good, and requires collective public decisions about what it should spend its money on.

It seems to me that the appropriate place for such decisions to be made is in local government. Local government, to be sure, has a pretty poor reputation after thirty years of neglect. But as healthcare seems to be one of the few public services that people feel strongly about, perhaps putting the NHS under greater local government control could both regenerate debate in local government, and make health services more responsive to local people’s needs. One implication of this approach is that health services would differ from one area to another – the dreaded ‘postcode lottery’ – but the decisions made locally would be open and transparent, and available to everyone. So long as the health services available locally, subject to national limits on basic provision, were made clear to local populations and to those considering moving to such areas, then there is no reason why the range of health services shouldn’t change from one area to the next. Such a process would allow greater collective accountability and responsiveness, and potentially enliven the rather stifled world of local government.

Who is to blame for the financial crisis?

August 5, 2009

Given the complexity of the financial crisis, I suppose it’s no surprise that a wide range of candidates have been suggested as to who we ought to blame for it. Here are a few of them.

The market
A first generic response is to blame the market. All of it. This is a kind of backlash against the neoconservative or neoliberal dominance that has undoubtedly been present for the last thirty years or so. This is good in the sense that the blind faith in markets that has overtaken us, at least in the US and UK, hasn’t been terribly healthy. Our governments appear to have reached the conclusion that markets can solve everything, and that, if only markets were more widespread, more widely used in areas such as the public sector, things would be better.

However, this is also sloppy thinking. Markets, in themselves, don’t solve anything. Markets aren’t always competitive, and competition isn’t always a good thing in every setting. We need to think a bit more carefully in the future as to where competition can be used to make things better, and where marketplaces need to be more carefully regulated and controlled. Markets aren’t the solution to everything – they are social institutions that we make the rules for. They should serve us, not dominate our lives.

The government
A second obvious agent of blame is the government. First, we can blame the government for allowing the financial crisis to happen. The US and UK governments have for several years been telling us that they’ve solved the problems of economic boom and bust through a combination of independent central banks, low interest rates, and in the UK prudent public spending. They staked a great deal on the financial sector being the dynamo of their respective economies.

This seems to have been a substantial mistake. It will be some time before we know whether the entire economic growth of the last ten years (or more) has been wiped out by the financial crisis, but it seems plausible to suggest this is the case. This puts the focus back on the so-called ‘real’ economy to deliver growth for us, but this carries with it substantial risks.

We are at the end of a long boom of activity that began in the 1950s (so-called ‘Fordism’) and it’s not clear what goods and services might help us achieve a new one. IT is trotted out as the answer (but remember the tech-collapse of the late 90s?), but it is not clear how sustainable economic growth can be achieved in the same way from what is, in effect, a service technology to improve productivity rather than the potential source of a new growth period in itself. Perhaps it’s time to think about learning to live with what we have and accepting that economic growth year on year is no longer possible or desirable? It may well be that our environment requires this kind of thinking.

The regulators
Another obvious candidate is the regulators, who have been portrayed as being ‘asleep at the wheel’. In the UK, the complex three-way split of regulation introduced by Gordon Brown, splitting control between the Bank of England, the FSA and the Treasury, seems to have resulted in no-one wanting to take control, or responsibility, for what happened. In the US criticisms have been made that the initial response, the so-called TARP programme, was badly thought through and may have made the crisis worse.

This is the toughest one to crack. My sense is that regulators became too focussed on individual banks and not enough on the system as a whole. Systemic risk was allowed to build up through the use of complex derivatives, with too much leverage building up and not enough capital to back it up. Any individual bank may have been able to make the case it was reasonably secure, but the system as a whole was far over-leveraged. Regulators should have seen this happening, but they didn’t. They also allowed the growth of a ‘shadow’ banking system where the real commitments of financial institutions were extensively hidden from shareholders, and even from some senior bankers themselves. Some serious lessons to be learned here.

The financial instruments
We can also, of course, blame the financial instruments that created the crisis. The growth of CDOs, CDSs etc, all within a relatively short period of time, has been extraordinary. The idea of securitization, in itself, however, does make sense. Making markets more liquid through their use has a logic. The argument Gillian Tett makes in her books (and columns) is that they were extensively mis-used. This makes a good deal of sense. CDOs and CDSs were transplanted from their original context into others where, perhaps because those designing them did not really understand the dangers involved, they were not appropriate.

The story Tett tells is of the J P Morgan bankers who originated these financial instruments looking out at the marketplace and wondering how other financial institutions using them with sub-prime mortgages could possibly be making money from them. J P Morgan’s bankers couldn’t make the instruments work in that setting because to do so would require extraordinarily expensive insurance (CDSs) against the risk being taken on. It turns out that the other companies made CDOs in subprime products work by not adequately insuring them, and by credit rating agencies apparently not understanding the nature of the underlying asset.

The financial instruments themselves were not to blame. That is like saying knives are evil because they are used as murder weapons. It’s their misuse that is the problem.

The bankers
At the beginning of the financial crisis it was popular to blame the bankers for the crisis. They had got us into a terrible mess through selling bogus financial instruments, claiming colossal bonuses, and requiring government bailouts. Things went quiet for a while, and we’re blaming them again now that banks on both sides of the Atlantic are beginning to make money again. Is this fair?

It is partly. Institutions that have required public bailouts, and so which would not exist unless taxpayers’ funds had been used to intervene, seem to have gone back to ‘business as normal’, paying huge bonuses and being rather bullish. It’s hard not to feel a touch resentful about this. At the same time, nationalised banks in the UK seem to be making massive losses, which we are having to pay for, while sometimes continuing to pay executives extraordinary amounts to work within them. It’s pretty ghastly.

A lot of the banking pay argument seems to me to rest on two ideas. The first is that, in order to get good people, you have to pay market rates. There’s something in this, but I can’t see that the rate for executive bankers was very high a year ago when the banks were going bankrupt left, right and centre. If we were paying market rates then, salaries would have been pretty close to zero. Market rates have to work in both directions.

Second, there seems to be an assumption that bankers need to be incentivised through huge bonuses. This seems pretty close to nonsense to me. Are you going to work significantly harder if offered a million-pound bonus rather than a £500,000 one? Will you work twice as hard if there is potential for a £2 million bonus instead? What kind of logic is this? Is it so horrible being a banker that we need to pay this kind of money just to get people to do the job?

My view, as I’ve said in other entries here, is that bankers need to start behaving more like professionals, and less like second-hand car salesmen. Perhaps we might respect them a bit more then.

Other countries
Last, xenophobia gets an airing. The argument is that the Chinese/Arabs/whoever-else-you-don’t-like caused this by buying up our assets and forcing them into a financial bubble that has now come crashing down. Their investment in our economies caused interest rates to be too low for too long, and they kept their own currencies from appreciating to keep selling us goods cheaply, so that they fuelled a boom from which we have now moved to bust.

Oh dear. Lots of nonsense here. Yes, the economies of the world have received substantial investment from China and petro-countries (including Russia), and yes, that investment has kept interest rates lower than they would have been, and yes, they have sold us an awful lot of goods. But I don’t remember us blaming these countries while the boom lasted. If you want to blame China, don’t buy Chinese goods, but I suspect you’ll have a hard time avoiding them. Blaming other countries for what we ourselves have done isn’t going to solve anything.