Sunday, February 17, 2013

Is "skin in the game" the only way to solve principal-agent problems?

Constantine Sandis and Nassim Nicholas Taleb recently invited comments on a working paper titled "Ethics and Asymmetry: Skin in the Game as a Required Heuristic for Acting Under Uncertainty." This opened up a rather heated discussion on Twitter, so I felt it might be useful to comment in longer form. And while the paper does not articulate it explicitly as such, it addresses an issue important to many political economists - the principal-agent problem.

From the abstract:
We propose a global and mandatory heuristic that anyone involved in an action that can possibly generate harm for others, even probabilistically, should be required to be exposed to some damage, regardless of context. We link the rule to various philosophical approaches to ethics and moral luck.
This heuristic is, bluntly speaking, the mandate that actors have "skin in the game."

Reading the paper, I was reminded of a classic scene in the last episode of Joss Whedon's Firefly, "Objects in Space," where the philosophical bounty hunter Jubal Early converses with his hostage, Dr. Simon Tam, about the merits of experience in expert decision-making.
Jubal Early: You ever been shot? 
Simon: No. 
Jubal Early: You oughta be shot. Or stabbed, lose a leg. To be a surgeon, you know? Know what kind of pain you're dealing with. They make psychiatrists get psychoanalyzed before they can get certified, but they don't make a surgeon get cut on. That seem right to you? 
Individuals who act for others should be exposed to the same risks. Bankers who manage others' money should know the pain of loss. Politicians who fail to act on behalf of their constituents should be voted out of office. Only when the agent and the principal share the same preferences is the timeless principal-agent problem solved.

Although a bit simplified, this is the overarching idea that I took from the paper.

I think the article obfuscates a pretty basic concept with a great deal of unrelated discussion of ethics, axiology and morality. Not to say that these are not worth considering, but the core problem that Sandis and Taleb identify is a practical one: when people are tasked with making decisions that have repercussions for others, how do we ensure that they make "good" decisions on behalf of those that they affect? Additionally, how do we limit agents' ability to enrich themselves at the expense of their principal(s)?

Sandis and Taleb's argument is uncompromising, which perhaps makes it more appealing as an ethical claim than as a practical one. By arguing that agents are only justified in acting on behalf of principals when they have "skin in the game," they have assumed away the entire principal-agent problem. If the agent has the exact same preferences as the principal (i.e. they are exposed to the same risks), then there is no problem. The agent will always behave in the manner that the principal prescribes.

This is a nice thought exercise, but agents almost never have preferences identical to those of their principals. They are rarely exposed to identical risks. So "skin in the game" ends up as a kind of aspirational goal for how principal-agent relations should be managed. Even then, Sandis and Taleb's discussion is still much too simple to be of practical value. According to the paper's logic, the best policy is the one that alters the agent's incentives such that they become aligned with those of the principal.

This is obvious.

But yet again, there would be no principal-agent problem if the principal always had the means to perfectly discipline the agent. 

In the real world, agents rarely share the same preferences as their principals and principals are almost never in perfect control of their agents. Power is shared and relationships are tense. Yet delegation is a necessary aspect of nearly all human institutions.

Moreover, there is rarely a single principal. Agents face conflicting pressures from a myriad of sources. Politicians do not respond to a unified "constituency" but to a diverse array of "constituents." So when Sandis and Taleb argue that decision-makers need "skin-in-the-game," they raise the question of "whose game are we talking about?" 

The paper provides a first-best solution to the principal-agent problem, one where the agent is fully attuned to the risks suffered by the principal. But such a solution has costs of its own, and depending on those costs a second-best approach may be more desirable. "Skin in the game" is certainly neither global nor mandatory.

Take one example from the paper:
The ancients were fully aware of this incentive to hide risks, and implemented very simple but potent heuristics (for the effectiveness and applicability of fast and frugal heuristics, see Gigerenzer, 2010). About 3,800 years ago, Hammurabi’s code specified that if a builder builds a house and the house collapses and causes the death of the owner of the house, that builder shall be put to death. This is the best risk-management rule ever. The ancients understood that the builder will always know more about the risks than the client, and can hide sources of fragility and improve his profitability by cutting corners. The foundation is the best place to hide such things. The builder can also fool the inspector, for the person hiding risk has a large informational advantage over the one who has to find it. The same absence of personal risk is what motivates people to only appear to be doing good, rather than to actually do it.
As a risk-management rule, Hammurabi's code ensures that builders suffer the same costs as the owners. However, it also probably ensures an under-provision of houses. Suppose that even a house built by an expert builder has some risk of collapsing despite the builder's best efforts. Knowing this, a builder bears an additional expected cost - the possibility of death - every time he/she constructs a house. If the builder places some non-zero value on his/her own life, he/she will build fewer houses than are demanded. In economic terms, the death risk is an additional "cost" of production.
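A back-of-the-envelope sketch (in Python, with numbers I have made up purely for illustration - nothing here comes from the paper) shows how the death risk enters the builder's calculation as an expected cost per house:

```python
# Back-of-the-envelope sketch: how a residual collapse risk plus the death
# penalty changes a builder's expected payoff per house.
# All numbers are hypothetical assumptions, not taken from the paper.

profit_per_house = 10.0      # assumed profit from one carefully built house
p_collapse = 0.01            # assumed collapse risk even with the builder's best effort
value_of_own_life = 5000.0   # assumed value the builder places on his/her own life

payoff_without_penalty = profit_per_house
payoff_with_penalty = profit_per_house - p_collapse * value_of_own_life

print(payoff_without_penalty)  #  10.0 -> worth building
print(payoff_with_penalty)     # -40.0 -> not worth building at all
```

Under these made-up numbers the builder stops building altogether; with a smaller residual risk or a lower valuation of life he/she merely builds fewer houses than are demanded, but the direction of the effect is the same.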

Now, if the builder valued the life of an owner to the same extent as his/her own life, then the law would simply enforce an already incentive-compatible relationship - there would be no need to force the builder to have "skin in the game"; it would already be there. But this is almost never the case, for obvious reasons. Indeed, the reason we care about institutions at all is that individuals rarely have incentives aligned with "optimal" behavior.

But institutional designs impose costs of their own.

The point of this stylized example is that there are trade-offs in every principal-agent relationship. How do we weigh the benefit of fewer deaths from shoddy houses against the costs of an under-supply of houses? Should we tolerate some shirking on the part of builders if there is an imminent need for more houses? Sandis and Taleb's heuristic gives no answers.

Moreover, even if "skin in the game" is a "first-best" solution to controlling agent behavior, it is not a necessary condition.

In Sandis and Taleb's article, "bad luck" plays a crucial role in justifying their heuristic. But "bad luck" creates its own issues that make perfect enforcement unavoidably costly. To get an agent's "skin in the game," the principal needs to be able to punish the agent for bad behavior. But behavior is rarely observed directly; instead, the principal conditions punishment on some outcome. The outcome is to some extent a function of the agent's effort, but it is also subject to unknown randomness. In the builder example, the probability that a building will fail is related to the builder's effort, but it is not inconceivable that an expertly constructed structure might collapse.

Principals get noisy signals of agent behavior. It is unclear whether an outcome is the result of poor decision-making or bad luck. This distinction may or may not matter, depending on the case. However, in many instances where it is difficult to observe the agent's behavior, the optimal solution to the principal-agent problem still leaves the agent somewhat insulated from the costs of their actions.
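A minimal simulation sketch (with assumed, illustrative parameters: outcome equals effort plus normally distributed luck) makes the inference problem concrete - a shirking agent will look better than a diligent one a substantial fraction of the time:

```python
# Minimal sketch of the signal-extraction problem. Assumed model: the
# principal observes only outcome = effort + luck, where luck is an
# unobserved normal shock. All parameters are illustrative.
import random

def outcome(effort, noise_sd=1.0):
    return effort + random.gauss(0.0, noise_sd)  # effort plus unobserved luck

random.seed(0)
trials = 100000
shirker_looks_better = sum(
    outcome(effort=0.0) > outcome(effort=1.0) for _ in range(trials)
)
print(f"shirker produces the better outcome in "
      f"{shirker_looks_better / trials:.0%} of paired draws")
```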

Consider a political example, the relationship between a government (agent) and the citizens (principal) of a country.

This example draws on an extensive body of game theoretic work by Ferejohn, Barro, Acemoglu, Robinson, Przeworski and recently Fearon (among many, many others). It is very much a simplification.

The citizens, if collectively powerful enough, can overthrow a government that does not behave in a manner that reflects their interests. Assuming that they are perfectly able to see what the government does, they will always overthrow a government that deviates from their desires. The government, knowing this, and preferring not to be overthrown, will comply perfectly with the wishes of the citizens. The government's "skin" is "in the game" in the sense that it faces costs (being overthrown) alongside the costs its citizens bear (reduced welfare from government shirking).

But typically, citizens cannot perfectly observe government action. The relationship between policy and outcome is complex, particularly in the economic realm. Citizens observe only their own and their fellow citizens' welfare, which is a function of both government policy and unforeseen events (like a war in a neighboring country that leads to reduced export revenue). More generally, citizens are unsure how much of their current predicament to blame on government action.

If they adopt the same rule as in the perfect-observation case - overthrow the government whenever outcomes differ from the ideal - then they are likely to overthrow governments that did nothing wrong, a costly scenario.

Citizens may therefore tolerate some deviation from perfection - that is, weaken the extent to which the government has "skin in the game" - because they cannot perfectly observe the government's behavior. Removal becomes a question of expectation: small deviations from perfection might just be bad luck, but a massive downturn in welfare can more plausibly be blamed on government incompetence.

Indeed, having too rigid of an "overthrow rule" may even lead to perverse outcomes. Suppose a government knows that, no matter what it does, it is going to be blamed for an economic downturn beyond its control. It now has strong incentives to ignore the will of the public and be as predatory as possible, knowing that it has no hope of appeasing the public either way. These sorts of twisted incentives are at the heart of much game theoretic work on democratization and why leaders choose to give up power.
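A toy payoff comparison (all payoffs and probabilities assumed purely for illustration) captures the perversity: when the probability of being overthrown barely depends on the government's behavior, predation dominates compliance:

```python
# Toy payoff comparison for the "blamed no matter what" case.
# All payoffs and probabilities are assumptions for illustration.

cost_of_overthrow = 5.0                # assumed cost to the government of losing power
payoff_comply, payoff_prey = 1.0, 3.0  # assumed in-office payoffs from complying vs. preying

def expected_payoff(in_office_payoff, p_overthrow):
    return in_office_payoff - p_overthrow * cost_of_overthrow

# Responsive citizens: behavior strongly affects survival, so compliance wins.
print(expected_payoff(payoff_comply, p_overthrow=0.10))  #  0.5
print(expected_payoff(payoff_prey,   p_overthrow=0.90))  # -1.5

# Blame regardless: overthrow is near-certain either way, so predation wins.
print(expected_payoff(payoff_comply, p_overthrow=0.95))  # -3.75
print(expected_payoff(payoff_prey,   p_overthrow=0.95))  # -1.75
```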

Is it worth putting the agent's "skin in the game" if that leads the agent toward predatory behavior?

Returning to the example of ancient Babylon, instead of killing builders of failed houses and running the risk that some good builders will get killed by accident, Hammurabi could have developed building codes. This is different from a "skin in the game" solution. The builder's "skin" is no longer in the "game," since he/she does not suffer any costs after the house is built. However, failing to build to code is a clear and observable signal that the builder is shirking his/her responsibilities. The builder still has more private knowledge and can cut corners, but only within limits. Clear deviations from the code get punished. Some non-compliance is tolerated in order to avoid mistaken executions.
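The distinction can be put in a trivial sketch (hypothetical functions, purely illustrative): the Hammurabi rule conditions punishment on the noisy outcome, while a building code conditions it on an observable input.

```python
# Sketch of the two punishment rules (hypothetical helpers, illustration only):
# Hammurabi conditions punishment on the noisy outcome; a building code
# conditions it on an observable input.

def punished_under_hammurabi(house_collapsed: bool) -> bool:
    return house_collapsed          # outcome-based: bad luck gets punished too

def punished_under_code(built_to_code: bool) -> bool:
    return not built_to_code        # input-based: luck is irrelevant

# A conscientious, code-compliant builder hit by bad luck:
print(punished_under_hammurabi(house_collapsed=True))  # True  -> executed anyway
print(punished_under_code(built_to_code=True))         # False -> not punished
```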

What Sandis and Taleb miss in their brief discussion is that there is a continuum of possible mechanisms for controlling an agent's behavior - "skin in the game" is one extreme, but there are other viable solutions depending on the context.

Importantly, there are trade-offs. Conditional on some level of imperfect observation of the agent's behavior (which better monitoring can reduce), the principal must weigh the benefits of tight control over the agent's behavior against the costs of punishing a "good" agent. If standards are high, the principal might accidentally punish a compliant agent. If standards are low, the "false positive" risk drops, but the incentive to shirk rises. Sandis and Taleb's argument provides no method of resolving this question.
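To make the trade-off concrete, here is a sketch under the same assumed outcome-equals-effort-plus-luck model as above, sweeping the punishment threshold: a stricter threshold punishes more shirkers, but also more compliant agents.

```python
# Sketch of the trade-off, under the same assumed outcome = effort + luck
# model as the earlier sketch: sweep the punishment threshold and compare
# how often a compliant agent and a shirking agent get punished.
import random

def outcome(effort, noise_sd=1.0):
    return effort + random.gauss(0.0, noise_sd)

def punishment_rate(threshold, effort, trials=100000):
    random.seed(0)  # same luck draws for each comparison
    return sum(outcome(effort) < threshold for _ in range(trials)) / trials

for threshold in (-1.0, 0.0, 0.5, 1.0):
    punish_compliant = punishment_rate(threshold, effort=1.0)  # the "false positive" risk
    punish_shirker = punishment_rate(threshold, effort=0.0)    # the deterrent bite
    print(f"threshold {threshold:+.1f}: "
          f"compliant agent punished {punish_compliant:.1%}, "
          f"shirking agent punished {punish_shirker:.1%}")
```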

Moreover, there are a myriad of cases where a principal actually benefits by delegating decisions to an agent who is somewhat insulated from the costs facing the principal. This is often labeled a problem of dynamic or time-inconsistent preferences. Governments often find it in their best long-term interests to delegate decision-making to someone who has absolutely no "skin in the game." International courts are a great example of this, as are domestic courts (imagine the prospects for civil rights in 1960s America if Supreme Court justices faced periodic elections).

My ultimate issue with the paper is that it simply lacks sufficient nuance to be useful or informative. By focusing on what the ideal-typical relationship between principal and agent should be, it ignores the trade-offs that plague real-world principal-agent problems. Either "skin in the game" is such a broad standard that it is absolutely meaningless, or it represents a specific solution to principal-agent problems - subjecting the agent and principal to identical rewards and punishments - that is neither feasible nor, in many instances, desirable.