



C.III.0       Preface

This chapter gives an account of work done by Martin Hellwig and his group over the past four years. As in previous periods, this work had a foundational part and an applied part. In past reports, the foundational part was presented in Chapter C.I of this Report, the applied part in Chapter C.III, with sub-chapters C.III.1 on network industries, competition policy and sector-specific regulation and C.III.2 on financial stability and banking regulation. Over the past four years, hardly any work was done on network industries. Therefore this chapter reports only on work done on the foundations of public economics and on financial stability and regulation. Work on foundations of public economics is reported in sub-chapter C.III.1, work on financial stability in sub-chapter C.III.2. Work on foundations of public economics was much enhanced by having Felix Bierbrauer return to the institute as a visitor in 2015–16.

C.III.1   The Mechanism Design Approach to Public-Good Provision and Taxation

An important part of the research programme since 2004 concerned the conceptual framework for the normative analysis of public-goods provision when decision makers cannot be presumed to have the information needed to properly assess the amount of resources that should be devoted to such goods, in particular, when they cannot be presumed to know the values that different individuals attach to the different public goods.

Our approach has the following distinct features:

  • Whereas most of the literature considers the problem of public-good provision with private information in the context of small-economy models, in which each participant has the power to affect aggregate outcomes, we consider large economies, in which any one individual is too insignificant to affect the level of public-good provision or any other aggregate outcome.
  • We consider incentive problems associated with coalition formation as well as individual incentive compatibility.
  • We look at public-goods provision and taxation in an integrated manner. The problem of how to pay for public goods is intimately related to the problem of what is an appropriate system of taxes and prices for public services.

Our research and research interests in this area can be roughly divided into three broad topics:

  • Development of a conceptual and formal framework that is suitable for dealing with issues that concern the revelation, communication and use of private information in a large economy.
  • Development of an overarching conceptual and formal framework that can be used to integrate the theory of public-goods provision with the rest of normative economics, in particular, the theories of public-sector pricing and of taxation.
  • Development of a conceptual and formal framework that is suitable to address issues concerning incentives and governance on the supply side of public-good provision and can also be used to integrate the analysis of such issues with the more conventional analyses of demand and funding.

C.III.1.1       Public Goods versus Private Goods: What is the Difference?

To fix semantics, define a public good to be one that exhibits nonrivalry in the sense that one person’s “consumption” of this good does not preclude another person from “consuming” it as well. When several people “consume” the public good, there may be external effects, e.g. negative externalities from crowding or positive externalities from mutual entertainment, but there is not the kind of rivalry in consumption that one has with private goods where one person’s eating a piece of bread precludes another person’s eating it as well.

We focus on nonrivalry as the key characteristic because this property is at the core of the allocation problem of public-good provision. Because of nonrivalry, it is efficient for people to get together and to coordinate activities so as to exploit the benefits from doing things jointly. Other characteristics, such as nonexcludability, affect the set of procedures that a community can use to implement a scheme for public-good provision and finance, but such considerations seem secondary to the main issue: nonrivalry is the reason why public-good provision is a collective, rather than an individual, concern.

The mechanism design approach to public-goods provision asks how a community of n people can decide how much of a public good should be provided and how this should be paid for. If each person’s tastes were publicly known, it would be easy to implement an efficient provision level. If tastes are private information, the question is whether and how “the system” can obtain the information that is needed for this purpose. Because this information must come from the individuals who hold it, the question is whether and how these individuals can be given incentives to properly reveal this information to “the system”.

The bottom line of the previous literature is that it is always possible to provide individuals with the incentive to reveal their preferences in such a way that an efficient level of public-good provision can be implemented. For this purpose, financial contributions must be calibrated to individuals’ expressions of preferences for the public good in such a way that there are neither incentives to overstate preferences for the public good in the hope that this raises the likelihood of provision at the expense of others nor incentives to understate preferences for the public good in the hope that this reduces one’s payment obligations without too much of an effect on the likelihood of provision. The mechanism design literature shows that one can always find payment schemes which satisfy this condition.[1]

However, there usually is a conflict between incentive compatibility, feasibility, i.e., the ability to raise sufficient resources for public goods, and voluntariness of participation. In some instances, it may be impossible to have a public good provided efficiently on the basis of voluntary contracting. Some coercion may be needed for efficiency. The original idea of Lindahl (1919) that the theory of public goods provides a contractarian explanation of the role of government and the state would then be invalid. Samuelson’s (1954) conjecture that private, spontaneous arrangements for efficient public good provision are not available would be vindicated. Samuelson (1954) stresses the difference between public and private goods, suggesting that private goods can be efficiently provided by markets and contracts and public goods cannot.

On this issue, the mechanism design literature is unclear. If we consider an economy with n participants with independent private values,[2] we get the same kinds of impossibility theorems for private and for public goods: On the basis of voluntary participation and in the absence of a third party providing a subsidy to “the system”, it is impossible to have a decision rule that induces an efficient allocation under all circumstances, unless the information that is available ex ante is sufficient to determine what the allocation should be.[3] If coercion is allowed, there is no problem in achieving efficiency for either kind of good.

To find a difference between public and private goods, one must look at the behaviour of such systems as the number of participants becomes large. For private goods, a larger number of participants means that there is more competition. This reduces the scope for dissembling, i.e., acting as if one cared less for a good than one actually does, in order to get a better price. With competition from others, attempts to dissemble are likely to be punished by someone else getting the good in question. Hence, there are approximation theorems showing that, for private goods, there are incentive mechanisms that induce approximately efficient allocations, even with a requirement of voluntary participation, if the number of participants is large.[4]

For public goods, there is no such competition effect. An increase in the number of participants has two different effects. On the one hand, there are more people to share the costs. On the other hand, the probability that an individual’s expression of preferences affects the aggregate decision is smaller; this reduces the scope for getting a person to contribute financially, e.g., by having an increase in financial contribution commensurate to the increase in the probability that the public good will be provided. The second effect dominates if individual valuations are mutually independent and if the cost of providing the public good is commensurate to the number of participants, e.g., if the public good is a legal system whose costs are proportional, or even more than proportional, to the number of parties who may give rise to legal disputes. In this case, the expected level of public-good provision under any incentive mechanism that relies on voluntary participation must be close to zero.[5]
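The vanishing-pivot effect can be illustrated by a small Monte Carlo sketch (ours, not part of the analyses reported here; the uniform valuation distribution and per-capita cost are hypothetical): with independent valuations, the probability that any one agent’s report flips the provision decision shrinks as the number of participants grows, which is why payments calibrated to that probability cannot raise costs that grow with the population.

```python
import random

def pivot_probability(n, trials=4000, cost_per_capita=0.5):
    """Estimate the probability that one agent's report flips the provision
    decision when valuations are i.i.d. U(0,1) and the good is provided
    iff total valuation exceeds total cost (illustrative specification)."""
    pivotal = 0
    for _ in range(trials):
        others = sum(random.random() for _ in range(n - 1))
        v = random.random()
        # Decision with the agent's valuation counted vs. reported as zero.
        provide_with = others + v > n * cost_per_capita
        provide_without = others > n * cost_per_capita
        if provide_with != provide_without:
            pivotal += 1
    return pivotal / trials

# The estimated pivot probability falls as n grows (roughly like 1/sqrt(n)).
for n in (10, 100, 1000):
    print(n, round(pivot_probability(n), 3))
```

The sketch mirrors the second effect in the text: as n rises, each agent’s expression of preferences is ever less likely to matter, so voluntary contributions tied to that influence shrink toward zero.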

Samuelson’s view about public goods versus private goods, the latter being efficiently provided by a market system, the former not being efficiently provided at all by a “spontaneous decentralized” solution, thus seems to find its proper place in a setting with many participants where, on the one hand, the forces of competition eliminate incentive and information problems in the allocation of private goods, and, on the other hand, incentive and information problems in the articulation of preferences for a public good make it impossible to get the public good financed.

However, in the transition from a finite economy to a large economy, the question of what is the proper amount of resources to be devoted to public-goods provision is lost, at least in the independent private values framework that has been used by this literature. In this framework, a version of the law of large numbers implies that cross-section distributions of public-goods valuations are commonly known. Given this information, the efficient amount of public-goods provision is also known. The only information problem that remains is the assignment problem of who has a high valuation and who has a low valuation for the public good. This assignment problem matters for the distribution of financing contributions but not for the decision on how much of the public good to provide.
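The role of the law of large numbers here can be seen in a few lines (an illustrative sketch of ours; the valuation distribution and sample sizes are hypothetical): with independent valuations, the cross-section average, and hence the efficient provision decision, is essentially pinned down in advance, so only the assignment of valuations to individuals remains uncertain.

```python
import random
import statistics

def cross_section_mean(n, seed):
    """Average valuation in one realized cross-section of n agents with
    i.i.d. uniform valuations on [0, 2] (a hypothetical distribution)."""
    rng = random.Random(seed)
    return statistics.mean(rng.uniform(0.0, 2.0) for _ in range(n))

def spread(n, draws=200):
    """Dispersion of the cross-section mean across independent economies."""
    return statistics.pstdev(cross_section_mean(n, s) for s in range(draws))

# As n grows, realized cross-sections all look alike: aggregate uncertainty
# about the efficient provision level vanishes in the i.i.d. framework.
print(spread(10), spread(10_000))
```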

C.III.1.2       Do Correlations Make Incentive Problems Disappear?

If one wants to avoid the conclusion that the proper amount of resources to be devoted to public-goods provision is known a priori because the cross-section distribution of valuations for the public good is pinned down by the law of large numbers, one must assume that the public-goods valuations of different people are correlated so that the law of large numbers does not apply. However, for models with correlated valuations, the impossibility theorems mentioned above are no longer valid. Indeed, for models with private goods, Crémer and McLean (1988) and McAfee and Reny (1992) have shown that one can use the correlations in order to prevent people from obtaining “information rents”, i.e., benefits that they must be given if they are to be induced to properly reveal their information. For public goods, Johnson, Pratt, and Zeckhauser (1990) and d'Aspremont, Crémer, and Gérard-Varet (2004) show that, generically, incentive schemes that use correlations to harshly penalize deviations when communications from different people are too much in disagreement can be used to implement first-best outcomes – with voluntary participation and without a third party providing a subsidy, at least in expected-value terms.
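The logic of such correlation-based schemes can be sketched numerically (the belief vectors and the size of the rent below are hypothetical, and the construction is only the standard textbook version of the idea): when an agent’s two types hold different beliefs about another agent’s type, one can construct a state-contingent side bet that is actuarially fair under one type’s beliefs but, suitably scaled, arbitrarily costly under the other’s, so mimicking never pays.

```python
# Beliefs of the two types of agent 1 over the two possible types of agent 2.
p_low  = [0.7, 0.3]   # low-valuation type's belief (hypothetical numbers)
p_high = [0.3, 0.7]   # high-valuation type's belief; differs, so bets separate them

def expect(p, z):
    """Expected value of the state-contingent payment z under belief p."""
    return sum(pi * zi for pi, zi in zip(p, z))

# A bet orthogonal to p_low is actuarially fair for the low type ...
z = [p_low[1], -p_low[0]]
if expect(p_high, z) > 0:
    z = [-zi for zi in z]          # orient it so the high type dislikes it

# ... and scaling it up makes mimicking the low type cost more than any rent.
rent = 10.0                        # information rent to be extracted (hypothetical)
scale = (rent + 1.0) / (-expect(p_high, z))
bet = [scale * zi for zi in z]

assert abs(expect(p_low, bet)) < 1e-9   # low type accepts at zero expected cost
assert expect(p_high, bet) < -rent      # mimicking loses more than the rent
```

The penalties involved grow without bound as beliefs become similar, which is one way of seeing why such mechanisms strike many observers as implausible.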

Incentive schemes in these analyses are not very plausible. They look more like artefacts of the mathematics than anything that might be used in reality. But then the question is what precisely is deemed to be implausible about them.

One answer to this question has been proposed by Neeman (2004) and Heifetz and Neeman (2006). In their view, the results of Crémer and McLean (1988) presume that an agent’s preferences for a good can be inferred from the agent’s beliefs about the world. In Crémer and McLean (1988), beliefs are implicitly defined as conditional expectations where the information on which expectations are conditioned consists of the agents’ preference parameters; moreover, this information can take only finitely many values. Generically, preference parameters can be inferred from beliefs, and the differences in attitudes towards bets, i.e., state-contingent payment schemes, which go along with differences in beliefs, can be used to extract all surplus. According to Heifetz and Neeman (2006), such surplus extraction is impossible if a given belief about the world might be compatible with different values of preference parameters, say a value of zero and a value of ten for the good in question. Because the person with a value of ten has the same beliefs as the person with a value of zero, it is then not possible to make the person with a value of ten reveal the high valuation and at the same time surrender the benefit that he obtains if he can enjoy the good; after all, this person could always act as if his value was zero. Neeman (2004) uses a version of this argument in order to extend the Mailath-Postlewaite (1990) theorem on the impossibility of public-good provision in a large economy with voluntary participation to a setting with correlated values. Heifetz and Neeman (2006) argue that, in the set of relevant incomplete information models, with information variables taking more than finitely many values, the “Beliefs Determine Preferences” (BDP) property of Crémer and McLean is in fact negligible.

Gizatulina and Hellwig (2010, 2014, 2017) suggest that this line of argument fails. Gizatulina and Hellwig (2010) showed that the uniform violation of BDP in Neeman (2004) is incompatible with the notion that when there are many agents, each individual agent is informationally small.[6] Gizatulina and Hellwig (2014) observed that Heifetz and Neeman (2006) did not actually study the BDP property as a property of belief functions but as a property of priors and that Heifetz and Neeman (2006) did not take account of the role of belief functions as conditional distributions. For incomplete-information models with given finite-dimensional abstract type spaces, Gizatulina and Hellwig (2014) used a version of the well-known embedding theorem for continuous functions to show that the set of continuous belief functions exhibiting the BDP property is a residual subset of the set of all continuous belief functions when this space is given the topology of uniform convergence. They also showed that this genericity result for the BDP property can be extended to vectors of belief functions (for the different agents) that are compatible with common priors.

For incomplete-information models with abstract type spaces, Gizatulina and Hellwig (2017) show that, under fairly general conditions, not only the BDP property but also the McAfee-Reny (1992) necessary and sufficient condition for full surplus extraction (FSE) is generic. The result rests on the insight that the McAfee-Reny condition can be interpreted as a strengthening of the BDP condition, namely, if one knows an agent’s beliefs, then one also knows that the agent himself knows his type, i.e., his beliefs cannot come from a non-degenerate mixture of types, and one can infer the type from the beliefs. An initial version of this result was already reported in the last report of the Institute. The final version has two important generalizations: First, it allows for beliefs to be arbitrary measures on the space of states of nature (and other agents’ characteristics); initially we only had a result for beliefs that have continuous densities with respect to some fixed measure.

Second, we allow for the space of beliefs to have any topology that is induced by a metric that is a convex function. This condition allows not only for the usual topology of weak convergence of probability measures but also for the topology that is induced by the total-variation norm. This generalization is important because it pre-empts the criticism that the topology of weak convergence of probability measures is too weak to provide for the continuity properties of strategic behaviour that are deemed to be desirable.[7]

Gizatulina and Hellwig (2017) also provide an answer to the question that was posed in the last report of how the analysis of genericity of BDP or FSE belief functions in an abstract type space setting relates to the analysis of strategic behaviour in the universal type space, i.e. the space of hierarchies of agents’ characteristics, agents’ beliefs about other agents’ characteristics, agents’ joint beliefs about other agents’ characteristics and beliefs about other agents’ beliefs, etc. In the last report, we had argued that, in the context of the universal type space, it does not make sense to talk about properties of belief functions because belief functions in the universal type space are trivially given as projections from universal types to belief hierarchies (or to the measures on other agents’ type spaces that are induced by the belief hierarchies). The question of how beliefs are generated, what information they reflect, and whether the information can be inferred from the beliefs cannot be addressed as a question about belief functions.

To overcome this objection, we introduce the concept of an information-based subset of the universal type space, i.e. a subset of the universal type space that is obtained as the image of an abstract-type space (with given belief functions) under the natural mapping that uses the vector of belief functions in the abstract-type space model to generate the hierarchy of beliefs of an agent. Given this concept, we show that the set of subsets of the universal type space for which full surplus extraction is feasible contains a set that is residual in the set of compact information-based subsets of the universal type space when this set is given the Hausdorff topology.

The underlying topology on the space of belief hierarchies can be any topology that is metrizable by a convex metric. This condition is not only satisfied by the product topology but also by the (stronger) uniform topologies proposed by Dekel et al. (2006) and Chen et al. (2010) in order to allow for the possibility that, as in Rubinstein’s (1989) e-mail game, beliefs of arbitrarily high orders in the hierarchies can be strategically important. Our genericity results are thus much stronger than analogous results in Chen and Xiong (2011, 2013), which deal with common priors on belief-closed subsets of the universal type space when that space has the product topology; their analysis relies on the denseness of finite models in the product topology, which in turn rests on the fact that the product topology assigns ever smaller weight to ever higher-order beliefs.

Whereas the universal type space involves beliefs in terms of hierarchies of beliefs of different orders, the McAfee-Reny condition for full surplus extraction treats beliefs as probabilities over other agents’ characteristics. From Mertens and Zamir (1985), it is well known that hierarchies of beliefs can be mapped into beliefs over other agents’ belief hierarchies and that this mapping is a homeomorphism if the space of belief hierarchies is given the product topology and the spaces of beliefs all have the topology of weak convergence of probability measures. Hellwig (2016) proves the analogous result when the space of belief hierarchies has the uniform strategic topology of Dekel et al. (2006) or the uniform weak topology of Chen et al. (2010). The homeomorphism theorems play an important role in the Gizatulina-Hellwig (2017) analysis of the genericity of full surplus extraction in a universal-type-space approach. A 2017 revision of Hellwig (2016) provides a considerable strengthening of the homeomorphism theorem, relying on some new mathematical results in Hellwig (2017).

Our results on the genericity of full surplus extraction should not be interpreted as saying that we regard Crémer-McLean or McAfee-Reny mechanisms as plausible, or that we consider the mechanisms of Johnson, Pratt, and Zeckhauser (1990) and d'Aspremont, Crémer, and Gérard-Varet (2004) as an appropriate basis for tackling social choice problems involving public goods. They should instead be interpreted as saying that assessments of genericity or sparseness do not provide a good basis for criticizing these mechanisms. To be effective, a criticism must dig deeper.

C.III.1.3       Robustness and Large Economy Models: Samuelson Vindicated

The ability to exploit correlations between valuations requires precise information not just about the joint distribution of the different participants’ public-good valuations, but also about the different participants’ beliefs about the other agents’ valuations, the other agents’ beliefs about the other agents’ valuations, etc. It seems implausible that a mechanism designer should have this information. Ledyard (1979) and Bergemann and Morris (2005) have proposed a robustness requirement that would eliminate the dependence of an incentive scheme on this kind of information. According to Bergemann and Morris, a social choice function, e.g. in the public-good provision problem a function mapping cross-section distributions of valuations into public-good provision levels and payment schemes, is robustly implementable if, for each specification of “type spaces”, in particular, for each specification of beliefs that agents hold about each other, one can find an incentive mechanism that implements the outcome function in question.

In public-good provision problems with quasi-linear preferences, robust implementability is, in fact, equivalent to ex post implementability and to implementability in dominant strategies. This eliminates all social choice functions whose implementation would involve an exploitation of correlations and agents’ beliefs about correlations. In particular, social choice functions with first-best outcomes are not robustly implementable. The mechanisms for first-best implementation in Johnson et al. or d’Aspremont et al. make essential use of information about beliefs, beliefs about beliefs, etc.

Given these findings, Bierbrauer and Hellwig (forthcoming) argue that the robustness criterion of Ledyard (1979) and Bergemann and Morris (2005) provides the proper setting for understanding the essence of the difference between public and private goods. All findings from the independent-private-values case carry over to robust implementation with correlated values. In particular, (i) for private goods, approximately efficient implementation is possible with voluntary participation if the number of participants is large, and (ii) for public goods with provision costs commensurate to the number of participants, hardly any provision at all is possible with voluntary participation if the number of participants is large. These results confirm Samuelson’s (1954) suggestion that private, contractual arrangements for efficient public good provision are not available and that an increase in the number of participants is likely to make the problems worse rather than better.[8]

If voluntary participation is not required, a very different conclusion is obtained. In the absence of participation constraints, one can use Groves mechanisms to implement first-best outcomes. However, with a finite number of participants, this is not possible without generating a surplus or a deficit in the public budget in some contingencies. Clarke-Groves mechanisms never yield deficits, but they sometimes involve surpluses. The reason is that each agent’s payments must be calibrated precisely to the externalities he imposes in those circumstances where he is pivotal and has a significant effect on the level of public-good provision. This calibration can be compatible with budget balance in some circumstances but not in all.
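A minimal sketch of the Clarke (pivot) variant for a binary public good may make this concrete (the equal cost-sharing rule and the numbers below are our own illustration, not a mechanism from the papers discussed here): each agent pays an equal cost share plus a pivot charge equal to the externality his report imposes when it flips the decision; receipts never fall short of the cost, but can strictly exceed it.

```python
def clarke_pivot(valuations, cost):
    """Clarke (pivot) mechanism for a binary public good with equal cost
    shares: provide iff total valuation covers the cost; a pivotal agent
    additionally pays the net surplus his report destroys for the others."""
    n = len(valuations)
    share = cost / n
    net = [v - share for v in valuations]   # valuations net of equal cost shares
    total = sum(net)
    provide = total > 0
    payments = []
    for v in net:
        others = total - v                  # net surplus of everyone else
        if (others > 0) == provide:
            pivot_charge = 0.0              # this agent's report did not flip the decision
        else:
            pivot_charge = abs(others)      # externality imposed on the others
        payments.append((share if provide else 0.0) + pivot_charge)
    return provide, payments

# Agent 1 is pivotal here: without his report the others would not provide.
provide, pay = clarke_pivot([3.0, 1.0, 0.2], cost=3.0)
# Receipts cover the cost, with a strict surplus from the pivot charge.
```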

In contrast to the problems posed by voluntary participation, the difficulties that robust, or dominant-strategy, incentive compatibility poses for budget balance become less important when there are more participants, and they disappear altogether in a large economy, with a continuum of agents. In a large economy, no agent is ever pivotal, i.e. no agent ever has a significant effect on the level of public-good provision. Robust incentive compatibility reduces to the requirement that each agent’s payment be independent of what the agent communicates to “the system” about his valuation for the public good. Thus, Bierbrauer and Hellwig (2015) show that, in a large economy, first-best implementation with budget balance can always be obtained. The result holds regardless of what is being assumed about correlation structures, so that, in contrast to the independent-private-values case, it encompasses models with aggregate, as well as individual uncertainty in which the question of how much of the public good should be provided is non-trivial.

Along the same lines, Bierbrauer’s (2014) study of the interdependence of public-good provision and income taxation with aggregate uncertainty about public-good preferences shows that, if a robustness condition is imposed, the standard procedure of having separate analyses of public-good provision and income taxation, effectively neglecting the information problems in public-good provision,[9] can be vindicated, at least if preferences are additively separable between consumption and leisure. In this case, the arguments given in Bierbrauer and Hellwig (2015) imply that, in a large economy, it is always possible to induce truthtelling about public-good preferences by having payments be independent of reported preferences; moreover, incentive-compatibility conditions do not depend on people’s beliefs about each other, i.e. they hold robustly. Given the financing needs that arise from efficient public-goods provision, an optimal income tax schedule can be determined along the lines of Mirrlees (1971) or Hellwig (2007). 

The analysis of large economies with aggregate as well as individual uncertainty involves difficult technical problems. If one thinks about uncertainty in the large economy as involving a mixture of individual and aggregate shocks, one needs an appropriate mathematical framework. The issue is how to formalize the notion of a continuum of conditionally independent random variables in such a way that cross-section distributions are well defined.

For this purpose, Hellwig (forthcoming) develops a formulation of incomplete-information games with a continuum of agents in which there is both aggregate and individual uncertainty. At the level of aggregates, individual uncertainty cancels out. This is formally derived from a (conditional) law of large numbers. However, in any such model, one must deal with the conundrum that, at least in standard formulations, there is no such thing as a continuum of non-trivial conditionally independent random variables; more precisely, while one can use Kolmogorov’s extension theorem to construct such an object, the cross-section sample realizations, e.g. the assignments of public-good valuations to individual agents, are non-measurable with probability one, so that cross-section distributions are not even well defined. Drawing on Sun’s (2006) notion of a rich Fubini extension of a product of probability spaces, Hellwig (forthcoming) shows how these difficulties can be overcome, even in a game-theoretic context where one is not just interested in the realization of the uncertainty for one randomly drawn agent but in the cross-section sample realization as a whole because that determines the constellation of actions chosen by the different agents.

In this setting, a condition of anonymity in payoff functions guarantees that agents only care about the cross-section distribution of other agents’ actions. A further condition of anonymity in beliefs ensures that agents treat other agents’ characteristics as (essentially pairwise) exchangeable random variables. Under this condition, by a version of De Finetti’s theorem, the decomposition of uncertainty into an aggregate component and an individual component arises naturally in that, conditionally on the cross-section distribution of agent characteristics, individual characteristics are (essentially pairwise) independent and identically distributed with a conditional probability distribution equal to the cross-section distribution. Given this decomposition of uncertainty, the cross-section distribution of actions depends only on the cross-section distributions of characteristics and the cross-section distribution of strategies (functions mapping characteristics into actions). A coherence condition ensures that a given belief function is compatible with some prior (which may be agent-specific) and that the belief function exhibits anonymity in beliefs at all “types” of the agent if and only if, under the prior, the different agents’ characteristics are (essentially pairwise) exchangeable random variables.

With anonymity in beliefs, all relevant aspects of an agent’s belief function are contained in his macro belief function, which maps the agent’s characteristics into probability measures over cross-section distributions of the other agents’ types. Every coherent macro belief function is compatible with an agent-specific prior, but not necessarily compatible with a common prior. Building on Hellwig (2011), the paper ends by giving necessary and sufficient conditions under which a coherent macro belief function is compatible with a common prior.

C.III.1.4       Coalition Proofness and Voting

Whereas the above-cited result in Bierbrauer and Hellwig (2015) shows that in a large economy, first-best public-good provision rules with budget balance can be implemented robustly, we are not convinced that, in the absence of participation constraints, first-best implementation is realistic. As we have argued in previous reports, we consider it reasonable to impose an additional requirement of coalition proofness. In the context of a large economy, such a requirement had originally been introduced in Bierbrauer’s (2009) analysis of the interference between preference revelation for public-good provision and for consumption-leisure choices. Bierbrauer and Hellwig (2015, 2016) adapt the idea to the public-good provision problem on its own.

The requirement of coalition proofness is motivated by the observation that robust implementation of first-best allocation rules may have to rely on people giving information that they would be unwilling to give if they appreciated the way it is being used. The above-cited result in Bierbrauer and Hellwig (2015) relies heavily on the fact that, in a large economy, where no one individual has a significant impact on the level of public-good provision, individual incentive compatibility conditions are trivially met if payments are insensitive to people’s communications about their preferences. This kind of implementation, however, abuses the notion that, if a person’s communication about his or her preferences does not make a difference to anything, then the person is indifferent between all messages and therefore may as well communicate the truth. If there were just the slightest chance that a person’s communication would make a difference, at least some people would strictly prefer not to communicate the truth.

To see why this might happen, observe that first-best implementation relies on information concerning the intensities of people’s preferences. If there is a large number of people whose benefits from the public good are just barely less than their share of the cost, first-best implementation may require that the public good be provided because the large benefits that the public good provides to a few other people are more than enough to outweigh this small shortfall. If, instead, the people who oppose the public good draw no benefit at all from it, first-best implementation may require that the public good should not be provided because the shortfall of their benefits relative to their costs is not compensated by the net benefits that are available to others. In this constellation, the overall outcome depends on information that can only be obtained from people who do not want the public good to be provided at all, namely whether their opposition is mild or strong. Truthtelling is individually incentive compatible because nobody believes that the information he provides makes a difference. However, truthtelling is not coalition-proof: If someone was to organize a coalition of opponents so as to coordinate on a manipulation of the information they provide, the overall incentive mechanism would no longer be able to provide for first-best implementation.

Work on this issue started a long time ago, and we have reported on previous versions in previous reports, see, e.g. Bierbrauer and Hellwig (2011/13). Relative to these previous versions, Bierbrauer and Hellwig (2015) contains several innovations. In particular, the condition of coalition proofness is weakened. Whereas in previous versions, we had imposed a condition of robust coalition proofness, which requires the stipulated incentive mechanism to be immune to collective deviations on all type spaces, we now impose a condition of immunity to robust collective deviations. Under the earlier concept, a deviating coalition was taken to know the environment; if the type space was a singleton, i.e. a single assignment of public-goods valuations to agents, this meant that the deviating coalition had complete information, including full information about the public-good valuations of people who were not part of the deviating coalition. By imposing a robustness condition on the collective deviations themselves, we disallow any conditioning on such information about people who are not coalition members. A robust collective deviation must be advantageous, or at least not disadvantageous, to all coalition members, regardless of what the type space may be and regardless of what the characteristics of people outside the coalition may be. For a collective deviation to block the implementation of a social choice function in such a robust manner is much more restrictive than blocking with conditioning on the type space. The set of social choice functions that can be implemented by robustly incentive-compatible mechanisms that are immune to robust collective deviations might therefore be presumed to be larger than the set of social choice functions that can be implemented by robustly incentive-compatible and robustly coalition-proof mechanisms.

Within the class of monotonic social choice functions, this intuition turns out to be false. We say that a social choice function is monotonic if the level of public-good provision it stipulates does not go down and may go up if the distribution of public-good valuations in the population is shifted “to the right” in the sense of first-order stochastic dominance. Bierbrauer and Hellwig (2015) prove that, if the public good can be provided at the level zero or the level one, then a monotonic social choice function can be implemented by a robustly incentive-compatible mechanism that is immune to robust collective deviations if and only if (i) the payments people must make are independent of their own characteristics and depend only on the level at which the public good is provided and (ii) the level at which the public good is provided depends only on the population shares of the set of proponents and the set of opponents of provision, i.e. the set of people whose valuations exceed the difference between the payments at the two outcomes and the set of people whose valuations fall short of that difference. Such social choice functions can in fact be implemented by voting mechanisms, i.e. by asking people who is for and who is against the provision of the public good and providing the public good if the votes for provision exceed a specified threshold (not necessarily 50%).
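A minimal sketch of such a voting mechanism (in Python; the threshold and payment levels are hypothetical parameters, not values from the paper): payments depend only on the provision level, and the decision depends only on the population share of proponents.

```python
def binary_voting_mechanism(valuations, payment_if_provided, payment_if_not, threshold):
    """Sketch of the mechanisms characterized in the text: provide the good iff
    the population share of proponents meets the threshold (not necessarily 50%).
    Payments depend only on the provision level, never on individual reports."""
    price_difference = payment_if_provided - payment_if_not
    proponents = sum(1 for v in valuations if v > price_difference)
    provide = proponents / len(valuations) >= threshold
    payment = payment_if_provided if provide else payment_if_not
    return provide, payment  # every agent makes the same payment

# 40% proponents against a 50% threshold: the good is not provided.
print(binary_voting_mechanism([2.0] * 40 + [0.0] * 60, 1.0, 0.0, 0.5))  # (False, 0.0)
```

Because no report can change an individual's own payment, truthful voting is trivially incentive compatible, which is the property (i) in the characterization above.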

Whereas Bierbrauer and Hellwig (2011/13) considered only the case of two provision levels, Bierbrauer and Hellwig (2015) allow for an arbitrary number of provision levels with non-decreasing marginal provision costs. We introduce the additional condition that, if all participants claim to have either the minimal or the maximal possible valuation for the public good, then, as the population share of the people claiming the maximum goes from zero to one, the public-good provision level stipulated by the social choice function goes from zero to n, taking each of the values 0, 1, …, n in turn. A social choice function that satisfies this condition, in addition to monotonicity and equal cost sharing, can be implemented by a robustly incentive-compatible mechanism that is immune to robust collective deviations if and only if there exists a non-decreasing sequence of thresholds such that the public good is provided at level k if and only if, in a binary vote between levels k-1 and k, the threshold for the higher level is met and, in a binary vote between levels k and k+1, the threshold for the higher level is not met.
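The threshold characterization can be sketched as a sequence of step-up votes (an illustrative reading of the result, with hypothetical numbers; it assumes the binary votes between adjacent levels are taken from below):

```python
def provision_level(step_up_shares, thresholds):
    """step_up_shares[k]: population share voting for level k+1 over level k.
    thresholds: non-decreasing sequence, one per step.  The good is provided
    at the level k whose step-up vote from k-1 met its threshold while the
    step-up vote from k to k+1 does not meet its threshold."""
    k = 0
    while k < len(thresholds) and step_up_shares[k] >= thresholds[k]:
        k += 1
    return k

# Three possible upward steps with rising thresholds: provision stops at level 2,
# because the vote for the third step (0.4) falls short of its threshold (0.7).
print(provision_level([0.8, 0.65, 0.4], [0.5, 0.6, 0.7]))  # 2
```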

If the additional condition on the social choice function is not imposed, it is still the case that robustly incentive-compatible mechanisms that are immune to robust collective deviations must be voting mechanisms, but these mechanisms can have more complex structures. Preliminary investigations indicate that the same is true if public-good provision involves decreasing, rather than increasing, marginal costs. (The conjecture in our previous report that decreasing marginal costs may give rise to voting paradoxes seems to be false.)

We consider these results to be important because they provide a link between welfare economics/mechanism design and political decision making, bridging a gap that has in the past caused a complete disconnect between economics and political science, and even between public economic theory and political economy. From the perspective of welfare economics, or of public economic theory, voting mechanisms have always been suspect, or even an object of scorn, because they pay no attention to intensities of preferences. Thus it is easy to show that voting can lead to inefficient outcomes, for example if there are many yea-sayers who do not really care very much and a few nay-sayers who care a great deal. The scorn is misplaced, however, if information about preference intensities cannot be obtained in a reliable manner. Our results indicate that, if communication about preference intensities is vulnerable to distortions by coalitions, then indeed voting mechanisms may be the only ones one can use.

In the editorial process for Bierbrauer and Hellwig (2011/13), we had been asked to show that our analysis also applies to large finite economies. Once we had done so – for robustly implementable and robustly coalition-proof mechanisms – the referees and the editor asked us to drop the large-economy part altogether. The referees were experts in mechanism design and social choice and could not have cared less about the economics of public-good provision, let alone the need to have a large-economy approach that would allow us to integrate public-good provision with standard analyses of income taxation and of commodity taxation in competitive markets, which presume a continuum of agents. Thus, Bierbrauer and Hellwig (2016) contains the result that, in a model with finitely many participants and two levels of the public good, robustly incentive-compatible and robustly coalition-proof mechanisms must be voting mechanisms and, conversely, any voting mechanism is robustly incentive-compatible and robustly coalition-proof.

Relative to this result for finite economies, the large-economies result in Bierbrauer and Hellwig (2015) has several advantages: First, the concept of coalition proofness is weaker, so the finding that coalition proofness implies use of a voting mechanism is more surprising. In Bierbrauer and Hellwig (2016), coalitions that condition on complete-information type spaces play a key role. Second, Bierbrauer and Hellwig (2016) have to assume that coalitions do not use side payments between members. In the large-economy analysis of Bierbrauer and Hellwig (2015), we actually show that side payments in a coalition must be zero. However, since robustness in the transition from finite to large economies is important, we will have to come back to this issue for the weaker concept of coalition proofness in Bierbrauer and Hellwig (2015).

C.III.1.5       Taxation

In past work, we had addressed the role of taxes as a source of funding for public goods. In particular, Hellwig (2004/2009) had argued that the traditional three-way split between the theory of mechanism design and public-good provision, the theory of public-sector pricing under a government budget constraint, and the theory of redistributive taxation (income taxation) should be replaced by a two-way split between models with and models without participation constraints, where taxes play a role in both, as a source of funding for public goods under participation constraints and as a means of redistribution when there is inequality aversion. Over the past four years, we have not added to this work but have made several contributions to the theory of taxation, especially in connection with political competition.

Within a Ramsey-Boiteux setting, Aigner and Bierbrauer (2014) study the problem of how to tax financial services, a question that has been prominent in recent policy debate. They use a model of “boring banking”, in which the bank uses some inputs to provide services for depositors and some other inputs to screen loan customers, to study optimal taxation in a general-equilibrium setting. Under the assumptions of perfect competition and constant returns to scale, they find that a variety of “different” modes of taxation that have been considered in the policy debate are in fact equivalent. The differences that have been stressed in the policy debate have in fact been due to differences in revenue raised by the government and in utility obtained by the private participants. Once these differences are corrected for, different modes of taxation that end up having the same effects on margins between final outputs and final inputs are shown to be equivalent. Matters are different if there are rents, from monopoly power or from decreasing returns to scale. In this case, the different tax modes that have been proposed may differ with respect to their impact on rents, but the logic of Ramsey, Boiteux, Diamond, and Mirrlees, which demands that these rents should be taxed away, still dominates the analysis.

Aigner (2014) studies the interaction of distributive and allocative concerns in the context of environmental taxation, which might have adverse distributive effects. The problem is considered in a standard Mirrleesian framework of optimal income taxation with two productivity groups, augmented by a second consumption good, which induces a negative environmental externality. The analysis of optimal taxation is done once in a setting with first-best income taxation and once in a setting with second-best income taxation à la Mirrlees. After identification of a term in the formalism that can be taken to stand for the “greenness” of the Pigouvian tax on the good with the negative externality (which is an issue because, in a general-equilibrium setting, there is no natural numéraire), the paper shows that, somewhat surprisingly, an increase in the welfare weight of the less productive group makes the “greenness” term go up if a first-best allocation is to be implemented and go down if a second-best interior allocation is to be implemented. The reasons have little to do with the political considerations that originally motivated the analysis and a lot with the effects of the welfare weight of the low-productivity group on the shadow price of the resource constraint: In a first-best allocation, only high-productivity people work; if the welfare weight of these high-productivity people goes down, the shadow price of the resource constraint goes down because it is less problematic to have these people work extra. In a second-best interior allocation, in contrast, the shadow price of the resource constraint goes up because with more redistribution, deadweight losses from having to satisfy incentive constraints are higher.

Hansen (2017) studies a generalization of the classical model of optimal utilitarian income taxation, combining the formulations of Mirrlees (1971), which had looked at labour-leisure trade-offs at the intensive margin where people decide how many hours to work, and Diamond (1980), which had looked at labour-leisure trade-offs at the extensive margin where people decide whether to take up a job or not. Hansen (2017) allows for choices to concern both hours worked and whether to take a job or not; he assumes that heterogeneity across agents involves two parameters, one that is relevant for decisions at the intensive margin and one that is relevant for decisions at the extensive margin. For this specification, he shows that the sign of the optimal marginal income tax is indeterminate.  The classical result that labour supply of all skill groups except for the top is distorted downwards at both the intensive and the extensive margin, and labour supply at the top is undistorted at the intensive margin, but is distorted downwards at the extensive margin, holds for some specifications, but not for all. For some specifications, it may be the case that, at the utilitarian optimum, labour supply of all skill groups is undistorted at the intensive margin and labour supply of some skill group is distorted upwards at the extensive margin. And so on: There is a plethora of possible constellations; optimal marginal tax rates depend not only on the trade-off between distributive concerns and efficiency concerns but also on the trade-off between efficiency concerns at the intensive margin and efficiency concerns at the extensive margin. The original conclusions of Mirrlees and Diamond, that optimal marginal income tax rates are everywhere non-negative and optimal taxation induces only downward distortions in labour, are however restored if only one of the two dimensions of heterogeneity is private information of the person involved, and the other dimension is publicly observed. 
This is true regardless of which of the two dimensions is publicly known and which is private information. The observation that upward distortions at the extensive margin might be desirable bears on the discussion about the earned-income tax credit in the United States, which subsidizes work by people at the lower end of the income distribution – an arrangement which would be undesirable in both the formulation of Mirrlees (1971) and the formulation of Diamond (1980).

A critical survey of the literature on optimal utilitarian taxation in the tradition of Mirrlees (1971) and Diamond (1980) is given in Bierbrauer (2016).

C.III.1.6       Political Competition and Voting

In a series of papers, Bierbrauer and Boyer (2013, 2014, 2016) have considered the impact of political competition on taxation. Bierbrauer and Boyer (2013, 2014) do so in a Mirrleesian setting with only two values of the productivity parameter and potential ability differences between politicians. Assuming that the low-productivity group is larger, they find a tradeoff between distributive concerns and concerns about the ability of the politicians. Outcomes depend on parameter constellations. The leading case is shown to be one where the optimal Mirrleesian income tax for a Rawlsian welfare function is implemented.

Bierbrauer and Boyer (2016) modify the analysis by introducing the possibility of targeted transfers, i.e. the use of funds raised to provide subsidies to any targeted set of participants. With two politicians competing for votes, they find that any symmetric equilibrium must induce an allocation that is efficient in the sense that it maximizes overall surplus (including surplus from public-good provision). If voters are risk-averse, some insurance/redistribution might be desirable, but there is a surplus-maximizing policy with randomized subsidies to participants that wins a majority against any welfare-maximizing policy.

Hansen (2014) studies two models of political competition. The first model stands in the tradition of Downs (1957), with voters arrayed on a Hotelling line, where voters rank policies according to their closeness to their location. The new feature of the analysis is in the endogenous formation of party membership and the endogenous choice of party candidates. This contrasts with the original work of Downs, which simply assumed that there are two parties without considering the membership of these parties. It also contrasts with the work of Besley and Coate (1997), who considered political competition between individual “citizen candidates”. Apart from injecting an element of realism, the consideration of endogenous party membership and endogenous candidate selection allows the author to obtain qualitatively new insights. Whereas Downs had argued that parties trying to maximize their shares of the vote will choose their programs so as to congregate in the centre and the citizen candidate model of Besley and Coate yields equilibrium policy programs at the two ends of the Hotelling line, Hansen’s model yields outcomes between the two extremes, with both a minimum distance and a maximum distance between the candidates whom the parties put up for the general election. The minimum distance is given by the requirement that party members must be motivated, which is only possible if they see a genuine difference between the two parties. The maximum distance is given by the Downsian concern that extremism is bad for attracting votes.
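The Downsian benchmark against which Hansen's minimum-distance result contrasts can be illustrated with a toy computation (uniform voters on [0, 1]; a textbook illustration, not Hansen's model): the median position is the unique position against which no deviation gains votes, so vote-share maximizers congregate at the centre.

```python
def vote_share(own, rival):
    """Share of voters on [0, 1] (uniform) strictly closer to `own`;
    candidates at identical positions split the electorate evenly."""
    if own == rival:
        return 0.5
    cutoff = (own + rival) / 2
    return cutoff if own < rival else 1 - cutoff

# Against a rival at the median, no position on a fine grid does better than 0.5.
grid = [i / 100 for i in range(101)]
print(max(vote_share(p, 0.5) for p in grid))  # 0.5
print(vote_share(0.2, 0.5))                   # an extremist wins only 35% of the vote
```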

In the other model of Hansen (2014), voters have the same preferences and have to choose between two candidates without knowing which candidate is more competent. The underlying policy question is whether to engage in a reform or not. The reform is costly, and the cost is only worth it if the reform succeeds; the probability of success depends on the competence of the politician in charge. The question posed is to what extent uncertainty about candidate quality can justify a form of power sharing. Power sharing and power concentration are modelled by assuming that each candidate gets a share of power that depends on the candidate’s share of the vote and on a parameter characterizing the extent to which the political system allows for a concentration of power with the winner of the vote. The analysis shows that, if there is no uncertainty about the candidates’ ability levels, full concentration of power is desirable. In this case, a candidate proposes reform if and only if he is able enough that the expected net benefit of the reform is positive. With uncertainty about candidates’ ability levels, full concentration of power with the winning candidate is still welfare maximizing if holding the office provides only small intrinsic benefits to candidates. If keenness to hold the office exceeds a certain threshold, however, full power concentration is no longer welfare-maximizing and some power sharing is desirable, the more so the greater the “keenness parameter” is. The theoretical analysis is complemented by an empirical study showing that, in a cross-section of countries with similar democratic systems but different degrees of power concentration for election winners, per capita GDP growth in the years 2004–2011 (interpreted as a measure of policy efficiency) is significantly negatively correlated with a variable that is given by the product of power concentration and office motivation of politicians.

Bruns and Himmler (2010, 2011, 2016) concern the impact of media presence on the performance of elected public officials. The earlier papers had provided empirical evidence showing that performance is positively affected by media presence: Bruns and Himmler (2010) showed that public spending in a county in the US is positively affected if a television station is located in that county; Bruns and Himmler (2011) showed that the performance of local government in Norway is better the larger the local newspaper circulation. The new paper, Bruns and Himmler (2016), develops a theoretical model to investigate the willingness of voters to pay for media that provide information about (local) government. Whereas the standard argument about people being “rationally uninformed” suggests that people would not spend money on such media because whatever they do with the information they obtain will not have an effect on the outcome, this paper shows that some spending on such media (albeit inefficiently little) can be supported if people have a sense of belonging to a group with homogeneous interests and perceive that the information affects the choices of all group members. There may thus be an interesting link between the empirical findings in Bruns and Himmler (2010, 2011) and our theoretical work on coalition formation as an important element in public decision making.

C.III.1.7       Further Work on Incentive Mechanisms and Governance

Several papers address issues of incentive mechanisms and governance that do not directly fall into the program that was outlined in Sections C.III.1.1 – C.III.1.4. Thus, Bierbrauer and Netzer (2016) study the implications of agents’ having social preferences à la Fehr-Schmidt for the implementability of social choice functions. Under an additional assumption on the desire for insurance, incomplete information of the mechanism designer about the weight given to reciprocity concerns does not upset implementability. With complete information about the weight given to reciprocity, all efficient social choice functions are shown to be implementable.

Gorelkina (2014) is concerned with the well-known vulnerability of the Vickrey (second-price) auction to collusion. To preclude collusion, the paper proposes two modifications. First, a so-called “gap rule” stipulates that the Vickrey rule is applied if and only if the gap between the highest bid and the second-highest bid exceeds the gap between the second-highest bid and the third-highest bid. Otherwise the bidder with the second-highest bid receives the object and pays the third-highest bid. Second, the auction is split into two rounds. In the first round, participants make bids, and each participant names a “target”, some other participant or himself. If the two top bidders self-target, the Vickrey auction is played; otherwise the gap rule is applied. For a game in which the actual gap rule/target bids auction is preceded by a stage in which the players know their values and can communicate and conclude a cartel agreement, the paper shows that the Vickrey outcome is a Bayes-Nash equilibrium outcome of the game. The modifications introduced by the gap rule and by target bids preclude active collusion, which would not be true if the coalition-formation stage were followed by a simple Vickrey auction.
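The allocation rule just described can be sketched as follows (a reading of the verbal description above, with hypothetical bids; the full mechanism in the paper also covers the preceding communication stage):

```python
def gap_rule_target_auction(bids, targets):
    """Sketch of the two-round rule described in the text.
    bids: dict bidder -> bid; targets: dict bidder -> named bidder (possibly self).
    If the two top bidders name themselves, the ordinary Vickrey rule applies.
    Otherwise the gap rule applies: the top bidder wins (paying the second-highest
    bid) only if the top gap exceeds the next gap; otherwise the second-highest
    bidder wins and pays the third-highest bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    first, second, third = ranked[0], ranked[1], ranked[2]
    both_self_target = targets[first] == first and targets[second] == second
    top_gap_exceeds = bids[first] - bids[second] > bids[second] - bids[third]
    if both_self_target or top_gap_exceeds:
        return first, bids[second]   # Vickrey outcome
    return second, bids[third]       # gap rule penalizes a compressed top of the bids

# Self-targeting top bidders: the plain Vickrey outcome obtains.
print(gap_rule_target_auction({'a': 10, 'b': 7, 'c': 2},
                              {'a': 'a', 'b': 'b', 'c': 'c'}))  # ('a', 7)
```

The point of the construction is that a cartel trying to depress the winning price compresses the top of the bid distribution, which the gap rule turns against the cartel.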

Gorelkina (2015) considers the effects of level-k reasoning on equilibrium outcomes of games played under the expected-externality mechanism of d’Aspremont and Gérard-Varet (1979). “Level-k reasoning” occurs when agents go only through a finite number of iterations in thinking about what other agents think about what other agents think, etc. While the original expected-externality mechanism is likely to fail to implement an efficient social choice function in this environment, the paper shows that this mechanism can be adjusted to restore efficiency.
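Level-k reasoning is easiest to see in the standard p-beauty-contest illustration (a textbook example, not the expected-externality mechanism itself): a level-0 player anchors on some naive guess, and each higher level best-responds to the level below.

```python
def level_k_guess(k, p=2/3, level0_guess=50.0):
    """In a p-beauty contest (guess p times the average guess), level-0 anchors
    at `level0_guess` and level-k best-responds by guessing p times the guess
    it attributes to level-(k-1).  Only as k grows without bound do guesses
    approach the Nash equilibrium of 0."""
    guess = level0_guess
    for _ in range(k):
        guess *= p
    return guess

print([round(level_k_guess(k), 1) for k in range(4)])  # [50.0, 33.3, 22.2, 14.8]
```

Because real play stops at some finite k, a mechanism designed for fully rational agents can misfire, which is the difficulty the adjusted mechanism addresses.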

Hanousek and Kochanova (2016) study the effects of corruption on efficiency. In an empirical analysis of multi-country data, they reconcile the apparent conflicts between previous assessments by showing that a higher mean index for corruption in a country goes along with weaker firm performance (sales and productivity growth) but a higher measure of dispersion in corruption goes along with better firm performance. Fungacova, Kochanova and Weill (2015) show that indices of bribery in local environments go along with higher levels of firm indebtedness to banks. However, this effect involves mainly short-term rather than long-term debt, which explains why it does not translate into effects on long-term investments. The effect is stronger the lower the level of financial development, the market share of private banks, and the market share of foreign banks. Further papers by Jerbashian and Kochanova (2016, 2017) show that bureaucratic barriers to entry into doing business go along with lower rates of investment in ICT and that higher levels of ICT development go along with more intense competition in services. Both papers together provide a picture of strategic complementarity between ICT investment and economic activities that rely on ICT. From a theoretical point of view, this complementarity is not surprising, but it is remarkable that the picture is clearly confirmed by what is after all a set of very noisy data. Most recently, Hasnain, Kochanova, and Larson (2016) show that having the ICT infrastructure and putting government procedures on the web can have a significant impact on such matters as tax compliance and corruption.


C.III.1.8       References

Aigner, R. (2014). Environmental Taxation and Redistribution Concerns, Finanzarchiv/Public Finance Analysis, 70, 249–277 (Preprint 2011/17)

Aigner, R. & Bierbrauer, F. (2014). Taxing Wall Street: The Case of Boring Banking, mimeo, Max Planck Institute for Research on Collective Goods and University of Cologne, CESifo Working Paper 5309

d'Aspremont, C. & Gérard-Varet, L. A. (1979). Incentives and Incomplete Information, Journal of Public Economics, 11, 25–45

d'Aspremont, C., Crémer, J. & Gérard-Varet, L. A. (2004). Balanced Bayesian Mechanisms, Journal of Economic Theory, 115, 385–396

Bergemann, D. & Morris, S. J. (2005). Robust Mechanism Design, Econometrica, 73, 1771–1813

Besley, T. & Coate, S. (1997). An Economic Model of Representative Democracy, Quarterly Journal of Economics, 112, 85–114

Bierbrauer, F. (2009). Optimal Income Taxation and Public Goods Provision with Endogenous Interest Groups, Journal of Public Economic Theory, 11, 311–342 (MPI Preprint 2006/24)

Bierbrauer, F. (2014). Optimal Tax and Expenditure Policy with Aggregate Uncertainty, American Economic Journal: Microeconomics, 6, 205–257 (MPI Preprint 2008/39)

Bierbrauer, F. (2016). Ungleiche Einkommen, ungleiche Vermögen und optimale Steuern, Perspektiven der Wirtschaftspolitik, 17(1), 2–24

Bierbrauer, F. & Boyer, P. (2013). Political Competition and Mirrleesian Income Taxation: A First Pass, Journal of Public Economics, 103 (2013), 1–14

Bierbrauer, F. & Boyer, P. (2014). The Pareto Frontier in a Simple Model of Mirrleesian Income Taxation, Annals of Economics and Statistics, 113/114, 185–206 (MPI Preprint 2010/16)

Bierbrauer, F. & Boyer, P. (2016). Efficiency, Welfare and Public Policy, Quarterly Journal of Economics, 131, 461–518

Bierbrauer, F. & Netzer, N. (2016). Mechanism Design and Intentions, Journal of Economics Theory, 163, 557–603

Bierbrauer, F. & Hellwig, M. F. (2011/2013). Mechanism Design and Voting for Public-Good Provision, Preprint 2011/31, Max Planck Institute for Research on Collective Goods, Bonn 2011, revised 2013

Bierbrauer, F. & Hellwig, M. F. (2015). Public-Good Provision in Large Economies, Preprint 2015/12, Max Planck Institute for Research on Collective Goods, Bonn

Bierbrauer, F. & Hellwig, M. F. (2016). Robustly Coalition-Proof Incentive Mechanisms for Public Good Provision are Voting Mechanisms and Vice Versa, Review of Economic Studies, 83, 1440–1464 (MPI Preprint 2015/11, 2011/31)

Bierbrauer, F. & Hellwig, M. F. (forthcoming). The Theory of Incentive Mechanisms and the Samuelson Critique of a Contractarian Approach to Public-Good Provision, Max Planck Institute for Research on Collective Goods, Bonn

Boadway, R. & Keen, M. (1993). Public Goods, Self-Selection, and Optimal Income Taxation, International Economic Review 34, 463–478

Bruns, C. & Himmler, O. (2010). Media Activity and Public Spending, Economics of Governance, 11 (4), 309–332

Bruns, C. & Himmler, O. (2011). Newspaper Circulation and Local Government Efficiency, Scandinavian Journal of Economics, 113 (2), 470–492

Bruns, C. & Himmler, O. (2016). Mass Media, Instrumental Information, and Electoral Accountability, Journal of Public Economics, 134, 75–84

Chen, Y. & Xiong, S. (2011). The Genericity of Beliefs-Determine-Preferences Models Revisited, Journal of Economic Theory, 146 (2011), 751–761

Chen, Y. & Xiong, S. (2013). Genericity and Robustness of Full Surplus Extraction, Econometrica, 81 (2013), 825–847

Chen, Y., Di Tillio, A., Faingold, E. & Xiong, S. (2010). Uniform Topologies on Types, Theoretical Economics, 5, 445–478

Clarke, E. H. (1971). Multipart Pricing of Public Goods, Public Choice, 11, 17–33

Crémer, J. & McLean, R. (1988). Full Extraction of the Surplus in Bayesian and Dominant Strategy Auctions, Econometrica, 56, 1247–1257

Dekel, E., Fudenberg, D. & Morris, S. (2006). Topologies on Types, Theoretical Economics, 1, 275–309

Diamond, P. A. (1980). Income Taxation with Fixed Hours of Work, Journal of Public Economics, 13, 101–110

Downs, A. (1957). An Economic Theory of Democracy, Harper & Brothers, New York

Fungacova, Z., Kochanova, A. & Weill, L. (2015). Does Money Buy Credit? Firm Level Evidence on Bribery and Bank Debt, World Development, 68, 308–322

Gizatulina, A. & Hellwig, M. F. (2010). Informational Smallness and the Scope for Extracting Information Rents, Journal of Economic Theory, 145 (2010), 2260–2281

Gizatulina, A. & Hellwig, M. F. (2014). Beliefs, Payoffs, Information: On the Robustness of the BDP Property in Models with Endogenous Beliefs, Journal of Mathematical Economics, 51, 135–153 (MPI Preprint 2011/28)

Gizatulina, A. & Hellwig, M. F. (2017). The Generic Possibility of Full Surplus Extraction in Models with Large Type Spaces, Journal of Economic Theory, 170, 385–416 (MPI Preprints 2017/02, 2015/08)

Gorelkina, O. (2014). Bidder Collusion and the Auction with Target Bids, Preprint 2014/10, Max Planck Institute for Research on Collective Goods, Bonn

Gorelkina, O. (2015). The Expected Externality Mechanism in a Level-k Environment, Preprint 2015/03, Max Planck Institute for Research on Collective Goods, Bonn, forthcoming, in: International Journal of Game Theory

Grafenhofer, D. & Kuhle, W. (2016). Observing Each Others’ Observations in a Bayesian Coordination Game, Journal of Mathematical Economics, 67, 10–17 (MPI Preprint 2015/18)

Groves, T. (1973). Incentives in Teams, Econometrica, 41, 617–631

Güth, W. & Hellwig, M. F. (1986). The Private Supply of a Public Good, Journal of Economics/ Zeitschrift für Nationalökonomie, Supplement 5, 121–159

Hanousek, J. & Kochanova, A. (2016). Bribery Environment and Firm Performance: Evidence from Central and Eastern European Countries, European Journal of Political Economy, 43, 14–28

Hansen, E. (2014). Essays in Public Economics, Doctoral Dissertation, University of Bonn

Hansen, E. (2017). Optimal Income Taxation with Labor Supply Responses on Two Margins: When is an Earned Income Tax Credit Optimal?, Preprint 2017/10, Max Planck Institute for Research on Collective Goods, Bonn

Hasnain, Z., Kochanova, A. & Larson, B. (2016). Does E-Government Improve Government Capacity? Evidence from Tax Administration and Public Procurement, Policy Research Working Paper Series No. 7657, forthcoming in: World Bank Economic Review

Heifetz, A. & Neeman, Z. (2006). On the Generic (Im-)Possibility of Full Surplus Extraction in Mechanism Design, Econometrica, 74, 213–233

Hellwig, M. F. (2003). Public-Good Provision with Many Participants, Review of Economic Studies, 70, 589–614

Hellwig, M. F. (2004/2009). Optimal Income Taxation, Public-Goods Provision and Public-Sector Pricing: A Contribution to the Foundations of Public Economics, Preprint 2004/14, Max Planck Institute for Research on Collective Goods, Bonn, revised 2009

Hellwig, M. F. (2007). A Contribution to the Theory of Optimal Utilitarian Income Taxation, Journal of Public Economics, 91, 1449–1477

Hellwig, M. F. (2011). Incomplete-Information Models of Large Economies with Anonymity: Existence and Uniqueness of Common Priors, Preprint 2011/08, Max Planck Institute for Research on Collective Goods, Bonn, 2011

Hellwig, M. F. (2016). A Homeomorphism Theorem for the Universal Type Space with the Uniform Weak Topology, Preprint 2016/17, Max Planck Institute for Research on Collective Goods, Bonn, revised May 2017

Hellwig, M. F. (2017). Probability Measures on Product Spaces with Uniform Metrics, Preprint 2017/05, Max Planck Institute for Research on Collective Goods, Bonn

Hellwig, M. F. (forthcoming). Incomplete-Information Games in Large Populations with Anonymity, Preprint, Max Planck Institute for Research on Collective Goods, Bonn

Jerbashian, V. & Kochanova, A. (2016). The Impact of Doing Business Regulation on Investments in ICT, Empirical Economics, 50, 991–1008

Jerbashian, V. & Kochanova, A. (2017). The Impact of Telecommunications Technology on Competition in Services and Goods Markets: Empirical Evidence, Scandinavian Journal of Economics, 119, 628–655

Johnson, S., Pratt, J. W. & Zeckhauser, R. J. (1990). Efficiency Despite Mutually Payoff-Relevant Private Information: The Finite Case, Econometrica, 58, 873–900

Ledyard, J. (1979). Incentive Compatibility and Incomplete Information, Journal of Economic Theory, 18, 171–189

Lindahl, E. (1919). Die Gerechtigkeit der Besteuerung, Gleerupska, Lund

Mailath, G. & Postlewaite, A. (1990). Asymmetric Information Bargaining Procedures with Many Agents, Review of Economic Studies, 57, 351–367

McAfee, R. P. & Reny, P. J. (1992). Correlated Information and Mechanism Design, Econometrica, 60, 395–421

Mertens, J.-F. & Zamir, S. (1985). Formulation of Bayesian Analysis for Games with Incomplete Information, International Journal of Game Theory, 14, 1–29

Mirrlees, J. A. (1971). An Exploration in the Theory of Optimum Income Taxation, Review of Economic Studies, 38, 175–208

Myerson, R. B. & Satterthwaite, M. A. (1983). Efficient Mechanisms for Bilateral Trading, Journal of Economic Theory, 29, 265–281

Neeman, Z. (2004). The Relevance of Private Information in Mechanism Design, Journal of Economic Theory, 117, 55–77

Palfrey, T. R. & Srivastava, S. (1986). Private Information in Large Economies, Journal of Economic Theory, 39, 34–58

Rubinstein, A. (1989). The electronic mail game: Strategic behavior under 'almost common knowledge', American Economic Review, 79, 385–391

Samuelson, P. A. (1954). The Pure Theory of Public Expenditure, Review of Economics and Statistics, 36, 387–389

Sun, Y. (2006). The exact law of large numbers via Fubini extension and characterization of insurable risks, Journal of Economic Theory, 126, 31–69

Wilson, R. (1985). Incentive Efficiency of Double Auctions, Econometrica, 53, 1101–1116


C.III.2        Financial Stability, Financial Regulation, and Monetary Policy

C.III.2.1       Systemic Risk and Macro-Prudential Policy

Discussions of collective goods do not usually refer to the financial sector. However, collective-goods aspects play an important role in arguments about statutory regulation in this sector. “Systemic risk” has always had a prominent place in discussions about banking regulation and supervision. In the past, however, references to systemic risk were more a matter of lip service than of substance,[10] but the financial crisis of 2007–2009 has put systemic risk squarely on the agenda of regulators and scholars. “Systemic risk” has even become a legal term, which appears in the European Union’s regulations concerning the European Systemic Risk Board (ESRB) and the capital requirements for banks.[11] Thus the Capital Requirements Directive of 2013 explicitly provides for a “number of tools to prevent and mitigate macro-prudential and systemic risk”. The rapidity with which systemic-risk concerns have been put into legal norms is remarkable because as yet there is no clear understanding of what the term refers to. In some utterances, “systemic risk” refers to risks to the real economy that might come from the financial system; in others, to risks to the financial system (as a whole) that might come from shocks to the real economy (business cycle risk, interest rate risk, exchange rate risk); in more traditional economic analyses, the term used to refer to risks arising from propagation mechanisms inside the financial system that might lead the difficulties of one institution to take down the entire system.

The term “macro-prudential”, a younger cousin of “systemic risk”, is no clearer. Is “macro-prudential” policy concerned with protecting the macro-economy, or is it concerned with protecting the financial system from the macro-economy?
In an upswing, when exuberance encourages risk taking in the financial system along with an expansion in the real economy that may turn out to be a bubble, the two concerns are aligned; but in a slump, when both the banking system and the macro-economy are weak, they may be in conflict. In such a situation, should macro-prudential policy encourage bank lending even if that means taking on risks that may prove deadly? Or should macro-prudential policy focus on restoring bank health, in the hope that healthy banks will take up new lending if and when such lending promises adequate returns?

Hellwig (2014c) provides a systematic account of the issues and discusses possible implications for the design of institutions and policies. The account begins with an overview of the different propagation mechanisms that have been observed:

  • domino effects from defaults on contracts, e.g. Lehman Brothers vis-à-vis the money market fund Reserve Primary;
  • repercussions of the disappearance of potential contracting parties, e.g. Lehman Brothers as a market maker in derivatives in London or money market fund investors as a source of funding for money market funds and ultimately money markets;
  • information contagion as the difficulties of one institution are taken to provide information about other institutions, e.g., Lehman Brothers about other investment banks, Reserve Primary about other money market funds, Greece about Portugal;
  • hysteria contagion as the difficulties of one institution make people afraid that other investors might draw inferences about other institutions, and everyone begins to run;
  • asset price contagion (fire sale contagion), as asset sales by one institution taking defensive measures to reduce threats to its solvency or its liquidity depress asset prices and thereby impose (fair-value accounting) losses on all institutions that hold similar assets;
  • market breakdowns and freezes, as an extreme form of disappearance of potential contracting parties.

The paper stresses the highly contingent nature of these mechanisms. For example, the strength of an asset price contagion mechanism depends on what shape the potential buyers are in, what confidence the potential buyers have about the underlying fundamentals, and what expectations they might have about the dynamics of ongoing downward movements in markets. The paper also points out that the different contagion mechanisms are likely to appear together, with combinatorics of potential interactions that defy any ex ante analysis, let alone any attempt to provide a comprehensive analysis by putting such mechanisms into a standard dynamic stochastic general equilibrium macro model and calibrating the model dynamics.

As an alternative, Hellwig (2014c) suggests investigating the system’s exposure to macro shocks. In simple cases, such as the Scandinavian banks around 1990, such an analysis is easy because the different banks had parallel exposures, e.g. with significant maturity transformation implying high vulnerability to the direct and indirect effects of increases in interest rates. In more complicated cases, some of the exposures might be hidden in counterparty credit risk or in asset price contagion risks. For example, adjustable-rate mortgage lending provides the lender with a hedge against the risk that market rates of interest might go up, but exposes the lender to a risk of borrower default if market rates of interest go up, the interest rate on the mortgage is adjusted accordingly, the borrower is unable or unwilling to pay, and real-estate prices are in decline (as one would expect when market rates of interest are high). Hellwig (2014c) argues that, since most serious financial crises have been associated with macro shocks, investigating institutions’ exposures to macro shocks, including those exposures that arise from imperfect hedging or from participants’ fooling the supervisors and themselves about what their true exposures are, may be a better way to assess overall system risk exposure than a fixed calibration of propagation mechanisms in the context of a given macro model.
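The adjustable-rate example lends itself to a back-of-the-envelope check. The following sketch uses purely illustrative numbers (the loan size, default probabilities, and recovery rates are assumptions of ours, not taken from Hellwig 2014c) to show how the interest-rate hedge embedded in rate adjustment can turn into a credit-risk exposure when rates rise:

```python
# Stylized scenario arithmetic (made-up numbers) for the adjustable-rate
# mortgage example: the rate adjustment hedges the lender's funding cost,
# but the hedge carries embedded borrower credit risk when rates rise.
loan = 100.0
spread = 0.01                       # lender's assumed margin over the market rate

def expected_margin(market_rate, default_prob, recovery_rate):
    """Lender's expected profit on the loan, net of refinancing at the market rate."""
    paid = loan * (1 + market_rate + spread)        # borrower pays the adjusted rate
    recovered = loan * recovery_rate                # value recovered in default
    expected = (1 - default_prob) * paid + default_prob * recovered
    return expected - loan * (1 + market_rate)      # funding cost moves with market rates

margin_low  = expected_margin(0.03, 0.01, 0.95)     # low rates: defaults rare
margin_high = expected_margin(0.08, 0.15, 0.70)     # high rates: defaults up, collateral values down
print(margin_low, margin_high)                      # a positive margin turns into a loss
```

The hedge works state by state on the interest-rate leg, but the correlation between high rates, borrower default, and falling collateral values makes the overall exposure negative exactly when rates rise.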

In contrast to most of modern macroeconomics, which focuses on formalization, quantification, and calibration, this proposal takes its cue from competition policy, where it is well known that a given real-world phenomenon usually cannot be matched to any particular formal model that may be available but must be analysed on the basis of ideas taken from the zoo of available models, with a view to understanding what the story behind the observed facts may be.[12] As a matter of institution design, Hellwig (2014c, 2015) proposes that one institution be given the task of observing ongoing developments and coming up with systemic risk assessments along these lines, focussing on the probable story behind observed developments rather than ticking boxes in lists of indicators that are presumed to matter.

For macro-prudential policies, Hellwig (2014c, 2015) notes that most measures that have so far been used are in fact measures of micro-prudential regulation and supervision, which are added to the usual micro-prudential measures when macro-prudential concerns seem to call for them; indeed, in some instances, the “macro-prudential” label is no more than a fig leaf to disguise micro-prudential measures taken at the national level to circumvent the EU’s harmonization of micro-prudential regulation. Given this observation, he suggests that “macro-prudential” policies ought to be carried out jointly by all the authorities concerned, i.e. micro-prudential supervisors, central banks, and finance ministries, taking the systemic-risk analysis of the previously mentioned institution as an input and coordinating on the appropriate measures. He also suggests that trade-offs between different objectives, such as the question raised above whether, in a recession, priority should be given to the restoration of bank lending and the real economy or to the restoration of bank profits and bank solvency, should be addressed explicitly.

Hellwig (2017a) contains a discussion of the trade-off in the context of the current situation in the euro area. With reference to experiences from past crises, the paper strongly recommends that priority be given to the restoration of bank profitability and bank solvency even if that takes time.[13] Given the stock-flow problems associated with a restoration of bank equity from retained earnings, the paper also points to the analysis of Admati et al. (2012), whereby an immediate equity issue through a rights offering would be feasible if a bank was known to be solvent and would not require a net flow of cash from the market to the bank if the bank were to reinvest the proceeds in the market. For banks whose solvency is impaired, entry into a resolution regime would be called for. For the incentive issues that are associated with such measures, see the discussion of debt overhang and leverage ratchet effects in Section C.III.2.3 below.

C.III.2.2       Capital (Equity) Requirements for Banks

Admati and Hellwig (2013a), “The Bankers’ New Clothes: What’s Wrong with Banking and What to Do about It”, was published by Princeton University Press in 2013 but has continued to shape our work ever since. A paperback edition with a new preface was published in 2014. By now the book has also been published in German, Spanish, Japanese, Complex Chinese, Simplified Chinese, Hebrew, Portuguese, and Italian. An account of the book’s contribution is given in our previous report. As mentioned there, the book had originated from discussions of what to do with Admati et al. (2010/2013), which was too long for a journal article and too short for a monograph. A major part of this paper has by now actually been published in Admati et al. (2014). Even so, SSRN continues to be the main outlet for this paper, with over 7000 downloads so far.

Discussion of the book’s and the paper’s messages has gone on unabated. We have therefore updated Admati and Hellwig (2013b) several times. The last update, from January 2016, refers to “31 Flawed Claims Debunked”. A further update, addressing at least 33 flawed claims, is due to come out shortly.

Among the book’s messages, the criticism of risk-based determination of capital requirements for banks has been particularly contentious. “Surely a bank that holds riskier assets must be required to have more equity!” In practice, of course, this means that banks claiming to have safer assets will get away with lower equity requirements, i.e. with using the equity funding they have to borrow more and to take more risks. Problems with risk weights had previously been pointed out in Hellwig (2008/2009, 2010). The analysis in Hellwig (2014c) that is summarized above indicates that, once systemic interactions are taken into account, in particular correlations between counterparty risks on hedges, such as adjustable-rate clauses, the notion that risks can be “measured” is quite unrealistic.

Hellwig (2016b) extends this criticism to the proposed practice of calibrating minimum requirements for funding by equity and bail-in-able debt, i.e. debt that is not exempt from participating in losses in bank resolution, to the risks inherent in the bank’s assets. In this context, the arguments given in earlier work are reinforced by the observation that bail-in-able debt serves its purpose only in resolution, i.e. that, in principle, a risk-based argument should be formulated in terms of conditional probabilities given the event of resolution. Under existing rules, resolution is triggered by the authorities’ determination that the bank is failing or likely to fail. From an ex ante perspective, under the rules given for risk-based modelling, this event in turn should have an assessed probability of no more than 1 percent over a horizon of ten days. The presumption that one can give reliable estimates of risks conditional on an event that has a probability of 1 percent is absurd. However, risk-weighting is attractive to banks because it reduces the required eligible liabilities, enabling them to save on the default risk premia in the interest they must pay to debt holders.
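The difficulty of estimating anything conditional on a 1-percent event can be made concrete with a small Monte Carlo exercise. The sketch below is purely illustrative: the sample size, the standard-normal loss distribution, and the estimator are assumptions of ours, not part of Hellwig (2016b):

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_rep, tail = 250, 2000, 0.01   # assumed sample size and number of replications

estimates = []
for _ in range(n_rep):
    losses = rng.standard_normal(n_obs)           # one simulated "history" of losses
    k = max(1, int(tail * n_obs))                 # observations in the 1% tail: here only 2
    estimates.append(np.sort(losses)[-k:].mean()) # loss estimate conditional on the tail event

estimates = np.array(estimates)
cv = estimates.std() / estimates.mean()           # dispersion across histories
print(f"mean estimate {estimates.mean():.2f}, coefficient of variation {cv:.2f}")
```

With only two observations ever falling into the conditioning event, the estimates vary substantially across otherwise identical histories, which is the point of the argument.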

Behn et al. (2014) provide empirical evidence on the impact of allowing banks to use their own internal ratings of credit risk for determining required equity. To obtain identification, the paper uses a natural experiment provided by the fluke that Germany introduced the new internal-ratings-based (IRB) approach for assessing the credit risk of loan customers just before and during the financial crisis, and that it did so gradually, with different banks transitioning to the IRB approach at different dates and each bank using both the IRB and the standard approach simultaneously, at least for some time. As a result, a given firm might be indebted under different loan contracts, of which some would require the lending bank to use equity under the IRB approach and some under the standard approach. Assuming that the true default risk for the different loan contracts is the same, differential responses of the lending bank(s) to outside shocks have to be ascribed to the difference between the two approaches or to differences between banks; the latter effect, however, can be controlled for by conditioning on relevant bank characteristics such as the bank’s own equity or profits, or whether the bank was large enough to have its own IRB assessment at all.
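The logic of the identification strategy can be illustrated with simulated data. In the sketch below, all numbers (the size of the firm-level credit shock, the extra cut on IRB loans, the noise terms) are invented for illustration; the within-firm comparison mimics the role of firm fixed effects in the Behn et al. design:

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms = 500

# Hypothetical data-generating process: each firm has one IRB loan and one
# standard-approach (SA) loan; in the crisis, the IRB loan is cut by an extra
# 5 percentage points on average (the assumed regulatory effect).
firm_shock = rng.normal(-0.10, 0.05, n_firms)                   # common firm-level credit shock
irb_effect = -0.05
d_irb = firm_shock + irb_effect + rng.normal(0, 0.02, n_firms)  # loan growth, IRB loan
d_sa  = firm_shock + rng.normal(0, 0.02, n_firms)               # loan growth, SA loan

# Comparing the two loans of the same firm differences out the firm-level
# shock, so only the regulatory treatment effect and noise remain.
estimate = (d_irb - d_sa).mean()
print(f"estimated IRB effect: {estimate:.3f}")   # close to the assumed -0.05
```

Because both loans are to the same firm, the true default risk is held constant by construction, which is exactly the assumption that makes the real-world comparison informative.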

The investigation yields two major findings: First, IRB loans that were originated under the new regime of Basel II have lower assessed probabilities of default (PDs) than IRB loans that were originated before the introduction of the new regime, as well as non-IRB loans. Second, the actual subsequent performance suggests that the PDs of IRB loans that were originated under the new regime of Basel II were systematically underestimated. The transition to a system in which required equity depended on the banks’ assessments of borrowers’ risks provided incentives to have lower assessments of these risks, and this is precisely what the empirical analysis shows to have happened.

Behn et al. (2016) study the effects of using the IRB approach on the cyclicality of bank lending. Relying on the same natural experiment as the first paper, the empirical analysis shows that, in the financial crisis, loans for which required equity was determined by the IRB approach were reduced significantly more than loans for which required equity was determined by the standard approach; moreover, this difference was present even in those cases where the standard approach was used although the bank did have its own IRB assessment of credit risk, i.e. the difference was due to the regulation rather than to the risk assessments. The effect was larger, the less well capitalized the bank was. It was also relatively larger if the loan was larger, if the firm was relatively less profitable in 2007/2008, and if the firm’s probability of default (PD) had gone up in the crisis. The results are interpreted as indicating that internal ratings are very responsive to cyclical developments, so that banks relying on the IRB approach react more strongly to an economic downturn.

C.III.2.3       Debt Overhang and Leverage Ratchet Effects

Further revisions were made to Admati et al. (2013), which is now forthcoming in the Journal of Finance. As explained in the institute’s previous report, this paper argues that shareholders’ attitudes to increases in equity are largely determined by the effects of debt overhang. In contrast, most of the literature considers shareholder resistance to equity issuance to be due to asymmetric information, as in Myers and Majluf (1984). However, the Myers-Majluf argument cannot explain shareholder resistance to increases in equity that take place through retained earnings or through rights offerings. In fact, Myers and Majluf claim that, as a form of funding, retained earnings are cheaper than debt. By contrast, the effects of debt overhang apply to all forms of increases in equity.

In its simplest form, the argument starts from the original propositions of Modigliani and Miller that, in the absence of distortions and frictions, the value of a firm and the cost of capital of a firm are independent of its financing mix. Ex ante, before any securities have been issued, a corporation’s owners are therefore indifferent about the choice of funding mix. Ex interim, however, after some debt and possibly some outside equity have been issued, they are no longer indifferent. At this time, a recapitalization that lowers the probability that the firm might go bankrupt provides a benefit to debt holders. If debt is repurchased in the open market, the price at which the buyback occurs will have to reflect the increase in the value of the debt from the lowering of the bankruptcy probability. The reason is that, if debt holders can choose whether to hold on to the debt or to sell it, they will only sell if the price is high enough to compensate them for the benefits from holding the debt, taking account of the improvement in these benefits from the buyback itself. Thus the debt holders gain from the recapitalization. Because, by the Modigliani-Miller Theorem, the total value of the firm is unchanged, the shareholders must lose.
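A minimal numerical sketch, with made-up two-state payoffs rather than anything taken from the paper, illustrates the transfer:

```python
# Illustrative two-state example: assets are worth H or L next period, a single
# class of debt with face value D, risk-neutral pricing, no discounting.
q, H, L, D = 0.8, 150.0, 60.0, 100.0    # q = probability of the high state; L < D, so default in the low state

debt_pre   = q * D + (1 - q) * L        # risky debt, worth about 92
equity_pre = q * (H - D)                # worth 40
total_pre  = debt_pre + equity_pre      # firm value, fixed by Modigliani-Miller

# Recapitalization: shareholders inject cash F and buy back face value F,
# leaving remaining face D - F = 60 <= L, so the remaining debt becomes safe.
F = 40.0
debt_post = D - F                       # default-free, hence worth its face value
old_shareholders_post = (total_pre - debt_post) - F   # equity value net of the cash injected

# Sellers must be paid the post-recap (safe) value of their claims, i.e. full face.
sellers_gain     = F - (debt_pre / D) * F             # paid more than the pre-recap value
remaining_gain   = debt_post - (debt_pre / D) * debt_post
shareholder_loss = equity_pre - old_shareholders_post
print(shareholder_loss, sellers_gain + remaining_gain)  # equal: a pure transfer to debt holders
```

Because total firm value is unchanged, every unit of value the debt holders gain from the lower bankruptcy probability is a unit the shareholders lose, which is why they resist the recapitalization.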

Whereas the basic argument has been known since Black and Scholes (1973), Admati et al. (2013) makes three contributions. First, it shows that the debt overhang effect is very robust to changes in the parameters of the model and generates resistance to a recapitalization even when such a recapitalization would be beneficial to the firm and to society as a whole. By contrast, in Myers (1977), the debt overhang effect works only if the benefits from new investment are sufficiently small. In fact, the debt overhang effect is shown to give rise to a ratchet effect: Whereas shareholders resist debt reductions, they also find some additional debt increases to be advantageous (even if the status quo is the result of previous optimization and no new information has come in). Indeed, if at the margin there is a tax benefit to higher leverage, this incentive to increase leverage is always present, even if incumbent debt holders are protected by covenants that require new debt to be junior to old debt.
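The asymmetry can be seen in a stylized two-state setting (high-state value H, low-state value L < D, probability q of the high state; all numbers made up). The crude tax shield below, a benefit of t per unit of junior face value realized only when the firm survives, is an assumption of ours, introduced to make the marginal tax benefit of leverage explicit:

```python
# Illustrative two-state firm: senior debt D defaults in the low state.
q, H, L, D = 0.8, 150.0, 60.0, 100.0
t, F = 0.05, 20.0        # assumed tax benefit per unit of junior debt; new junior face value

equity_pre = q * (H - D)                 # 40

# Issue junior debt F (paid only in the high state, since H - D - F >= 0),
# sell it at its fair price, and pay the proceeds out to shareholders.
dividend    = q * F                      # fair price of the junior claim
equity_post = q * (H - D - F + t * F)    # shareholders keep the tax shield
shareholder_gain = dividend + equity_post - equity_pre   # = q * t * F > 0

# Senior debt holders are unaffected: their payoffs in both states are unchanged.
senior_pre  = q * D + (1 - q) * L
senior_post = q * D + (1 - q) * L
print(shareholder_gain)                  # positive whenever t > 0
```

Together with the buyback arithmetic, this is the ratchet: shareholders lose from reducing debt but gain from adding it whenever there is any marginal tax benefit.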

Second, the paper discusses the implications of the leverage ratchet effect for the dynamics of firm funding when this effect is anticipated by potential creditors. After explaining the nature of equilibrium, the paper uses numerical examples to illustrate the dynamics. The examples indicate that initial leverage is likely to be lower than predicted by traditional trade-off theories, but once the firm is in debt, then over time, leverage will rise to levels much higher than predicted by the traditional theories. Responses to exogenous shocks, e.g. changes in corporate tax rates, are asymmetric in that the firm’s leverage goes up when a shock increases, e.g., the tax benefits of debt, but fails to go down when the shock decreases the tax benefits of debt. Such hysteresis effects raise fundamental doubts about the explanatory power of the traditional trade-off approach to corporate finance. They also raise doubts about the traditional presumption that market outcomes are Pareto efficient: if a firm is unable to commit to the entire time path of its funding choices ab initio, the resulting market outcomes may be incentive-efficient for the given extensive form of investor-firm interactions, but this extensive form reflects the firm’s inability to pre-commit its future choices, and the overall outcome may be improved upon by statutory regulation that provides a substitute for the missing commitment power.

Third, the paper considers the reactions of shareholders to increases in regulatory capital requirements that take the form of a higher ratio of required equity to total assets. Under certain conditions, shareholders are shown to be indifferent between (i) asset sales accompanied by a reduction in debt, (ii) an issue of equity through a rights offering accompanied by a reduction in debt, and (iii) an issue of equity through a rights offering accompanied by asset purchases. The conditions are: a single class of debt, homogeneous assets, and a price of assets that equals the expected present value of returns (taking account of tax and bankruptcy cost effects) after the operation. The empirical observation that banks prefer alternative (i) over (ii) and (iii) can be explained by deviations from these conditions: with heterogeneous debt, asset sales accompanied by a reduction in junior debt impose a burden on senior debt, whose exposure to losses in bankruptcy is increased. If the externality on incumbent senior debt is sufficiently strong, the preference for asset sales is present even if these sales destroy value.
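Under the stated conditions, the indifference result can be checked with elementary balance-sheet arithmetic. The sketch below uses invented numbers and assumes riskless debt and fair pricing throughout, which is a simplification of the paper's setting:

```python
# A bank with assets A, a single class of (riskless) debt D, and a new
# requirement that equity be at least rho of total assets; numbers are made up.
A, D, rho = 100.0, 94.0, 0.10
E = A - D                               # 6.0 of equity initially

# (i) Sell assets S and use the proceeds to repay debt.
S = A - E / rho                         # shrink until E / (A - S) = rho
w_i = (A - S) - (D - S)                 # old shareholders' wealth afterwards

# (ii) Issue new equity dE in a rights offering and repay debt.
dE = rho * A - E                        # so that (E + dE) / A = rho
w_ii = (A - (D - dE)) - dE              # equity value net of the cash contributed

# (iii) Issue new equity dE2 and use the proceeds to buy assets.
dE2 = (rho * A - E) / (1 - rho)         # so that (E + dE2) / (A + dE2) = rho
w_iii = (A + dE2 - D) - dE2

print(w_i, w_ii, w_iii)                 # all equal to the initial equity E
```

With fair pricing and a single riskless debt class, old shareholders end up with the same wealth under all three operations; the paper's point is that heterogeneous debt breaks this equivalence in favour of asset sales.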

At the 2016 Western Finance Association Meetings in Salt Lake City, Admati et al. (2013) was awarded a prize as the best paper on corporate finance at the meetings.

C.III.2.4       Liquidity Provision and Equity Funding of Banks

The last report sketched a new project, “Liquidity Provision and Equity Funding of Banks”, which was intended to investigate whether the “production” of liquid claims such as deposits, which are legally debt, is in conflict with equity funding of banks. In policy discussions about equity requirements for banks, resistance against higher requirements has been justified with the argument that such requirements would come at the expense of banks’ funding by deposits and other liquid claims and would thus contravene the very function of banks in the economy. A counter-argument would be that higher equity makes the bank safer, strengthens trust in the bank and thereby enhances the liquidity of deposits and other short-term claims produced by the bank.

Moreover, the increase in the share of equity in bank funding would not come at the expense of funding by liquid debt if this increase was achieved by raising additional equity and investing the proceeds, e.g. in the market. The latter argument presumes that there are additional funds to be raised, i.e. that we are not starting from an equilibrium in which investors only hold debt and equity of banks. This presumption is not unrealistic, but its place in the overall conceptual framework is unclear. The purpose of the project is to clarify these issues and, in particular, to clarify what market failures might arise and why such market failures might call for statutory regulation.

Work on this project has not yet been brought to completion because there are problems in proving the existence of an equilibrium for the model in question. The reasons for these problems seem to be technical rather than economic: with a continuum of participants (to avoid all concerns about market power) and, e.g., a constant-returns-to-scale technology, there are no natural bounds on the positions taken by individual banks. Without such bounds, standard fixed-point arguments cannot be used; whether the problem can be fixed by an appropriate detour is at this point an open question.

Apart from this technical problem, the analysis is complete and actually quite simple: The model is a general equilibrium model with many consumers and many banks in which bank deposits provide liquidity benefits by a “warm-glow” effect on their holders, the details of which are not analysed; however, the warm-glow effect occurs only if the bank is not in default. Banks issue deposits, bonds, or shares in order to fund investments that earn returns under a stochastic constant-returns-to-scale technology. Bonds and shares provide their holders with monetary returns. Deposits provide their holders with whatever monetary returns are promised and, in addition, if the issuing banks are not in default, the “warm glow” liquidity benefits as direct contributions to utility. Deposit provision may (but need not) involve a resource cost. 

If uncertainty about returns is sufficiently small, default is not a relevant concern. In this case, an equilibrium necessarily exists and involves bank funding by deposits up to the point where the marginal resource cost of additional deposits is equal to the marginal liquidity benefit. If investors have more funds to invest, the extra funds go into shares or bonds, but the mix is irrelevant as long as the bond finance does not induce a prospect of default. In the absence of default, laissez-faire is efficient.

If uncertainty about returns is large, e.g. if the rate of return on investments can be close to zero with positive probability, default may be unavoidable. In this case, some equity funding of banks is desirable because it reduces the probability of default and increases expected liquidity benefits from deposits. However, if banks are unable to pre-commit and to communicate their overall intended funding mixes to investors, equilibrium deposit funding will be excessive; in this case, liquidity provision will be inefficiently low because, relative to what would be efficient, there is too little capacity for loss absorption by equity and too high a default probability.

The argument is akin to the debt overhang effect in the “leverage ratchet” paper: In negotiating with any one depositor, the externalities of additional debt on the other depositors’ liquidity benefits are neglected. If the technology exhibits constant returns to scale, equilibrium liquidity benefits are in fact zero and any form of statutory regulation of bank equity would improve the allocation. (In this version of the model, an equilibrium can be shown to exist.) If banks are able to pre-commit and to communicate their overall funding mixes to investors, and if an equilibrium exists, the equilibrium allocation will in fact be constrained-efficient and will provide for bank funding by equity as well as deposits. In these equilibria, the equity supports the liquidity benefits from deposits, i.e. liquidity provision and equity funding are complements rather than substitutes.

Along with the findings of Brunnermeier and Oehmke (2013) and Admati et al. (2013), the findings for the case where banks are unable to pre-commit and to communicate their overall intended funding mixes to investors point to an important methodological issue. Since Jensen and Meckling (1976), we have become used to “explaining” funding patterns that we observe with reference to some optimization or contracting problem under information and incentive constraints. This approach has yielded a rich set of insights, but it risks biasing any welfare analysis: If one “explains” real-world phenomena as solutions to some optimization problem, one is bound to find that equilibrium outcomes are efficient.

However, such findings depend on the commitment technology that is assumed. If commitment possibilities are weak, observed leverage of banks may reflect the desire of bank managers and new creditors to conclude new debt contracts at the expense of incumbent creditors rather than any efficiency-enhancing effects of debt finance. In practice, commitment problems are evident in the creation of contracts such as repo borrowing and lending that are specifically designed to jump maturity and priority queues, and that, presumably, are so well collateralized that creditors do not invest in the information that would be required for debt to serve as a disciplining device.

Hellwig (2016a) discusses this methodological problem in some detail, as well as the problem of how to assess the real-world relevance of theoretical analyses in the given tradition, especially when there are several competing “explanations”. The argument is illustrated by the example of bank funding by short-term debt: One set of theoretical models “explains” such funding by investors’ needs for insurance against uncertainty about the time when they will want to use their assets.[14] Another set of theoretical models refers to the disciplining role that short-term debt can have if the debt holders monitor the banks’ managers and the managers are afraid that, if they misbehave, the funding will not be renewed.[15] A third set of theoretical models stresses the effects of debt overhang and the inability to commit future funding mix choices. The three approaches rest on different behavioural assumptions and have contradictory welfare implications.[16]

C.III.2.5       Liquidity Provision and System Fragility

The implications of liquidity provision for the fragility of the financial system are explored in a series of papers by Luck and Schempp (2014a, 2014b, 2014c, 2016). In these papers, the need for liquidity is modelled along the lines of Diamond and Dybvig (1983), in a three-period model where people invest in period 0 and consume in periods 1 and 2. As of period 0, they do not know whether they will want to consume in period 1 or in period 2. Across individuals, the uncertainty about the timing of consumption needs is stochastically independent and a law of large numbers is assumed to hold. In principle, therefore, there is scope for insurance, but ex interim, as of period 1, there is asymmetric information; outsiders cannot directly observe whether a person truly needs to consume at date 1 or date 2. To deal with this information problem, a callable debt contract specifies the claims that an investor has on the “insurer” in such a way that these claims depend on the date at which they are exercised, with the choice of date left to the investor, as in the case of a demand deposit, which can be withdrawn at will.

In Diamond and Dybvig (1983), consumption at date 1 is provided for by investments in short-term assets, which have low rates of return. Luck and Schempp (2014a) depart from this analysis by introducing the possibility that the resources required to satisfy date-1 claims might be obtained from third parties, e.g. new investors, rather than from the returns on short-term investments. If it works, such an arrangement has the advantage that all initial funding can be used for long-term investments, which provide higher returns. However, the arrangement is vulnerable to a run on deposits as well as to a “rollover freeze”. The possibility of a run on deposits was already pointed out by Diamond and Dybvig (1983): If the initial investors believe that the bank will default on its obligations to them, even those investors who only need the funds at date 2 will prefer to make withdrawals at date 1, fearing that otherwise they will not get anything. Such a run causes the bank to default, thus confirming the participants’ expectations. By the same logic, a rollover freeze can be the result of pessimistic expectations of potential new investors inducing actions that confirm these very expectations: If potential new investors fear a default and therefore do not contribute to rolling over the bank’s debt at date 1, the bank must liquidate assets to satisfy depositors at this date; in consequence, it will default on its obligations at date 2, if not already at date 1. Moreover, whereas a run by depositors can be forestalled by the introduction of deposit insurance, as had been pointed out by Diamond and Dybvig (1983), deposit insurance cannot prevent the occurrence of a rollover freeze. (Notice that, in 2008, the crises of Bear Stearns and Lehman Brothers did involve rollover freezes, as money market funds and hedge funds worried about the solvency of these institutions and withdrew from further funding.)
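The two self-fulfilling outcomes can be illustrated with the standard Diamond-Dybvig payoff arithmetic. The parameter values below (the deposit contract c1, the long-asset return R, the fraction of impatient depositors) are our own illustrative choices, not taken from Luck and Schempp:

```python
# Stylized Diamond-Dybvig payoffs: deposits of 1 per person, date-1 withdrawal
# pays c1, the long asset returns R at date 2 and can be liquidated one-for-one
# at date 1; f is the fraction of depositors withdrawing early.
c1, R = 1.1, 1.5

def date2_payoff(f):
    """Date-2 payoff to a depositor who waits, given withdrawal fraction f."""
    if f * c1 >= 1.0:               # bank fully liquidated at date 1
        return 0.0
    return R * (1.0 - f * c1) / (1.0 - f)

def date1_payoff(f):
    """Date-1 payoff to a withdrawing depositor (pro rata if the bank fails)."""
    return c1 if f * c1 <= 1.0 else 1.0 / f

lam = 0.25                           # assumed fraction of genuinely impatient depositors
print(date2_payoff(lam), c1)         # waiting beats withdrawing: the good equilibrium
print(date2_payoff(0.99), date1_payoff(0.99))  # waiting pays 0, running pays ~1: the run equilibrium
```

With few withdrawals, waiting dominates and the good equilibrium is sustained; once enough depositors are expected to withdraw, running becomes a best response, so both outcomes are self-fulfilling.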

Luck and Schempp (2014a) go on to discuss the possible role of public debt as a basis for making the system less fragile. In the language of Holmström and Tirole (1998), government debt serves as a source of outside liquidity. At date 0, the government and the bank conclude two debt contracts, one that obliges the government to pay the bank at date 2 and one that obliges the bank to pay the government at date 2. The claims just offset each other. However, the government’s obligation is assumed to be fungible. Thus, banks can sell government bonds to outside investors at date 1. If the government bonds have zero default risk, outside investors are willing to acquire them and provide the banks with the resources they need at date 1. The problem of a rollover freeze is eliminated.

In this setting, the problem of rollover freezes can reappear if the government itself may end up being unable to service its debt. This would be the case if the government’s ability to service its debt depends on tax revenues, the tax revenues depend on how the economy does, and this in turn depends on the state the banks are in.

Luck and Schempp (2014b) therefore study the implications of the possibility of sovereign default for the system introduced in Luck and Schempp (2014a). If the government’s ability to service its debt depends on the health of the banking system, there is again a multiplicity of equilibria: “good” equilibria in which government bonds provide banks with the outside liquidity they need to forestall a “rollover freeze”, and “crisis” equilibria in which there is a rollover freeze because investors expect the government to be unable to pay and therefore are unwilling to buy government bonds from the banks at date 1. As this freeze causes banks to default at date 1, the government in fact cannot pay its debts at date 2, i.e., the investors’ expectations of a government default are self-fulfilling. The crisis equilibrium is accompanied by a run by depositors, i.e., there is a “twin crisis” of government debt and banks. Deposit insurance does not help because it is not credible if the government is unable to pay.

Luck and Schempp (2014b) also extend the analysis to a two-country setting. The banks of each country are assumed to hold the government bonds of both countries. A crisis in one country can therefore affect the solvency of banks in the other country, creating the possibility of crisis contagion from one country to the other. If joint tax revenues are known to be sufficiently high, a fiscal and banking union can be beneficial because it avoids the risk of a twin crisis, by the logic of Luck and Schempp (2014a). Indeed, one country’s government can benefit its own country by providing assistance to the other country’s government, because such assistance eliminates the possibility of a crisis there and the possible fallout from contagion.

Luck and Schempp (2014c) consider the impact of shadow banking institutions such as money market funds on the fragility of the financial system. The baseline model is again a version of the Diamond-Dybvig (1983) model, now in a reformulation with overlapping generations. In this baseline model, runs by depositors are again a possibility, but this possibility is eliminated by deposit insurance. Deposit insurance, however, is accompanied by regulation, and this regulation imposes a cost on banks. By way of regulatory arbitrage, a set of unregulated institutions (shadow banks) compete with the regulated banks, providing the same sorts of assets and services, but without deposit insurance and with lower regulatory costs. The shadow banks rely on outside markets for their assets to provide the requisite liquidity. In the absence of any runs, the model has an equilibrium in which banks and shadow banks coexist and the shadow banks reduce overall regulatory costs.

Under certain conditions, however, there also is an equilibrium in which the shadow banks suffer a run. The problem is again due to an inability to obtain enough liquidity to cover current needs, with investors’ reluctance to provide liquidity based on self-confirming pessimistic expectations. A depression of asset prices through fire-sale effects contributes to the mechanism. The shadow banking system can be stabilized if regulated (and insured) banks provide the shadow banks with liquidity guarantees and if (and only if) the shadow banking system is not too large relative to the regulated system.

At the first ECB Forum on Central Banking in Sintra in 2014, Luck and Schempp (2014c) was presented as a poster paper and was awarded the prize for the best poster paper at the conference.

Luck and Schempp (2016) study systemic effects through asset price contagion. The starting point is again a Diamond-Dybvig (1983) model, with two additions. First, there is moral hazard in the sense that intermediaries have a choice between two long-term investment technologies, one of which is inefficient but provides private benefits to the bank manager. Second, as in Luck and Schempp (2014a, b), there is a set of outside investors who are available to buy assets in the interim period, thus enabling liquidity provision without any need for low-return short-term investments. In contrast to Luck and Schempp (2014a), however, these outside investors have limited funds. There is also the possibility for the intermediary’s owner/manager to invest funds of his own; however, this is assumed to be inefficient because the opportunity cost of these funds is very high.

The social optimum for this model involves liquidity provision to depositors as in Diamond and Dybvig (1983), zero short-term investments, complete reliance on outside liquidity provision through asset sales at date 1, and the threat of a run by depositors as a disciplining device to discourage the owner/manager of the intermediary from choosing the inefficient investment technology. This outcome can be implemented as an equilibrium outcome of a game, but the game also has equilibria in subgames in which depositors run on the bank (and therefore multiple overall equilibria). Runs occur not only because, by the arguments of Diamond and Dybvig (1983), the short-term investors have relatively (too) large claims, but also because the outside investors’ funds do not suffice to provide liquidity for the withdrawal wishes of all depositors in the interim period. Scarcity of outside investors’ funds induces cash-in-the-market pricing of assets. With multiple banks, this fire-sale effect on asset prices provides a mechanism of contagion by which a run on one bank induces solvency problems for other banks and, in consequence, runs on these other banks as well.
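The cash-in-the-market pricing mechanism at work here can be illustrated schematically (our notation, not taken from the papers): if outside investors have total funds $M$ and banks sell $S$ units of assets whose fundamental (date 2) value is $R$ per unit, the date 1 price is

```latex
p \;=\; \min\!\left( R,\; \frac{M}{S} \right) .
```

Once sales $S$ exceed $M/R$, additional sales depress the price below fundamentals; with several banks selling into the same limited pool of outside funds, one bank’s liquidation lowers the market value of the other banks’ assets, which is the contagion channel described above.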

As in Diamond and Dybvig (1983), the runs problem can be eliminated by the introduction of deposit insurance. However, the elimination of runs on the equilibrium path also eliminates the use of runs as a disciplining device off the equilibrium path. To prevent bankers from choosing the inefficient technology, another device is needed. A second-best allocation with deposit insurance will therefore involve a requirement that the banker invest some of his own money, even though this is inefficient. Under the assumption of Bertrand competition between intermediaries, the cost is passed on to depositors.

Regulation imposing such a requirement on banks can make room for shadow banks, as in Luck and Schempp (2014c). Consumers are assumed to be with regulated banks initially and to have switching costs, so they do not all move to the shadow banks if the latter seem to be offering better opportunities. If the switching costs are high, there is an equilibrium in which regulated banks and shadow banks coexist without much change for the regulated banks. If the switching costs are low, the shadow banking sector will be large, and a run on shadow banks may induce cash-in-the-market pricing of assets, which affects the regulated banks as well. This vulnerability of regulated banks can be reduced by restrictions on market funding that require them to use short-term investments for liquidity at date 1. Such regulation reduces fragility at the cost of efficiency.

C.III.2.7       Information Aggregation in Markets and Strategic Games

Many issues in financial systems have to do with the aggregation of information. Different participants have different pieces of information on which they act. Overall outcomes depend on how they interact and how the different pieces of information are combined (if at all). The question arises in both market and non-market settings. For market settings, the early work of Grossman (1976), Hellwig (1980), and Kyle (1989) has developed a paradigm in which prices are seen as (weighted) averages of the different pieces of information of different individuals, and participants take account of the “aggregate information” reflected in prices; in Kyle (1989), they also take account of their own impact on the information content of prices. For models of currency attacks and bank runs, the global-games approach of Morris and Shin (1998), Rochet and Vives (2004) or Goldstein and Pauzner (2005) has shown that, with private information about fundamentals, the equilibrium multiplicity of the Diamond-Dybvig model may disappear; the aggregate outcome then depends on the true value of the fundamental through the combined actions of all participants. In both contexts, the allocative implications of information aggregation have not yet received much attention.[17]
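The averaging property of prices in this paradigm can be stated schematically (a stylized linear-equilibrium formulation, not a quote from the papers cited): each trader $i$ observes a noisy signal about the fundamental $\theta$, and the equilibrium price is informationally equivalent to the average signal perturbed by random asset supply $z$:

```latex
s_i \;=\; \theta + \varepsilon_i , \qquad
p \;\propto\; \frac{1}{n}\sum_{i=1}^{n} s_i \;-\; \lambda\, z .
```

As $n$ grows, the average of the signals converges to $\theta$ by the law of large numbers, so the price becomes a nearly sufficient statistic for the traders’ pooled information, limited only by the supply noise $z$.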

Gorelkina and Kuhle (2013) investigate how information acquisition and use by shareholders, and information aggregation and transmission through stock prices, affect the conditions under which a firm can borrow and the firm’s cost of capital. Creditors are assumed to condition their actions on stock prices. Firms are shown to internalize some of the externalities inherent in shareholders’ investing in information and having the information communicated through share prices; this is possible because firms with a strong fundamental will issue more equity and less debt than they would without the informational spillover. In the larger market, more equity is traded, and incentives to invest in information are stronger. Significant strategic complementarities enhance the effects of good information on the firm’s funding conditions.

Kuhle (2016) takes a critical look at the implications of the global-games approach for uniqueness or multiplicity of equilibria. Whereas Morris and Shin (1998) and the subsequent literature assume that participants have a common prior, he considers strategic interactions when priors are heterogeneous and derives a sharp condition for equilibrium uniqueness or multiplicity. This condition indicates that a unique equilibrium is played if players’ public disagreement (i.e., heterogeneity of priors) is substantial. If disagreement is small (zero in the case of a common prior), equilibrium multiplicity depends on the relative precisions of private signals and subjective priors. Extensions to environments with public signals show that prior heterogeneity, unlike heterogeneity in private information, provides a robust anchor for unique equilibria. Finally, irrespective of whether priors are common or not, public signals can ensure equilibrium uniqueness, rather than multiplicity, if they are sufficiently precise.

Grafenhofer and Kuhle (forthcoming) also show that the Morris-Shin (1998) results on uniqueness and multiplicity of equilibria change significantly when agents observe signals about the other agents’ actions, rather than signals about the fundamentals. In coordination games, agents are most interested in what the other agents’ actions are because these actions determine, e.g., whether a currency attack is successful or a run is fatal to a bank. However, in Morris and Shin (1998) and most other papers in the global-games approach, agents observe signals about fundamentals, which provide information about the other agents’ actions only because the other agents also observe signals about fundamentals and act upon them. In contrast, if I see another agent reading a newspaper, or if I see the other agent lining up before the doors of a bank like Northern Rock, I directly learn something about the other agent’s information or the other agent’s actions. This is not the same as my learning something about Northern Rock and inferring what others might have learnt about that bank.

For a model with noisy observations of (aggregates of) other agents’ actions, Grafenhofer and Kuhle find that a high degree of precision of private signals is conducive to equilibrium multiplicity. This finding contrasts with the global-games literature, where uniqueness is obtained if (and only if) private signals are relatively more precise than public signals. Grafenhofer and Kuhle (2016) consider the electronic mail game of Rubinstein (1989), also a coordination game, under the assumption that agents have noisy signals of other agents’ observations, and again obtain equilibrium multiplicity: in addition to the Rubinstein equilibrium, their version of the game also has equilibria in which agents coordinate on a change of actions, an attack or a run, if the fundamental is such that the induced outcome is Pareto-superior.

Roux and Sobel (2016) show that information aggregation has a significant effect on decision making by groups as opposed to individuals. In contrast to the papers discussed so far, this is a paper about group decisions in the absence of conflicts between group members, rather than group behavior as a result of uncoordinated decisions of individuals. The paper shows that, under fairly general conditions, the distributions of group actions that are induced by the different realizations of the information variables are more dispersed than the distributions of optimal actions of individuals. Because the aggregate information of group members is more precise than the information of any one member, residual uncertainty about the underlying variables of concern is smaller, so there is less risk in reacting strongly to the information.

Bachi, Ghosh, and Neeman (2016) consider pre-play communication in strategic games, assuming that such communication is not “cheap”, in the sense that those engaged in it may unintentionally betray their true intentions, or guess the true intentions of others. This implies that players' strategies should be described by response functions from gestures of the other players into actions in the game, rather than by mere actions, as in the standard formulation. This has a profound effect on the way games are played. The model can account for the significant levels of cooperation and correlation observed in experimental Prisoner's Dilemma games with non-binding pre-play communication.

C.III.2.8       Policy Contributions: Weak Banks, Financial Stability and Monetary Policy

Hellwig (2014b) gives an overview of the developments that led to the creation of the European Banking Union and a critical assessment of the arrangements introduced, the Single Supervisory Mechanism and the Single Resolution Mechanism. Running counter to the prevailing attitude among officials at the time, the paper argued that Banking Union would not be a Santa Claus solving all the problems of the euro area financial and monetary systems. It predicted that the Single Supervisory Mechanism would be hampered by the need to cooperate with national authorities and to apply national laws that implement European directives. It also predicted that the legal procedures for the recovery and resolution of weak banks would not work. If banks with systemically important operations in several countries enter into resolution, there is still no way to prevent the breakdown of these operations and to limit the resulting systemic damage. Moreover, the legislation makes no provisions for the liquidity needed to maintain systemically important operations at least temporarily. Finally, there is no fiscal backstop. Because of these deficiencies, the paper predicted that the “too-big-to-fail” syndrome would still be present.

Developments since then, in particular the weakness of the resolution mechanism, have confirmed this criticism. Hellwig (2017b, 2017c), written in response to requests from the European Parliament’s Committee on Economic and Monetary Affairs, deal with issues that arise only because authorities are reluctant to use the available resolution mechanism and instead continue to prefer procrastination over cleanups of their banks’ problems. Hellwig (2017b) provides a critical assessment of the proposal, recently made by the Chair of the European Banking Authority, that the € 1 trillion of non-performing loans in European banks should be placed into an EU-wide, government-guaranteed or even government-funded asset management company in order to rid banks of the burden of these loans and to permit them to engage more freely in new lending. Whereas the proposal involves a clawback condition on banks in order to immunize taxpayers from the associated risks, Hellwig (2017b) argues that such a condition would create a contingent liability of banks with risks equivalent to the asset risks under current arrangements, so any notion that banks would be rid of the burden of the non-performing loans must rest on accounting cosmetics rather than actual risk exposure. Moreover, to the extent that the banks in question are actually insolvent, taxpayers would be exposed to these risks after all. Based on a review of experiences with asset management companies, the paper also argues that, for non-tradable assets such as loans, any notion of substantial value enhancements from larger volumes is unrealistic and that the proposal does not provide any obvious advantages relative to a resolution procedure that would allow for the patience needed to wind down the assets in question, with a time horizon on the order of ten years rather than the three years mentioned in the proposal.

Hellwig (2017c) discusses the legal regime and the practice of “precautionary recapitalizations”, injections of government funds as equity into failing banks in order to forestall a resolution procedure or avoid insolvency, at least for a while. Whereas some such measure would be warranted for institutions with systemically important operations in multiple jurisdictions, the actual practice, in the cases of Monte dei Paschi di Siena and of the Venetian banks, is quite objectionable because systemic concerns are minimal and the recapitalizations amount to bailouts in the interest of particular investors. In fact, these recapitalizations are an integral part of a system in which resolution or insolvency is delayed while professional investors get out and are replaced by retail investors who are misled about the risks; the subsequent scandalization of the mis-selling of such debt under the eyes of the supervisors then creates a political need for bailouts. Here again, the contribution of the paper is to lay out the existing rules and to analyse the issues raised by the actual practice.

Hellwig (2014a) and Hellwig (2015) discuss the role of financial stability concerns in monetary policy and the issues that this role raises for the relation between the central bank and the supervisory authority and for the implementation of monetary policy. Both papers begin with systematic accounts of the evolution of central banking and monetary policy mandates. Historically, financial stability has figured prominently among central banks’ objectives, with policies ranging from interest rate stabilization to serving as lender of last resort. With the ascent of macroeconomics and with the shift from convertible currencies to pure paper currencies, these traditional concerns of central banks were displaced by macroeconomic objectives: price stability, full employment, growth. The financial crisis and the euro crisis have shifted the focus back to financial stability, even though there no longer is an explicit financial stability mandate.

The weakness of banks presents a challenge for monetary policy because banks are an important part of the monetary system: Bank deposits share important functions of money, they are the basis of the payment system, and bank loans are an important part of the transmission mechanism for monetary policy. In 2008/09 and again in 2011/12, the European Central Bank (ECB) provided enormous amounts of liquidity to banks in order to maintain the monetary system; in terms of mandates, this was justified by the argument that a financial crisis would induce deflation, a deviation from price stability, and therefore had to be forestalled. Measures taken since 2015 under the label of “quantitative easing” are also justified by the need to fight deflation, except that these measures put the banking system at risk: purchases of long-term debt compress the maturity premium and put pressure on bank profitability, as do negative interest rates on banks’ deposits with the central bank. The idea now is that, if banks are forced to lend to the real economy, economic growth will pick up and deflation will be pre-empted.

Political and legal discussions about the ECB’s monetary policies have focused on whether these policies are compatible with the ECB’s mandate, whether they are compatible with the prohibition of direct government finance by the central bank, and whether they might not impose unconscionable losses on the central bank, including a risk of insolvency. Hellwig (2014a, 2015) argues in some detail that concerns about return risks are misplaced in a world in which the issue of paper money imposes no obligation on the issuer (unlike the world of the gold standard, where the issuer had to be ready to exchange notes into gold): such issue of paper money actually creates a windfall gain, which may be reduced by subsequent losses on the assets that have been acquired, but never by so much that the gain turns into a loss. Both papers also argue that, in view of the role of banks in the monetary system, the central bank is bound to pay attention to financial stability.

Hellwig (2014a) goes on to discuss possible moral hazard on the side of banks, bank supervisors and governments that might be caused by a central bank’s commitment to financial stability as an essential precondition for reaching the central bank’s macroeconomic objective of price stability. Such moral hazard can undermine monetary dominance and the independence of central bank decision making. For example, when the ECB supported the financial system to prevent a crisis in 2011/12, many banks, in particular weak banks, invested the funds they obtained in their governments’ debt, leading many participants and observers to conclude that having weak banks is a way of obtaining indirect access to the printing press. The European Banking Union was to some extent a reaction to this experience, but the integration of supervision into the ECB raises the question of whether supervisory decisions, e.g. a decision on whether to put a commercial bank into a resolution regime, might not become hostage to the central bank’s monetary policy objectives. Hellwig (2014a) provides an extensive discussion of the resulting challenges for institution design.

In contrast, Hellwig (2015) focuses on the practical question of how financial stability concerns should be handled in the conduct of monetary policy. In particular, how should the central bank go about assessing the relevance of financial stability concerns in any given situation? Because systemic interdependence takes multiple forms, is changing all the time, and involves many contagion risks that cannot be measured, the paper proposes procedures along the lines suggested in Hellwig (2014c), as discussed above in Section C.III.2.1.

Hellwig (2014a) also discusses the relation between financial-stability and macroeconomic-stability objectives in some detail, considering to what extent they coincide, to what extent they may be in conflict, and how, in cases of conflict, the potential trade-offs should be assessed. The observation above that in 2012 the ECB rescued the banks in order to maintain the monetary system (and to protect the macroeconomy), while since 2015 it has been pressuring the banks to lend to the real economy even if they can hardly bear the risks, suggests that we need principles on which to decide such prioritizations. As past experience suggests that delaying cleanups in the financial sector tends to be very costly, Hellwig (2014a) proposes that such cleanups be given priority over macroeconomic concerns, though perhaps with the imposition of immediate recapitalizations, rather than long waits until retentions from new profits have provided sufficiently large increases in bank equity.

 

C.III.2.9       References

Admati, A. R., DeMarzo, P. M., Hellwig, M. F. & Pfleiderer, P. (2010/2013). Fallacies, Irrelevant Facts, and Myths in the Discussion of Capital Regulation: Why Bank Equity is Not Socially Expensive, Preprint 2013/23, Max Planck Institute for Research on Collective Goods, Bonn 2013 (Revision of Preprint 2010/42)

Admati, A. R., DeMarzo, P. M., Hellwig, M. F. & Pfleiderer, P. (2012). Debt Overhang and Capital Regulation, Preprint 2012/05, Max Planck Institute for Research on Collective Goods, Bonn 2012

Admati, A. R., Conti-Brown, P. & Pfleiderer, P. (2012). Liability Holding Companies, UCLA Law Review, 59, 852–913

Admati, A. R., DeMarzo, P. M., Hellwig, M. F. & Pfleiderer, P. (2013). The Leverage Ratchet Effect, Preprint 2013/13, Max Planck Institute for Research on Collective Goods, Bonn 2013. Forthcoming in: Journal of Finance

Admati, A. R. & Hellwig, M. F. (2013a). The Bankers’ New Clothes: What’s Wrong with Banking and What to Do about It, Princeton University Press, Princeton

Admati, A. R. & Hellwig, M. F. (2013b). The Parade of the Bankers’ New Clothes Continues: 23 Flawed Claims Debunked, Rock Center for Corporate Governance at Stanford University, Working Paper No. 143, Stanford 2013, Latest Version (“31 Flawed Claims Debunked”) January 2016

Admati, A. R. & Hellwig, M. F. (2014). The Bankers’ New Clothes: What’s Wrong with Banking and What to Do about It, Paperback Edition (with a new preface), Princeton University Press, Princeton

Admati, A. R., DeMarzo, P. M., Hellwig, M. F. & Pfleiderer, P. (2014). Fallacies and Irrelevant Facts in the Discussion of Capital Regulation, in: Goodhart, C., Gabor, D., Vestergaard, J. & Ertürk, I. (eds.), Central Banking at a Crossroads – Europe and Beyond, Anthem Press, 33–51

Bachi, B., Ghosh, S. & Neeman, Z. (2016). Communication and Deception in 2-Player Games, forthcoming in: Journal of Mathematical Economics

Behn, M., Haselmann, R. & Vig, V. (2014). The Limits to Model-Based Regulation, IMFS Working Paper Series 82, Institute for Monetary and Financial Stability, Frankfurt

Behn, M., Haselmann, R. & Wachtel, P. (2016). Procyclical Capital Regulation and Lending, Journal of Finance, 71, 919–956

Black, F. & Scholes, M. (1973). The Pricing of Options and Corporate Liabilities, Journal of Political Economy, 81, 637–654

Brunnermeier, M. K. & Oehmke, M. (2013). The Maturity Rat Race, Journal of Finance, 68, 483–521

Calomiris, C. W. & Kahn, C. M. (1991). The Role of Demandable Debt in Structuring Optimal Banking Arrangements, American Economic Review, 81, 497–513

Diamond, D. W. & Dybvig, P. H. (1983). Bank Runs, Deposit Insurance, and Liquidity, Journal of Political Economy, 91, 401–419

Diamond, D. W. & Rajan, R. G. (2000). A Theory of Bank Capital, Journal of Finance, 55, 2431–2465

Diamond, D. W. & Rajan, R. G. (2001). Liquidity Risk, Liquidity Creation and Financial Fragility, Journal of Political Economy, 109, 287–327

Goldstein, I. & Pauzner, A. (2005). Demand Deposit Contracts and the Probability of Bank Runs, Journal of Finance, 60, 1293–1327

Gorelkina, O. & Kuhle, W. (2013). Information Aggregation through Stock Prices and the Cost of Capital, Preprint 2013/18, Max Planck Institute for Research on Collective Goods, Bonn, forthcoming in: Journal of Institutional and Theoretical Economics

Gorton, G. (2010). Slapped by the Invisible Hand, Oxford University Press

Grafenhofer, D. & Kuhle, W. (2016). Observing Each Other’s Observations in a Bayesian Coordination Game, Journal of Mathematical Economics, 67, 10–17 (MPI Preprint 2015/18)

Grafenhofer, D. & Kuhle, W. (forthcoming). Thinking Ourselves into a Recession, mimeo, Max Planck Institute for Research on Collective Goods, Bonn

Grossman, S. J. (1976). On the Efficiency of Competitive Stock Markets when Traders Have Diverse Information, Journal of Finance, 31, 573–585.

Gual, J., Hellwig, M. F., Perrot, A., Polo, M., Rey, P., Schmidt, K. & Stenbacka, R. (2006). Economic Advisory Group on Competition Policy, An Economic Approach to Article 82, Competition Policy International, 2, 111–154

Hellwig, M. F. (1980). On the Aggregation of Information in Competitive Markets, Journal of Economic Theory, 22, 477–498

Hellwig, M. F. (1998). Systemische Risiken im Finanzsektor, Schriften des Vereins für Socialpolitik, NF 261 (Zeitschrift für Wirtschafts- und Sozialwissenschaften, Beiheft 7), Berlin: Duncker & Humblot, 123–151

Hellwig, M. F. (2008/2009). Systemic Risk in the Financial Sector: An Analysis of the Subprime-Mortgage Financial Crisis, Jelle Zijlstra Lecture 6, Netherlands Institute for Advanced Studies, Wassenaar 2008, reprinted in: De Economist, 157 (2009), 129–208

Hellwig, M. F. (2010). Capital Regulation after the Crisis: Business as Usual?, CESifo DICE Report 8/2 (2010), 40–46, revised as: Preprint 2010/31, Max Planck Institute for Research on Collective Goods, Bonn 2010

Hellwig, M. F. (2014a). Financial Stability, Monetary Policy, Banking Supervision, and Central Banking, in: European Central Bank (ed.), Monetary Policy in a Changing Landscape: Proceedings of the First ECB Forum on Central Banking, 21–54 (Preprint 2014/09, Max Planck Institute for Research on Collective Goods, Bonn 2014)

Hellwig, M. F. (2014b). Yes Virginia, There is a European Banking Union! But It May Not Make Your Wishes Come True, in: Österreichische Nationalbank (ed.), Towards a European Banking Union: Taking Stock, 42nd Economics Conference 2014, 156–181 (MPI Preprint 2014/12)

Hellwig, M. F. (2014c). Systemic Risks and Macro-prudential Policy, in: A. Houben, R. Nijskens, M. Teunissen (eds.), Putting Macroprudential Policy to Work, Occasional Studies 12–7, De Nederlandsche Bank, Amsterdam 2014, 42–77

Hellwig, M. F. (2015). Financial Stability and Monetary Policy, Preprint 10/2015, Max Planck Institute for Research on Collective Goods

Hellwig, M. F. (2016a). Neoliberale Sekte oder Wissenschaft? Zum Verhältnis von Grundlagenforschung und Politikanwendung in der Ökonomik (Neoliberal Sect or Science? On the Relation between Fundamental Research and Policy Advice in Economics), in: K. Schneider and J. Weimann (eds.), Den Diebstahl des Wohlstands verhindern: Ökonomische Politikberatung in Deutschland – ein Portrait, Springer Gabler, Wiesbaden 2016, 195–205 (MPI Preprint 2015/17)

Hellwig, M. F. (2016b). “Total Assets” versus Risk-Weighted Assets: Does It Matter for MREL Requirements?, Preprint 2016/12, Max Planck Institute for Research on Collective Goods, Bonn 2016

Hellwig, M. F. (2017a). Wachstumsschwäche, Bankenmalaise und Bankenregulierung (Weak growth, banking problems and banking regulation), Wirtschaftsdienst, 97 (Sonderheft), 43–48

Hellwig, M. F. (2017b). Carving Out Legacy Assets: A Successful Tool for Bank Restructuring?, Preprint 2017/03, Max Planck Institute for Research on Collective Goods, Bonn

Hellwig, M. F. (2017c). Precautionary Recapitalizations: Time for a Review, Preprint 2017/14, Max Planck Institute for Research on Collective Goods, Bonn

Holmström, B. & Tirole, J. (1998). Private and Public Supply of Liquidity, Journal of Political Economy, 106, 1–40

Jensen, M. C. & Meckling, W. H. (1976). Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure, Journal of Financial Economics, 3, 305–360

Kuhle, W. (2016). A Global Game with Heterogeneous Priors, Economic Theory Bulletin, 4, 167–185

Kyle, A. S. (1989). Informed Speculation with Imperfect Competition, Review of Economic Studies, 56, 317–355

Luck, S. & Schempp, P. (2014a). Outside Liquidity, Rollover Risk, and Government Bonds, Preprint 2014/14, Max Planck Institute for Research on Collective Goods

Luck, S. & Schempp, P. (2014b). Sovereign Default, Bank Runs, and Contagion, Preprint 2014/15, Max Planck Institute for Research on Collective Goods

Luck, S. & Schempp, P. (2014c). Banks, Shadow Banking, and Fragility, ECB Working Paper No. 1726, European Central Bank, Frankfurt

Luck, S. & Schempp, P. (2016). Regulatory Arbitrage and Systemic Risk, mimeo, Max Planck Institute for Research on Collective Goods

Morris, S. & Shin, H. S. (1998). Unique Equilibrium in a Model of Self-Fulfilling Currency Attacks, American Economic Review, 88, 587–597

Myers, S. C. (1977). Determinants of Corporate Borrowing, Journal of Financial Economics, 5, 147–175

Myers, S. C. & Majluf, N. S. (1984). Corporate Financing and Investment Decisions When Firms Have Information That Investors Do Not Have, Journal of Financial Economics, 13, 187–222

Rochet, J. C. & Vives, X. (2004). Coordination Failures and the Lender of Last Resort: Was Bagehot Right After All?, Journal of the European Economic Association, 2, 1116–1147

Roux, N. & Sobel, J. (2016). Group Polarization in a Model of Information Aggregation, American Economic Journal: Microeconomics, 7, 202–232

Wissenschaftlicher Beirat beim Bundesministerium für Wirtschaft und Energie (2016). Brief zu den Vorschlägen des Basler Ausschusses für Bankenaufsicht zur Behebung von Missständen bei den Eigenkapitalvorschriften für Banken (Letter of the Academic Advisory Committee of the Federal Ministry for Economic Affairs and Energy on the Proposals of the Basel Committee on Banking Supervision for Remedying Shortcomings in the Capital Requirements for Banks), November 2016

Wissenschaftlicher Beirat beim Bundesministerium für Wirtschaft und Energie (2017). Gutachten zur Diskussion um Bargeld und die Null-Zins-Politik der Zentralbank (Report of the Academic Advisory Committee of the Federal Ministry for Economic Affairs and Energy on the Debate about Cash and the Central Bank's Zero-Interest-Rate Policy), February 2017

[1]           This is shown by Clarke (1971) and Groves (1973) for implementation in dominant strategies and by d’Aspremont and Gérard-Varet (1979) for Bayes-Nash implementation.

[2]           Independent private values: if one person is known to have a high preference for the good in question, this conveys no information about any other person's preference for that good. Preferences of different people are stochastically independent.

[3]           For private goods, see Myerson and Satterthwaite (1983); for public goods, Güth and Hellwig (1986) and Mailath and Postlewaite (1990).

[4]           See, e.g., Wilson (1985).

[5]           See Mailath and Postlewaite (1990), Hellwig (2003).

[6]           See, e.g., Palfrey and Srivastava (1986).

[7]           See Dekel et al. (2006), Chen et al. (2010). Grafenhofer and Kuhle (2016) show that the analysis of Rubinstein’s e-mail game changes dramatically if, in addition to their own information, the participants can also observe noisy signals of the other agents’ observations; this modification of Rubinstein’s game always has an equilibrium in which agents co-ordinate on a change of actions, e.g. the co-ordinated “attack” in Rubinstein’s military example, whenever the fundamentals are such that this change is Pareto-superior to passivity.

[8]           In contrast, if robustness is not imposed, with correlated values, the results of Gizatulina and Hellwig (2017) imply that, generically, first-best allocations can be implemented with voluntary participation, in models with public goods as well as in models with private goods.

[9]           See, e.g., Boadway and Keen (1993).

[10]          See Hellwig (1998).

[11]          Regulation (EU) No. 1092/2010 of the European Parliament and of the Council of 24 November 2010 on European Union macro-prudential oversight of the financial system and establishing a European Systemic Risk Board, Regulation (EU) No. 575/2013 of the European Parliament and of the Council of 26 June 2013 on prudential requirements for credit institutions and investment firms, and Directive 2013/36/EU of the European Parliament and of the Council on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms.

[12]          See Gual et al. (2006).

[13]          See also Wissenschaftlicher Beirat (2016, 2017).

[14]          See, e.g., Diamond and Dybvig (1983), Gorton (2010).

[15]          See, e.g., Calomiris and Kahn (1991), Diamond and Rajan (2000, 2001).

[16]          See, e.g., Brunnermeier and Oehmke (2013).

[17]          The critical survey in Hellwig (2005) is not yet out of date.