Sample Projects

Fake Trust 

Credence goods are defined by an information asymmetry. The consumer is not in a position to verify the quality of the good or service before entering the contractual relationship. Even after the contract has been performed, it cannot be proven to the requisite standard whether the supplier has delivered on her promises. This market structure has led to heavy regulatory intervention, for instance in the markets for taxi rides, medical treatment, or professional education. However, these interventions only partly overcome the problem, and they have the unintended side-effect of creating rents. In small-scale contexts, the risk of customers being exploited by suppliers is contained very differently: in long-term relationships, suppliers build trust. Arguably, modern communication platforms make it possible to scale up this non-state solution. They can be interpreted as private institutions. Yet to what extent do online platforms for credence goods truly create trustworthiness? Are such arrangements vulnerable to creating a mere illusion of trustworthiness? Does this illusion lure customers into accepting deals that are systematically biased against them? Are customers able to read the signals of online systems that try to trick them into trading? Ideally, these questions would be studied in a field experiment, for instance in collaboration with a platform that evaluates online offers.
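To make the mechanism concrete, the following is a minimal simulation sketch of a stylized repeated credence-goods market with a public rating. Everything in it (the parameters, thresholds, and the seller's strategy) is an illustrative assumption, not part of the project itself.

    # Stylized repeated credence-goods market with a public rating.
    # All numbers and the seller's strategy are illustrative assumptions.
    import random

    ROUNDS = 1000
    DETECTION_PROB = 0.2            # credence good: cheating is rarely detected
    PRICE, HONEST_COST, CHEAT_COST = 10, 6, 2

    def run(cheat_above_rating=0.9):
        """Seller cheats whenever her public rating still looks good enough."""
        rating, n_ratings, profit = 1.0, 1, 0.0
        for _ in range(ROUNDS):
            if rating < 0.5:        # customers shun badly rated sellers
                continue
            cheat = rating > cheat_above_rating
            profit += PRICE - (CHEAT_COST if cheat else HONEST_COST)
            # The rating only drops if cheating is actually detected.
            observed_bad = cheat and random.random() < DETECTION_PROB
            n_ratings += 1
            rating += ((0.0 if observed_bad else 1.0) - rating) / n_ratings
        return rating, profit

    print(run())     # opportunistic seller
    print(run(2.0))  # threshold above any feasible rating: an honest seller

With a low detection probability, the opportunistic seller earns more than the honest one while keeping a rating that looks almost as good, a stylized version of the illusion of trustworthiness the project asks about.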

The Ambivalence of Transparency 

Following the lead of US politics, many countries have embraced the idea of "freedom of information". Government is not only obliged to make statutes and administrative acts available. In principle, every member of the general public has the right to access the minutes of governmental decision-making, and to see the files. Obviously, anticipating potentially ubiquitous transparency disciplines government officials. They cannot expect to be shielded by their colleagues if they engage in questionable activities. Traditionally, though, administrative law on the European continent was skeptical. A whole set of rules regulates the confidentiality of governmental decision-making. These rules not only protect the legitimate interests of private parties about whom government decides. They are also meant to protect government officials themselves. The main concern can be characterized with a term from US constitutional law: the legislator is afraid that transparency might have a "chilling effect". Public officials might shy away from implementing the normative decisions of the legislator, for fear of being held responsible, becoming the target of a press campaign, or not being promoted in the future. These considerations define a trade-off: abuse of power on the one hand, neglect of legitimate policy goals on the other. Under which conditions which effect dominates is an empirical question. It could be formally modeled and tested in a lab experiment.
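One minimal way to formalize this trade-off (the notation is an illustrative assumption, not an established model): let an official choose between abusing her power, implementing the legislator's goal, and inaction, with payoffs

\[
U(a;\tau)=
\begin{cases}
b-\tau s & \text{abuse of power,}\\
v-\tau k & \text{implementing the legislator's goal,}\\
0 & \text{inaction,}
\end{cases}
\]

where \(\tau\in[0,1]\) is the degree of transparency, \(b\) the private benefit from abuse, \(s\) the sanction if abuse is exposed (exposure being the more likely the more transparent the process), \(v\) the official's payoff from implementing a controversial but legitimate policy, and \(k\) her expected personal cost of exposure, such as a press campaign or a blocked promotion. Raising \(\tau\) deters abuse once \(\tau > b/s\), but it also chills legitimate action once \(\tau > v/k\). Which effect dominates therefore depends on the distribution of these parameters across officials, which is precisely what a lab experiment could manipulate.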

Machine-Aided Balancing

A typical legal case is ill-defined. The judge or administrator knows full well that she has only imperfect access to decision-relevant facts. Very often, deciding the case also calls for living up to the expectations of incommensurable normative goals. In legal discourse, for instance, it is standard to discuss the conflict between doing justice to the parties of the case and contributing to the development of the applicable legal rules. While the former goal may call for leniency, the latter may call for visibly upholding a principle. Moreover, typical cases have multiple dimensions, each with its specific normative expectations. Which facts the decision-maker takes to be decision-relevant is guided by their relevance for the competing normative goals. At university, law students learn to make meaningful choices despite the fact that these choices cannot be derived from first principles. In the legal debate, the core of this exercise is often referred to as "balancing". The actual balancing act is taken to be intuitive, based on professional training and experience, and on ritualized debate with the parties. Now the advent of powerful machine learning methods increasingly makes it possible for technology to supplement humans in areas of life that were traditionally considered inaccessible to automation. Most likely, machines will not replace human decision-makers. But machines might help humans make qualitatively better decisions. They might, for instance, help them not to overlook important dimensions, and they might alert human decision-makers to signals of stereotyping and bias. Machine-aided balancing might therefore hold the promise of improving legal decision-making at its core. Yet thus far this is only a possibility. It would have to be carefully worked out and tested, for instance by an experiment that has interns decide a mock case. The control group would decide on their own, while the treatment group would have access to a (probably still very basic) decision aid.
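To illustrate what such a very basic decision aid could look like, here is a hypothetical sketch; the normative dimensions, cue words, and bias signals are invented for illustration and would have to be replaced by validated instruments.

    # Hypothetical sketch of a very basic decision aid: it flags normative
    # dimensions a draft decision never addresses, and phrases that may
    # signal stereotyping. Every list below is an illustrative assumption.
    DIMENSIONS = {
        "justice to the parties": ["hardship", "reliance", "good faith"],
        "development of the law": ["precedent", "principle", "rule"],
    }
    BIAS_SIGNALS = ["typical for", "people like", "as usual with"]

    def review(draft: str) -> list[str]:
        text = draft.lower()
        warnings = []
        for dimension, cues in DIMENSIONS.items():
            if not any(cue in text for cue in cues):
                warnings.append(f"Dimension possibly overlooked: {dimension}")
        for signal in BIAS_SIGNALS:
            if signal in text:
                warnings.append(f"Possible stereotype signal: '{signal}'")
        return warnings

    # In the experiment, the treatment group would see these warnings before
    # finalizing the mock case; the control group would decide unaided.
    print(review("The precedent suggests a strict rule, as usual with tenants."))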
