Spotlight: Laurent Mathevet

Information Design and Psychology

Laurent Mathevet

Disclosing or concealing information can affect a person’s choices while also having an emotional impact. Suppose that you have information that is relevant to another person, for example, whether prayer has a statistically significant effect on illness. Consider telling a friend who wants to make good decisions, but who also experiences discomfort when information conflicts in any way with his prior beliefs. In a more economic example, consider a financial advisor who is deciding whether to provide a client with a five-hundred-page dossier about prospective investments.

Should a well-intentioned advisor always reveal everything?1

In standard economics, the answer is yes because a better-informed person makes better decisions.2 However, people conceal information to spare the feelings of their friends and loved ones, financial advisors hand out compact leaflets, parents abstain from learning the sex of their unborn children, and many of us do not want to know the caloric content of the food we eat.

This question is also relevant in regulatory contexts. Lawmakers often mandate disclosure with the uninformed party’s welfare in mind. For example, Informed Consent is the ethical concept, codified in law, according to which doctors should provide a detailed description of the possible consequences of a treatment or drug. Transparency requirements extend to many other professions and entities. For example, the Clery Act requires institutions of higher education to publish an annual campus security report covering all crimes from the past three years. Although transparency aims to improve welfare, it may be misguided. In the context of medical care, Ubel (2001) writes: “Mildly but unrealistically positive beliefs can improve outcomes in patients with chronic or terminal diseases… Moreover, unrealistically optimistic views have been shown to improve quality of life.”

While the answer to the opening question is evidently negative, my joint research with Elliot Lipnowski takes the next natural step of assessing which information one should provide.3 Of course, this depends on whether the audience likes information or not. While this may seem an obvious point, it is not immediately clear how to operationalize it.

First, what does it mean to like or dislike information? It is important to distinguish between psychological and behavioral attitudes toward information. A person is psychologically information-loving (respectively, averse) if, fixing her action or choice, she likes to be more (less) informed. Alternatively, a person is behaviorally information-loving (respectively, averse) if, being able to react to information, she likes to be more (less) informed. A simple example illustrates the difference. Imagine a person whose imminent seminar presentation may contain some errors. If his laptop is elsewhere, he may prefer not to know about any typos, since knowing will distract him. He is psychologically information-averse. But if he has a computer available to change the slides, he may want to know, as such information is instrumentally helpful. He is behaviorally information-loving. Therefore, someone can be psychologically information-averse but behaviorally information-loving. Our mathematical characterizations of preferences for information reveal, among other things, that someone who is psychologically information-loving must be behaviorally information-loving.
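To make the distinction concrete, here is a minimal formal sketch; the notation is illustrative rather than the paper’s exact formulation. Let $\mu_0$ be the person’s prior belief, let $u(a,\mu)$ be her utility from action $a$ when holding belief $\mu$, and note that any disclosure policy induces a random posterior $\tilde{\mu}$ with $\mathbb{E}[\tilde{\mu}]=\mu_0$ (Bayes plausibility). Then:

\[
\text{psychologically information-loving:}\quad \mathbb{E}\bigl[u(a,\tilde{\mu})\bigr] \;\ge\; u(a,\mu_0) \quad\text{for every fixed action } a,
\]
\[
\text{behaviorally information-loving:}\quad \mathbb{E}\bigl[V(\tilde{\mu})\bigr] \;\ge\; V(\mu_0), \qquad\text{where } V(\mu) := \max_{a}\, u(a,\mu).
\]

By Jensen’s inequality, the first condition holds for all policies exactly when each $u(a,\cdot)$ is convex in the belief, and the second exactly when $V$ is convex. The implication claimed above then follows in one line: $V$ is a pointwise maximum of convex functions, hence convex.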

Now, assuming that you have interacted with a person long enough to know her preference for information, how should you optimally disclose information to her? This can be subtle. Consider yet another situation, in which your friend would like to be informed about world events only when the news is good, but would prefer to know less when the news is bad. Even in this simple case, it is not obvious how best to disclose information. The first scheme that comes to mind—reveal the news when it is good and say nothing otherwise—accomplishes little here, since no news then reveals bad news. Economists have developed mathematical tools for optimal disclosure, but applying them can be demanding in general. This is why we develop a method that simplifies the search for optimal information policies. The idea is that, although a person might not like information at all possible beliefs, he might still like it locally, at beliefs in some given region. Accordingly, the advisor should never leave that person’s beliefs inside such a region, since she could instead give him more information. When it is possible to identify a collection of such regions, the search for an optimal policy can be restricted to policies that leave beliefs elsewhere.
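In the standard formulation of optimal disclosure, the advisor chooses a distribution $\tau$ over posterior beliefs whose mean is the prior, and the pruning logic above can be sketched as follows (a stylization, not the paper’s exact statement):

\[
\max_{\tau}\;\mathbb{E}_{\tilde{\mu}\sim\tau}\bigl[V(\tilde{\mu})\bigr] \qquad\text{subject to}\qquad \mathbb{E}_{\tilde{\mu}\sim\tau}[\tilde{\mu}] \;=\; \mu_0,
\]

where $V(\mu)$ is the listener’s value of holding belief $\mu$. If $V$ is convex on some region $B$ of beliefs, so that the listener locally likes information there, then any posterior in the interior of $B$ can be replaced by a mean-preserving spread supported toward the boundary of $B$, weakly raising the objective. Optimal policies can therefore be sought among those that place no posterior inside such regions.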

Return to the first example and assume that the friend’s preference for information has a “prior bias” form: specifically, he wants to match his action to the uncertain state, but suffers in proportion to how much he changes his beliefs. Applying our method, it turns out that either full information or no information is optimal. In other words, there is no tradeoff worth averaging. Either this friend can bear information, in which case we should tell him the whole truth to enable good decision-making, or he cannot, in which case we should say nothing.
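A stylized binary-state version (an illustration, not the paper’s general model) shows why nothing in between survives. Suppose the state is $\theta\in\{0,1\}$ with prior $\mu_0=\tfrac{1}{2}$, the friend earns a material payoff of $1$ for matching the state and $0$ otherwise, and moving his belief to $\mu$ (the probability of $\theta=1$) costs him $c\,\lvert\mu-\mu_0\rvert$. His value of holding belief $\mu$ is then

\[
V(\mu) \;=\; \underbrace{\max(\mu,\,1-\mu)}_{\text{material payoff}} \;-\; \underbrace{c\,\bigl\lvert\mu-\tfrac{1}{2}\bigr\rvert}_{\text{psychological cost}} \;=\; \tfrac{1}{2} \;+\; (1-c)\,\bigl\lvert\mu-\tfrac{1}{2}\bigr\rvert .
\]

Any disclosure policy yields $\tfrac{1}{2}+(1-c)\,\mathbb{E}\lvert\tilde{\mu}-\tfrac{1}{2}\rvert$, which is monotone in how far beliefs move on average. So if $c<1$, beliefs should move as far as possible (full information, worth $1-\tfrac{c}{2}$), and if $c>1$, they should not move at all (no information, worth $\tfrac{1}{2}$); intermediate policies are never strictly better.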

The method also delivers a simple yet resonant lesson in the case of temptation and costly self-control.4 A tempted agent does not want to know what he is missing, a statement that receives a precise mathematical definition in the paper. More information would only eliminate some temptations from the agent’s mind while giving him a better idea of precisely which choices he is missing. Concretely, a student who can either study or party on a Friday night might turn off his cell phone so as not to receive news from friends attending the party. He does not want to know what he is missing. In the paper, we focus on a consumption-saving problem, in which an individual has impulsive desires to consume rather than save. The state of the world is either tempting or not. Given his prior beliefs, the consumer always succumbs to temptation and never saves if he receives no information. But an advisor can help him be happier. When the state is tempting, the advisor tells the consumer to consume with some probability; with the complementary probability, or whenever the state is not tempting, he tells him to save. This policy ensures that the agent splurges only rarely, while saving and mitigating temptation the remainder of the time. When the agent is told to consume, he does so and suffers no cost of self-control; when told to save, he faces a reduced cost of self-control, believing the state likely to be non-tempting.
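The belief management behind the “save” recommendation is simple Bayesian updating; here is the arithmetic, with illustrative symbols (the paper’s model has more structure). Let $p$ be the prior probability that the state is tempting, and suppose the advisor recommends consumption with probability $q$ when the state is tempting, and recommends saving otherwise. Then

\[
\Pr(\text{tempting}\mid\text{``save''}) \;=\; \frac{p\,(1-q)}{p\,(1-q) + (1-p)} \;<\; p \qquad\text{whenever } q>0 .
\]

Hearing “save” thus lowers the consumer’s belief that temptation is real, which is exactly what reduces his cost of self-control; the choice of $q$ trades off rare splurges against the credibility of the saving recommendation.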

Finally, even knowing that a policy is optimal in theory need not make it sensible in practice. First, it may be difficult to implement in reality. As Ben-Shahar and Schneider (2014) argue:5 “When we…ask how people really make decisions, we see that they generally seek…not data, but advice.” Consequently, we study whether this simple form of communication—giving advice—delivers the same benefit as other policies do. For example, a doctor could simply recommend special dietary choices instead of providing a statistical description of potential pathologies. In psychological contexts, an agent might want information for its own sake, and so be well served by extraneous information. Information-averse agents, however, have no such desire. One of our main results confirms this intuition: recommending an action is optimal if the listener is psychologically information-averse. Experts will note that this result holds even though the Revelation Principle can fail under information aversion. Second, people have limited time and mental resources. Full transparency, as imagined by regulators, may therefore reduce the information that decision-makers effectively absorb: telling them less could lead to better decisions. In ongoing research, Elliot Lipnowski, Dong Wei, and I study “Attention Management,” i.e., which information is best disclosed to an inattentive audience. We provide sufficient conditions for full information to be optimal—which does not mean that the listener will pay full attention, but rather that paternalistic spoon-feeding of limited information jeopardizes good decisions. When full information is not optimal, we partially characterize the nature of the optimal informational restrictions.

In closing, a well-intentioned advisor should be mindful of psychology before revealing the whole truth, and she should disclose information only when the material benefit exceeds the psychological cost, if any.


1 To be clear, a person A (you, the financial advisor, etc.) has information that a person B (your friend, the prospective investor, etc.) does not, and A’s only objective is to make B happy. Should A always reveal everything?

2 See Blackwell (1953).

3 See “Disclosure to a Psychological Audience” (2017) by Elliot Lipnowski and Laurent Mathevet.

4 For the experts, temptation is modeled à la Gul and Pesendorfer (2001).

5 See Ben-Shahar and Schneider (2014), “More Than You Wanted to Know: The Failure of Mandated Disclosure,” Princeton University Press.