Nudge

Cass Sunstein and Richard Thaler

Libertarian paternalism means preserving people’s agency while protecting them from their own biases.


Likes

  • Clearly defines choice architecture and disambiguates libertarian paternalism from other forms of choice architecture, both of which are now crystallized in my design vocabulary

  • I love the authors’ perspectives on systems, trade-offs, and economics

  • Filled with good examples and stories of choice architecture, both good and bad

  • The authors do a nice job articulating the relationship between human biases and human decision-making

Dislikes

  • The econ-heavy parts of the book were hard to understand (e.g. mortgages)

  • The chapters around insurance, mortgages, credit cards, and climate change felt conflated; I had a hard time understanding the major takeaways

  • With so many “it depends” caveats, it’s hard to imagine how to put this book to use

  • It’s a long book, and at some points it gets really boring (e.g. chapter 4)


Synopsis

Nudge is a pretty famous book that I have heard about for a while. It certainly has appeal to economists, which the authors are, but it extends into behavioral psychology, product design, UX, public service, politics, and a bunch of other domains. It always seemed to be pushed to the top of Amazon as a recommendation, and it had been sitting in my wishlist for some time, so I decided to ask for it as a holiday gift. As someone who is about to have a kid, I found the book immediately enlightening. Nudge dives deep into the topic of choice architecture: our responsibility as designers of any system not only to give our users agency or freedom of choice, but also to protect people from their own biases, such as social influence. On the flip side, it also illustrates how we can leverage people’s biases to help them make decisions (a bit of a contradiction, but whatever). For example, we are more likely to splurge while emotionally charged, act on information that is immediately available to us, or choose options that have an immediate positive impact over a prolonged benefit. For those reasons the authors meticulously dive into decision-making processes for high-stakes scenarios like picking a health insurance plan, taking out a mortgage, selecting (and using!) a credit card, and protecting the climate.

What I really appreciated about this book was its implications for designing for behavior, especially how we can nudge for good to help people make better decisions when using a product or service. So, if a nudge is a good thing we can do to improve decision-making, there must be an opposite to that, right? That’s where sludge comes in. The book also illustrates how nudging can be done for bad, coercing people into paths that will damage their well-being, oftentimes for the good of a company.

Overall I really enjoyed this book and will be inserting the concept of nudging into my vocabulary. I love the implications it has for parenting, and the concept of libertarian paternalism puts into words what I have felt to be a good method of parenting for a while.


Biggest Learnings

  • Anchoring is a bias where previously disclosed information, whether implicit or explicit, nudges us toward a decision. For example, if we are told the population of one city and then asked to estimate the population of a comparable city, we will use the information from the first city to form our judgement. Presenting people with default options can also serve as an anchor, setting expectations for what’s conventional and influencing decision-making as well.

  • Another way we make it easier for people to make decisions is by architecting data so that it is standardized and machine-readable. Standard units of measurement (e.g. kWh, MPG, USD) make it easier to compare things apples to apples. By making data machine-readable, such as a database of grocery ingredients, our banking information, or our viewing history, we enable machines to aggregate our data, compare it, and transform it into something that can assist us with making decisions.
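
    To make this concrete, here’s a rough TypeScript sketch (the listing shape, units, and conversion below are my own illustration, not something from the book) of normalizing vendor data into one standard unit so options can be compared apples to apples:

```typescript
// Hypothetical sketch: normalizing product data into a standard,
// machine-readable shape so options can be compared apples to apples.

type RawListing = {
  name: string;
  energyUse: number;                        // as reported by the vendor
  unit: "kWh/year" | "kWh/month" | "W";     // vendors report in different units
};

type NormalizedListing = {
  name: string;
  kwhPerYear: number;                       // one standard unit for every listing
};

const HOURS_PER_YEAR = 24 * 365;

function normalize(listing: RawListing): NormalizedListing {
  switch (listing.unit) {
    case "kWh/year":
      return { name: listing.name, kwhPerYear: listing.energyUse };
    case "kWh/month":
      return { name: listing.name, kwhPerYear: listing.energyUse * 12 };
    case "W": // continuous draw in watts -> kWh over a year
      return { name: listing.name, kwhPerYear: (listing.energyUse * HOURS_PER_YEAR) / 1000 };
  }
}

// Once everything shares a unit, ranking (and choosing) becomes trivial.
const cheapestToRun = (listings: RawListing[]) =>
  listings.map(normalize).sort((a, b) => a.kwhPerYear - b.kwhPerYear);
```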

  • Asking someone to formulate a plan, or at least asking what their plan is for accomplishing a task such as voting, has been shown to improve goal completion. These are called “implementation intentions,” and they are more successful when a single person is being asked to complete a goal rather than a collective. Coordinating plans is hard.

  • Availability bias occurs when we plant an idea, whether implicitly or explicitly, that becomes available in someone’s mind and thus influences their judgement or decision-making. It also suggests that someone’s previous experience plays a large role in their decision-making. If they have a memory or story available to them that pulls them toward one decision or another, then they will be more likely to go down that path based upon their experience. As choice architects, we can remind people of good or bad things to inform their decisions.

  • Choice architecture is a concept that refers to a person’s role in influencing choice. In design we do it all the time, as we intentionally lay out elements that in one way or another impose a hierarchy on the end user. Libertarian paternalism is a choice architecture style that suggests two major things. Firstly (the libertarian part), people should have the freedom to choose, and we as designers should not artificially remove options that people might otherwise pick for themselves. Secondly (the paternalism part), we have a responsibility to architect choice in ways that do the least harm to the end user. These two things combined create a framework for providing both freedom and protection.

  • Designers have a lot of tools at their disposal to help people make choices, or even nudge them. Feedback, for example, is a way of letting people know that they need to make, or even not make, a choice. It can also be a way of letting people know that they have made an error, or that there has been a significant change in the status of the system. Mapping is another tool we can use: how choices are organized relative to one another to prompt a choice. We can map choices to established mental models to make their organization clearer. Incentives are another tool, attaching value to choices and encouraging people to pick one or another. We can also use defaults to prompt a choice, or prompt people to make a forced choice if they do not like the default.

    A good system of choice architecture helps people map their choices to outcomes, thus making them better off. If there’s one principle that holds true, it’s that making it easy is a solid bet if we want to encourage people to do something. Most people will follow the path of least resistance.
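
    As a hedged illustration of how a few of these tools might combine (the plan shape and cost math below are invented, not the authors’), here is a small TypeScript sketch of a prompt that sets a sensible default, maps plan attributes to the outcome people care about, and gives gentle feedback while preserving freedom of choice:

```typescript
// Hypothetical sketch: a choice prompt combining a sensible default,
// a mapping from attributes to outcomes, and feedback -- without
// removing any options from the chooser.

type Plan = { id: string; monthlyCost: number; deductible: number };

type ChoicePrompt = {
  options: Plan[];
  defaultId: string;                   // path of least resistance
  nudge: (picked: Plan) => string | null; // gentle feedback, not a restriction
};

function buildPrompt(options: Plan[], expectedAnnualSpend: number): ChoicePrompt {
  // Map plan attributes to the outcome people actually care about:
  // rough total yearly cost, given how much care they expect to use.
  const totalCost = (p: Plan) =>
    p.monthlyCost * 12 + Math.min(p.deductible, expectedAnnualSpend);

  // Default to the option with the lowest estimated total cost.
  const best = [...options].sort((a, b) => totalCost(a) - totalCost(b))[0];

  return {
    options,
    defaultId: best.id,
    nudge: (picked) =>
      totalCost(picked) > totalCost(best)
        ? `This plan may cost about $${Math.round(totalCost(picked) - totalCost(best))} more per year than ${best.id}.`
        : null, // no feedback needed; the user remains free to choose either way
  };
}
```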

  • Gains, losses, and framing are related in interesting ways. First of all, people are biased by the way information is framed, even if the information provided is nearly identical. In a classic example, patients who were told that they had a 90% chance of survival after five years were more likely to undergo a risky operation than those who were told that 10 out of 100 patients died from the surgery. The data is identical, but the way it is framed makes a big difference. Moreover, people are more motivated to take action when there is an apparent loss at risk than a gain. For example, people are less likely to use plastic bags at a grocer when doing so carries an almost negligible charge than when they are offered a negligible credit for using reusable bags. People assign more value to things they might lose.

  • In some cases, like organ donation, imposing a proper choice architecture can save lives. However, sometimes the final choice is not made by the chooser but by a third party such as the chooser’s family. To work around this, messaging policies need to be put into place to get families on board.

    There are five main types of choice architecture when it comes to organ donation.

    1. Routine removal: people don’t get a choice, the state harvests organs by law automatically. (High success, high objection)

    2. Presumed consent: People are automatically enrolled unless they explicitly register as unwilling donors, and must opt out. (Good, but requires family consent, which can be hard to get)

    3. Explicit/informed consent: People must take some concrete steps to opt-in such as by joining an online register of donors. (High inertia, low conversion)

    4. Prompted choice: Offer a choice to sign up during something routine (e.g. voting or getting a driver’s license) but don’t make it mandatory. (High participation)

    5. Mandated choice: Force people to make a choice while doing something routine. (Good but can cause backlash if it stops someone from completing their task)

    All of these are options for eliciting a choice. But prompted choice seems to be the best option because it preserves people’s agency.
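
    A rough TypeScript sketch of how the five policies differ, with the person’s record and the decision rule invented for illustration (the book describes these policies in prose, not code):

```typescript
// Hypothetical sketch: the five consent policies expressed as a single
// decision function, to make their differences explicit.

type Policy =
  | "routine-removal"
  | "presumed-consent"
  | "explicit-consent"
  | "prompted-choice"
  | "mandated-choice";

type Person = {
  registeredDonor?: boolean;   // undefined = never made a choice
  familyConsents?: boolean;    // undefined = family never objected
};

function isDonor(policy: Policy, person: Person): boolean {
  switch (policy) {
    case "routine-removal":
      return true;                             // no choice at all
    case "presumed-consent":
      // enrolled unless they registered an objection, but family can still block
      return person.registeredDonor !== false && person.familyConsents !== false;
    case "explicit-consent":
      return person.registeredDonor === true;  // inertia works against donation
    case "prompted-choice":
    case "mandated-choice":
      // both record an explicit answer; they differ in whether the prompt
      // can be skipped, not in how the recorded answer is interpreted
      return person.registeredDonor === true;
  }
}
```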

  • Most people have temptations that can be hard to resist. There are two mindsets that contribute to this: the impulsive Doer and the resistant Planner. We impose strategies, whether on ourselves or on others, in order to suppress the impulsive nature of the Doer. There are a couple of ways we do this. The first is a commitment strategy, or self-control strategy, which is a self-imposed promise we make to ourselves with a reward (or a loss, which may be more motivating due to our bias around gains and losses), such as money, for sticking to a commitment. The second strategy is called mental accounting, in which we explicitly or implicitly set limits on our behavior in order to achieve a goal or satisfy a need, such as tucking away money into a savings account, or gambling only with our winnings. All of these strategies are a form of self-control, and they can be enacted by people for other people as well.

  • Nudges can be implemented to support or take advantage of human frailties. Usually there’s more money in catering to human frailties than in helping people avoid them, even though doing so is unethical.

    Some examples of times when people need nudges are for choices that:

    • Require memory, such as paying a credit card bill, or attending a meeting

    • Have a delayed effect, such as exercising and healthy eating

    • Are difficult to make, such as picking out a benefits package or insurance policy

    • Are infrequently made, such as purchasing a house or buying a car

    • Offer poor feedback, such as choices we never make and therefore never experience the outcome of, even though they might benefit us if they became a habit

    • Have an ambiguous relationship between choice and experience, such as picking a dish to eat when we are totally unfamiliar with the options.

    You would think that markets would help people make choices based on these things, but often they prey on our inability to make a choice that really helps us, or will even sell us a bogus product that takes advantage of our frailties.

  • Nudging can have clear implications for making and saving money, as is the case with retirement savings. Many employers enroll employees by default (in the US and Sweden, for example), but choice architects have to decide what the default is, and whether they want to nudge employees to select another option, perhaps one that’s better for their unique situation.

    One strategy that seems to take advantage of our biases is a ‘save more tomorrow’ strategy, where our allocations to a savings fund increase along with our salary (a rough sketch of this escalation follows below). This contributes to a ‘set it and forget it’ mindset, giving employees confidence that their savings will increase along with their salary. The question then becomes: when is it necessary for people to change their plans?

    Advertising is a tool that can be used to provide a nudge, along with graphics and messaging. The problem is that people become so used to messaging and advertising that it no longer provides a nudge. To remedy this, it’s good practice to switch up the visual language of the nudge (e.g. with new colors and fonts), perhaps once every few months, so that it captures people’s attention in a new way. Either way, nudging can have a real positive effect on people’s savings, enabling them to make money in the future. Active choosing is the most desirable outcome, but passive use of defaults is better than nothing.
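
    As referenced above, here is a minimal sketch of a ‘save more tomorrow’ style escalation in TypeScript. The rates, caps, and timing are hypothetical, not the book’s numbers; the point is just that the contribution rate steps up alongside each raise, so take-home pay never visibly drops:

```typescript
// Hypothetical sketch of a 'save more tomorrow' style escalation:
// the contribution rate increases each time pay increases.

type YearlyPlan = { salary: number; contributionRate: number };

function escalate(
  startingSalary: number,
  startingRate: number,   // e.g. 0.03 for 3%
  annualRaise: number,    // e.g. 0.03 for a 3% raise
  stepUp: number,         // e.g. 0.01 added to the rate with each raise
  capRate: number,        // e.g. 0.10 maximum contribution rate
  years: number
): YearlyPlan[] {
  const plans: YearlyPlan[] = [];
  let salary = startingSalary;
  let rate = startingRate;
  for (let year = 0; year < years; year++) {
    plans.push({ salary, contributionRate: rate });
    salary *= 1 + annualRaise;                // the raise arrives...
    rate = Math.min(capRate, rate + stepUp);  // ...and the rate steps up with it
  }
  return plans;
}

// e.g. escalate(60_000, 0.03, 0.03, 0.01, 0.10, 8)
```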

  • Optimism and overconfidence bias are people’s natural inclination to overestimate their performance relative to others. People are optimistic about their performance for the most part, even when that optimism rarely turns out to be warranted. Unrealistic optimism can lead to dangerous risk-taking, for example with gambling or taking out high-interest loans. We can reduce the effects of this bias by taking advantage of the availability bias, reminding people of stats or stories that reflect the truth.

  • Libertarian paternalism involves preserving people’s agency and right to choose, while protecting them from their own biases (e.g. emotionally charged decisions). At some point, though, it becomes more important to give people an educational boost than to nudge them. A boost goes a step beyond a nudge because it requires more effort from a person to understand something rather than just make a decision. Sometimes it’s also best to provide a cooling-off period before a big decision is made, to allow people to ramp up or down from being emotionally charged.

  • Post-completion error is a phenomenon where, once we have finished a main task, we tend to forget things related to the preceding steps. Leaving our ATM card in the machine after getting cash, or our credit card at a restaurant after eating, are good examples. We can design around this error by including a forcing function, meaning that in order to get what we want, we have to do something else first, such as removing our card from the machine.
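
    A small hypothetical sketch of a forcing function (the ATM flow below is simplified and invented for illustration): the step people tend to forget becomes a precondition for the step they came to do.

```typescript
// Hypothetical sketch of a forcing function: cash cannot be dispensed
// until the easily-forgotten step (taking the card) has happened.

type AtmState = "card-inserted" | "card-returned" | "cash-dispensed";

class Atm {
  private state: AtmState = "card-inserted";

  returnCard(): void {
    this.state = "card-returned";
  }

  dispenseCash(): void {
    if (this.state !== "card-returned") {
      // The desired outcome is blocked until the prior step is complete.
      throw new Error("Take your card before the cash is dispensed.");
    }
    this.state = "cash-dispensed";
  }
}
```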

  • Representativeness bias is where we act based upon stereotypes. That is, we categorize something based on a preconceived notion that it belongs to a group because it reflects a stereotype we believe to be true. This can be restated as the similarity bias.

  • Sludge is friction in goal systems. Sometimes sludge exists because it’s just the way things are, and sometimes it’s intentionally designed into a system to prevent us from completing goals. For example, on Bluehost it’s easy to buy a domain and spin up a new site, but every year it will auto-renew, and in order to cancel you have to call Bluehost and speak with a customer service representative. Sludge is similar to dark patterns, and may even be a dark pattern in its own right.

  • Social influence has the ability to induce systems-level change. Put simply, psychology has demonstrated that people are more likely to do something if they see other people doing it or if the behavior becomes a norm. This was the case with same-sex marriage becoming legalized across countries post-2005, and it was also the case with people wearing (or not wearing) masks during COVID.

    People are also influenced to make decisions based on what people in power are doing. As designers these rules are important because we have the power to enact change (aka nudge) by elevating social change. We can frame things like “X% of people have done Y, and you haven’t!”. Two things are important to note here: first, we should frame things so that the people doing the undesired behavior look like the minority (e.g. the few who have not paid their taxes); second, we should do our best to frame messaging in a way that is culturally significant to that group (e.g. Don’t Mess with Texas, or Minnesotans Wear Masks), otherwise we risk people not wanting to change, or actively doing the opposite of what the messaging suggests.

    Social norms and social influence are powerful, so much so that they have cascading effects and can lead someone to select a wrong answer instead of the apparently right one, or to purchase or download something, or to change their mind (even a political stance) just because they see others they identify with doing it.

  • Stimulus-response compatibility is a psychological principle that states that the signal you receive (the stimulus) should be consistent with the desired action. When there are inconsistencies, performance suffers and people blunder. There are many examples of this failure in the world, such as doors with pull handles that need to be pushed. These issues are often the result of poor mapping.

  • The confidence heuristic suggests that people tend to think that confident speakers must be correct. This means that consistent or unwavering people can move groups and practices in their preferred direction.

  • The first rule of behavioral design is to make any action you want users to take easy; a second rule is to make that interaction fun as well. A couple of ways to make things fun are to insert a reward for completing a task, such as a lottery or a rewards program. Gamification is an example of this, but the reward has to be worth the investment.

  • The status quo bias, a fancy name for inertia, is the tendency for people to stick with the default option or status quo, whether it’s a small decision or one that can greatly impact their lives. One of the problems this creates is that people might forget which option they picked, or, when it comes time to make a decision again, not bother doing it. To design around this, it is in some cases wise to auto-enroll users in a default (as is the case with health insurance renewal) or set users back to zero (as is the case with FSAs).
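
    A hypothetical TypeScript sketch contrasting the two renewal designs mentioned above, with the data shapes invented for illustration:

```typescript
// Hypothetical sketch: auto-renewal (inertia keeps last year's choice)
// versus reset-to-zero (coverage lapses unless the person actively re-enrolls).

type Enrollment = { planId: string; year: number };

function renew(
  previous: Enrollment | null,
  activeChoice: string | null,   // what the person picked this year, if anything
  year: number,
  policy: "auto-renew" | "reset-to-zero"
): Enrollment | null {
  if (activeChoice) return { planId: activeChoice, year }; // active choosing always wins
  if (policy === "auto-renew" && previous) {
    return { planId: previous.planId, year };              // inertia keeps coverage going
  }
  return null;                                             // reset-to-zero: no choice, no coverage
}
```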

  • There is a difference between the way someone chooses something and the way someone uses it. Just because someone makes a good choice (e.g. through smart disclosure) does not mean they will also use it well. For example, someone might pick a credit card that seems good for them, but then rack up a bunch of debt. It is also true that mortgage brokers aren’t always out for our best interest, and might nudge us toward the choice that yields them the biggest payoff. It has been found, for example at car lots, that people of color and women are charged higher rates when they buy in person versus online. So the context for how and when a choice is made plays a huge role as well. One of the best ways to make something easy is to make it automatic, as is the case with mortgage or credit card payments.

  • We are biased by different ways of thinking. Our automatic system (aka System 1) is error-prone but able to make decisions quickly, whereas our reflective system (aka System 2) requires more time and effort to make deliberate decisions. Most of the time we are operating in System 1, which enables us to navigate the world efficiently. After all, if we stayed in System 2 all the time we might as well live a life of analysis paralysis. We can design for these systems by understanding that people will act on System 1 in most contexts where they are doing something automatically, reserving System 2 for more arduous modes of research.
