Ethics Matter: Dan Ariely on the Hidden Forces that Shape our Decisions

Nov 20, 2012

TV Show

Highlights

Why do smart people cheat? Why do we eat more than we should or text while driving? In this funny and insightful talk, behavioral economist Dan Ariely explores the hidden factors that shape our most puzzling decisions and shows how emotions, peer pressure, and sheer irrationality dictate our behavior.

Introduction

MARLENE SPOERRI: Hello and welcome to Ethics Matter. I’m Marlene Spoerri, program officer for Ethics Matter here at the Carnegie Council for Ethics in International Affairs. I’d like to welcome everyone who’s here with us today as well as those who are tuning in online.

Most of us take our rationality for granted. In fact, if there’s one trait that defines what it means to be human, it’s our presumed ability to act with reason and logic. Humans are rational beings.

Or are we? What if the decisions we make are often irrational, based not on logic or reason but on trivial factors, like whether we’re hungry or stressed? That’s just one of the questions posed by today’s guest.

As one of the world’s leading behavioral economists, Dan Ariely has taken direct aim at conventional economic theory. He argues not only that our decisions are often irrational, but that they are predictably irrational.

It’s his ability to understand that very predictability which brings Ariely here today. As we’ll see, his findings have important implications for everything from conflict resolution and human rights promotion to financial regulations on Wall Street and ethics education.

Backing up Ariely’s research are some pretty stellar credentials. Dan Ariely is the James B. Duke Professor of Psychology and Behavioral Economics at Duke University. He has not one, but two Ph.D.s under his belt and is the author of two New York Times bestsellers, Predictably Irrational and The Upside of Irrationality, as well as his latest release, The Honest Truth About Dishonesty. In addition to co-founding the Center for Advanced Hindsight, he also runs a blog and podcast that achieve what can only be described as the impossible: making social science both fun and understandable.

Professor Ariely, it’s a real pleasure to welcome you to the Council.

Remarks

MARLENE SPOERRI: We’ll begin with the term “behavioral economics,” which I mentioned twice. Can you give us a little bit of an intro into what that means?

DAN ARIELY: If you think about standard economics, standard economics has a very descriptive way of thinking about human nature. It starts with some assumptions about people behaving rationally. The idea of "rationally" is that we can think about all the possibilities; we consider everything across time; we have no emotion, no interference. It’s as if we had a supercomputer in our mind and we had access to all the information, and based on that we compute and execute our decisions.

Behavioral economics basically said: Let’s not make these assumptions. Instead, let’s put people in different situations and see how they behave. It turns out that when you actually watch how people behave, the rational model doesn’t hold up that well.

So just to think about it, how many people in this audience in the last week have eaten more than you think you should? [Laughter] [Show of hands]

How many people here have ever texted while driving? [Show of hands] Okay. Everybody of a certain age group.

Last question: How many people here have ever had unplanned, unprotected sex? [Show of hands]

If you think about all of those behaviors (we’ll talk about lying later on [Laughter]), it’s kind of interesting that not only do we recognize that we have done them, we recognize there’s a good chance we’ll do them again.

Think about texting and driving. It’s not as if people said, “Oh, I did it once, I realize it was incredibly stupid, I don’t want to die, I don’t want to kill other people, let me not do it again.” No. We recognize that under the right circumstances there’s a good chance we’ll do it again. This basically suggests that we don’t really work in our own long-term self-interest.

MARLENE SPOERRI: So you in fact argue that we’re irrational.

DAN ARIELY: In many ways.

MARLENE SPOERRI: Is there then variation in your data? Are women, say, less rational or more rational than men?

DAN ARIELY: It’s interesting because there’s a huge gender difference. The gender difference is that only women think that they are more rational than men. [Laughter] But in our experiments we almost never find any gender differences.

MARLENE SPOERRI: So how did you stumble upon this idea of rationality, and how did you get interested in this notion of decision-making?

DAN ARIELY: My personal story is I was badly injured many years ago. I got burned on 70 percent of my body and I spent about three years in a hospital. A hospital is a place where you can observe lots and lots of irrational behaviors. I got a lot of my interest from that perspective.

But the thing that troubled me the most was the question of bandage removal. Imagine you were a burn patient and you were covered with burns and somebody had to take the bandages off. Now the question is: What’s the ideal way to take them off? Should you rip the bandages off quickly, minimizing the duration but making it very, very painful? Or should you stretch it out over a long time, so that every second is not as painful but the whole thing takes much longer?

Just consider your own intuition. How many of you would go for the quick ripping approach? [Show of hands]

How many would go for the slow approach? [Show of hands]

This is also a test of how many of you have read any of my books. [Laughter]

Most people think that the ripping approach is better. My nurses believed that quite strongly. They held the belief that this was the right thing to do.

When I left the hospital, I started doing experiments on pain. I would bring people to the lab and I would crunch their fingers a little bit. I did experiments with electrical shocks and heat and all kinds of things. I found that it’s the wrong answer. It turns out that if you take a painful experience and you make it twice as long, you don’t make it twice as painful. But if you play with the intensity, you dramatically change how people experience it. The nurses were trying to shrink the duration. In fact, they should have tried to shrink the intensity.

The question is: How could they get it wrong? Here were good, wonderful, kind people, spending their life to help patients, and they were getting things wrong. It was not that some were doing it too fast and some were doing it too slow. They were getting it wrong in a systematic way.

That started me thinking that maybe there are things in life where our intuition goes one way but the reality is different. And because we’re good people, we never want to go against our intuition.

So think, if you were a nurse and you believed that the quick ripping approach is the right approach, would you ever try the other approach? Probably not. Every day you would say, “Let me do the best for my patient by doing it quickly.” But in fact you might do the worst thing for your patient every day repeatedly.

Nurses are not the only people in the world that get things wrong in a systematic way. It turns out it’s much more general.

MARLENE SPOERRI: In your books you’re really frank about describing a lot of the pain you went through during the recovery process. And yet, as emotional and painful as I imagine the accident and the recovery were, in some ways it seems like it also enabled you to take a more kind of objective point of view towards human behavior. Can you describe that dynamic a little bit?

DAN ARIELY: I’m incredibly grateful to the fact that our memory for pain is not the actual pain. We all should be grateful for that.

I was 18 and I was badly injured, a serious physical injury, which is not ideal for somebody who is just about to become an adult. My injury really put me outside of the social circle. All of a sudden, I was in bed, everybody around me carried on as before, and I was not able to take part in any of the activities of my friends.

For a long time I didn’t eat. I had a tube. I didn’t get out of bed. I’m not talking about reading and writing and talking to people and having a romantic relationship. I was really not involved in any of those things, any of the daily activities.

I remember the first time they asked me to chew something. Months and months after I got injured, I had to chew. It felt so strange to chew. I mean who would want to do that? We’re all so used to it, we enjoy it. That’s just a small thing.

But in many ways, because of this separation, I just kind of felt I was observing life as if I were an alien. I was distant from every possible thing that everybody else was doing around me, and I was observing it from a distance.

Even when I got out of the hospital, I was covered with burns. I had pressure bandages. I had a body suit that was supposed to put pressure on the bandages. So I had the trousers and a shirt and gloves and a mask on my face. The only things you could see were holes for my eyes and ears. Everything else was brownish. That again created separation.

For years I just looked at behavior from an outside perspective. It was very difficult in many ways. But I think as a social scientist it was a good training.

I’ll tell you one other thing. One of the things that people don’t recognize about extensive burns (I didn’t recognize it either) is how different they are. When I got burned, I thought, “Oh, you get burned; it goes away.” We have all had little burns, and they usually go away. But extensive burns are very, very different.

It’s actually a real big mystery. If you think about it, none of the cells that are there now were ever burned. The body has replaced those cells many, many times over. How can it be that the cells don’t regenerate into normal, healthy skin? The fact that the cells regenerate into a scar rather than healthy tissue is a very odd part of biology, something we don’t really understand.

But one of the real challenges in my recovery was that scars shrink. Sometimes I would sit in the same position for an hour and watch a movie, and when I finished I couldn’t straighten my elbows. The scars would shrink that quickly. I had to slowly, slowly, slowly try to stretch them back out. You know how kids sometimes do Indian burns? That’s the feeling. I had to really work against that.

In physical therapy, they would basically stretch me. I had to sleep with my neck off the edge of the bed to try to stretch it, and then I would fail. At some point you can’t do it anymore, and then it would shrink, and they would have to operate and add new skin.

This kind of fighting against my body was a very strange and difficult experience. I felt like my own body was fighting against me. It created another separation I think between my physical existence and how I thought about life.

MARLENE SPOERRI: So you go through this very treacherous and long recovery process that takes many, many years. But you stumble upon this idea that humans might be irrational and you opt, in some ways, to dedicate your career to it. Why? What’s so important about this idea?

DAN ARIELY: I think of myself as a social engineer. People have lots of interests. My interest is the things that we do wrong that we should fix. I think back to my own life and I think about all the things that people did wrong in the hospital and the ways we should fix them.

Now I see things going wrong in all kinds of ways. Every time we do something wrong, it’s a shame. This is actually one of the biggest differences between standard economics and behavioral economics. If you think that people are perfectly rational, then you just push information out there; people will have all the information and they will make the right decisions.

If you think that people are making systematic mistakes, then the question is: So what kind of regulation, what kind of rules, what kind of improvement, do you want to suggest? I really want to figure out what mistakes do we make and how do we overcome them.

MARLENE SPOERRI: Within that vein, you do a lot of work on ethics and honesty. I think one of the really interesting points that you make is that most of us consider ourselves moral and good, and yet we often turn a blind eye to the ways in which we each are individually dishonest. So what’s going on there?

DAN ARIELY: What’s going on? First of all, let’s say something in defense of dishonesty.

How many people here have lied at least once in 2012? [Laughter] [Show of hands] You know, we can ask about this week and today and the last hour.

There’s actually a very interesting experiment. They put people together for 10 minutes, people who didn’t know each other, and said, “Introduce yourself to the other person.” They went ahead and they talked.

After 10 minutes, they said, “Okay. Did you lie in these 10 minutes?”

People said, “Me? Heaven forbid. Of course not.”

Then they said, “Let’s show you the video of your discussion and let’s go point by point.” Most people recognized that they had lied two or three times in those 10 minutes.

So the truth is we lie all the time. But nevertheless we think of ourselves as good people.

We also understand the value of lying. How many people here would want to be married to somebody who always told you the absolute truth? [Laughter] Not so much.

We recognize that in the social domain some kind of rounding of corners, fudging the truth, is actually a good thing. The problem is what happens when it moves into the political arena, the business arena, the professional world. There, these things that actually help as social lubricants can become incredibly dangerous.

MARLENE SPOERRI: That’s exactly it. You argue that it’s not simply that people cheat; you say that, if you look historically, professional honesty in particular is on the decline. You cite cases like Enron. Can you talk a little bit about that?

DAN ARIELY: I think there are a couple of reasons why ethics is on the decline. It’s not because the young generation is less moral than the old generation.

One of the things we find is that people hesitate to do an immoral act directly, but it’s much easier for us to do it indirectly. I’ll give you two examples.

We did a study with 12,000 golf players. We asked them, “If the ball fell in the rough, not a good place, would you pick it up and move it four inches to the left to a much better position?”

People said, “Heaven forbid. I can’t imagine doing it. Nobody I play with would do it. I wouldn’t do it. It’s illegal. This is not what golf is about. Would never do it.”

Then we said, “What about kicking it a little bit with your shoe?”

“Yes, that’s no problem. I’ve done this a lot.” [Laughter]

“What about hitting it with a club?”

“Even easier.”

Now, here’s the experiment we did. In our experiments we tempt people to steal money from us. The way we do it is we give people a sheet of paper with 20 simple math problems and we say, “Solve as many of those as you can in five minutes. We’ll give you a dollar per question.”

If you were in the experiment, I would pass the sheets around, I would say, “Go.” You would work as fast as you can. At the end of the five minutes, I would say, “Stop. Count how many questions you got correctly. Now take the sheet of paper and go to the back of the room and shred it. Then come back and tell me how many questions you got correctly. I’ll pay you accordingly.” People do that. They say they solved six problems. We pay them $6.00. They go home.

What the people in the experiment don’t know is that we played with the shredder. So the shredder only shreds the sides of the page but the main body of the page remains intact and we can go in and we can find out how many questions people really solved correctly.

What do we find? On average, people reported six but they solved four. By the way, it’s not as if we have a few big cheaters who kind of shift the mean. We have a ton of little cheaters.

Just to give you kind of an estimation, across many experiments we have run about 30,000 people. From those 30,000 people we had about 12 big cheaters. Together they stole about $150 from me. We also had about 18,000 little cheaters, and together they stole about $36,000 from me. So that gives you a sense of the importance of big cheaters versus small cheaters.
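[Editor's note: To make the aggregate arithmetic concrete, here is a minimal sketch using only the round figures cited above; the per-person averages are derived for illustration, not additional data from the studies.]

```python
# Rough figures from the talk: ~12 "big" cheaters vs. ~18,000 "little" cheaters
# out of roughly 30,000 participants.
big_cheaters, big_total_stolen = 12, 150                 # aggregate dollars
little_cheaters, little_total_stolen = 18_000, 36_000    # aggregate dollars

print(big_total_stolen / big_cheaters)         # ~12.5 dollars per big cheater
print(little_total_stolen / little_cheaters)   # 2 dollars per little cheater
print(little_total_stolen / big_total_stolen)  # little cheaters cost ~240x more overall
```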

Now, here’s the thing about distance. Imagine you were in the experiment. In one condition you finish, you shred, you come back, and you look the experimenter in the eyes and you say, “I solved X problems, give me X dollars.”

In the other condition, you look the experimenter in the eyes and you say, “I solved X problems, give me X tokens.” We paid people in tokens. Now they take these tokens, walk 12 feet to the side, and change them for dollars. It’s all about money, but when you look somebody in the eye and lie, you are lying about something that is one step removed from money. What happened? Our participants doubled their cheating.

For me this is an incredibly worrisome result, because if you think about it, we are moving to a society with great distances, moving from cash to credit cards to the electronic world; from cash to stocks to stock options. We are moving from dealing with people directly to dealing over great distances. So I think one of the things that is happening is that with this great distance of dealing with other people, it’s easier for us to be dishonest but think of ourselves as good people. That’s one thing.

The second thing is that we take a big cue about what is okay and not okay from the people around us. So imagine the following experiment:

You are in the experiment. We do everything in the same way but with two changes. One is we give you an envelope with all the money up front, say, “This is all the money for the experiment. When you finish, give us back the money you did not make and keep the rest.”

The second change is that we hire an acting student. That acting student sits in the front row, and 30 seconds into the experiment they raise their hand and they say, “I solved everything. What do I do?”

Now, if you are in the experiment, you are still working on the first problem. There is no question this person is lying. The experimenter is saying, “You solved everything. You are free to go.” You see that person taking all their money and going home. What would happen to your morality? Lots of people cheat more.

But you could say, “Why?” There could be two reasons.

One is that we just proved to you that in this experiment you can cheat and get away with it. Here’s the proof: Somebody cheated, got away with it, nobody chased him, there were no repercussions. Maybe you could just do it.

The other possibility is that it’s not about getting away with it; it’s about the fact that it’s kind of socially acceptable. Nobody stood up, nobody did anything. Somebody from your group did it.

So to look into this, we changed the outfit of the acting student. Here is what we did. We ran this experiment at Carnegie Mellon in Pittsburgh. Everybody was a Carnegie Mellon student. The acting student was a Carnegie Mellon student.

In the second condition, he was wearing a University of Pittsburgh sweatshirt. Now what happens if you are a Carnegie Mellon student and you see a University of Pittsburgh student cheat? You still learn from the cost/benefit analysis that you can get away with it, but it doesn’t give you the social proof. It doesn’t tell you that people like you are doing that. What happened now? When the University of Pittsburgh student cheats, cheating actually goes down.

This suggests that we learn from the people around us what is acceptable and not acceptable. You can think about all kinds of things in politics and on Wall Street and doping in sports. We know what’s legal and illegal, but what’s really driving our behavior is: What do the people around us that we associate with do? Every time one person strays from the moral path, the whole group can change. I think we see this kind of progression.

MARLENE SPOERRI: So if you’re a policymaker and you’re working on something like dealing with financial regulations on Wall Street, what do you advise?

DAN ARIELY: The first thing is to realize what not to do. What do policymakers think? Policymakers think that people weigh the cost and benefit: “Is this a worthwhile crime or not? What’s the punishment? Should I do it or not?”

Because of that, with this kind of thinking, you could say, “Let’s just have big punishments, and as long as we have big punishments nobody will commit a crime.”

For example, think about something really big, like the death penalty. You would say, “If you had the death penalty, nobody would ever commit a crime.” Guess what? In the United States we have states that have the death penalty and states that don’t, and do you think there’s a difference in crime rates? We can’t observe any difference.

California, “three strikes and you’re out.” You can steal three slices of pizza, one after the other, and go to jail for life. You would say, “Nobody would ever commit a third crime.” Not so.

Now, in our experiments we also don’t find much evidence that people go through this cost/benefit analysis. So the first thing is to think about life in three stages: there’s education, there’s the moment of temptation, and there’s the future.

Policy is focusing only on the future: “Let’s create big punishments, and as long as these punishments are big nobody will behave badly.”

I think we need to focus much more on education, which is morality, and we need to think about how we get people not to feel comfortable cheating in the moment: not to move the golf ball, not to be able to rationalize it. What do we do at the moment?

How do we create a standard of ethics? Every time we create fuzzy boundaries in which lots of behaviors are possible, people are going to read that in a way that is selfishly good for them at the moment but not necessarily for the long term.

So I think it’s about education, it’s about thinking about the temptation of the moment, it’s about strict rules, and it’s really not about creating big punishments.

MARLENE SPOERRI: Talking about dishonesty, that’s a pretty depressing topic. But I think…

DAN ARIELY: Actually, the good news about dishonesty is that we are not as dishonest as economic theory would predict. Think about it.

How many of you this week had the chance to steal something and nobody could find out and you couldn’t get punished? You must have had many of those opportunities. You’ve been to a restaurant where they might have had nice silverware you could have taken home. You probably went to see a friend, you went to the bathroom, maybe you could have taken some stuff from there; maybe they had some nice stuff.

The reality is that we have lots of opportunities. If we were truly trying to maximize our wealth and were all the time thinking, “What can I steal and get away with?” we would steal much more. So the good news is that we have morality. We do have feelings of remorse, regret, embarrassment, all of those things, which actually help keep us honest. The problem is we don’t have as much of them as we wish. But it’s not all depressing.

MARLENE SPOERRI: Well, now I am going to depress you. Which is to say that I think one of the most depressing findings that you write about is that people opt not to intervene in cases of mass tragedies as the number of victims rises. I think that has important implications for a group like the Carnegie Council, that’s working to get people to intervene in cases of large-scale atrocities. Can you talk a little bit about that?

DAN ARIELY: This is something called the identifiable victim effect. Again, there is a sad part and a good part. The sad part is that we are really not moved emotionally by large tragedies. The good part is that we are moved by the cases of individuals.

There are lots of examples like this, but think about the following illustration. Imagine you are walking over a bridge on your way to a job interview. It’s your dream job interview. You always wanted this job. Your interview starts in 15 minutes. You have a new suit and a new tie and new socks and new underwear. You’re just ready for the job interview. As you walk over the bridge, you hear the cry of a baby. You look over and you see a baby about to drown. If you jump, you could save the baby. You can’t get your clothes off in time, and if you jump in with all your clothes on, the job interview is over.

How many people would consider this and say, “Well, it’s one baby. If I go and get the job, I might get a really good salary. I could give half of it to charity; that would clearly save many more babies than this one baby. I could contribute to malaria prevention; that would be really effective.” It’s really hard to think this way.

This is a case where the reality is that we will jump, we will risk our career, lose a lot of money, the suit and so on, to save a baby. There are babies who die every day that we could save for very little money. But when we see a tragedy of an individual, our hearts start working, we feel terrible, and we come to help. That’s the good news.

The bad news is that there are all kinds of things that basically mute our emotions. One of them is the size of the tragedy. You would think that if I would describe to you a poor girl in Africa who is suffering and starving, your heart would go out to her and you would want to help her, and if I described 10,000 of those, your heart would go out 10,000 times more.

Sadly, not. As the tragedy becomes larger, we actually start thinking more rationally, rather than emotionally, and we help less. That’s a real question, because we have these incredible tragedies happening all the time. Until a case of an individual comes up in which we can tie our emotion to it, we just don’t care enough and don’t do much about it.

MARLENE SPOERRI: You have also written about conflict resolution, and you’ve spoken to organizations about the role of behavioral economics in conflict resolution. I love this experiment you use with beer and vinegar. Can you talk a little bit about that?

DAN ARIELY: Yes. You would wonder what beer and vinegar have to do with this.

So here’s the thing. In this experiment, we did the following. Imagine I gave you a choice and I gave you a glass of beer with vinegar and a glass of beer without vinegar. I said, “Try them both and tell me which one you like more, which one you would like a big glass of.”

People try them and they say, “I hate the one with the vinegar. Give me the regular one.” As you would expect.

To other people, we say, “Try two glasses.” We don’t tell them one has vinegar and one doesn’t. Then we say, “Which one do you like now?” It turns out most people like the one with the vinegar.

Now here’s the thing. It turns out that in a blind taste test (by the way, it’s balsamic vinegar; we can give you the exact recipe) balsamic vinegar does make beer better, both Budweiser and Sam Adams. It just makes beer better. But when you think it will make it worse, it actually makes it worse.

So if you think about it in a bigger scheme, what this suggests is that our preconceptions about reality actually influence the way we interpret reality.

In a very sad way, I think this is the experience of, for example, the Israeli-Palestinian conflict. Every time something bad happens in the Middle East, everybody sees it from their own perspective. There’s no way for people to see it in an objective way. The Israelis see the explosion from their perspective. The Palestinians see it from their perspective. What we see is that any one of those events just pulls the two sides further apart.

If you think about this, if you think about the fact that your preconceived notions are coloring your experience, one approach to try to fix it is to try to eliminate preconceived notions.

For example, one of the experiments we have been trying to do is to get Israelis and Palestinians to play games online without knowing who the other person is. Can we create trust? If I know you’re a Palestinian and I’m an Israeli and so on, there’s a good chance that our knowledge about each other would color the way we think about each other and would not let trust emerge.

Can we think about electronic media? Can we think about getting people together before they know about the vinegar, in a sense? So far we have at least some suggestive evidence that this is a good direction.

MARLENE SPOERRI: Do you find that policymakers are open to those suggestions?

DAN ARIELY: You know what? I think we had the best success so far with policymakers in how to get more taxes out of people. [Laughter] The rest of it I think is going to be a little slower. But I’m still hopeful.

The British government after the last election opened an office of behavioral economics and they are doing all kinds of experiments with British citizens. The U.S. government has done some things. I am going to talk to the Dutch Central Bank next week. There is movement and I’m hopeful. Academics are used to everything being very slow, so at academic speed this is quite good. [Laughter]

MARLENE SPOERRI: Before I open up the floor to all of your questions, a last question about how you come up with these experiments. You’ve worked with Legos, Coca-Cola, beer. How do you come up with these ideas?

DAN ARIELY: I talk to people and I really learn a lot from the people I happen to talk to. Different experiences provide different opportunities and different ideas. That’s basically it. So I read the paper, I talk to people, I try things myself. Many of the mistakes I write about started as my own mistakes and then I tried to think about them and say, “Is it only me or are other people doing it?” and so on. For me that’s one of the fun things about social science. Social science is about our daily lives.

Every time we have coffee and we have a discussion with somebody or we feel upset that we’re late or we’re tired, whatever it is, there’s a question of how did this emerge, how did this come about, and what do we know about it, and what don’t we know about it? It’s really fun to try and take our own experiences and try to dissect them.

I think there is this belief that the big mysteries are the galaxies, the stars far away, and molecular biology. I think just as big a mystery are the things that we do day in and day out without necessarily thinking about or understanding why we do them and how we do them.

Questions

QUESTION: Thanks. That was very interesting. I have a question on what you were talking about as preconceived ideas. Can you talk a bit about how you would translate that into policy? When you’re talking about the example of the Israeli-Palestinian conflict, obviously you probably can’t have a negotiation online where no one knows who the other person is.

DAN ARIELY: That’s right.

QUESTIONER: So how do you translate that into policy?

DAN ARIELY: I think there are lots of ways to think about how you would apply a principle like this. The principle is that if you have a preconceived notion, it will color everything you perceive.

One thing you could say is, “How do we create camaraderie between people?” There’s the question of politicians, but there is also the question of how you create things on the ground. The question about the game is not about politicians negotiating; it’s about getting people to talk to each other and deal with each other.

I think for me what this suggests is that at the end of the day there is not going to be any chance that the Israelis and Palestinians will sit at a table and agree on the facts. It basically says: if we can’t agree on whether we like a beer with vinegar or without vinegar, which is supposed to be quite simple, and we have data, how are we going to agree about facts and history, about what happened in 1948 and what happened in 1967?

Because of that, I would try to steer the discussion away from arguing about the facts to thinking about what will happen in the future. I think that’s one issue.

I see lots of debates about, “Oh, this happened.” “No, it’s this interpretation.” I don’t care.

By the way, the same thing happens in personal relationships, right? You don’t want to have a discussion with your wife saying, “This happened” or “this happened.” You each have your own different perspectives. Both of them could be wrong and not an accurate reflection of reality.

So I think the first thing it suggests is we shouldn’t fight about what the right facts are. There is no chance for agreement.

The second thing is that I think it suggests that there is an important role for intervention from a third party, that the idea that the two parties would agree is just very, very low.

So it’s basically kind of depressing about the ability of the two sides to come to a resolution and agree on the facts and on the solution. But it tells you something: maybe you shouldn’t expect that and should do something else.

QUESTION: Have you found that simply knowing about the irrationality helps to prevent behaving irrationally, or is policy absolutely necessary?

DAN ARIELY: Does knowledge help? It’s a centrally important question. The answer is that it depends on the domain of irrationality.

If you think about texting and driving, we all know this is not ideal. Regulating texting and driving can actually backfire. There are some states in the United States that have made it illegal to text and drive. What happened in those states? The accident rate went up. Why? Because instead of texting over the wheel, people text under the wheel. Now, don’t people know that this is more dangerous? They know. But it doesn’t seem to protect against the stupidity of this particular act.

I think that in general terms if you count on the idea that you would know something and you would execute it every day, every time, the odds that it would work out for you are very low.

So, overeating. We know (not a big mystery) that we should eat less. Nevertheless, if you have to get people to make the decision to eat less three times a day or six times a day, that is a recipe for failure.

The cases where knowing can help are cases like financial savings, where you could say, “I know that I would fail at these kinds of things, so let me create automatic deductions from my checking account.” It’s a one-time decision that is going to help you behave better in the long run.

So I think it’s not just about knowing; it’s about knowing and then acting on it every time you are tempted to behave badly, and that is a really high bar. So instead, I think we need to set up something that carries that knowledge for us over the long term.

Now, the other thing about regulation: there is a very nice book called The Darwin Economy, which argues that we should think about Darwin as the father of economics rather than Adam Smith. [Editor's note: Check out Robert H. Frank's November 2011 talk at Carnegie Council on his book The Darwin Economy.]

It gives the example of the elephant seal. The elephant seal, the biggest male in the herd, gets all the females. So do you want to be the second largest? No, no returns for that. You just want to be the largest. So what happens? All the elephant seals try to get bigger and bigger and bigger. They are fighting to be bigger and bigger and bigger. Now they are so big that they die from heart problems, vascular problems, sometimes they crush the females when they copulate.

Think about it. If you’re the second-largest, do you say, “Oh, it’s not really good for the species to get so large. Let me not try to be a little larger”? Of course not. You try to be larger.

By each of us fighting to be just slightly better than the rest, to have a slightly bigger home (you can think about all kinds of analogies), we can actually destroy the species.

In the United States, one interesting example of this is education. You want your kids to go to a good school. To go to a good school, because of the way taxes pay for schools, you want to move to a neighborhood that has high taxes and good schools. Everybody wants to do it, so the cost is high. So people basically stretch their budget and go slightly above what they can afford in order to give their kids a good education. What happens? Domestic violence increases, divorce of course, and bankruptcies.

In the same way, if we all fight to get things a bit better for ourselves, we can destroy the economy. This is a case where regulation is really about coordination, about not letting our individual desire to be slightly better than others destroy the whole environment. So I think there are cases where it is needed.

One of the things people often miss is that in economics there’s a notion of equilibrium. Where do we meet? What’s the meeting point? There are many equilibria, some of them good, some of them not. The question is: How does an equilibrium get chosen? Policy is really about choosing the right equilibria.

I’ll give you a very mundane example. How many of you have been to some party where most of the discussions were about sports and the weather and you said, “I really wish topics were different”? It just happens all the time.

But here is what happens. We have a ton of topics to choose from. We gravitate to the lowest common denominator. It’s one equilibrium. How nice would it be if there were rules in which at cocktail parties sports and the weather were out, not a possible topic, only talk about interesting stuff? Everybody would have been much happier.

If you think about it, in lots of ways the goal of regulation is to prevent this deterioration to the bottom, or this competition, or all kinds of other bad behaviors.

QUESTION: In your research, have you looked at the correlation between education levels (I’m not talking about ethical education; conventional education, if you want to call it that, academic qualifications) and the incidence of rationality in making mundane decisions of the kind you were talking about, and also being honest?

DAN ARIELY: Yes. There’s no question that IQ comes to bear in some types of decisions. If you think about long-term calculations, if you think about decisions when you have many choices, decisions where you have some kind of potential way to think about it in mathematical terms, both formal education and IQ would matter.

In terms of honesty, we have been doing research with relatively low-income people. We have been waiting outside of the parole office in Durham, North Carolina, and we have also done studies in poor local neighborhoods in Durham. We’ve also tested bankers in bars not too far from Wall Street. In general, in our dishonesty experiments we don’t find a big difference based on that.

If I can, I’ll tell you a more general point. We also have done these experiments I described to you in many places. I grew up in Israel, so the first place we went to test was Israel.

How many of you think that the Israelis would cheat more than the Americans? [Show of hands]

Four. This is a politically correct group.

The Israelis cheat just the same.

Francesca Gino, my Italian collaborator, said, “Come to Italy. We’ll show you what the Italians can do.” [Laughter] The Italians cheat just the same.

We tried Turkey. We tried China. We tried England. We also tried Canada, because the Canadians usually think that they are better (they’re not; they cheat just the same).

Now here’s the thing. Anybody who has traveled to other places has the strong feeling that cheating in different places feels very different. So how can it be that our experiments show these incredible similarities? I think it’s because our experiments are really detached from any cultural context. They don’t come up in a particular setting. They are general. You have never done this task before. Which area of life does it fit into? None, right? It’s new.

Because of this, our experiments tap into the general human ability to rationalize, and that is the same everywhere.

Now, it doesn’t mean culture doesn’t matter. But I think (and we actually have some findings on this) that culture matters domain by domain. So culture can take a domain, like illegal downloads, and young people today say, “Don’t worry about this. This is not a moral question.”

Culture can take something else, like cheating on taxes, and say, “Don’t worry about it.”

Culture can take something like infidelity. In some countries they really worry about it (if somebody has an affair, they make it the biggest item on the news for a while), and in other countries, like France, you could say, “Don’t worry about this.”

So culture does matter, but it matters in a domain-by-domain, specific way. I think this is actually a more general point about how we take decision rules and apply them.

I will tell you, we did find one cultural difference. When we run these experiments on cheating, we run them in two ways: we either run them in universities, because students are the same everywhere, or we run them in bars, because we think people who go to bars are the same everywhere. When we run these experiments in bars, what we do is change the payment so that for every four questions you solve correctly, you get one glass of beer at that particular place. Beer is kind of the international currency.

One of the places we ran this experiment was a bar in Washington, D.C., where congressional staffers hang out. Then we ran a similar experiment in a bar in New York City where bankers from Wall Street hang out. This is the only place we found a difference. So who do you think cheats more, the politicians or the bankers?

How many people vote for politicians? [Show of hands]

How many vote for bankers? [Show of hands]

It was the bankers, two to one. You can’t be happy with this. This is not the result that you say, “Yes!” [Laughter]

Now, I should point out two things. One is that the domain of cheating was money, which is closer to banking than to politics. The second thing is that these were junior politicians, congressional staffers, and maybe there’s room for growth over time. [Laughter]

QUESTION: How much do you think the changing society and rising inequality and “winner take all” are affecting cheating and ethics? The reason I ask is that every time I cheat, lie, or steal, I weigh the benefit to me against the cost of looking in the mirror and thinking to myself, “Boy, I’m not the person I thought.” To your example of the job interview and the drowning baby: “There are a lot of jobs; I’ll save the baby.” But if this were an American Idol audition, with career, fame, and fortune on the line, and that’s the only shot: screw the baby.

As the world moves towards that, is it hopeless? What do you think?

DAN ARIELY: What happens when it’s winner take all? One thing, I think that if you actually walked over the bridge and the baby was crying, even if it was the only job out there, you would jump. I think emotion would take over. You would not think about it, right? That’s the role of emotion. Emotion basically takes over for the cute baby. [Laughter]

Now, the question about winner take all, I think, is actually a more complex question. In addition to the lab research on cheating that I told you about, I’ve also talked to all kinds of big cheaters. I invited people who cheated in all kinds of ways to come and talk to me on camera, and I interviewed them. I talked to people involved in accounting fraud, insider trading, and doping in sports.

One of the things that one of the athletes told me, a cyclist who took drugs, was that in his mind he was taking drugs not to get ahead but to keep his rightful place. This is again kind of going back to this coordination notion. In a competitive environment, there are lots of things that could make you rationalize all kinds of behavior: “Everybody is doing it”; “I need to be true to myself to do this.”

I think as we start to care more and more about competition, and as some competitors deviate from the rules, they basically bring everybody else with them.

I’ll tell you one other thing. We did an experiment. We made people the CEOs of a fictitious bank. The bank was doing well for a while and then it started doing badly. As the bank was doing badly, they got suggestions for things that are called in the industry “revenue enhancements.” Revenue enhancements is a nice name for screwing your customers: you can hold checks for longer, increase fees, add small print, all kinds of things like that. There are quite a few companies that optimize revenue enhancements.

Now the question for the CEO of this bank is how many of these revenue enhancements will they start using?

Some CEOs were told that the goal of their bank was to maximize shareholder value and some were not. Guess what? The people who were in the mode of maximizing shareholder value were willing to adopt many more revenue enhancements, screwing their customers.

What it basically tells you is that the moment you have an ideology to tie your selfish incentives to (they actually got paid based on the revenue of the bank), everything becomes easier.

You can think about lots of ideologies: “That’s what society expects of me”; “It’s a competition”; “Everybody else is doing it.” All of those things help rationalize and justify negative behavior.

QUESTION: Religion is a form of ideology. Most religions do have a moral code and stress the importance of the moral code. Have you done any tests to see whether people who claim to be religious are more likely to be honest as compared to atheists and people who disclaim religion?

DAN ARIELY: I wish we had an hour.

First off, let me tell you a story about religion. There’s a story in the Bible where God comes to Sarah and says, “Sarah, you’re going to have a son.” Sarah laughs and says, “How can I have a son when my husband is so old?” God says, “Don’t worry.”

Then God goes to Abraham and says, “Abraham, you’re going to have a son.”

Abraham says, “Did you tell Sarah?”

God said, “Yes.” This is my reenactment of the Bible. It’s not exactly like this. [Laughter]

Abraham said, “And what did Sarah say?”

God said, “Sarah said how could she have a son when she is so old.”

The religious scholars have wondered how God could lie. The conclusion is that it’s okay to lie for peace at home (shalom bayit in Hebrew).

If you think about it (I talked to quite a few religious leaders about this), one of the rationalizations is that there are many human values. Honesty is one of them. But human values happen to collide from time to time. Not all human values are compatible all the time. What do you do when they collide? That is something religion has at least some thoughts about.

The other thing I’ll tell you about religion: we’ve done some experiments on this, and the lead-in to the experiment was the following little joke.

A guy goes to the rabbi and he says, “Rabbi, you wouldn’t believe what happened. Somebody stole my bicycle from synagogue.”

The rabbi is appalled. “Somebody stealing your bicycle from synagogue? This is terrible. This is awful. I can’t believe this is happening. Here is what you do. Come to synagogue next week and sit in the front row. As we go over the Ten Commandments, turn around and look at everybody in the eyes. When we get to ‘Thou shalt not steal,’ see who can’t look you straight in the eyes. You’ll know that’s your thief.”

The guy is very excited. He comes to synagogue, sits in the front row, and turns around during the Ten Commandments. At the end of the services he goes to the rabbi and says, “Rabbi, you wouldn’t believe it. It worked like magic. It worked like a charm. The moment we got to ‘Thou shalt not commit adultery,’ I remembered where I left my bike.” [Laughter]

Now, where is the experiment? We went to UCLA and we asked 500 undergrads to try and recall the Ten Commandments. None of them could recall all Ten Commandments. They invented lots of interesting new ones. [Laughter]

But what happened after they just tried to recall the Ten Commandments? We tempted them to cheat in the same task I described earlier. Nobody cheated.

It wasn’t that the people who remembered more commandments didn’t cheat and the people who couldn’t remember any commandments, the nonreligious people, cheated a lot. It was across the board. In fact, even when we take self-declared atheists and get them to swear on the Bible, they stop cheating.

So if you think about religion, I think that religion has a couple of elements within it.

One element is heaven and hell, something in the long term. I don’t think that matters. I don’t think we have any evidence of human behavior being driven by that kind of long-term consideration.

But I think that religion does give you lots of “do” and “don’t do” rules for daily life. I think these rules are actually helpful. This is why I think that being spiritual is not enough. I think that actually having specific rules for behavior helps people regulate their approach.

I’ll give you a hypothetical example. Imagine Alcoholics Anonymous. Very strict rule: don’t drink. Imagine instead the rule was more or less half a glass a day. How would that look? The glass would become very big. You would drink now and count it against next week. The fact that we have specific rules that mandate our behavior on a small scale is, I think, incredibly helpful.

I will tell you one other thing about religion. We’ve done experiments where we gave people hundreds of opportunities to cheat over time. What we see is people cheat a little bit, trying to balance feeling good, cheating a little bit, feeling good. Then at some point many people switch and start cheating all the time. We call this the “what the hell” effect. If you think about it, if you think you’re good you’re good, but if you think you’re not good you might as well enjoy it, go all the way.

Then we thought about the Catholic confession. We thought, “Why would people ever stop? The Catholic confession gives you a way to start a new page.” So we tried that. People cheat a little bit, they start cheating a lot, we give them a chance to say what they have done badly and to ask for forgiveness. With both of those elements cheating goes down to the pre-“what the hell” level. I think this is actually an interesting mechanism that religion has created to actually help people open new pages.

You can ask yourself: What would happen if we tried to create new pages in more secular aspects of society? We are now comparing, by the way, the Catholic approach to the Jewish approach. If you think about it, in Catholicism it’s at the discretion of the sinner when you’re going to confess. In Judaism it’s once a year, and in Judaism everybody confesses at the same time, so there is social coordination. Which one of those two is working better?

Also at some point I tried to get the Israeli government to make the tax day in Israel just before or just after the Day of Atonement. [Laughter] No takers.

QUESTION: It’s a wonderful discussion. Thank you very much. But I am wondering a little bit about the practical aspects of this. You, as someone who knows probably the most around here about irrational behavior and decision-making, how does that affect your decision-making? And when you’re talking about things like the Israeli-Palestinian conflict, or maybe the current political climate that we have here in the U.S. where there is such divide between the two parties and there isn’t much room there for a third-party intervention or mediator, how do we address these as a practical matter?

DAN ARIELY: I’ll give you an example that will go back to your question as well about data. There was a philosopher called John Rawls. John Rawls asked the question of “What’s a just society?” He came to the conclusion that a just society is a society that if you knew everything about it you would be willing to enter it in a random place. He called this “the veil of ignorance.” It’s an interesting way to think about society.

If you think about wealth distribution, you say, “Some people are wealthy, some are not.” If you are very poor, you might want society to be more equal; if you’re very rich, you might want society to be very unequal. But if you don’t know where you will end up, and it could be random, you have to consider all possibilities. This was his version of social justice, what he called “the veil of ignorance.” We asked almost 30,000 Americans to describe to us a distribution of wealth such that they would be willing to enter it in a random place. We did it in many ways.

One of the ways we did it was in terms of the wealth distribution in the U.S. Imagine we took the U.S. and divided it into five buckets: the top 20 percent, the next 20 percent, the next, the next, and the next. How much of the total wealth (100 percent) do you think is owned by the bottom 40 percent? It’s 0.3 percent. There is a measure of inequality called the Gini coefficient, and by that measure we have a very unequal distribution of wealth in this country.
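[Editor's note: For readers curious how a Gini coefficient relates to the quintile framing used here, the following is a minimal illustrative sketch. The quintile shares are placeholder values chosen so the bottom 40 percent holds roughly 0.3 percent, as cited; they are not figures from Ariely's studies.]

```python
def gini_from_quintiles(shares):
    """Approximate a Gini coefficient from quintile wealth shares
    (poorest quintile first, shares summing to 1) using the trapezoidal
    rule on the Lorenz curve."""
    assert abs(sum(shares) - 1.0) < 1e-9
    n = len(shares)
    cumulative = 0.0
    lorenz_area = 0.0
    for share in shares:
        previous = cumulative
        cumulative += share
        lorenz_area += (previous + cumulative) / (2 * n)  # one trapezoid slice
    return 1.0 - 2.0 * lorenz_area

# Illustrative quintile shares: bottom 40% holds ~0.3%, top 20% holds ~84%.
us_like = [0.001, 0.002, 0.040, 0.117, 0.840]
perfectly_equal = [0.2] * 5

print(round(gini_from_quintiles(us_like), 2))          # ~0.72: highly unequal
print(round(gini_from_quintiles(perfectly_equal), 2))  # ~0.0: perfectly equal
```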

So first of all, we have a very unequal distribution of wealth, and when you ask people what it is, they don’t know what it is. But when we ask people to tell us what they think it should be in the Rawls world, it’s much more even. In fact, when we show people a distribution of wealth that is more equal than Sweden’s, and the distribution of wealth that is the U.S., and we say, “Under the veil of ignorance, which one would you rather live in?” 92 percent of Americans choose the one that is more equal than Sweden.

Now, going back to your question, when we break this down by Republicans and Democrats, there is almost no difference. When you ask this question in general, 92 percent prefer the distribution that is more equal than Sweden’s. For Democrats, it’s 93 percent. For Republicans, it’s 91 percent. Different, but tiny differences.

I think this is exactly the beer and vinegar story. What happens is that when you think under the Rawls constraint, you think in abstract terms about what a just society is. You’re not taking your current position into account. In fact, that’s what the Rawls constraint does; it says you don’t know where you’ll end up.

So I think about these kinds of exercises, where we basically just get to think about what we consider a just society we would like to live in. I’m not saying somebody would go through this and then vote for a different party. But I think that the moment you get to confront your ideas about what is right and wrong without being colored by your own position, people turn out to be much, much more similar.

In our experiments we don’t find differences between Republicans and Democrats, rich and poor, men and women; there are tiny differences, but really insignificant ones. We don’t find differences between people who go to Harvard Divinity School and people who go to Harvard Business School, or between NPR listeners and Forbes readers. The reality is that I think if you strip things down and talk about what we consider justice, we would be much more similar, not perfectly similar, but much more similar than the current political system would have you believe. That’s my good news.

MARLENE SPOERRI: Dan Ariely, thank you so much for joining us today. It was a fascinating discussion.
