Josh Greene is a Harvard psychologist, neuroscientist, and philosopher whose research has reshaped how we understand moral decision-making. But after publishing his book Moral Tribes, Josh changed his mind - realizing that explaining why people clash wasn’t enough. Since then, he’s focused on building tools to reduce division and promote cooperation.
We explore how Josh made that shift, what it means to be a “moral engineer,” and how projects like Giving Multiplier and a bipartisan trivia game are helping people bridge divides in an increasingly polarised world.
Want more Josh Greene or his work?
- Check out Josh’s lab and research
- Try Giving Multiplier — use code changedmymind for a matching bonus
- Play Tango — the trivia game tackling polarisation
About the hosts:
Thom and Aidan left boring, stable careers in law and tech to found FarmKind, a donation platform that helps people be a part of the solution to factory farming — regardless of their diet. While the podcast isn’t about animal welfare, it’s inspired by their daily experience grappling with a fundamental question: Why do people so rarely change their minds, even when confronted with compelling evidence? This curiosity drives their exploration of intellectual humility and the complex factors that enable (or prevent) meaningful belief change.
Thoughts? Feedback? Guest recommendations?
Email us at hello@changedmymindpod.com
00:00:05
So from my point of view, political conflict, intergroup
00:00:08
conflict is the mother of all problems.
00:00:11
Because, you know, whether it's having plausible solutions to
00:00:14
climate change or laws that prevent animal suffering or the
00:00:20
next pandemic, or the rise of dangerous artificial
00:00:23
intelligence or human trafficking, whatever it is, if
00:00:27
we're all willing to work together as a group, if we all
00:00:30
see each other as part of the same us, we can solve all of
00:00:33
those problems. I'm Thom and I'm Aidan, and
00:00:36
you're listening to Change My Mind.
00:00:38
Where interesting people share their biggest changes of heart
00:00:41
and takes along their journey from first doubts to completely
00:00:43
new perspectives. Today we're joined by Professor
00:00:47
Josh Greene of Harvard University.
00:00:49
Professor Greene is an experimental psychologist,
00:00:51
neuroscientist and philosopher. His research has fundamentally
00:00:54
shaped how we understand moral decision making.
00:00:56
His early work used brain imaging to reveal that our moral
00:00:59
judgements arise from two competing systems: fast
00:01:02
emotional responses, and slower deliberative reasoning.
00:01:05
But Josh's career took an unexpected turn after publishing
00:01:08
his acclaimed book Moral Tribes in 2013.
00:01:11
While the book diagnosed why different moral communities
00:01:14
clash, explaining how the very instincts that make us
00:01:16
cooperative within groups cause us to fight between groups, Josh
00:01:20
realized that he'd offered little in the way of real
00:01:22
solutions. This led to what he calls a
00:01:24
reckoning, a fundamental shift from studying moral psychology
00:01:27
to actively shaping moral progress.
00:01:29
Instead of just explaining why people can't get along, he
00:01:32
decided to build tools to help them actually do it.
00:01:35
His latest project is an online trivia game that pairs Democrats
00:01:38
and Republicans as teammates, which has already shown
00:01:40
remarkable promise in breaking down barriers between groups.
00:01:44
In other words, Josh has changed his mind about the role of
00:01:46
scientists, from observers of reality to actors shaping the future
00:01:50
for the better. Now he's helping to change other
00:01:52
people's minds about their potential political adversaries.
00:01:57
Hi, Josh. Thank you for joining us.
00:01:59
Thanks for having me here. Excited to chat to you today.
00:02:02
You had a very successful early career as a philosopher and
00:02:05
moral psychologist, developing ideas that later found their way
00:02:08
into your public facing work and your 2013 book Moral Tribes.
00:02:12
That book discusses a number of things I think are
00:02:15
going to come up a lot in this conversation.
00:02:17
But essentially you tell the story of how common sense
00:02:20
morality that we've evolved has been fantastic for in-group
00:02:24
collaboration. That leads to some really
00:02:25
serious problems when it comes to us collaborating between
00:02:28
groups, the classic us versus them that is so common in
00:02:33
today's politics. For example, you've said that
00:02:35
you are really happy with the book, but that it offered little
00:02:38
in the way of practical real world problem solving.
00:02:41
And this led you to what you called a reckoning,
00:02:44
reorientating your career to focus on finding ways to use
00:02:47
tools of the lab to help solve real world challenges.
00:02:51
Many academics see their role not as engineers of society, but
00:02:54
as observers. And I wonder, what is it that
00:02:56
made you unsatisfied to stick to trying to explain the world and
00:03:00
caused you to want to change it instead?
00:03:02
Yeah. Well, it's interesting.
00:03:04
I think if you asked me back in the old days, am I interested in
00:03:09
real world impact, I would have said yes.
00:03:12
And if you said, OK, well, you became a philosopher.
00:03:14
That doesn't sound like you know what you do if you want to go
00:03:17
out and change the world. And I would have said no, no,
00:03:19
you don't understand. You know, you look at people
00:03:21
like some of my heroes in the consequentialist tradition like
00:03:25
John Stuart Mill and Peter Singer and say philosophers can
00:03:28
really have a big impact. It can take a long time.
00:03:31
But you know, that's what I was trying to do.
00:03:33
If you had asked me, you know, what was my theory of
00:03:35
change, I would have said something like get the
00:03:38
philosophy right and then apply it right.
00:03:42
And I guess the thing that was distinctive about my early work
00:03:46
was bringing in the psychology and the neuroscience.
00:03:49
That is, I saw what I thought was a way to have a bit of
00:03:54
philosophical, psychological self-knowledge: that if we could
00:03:57
understand how our moral thinking works, we could gain
00:04:02
a better understanding of when our intuitions in particular are
00:04:06
likely to be on target and when they're likely to be leading us
00:04:09
astray. Then we can kind of correct our
00:04:13
moral thinking and perhaps resolve some of the persistent paradoxes
00:04:18
in moral philosophy. And so in the ideal version of
00:04:22
this, it would go something like this.
00:04:24
Utilitarianism makes a lot of sense.
00:04:27
That is, when, when would you ever want to make the world
00:04:30
worse off overall, right? But there are these intuitive
00:04:34
objections to it, and a lot of them have to do with making
00:04:38
sacrifices that seem wrong, even if they're for the greater good.
00:04:41
A stylized version of this is the trolley problem, and
00:04:45
specifically versions of it like the footbridge dilemma, where
00:04:48
you can push the guy off the footbridge and push him in front
00:04:52
of the trolley and he'll die, but you can save the five people
00:04:55
from getting run over. And most of us feel like that's
00:04:58
wrong, even if we assume that it would promote the greater good.
00:05:01
And so I got very interested in these things and it was partly
00:05:05
out of just scientific curiosity and wanting to test my ideas
00:05:08
about what's actually going on in our brains when we're making
00:05:11
these decisions. But I also thought there was, I
00:05:14
wanted to do psychologically informed philosophy, that is to
00:05:18
be able to say, look, this intuition here, if you really
00:05:21
pay attention to it, this is a kind of emotional overreaction.
00:05:25
And what we're really responding to here, we've since learned and
00:05:28
actually this has now been tested in countries all over the
00:05:30
world. We respond to things like the
00:05:32
difference between pushing somebody to their death versus
00:05:35
hitting a switch that achieves the same result.
00:05:37
But if you called a friend from the footbridge and said, help
00:05:39
me, I've got to decide whether or not to kill this one person to
00:05:42
save the five people. You wouldn't say, well, that
00:05:44
depends, are you using your hands to push the person
00:05:47
directly or can you do it by hitting a switch?
00:05:49
Right. So there are these quirks that
00:05:51
are quite pervasive in our moral thinking.
00:05:53
And my goal was to try to understand those things and then
00:05:57
do philosophy better. And then in the limiting case,
00:06:00
the hope was that the field of philosophy would coalesce and
00:06:03
say, aha, the paradoxes have been resolved and we can now see
00:06:06
that the most potent objections to utilitarianism come
00:06:11
from overgeneralizations of otherwise useful moral
00:06:15
intuitions. And some people have said, yeah,
00:06:18
right on. It didn't precipitate a massive
00:06:21
shift in people's philosophical thinking, whereas it did
00:06:24
actually have a pretty big influence just on the basic
00:06:26
science on the field of moral psychology and in giving rise to
00:06:30
neuroscientific moral psychology and in social psychology.
00:06:34
So that theory of change, where we go from philosophy with a
00:06:39
kind of importation of new psychology and neuroscience to
00:06:43
better philosophy to better action in the world, if that
00:06:48
pathway is going to work, it hasn't happened yet.
00:06:50
And maybe I have the wrong time scale, maybe next century, I
00:06:54
don't know. Maybe there are people out there
00:06:55
who got that message and are doing great things.
00:06:57
I felt like I wasn't personally satisfied with that.
00:07:01
And I started thinking, well, what would it take to apply
00:07:04
these ideas more directly? And this has really come out in
00:07:08
two different public facing projects.
00:07:11
One is Giving Multiplier, which is a website that uses some
00:07:17
of psychological insights that are related to things in my
00:07:20
psychology research and in my book Moral Tribes to boost the
00:07:23
impact of charitable giving. And then the project I'm most
00:07:26
focused on now is called Tango, where the idea is it's a
00:07:29
scalable tool for reducing political polarization and
00:07:34
intergroup conflict. So that's what I'm focused on
00:07:36
these days. Fantastic.
00:07:37
I'm looking forward to getting into both of those later in
00:07:41
this conversation. But before we do that, I just
00:07:43
wanted to stay on this piece that you've been speaking
00:07:46
about around your kind of initial theory of change.
00:07:47
I wonder, I guess one of the most obvious objections that
00:07:51
someone might have when they're talking about, you know, these
00:07:53
intuitions that we have that maybe are over generalized might
00:07:56
be to say, well, morality is really just what we think
00:08:01
it is, a sort of non-realist version of morality.
00:08:03
And so to say that we have thought wrongly about morality
00:08:08
is maybe a contradiction in terms in that what we think
00:08:10
about morality is what it is. And so if we think that there is
00:08:13
a distinction between pushing somebody off a bridge and
00:08:16
letting them die, that that is in fact, you know, that's a
00:08:19
relevant moral distinction. Even if a utilitarian would
00:08:22
think that this is somehow arbitrary.
00:08:24
So what would be a response to that
00:08:25
kind of objection? So I would say, I mean,
00:08:28
and you could try to appeal to some ultimate moral truth, but
00:08:30
I'm secretly not a moral realist.
00:08:33
I kind of wish I were. I want to be a moral realist, but
00:08:36
I'm not. I haven't been compelled.
00:08:37
So for me it's got to be Plan B, which is the arguments have to
00:08:42
be based on a kind of coherence or incoherence, an internal
00:08:46
consistency. So for the person who says, well
00:08:50
I'm a human and my human brain tells me that it's OK to save five
00:08:56
lives threatened by a trolley by
00:08:59
hitting a switch that turns it onto a track with one person
00:08:59
instead. But it's not OK to push somebody
00:09:01
off of a footbridge to save 5 people, I would say.
00:09:03
Well, let me tell you what's going on.
00:09:05
What's going on at a neural level is that the action of pushing the
00:09:09
guy off of the footbridge is sending a signal to your
00:09:13
amygdala that is a kind of negative emotional response.
00:09:16
And it is similar to the kind of response you'd get if you saw a
00:09:20
stick that looked like a snake, right?
00:09:22
Or something like that. And then there's another part of
00:09:25
your brain that is doing kind of the cost-benefit reasoning. And
00:09:28
in particular, and this is where you really get the normative
00:09:31
traction. What triggers that response?
00:09:34
It's many things, but the strongest effect is the
00:09:37
difference between pushing with your hands in a direct kind of
00:09:41
way versus hitting a switch, right?
00:09:43
And so, as I said before, most people don't think that's a
00:09:47
difference that ought to matter. But the science can tell us,
00:09:50
well, guess what? That's what's making the
00:09:52
difference for you right now. Someone could say, OK, well,
00:09:57
psychologically speaking, that may be what made me say that
00:10:00
it's wrong in this case and right in that case, or maybe
00:10:02
what makes other people say that.
00:10:04
But there is, in fact, a real justification for doing that.
00:10:09
And then I would say, OK, and what is that justification,
00:10:12
right? You know, where does it come
00:10:14
from? And I think what you're going to
00:10:16
end up with are people who are just rationalizing.
00:10:18
That is, they're searching around for a theory and digging
00:10:22
out their old Thomas Aquinas books or whatever it is to try
00:10:25
to justify it. But if they're honest with
00:10:27
themselves, they're really just trying to justify their
00:10:29
intuitive emotional responses. And I made this argument about
00:10:33
deontological philosophy more generally.
00:10:36
So in a serious but somewhat cheeky way, I pointed to Kant,
00:10:39
for example. Kant has a little essay called On Wanton
00:10:43
Self-Abuse, in which he explains why masturbation is wrong, and he
00:10:48
appeals to the categorical imperative.
00:10:49
He says, look, you're using yourself as a means to the
00:10:52
gratification of an animal drive, right?
00:10:55
Now, is he really applying the categorical imperative in some
00:10:58
kind of coherent way there? Or is it just that he has an
00:11:01
18th-century prudish Prussian set of moral intuitions and that he's
00:11:06
then setting about to find a justification?
00:11:08
And I think that this is a lot of what we do, kind of the rough
00:11:11
and tumble of normative ethical argument.
00:11:13
Yeah. So all of this is to say, I
00:11:15
think psychology shows us in ways that are hard to deny, if not
00:11:20
impossible to deny, that we're kind of kidding ourselves a bit.
00:11:23
And that's the strategy there.
00:11:26
Yeah, I'm with you. I also don't think that there
00:11:29
are objective moral truths. And so ultimately it's all a
00:11:32
matter of intuitions. But what this kind of
00:11:34
psychological evolutionary lens does is it allows us to dig into
00:11:39
various of those intuitions, particularly when some of them
00:11:41
conflict, and see which ones, when it really comes down to it,
00:11:44
are driven by evolutionary forces such that we don't really
00:11:48
endorse these intuitions when we're evaluating them and which
00:11:50
ones, upon reflection, we do in fact endorse.
00:11:52
And so that does seem like a really valuable process.
00:11:55
I think it's funny you mentioned kind of this original view.
00:11:57
You know, you're going to show people, give them the tools to
00:11:59
identify what's going on in their own psychology, and then
00:12:02
they'll figure out how to do ethics properly.
00:12:04
And then, you know, step three, solve the world.
00:12:06
But it reminds me of how in the kind of rationalist movement,
00:12:09
people basically thought if we just teach everybody about all
00:12:12
of our cognitive biases and logical fallacies, then we'll be
00:12:15
able to identify when we're doing them and we'll be
00:12:16
perfectly rational and figure out the truth.
00:12:18
But when you look at, you know, people in the rationalist
00:12:21
community, that is not what has happened.
00:12:23
And educating the world on these cognitive biases and logical
00:12:26
fallacies doesn't seem to have made people form more accurate
00:12:30
views either. If anything, it just allows them
00:12:33
to have more sophisticated ways of attacking their opponents and
00:12:36
justifying their own beliefs. So yeah, it sounds like you
00:12:39
hoped that it was going to have this sort of change in the world
00:12:42
and it didn't. Was there a specific kind of
00:12:45
moment that crystallized this feeling that just explaining
00:12:48
things wasn't enough, you needed to actually just make the change
00:12:52
yourself? No, there was no one moment, and
00:12:55
it really came more from myself than from other people saying
00:12:59
come on Josh, go do something useful.
00:13:01
It was the fading sense that these ideas combined with other ideas
00:13:06
sometime down the road would have the kind of big effect that
00:13:10
I would hope for. You know, it's funny, I actually
00:13:12
sort of foreshadowed this failure in some sense in,
00:13:15
what's the term, epigraph? Epigram?
00:13:16
Whatever, the little quote at the beginning of my book. It came
00:13:19
from a fortune cookie I got once in a Chinese restaurant in
00:13:23
Princeton, NJ. It says the philosophy of one century
00:13:26
is the common sense of the next. And so even even in writing the
00:13:30
book, I kind of said, well, this may take a while, right?
00:13:34
So, you know, anyway, there was no one moment.
00:13:37
It was partly push and partly pull.
00:13:39
I mean, partly feeling like, OK, this is not having, you know,
00:13:43
it's not creating a philosophical revolution, you
00:13:47
know, either academically or out in the world.
00:13:49
If anything, I think it's had more of an
00:13:51
influence outside academia than in philosophical ethics.
00:13:55
You know, I always kind of had an entrepreneurial bent, right?
00:13:59
And I got on this track very fortuitously, like, I
00:14:02
was in grad school, doing my philosophy dissertation, and had
00:14:05
these thoughts about trolley dilemmas.
00:14:06
And then Jon Cohen showed up to start up the Center for the Study of
00:14:09
Brain, Mind and Behavior. And I thought, oh, you could,
00:14:11
you know, do a brain imaging study with these dilemmas.
00:14:14
And then two years later, it was in Science.
00:14:16
And, you know, and I'm doing a postdoc in psychology and
00:14:18
cognitive neuroscience. And I just got really lucky to
00:14:21
catch that wave. And then I rode it for a while,
00:14:23
right? And then I was like, OK, well,
00:14:25
what's next, right? And, and this idea of going in a
00:14:27
more practical direction just felt very appealing to me as I
00:14:31
was in my 40s. Now I'm in my early 50s.
00:14:34
Yeah. It's interesting that you use
00:14:35
the word entrepreneurial to describe it because, yeah, I was
00:14:39
thinking about how in most fields there's like an
00:14:41
established pathway for applying academic knowledge.
00:14:43
You know, psychology graduates become therapists, economists
00:14:46
work in policy. But you're doing this like,
00:14:48
entirely different thing where you're creating a new
00:14:50
application of research in the real world, like turning moral
00:14:54
psychology research into a trivia game, for example.
00:14:56
Yeah. Is this just your natural
00:14:57
inclination towards the more entrepreneurial path or was this
00:15:00
kind of a conscious decision that you think that there are
00:15:02
some comparative advantages to doing it this way as opposed to
00:15:05
the traditional path? I mean, you know, once I decided that I
00:15:09
wanted to try to apply our understanding of human nature
00:15:13
directly to solving real world problems.
00:15:15
It's like if I wasn't going to do it myself, what was I going
00:15:18
to do? Put out an ad and say someone
00:15:20
should do this and hope they do it and hope that they do it
00:15:23
well. You know, I mean, if you're
00:15:25
like, you know, in molecular biology and you design something
00:15:29
that could become a cancer drug or something like that, you can
00:15:32
say to people, look, this could be a cancer drug.
00:15:35
And not only, you know, would other people have this sort of,
00:15:38
you know, understand, like, OK, well, what does it mean to take
00:15:40
a drug out of the lab and into the world?
00:15:42
And they can make lots of money doing it, so they'd have a big
00:15:44
incentive to do it. But I don't know who the hell's
00:15:46
going to start Giving Multiplier or Tango if I don't.
00:15:50
So I've got to go do it, you know? Do you think, are there other,
00:15:54
have you sort of started a trend within academia with
00:15:57
particularly philosophers trying to get out there in the real
00:15:59
world and do more things? I mean, I know the
00:16:01
effective altruism movement would be a really good example of people
00:16:04
trying to take philosophy out into the world.
00:16:06
Are there others that you're aware of that are doing this?
00:16:08
Yeah, I was going to say, well, there's those FarmKind guys.
00:16:11
No, I mean, I certainly don't think I can
00:16:15
take any credit for starting a trend.
00:16:17
I think that the effective altruism movement has had a big
00:16:21
influence on my thinking. And we won't go into recent
00:16:25
unfortunate failures, which I think are irrelevant to the core
00:16:29
ideas, but relevant to people's perceptions.
00:16:31
I think the way people like Peter Singer and others in the
00:16:36
EA movement have gone out there and done stuff, Will and Toby
00:16:40
and others like, I think that's been very inspiring in this
00:16:45
particular way. I certainly don't think I can
00:16:48
say, oh, that I started a trend. And also it's just early days.
00:16:51
I mean, Giving Multiplier is really just a couple of years
00:16:54
old, and Tango, the paper was just published
00:16:56
last week and we're really just scaling up.
00:16:59
So it is early days, but others will follow.
00:17:02
There's a long history of social entrepreneurship, and I'm pretty
00:17:06
humbled by what I've seen people accomplish.
00:17:09
So let's talk about Giving Multiplier then.
00:17:11
So this is a project which sort of tries to solve a problem in
00:17:14
charitable donations, which is, broadly speaking, we want to do
00:17:18
good and we also want to feel good about the good that we're
00:17:21
doing. And often actually there's a
00:17:23
tension between these two things.
00:17:24
Sometimes the charities that help the most are not the most
00:17:28
kind of appealing to donate to. They don't make us feel all warm
00:17:31
and fuzzy inside. And Giving Multiplier, broadly
00:17:34
speaking, tries to solve this problem.
00:17:36
So how did you think about this particular issue and how does
00:17:41
Giving Multiplier try and fix this?
00:17:43
All right, great. Well, so a bit of background.
00:17:46
I was trying to think, when I started getting into, like, how do
00:17:48
I do real world stuff? The most obvious thing, and this is
00:17:51
kind of me being influenced by effective altruism, is that the
00:17:54
most straightforward way that ordinary people can do
00:17:57
extraordinary good is by donating their money to highly
00:18:02
effective charities. And I was long ago influenced by
00:18:07
Peter Singer's drowning child argument.
00:18:09
For those listening or not familiar, short version is if
00:18:13
there was a child who's drowning right in front of you and you
00:18:16
could say, well, you could wade in and save the child, but
00:18:18
you're going to muddy up your clothes and they're going to
00:18:20
be ruined. Is it OK to let the child drown?
00:18:22
No, even if it ruins your clothes, no.
00:18:25
But if for the same cost, you could save a child's life on the
00:18:28
other side of the world with life saving food or medicine, to
00:18:31
a lot of people that seems optional, right, in a way that
00:18:33
pulling the child out of the water doesn't, right.
00:18:35
And I was convinced by that argument to give and to give
00:18:39
effectively and I wanted to try to do so.
00:18:41
I mean, one of those just striking facts about modern
00:18:45
morality is how much extraordinary good people can
00:18:48
do, right? So for less than a dollar,
00:18:50
you can provide a deworming treatment that will rid a child
00:18:54
of intestinal worms and enable them to go to school.
00:18:56
For $5,000, maybe less, you can distribute enough malaria nets
00:19:01
that statistically you're likely to save someone's life, probably
00:19:04
a young child's life. You're like, well, how do I get
00:19:06
other people to do that? And I started out thinking,
00:19:09
well, I'll try to convince other people the way I was convinced,
00:19:12
like with Peter Singer style arguments.
00:19:14
And so I did some of this on my own.
00:19:16
We actually had a study with Peter and others and we found
00:19:19
that it worked a little bit, but not much.
00:19:21
And then along came Lucius Caviola, who's a fantastic
00:19:26
behavioral scientist, psychologist, philosopher, who
00:19:29
came to my lab as a postdoc. And we were thinking about this
00:19:32
and we had a simple idea, which is instead of saying, hey, you
00:19:35
should really do this in addition to or instead of what
00:19:38
you're already doing. Most people give something to some
00:19:41
charity. So it's really a question of
00:19:42
where you give. But instead of saying don't do
00:19:44
that ineffective thing, do this. It's much better.
00:19:47
What if we just said to people, do both, right?
00:19:50
So we ran our first experiment which had this simple setup.
00:19:53
In the control condition, you have an all-or-nothing choice.
00:19:57
So we say to people, OK, we're giving you $10.
00:20:00
You can give it all to your favorite charity.
00:20:03
Tell us what your favorite charity is, or you can give it
00:20:06
to this super duper effective charity that these experts
00:20:09
recommend and here's a description of it.
00:20:11
And then in the treatment condition, we give people those
00:20:14
two options. But we also say, or you could do
00:20:17
a 50/50 split between your favorite charity and the super
00:20:20
effective ones. And what we found is that in the
00:20:22
control condition, you know, something like 80% or more of
00:20:25
people chose to give to their personal favorite charity, which
00:20:28
might be something like the local animal shelter or a cancer
00:20:32
charity, especially if someone they knew, someone close to them,
00:20:35
died of cancer or something like that.
00:20:36
And most of the money goes to the
00:20:39
valuable but less effective charity.
00:20:41
But in the treatment condition, where people have this
00:20:43
50/50 option, we found that just over half of people did the
00:20:48
50/50 split. And this was enough to make it
00:20:51
so that something like 70% more money overall goes to the highly
00:20:55
effective charity. So just by adding that 50/50
00:20:57
option, you move more to the highly effective charity.
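[Editor's note: a rough back-of-envelope sketch of how a number like "70% more" can fall out of those choice shares. The percentages below are illustrative assumptions, not the paper's exact data.]

```python
# Illustrative sketch: how adding a 50/50 option can raise the total going
# to the effective charity by roughly 70%. Shares are assumed, not the
# paper's exact data. Each donor allocates $10.

def effective_total(groups, donors=100):
    """groups: (fraction_of_donors, dollars_to_effective_charity) pairs."""
    return donors * sum(frac * dollars for frac, dollars in groups)

# Control: all-or-nothing; assume ~80% give everything to their favorite.
control = effective_total([(0.80, 0), (0.20, 10)])               # $200

# Treatment: ~50% take the 50/50 split; the rest divide roughly as before.
treatment = effective_total([(0.40, 0), (0.10, 10), (0.50, 5)])  # $350

print(treatment / control - 1)  # ~0.75, i.e. on the order of 70% more
```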
00:21:00
And then we started thinking, okay, well, we could write a
00:21:03
paper saying, hey, everybody do 50/50 splits, and then no one
00:21:07
would pay attention. So we thought, OK, we need some way.
00:21:09
What if we try to incentivize this?
00:21:11
So we did the kind of obvious thing and said, well, what if we
00:21:15
add money on top? Does that make people want to do
00:21:17
it even more?
00:21:19
And we found, unsurprisingly, that when we add money to both
00:21:22
of your donations, people really like that and they donate even
00:21:24
more to the highly effective charity, or are more likely to do a
00:21:27
split. What do you mean by add money on
00:21:29
top? So if we say, OK, so let's say
00:21:32
you're splitting between the local animal shelter and the
00:21:35
Against Malaria Foundation, and you've got $10, and so you could
00:21:39
give five and five. What we could say is if you
00:21:42
choose to do five and five, we'll add another two to each so
00:21:45
that it becomes seven and seven or something like that.
00:21:48
Yeah, right. So it's adding more.
00:21:49
And we found that that really encourages people to do that.
00:21:52
That was great. Then we had this problem.
00:21:54
OK, but where's that money going to come from?
00:21:56
And then we thought, well, what if we asked people to support a
00:22:00
matching fund for future donors? And the way we did it was we
00:22:04
said, OK, you just agreed to give half your donation to this
00:22:07
charity you never heard of. Let's say, you know, Evidence
00:22:10
Action's Deworm the World.
00:22:12
Instead of giving directly to that charity, what if you put
00:22:14
the money in a matching fund that would then pay for other
00:22:16
people to do one of these split things that you just did?
00:22:19
And what we found was that about 30% of the people who did the
00:22:23
split were willing to put the money
00:22:25
into the matching fund. And that was enough to cover the
00:22:29
matching for the people who didn't choose that option.
00:22:32
So we kind of looked at the math and were like, if this is right,
00:22:35
it looks like this could work.
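[Editor's note: a minimal sketch of that math. The ~30% figure is from the study as described here; the match rates are made-up assumptions.]

```python
# Rough self-sustainability check for the matching fund, with assumed numbers.
donors = 100          # splitters, each giving $10 as a 5/5 split
half = 5.0            # dollars each sends toward the effective charity
fund_fraction = 0.30  # ~30% route that half into the matching fund instead

inflow = donors * fund_fraction * half  # $150 flows into the fund
for rate in (0.1, 0.2, 0.3, 0.4, 0.5):
    # match paid out on the effective-charity half of the remaining donors
    outflow = donors * (1 - fund_fraction) * half * rate
    print(f"match rate {rate:.0%}: covered = {inflow >= outflow}")
# Under these assumptions the fund covers match rates up to about 43%.
```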
00:22:37
That is, you know, we encourage people to do these splits.
00:22:41
Some proportion of people are willing to support a matching
00:22:44
fund and some proportion of people are encouraged to do it
00:22:47
because of the matching funds. And you'd get this kind of
00:22:50
virtuous circle. So, you know, and then we did a
00:22:53
little more research on the psychology, like why do people
00:22:56
like these splits? And, and this is where it really
00:22:58
connects with the earlier work on trolleys and dual process
00:23:01
psychology. That is the unfortunate thing
00:23:03
about the footbridge dilemma, you know, by design is you have
00:23:06
this emotional response that says, don't do that.
00:23:08
And then this utilitarian calculation that says, yes, do
00:23:11
that and you kind of have to choose either more people are
00:23:13
going to die or you're going to feel bad about what you
00:23:16
endorsed, right? Whereas here the dilemma has a
00:23:19
middle of the road solution. You can split the difference,
00:23:21
right? And, and give some to the
00:23:23
charity that really tugs at your heartstrings, but some to the
00:23:26
charity that tugs at your brain strings, right?
00:23:29
And so it had this nice way of sort of threading the needle
00:23:33
where you have a more emotional pull on the one hand or a more
00:23:35
rational pull on the other. And we did our experiments that
00:23:38
showed that indeed, that seems to be what's going on.
00:23:40
But the cool thing is when you
00:23:43
give half of your donation to that charity that you love
00:23:46
personally, the local animal shelter or cancer charity, it
00:23:49
doesn't really matter whether you give $50 or $100.
00:23:52
What matters is that you gave something to something you care
00:23:55
about. So if you only give 50, now
00:23:57
you've got this extra 50 bucks and you can have a different
00:24:00
kind of satisfaction, namely the satisfaction of doing something
00:24:03
really smart and impactful. So you get this double
00:24:06
satisfaction by doing the split where you get most of the value
00:24:10
of going with your heart, and then you also get to go with
00:24:13
your head at the same time. And that seems to be the
00:24:16
psychological mechanism that makes these things so appealing.
00:24:19
But one thing that is really interesting and surprising to me
00:24:22
about the results is that it seems like it's understandable
00:24:25
that people would see the more empathetic choice as giving to
00:24:29
the 'cause that's close to them and the smarter choice is giving
00:24:31
to the effective one. But in the study, people thought
00:24:34
that supporting both the favorite charity and the Super
00:24:38
effective charity was both the more empathetic and smarter
00:24:41
thing to do than either giving all of your money to the
00:24:44
favorite charity or all the money to the
00:24:45
effective one. I don't know, I mean,
00:24:48
when we had people evaluate other people, I think that they
00:24:52
kind of got extra points because it implicitly made it
00:24:55
sound like this was their idea to split it, you know.
00:24:58
And so I think they maybe got extra points for being just
00:25:01
like, I know I'll do both right? And they go, that's so wise.
00:25:04
I don't know. I'm not sure why.
00:25:06
But in other experiments trying to get at the psychology,
00:25:10
it didn't look like people always got full credit on both
00:25:13
dimensions. Instead they got 80% credit on one
00:25:16
or both dimensions. So I wouldn't want to lean too
00:25:18
hard on that. But yeah, like, sometimes it
00:25:20
seems like you get, like, full credit on both.
00:25:23
If people are interested, this is a paper called Boosting
00:25:26
the Impact of Charitable Giving with Bundling and Micro
00:25:29
Matching. Lucius Caviola is the
00:25:31
first author, and you can find it on Lucius's website or my
00:25:34
website. This is in Science Advances in
00:25:36
2023, so that's where the research can be found.
00:25:39
But then people can also engage with the real thing in the
00:25:42
real world, right? Because then you... Yes.
00:25:44
OK, right. So once we saw this working,
00:25:46
then Lucius and some of his tech savvy friends built the website
00:25:50
to see if this would actually work in the real world.
00:25:53
So it's called Giving
00:25:55
I imagine some of your audiences in the US, some is in the UK.
00:25:58
We would love to expand to the UK, but right now you can only
00:26:02
get tax credit in the US. But the way it works is you go
00:26:05
to givingmultiplier.org and it says, what's your favorite
00:26:10
charity? And you find that there and it
00:26:12
has a search field for all the charities that are registered in
00:26:15
the US. And then it says, OK, we've got
00:26:18
these 10 super duper effective charities in different areas.
00:26:22
So global health and poverty. These are charities that are
00:26:24
recommended by GiveWell. We have the Malaria
00:26:28
Consortium, the Against Malaria Foundation.
00:26:30
I mentioned Evidence Action's Deworm the World. We have Helen
00:26:34
Keller International. Oh gosh.
00:26:36
What's the name of the vaccine one?
00:26:39
New Incentives. New Incentives.
00:26:41
Thank you. Yeah, GiveDirectly.
00:26:43
We've got the Good Food Institute and the Humane League, close to your
00:26:46
heart. So the Humane League dealing with,
00:26:48
you know, alleviating suffering in factory farming as it exists
00:26:51
and the Good Food Institute looking to create meat alternatives to
00:26:54
reduce the demand for factory farming.
00:26:57
And then we have the Clean Air Task Force, which is a highly
00:26:59
effective climate charity, you know, really looks to make
00:27:02
change at scale by investing in policy change and new
00:27:07
technologies. And then Johns Hopkins Center
00:27:09
for Health Security, which is working to prevent the next
00:27:12
pandemic. Those are the ones that are
00:27:14
currently there. All, you know, super duper
00:27:17
effective, either with RCTs or in rational prospect.
00:27:21
And we asked people to pick one of those.
00:27:22
Then you decide, OK, how much do you want to give total?
00:27:25
And then we've got our nifty little slider where people can
00:27:28
decide how they want to allocate their money between two
00:27:31
charities. And the more they give to the
00:27:33
effective charity that they picked from our list, the more
00:27:36
money we add on top. And if you have a special code,
00:27:42
you can get a larger matching donation.
00:27:45
And for your listening audience, the code is changedmymind.
00:27:49
If you put that in there, you'll
00:27:51
get a higher matching rate. So that's how it works.
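[Editor's note: a hypothetical sketch of the slider-plus-matching mechanic described above. The function and rates are invented for illustration; they are not Giving Multiplier's actual parameters.]

```python
# Hypothetical slider logic: the top-up grows with the share allocated to
# the effective charity. Rates here are invented, not Giving Multiplier's.
def match_bonus(total, effective_share, base_rate=0.10, max_rate=0.50):
    """Return the match added to the effective-charity portion."""
    rate = base_rate + (max_rate - base_rate) * effective_share
    return total * effective_share * rate

# $100 donation with half to the effective charity: 100 * 0.5 * 0.30 = $15
print(match_bonus(100, 0.5))
```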
00:27:55
And the sort of short version of the results:
00:27:58
So we launched this in 2020. We've raised over $4 million
00:28:02
total and close to two and a half million has gone to
00:28:06
charities on our list of super effective charities.
00:28:09
And it has worked, you know, primarily with individual
00:28:15
donors providing the matching funds for other donors.
00:28:18
So we've seen exactly the kind of virtuous cycle we were
00:28:22
hoping for. So you know what, going into
00:28:23
this, like I would have been happy if this like, you know,
00:28:26
was one step above a bake sale. Like, you know, maybe we can
00:28:29
raise like $20. And I've just been delighted
00:28:32
with how well it's gone. So from the lab to the field,
00:28:35
that's the Giving Multiplier story.
00:28:38
That's fantastic. It's really exciting to see how
00:28:40
well it's done. One of the reasons
00:28:42
that scientists do work in labs is because the real world is so
00:28:47
incredibly messy. It's incredibly difficult to
00:28:49
know why things have happened the way they have.
00:28:51
There's so many confounding factors and I wonder what
00:28:54
the process was like of taking this idea that worked so well in the
00:28:57
lab into the real world. Was there anything that came up
00:29:01
that sort of surprised you that didn't work how you expected it
00:29:03
based on the lab experiments?
00:29:06
Or has it been actually in this case, like pretty plain sailing?
00:29:08
It kind of just worked. I was of two minds.
00:29:11
There was part of my brain going, it's got to work.
00:29:13
We did it. We did the lab study.
00:29:15
Part of me was like, it's not going to work.
00:29:17
And then it did. There are
00:29:20
things I learned about kind of getting attention, like we learned
00:29:24
kind of practical things.
00:29:25
We did a couple of ads on what we thought would be
00:29:28
sympathetic podcasts where it was just an ad spot, right,
00:29:32
where someone's reading a script and that did like nothing.
00:29:36
But when I've gone on podcasts like this very podcast, thank
00:29:40
you so much with the right kind of audience, the kind of people
00:29:43
who are humane people who really want to make the world better,
00:29:47
but who are also nerdy and, like, want to be smart about it
00:29:50
and think about impact and strategy.
00:29:52
Like podcasts that have that audience.
00:29:54
We've just had great success coming on and explaining it and having
00:30:00
brilliant hosts ask good questions and things like that.
00:30:03
Yeah, it's like, marketing is interesting.
00:30:06
You know, it's something I never thought about. It's not just like, oh,
00:30:08
this is a good thing. You put it out there and then
00:30:10
poof, it just takes off, right? It is interesting what gets
00:30:13
it to connect and what doesn't. But it went surprisingly well
00:30:16
when we launched it. Awesome.
00:30:17
Oh yeah. Marketing is super hard, but
00:30:20
it's really exciting to see the amazing progress that Giving
00:30:23
Multiplier is doing. So you have a more recent
00:30:27
project that we're really excited to talk to you about.
00:30:30
It's extremely on brand for the show because really it's about
00:30:33
changing people's minds. Before we dive into how your new
00:30:36
project Tango works, let's explore the problem you're
00:30:39
trying to fix, which is political polarisation.
00:30:42
A stat that I think really brings this to life is that in
00:30:45
1960, only about 4 to 5% of Americans were uncomfortable
00:30:51
with their children marrying someone who supports the other
00:30:54
major party. So Democrats marrying
00:30:56
Republicans and vice versa. But by 2010, a third of Democrats
00:31:01
and half of Republicans would be upset by such a marriage, and I
00:31:05
imagine that that number is probably quite a bit higher now.
00:31:08
So political polarization operates at both cognitive and
00:31:11
emotional levels. People not only disagree with
00:31:14
the other side about issues, you know, whether it's abortion or
00:31:18
gun control or something like this, but they also seem to
00:31:21
actively dislike and distrust the other side.
00:31:25
Well, before I get into Tango, let me talk about why.
00:31:28
You know, I know your audience, I imagine, is some people whose
00:31:31
entry point is, you know, animal welfare, but also people
00:31:35
who are kind of into, you could call it, high impact philanthropy or
00:31:39
effective altruism or whatever it is.
00:31:41
And I would like to see people taking politics more seriously.
00:31:45
And, you know, the argument goes like this, that if you look
00:31:49
around the world at the places where people's lives are good,
00:31:53
they tend to be healthy, stable democracies.
00:31:56
And you look around at the places where life is pretty
00:31:59
miserable and it's places that have weak, ineffective or
00:32:03
corrupt governance. Like, to a first
00:32:05
approximation, life is good in places that have good, well
00:32:10
enforced rules and bad in places where they don't.
00:32:14
And this is, broadly speaking, what's called the institutions
00:32:17
perspective. There has really been a
00:32:19
revolution in economics and political science,
00:32:23
going back to Nobel laureate Douglass North, who started thinking
00:32:27
about these things, you know, decades ago.
00:32:29
And then most recently Daron Acemoglu and James Robinson
00:32:33
winning the Nobel for their work on on the role of institutions
00:32:37
in human well-being. But, you know, where do
00:32:39
those institutions come from? I mean, those are really
00:32:41
largely, you know, governments, right?
00:32:43
And the things that governments allow to exist or foster, what
00:32:47
good institutions do is they set the rules of the game so that
00:32:49
it's worth trying, right? That if you start a business,
00:32:53
you're not going to get a shakedown from the local cops.
00:32:56
You're not going to have everybody stealing your stuff,
00:32:58
right? You're, you're not going to be
00:32:59
accused of crimes you didn't commit so that other people can
00:33:02
confiscate your belongings.
00:33:04
Like what makes the pie larger is that people feel like it's
00:33:08
worth making an effort and worth being productive,
00:33:10
right? And that's a matter of
00:33:12
government and strong institutions.
00:33:14
The number one threat to these things is intergroup
00:33:17
conflict. It's groups feeling like they
00:33:20
can't live together and trust each other.
00:33:23
And this is pulling the United States apart, this sense that we
00:33:27
are no longer part of the same us.
00:33:29
And this can divide partly across racial lines.
00:33:32
It's largely across economic and class lines.
00:33:36
There's a bit of an age division as well.
00:33:38
And this is not just in the US, you know, it's all around the
00:33:41
world and even in the world's most stable democracies like the
00:33:44
Scandinavian countries, the number one issue, if
00:33:47
there's anything that threatens to drag those countries down,
00:33:50
it's divisive views about immigration, about people coming
00:33:54
in who other people say these people are not us and we want
00:33:56
them out, right? So that us versus them problem
00:33:59
is the number one threat to good democracies with good
00:34:03
institutions that are what makes life go well, right.
00:34:07
So from my point of view, political conflict, intergroup
00:34:10
conflict, is the mother of all problems because, you know,
00:34:14
whether it's having plausible solutions to climate change or
00:34:19
laws that prevent animal suffering or the next pandemic,
00:34:23
or the rise of dangerous artificial intelligence or human
00:34:27
trafficking or just plain old racism or whatever it is, if
00:34:32
we're all willing to work together as a group, if we all
00:34:34
see each other as part of the same us, we can solve all of
00:34:38
those problems. But as long as there's an us
00:34:41
versus them fighting, then we're very limited in what we can do.
00:34:45
Yeah. So it seems really important to
00:34:47
understand what is actually underlying this polarization.
00:34:51
On the surface, it can seem like this is legitimate disagreement
00:34:54
about actual policy, like you mentioned immigration, for
00:34:57
example, as an issue that different people have different
00:34:59
views on. But when you dig deeper,
00:35:00
sometimes it seems like actually this is just people supporting
00:35:04
their favorite sports team. Because you have experiments
00:35:06
where you can show that you can get Democrats to dislike a
00:35:09
policy by telling them it's supported by Republicans and
00:35:12
vice versa. And even over time, you can see
00:35:15
the bundle of policies that either side supports.
00:35:18
Sometimes it's like completely transformed over the course of
00:35:21
a couple of decades. So it seems like it might not
00:35:23
actually have to do with substantive beliefs at all.
00:35:26
But that also seems too simple. How do you think about what is
00:35:29
underlying this polarization? I mean, I think there are some
00:35:33
real disagreements, right? I mean that people really do
00:35:36
disagree about abortion. People really do disagree about
00:35:40
gay rights and trans rights and things like that.
00:35:42
Now, you might think, actually, if they really were operating
00:35:45
with all the same facts, including facts about God's will
00:35:48
or lack thereof or whatever it is, then maybe they would
00:35:50
converge. But, you know, within the realm
00:35:52
of, you know, ordinary factual discourse, there are some things
00:35:55
where people really just disagree.
00:35:57
But there's a lot where people don't, and there's a lot of
00:36:00
misperception about what the other side thinks.
00:36:02
I'll give you an example from a paper that we just submitted.
00:36:05
This is work led by Lucas Woodley.
00:36:07
We were thinking about capitalism and socialism and
00:36:09
how people use those terms. And I had this hunch.
00:36:13
I've seen similar ideas in, like, columns by Paul Krugman in the
00:36:16
New York Times that actually there's not that much
00:36:19
disagreement about this. And the people just use these
00:36:21
terms in ways that are talking past each other.
00:36:23
And in fact, they're deliberately divided by this
00:36:25
language. So what we did was, or Lucas
00:36:28
did, is we came up with a description of a kind of
00:36:31
middling social democracy where it's a free market economy, but
00:36:34
there are guardrails in terms of
00:36:38
regulations and a social safety net and things like that.
00:36:41
And we asked people, Republicans and Democrats or people who said
00:36:45
that they like capitalism or like socialism, what do you
00:36:47
think of this system here? And we put enough in there that
00:36:49
it wasn't just empty, like there was real policy substance.
00:36:52
And we found that almost all Republicans and almost all
00:36:55
Democrats said, yeah, that sounds pretty good.
00:36:57
And then we asked people, how would you describe this system?
00:37:01
To what extent is it capitalist and to what extent is it
00:37:03
socialist? And the liberals or
00:37:06
self-described socialist fans said, oh, this is either partly
00:37:10
or mostly socialist. And the Republican said this is
00:37:13
almost entirely capitalist. So you had people basically
00:37:16
saying they like the same thing but giving it very different
00:37:20
labels. Right?
00:37:21
Yeah. And then we found that this is
00:37:23
sort of the optimistic part, is that if we told people about the
00:37:26
results of the experiment I just described and asked people
00:37:29
things like, would you vote for a candidate from the other party
00:37:33
who shared your values on, sort of, economic issues.
00:37:36
People were more likely to say that they would vote for someone
00:37:40
from the other party after they got this bit of corrective
00:37:43
information about how people talk past each other with
00:37:45
capitalism and socialism, right? And I think that it's not just
00:37:48
that people use these terms differently.
00:37:50
If you've got people on the right who are describing all
00:37:54
Democrats that way, we found this quote from Trump saying any
00:37:57
vote for any Democrat is a vote for radical socialism and the
00:38:01
destruction of the American dream, everything like that,
00:38:04
right? And then have people like Bernie
00:38:07
Sanders, who calls himself a socialist, he's not actually
00:38:11
saying, like, he's not talking about, like, the 1950s.
00:38:14
He's not talking about having a planned centralized socialist
00:38:18
economy. It's a free market economy with
00:38:22
just a social safety net. And, you know, it's Sweden.
00:38:25
Yeah, right. Something like that, which is
00:38:26
even Sweden is not as Sweden as people think it is, right?
00:38:29
And, you know, it may be that, you know, even on the
00:38:32
left, like for Bernie, calling himself a socialist, it misleads.
00:38:37
It's a way of saying I am
00:38:40
definitely not a Republican, right?
00:38:43
That it's a kind of political branding that sort of shows a
00:38:47
kind of authenticity and distance from the other side.
00:38:50
But then the downside of that is the other side can look at you
00:38:53
and say, see, those people are socialists.
00:38:57
And then what they think that means is like, you know, you
00:39:00
can't start a business because we're living in a centrally
00:39:02
planned, you know, economy. So, you know, on a lot of issues
00:39:06
when it comes to healthcare, when it comes to immigration
00:39:10
policy, there's not that much disagreement.
00:39:12
But there's this sense that there are these people on the
00:39:15
other side who hold these radical views and who want to
00:39:18
destroy the country and who will destroy the country if only they
00:39:22
are given enough power to do so. Right.
00:39:24
Yeah. So yeah, it is very much about
00:39:26
us versus them, and much less so about substantive policy
00:39:30
disagreements. Yeah.
00:39:31
So I guess one final question before we go on to Tango itself.
00:39:34
I think you've talked a lot about institutions and how
00:39:38
important they are. But sort of the flip side of
00:39:40
that is often when people describe how we've got to this
00:39:43
position of polarization, they describe it in terms of
00:39:46
institutions. So particularly the media,
00:39:48
social media, and the design of the political system that
00:39:52
actually sometimes incentivizes people to highlight the
00:39:55
differences and try to paper over where they're similar.
00:39:58
And I guess I wonder, maybe it's a push back to interventions
00:40:01
like Tango where you're engaging with individual people and
00:40:04
trying to change their minds. If polarization
00:40:07
is caused at this sort of systemic level, can we actually
00:40:10
fix it by changing how individual people think or what they
00:40:14
believe? Or is it simply that if we're
00:40:17
not intervening at the system level, these forces are simply
00:40:19
too sort of powerful and people will be pulled apart again, even
00:40:22
if we can bring them together? Well, so when I say, you know,
00:40:25
institutions are good, I don't mean that anything that qualifies as
00:40:28
an institution, which is a somewhat vague term, is a
00:40:30
good thing or is functioning well. So I don't mean all forces at
00:40:33
the systemic level are good. That's certainly not true.
00:40:37
What does it mean to intervene at the systems level?
00:40:40
Well, if it's the government doing it and it's a democracy,
00:40:43
that means that it's people who we voted for, right?
00:40:47
And votes are individual choices.
00:40:48
So I don't think that there's any kind of way around that.
00:40:51
Now, you know, especially powerful wealthy businesses or
00:40:54
wealthy individuals do big things on their own, right.
00:40:58
And some of that is really good, right?
00:41:00
You know, so kudos to Bill Gates for taking his private money,
00:41:03
you know, not just doing good, but setting up
00:41:07
institutional level programs that really sort of solve
00:41:10
problems effectively. Well, I think, you know,
00:41:13
we're not going to have a world with good institutions
00:41:18
if the dominant attitudes are fear and
00:41:22
resentment towards half the other people in the country.
00:41:25
I don't think there's any way around the hearts and minds
00:41:28
problem. And the path to success is
00:41:31
going to inevitably involve both the collective thoughts and
00:41:36
feelings and behaviors and attitudes of individuals and the
00:41:40
systems in which they participate that shape the
00:41:43
flow of their lives. Perfect.
00:41:44
OK. And so how you decided to do
00:41:47
this with Tango is to invent a quiz. Yes.
00:41:51
Save the world with a quiz game, right?
00:41:53
Basically. So how does trivia save the world
00:41:56
then? Yeah, OK, so let's start from
00:41:58
first principles, right? So I think about this, you
00:42:02
know, in addition to as a philosopher, partly as a
00:42:04
biologist, a life scientist, you know, and partly as a
00:42:08
social scientist. And, you know, from a biological point of
00:42:11
view, if what you want is to bring warring tribes together,
00:42:16
what you need to do is create mutually beneficial
00:42:20
cooperation across the lines of division.
00:42:22
Really, the story of all life on earth is a story of
00:42:26
mutually beneficial cooperation. Your molecules form cells and
00:42:30
cells form colonies and organisms and more complex
00:42:34
organisms with organs and tribes and villages and nations and
00:42:38
occasionally United Nations. And at every single level that
00:42:41
level is created and sustained because the parts can accomplish
00:42:45
more working together than they can accomplish separately.
00:42:48
At every level there's conflict as well.
00:42:49
So it's not just the world is all cooperative puppies and
00:42:52
rainbows. But where there are cooperative systems,
00:42:56
they're held together. The glue is mutually beneficial
00:42:59
cooperation. And then you look on the social
00:43:01
science side and, you know, there's a lot of evidence in
00:43:05
favor of the same idea. Well, it's known as the contact
00:43:07
hypothesis. It's a bit misleading because
00:43:09
it's not just that things go well when you bring groups into
00:43:12
contact. It has to be under the right
00:43:14
kinds of circumstances, which you could loosely describe as
00:43:17
circumstances that are conducive to real cooperation.
00:43:20
And that idea made a lot of sense.
00:43:23
There's other research supporting this.
00:43:25
And it's funny kind of as an academic, when I started looking
00:43:27
at this, like, you always want to have like the big new theory.
00:43:29
I'm going to have the big new theory of intergroup conflict.
00:43:31
And I looked at the old stuff in the old social psych textbook,
00:43:34
and I actually thought the old theories were right.
00:43:36
And people can debate about where it works and where it
00:43:39
doesn't and why. But it seemed to me that those
00:43:41
core ideas that you need to bring people together and have
00:43:43
them work together in some mutually productive way were
00:43:46
certainly never disproven and had a lot of support behind
00:43:49
them. It's not just research.
00:43:50
We see cases of this in history, you know, the racial integration
00:43:54
of the US military and then soon after in sports teams.
00:43:58
And really every modern city is a testament to the idea that
00:44:01
people of different races and religions and cultural
00:44:04
backgrounds live together and work together.
00:44:06
And I'm not saying cities are perfect, or that there's no racism
00:44:09
or strife. But, you know, a lot of cities
00:44:12
work pretty well and people live happily together with people who
00:44:15
are very different from them and often don't even think of
00:44:17
themselves as being different anymore.
00:44:18
The classic case in the US is that, you know, the Irish are
00:44:21
no longer considered, you know, questionable minorities.
00:44:24
And so I looked at this and I thought, I actually don't think,
00:44:28
like, we need a new theory. Like it's not a problem of basic
00:44:31
science. We need new engineering.
00:44:33
That is, how do we get people from opposite sides of the
00:44:38
divide on the same team in a way where they can cooperate for
00:44:43
mutual benefit? So that's one.
00:44:45
And then two, you want to do this in a way that's scalable,
00:44:49
right? So you can bring people together
00:44:51
and have them put together IKEA furniture and bond over that.
00:44:56
But that's... They'll just end up fighting.
00:44:57
Horrible thing to do if you want to build a good
00:44:59
relationship with someone. But if they manage to succeed, boy,
00:45:03
will they be brothers and sisters forever.
00:45:06
That's true. And so you can do that, but
00:45:08
that's not very easy to scale, right?
00:45:10
So I wanted some way to get people cooperating in a way
00:45:13
that's scalable, which really means digital.
00:45:15
And you want it in a way that's enjoyable, more fun than putting
00:45:18
IKEA furniture together. And so that's where the quiz
00:45:21
game came in, right? So the idea is we have people
00:45:25
paired up online and the game is set up so that you get paired up
00:45:30
with your partner, ideally someone who's from the other
00:45:33
side, whether that's a Republican and a Democrat or
00:45:36
Arabs and Jews in Israel or whatever it is.
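[Editor's sketch: for the technically curious, the cross-party pairing Josh describes could look roughly like the Python below. The Player fields and the greedy matcher are our illustrative assumptions, not Tango's actual implementation.]

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class Player:
    name: str
    party: str  # any two-group label: "D"/"R", or other divides

def pair_across_divide(players: list[Player]) -> list[tuple[Player, Player]]:
    """Greedily pair each player with someone from the other group."""
    queues: dict[str, deque] = {}
    for p in players:
        queues.setdefault(p.party, deque()).append(p)
    if len(queues) != 2:
        raise ValueError("expected exactly two groups to bridge")
    a, b = queues.values()
    teams = []
    while a and b:
        teams.append((a.popleft(), b.popleft()))  # one teammate from each side
    return teams

roster = [Player("Ana", "D"), Player("Bo", "R"),
          Player("Cy", "D"), Player("Dee", "R")]
for left, right in pair_across_divide(roster):
    print(f"Team: {left.name} + {right.name}")
```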
00:45:38
And you are connected by chat and you get to know each other
00:45:42
and you pick your team name and then you start getting these
00:45:44
questions and we try to design the questions to maximize sort
00:45:48
of interdependence. So like in the US, for example,
00:45:51
Republicans are more likely to know things about the show Duck
00:45:54
Dynasty, whereas liberals are more likely to know about the
00:45:57
show Stranger Things. And that sounds like
00:46:00
stereotype mongering, but we actually confirm this with data.
00:46:03
It's true. So we have questions like that
00:46:05
where, you know, one side is more likely to know the answer
00:46:07
to some questions, but then the other side knows the answer to
00:46:09
other questions. And then we also have questions
00:46:11
that are more political in nature that kind of challenge
00:46:14
each side's assumptions and, as a result, you know,
00:46:18
can result in a kind of mutual appreciation and humbling.
00:46:21
So in the US, if you ask liberals what percentage of gun
00:46:24
deaths involve assault-style weapons, like military
00:46:27
weapons, people say, oh, 30 percent, 50 percent. And the answer is
00:46:31
something like 4%. Liberals are more likely to get
00:46:34
that wrong, whereas conservatives know that actually
00:46:36
it's mostly handguns and not assault weapons.
00:46:39
But if you ask about rates of crime among immigrants,
00:46:42
Republicans often think it's sky high, whereas Democrats say,
00:46:45
no, it's low. So you have questions like that
00:46:47
where the Republican gets to be right about the guns and the
00:46:50
liberal gets to be right about immigration.
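[Editor's sketch: one way to operationalize the "interdependence" Josh describes is to pick a question mix where each side holds the knowledge edge about equally often. The per-party accuracy figures below are invented for illustration; this is a guess at the shape of the idea, not the lab's actual method.]

```python
# Hypothetical per-party accuracy estimates (invented numbers).
questions = [
    {"text": "Duck Dynasty question",                 "acc_R": 0.70, "acc_D": 0.30},
    {"text": "Stranger Things question",              "acc_R": 0.25, "acc_D": 0.75},
    {"text": "Gun deaths from assault-style weapons", "acc_R": 0.60, "acc_D": 0.20},
    {"text": "Crime rates among immigrants",          "acc_R": 0.20, "acc_D": 0.65},
]

def balanced_quiz(qs, n_per_side=2):
    """Select questions so each side holds the edge equally often."""
    r_edge = sorted((q for q in qs if q["acc_R"] > q["acc_D"]),
                    key=lambda q: q["acc_D"] - q["acc_R"])  # biggest R edge first
    d_edge = sorted((q for q in qs if q["acc_D"] > q["acc_R"]),
                    key=lambda q: q["acc_R"] - q["acc_D"])  # biggest D edge first
    quiz = []
    # Alternate so neither partner carries a long stretch alone.
    for r_q, d_q in zip(r_edge[:n_per_side], d_edge[:n_per_side]):
        quiz += [r_q, d_q]
    return quiz

for q in balanced_quiz(questions):
    print(q["text"])
```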
00:46:52
And because you're both there in that context where your goal is
00:46:55
not to fight about these things, but to get the answer right and
00:46:57
get your money, you just work it out, right?
00:47:00
And you say, OK, well, I'm not sure.
00:47:02
But if you're confident, OK, we'll go with
00:47:04
your answer. Oh, look, you were right or oh,
00:47:06
I was wrong. Sorry.
00:47:07
We'll trust you next time, right?
00:47:09
And you know, you do enough of this.
00:47:11
And what these people find is like, wow, I just answered a
00:47:14
bunch of questions about guns and immigration and things like
00:47:17
that with someone from the opposite political camp.
00:47:20
And they were perfectly nice and perfectly reasonable.
00:47:22
They're willing to acknowledge that I was right about some
00:47:24
things. It helps that I was willing to
00:47:26
acknowledge that they were right about some things.
00:47:28
And now I think those people are not as bad as I thought they
00:47:31
were, you know. And what do we find?
00:47:34
We've now done many randomized controlled trials with this.
00:47:36
And we find when people play this game for less than an hour,
00:47:39
they feel less coldly towards the opposition.
00:47:43
They're willing to divide $100 between a random
00:47:46
Republican and a random Democrat more evenly if they play this.
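[Editor's sketch: the $100 allocation outcome can be summarized as an "evenness" score. This scoring function is our illustration of the logic, not the paper's actual measure.]

```python
def evenness(alloc_to_outparty: float, total: float = 100.0) -> float:
    """1.0 = perfectly even split; 0.0 = everything kept for one's own party."""
    half = total / 2
    return 1.0 - abs(alloc_to_outparty - half) / half

print(evenness(20))  # 0.4 -- strongly favors own party
print(evenness(50))  # 1.0 -- perfectly even split
```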
00:47:50
And we see these effects going out to four months after playing
00:47:54
the game once. Does that mean that
00:47:55
they stop after four months, or is that just as long as you
00:47:57
measure it and they may go on? No.
00:47:59
They play the game once, like one day for less than an hour,
00:48:02
and then we follow up four months later, which of course
00:48:06
reminds them about the game. So it's not like this is on
00:48:08
their mind for the next four months.
00:48:10
It's just that when they're reminded about that experience,
00:48:13
at least they go, Oh, yeah, Republicans or Democrats, you
00:48:18
know, and they give them a rating on how much they like
00:48:20
them or dislike them. And it's not as bad as it was,
00:48:23
you know, it would have been if they'd played the game with a
00:48:25
partner who is similar to them.
00:48:28
But would it last more than four months?
00:48:29
Right. I guess, I mean,
00:48:31
that's just as far out as we've measured, and we do see a
00:48:34
decline. And, you know, sometimes it's
00:48:36
only statistically significant out to a month.
00:48:38
But yeah, I think we could see effects lasting even longer.
00:48:41
And we're actually running an experiment now where we're
00:48:44
having people play the game three times, but for a shorter
00:48:48
period of time, with three different out-party partners.
00:48:51
And we want to see if like seeing it three different times
00:48:55
makes those effects more durable.
00:48:57
So we'll see, but so far the effects are bigger after playing
00:49:01
three times: three short ones versus one long one.
00:49:05
And it seems to me like for this to work, people have to trust
00:49:08
that the quiz itself isn't biased, that the answers are
00:49:11
correct. Do you see people like
00:49:12
questioning that?
00:49:13
So what we find is, for the most part, people accept it,
00:49:17
especially in the mixed condition.
00:49:19
That is, you might question the quiz and say, well, I don't know
00:49:23
about what you said about guns there.
00:49:25
But if you see it saying something that you agree with
00:49:28
about immigration and you see the lib being surprised about
00:49:32
that, but going along with it, it makes you willing to say, OK,
00:49:36
hey, this game's all right. At least this game is
00:49:39
speaking truth about, you know, that issue, right?
00:49:42
So in my example, I would be deliberately wrong about the
00:49:45
guns and right about the immigration for the two example
00:49:47
questions I get. But it's the mutual humbling
00:49:50
that seems to work. Where we get people questioning
00:49:53
it is more in the control condition, when people are paired
00:49:56
up with people who are like them, so they see the
00:49:59
answers that fit with their expectations, and they just go,
00:50:02
well, of course. And then they see the questions.
00:50:04
they don't, and they go, oh, this game is
00:50:05
biased. It's actually funny.
00:50:07
Like, this is anecdotal, but oh, I should say the lead on this
00:50:10
research now is an amazing grad student named Lucas Woodley.
00:50:13
The early studies were done by another amazing grad student,
00:50:16
Evan DeFilippis. And when I talked to Evan about
00:50:18
this early on, he would say, yeah, the liberals, their
00:50:22
comments in the control condition were like, oh, this
00:50:25
was so interesting. It challenged my intuitions and
00:50:28
it was really surprising. And a lot of Republicans said
00:50:30
that. But sometimes you'd also have a Republican say, oh, this game is
00:50:33
bullshit. And that makes sense because
00:50:35
Republicans are more skeptical of researchers, especially from
00:50:39
Harvard, and, you know, of experts and things
00:50:42
like that. But when they're in the mixed
00:50:44
groups, they tend to roll with it.
00:50:46
That's really interesting. The other thing, and this is
00:50:48
really important: it's fun.
00:50:50
Like, the median enjoyment rating is like a 10 out of 10 or
00:50:54
100 out of 100, depending on how we measure it, right.
00:50:57
And this is, like, totally different from every other
00:51:00
intervention, as far as we know. Every other one is like boiled Brussels
00:51:04
sprouts. It can feel valuable or
00:51:06
moving, but it's not the kind of thing where people say, that
00:51:08
sounds fun. This is actually fun, and it sounds fun.
00:51:12
Yeah. We ran a big Tango game at
00:51:14
Harvard, our biggest yet. We had 352 people
00:51:18
playing simultaneously, spread out across 11 dining
00:51:21
halls. And the way we got people to
00:51:23
play was we offered Boston Celtics basketball playoff
00:51:27
tickets, and that was really motivating.
00:51:29
We also said we'd give $1,000 to the house, the residential dorm
00:51:34
that gets the most points total. So it's like bringing in money
00:51:37
for the house, getting a shot at those Celtics playoff tickets.
00:51:41
That brings in people who have no interest in bridge
00:51:45
building across the political divide, right?
00:51:47
Yeah. But then once they play,
00:51:49
especially if they play with someone who's politically
00:51:51
different from them and they're like, oh, actually this was
00:51:53
interesting. This was kind of cool.
00:51:55
And we get a lot of people saying, please let us know when
00:51:57
you're doing this again. We'd love to play it more.
00:51:59
Yeah, this is super exciting for the potential as it might scale, and
00:52:02
as you mentioned, maybe to some other polarized environments,
00:52:05
Israel and Palestine, India and Pakistan, and so on.
00:52:07
But I put my scientist hat on and I read the study, because
00:52:10
with all studies there are strong points and there are
00:52:12
relatively weak points that give you reason for skepticism.
00:52:15
And so I identified some possible ones which I wanted to
00:52:17
raise with you, because if the
00:52:20
phenomenon that the study purports to show isn't real,
00:52:23
then that's bad. Two that I thought of going
00:52:26
through it. One is kind of a selection bias
00:52:29
where people who get annoyed when they realize they're
00:52:31
playing with a political outgroup, or when their beliefs are
00:52:33
challenged, they might be more likely to drop out part way
00:52:35
through. And as I understand it, you kind
00:52:37
of lose their data. That's one. And the second one is the social
00:52:40
desirability bias thing where people maybe have a sense that
00:52:43
the folks running the study want them to get along and want them
00:52:46
to have come to like their political opponents.
00:52:49
And so they are somewhat incentivized to give the
00:52:52
researcher what they want. What do you think about that?
00:52:54
So classic objections. So what you're talking about,
00:52:57
the first objection is what people call attrition, or differential
00:53:00
attrition between conditions. And in some of our studies, due to
00:53:05
sort of a glitch in the software,
00:53:05
we didn't have really good data on attrition in Experiment
00:53:08
5 in the paper. And I'd say for people who are
00:53:11
listening, who are nerds who want to go find the paper, this
00:53:13
was published in Nature Human Behaviour last week.
00:53:17
So this is early June 2025 and you can find it there or on my
00:53:20
lab web page. In Experiment 5, we have 1000
00:53:23
people play it and we have really good data on attrition
00:53:25
and we find that there was not that much attrition,
00:53:27
and there was no evidence of differential attrition, that is
00:53:30
people dropping out if they were playing with out-party partners.
00:53:33
So we did a lot of work to deal with that objection.
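[Editor's sketch: a differential-attrition check boils down to comparing dropout rates across conditions. The counts below are invented, and a chi-square test on the 2x2 table is one standard way to run it; this is not necessarily the paper's exact analysis.]

```python
from scipy.stats import chi2_contingency

# Invented counts: did more people quit mid-game when paired with an
# out-party partner (treatment) than with an in-party partner (control)?
#         finished  dropped
table = [[480, 20],   # out-party partner
         [485, 15]]   # in-party partner

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
# A non-small p-value means no evidence of *differential* attrition,
# the pattern Josh describes for Experiment 5.
```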
00:53:36
And then your other objection is known as experimenter
00:53:39
demand, right? And what we find, we have a few
00:53:42
different ways of dealing with that.
00:53:44
One is we asked people about their political views and what
00:53:49
we found initially and then continue to find is the game
00:53:52
doesn't change people's political beliefs on issues.
00:53:56
It's not really what it's about, but people think that it is
00:53:59
or that it might be, right? And we find these big effects on
00:54:02
attitudes towards the out group, but we don't find that it
00:54:05
changes people's attitudes on issues, even though they might
00:54:08
very well think that that's what we're trying to do.
00:54:10
And we see no effect on that at all.
00:54:12
So that's one guard against experimenter demand.
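[Editor's sketch: the logic of that guard, with simulated numbers. If experimenter demand were driving answers, you would expect movement on issue positions too; effects on out-party warmth but not on issues cut against that story. All data and scales below are made up for illustration.]

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n = 500
warmth_treat   = rng.normal(55, 15, n)  # out-party warmth after the game
warmth_control = rng.normal(48, 15, n)
issue_treat    = rng.normal(50, 10, n)  # positions on the issues themselves
issue_control  = rng.normal(50, 10, n)

for label, t, c in [("out-party warmth", warmth_treat, warmth_control),
                    ("issue positions",  issue_treat,  issue_control)]:
    stat, p = ttest_ind(t, c)
    print(f"{label}: diff = {t.mean() - c.mean():+.1f}, p = {p:.4f}")
# Expected pattern: a clear difference on warmth, none on issues.
```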
00:54:15
The other is we have a control condition where people are still
00:54:18
being confronted with surprising facts that challenge their
00:54:22
political beliefs. This by itself could be a
00:54:25
perfectly plausible intervention.
00:54:28
And in fact, we've done one-player versions of this where
00:54:30
they just get the quiz game and we didn't find strong effects.
00:54:33
But in this thing called the Strengthening Democracy
00:54:35
Challenge, they actually did see positive effects with it.
00:54:38
So like in the control condition, it's still very
00:54:40
plausible that people think that we're trying to have some
00:54:43
opening-up effect on their views or their attitudes, and we don't
00:54:47
see these types of changes. So I think we've got pretty good
00:54:51
reason to think that it's not just experimenter demand or
00:54:55
attrition. But you sound like, you know, reviewer #2 and
00:54:58
reviewer #3. Good questions. Yeah.
00:55:00
You're on the right track. So that's really exciting.
00:55:02
It sounds like it potentially is working and delivering some
00:55:05
really positive results at least four months into the future and
00:55:09
potentially more. So I wonder, have you got any
00:55:12
current plans to try to scale this up?
00:55:14
You've talked about some of the things you've been doing within
00:55:16
Harvard, but are there any other plans about how you're going to
00:55:18
get this into people's hands, get people using it?
00:55:20
Yeah, I'm so glad you asked. So we've been running field
00:55:24
tests at universities. So we've done tests at Cornell,
00:55:27
at University of Missouri, John Carroll University, and then we
00:55:30
had that big game at Harvard. It works beautifully everywhere
00:55:33
we've done it, both in terms of positive effects of the game and
00:55:36
enjoyment. We ran our first randomized
00:55:38
controlled trial with employees at a Fortune 500 company and
00:55:41
got very positive results there. And so we are hoping
00:55:45
to get this into higher ed and into businesses in a more
00:55:48
programmatic way. And the idea is that in higher
00:55:51
ed, you know, campuses have great division.
00:55:53
There's a sense that students feel like they can't express
00:55:56
their opinions because they're afraid. One positive thing we've
00:55:59
seen on college campuses and with the businesses is the game
00:56:03
increases people's agreement with the statement:
00:56:05
I feel comfortable expressing controversial views on campus or
00:56:09
at work. We see big gains there, so we
00:56:11
think that universities are going to be delighted to have
00:56:15
something that students find fun and that makes them feel more
00:56:18
comfortable expressing their views on campus.
00:56:21
Every university should be signing up for this.
00:56:23
Put it in the water, have it in first-year orientation and do it
00:56:26
every other month. Like if you're part of a
00:56:28
university or related to one, please come my way.
00:56:31
Either go to letstango.org or come to me. Businesses:
00:56:35
businesses have a polarization problem.
00:56:37
There are businesses that have had to shut down their office chats,
00:56:40
whether it's like a Slack or Microsoft Teams, because people
00:56:43
were fighting about Israel and Palestine and things like that, right?
00:56:45
And there's also a sense of disconnection in businesses,
00:56:48
especially with remote workers. And we think that Tango can help
00:56:52
bring people together in a way that's fun, get people to sort
00:56:55
of appreciate each other's perspective across lines of
00:56:58
difference and feel more connected to their community,
00:57:01
especially when they know that people have diverse political
00:57:04
views and backgrounds. So we're excited about the idea
00:57:06
of getting this into something that can be community building
00:57:09
for businesses that has positive effects on basic business
00:57:13
outcomes, but that also works to combat polarization and
00:57:17
intergroup animosity in society more widely.
00:57:20
And then eventually what we want is to have this out in the wild.
00:57:23
So we have our website, letstango.org.
00:57:25
And if you want to play our game, you can sign up.
00:57:27
We especially are looking for people who are more
00:57:29
conservative. So please bring your
00:57:31
conservative friends, and bring yourself if you lean in that
00:57:34
direction. But we think that if we can get
00:57:36
corporations to offer prizes and incentives and things like that,
00:57:39
then we could really bring a lot of people in, get the right
00:57:42
celebrity endorsements or whatever it is.
00:57:43
If we're doing incentives, then we've got to have the right kind
00:57:46
of security and a more robust system.
00:57:47
And we don't yet have the software for that.
00:57:50
But the ultimate goal is to have millions of people do this, have
00:57:54
it be lots of fun. People enter tournaments with
00:57:57
different partners. You've got your partner that
00:58:00
you play with who's politically different from you, and you play
00:58:02
with other people and you win big prizes.
00:58:04
And we want millions of people to have the experience of
00:58:07
cooperating with people whose political views and tribal
00:58:10
allegiances are different from theirs.
00:58:12
And being like, we may disagree about stuff, but you're OK,
00:58:15
you're part of the bigger us, right?
00:58:17
And we think we can do this. I mean, it is really, I
00:58:20
think a special thing to have something that works and that is
00:58:24
digital and that is fun. And so we're going full
00:58:27
steam and I'll just do a shameless bid for funding.
00:58:32
Ultimately, we think this can generate a revenue stream.
00:58:34
But to get to the point where we can do that, we need to raise more
00:58:37
money. So if there are funders out there
00:58:39
who want to save the world by solving our tribalism
00:58:42
problem, please come my way. I'd be delighted to talk.
00:58:45
Well, this has been a fascinating conversation, Josh.
00:58:48
Thank you so much. Before we end, we always have
00:58:51
one question that we like to ask our guests, and that's: what's one
00:58:54
belief or perspective you wish more people would reconsider or
00:58:58
change their minds about? I think there's this great
00:59:01
bumper sticker, you know: don't believe
00:59:03
everything you think. I would maybe be more specific:
00:59:06
don't believe everything you intuit.
00:59:08
I wish people had a little more distance between what pops into
00:59:12
their head or what they feel and what they choose to believe and
00:59:16
what they choose to act on, right?
00:59:18
You can get this in different ways.
00:59:20
I mean, I think actually earlier in the conversation we talked
00:59:22
about the sort of rationalist approach, that teaching people
00:59:25
about their biases doesn't work. I don't know.
00:59:28
I mean, it's true, debiasing has a troubled past with working so
00:59:31
well. I do think that something about
00:59:34
the sort of liberal arts education where you're
00:59:38
confronted with challenging ideas, the way it's supposed to
00:59:41
work, meeting people with different perspectives.
00:59:43
And it's very much the theme of David Foster Wallace's famous
00:59:47
graduation speech and then book, This Is Water, where he
00:59:50
basically says, if you want to boil education down to
00:59:52
anything, it's learning to just pause and be willing to question
00:59:56
your gut reaction. And I think it is possible to
00:59:59
teach that. It doesn't come from just
01:00:01
learning about base rate neglect or omission bias or something
01:00:06
like that. Like, it has to be more
01:00:09
experiential. And I think Tango does that, but
01:00:12
I don't think it's the only way to do that. And I think you get
01:00:15
this in certain contemplative practices.
01:00:17
I mean, I've been thinking a lot about meditation and it's been
01:00:21
part of my life for the last few years thanks to Sam Harris among
01:00:24
others. I think that's another way into it.
01:00:26
People think of, like, Buddhism as very woo-woo, but it's actually
01:00:30
really very practical and rational and empirical.
01:00:34
And it's largely about being less reactive, having a little
01:00:38
bit of distance between what pops into your head and then
01:00:41
what you're going to feel and what you're going to believe and
01:00:43
what you're going to act on. So I think there are many
01:00:45
different ways into this. But yeah, if we could learn to
01:00:48
pause and stop and think and question a little bit, we
01:00:52
would all be better off. That's a great answer, and I
01:00:54
love that you mentioned that commencement speech by David
01:00:57
Foster Wallace, This Is Water.
01:00:58
That's got to be my favorite speech of any kind of all time,
01:01:01
so I'll link that in the show notes.
01:01:02
I hope people will go in and check it out.
01:01:03
Yeah, awesome, awesome. Oh, this is more hawking of
01:01:07
wares, but please mention the changedmymind code at Giving
01:01:10
Multiplier. So, all one word: changedmymind.
01:01:17
All right, so we are back after that fascinating conversation
01:01:21
with Professor Josh Greene. Hey, what was your kind of
01:01:24
biggest takeaway from today's chat?
01:01:27
Well, first thing I just want to say: I
01:01:28
love this idea of doing research and then making it happen in the
01:01:31
world to make change. Really, really cool.
01:01:33
And yeah, I found just about everything Josh had to say
01:01:36
really compelling. The one answer he gave that left
01:01:39
me wanting more is to your really good question about: so
01:01:42
maybe Tango can make individual people less polarized, but if
01:01:47
we're stuck in institutions that drive polarization, then isn't
01:01:50
that just going to kind of undo it?
01:01:52
And yeah, he kind of said, well, at the end of the day, we can't
01:01:55
solve this without solving the individual problem.
01:01:57
And that's true, but it seems like that might be necessary but
01:02:00
not sufficient. And so you look at like,
01:02:02
government. And he said, yeah, well, you
01:02:03
know, governments are made up of people that are elected by
01:02:06
individuals. But if all of the politicians we
01:02:09
have to choose from are beholden to the same incentives that mean
01:02:13
that they don't actually want to change the system to be less
01:02:16
encouraging of polarization, then it doesn't matter what the
01:02:18
individuals think. That's not going to change.
01:02:20
And then he mentioned businesses.
01:02:22
I mean, similarly, Bill Gates is great, but most people aren't
01:02:26
Bill Gates, and most people are just going to respond to the
01:02:29
incentives, where maybe nobody at the social media company thinks
01:02:32
it's a good idea that social media is polarizing us, and yet
01:02:35
kind of locally, it's the thing that makes sense for them to do.
01:02:37
So, yeah. What did you think?
01:02:39
Yeah. So I think on the politics, I
01:02:42
think the argument is a bit stronger, in that one of the key
01:02:45
incentives in politics is obviously to be reelected.
01:02:48
And if levels of polarization are low, if people are maybe
01:02:53
more resistant to polarization or actually antagonistic towards
01:02:56
those who try to polarize them, then this could actually start
01:02:59
to change some of the incentives in the environment and lead
01:03:02
politicians to respond very differently.
01:03:04
So I think that's possible. Or even simply, you know, if
01:03:07
creating polarization becomes an ineffective electoral strategy,
01:03:11
then this might help. I think, though, in the case of
01:03:13
social media, it clearly isn't sufficient to simply change
01:03:17
people's views, because social media is extremely
01:03:20
powerful. In the case of businesses, I
01:03:22
think the case is far weaker, because the direct incentives
01:03:26
aren't around what people think, but they're around what they're
01:03:29
prepared to spend money on, where they're prepared to put
01:03:31
their attention. We know that these kinds
01:03:34
of incentives for anger and outrage are just extremely
01:03:37
strong. So I think the case is
01:03:38
maybe a little weaker there.
01:03:40
Yeah, I mean, this isn't to say that Tango, if it does
01:03:43
what the research suggests it does, isn't an amazingly
01:03:45
powerful tool that I wouldn't want to see rolled out
01:03:47
everywhere. It seems like it's a really
01:03:49
important puzzle piece, but maybe not the only one
01:03:52
that we need to significantly transform polarization.
01:03:55
Yeah, what I think is really exciting about it is I think the
01:03:58
strongest argument against the system-wide approach is that for
01:04:02
decades now people have identified the money in
01:04:06
politics, the two-party system with first-past-the-post voting, the
01:04:10
media landscape as problems. And no one has done anything
01:04:13
about them, because these are incredibly difficult things
01:04:16
to change. And if you can find these
01:04:18
solutions that work at different levels, that actually are things
01:04:22
you can really do that can really make a difference,
01:04:24
that's quite exciting. It allows us to move away from simply
01:04:27
saying, wouldn't it be great if things were different, to
01:04:29
actually getting stuck in and making things change.
01:04:30
And so even if it's not enough on its own, I think this
01:04:34
is a really positive and amazing thing to be doing and
01:04:37
trying to work on, and bringing such creativity to these
01:04:40
really complex and really important problems, as Josh
01:04:44
says. Absolutely.
01:04:47
Thank you for listening to Changed My Mind.
01:04:49
We're a new podcast, so if you liked what you heard, consider
01:04:52
giving us a star rating on your podcast app of choice or
01:04:54
sharing your favorite episode with a friend.
01:04:56
Either way really helps us get the word out there.
01:04:59
Special thanks to Harrison Wood for editing and production
01:05:01
support. If you have any guest
01:05:04
suggestions or feedback, especially of the critical kind,
01:05:06
we'd love to hear from you. You can reach us at
01:05:09
hello@changedmymindpod.com.
