Spencer Greenberg is an entrepreneur, mathematician, and social scientist who has dedicated his career to helping people think more clearly and make better decisions.
After years of building tools to improve reasoning, from personality tests to structured debate platforms, Spencer realised that the biggest challenge isn’t just giving people good information. It’s getting them to actually use it. Today, his work focuses on making psychological insights and rational thinking tools practical, engaging, and easy for anyone to apply in everyday life.
We explore why Spencer cares so deeply about truth-seeking, what makes people resistant to changing their minds, and how his projects (like Clearer Thinking and GuidedTrack) are helping people question assumptions, explore alternative perspectives, and live more intentionally.
Want more of Spencer and his work?
🧠Check out Clearer Thinking — free tools and training to help you think better
🎙 Listen to Spencer’s podcast, Clearer Thinking with Spencer Greenberg
About the hosts
Thom and Aidan left boring, stable careers in law and tech to found FarmKind, a donation platform that helps people be a part of the solution to factory farming, regardless of their diet. While the podcast isn’t about animal welfare, it’s inspired by their daily experience grappling with a fundamental question: Why do people so rarely change their minds, even when confronted with compelling evidence? This curiosity drives their exploration of intellectual humility and the complex factors that enable (or prevent) meaningful belief change.
Thoughts? Feedback? Guest recommendations?
Email us at hello@changedmymindpod.com
00:00:02
Sometimes the reason we can't shift the belief is because if
00:00:05
we get rid of that column in our belief structure, we're left
00:00:08
with like nothing in a certain domain.
00:00:10
Because there's this weird way where, if we have no prediction, if we
00:00:13
have no idea what's going to happen, it almost feels like
00:00:15
death in some funny way, even though it's obviously not
00:00:18
literal death. It's sort of like the idea of do
00:00:21
you want to believe this thing or jump into the abyss?
00:00:23
Compare that to do you want to believe this thing or believe
00:00:25
this other thing, right? Like jumping into the abyss.
00:00:27
You don't want that to be the alternative.
00:00:29
You're listening to Change My Mind, where we explore the
00:00:31
psychological forces that drive our biggest changes of belief
00:00:34
and the confounding forces that so often get in the way.
00:00:47
Today we're joined by Spencer Greenberg, a mathematician by
00:00:49
training, serial entrepreneur by profession, and founder of
00:00:52
Clearer Thinking, a platform dedicated to helping people
00:00:55
reason more effectively and form more accurate beliefs.
00:00:58
In many ways, Spencer is kind of the final boss of our podcast.
00:01:01
He's developed several theories and frameworks that bear on how
00:01:03
to change your mind. He's also built research-backed
00:01:06
tools to help people overcome the psychological barriers that
00:01:08
keep us locked into bad beliefs, like the Belief Challenger tool,
00:01:12
the Question Your Identity tool, and the Productive
00:01:14
Disagreements tool. All of these are made available
00:01:16
for free on the Clearer Thinking platform.
00:01:18
But what makes Spencer particularly credible on this
00:01:20
topic is how he practices what he preaches.
00:01:22
Across 200 plus episodes of the Clearer Thinking podcast, he's
00:01:24
publicly updated his views on everything from climate change
00:01:27
to the nature of enlightenment. In this episode, we explore
00:01:29
Spencer's concept of anchor beliefs and why they're so
00:01:32
resistant to change. We'll examine his tools for
00:01:34
helping people challenge their own beliefs, discuss whether
00:01:36
rationality training can actually work at scale, and
00:01:39
explore how to navigate contentious topics that threaten
00:01:41
people's sense of self. Thanks so much for joining us.
00:01:45
Thanks for having me. So Spencer, you've developed
00:01:48
this concept of anchor beliefs, which is super relevant to the
00:01:50
central theme of our podcast. What are anchor beliefs?
00:01:52
I think very often when we disagree, we treat it as though
00:01:56
their beliefs are changeable, that if we just make the right
00:01:59
logical argument or provide the right evidence or make the right
00:02:02
emotional appeal, that their belief will shift.
00:02:04
And I think it is really true that for most sorts of beliefs
00:02:08
that is possible. And for some types of beliefs
00:02:10
it's even likely. For example, if someone's
00:02:13
rushing to go to a wedding and then they have some uncertainty
00:02:17
about whether they're heading the right direction and you
00:02:19
provide information about what the right direction is, it's
00:02:21
very likely they will immediately update on that
00:02:22
information. However, I think there's a class
00:02:25
of beliefs, which I call anchor beliefs, that tend to be
00:02:28
extremely resistant to evidence, to arguments, to rhetoric.
00:02:32
And for all intents and purposes, when we're discussing
00:02:35
an anchor belief with someone, we should treat it as fixed and
00:02:38
immovable. You know, maybe there's
00:02:40
something that could one day move it, but we have to treat it
00:02:42
as fixed and immovable. And then when you do that, it
00:02:44
changes the way you think about these kinds of conversations
00:02:46
because you then think, if I treat this as fixed, I'm not
00:02:50
going to be able to make progress in certain ways, but I
00:02:52
may be able to make progress in other ways.
00:02:53
Yeah, that's really interesting. I definitely want to talk quite
00:02:56
a bit about what the implications of anchor beliefs
00:02:58
are for how we go about discussions and changing other
00:03:01
people's beliefs and our own. But to just kind of lay out the
00:03:04
idea a little bit more, can you give me some examples of
00:03:07
anchor beliefs? A classic example of anchor
00:03:10
beliefs are beliefs that you've had since you were a child that
00:03:13
you never questioned. So, for example, if you were
00:03:15
raised in a particular religion or you're raised in a cult and you
00:03:20
know, you're, you know, now 18 years old and you've literally
00:03:22
never challenged the belief yourself.
00:03:25
Another example would be if you're, you know, if
00:03:28
you're raised in a certain political perspective as a child
00:03:30
and you never had a chance to question it.
00:03:32
So those are kind of obvious ones.
00:03:34
There are also anchor beliefs that can be more based on
00:03:36
evidence. For example, your belief that 1
00:03:38
+ 1 = 2, right? You might think of that as an anchor
00:03:40
belief. Now, it's not maybe so important
00:03:42
because it's so likely to be true and it's not the sort of
00:03:45
thing that, you know, you'd realistically try to change
00:03:47
someone's mind about. But if you imagine trying to
00:03:49
change someone's mind about that, it would be almost
00:03:51
impossible. So I think sometimes anchor
00:03:53
beliefs can come about because of the history of that belief,
00:03:56
Like we've believed it since childhood.
00:03:57
Sometimes it can come about because of the amount of
00:03:59
evidence, like 1 + 1 = 2. But also sometimes it can come
00:04:02
about because of how tied up it is in our identity or our
00:04:06
sense of the future. So if, you know, imagine
00:04:10
believing that, like if I didn't believe this then I'd be doomed.
00:04:12
Or if I didn't believe this then everything I did wouldn't have
00:04:15
mattered or something like that. Those can also be anchor
00:04:17
beliefs. In your essay about anchor
00:04:19
beliefs, you talk about how they don't update with evidence,
00:04:22
they actually shape how we interpret evidence.
00:04:24
And you give an example of, like, if you just had to believe that
00:04:28
the walls were blue, for example. Could you play out
00:04:30
how that would play out in terms of interpreting evidence?
00:04:32
Right. So it's, it's obviously not the
00:04:34
sort of thing that people would typically have an anchor belief
00:04:36
about. But let's suppose that you had
00:04:37
an anchor belief that the walls of your house are blue.
00:04:40
But the thing is that they're not blue, they're white, right?
00:04:43
And So what would actually happen?
00:04:44
Well, you're going to get evidence that the walls are
00:04:47
white. Like you go into your apartment,
00:04:48
you see the walls are white, but you can't change the belief that
00:04:50
the walls are blue. So what do you do?
00:04:52
Well, an obvious thing for your brain to do
00:04:53
is to think, well, maybe there's some kind of
00:04:55
tinted lighting in the room, right?
00:04:57
Because that's a way to make sense of the evidence
00:04:59
you're seeing without changing your belief about the walls
00:05:02
being blue. And you're thinking, well, maybe
00:05:04
if it's tinted the right way, the walls might appear to be
00:05:06
white, even though they're actually blue, right?
00:05:09
Or you might start to think maybe I have a neurological
00:05:11
condition where I can't process colors properly, etcetera.
00:05:14
So, you know, if we think about it, we often think like, oh,
00:05:16
evidence leads you directly to a certain conclusion,
00:05:19
but there's almost always multiple ways to interpret
00:05:21
evidence. And so if a belief is fixed,
00:05:23
you're going to be forced to find a different interpretation
00:05:25
of the evidence. Yeah, you distinguish between
00:05:27
steel anchors, which are beliefs with extremely strong
00:05:30
evidence that are almost certainly true, like the example
00:05:32
you gave, 1 + 1 = 2, and then tin anchors, which are beliefs people
00:05:36
have far shakier evidence for, but that they treat as similarly certain,
00:05:40
like, I don't know, everything happens for a reason, for
00:05:42
example. Why is this distinction
00:05:44
important? Well, basically we can have
00:05:46
anchor beliefs that are really well justified by evidence.
00:05:49
For example, 1 + 1 = 2. That's an anchor belief, but
00:05:51
it's a perfectly reasonable anchor belief.
00:05:53
It's going to serve us well. And then we can have tin anchors,
00:05:56
which are anchor beliefs that are as fixed in our minds, but
00:06:00
we don't actually have a lot of evidence for them, and so
00:06:02
they're much more likely to be shaky.
00:06:04
So if we're forcing the evidence to warp around that
00:06:07
belief, rather than updating that belief based on the
00:06:09
evidence, it can cause all kinds of havoc because essentially
00:06:12
we may be forced to believe something that's false
00:06:15
and therefore reinterpret the evidence to make sense of a
00:06:17
false belief. On anchor beliefs, I think we
00:06:20
can all think of times we've encountered anchor beliefs both
00:06:22
in ourselves and in others. I definitely buy that they're a
00:06:24
thing, but I wanted to challenge whether they're really a
00:06:27
discrete category of belief or if they exist on more of a kind
00:06:30
of spectrum. Because for example, there's
00:06:32
this concept in psychology called the affective tipping
00:06:37
point, which I actually came across on your podcast.
00:06:37
And the idea here is that when it comes to updating beliefs,
00:06:39
instead of thinking, you know, there's an 80% chance astrology
00:06:42
works and then adjusting that probability based on new
00:06:44
evidence, people seem to think in absolute terms.
00:06:47
They just believe astrology works and the evidence against
00:06:50
it doesn't really shift their view at all until suddenly they
00:06:53
hit some sort of tipping point where they start to get this
00:06:57
"wait, I might be wrong" feeling and
00:06:59
start genuinely considering alternatives.
00:07:01
So it seems like all beliefs are kind of anchor like in that
00:07:04
they're entirely resistant to evidence up to a point.
00:07:07
And so are anchor beliefs really a different kind of belief?
00:07:09
Or are they just like what we call beliefs with very high
00:07:11
tipping points? Yeah, I think that they are
00:07:13
qualitatively different because you could be quite confident in
00:07:16
something, but you're still adjusting your certainty as you
00:07:18
get evidence. So, you know, as you get, you
00:07:21
know, imagine you're like very confident that something is
00:07:23
true, but you get a moderate amount of evidence
00:07:25
that it's not true. And now you're a little bit,
00:07:27
you're a bit less confident than you were before.
00:07:29
You still think it's very likely true.
00:07:30
You're a bit less confident and then you get some more evidence
00:07:32
against. And now you're even a little bit
00:07:33
less confident still, right? That's not an anchor belief,
00:07:36
right? Because you're actually
00:07:37
incorporating the evidence. But take something like, you
00:07:40
know, you were raised in a cult and you believe the cult leader
00:07:42
is like, you know, in direct communication with God and
00:07:45
you've kind of always believed that.
00:07:46
And then you get some evidence that the cult leader is making
00:07:49
it up. I think the way that typically
00:07:51
works is that you actually don't update your probabilities at
00:07:55
all. Like if you were to try to
00:07:56
elicit how confident you were before and after, I don't
00:07:59
think you actually do. Maybe for like a few seconds, you might
00:08:01
feel less confident, but I don't think you're actually less
00:08:03
confident after. I think what happens is, you
00:08:06
know, Mormons actually have a name for this.
00:08:07
They call it your shelf. You like stack up these
00:08:09
inconsistencies on your shelf, but somehow it like doesn't
00:08:12
interplay with your actual belief structure.
00:08:15
It just kind of, like, gets stuck in the back of your mind.
00:08:17
And then one day your whole shelf can collapse, like
00:08:19
something might happen and just suddenly the belief, like,
00:08:21
goes all at once. It's like the anchor snaps or
00:08:24
something, but it doesn't seem to behave like Bayesian updating
00:08:27
where you kind of get some evidence and you shift a little
00:08:29
bit and you get some more, you shift to some more.
00:08:31
Sounds very similar to the affective tipping point thing.
00:08:33
Like, I mean, it sounds like you're saying with the affective
00:08:36
tipping point, yes, when you encounter new
00:08:39
evidence, you don't think that your belief is changing, but it
00:08:42
is subtly until you hit a certain point and it becomes
00:08:44
conscious and then you have the conscious experience of your
00:08:47
belief changing all at once, or becoming open to changing all
00:08:50
at once. Is that really different?
00:08:51
Well, I think the question is, what does it mean for a belief to change,
00:08:53
right? I would argue that with a lot of
00:08:55
these anchor beliefs, as you accumulate evidence against it,
00:08:58
your belief, it does not change. You're actually equally
00:09:00
confident, even though you're just accumulating the
00:09:02
inconsistencies, but you're not actually adjusting how confident
00:09:05
you are on the belief. And then most likely you'll just
00:09:07
believe it for the rest of your life.
00:09:08
But every once in a while there's so much evidence like
00:09:10
piles up that one day you might go from like completely to
00:09:13
believing to not believing at all.
00:09:14
So instead of, like, Bayesian updating bit by bit and
00:09:17
adjusting your confidence, it's like you just keep holding the
00:09:20
belief fully until the entire thing shatters or it never
00:09:25
shatters. I suppose you can maybe
00:09:26
draw a distinction between what you consciously believe, what
00:09:30
you think you believe, what you might say you believe, and then
00:09:33
what you actually believe. If you were to test that through
00:09:36
like, I don't know, forcing you to like consider really
00:09:39
carefully, or, you know, making bets on things.
00:09:40
And as I understood it, this idea of the affective tipping point
00:09:43
is consciously you encounter new evidence and you aren't doing
00:09:46
Bayesian updating, at least not consciously.
00:09:48
Consciously your belief stays fixed until all of a sudden it
00:09:50
doesn't. And that sounds the same as the
00:09:52
anchor belief, in that consciously your belief's not
00:09:54
changing at all until suddenly they are.
00:09:56
Yeah, that does sound quite similar when it's
00:09:58
phrased that way. Yeah, OK.
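[Editor's aside: for readers who want the contrast above made concrete, here is a minimal toy sketch, ours rather than Spencer's, of the two updating styles being compared: gradual Bayesian updating versus a "shelf" model that stays fixed until a tipping point. All function names, thresholds, and numbers are illustrative assumptions, not anything from Clearer Thinking's tools.]

```python
# Toy sketch (illustrative only): incremental Bayesian updating versus an
# "anchor/shelf" model where reported confidence is unchanged until
# accumulated anomalies cross a tipping point and the belief collapses at once.

def bayesian_update(prior: float, likelihood_ratio: float) -> float:
    """Update a probability given evidence with the stated likelihood ratio
    (P(evidence | belief true) / P(evidence | belief false))."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

def anchor_update(confidence: float, shelf: float, anomaly: float,
                  tipping_point: float = 5.0) -> tuple[float, float]:
    """Shelf model: anomalies pile up silently, then the belief shatters."""
    shelf += anomaly
    if shelf >= tipping_point:
        confidence = 0.05   # the whole structure gives way at once
    return confidence, shelf

if __name__ == "__main__":
    bayes_p = 0.95               # start out very confident
    anchor_p, shelf = 0.95, 0.0
    for step in range(8):
        # each step brings moderately strong evidence against the belief
        bayes_p = bayesian_update(bayes_p, likelihood_ratio=0.5)
        anchor_p, shelf = anchor_update(anchor_p, shelf, anomaly=1.0)
        print(f"step {step}: bayesian={bayes_p:.2f}  anchor={anchor_p:.2f}  shelf={shelf:.0f}")
```

Running it shows the Bayesian confidence sliding down a little with each piece of evidence, while the anchor confidence sits at 0.95 until the shelf hits the (assumed) tipping point and drops in a single step, roughly the pattern the conversation is describing.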
00:10:00
Maybe one thing that can help shed light on this idea of
00:10:04
anchor beliefs and this tipping point is you have this, this
00:10:07
temple metaphor for beliefs that I think could be quite helpful.
00:10:09
I know this is something you've written, I don't know, like 14
00:10:12
years ago. I'm asking you to go deep into
00:10:14
the memory banks, but can you explain, if you can remember, how
00:10:17
you think about beliefs as being arranged as like a temple
00:10:19
structure? Yeah, basically you can think of
00:10:22
beliefs as being built on other beliefs.
00:10:24
So if we go, I mean, religions are a really easy example.
00:10:26
So let's use that again. Like you might have a belief
00:10:30
that, you know, Jesus was the son of God, and then you might,
00:10:32
a lot of things could get built on top of that belief like that
00:10:36
this particular leader represents the Church of God,
00:10:39
right? Like, but if like you didn't
00:10:41
believe that Jesus was the son of God, then the
00:10:43
following belief may not make any sense.
00:10:44
So you could think of it as like you have these different layers
00:10:47
of your belief and they get stacked on top of each other.
00:10:49
Additionally, at each level, some of the beliefs might bear
00:10:53
more weight in sort of your belief structure than others.
00:10:56
So for example, maybe like a mid level belief would be like
00:11:00
this church is a really good church or something like that.
00:11:02
And then like, but maybe that could be swept away and you just
00:11:05
think, oh, that church is badly run, but it doesn't really
00:11:07
affect anything else about your belief structure.
00:11:09
So that mid level belief about the
00:11:12
church being well run, like didn't really bear a lot of
00:11:15
weight in your belief structure. And it could be swapped
00:11:17
out for a belief that that church is badly run, but you still
00:11:19
believe in the overall thing. You just think that church had a
00:11:21
particularly bad management, but the whole superstructure is
00:11:23
generally true. And so this kind of relates to
00:11:26
anchor beliefs in a way because I think things that are like
00:11:28
very low levels, like near the first floor of this kind of
00:11:32
structure of your beliefs tend to be more likely to be anchor
00:11:34
beliefs and especially when they're weight bearing.
00:11:36
In other words, if you were to stop believing it, you'd have to
00:11:39
restructure tons of beliefs above it.
00:11:41
Those things, I think, tend to not update with regard to
00:11:44
evidence. Yeah, that makes sense.
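[Editor's aside: here is a small toy sketch, ours and not from the episode, of the temple metaphor just described: beliefs as a dependency structure, where removing one belief tells you how much would need restructuring above it. The example beliefs and the graph itself are purely illustrative assumptions.]

```python
# Toy sketch of "weight-bearing" beliefs: each belief lists the beliefs it
# rests on. Removing a belief invalidates everything that directly or
# indirectly depends on it; a load-bearing belief is one with many dependents.
# The belief graph below is an illustrative assumption, not real data.

RESTS_ON = {
    "this church is well run": ["this faith is true"],
    "this leader speaks for God": ["this faith is true"],
    "my life plan makes sense": ["this leader speaks for God"],
    "this faith is true": [],
}

def dependents(removed: str, rests_on: dict[str, list[str]]) -> set[str]:
    """Return every belief that would need restructuring if `removed` fell."""
    fallen = {removed}
    changed = True
    while changed:
        changed = False
        for belief, supports in rests_on.items():
            if belief not in fallen and any(s in fallen for s in supports):
                fallen.add(belief)
                changed = True
    fallen.discard(removed)
    return fallen

# A peripheral belief can be swapped out with little consequence...
print(dependents("this church is well run", RESTS_ON))   # set()
# ...while a low-level, weight-bearing one takes much of the structure with it.
print(dependents("this faith is true", RESTS_ON))         # three dependent beliefs
```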
00:11:46
And then in terms of this tipping point theory of how
00:11:48
minds change, you might think of it as like, as you encounter new
00:11:51
evidence against your belief, maybe like not particularly load
00:11:55
bearing columns that support that belief are being removed, but
00:11:58
the belief itself remains until enough of those you know, enough
00:12:02
of those columns are removed that the whole thing comes
00:12:04
crashing down. Exactly.
00:12:06
And then this also comes up if you're trying, if you're in a
00:12:09
discussion with someone and you think they're making a mistake
00:12:11
about something, you're trying to persuade them because what
00:12:14
can happen is the person throws up objections.
00:12:16
Like you make a claim and the person throws up one objection and
00:12:19
you knock down that objection, let's say very successfully, and
00:12:21
they throw another objection and you knock that down very
00:12:23
successfully. You might expect their belief to
00:12:25
change. But then the question
00:12:26
is, are these objections the actual load bearing columns in
00:12:29
their belief structure? They could throw out all kinds
00:12:30
of objections that they just thought of in the moment that
00:12:32
actually have no effect on the belief structure.
00:12:35
And changing their mind on those particular objections will do
00:12:37
nothing for their underlying belief.
00:12:39
And so if you're actually like thinking about deep belief
00:12:42
change, you have to think about the load bearing columns of
00:12:44
the belief structure, but ideally ones that are not anchor
00:12:47
beliefs, because those also aren't going to change.
00:12:49
So it's like, where are the load bearing parts that are not so
00:12:52
fixed that they're immovable? Yeah, there's a related concept
00:12:56
that I'm trying to figure out how it fits in here,
00:12:58
which is world views. And I think they're kind of hard
00:13:01
to define, but one way of putting it might be that one's worldview
00:13:04
is that collection of attitudes and values, stories,
00:13:07
expectations about the world that shape how
00:13:10
they interpret information and experiences.
00:13:13
And it seems to me like over time, our specific beliefs can
00:13:16
combine to form world views. And then the worldview itself is
00:13:19
some sort of anchor belief that's used to vet new beliefs and
00:13:22
decide whether to incorporate them or not.
00:13:24
Yeah. Is that right?
00:13:25
Or do you think the anchor belief concept and worldviews
00:13:28
relate differently? We did some research into world
00:13:31
views and our goal was to say if we take a like a long list of
00:13:34
world views, what are the elements that they have in
00:13:37
common? Not in the sense of the people
00:13:39
who believe the different world views all believe the same
00:13:41
thing, but the the different sort of attributes that that you
00:13:44
need to have something be a worldview, right.
00:13:47
So if we say capitalism is a worldview, what are the
00:13:49
attributes to fill in so that you believe in capitalism?
00:13:52
Or if we say Christianity is a worldview, what are the
00:13:54
attributes? And what we found actually
00:13:56
surprised me a lot that that like there aren't that many
00:13:59
things that worldviews seem to have in common, except things
00:14:02
like who is good? Who deserves the good?
00:14:05
What do you have to do to be good?
00:14:08
And so actually, it seems like those
00:14:10
are the fundamental aspects of worldviews.
00:14:11
And then the other stuff is optional.
00:14:13
Some worldviews have them and some don't.
00:14:14
For example, some worldviews have an out group, but many
00:14:17
worldviews have no out group. Worldviews often have a
00:14:19
community, but it's not necessarily required.
00:14:21
You could actually have someone who, like, deeply believes the
00:14:23
worldview but has no community around them, right.
00:14:25
So, but those things I described things like what do you have to
00:14:28
do to be good or do good? Those I think tend to be
00:14:31
anchor beliefs that tend to be very, very deeply rooted and hard
00:14:34
to change. It's interesting that you found that
00:14:37
some of the defining attributes of worldviews pertain to the
00:14:40
good: who's good, what is good. Because I guess, you know, I
00:14:43
have a friend from many years ago, and the main way we
00:14:48
connect is like having discussions about the world.
00:14:49
He's a very smart guy and we used to have fairly similar beliefs
00:14:53
and have diverged a lot over the years.
00:14:55
And the way I've been thinking about why we disagree about so
00:14:59
much these days is that we kind of have different world views.
00:15:01
But when I interrogate that, I think it's less about the good
00:15:04
and more about what is true and how, how you figure out what's
00:15:07
true. Like it seems to me like we have
00:15:10
differences in how much we trust authority or like appeals to
00:15:13
expertise or whether we think centralization versus
00:15:17
decentralization is like a better way to collect
00:15:20
information and, and, and stuff like that.
00:15:22
Do you think epistemics play a role in world views?
00:15:24
Well, when you form a worldview it tends to be in practice
00:15:28
connected to a community. It's not always true.
00:15:31
Like you can have a random person on the Internet who just
00:15:33
adopts a worldview that no community connects to.
00:15:35
But typically it is. And typically communities differ
00:15:40
in their epistemic norms, their authorities, what sort of
00:15:44
arguments they find convincing, that kind of thing.
00:15:46
So those tend to cluster by worldview.
00:15:48
So for example, libertarians in the US tend to like economic
00:15:53
type arguments, right? Whereas maybe like communists in
00:15:57
the US may be more, like, suspicious of, you know,
00:15:59
reading economics papers and thinking that that has like
00:16:01
insight into the world. So, so you know, but that I
00:16:04
think has to do with sort of the history of those, you know,
00:16:07
those perspectives and so on. Interesting.
00:16:10
Again, one last little interrogation of this anchor
00:16:12
belief concept before we get into more practical matters.
00:16:15
You have this life philosophy called Valuism, which is about
00:16:18
identifying your fundamental personal values and then
00:16:20
pursuing those in the world. And you ran the study that
00:16:23
identified 22 categories of fundamental values that
00:16:26
different people have, like being happy or preventing
00:16:28
suffering or having true beliefs and experiencing beauty, stuff
00:16:31
like that. So are these intrinsic values
00:16:33
examples of anchor beliefs or are they something else?
00:16:35
I think of them as something else.
00:16:37
It's a little bit hard to describe human psychology with
00:16:41
like really rigorous detail because, you know, at the end of
00:16:43
the day, we have squishy brains that are really complicated and
00:16:46
have huge numbers of neural connections.
00:16:47
And like, you know, the deepest level description of like a
00:16:51
human, what a human mind is, is like has to do with the squishy
00:16:54
brain. But if we, if we abstract away
00:16:57
and try to really like give it a simple explanation for what's
00:17:00
going on, I think a natural thing that, you know, eventually
00:17:03
arises when you're trying to describe humans is this idea of
00:17:06
values. And basically that there are
00:17:08
some things that we think of as good, as desirable, things
00:17:11
that, that we wish there were more of or that we wish
00:17:13
happened. And then there are things we
00:17:14
think of as bad. And I like to think of them in
00:17:17
terms of intrinsic values because most of the things we
00:17:19
value, we value because they get us other things like, you know,
00:17:22
would you really value getting a sandwich if the
00:17:25
sandwich had no flavor, if it had no nutritional value, it
00:17:28
didn't fill you up, it didn't make you not hungry, right?
00:17:30
It's like once you remove all the things you get from the
00:17:32
sandwich, like sandwiches are not valuable.
00:17:34
And similarly, like cash, you know, would you value having,
00:17:37
you know, dollar bills if like you couldn't use them to buy
00:17:40
anything, They didn't give you any status.
00:17:42
You couldn't even burn them to stay warm.
00:17:44
Like, no, once you lose all the external properties, they're
00:17:47
not valuable. So those are instrumental
00:17:49
values. And most of our values are
00:17:50
instrumental, but some of our values are intrinsic because we
00:17:52
care about them for their own sake, even if they get us
00:17:54
nothing else. And then I think the way my best
00:17:56
guess of how the human brain does this is that we have this
00:17:59
sort of pretty basic valuing operation.
00:18:01
We can think about like a state of the world represented by like
00:18:04
concepts about the world. And then our brain will
00:18:06
spit out like, is that valuable? Is it not valuable?
00:18:09
It doesn't do it by intrinsic or not.
00:18:10
That's something you have to layer on top.
00:18:12
You have to think another step ahead.
00:18:14
But it's sort of, you know, you imagine yourself, you know,
00:18:17
getting married and it'll either spit out like, that's valuable
00:18:19
or that's not valuable, and a certain sense of
00:18:21
how valuable it is to you. Or you think about yourself being
00:18:24
deceived your whole life and believing a lie, and you think
00:18:27
about how valuable or not valuable it is to you.
00:18:29
So I think that's a reasonable approximation for one of the
00:18:32
things that drives humans, but there are a lot of other things.
00:18:34
The values represent a set of drives.
00:18:37
So OK, you like to think of intrinsic values as being
00:18:40
different from anchor beliefs, but do you see them as sharing
00:18:43
the property that they are very resistant to change?
00:18:46
I think they are quite resistant to change.
00:18:48
I think they can change sometimes, but they're very
00:18:50
resistant to change. But I don't, I don't even want
00:18:52
to call them beliefs. I actually think that there are
00:18:54
many belief like structures in the human brain.
00:18:57
There are many things that behave belief like.
00:18:59
And so maybe you could say values are belief like, but I
00:19:02
don't think they're really beliefs in the, in the in a
00:19:04
meaningful sense. I think they're lower level.
00:19:06
I think it's like you can think about a potential state of the
00:19:09
world and like almost instantaneously, you know,
00:19:11
probably in hundreds of milliseconds, have an answer to what you think
00:19:14
is valuable and to what extent, before your mind even
00:19:17
gets going before it has a chance to think.
00:19:20
So yeah, I think it's it's very low level kind of processing
00:19:23
stuff in my opinion. And just to tease out another
00:19:26
potential difference between these concepts, when it comes to
00:19:29
anchor beliefs, you would advocate to people try and
00:19:31
figure out what their tin anchor beliefs are and question them
00:19:33
so that they can replace false ones.
00:19:35
But you wouldn't really suggest that people interrogate,
00:19:38
well, you'd want them to figure out what their intrinsic beliefs
00:19:40
are, but there would be no reason to try to change them,
00:19:43
so. Yeah, their intrinsic values.
00:19:46
Well, it depends. Depends what you believe is true
00:19:48
about, you know, good and bad. Like if you think there's
00:19:50
objective moral truth, like there's an actual factual answer
00:19:53
to what's true, that could give you a reason to try to adjust
00:19:56
your values. But if you don't believe in
00:19:58
objective moral truth, or you don't, or you aren't a believer
00:20:01
of a particular philosophy about what you should do or the way
00:20:04
you know things should be, then I don't think you have really
00:20:06
good reason to change your values, except perhaps your
00:20:09
values might actually, some of your values might say you should
00:20:12
change other of your values. So that's a possibility.
00:20:14
But other than that, I don't think there's a good reason to
00:20:17
try to change your values. OK, cool.
00:20:18
So I think that's a pretty, maybe too thorough, lay of the land of
00:20:22
what anchor beliefs are all about.
00:20:23
And so I want to get to the kind of the practical implications
00:20:26
next, because we've spoken about anchor beliefs so far as these
00:20:28
views that are extremely resistant to change.
00:20:30
But we know that sometimes they do change, as you've mentioned.
00:20:33
So yeah, you use the metaphor of a steel anchor on the ocean
00:20:38
floor that only, like, a really enormously powerful current can
00:20:41
budge. What would constitute that kind of enormously powerful
00:20:44
current for an anchor belief? Yeah, it's a good question.
00:20:48
We're actually working right now on a study where we're going to
00:20:50
be asking people about times when different types of beliefs of
00:20:53
theirs changed, and ask them about, like, the context, what
00:20:56
happened, what happened right at the moment of change or
00:20:59
like shortly before it changed. So I don't feel like I have a
00:21:02
super good explanation for this. What I do know is that it
00:21:04
sometimes changes when you get an incredibly large amount of
00:21:07
evidence really quickly. Like imagine, you know, you, I
00:21:10
don't want to keep, you know, using religious examples.
00:21:12
I don't want to pick on religion, anything like that.
00:21:13
But let's say, let's say a political example, like you
00:21:16
favor a certain political leader.
00:21:17
You're a huge fan of them, you follow them, you know,
00:21:20
everything they do and you're and, you know, think that they
00:21:22
make great decisions. And then one day you get really,
00:21:24
really strong evidence that they actually accepted bribes, right?
00:21:28
Like it's just proven beyond a shadow of a doubt, like everyone
00:21:32
now believes it. Like maybe that could be like
00:21:35
it might be so forceful and so out of line with your
00:21:38
belief. And you know, if everyone like,
00:21:40
let's say everyone else who supports them now also doesn't
00:21:42
support them, right? And they're all simultaneously,
00:21:44
like, yeah, that person is bad, they're not who we thought.
00:21:47
Like it might be able to shock you out of it.
00:21:48
But I do think that there are other things.
00:21:50
I do think there's an accumulation model that
00:21:52
sometimes works, and this goes back to the Mormon
00:21:55
idea of, like, your shelf. Like I think some people,
00:21:59
they kind of accumulate these anomalies and they stack so many
00:22:02
anomalies like on the side that one day, like, one more
00:22:06
anomaly, it just like the whole thing just explodes and then
00:22:09
suddenly in like 5 seconds, they no longer believe it.
00:22:10
It's like, well, why was that the one thing that pushed me to
00:22:13
the edge? I don't know, but maybe it had
00:22:14
more to do with like, yeah, sudden tipping point like we
00:22:16
were describing before, rather than there's something so unique
00:22:19
about that piece of evidence. If you're somebody that suspects
00:22:23
that perhaps you have a false anchor belief, or maybe you
00:22:26
don't have any particular belief in mind, but you just want to
00:22:29
make sure that you don't have any false anchor beliefs, how
00:22:32
would you recommend people kind of safely challenge their own
00:22:34
anchor beliefs? Because as you mentioned, it can
00:22:36
be quite disorienting to have this really load bearing belief
00:22:39
suddenly fall out from underneath.
00:22:41
Yeah, well, I think the first question to ask yourself is,
00:22:44
if you're wrong about it, do you want to find out?
00:22:47
I think that is a very serious question.
00:22:49
Like, you know, I think people who are truth seeking, they
00:22:51
want to immediately say yes without thinking about it.
00:22:52
But I think it's actually worth genuinely thinking about.
00:22:55
Suppose the world is not like you think it is.
00:22:57
You're wrong about this deeply held belief.
00:22:59
Do you want to know it? And there's some reasons you
00:23:02
might not want to know it. Like maybe you're deeply
00:23:04
embedded in the community, and not believing that thing would
00:23:06
actually make it really hard to be part of that community.
00:23:08
Maybe because you'd be rejected by it, or maybe because you
00:23:11
would just feel so weird in that community not believing it.
00:23:13
Like, you couldn't, you know, be part of it
00:23:15
anymore realistically or be happy in it.
00:23:17
So take that question seriously. And then once you've decided, or
00:23:21
if you decide, yes, if it is not true, I want to know it's not
00:23:24
true, then I think one of the most elegant approaches is just
00:23:28
to join a social circle that that doesn't believe it.
00:23:31
Now it is possible you already are that way, but many people
00:23:34
who have like really deeply rooted anchor beliefs, they like
00:23:36
most of their social community also has the same anchor belief.
00:23:39
So if you go make a bunch of friends where all of that
00:23:42
group doesn't believe the thing and you just spend time with
00:23:44
them, that actually can be like a gentle way to, like, shift
00:23:48
anchor beliefs. It's not a thing that happens
00:23:50
right away. But like, over time, as you get
00:23:52
more and more embedded in communities, I think you can
00:23:55
find that it can start to change.
00:23:57
It can start to change people pretty fundamentally, yeah.
00:23:59
That's interesting because, you know, I've heard the finding
00:24:03
that when you expose people to counterarguments to their
00:24:05
beliefs, sometimes it actually entrenches them.
00:24:07
And so I first thought, well, isn't that what's going to
00:24:09
happen in this scenario? But the social forces are really
00:24:13
quite powerful here, like if you're just around a group where
00:24:16
certain beliefs are normalized and also you see people who hold
00:24:18
them are not evil, that probably does a lot to
00:24:22
soften you. Yeah, it has to be a group that
00:24:25
you get along with well and you feel bonded to.
00:24:26
It doesn't work if you, like, feel friction with
00:24:28
this group all the time, then it's much less likely to work.
00:24:30
Yeah. And you mentioned the question
00:24:32
of, you know, you should think to yourself, do you actually
00:24:35
want to know whether this belief is true or not?
00:24:37
But I've seen you say elsewhere that you think that keeping
00:24:40
beliefs that are untrue but useful is kind of a bad life strategy.
00:24:44
Is that is that right? Well, I think that most of the
00:24:47
time untrue useful beliefs can be replaced with better beliefs.
00:24:51
And by better I mean true useful beliefs that serve a similar
00:24:54
purpose, right? So like you can upgrade them
00:24:57
a lot of times. Like for example, let's say you
00:24:58
have a belief that whatever happens to you is like the
00:25:01
perfect thing to happen to you. So a friend of mine had this
00:25:03
belief; it was kind of like a mantra she'd cite.
00:25:05
And so, you know, if, if let's say she doesn't get a job, well,
00:25:08
that's OK. It's the, it's the perfect thing
00:25:10
to happen to her, right? It's fine that she didn't get a
00:25:12
job, that, you know, she'll get another job that she
00:25:15
likes more. And, and you know, she doesn't
00:25:16
really know how that job would have turned out.
00:25:18
So it's easy for her to convince herself that whatever job she does
00:25:21
get, she'll be happier at, right.
00:25:23
Or suppose she gets sexually assaulted.
00:25:25
Well, that's where it starts to be a problem, right?
00:25:27
It's like what you're supposed to believe that's the perfect
00:25:29
thing to happen to you. Like that's insane.
00:25:31
And so I think the problem with false helpful
00:25:34
beliefs is that them being false actually puts you in a weird
00:25:38
situation where you end up making decisions in a way that's
00:25:41
discordant with reality,
00:25:42
because you think things that aren't true, or it causes you to warp
00:25:45
evidence in a really weird way. Like I think it's a very bad idea
00:25:48
to, like, believe that being sexually assaulted
00:25:50
was the perfect thing to happen to you.
00:25:51
And it's a, you know, it's a very potentially harmful belief.
00:25:53
So the question is, is this belief helpful to her?
00:25:56
In some cases, I think it is, in some cases not. Is there a
00:25:59
true belief that actually also has similar benefits?
00:26:01
And there might be, you know, there might be a belief like
00:26:04
whatever happens to you, there's something to be learned from it,
00:26:06
which it's probably almost always true that there's
00:26:07
something to be learned from it. Like it's, I would say that's a
00:26:09
significantly truer belief than her original one, but it might
00:26:12
give her a way of having similar benefits where like when
00:26:15
something happens that seems bad, she's like, well, there's
00:26:16
something to learn from this. What can I learn?
00:26:18
And then it like reframes it more positively.
00:26:20
So I would just suggest to people try to find a true belief
00:26:23
or truer belief that serves a similar purpose.
00:26:25
I'm not going to say you can do this in 100% of cases.
00:26:28
I think it's ridiculous to say that you know 100% of time
00:26:30
someone's better off having true beliefs in terms of your
00:26:32
happiness or well-being or outcomes.
00:26:35
But I think a lot of the time it's better, because true beliefs
00:26:37
have the fundamental advantage of being in accordance with
00:26:40
reality. Yeah, yeah.
00:26:41
I guess one could push back and say that if the useful untrue
00:26:45
belief has, in certain cases, negative consequences, well,
00:26:48
then it is in fact not useful. And so I guess your point is
00:26:51
just that false beliefs that seem useful
00:26:55
may be useful in many cases, but there are corner cases where
00:26:57
they're not. And you can probably always find
00:26:59
a true useful belief that is more robustly useful.
00:27:02
Is that right? Yeah, I think at least usually
00:27:05
there exist such beliefs. Now the question is, can you get
00:27:06
yourself to believe that thing? This actually leads to something I think
00:27:09
is really important, which is that sometimes the reason we
00:27:12
can't shift the belief, like sometimes the reason something
00:27:13
is an anchor belief, I think, is because if we get rid of that
00:27:17
column in our belief structure, we're left with like nothing in
00:27:21
a certain domain. It's like, you know, your whole
00:27:23
life you believed you were going to be a teacher.
00:27:26
And it's because when you were a child, a person you thought was
00:27:28
a prophet told you this, right? And it's like you've banked your
00:27:31
whole life on this. You've gone to education,
00:27:33
school, etcetera, etcetera. And then, like, now, if you,
00:27:36
if you like, pull away that anchor belief, that that
00:27:38
prophet had prophetic wisdom. It leaves you with this huge
00:27:41
void of like, what am I doing with my life?
00:27:43
Like, did I waste all this time and like, what's my future like
00:27:46
and all this uncertainty? But so often,
00:27:49
I think you can begin to make it more possible for an anchor
00:27:53
belief to change, or to start shifting it away
00:27:55
so it's not an anchor belief, if you, like, fill in
00:27:59
something else, right. It's like, well, maybe imagine,
00:28:02
imagine an alternative career you would really enjoy, start
00:28:04
filling in what your life would look like.
00:28:06
And then suddenly pulling away that column doesn't look like,
00:28:09
you know the end of everything because there's this weird way
00:28:12
where we're like, if we have no prediction, if we have no idea
00:28:15
what's going to happen, it almost feels like death in some
00:28:17
funny way, even though it's obviously not literal death.
00:28:20
Yeah, at the very least extremely disorienting and unpleasant.
00:28:24
It's sort of like the idea of, do you
00:28:26
want to believe this thing or jump into the abyss?
00:28:29
Compare that to do you want to believe this thing or believe
00:28:31
this other thing, right? Like jumping into the abyss.
00:28:33
You don't want that to be the alternative.
00:28:35
Yeah, so it seems like one thing that could be a useful strategy, if
00:28:38
you want to prevent finding yourself attached to false
00:28:42
beliefs because you have no alternatives, is to not put all
00:28:44
of your ideological eggs in one basket.
00:28:47
Like not letting any one belief become too load bearing.
00:28:50
I can see this is similar to, like, I think a good social
00:28:52
strategy is to not put all of your social eggs in one
00:28:55
community or one group because then it makes it a lot harder
00:28:58
for you to consider leaving them if if they're in fact actually
00:29:02
not good for you or like, yeah, something like that.
00:29:04
So yeah. Is that a reasonable strategy,
00:29:06
or is that not good epistemics? It might be an original strategy,
00:29:10
except that I just think these things are more like
00:29:13
things that happen to us. You know, we end up as an adult
00:29:15
and we'll have all these anchor beliefs.
00:29:16
You know, many of us rooted in childhood or you know, we've
00:29:19
adopted from a social circle without realizing it.
00:29:21
So you know, it's hard enough to change one anchor
00:29:24
belief. By definition, an anchor belief
00:29:26
is saying it's hard to change, right?
00:29:27
I'm not saying it's impossible to change.
00:29:28
It's hard to change. So it's like, to then think you
00:29:31
could go and pick all your anchor beliefs.
00:29:32
It's like maybe that's, you know, just too difficult
00:29:36
realistically. Yeah.
00:29:37
So, so on this, like one of the first things you said about
00:29:40
anchor beliefs is that when it comes to having conversations
00:29:43
with others, we should try and identify the anchor beliefs and
00:29:46
then more or less assume that they can't be changed and work
00:29:48
within them as constraints, because it's really hard.
00:29:50
And that's very pragmatic. But I guess given that we've
00:29:52
established that anchor beliefs can change and are very
00:29:56
important, perhaps this approach isn't ambitious enough.
00:29:58
Like how do you think about this trade off?
00:30:00
Well, I think people almost always fail to persuade people
00:30:03
around things that are like deeply important, important
00:30:05
beliefs. And I think this way of thinking
00:30:08
about it actually makes success more likely, like much more
00:30:11
likely where you say, OK, this person is not going to change
00:30:14
their mind about X and Y and Z, like those are anchor beliefs, but
00:30:18
they might change their mind about P, Q and R.
00:30:21
And like, So what am I really trying to do in this
00:30:22
conversation? Right?
00:30:24
Like if you, for example, if you just want their, let's say you
00:30:26
think they're engaging in a really unhealthy behavior, if
00:30:28
your goal is just to get their behavior to change, to be
00:30:30
healthier for their own benefit, then don't try to change anchor
00:30:33
beliefs. Try to find the other beliefs
00:30:35
you can work with them on that can shift much more easily.
00:30:38
And a lot of times, I think a lot of times people spend like
00:30:41
time beating their head against anchor beliefs and never kind of
00:30:43
updating on the fact that they're not, they're not going
00:30:45
to change. Yeah, Yeah.
00:30:46
I think that makes sense. That's something that Tom and I
00:30:50
come across a lot in our day job where we have this, you know,
00:30:52
this nonprofit that is talking about the issue of factory
00:30:55
farming. And this is an issue where
00:30:58
people come back really immediately defensive when you
00:31:00
raise this topic. And often you're best off not
00:31:03
addressing their really, really stubborn beliefs and just
00:31:07
tackling it from different angles, like talking about, like,
00:31:10
reasons to do with the climate that we should be concerned
00:31:12
about factory farming and so on. One thing I wanted to actually
00:31:15
ask you about with regards to factory farming and anchor
00:31:17
beliefs. Yeah, I think there is a
00:31:20
particular anchor belief that gets, what's the word.
00:31:23
I think it's a particular anchor belief that people feel is under
00:31:25
threat when you raise the issue of factory farming because most
00:31:28
people love animals and hate to see them suffer.
00:31:30
And yet we also, many of us know that most animal products come
00:31:33
from factory farms. So animals experience
00:31:35
significant suffering. And my theory is that factory
00:31:38
farming threatens people's anchor belief that they're
00:31:40
fundamentally good and rational people because it makes it
00:31:42
salient for them that they're complicit in animal cruelty,
00:31:45
which isn't very good. And like how they treat, you
00:31:48
know, a pig versus a dog is inconsistent for arbitrary
00:31:50
reasons. And that's not very rational.
00:31:52
So yeah, I see it as threatening an anchor belief.
00:31:55
But, you know, for our work, we don't have the option of avoiding this
00:31:57
topic and these beliefs because like, it's our, it's our job to
00:31:59
try and motivate action on on this issue.
00:32:02
So well, but yeah, I guess. But you do have the option
00:32:04
because you can choose which beliefs to target.
00:32:07
And I mean, I agree. I agree with you.
00:32:08
I think a lot of people have an anchor belief around like I'm a
00:32:10
good person, I'm a thoughtful person, like I can, I make
00:32:13
considerate choices, but also my friends and family are good
00:32:16
people. And so you're actually, if you
00:32:18
tell them that it's like they're doing this absolutely horrendous
00:32:20
behavior. Well, not only are you telling
00:32:23
them they're bad, you're telling them that their friends and
00:32:24
family are bad and their community is bad.
00:32:26
And you know, it's like, I think it's like multiple anchor
00:31:28
beliefs are being challenged. But if your goal is to just get
00:32:30
a difference in behavior, then I think this way of thinking can
00:32:32
be useful. How do you get the behavior to
00:32:34
change without changing the anchor belief?
00:32:36
Because with the anchor belief, you're going to be in for an
00:32:38
incredibly difficult time if you're trying to change the
00:32:40
anchor belief. So what's the what's the path of
00:32:42
least resistance through their belief structure?
00:32:44
The best we've come up with is we have this, this thing we call
00:32:48
a compassion calculator, which lets people offset their dietary
00:32:51
impact on animals through donations.
00:32:53
And rather than forcing people to choose between changing their
00:32:56
diet, which they don't want to do, or continuing to live in a
00:32:59
way that's in tension with their anchor belief that they're good
00:33:01
and rationals they don't want to do, it lets them kind of have
00:33:04
their cake and eat it, too. They can keep eating meat.
00:33:06
But now their net impact on animals doesn't threaten their
00:33:09
anchor belief. Yeah, as you might expect, like
00:33:13
the idea of offsetting harms sits more easily with some
00:33:16
people than with others. Yeah, yeah.
00:33:17
Certainly we appeal to a certain type of person.
00:33:19
So I can, I can definitely see that.
00:33:21
I think a relevant thing here is another framework that I
00:33:24
developed which I call three reasons to believe.
00:33:28
And I think we often think about beliefs as being truth seeking
00:33:32
things, right? Like, well, why do I believe
00:33:35
something? It's because I got evidence that
00:33:37
it's true, but there's actually, I argue two other mechanisms in
00:33:41
the brain that cause us to believe things that are
00:33:43
unrelated to truth. So truth motivation is
00:33:49
one. The second motivation is
00:33:49
pleasure and pain. And I think pleasure and pain,
00:33:51
it reinforces all behavior and all kinds of action in
00:33:55
humans. And I think belief is one such
00:33:57
action. So I don't think we should think
00:33:59
of belief as special. I'm not arguing it's special.
00:34:01
I'm just saying it just is affected by pain and pleasure
00:34:03
just like everything else. You know, if you if you walk a
00:34:06
certain way and your ankle hurts, you're going to start
00:34:08
walking differently because you get, you know, reinforced by the
00:34:10
pain. Similarly, beliefs that cause
00:34:12
pain, I think we get reinforced away from them, and beliefs that
00:34:15
cause pleasure, I think we get
00:34:18
So I think that's a second motivation to believe that we're
00:34:20
just operating on this very low level reward punishment circuit.
00:34:24
So another sort of thing to keep in mind is that like some
00:34:27
beliefs are actually rewarding people as they have them, right?
00:34:30
And some, you might think, are punishing
00:34:32
them to have them. And then the third, the third
00:34:35
one, which I would say is the most speculative, is that I
00:34:38
think that when we are confronted with evidence,
00:34:42
there's something happening very quickly in our minds that's
00:34:45
something projecting forward, like what this means for us,
00:34:48
sort of like the few, what does this mean for our future?
00:34:50
And so for example, let's say you say to someone I know that
00:34:54
you care about not harming animals, but just so you know,
00:34:57
when you buy a hamburger, that might be an animal that was
00:35:01
hurt, you know, suffered a lot in the process of making the
00:35:03
hamburger, for example. I think that their brain is like
00:35:05
projecting forward. What does it mean to believe
00:35:07
this thing? And then they're going to
00:35:08
experience emotions based on the prediction of that projection.
00:35:11
So like, that prediction might be, if I believe this, I might
00:35:15
have to stop eating hamburgers and I love hamburgers, right?
00:35:18
And then they might have an emotional reaction to that
00:35:20
prediction. And all I'm arguing is
00:35:22
these are all very fast. It's not like they're having
00:35:24
like the sequence of thoughts consciously in their mind.
00:35:26
They just suddenly feel stressed out, right?
00:35:28
Yeah, yeah, I think those are both really good points.
00:35:31
I can see them coming into play with regards to the issue of
00:35:34
factory farming. So the, the second one about
00:35:37
like pleasure and pain, you know, the belief that factory
00:35:41
farming is a really important cause in the world that matters
00:35:44
a lot is quite a painful one to come to terms with because it
00:35:47
means that there are billions and billions of animals that are
00:35:49
having horrible lives. That makes your whole view of,
00:35:51
of, you know, how much good there is in the world look very
00:35:53
different. So understandably, and I think
00:35:55
part of the issue is that people don't feel like there's anything
00:35:58
good they can do about it. And so it's a particularly
00:36:00
unappealing problem to believe is real, which is why I think
00:36:03
it's important that we offer them practical ways that they
00:36:06
can make a big difference, for example, through donating.
00:36:08
And then on your your more speculative point, this doesn't
00:36:11
sound that speculative to me about people projecting forward.
00:36:13
Yeah, One way I've seen this come into play is people might
00:36:16
try and deny that, for example, certain animals are capable of
00:36:21
feeling pain at all, not because they care in particular about
00:36:25
that belief, but because they're projecting forward.
00:36:26
Or if they were to take that belief seriously, what
00:36:28
implications would that have and how would they feel about those
00:36:31
implications? And so rather they kind of stop
00:36:33
it, stop it at that point, than concede that first one and then
00:36:38
have to deal with the ramifications.
00:36:38
So there's some really good. Yeah.
00:36:39
That's why I think, often, when that kind of thing challenges
00:36:41
someone, what's going to often happen is
00:36:44
that they're going to, they're basically they feel pain like at
00:36:48
the thought of believing that thing, and then they're projecting
00:36:50
forward that's going to have negative consequences for them.
00:36:52
So they're starting to feel negative emotions also.
00:36:54
And then they're going to be like, OK, what do I do to make
00:36:57
this go away? And then they reach for like the
00:36:59
first thing they can think of that makes it go away.
00:37:00
Like, well, I don't even believe that they feel pain at all or
00:37:02
whatever it happens to be. Yeah.
00:37:04
And then suddenly they feel better.
00:37:05
Their pain goes away. The emotions, you know,
00:37:09
lessen, there's a sort of resolution. And then they're like, oh, phew,
00:37:11
OK, cool. And so if alternatively, there
00:37:13
was an easy thing they could reach for to make them feel
00:37:15
better that also had a positive impact on the problem, then we
00:37:18
would be in a kind of virtuous cycle where the problem
00:37:21
might actually get solved. Well, yeah, compare that, for
00:37:23
example, to someone who maybe they eat eggs occasionally and
00:37:28
they also have easy availability of more humane eggs.
00:37:32
And you were to say to them, hey, you know, there's this other
00:37:35
brand of eggs that like, they treat the chickens like, really
00:37:37
well and like they have much better welfare.
00:37:40
I know you care a lot about animals.
00:37:41
So yeah, I just want to point that out because if you just
00:37:44
were to swap from buying those eggs to these other eggs, it
00:37:46
would actually, like, really make the lives of the chickens
00:37:48
way better. Like, that is much more the sort
00:37:50
of thing someone can believe because then they could be like,
00:37:52
it's not, you know, projecting forward.
00:37:53
It's not a huge inconvenience. Maybe it's like a very minor
00:37:55
inconvenience. And they don't even eat
00:37:57
eggs that often anyway. And it can actually allow
00:37:59
them to think of themselves as a positive person
00:38:01
by agreeing, and it actually can make them feel good about
00:38:03
themselves. So they could actually have
00:38:05
pleasure rather than pain. Now, I'm not saying they would
00:38:07
do it, but I think you're much, much less likely to have
00:38:09
them suddenly try to rationalize.
00:38:11
Absolutely, yeah. We want to try and provide
00:38:13
people ways to help that are
00:38:17
aligned with their anchor belief that they are good and
00:38:19
rational people. But I mean, on this belief, I
00:38:23
had come to conclude that the belief I'm a good and rational
00:38:26
person was kind of a universal anchor belief.
00:38:28
But you pointed out in your work that many people actually hold
00:38:31
the opposite belief, like I'm worthless.
00:38:33
So yeah, I just, I wonder whether there are any universal
00:38:37
anchor beliefs at all. Universal is tough.
00:38:39
I guess humans are really different from each other.
00:38:40
But when you say near universal, there might be things like,
00:38:44
I am an independent entity of some sort, right?
00:38:47
Like something like that really low level.
00:38:48
But then you have Buddhists who say they don't believe that, you
00:38:50
know, So it's like they say, oh, I'm just, you know, all
00:38:53
consciousness or the same consciousness or whatever.
00:38:55
Yeah, but there are probably some really, really low level
00:38:57
things we don't even think about.
00:38:59
We take them so for granted. You wouldn't even question them
00:39:01
unless, like, a meditation teacher
00:39:03
challenged you on it. Yeah, yeah.
00:39:05
So to take us in a bit of a different direction now when it
00:39:08
comes to this podcast's core theme of getting better at changing
00:39:11
our minds both at the individual and societal level, you don't
00:39:14
just have theories, but you also have these free tools on the Clearer
00:39:17
Thinking platform, like the Belief Challenger tool and so
00:39:19
on. Could you talk through an example of
00:39:20
how one of these tools actually works to help people change
00:39:23
their beliefs? Yeah, so some tools work by
00:39:26
teaching you kind of conceptual information that can help you
00:39:29
when you're thinking about the world.
00:39:30
So we have a tool called the Nuanced Thinking tool, and it
00:39:34
teaches you about different strategies for nuanced thinking
00:39:36
and ways that nuanced thinking goes wrong, and it helps you
00:39:39
practice applying them. So that's more like the
00:39:41
concept level, teaching you useful concepts you can then
00:39:44
apply. But then we have more direct
00:39:46
ones like our productive disagreements tool where you
00:39:49
actually get to work on learning about what makes a
00:39:52
disagreement tend to go better. And then you get to practice.
00:39:54
And we actually have in that tool, we have a little AI that
00:39:56
you can actually practice with. So once you've learned the
00:39:58
ideas, you can go start trying them with the AI and, you
00:40:02
know, just getting a feel for what it's like.
00:40:04
So that one's more practical and,
00:40:06
you know, practice based. And what's the evidence that
00:40:10
these tools work? It's tough because any one of
00:40:13
our tools, so we have over 80 different tools, they typically
00:40:15
have different outcomes that they're targeting.
00:40:17
Like they usually don't have that much overlap in outcomes.
00:40:19
And running randomized controlled trials is very expensive
00:40:21
and time consuming. So we'll get a grant to run a
00:40:23
randomized controlled trial on a particular tool, which is, I
00:40:25
think the best way I know of to like check that it does the
00:40:27
thing, but it's very rare. So like for example, we built a
00:40:30
habit formation tool called Daily Ritual to help you form
00:40:32
new healthy habits. And we were able to run a
00:40:34
randomized controlled trial on it. And we demonstrated that it
00:40:36
actually helped people stick to their habits over the next, I
00:40:38
think it was 8 weeks. That's like, you know, best case
00:40:41
scenario when we get to do that. So most of our tools we haven't
00:40:43
been able to test in that way. I think that in terms of
00:40:46
applying this kind of stuff in life, my view is that reading an
00:40:51
article is substantially less likely to get turned into action
00:40:54
than if you get to practice. And so that's why we focus in
00:40:57
our tools on like getting you to actually apply the concept in
00:40:59
the tool and practice in the tool.
00:41:01
So I do think that that is evidence based as a way of
00:41:03
like increasing the likelihood you apply it.
00:41:05
But I still think not very much is known about how likely are
00:41:08
people to apply it, and it would be cool to have better evidence
00:41:11
on that. Yeah.
00:41:12
OK. So RCTs in the rare cases that
00:41:14
you can have them, and then otherwise just basing the
00:41:16
tools on well-evidenced mechanisms for changing
00:41:19
minds. Yeah.
00:41:20
And methods, you know, that we think are relevant from
00:41:22
education and, you know, also just common sense, but things
00:41:26
like, you know, reminding them of what they learned, that kind of
00:41:29
thing. And as I understand it, in the
00:41:31
cases where you do have empirical evidence on whether
00:41:33
the tools work, self report, like people telling you
00:41:35
afterwards whether the tool changed things, is one of the main
00:41:38
forms of evidence. And in a recent episode of yours
00:41:41
where you address criticisms, you respond to the criticism that you
00:41:44
rely on self report by explaining why actually self
00:41:47
report is still one of the best tools that we have.
00:41:48
Can you give a quick recap of that point?
00:41:50
Yeah, sure. And this is an issue
00:41:52
where, OK, let's say we were going to run a randomized
00:41:55
controlled trial on, let's say, our nuanced thinking tool.
00:41:56
How do you even measure the outcome of, like, being a more
00:41:59
nuanced thinker? It's tricky.
00:42:00
How do you design an accurate measurement of that?
00:42:03
And often people want to fall back on self report, you know,
00:42:06
like asking people questions about themselves or about their
00:42:08
thinking or whatever. It's definitely true that some
00:42:11
things people will not self report accurately on.
00:42:13
For example, let's say I asked you like tell me, you know, one
00:42:17
week ago on Friday, what exactly did you eat?
00:42:19
Like if you ask people that they're not going to report very
00:42:21
accurately, their memories are just not that good usually
00:42:23
unless they like eat the same thing every day or for some
00:42:26
reason always do a certain thing on Fridays.
00:42:27
Or it was particularly notable. But generally they'll be really
00:42:30
bad at reporting it. Why is that?
00:42:31
Because they don't have perfect memories.
00:42:33
So that's an example of something people can't report on
00:42:35
accurately. But there are other things
00:42:36
people can't report on accurately.
00:42:37
People tend to struggle to report accurately on things that
00:42:40
would make them look bad to themselves if they actually
00:42:42
believed it about themselves. So like, if you ask people,
00:42:45
how rational are you? People tend to put themselves on
00:42:48
the high end, right? If you ask people, are you good
00:42:51
at making decisions? They tend to rate themselves highly, right?
00:42:53
So those are the kinds of things you can't trust.
00:42:54
Also because of the way like our egos work and so on.
00:42:57
For a lot of things, I think self report actually does work
00:43:00
quite well. Like let's say anxiety, you ask
00:43:02
people how anxious they feel, how worried they feel, etcetera.
00:43:04
That actually seems to do quite a good job of measuring anxiety.
00:43:07
Now, are there better ways of measuring anxiety?
00:43:09
You know, I think a lot of people assume that there are,
00:43:11
but I would argue that I don't think there are.
00:43:13
I don't think we know the better ways.
00:43:14
Like what do you actually do? Well, you could hook someone
00:43:17
up to, like, a galvanic skin response meter that measures sort
00:43:20
of the micro sweating, to try to get at their anxiety, or get their
00:43:24
heart rate or get the respiratory rate.
00:43:26
But the problem is all of those things are influenced by many
00:43:28
different factors other than anxiety.
00:43:30
So how do you extract out the anxiety part of that?
00:43:32
It's actually really, really hard.
00:43:34
And how good a predictor of anxiety are they
00:43:36
anyway? Well, you know, if you see an
00:43:38
attractive person, your galvanic skin response might fire, but
00:43:41
does that mean you're anxious? No, not necessarily.
00:43:43
So it's really tricky. And then if you were to try to
00:43:46
calibrate those, what would you calibrate them against?
00:43:48
You'd probably end up calibrating them against self
00:43:50
report, like, because what else are you going to use to decide
00:43:53
if the thing works? So yeah, I think in
00:43:55
many cases, it's not that self report is flawless,
00:43:57
but a lot of times it is our best
00:43:58
measure that we know of. And you've also mentioned that
00:44:01
there are issues with self report, but you can often
00:44:03
significantly mitigate these through experimental design.
00:44:06
But I wonder whether that's really happening with Clearer
00:44:08
Thinking's tools, because, for example, like social
00:44:11
desirability bias, if people are being asked at the end of the
00:44:14
module on nuanced thinking questions about how nuanced their
00:44:17
thinking is, they know what you as the experimenter want them
00:44:21
to say. How do you avoid that?
00:44:23
Well, if you think about why would people say
00:44:26
that they, let's say, improved when they didn't, right.
00:44:29
Sometimes they might do it because they think, let's say
00:44:31
they're being paid. They might think if the
00:44:32
researchers don't like them, they won't pay them, or something like
00:44:34
that. That would be pretty extreme, but they might think that. Or
00:44:36
they might feel bad, like they don't want to make the
00:44:39
researcher feel bad. Or they might just be very like
00:44:41
conformist, and they think that the researcher actually
00:44:44
really wants them to say that they liked it.
00:44:46
And so they're just trying to do what they think
00:44:48
the researcher wants them to do, etcetera.
00:44:49
What we find is that in anonymous online reporting, these things
00:44:53
are way less of a big deal than they are in things like
00:44:56
interviews. Like I think in person
00:44:58
interviews are very dangerous for exactly these reasons.
00:45:00
They don't want to make you feel bad.
00:45:02
You know, the respondent doesn't want to make you feel
00:45:04
They want to give you what you want, etcetera, etcetera.
00:45:06
But if it's anonymous, we find people are often very
00:45:08
brutal. You know, they get, you know,
00:45:10
feel protected because they know you're not going to find out
00:45:12
who they are. We also tend to ask people,
00:45:14
like what's your brutally honest opinion, etcetera.
00:45:16
And we also find that people don't always like the
00:45:19
things we do. So, you know, it's not
00:45:21
like we always get good reviews.
00:45:23
One of our strategies, which is not as good as a randomized
00:45:25
controlled trial by any means, but our general policy that we
00:45:28
typically follow is we'll test the tool on 40 random Americans,
00:45:32
which we recruit through the platform Positly, which helps you
00:45:35
recruit participants. And then we'll ask them a bunch
00:45:38
of standardized feedback questions and we'll compare it
00:45:40
against the same questions for our other tools.
00:45:42
So it gives us kind of a benchmark.
00:45:44
And then we'll also ask a bunch of qualitative questions about
00:45:47
what do they like, what do they dislike, how could it have been
00:45:49
better? How could it have been more useful,
00:45:50
that kind of stuff. And we use that to improve the
00:45:52
tool. And then we'll do another round
00:45:53
of testing on our beta testers who are like our core audience
00:45:56
that have signed up to be beta
00:45:57
And then we'll do the same thing.
00:45:58
We'll have them take the tool, rate it on quantitative features
00:46:01
and also give qualitative feedback, and just improve it.
00:46:03
Now it's not as good as a randomized controlled trial because
00:46:05
it doesn't mean that it necessarily produces the long
00:46:07
term outcomes you hope for. But it does catch a lot of
00:46:09
issues where people like don't like the thing, don't understand
00:46:12
the thing, are unconvinced that it was helpful to them,
00:46:15
etcetera. So it helps you address a bunch
00:46:18
of issues, but not all the issues you can have.
00:46:20
OK, so I think as far as online tools designed to do something
00:46:24
as hard as change people's minds, Clearer Thinking is
00:46:27
definitely on the more rigorous, evidence based side.
00:46:29
But another critique that one might level against them is that
00:46:32
they seem to primarily focus on the kind of rational level,
00:46:36
engaging people's deliberative reasoning to help them form
00:46:38
better beliefs. But you know, another guest of
00:46:40
ours, Rick Hanson, pointed out how other factors play a huge
00:46:42
role in our openness to changing our beliefs.
00:46:44
Like, you know, things like whether we're tired or hungry or
00:46:46
stressed when our beliefs are challenged or our emotional
00:46:49
state and our threat detection systems that assess whether new
00:46:52
information feels safe or dangerous.
00:46:54
And these seem to involve more kind of primitive parts of the
00:46:56
brain that evolved much earlier than our rational faculties.
00:46:59
So what would you say to the critique
00:47:01
that Clearer Thinking is too focused on just one mechanism
00:47:03
involved in how we change our beliefs, when belief formation
00:47:05
is actually this much more complicated process.
00:47:07
So what would it look like to do
00:47:09
the kind of thing this person describes? Like,
00:47:11
we make a tool to help you relax, and then we hope that by
00:47:14
being more relaxed, eventually in your life you have more
00:47:17
accurate beliefs. Yeah, I suppose within the
00:47:20
constraint of it being a tool that lives online,
00:47:22
that makes it a lot harder than if you could
00:47:24
control all the variables, have there be like a space or a
00:47:27
room, I don't know. But I suppose there could be, and I'm
00:47:30
thinking on the fly here, an attempt to pay
00:47:32
attention to how the person is feeling, and maybe ask them,
00:47:36
like, are you hungry? If so, go eat and come back.
00:47:40
Are you tired? If so, come back another day.
00:47:42
Well, one thing we do teach with
00:47:44
regard to disagreements is, you know, if
00:47:48
there's just like, you know, three things you want to focus
00:47:50
on in a disagreement, I think one of them is going to be the
00:47:52
emotional state of the other person.
00:47:54
And if the emotional state starts to become significantly
00:47:57
negative, like they start to, you know, sort of
00:48:00
seem stressed out, or they start to seem angry, then
00:48:03
your goal should switch from like resolving the disagreement
00:48:06
or changing their mind or whatever your goal was to
00:48:09
working to get them into a more positive or at least neutral
00:48:12
emotional state. So I think that's really
00:48:14
actually key and important. But still, we teach that by, you
00:48:18
know, teaching it, not by trying to change someone's emotional
00:48:21
state ourselves and then hoping that has the secondary
00:48:23
effect of making them better at the thing.
00:48:25
Does that make sense? Yeah, yeah, yeah.
00:48:27
That does make sense. And so even if the tools work,
00:48:30
they only seem like they're valuable insofar as people will
00:48:32
use them. But how much are these
00:48:34
tools actually used? Right.
00:48:35
So there's different kinds of ways you could talk about being
00:48:38
used. You could say, well, how many
00:48:39
people are using our tools, and then you could say, well,
00:48:42
how many people take the idea and then use it in the real
00:48:44
world. It's very hard to know how many
00:48:46
people go use it in the real world.
00:48:48
But some of our tools have been quite popular.
00:48:51
Like for example, our most popular tool ever is our How
00:48:53
Rational Are You test, which I believe more than half a million
00:48:56
people have completed. Well, OK, and it seems like
00:48:59
there's a potential catch 22 here though, where like the
00:49:02
people most likely to use belief challenging tools are probably
00:49:05
those who need them least. So how do you think about
00:49:07
reaching the people who are more resistant to examining their own
00:49:10
thinking? I think of our target audience
00:49:12
as something like the top quarter of like most reflective
00:49:16
people, like top 25%, but not, like, top 1%, something like that.
00:49:19
I do think there is an audience that is just not going to be interested
00:49:22
in what we're offering, not engaged with it, not likely to
00:49:25
use it. But I do think of our audience
00:49:27
as still being pretty broad. And you know, the way you try to
00:49:30
address that is you try to make the tools more fun, you try to
00:49:33
make them more interesting. You try to write about them in
00:49:36
ways that show people the value of them and the benefits of
00:49:39
them, rather than just, you know, assume that they already
00:49:42
know the benefits, that kind of thing.
00:49:43
And so if we think that maybe it's something like 25% of the
00:49:47
population that these tools are for, I'm curious what
00:49:50
implications that has for Clearer Thinking's kind of
00:49:52
broader project of helping people form accurate beliefs
00:49:56
like what does this say about our capacity to improve
00:49:59
thinking in society at large?
00:50:02
How much progress do you think is possible?
00:50:03
Well, possible is a broad word. I mean, I think if we were able
00:50:08
to, and I don't mean we as in just Clearer Thinking, but if we
00:50:11
as a society were able to improve people's
00:50:14
critical thinking substantially, I think
00:50:17
it could have a lot of tremendously positive downstream
00:50:19
effects. It would have positive effects
00:50:21
on politics. I think the way people vote and
00:50:23
the way they think about politics, and also what
00:50:26
policies politicians even put up in the first place. I think it
00:50:30
would have effects on businesses like the way people run
00:50:32
businesses, what kind of products people make.
00:50:34
I think it would affect nonprofits, like how they actually try to
00:50:37
make change in the world and what kind of change they go
00:50:39
after. So yeah, I think of it as both
00:50:42
useful at the individual level. It can help you make your life
00:50:44
better. But also I think there are a
00:50:46
lot of potential downstream positive effects, including
00:50:49
ones like even people taking seriously risks to society. You
00:50:53
know, it's like, are we really prepared for the next potential
00:50:56
COVID? No, I don't think we are.
00:50:58
Well, that's ridiculous. Like we should be, insofar as
00:51:00
we're not prepared for the next pandemic.
00:51:02
I think that's a huge failure in critical thinking.
00:51:05
Yeah. I mean, what I hear you to
00:51:06
be saying here is that even if thinking tools are only really
00:51:10
for 25% of people, if you were to influence how clearly 25% of
00:51:13
people think, that could still be really
00:51:15
transformative for society at large.
00:51:16
Is that right? Absolutely.
00:51:18
I think it could still be very transformative, but obviously
00:51:21
it's not likely we're going to actually influence 25% of the
00:51:23
entire population. But you know, it's something to
00:51:25
aspire to. Some critics might argue that
00:51:28
focusing on individual rationality is futile because
00:51:30
our information environment, the social media algorithms,
00:51:33
partisan news sources, etcetera, is structured in a way that
00:51:36
promotes unclear thinking. Do you think that individual
00:51:39
level interventions can compete with these sort of structural
00:51:41
forces? Well, you might say that a bunch
00:51:44
of sort of false information that we get or, you know, bad
00:51:48
ideas people get come from sources like these, and that's
00:51:51
probably true. But I don't really think that
00:51:53
really says very much about what works to help people avoid like
00:51:58
misinformation, avoid harmful beliefs, things like that.
00:52:01
For example, I think a pretty powerful idea is that if you
00:52:04
make people better pattern recognizers of sort of faulty
00:52:07
kinds of thinking, they start to notice it, and then they're
00:52:10
reading Twitter or they're on YouTube and someone makes an
00:52:13
argument, and they just have that thought, like, that's not a very
00:52:15
good argument. You know, you
00:52:17
can begin to inoculate yourself to some extent, or, you know,
00:52:21
you can begin to think, Oh, ha, this influencer that I really
00:52:25
trust said this thing that I don't think is very likely
00:52:27
to be true. Let me go investigate that.
00:52:30
And if that's not true, what does that say about the other
00:52:31
things? You know, how should that
00:52:33
cause me to change my view potentially on this influencer
00:52:35
in general? So I think even
00:52:38
if a significant amount of the problem comes from non
00:52:41
individual effects, it doesn't mean that it's useless to work
00:52:44
at the level of the individual. That being said, it could also
00:52:47
be good to work at the level of, you know, trying to change
00:52:49
social media. I'm not saying it's not.
00:52:50
I'm just saying I don't think 1 implies the other.
00:52:52
Yeah, yeah. I could imagine it being the
00:52:54
case that the structural forces are powerful enough that
00:52:57
even if Clearer Thinking reached its complete potential in terms
00:53:00
of scale and some organizations were doing similar sorts of
00:53:03
things, it might not be enough, but we wouldn't know.
00:53:06
So it certainly seems worth trying.
00:53:08
Yeah. Well, yeah.
00:53:09
I mean, the reality is a spectrum, right?
00:53:11
It's not a binary, you know, you can nudge society
00:53:15
incrementally. Yeah.
00:53:16
I mean, I was just going to ask whether when it comes to this
00:53:19
broader mission of helping people think more clearly,
00:53:21
whether there are certain domains where you think this is
00:53:23
more achievable and more important than others.
00:53:26
For instance, might it be easier to help people think more
00:53:28
clearly about their personal life decisions than politics?
00:53:31
Yeah. And you know, I think this goes
00:53:33
back to our previous discussion. Politics often does involve
00:53:36
anchor beliefs, so it gets really tricky.
00:53:39
It also involves a lot of social beliefs, like beliefs
00:53:42
that we sort of adopt largely because the people around us
00:53:45
adopt them rather than because we heard a lot of good
00:53:48
arguments or saw a lot of good evidence.
00:53:50
There's very high clustering of like political beliefs among
00:53:53
communities. So they can be particularly
00:53:55
sticky, particularly hard to change.
00:53:57
I think when people have real skin in the game, it
00:54:00
tends to be easier to change a belief because, you
00:54:03
know, if we think about those three reasons to believe we
00:54:05
talked about before, if getting the truth on it actually
00:54:08
really matters to how things go for you, then that aspect,
00:54:12
the truth aspect might have more weight in the equation, right?
00:54:15
Like, say you have to decide which of two
00:54:18
procedures to undergo for this disease
00:54:19
you have. Like, yeah, you actually care about getting the
00:54:22
right answer probably a lot, right?
00:54:24
Like, so, you know, on the individual level, I think
00:54:27
there's certain kinds of decisions that have really large
00:54:29
impacts on people's lives where it can be common to have a lack of
00:54:34
critical thinking interfere with the decision.
00:54:36
And examples would be like around health, around like
00:54:39
certain large scale beliefs of your community, some of
00:54:43
which might be detrimental but which you might be tempted to pick up.
00:54:46
And then if you're talking about like impact on society, yeah, I
00:54:48
think if people were more nuanced and better critical
00:54:50
thinkers, it might reduce the chance that harmful actors
00:54:53
are able to manipulate people, that, you know, bad ideas get
00:54:57
moved into the popular, you know, center and so on.
00:55:01
OK. Well, I know we're running short
00:55:04
on time, and there's one thing I wanted to ask you about before
00:55:06
we wrap up, which is that you seem unusually good at navigating
00:55:09
disagreement in a way that's constructive.
00:55:11
Is this something you're just naturally good at, or are there
00:55:13
some principles and insights that you can share to help us
00:55:15
all kind of get a bit better at this?
00:55:17
Well, I tend to not have anger and I think that helps because
00:55:21
without anger, you know, you remove one of the big
00:55:25
things that can go wrong in a disagreement.
00:55:29
But yeah, I just naturally almost never experience
00:55:32
anger, only in really extreme situations.
00:55:34
So I do experience it, but yeah, it's just nice.
00:55:35
This is just a thing I was born
00:55:36
I do think anger can be useful, but I think a lot of times it
00:55:39
causes problems. But yeah, there are absolutely
00:55:41
things we can learn to do better in disagreements that can help
00:55:44
them go better. And I think the first thing
00:55:47
I would say is really make sure you understand why you are
00:55:49
having this disagreement. And like, you know, if you're
00:55:52
arguing with your uncle about politics over the dinner table,
00:55:55
like what's your goal in this situation?
00:55:57
Is it just to get along with your uncle just to have a nice
00:55:59
time at dinner? Or is it you think your uncle
00:56:02
believes something that's like bad for your uncle?
00:56:04
You think your uncle believes something that's bad for
00:56:05
society? You know, once you get
00:56:07
clear on your goal, it can actually change the way you act
00:56:09
a lot. You could realize that the
00:56:11
way you're behaving might be inappropriate to your
00:56:13
goal. The second thing I would say is, and this
00:56:15
is an idea from Julia Galef, are you in soldier mindset or are
00:56:18
you in scout mindset? Soldier mindset is where you're
00:56:21
basically just trying to prove that you're right.
00:56:22
You don't care about what's actually true, you're not actually
00:56:24
trying to figure out the truth, whereas scout mindset is like
00:56:26
you're actually open to knowing what's true.
00:56:28
You're trying to figure out what's true.
00:56:30
If you go into a disagreement in soldier mindset, then I think
00:56:33
you're going to have problems because basically you're not
00:56:36
even open to changing your mind. Why on earth would the other
00:56:38
person be willing to be open to changing their mind?
00:56:41
Right? And I think things tend to go a
00:56:42
lot worse. So I would ask yourself,
00:56:44
am I in scout mindset? Is the other person in scout
00:56:47
mindset? If the answer to either of those
00:56:49
is no, then you got to do something about that first.
00:56:51
How do I get the other person to be?
00:56:53
How do I help move the other person to scout mindset?
00:56:55
How do I get myself into scout mindset?
00:56:56
That often is a precursor to having a really good
00:56:59
disagreement. Third, and I mentioned this before, I would
00:57:02
track the emotional tone of the conversation.
00:57:04
If you notice the other person starting to get upset or angry
00:57:06
or anxious, or yourself, that's the thing to
00:57:09
switch to focusing on. Like, take a step back from the
00:57:12
disagreement part and focus on the emotion part and get back to
00:57:15
neutral or better than neutral before
00:57:17
moving back into the disagreement part, because it's
00:57:20
just extremely unlikely there's going to be a real change if the
00:57:23
person's angry at you or anxious about the things you're saying,
00:57:26
etcetera. Yeah, with your second point,
00:57:27
The Scout Mindset, as a book, gets a lot of love on this
00:57:29
podcast, very, very relevant to the theme.
00:57:31
Your first point was regarding getting clear on what your goal
00:57:34
is, but I've also heard you share advice, which I think is
00:57:37
good, about trying to frame the shared goal of a disagreement
00:57:41
that you have, the goal you share with the person you're
00:57:43
disagreeing with as being to try and map the disagreement, rather
00:57:46
than win. I'm not really sure how you
00:57:47
achieve that. Yeah, that's something that I
00:57:49
like to do as an exercise, but it's not something that I think
00:57:52
you necessarily will be able to do in like a real disagreement.
00:57:55
In a real disagreement you may not be able to shift to say, well,
00:57:58
instead of focusing on the disagreement, let's focus on whether
00:58:00
we can agree on what the points of disagreement are.
00:58:03
That might be a bit of an awkward move, but I like to do that with
00:58:05
people I know and be like, hey, let's work together, see if we
00:58:08
can figure out why we disagree.
00:58:09
I do it on my podcast sometimes, on these disagreement episodes.
00:58:12
So I think it's a fun exercise, but not necessarily a strategy,
00:58:15
but there's a related strategy that you can use I think is
00:58:17
actually way underutilized, which is to really at the
00:58:21
beginning of the disagreement, spend a significant amount of
00:58:23
effort to really try to fully understand their point of view.
00:58:26
Ask them a lot of questions. Don't try to prove them wrong.
00:58:29
Just ask them a lot of questions.
00:58:30
Try to deeply understand what they believe.
00:58:32
And then once you feel like you really understand that, show
00:58:34
that you understand it, for example, by like repeating it
00:58:38
back and say, did I get this right?
00:58:39
And then once they've, like, said, yeah, yeah, you got it.
00:58:42
You nailed it. Or the best thing you can get is
00:58:45
like, oh, wow, you explained that so well, it's better than I
00:58:47
would have said it, or whatever. But like even just saying, yeah,
00:58:49
that's, that's it. That's what I believe.
00:58:51
Then you could say, hey, is it OK if I explain my
00:58:53
perspective? You've already
00:58:54
listened to them. Now you understand what they
00:58:56
believe, and now you try to get them receptive to hearing what
00:58:58
you believe. You try to explain it, and then
00:59:00
you confirm that they actually understand what you're saying.
00:59:02
That is a really good starting point.
00:59:04
You both understand each other and now you can actually work on
00:59:07
the disagreement itself. Yeah, I've seen some
00:59:10
moderated debates that start that way, with steel manning,
00:59:12
I think that's the term for it, steel manning one another's views.
00:59:14
I think it's a good idea.
00:59:16
It's subtly different than steel Manning.
00:59:18
Actually, Steel Manning is trying to say the strongest
00:59:22
version of the person's view in your own opinion, like what do I
00:59:25
think the strongest version of your view is?
00:59:26
That would be Steel Manning. This is actually just trying to
00:59:29
very carefully listen and pay attention.
00:59:31
It's trying to say what you actually believe and why you
00:59:34
actually believe it, according to you, rather than trying to morph
00:59:37
it into a stronger version. OK, that makes sense.
00:59:41
I feel like we glossed over you saying that you don't experience
00:59:44
anger. What about frustration? Do you experience that?
00:59:46
I do experience frustration, yeah.
00:59:48
I have a theory that many more people are missing emotions than
00:59:52
is generally known. My view is there's quite a
00:59:54
large number of emotions, we just never really get into it
00:59:57
with most people. Like, do you feel emotion #27 or
01:00:00
whatever? So I think, I actually think
01:00:01
it's not that uncommon that someone will be missing one or
01:00:03
more emotions. And I think I'm pretty close to
01:00:06
missing 2 emotions, right? It's a spectrum.
01:00:07
So I don't think I have zero of them, but I think I'm pretty
01:00:09
close to missing anger and awe. I don't really experience them,
01:00:12
but I believe that for a lot of people, if you
01:00:16
actually went through each emotion systematically, you'd find that.
01:00:18
You know, it's not that uncommon to be missing at least one and
01:00:21
we're we're running some studies now to see if this is actually
01:00:23
true. That's really interesting.
01:00:24
Awe is one I wouldn't want to be missing.
01:00:26
It's an interesting question whether I would choose to be
01:00:29
missing anger, which would be nice, or, yeah, I don't even know.
01:00:32
I don't know that I would give up awe to spare myself anger,
01:00:34
even though anger is quite bad in my life.
01:00:37
Interesting. What are some other fundamental
01:00:40
emotions that you have observed specific people lacking?
01:00:43
Fear, I know people that don't experience fear.
01:00:46
Like one guy I know, he used to be sent to kill deadly snakes
01:00:50
when they were found in the place that he lived, and it just
01:00:53
didn't bother him at all.
01:00:54
Some people, I actually wonder whether they
01:00:58
have experienced happiness. I know that's odd, but there are
01:01:01
certain people that I've had serious doubt about
01:01:03
whether they're capable of experiencing happiness.
01:01:05
I think it's rare. I think any one of these things
01:01:07
is very rare. On not experiencing awe, have you
01:01:10
put this to the test, really put yourself in a lot of experiences
01:01:13
that other people find deeply awe-inspiring, and... nothing?
01:01:16
Yeah, the funny thing was I thought I experienced awe,
01:01:19
because I had misunderstood what they were experiencing for a
01:01:22
long time. This generally happens when
01:01:24
people have something unusual about their mind, where they
01:01:26
assume other people must be experiencing what they're
01:01:29
experiencing. So they map it onto the wrong
01:01:31
thing. Like someone I know who is
01:01:33
asexual thought that when people were describing people as like
01:01:36
being good looking in a sexy way, that they were just like
01:01:38
describing their aesthetics, right?
01:01:40
Like, oh, you know, the statue of David is
01:01:42
very aesthetically pleasing because it's like sort of the
01:01:44
closest concept that their brain had.
01:01:46
So it's the same for me. Like I thought, when there are, you
01:01:48
know, giant mountains in front of you and people
01:01:52
describe awe, that they were experiencing beauty, which I do
01:01:55
experience, but they're not. It's like, I mean, it's obvious
01:01:58
to you, but it wasn't obvious to me.
01:02:00
They're not describing it being beautiful.
01:02:01
That's just what I had mapped it onto.
01:02:03
Yeah, interesting. Yeah, interesting to learn how
01:02:05
different our minds are from one another.
01:02:06
I particularly fall prey to the similar mind fallacy, so hearing
01:02:11
examples like this is really valuable to me.
01:02:13
But yeah, I guess we only have a few minutes left.
01:02:15
So I wanted to ask you a final kind of traditional question
01:02:19
here, which is what's one belief
01:02:21
people would reconsider or change their minds about?
01:02:23
Yeah. The one that comes to mind for me is that I think often if
01:02:28
there's some belief that you feel like is really harmful, people
01:02:31
will jump to assuming that the people who have that belief are
01:02:34
either idiots or evil. And obviously there are
01:02:38
exceptions. Like, if you're talking about,
01:02:39
you know, Nazis that are killing people, yeah, OK, it's
01:02:43
reasonable to say that they're immoral, they're evil.
01:02:46
But the reality is, in most large groups, most people are not
01:02:49
evil. They're like, pretty normal.
01:02:51
And they're not stupid, you know, they're just pretty
01:02:54
normal. Like, so if there's a really
01:02:56
large group that believes something that you think is
01:02:57
really terrible, chances are you're not dealing with stupid
01:03:01
evil people. You're dealing just, like,
01:03:02
mostly with normal people. And I think that's very important
01:03:05
to accept. I think we want to believe that
01:03:08
there must be something wrong with them that makes them
01:03:10
believe it, but actually humans are just susceptible to
01:03:13
believing all sorts of things. Yeah, for sure.
01:03:15
That's a good one that I would like people to change
01:03:16
their views on as well. Fantastic.
01:03:18
It's been really interesting having this conversation with
01:03:20
you because so many of the concepts you write about
01:03:23
pertain to this question of how we change our minds and why
01:03:26
so often we don't. So thank you for sharing your
01:03:28
views with us. Good to chat with you.
01:03:30
Thank you for listening to Change My Mind.
01:03:32
We're a new podcast, so if you liked what you heard, consider
01:03:35
giving us a star rating on your podcast app of choice or
01:03:37
sharing your favorite episode with a friend.
01:03:39
Either way really helps us get the word out there.
01:03:41
Special thanks to Harrison Wood for editing and production
01:03:43
support. If you have any guest
01:03:46
suggestions or feedback, especially of the critical kind,
01:03:48
we'd love to hear from you. You can reach us at
01:03:50
hello@changedmymindpod.com.
