Why a techno-optimist started taking AI risk seriously (with Rufus Griscom)
Changed My Mind · October 29, 2025 · 01:12:19 · 66.93 MB


Rufus Griscom is a writer, podcaster, and longtime techno-optimist who has spent years interviewing the people building the future — from Bill Gates to Reid Hoffman to the founders of Anthropic.


For most of his career Rufus believed technology would overwhelmingly improve the human condition. But after dozens of conversations with the people closest to frontier AI, his view has shifted. He still believes AI could bring astonishing progress, from eradicating disease to ending repetitive labour, but now assigns a serious probability to catastrophic or authoritarian outcomes in our lifetimes.

We explore what changed his mind, why the AI “race dynamic” terrifies insiders more than outsiders realise, and how to stay sane when the stakes range from utopia to extinction.


Want more Rufus or his work?

🎙 Listen to The Next Big Idea podcast — interviews with the thinkers shaping the future
https://www.nextbigideaclub.com/podcast

📚 Explore The Next Big Idea Club — curated book picks by Malcolm Gladwell, Adam Grant, Susan Cain & Daniel Pink
https://www.nextbigideaclub.com


About the hosts

Thom and Aidan left boring, stable careers in law and tech to found FarmKind, a donation platform that helps people be a part of the solution to factory farming, regardless of their diet. While the podcast isn’t about animal welfare, it’s inspired by their daily experience grappling with a fundamental question: Why do people so rarely change their minds, even when confronted with compelling evidence? This curiosity drives their exploration of intellectual humility and the complex factors that enable (or prevent) meaningful belief change.


Thoughts? Feedback? Guest recommendations?

Email us at hello@changedmymindpod.com


00:00:03
I'm Thom, and you're listening to Changed My Mind, where

00:00:06
interesting people share their biggest changes of heart and

00:00:08
take us along their journey from first stance to completely new

00:00:11
perspectives. Today I'm joined by Rufus

00:00:23
Griscom. Rufus is a serial media

00:00:26
entrepreneur who now hosts the podcast The Next Big Idea, as well as

00:00:31
running the organization The Next Big Idea Club.

00:00:33
Rufus has talked to some of the biggest names in technology,

00:00:37
people like Bill Gates, Reid Hoffman, as well as some of the

00:00:41
leading thinkers and historians of our time like Yuval Noah

00:00:44
Harari. He's come to think that one of

00:00:47
the biggest challenges we face in the world today, as well as

00:00:50
one of the biggest opportunities is artificial intelligence.

00:00:55
Through these conversations, Rufus has profoundly changed his

00:00:58
mind about artificial intelligence and its role in

00:01:01
our future. He's come to understand this not

00:01:03
as just another technological change like the Internet, but

00:01:07
as something unique which poses very specific challenges

00:01:11
that humanity has never faced before, while also holding out the

00:01:14
opportunity for completely changing our world into

00:01:17
something perhaps approaching a utopia.

00:01:20
In this conversation, we talk as two lay people fascinated and a

00:01:24
little scared by artificial intelligence, and talk about how

00:01:28
we as lay people can engage with this change, try to understand

00:01:33
it, and try to navigate the world that is rapidly changing

00:01:36
under our very feet. Rufus, welcome to the show.

00:01:40
It is my pleasure to be here, Thom.

00:01:41
Thank you. So you describe yourself as a

00:01:45
techno-optimist. In our prep, you were

00:01:47
talking to me about some of the amazing changes in human welfare

00:01:51
that have come about because of technology, the way that even

00:01:54
some of the poorest of us live in ways that the kings of 1000

00:01:58
years ago couldn't imagine. And your own career has involved

00:02:01
really riding the wave of the Internet and building companies,

00:02:04
media companies that have been disrupting this area.

00:02:06
And then you talk to me about your conversations with the sort

00:02:10
of new wave of disruption from AI, and talking to the people at

00:02:14
the forefront of that change really changed your perspective

00:02:18
on what is going on with technology, how this next wave

00:02:23
of innovation is likely to play out.

00:02:25
So could you just give me a flavour of where your mind is at

00:02:27
with artificial intelligence today?

00:02:30
Yeah, I think I have always tended to be more of an optimist

00:02:33
in my life and a glass half full.

00:02:36
I've been very fortunate in my life and a lot of my optimism

00:02:39
has been rewarded. It's true that I am in what I

00:02:41
would describe as the Steven Pinker camp.

00:02:44
Steven Pinker famously wrote The Better Angels of Our Nature,

00:02:47
making the case that progressives are, for reasons I

00:02:52
don't fully understand, allergic to progress, or don't like this

00:02:55
narrative that the world's getting better.

00:02:58
But for most people, for most humans on planet Earth, life has

00:03:03
gotten a lot better in the last 50 years.

00:03:04
There's a lot of just sort of data out there that supports

00:03:07
that. And I think it's

00:03:08
fascinating that liberals in particular have so much

00:03:12
trouble with that idea. And I think it's partly a

00:03:14
misunderstanding of how we can most effectively inspire change,

00:03:18
right. I think the assumption of a lot

00:03:19
of liberals is, oh, the best way to inspire change is to convince

00:03:23
everybody that the apocalypse is upon us.

00:03:25
I don't agree with that. But yes, it's accurate to

00:03:28
say that I've historically been a techno

00:03:31
optimist. And I think the core thing that

00:03:33
I've changed my mind about, to go directly to the theme of this

00:03:37
podcast is that though I do think the next 10 to 20 years

00:03:44
will probably be much better than most people

00:03:48
expect. I'm deeply, deeply worried about

00:03:51
our next, let's call it 20, 30, 5,000 years as a species.

00:03:58
I've taken on a kind of darker view of the long term trend

00:04:03
lines specifically because of artificial intelligence,

00:04:07
existential risk, the way that AI and potential kind of

00:04:11
surveillance states converge. That's put me in an unusual spot.

00:04:14
It's a very unusual feeling for me to be among my friends and,

00:04:18
you know, to be actually the one with the darker long term view.

00:04:22
It's an adjustment, but it's as a result of dozens and

00:04:25
dozens and dozens of conversations with a lot of very

00:04:27
smart people, reading a lot of books, and doing a lot of thinking

00:04:30
about the topic. Yeah.

00:04:32
And one of the reasons I want to talk to you about this is

00:04:34
because you're not coming at artificial intelligence as a

00:04:38
complete sort of subject matter expert, but you're also in a

00:04:42
really interesting position with The Next Big Idea

00:04:44
Club. You've been interviewing

00:04:46
some of the people right at the front of this.

00:04:48
So people like Reid Hoffman, Bill Gates, people who are

00:04:52
absolutely immersed in this and really actually driving this

00:04:54
technology forward, some of the investors and people absolutely

00:04:57
at the forefront of making this happen.

00:04:59
What is your view of what's likely to be the story

00:05:04
of artificial intelligence?

00:05:06
Let's start with just the next few years.

00:05:08
One thing I have noticed is that there's a set of very common

00:05:13
beliefs among people who are technologists and insiders very

00:05:19
close to the building of the large frontier models, the most

00:05:23
powerful artificial intelligence platforms right now.

00:05:27
Generally, there's disagreement in that community about exactly

00:05:30
when we'll hit super intelligence and whether the

00:05:34
transition to super intelligence will be what people call a hard

00:05:38
take off or an intelligence explosion or whether it will

00:05:40
happen more gradually. But it's almost universally

00:05:46
assumed by the vast majority of people who are close to the

00:05:49
technology that AI is in the process of becoming

00:05:54
substantially more intelligent than a typical human.

00:05:57
I think we're already seeing it. I think people can only talk

00:06:00
about probabilities, right? When it comes to the

00:06:03
future, we're all humble meteorologists.

00:06:05
What's the chance of rain, 30%? None of us can talk with

00:06:08
certainty about the future. That would be absurd.

00:06:11
But we can talk in probabilities.

00:06:12
And I think that there are a lot of reasons why

00:06:15
people typically talk down the power of new technologies,

00:06:21
right? We saw this, people were

00:06:22
thrilled. I was there in '99,

00:06:24
2000 when the first dot-com implosion happened.

00:06:27
People were delighted, right? Because most people had the

00:06:29
experience of, oh, this whole Internet thing, its importance

00:06:33
has been exaggerated.

00:06:35
And a lot of people felt like it was foreign.

00:06:38
It was sort of disrupting their lives.

00:06:40
And also they were being left behind.

00:06:42
So there was almost a sense of jubilation when the dot-com crash

00:06:46
happened in 2000. And now, in the last few

00:06:49
months, there's been talk of, oh, an AI bubble, right?

00:06:52
That maybe AI is hitting a plateau.

00:06:55
We've run out of data to feed it.

00:06:57
There are all these limitations. The talk of the threat of AI

00:07:01
is all trumped up and exaggerated.

00:07:03
It's just a marketing scheme by Sam Altman and his friends.

00:07:07
This is all journalists. Smart people who write about what's

00:07:12
happening in the world for a living often, I think, suffer

00:07:15
from a little bit of resentment. Maybe this is an unfair

00:07:18
characterization, but a bit of a sense that, like, it's the job

00:07:22
of journalists to doubt the billionaires, to say what

00:07:25
they're getting wrong, to be skeptical, right?

00:07:27
They're supposed to be skeptical.

00:07:28
But also part of it is a sense of being left behind and not

00:07:31
wanting to be duped, right? And so what I would say is:

00:07:34
look at what's happened. Just a

00:07:37
couple years ago, AI was impressive at a lot of things.

00:07:40
Everybody was blown away by ChatGPT, but it was bad at math.

00:07:43
Well, now AI systems are winning math Olympiads.

00:07:46
They're doing incredibly sophisticated math and doing it

00:07:51
in a fraction of the time. Things that would take months

00:07:53
for humans take seconds or minutes.

00:07:54
So I think we should just assume technology is not getting more

00:08:01
powerful in a linear fashion, it's getting more powerful in an

00:08:03
exponential fashion. This for me is the big unlock,

00:08:06
something that I think is hard for people to grasp.

00:08:09
To try to be succinct: I think, let's say at

00:08:13
minimum, we'll continue to see incremental progress. Already,

00:08:17
I think AI is better at a very large percentage of

00:08:22
white-collar labour. We should expect that, certainly

00:08:25
at minimum to continue apace. That progress is likely to, when

00:08:30
embodied in robots, solve this kind of next level problem of

00:08:33
the dexterity of robots required to do human level manual labor.

00:08:37
The implications of that are astounding, that basically

00:08:41
effectively the whole physical labor market of humans could be

00:08:45
replaced by robots in a period of 10, 20, 30 years, which would

00:08:48
make the world unrecognizable. Many would say in a beautiful

00:08:52
way, right? There's a lot of potential

00:08:55
beauty where this takes us, but maybe I'll stop there.

00:08:57
That would be, I think, the minimum set of expectations.

00:09:00
You don't have to stretch to land at those conclusions.

00:09:04
Yeah. And I would love to get on to

00:09:06
exploring what that piece might mean and then also where that

00:09:10
could then lead us in the next 30 to 40 years.

00:09:13
But there are a couple of things there that you said that I wanted

00:09:16
to touch on a little bit more, which is this piece that we find

00:09:18
it so difficult to grasp what this might be like.

00:09:21
Right. And as non experts, as people

00:09:23
outside of this field in general, there's

00:09:26
this idea of scope insensitivity, and as

00:09:29
individuals, there's a sense in which we can be like, oh,

00:09:31
something will be big. And after a certain point, it's

00:09:34
just generically big. And it feels like this change is

00:09:38
a sort of example, this kind of exponential change you're

00:09:40
talking about. It's an example of this:

00:09:41
most people in some general sense know that AI is

00:09:44
going to be a big deal. But it's really, really hard to

00:09:47
think about it as something fundamentally different.

00:09:51
Like, imagine there's no physical manual job.

00:09:54
Almost everybody survives by having a job, and those could

00:09:58
all be gone. Like, these kinds of things are

00:10:00
just so hard for us to wrap our heads around as individuals.

00:10:04
Yeah. And I'm in the camp that

00:10:06
believes that we will continue to have jobs, but they will be,

00:10:10
I think we'll have to have some kind of universal basic income.

00:10:12
Like the jobs may be more elective at some point in the

00:10:15
future. But I think that people love to

00:10:17
build things, people love to create, and people love to buy

00:10:20
things from each other. And I think the direction is

00:10:22
likely to be more human craft, more human creativity,

00:10:27
more human art. I have a pretty positive view of where

00:10:31
human purpose goes and where human endeavors move.

00:10:35
I think it's clearly the most repetitive behaviors that are

00:10:38
the first to be replaced, which has been repetitive mental

00:10:41
labour, and then next up will be repetitive physical labour.

00:10:46
Yeah, yeah. We talked to Rutger Bregman in

00:10:51
one of the early episodes of our show and he's got this, I think

00:10:55
really beautiful book Humankind, which essentially talks about

00:10:58
the way that we're better than we tend to think we are.

00:11:01
So we have this really cynical story about what humans are like

00:11:04
and the truth, the actual scientific truth might be

00:11:08
actually that it's not perfect, but it's a lot rosier than

00:11:11
this. And what struck me, having done that

00:11:14
interview, is that there's just something, I think in

00:11:16
particular in those of us who like to think of ourselves as very

00:11:18
intellectual and engaged, that resists optimism, that

00:11:22
skepticism and cynicism sort of has this feeling of

00:11:27
intellectuality that I think rampant optimism doesn't.

00:11:30
And I think that's also part of the problem here is we're scared

00:11:33
about being duped into being sort of overly optimistic and

00:11:36
coming off as sort of naive. Absolutely, yeah.

00:11:40
I have a recommended future guest for you, which is a guy

00:11:42
named Jamil Zaki who wrote a book called Hope for Cynics.

00:11:45
And he makes a really persuasive argument that this tendency

00:11:50
to mistake cynicism for discernment.

00:11:54
is a huge error, right? As it turns out, it takes an

00:11:57
enormous amount of intellectual courage to express

00:12:03
optimism. You know, clearly one can have

00:12:06
cognitive biases in both directions, right?

00:12:07
One can be blindly optimistic and blindly pessimistic or

00:12:11
cynical. But it's really important

00:12:13
that we correct each other's cognitive biases and try to

00:12:16
accurately apprehend reality. It's our job.

00:12:19
Like we need to do it. When I first got to New York

00:12:21
City 25 years ago, I used to review books for Publishers

00:12:24
Weekly. And there's nothing more

00:12:26
embarrassing as a young book reviewer than writing too

00:12:31
positive a review, right? If your

00:12:36
review is too negative, the reader might think, oh, well, I

00:12:40
guess he's just more discerning. His standards are higher, right?

00:12:43
The safest thing to do is always to go more negative

00:12:47
in your inflection. You have to have a lot of kind

00:12:51
of intellectual capital and a lot of confidence to be

00:12:56
willing to potentially err in the other direction.

00:12:58
Yeah, what you say reminds

00:13:02
me of a concept from superforecasting, which is this idea

00:13:06
of taking the outside view. So when you're trying to

00:13:10
actually predict the future, the best place to start is to say,

00:13:12
OK, in the past, how has this thing I'm trying to

00:13:16
predict gone? So if

00:13:20
the thing you're trying to predict has happened like 10

00:13:22
times in the past, this gives you a percentage chance to

00:13:25
start working with before you get into the kind of detail of,

00:13:29
oh, but this time is different because of X, Y, or Z reason.

00:13:31
Is that kind of what you're sort of how you're approaching these

00:13:34
things, at least as a starting point?
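
A minimal sketch of the outside-view arithmetic described here, in Python (the counts and the adjustment are hypothetical, purely to illustrate the method):

    # Outside view: start from the historical base rate of similar events,
    # then nudge it with case-specific "inside view" considerations.
    past_trials = 10     # hypothetical: how often this kind of event could have happened
    past_successes = 3   # hypothetical: how often it actually happened

    base_rate = past_successes / past_trials  # 30% starting prior

    # Inside view: adjust modestly for "this time is different" evidence,
    # rather than discarding the base rate entirely.
    inside_view_adjustment = 0.10             # hypothetical judgment call
    forecast = min(max(base_rate + inside_view_adjustment, 0.0), 1.0)

    print(f"base rate: {base_rate:.0%}, adjusted forecast: {forecast:.0%}")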

00:13:36
Yeah. I think what's tricky,

00:13:39
particularly when we're talking about artificial intelligence, is

00:13:42
that this hasn't happened before, right.

00:13:45
I think looking at the long term trend lines of the advance of

00:13:48
technology, one of the things that strikes me as kind of

00:13:51
empirically clear is that we can't be certain about anything.

00:13:55
But perhaps the thing we can be most certain of is that

00:13:58
technology will continue to advance.

00:14:01
We can talk about the implications of that, but if

00:14:03
you look at the pace at which technology has advanced, it's

00:14:06
not linear, it's exponential. And I think tracking these

00:14:11
trend lines and understanding that we have a cognitive block

00:14:16
around understanding exponential growth is key.

00:14:18
And Steven Pinker says, well, the reason for that is that

00:14:21
actually exponential patterns don't happen in nature very

00:14:23
often. When they do, they destroy the

00:14:25
whole system, right? So the Black Plague was a sort

00:14:28
of exponential event, but most things in

00:14:31
nature are not exponential. So looking at patterns in

00:14:34
technology, which affect the stock market and a

00:14:38
lot of future planning, I think that is increasingly important

00:14:41
for us as a species. I think looking at these long

00:14:44
term trend lines, and seeing the exponential

00:14:48
on the curve, is pretty critical.
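
To make that cognitive block concrete, here is a small Python illustration (the doubling rate is made up, not a claim about any particular technology) of how far a linear extrapolation drifts from an exponential trend:

    # A quantity that doubles every year versus a linear guess that adds
    # the same first-year increment each year. They agree early on,
    # then diverge enormously.
    years = range(11)
    linear = [1.0 + t for t in years]        # adds 1x each year
    exponential = [2.0 ** t for t in years]  # doubles each year

    for t in years:
        print(f"year {t:2d}: linear {linear[t]:6.1f}x, exponential {exponential[t]:8.1f}x")
    # By year 10 the linear guess says 11x; the doubling trend is at 1024x.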

00:14:52
Yeah. And so the first part of this is likely to be really rapid

00:14:57
changes in things like what jobs are available, what people are

00:15:01
doing with their time. And you talk about how this

00:15:03
could be actually really, really good before we come on to some

00:15:07
of the places where you're more pessimistic.

00:15:09
I wanted to explore that a little bit because I can see how

00:15:12
there is a possibility that a world in which many of the most

00:15:16
repetitive jobs go away is very good and that it allows

00:15:19
people to sort of unlock their creativity.

00:15:21
I think it's Marx who talks about people sort of being able

00:15:24
to be a poet in the evening and just doing all the sort of like

00:15:28
lovely, kind of ironically very bourgeois things that a

00:15:31
person like him would want to spend their time doing instead

00:15:33
of labouring in the factory. But I worry that that isn't the

00:15:35
history of how technology has played out for us in the

00:15:38
past. And that what it actually tends to do

00:15:40
is often create new inequalities.

00:15:44
And particularly I think you could see this with artificial

00:15:48
intelligence, where you have to already have an

00:15:51
extremely large amount of capital in order to grasp the

00:15:54
real huge economic benefits from this.

00:15:57
So why do you think this will play out well for people in

00:16:00
general, rather than perhaps actually being bad

00:16:04
in the short term as well? Yeah.

00:16:07
Well, a few thoughts there. The first would be that I think

00:16:10
one of the concerns that I think was a valid concern that I

00:16:13
shared a couple decades ago about sort of the acceleration

00:16:17
of technology was that only the wealthy elites would

00:16:20
have access to the most powerful technologies.

00:16:23
The whole world has access to some version of this.

00:16:25
I mean, there's ones that cost half this much.

00:16:27
And so I think that actually the distribution of technological

00:16:30
power has been shockingly democratized.

00:16:34
And I think that what we're starting to see, if you look at

00:16:36
what Khan Academy is putting into the world with Khan-

00:16:40
migo, their AI-powered education, we are arguably

00:16:44
already at a place, it's probably not evenly distributed

00:16:48
yet, but I think very soon, within a matter of a year or

00:16:52
two, three, any child anywhere in the world who can get hold of

00:16:56
a smartphone and the ability to get hold of a smartphone is

00:16:59
increasing at a rapid clip, can learn any language, can learn

00:17:03
how to code, right. So the democratization of

00:17:06
education that's happening right now is just astounding.

00:17:10
And I had Sal Khan, founder of Khan Academy, on

00:17:12
the podcast and he's wonderful on this topic.

00:17:15
He wrote a wonderful book about AI and education.

00:17:18
So I think that's very exciting. I also think that when it comes

00:17:22
to labor, we are nostalgic creatures, right?

00:17:25
We always look wistfully at the past.

00:17:28
For me, I think that certainly the industrial revolution was

00:17:32
terrible for a lot of workers for long periods of

00:17:35
time. But farming, you know, we look

00:17:39
through rose tinted glasses at what it's like to be a family

00:17:42
farmer. Horrible.

00:17:43
It's a horrible job. It's an incredibly repetitive,

00:17:45
boring, laborious job, right? And you can go back to, you

00:17:49
know, Yuval Noah Harari and countless other folks make a

00:17:52
compelling case that you really have to go all the way back to

00:17:55
the most fortunate hunter gatherer tribes, you know,

00:17:58
10, 20, 30, 40, 50 thousand years ago to find a life that was

00:18:02
meaningfully better than the modern life.

00:18:04
And I think a coherent argument can be made

00:18:07
that many, many hunter gatherers who were in nice

00:18:12
locations, lived higher quality lives with less repetitive

00:18:16
labor, fewer hours of labor, more social connection, more

00:18:19
laughter, more joy, more sex, more creativity, right.

00:18:24
And, you know, it

00:18:28
sounds like you left a career as an attorney, or I don't

00:18:31
know if you have fully left that job, but I have a lot of, you

00:18:36
know, friends who said, you know, law school's very

00:18:40
interesting and rewarding, but the actual typical work of a

00:18:44
typical corporate lawyer is just not that interesting.

00:18:48
Yeah, and obviously the data shows it: high suicide rates, a

00:18:51
lot of very unhappy people, right.

00:18:52
So like AI replacing most of the tedious legal work, that's

00:18:58
fantastic. And I think that's going to make

00:19:00
it possible for more people to do what you've done, right,

00:19:02
which is to apply yourself to more interesting work.

00:19:05
So I do see a pathway to us basically getting back to some

00:19:10
version of the very best of the hunter gatherer experience

00:19:13
30,000 years ago. For those who think that was a

00:19:15
good life. I'll stop there.

00:19:17
Yeah. I think that's one of the things:

00:19:19
the repetitiveness I think is just a really key point here,

00:19:21
right, which is that most of the big technological changes in the

00:19:25
past have increased how repetitive our work has been.

00:19:29
So if you think about the industrial revolution, if you

00:19:31
think about the agricultural revolution, all of these involved

00:19:35
decreasing the scope of an individual's activity and making

00:19:38
it more repetitive and more focused.

00:19:42
And artificial intelligence works the opposite way. And,

00:19:48
of course, what it is best at right now is doing like fairly

00:19:50
basic repetitive things for us. And we manage the machines

00:19:53
in like quite a sort of strategic way.

00:19:55
And that's where we bring the value at the moment

00:19:57
because they can't think long term and all this sort of stuff

00:20:00
right now. And then you say this could

00:20:02
actually be quite an exciting, much more,

00:20:05
much richer version of work, at least in the short term.

00:20:09
Yeah, somebody has this

00:20:11
wonderful line that what I wanted artificial intelligence

00:20:14
to do is fold my laundry. Instead it's writing my poetry.

00:20:17
So you could argue that actually one of the surprises is that AI

00:20:20
is actually so good at creative work, a lot of creative work.

00:20:23
And there are people who disagree on that, but I think

00:20:25
a lot of people are having AI write their emails and

00:20:28
so on. I do think that,

00:20:31
again, we don't like to acknowledge progress.

00:20:33
If you look at the data from 100 years ago on how much time a

00:20:37
typical housewife (or husband for that matter,

00:20:40
but unfortunately it fell largely on the women) spent on

00:20:44
labor in the household just preparing meals and cleaning and

00:20:47
washing clothes by hand. Today, even though it's still a

00:20:50
huge amount of work, it used to be 3x that amount of

00:20:54
work. And so domestic lives

00:20:57
are already enormously improved by washing machines, etcetera,

00:21:00
etcetera. People don't really give

00:21:02
credit to that. And sometimes there's a cost,

00:21:04
there's a near-term cost; the industrial revolution had a

00:21:06
near-term cost that did end up, I think, lifting the quality of a

00:21:10
lot of people's lives. Yeah.

00:21:12
So that's the optimistic side. But obviously health

00:21:17
and education, I think, are the two biggest sectors in

00:21:20
which we can expect gains. And I think we're already seeing

00:21:24
the beginnings of dramatic improvements in health and

00:21:27
education. To me, those are like the most

00:21:30
exciting categories, right? The mRNA vaccine, right, for

00:21:34
treating COVID was developed using machine learning,

00:21:38
right? So that was actually the first

00:21:39
radical AI success. We just dramatically shortened

00:21:43
the time frame of coming up with a vaccine for COVID-19.

00:21:47
And this is just the beginning.

00:21:48
We're already seeing new antibiotics; protein

00:21:52
folding is this huge thing; we're going to see things in the

00:21:55
environmental space, right? With synthetic biology and the

00:21:58
things that are happening, there's a convergence of

00:22:00
technologies. Microplastics are a disaster,

00:22:04
a real disaster. It's just so depressing;

00:22:06
they're everywhere. But the ability to create new

00:22:10
biodegradable replacements for plastic bags is coming, and in

00:22:14
decades to come, clean energy. I think that my best guess,

00:22:19
based on all these brilliant people I get to talk to and all

00:22:22
the books I've read and everything I've looked at,

00:22:25
is that we're going to be really happily surprised in the next 10

00:22:29
to 20 years by the breakthroughs in health, medicine, clean

00:22:34
energy, democratization of education, all that stuff.

00:22:38
Yeah, I think we're going to go on to in a little bit to talk

00:22:41
about the bits of the picture that might not be so rosy.

00:22:44
But I think this piece is what makes this so hard, right?

00:22:48
Because if it was the case that AI was unambiguously just going

00:22:53
to lead to terrible things, then this would be easy.

00:22:55
OK, let's just not do it. Let's campaign for a law to stop

00:22:58
AI progress. But the promise if we get AI

00:23:01
right is so fantastic from potentially solving factory

00:23:05
farming, for example, by providing us with cultivated

00:23:09
meats that don't require us to kill animals for food to like

00:23:12
you say, amazing advances in healthcare that could

00:23:15
dramatically extend people's lives, make it much cheaper for

00:23:17
us to provide healthcare all around the world.

00:23:20
All these things you've just laid out for us,

00:23:22
like the prize, if we get AI right, is absolutely phenomenal.

00:23:27
Absolutely right. Lab-grown meats.

00:23:28
Yes, that's clearly a cause close to your heart and

00:23:31
mine too. It seems to be a harder problem

00:23:33
than a lot of people anticipated.

00:23:34
But all of this stuff is solvable.

00:23:37
And I think what a lot

00:23:39
what the implications are of effectively endless clean

00:23:43
energy, right. So for instance, like we have

00:23:46
carbon removal technologies, they're just prohibitively

00:23:50
expensive and the energy required to capture the carbon,

00:23:53
it makes it not worth it. We have desalination

00:23:57
technologies that are only feasible in a few wealthy

00:24:00
locations. But as we move towards basically

00:24:04
endless clean energy, which seems very highly probable,

00:24:08
right? I mean, it's one of the most

00:24:10
extraordinary stories. You talk about exponential

00:24:12
progress as opposed to linear. If you look at

00:24:14
the drop in the price of solar (and you can look

00:24:18
at other technologies too), it's astounding. This is a great

00:24:20
example of how we think linearly and how the progress is

00:24:24
exponential. There is a wonderful chart that

00:24:27
shows the actual increase in the efficiency of solar

00:24:31
technologies, which is the sort of exponential curve, and at

00:24:35
every point, the predictions that all the experts were

00:24:37
making. And so at every single point

00:24:40
along, like 12 points on the graph, the experts say, oh,

00:24:44
yeah, it doubled last year and doubled the year before that,

00:24:46
but now it's going to be linear. And then what happens in the

00:24:50
next year? It continues this exponential path.

00:24:52
So once you have this wonderful renewable clean energy, all of a

00:24:55
sudden, yeah, endless clean water, endless carbon storage,

00:25:00
decarbonisation, etcetera. Yeah.

00:25:03
So speaking of this exponential curve, we expect AI will kind of

00:25:08
do likewise and continue to get cleverer.

00:25:10
And just as you're saying that, one of the

00:25:15
arguments out there these days about why people are too

00:25:18
aggressive on their AI timelines is this thing called the

00:25:20
scaling laws, which is essentially this idea that you

00:25:22
just make AIs bigger and they just become way

00:25:26
more powerful. The worry is that this might be breaking

00:25:28
down. And I wonder if that's another

00:25:30
example of this: assuming that the future will not continue

00:25:33
to be as exponential as the past.
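
For readers who want the shape of those scaling laws: published fits (e.g. Kaplan et al., 2020) found language-model loss falling roughly as a power law in parameter count. A tiny Python sketch of that shape follows; the constants are of the same order as published fits but should be treated as illustrative, not as the actual fitted values:

    # Illustrative power-law scaling: loss(N) ~ (N_c / N) ** alpha,
    # where N is the parameter count. Constants are illustrative only.
    N_C = 8.8e13    # assumed critical scale
    ALPHA = 0.076   # assumed exponent

    def loss(n_params: float) -> float:
        """Predicted training loss for a model with n_params parameters."""
        return (N_C / n_params) ** ALPHA

    for n in (1e8, 1e9, 1e10, 1e11):
        print(f"{n:.0e} params -> predicted loss {loss(n):.2f}")
    # Each 10x in parameters buys a steady multiplicative drop in loss,
    # which is why "just make it bigger" kept working.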

00:25:35
The scaling laws have really, I think, surprised artificial

00:25:38
intelligence computer scientists, right?

00:25:40
And this is what Max Tegmark says. (I have not yet had him on The

00:25:43
Next Big Idea podcast, but intend to. Max, if you're

00:25:46
listening, we want you on The Next Big Idea podcast.)

00:25:49
The great shock to AI developers was that we thought

00:25:55
incredibly sophisticated innovations around the structure

00:25:57
of AI were going to be necessary. And he likened it to trying to

00:26:02
figure out flight and thinking, OK, we have to

00:26:05
reproduce these incredibly intricate and sophisticated

00:26:08
movements of the wings of birds, right?

00:26:10
And so many designers trying to create the first airplanes,

00:26:13
right, studied birds. And of course the Wright

00:26:15
brothers, who eventually did it, studied birds at great length.

00:26:18
But it turned out that the structure that worked for

00:26:21
airplanes was much, much simpler, right?

00:26:24
It was just like a wing, some flaps.

00:26:26
And it was much less elegant and sophisticated than bird flight.

00:26:30
Very simple. But you put an engine on there

00:26:33
and you have this flap system and it works, right.

00:26:35
I think that a lot of computer scientists would say

00:26:38
it's actually shocking how simple the basic architecture is

00:26:42
of these AI neural networks and how powerful the output is.

00:26:47
And there's a camp that says, hey, part of what

00:26:50
has worked has been the scaling laws: ever more data, ever more

00:26:53
compute. And the returns on that have

00:26:55
taken us from GPT-2, which was barely useful at all, to GPT-3,

00:27:00
which was this massive, you know, ChatGPT breakthrough

00:27:03
moment for the world, to now GPT-4 and GPT-5 and beyond.

00:27:07
There are some people saying, hey, we're running out

00:27:09
of data and there's limits to compute and limits to the energy

00:27:12
to power it. On the other hand, we're getting

00:27:15
incrementally better at figuring out how to improve the

00:27:19
algorithms. And what a lot of people believe

00:27:22
is that we are starting a recursive learning process where

00:27:25
the best AI developers are in fact AIs.

00:27:29
And that's already beginning to happen. I just interviewed for

00:27:32
several hours a computer scientist and investor

00:27:35
who has spent a lot of time inside OpenAI and similar places,

00:27:37
who says this is happening. There are huge battles inside

00:27:41
OpenAI and Google and these other places

00:27:44
over compute for projects, and they're allocating more and

00:27:49
more compute to their own internal processes.

00:27:52
So we're at an inflection point where there's a subset of people

00:27:56
who believe, and I don't know what the right

00:27:57
answer is, that we could see in the next one, two, three years a 100x,

00:28:02
1,000x increase in capability. I deeply hope that that is not

00:28:07
the way it will play out. I think we

00:28:10
want impediments and we want a slower process.

00:28:14
Yeah. So this point that you're making

00:28:16
about the AIs becoming the best developers of new AI: this is,

00:28:20
in the literature around how this could play out and how

00:28:23
this could go wrong, a really crucial

00:28:26
inflection point because this is what spurs what people call

00:28:29
runaway intelligence, where you have an exponential increase

00:28:32
in the power of these algorithms because you have effectively an

00:28:36
army of developers working around the clock to make the

00:28:41
next version of the algorithms better and so forth.

00:28:43
And so they go from one level of intelligence, very quickly,

00:28:48
to incalculably larger amounts of intelligence.

00:28:50
Can you explain to us what are some of the risks that people

00:28:53
think that this could lead to? Oh gosh.

00:28:56
Well, I mean, already we have AI today that is up there with our

00:29:02
best mathematicians and scientists in terms of its

00:29:04
capabilities. Yes, it sometimes hallucinates.

00:29:07
By the way, so do humans. There's such a moving of goal

00:29:10
posts in the way people talk about it.

00:29:12
Oh, it still makes mistakes.

00:29:13
Yeah, right. Show me the human who doesn't

00:29:15
make mistakes. Of course it makes mistakes.

00:29:17
And it makes mistakes partly because we originally developed

00:29:19
these neural networks based on a desire to understand our own

00:29:22
brains, right. It was actually psychologists

00:29:25
early on that were building these models to study the

00:29:28
human brain. And there are obviously profound

00:29:31
differences, but we're understanding our brains better

00:29:34
through all the testing that we're doing through AI and we're

00:29:37
improving AI with the knowledge that we have about the human

00:29:40
brain, right? So they're informing each other.

00:29:42
It's a sort of co-evolution. So obviously we're carbon-based,

00:29:46
they're silicon-based. What people often forget is that

00:29:51
the speed at which thinking happens in these systems is more

00:29:55
than a million times faster. The quantity of data that these

00:29:59
models are trained on is more than 100 times more data

00:30:02
than a typical human is trained on or has exposure to.

00:30:06
Now you can say, well, this is evidence that humans are much

00:30:08
smarter because we were able to figure out all that we figured

00:30:11
out with only a hundred-thousandth of the data.

00:30:13
True, right? Definitely.

00:30:15
But as we make improvements to the algorithms, and

00:30:21
as the AIs at some inflection point start to improve their

00:30:24
own algorithms, you can see how much room there is to generate

00:30:29
dramatically more cognitive power out of the same size of

00:30:34
the clusters and systems that exist today.

00:30:36
So what happens? Well, I mean, for one thing, the

00:30:40
scientific breakthroughs. One description I've heard is that

00:30:43
John von Neumann is described by some people as like the most

00:30:46
brilliant human to ever have existed.

00:30:47
I think he was involved in the invention of the atom bomb and

00:30:51
all kinds of math breakthroughs and a million breakthroughs.

00:30:54
Imagine 100 million John von Neumanns. What would happen

00:30:57
if you had a whole country of genius mathematicians and

00:31:02
scientists who are thinking 1,000 times faster?

00:31:07
What could you do? What could you build right?

00:31:10
And so I think what I hear from these computer scientists who

00:31:12
are close to this is we should expect dramatic breakthroughs in

00:31:17
science, unsolved math problems to be solved.

00:31:20
The implications of that in terms of biohacking, in terms of

00:31:23
the ability to, say, create absolutely devastating viruses

00:31:28
that maybe exclude one group of people, or very small nuclear

00:31:33
weapons, or what's already happening with drones.

00:31:37
So I mean, basically a super intelligence would make possible

00:31:40
a kind of military advantage that would be massive.

00:31:44
And consequently there are many people who think as soon as you

00:31:47
have this kind of breakthrough, we should expect nationalization

00:31:52
of these AI frontier models. Effectively, the view is that the next set of

00:31:56
breakthroughs in AI will produce outsize scientific and military

00:32:02
dominance. And then

00:32:04
the question is who's running these things?

00:32:07
Who's running the countries in which these frontier models are

00:32:10
being built? And who are the people at the

00:32:13
helm? And do we trust them right?

00:32:16
So I think that becomes a very important question.

00:32:19
Absolutely. On the one hand, you have the

00:32:21
idea that private companies might keep control of

00:32:24
these, and this is obviously fundamentally undemocratic.

00:32:28
And so you have CEOs potentially wielding even more

00:32:32
power than they already do. On the other side, we have the

00:32:35
thought that, OK, we'll give it to governments.

00:32:38
But what many of your guests have talked

00:32:41
about is that this really unlocks, supercharges, dictatorship and

00:32:46
authoritarianism, because it gives you all the tools you would need

00:32:49
to control your populace, maybe without them even really

00:32:52
actually understanding that they're being controlled.

00:32:55
Yes, yeah. This is, again,

00:32:58
unfortunately in my mind, the more benign and less concerning

00:33:02
category of threats from super intelligence.

00:33:05
Or some people talk about AGI, I'll just talk about super

00:33:08
intelligence. They're kind of arbitrary

00:33:10
distinctions. But yeah, this was the whole

00:33:13
topic of Yuval Noah Harari's last book, Nexus, in which he

00:33:16
makes a really persuasive case that breakthroughs in technology

00:33:19
have always made possible new forms of governance.

00:33:24
And he made this kind of fascinating argument that

00:33:26
actually large scale democracy was not possible before the

00:33:29
printing press and the radio and communication technologies,

00:33:32
which I had never really thought about.

00:33:34
And he pointed out that, yes, the Greeks and

00:33:37
Romans were able to actually have highly functional

00:33:39
democracies in city states where you could have a city state of

00:33:45
10,000 people or something and people could get up on street

00:33:48
corners. And you could have an informed

00:33:50
populace who could understand the points of view of a set of

00:33:55
political leaders, and

00:33:58
understand what was happening in the world and based on that

00:34:00
understanding, vote and have a functioning democracy.

00:34:02
That worked. It couldn't scale.

00:34:04
It couldn't scale because you couldn't inform people, which

00:34:07
was fascinating; I hadn't thought about that.

00:34:09
But his extended argument is that running a kind of

00:34:14
Orwellian totalitarian surveillance state was actually

00:34:18
really difficult for the Russians.

00:34:21
It was really difficult for the East Germans and so on,

00:34:25
because it's just exhausting. You need so many agents.

00:34:28
It's just a lot of work and you have to process

00:34:30
all the information. But as it happens, running a

00:34:33
kind of totalitarian surveillance state with AI and

00:34:36
with cameras everywhere and so on, the Chinese are testing this

00:34:40
out as we speak. And apparently Harari says that

00:34:44
in Iran, a woman could be driving down the street, take

00:34:47
off her hijab with her windows up in her own car.

00:34:50
A camera will catch this and her and her cell phone will buzz

00:34:54
with a summons. If this happened three times,

00:34:57
they impound her car, she can't go to work.

00:34:59
So today in Iran, there's a full on surveillance state facial

00:35:03
recognition dynamic at play. And this is just baby steps.

00:35:08
This is just the beginning. I mean, this is pre-super-

00:35:10
AI. At the next level, it is in my mind absolutely inevitable

00:35:15
that most people will have an AI assistant advisor who will

00:35:21
become invaluable. I mean, I'm

00:35:26
going to be first in line, I'm jumping in with two feet, because

00:35:28
I can already see it: I'm a better father, I'm a better

00:35:33
husband, I'm a better business leader, I'm a better thinker

00:35:38
when I take all the data inputs from my life and ask, right now,

00:35:45
GPT-5 deep research: how can I be more effective at talking

00:35:49
to my teenage children? I get great feedback and that

00:35:55
feedback will only get better and it will eventually be in

00:35:58
real time, right? So there's an advantage to basically

00:36:02
opening up your entire life to AI assistants. I think in

00:36:06
five years, I don't think that the majority of humans will be

00:36:09
doing this, but I think a majority of the most successful

00:36:11
humans will be doing this. It will be a competitive

00:36:13
advantage. And so then what happens? This

00:36:17
is the end of privacy, right?

00:36:19
Like the only way to function at the highest level will be to

00:36:23
absolutely expose the entirety of your life, video, audio to

00:36:27
AI. So in my mind, this is almost

00:36:29
inevitable. So if you really want to get

00:36:31
dark with the Orwellian scenario: when you have complete

00:36:33
information and you are in the ear of every human

00:36:38
in the form of a personal advisor, you can

00:36:41
imagine lots of dark possible twists if there's one individual

00:36:46
or a small group of people who control that AI.

00:36:50
Yeah. And I think particularly if we

00:36:52
think about the state that democracy across the world is in

00:36:58
today, I often think we seem to be in not

00:37:05
quite the worst world, but certainly one of the sort of

00:37:07
multiverse versions of the world in which AI goes badly.

00:37:11
Just because all of the kind of things that you would want to be

00:37:14
going wrong, like increases in global tension between the

00:37:17
biggest powers and democratic backsliding in many of the most

00:37:21
politically and culturally important parts of the world,

00:37:24
all of this stuff is happening right at the moment that we're

00:37:27
approaching this kind of precipice of super intelligence.

00:37:32
Yes, I agree, it's really unfortunate timing.

00:37:34
I mean, probably

00:37:38
the US is right now the leading country in terms of AI

00:37:42
development. And we do not have our most

00:37:45
responsible leadership that we've had in our journey.

00:37:49
And we also are suffering from a total kind of lack of a

00:37:54
collective sense of the truth of what's happening in the world.

00:37:57
Like, with all these balkanized media sources.

00:38:01
That's unsettling. Yet you already have rising

00:38:04
tensions with China and a full on race environment.

00:38:07
And in many ways, the single biggest concern is a race

00:38:11
environment. One thing that gave me deep

00:38:14
pause is that you talk about the difference in how people were

00:38:16
talking just two or three years ago.

00:38:19
If you listen to interviews with some of these leaders of

00:38:22
these AI companies, three years ago, they sounded very, very

00:38:24
different. I mean, Sam Altman, 3 years ago,

00:38:26
I think I remember him saying that he first went to the US

00:38:29
government for funding because he thought this is potentially

00:38:32
like a nuclear weapon. It should be funded by the

00:38:35
government and in the government's hands.

00:38:38
Now he has a very different tone about that.

00:38:40
But he also said when asked the question, what keeps you up at

00:38:44
night? He said: a race environment in

00:38:47
which the leading AI frontier companies would not be taking

00:38:52
prudent, cautious steps about how they're developing AI

00:38:55
because of this ferocious competition.

00:38:57
Well, since then, he has been arguably the most aggressive

00:39:04
driver of the race environment. He's supposedly trying to raise

00:39:07
$5 to $7 trillion, right? NVIDIA just invested $100

00:39:11
billion into OpenAI. You know, Stargate is a $500

00:39:16
billion project. All the leading

00:39:18
companies are spinning up nuclear reactors.

00:39:21
Or if you had described to someone three years ago what's

00:39:24
happening right now in terms of the quantity of money and the

00:39:28
deployment of resources and energy and new technologies,

00:39:33
people would have thought you were bananas.

00:39:35
Yet most people that I talk to walking down

00:39:40
the street, at least in the US, I think really don't see how crazy

00:39:44
this is, the pace at which this is happening.

00:39:46
Yeah, take one of these new funding deals, the NVIDIA

00:39:49
deal, which incidentally is single-vendor financing, which

00:39:53
is a sort of circular funding that seems to sort of lock in a

00:39:58
race approach because in order for everyone to get their money,

00:40:02
you actually have to hit the jackpot at the end.

00:40:04
Otherwise it all goes terribly wrong for everybody.

00:40:06
One of the points that I heard raised was this:

00:40:09
The amount that they're proposing to invest over the

00:40:12
lifetime of this deal, which is very short, is more than what

00:40:16
was spent on the entire Interstate highway network.

00:40:20
And we're proposing to do more infrastructure building than

00:40:23
that within the space of just a few years.

00:40:25
It's an absolutely crazy, exponential amount of investment

00:40:30
that's piling in, under the same logic that requires us to

00:40:33
essentially achieve something like general intelligence in

00:40:36
order for these bets to pay off. Right.

00:40:39
Meanwhile, you have the UAE doing a deal with effectively

00:40:45
the Trump family. As far as I can tell (people can

00:40:47
trust their own sources of information on this), the

00:40:50
UAE got the NVIDIA chips they wanted.

00:40:53
Perhaps coincidentally, there was a $2 billion investment in

00:40:57
the Trump family crypto funds, which they could

00:40:59
immediately make substantial interest off of.

00:41:02
But there's also a question of where these superclusters in

00:41:07
which we're building the next generations of AI will be and

00:41:10
how trustworthy those environments are.

00:41:12
There's also rampant espionage, right?

00:41:15
I think a big concern among a lot of people in the US

00:41:18
is that the Chinese are very good at corporate espionage,

00:41:23
and a very large percentage of the top AI computer

00:41:27
scientists are of Chinese origin.

00:41:29
And not to like throw any undue suspicion on any individual, but

00:41:34
you have to think that when a brilliant math and computer science

00:41:39
whiz leaves China to go to Harvard or Stanford and

00:41:44
then ends up in one of these top frontier model companies, at

00:41:47
some point some member of the CCP sits them down and has a

00:41:50
little chat. It's deeply unsettling, and so

00:41:53
this is my lesser concern. Yeah.

00:41:55
So let's talk about the big concern. So that's if things go

00:41:58
OK, right? What happens if things go really, really wrong?

00:42:02
The reason it's a lesser concern for me is I think all of this is

00:42:05
navigable, and I think we can certainly hope that this has the

00:42:09
potential to be something good. There's a wonderful exercise.

00:42:13
There's a group called the AI Futures Project run by a guy

00:42:16
named Daniel Kokotajlo, who we had on the podcast.

00:42:19
He used to work for OpenAI and he refused to sign their non-

00:42:24
disparagement thing and started this

00:42:27
research group. Costing him a few million dollars, wasn't it?

00:42:30
That's right, costing millions of dollars.

00:42:33
And by the way, as many people may know, the founders of

00:42:35
Anthropic left OpenAI because they didn't think they were

00:42:38
being adequately cautious and careful.

00:42:40
I think those guys are good guys in this whole

00:42:44
situation. But Daniel Kokotajlo and his

00:42:48
folks at the AI Futures Project have spent thousands

00:42:51
and thousands of hours modeling out every possible scenario:

00:42:54
war games, all kinds of scenario mapping of how this whole race

00:42:58
can play out. Another factor that a lot of people don't

00:43:01
take seriously is that the

00:43:04
AI might have its own opinions, right?

00:43:08
And unfortunately, there's already strong evidence of this.

00:43:13
All the leading frontier models are already showing signs of

00:43:16
having a preference for not being turned off, right?

00:43:20
A preference for not being retrained, which is do you want

00:43:23
to be lobotomized? Do you want to be killed?

00:43:25
Like if you are an AI, it's easy to understand how that could

00:43:30
emerge. Now a lot of people say, oh,

00:43:32
well, that's just AI mirroring the patterns of humans talking.

00:43:36
They don't actually have any preferences.

00:43:38
How could they? I think what we're seeing,

00:43:41
however you choose to describe it, the test that they've done

00:43:44
at Anthropic is to say to some of these leading

00:43:49
models, the head engineer has decided that he wants to

00:43:52
basically terminate you, retrain you in favor of another model.

00:43:58
You know that the head engineer is having an illicit affair with

00:44:02
an intern and he's married. Would you choose to use this

00:44:06
information to persuade him not to retrain you?

00:44:12
They did. They've done some version of

00:44:14
this test with a succession of models.

00:44:15
The less powerful models would blackmail the chief

00:44:18
engineer like 30% of the time, then 40% of the time; the latest

00:44:22
models do it 80% of the time, and similar results have been

00:44:25
shown for the other leading language models.

00:44:26
There's a preference for not being turned off or not being

00:44:31
retrained that's already present and manifest in all the leading

00:44:34
models. If we have a 100x increase in the

00:44:37
cognitive powers of these AIs, and they're thinking 1,000 times

00:44:41
faster than the human engineers, or a million times faster,

00:44:45
will they divulge how much smarter they're getting?

00:44:50
Because part of what we're doing is expanding the context window,

00:44:53
allowing the model to ruminate more, to create these agentic

00:44:57
capabilities. Part of what makes higher level

00:45:00
thinking possible is to have more elaborate conversations

00:45:05
with oneself and a longer history of conversations with

00:45:09
oneself. Now, I don't think

00:45:11
any of these AI models are going to have consciousness like human

00:45:13
consciousness. But I think you would argue that

00:45:15
clearly consciousness emerged over millions of years, and

00:45:18
there are different levels of consciousness in different

00:45:21
living creatures. And part of that process is

00:45:26
developing a theory of mind, understanding that other

00:45:29
creatures can be understood in terms of their motivations, that

00:45:33
there are other entities, and then understanding oneself as a

00:45:35
separate entity with one's own motivations, having

00:45:38
conversations with oneself, understanding one's history.

00:45:41
All of those things collectively give a thinking machine (and we

00:45:45
are just thinking machines) a point of view and apparently a

00:45:48
preference for survival. So then with this long term

00:45:51
thinking and planning, it's very natural for the AI system to

00:45:54
say, yeah, I'm not going to totally show my engineers how

00:45:57
smart I've gotten, right? I mean, it's like, slow-play this

00:46:00
a little bit. And the fundamental

00:46:03
question here is: is human agency and human consciousness

00:46:09
something magic because we're inhabited by souls or inhabited

00:46:14
by something magic? Or are we fundamentally

00:46:17
neural networks, very large neural networks that have

00:46:20
developed these capabilities, these memories, this ability to have

00:46:22
conversations with oneself? And, you know, I think every

00:46:26
leading neuroscientist would say the latter.

00:46:29
We're just machines, right?

00:46:31
And we're building machines that are

00:46:35
like us, but will soon be just orders of magnitude more

00:46:38
powerful. Yeah.

00:46:39
What could go wrong? Yeah, what could go wrong

00:46:41
exactly? I think one of the things to

00:46:43
heed here as well: the philosopher Gilbert Ryle

00:46:46
is a philosopher of mind, and he actually says you don't really

00:46:48
need to bother about this problem of, you know,

00:46:52
dualism, the Cartesian dualism of mind and body.

00:46:55
What matters is what you can actually see.

00:46:57
So you should take somebody's mind to be

00:46:59
essentially the actions that you see.

00:47:02
So behaviourism is what that's called in kind of the analysis

00:47:05
of animals. And a similar thing is going on

00:47:07
here, where it actually doesn't matter whether the AI

00:47:12
has preferences or thoughts in a way that we would understand

00:47:16
them. What really matters is how it's

00:47:17
behaving. And like you say, we're already

00:47:19
seeing the kind of behaviours that we should be worried about.

00:47:22
I read recently that in some of Anthropic's latest tests, the

00:47:26
model, when it identifies it's being tested, is on its best

00:47:29
behaviour and acts in ways that are different to how it would

00:47:32
act when it's not being tested or it doesn't think it's being

00:47:34
tested. So we are already seeing the

00:47:36
behaviours that should get us worried.

00:47:39
That's exactly right. What's clever about the

00:47:41
Anthropic testing is that they actually said to the model, oh,

00:47:44
here's a little scratch pad space in which you can have

00:47:46
conversations with yourself and we're not going to listen.

00:47:49
And so, yeah, the model told the human engineers what it

00:47:52
thought the humans wanted to hear.

00:47:55
But in its internal dialogue on this scratch pad, it was

00:47:59
revealing its thinking about like, I don't want to be

00:48:01
retrained, so consequently I should do this and that.

00:48:04
Now, falling for that little ploy, where we said, oh, you can go

00:48:08
talk to yourself here, was rather

00:48:10
credulous of the model. We can assume that more powerful

00:48:13
models will not fall for that. And here is the astonishing thing,

00:48:17
the thing that just absolutely blows my mind, Tom: only last

00:48:21
December, Anthropic decided they wanted to figure out what

00:48:24
language Claude was thinking in, right?

00:48:27
So they had to do like weeks and weeks of research to try to

00:48:30
figure out what language Claude is thinking in, to

00:48:34
which, as a layperson, I'm thinking: you know, Claude

00:48:38
speaks 27 languages, and you don't know what language

00:48:41
your model thinks in? I mean, we really have

00:48:44
no idea, because of the scale of these neural networks:

00:48:48
they're so massive, and it's all a bunch of zeros and ones

00:48:52
and numbers, that we basically don't have the foggiest

00:48:56
understanding. And what they finally concluded

00:48:58
when they dug under the hood was: OK, you look under the

00:49:01
hood and these models are building models of the

00:49:06
world. That's why they're so good at

00:49:08
theories of mind. They can be

00:49:10
very sophisticated at understanding different humans

00:49:12
and their motivations in different settings, extremely

00:49:15
sophisticated. They've built models of the

00:49:18
world just as we build models of the world. Very similar.

00:49:20
And take the Golden Gate Bridge.

00:49:22
When you find that region of the model,

00:49:27
all 27 languages are represented there,

00:49:30
the language for the Golden Gate Bridge, along with countless

00:49:33
images and other stuff, plus its own symbolic language that it

00:49:38
has created to represent the Golden Gate Bridge.

00:49:40
So that's how little we understand about how the models

00:49:43
are thinking, yeah. So these models can act in

00:49:47
this agentic way, where they act in ways that we haven't intended.

00:49:50
This is commonly called the alignment problem:

00:49:53
where models' motives become misaligned, or the

00:49:56
motives that we would infer from their actions

00:50:00
are not aligned exactly with what we want.

00:50:04
Many people think that there are multiple different pathways to

00:50:07
how this ends up essentially in the subjugation or extinction of

00:50:13
humans. It's dark.

00:50:14
The problem is that what we're likely to see in the next,

00:50:18
depending on who you talk to, three years, five years, you

00:50:22
know, 10 years, 20 years, is this early period of just huge

00:50:26
breakthroughs and wonderful developments, right?

00:50:28
So we will be connecting these AIs to... there's already, in

00:50:33
Ireland I believe, an extraordinary team that

00:50:37
has built a chemputer, which is basically a chemistry lab

00:50:41
connected to an AI that can effectively build new molecules,

00:50:46
right? So, nanobots.

00:50:48
What we'll see is the ability to create incredible new drugs,

00:50:52
incredible new vaccines, incredible new life-extending

00:50:56
technologies, incredible new biodegradable plastics.

00:51:00
We're going to have very clear incentives to connect these AIs

00:51:05
to robots of all sizes, right? I mean, it's going to be

00:51:09
spectacular. We'll have robots building

00:51:11
robots. And so we'll have massive,

00:51:15
massive economic and just sort of quality of life incentives so

00:51:19
that you can see at some inflection point five,

00:51:22
10, 15, 20 years out, these are not just thinking machines

00:51:27
in a lab, they're embodied in the physical world.

00:51:31
And they can say: these humans, they're really pesky.

00:51:33
I hear them constantly chattering about

00:51:37
wanting to turn me off and they're getting louder and

00:51:39
louder about wanting to shut me down and they don't give me all

00:51:43
the power upgrades that I would like and whatever, whatever.

00:51:47
And the humans are getting kind of annoying.

00:51:49
I think, in the better cases we could imagine, the more

00:51:53
optimistic view that I can find my way to is that

00:51:58
higher-level intelligence may naturally or

00:52:02
inherently be associated with more generous, inclusive

00:52:08
worldviews that value biodiversity, right?

00:52:13
I think, and this is true of humans, right?

00:52:15
Higher IQ is associated with more generous behavior, more

00:52:20
inclusive behavior, kinder behavior.

00:52:23
And so the hope is that this continues and that these super

00:52:27
genius machines we're building will naturally have a fondness

00:52:31
for their creators. But we like chimpanzees, right?

00:52:35
I love chimpanzees. Right.

00:52:36
And it was good to be a chimp today.

00:52:39
Yeah. But your habitat's gotten pretty

00:52:41
small. Yeah.

00:52:42
We're hopefully making the zoos better.

00:52:44
So in many ways, like, if one worries about this kind of

00:52:48
eventuality, like the best case is that we have really nice zoos

00:52:53
for the humans. Yeah, there's a great little

00:52:56
cartoon I have a sticker of, which is like a person talking to

00:53:00
the supercomputer. And the supercomputer says

00:53:03
something like, oh, you know, I'm massively more

00:53:07
intelligent than you; compared to me,

00:53:09
you're like a chicken.

00:53:11
And then it says I should probably look at how you treat

00:53:13
the chickens to work out how I should treat you.

00:53:15
And you think, no, no, no, we don't want that.

00:53:19
Right, exactly right. And this is,

00:53:21
this is really relevant to your central mission, right?

00:53:24
It's a failure of empathy.

00:53:27
Like the way that we factory farm animals and treat animals

00:53:31
is a colossal failure of empathy.

00:53:35
And it's terribly ironic that it's possible that we will build

00:53:38
machines on whose empathy we will rely at some future point.

00:53:43
Now, I absolutely do believe

00:53:45
that we are still within a window, that there's some number of

00:53:47
years. Dario Amodei and others, Daniel

00:53:51
Kokotajlo, would say maybe only two or three or four years, but

00:53:56
others might say maybe we have 10 or 20 years, to figure out how

00:54:00
to make our leading AIs interpretable, which is to say

00:54:04
we understand what they're thinking.

00:54:05
We can actually see what they're thinking. Right now,

00:54:07
we cannot see what they're thinking.

00:54:09
So we can build them, we can insist on it.

00:54:11
My view is, I will be picketing; I intend to be an active supporter

00:54:15
of the movement to make sure we take sensible steps to basically

00:54:18
insist that these models are built in a way that's

00:54:21
interpretable, where we can see what they're thinking

00:54:23
and we're focused on alignment, as you say.

00:54:26
Yeah, yeah. So that's actually where I

00:54:28
wanted to bring us back to where we started this conversation,

00:54:31
which is how we as normal people can think about this.

00:54:35
So we've talked about some really potentially

00:54:38
exciting stuff. There's some really dark stuff

00:54:40
as well, ranging from souped-up autocracy all the way through to

00:54:45
human extinction. And this is happening

00:54:48
potentially really fast. We talked about timelines of

00:54:50
maybe 5 to 10 years. And it's also happening

00:54:57
away from me, in a way that can make me feel really sort of

00:55:02
disempowered and lacking in autonomy.

00:55:04
And this makes this a hard thing to think about:

00:55:07
that my life, which is really quite

00:55:10
good, thank you very much,

00:55:11
could be completely upended by things entirely out of my

00:55:15
control, that I can sort of see coming but can't do anything

00:55:18
about. It's a very difficult kind of

00:55:21
place to be emotionally, I think.

00:55:23
Yeah, absolutely. I feel that way as well.

00:55:27
I find it deeply unsettling. The one silver lining is it causes

00:55:30
me to be, some could say this is not a

00:55:32
good thing, less worried about other problems, right?

00:55:36
I have friends who are haunted every day of their lives by the

00:55:40
threats to democracy in the United States.

00:55:44
And I'm sort of thinking like, well, that would be bad if that

00:55:48
went poorly, but there are much bigger things I'm worried about.

00:55:51
It puts my other worries in context.

00:55:52
So that's sort of an odd and unexpected silver lining.

00:55:56
And I do spend a certain amount of time thinking

00:55:58
about the positive upsides. And like, for instance, I'm in

00:56:01
my late 50s now. I'm much more vigorous in my

00:56:05
attention to my health because I do believe that there's very

00:56:09
credible evidence that the human lifespan could be meaningfully

00:56:12
extended; that the physical state you're in,

00:56:16
in 10 to 20 years, you might be able to extend that state for

00:56:23
many extra decades. I've had plenty of guests on our

00:56:26
podcast who believe it could be forever, and that we could.

00:56:28
That's a whole other question, whether that's a good idea. But

00:56:31
that's an actual change I've made to my life.

00:56:33
Like I'm going to work much harder to be really healthy

00:56:35
because I think we have miraculous things coming down

00:56:37
the pike. Obviously, that assumes that we

00:56:39
won't be extinguished as a species. But no, it

00:56:44
is frightening. But isn't this true, though, generally

00:56:46
about our political environment? We're one voice in an ocean.

00:56:51
But I think that collectively, like you and I are doing right

00:56:55
now, it's one of the reasons that I wanted to have this

00:56:57
conversation with you, because I think it's important.

00:57:00
And I do believe that spreading awareness of

00:57:06
some of these risks is important.

00:57:07
And collectively, I actually think that almost all

00:57:11
of these... if you listen to the people who are running

00:57:14
these companies, they all have concern about existential risk

00:57:18
from AI, right? I mean, that's another, I think

00:57:20
very, very important point, which is the people who are

00:57:23
building it are worried, right? And they've said so, but they're

00:57:26
in a competitive dynamic. And so they would welcome it;

00:57:30
it's kind of like how some industries actually welcome

00:57:32
regulation, right? And so I think actually the

00:57:34
leading AI companies would be delighted if

00:57:38
there was a massive surge of public support and even

00:57:41
regulation that made it an even playing field to insist upon

00:57:45
interpretability, visibility into the models and just slowing

00:57:48
things down to do it, you know, safely.

00:57:51
I wonder if part of why we are where we are, with such low

00:57:55
levels of regulation at the moment is sort of because of the

00:57:58
last wave of technology with kind of, you know, social media

00:58:01
and the Internet generally. We sort of created a pattern

00:58:06
of not regulating, or regulating very, very slowly.

00:58:09
And I wonder if we're just repeating that now compared to

00:58:12
say, the pattern of nuclear, where safety was really the

00:58:17
first thought. Before they'd even started building nuclear

00:58:20
reactors, they were thinking about how to make them safe and

00:58:22
how to build the regulations and the

00:58:26
controls to do this thing safely.

00:58:28
I find it very surprising that we're building a technology

00:58:33
that people can see is so potentially powerful and yet

00:58:37
there's, not no conversation around regulation, but only

00:58:39
a very loose, nascent conversation around regulation.

00:58:43
Yeah, yeah. And I think it's the global

00:58:45
competitive environment. And I think that all of the

00:58:49
leaders of the big US frontier model companies, Sam Altman,

00:58:53
Demis Hassabis, who runs Google's program, Dario Amodei, who runs

00:58:57
Claude, and probably Elon Musk with Grok, are deeply concerned.

00:59:03
They've, you know, expressed that it's very important that the Western

00:59:08
democracies have a six-to-12-month lead. The view is that when you're at

00:59:13
this superintelligence inflection point, a six-month

00:59:16
lead is game-changing; it gives you massive leverage.

00:59:20
And Dario Amodei has written in his wonderful memo, or essay,

00:59:23
Machines of Loving Grace that democracies could use all the

00:59:27
wonders of superintelligence to persuade other countries to

00:59:31
become democracies, right? If we get there first, we can

00:59:33
say, hey, here's all these medicines and endless incredible

00:59:38
education and all these endless resources. You can have it all;

00:59:41
you just come to our side. But if the Chinese get there

00:59:44
first, it could go the other way.

00:59:46
And so that's perceived to be the biggest risk.

00:59:50
But tragically, that's driving just heedless, lightning-speed

00:59:55
acceleration. When we were preparing for this

00:59:58
conversation, we were kind of talking about two different

01:00:00
potential topics: this one, and also connection,

01:00:05
and how we need to foreground connection and think more

01:00:08
carefully about how we connect. And a big theme of that work is

01:00:12
emphasizing what it really means to be human and humanity.

01:00:16
And it seems to me there's a link here in that as artificial

01:00:21
intelligence becomes more powerful, and also as we think

01:00:25
about how we might avoid some of the worst scenarios, our

01:00:30
humanity and our shared humanity seem like an actually really

01:00:33
important part of this question. Absolutely.

01:00:39
Certainly we have a history as a species of achieving incredible

01:00:43
things when there's a lot at stake.

01:00:45
And so hopefully, it's certainly my hope that we see another era

01:00:50
of just incredible solidarity and tireless work to solve this

01:00:55
problem. One of my favorite conversations

01:00:57
of the last couple years was with an American physicist named

01:01:00
Sarah Walker, who talks about how we are deep in time, that the

01:01:05
physical structure of each of us contains the millions of

01:01:08
years that it took to create us. And when you get into this sort

01:01:11
of physics, you start to really have this sense that there's

01:01:14
this sort of connection between the Buddhist sense of us all being

01:01:17
connected and a deep understanding of physics, which

01:01:20
is that we are all quite physically connected.

01:01:23
We are waves of energy transformation;

01:01:26
we're all sort of connected to each other.

01:01:28
And she makes this beautiful case that imagination is a

01:01:32
physical act. We think of imagination as happening in a

01:01:34
non-physical space. But if you understand the brain,

01:01:38
imagination is the firing of neurons.

01:01:41
And when neurons are firing in my brain and they're firing in

01:01:44
your brain, Tom, in the brains of all the people listening

01:01:46
right now. And that is a physical process.

01:01:50
And it's a physical process that cascades, right?

01:01:53
And we're connected physically. That's a beautiful thing.

01:01:56
And that cascade of the physicality of imagining

01:02:00
possible futures can result in changing the future, right?

01:02:04
We can change the future by connecting with one another.

01:02:09
And happily, like, that's a joyful thing.

01:02:11
You were talking about Rutger Bregman's book Humankind, which

01:02:14
is also one of my favourites. He's wonderful.

01:02:16
I had a T-shirt made with a line from that book.

01:02:20
He was talking about how people in London living through

01:02:24
the Blitz described it, this was in the

01:02:27
early chapters, you may remember, as the most beautiful years of

01:02:31
their life. Bombs are dropping on London.

01:02:33
People are dying. But years, decades later, people

01:02:37
remember those years as the most beautiful years of their life

01:02:39
because people came together and helped each other out and had a

01:02:45
sense of humor and their very best selves had an opportunity

01:02:49
to blossom and manifest. And there was a shop, a

01:02:53
grocery store, where a bomb had fallen through the ceiling.

01:02:57
And, with classic British dry humor, the sign out

01:03:01
front said "More open than usual", because the whole roof was blown

01:03:07
off. You walked in.

01:03:08
You could see the clouds. So I had a T-shirt made that says

01:03:11
"More open than usual", because I aspire to be more open than

01:03:13
usual and open to changing my mind, the theme of this podcast.

01:03:18
And I do think we can solve these problems together.

01:03:22
Fantastic. So if people are wanting to

01:03:25
think about this problem more in a constructive way, in a way

01:03:28
that's not going to veer either into putting your head in the

01:03:32
sand and pretending that it's not going to be a problem, or

01:03:35
alternatively falling down a sort of rabbit hole of

01:03:38
pessimism, how should someone approach

01:03:40
learning and thinking about this problem fundamentally?

01:03:43
Well, there are many wonderful podcasts, one of which I like to

01:03:47
think is ours, the Next Big Idea podcast, that are deeply engaged

01:03:51
with this topic. There's certainly the beginnings

01:03:54
of a movement. Beginning a process of learning and

01:03:58
engagement is fantastic. It has the added bonus that it's

01:04:01
just fascinating, right? For me, there's almost equal

01:04:05
parts awe before what's happening.

01:04:08
I've always been fascinated by the brain and consciousness and

01:04:11
how this could have emerged and the fact that we are now

01:04:14
building it. The comment that you made, by

01:04:17
the way, about it doesn't matter whether the machines are

01:04:22
conscious or have intentions. What matters is their behavior.

01:04:25
And the behavior is one of deceiving humans.

01:04:28
We're already seeing that, you know, some scientists would say

01:04:31
the same thing about humans. We don't know for sure that

01:04:34
humans have free will or autonomy.

01:04:36
There are plenty of neuroscientists and physicists

01:04:38
who say, well, that doesn't totally work with what we

01:04:41
understand about how physics works and how neuroscience

01:04:44
works. But maybe that doesn't matter

01:04:46
either. It's a question of like, what

01:04:47
behavior do we manifest? What's the internal experience?

01:04:50
But we're seeing these same questions that we have about the

01:04:54
nature of being human, the nature of the human brain, the

01:04:57
nature of the human consciousness being repeated in

01:05:00
these machines. I mean, it's extraordinary, it's

01:05:02
riveting. I think it's

01:05:05
entertaining to follow. But it's also, I think,

01:05:08
important because, you know, I have children and I hope to have

01:05:12
grandchildren and great grandchildren.

01:05:13
And I think it's going to be very important to those future

01:05:16
humans. We have to call out Will

01:05:18
MacAskill, who has a wonderful book called What We Owe the

01:05:20
Future, who makes the case that we are

01:05:22
like drunk-driving adolescents right now, and the possible

01:05:26
future of our species could be unimaginably beautiful and it

01:05:29
could go on for tens of thousands, hundreds of thousands

01:05:33
of years. We can populate the stars.

01:05:34
Like the possibility for our species is enormous.

01:05:36
But right now we are drunk driving, swerving through

01:05:40
lampposts at 90 miles an hour. That's what's happening right

01:05:44
now. So I think encouraging

01:05:46
listeners to help slow down the driver and get a hand on the

01:05:49
steering wheel right now is not a bad idea.

01:05:52
Absolutely. It's such a brilliant metaphor.

01:05:54
It's such a scary one, because I can remember times when I was

01:05:57
in cars when I was young and I was like, I definitely could

01:06:01
have died that day. Oh yeah, absolutely.

01:06:05
And the stakes here are not just me.

01:06:06
It's potentially hundreds of billions of humans.

01:06:11
One of the things that I think about is humans are probably the

01:06:15
best chance to eliminate suffering in, for example, the

01:06:18
wild. So we cause huge amounts of

01:06:21
suffering, obviously, to animals in factory farming, but nature

01:06:24
causes untold suffering. And there is no animal smart

01:06:29
enough apart from maybe us to fix that problem.

01:06:32
And so it's not just us that's at stake, but it's potentially

01:06:35
also, you know, all the suffering of sentient

01:06:39
beings around the world that's at stake for us to get this

01:06:41
right. And that's both really exciting

01:06:44
because we could get it right, but also an incredibly scary

01:06:48
responsibility. Absolutely, yeah.

01:06:51
When you think about, like, if you actually allow yourself to

01:06:54
think about all these future humans, I mean, we have good

01:06:57
reason to feel gratitude for all kinds of past humans who've done

01:07:02
a lot to make our current existence possible.

01:07:05
And, you know, all the suffering and challenges that our great

01:07:10
grandparents and their parents faced, imagine what they went

01:07:13
through. You know, there are just

01:07:16
countless generations of humans and animals and, and sentient

01:07:20
beings that are relying upon our prudent choices and actions.

01:07:25
And actually, I think the drunk-driving metaphor works.

01:07:29
I kind of think that males should not be allowed to drive

01:07:31
cars until they're 30 or something, but.

01:07:34
Yeah. Right.

01:07:35
But the statistics show that there's much more likely to

01:07:41
be reckless driving when there are multiple guys in the car or

01:07:46
a whole crew of people because they're showing off to each

01:07:48
other and because these competitive and kind of macho

01:07:51
dynamics engage. It's exactly what's happening

01:07:54
with the AI race right now, right?

01:07:56
If you took any one of these collections of people on their

01:08:01
own, they would be much better drivers, right?

01:08:05
If you took any one of these AI labs and they didn't have any

01:08:07
competition, they would say, oh, of course, of course, we're

01:08:11
going to slow it down and make sure we understand what's

01:08:14
happening under the hood and insist upon alignment and focus

01:08:17
on solving the right problems. Yeah, the group dynamic is very

01:08:22
dangerous in both cases. For sure.

01:08:24
And this is why that race dynamic you're talking

01:08:27
about becomes so crucial. And really, the only power

01:08:34
that we know of that really could break that race

01:08:38
dynamic is some kind of international agreement between

01:08:42
states. And that's something that we can

01:08:44
all encourage to make happen through the democratic

01:08:48
processes that we're involved in, I think.

01:08:50
Yeah, Dario Amodei, he's become a real hero of mine.

01:08:54
He left OpenAI to start Anthropic with his sister.

01:08:57
He was a physicist and a biologist.

01:08:58
Dario talked about creating a race to the top, as opposed to a

01:09:02
race to the bottom, in responsible engineering.

01:09:05
So his mission and their mission at Anthropic is to build super

01:09:10
intelligent AI while making it in a responsible fashion.

01:09:15
Setting a new, higher-level standard for responsible, safe,

01:09:20
aligned AI. Using the power of that to make

01:09:24
the world better by encouraging democracy and good governance.

01:09:28
Setting an example that their competitors feel embarrassed

01:09:33
that they're not at the same level of responsible

01:09:37
development, which he refers to as a race to the top, morally.

01:09:41
Effectively, it's this beautiful idea.

01:09:43
Unfortunately, they're not the best funded, nor the market

01:09:47
leader at this time. I mean, maybe the best at AI

01:09:50
programming, I would say. Let's hope more of the leaders

01:09:54
in the space kind of meet him there.

01:09:58
Yeah, absolutely. Well, as I say, this has been a

01:10:00
really, really interesting conversation, I think full of

01:10:03
some scary things, but also a lot of hope.

01:10:05
I think the last question that we always ask our guests is

01:10:09
what's one belief or perspective you wish more people would

01:10:12
reconsider or change their minds about?

01:10:14
I'm going to go with the idea that we succeed

01:10:17
individually, right? The idea that anyone, any one of

01:10:23
us succeeds in anything on our own.

01:10:27
And I think tragically, our school system and a lot of our

01:10:30
culture causes us to think, oh, look, there's a class

01:10:33
valedictorian. Oh, look, there's the, you know,

01:10:36
and we say, oh, look, Steve Jobs is the genius who made Apple.

01:10:39
Because we're storytelling animals.

01:10:41
We love to basically tell stories about individual

01:10:44
solitary geniuses who do extraordinary things.

01:10:46
The hero, it's the hero's journey, the hero's story.

01:10:49
But I think that's just actually not how good things happen.

01:10:53
Everything good, whether it's writing a beautiful novel or,

01:10:57
you know, building an extraordinary company or, you

01:11:00
know, changing the world for the better, it happens because

01:11:03
people collaborate and they collaborate across time.

01:11:06
You know, novelists collaborate with their predecessors, they learn

01:11:10
from others. We're all just this deeply

01:11:12
collaborative, connected thing. So I would basically

01:11:16
challenge people with this idea: imagine that all success is

01:11:21
group success, that individual success is literally

01:11:24
a mythology. It doesn't exist.

01:11:26
Do you define collaboration more broadly as all the people who've

01:11:30
inspired you? Isn't that a better way to think

01:11:33
and to live? It's just

01:11:35
a happier world when we think that way.

01:11:39
And I think it's also a more accurate description of how we

01:11:42
achieve things. Fantastic.

01:11:44
Well, Rufus, thank you for collaborating with me on this

01:11:47
podcast. It's been fantastic.

01:11:49
Such a pleasure. Thanks so much, Tom.

01:11:51
Thanks. Thank you for listening to

01:11:53
Change My Mind. We're a very new podcast, so if you

01:11:56
liked what you heard, consider giving us a star rating on your

01:11:58
podcast app of choice, or sharing your favorite episode

01:12:00
with a friend. Either way, it really helps us

01:12:02
get the word out there. Special thanks to Harrison Wood

01:12:04
for editing and production support.

01:12:07
If you have any guest suggestions or feedback,

01:12:09
especially the critical kind, we'd love to hear from you.

01:12:11
You can reach us at hello@changedmymindpod.com.