
Contra Scott Alexander on AI Safety Arguments

In this thought-provoking video, Malcolm and Simone Collins offer a detailed response to Scott Alexander's article on AI apocalypticism. They analyze the historical patterns of accurate and inaccurate doomsday predictions, providing insights into why AI fears may be misplaced. The couple discusses the characteristics of past moral panics, cultural susceptibility to apocalyptic thinking, and the importance of actionable solutions in legitimate concerns. They also explore the rationalist community's tendencies, the pronatalist movement, and the need for a more nuanced approach to technological progress. This video offers a fresh perspective on AI risk assessment and the broader implications of apocalyptic thinking in society.

Malcolm Collins: [00:00:00] I'm quoting from him here, okay? One of the most common arguments against AI safety is, here's an example of a time someone was worried about something, but it didn't happen.

Therefore, AI, which you are worried about, also won't happen. I always give the obvious answer. Okay. But there are other examples of times someone was worried about something and it did happen, right? How do we know AI isn't more like those?

So specifically, what he is arguing against is the claim that every 20 years or so you get one of these apocalyptic movements, and this is why we're discounting this movement. And this is how he ends the article, so people know this isn't an attack piece, this is what he asked for in the article. He says: conclusion, I genuinely don't know what these people are thinking.

I would like to understand the mindset of people who make arguments like this, but I'm not sure I've succeeded. What is he missing, according to you? He is missing something absolutely giant in everything that he's laid out.

And it is a very important point and it's very clear from his write up that this idea had just never occurred to him.

[00:01:00] Would you like to know more?

Malcolm Collins: Hello, Simone. I am excited to be here with you today. We today are going to be creating a video reply slash response to an argument that Scott Alexander, the guy who writes Astral Codex Ten, or Slate Star Codex, depending on what era you were introduced to his content in, wrote about arguments against AI apocalypticism. What those arguments are based around will be clear when we get into the piece, because I'm going to read some parts of it. But one thing I should note:

this is not a "Scott Alexander is not smart" or anything like that piece. We actually think Scott Alexander is incredibly intelligent and well meaning. He is an intellectual who I consider a friend and somebody whose work I enormously respect. And I am creating this response because the piece is written in a way that actively requests [00:02:00] a response.

It's like: why do people believe this argument when I find it to be so weak? One of those "what am I missing here?" kind of things.

And I like the way he lays out his argument, because it's very clear that, yes, there's a huge thing he's missing. It's clear from his argument and the way that he thought about it that he's just literally never considered this point, and that's why he doesn't understand this argument.

So we're going to go over his counter argument and we're going to go over the thing that he happens to be missing. And I'm quoting from him here, okay? One of the most common arguments against AI safety is, here's an example of a time someone was worried about something, but it didn't happen.

Therefore, AI, which you are worried about, also won't happen. I always give the obvious answer. Okay. But there are other examples of times someone was worried about something and it did happen, right? How do we know AI isn't more like those? The people I'm arguing with always seem [00:03:00] so surprised by this response, as if I'm committing some sort of betrayal by destroying their beautiful arguments.

So specifically, he is arguing against the form of apocalypticism argument that, when we talk about it, sounds like our argument against AI apocalypticism: every 20 years or so you get one of these apocalyptic movements, and this is why we're discounting this movement. Okay. And I'm going to go further with his argument here. So he says, I keep trying to steel man this argument. So keep in mind, he's trying to steel man it; this is not us saying it, he wants it steel manned, okay? I keep trying to steel man this argument and it keeps resisting my steel manning. For example, maybe the argument is a failed attempt to gesture at a principle of, quote, most technologies don't go wrong, but people make the same argument with things that aren't technologies, like global cooling or overpopulation.

Maybe the argument is a failed attempt to gesture at a principle of, quote, the world is never destroyed, so [00:04:00] doomsday prophecies have an abysmal track record, end quote. But overpopulation and global cooling don't claim that no one will die, just that a lot of people will, and plenty of prophecies about mass death events have come true.

E.g., the Black Plague, World War II, AIDS. And none of this explains coffee. So there's some weird coffee argument that he comes back to that I don't actually think is important to understanding this, but I can read it if you're interested. I'm sufficiently intrigued. Okay. People basically made the argument of: once, people were worried about coffee, but now we know coffee is safe, therefore AI will also be safe.

Which is to say there was a period where everyone was afraid of coffee, and there was a lot of apocalypticism about it, and there really was. Like people were afraid of caffeine for a period. And the fears turned out wrong. And then people correlate that with AI. And I think that is a bad argument.

But the other type of argument he's making here, so you can see, and I will quote a [00:05:00] final framing from him here that I think is a pretty good summation of his argument: there is at least one thing that was possible, therefore superintelligent AI is also possible. And an only slightly less hostile reframing.

So that's the way that he hears it when people make this argument: there is at least one thing that was possible, therefore superintelligent AI is also possible, and safe, presumably, right? Because the "one thing" was past technologies that we're talking about. And then he says, in an only slightly less hostile rephrasing: people were wrong when they said nuclear reactions were impossible,

therefore they might also be wrong when they say superintelligent AI is impossible. Conclusion: I genuinely don't know what these people are thinking. And then he says, I would like to understand the mindset.

So this is how he ends the article, so people know this isn't an attack piece, this is what he asked for in the article. He says, conclusion, I genuinely don't know what these people are thinking.

I would like to understand the [00:06:00] mindset of people who make arguments like this, but I'm not sure I've succeeded. The best I can say is that sometimes people on my side make similar arguments, e.g. the nuclear chain reaction one, which I don't immediately flag as dumb, and maybe I can follow this thread to figure out why they seem tempting sometimes.

All right, so great. What is he missing, according to you? Actually, I'd almost take a pause moment here to see if our audience can guess, because he is missing something absolutely giant in everything that he's laid out. There is a very logical reason to be making this argument, and it is a point that he is missing in everything that he's looking at.

And it is a very important point and it's very clear from his write up that this idea had just never occurred to him.

Simone Collins: Is this the Margaret Thatcher Irish terrorists idea?

Malcolm Collins: No. Okay, can you think: if I was trying to predict [00:07:00] the probability of a current apocalyptic movement being wrong, what would I use in a historic context?

And I usually don't lay out this point because I thought it was so obvious. And now I'm realizing that to even fairly smart people, it's not an obvious point.

Simone Collins: I have no idea.

Malcolm Collins: People historically have sometimes built up panics about things that didn't happen. And then sometimes people have raised red flags, as outliers, about things that did end up happening. What we can do to find out whether the current event is just a moral panic or actually a legitimate panic is to correlate it with historical circumstances: figure out what things the historically accurate predictions had in common, and what things the pure moral panics had in common.

Simone Collins: So what are examples of past [00:08:00] genuine apocalypses? So, like the plague, what else?

Malcolm Collins: Yes, so I went through, and we'll go through examples of... yeah. So it's history time

Simone Collins: with Malcolm Collins.

Malcolm Collins: It's history time with Malcolm Collins. Times when people actually predicted the beginnings of something that was going to be huge.

And then times when... and hold on, I should actually word this a bit differently.

Simone Collins: Ooh, the Industrial Revolution. That's a good one.

Malcolm Collins: Simone, we'll get to these in a second, okay? The point being, I want to better explain this argument to people, because people may still struggle to understand the really core point that he's missing.

Historically speaking, people made predictions that things which were beginning to happen in their times would become huge, apocalyptic events in the future, ending in mass deaths. From today's perspective, because we now know which of those predictions fell into which category, we can correlate [00:09:00] the ways that these communities acted, the features of the predictions, and the types of things they were making predictions about, to find out whether somebody today who is making a prediction about some current trend leading to mass deaths

is going to fall into the camp of false predictions or accurate predictions, by correlating it with the historic predictions. Yeah. And I think, because I don't think he's a dumb person, he should have thought of this. Like, I genuinely think this is not a weird thing to think about.

I think the reason he hasn't thought about it is that he's so on the side of AI apocalypticism being something we should focus on that he just hasn't thought about disconfirming arguments. And when you begin to correlate AI apocalypticism with historic apocalyptic movements, it fits incredibly snugly into the false fear category.

So let's go into the historic predictions, okay? So the times when they were accurate, all [00:10:00] right, were predictions around the Black Plague, predictions around World War II, predictions around AIDS, predictions around DDT, predictions around asbestos, predictions around cigarette smoking, Native American warnings around Europeans. Okay. All apocalyptic predictions which ended up becoming true. Now let's go through all of the ones that were incorrect, that developed freaked-out, almost religious communities around them. The hunt for the Higgs boson: people thought the collider experiments would cause a black hole. I remember that.

Yes. The Industrial Revolution. That was a huge one right there. I don't know, that did precipitate the beginning of demographic collapse. It wasn't a problem for the reason people thought it was a problem. Okay. They thought no one would have any jobs anymore. That was the core fear in the Industrial Revolution, if you remember, and we'll get more into that.

The speed of trains. We can get to more on the Industrial Revolution if you want to park it as an edge case. The reading panic. A lot of people don't know there [00:11:00] was a reading panic: everyone thought that reading would destroy people, that all of these young women were becoming addicted to reading, very much in the way that we today... yeah, there was this fear.

It was called reading madness. A girl would get really into reading, and today we just call that being a nerdy young woman. Video game violence. Yeah. Then this is one I didn't know about, because I went in to try to create as comprehensive a list of all these as possible: the telegraph.

Critics believed that the telegraph would lead to a decline in literacy, destroy the postal service, and contribute to moral decay. Oh! It did destroy the postal service eventually, but amazing. Anyway, radio. Critics warned that radio would lead to a decline in literacy, encourage mindless entertainment, and foster a culture of violence.

So for those that aren't aware, no, literacy has broadly risen since the radio was introduced. The printing press. There was significant fear that the printing press would spread heretical ideas and misinformation.

Simone Collins: Didn't it precipitate the Reformation?

Malcolm Collins: Yeah, so I guess the printing [00:12:00] press we can put in the maybe category.

Legit.

Simone Collins: Come on.

Malcolm Collins: Legit. Yeah. The spinning wheel... no, not really. The printing press really only moved things forward. The people who were afraid of the printing press... we liked the

Simone Collins: Reformation, but it still did cause it. It was the

Malcolm Collins: beginning of... no, but this doesn't fall into the category of false predictions.

Simone Collins: Oh.

Malcolm Collins: This is like a fascist saying: I'm afraid that other people might have access to information, which has given me power. That's not a false prediction. The spinning wheel: this was in the 13th century. People thought the spinning wheel would lead to the collapse of civilization. Then there was coffee, when it was introduced to Europe in the 16th century.

As I said, it was met with suspicion and resistance. Some religious and political leaders feared that coffee houses would become centers of political dissent and immoral behavior. In 1674, the Women's Petition Against Coffee in England claimed that coffee made men impotent and was a threat to family life.

And yeah. So what do these things have in common, now that we have categorized them into these [00:13:00] two groups? And I think there are very loud things about the accurate predictions that you almost never see in the inaccurate predictions, and very loud things amongst all the inaccurate predictions that you never see in the accurate predictions.

I think that these two categories of predictions actually look very different, okay? The things that turned out to be moral panics versus the things that turned out to be accurate predictions of a future danger. People were already dying in small ways.

Simone Collins: With the real ones.

Malcolm Collins: Yes, every single time it has been an accurate prediction.

Whether it's AIDS or the others: the small batches of people dying, it's a sign shit's about to go down. It's a sign. Yeah, but we haven't had a single AI turn rogue and start murdering people yet. We've had machines break in factories, I think a robotic arm accidentally killed someone in a Tesla factory, but it wasn't malicious.

It wasn't like trying to kill the person.

Simone Collins: Yeah, there are factory deaths all the time, and of course fewer today than ever before probably.

Malcolm Collins: Yeah, this marks it clearly in the moral panic category. [00:14:00] Okay. The ones that turned out to be wrong are very often tied to a fear of people being replaced by machines.

Simone Collins: Yeah, technology. It seems the biggest theme is: this new

Malcolm Collins: invention is going to ruin everything. So historically, we've seen that has never happened. Or cultural destruction, that's the other thing that's often claimed, which is also something we see around AI apocalypticism: fears around cultural destruction and jobs being taken.

And then here, people can be like: what, but jobs are being taken! Yes, but more jobs are created at the end of the day. That's what's always happened in a historic context. Yes, photography took jobs away from artists, but no one now sees photography as a moral evil or something like that.

 Here's another one. The fake ones, the ones that turn out to be wrong, are usually related to technology or science.

Simone Collins: Yeah.

Malcolm Collins: The ones that are right are usually, or actually always, related to medical concerns or geopolitical predictions.

Simone Collins: Yeah, what I was getting at is that it's an infection. Either of [00:15:00] people, outside groups like the Sea Peoples or Europeans or whatever, or literally a disease coming in. Yes.

Malcolm Collins: And here is the final nail in the coffin for his "you cannot learn anything from this": different cultural groups react to fake predictions with different levels of susceptibility and panic.

By that, what I mean is that if you look at certain countries and cultural backgrounds, they almost never have these moral panic freak-outs when the prediction is inaccurate.

Simone Collins: Okay. So you're saying, like, you look at China, and China's not shitting a brick about this thing.

Malcolm Collins: Yeah, China is not very susceptible to moral panic; most East Asian countries aren't.

So India isn't particularly susceptible, China isn't particularly susceptible, Japan isn't particularly susceptible, and South Korea isn't particularly susceptible. They just historically have not had these. And I remember I was talking with someone once, and they came up with some example of a [00:16:00] moral panic in China, and then I looked it up and it wasn't true.

So if you're like, no, here's some example of when this happened in China historically, like the Boxer Rebellion or something like that, I'm like, no, that was not a moral panic. That was a... or the Opium Wars. The Opium Wars were an actual concern about something.

Simone Collins: Yeah, it was a batches-of-people-dying issue, which is a real problem.

Malcolm Collins: So certain cultures are hyper susceptible to apocalyptic movements. Specifically, they spread really quickly within Jewish communities and within Christian communities; those are the two groups that are most susceptible to this. Yeah. Here's the problem. So we get the problem across the board here, which is that the places having the moral panics today around AI apocalypticism are 100 percent, and nearly exclusively, the communities that [00:17:00] were disproportionately

susceptible to incorrect moral panics on a historic basis: white Christians and Jews. You just don't see big AI apocalyptic movements in Japan or Korea or China or India. They're just not freaking out about this in the same way. And keep in mind, I've made the table very big here.

It's not like I'm just saying, oh, you're not seeing it in Japan. You're not seeing it in half the world, the half that's not prone to these types of apocalyptic panics. Okay. That is really big evidence to me. Okay, that's point one. Point two is that it has all of the characteristics of the fake moral panics, historically speaking, and none of the characteristics of the accurate panics, historically speaking. But I'm wondering if you're noticing any other areas where there is congruence in the moral panics that turned out accurate versus the ones that didn't.

Simone Collins: The biggest thing to me is [00:18:00] just invasion versus change. Like a foreign agent entering seems to be a bigger risk than something fundamentally changing from a technological standpoint, which is not what I expected you to come in with. So this is surprising to me.

Malcolm Collins: Yeah. Okay.

So if we were going to modify AI risk to fit into the mindset of the moral panics that turned out to be correct, the apocalyptic claims that turned out to be correct, you would need to reframe it. You'd need to say something like this, and this would fit the correct predictions, historically speaking: if we stop AI development and China keeps

on with AI development, China will use the AI to subjugate us and eradicate a large portion of our population. That would have a lot in common with the types of moral panic predictions that turned out accurate. AI will [00:19:00] take people's jobs, AI will destroy our culture, or AI will kill all people:

these feel very much like the historic incorrect ones.

Simone Collins: But I think you are underplaying something. Which is that while these technological predictions, Luddites freaking out about the Industrial Revolution, people freaking out about the printing press, did not lead to the fall of civilization as expected,

they did lead to fundamental changes. And AI will absolutely lead to fundamental changes in the way that people live and work.

Malcolm Collins: I don't argue that. Have we ever argued that AI is not going to fundamentally change human civilization? We have multiple episodes on this point, okay? We say it's going to fundamentally change the civilization, it's going to fundamentally change the economy, it's going to fundamentally change the way that we even perceive humanity and ourselves.

None of that is stuff that we are arguing against. We are arguing against the moral panic around AI killing everyone, and the need to delay AI advancement [00:20:00] over that moral panic.

Simone Collins: And that is fair. Fair.

Malcolm Collins: And the point here is that you can actually learn something by correlating historic events.

And it is useful to correlate these historic events to look for these patterns,

which I find really interesting. It makes sense: like with the Industrial Revolution, like with the spinning wheel, whenever you see something that is going to create an economic and sociological jump for our civilization, there is going to be a Luddite reaction movement to it.

Never historically has there been a technological revolution without some large Luddite reaction. And it's not even that weird, because if you look historically, Luddite movements often really spread well within the educated bourgeoisie that was non-working. That group just seems really susceptible to Luddite [00:21:00] panics.

But I can tell you what: growing up, I never expected the effective altruist community and the rationalists and the singularity community to become sort of Luddite cults.

Simone Collins: Like that, I never expected many so-called rationalists to turn to things like energy healing and crystals, but here we are.

Malcolm Collins: So here we are. That's why we need to create a successor movement.

And I really personally do see the pronatalist movement as that, because I look at the members of the movement, like at the pronatalist conference that we went to, and this is happening again this year: a huge chunk of the people were formerly in the rationalist community, disaffected rationalists, and the young people I met in the movement were exactly the profile of young person,

as I said, it's a hugely disproportionately autistic movement, who, when they were younger, or when I was younger, would have been early members in the rationalist EA movement. And so we just need to be aware of the susceptibility of these movements to, one, mystic grifters (like you had with, if people want to watch it, [00:22:00] our episode on the cult Leverage), or two, if they're not mystic grifters, forms of apocalypticism.

And I should note, for people who are like, "when you talk about the world fundamentally changing because of fertility collapse, how is that different from apocalypticism?": we have an episode on this if you want to watch it.

But the gist of the answer is: we predict things getting significantly harder, and economic turnover, but not all humans dying.

And this is actually really interesting from a historic perspective, in contrast with the wrong movements: the nature of our predictions says that if you believe this, you need to adopt additional responsibilities, in terms of the fate of the world, in terms of yourself.

Having kids is a huge amount of work. AI apocalypticism allows you to shirk responsibility, because you say: the world's going to end anyway, so I don't really need to do anything other than build attention, i.e., build my own reader base or attention network toward [00:23:00] myself, which is very successful from a memetic standpoint at building apocalyptic panic.

Because if somebody donates to one of our charities, 90 percent of the money needs to go to making things better. If you donate to an AI apocalypticism charity, most of the money is just going to advertising the problem itself. Which is why these ideas spread.

And that's also what you see historically with panics.

Simone Collins: My concern too is that a lot of these projects that have been funded as part of X-risk philanthropy, the only people consuming them are the EA community. So these things aren't reaching other groups. And we saw this also at one of the dinner parties we hosted.

One of our guests was the leader of one of the largest women's networks of AI developers in the world. And a bunch of other people there were literally working in AI alignment. This woman had never even heard the term AI alignment. These people working in AI alignment are not reaching out to people actually working in AI.[00:24:00]

They're also not reaching broader audiences. They're all in this echo chamber within the EA and rationalist community, and they're not actually getting reach. So even if I did believe in the importance of communicating this message, I wouldn't support this community, because they're not doing it.

Malcolm Collins: Yeah. What they need is to create a network that funds attractive young women to go on dates with people in the AI space, to just live in areas where they are and try to convince them of it as an issue. But they won't, because of a lot of the people in it. And here's another thing that I noticed that's cross-correlating between the two groups.

Simone Collins: Actually, I would love to see you apply for a grant with one of those X-risk funds of just: I will hire thirst traps to post on Instagram, to be on OnlyFans, and to just start

Malcolm Collins: living in cities, because there are [00:25:00] some cities where these companies are based, and where a lot of them are. For sure.

Simone Collins: And date them. Yes, for sure. But I just love this idea of using women.

Malcolm Collins: But here's the other thing that's cross correlated across all of the incorrect panics historically, which I find very interesting. And I didn't notice them just now. Every one of the correct panics had something specific and actual that you could do to help reduce the risk.

Whereas all of, almost all of the incorrect moral panics, the answer was just stop technological progress. That's how you fix the problem. So if you look at the correct moral panics. Black Plague, World War II, AIDS, DDT, asbestos, cigarette smoking, Native American warnings about Europeans. In every one of those, there was like an actionable thing that you needed to do, like DDT, go start doing removal go, don't have it sprayed on as many crops, AIDS, oh Safer sex policies, stuff like that.

However, if you look at the incorrect things, what are you looking at? [00:26:00] Like the splitting of the Higgs. You just need to stop technological development. Industrial revolution. You just need to stop technological development. The speed of trains, you just need to stop technological development. Greeting panic.

You just need to stop technological development. Radio, you just need to stop technological development. printing press. You just need,

Simone Collins: An important point with all of these: you could argue, actually, that this was an issue with nuclear as well. In fact, this discussion was had with nuclear. There was this one physicist who, one, believed that nuclear wouldn't be possible, but, two, also was very strongly against censorship, because a lot of people were saying: we have to stop this research, it's too dangerous.

And he just strongly believed that you should never, ever censor things in physics, that it's not acceptable. And then we did ultimately end up with nuclear weapons, and that is a real risk for us. But I think the larger argument with technological development is that someone's going to figure it out.[00:27:00]

And to a certain extent, it's going to have to be an arms race. And you're going to have to hope that your faction develops this and starts to own the best versions of this tech in a game of proliferation before anyone else. There's no way around it: if you don't do it, someone else will.
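A minimal sketch in Python of the correlation heuristic being described here, assuming invented trait names and weights purely for illustration (none of these features or numbers come from the episode; they just paraphrase the traits discussed above):

```python
# Minimal sketch of the heuristic discussed in this episode: score a new
# prediction against traits attributed to historically accurate warnings
# vs. historically incorrect moral panics.
# All trait names and weights are invented for illustration only.

ACCURATE_TRAITS = {
    "people_already_dying_in_small_numbers": 2,  # "small batches of people dying"
    "medical_or_geopolitical_domain": 1,         # e.g. AIDS, World War II
    "specific_actionable_mitigation": 1,         # e.g. safer sex policies, limiting DDT
}

PANIC_TRAITS = {
    "fear_of_machines_replacing_people": 1,      # e.g. the Industrial Revolution
    "fear_of_cultural_destruction": 1,           # e.g. the printing press, radio
    "technology_or_science_domain": 1,
    "only_remedy_is_halting_progress": 2,        # "just stop technological development"
    "spreads_mainly_in_panic_prone_cultures": 1,
}

def classify(features: set[str]) -> str:
    """Return which historical category a prediction more closely resembles."""
    accurate_score = sum(w for trait, w in ACCURATE_TRAITS.items() if trait in features)
    panic_score = sum(w for trait, w in PANIC_TRAITS.items() if trait in features)
    return ("resembles accurate warnings" if accurate_score > panic_score
            else "resembles moral panics")

# AI apocalypticism, as characterized in this episode:
print(classify({
    "fear_of_machines_replacing_people",
    "fear_of_cultural_destruction",
    "technology_or_science_domain",
    "only_remedy_is_halting_progress",
    "spreads_mainly_in_panic_prone_cultures",
}))  # -> resembles moral panics
```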

Malcolm Collins: Yeah. And that's the other thing; now, I haven't gone into this, because it isn't what the video is about.

But recently, I was trying to understand the AI risk people better as part of Lemon Week, where I have to engage really heavily with steelmanning an idea I disagree with. And one of the things I realized was a core difference between the way I was approaching this intellectually and the way they were: I just immediately discounted

any potential timeline where there was nothing realistic we could do about it.

An example here would be in a timeline where somebody says, AI is an existential risk, but we can remove that risk by getting all of the world's governments to band [00:28:00] together and prevent the development of something that could revolutionize their economies.

Does that ever happen? No, it's just stupid. It's a stupid statement. Of course we can't do that. And if we live in a world where, because we can't do that, AI kills us in every timeline, I don't even need to consider that possibility. It's not meaningful on a possibility graph, because there's nothing we can do about living in that reality.

Therefore, I don't need to make any decisions under the assumption that we live in that reality. It's a very relaxing reality. Yeah. And that's what gets me: I realized that they weren't immediately discounting impossible tasks, whereas I always do. Like when people are like, you could fix pronatalism if you could give a half-million grant to every parent,

I'm like: cool, but we don't live in that reality, so I don't consider that. They're like, yeah, government policy interventions could work, you need a half million. And people are like, technically we could economically afford it. And I go: yes, but in no realistic governance scenario could you get [00:29:00] that passed in anything close to the near future. I think it's just an issue of how I judge timelines to worry about and timelines not to worry about. Which is interesting. Anyway, love you to death. It'd be interesting if Scott watches this. We chat with Scott, I'm friendly with him, but I also know that he doesn't really consume YouTube.

So I don't know if this is something that will get to him, but it's also just useful for people who watch this stuff. And if you are not following Scott's stuff, you should be, or you are out of the cultural zeitgeist. That's just what I'm going to tell you. He is certainly still a figure that is

more well respected than us as an intellectual. And I think he is a deservingly respected intellectual, and I say that about very few living people. Yeah, I know very few living intellectuals where I'm like: yeah, you should really respect this person as an intellectual, because they have takes beyond my own occasionally.

Simone Collins: Yeah. He is wise. He is extremely well read. [00:30:00] He is extremely clever, and surrounded by incredibly clever people. And then, beyond that, I would say he disagrees with us on quite a few things. So we have a lot to learn from him.

Malcolm Collins: Actually, question, Simone: why do you think he didn't consider what I just laid out, what I think is a fairly obvious point, that you should be correlating these historical movements?

Simone Collins: I just think that you have a way of looking at things in an even more cross-disciplinary and first-principles way than he does sometimes. You both are very cross-disciplinary thinkers, which is one reason why I like both of your work a lot. But I think in the algorithm of cross-disciplinary thinking, he gives a heavier weight to other materials, and you give a heavier weight to just first-principles reasoning, and that's how you come to [00:31:00] reach these different conclusions.

Malcolm Collins: Yeah, I'd agree with that. And I also think there's another thing he gives a slightly heavier weight to, and it's where I disagree with him most frequently: things that are culturally normative in his community.

Simone Collins: No, actually, you are very similar in that way, in that your opinion is highly colored by recent conversations you've had with people and recent things you've watched. So it's something that both of you are subject to. I would say that maybe you're even more subject to it than he is, because you interact with people less than he does on a regular basis.

True, he's much more social than us. He's much more social than you. But you are extremely colored by what you're exposed to. So you're not exempt from this, it's true.

Malcolm Collins: Yeah, actually, I would definitely admit that a lot of the trans stuff recently is just because I've been watching lots and lots of content in that area, which has caused YouTube to recommend more of it to me, which has caused sort of a loop on the topic.

Historically, I wouldn't have cared about it that much.

Simone Collins: [00:32:00] One thing I'll just end with, though, and I'm still not even finished reading this: Leopold Aschenbrenner, I don't know actually how his last name is pronounced, but he is in the EA, X-risk world. I think he's even pronatalist.

Malcolm Collins: No, he is. He's famously one of the first people to talk about pronatalism; he just never put any money into it, even though he was at FTX's Future Fund.

Simone Collins: He published a really great piece on AI that I am now using as my mooring point for helping me think through the implications of where we're going with AI. Seeing how steeped he is in that world, and how well he knows many of the people working on the inside of it, getting us closer to AGI, I think he's a really good person to turn to in terms of his takes.

I think that they're better moored in reality, and they're also more practically oriented. He wrote this thing called Situational Awareness: The Decade Ahead. You can find it at situational-awareness.[00:33:00]ai. And if you look at his Twitter, if you just search Leopold Aschenbrenner on Twitter, it's his Twitter URL link.

He's definitely promoting it. I recommend reading that. In terms of the conversation that I wish we were having about AI, he sets the tone of what I wish we were talking about: how we should be building out energy infrastructure, and the immense security loopholes and concerns that we should be having about, for example, foreign actors getting access to our algorithms and weights and the AI that we're developing right now, because there's very little security around it.

So yeah, I think that people should turn to his write-up.

Malcolm Collins: That's a great call to action. And I was just thinking, I had another idea as to why maybe I recognized this when he didn't, because this is very much like me asking: why did somebody smarter than me, or who I consider smarter than me, not see something that I saw as really obvious, and not [00:34:00] include and discount it in his piece?

Of course you would cross-correlate the instances of success with the instances of failure in these predictions. I suspect it could also be that my entire worldview and philosophy, and many people know this from our videos, comes from a memetics-first perspective. I am always looking at the types of ideas that are good at replicating themselves and the types of ideas that aren't good at replicating themselves when I am trying to discern why large groups act in specific ways or end up believing things that I find off or weird, like: how could they believe that?

And that led me to, in my earliest days, become, as I've mentioned, really interested in cults. How do cults work? Why do religions work? How do people convince themselves of stuff that to an outsider seems absurd? And so when I am looking at any idea, I am always seeing it through this memetic lens first.

And I think when he looks at ideas, he doesn't [00:35:00] first filter them through "memetically, why would this idea exist?" before he looks at the merits of the idea. Whereas I often consider those two things as of equal standing, to help me understand how an idea came to exist and why it's in front of me.

And I don't think that he has this second obsession here. And I think that's probably...

Simone Collins: Maybe. Yeah.

Malcolm Collins: Yeah.

Simone Collins: But I like it when people come to different conclusions, because it's always somewhere in between that I find the value.

Malcolm Collins: I don't know if that's helpful. I actually think that's an unhelpful way to look at things.

I think you shouldn't look for averages.

Simone Collins: I find stuff... I think when you look at what is different, you find interesting insights. It's not an average of the two. It's not a mean, a median, or a mode. It is unique new insights. It's more about emergent properties of the elements of disagreement that yield entirely new and [00:36:00] often unexpected insights, not something in between, not compromise.

Malcolm Collins: You are a genius, Simone. I am so glad. As the comments have said, you're the smarter of the two of us, and I could not agree more. And I will bring that up every time now, because I know this drives you nuts.

Simone Collins: You know that you're the smarter one. Even our polygenic scores for intelligence show that you're the smarter one.

Malcolm Collins: Yeah, we went through our polygenic scores recently, and one of the things I mentioned in a few other episodes is that I have the face of somebody who, you know, when they were biologically developing, was in a high testosterone environment. When contrasted with Andrew Tate, like that's where I often talk about it, is he has the face of somebody who grew up in a very low testosterone environment.

Believe it or not, when I was going through the polygenic markers, I came up 99 percent on testosterone production, in the top 1 percent of the population in terms of just endogenous testosterone production. So yeah, of course, when I was developing, I was just flooded in this stuff.

That's why I look like [00:37:00] this.

Simone Collins: And 1 percent on pain tolerance, where I was at 99

Malcolm Collins: percent for pain tolerance? That would explain so much. No, I like it: being high testosterone, but actually feeling pain and just being like, nah, not going to engage in those scenarios.

Yeah, it's probably a good mix: noping out of there. It's a good mix of being tough, but noping out the moment it becomes dangerous. Yes.

Simone Collins: High risk, but good survival instinct. Very good. Yeah. Especially because you also have fast-twitch muscle, which I don't. When you nope out of a place, you do it real fast.

Malcolm Collins: We joke about me being able to BAMF out of a situation, like Nightcrawler; whenever something dangerous happens, I'm 20 feet away somewhere else.

Yeah. Like, I turn and he's just gone, and a car is hurtling toward me.

You are so slow. You actually remind me of a [00:38:00] sloth. I need to get better at moving you out of the way. You literally have to be pulled when cars are coming at us, because we start crossing the road and she doesn't expect it. She cannot speed up above a fast walk.

Simone Collins: And I hate moving so quickly. I'm also like contemplating do I want to die or should I try to move?

Malcolm Collins: You really come off that way. Yeah, I do. And I've got to, like Nightcrawler, BAMF over and BAMF back to grab you. God, I'm going to die. Yeah. I love you. I love you so much, Simone.

You're amazing. Hey, I would love to get the slow cooker started on the tomatoes and meat that I got.

Simone Collins: But you still have about two days' worth of the other stuff.

Malcolm Collins: Yeah, but it's easier to just freeze this stuff if I do it all at once, and I can also do it overnight. I can also leave it cooking for a few days.[00:39:00]

Simone Collins: I can do that. Do I have time to make biscuits or muffins, cornmeal muffins? Yeah. If I go down right now, I can make cornmeal muffins.

Malcolm Collins: Would you like cornmeal muffins? I'm okay with that. Yeah.

Simone Collins: Okay.

Malcolm Collins: You're so nice. Cornmeal goes great with slow cooked beef.

Simone Collins: And you're still going to have the slow cooked beef that you made earlier this week, right?

I'm heading down. She's asleep on my lap. I don't want to look. She's so...

Malcolm Collins: But she loves sleeping. I love you so much, Simone. You're a perfect mom. And you've got to get that pouch so you can get that pocket on. Okay. Order it right now.

Simone Collins: No, I need to contemplate whether or not we should just spend money on that or new carbon monoxide detectors.

Malcolm Collins: No, you're getting the new carbon monoxide detectors. Just let me get this for you as a gift. Okay. Here, I'm getting it right now. It's $19.

Simone Collins: I'll get it with my money. Okay.

Malcolm Collins: I just got it. No, it was my money. I'm the one who's demanding [00:40:00] that you get a pocket, because I'm so freaking annoyed that you're walking around without a pocket.

Simone Collins: All right, Malcolm.

Malcolm Collins: It is annoying, Simone. It causes me dissatisfaction. Okay.

Simone Collins: I will see you downstairs with my corn muffin hands, ready to go. Okay. Love you. Bye. I guess you could call it the Dunning-Kruger trap. Where, you know, the Dunning-Kruger effect is where people who know less about something feel more confident about it, right?

Malcolm Collins: What happened? He just noped right out of there.

Oh, what is it? Is it a bug? Is it a mouse? No, it was a beer. It was the beer that you knocked over yesterday.

Simone Collins: Oh, the one that... no, Titan knocked it off the table.

Malcolm Collins: Oh, and you're like, you better not open that one. [00:41:00] That's what just happened.

Simone Collins: Okay. So, the Dunning-Kruger effect, whereby people who know less about something feel more confident about it...

Malcolm Collins: By the way, the Dunning-Kruger effect does not replicate.

Simone Collins: But anyway, still, people are familiar with it. And then people who know more about something often say that they know less. And I think there gets to be a certain point where, when you know a ton about something, you just start to become very uncertain about it,

and you're not really willing to take any stance, which is something I saw a lot in academia, where the higher up in academia I got, the more the answer was always "it depends," instead of...

Malcolm Collins: That is... this, whatever you're talking about, has nothing to do with any of the points I'm going to make.

Simone Collins: See if you smile when daddy appears on the screen. Daddy?

Malcolm Collins: Look at [00:42:00] that! It's daddy!

She doesn't see. She doesn't see.

Simone Collins: She's... I haven't gotten her eyes on the screen. She's got to look at the screen.

Malcolm Collins: Do you recognize me at all? I don't know if they can recognize things on screens in the same way that adults can. I don't know either. Yeah, she doesn't seem to be focusing on it. So yeah, she can't see me.

Simone Collins: We love you anyway.

Malcolm Collins: I will get us started here. Oh, we'll pull this aside. How could you tell that it was bad at creating websites by the way?

Simone Collins: Because after you buy a domain, it will literally take the name of your domain, the words within it, and then make assumptions based on that. Okay, for example, because I got a "pragmatist foundation" .org, they're like: oh, you're a pragmatist foundation, and you're a .org, so [00:43:00] you're a nonprofit. And so here's a nonprofit website for a foundation that likes pragmatism. And then it made up copy based on that, and had a picture of kids sitting at desks, with something like "creating solutions that are pragmatic."

Which is not terribly far off, but...

Malcolm Collins: It sounds so bad.

Simone Collins: No, it's not so bad. It's just...

Malcolm Collins: In case you are wondering, the reason why we're looking at buying websites right now is that we needed to get the .org for the Pragmatist Foundation, because people were emailing the wrong address, because we have the .com for that.

But also, I've been thinking about building a website for the techno-Puritan religion and seeing if I can get it registered as a real religion, which would be pretty fun. Especially if I am able to put religious wear in there, like: you always have to be armed. It would be a significant strain to see if you can get religious exceptions, but I do believe there is a religious mandate for concealed carry and stuff.[00:44:00]

That would be interesting from a legal perspective.

Simone Collins: It'd be funny if we had a religious mandate for always having to carry ceremonial sloths with us. Just... but it's my religious sloth! You can't not let me into your restaurant wearing it.

Malcolm Collins: You want to enshrine specific rights that people would want.

I think you can do stuff around data privacy and things like that, which make sense within a religious context to us, but also provide a legal tool to people who want access to this stuff.

Simone Collins: That could be interesting.

Malcolm Collins: Which also helps the religion spread. So that'd be fun. All right.

So I am opening this up here.

What are you doing, Wiggles? Okay, you better not let her wiggle.

Simone Collins: I better not. She's full of all the wiggles.
