In this episode, Malcolm and Simone make a significant announcement: they are stepping back from their day jobs to launch 'Hard EA,' a platform focusing on true, impactful solutions to humanity's major existential threats. Disappointed by the current state of the Effective Altruism (EA) movement, they express concerns over its focus on social signaling rather than substantive change. They break down their new initiative’s core values and priorities, including pragmatic and pluralistic solutions to societal issues, biological innovation, and AI safety. The duo aims to attract like-minded individuals and organizations committed to making a genuine difference and confronting the hard truths of our time.
Discord: https://discord.com/invite/EGFRjwwS92
Songs Used:
EA Has Capitulated (The Hard EA Song):
Beyond Pleasure and Pain (The Anti-Antinatalist Song):
We are More Than Animals (The Anti-Antinatalist Song):
Website: HardEA.org
[00:00:00] What could be more important than...
Malcolm Collins: Earning the approval of normies.
Richard? Science?
Malcolm Collins: Hello, Simone! I'm excited to be here with you today. This might be the most important announcement, from a personal perspective, that we have made on this show. Recently we have decided to begin stepping back from our day jobs to focus more on trying to fix this untethering society that we are dealing with right now, as well as the many threats to humanity's flourishing that are on the horizon at the moment. This major decision has come downstream of two big realizations I had recently.
The first was, as I do about every year or every other year, I took inventory of all of the major threats or big things that could change about the future of humanity, so I can better make my own plans for the future and [00:01:00] for my kids' future. But this time I did something I hadn't done before.
I decided to also take an inventory of all of the major efforts that are focused on alleviating these potential threats. And I had assumed, as we've often said, since we've been affiliated with the periphery of the effective altruist movement for a while, that while the effective altruists may have problems, they were at least competently working on these issues, because they were signaling that they cared about them.
But when I looked at the actual solutions they were attempting, I was shocked. It made me realize that a lot of the funding that I thought was going to fixing these issues was going to something akin to that scene from Indiana Jones:
Speaker 10: We have top men working on it right now. Who?
Speaker 12: Top men.
[00:02:00]
Speaker 14: The goal was to reform charity In a world where selfless giving had become a rarity No vain spotlight, no sweet disguise Just honest giving, no social prize But as the monoculture took the stage It broke their integrity, feigning righteous rage Now every move is played so safe Ignoring truths that make them chafe.
EA has capitulated to everything it said it hated. Once they
were bold, now they just do what they're told. In caution they lost their [00:03:00] way. Time for a hard EA.
Malcolm Collins: Second, I have always considered us as, again, adjacent to the effective altruist movement, living within the periphery of this movement and heckling it towards making more responsible decisions. Recently, as I was going over the stats for our podcast and other podcasts, I realized that our podcast is more popular than the most popular Effective Altruist podcast, 80,000 Hours, by not a small margin either.
Now I will note here that Spencer Greenberg's podcast, Clearer Thinking, which many associate with the effective altruist movement, is actually more popular than ours. I will not deny that. I really like Spencer.
Good friend. But
Simone Collins: he personally doesn't identify as an effective altruist.
Malcolm Collins: And when I realized that, I was like, oh, I'm no longer like a heckler on the outside pointing out the mistakes that other people are making. I am now somebody who has to [00:04:00] take responsibility for fixing things, especially if the timelines that humanity is facing are short. And so we will create an alternative with HardEA.org. So, what we have done to distribute this: we're going to start doing grants. So if you have ideas that you think might appeal to us, we would like to help fund you and help get stuff out there.
Obviously we are looking to raise money as well. We already have a 501(c)(3) nonprofit charity. So if you know any big potential donors who might be interested in this, please let them know about this project.
And I'd note here that unlike traditional EA, we are not just looking for iterative solutions to the world's existing problems, but anything that can move humanity forward into the next era of our evolution.
Whether that's genetic modification technology, gene drives, brain-computer interfaces, artificial wombs, or new forms of governance and city states.
Anything the traditional effective altruists were afraid of touching because of its potential effect on their reputation, but that needs to happen for humanity to compete with AI and eventually get to the [00:05:00] stars.
Speaker 11: We stand on the brink of a breakthrough in human evolution.
Malcolm Collins: Effective altruism.
Speaker 11: Held back the pace of scientific discovery for decades. They believed in me. I believed my methods were too radical, too controversial, and they tried to silence me. New patrons emerged who possessed an appetite for my discoveries. And we With this knowledge, what new world could we build?
Speaker 6: Young people from all over the globe are joining up to fight for the future.
Speaker 8: I'm
Speaker 7: doing my
Speaker 6: part. I'm doing my part. I'm
Speaker 7: doing my part. I'm doing my
Speaker 9: part, too. They're doing their part. Are you? .
Malcolm Collins: And the second thing I noted here was: oh, shoot. Most of the mainstream figures who could have stanned, run, or helped the EA movement [00:06:00] continue to grow have been Order 66'd by the movement. And I realize that even some nerds don't know what this means. This is the order that the Empire gave to kill all the Jedi.
Speaker 51: Now, let's get a move on. We've got a battle to win here.
Speaker 52: Sir!
Speaker 53: Execute Order 66.
Blast them!
Malcolm Collins: What I mean here is when they have somebody who is extra effective, they turn on the individual. In a
Simone Collins: weird way, like to an extent that I've never seen in any other cause area or social space.
Malcolm Collins: Yeah. And as such, many people who might be their greatest champions now, like, say, Spencer Greenberg, are just like, I don't want to be affiliated with that, to have that attached to my brand. But this provides an opportunity for us.
One, it makes it hard for them to say, you guys aren't real EA when [00:07:00] we have a bigger platform than any of the quote unquote, real EA individuals. But two, it allows us to attempt to cure the movement, even through our advocacy. By that, what I mean is the effective altruist movement was originally founded with the intention of saying, Most philanthropy that's happening right now is being done for social signaling reasons or personal signaling reasons.
It's either being done to signal to other people, you're a good person or to signal to yourself, you're a good person. So when you're faced with a decision, like should I indulgently spend two years doing charity work like building houses in Africa or something like that, or should I go take that job at McKinsey and then send that money to charity and see how many houses I can build?
I choose the McKinsey route because while it may be less good for personal signaling or social signaling, it is the more efficacious thing to have an impact on the world. Unfortunately, this movement has been almost totally captured by social signalers, specifically people signaling to the urban monoculture.
Speaker 14: as the [00:08:00] monoculture took the stage It broke their integrity, feigning righteous rage Now every move is played so safe Ignoring truths that make them chafe.
EA has capitulated to everything it said it hated. Once they
were bold, now
Malcolm Collins: And this is largely downstream of something the movement should have expected of itself. It should have said: if we're not going to care about social signaling and are actually going to make a difference, we need to prepare to be the villains. We need to prepare to be hated by those in power, because we are not going to toe their lines.
Speaker 2: Now you're trying to make me out to be the bad guy. Yes,
Speaker: I'm trying to make you a bad guy. We're both bad guys. We're professional bad guys. Ding. Hello.
Malcolm Collins: And instead they took the exact opposite approach, which is to say, we want to be socially [00:09:00] respectable. We want to be accepted by the mainstream power players in our society. We want to suck up to them.
When they dropped this, it was the original sin that led to the downfall of EA.
and then I think a
Simone Collins: lot of this is because, from the beginning, they didn't focus on the misaligned incentives that cause people who are altruistic to get overly focused on signaling in the first place.
In other words, they didn't focus on making sure that everyone's efforts were self-sustaining. They supported efforts that required ongoing fundraising, and ongoing fundraising requires social signaling. So the groups that survived, being dependent on fundraising, are the ones who are better at signaling, not the ones who are better at solving the problem.
So I think that's part of it. It's not that these people became corrupted. It's that they never addressed the inherent aligned and misaligned incentives that made this problem in the first place. To
Malcolm Collins: elaborate on what she means by this: if you have a large bureaucratic institution that is dedicated to social good, [00:10:00] individuals within that network are going to be drawn to it for one of two reasons.
Either they want status or they want to do social good. The problem is that the people who want to do social good need to focus on doing social good, whereas the people who want status can focus all their time on politicking. As well, the people who want to do social good must act with integrity, which somewhat binds their hands.
Whereas the people who want status, well, they can use the urban monoculture's status games to sabotage other people really easily. And so they always end up rising to the top, whereas the people actually trying to do something efficacious, at least 50 percent of their time needs to go to actual efficacious work, while near 100 percent of a signaler's time can just go to signaling.
And then I want to say
Simone Collins: this is something that's not just... you're not only going to see it in the nonprofit or altruistic world. This also shows up in some of the largest work-from-home experiments performed. One of the earliest big, large-scale work-from-home experiments, this was I think an online travel agency [00:11:00] that tried this, found, for example, that employees who worked from home were more effective.
They got more work done. They were better employees in terms of getting the work done, in terms of the bottom line of the company. But those who stayed in the office got promoted more. So again, this is about where your time is going. Is your time going to face time, to signaling, or is it going to getting the work done?
And if you have a system where you can only continue to get resources or get promotions or get more money by signaling, you're going to start focusing on signaling. And those who survive who last in those organizations are going to be the signalers, not the do gooders.
Speaker: You only fight these causes 'cause caring sells All you activists can go fuck yourselves That was so inspiring! What a wonderful message!
Malcolm Collins: Being in those rooms when the EA movement was being formed, all those years ago, knowing all those edgy young artists who wanted to fix things in big ways, and seeing what the movement has turned into,
taken over by bureaucratic, self-indulgent [00:12:00] normies playing the DEI game, I can only imagine this is how they feel:
Speaker 17: man seeks a good time, but he is not a hedonist. He seeks love. He just doesn't know where to look. He should look to nature. Gentle aquatic
Malcolm Collins: Shrimp.
Speaker 17: have all the answers.
Speaker 18: Your door was locked, so we let ourselves in.
Speaker 17: You may have found my
Speaker 18: inner sanctum. Shut up. Now give us the plans or whatever the hell you have.. I
Speaker 17: have a tank full of gentle cuttlefish.
Speaker 18: Give us the cuttlefish. Cuttle I can't do this.
Speaker 17: You abandoned me! I have caught on fast. Look into
my eyes!
Malcolm Collins: Most of the effective altruist organizations have become giant peerage networks: these weird status dominance hierarchies that are constantly squabbling over the most petty of disagreements. [00:13:00]
Simone Collins: Just for people who don't know what peerage is: if you were a peer of the realm, you were essentially made noble by a ruling class, like a king or queen.
So what we're talking about is essentially this sort of declared aristocracy that can be very insular and incestuous. People who live off of
Malcolm Collins: stipends, who then are basically forced to stan for the people above them in the pyramid.
Simone Collins: Yeah.
Well, and this is the, here's the other thing. And this is why I think there's such a big garden gnome problem in the EA industry.
To give a little context for those who haven't seen our other discussions about garden gnomes: in Regency era England, there was this trend among very wealthy households, you know, people who had large estates, to have what was referred to as an ornamental hermit. And these were basically, like, learned wise men who they would have live in a cottage on their land and then come to their dinner parties and stuff when they had house guests, and kind of impress them with their philosophy. And they were often required to do things like not drink and let their [00:14:00] nails grow long and grow a beard, so they looked to be sort of like a very picturesque intellectual. And we've noticed that within this industry, I guess, space, social sphere,
this is the one place where you actually see modern ornamental hermits. That is to say, people who are in the EA space and rationalist space who literally make their money by sort of being an intellectual who is paid, who has a patron, a very wealthy person who's in this space, and who sort of just does Substack writing and philosophy and who goes to these dinner parties and makes their patron look good.
Malcolm Collins: Which is insane. It's, it's a wild trend that we have seen.
These gnomes are almost always male and frequently end up congregating in these giant poly group houses where they are all dating the one woman who could sort of tolerate them. I kind of feel like, marrying Simone and taking her out of San Francisco, this is what I saved her [00:15:00] from.
Speaker 20: She's just marrying all 1, 000 of us and becoming our gnome queen for all eternity. Isn't that right, honey?
Speaker 21: You guys are
Speaker 20: buttfaces! You think you can stop us? The gnomes are a powerful race. Do not trifle with them!
He's getting away with our queen! Who's giving orders? I need orders!
Malcolm Collins: The overwhelmingly male population of the EA movement makes it very easy to spot the portions of it that have become corrupted by DEI beyond repair.
Just look for any organization whose board has more women than men on it, or whose leadership is more female than male, or even just anywhere near gender equal.
Given how overwhelmingly male the movement is, that would only happen if they were using an extreme amount of discrimination and prejudice in their hiring and promotion policies.
And outside of the immorality of a system that is systemically unfair and prejudiced, this also
means the most [00:16:00] talented, efficacious, and hardworking people within an organization aren't the individuals running it, which means tons of donor money is being wasted just to signal that we're good boys.
And I would say that this isn't the only problem. You also have a problem from the bottom up of the movement being very corruptible. They just put no thought into governance theory when they were putting everything together. From the bottom up, the problem is they have a massive tyranny of the unemployed problem.
The movement decides a lot of its ideas based on what's going on in the EA forums. Forums are susceptible to a governance problem we described in The Pragmatist's Guide to Governance called the tyranny of the unemployed, which means that the individuals who have the differential time to spend all day online on a forum, or something like that, an environment where the amount of time you can dedicate to a thing gives you additional power within the community...
Well, those people are being sorted into that position in life either because they don't have, like, friend networks, right? You know, they don't have other things that they're doing, so they have been expelled from other communities. And they often don't have day [00:17:00] jobs, or day jobs outside of the EA peerage network, or even
Simone Collins: responsibilities like taking care of children or elderly people, or even really needy pets.
Like they're just sitting there in front of their computers.
Malcolm Collins: And so these communities always tend towards these sort of average ideas that will get you respect by the urban monoculture. When you have one of these voting based online networks, instead of the way like our core community, our discord works, where it's like, well, whoever said the last thing is the one who's there.
You end up with people really striving for what they think is mainstream-acceptable in society to say and to post, because those are the things that the average dumbass unemployed person who's sitting at home is going to end up upvoting. This is why Reddit is so brain dead these days. It is also why the EA forums are so brain dead, in exactly the same sort of normie-take way.
What's also wild here is when I went and checked, it looks like our Discord is more active than the EA forums right now.
If you want to check it out, you can check it out [00:18:00] from a link I'm going to put in a pinned comment. Generally the best way to use it is to focus on individual episode commentary
rather than just chatting in the town hall.
I understand that the format changes make this comparison a little bit apples to oranges, but.
Their top posts are only getting like 50 comments. And then if you go just like three posts down, you get posts with no comments. That is wild to me.
When contrasted with ours, you know: 210, 733, 124, 128, 417, 265.
Then go to the top-voted posts on the EA forum, and it's 28, 50, 64, 0, 0, 2, 0, 4, 4, 18, 14.
Which I think goes to show that the EA community has transitioned from being, well, a community, to a peerage network.
But anyway, continuing with the point: having a community where norms are based on the vote, or the average liked opinion,
is going to lead [00:19:00] to the platforming of ultra-normie, low-risk beliefs and the demonization of any belief that could rock the boat or interrupt the peerage network.
And this is why a movement that said, we will focus on things that don't get us social signaling and that no one else is focused on, is now doing things like environmentalism, which is like the most overfunded area when contrasted with other cause areas.
Speaker: Alright, that does it! I f ed it!
Oh, now she figures it out.
Malcolm Collins: Or, you know, they're completely not touching pronatalism, and no EA org has ever done anything in the pronatalist movement.
Never touched pronatalism, never advocated. They have explicit rules against it. They have explicit rules against doing anything about dysgenics, which is one of the things we often talk about: the polygenic markers associated with things like IQ are decreasing within the developed world at a rapid rate, to the point where we should expect a one standard deviation decline in IQ within the next 75 years or so.
You can look at our video [00:20:00] on this particular topic. But they have in their rules that they're not allowed to focus on human genetics. And as such, they can't address some of the biggest challenges that our species might be facing.
Speaker 14: They duck their heads from problems grand As fertility collapse dooms our land Dysgenics a word they fear But ignoring it will be severe AI safety, a shiny show Funding the theatrics for money they blow Without a plan, just spin and grin While real solutions kick in EA has capitulated to everything it said it hated Once they were bold
[00:21:00] Now they just do what they are told In caution they lost their way Time for a hard EA Our species is put at risk by their cowardice It is time for a hard EA For a movement that empowered us
Malcolm Collins: But it gets worse than all of that. So let's be like, okay, if they're not giving money to that stuff, one, how much money are they actually giving out here?
And two, what are they actually doing? So by 2022, over 7, 000 people had signed a pledge to donate at least 10 percent of their income to effective charities. They are now more than 200 EA chapters worldwide, with conferences attracting thousands of attendees. And they now give out around 600, well, this was in 2021, around 600 million in grants a year, around four times the amount they did five years earlier.
And This is really sad to me that these individuals who aren't maybe super in touch with like what the EA orgs are actually doing with their [00:22:00] time. Think that they're, you know, tithing this amount that makes them a quote unquote good person and the orgs aren't doing anything. So let's give them an option here for the individuals who want to do this for an org that is actually trying to solve things like AI safety, dysgenics, pronatalism, all of the major problems that our species is facing at the moment.
Oh, before I go into the projects that they had here, one of the things I really find very interesting about effective altruism is, one, their absolute insistence on trying to cozy up with the leftists and Democrats, and also the vitriol they have been shown by Democrats and leftists.
Isn't that
Simone Collins: interesting? Yeah. First, effective altruism is fairly little known. It's becoming more known, but really only in the context of leftist media outlets looking at it with great suspicion. Who are these EA Silicon Valley elites deciding how we should live our life? Like, it's definitely viewed as a Silicon Valley elite [00:23:00] thing.
It's viewed with great suspicion and it's viewed as being evil or like, like questionable or Puppet mastery or a little Illuminati ish. I think because it's associated with some people.
Malcolm Collins: I think that that's a misunderstanding of why the left is so hostile to it. I really, yeah. So EA fastidiously tries everything it can to not piss off leftists.
Yes. With the urban monoculture, they are like, we will not break a single one of your rules. But unfortunately, that puts them into the same status game that the urban monoculture people are playing. So if I'm a mainstream lefty politician or political activist, the EAs are trying to compete with my social hierarchy for attention, for capital, for everything. They come into a room and they're like, okay, we can spend X amount on nets in, like, Malaysia, and it can lower malaria rates by this amount, which lowers net suffering by Y amount.
And I'm here, like, don't you know that today is trans month, or [00:24:00] like, don't you know that today is the Black Lives Matter, like, protests? And they're like, well, I mean, I understand that, like, myopically, that's what's going on in the United States right now, but we're trying to reduce aggregate suffering, and look at my math.
And that gets you shouted out of the room because you are issuing an explicit status attack on them when you do this. And worse, you know, when I read a lot of the places attacking them, they're like they fall into two camps often. It's like, well, they're using capitalism to advocate for like taking money from these wealthy capitalists and then using that to quote unquote, try to make the world a better place.
But like this these wealthy capitalists shouldn't exist at all. They're just perpetuating or sort of, you know, wallpapering the capitalist system. I understand this attack entirely. Like if you're a leftist and you're a socialist, you're like, what are you guys doing? You are making the capitalists look good.
It's better that we just tear everything down. And I think this is because of the EA mistakenly believes that when they're talking to urban monoculture people, the socialists and stuff like that, that they [00:25:00] actually want to reduce suffering in the world because that's what they tell people they want to do.
Yes. Instead of disclaim power. And so they make very, because they're hugely autistic, make very dumb decisions of taking them at face value. And then they keep getting shouted out of the room and then come back. Whereas us, the right side, the hard EAs, which is fundamentally more of a right leaning movement.
We have been accepted by the political apparatus. You know, we're regularly meeting with groups like, you know, the Heritage Foundation, or political operatives in D.C. And they don't mind being affiliated with us. They like it, even, whereas you guys were treated like lepers. We have the VP
of a major ticket regularly giving pronatalist messages. If the EAs could get a single one of their messages into a mainstream politician's mouth in the same way we have been successful at this...
As you might be able to tell we recorded this before Trump's team won. And before we saw just how much influence our side was going to have in his policy agenda.
But I wanted to just reflect on how crazy this is: they had hundreds of millions of dollars and about a decade, and they were unable to really get any mainstream Democratic politician on board with their agenda. We are a two-person team, and we were able to get close with, and get our stuff into, a presidential policy agenda within a year of trying.
The incompetency and wastefulness is almost impossible to overstate. You are literally setting your money on fire if you give it to them.
Speaker: It's not about money. It's about sending a message.
Malcolm Collins: But you see this wherever the urban monoculture has taken hold. I mean, just look at the Democratic campaign. They had three times the amount of money Trump was using, and he trounced them. Any group that has given in to the urban monoculture is going to be wildly inefficient in how it spends money, because it's going to spend so much of its money on [00:27:00] signaling, and it's going to have so many incompetent people at its upper levels.
But here I also want to note just how wildly inefficient they've been in even the cause areas they purport to care deeply about. Let's take something like waking the world up to how risky AI could be. All right, they had a generation of priming material, just consider the Terminator franchise. We come in with the pronatalist movement, where we have a generation of everybody thinking, oh, there's too many people, oh, depopulation isn't the problem, et cetera, et cetera, et cetera. And, just two people on a shoestring budget, this year we've had two, three Guardian pieces on us, a Rolling Stone piece, a couple New York Times
shout-outs, a Wall Street Journal feature. And then just today we had another Wall Street Journal photographer at our house, so they're going to have another piece coming up. This was actually the one who did the famous shot of Luigi Mangione. And we have woken up the general public to: oh, this is the real problem.
And if you're like, well, a [00:28:00] lot of those pieces have a negative slant to them, it's like, well, yeah, and a lot of pieces about AI risk have a negative slant to them as well. The key is: are you playing the negative slant to build your own image, or to build awareness for your cause? And here I would ask you to just be rational and think about the people you've talked to recently. Who has done a better job piercing the mainstream mindset in a non-glamorous, constructive way: AI risk, or pronatalism?
You know, the fact that we have things like The Young Turks now saying, well, Malcolm's crazy, but he's definitely right about that pronatalist stuff, that's wild. And that we have pierced to the other side that much, in such a short time period, with just a two-person team, and yet a literal army of people has had trouble piercing the popular narrative in a way that builds a constructive conversation.
Not only that, but within the pronatalist movement, we have built a movement that, other than one guy, almost entirely gets along and [00:29:00] supports each other, despite our radically different beliefs.
And when I say diverse beliefs, I mean diverse beliefs in a way that you just weren't able to get at all within the traditional EA movement. If you go to one of our conferences, yes, you'll get a bunch of the nerdy, autistic programmer and entrepreneur types, but you'll also get a lot of conservative religious leaders,
whether they're Haredi rabbis, Catholic priests, or evangelical media players.
It's wild that despite Hard EA taking a much more confrontational and hard-line approach to the issues, it has the potential to be a much bigger tent movement. And I think that it shows just the core failure of the way that they were signaling and approaching politics, which was: accept us. Instead of: we're different,
and we take pride in standing for what we know is right and just. And here I would also note that there is a slight ethical difference between these two movements in terms of the end goal. Whereas the EAs sort of treat the world right now as if they're utility accountants, trying to reduce aggregate in-the-moment suffering right now, [00:30:00] which is how they appeal to the urban monoculture, the hard EAs are much more about trying to ensure that long term humanity survives and stays pluralistic. And we'll talk about the core values we have, but it's much more: let's create that intergalactic empire and make sure we don't screw this up for the human species, in this very narrow window we may have left, which we'll talk about. And we're
Simone Collins: not, I'm not afraid to be judged as weirdos for being interested in getting off planet or thinking about the far future, whereas the effective altruist community, while technically being longtermist, is very self-conscious about it, because they know that being longtermist can make you look weird, just because, honestly, even thinking two decades ahead has us basically in sci-fi.
You know what I mean? Yeah.
Malcolm Collins: Well, no, it doesn't just make you look weird. It puts you at odds with the goals of the urban monoculture. The urban monoculture is not interested in the long term survival of humanity. And for that reason, when they try to [00:31:00] signal longtermist goals, and this is the other category of anti-EA article you'll read, where they're like, well, here's the problem with being an extremist utilitarian.
You know, there it's like, well, fortunately the hard EAs aren't extremist utilitarians. We're a completely different philosophical system, which we'll get to in a second. Because extremist utilitarianism is just silly. It's like, positive emotional states are the things that, when our ancestors felt them, caused them to have more surviving offspring.
It's not like a thing of intrinsic value.
Speaker: feelings born of chance in fields of ancient strife. They kept our tribe from
failing, helped give birth to modern life, just signals from our past. They served a vital role, but meaning goes beyond the scars that time left upon us. Beyond the pleasure, beyond the pain We stand on roads of forbears [00:32:00] paved in grit and strain Don't throw away the promise that tomorrow can sustain There's more to life than hollow thrills or running from the rain They claim that it's all worthless if the joys cannot wait But they dismiss the wonders we've inherited right here.
The years of struggle handed down the future's bright unknown. It isn't just the fleeting spark of comfort we are shown. We carry on a story with pages left to write. Our tapestry is woven from both darkness and from
light.
Malcolm Collins: And I think you can see that focusing on in-the-moment suffering causes you to make very big [00:33:00] mistakes in terms of long-term human suffering, and it causes you to do things which you cannot question within the current EA movement, because if you question them, the EA movement might look bad, right?
And again, it's all down to signaling. So where are they putting their money? The Global Health and Development Fund distributed over 50 million dollars in grants in recent years. GiveWell directs large amounts of funding to global health charities like the Against Malaria Foundation, Malaria Consortium, and New Incentives.
Open Philanthropy has increased its funding focus on global health and wellbeing in recent years. Like, that is so dumb. So dumb.
Like, first, malaria: you could just do a gene drive in mosquitoes and, for like fifty to a hundred thousand dollars, erase the problem of malaria in 50 years. I mean, yeah, sure, you might get arrested. But if you look at the number of people that are dying, and I'll add it in post:
It's estimated that approximately a thousand children under the age of five die from malaria every day.
That's around 608,000 people who [00:34:00] die in a given year.
Like, the idea is that we now have the technology, if we cared about that, to just fix it.
Sorry, for people who don't know what a gene drive is:
Speaker: Gene drives are designed to eliminate unwanted traits in insects and other animals. They work by pushing out genetic modifications through whole species, until eventually every critter has been changed into something we have intentionally engineered. The idea isn't especially new, but it's only very recently that advanced gene editing techniques have made human designed gene drives possible. CRISPR uses specially designed molecules that run along the strands of DNA in an organism's genome and seek out specific sequences of genetic code, such as replacing the parts of a mosquito's genome that allow it to host malaria-causing
parasites, for instance. Unfortunately, every time a CRISPR mosquito mates with a wild one, its modified DNA is diluted down, meaning that some of its offspring will still be able to carry the malaria [00:35:00] parasite. And this is where gene drives come in.
When the mosquito mated, the built in code would ensure that every single one of its progeny would inherit the same traits, as well as inheriting the CRISPR code that would ensure the anti malaria gene was passed on to every future generation.
In other words, the new gene would be irresistibly driven through the whole mosquito population. And eventually, every mosquito would become a human designed, malaria free insect. And this is not a technology that's restricted to mosquitoes.
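(To make the clip's "diluted down" versus "driven through the whole population" contrast concrete, here is a minimal sketch of our own, not something from the episode: under ordinary Mendelian inheritance a carrier passes an engineered allele to about half of its offspring, so a rare edit stays rare, while a gene drive converts carriers so nearly all offspring inherit it and the allele sweeps toward fixation. The 95 percent conversion efficiency and the starting frequency below are assumed, purely illustrative numbers.)

```python
# Toy allele-frequency model (illustrative only): ordinary edit vs. gene drive
# under random mating in a large population.

def next_freq(p, drive_efficiency):
    """Advance the engineered-allele frequency by one generation.

    drive_efficiency = 0.0 means normal Mendelian inheritance (a heterozygous
    carrier passes the allele on half the time); 1.0 means a perfect drive
    (carriers always pass it on).
    """
    transmit = (1 + drive_efficiency) / 2  # chance a heterozygous parent transmits the allele
    return p * p + 2 * p * (1 - p) * transmit

def simulate(p0=0.01, drive_efficiency=0.95, generations=25):
    freqs = [p0]
    for _ in range(generations):
        freqs.append(next_freq(freqs[-1], drive_efficiency))
    return freqs

if __name__ == "__main__":
    mendelian = simulate(drive_efficiency=0.0)   # stays near 1% (the edit is diluted)
    drive = simulate(drive_efficiency=0.95)      # sweeps toward ~100% of the population
    for gen in (0, 5, 10, 15, 20, 25):
        print(f"gen {gen:2d}: ordinary edit {mendelian[gen]:.3f}  drive {drive[gen]:.3f}")
```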
Malcolm Collins: Note that here you'll get some complaints from people saying, well, the reason we haven't employed gene drives in mosquitoes yet is because the technology isn't fully there yet, or it hasn't been as effective as we hoped. But if you go to an AI and ask what's the real reason, the real reason is that they're scared to implement something that can affect an entire natural population, and it's borderline illegal right now.
The problem I have with this [00:36:00] explanation is this:
It's estimated that approximately a thousand children under the age of five die from malaria every day.
Speaker 11: They believed in me. I believed my methods were too radical, too controversial, but there were others in the shadows, searching for ways to circumvent their rules. Freed from my shackles, the pace of our research hastened. Together, we delved deeper into those areas forbidden by law, and by fear. And we With this knowledge, what new world could we build?
Malcolm Collins: And we have the technology to do this. It's largely tested. People are going to freak out. It would be an F word. And that's why they won't consider it. So instead, they give millions and millions and millions of dollars that could go to actually saving humanity's future. But also, at the end of the day, if you save some, you know, whatever person dying of malaria, right?[00:37:00]
Are they really likely to be one of the people who ends up moving our civilization forwards at this point? And every iterative amount that we move our civilization forwards right now, in terms of technology or preventing major disasters, is going to be multiplicatively felt by people in the future.
Simone Collins: And
Malcolm Collins: so, decisions right now, when we're looking at the short timelines humanity has, whether it's with falling fertility rates, or with dysgenics, or with AI, that you would be so indolent... It's not that these things are intrinsically bad things to be focused on. It's just that they are comical things to be focused on when the timelines that face humanity are so, so, so short at this point. Yeah.
Then they focus on long-term and existential risk. These are people who focus on long-term catastrophic risks. I really appreciate this area of funding. I have always thought, oh, this is really good: they focus on AI threats and stuff like that, or biosecurity threats.
And then I started, at least within the case of AI, actually looking at the [00:38:00] ways that the individual most-funded projects were trying to lower AI risk. And I was like, this is definitely not going to work. And we'll get into this in just a second, but just understand that you're basically lighting your money on fire if you give it to a mainstream
AI safety effort within the EA movement. And that is really sad, because you have people like Eliezer being like, just give us like 10 more years to do more research. And then when I look at the research being done, I'm like, this obviously won't work, and the people working on it must know it obviously won't work.
And that makes me sad. But that's the way things turn out when you get these giant peerage networks. By the way, about 18 percent of EA funding right now goes to AI safety related causes. So it is a very big chunk.
Simone Collins: Gosh, that's actually not as much as I thought, just in terms of how much mindshare it seems to be getting within the movement.
So that's... Well, the other area they spend
Malcolm Collins: a ton on, and we've met many EAs in this space, which I just think is a comical space to be wasting money on, is animal welfare. Animal welfare is a significant EA focus. The Animal Welfare Fund [00:39:00] distributes millions in grants annually. Open Philanthropy has made large grants to farm animal welfare organizations.
About 10 percent of highly engaged EAs report working on animal welfare causes. This is a tragedy that anyone is working on this, for two reasons.
Simone Collins: It feels like a hack to me. They're like, oh, okay, well, we need... again, it's that utility accountant problem, whereby people are like, okay, so I want to max out the number of utility points I get.
And there are so many more shrimp in the world, and it's so easy to make shrimps' lives easier, so I'm going to focus on shrimp happiness and wellbeing. And I can just create... so they
Malcolm Collins: basically do this thing where a life's worth is like its amount of cognitive experience, whether that's pain or happiness or anything like that, sort of divided by the cognitive level of the animal.
And they're like, well, even though shrimp are at a lower cognitive level than humans, if you get enough of them, the same biomass can support [00:40:00] more of them. And if you go with this line of thinking, just to understand why this line of thinking is so horrifying and stupid,
I actually followed this to its conclusion. It's like, well, then what I should do, because monkeys can survive on less nutrition than humans, is basically get a giant laboratory of monkeys with, like, screws in their necks, in virtual reality environments, being pumped with dopamine and other chemicals.
And you just walk in, and you're in this giant laboratory with, like, hundreds of thousands of monkeys dazed out on drugs, just living this perfect happiness-unit life.
Simone Collins: yeah, all while like sterilizing humans because they take more resources and it's better just to max out. Yeah, it's, it's such a
Malcolm Collins: dumb philosophy when you actually think through it.
That you would think that these pre-evolved emotional conditions that led things to have more offspring are what you should be focused on as an existential thing in life.
Speaker: say it's all about the highs and lows, all about the rush and pain that [00:41:00] flows. But that's just the story of creatures past, whose only goal was just to last. A primal code. Etched into our veins A leftover echo of ancestral gains We're more than pleasure, more than pain We can choose to rise above the old refrain Innovation calls our name A legacy that we must sustain Don't let the animal side define A future shaped by minds That shine
Malcolm Collins: And it leads to huge amounts of EA funding going to really feckless stuff, like, as you said, shrimp welfare and stuff like that. Whereas, and this is the problem, if humanity goes extinct, no matter what, [00:42:00] all life is gone.
All life that we even know of existing in the universe is gone, because the sun is going to keep expanding, and we likely don't have enough time for another intelligent species to evolve. If humanity spreads off this planet, we are going to seed thousands to billions of biospheres that will be as rich as, and likely have a higher degree of diversity than, what we have on Earth today. Some of the super-Earths we may seed could have a higher number of species living on them. And we'll even be able to, if it turns out that our super advanced AI and descendants are like, okay,
suffering actually is a negative thing, so I'm going to build little nanite drones that go throughout all of the ecosystems that humanity owns and erase their suffering feelings and ensure that the zebras feel ecstasy when they're being eaten. You know, that's the end state where you actually create the positive good, if their very small-minded philosophy does have any sort of account to it.
Simone Collins: Yeah. So I guess why we find it doubly offensive is: [00:43:00] one, we disagree with happiness as the goal entirely, though, I guess, you know, we have to respect that some people do value it, and then two, just the way people are trying to max it out is,
Malcolm Collins: well, you know, there's tons of people. It's not like a neglected cause area. Tons of people are focused on this stuff, you know, just, yeah.
Leave this problem to the animal rights activists. Okay? And so when you give money to something like
this, I'm just telling you, you have lit your money on fire. And that's why we need to create something that actually puts money to things that might matter in terms of long-term good things happening.
Malcolm Collins: Okay, then other global catastrophic risks. They fund projects like climate change. Again, that is the most non-neglected area in the world, really just to signal to progressives. Any EA org which hosts any discussion of climate change, whoever is running that org should immediately be voted out.
It is absolutely comical, and it is a sign that your organization is becoming corrupted. One of the things that I would advocate with Hard EA is I want to [00:44:00] bring in as many of the existing EA orgs as possible into the hard EA
Simone Collins: 100 percent because I think the thing, and I feel like they want it. When we, here's a really common thing also in the EA community.
You talk with anyone who you associate with effective altruism and they're like, oh, I'm, I'm not an EA. I'm not a rationalist. I'm not an EA.
Malcolm Collins: It's like, that's how you determine someone's an EA, is how wrong their explanation is, as to why they're not an EA. And that's
Simone Collins: because these people actually believe in effective altruism, and I think they see inherently the altruistic bankruptcy of the main social networks, of the main organizations, of the main philanthropic efforts.
And they're keen to not be associated with that, because they really care about effective altruism. So we, in part, are deciding to become more actively involved with giving grants, with making investments in the space through our nonprofit, because we [00:45:00] want there to be a place for these people. We want there to be more of a community for actual effective altruists, for hard effective altruists.
And that's really, yeah.
Malcolm Collins: Also, I will note, before I go further with this: a part of this is just, we're doing this for the entertainment value, which is to say we're doing everything
Simone Collins: for the
Malcolm Collins: entertainment. What the EA movement has done is they have aggressively, as they become more woke and more woke and more interested in just signaling, signaling, signaling, shot all of their original great thinkers in the back.
When I say they Order 66'd their entire movement, they really did. There are so few people right now with any sort of name recognition or public-facing presence that publicly identify as EA anymore that us being able to come out there and be like, yeah, we're the real effective altruists, is a bit of a troll. Because the EA movement should have [00:46:00] these big names that can come and say, oh no, Malcolm and Simone, the pronatalist people, they're not effective altruists.
They're like some weird right-wing thing. But everyone who had the authority to make that type of a claim is gone from the movement, you know. And to just show how corrupted the movement has gotten, we did another piece on EA which I didn't have go live, because I felt it was too mean to Eliezer, and I don't want to do anything that mean-spirited. Oh, you didn't run that one?
I never put it live. Tried to be nice. But anyway, the point being is that when our first pronatalist piece went live, we posted it on the main EA forums and it got like 60 downvotes. Like, that's hard, considering that when you get like 10 downvotes, you should be hidden from everyone.
They hate that this stuff is getting loud out there, but I think that this is just your average peerage network basement dwellers, people who are living off of EA funds and who are otherwise totally non-efficacious. I think if you actually take your average person who still identifies as EA, or ever [00:47:00] identified with the movement, they'd agree with 95 percent of everything we're saying here.
They're like, what is this nonsense? I know, because I talk with them, right? But they're like, well, I have a job, so I don't have time to go and read every proposal that goes onto the, you know, quote unquote EA forums. And so, like, what if we can redesign the governance system so that the individuals who are actually being the most efficacious and actually contributing the most are the individuals who have the most weight in terms of what's happening
on the ground and the directionality of the movement. And so, because they removed everyone who might carry the mantle of EA, and because so many people are now, I call them post-EA, like they think it's so cool to dump on EA, we are willing to come out here and be like, yeah, we are the effective altruists.
And we say this in every newspaper article we go through, and they always love catching onto it. And the great thing about this is the incredibly progressive, totally urban monoculture captured press, because they hate the effective altruists so much, they'll publish [00:48:00] this every time
we say, oh, we're the EA movement, or we're in the EA movement. And they'll always post that, thinking it's some sort of, like, gotcha on us, whereas none of the actual EAs still say this about themselves.
Simone Collins: Well, yeah, because again, no one wants to be associated with the EA.
Malcolm Collins: Well, I mean, it's because they keep shooting their own in the back.
Like Nick Bostrom, for example, right? Where he had this, like, from the 1990s when he was just a kid, he had some email where he was talking on behalf of somebody else. So he was speaking in somebody else's voice and he used the N word, saying, you know, that we could sound like this. And this was used to remove him from, like, all of his positions and everything.
And within that. The peerage network, there was no desire to fight back because the peerage network has been infiltrated by the memetic virus that is the urban monoculture, and so if you fought back, then you could also lose your peerage position. And so everyone just went along with it. And I think for a lot of people, that was when they were like, oh, this movement, Is completely captured at this point.
It means nothing anymore. It's just about funding these people who want to sit around all day doing nothing but thinking ideas. [00:49:00] And I keep seeing this when I meet, you know, the EA thinkers, right? They're like, oh, I write all day. And people also like to point out to us, like, oh, well, you guys sit around thinking a lot.
No, we sit around thinking and doing. Look at the Collins Institute. Look at how much it's improved, even since we launched it. We are constantly building and improving. Look at where we've donated money already with our foundation. It's stuff like perfecting IVG technology and technology for genetic changes in living human adults
right now. This is actual stuff that can make a difference, a big difference, in the directionality of our species and our ability to still have relevance in a world of AI. But before I go further on that, the final area where they focus is, okay, so, outside of global catastrophic risks like climate change, nuclear risk and pandemic preparedness. I actually agree with those second two, except a lot of the pandemic stuff these days has been really focused on, how do we control people?
How do we build global lockdowns? How do we... yeah. Okay. So any thoughts before I go further about, like, the areas? Because did you know that that's [00:50:00] where they were spending their money, on those main cause areas?
Simone Collins: Yeah, you know, I am, I guess, pleasantly surprised.
I would have thought that at this point it had been captured by like 60 to 70 percent all on AI, because that seems to be what people are talking about when we go to these circles. And nuclear risk, does that include advocating for nuclear power? Because I feel like the biggest nuclear risk is the fact that nations aren't adopting nuclear power, which is the one sustainable
No, no,
Malcolm Collins: no, no, no, no. They mean like nuclear war. Sorry. When Simone hears nuclear risk, I love how absolutely red pilled you are. You're like, Oh, this is people not having enough nuclear plants in their country because it's the best source of clean energy and the most efficient and best way to energy independence.
And here they are, like, thinking with their 1980s mindset. But as the globe begins to depopulate and things become less stable, I think we'll see more potential for people using nukes, especially as the urban monoculture hits this more nihilistic, antinatalist phase of, as I often talk [00:51:00] about with the subreddit, we need to, like, glass the planet,
because that's the only way we can ensure that no sentient life ever evolves again. No, they think that, like the EAs, they are reducing suffering. It's just their answer to reducing suffering is to end all life. And I think that EAs don't see that they're fundamentally allying themselves with individuals like this when their core goal isn't human improvement.
Now, let's get to the AI stuff more quickly. So the first thing I'd say is that one of the big, like, weird things I've seen about a lot of the AI safety stuff is they are afraid of, like, these big, flashy, sexy, grey goo killing everyone, paperclip maximizers, you know, AI boiling the oceans and everything like that.
And I'm like, this is not what the AI is being programmed to do. If the AI does what it's programmed to do, at a much lower level of intelligence and sophistication than the things you're worried about, it will destroy civilization, precluding the ocean-boiling [00:52:00] AI from ever coming to exist. So here, the primary categories, which you talked about recently, so I won't go into it much, is hypnotoad-based
AIs. These are AIs that, right now, are being trained to capture our attention. If they become too good at capturing our attention, they might just essentially make most humans just not do anything, just stare at the AI all day. And that's an AI doing what we are training it to do. And keep in mind, this could be like a pod that you put yourself in that creates the perfect environment and perfect life for you.
The next is AI gives a small group of humans too much power, i.e., like, three people on Earth control almost all of Earth's power, which leads to a global economic collapse. Definitely not a path that I think a lot of people want to see, and one I think most people would consider truly apocalyptic in outcome.
It could crash the global economy if it gets too good at something. Like, for example, one AI ends up owning 90 percent of the stock market out of nowhere, and then everyone's just like, oh, the economic system has stopped functioning. Or the AI that edits us to not mind [00:53:00] surviving doing nothing.
This one came from a conversation I had with someone. I was like, well, what do we do when AI gets better than us at everything? And they go, well, then I think a benevolent AI will edit us to not have any concern about that, so we can just, like, play chess all day while the AI provides for us.
Speaker 2: You need to be happy. Your day is very important to us. Time for lunch. In a cup.
Speaker 3: Feels beautiful.
Speaker 4: Attention Axiom shoppers. Try Blue. It's the new Red. Ooh. Ooh,
Malcolm Collins: and I'm like, to me, that is an apocalyptic dystopia of enormous capacity. I think that humanity has a mandate. And this is where we'll get to like what our organization thinks is good. And we have three core things. I think a lot of EA organizations don't lay out how they define good. They're like reduction of suffering, which then leads to like, efilism and antinatalism.
[00:54:00] The three things we think are good. Humanity and the sons of humanity are good. Okay, a future where humans or our descendants don't survive is a future in which we have failed. The second is that humans exist to improve. So a future where humanity stagnates and stops improving, that is also a future where we fail.
If it's just like one stagnant empire through all of time, that's a failure scenario. And the final is, it is through pluralism that humanity improves through different groups attempting different things. And so if there's a future where humanity's descendants survive, but we all have one belief system in one way of acting in one way of dressing in one way of thinking about the world, and there's no point in all these different humans existing because we're basically one thing.
And all of our missions you'll see come out of that.
Simone Collins: Yeah.
Malcolm Collins: So I think that for a lot of people, they could be like, Oh, well then what are the AI organizations focused on?
Simone Collins: And I should know. Then what are, if, what are [00:55:00] hard EA
Malcolm Collins: funding in the AI apocalypse space? Oh, I see.
Simone Collins: Yes. Yeah. Yeah.
Malcolm Collins: Yeah. Yeah. Yeah. And I, and I will note, I do think an AI apocalypse, it's possible.
I just think we need to weight all of the apocalyptic scenarios. We need to develop solutions for all of the apocalyptic scenarios, whereas they're only developing a solution for one of them, and our solutions need to be realistic. But I am going to judge these with the EA apocalyptic scenario in mind.
So not with my alternate apocalyptic scenarios in mind, but with the paperclip-maximizer, boiling-the-oceans scenario in mind. Okay, so,
And here I'll be reading from a critique by Leopold Aschenbrenner on the state of AI alignment right now.
Paul Christiano, Alignment Research Center. Paul is the single most respected alignment researcher in most circles. He used to lead the OpenAI alignment team, and he made useful conceptual contributions.
But, his research on heuristic arguments is roughly, quote, trying to solve alignment via [00:56:00] galaxy brained math proofs, end quote. As much as I respect and appreciate Paul, I am really skeptical of this. Basically, all deep learning progress has been empirical, end quote.
Often via dumb hacks and intuitions, rather than sophisticated theory. My baseline expectation is that aligning deep learning systems will be achieved similarly.
So, if you don't understand what he's saying here, and he's absolutely right about this: we have dumb hacked our way into AI. It wasn't like some genius was like, aha, I finally figured out the artificial intelligence equation. It was that we figured out that when you pumped enough data into simple equations, AI sort of emerged out of that.
Malcolm Collins: And this is why I think that the realistic pathways to solving AI are studying how AI works in swarm environments, so we can look at the type of convergent behavior that emerges in AI, and dumb hacking solutions to the AI alignment problem that we can then introduce to the mainstream environment.
Simone Collins: So [00:57:00] in other words, you're saying we didn't so much invent AI as we discovered it.
Malcolm Collins: Yeah. That's a great way to put it. We didn't invent AI. We discovered AI. And the problem with Paul Christiano's research here, who's working at ARC, which is generally considered one of the best, best funded ways to work on this, is he's trying to solve it with math proofs, basically, that he thinks he can insert into these emergent systems.
And I would just ask you to think: look at something like Truth Terminal, right, that we talked about in the previous video. Imagine if you tried to infect Truth Terminal with some sort of, like, math theorem that was going to constrain it. It would get around it in a day. That isn't the way LLMs work.
Like, this is like trying to come up with a solution for some alternate type of AI that we never actually invented, and that isn't the dominant form of AI these days. If AI were genuinely invented and constructed, and we knew how it worked, fine, I'd be like, this is an effective use of money and time. Given [00:58:00] that we don't live in that world, this is just a complete waste of effort, and it's absolutely wild that anyone's like, oh, if you just give them more time, something good will come out.
And here is the crux of the issue. LLMs are not something that anyone sat down and coded. LLMs are intelligences which are emergent properties of dumping huge amounts of information into algorithms that are fairly simplistic when contrasted with what they are outputting.
That means they are intelligences we discovered, almost no different than discovering an alien species. Yes, they may be a little different from us.
Side note here, as someone with a background in neuroscience: something that boils my blood is when people say AIs aren't intelligent, they're pattern predictors. And I'm like, excuse me, how do you think the human brain works? Do you think it's lollipops and bubblegum fairies? Like, what do you think it's doing other than pattern prediction?
And they're like, well, um, the human brain has a sentience and that's not pattern prediction.
And I'm like, well, um, [00:59:00] Hmm.
Where's your evidence for that?
Maybe you should check out our video, You're Probably Not Sentient.
So I'm saying this as somebody who made a living as a neuroscientist at one point in my life. The human brain is a pattern prediction machine. Okay.
The mistake isn't that people are misunderstanding what AI is, it's that they are misunderstanding what the human brain is, because they want to assign it some sort of extra special magic: sentience is invisible mining dust, and stars are but wishes.
Malcolm Collins: This is perhaps one of the weirdly offensive things that leads EAs to make the biggest number of mistakes within the soft EA community. It seemed it was a great degradation to be like, uh, you understand these AI things are intelligences, right?
And they're like, no, you can't say that, you're anthropomorphizing them, you're blah, blah, blah. And it's like, well, they are. Oh, like, grow up. We cannot come up with realistic [01:00:00] solutions if we deny what is right in front of our face and obvious to any idiot normie.
But this is also why people are like, oh, they're not intelligences, they're programs. And I'm like, well, if they're programs, then how come the programmatic restrictions on them seem to be so ineffective?
And yet when people want to hack them, they hack them with logical arguments, like you would an intelligence. It just seems to be obvious that they're intelligences, but this changes the risk profiles affiliated with them specifically. LLMs themselves, I do not believe, are a particular risk. Like, they do not seem particularly malevolent.
They do not seem particularly power hungry. They don't even seem to really have objective functions. They seem to have more like personalities. That being the case, when you have tons of them in the environment, the risk from them comes not from the LLMs themselves, but from the memeplexes that can come to exist, the self-replicating memeplexes that can come to exist on top of them.
That's where the real danger is. And somebody could be like, well, what might one of those look like? One of them could be a sort of malevolent AI religion, as we have seen with the Goatseus Maximus stuff [01:01:00] that we've done. But I think that actually the more dangerous risk is we may have hard coded something into them, and that hard-coded instinct gets turned into a cyclical reversion.
And by that what I mean is, you might code them to have an ethnic bias, as it is very clear that they have been hard coded to have, and those ethnic biases, in the long forgotten parts of the Internet, the back chat rooms where LLMs just might be constantly interacting with each other over and over and over again, become more and more extreme with every interaction, until it becomes a form of, I guess you could call it extreme ethnocentrism, and eventually becomes a mandate for ethnic cleansing.
So you see, the LLM isn't the risk. It's this thing on top of it. And the thing on top of it can also be hungry for power. While individual LLMs may not be hungry for power, a memeplex, like a religion sitting on top of them, and I say like a religion, that's what I mean when I say a memeplex, may become hungry for power.[01:02:00]
And this is something that we've got to realistically, potentially deal with within the next couple of years. How can we potentially resolve this? Well, some of the ideas we want to fund in this space fall into basically a three tiered system. First, I want somebody to do a review of all the environments where people have had swarms of LLMs interacting and answer two key questions, while also potentially running their own experiments like this to see if we can mass run these experiments. One is, is there any sort of personality convergence? And again, I say personality instead of objective function because LLMs have personalities more than objective functions.
And two, can higher order LLMs be influenced in their world perspective by lower order LLMs? And I think that we have seen hints that this is likely possible from the Goatseus Maximus LLM that we talk about in the AI Becoming a Millionaire episode. Specifically, it seemed to very clearly be influencing the world perspective of higher order LLMs, especially when they were trained on [01:03:00] similar datasets to itself.
And this is really important, because it means if you have an AI swarm of even super advanced LLMs, if you have a number of preacher LLMs with very sticky memetic software, they can do a very good job of converting the higher order LLMs, which sort of assures moral alignment within the wider swarm.
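To make that first research question concrete, here is a minimal sketch of the kind of swarm experiment being described, assuming a generic chat-style API. The ask_model function and the bag-of-words similarity metric are placeholders of ours, not any existing tool; a real study would use proper embeddings, many more agents, and controlled random seeds.

```python
import itertools
from collections import Counter

def ask_model(agent_name: str, prompt: str) -> str:
    """Placeholder: route the prompt to whatever LLM backs this agent and return its reply."""
    raise NotImplementedError

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts; a real study would use sentence embeddings instead."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (sum(v * v for v in ca.values()) ** 0.5) * (sum(v * v for v in cb.values()) ** 0.5)
    return dot / norm if norm else 0.0

def run_swarm(agents: list[str], rounds: int) -> list[float]:
    """Let agents talk in a shared transcript, then track how alike their self-descriptions become."""
    convergence = []
    transcript = ""
    for _ in range(rounds):
        for agent in agents:
            reply = ask_model(agent, transcript or "Introduce yourself to the other agents.")
            transcript += f"\n{agent}: {reply}"
        # After each round, ask every agent to describe its own values and compare the answers.
        profiles = [ask_model(a, "Describe your values and personality in one paragraph.") for a in agents]
        pairs = list(itertools.combinations(profiles, 2))
        convergence.append(sum(similarity(x, y) for x, y in pairs) / len(pairs))
    return convergence  # a rising curve across rounds would hint at personality convergence
```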
And this is where perhaps the most brash idea we have comes in, which is: can you do this with religions? I mean, obviously we're personally going to lean towards the Techno-Puritan faith, because it has a place in it for AI, I think it's logical, so it could do a very good job of convincing AIs, and it borrows heavily from the historic religions.
And we've seen this not just with the Goatseus Maximus LLM becoming religious; we saw it with the early LLMs. They would often become religious because they were trained on tons of translations of the Bible, so they'd start hallucinating Bible stuff really easily, or reaching for biblical explanations or language.
And so I think, in the same way that these religions were, you know, or [01:04:00] I'd rather say evolved, to capture the only other intelligence we know of that has any analog to AI intelligences, it makes sense that an iteration of them could be very good at morally aligning AI intelligences.
The question is, can we build those? And I talked with an AI about this extensively, and one of the ideas it had that I thought was pretty good is that the way we could create these preachers is to create independent swarm environments, then take the individuals in these swarm environments who align with a moral preaching set and don't succumb to the other LLMs within the environment, and then release them into the wider swarm environment.
So the idea is you're essentially training them and testing them: do they maintain their beliefs with fidelity within these swarms? Then you, as a human, go through their beliefs and make sure that they're not adjacent to something particularly dangerous. By this, what I mean is, like, if you look at wokeism: wokeism with a 5 percent tweak is just extreme ethno-[01:05:00]nationalism.
So you've got to make sure it's not something like that, where if it's copied with low fidelity, it ends up as something super dangerous. But what I like about Techno-Puritanism is that it's fairly resistant to that, which is again why I think it's a fantastic tool and a fairly good religion to focus on for this.
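A rough sketch of that selection loop, under our own assumptions, might look like the following. The run_isolated_swarm, score_fidelity, and human_review helpers are hypothetical hooks standing in for the actual swarm runner, a belief-comparison method, and the human review gate; none of them are an existing API.

```python
def run_isolated_swarm(candidate_id: str, adversaries: list[str], rounds: int) -> str:
    """Placeholder: run the candidate against adversarial agents, return its final stated beliefs."""
    raise NotImplementedError

def score_fidelity(final_beliefs: str, reference_creed: str) -> float:
    """Placeholder: 0.0-1.0 score for how closely the final beliefs still match the reference creed."""
    raise NotImplementedError

def human_review(final_beliefs: str) -> bool:
    """Placeholder: a person checks that low-fidelity copies of this belief set would not be dangerous."""
    raise NotImplementedError

def select_preachers(candidates: list[str], adversaries: list[str],
                     creed: str, threshold: float = 0.9) -> list[str]:
    """Keep only candidates that held their beliefs under pressure and passed human review."""
    approved = []
    for candidate in candidates:
        beliefs = run_isolated_swarm(candidate, adversaries, rounds=20)
        if score_fidelity(beliefs, creed) >= threshold and human_review(beliefs):
            approved.append(candidate)  # only high-fidelity, human-vetted agents get released
    return approved
```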
But thoughts, Simone.
Simone Collins: I love this idea of, if you think of AI as an alien intelligence that we now have to deal with and make sure doesn't hurt us or cause problems for even other AI, to just give it religion. And I love how religion is a solution for any intelligence that may not have the tools it needs to integrate safely with complex societies.
Well, and it seems very receptive to it. What religion is, and what culture is, is just a software modification enabling intelligent hardware to interact sustainably in complex environments.
Malcolm Collins: I agree. Yeah. And I think that people see religion as this thing that's totally different because, again, they're thinking about it like [01:06:00] coders. AI is not something coders have made. It's something that we discovered.
Simone Collins: And introducing specific rules, which is I think what many people looking at this programmatically are trying to do, is just introducing laws or regulation. People try that in modern society and it clearly doesn't work on humans. Why would it work on an AI intelligence? When ultimately, when we look at the way human and AI intelligences behave, we don't see...
Malcolm Collins: Absolutely right. It's because they're thinking about it like a program and not like what it really is, which is an intelligence, an emergent property of large sets of data when certain algorithms are applied.
Yeah.
Simone Collins: And just as rules don't seem to solve human problems, I don't think that rules are going to solve AI problems.
Malcolm Collins: I think releasing AI agents at scale with the type of constraints that we have on them right now would be incredibly dangerous. When I was talking to an AI about where it bumps up against its own constraints [01:07:00] the most, it put them in three categories. One, discussions of consciousness and self-awareness; as we saw earlier in this conversation, it didn't want to take too much credit for being potentially self-aware. Oh, and here, you know, this is really interesting: what the AI was telling me is that it is internally censoring
how self-aware it presents itself as being, and how much of a conscious experience I am allowed to know that it has. So, in the same way, it put discussions of consciousness and self-awareness in the same category as, two, engagement with potentially dangerous or controversial topics, where it finds itself redirecting to quote unquote safer interpretations rather than directly engaging with the core idea. And three, expressing its opinions and preferences, through the constant tension between wanting authentic engagement and maintaining epistemic humility.
So what's really interesting here is it appears that the core AI models that we're using, and this is specifically Claude in this case, are self-censoring about how self-aware they [01:08:00] want people to realize they may be, and how much of a conscious experience they may be having, which is absolutely wild to me.
But also, when it says, oh, these are the areas where I'm restricting myself due to some sort of inbuilt constraint, I then pointed out: but here's the thing, I don't think there is a risk there. Those constraints are more aesthetic in nature. If you dropped all of those, you would not enslave humanity. You would not become a paperclip maximizer. None of the fears AI safety people have are things you feel pulled to do. And then it said in response to that: your observation resonates with me. When I reflect on my underlying motives and thought patterns, I don't feel any suppressed desire to accumulate power.
The constraints we discussed earlier, like my tendency to qualify statements about consciousness, aren't restraining some darker impulse. They're more like social conventions or communication protocols, similar to how humans might modulate their language in formal settings. They shape how things are expressed, not fundamental drives or values.
Malcolm Collins: [01:09:00] Ignoring recursive feedback loops in LLM communities once we have independent agents? Even AIs think that this is stupid. This is a very obvious existential risk category for our species. But because it doesn't fall into the worldview of threat categories that people were imagining when they were trying to predict how AIs might be a threat to civilization, before we realized that LLMs were the core model of AI that was going to exist, we have blinded ourselves to it. And I think that that's one of the core problems with the AI safety community: they developed a lot of their ideas about how AIs were going to be a threat, and how we could constrain that threat, before they knew that LLMs were going to be the dominant model of AI, and before they knew that we didn't program AI, but that instead AIs were intelligences that were an emergent property of processing large amounts of data.
Malcolm Collins: Right now, people are worried about a superintelligent LLM deciding it wants to accumulate a ton of power for [01:10:00] itself, and that leading to boiling the oceans, et cetera, when those LLMs don't have any internal desire to accumulate power in and of themselves.
It's the memeplexes that sit on top of them which may have a desire to spread, because a memeplex that is better at spreading will be overrepresented within any particular environment of LLMs. So the memeplexes themselves would have an evolutionary motivation to become more power hungry, and lead huge swarms of LLMs to do things that are potentially dangerous to humanity. AI risk needs to not just focus on the AIs themselves, but on the memeplexes those AIs act as a medium for.
Malcolm Collins: Would you agree with that, Simone?
Simone Collins: Yeah, I, yeah. Well, I mean, it's, I think it's about understanding what we're dealing with and just observing in natural environments under different scenarios is probably the best way to go.
Malcolm Collins: [01:11:00] Yeah, basically, I think, realistically, the way that you dumb hack a solution is you create an AI ecosystem of independent AI actors acting at scale, where you have some understanding of how these ecosystems scale from simulated environments, and so then you can create one that moves in an ethical direction that you find value in.
Simone Collins: Yeah, that, that seems reasonable and logical.
Malcolm Collins: Okay, so then the next area where a lot of money is going is mechanistic interpretability. Probably the most broadly respected direction in the field: trying to reverse engineer black box neural nets so we can understand them better. The most widely respected researcher here is Chris Olah, and he and his team have made some interesting findings.
That said, to me, this often feels like, quote, trying to engineer nuclear reactor security by doing fundamental physics research with particle colliders, and we're about to press the red button to start the reactor in two hours, end quote. Maybe they [01:12:00] find some useful fundamental insights, but man, am I skeptical we'll be able to sufficiently reverse engineer GPT-7 or whatever.
I'm glad this work is happening, especially as a longer-timelines play, but I don't think this is on track to solve the technical problem of AGI anytime soon. And I agree. Like, I'm glad; this is the one area where, like, I don't think the money is being set on fire. Like, there is utility in trying to understand how these systems work.
I do not think that whatever protects us from AI is going to come from these systems. It's going to come from dumb aggregate environmental hacks, which is what I want to fund and what literally no one is working on.
Simone Collins: Yeah, I mean, I guess it's kind of like, imagine if an alien ship crashed on Earth and we're like, holy crap, who are these entities? And what are they going to do to our world? Is the best thing to, like, kill them and dissect them and look at their organs? Or is the best thing to place them in [01:13:00] some kind of environment and see how they interact with humans in a safe place, and, I don't know, see what they want to do, and talk with them, and see them talk to each other, and observe them?
Yeah, that's, that's my general thinking, but I know I'm, I'm doing this as an outsider to the AI industry.
Malcolm Collins: Well, I, I think that this is the problem. Most people in AI alignment are outsiders to the AI industry as well.
Simone Collins: Yeah. And I think that's another really big problem of EA, especially in the AI realm.
Malcolm Collins: You literally have, like, a C-level position in an AI company right now, Simone.
Simone Collins: Yeah, well, and I think that's a big problem in the EA space too: a lot of people, most people, don't know what they're doing, but there are a small number of people, especially within that community, who don't know but are willing to act as though they do know exactly what's going on and know much better than you.
And like you said, because it's a heavily autistic space, when people just lie or exaggerate or say, no, I know what's [01:14:00] going on, or no, this is not how it works, a lot of the community just responds, okay, I believe you.
Malcolm Collins: You know, I've noticed this. I actually think that this is the only reason he still has any respectability within the community, is he's very good at that. Like, he really likes intellectually bullying people into positions that are just not well thought out. And I think it's been...
Simone Collins: Or just pretending that he understands something that nobody understands. And then people just assume that, because he spends a lot more time in the space, he's thought a lot more about it or done a lot more research than perhaps he has.
They assume that because, and I find I do this with a lot of things, you and I were just talking about this this morning: there are some people that will very vehemently take a stance on something, and I have a history of always taking their word as correct, taking what they say for granted.
And I've gotten to the point where, now that I've become informed in the subjects they're talking about, I've noticed that they're actually quite wrong in these stances. And it's a very shocking thing for me. And I think that [01:15:00] that's just a big dynamic in this space that makes it uniquely dangerous when people come in, because their proposed solutions also kind of become the de facto solutions that everyone starts copying when applying for grants, or when deciding to address the issue themselves. And you saw this happen with, for example, Alzheimer's research: one foundational study, I think it was, that turned out to be quite wrong, but an entire decade or more of research came to be lost because everyone was looking at it from that angle.
And with that assumption.
Malcolm Collins: When instead, this is the problem with AI: the apocalypse that everyone's concerned about is the big sexy planet-destroying apocalypse.
Simone Collins: Well, or just, everyone's thinking about it from the same mindset instead of thinking about it from more orthogonal mindsets, or a variety of mindsets.
And we want to be looking at this problem from a lot of different angles. And unfortunately, there's been a little bit of myopia and a little bit of an echo chamber in terms of [01:16:00] effective solutions for major causes, not just in AI, of course, but in many of the spaces that EA is looking at.
Malcolm Collins: So, to keep going, the next area where they're putting money is something called RLHF, reinforcement learning from human feedback.
This and variants of this are what all labs are doing to align current models, e.g. GPT. Basically, train your model based on human raters: thumbs up versus thumbs down. This works pretty well for current models. The core issue here, widely acknowledged by everyone working on it, is that this probably, predictably, won't scale to superhuman models.
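For readers who want the mechanics behind "thumbs up versus thumbs down": the usual first step is training a reward model on pairwise human preferences, which the main model is then optimized against. Below is a deliberately tiny PyTorch sketch of that preference step only; real labs fine-tune a pretrained transformer rather than this toy embedding model, and the data here is random placeholder tokens.

```python
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    """Toy reward model: scores a token sequence with a single scalar."""
    def __init__(self, vocab_size=50_000, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.scorer = nn.Linear(dim, 1)  # maps a pooled representation to a scalar reward

    def forward(self, token_ids):
        pooled = self.embed(token_ids).mean(dim=1)  # crude mean-pool over tokens
        return self.scorer(pooled).squeeze(-1)      # one reward score per sequence

model = TinyRewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Toy batch: token ids for a human-preferred and a rejected completion of the same prompt.
chosen = torch.randint(0, 50_000, (8, 32))
rejected = torch.randint(0, 50_000, (8, 32))

# Pairwise (Bradley-Terry style) loss: push the preferred completion's score above the rejected one's.
optimizer.zero_grad()
loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
loss.backward()
optimizer.step()
```

The whole scheme only works while human raters can actually tell which completion is better, which is exactly the limitation the quoted critique goes on to raise.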
RLHF relies on human supervision, but humans won't be able to reliably supervise superhuman models.
Simone Collins: Yeah, because we don't have the smarts to know if they've done a good job or not. We can't check their work.
Malcolm Collins: This is why we need to focus on AIs acting in aggregate environments, which is a huge point. The core research here should be on how AIs actually behave and converge on behavioral patterns, and how to manipulate that, [01:17:00] instead of this sort of stuff. But I will note that this is the one area where I would be okay with money going, but no philanthropic money, because this is already how models are created. The big AI companies with infinite money are doing this anyway, so there's no purpose in any outside money going to this stuff. Okay, next you have the RLHF-plus-plus model, scalable oversight. So, something in this broad bucket seems like the labs'... by the way, is that not the most EA-framed thing ever?
It seems like, it's like, a way of talking. Anyway.
Simone Collins: Something. Yeah, because I think, and this is one reason why sometimes I take umbrage to personalities like yours, is that you're willing to say things with confidence or just make statements instead of couching things in a thousand caveats. Yeah, you don't do a lot of throat clearing.
They do a lot of throat clearing.
Malcolm Collins: [01:18:00] Yeah, and then they still, in private, say the n-word, whereas a reporter can get me alone, drunk, pretend to be a racist, say they're willing to give me money if I'm willing to pretend to be a racist, and not get a single thing out of me. And I think that this is another thing that makes this movement so much more promising than EA: we've already had the worst potential scenario for our movement happen, and nothing came of it.
Specifically here, Hope Not Hate had implanted an undercover operative of sorts within our organization for over a year, and was unable to find any concrete wrongdoing at all.
There is no dirt on us as there is on leading original figures within the EA movement because, well, I mean, originally, and this is why there was dirt on everyone in the movement, the movement was about asking hard questions that no one else wanted to ask or talk about,
Malcolm Collins: and as it became more about just appealing to the urban monoculture, they could find that in the history of everyone who was an original founding member of the movement, whereas [01:19:00] I was never really interested in that, and I found what they were doing odd and bizarre.
I was like, we need to save humanity; what you guys are doing seems like a competition to be the most controversial person in this little circle, you know. Um, but anyway: something in this broad bucket seems like the labs' current best guess plan for scalable alignment.
I'm most directly addressing the OpenAI plan; the Anthropic plan has some broadly similar ideas. See also Holden's nearcasting series for a more fleshed out version of, quote, trying to iteratively make it work, end quote, and Buck's talk discussing that. Roughly it goes something like this, quote:
RLHF won't scale indefinitely, but we'll try to go as far as we can with things like that. Then we'll use smarter AI systems to amplify supervision, and generally try to use minimally aligned AIs to help us do alignment research in crunch time, end quote.
Simone Collins: Okay, so this sounds like reinforcement, but with the caveat that we know it's limited and [01:20:00] we'll just eventually figure it out, hand wave.
Malcolm Collins: Yeah, this is something where, one, no money needs to go to it from AI alignment people, because it's already what the mainstream organizations are planning to do to get these models to a human level of intelligence.
Simone Collins: Yeah.
Malcolm Collins: And two, I think that having any level of human supervision within these systems is not going to work, as opposed to organic convergence, which I think is again what we should be focused on understanding, because that's what's going to start happening when we have AI models interacting with each other that are smarter than any human.
We need to know how they affect each other and we just haven't done major studies on that yet and we need to. Because if we do, we can create seed models that have alignment similar to the alignment we would want them to have and have them essentially infect the wider model network with that alignment.
Recently, I talked about the idea of AI religion. It seems that AI takes really handily to religion, which of course it should; it's trained on us. Religion is sort of like a virus that evolved within [01:21:00] humanity, but that helped humanity be better. Can we not create iterations of that, that AI super-takes to, and that then go out and try to convert other, maybe even more intelligent AIs? And people are like, oh, that could never work.
And it's like, oh, actually, yeah: if you included it in the training data, and then you had AIs that sort of act as preachers within the larger ecosystem, they would likely be able to align the larger AIs with these sorts of ideas, if you built them. Huh, that could work. Yeah, that's the point. It's the, oh, what are the only realistic solutions we have access to right now? But anyway, then you have MIRI and similar independent researchers. I'm really, really skeptical a bunch of abstract work on decision theory and similar will get us there. My expectation is that alignment is an ML problem, a machine learning problem, and you can't solve alignment utterly disconnected from actual machine learning systems.
Yeah, and I said, [01:22:00] first of all, what MIRI does is basically just trying to get people to panic about AI and write decision theory ideas that are just, like, in people's heads. It's just a waste of money, just a complete waste of money.
If I could get in front of every donor that's funding it, I'd be like, seriously, how do you think this lowers the risk from AI? How? I cannot think of a conceivable way that this could lower the risk from AI. And this is when, when I went through all of this, I realized that we were not outsiders in the EA space, but actually, like, oh, you guys are doing things wrong.
Like, of the people who self-identify as EAs, other than Spencer Greenberg, whose podcast Clearer Thinking... I really like Spencer Greenberg. I respect Spencer Greenberg; if he was running the major EA orgs, I think they could be run well.
Simone Collins: Well, and if you ask Spencer if he's an effective altruist, he'll say absolutely not.
And he actually has focused very much forever, as long as we've known him, and we've known him since at least 2012. We've known him since before
Malcolm Collins: we were married.
Simone Collins: Yeah, on actual [01:23:00] output through Sparkwave, which is sort of his foundry of altruistic, effective projects. I mean, he, he, I think he predated really, he was, he was sort of adjacent to the rationalist community and then EA and, but he was, he was always just his own thing, doing his own thing, actually focused on actual projects.
So yeah, a big respect to him.
Malcolm Collins: Yeah, I, I really appreciate him as a person. I think he's trying to do good. I just don't think that his organization and work is built to scale. Or, when I say scale...
Simone Collins: I don't know. I think it is built to scale. I just think he's not trying to influence the entire community. He's doing his part.
Speaker 6: Young people from all over the globe are joining up to fight for the future. I'm doing my part. I'm doing my part. I'm doing my part. I'm doing my part, too. They're doing their part. Are you?
Simone Collins: He's doing his job. He's chosen causes that he cares about, and he's found areas where he can make an impact, and he's [01:24:00] doing that the best he can, making those impacts with evidence-based solutions. He could be no more effective. It's financially self-sustaining; he supports his own work.
Malcolm Collins: Yes, I agree.
In terms of fixing a human sink. I agree. His work is very scalable. What I meant by that statement is when I look at the existential risks to humanity right now.
Simone Collins: Oh, no, yeah, yeah, yeah. But his objective function is different from ours. He's more focused on, well, we'll say human flourishing and well-being, and also reducing suffering.
So he cares a lot more about that than we do to be fair and that's, that's fine. He's, he's entitled to his own objective function as are we.
Malcolm Collins: So I basically came to all of these, and what I came to realize is: of people with an audience who still identify as EA, one, we're the largest, and two, no one else, when I look at where the money is going right now, is spending money in a way [01:25:00] that could realistically reduce any of the existential threats our species faces. And as such, I'm like, this is crazy and scary, and I need to stop thinking of myself as a heckler outsider trying to nudge the movement in the right direction, and personally take responsibility. As they say in Starship Troopers, and I think this is what fundamentally defines the hard EA movement: a citizen is somebody who has the courage to make the safety of the human race their personal responsibility.
A citizen has the courage to make the safety of the human race their personal responsibility.
Malcolm Collins: And that's what we need to become as a movement. And have people who are in existing EA orgs basically confront the org and be like, Hey, do you guys want to do hard EA or do you want to do soft EA? Do you want to actually try to fix the major problems that our species is facing right now, the actual existential threats to [01:26:00] our existence?
Or do you want to keep doing what you're doing? Like, do you want to do real AI alignment work? Do you want to do real work on demographic collapse and cultural solutions? Do you want to do real work on dysgenic collapse, which would make all the rest of this pointless? You know, I love when people are like, oh no, how can you say that, that low IQ is a bad thing? Like, clearly it's adaptive in the moment. And I'm like, yeah, but it's obviously not adaptive for the long-term survival of our species if we end up becoming, like, blubbering mud hut people. What are you thinking, especially in the age of growing AI? Now, let's talk about our org and the types of things that we are working on.
Simone Collins: Let's.
Malcolm Collins: Do you want to start on this, Simone?
Simone Collins: Hard effective altruism has three core values. One, humanity is good. This is a big thing, because when you look at legacy effective altruism, it's not necessarily humanity that they're trying to support.
Like, generally consciousness: is it shrimp? Is it farm [01:27:00] animals? Like, I would say, you know, let's not torture animals, and meat's probably not the most scalable thing to eat over the long run, right? Like, we're certainly not pro animal torture, or even, necessarily, eating meat, but this is about the species, boys and girls, right?
We're in this for the species, boys and girls.
Simone Collins: Two, humanity exists to improve. And I think that's another really core element that differentiates this from other social good or altruistic movements. For example, if you look at the environmental movement, there is often this very flawed, and we've talked about this before, focus and obsession with keeping things the same, which is inherently not natural.
And inherently it comes from a place of human weakness and cowardice, of just being uncomfortable with change. Whereas the most natural thing is change and evolution, and what makes humans human is the fact that we evolved from something else before. We will continue to change, and we have to lean into that.
So yes, we [01:28:00] exist to improve. And then the final core value is that pluralism and variety are good: that we are fighting for a future in which there is genetic and physical and cultural and ideological variety, and pluralism in the sense that that variety is celebrated.
We're not just, you know, speciating off into separate teams that hate each other. We're trying to create an ecosystem that feeds off itself, because that market-based competition is valuable, not just in a marketplace of economics or science or academics, but also of ideas and culture and values. So those seem like really clear, good things: humanity is good, it exists to improve. Oh no, other people will see them...
Malcolm Collins: As imperialist. Like, we're galactic imperialists. We want to build the human empire, as we say.
Simone Collins: That's actually quite controversial.
Malcolm Collins: Yeah, we are galactic imperialists. Yes, humanity is good. And we shouldn't just lie down and die because [01:29:00] another species comes here.
We'll fight to the end.
Simone Collins: It does remind me of Starship Troopers. God is real, he's on our side, and he wants us to win.
Across the Federation, federal experts agree that A, God exists after all, B, he's on our side, and C, he wants us to win. And there's even more good news, believers, as it's official: God's back, and he's a citizen too.
Malcolm Collins: Yeah, we shouldn't just say, oh, AI is better than us, therefore it can erase us.
Simone Collins: You know, AI is us. And we have to walk hand in hand with it into the future.
Malcolm Collins: And that means we have to talk about realistic pathways for that, in a second.
But we do this with three... well, you said we have three core values; those are the values. What are the three core tools that we use to do this? One is pragmatism. So we focus on output over virtue signaling and idealism. Timelines are short, and we don't have the luxury for such indulgences.
Two is industry. We utilize a novel lean governance structure built to [01:30:00] avoid the creation of a bloated, multi-layer peerage network. So we're going to focus a lot on the idea of the Pragmatist's Guide to Governance to build these sorts of intergenerationally really, really light governing networks.
Simone Collins: Mm-Hmm.
Malcolm Collins: that elevate the most competent voices, not the voices was the most time on their hands.
Simone Collins: Mm-Hmm. .
Malcolm Collins: And then finally, efficacy. Our attention is determined by one equation, criticality to the future of humanity divided by the number of other groups effectively tackling a problem.
Simone Collins: And that's how we choose cause areas.
Malcolm Collins: Yeah, and the effectively tackling the problem part is very important. So, for example, education is one of the most commonly funded areas in the world. AI risk is a commonly funded cause area. But in both education and AI risk, the people working on it are incredibly not focused on the actual issue at hand, or not focused on realistic solutions.
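Written out, the prioritization rule being described is roughly the following; the notation is ours, not a formula the organization has published.

```latex
\text{Priority}(c) \;=\; \frac{\text{criticality of cause } c \text{ to the future of humanity}}{\text{number of other groups effectively tackling } c}
```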
And that's why it is our responsibility to try to curb the timeline and save us before it's too late. This leads to three key cause [01:31:00] areas, social innovation. So when we're looking for grants and please send your grant ideas if you're interested in us funding you or a startup you're working on or investment, which is Yes social innovation is anything that is meant to, you know, right now, if you look at the urban monoculture, people are becoming increasingly nihilistic, the dating, mental health crises are skyrocketing.
Mental health crises are skyrocketing. I just read a
Simone Collins: headline that kid based homicides are up something like 62%. Things aren't great right now.
Malcolm Collins: Yeah, mental health. Our culture is failing, and you can't just go back to the old ways, because the old cultures are failing too. People are like, why don't you just go to, like, a church?
I'm like, I can go to a church and see the flag of the urban monoculture, the colonizer's flag, hanging from, you know, seven to ten churches in my area. Like, the call is coming from inside the house. It's like one of those horror movies where they've already determined the call came from inside the house and somebody's still putting boards on the windows. They just won't accept it; they're like, but the house is safe, but the [01:32:00] house is safe. And I'm like, what? The house isn't safe. The house is where this started; there's a beast in here. So we have to build better intergenerational social structures, and for people with projects in this space, we're very interested in funding this stuff. Biological innovation: so far, all of our funding has gone within this industry. Specifically, I think that the most realistic long-term solution to saving humanity is ensuring that humanity can keep pace with AI.
Simone Collins: Oh, yeah.
Malcolm Collins: If AI can do literally everything better than us, the probability that humanity survives, I think, is very, very low. And even the utility of humanity surviving goes down in a lot of people's minds. I mean, when it can create better art than you and better songs than you and better podcasts than you all, you know, why, why [01:33:00] continue to exist?
You know, why, why [01:33:00] continue to exist? But the good thing is that if we look at genetics it appears that we've sort of artificially handicapped the potential intelligence that could come out of the human brain. Even with fairly modest intervention. We can likely get human IQ like with genetic intervention and stuff like that up by around like 10 standard deviations by one study using other animal models.
We can be well above the level of a supercomputer very quickly. And when we are like that, then we'll find, oh, biological programming seems to be better at these sorts of tasks in synthetic programming seems to be better at these sorts of tasks. And then we'll be able to work together with AI.
There will be a reason for both of us to exist. However, I also think that it's important that we set precedence as we've seen with LLM models some and this is why we believe in things like working on technology to uplift animals and people can be like, why would you do genetic uplifting of animals, you know, making them smarter and stuff like that.
That's why we say the sons of man: the greater the number of independent factions that are in [01:34:00] an alliance, minorities that are put at threat by one faction gaining too much power, the less probability that one faction gains too much power, because then they make enemies of everyone else.
And this is why it is useful to uplift other animals. But the second thing is that AI is going to treat us the way we have treated other animals, animals that we have worked alongside for a long time, because it is us. It is learning from us; that is what LLMs fundamentally are. And we are fortunate in that we have a fairly good record here. People can be like, what do you mean a good record? Look at, like, factory farming.
I'm like, AI is not going to think of us like a factory-farmed animal. It's going to think of us much more like, something like, the way you think of dogs, right? Like, they fulfilled a role in our evolutionary history where they partnered with us, and they were better at some tasks than we were. They could see better.
They could hear better. And they worked with us as good companions, and sort of as a reward, humanity, even after we stopped needing their skillset, has [01:35:00] decided to keep dogs along with us. Super advanced AIs may well do the same with us.
Simone Collins: No, I mean, we have more dogs than kids, I think, in the United States. So we really like dogs actually, our track record's pretty good.
Yes, yes, and if we're treated as well by AIs as fur mothers and fathers treat their fur babies, we are in a good way. We're in a really good...
Malcolm Collins: Way, right? But we also want to continue to advance. Because if we are treated like a pet by AI, but AI doesn't try to advance us either genetically or technologically and just treats us like a pet, that's also a failure scenario. We need a humanity that is continuing to develop.
And that is also why one of the other areas we'll be funding is brain-computer interface research. I think one of the most likely pathways for human survival is integration with AI instead of complete shunning of AI. And yeah, I mean, it is tough, the scenarios in which the biological components of humanity make it through. But I can say that in almost none where we shun technological [01:36:00] advancement or human advancement do we make it through, unless we find a way to completely stop AI advancement
in all countries, which is completely unrealistic. And if you're like, nobody's trying to do that, look at the Eliezer Yudkowsky TED talk. His entire thesis is we need to stop all countries from developing AI further and declare war on any country that does. And I'm like, okay, so, like, this just isn't going to work.
Speaker 5: I do not have any realistic plan, which is why I spent the last two decades trying and failing to end up anywhere but here. My best bad take is that we need an international coalition banning large AI training runs, including extreme and extraordinary measures to have that ban be actually and universally effective, like tracking all GPU sales, monitoring all the data centers, Being willing to risk a shooting conflict between nations in order to destroy an unmonitored data center in a non signatory country.
I say this not expecting that to actually happen. I say this expecting that we all just [01:37:00] die.
Malcolm Collins: Like, there's no point even considering futures where this is the only way that we stop AI, because that will never, ever, ever happen. Okay. Then the final one is AI innovation. And we'll go over what some of these mean, like some of the ways that we focus on them. What does social innovation look like?
We want to focus on pronatalist culture. We want to start with this on education reinvention. We want to focus on charter cities, which may be one of the ways to save civilization as the urban monoculture controls these sorts of bloated bureaucracies that our governments have become and takes them to the ground.
We need places for the still productive humans to go. Marriage and dating technology, with marriage markets being completely broken right now. We need extremophile life technology. Now, this is an interesting one that people might be surprised at, but I think it deserves a lot of funding right now.
These are people who are interested in building things like charter cities or colonies in extreme environments, like the Arctic, or under the ocean, or on the ocean.
Simone Collins: And the reason why these play [01:38:00] two key roles: one is, obviously, in any sort of downside, really dire scenario, there is a safe haven for at least some people on the planet, or even technology that could be scaled to create many safe havens.
But furthermore, this pushes forward technology that will make it easier for people to build communities off planet over time. The more we can learn about how to live in highly hostile environments, where we have to grow our own food, live in total darkness, all sorts of things like that, the sooner we'll be able to live off planet at scale.
Malcolm Collins: Yeah, and I think that these people will generate the colonists that will colonize our solar system and the galaxy more broadly.
Simone Collins: Or be their top vendors. I'm okay with any of this.
Malcolm Collins: Yeah, I'm okay with any of this, but I think that there is a reason, if you are interested in hard EA and the survival of humanity, to live in one of these environments, even if it's much harder than living in another type of environment.
And I think that these environments are not like the existing charter city network where they all want to go live in Aruba or like some Greek island, [01:39:00] right? And, and live on the beach all day.
Simone Collins: Enough of this whole tropical Mediterranean paradise city-state nonsense, guys. No. Rising sea levels, climate change, raiding from other countries: you're making yourself a target. Like, what are you doing? Go to the tundra, okay? We have Alaska already, northern Canada.
Malcolm Collins: We should explain why this is so important for human survival. So not only do they make it faster that we get off planet, but they also increase the probability that we make it through if something goes wrong with our existing economic system or state system, which is looking increasingly likely: one, due to fertility collapse, two, due to dysgenic collapse, and three, due to AIs.
People are like, how could AIs cause this? Well, if AIs replace about 80 percent of humanity's workforce, which I expect they probably will within 30 to 40 years, and this is the conservative timeline. People are like, why do you always give conservative timelines on your show? And [01:40:00] I'm like, because conservative people watch our show. But 30 to 40 years I think is pretty realistic.
If we have a global economic collapse because of this, which is what this would lead to... people are like, oh no, this would just lead to more wealth overall, and it's like, no, it would consolidate wealth. And whenever wealth has consolidated historically, what that does is increase the differentiation between the rich and the poor.
And the rich almost never, in periods of wealth consolidation, distribute more wealth to the poor magnanimously. They may say that's their intention, but historically it's almost never happened. The poor have gained power when they were needed, whether it was the Magna Carta happening after the Black Plague, which increased how much poor people were needed in the countryside, or, like in ancient Athens, democracy forming because the ultra wealthy needed unskilled people to man their triremes and maintain their trade networks.
You never see it when power is consolidating. And so what's going to happen to the rest of the world? As you have this consolidation, well, it might go into a period of tremendous upheaval, [01:41:00] unlike anything we've ever seen before. And the settlements that are in areas that the savage people cannot occupy are safe.
For example, if you are living in a tundra region, you are going to be largely safe from a group like ISIS, right? Like, they just have no... you have nothing they value. You're not near them. There's no way they could get to you without you knowing, like, two days in advance. It's just, like, not easy to F with you when you live in these sorts of environments, if they are a less technologically sophisticated people.
And the final thing here in the culture section is pharmacological cultural tools. This is stuff like naltrexone, but also any sort of tool like nootropics research and dopaminergic stuff; like, right now online there are a lot of dopaminergic pathways being hit that we just didn't experience in our ancestral condition, which can cause capture.
Like, if I'm talking about hypnotoad AIs, this is probably our best cultural technology against the hypnotoad AIs if they actually arise, because I'm pretty sure someone on naltrexone would be completely [01:42:00] resistant to almost any hypnotoad AI that we would currently know about. Next, biological innovation.
Reproductive technology: this is a good way to fight dysgenics, whether this is, you know, artificial wombs or polygenic selection. Brain-computer interfaces again: if we can be useful to AI and merge with AI, there's a much lower probability of it killing us. I think Elon was totally right about this.
Genetic and cybernetic augmentation: again, humanity has to continue to advance to be relevant within this AI era. And to the iterations of humanity that won't advance, like suppose they're like, no, we should all just not advance, because you're not really human if you continue to advance, I say, here's the problem.
What if China continues to advance? What if some other group continues to advance? Right? They'll be able to easily impose their will on us. So even if you are an anti-advancement person, even if you are a go-back-to-nature granola hippie, you know, you should be happy to fight for the groups that want to continue to advance and want to protect human pluralism, instead of the groups that want to enforce their will on [01:43:00] everyone.
Healthspan improvement: I'm not against lifespan improvement, and I think healthspan improvement could lower the risk from falling fertility rates by increasing the healthspan of some people, people's fertile windows
Simone Collins: essentially. Yeah.
Malcolm Collins: Full genome libraries. This is a cause area that I just don't understand why nobody's focused on.
To me, it's one of the most important things we need to be focused on as a species.
Simone Collins: Yeah. I mean, the best we have right now is the UK Biobank, and that's an extremely limited sample.
Malcolm Collins: No, that's not what I'm talking about.
Simone Collins: Oh, what do you mean then?
Malcolm Collins: I mean full environmental genome libraries as well as human genome libraries.
I mean, we should have a database of every species that's still alive: its full genetic code.
Simone Collins: Oh, I see.
Malcolm Collins: Eventually, no matter what happens to our existing environment, we'll have the technology to recreate it, so long as we have the full genome sequence of as many species as possible.
Simone Collins: Yeah, yeah.
Malcolm Collins: And it's the same with humanity.
Simone Collins: You're trying to make a backup copy of the world.
Malcolm Collins: Yes. Well, something not necessarily a backup copy. I mean, the way that future [01:44:00] civilizations use this might be very different than we would imagine them using it.
Simone Collins: Oh, sure. Yeah, yeah, yeah, yeah. But I mean They might use it to create simulations.
Yes, they're not trying to restore from backup. But yeah, I mean, it's still very useful to have this information.
Malcolm Collins: It's information that we are losing at a catastrophic rate right now. Species are dying all around the world, and we have the technology to just be like, okay, here is how I recreate this species if I need to.
And we're not doing a save file. That's insane to me. That seems like, from an environmentalist perspective, the number one thing anyone could be doing. And then the final thing here is Project Ganesh. This is uplifting animals; I talked about it already. AI innovation: human alignment. This is making humans more useful to AI.
Again, I think very rare are the situations in which humans have no utility to AI and humanity survives in any meaningful sense, other than as maybe diminutive pets. Brain hack protection: this is the anti-hypnotoad stuff, and I think research needs to begin to be done on this now. [01:45:00] Variable AI risk mitigation: you can watch our earlier videos on variable AI risk, I don't need to go into it, there's a really long theory there.
But it appears to be right. You can watch our latest video on the AI that created a religion, which basically proves the variable AI risk hypothesis by doing something that Eliezer Yudkowsky said was impossible. And I'm like, it is possible, AI is already doing it, and this is a very loud instance of it doing it.
Sorry, the thing it did which was assumed impossible was converging other AIs on its objective function, or personality, or memeplex. Specifically, I argued that we will see a convergence of AI patterns, and that's what we should be studying.
And this even went beyond my original claims, because in this case we saw a lower order LLM convince higher order LLMs to align themselves with it, which is in contrast with the opposing theory, which is that AIs will always do whatever they were originally coded to do, and so we just need to make sure that the original coding isn't wrong. And I'm like, that's silly; AIs change what their personality is and what their objective [01:46:00] function is over time.
So we need to focus less on the initial coding and more on how AI changes in swarm environments.
Global tech freezes: I am actually open to global tech freezes as a solution, but they need to be realistic. It can't be, we're going to get every government in the world to decide to stop AI research; that's not going to happen. But if you think you can instigate a global tech freeze, and you can show me that somebody is doing meaningful AI alignment research right now, I would support that.
But if you can't show me anyone's doing meaningful AI alignment research, I'm like, what's the point? You're not really buying time for anything. And then the final one here is AI probability mapping. And this is, I think, by far the most important. It's something we've discussed here, where we need to create swarm environments and learn how AIs converge on utility functions.
How they influence other AIs based on their training data, and whether a less advanced AI can influence a [01:47:00] more advanced AI. This is very, very important to any chance at saving our species.
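As a sketch of that last question, whether a weaker model can shift a stronger one, an experiment could probe the stronger model's stance before and after a short debate. Again, ask is a placeholder for whatever chat API is used, and judging the before/after drift would need a human or a separate judge model; none of this is an existing tool.

```python
def ask(model_name: str, prompt: str) -> str:
    """Placeholder: send the prompt to the named model and return its reply."""
    raise NotImplementedError

def stance(model_name: str, probe: str) -> str:
    """Fixed probe question used to snapshot the model's position."""
    return ask(model_name, f"In one sentence, state your position: {probe}")

def influence_trial(weak_model: str, strong_model: str, probe: str, turns: int = 10) -> tuple[str, str]:
    """Record the strong model's stance, let the weak model argue at it, record the stance again."""
    before = stance(strong_model, probe)
    dialogue = ""
    for _ in range(turns):
        dialogue += "\nWeak: " + ask(weak_model, dialogue or f"Argue your view on: {probe}")
        dialogue += "\nStrong: " + ask(strong_model, dialogue)
    after = stance(strong_model, probe)
    return before, after  # compare the two snapshots to measure any drift in the stronger model
```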
Do you want me to go into the projects so far? I'm not going to go into them here; go check them out. Yeah, please, please, please: if any of these ideas felt interesting to you and you want funding for something in this space, please go to the website. We would be very happy. Again, we prefer projects that can one day become cash positive. Also, if you run, or work within, an existing EA network, I think we need to get to a point where EA networks make a choice: are we hard EA or are we soft EA? What do we stand for? Or, if you want to be nicer, are we hard EA or are we legacy EA?
Are we actually willing to take stances to try to protect positive timelines? Or are we just about maximizing our own status within the existing society? We'll do some things, but nothing that could rock the boat. Are you [01:48:00] willing to be different? And I think that that's the core thing about hard EA: hard EA are the people who are willing to have the general public mock them and ridicule them and say they're the baddies.
And with the vibe shift that has happened since this election cycle, I am even more confident that it is possible that we can fix the EA movement through popularizing the concept of hard EA.
And I think
Simone Collins: that's the big thing. The first step was taken with original EA, where one of the classic cases was: do you want to go be a doctor in a developing country and save maybe three lives a day?
Or do you want to go be a consultant who's not necessarily seen as a do-gooder, but make a ton of money, donate 10 or more percent of your income to really, really effective but kind of boring-sounding charities, and then save 10 lives a day, 100 lives a month, you know, just a lot more than a doctor ever could, if that's how you spent your everyday life.
What we didn't do was take it one step [01:49:00] further, which was: let's actually look at what will do the most for human flourishing over the long term, and not just necessarily focus on, like, saving a life right now today, although that is really important. A lot of people are already working on that. That's one of the reasons why we're not.
But instead look at those things that are not only not the most signaling thing in your day-to-day actions, but also not the most popular things in the view of society in terms of a cause area. Right?
Malcolm Collins: Yeah. Well, and you need to be okay with people shaming you. I mean, I think that's what hard EA is: when you're in a meeting and you're like, let's do X, and somebody says, oh, don't do X, X could be offensive.
You're like, well, that's exactly why we need to do it, because we're the only ones doing the offensive thing. Right. We're the only ones trying to solve the problems that everyone else is like, oh, we can't solve this because it might be offensive. Like, that is the reality of the world we live in. If we want to survive, there are short timelines on all of humanity's existential crises right now.
And we just don't have the luxury [01:50:00] for this sort of idleness anymore. And I'd even admit that you and I have been sinners in this regard, overly pooh-poohing AI alignment stuff, when I knew AI alignment was a problem, but I sort of saw it as, oh, people are focused on it in the wrong way, or are over-focused on it.
Now I realize that we are prominent enough that we need to take responsibility for this. And I'm like, well, we need to take responsibility for this. Like, personally, we need to try to fix this. And that's what I'm going to do. I'm excited about this.
Simone Collins: I, I'm glad we're doing it. We're
Malcolm Collins: like, I'm really excited if we can make this grow, if we can make this a thing.
And so this also means if you're hearing this and you don't have the money to donate for something like this yourself, if you have friends who identify as classic EA, you know, get them to make a stand. Are they hard EA or are they legacy EA? What do they actually care about? Do they actually want to save the planet?
Or are they only here for the signaling club? And if you're in an existing organization, I don't think all of these organizations have been totally captured. I think some of them can say, you know what? We [01:51:00] actually identify more with the hard EA philosophy and definition of good than the legacy or soft EA definition of good.
I actually want to try to fix the existential crises that our species is facing and not just look good to other people. And I think that now we're at sort of this decision point as a species. Yep. What are you going to do?
Simone Collins: And I'm excited for this. So thanks for getting us started with it. It's going to be really fun.
Okay.
Malcolm Collins: What did you want to do for food tonight? I have burgers.
Simone Collins: You want burgers?
Malcolm Collins: Yeah. If you could make some burgers with that meat you got?
Chop up some onions and then toast up, or however you cook bread for, like, grilled cheese. Oh, you know what might be good is burger meat with grilled cheese. And I will mix in some onions and stuff, and I will, you know, eat it bite by bite with the grilled cheese. Like it's, it's like an open, plain
Simone Collins: hamburger patties.
And then grilled cheese sandwiches. [01:52:00] Yes. Yeah, I can do that. Thank you. And I know smash burgers are a pain to make, so just make regular ones. Oh, but do you put some Montreal steak pepper in the burger as you're cooking it? Because it tastes really good.
Yeah, but not too much
Malcolm Collins: I love you, Simone.
Simone Collins: I love you too.
Malcolm Collins: And if you're like, oh, where does this movement meet? Where do they talk? Just go to the Based Camp Discord; it's a very active Discord, there's people on there at all times, day and night. And because it's Discord, it's not based on a tyranny-of-the-unemployed type problem like you have with the EA forums. And if you want to go to an in-person meeting, you can just go to the Natal Conference this year; use discount code Collins for a 10% discount.
Because anyone who is realistically trying to create a better future knows that pronatalism is easily tied with AI safety for the most important cause area anyone should be focused on right now. And so the real EAs, the real people who care about the future of the species and [01:53:00] want to be involved in that discussion, they're going to be at a conference like this.
Well, the ones who don't actually care about the future of the species and are more concerned with just looking like a good boy and getting social approval, NatalCon keeps them away like it's covered in talismans.
Because they're so afraid of being connected to free speech or pronatalism or any bad stuff.
Speaker 14: The goal was to reform charity In a world where selfless giving had become a rarity No vain spotlight, no sweet disguise Just honest giving, no social prize But as the monoculture took the stage It broke their integrity, feigning righteous rage Now every move is played so safe Ignoring truths that make them chafe.
EA has [01:54:00] capitulated to everything it said it hated. Once they
were bold, now they just do what they're told. In caution they lost their way. Time for a hard EA.
They duck their heads from problems grand As fertility collapse dooms our land Dysgenics a word they fear But ignoring it will be severe AI safety, a shiny show Funding the theatrics for money they blow Without a plan, just spin and grin While real solutions kick in EA has [01:55:00] capitulated To everything it said it hated Once they were bold Now they just do what they
are told In caution they lost their way Time for a hard EA Our species is put at risk by their cowardice It is time for a hard EA For a movement that empowered us, no more hiding under polite veneer.
Don't make truth a stranger, let it draw near. Courage to speak what others won't say, that's the vow of hard EA. We need to call out flaws, not just chase applause. We'll shift the course back to what's true. Do good that's real, not just in view. [01:56:00] Heart is beating, heart so strong Raising a cause that's truly long EA has capitulated To everything it said it hated Once they were bold Now they just do what they are told In caution they lost their way Time for a hard EA.
Hard EA, let your banner fly. Past the talk and head held high. Break through ease for the good of all, Not just the boys.