LLMs will not replace us, but this might (lab-grown HI (human intelligence))

Just saw a link to this on Zerohedge:

It’s not AI. It’s a controlled, lab-grown HI (Human Intelligence), currently at ~800K neurons and playing Doom (it sucks at it, though - for now).

What happens when it surpasses the ~100 billion neurons of ours?

Call me a skeptic, but I don’t think that clickbait will be replacing humans anytime soon.

EDIT: Shucks… They edited the title, and now I look like a fool!

If you make a thinking human brain, how is that different from a severely, multiply handicapped person? It would be alive, human, and forced to play Doom on repeat; they essentially want to reintroduce slavery, presumably in data centers… I don’t think this is the way.

I didn’t edit the title; the moderator did.

Me neither. But one thing is for sure - if/when they do grow a super-human human intelligence out of this, we’ll finally get the answer to one of the oldest questions: is there such a thing as a soul?

Whether we personally like it or not, my take is they are going to do it. China will 100% and the US will because otherwise China would be the only one doing it.

Also, I don’t think the ultimate plan is to have it (just) play Doom.

How so? It won’t prove anything, regardless of how building human intelligence goes. A soul is, by definition, on some other plane; it’s not a material thing, and no amount of research happening in the ‘material world’ will prove or disprove its existence.
Personally, I don’t believe in the supernatural and think there’s no soul, because there’s no reason to believe otherwise (other than beliefs and religions that someone taught you, which were not derived from first principles), but that’s kind of off-topic.
You can have self-awareness / consciousness without a soul; maybe that’s more what you were getting at.

Just to better understand your angle: Does AGI (although still an abstract concept) qualify for “self-awareness”?

I hate this “&lt;New tech name&gt; will REPLACE YOU” rhetoric. It sounds more like spreading fear than advertising the tech.

It wasn’t my intention to spread FUD, but rather to draw attention to something I personally find far more plausible to (some day) become capable of autonomously developing software than LLMs.

I guess the exact line is a bit blurry, and I think proving self-awareness is also a fool’s errand to some extent, because how are you going to tell self-awareness apart from something perfectly pretending to be self-aware? (I’m not even sure there’s a difference between the two.)
Because of this dilemma, I think we can say that yes, it would have to be classified as self-aware (unless we could infer from the underlying mechanism that it’s just a play-pretend simulation, like LLMs).
So if we were to create AGI by trying to recreate the human brain (let’s say we run a perfect simulation of our neurons), then we’d have to assume it’s self-aware. That probably doesn’t fully flesh out the topic, though; it might depend on how exactly the AGI would be implemented (what I’m getting at is that AGI itself might not always imply self-awareness).
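For what it’s worth, a “perfect simulation of our neurons” is wildly beyond anything sketchable in a forum post, but the standard textbook toy for a spiking neuron is a leaky integrate-and-fire model. To be clear, this has nothing to do with what the Doom experiment actually runs, and every parameter value below is made up for illustration:

```python
# Toy leaky integrate-and-fire (LIF) neuron: a huge simplification of real
# neurons, shown only to illustrate the basic update loop. All parameter
# values (rest/threshold/reset voltages, time constant) are illustrative.

def simulate_lif(input_current, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-70.0, tau=10.0, dt=1.0):
    """Return (membrane voltages, spike times) for a list of input currents."""
    v = v_rest
    voltages, spikes = [], []
    for t, i in enumerate(input_current):
        # Leak back toward the resting potential, plus the input drive.
        v += dt * ((v_rest - v) + i) / tau
        if v >= v_thresh:        # threshold crossed -> spike, then reset
            spikes.append(t)
            v = v_reset
        voltages.append(v)
    return voltages, spikes

# Constant drive produces a regular spike train.
voltages, spikes = simulate_lif([20.0] * 100)
print(len(spikes))  # prints 6
```

Multiply a loop like this by ~100 billion neurons with ~1000 synapses each (and real neurons are far richer than LIF), and you get a sense of why “perfect simulation” is doing a lot of work in that sentence.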

I believe that self-awareness must (among other “more established” criteria) meet the following two conditions:

  1. Arriving at the notion of self without being taught it.
  2. Genuinely failing to grasp (feel) the possibility of an absence of self (as before birth and after death).

I don’t think this is feasible in the foreseeable future. Nevertheless, my point here is that if we manage to engineer a “perfect” human brain and it turns out to be a complete human mind (with self-awareness and all the goodies that come along), then I believe we’ll be able to conclude that there is no such thing as a soul. However, if we achieve that and yet only get a super-(logically-)intelligent human computer, then we’ll have something to puzzle over.

I see, you’ve gone about it the other way around: “if we can build a perfect human brain, then there’s no soul, because our artificial human is just like you and we didn’t put a soul in it” - something along those lines. I wonder if that logically follows, because a soul is not something you can measure right now, so I think people would just say “yes, this machine acts like me, is self-aware, and has ‘all the goodies’, but unlike me it doesn’t have a soul and won’t go to heaven”.
I think that’s not even a logically inconsistent stance if you already believe in a soul/god (an immaterial world), so I’m a bit skeptical that it would settle any discussion.

I still believe you’ll be perfectly capable of telling whether it has a soul (or whatever you want to call it).

I believe the end goal is for the brains to play multiplayer Quake with each other.

Laugh Out Loud (so it has more than 6 characters)

Time to get back to neural nets? :wink:

There are several thousand years of philosophy written on this topic, and it is a bit more complicated than this. Metaphysical dualism is not the only school of thought, even among people who still like to talk about “souls.” And among those who don’t, we are all (generally speaking; maybe this is becoming less true) quite attached to the notion of “personhood,” without which none of our laws have much meaning. These laws didn’t come from nowhere; their history is closely interwoven with the philosophical (and yes, religious) history of the concept of a soul. While we may think we are just dodging useless philosophical mumbo jumbo when we shut down such conversations, the truth is that skepticism without a willingness to create new concepts to serve as foundations of our laws means we will throw out the baby with the bathwater.

Honestly, it may just be too late for any such proactive, thoughtful approach to cultural change, and instead we’re just going to ride this particular historical roller coaster with our eyes covered by VR goggles and our ears plugged with AirPods.

For nigh on 50 years we have been subjected to the propaganda that only measurable data is “real,” and so now we are finally going to find out what it means when human beings are just data, philosophically, legally, and therefore morally speaking. If you want a sense of what that will probably be like, just look at how things are going for non-human entities on earth, or, of course, for all the human beings who have been denied the legal status of personhood (often with the explicit justification that they didn’t have souls!).

Personhood isn’t really a metaphysical concept as far as I’m aware. Valuing personhood doesn’t require you to buy into any kind of belief; it can be purely utilitarian: “we value personhood because it creates successful societies”.
Maybe we’re talking about different things, though, because in my understanding personhood doesn’t have anything to do with souls. Sure, religious people might try to justify the value of personhood using concepts such as ‘soul’, but it’s not the only way to do it.

Regarding the second part of your post, I feel it’s a bit vague and I don’t fully understand where you’re going with it. I don’t really know what kind of propaganda you’re talking about; science doesn’t answer any moral questions. It can fuel philosophical/moral discussions, but it doesn’t give any answers about what we ought to value, so how does it play into this? Or did you mean something else when you mentioned the ‘propaganda that only measurable data is “real”’?
Also, in a sense I’d agree that ‘only measurable data is real’ and the rest are just things we make up, but that distinction is often not very useful, because we often value made-up stuff more than other ‘real’ things.

Hope that other people don’t mind how we’ve shifted this thread haha :laughing: