Just saw a link to this on Zerohedge:
It’s not AI. It’s a controlled, lab-grown HI (Human Intelligence), currently at ~800K neurons and playing Doom (it sucks at it, though - for now).
What happens when it surpasses the ~100 billion neurons of ours?
Call me a skeptic, but I don’t think that clickbait will be replacing humans anytime soon.
EDIT: Shucks… They edited the title, and now I look like a fool!
If you make a thinking human brain, how is that different from a severely disabled person? It would be alive, human, and forced to play Doom on repeat, and they want to reintroduce slavery, presumably in data centers… I don’t think this is the way.
I didn’t edit the title, the moderator did.
Me neither. But one thing is for sure - if/when they do grow a super-human human intelligence out of this, we’ll finally get the answer to one of the oldest questions: is there such a thing as a soul?
Whether we personally like it or not, my take is they are going to do it. China will, 100%, and the US will too, because otherwise China would be the only one doing it.
Also, I don’t think the ultimate plan is to have it (just) play Doom.
How so? It won’t prove anything, regardless of how building human intelligence goes. A soul, by definition, is on some other plane; it’s not a material thing, and no amount of research happening in the ‘material world’ will prove or disprove its existence.
Personally, I don’t believe in the supernatural and think that there’s no soul, because there’s no reason to believe otherwise (other than beliefs and religions that someone taught you and that were not derived from first principles), but that’s kind of off-topic.
You can have self-awareness / consciousness without a soul; maybe that’s more what you were getting at.
Just to better understand your angle: Does AGI (although still an abstract concept) qualify for “self-awareness”?
I hate this “<New tech name> will REPLACE YOU” rhetoric. It sounds more like spreading fear, instead of advertising tech.
Wasn’t my intention to spread FUD, but rather to attract attention to what I personally find far more plausible to be (some day) capable of autonomously developing software than LLMs.
I guess the exact line is a bit blurry, and I think that proving self-awareness is also a fool’s errand to some extent, because how are you going to tell self-awareness apart from something perfectly pretending to be self-aware? (I’m not even sure there’s a difference between the two.)
Because of this dilemma, I think we can say that yes, it would have to be classified as self-aware (unless we could infer from the underlying mechanism that it’s just a play-pretend simulation, like LLMs).
So if we were to create AGI by trying to recreate the human brain (let’s say we run a perfect simulation of our neurons), then we’d have to assume it’s self-aware. That probably doesn’t fully flesh out the topic, though; it might depend on how exactly the AGI would be implemented (what I’m getting at is that AGI itself might not always imply self-awareness).
I believe that self-awareness must (among other “more established” criteria) meet the following two conditions:
I don’t think this is feasible in the foreseeable future. Nevertheless, my point here is that if we manage to engineer a “perfect” human brain and it turns out to be a complete human mind (with self-awareness and all the goodies that come along), then I believe we’ll be able to conclude that there is no such thing as a soul. However, if we achieve that and yet only get a super-(logically-)intelligent human computer, then we’ll have something to puzzle ourselves with.
I see, you’ve gone about it the other way around: “If we can build a perfect human brain, then there’s no soul, because our artificial human is just like you and we didn’t put a soul in it” - something along those lines. I wonder if that logically follows, because a soul is not something you can measure right now, so I think people would just say, “Yes, this machine acts like me, is self-aware, and has ‘all the goodies’, but unlike me it doesn’t have a soul and won’t go to heaven.”
I think that’s not even a logically inconsistent stance if you already believe in a soul/god (an immaterial world), so I’m a bit skeptical that it would settle any discussion.
I still believe you’ll be perfectly capable of telling whether it has a soul (or whatever you want to call it).
I believe the end goal is for the brains to play multiplayer Quake with each other.
Laugh Out Loud (so it has more than 6 characters)
Time to get back to neural nets?
There are several thousand years of philosophy written on this topic, and it is a bit more complicated than this. Metaphysical dualism is not the only school of thought, even among people who still like to talk about “souls.” And among those who don’t, we are all (generally speaking; maybe this is becoming less true) quite attached to the notion of “personhood,” without which none of our laws have much meaning. These laws didn’t come from nowhere: their history is closely interwoven with the philosophical (and yes, religious) history of the concept of a soul, and while we may think we are just dodging useless philosophical mumbo jumbo when we shut down such conversations, the truth is that skepticism without a willingness to create new concepts to serve as foundations of our laws means we will throw out the baby with the bathwater.

Which… honestly, it may just be too late for any such proactive, thoughtful approach to cultural change, and instead we’re just going to ride this particular historical roller coaster with our eyes covered by VR goggles and our ears plugged with AirPods. For nigh on 50 years we have been subjected to the propaganda that only measurable data is “real,” and so now we are finally going to find out what it means when human beings are just data: philosophically, legally, and therefore morally speaking. If you want a sense of what that will probably be like, you can just take a look at how things are going for non-human entities on Earth, or, of course, all the human beings that have been denied the legal status of personhood (often with the explicit justification that they didn’t have souls!).
Personhood isn’t really a metaphysical concept as far as I’m aware. Valuing personhood doesn’t require you to buy into any kind of belief; it can be purely utilitarian: “we value personhood because it creates successful societies.”
Maybe we’re talking about different things, though, because in my understanding personhood doesn’t have anything to do with souls. Sure, religious people might try to justify the value of personhood using concepts such as ‘soul’, but it’s not the only way to do it.
Regarding the second part of your post: I feel it’s a bit vague, and I don’t fully understand where you’re going with it. I don’t really know what kind of propaganda you’re talking about; science doesn’t answer any moral questions. It can fuel philosophical/moral discussions, but it doesn’t give any answers on what we ought to value, so how does it play into this? Or maybe you meant something else when you mentioned ‘propaganda that only measurable data is “real”’?
Also, in a sense I’d agree that ‘only measurable data is real’ and the rest are just things we make up, but often this distinction is not that useful, because we often value made-up stuff more than other ‘real’ things.
Hope that other people don’t mind how we’ve shifted this thread haha
Not in every context, no, of course not. As I said, though, the history of the concept is very much bound up with metaphysical concepts. I assume it’s that history you are not aware of. Most people are simply “aware” of the concept of personhood the way they are of most concepts - as they learned and used them in common language (“you can’t treat people that way”). Most adults have long since stopped asking “why not” unless they get trapped in a thread like this, and then they bow out as soon as they realize the answers are not so simple.
Yes, right, utilitarianism, which is one school of thought that rejects metaphysical dualism and thus some conceptions of the “soul,” but not all. Again, not all concepts of the soul imply metaphysical dualism. But, like, how much utilitarian theory have you read? The actual arguments can get just as bonkers as any defense of the soul. I’d love to hear your take on how “hedonic calculus” can address any of the issues in the OP’s article.
Look, I’m not trying to hijack this thread just to call you out specifically. But it is certainly relevant to questions of AI whether you care mostly about “practical” issues like maintaining full employment (??) or “moral” ones like slavery, and what we can “prove” in any discussion.
Sure, science takes itself to be the ultimate anti-propaganda, and propaganda is the domain of all soul talk and the like. Lots of scientists are in fact very religious people who simply do not mix the two. Which is great, but doesn’t help us here.
OK, so then what’s your opposition to bringing the concept of the soul into the conversation, if not that the only things worth discussing are “real” things like data? Are you not, intentionally or not, advocating that we should only value data? Genuine question, because my main point is that it’s pretty self-defeating to dismiss the question of what we should value, and why, in a thread like this. Most forums would probably just lock this so we can all get back to figuring out how to use AI to build the next killer app. But aren’t we tired of doing stuff and ready to stop and ask whether we should? Nah?
> As I said though, the history of the concept is very much bound up with metaphysical concepts. I assume it’s that history you are not aware of.
Yeah, in my post I mentioned that people might want to use metaphysical concepts to ground the value of personhood, so obviously I’m aware of the history (like everyone). I don’t care about the historical context here, because we’re both willing to admit that personhood doesn’t have to be connected to the metaphysical. It’s handy if you’re religious, because it’s somewhat easier to ground the value of personhood then, but it’s not the only way to do it.
> But, like, how much utilitarian theory have you read?
Probably not nearly enough; maybe I didn’t use the word correctly. I know there’s some twisted ‘hedonic’ version of utilitarianism; I just meant cost/benefit analysis. I’ll probably need to find some time to educate myself on the core philosophical material, because most of my knowledge/beliefs are derived from my own thinking and discussions I’ve had, and my understanding of the theory is very limited.
Just so you don’t have to wonder what I meant there: I meant that you can derive rules that force you to value personhood as a necessary requirement for creating a successful society. Without them, society would not be stable or competitive. Of course, this doesn’t mean that individuals/groups cannot be stripped of personhood, but I’d argue that most of society needs to be granted some inherent rights for the collective to work well.
> OK, so then what’s your opposition to bringing the concept of the soul into the conversation, if not that the only things worth discussing are “real” things like data?
You know how there are ‘known unknowns’ and ‘unknown unknowns’? I’d say there are ‘real unreal’ things and ‘unreal unreal’ things (sounds silly, but I believe the analogy holds).
There are some things that are not material but still very much exist: bonds between people, nationality, culture; personally, I’d add consciousness here too.
In the other category I’d put pure fiction: myths, religions, magical things, and also concepts such as the soul. You can bring those things into a conversation, but I feel they’re on much shakier ground and require you to buy into a belief system, and discussing them doesn’t get you any closer to any truth (though I bet a religious person would put their religion and everything associated with it in the first category too, so they’d feel differently about this).
I guess I don’t get the point of talking about the soul instead of consciousness. Proving consciousness is already hard enough and doesn’t require you to buy into the metaphysical. Picking the right metaphysical framework is just impossible - I’d argue there is no metaphysical realm, but even if you believe there is, how do you tell you’re right if there’s nothing real behind it and you can’t prove it with data? How do you pick the correct one from infinitely many?
I wouldn’t say we should only value data. I’d say that science should help us describe the world, and we should arrive at our values in some other way. I’d just hope that at some point all of our derived values connect to the real world somehow. Moral rules can compose, I guess, so we can value ‘unreal’ things like ‘nationality’ that break down into other things, all the way down to ‘personhood’, but I’d like to think that ‘personhood’ is linked to the real world somehow (so that we share some ‘common’ human experience).
I’d be opposed to building rules on top of things that are not linked to the real world (like being racist toward people who have a green chakra).
Those examples might be a bit silly, but I guess they explain my moral framework a bit better. Hope my messages are not too chaotic.
I wonder how, in your mind, the consciousness-vs-soul distinction is useful in this conversation. Why bring the soul into it? How is that useful? And what’s even the difference between the two, if there is one?