Godfather of AI: "You Have No Idea What's Coming" (Interview)

Watched this earlier and thought it was worth sharing. For those not aware, Geoffrey Hinton is considered one of the godfathers of AI, yet despite that he’s been very vocal about his worries over where AI might lead.

Edit: please ignore the dumb cover/thumbnail - it does not reflect the quality of the content itself (i.e. what Hinton says)

It’s not a long video, so perhaps to avoid this thread becoming another mega-topic we could stick to what they themselves are discussing? Not a hard rule, but it would be interesting to hear what people think about his specific points and how the thread develops from there :icon_biggrin:

4 Likes

Back in the day there was a good series on the BBC by Adam Curtis called The Power of Nightmares. The series explores how elites use false flags, fake threats and media scare campaigns to frighten the public into begging for terrible policies. From those campaigns we got the Iraq War, the Patriot Act, Covid lockdowns, digital ID and other tragedies. The imagery here, a satanic robot with red eyes towering above a frightened everyman, is designed to elicit an emotional reaction. Cui bono? When confronted with such obvious media manipulation, practice discernment!

10 Likes

I was about to say the same, but @AndyL beat me to it. Just the cover art of this video reeks of rage-baiting and scare-mongering. Hard to take it seriously. And yeah, yeah, we’ve heard it all before from supposedly “serious” YouTubers: “it drives engagement from certain groups but the video is still good”.

Nope. If you felt confident in your content you would not opt for low-effort baiting.

To the topic at hand: I believe a lot of the original AI researchers grossly overestimate the progress that “AI” has made to date. That’s where their fear comes from, and it’s misplaced, because they ascribe too much intelligence, goal-directedness and unsupervised autonomy to the modern tools.

LLMs as coding assistants have pretty much peaked, and people are starting to devise pipelines where multiple LLMs work on a bigger project with 1-2 other LLMs acting as project managers and asking the human operator for clarifications. This is because LLMs are, well, large, and tend to go way off-course if left unattended (and often do so even in direct response to a crystal-clear prompt). They are not focused. They must constantly be beaten back onto track.

The “future”, as far as I can extrapolate it right now, is this: multi-agent systems if you want human workers displaced. This estimate can of course be wildly inaccurate. As a funny example, many boomer-age sci-fi writers predicted even more usage of atomic energy (back when it was called “atomic” and not “nuclear”). Famously, Isaac Asimov based most of the tech in his universe on atomic energy and physics discoveries, but he was far from the only one. And look how that turned out; we went in a very different direction.

I am not a futurist or an oracle though. I am only saying how things currently look like developing. Of course, tomorrow one of the LLM companies might actually get closer to AGI, and all the castles in the sky get blown away by the wind and new ones get built.

It’s a wild ride, this era, I must admit. “May you live in interesting times”, the ancient Chinese curse.

5 Likes

I read the AI-generated transcript of this interview (took me 5 min instead of 20). I’m sure there’s some irony there…

First, the phrase “You have no idea what’s coming”, despite being in quotes, does not appear in the interview. Top-notch journalism. (Though of course that’s not Hinton’s fault.)

Second, this interview didn’t strike me as having much substance if you’re already familiar with the discussion. It covered (my personal summary):

  • Questions about the pace and safety of AI (can you slow it down and can you make it safe?)
  • The potential for joblessness / a bleak outlook for future generations
  • The likelihood of a (runaway?) superintelligence

But there weren’t any real insights because nothing was treated with much depth. For example:

Interviewer: I mean, in a world where we have superintelligence, which you don’t believe is that far away.

Yeah, I think it might not be that far away. It’s very hard to predict, but I think we might get it in like 20 years or even less.

So, jot that down.

It’s disappointing because Hinton is obviously an important figure in the field. But perhaps it’s not too surprising. A 20-minute interview won’t necessarily elicit the kind of nuanced take that you need when dealing with something so complex and uncertain.

If you’re interested in these topics but want more depth, I can recommend: https://aisafety.dance/

A sweeping tour guide to AI safety for us warm, normal fleshy humans

It’s broken into 5 parts, though the last 2 aren’t out yet. Don’t worry, though, there’s still an expected 17min + 45min + 52min = 114min of reading.

It’s for a lay audience for sure. But it manages to cover both the whole Skynet scenario that everyone not already in the know is worried about, and things like the “we accidentally made our hiring process super racist” problem, which is a very real concern today and needs to be addressed (IMO with regulation, but I’ll leave that for readers to decide for themselves).

3 Likes

I agree with you all about the thumbnail. I was in two minds about making the video a link when I saw the preview, but it’s a pain having to click through to another site just to watch a vid, so I left it as is. I guess his team thought it would be a good idea (probably to appeal to his audience). I still think the interview is worth watching as Hinton speaks in layman’s terms (again, probably for their intended audience). I added a note to the OP.

I can totally understand this. I think most of us are by now accustomed to the problem-reaction-solution playbook that most of our govts have used time and time again, but in this case I can’t see how making us worried about AI would be to their benefit (or more specifically, to the benefit of the people they work for - i.e. the ruling classes/billionaires/trillionaires). If anything, they’d probably not want people to worry about AI, as that could lead to a backlash or restrictions. The only plausible reason I can think of for a false flag is the investment money, but they could probably get that anyway given how much AI has taken off.

On top of that this is Geoffrey Hinton, the godfather of AI himself (unless of course he’s in on it).

So personally I don’t think it’s fear mongering for nefarious reasons, at least not from Hinton. It might be from others, like Ilya (for the investment money) but Hinton seems genuinely concerned. (I circle back to this at the end of this post.)

I agree it was a dumb move. I expect somebody from his team thought it would appeal to his audience. Shame really, as the interview itself is actually pretty good, perhaps precisely because it’s aimed at that audience.

I’m guessing it might have been in the full interview, as the uploaded video is only 20 minutes. They say something about autonomous weapons being covered, but there’s none of that in the video, so some stuff has definitely been cut. But yeah, I agree, it was daft to use that as the title if it wasn’t going to appear in the interview, even if it does capture the gist of what Hinton is getting at.


Here are some key points from the interview that I found noteworthy:

Ilya Sutskever

Hinton says Ilya Sutskever was the main force behind GPT-2, which led to ChatGPT - but that he left the company for “safety reasons”. He’s since received billions in investment for his new company:

Sutskever co-founded and was chief scientist at OpenAI. In 2023, he was one of the members of OpenAI’s board that ousted Sam Altman as its CEO; Altman was reinstated a week later, and Sutskever stepped down from the board. In June 2024, Sutskever co-founded the company Safe Superintelligence alongside Daniel Gross and Daniel Levy.

I suppose the accusation could be true, or it could be that he just wanted his own company. Given the kind of people we’re dealing with here, I’d be inclined to think the former was used to make the latter possible. But that’s just my hunch. I don’t trust billionaires or wannabe billionaires.

Hinton on job losses

He thinks it’s going to be like machines in the industrial revolution. He says “you can’t have a job digging ditches now, because a machine can dig ditches better than we can”, which of course is true - machines do some things much better than us. He goes on to say “for mundane intellectual labour, AI is just going to replace everyone” and that “it may well be in the form of fewer people who are using AI assistance … doing the work of 10 people previously”.

We already know this is happening, not just from our own experience (we can do more with AI - that’s why we use it) but there have been lots of stories on DT about thousands of job losses directly because of AI:

Superintelligence

He thinks it’s less than 20 years away; some people think it’s considerably closer, some further. (Superintelligence is when AI is basically much smarter than us at all/most things.)

They also talk about how good AI is right now, with the interviewer saying he got Replit to make an app for him, which he said was amazing, but terrifying. On this Hinton says: “And if it can build software like that - remember that AI, when it’s training, is using code - and if it can modify its own code, then it gets quite scary, right? It can change itself in a way we can’t change ourselves. We can’t change our innate endowment - there’s nothing about itself that it couldn’t change.”

The BORG

Sorry, couldn’t resist! :lol:

When asked why he thinks it’s superior, one thing he mentions is how AIs can share data with each other billions of times faster than we can.

Well, let me tell you why I think it’s superior.

Interviewer: Okay.

Um, it’s digital. And because it’s digital, you can simulate a neural network on one piece of hardware and you can simulate exactly the same neural network on a different piece of hardware. So you can have clones of the same intelligence. Now, you could get this one to go off and look at one bit of the internet and this other one to look at a different bit of the internet. And while they’re looking at these different bits of the internet, they can be syncing with each other so they keep their weights the same, the connection strengths the same. (Weights are connection strengths.)

So this one might look at something on the internet and say, “Oh, I’d like to increase this strength of this connection a bit.” And it can convey that information to this one. So it can increase the strength of that connection a bit based on this one’s experience.

Interviewer: And when you say the strength of the connection, you’re talking about learning.

That’s learning, yes. Learning consists of saying: instead of this one giving 2.4 votes for whether that one should turn on, we’ll have this one give 2.5 votes for whether that one should turn on. And that will be a little bit of learning.

So these two different copies of the same neural net are getting different experiences, they’re looking at different data, but they’re sharing what they’ve learned by averaging their weights together. And they can average a trillion weights at a time. When you and I transfer information, we’re limited to the amount of information in a sentence. And the amount of information in a sentence is maybe 100 bits. It’s very little information. We’re lucky if we’re transferring like 10 bits a second. These things are transferring trillions of bits a second. So they’re billions of times better than us at sharing information. And that’s because they’re digital, and you can have two bits of hardware using the connection strengths in exactly the same way.

We’re analog and you can’t do that. Your brain’s different from my brain. And if I could see the connection strengths between all your neurons, it wouldn’t do me any good, because my neurons work slightly differently and they’re connected up slightly differently. So when you die, all your knowledge dies with you.

When these things die - suppose you take these two digital intelligences that are clones of each other and you destroy the hardware they run on. As long as you’ve stored the connection strengths somewhere, you can just build new hardware that executes the same instructions. So it’ll know how to use those connection strengths and you’ve recreated that intelligence. So they’re immortal. We’ve actually solved the problem of immortality, but it’s only for digital things.

Interviewer: So, it will essentially know everything that humans know, but more, because it will learn new things.

It will learn new things. It will also see all sorts of analogies that people probably never saw. So, for example, at the point when GPT-4 couldn’t look at the web, I asked it, “Why is a compost heap like an atom bomb?” Off you go. (Interviewer: I have no idea.) Exactly. Excellent. That’s exactly what most people would say.

It said, “Well, the time scales are very different and the energy scales are very different.” But then it went on to talk about how a compost heap, as it gets hotter, generates heat faster, and an atom bomb, as it produces more neutrons, generates neutrons faster. And so they’re both chain reactions, but at very different time and energy scales. And I believe GPT-4 had seen that during its training. It had understood the analogy between a compost heap and an atom bomb.

And the reason I believe that is: if you’ve only got a trillion connections - remember, you have 100 trillion - and you need to have thousands of times more knowledge than a person, you need to compress information into those connections. And to compress information, you need to see analogies between different things. In other words, it needs to see all the things that are chain reactions and understand the basic idea of a chain reaction and code that, then code the ways in which they’re different. And that’s just a more efficient way of coding things than coding each of them separately.

So it’s seen many, many analogies, probably many that people have never seen. That’s also why I think the people who say these things will never be creative are wrong. They’re going to be much more creative than us, because they’re going to see all sorts of analogies we never saw. And a lot of creativity is about seeing strange analogies.
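For the programmers here: the syncing-by-averaging he describes is essentially the weight averaging already used in distributed training. A minimal toy sketch in Elixir with Nx (the numbers are made up; nothing below comes from the interview):

```elixir
# Two "clones" of the same network start from identical weights, then train
# on different data. Averaging their weights afterwards gives each clone
# "a little bit of learning" from the other's experience. These 3-element
# tensors are toy stand-ins for the trillions of real connection strengths.
weights_a = Nx.tensor([0.10, 2.40, -1.30])
weights_b = Nx.tensor([0.12, 2.50, -1.28])

# Sync step: both clones adopt the element-wise average.
synced = weights_a |> Nx.add(weights_b) |> Nx.divide(2)
# => roughly [0.11, 2.45, -1.29]
```

Real systems often exchange gradients rather than raw weights, but the principle - digital copies can merge what they’ve learned, analog brains can’t - is the same.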

Inequality

He says that “in a society which shared things fairly equally, if you get a big increase in productivity, everybody should be better off”, but “if you can replace lots of people by AIs, then the people who get replaced will be worse off and the company that supplies the AIs will be much better off, as will the company that uses the AIs, so it’s going to increase the gap between rich and poor”.

“And we know that if you look at that gap between rich and poor, that basically tells you how nice the society is. If you have a big gap, you get very nasty societies in which people live in walled communities and put other people in mass jails. It’s not good to increase the gap between rich and poor.”

Safe development

Going back to what @AndyL said about govt agendas, it seems Hinton’s own agenda is that “we should be making a huge effort to develop it safely”. Which seems reasonable, given he’s seen as the godfather of AI… I don’t suppose anyone wants to go down in history as the person responsible for the thing that wipes us out…

1 Like

Hinton aside, most AI Safety peddlers offer solutions that lead to Regulatory Capture or Censorship.

3 Likes

Very nice discussion, and I think G. Hinton did a very good job of keeping it digestible for a wider audience.
I agree with most of what he said in this episode.

I started working with AI in ~2018 (if I remember correctly), though I didn’t realize it at the time; for me it was just data.

I squashed image feature vectors to represent a user profile and made a modified cosine similarity function to find similar profiles. When I ran the first tests, it blew my mind how well it grouped people based on surprising properties.
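For anyone curious, plain (unmodified) cosine similarity in Nx looks something like this - a minimal sketch with made-up vectors, not our actual profile code:

```elixir
# Cosine similarity: dot(a, b) / (norm(a) * norm(b)).
# The vectors are made up; in my case each was a squashed image feature
# vector representing a user profile.
a = Nx.tensor([0.2, 0.8, 0.1])
b = Nx.tensor([0.3, 0.7, 0.0])

similarity =
  Nx.divide(
    Nx.dot(a, b),
    Nx.multiply(Nx.LinAlg.norm(a), Nx.LinAlg.norm(b))
  )
# => a scalar tensor; close to 1.0 means the profiles point the same way
```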

Ever since then I’ve followed AI’s development here and there, and I think it’s going to take over the world in ways we don’t expect now, and it’s coming very fast.

So far I believe the Elixir community is adapting well to these changes.
We use Nx, Bumblebee, Axon and Tidewave in my current company and I’m very satisfied with these tools. Of course we use a lot of LLMs too; our record usage is very high, over 20B tokens in a day, however crazy that sounds, but on an average day it’s probably around ~200M tokens.

3 Likes

Thanks for referring to the Hegelian dialectic. I don’t think “most of us” have understood how this works.

With regard to “AI” (or anything else we should have an opinion about) it’s all about (controlling) the narrative.

We (people of good will) using Elixir because it’s awesome will be exploited by these billionaire {nsfw} eschatology networks unless, miraculously, many millions of people suddenly understand what’s going on.

Until then our work will be used against us.

4 Likes

My concern is that current AI is just a collection of historical knowledge.

If we do not have a really powerful way to generate new ideas beyond human input, would it be only an interactive library?

3 Likes

This is currently doing the rounds…

Will make you both :joy: and :sob: :lol:

In my world, people this stupid (both those who finance this and those who believe it will happen) will be banned from using technology and forced to do farming. At least they’ll be useful to society that way.

We can dream, right?

1 Like

I don’t think you’re meant to take it that seriously/literally, Dimi, even if the goal is to get people to think about what could happen…


A lot of people will take sites like these much more seriously than they should, so your argument goes both ways.

I did not, like, get angry at this, but sometimes I am alarmed that such stupidity is allowed to proliferate unimpeded.

1 Like

I had assumed you were talking about the actual real quotes from the people this site is parodying because like, those are all actual real quotes.

1 Like

Oh, I thought it was a given that I got that this site is a parody and that I was facepalming at the real quotes?

If not, sorry for not making it crystal clear. But yes, what you said is spot on. That was it.

1 Like

Ha yes so I was right. This was more of an indirect response to Aston who it seems thought you didn’t get that but ya… uh… now I’m continuing to put words in people’s mouths so I’ll stop here :grin:

2 Likes