OpenAI GPT-3 - Let's have that talk

Hello everyone,

The Elon Musk-funded OpenAI has come out with the third version of its “Generative Pre-trained Transformer”, GPT-3 for short. I’ve seen quite a bit of discussion on the subject. Focusing mostly on the coding part, it looks like the thing can come up with at the very least some simple apps (think to-do list), just by feeding it thoughts / ideas.

I’ve been using at least one strong contender in the no-code movement on a weekly basis myself (namely Webflow), but my knowledge of pure machine learning / AI systems is rather limited, so I’m often left wondering what these changes actually mean.

What do you folks think about this? Is it a realistic threat to our jobs in the near future, or just another tool in our toolbox?

It’s absolutely inevitable that a lot of CRUD apps will become automatically generated at some point. The truth that keeps getting denied, mostly for ego reasons, is that many apps aren’t that different from each other at their core; the distinguishing features are maybe 1% to 10% at best.

Running machine learning analysis over N repositories of skeleton CRUD apps and then commanding it to create one, with certain aspects switchable on/off, isn’t much different from what `mix phx.new` and its flags do. :smiley:
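To make the switch-on/off idea concrete, here’s a toy Python sketch of a flag-driven scaffold generator. This is purely hypothetical (it’s not how `mix phx.new` is actually implemented); the file paths just mimic Phoenix naming conventions for illustration.

```python
# Toy sketch of flag-driven scaffolding: boolean feature flags switch
# pieces of generated boilerplate on or off, the way phx.new's
# --no-ecto / --no-html / --live flags do. Paths are illustrative only.
def scaffold(app_name, *, ecto=True, html=True, live=False):
    files = [
        f"{app_name}/mix.exs",
        f"{app_name}/lib/{app_name}.ex",
    ]
    if ecto:
        files.append(f"{app_name}/lib/{app_name}/repo.ex")
    if html:
        files.append(f"{app_name}/lib/{app_name}_web/controllers/page_controller.ex")
    if live:
        files.append(f"{app_name}/lib/{app_name}_web/live/page_live.ex")
    return files

print(scaffold("shop", live=True))       # full app plus LiveView boilerplate
print(scaffold("api_only", html=False))  # JSON API: no HTML layer generated
```

The point is that most of a project skeleton is a deterministic function of a handful of choices, which is exactly why generators (ML-based or not) can cover so much of it.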

Several examples of app generators I bookmarked some weeks ago:

I am sure that many more exist (and if you know of them, please post links, I’d like to collect a lot of them!).

I’ve also started becoming a fan of no-code app making (thanks for the Webflow link!) and I believe it’s high time the programming field got a huge pruning of cruft. We should all laser-focus on bringing real business value and start automating the various aspects of project creation and management, because they really are almost identical. (I am seriously not a fan of all the tooling and automation minutiae in apps; stuff like bringing up a regular CI/CD integration, docs generation, syncing with rarely-changing external DBs like timezones or RFCs, code coverage, etc. should come baked in, with the option of customizing it before going to production.)

In 18.5 years of my career I’ve only seen 2, maybe 3, projects that required a different approach; and now that I think about it, it’s probably still possible to automate those as well, because their format is mostly the same, just with different priorities.

I am skeptical about what ML can bring to the table, though. I’d even bet that normal humans will beat it at automating app creation.


TBH I doubt it - the impressive results we’ve seen so far with tools like GPT-3 and image synthesis tend to be things that we’re good at extracting meaning from despite the inclusion of weirdly incorrect details.

For instance, you can person camera tv man woman understand sentences even with the inclusion of stray words. Here’s a whole site about spotting anomalies in StyleGAN output: http://www.whichfaceisreal.com/learn.html

You don’t want EITHER thing happening if you’re doing business logic: an invoice system that multiplies all the numbers together instead of adding them is not at all useful.
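To make the invoice example concrete, here’s a minimal Python sketch (my own illustration, not from any real system) of a correct total next to the kind of plausible-looking-but-wrong logic a generator could produce. The danger is that the broken version doesn’t crash; it just quietly returns nonsense.

```python
from decimal import Decimal

# Correct business logic: an invoice total is the SUM of its line amounts.
def invoice_total(line_amounts):
    return sum(line_amounts, Decimal("0"))

# The "weirdly incorrect detail" version: multiplies instead of adding.
# It runs fine, returns a number, and is completely useless.
def broken_total(line_amounts):
    total = Decimal("1")
    for amount in line_amounts:
        total *= amount
    return total

lines = [Decimal("19.99"), Decimal("5.00"), Decimal("2.50")]
print(invoice_total(lines))  # 27.49
print(broken_total(lines))   # 249.875 - no error raised, just wrong
```

Unlike a slightly-off face in a GAN image, there is no "close enough" here: business logic has to be exactly right.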


I completely agree with you on this. At the very least, a large number of applications start from a foundation where half the codebase needs similar things.

I do think it’s a bit more complex than that, but I get your point. The system’s capacity to understand what you ask in natural language and translate it into something tangible remains impressive: it’s not an actual generator, but rather an “understanding” (big quotes here, Skynet isn’t there yet :stuck_out_tongue_winking_eye:) of the intent and meaning of what you’re asking, drawn from what the system can find in its “brain” (so the internet, basically).

Won’t it, at some point, be able to “understand” the principles of what you’re asking it to do with the information available, and avoid those common pitfalls? And with that ability, wouldn’t ever more complex problems become easier to deal with?

That’s something I’m always wondering. Every time I see something come up about some new AI, it almost always seems oriented toward the same stuff that, in your particular example, could be compared to what a DSLR (a camera) does when it’s tracking faces. Would you say then that, while improvements are being made, it will mostly stay within specific areas of expertise, while others will remain (forever?) out of reach?


That’s a precondition for Artificial General Intelligence, i.e. Skynet. Ain’t happening anytime soon.

I’d bet on any ML-based solution being able to recognize enough patterns to generate most of the boilerplate at the start of a project and maybe, very very maybe, to add / remove generic enough features as well.

Agreed. Also, this article from Daniel Davis (referencing the previous version, GPT-2, but reposted in the context of GPT-3) considers the concept of what he calls “generative design” a white whale: basically something unobtainable, forever close to being achieved but never really there.

After watching quite a few so-called amazing examples (which the article actually discusses), I do feel like it’s another tech demo that won’t lead to much, if anything, relevant.

For reference, here is one of the examples everybody is talking about. FYI, the guy who posted it is actually working on an app called debuild, trying to tackle this idea of automated creation.


This is a good read about how GPT-3 represents numbers, which TBH makes it amazing that it can do arithmetic with any skill:
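To give a flavor of the representation problem: GPT-3-style models see text through a subword (BPE) tokenizer, so a number isn’t a sequence of digits but an arbitrary-looking sequence of learned chunks. Here’s a toy greedy longest-match tokenizer over a made-up vocabulary (NOT GPT-3’s real vocabulary or algorithm) that shows the kind of inconsistent splitting involved:

```python
# Toy illustration of subword tokenization of numbers. The vocabulary
# is invented; the point is only that similar numbers can split into
# very different token sequences, which makes digit-wise arithmetic hard.
VOCAB = {"2000", "200", "100", "19", "20", "0", "1", "2", "9"}

def tokenize(s):
    tokens = []
    i = 0
    while i < len(s):
        # Greedy longest match: try the longest remaining substring first.
        for j in range(len(s), i, -1):
            if s[i:j] in VOCAB:
                tokens.append(s[i:j])
                i = j
                break
        else:
            tokens.append(s[i])  # unknown character: fall back to one char
            i += 1
    return tokens

print(tokenize("2000"))  # ['2000']      - a single token
print(tokenize("2001"))  # ['200', '1']  - a neighbor splits completely differently
print(tokenize("1999"))  # ['19', '9', '9']
```

If “2000” is one symbol and “2001” is two unrelated symbols, the model never gets a clean place-value view of numbers, which is why its arithmetic ability is surprising at all.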