What are your feelings on AI in general?

With AI being a hot topic in the mainstream right now, and with our industry at its helm (making us the people who might be able to do something about it, or shape it), this section is being expanded (on a trial basis) to include all general AI discussions. We might create a dedicated section at some point, depending on how these threads go. With that said…


What are your feelings on AI in general?

What are your thoughts on AI right now? Not specifically in terms of what it might mean to you as a software developer, or for the software industry, but what kind of impact do you think it will have on society, the planet, our species even? Do you think it will have an overall positive impact? Or do you think we should be worried?

To help kickstart the conversation here’s a clip from Geoffrey Hinton (one of the godfathers of AI) who himself is very concerned…

But what do you think?


Please note:

  • There are no right or wrong answers here - since nobody knows for sure how things will pan out everyone’s opinion is valid.
  • If you disagree with an opinion feel free to debate or challenge it - but please do so tactfully and in good faith.
1 Like

My Mixed Feelings About AI

I find myself torn when it comes to artificial intelligence. On one hand, I’m genuinely amazed by its possibilities; on the other, I’m deeply concerned about its implications. I’ll admit I’m perhaps more fascinated than worried, which sometimes makes me wonder if I should be more cautious.

Why I’m Concerned

My worries stem largely from the perspective of thinkers like Aurélien Barrau, a French astrophysicist, poet, and environmental activist whose work I deeply respect. He argues that without a fundamental shift in our philosophy of living, our current climate and ecological crisis will only worsen. As long as we prioritize capitalist growth over meaningful human relationships and planetary health, technology—including AI—becomes part of the problem rather than the solution.

Barrau identifies several key concerns that resonate with me:

Geopolitical and military risks: AI could encourage us to delegate critical decisions to machines, especially when speed seems more important than wisdom.

Techno-solutionism: The dangerous belief that technology alone can solve all our problems, when what we really need is to fundamentally rethink our values and way of life.

Social alienation: The potential for AI to isolate us further from genuine human connection and community.

As Barrau puts it: “We’ve built a system where prioritizing life over money appears extreme.” His perspective really captures something profound about our current predicament.

Why I’m Still Fascinated

Despite these concerns, I can’t help but be impressed by AI’s rapid progress. Critics who claimed certain capabilities were impossible have been proven wrong time and again.

I’ve been experimenting with AI as a creative assistant in several areas:

Music: Tools like Suno AI let me generate songs that, while perhaps not professional-quality, are genuinely impressive for someone without formal musical training. I’ll even confess I’ve created a few songs glorifying myself—guilty as charged!

Writing: I’m working on several fantasy and fiction projects that involve extensive worldbuilding—creating detailed magic systems, languages, currencies, characters, and entire civilizations. This process used to take forever, but AI assistance has dramatically accelerated my workflow while still letting me maintain creative control.

Programming: AI excels at repetitive, pattern-based tasks. For instance, it’s revolutionized how I handle database seeding—no more lorem ipsum! I can describe a data model and get realistic test data generated automatically. It’s also made documentation much easier for someone whose native language isn’t English, and I use it to create technical cheat sheets for tools I use occasionally but can’t memorize.
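To make the seeding idea concrete, here's a minimal sketch of the workflow described above: describe the data model, ask a model for JSON rows, and validate whatever comes back before inserting it. The schema, the prompt wording, and the `validate_rows` helper are my own illustration (with a canned reply standing in for the model), not any particular tool's API.

```python
import json

# Hypothetical data model description: column name -> Python type.
SCHEMA = {"name": str, "email": str, "age": int}

def seed_prompt(schema, n):
    """Build a prompt asking an LLM for n realistic seed rows as JSON."""
    cols = ", ".join(f"{k} ({t.__name__})" for k, t in schema.items())
    return (f"Generate {n} realistic test records as a JSON array of objects "
            f"with exactly these fields: {cols}. Return only JSON.")

def validate_rows(raw_json, schema):
    """Parse the model's reply and keep only rows that match the schema."""
    rows = json.loads(raw_json)
    return [r for r in rows
            if set(r) == set(schema)
            and all(isinstance(r[k], t) for k, t in schema.items())]

# In practice raw_json would come from the model; here, a canned reply:
reply = '[{"name": "Ada", "email": "ada@example.com", "age": 36}]'
print(validate_rows(reply, SCHEMA))
```

Validating before inserting matters because models occasionally return rows with missing or wrongly-typed fields; filtering them out keeps the seed script from crashing mid-run.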

Finding Balance

I don’t want AI to do everything for me—I see myself more as a conductor, guiding and infusing soul into the building blocks that AI helps generate. My hope, perhaps utopian, is that AI can truly serve all of humanity while addressing the ecological concerns that make its development sustainable.

The challenge isn’t the technology itself, but ensuring we develop and deploy it in ways that enhance rather than diminish our humanity and our planet’s health.

10 Likes

I’m genuinely worried about the hype. I was in a conversation yesterday with a few people, some who work for small or medium-sized companies and some who work for huge ones, and they all say the same thing: management comes along and says “we want to implement GenAI/LLM/buzzword”. No one knows what that means or what it looks like, but no one pushes back. Instead, they know they’ll be able to pad their resumes with “worked on GenAI”, so they take on work of dubious business value and produce worse products and features that we all end up living with. And so the cycle of software getting worse, and that being normalized, continues. Eventually the bubble will pop and people will lose their jobs, but of course, no one from management.

11 Likes

As far as using it as a developer goes, I do not have a very positive view of AI. I use the fancy auto-complete to deal with boilerplate, but otherwise I never do anything like “write a function that does X”, because thinking through those problems and designing code are my favourite parts of programming. I don’t care that it’s slower; in my view, tools that let fewer people ship more bloated software at a faster rate aren’t a good thing. Speeding up your side project so you can have more time with your family is a much different thing, of course, but that raises the question of whether the time saved will actually be used for something, or just spent on more time at the computer (remember when computers were supposed to make us work less?).

If it gets to the point where I can’t get a job because I refuse to “vibe code” or offload everything to an agent then I’ll very likely be leaving the industry.

9 Likes

Caught up with a friend who started grad school after a decade in engineering. In group projects he was paired with folks fresh out of undergrad, and they didn’t know how to do anything without the AI.

Think every git command, every small tweak, everything. They were writing a video decoder in Python (their choice). It was processing a 300 MB file. Their solution used 8 GB of swap memory.

He spent the entire project time refactoring their slop.

So, I’m pretty worried.

5 Likes

It’s a hot topic, and I tip my hat to some of the nuanced thoughts and replies.

AI can’t be ignored… it’s a powerhouse. The ability to describe a solution and have it magically come up with something that works is amazing. Yes, sometimes the code is slop, but this saves countless hours. The era of highly polished “craft coding” may be over.

However… if we look a bit deeper into this, the tradeoffs that it implies start coming more into focus and it might not be all roses. AI is obliterating many jobs – I won’t waste words listing them, but it’s a lot. Junior developers are getting displaced ferociously, and senior devs may not be that far behind.

What really gets me thinking is when I consider my own livelihood as representative of the many people whose jobs are being taken over. Where do they (we) go? Are we really expected to pivot careers? That’s challenging in the best of circumstances, but with the short timelines these changes impose, it almost becomes an extinction-level event for huge swaths of the working class. Some jobs may be created as others are destroyed, but I suspect more are being killed than created. Whatever the case, I don’t think you can wave your hands at this.

And if huge chunks of the working class become unemployed, our finances will circle more closely around food and housing budgets. With higher levels of economic uncertainty come higher levels of political instability, because people get scared and are more likely to believe the stories of an opportunistic demagogue or authoritarian.

AI clearly exacerbates some problematic human trends, and whether it offers solutions or improvements to the problems facing humanity seems much less clear. It seems likely that AI will be a direct or indirect cause of greater instability and economic disparity.

Stay tuned!

6 Likes

Personally, I’m trying to steer clear of predictions about the future. If you trust the hype, AI looks like it will take over everything, but right now we don’t know its true costs, for a few reasons:

  • All the services providing AI tools to us are heavily juiced by venture capital, or get free Azure credits to run their stuff because Microsoft is involved.
  • We’re already seeing effects like search results being vastly better when scoped to “before AI”, but in general I don’t think we can yet quantify the negative impact of the quantity and quality of AI-generated content.
  • Leaky abstractions. There’s certainly a case to be made that “craft code” might be over, but I’d argue that has been true since long before AI. Look at the bazillion WordPress websites out there, cobbled together until they barely work, and somehow someone was happy with that. That approach works until you need reliability, reproducibility, and potentially become liable by law to meet a certain standard. AI, like a junior dev, can provide input into certain decisions, but since it’s a machine it cannot be the decision maker. Just as you wouldn’t want a junior dev deciding on, say, your clustering mechanism. Making good decisions will always require expertise, or things will eventually go wrong. Leaving too much to AI will eventually hurt.
8 Likes

Whether you’re pro or con, just learn it. Get proficient in the use of local, open-source LLMs.

1 Like

Do you have a source for this claim?

2 Likes

In a perfect world it would be that easy, but in ours it’s more like being at least a few years behind everyone else. :slightly_frowning_face:

As I wrote, Big Tech companies have used their money and all their power to popularise their own products and subscriptions. Now everything is based on subscriptions. It’s not like the good old days when you could simply buy a movie.

Now, at most, you can usually go to a cinema or pay for at least a few subscriptions. Given these companies’ practices, the same is going to happen with us. I’d be crazy to see the same patterns and expect different results.

If people would at least look at the patterns, or follow old advice like “follow the money”, they would see what’s coming next, and maybe we could do something about it. But so far things are most likely not going to change, because going along with the mass is “easier”. And if you’re not with the mass, you’re against the mass, which is a straight path to becoming an outsider, and nobody cares about an outsider’s feelings.

:+1:

If someone really needs an example, look at American movies and series. Someone is always out on the long road, and when the car stops working they grab their tools and fix the issue themselves.

Should we conclude there’s no point investing in any car repair company, without understanding the context? So what’s the context? On the long road you often have no mobile connection.

So you have to patch your car up just enough to make it to the nearest city, where someone will fix the issue properly. Knowing that, are you still scared? Of course not! Now think carefully: who has an interest in scaring you?


Or do you think it’s just a “coincidence” that so many “randomly” rebranded, on their own, to something that, by coincidence, destroys humanity in the movies? When was the last time you named a cup of coffee after an alien’s juice?

Now imagine millions of people scared of a cup of coffee because it could be alien juice and you can’t prove it’s not. How do you prove the first coffee seeds naturally originated on Earth? You can’t? That’s proof! Aren’t you afraid? :joy:

1 Like

You could buy it, but you would never own it. You bought the ability to play it (privately) on your device, maybe a friend’s device, until the tape broke.
So I’m not sure what you mean by “buying a movie”.

Anyhow, back to the topic.
For me it’s like those little plastic bricks: I love building anything with them. I start by putting two bricks together, and then it works itself out toward a solution.
AI is more like “I’ve got this problem, generate me the building instructions”, which takes the whole fun out of it.

Like other people said, I lose the tinkering, the exploring, and the seeing of different solutions (in my mind, and for real).

Which brings me to my point: I will only start a new project if I can see it in my mind, and AI takes that vision from me.

True, boilerplate gets less boring with it (coding, writing corporate emails); AI will help you stay afloat, but it will never show you the beautiful underwater world.

The big question for me is more like: if a repressive government uses this technology against its citizens (or the citizens of another state), is that morally worse than it using other technology?

2 Likes

That depends on the legal system, I guess? Of course you own it, but you own only a copy, not the rights to the original. If you owned it fully, you would have the right to release a sequel, for example. Technically you can make as many backups as you like, as long as only you can access them and you can prove you bought it. Not everywhere lets you invite a friend, but that rule is terrible, and in fact it gives the authors less income.

I also get my fun from coding, as you wrote, and really can’t imagine not thinking about alternative solutions just to “make things faster”. Faster != better; that’s one of many old rules that existed long before the movies about AI. Unfortunately, like many other old rules, it’s going to be forgotten soon, and everyone is expected to accept that blindly.

1 Like

Everyone on this forum can buy an RTX 3060 and run Nx and Ollama with a 4B open-source model. You can actually do useful tasks with low-end gear and small models, you’ll learn a lot, and you’ll set yourself up for the future availability of cheap, powerful hardware and local/distributed intelligence. The most interesting AI developments are at the high end (subscription models) and the low end (local inference).
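As a concrete starting point, here's a minimal sketch of talking to a local Ollama server over its HTTP API. The endpoint and JSON shape follow Ollama's documented `/api/generate` route; the model name is a placeholder for whatever small model you've pulled, and the call is wrapped in a try/except so nothing breaks if the server isn't running.

```python
import json
import urllib.request

def build_payload(prompt, model="qwen2.5:3b"):
    """Non-streaming generate request body; the model name is just an
    example, substitute whatever you've pulled with `ollama pull`."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, host="http://localhost:11434"):
    """POST the request to a local Ollama server and return its reply text."""
    data = json.dumps(build_payload(prompt)).encode()
    req = urllib.request.Request(f"{host}/api/generate", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    try:
        print(generate("Say hello in five words."))
    except OSError:
        # URLError is an OSError subclass; server probably isn't up.
        print("Ollama doesn't seem to be running on localhost:11434.")
```

Since the whole thing runs against localhost with no API key, it's a cheap way to get a feel for what a 4B model can and can't do before committing to any subscription.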

5 Likes

There have been a few stories about it on DT, but here’s a vid that covers a bit of it:


Haven’t had a chance to go through it yet, but there’s been lots of talk about the 2027 AI forecast:

The CEOs of OpenAI, Google DeepMind, and Anthropic have all predicted that AGI will arrive within the next 5 years. Sam Altman has said OpenAI is setting its sights on “superintelligence in the true sense of the word” and the “glorious future.”

What might that look like? We wrote AI 2027 to answer that question. Claims about the future are often frustratingly vague, so we tried to be as concrete and quantitative as possible, even though this means depicting one of many possible futures.

We wrote two endings: a “slowdown” and a “race” ending. However, AI 2027 is not a recommendation or exhortation. Our goal is predictive accuracy.

We encourage you to debate and counter this scenario. We hope to spark a broad conversation about where we’re headed and how to steer toward positive futures. We’re planning to give out thousands in prizes to the best alternative scenarios.

1 Like

Of course they predict that; we’re in the next round of the 1999 IT boom. The hype right now is unreal, but it’s still a game changer for how we work and for what we focus on learning to keep up.

2 Likes

The best “AI” – well, LLMs, let’s call them by their real name – right now are behind corporate subscriptions.

That tells me everything I need to know about “the future of AI”: it will be used to extract rent from everyone who can do meaningful knowledge work, because they will be afraid that their colleagues will be faster and better than them if they don’t use the LLM. A classic race to the bottom, a rat race. Chaos theory and game theory tell us, and history has proved with examples, that this spirals down at an accelerating rate until the system crashes and burns.

I for one am not impressed by the normalization of this extreme rent-seeking.

And if artificial super-intelligence does emerge, I’ll panic. Severely so. These agents will serve the worst of humanity at the cost of everyone else. So many awful practices in hiring and work have been normalized in the last 10-15 years that some dystopian sci-fi authors are probably astonished at how far even their bleak novels fell short of the mark. :icon_biggrin:

And if ASI comes into existence? Multiply those terrible practices by 100. Even movies like Blade Runner, games like Cyberpunk 2077, and series like Ghost in the Shell will seem tame and optimistic. I would argue we already live in cyberpunk, except we don’t get the choice of cool and powerful body implants. We only got the worst parts of it.

I believe the field of AI has true potential, but only if it’s forcefully plucked out of the hands of the psychopathic mega-capitalist owner class first. Only if LLMs become libre do we, as a civilization, have a chance to collectively enact true positive change in the world.

Before that happens – if it ever happens – we are just very predictably moving to an extremely bleak future. The people who own stuff have proven, time and again, that they are not interested in the well-being of the rest of us.

15 Likes

Mixed feelings… The older I get, the more I realize how valuable time is and if I can offload mundane, junior level tasks to AI so I can focus on more interesting problems, why not?

I was also thinking the other day about how much of our ego enters this discussion when AI completes tasks for us that are in no way historically important and will likely be deleted within the next 5-10 years (if we’re lucky). AI erodes the knowledge advantage we previously held over others with regard to technology, and I wonder if any of us feel threatened by that at all.

I do like the power it can give me to get unstuck, though. I have been working on a personal project that I had been making zero headway on. The other day I threw an LLM at it, and it came back with some code that at least got me moving again. So maybe we will head more into the role of producers rather than developers. I kinda like that idea; I have always been someone who gets more excited by the idea and the concept, and I think AI will give us the power to create more of the projects that might otherwise have sat idle or never seen the light of day.

I also use it for maintenance tickets. It’s nice to be able to throw the LLM at a problem (with a decent prompt and project context) and at least get put in the ballpark of where the root cause lies; it eases some of the legwork of replicating issues.

I think as long as you understand what it is creating (and can guide it to better suited solutions), then there shouldn’t be a problem.

7 Likes

Quite conflicted. I use it all day for programming. It’s revolutionary. Groundbreaking. A magnificent invention. True. But it doesn’t have a soul, and therefore I think it should stay away from songs, emails, prayers and such: anywhere a human is reaching out. When I find it there, I kind of hate it.
But man, the way it codes!

2 Likes

Oh man, this is a big one for me that I don’t say out loud much. I know there are OSS tools you can run locally, but I don’t like the idea of anyone paying a subscription to work on hobby projects. Paying for hosting and all that has always made sense, but this feels so off.

Though really when it comes down to it, it’s more about not wanting to give these companies my, or even my employer’s, money.

4 Likes

And my data. I’m not sure what’s more valuable at the moment.

4 Likes