Dunno about you, but I’m (still) on Linux, and for what it’s worth I believe value for money always wins in the end. If your fears were truly substantiated we’d all be running Windows now (back in the day, “when I was young”, it was the only “serious” “OS” backed by “serious” and large enough companies, a.k.a. overlords).
It’s the type of project that matters here, not merely its size. If your revenue depends on direct exposure to eyeballs then you’re gonna get hurt by LLMs. It’s as simple as that.
Fortunately, we live in (semi-)capitalism and the capitalist part of it (the entrepreneurial part) will not let that happen.
It’s the same sort of thing in that it’s decentralized social media, yes. It is not the same sort of thing in that unlike every other attempt at decentralized socials I’m aware of, atproto stands as an architecturally viable alternative to centralized platforms.
There are many users on this forum who post on Bluesky (including me!), but you’re right that there has not been much discussion of the underlying protocol (atproto) on here. I expect that to change, and I hope I can contribute to that change. Elixir and the BEAM are a very good fit for building atproto infra.
There is one serious atproto project built with Gleam, but nothing serious I am aware of built with Elixir. It’s mostly Go (our nemesis! lol).
That link above is to Tangled, which is a git forge built on atproto. I actually think Tangled is much more promising than Bluesky, and I intend to start hosting projects there soon.
There is a path to victory here. It is possible for us to defeat user-hostile, centralized social media. And Elixir can play a meaningful role in that victory.
I think it’s important to note that Tailwind is surviving just fine. They are making many millions of dollars off of sponsorships, a number which has only gone up since that debacle.
What is not fine is their “pre-built components” business, and I don’t think it’s hard to see why. Probably not a great business to be in right now.
How enforceable is this? The fact that the protocol supports it in the spec does not mean much. Let us not forget the browser wars, the absolute sh1t-show that HTML and CSS compatibility is to this day, not to mention the 17 ways browser, proxy and CDN vendors choose to interpret various HTTP headers, leading to a maintenance nightmare that is still going on, 24/7. Ask Cloudflare, Varnish, BunnyCDN and likely thousands of others.
In my world I’ll have a bot that checks you for 100% compatibility a few times a month. Don’t comply with one edge case and I am deny-listing you for a month, even if you start complying one hour later.
Lived long enough to see that humans don’t respond well to anything else.
Because make no mistake, the corpos will try to selectively not comply with just that one tiny little detail of the spec.
And that’s not even touching on the inevitable fact that the corpos will make their own ATProto servers and will try to lure normal people in with promises of stability and various temporary value-adds their marketing team will pull out of thin air. And then, well, we’ve seen “Embrace, Extend, Extinguish” play out multiple times already, haven’t we?
ATProto might be the best thing since the invention of the wheel, but if you cannot strong-arm everyone into being a good citizen then its adoption is doomed to fail like every other technically sound idea.
No clue what the way out is. In my life the only political system I’ve ever seen work is “the benevolent dictator”. Everything else falls apart. Well, that one falls apart as well, just slower.
They took the ability to move accounts between providers as a fundamental design constraint. The network has a well-defined data model. Each user has a repository, which is a signed Merkle Search Tree (MST). The MST is a prolly-tree variant; think of it as a B-tree with a canonical shape (the same data always produces the same tree), so it’s reasonably quick to look up keys and very quick to re-compute the root hash after a small number of changes.
You can think of this like a git repo, with the most salient difference being that the MST supports deletions more elegantly than git’s model. Append-only doesn’t really work as well for the general case as it does with source code.
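To make the “cheap to re-compute” property concrete, here is a toy sketch in Elixir. This is not the real MST node layout (the actual tree is keyed and has hash-determined depth); it’s just a two-level hash tree showing why identical data yields an identical root and why a single change only forces a couple of re-hashes:

```elixir
# Toy illustration only -- not the real atproto MST format.
# Records live in buckets; each bucket has a hash, and the root hash
# covers the bucket hashes. Changing one record re-hashes just its
# bucket plus the root, which is why small diffs are cheap to verify.
defmodule ToyMerkle do
  @bucket_count 16

  def put(tree, key, value) do
    Map.update(tree, bucket_of(key), %{key => value}, &Map.put(&1, key, value))
  end

  def root_hash(tree) do
    0..(@bucket_count - 1)
    |> Enum.map(fn i -> bucket_hash(Map.get(tree, i, %{})) end)
    |> Enum.join()
    |> sha256()
  end

  defp bucket_hash(bucket) do
    bucket
    |> Enum.sort()
    |> Enum.map_join("\n", fn {k, v} -> "#{k}=#{v}" end)
    |> sha256()
  end

  defp bucket_of(key), do: :erlang.phash2(key, @bucket_count)
  defp sha256(bin), do: :crypto.hash(:sha256, bin) |> Base.encode16(case: :lower)
end

# Two copies holding the same records always compute the same root hash,
# so syncing a repo can start from a single hash comparison.
tree =
  %{}
  |> ToyMerkle.put("app.bsky.graph.follow/3abc", "did:plc:bob")
  |> ToyMerkle.put("app.bsky.feed.post/3def", "hello world")

IO.puts(ToyMerkle.root_hash(tree))
```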
The repo is “owned” by a user, and contains data for that user specifically. For example, your repo would contain your following list for Bluesky. It would not, however, contain your followers list, as that is implicitly stored by everyone else’s repos instead.
What this implies is that some social functionality requires a full view of the network, and atproto embraces this. The repo format I mentioned is designed to be diffed and transferred (again like a git repo) and the idea is that every major provider will have a copy of every single repo. They can then use this to materialize a view of things like “what is every user that follows this user”. Note that there are still things that do not require a full view of the network, like listing a particular user’s posts.
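A toy sketch of the difference, with invented record shapes (not the real lexicon types): listing who someone follows only needs their own repo, while listing their followers needs a view over every repo in the network:

```elixir
# Toy illustration: repos is a map of did => list of DIDs that user follows,
# i.e. the data each user actually stores in their own repository.
defmodule ToyIndex do
  # "Who does Alice follow?" -- answered from Alice's repo alone.
  def follows(repos, did), do: Map.get(repos, did, [])

  # "Who follows Bob?" -- the information lives in everyone *else's*
  # repos, so a provider needs a copy of all of them to answer it.
  def followers(repos, did) do
    for {follower, followed} <- repos, did in followed, do: follower
  end
end

repos = %{
  "did:plc:alice" => ["did:plc:bob"],
  "did:plc:carol" => ["did:plc:bob", "did:plc:alice"]
}

ToyIndex.follows(repos, "did:plc:alice")
#=> ["did:plc:bob"]
ToyIndex.followers(repos, "did:plc:bob")
#=> ["did:plc:alice", "did:plc:carol"]
```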
But that brings us to the answer to your question: because every major provider already has a copy of a given user’s repo, there is not actually much of anything to “move” between providers. Instead your data is implicitly backed up by the network, and the “backups”, which are really live copies, are incrementally kept up-to-date. A user could, obviously, also keep their own backup of their repo if they wanted. Or literally anyone else could.
All you need is a DNS-style system to keep track of the authoritative copy (and signing keys), and for that they have DIDs. There are two supported methods: did:web, which piggy-backs off of the web, and a newer one, did:plc, which is currently operated centrally (but fully auditable). The goal with the latter is to move to a small consortium to avoid excessive centralization.
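Roughly, in Elixir (a sketch assuming the Req HTTP client; the module and function names are mine, but the endpoints are the documented resolution paths as far as I know):

```elixir
# Sketch of identity resolution: handle -> DID, then DID -> DID document
# (which holds the signing key and the address of the authoritative server).
defmodule ToyDidResolver do
  # Handle resolution also has a DNS variant (a TXT record at
  # _atproto.<handle>); this shows only the HTTPS well-known variant.
  def resolve_handle(handle) do
    Req.get!("https://#{handle}/.well-known/atproto-did").body |> String.trim()
  end

  # did:plc documents are served by the (currently centralized) PLC directory.
  def resolve_did("did:plc:" <> _ = did) do
    Req.get!("https://plc.directory/#{did}").body
  end

  # did:web piggy-backs on plain HTTPS and a well-known path.
  def resolve_did("did:web:" <> domain) do
    Req.get!("https://#{domain}/.well-known/did.json").body
  end
end

# ToyDidResolver.resolve_handle("example.com") |> ToyDidResolver.resolve_did()
```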
The view of the atproto community, which I share, is that this is not a problem as long as there are multiple companies doing this. This largely follows the model of the web, which is the most successful decentralized project of all time. I don’t care what providers you use as long as I can choose or operate my own.
The problem with closed social media, as it exists today, is that there is only one provider and they can disappear at any time, taking the network with them. It’s a very bad situation. Even a handful of major providers would make a world of difference, as they would have to compete with each other, and others would survive if one disappears.
The only question is whether the network can successfully decentralize beyond its current main provider (Bluesky). And that responsibility rests on us. I think Elixir can play a part in that, because I think we have the best language and runtime in the world for building multitenant services. It’s a perfect match. So I hope we can get some Elixir people excited about atproto.
Again, words are important but they are not enough. I will put my code where my mouth is shortly.
That’s not even the worst part of it. The much, much worse thing they do is manipulate things on their backend while keeping their clients thin, i.e. the clients just display whatever the backend tells them. No double-checking against a cache, no quorum with other nodes, nothing.
How many posts have been manipulated for various normal human reasons (money, politics, propaganda) is beyond count at this point.
If ATProto can solve this – sounds like it can – then it could make for a more democratic network. Though maybe it’s vulnerable to the “50% plus one” problem like the blockchains?
I have wanted to read up on it for months but I still don’t have enough time or energy for it, sadly. How much data can it store? Is it designed for a few thousand short text blurbs, i.e. a “we have Twitter at home” thing? Or is it a much better Git and a database at the same time? How much does it have in common with FoundationDB in terms of the design of the distributed bits?
I can allocate the time to read some intro material but I am fearful I’ll only stumble upon hype merchants and “influencers” and will waste my time and not learn anything. And don’t send me science papers, please.
Let’s start at $10k a month, net, for two years at least. I always wanted to work on distributed networking but I have the small problem of having to work.
Jokes aside, one interesting question would then be: why Elixir? Why not, e.g., Rust or Zig?
Like GitHub vs GitLab vs Bitbucket? I agree it is better than a total monopoly, but it’s still far from democratic. In fact, I think a “democratic network” is an oxymoron; anything with a strong network effect will not be democratic. Consolidation is inevitable. We had a democratic and federated network before; it’s called email. Look what it has become today.
I’ll agree that 2025 has been the year during which LLMs improved more than many people expected – myself included – but a more general intelligence still seems to be out of reach.
I’ve been blown away by what truly looks like reasoning ability, but I urge anyone who thinks we’re close to a general AI to chat for longer with Gemini Pro; they’ll get a very harsh reality check. At 700k tokens it starts getting confused, repeating previously applied code, suggesting things that were already done, and mistaking your requests for something else entirely.
I, like every other living human, am pretty bad at predicting the future. But the current context limit / tokens model seems to have peaked. They’ll either have to find a way to make each token cheaper for themselves or bill us based on another metric.
What’s funny is atproto agrees with you. By design it essentially admits there is no way to build a social network that is completely decentralized (where every node is an equal peer). Instead, the design somewhat follows that of the web, or email, in that some amount of centralization is allowed (even encouraged!) but not total, outright monopoly.
And while the web and email may not be perfect, they are practically utopian compared to the situation with closed social media. If atproto wins, which I think it can, it will be a monumental improvement. Perfect should not be the enemy of good.
The general consensus that I’ve seen is that models did not really improve in 2025, for the first time in years.
What improved were the coding models, and this seems to be driven largely by the top three labs choosing that as their battleground since they couldn’t eke any more performance out of this generation of base models. It seems like all of the gains came from doing RL on problems that have known “solution states” (i.e. “does this code compile/pass tests”). Amusingly, this was also the final nail in the coffin for the pervasive “stochastic parrot” theory (which was nonsense anyway). These models are not token predictors, they are “make the tests pass” predictors.
It’s hard to predict what will happen this year because it’s not clear if they’ve picked all of the low-hanging fruit. My guess is there is more work to be done with adversarial techniques: having another model “grade” the correctness of the codebase and so on. After that maybe they’ll hit the wall until enough training capacity comes online for a new generation of base models (and at that point it honestly feels like they are going to run out of GPUs lol).
I’m betting the models will not be getting any better at producing “polished” or “quality” UX in the near future, because I am having a hard time imagining a way to RL such a thing. It’s a very human skill, and even most humans are bad at it.
What training data will there be, even if the capacity of the models increases? And I am talking more about quality than quantity (though the amount matters too).
Can LLMs ‘learn’ well enough just by analysing prompts?
I admit that I lack serious knowledge here and am arguing from gut feeling.
Well first off, I define vibe-coding as the copy-pasting of code and errors back and forth between the LLM and the editor.
Pre-LLM, new stacks were able to gain traction thanks to the community effort that would help newcomers or share more elaborate solutions to problems. With vibe-coding, this community effect is near zero, since everyone keeps the conversations to themselves and their bots.
In order for the LLM to be as proficient with the new stack as with the old one, which is what vibe-coders expect, a lot of code would need to be available to the LLM. Given the closed nature of vibe-coding, it is a gargantuan effort for the (almost always small) group of people driving the new stack forward to create the amount of training data needed for the LLM to reach the same proficiency it has with the old stack.
Furthermore, because the LLMs are trained on unsupervised, self-labelled data, it becomes increasingly less probable that a new stack will even be mentioned in answers to questions about architecture. The training step of an LLM will ask for a prediction like “the best web framework is …” and then a list of good-enough frameworks is given. If the new one is not in that list, it won’t be learned by the LLM. This effectively means that the companies training the LLMs are becoming gatekeepers.
The coding agents need to keep the vibe-coders (the paying customers) happy, and they want an LLM that is proficient. It does not make economic sense to train on an obscure stack, even more so because it might have the added downside of dragging down performance on the already-trained languages.
The contradiction I pointed out was not about LLMs but about Stack Overflow. As a postmortem analysis, did Stack Overflow help spread non-mainstream languages and tech stacks? You have made cases both ways and they are both plausible. However, it is not clear what your overall opinion is. If I may, you seem to imply:
the golden age for new tech stacks was pre-Stack Overflow
Stack Overflow made it worse (strong network-effect bias toward mainstream stacks)
LLMs made it worse still (lack of quality training data makes vibe-coding on a new stack non-viable)
It’s not entirely clear to me that they need more training data. There is a lot of data on the internet, more than enough IMO. They do need to keep the world knowledge up-to-date, which is what necessitates all the crawling. But that’s not about making the models “smarter”.
With that said, over the past 6 months Anthropic has made training on user data opt-out, made it confusing to opt out, and then banned third-party clients. So make of that what you will!
If you’ve used Claude Code over the last few months you might have noticed the UX changes. First the idea was that you were in control of the agent, with manual approval as the default choice. Then it subtly changed to present “accept everything” as a default choice. Then “No, tell Claude what it needs to do” became just “No” (but you can tab to add details). The UX is moving towards self-governance by the agent.
The bleeding-edge people are pushing towards swarms of self-organized agents working autonomously based on specs and reward functions (continuous integration and testing). The idea being sold is no longer one person augmented with an AI partner but one person supervising tens, then hundreds of agents (less spending on developers, more on tokens). Of course, one person cannot do this and remain mentally stable. Or can some? So the idea will be to create “optimized” hierarchical structures of agents, so the supervisor only gets reports synthesized by multiple layers of agents constrained by deterministic assertions (tests). YC startups will probably soon sell this, or maybe it already exists.
I guess someone on this forum could even have a chance to make money with an agent-coordination startup in Elixir. Write “stores” in JavaScript that are a state plus its transition functions, à la Pinia, push them via some CLI to an Elixir platform that calls this a “toolset”. Agents join channels and work together on the same state, the platform auto-exposes the transition functions as tools in OpenAI/Anthropic format via MCP, and leverages the latest embeddable JS engine (mquickjs). Boom: you just colocated state and tools in a common format (JS) with safe server-side execution guarded by processes (Elixir + mquickjs) and created a coordination platform for distributed work by agents.
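The store part could start as something as dumb as this sketch (every name here is invented, and the Channels/MCP/JS-sandbox layers are left out; it only shows the “one process owns the state, agents apply named transitions” core):

```elixir
# Toy shared store: one process owns the state, every agent goes through
# the same serialized transitions, so concurrent workers can't clobber
# each other. A real version would expose the transitions as MCP tools
# and run JS transition bodies in a sandboxed engine.
defmodule ToyStore do
  use Agent

  def start_link(initial_state, transitions) do
    Agent.start_link(
      fn -> %{state: initial_state, transitions: transitions} end,
      name: __MODULE__
    )
  end

  def apply_transition(name, args) do
    Agent.get_and_update(__MODULE__, fn %{state: state, transitions: ts} = store ->
      case Map.fetch(ts, name) do
        {:ok, fun} ->
          new_state = fun.(state, args)
          {{:ok, new_state}, %{store | state: new_state}}

        :error ->
          {{:error, :unknown_transition}, store}
      end
    end)
  end
end

# Usage: a shared todo list with a single transition, used by many agents.
{:ok, _} =
  ToyStore.start_link(%{todos: []}, %{
    "add_todo" => fn state, %{"title" => title} ->
      update_in(state.todos, &[title | &1])
    end
  })

ToyStore.apply_transition("add_todo", %{"title" => "write the MCP adapter"})
#=> {:ok, %{todos: ["write the MCP adapter"]}}
```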
Will pre-2023 years of experience become a new hiring metric?