Is Elixir the Hundred Year Language?

In Paul Graham’s 2003 essay/talk, The Hundred Year Language, he makes various points about what we’ll likely need from a language, what we’ll want in one, and what kinds of resources will be available to programmers in a hundred years.

He’s a big Lisp fan. I think Elixir took some of the best parts of Lisp, such as macros and functional programming in general.

Of all the things Graham muses about, I think Elixir is one of the better candidates for the hundred year language. If not Elixir itself, I would think Elixir is at least in its lineage. Thoughts?


I think a new language consisting of Elixir + Types could be it.

Or perhaps Rust + OTP


Exactly what I was going to say! Thanks. :slight_smile:


Paul Graham doesn’t seem to agree, as he wrote:

For example, types seem to be an inexhaustible source of research papers, despite the fact that static typing seems to preclude true macros-- without which, in my opinion, no language is worth using.

He continues with:

The trend is not merely toward languages being developed as open-source projects rather than “research”, but toward languages being designed by the application programmers who need to use them, rather than by compiler writers. This seems a good trend and I expect it to continue.

Which fits Elixir well, among others. And indeed this approach often bears fruit!


Hi, what is a “true” macro in this context? And what would “false” macros be? Something like C macros?

If you have any examples or reading material, I’ll be happy to look at them…

The closest thing to an explanation I found is here:

Paul Graham is a proponent of having “the whole language there all the time”

As a crude analogy he wants the option of building the plane while it’s flying. I think most people are uncomfortable with this level of freedom. This is why macros are generally considered an option of last resort in Elixir unless you’re using it to build a domain specific language or something.

As a counter to this level of optimism:

Bear in mind that Paul Graham is a philosophy major who wrote a shopping cart in Lisp that was bought by Yahoo and subsequently rewritten. He struck gold, retired young and then spent years writing stuff like this.

I’m still a fan. Dare to dream otherwise nothing gets better.


I never learned Lisp and am not a macro expert, but from what I gathered, its macro system and the language itself are one and the same. Macros are not mere preprocessing tools working under different rules, taking strings in and spitting strings out for the compiler to work on later. Like regular Lisp functions, Lisp macros have access to all the power of the Lisp language and can play around with the abstract syntax tree. I think that would be the most basic requirement for him to consider a macro system “true”.
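Elixir macros work the same way, which is presumably why the thread treats it as Lisp lineage. A minimal sketch (module names here are illustrative, not from any real library):

```elixir
# `quote` turns Elixir code into its AST: an ordinary data structure
# (nested {function, metadata, args} tuples) that any Elixir function
# can inspect or rewrite.
ast = quote do: 1 + 2
IO.inspect(ast)  # a tuple of the shape {:+, metadata, [1, 2]}

defmodule MyMacros do
  # A "true" macro: it receives the quoted AST of its argument at
  # compile time and returns a new AST, with the whole language
  # available while doing so. No string preprocessing is involved.
  defmacro double(expr) do
    quote do
      unquote(expr) * 2
    end
  end
end

defmodule Demo do
  require MyMacros
  # Expands at compile time to (3 + 4) * 2
  def run, do: MyMacros.double(3 + 4)
end
```

Contrast with a C preprocessor macro, which only substitutes text and knows nothing about the language's syntax tree.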

As for static typing, I think the problem is that it prevents a program from manipulating itself at runtime, so any macro system capable of doing that won’t fit. Strong typing is possible, but not static typing.

A few interesting links:


I am not as full a fan of PG as many on HN are. He’s quite correct on many counts, but as usual, all assertions – his included – must be evaluated in context. In his case, LISP and all the LISP-like languages (of which Elixir is one) have the advantage of being able to do runtime code modification (in place), and also the much more practical compile-time code generation.

That indeed makes those languages very well suited to dealing with the real world – I have used Elixir to reshape loose and oftentimes half-invalid external data into something much more manageable that can be analysed, transformed, stored, and pulled into reports. Elixir worked amazingly well every time for these scenarios.

Once you get to that point, though, a strongly and statically typed (and natively compiled) language is IMO much better suited to working with those normalised data structures (Rust and OCaml come to mind, but I am sure many others qualify as well). The potential for error in a statically typed language is simply much smaller.

Well put! I’ve seen many on HN get offended by this criticism, and while I am sure some use it to belittle him, I believe many (like you) use it to point out that the guy suffers from survivorship bias like all of us, and that his advice, again, is valid only in certain contexts.


This isn’t my quote by the way. It’s from the link. The person saying this has a point but I still think there’s a place for people like Graham. It’s fine that he got ‘lucky’. Everyone that ends up in a similarly privileged position got lucky in some way. The fact that it worked out for him is proof positive that it works for at least some people at least some of the time.

I think the main point that he tries to get across in his evangelism of Lisp is its power to leverage the work of a small number of programmers to achieve greater productivity. That may have the added limitation of not really being scalable to a larger group of programmers.


Not “may have”, it’s “absolutely has”. :003:

I’ve heard stories about several LISP programmers. They single-handedly saved the business while it was struggling to get traction. They had a deep understanding of the business area and invented amazing domain-specific languages on top of LISP. Programs of no more than 3,000 lines of LISP code served a business that made 0.5 to 1.0 million a month through them.

Trouble is, once they burned out and left, nobody could decipher what they did. The projects had to be started from scratch.


On the opposite end there is Java, which was made to help projects survive their developers and to lower costs by making developers as replaceable as possible. Yet a big Java project full of spaghetti code would meet the same fate if its main developers happened to quit en masse!

I agree with Paul Graham that conciseness is a fundamental characteristic of a language. But I never even tried to learn Lisp because I think readability and predictability are even more important… Languages like Elixir are on the right track IMHO, making simple things easy without trying to be too smart. Being functional and encouraging the coding of functionality in independent processes helps with reusability too, which is another bonus.

Aside from this, I think the “let it crash” philosophy is better than defensive programming. Even perfect static typing will never prevent internal errors, as the hardware can fuck up too, so it’s better to acknowledge that and move on instead of dreaming about the perfect program, proven with formal proofs, that will miserably crash when it enters an “impossible” state.


Yep. That pie-in-the-sky dream of “they can all leave and the project will keep getting new features and bugfixes starting next week with the new team” almost never happens. But it’s one of the brain bugs we people (especially businessmen) have – an illusion so pleasant that we scream at whoever tries to ruin it. It’s still an illusion 99% of the time though.

Code is read much more often than it is written, so a concise language helps hugely with that.

Well, Clojure is a very practical modern LISP. But the lack of the BEAM’s runtime guarantees ruins most languages for me.

Absolutely. Additionally, for all the hate that fringier tech (like Elixir) has to endure on HN/Reddit, Elixir is still quite damn fast in many scenarios. I lately crafted a Rust tool that is 45% faster than the Elixir tool I wrote before it, calculating a histogram on files as big as 1 GB. (I am sure the Rust code can be made faster, but most people do a task and move on, and I’m rather sure many won’t spend more than a day on it, so it’s still a fair comparison.)

That very much depends on what is being done, though. “Let it crash” is completely useless if a bug in your code makes a process crash and get restarted indefinitely. And if it only gets restarted N times, then you now have a system that gave up on running something. I agree that this philosophy applies quite well in many scenarios – DB pools and DB exceptions being a prominent example – but it’s also one of the things people half-believe will solve a lot of problems by magic. Which it cannot do.
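That restart budget is explicit in OTP, for what it’s worth: a Supervisor gives up once crashes exceed a rate limit rather than restarting forever. A minimal sketch of the relevant options (`MyApp.Worker` is a hypothetical worker module, not a real one):

```elixir
# If children crash more than :max_restarts times within
# :max_seconds, the supervisor terminates itself and escalates
# the failure to its own supervisor instead of looping forever.
children = [
  {MyApp.Worker, []}
]

Supervisor.start_link(children,
  strategy: :one_for_one,
  max_restarts: 3,  # these two values are the OTP defaults
  max_seconds: 5
)
```

So the “give up after N restarts” behaviour the quote describes is the deliberate design: the failure propagates up the supervision tree rather than being silently retried.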