On "Why Elixir?"

there’s a conceit here that other languages need things like k8s, envoy, redis etc. the reality is these are tremendously useful pieces of software with significant features that add value. not for every application, sure, but elixir is not a replacement for them any more than an autobody shop is a replacement for a porsche taycan

i think there’s this misconception amongst the elixir community that elixir is superior to other languages because it doesn’t need all the support and scaffolding that they hear about in use at amazon and google and uber and netflix etc. there’s this narrative where elixir is the outcast underdog doing everything the big megacorps do but with a fraction of the code. the reality though is that no one is rejecting elixir because it’s too good or because they’d rather struggle with complicated rube goldberg machines. to a large extent elixir isn’t seeing enterprise adoption because the effort required to integrate it into the existing infrastructure at these companies isn’t outweighed by its advantages over go or python or java

part of that increased effort is the papercuts mentioned by @shanesveller. part is the outsider attitude and resistance to the idea that companies are using things like redis, kafka, k8s and docker because they add value and not because they cover up some deficiencies in go/python/java/etc. part is just that elixir isn’t so good that it’s obviously better

that’s where elixir needs to get to – obviously better for some niche – if it hopes to compete on the same level as other languages. that’s why i can easily get approval to write new services in julia or rust at work. they’re obviously better than the alternative. it’s really hard to make that case for elixir. falling back on ‘elixir is different’ is not a winning argument

4 Likes

I think you’re overstating a good bit.

Elixir doesn’t see the same adoption as Java, .NET, Go and Python because:

  1. Java app servers have long provided a tremendous amount of tooling that was not available in other languages for almost a decade. K8s and Docker give a more general-purpose version of what those have been providing since before 2000. Once a company invested in that Java infrastructure, adding any new language was not a simple decision, because it wasn’t just a matter of using language X here; language X also lacked the deployment tooling, logging, variable abstraction, clustering, zero-downtime deployments, etc. When the JVM opened up to other languages and the development experience improved, it became even more entrenched, because now you could run lots of other languages on that same infrastructure investment.

  2. .NET gained traction from Windows itself, Microsoft certifications, SQL Server, etc., and the cross-promotion arm of Microsoft that gives you huge technology deals based on how many MS-certified employees you have. This becomes a self-feeding beast: the longer you’re in, the more invested you become in staying, so pivoting is hard.

  3. Go got an uptick because of portability (like Java), single-binary deployment (easy to run anywhere), better concurrency than 1 & 2, and a design aimed at an extremely low learning curve (protection from turnover… but also the “is this it?” experience), along with the backing of Google. Not to say that Go doesn’t have other benefits, but those were the selling points that made it easier to introduce in big corps.

  4. Python has continued to be the general-purpose language that’s just everywhere. It’s used for web apps with Django and for server admin, it ships with Linux, admin tooling like Ansible is built with it, and easy call-outs to C have led to its adoption for advanced math and data science. It’s everywhere, and the increase in its use as a teaching language only advanced it further.

These are the very real forces at work pushing languages 1-4, and any language (Elixir or otherwise) is fighting the same uphill battle for entrenchment in the enterprise world. I don’t see Elixir ever becoming an enterprise-type language because of that. If it does happen, it will be a decade-long journey. It will gain attention as more and more startups use it as a disrupting force.

But just like Ruby, one of the biggest obstacles to greater adoption numbers is that one person can do SOOOOO much with the language that you don’t need huge teams.

8 Likes

I think it’s worth considering how the narrative around Elixir has changed over the last 5 years.

When I first picked up Elixir, it was for no other reason than it was a functional language that could be confidently run in production. I wasn’t comparing it with Python, Go, Java and the like because they were a completely different paradigm. I was comparing it to Haskell, Scala, F# and Clojure. Of those, I’d still pick Elixir for most tasks, barring a requirement for interop with existing .NET/JVM code. There was an acknowledgement that the benefits of the concurrent functional paradigm outweighed the costs that come with using a niche language (remember when we didn’t yet have standardized calendar types? :stuck_out_tongue:).

Now we’re talking about Elixir competing against some of the most mainstream, highly resourced languages out there. What a great position to be in! There are always more packages needed, but I’d encourage anyone who runs into a limitation with an existing package to submit PRs. The barriers to contribution are much lower than in other ecosystems, thanks to source-based packages with standardized build tooling.

20 Likes

I feel that some might need a reminder that while this topic was spun off a thread about why people are ditching Elixir, the topic is now “Why Elixir” :stuck_out_tongue_winking_eye:

I have been driven to Elixir after 10+ years of, “Oh, we’re gonna use varnish and tokyo cabinet oh but now it’s memcache and redis, and we’re going to need to provision a separate server for these and another one for background jobs because we can’t have them running on the web workers…” and then there’s concurrency… AND THEN there’s doing some form of this dance on the frontend (Apollo cache, anyone?).

Yes, setting up Redis is easy (if your company just lets you install a new service without red tape). I’ve done it several times and it was pretty easy even the first time. And once you’ve set up docker a few dozen times it probably becomes second nature (but I wouldn’t know). But other than thinning out your resume, I don’t see how “less tech and less context switching” could possibly be a bad thing (please note my use of “less”). @sasajuric made this point very well, but I wanted to second it (to be clear, he made it without the resume-padding quip).

I’m not using Elixir professionally yet, but I’m floored by what it’s capable of and by its potential in the web dev space (which is my true love). Everyone is clearly aware of its gaps and I hope more and more people show up to help fill them in. I’m hoping to contribute in the very near future.

And I’m now seeing @mbuhot’s answer which just makes almost all of my post a simple “hear, hear!” reply :stuck_out_tongue:

13 Likes

Can ports be used without reservation when Elixir lacks a library? What problems could come up when using ports? Performance? Memory use?

One of my concerns is getting stuck without having the time or knowledge to implement a missing library. I’m thinking of using Python ports in those cases.

1 Like

My brother is a professional con man, but he is such a charming guy! After a successful deal, he always throws a party and pours the most expensive champagne. In the end we all benefit! :slight_smile:

I have the same question. What kinds of libraries could be accessed through say a Python port?

1 Like

All of them, as long as you are able to define a protocol for data exchange.

1 Like

Such has been my experience with Ruby, for example. The go-to solutions for running something in the background or periodically (and most of the systems I’ve worked on needed this), such as sidekiq or resque, required running another OS process (a job scheduler) and Redis. The equivalent on the BEAM can be done with e.g. Task (or e.g. proc_lib in Erlang) for background jobs, and :timer.apply_interval for periodic ones. In the simplest form you don’t even need to add a library dependency, let alone start the scheduler in a separate OS process and manage an external product such as Redis. Will this suit all scenarios? Of course not, but there will be many situations where these lightweight options work just fine and allow me to reduce the number of moving parts.
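
To make that concrete, here’s a minimal sketch of both patterns (the MyApp.* module names are placeholders, not a recommendation):

```elixir
defmodule MyApp.Jobs do
  # Fire-and-forget background work, supervised so a crash doesn't take down
  # the caller. Assumes a Task.Supervisor named MyApp.TaskSupervisor is started
  # in the application's supervision tree; MyApp.Mailer stands in for your own code.
  def send_welcome_email(user) do
    Task.Supervisor.start_child(MyApp.TaskSupervisor, fn ->
      MyApp.Mailer.deliver_welcome(user)
    end)
  end

  # Periodic work via Erlang's :timer; MyApp.Cleanup.run/0 is a placeholder.
  # Returns {:ok, tref}; :timer.cancel(tref) stops it.
  def start_hourly_cleanup do
    :timer.apply_interval(:timer.hours(1), MyApp.Cleanup, :run, [])
  end
end
```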

The same thing holds for the reverse proxy. Back when I was working with Rails this was basically mandatory, while on the BEAM we can get by with cowboy or Phoenix. Back in my day, any Rails-based production setup, no matter how simple and small, required nginx, redis, and a separate OS process for the job scheduler. I haven’t been working with Ruby/Rails for some ten years now, so I’m not sure whether things have improved here.

The former sentiment doesn’t negate the latter. A tool can be both useful and overkill in simpler scenarios, as can be seen from the background job & reverse proxy examples. So I prefer to reach for an external tool when the built-in options don’t suit my needs. But in general I strive to get by with built-in options, because this reduces the number of moving parts and the number of different technologies used, and I believe we can all agree that, all other things being similar enough, that simplifies things.

I feel that many of the external tools we take for granted are overkill in many simpler situations (and it’s worth remembering that many of us are not operating at the scale of Facebook, Google, Twitter, Netflix & co), but we still end up using them because there are no simpler alternatives. I feel that the BEAM, with its OS-like capabilities and properties such as separation of failures and latencies through shared-nothing lightweight concurrency, has great potential to provide simpler-to-operate alternatives in the form of libraries. A nice example of this is Phoenix, and otherwise we could take some cues from Mnesia or riak_core, which despite all their issues demonstrated that services such as a database or distributed-system infrastructure can be provided as libraries. I’ve also posted some thoughts on this matter in another thread, here and here.

10 Likes

Why I use (& love to use) Elixir!

Probably “just” because I’m in a very lucky position.
→ Single developer deciding everything on my own.

I would still call myself a non-professional Elixir developer.
But the things I do with Elixir work (very, very well) for my application areas.
→ Small-scale application backends (none of them has > 100,000 active users).

I often find there isn’t one “best practice” solution to my problems in Elixir.
Instead I find a handful of possible solutions, usually take the one that looks easiest to me, and see if it works.
I’ve learned to love having these multiple options.
If I’m in doubt whether my solution is good enough, telemetry comes to my rescue.

I still see the dynamic typing as charming.
Often I just prototype things fast, as time is the most expensive thing in my work (bugs are second).
In some larger projects I of course also use Dialyzer, and yes, it’s not perfect.

About the package ecosystem: yes, there are fewer packages. But I’m glad of it.
Working with other large ecosystems, you get pushed into “there is certainly a package for it, we’ll take it, as that would be cheaper”.
That doesn’t take into account that you don’t know the quality of the package up front.
Then you find yourself swapping that package for another one later.
That’s really not a fun thing, so I’ve stopped working like that.

That means, yeah, there are still things to solve on my own, and I have fun doing so.

About deployments: I did a lot with all of these moving parts (reverse proxy, Docker, …).
And now I’m moving in the same direction as sasajuric or hauleth.
For my small-scale applications it’s definitely the cheaper way to operate.
I call them atomic deployments: one thing to start, one thing to destroy.

In the end: thanks to all the people bringing Elixir to life,
& letting me have my fun with programming.

PS: No, I won’t use Elixir for a game engine or for desktop GUI applications.

5 Likes

I feel some of the difference in position between “we need redis anyway” and “no need for redis on the BEAM” might come from differences in scale. At a certain point of complexity it makes sense to move to more specialized tools like redis or k8s. What I like about elixir is that it gives me – being a one-man show working on a web application of modest scale – a way to not need those immediately. I’m well aware I don’t get all the bells and whistles, but I also don’t need or even want them. If I compare that to my previous language, PHP, where as soon as I need a tiny bit of async stuff I have to pull in redis, child process management and whatnot, elixir becomes quite a solid choice (on top of other reasons).
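
For a sense of scale, that “tiny bit of async” is a few lines of plain Elixir with no extra infrastructure (the fetch functions below are dummies for illustration):

```elixir
defmodule MyApp.Dashboard do
  # Two independent lookups running concurrently, no broker or external
  # job queue involved. fetch_profile/1 and fetch_orders/1 stand in for
  # whatever slow calls you actually make.
  def load(user_id) do
    profile_task = Task.async(fn -> fetch_profile(user_id) end)
    orders_task = Task.async(fn -> fetch_orders(user_id) end)

    %{profile: Task.await(profile_task), orders: Task.await(orders_task)}
  end

  defp fetch_profile(user_id), do: {:profile, user_id}
  defp fetch_orders(user_id), do: {:orders, user_id}
end
```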

13 Likes

Ports can definitely be used for this (I did it myself a few times), but not without reservations.

There is another data hop, so the data being sent to the port and received from it needs to pass through an extra serialization/deserialization step. Depending on the particular situation this may or may not cause significant overhead. As usual, it’s best to measure :slight_smile:

In addition, if an external term format codec is not available in the guest language, you’ll need to fall back to a language-neutral format (e.g. JSON or Protobuf), so some fidelity might be lost. For example, instead of sending a tuple directly to the port, you might need to somehow transform it into a type supported by the exchange format before encoding.

Whatever you can use from Python should work, with the caveats mentioned above.
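
To make the trade-off concrete, a sketch of a line-delimited JSON port could look like this (the Python script path and the Jason dependency are assumptions for the example):

```elixir
defmodule MyApp.PythonPort do
  # Line-delimited JSON over a port. "priv/worker.py" is a hypothetical
  # script that reads one JSON line from stdin and writes one JSON line
  # to stdout; Jason is assumed for JSON encoding/decoding.
  def call(request) do
    port = Port.open({:spawn, "python3 priv/worker.py"}, [:binary, {:line, 65_536}])

    # Tuples don't survive JSON, so build the request from maps/lists/strings.
    Port.command(port, Jason.encode!(request) <> "\n")

    receive do
      {^port, {:data, {:eol, line}}} ->
        Port.close(port)
        {:ok, Jason.decode!(line)}
    after
      5_000 ->
        Port.close(port)
        {:error, :timeout}
    end
  end
end
```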

4 Likes

This summarizes it perfectly. We are not saying “we don’t need redis”, we are rather saying that redis is not a strong requirement from day one as it is seen elsewhere. As a platform, Erlang/OTP gives us the ability to reduce operational complexity in some cases, and accepting this complexity as a starting point because “that’s the way everyone does things” or because “we already have so much complexity anyway” would be a mistake. Of course there are still many situations where Redis is useful, and it is productive for everyone to double-check when that’s actually the case.

I also dispute the perception that these operational tools are ignored by the community. There is Redix (maintained by an Elixir core team member). Tristan, myself, and others have written about k8s+Erlang, and there is bonny for k8s control. There are brod (from Klarna) and broadway_kafka for Kafka, etc. Phoenix includes a Dockerfile in its official deployment guides and it will ship with a Dockerfile generator in the next release. Of course we don’t support everything under the sun, but we definitely recognize that these tools are necessary for a wide range of applications and systems.

Sometimes it feels like we are trapped in three-year-old rhetoric that “Elixir/Erlang does not like k8s/Docker/etc” while a large chunk of the community has moved on to recognize and accept their pros and cons.

26 Likes

Having worked with Elixir for a year, I can say that the actor model is an amazing tool for concurrency; I can’t say the same for Elixir and its ecosystem. Don’t get me wrong, Elixir is an amazing project with an amazing community, but often you feel locked into blessed ways of doing things (looking at global config variables in umbrellas, Phoenix and Ecto). What other libraries can you use instead of Phoenix and Ecto on the web dev front? I know that Phoenix and Ecto are quite modular, but the Rails-like approach just didn’t stick with me. So why I moved away:

  • configs
  • frameworks and the “blessed ways” approach
  • overuse of use directives
  • map string/atom keys and the pain points related to that
  • the need to use Rust/C/C++ when I do need to do something fast
  • packages are sometimes unreliable (e.g. amqp sometimes loses the connection for no reason)

So right now I’m evaluating Rust/Clojure/Scala for my next projects.

Both Phoenix and Ecto are built on top of low-level building blocks, so you can fall back to Plug itself (with Plug.Router and friends) if you feel like Phoenix is doing too much for you, or use the database adapters directly if you would rather not use Ecto.
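
For anyone who hasn’t seen it, a bare Plug.Router service really is this small (the module name and routes are just an illustration):

```elixir
defmodule MyApp.Router do
  # A minimal Plug.Router service, served directly by Cowboy, no Phoenix.
  use Plug.Router

  plug :match
  plug :dispatch

  get "/health" do
    send_resp(conn, 200, "ok")
  end

  match _ do
    send_resp(conn, 404, "not found")
  end
end

# In the application's supervision tree:
# {Plug.Cowboy, scheme: :http, plug: MyApp.Router, options: [port: 4000]}
```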

Configs have definitely been in flux, but we have been both consolidating the approaches over the last few years (see the upcoming config/runtime.exs) and starting to see solid alternatives such as Vapor, which provides a more structured and local approach to configuration.
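
As a rough sketch of the runtime approach (the app, repo module, and env var names below are placeholders):

```elixir
# config/runtime.exs - evaluated at boot, so it behaves the same inside releases.
import Config

if config_env() == :prod do
  config :my_app, MyApp.Repo,
    url: System.fetch_env!("DATABASE_URL"),
    pool_size: String.to_integer(System.get_env("POOL_SIZE", "10"))
end
```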

Global configs in umbrellas are also discussed extensively; even in the introductory material we list them as one of their major downsides. I wouldn’t necessarily call umbrellas a blessed way, they are more of a tool available to you, one you can find plenty of critique of, and they have well-documented alternatives such as not using them :slight_smile: or using path dependencies (which some people call poncho projects).
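
For reference, a path dependency between sibling projects is a one-liner in mix.exs (the project names here are invented):

```elixir
# Poncho-style layout, two sibling projects instead of an umbrella:
#   my_core/
#   my_web/
#
# In my_web/mix.exs:
defp deps do
  [
    {:my_core, path: "../my_core"}
  ]
end
```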

Also, the need to use Rust/C/C++ is completely natural in most languages. Clojure and Scala may need to fall back to Java on certain occasions too (unless you write your Scala in a Java-like fashion). Of course, if you pick Rust then you don’t need to change languages to write performant code, but the languages that absolutely maximize performance may often feel too low-level to write your business logic in.

12 Likes

I’d go a bit further and say that it can help reduce complexity in many cases. For example, even if a project uses redis it might not need nginx, or vice versa. Even if just one moving piece is eliminated, it’s still a significant simplification.

Furthermore, even if redis is used, it doesn’t mean we need to use it for everything. For example, we can still reach for ETS, GenServers, Agents & friends for local (node-only) caching, which can give us some nice benefits such as reduced pressure on the central storage, no serialization/deserialization (which also means no data loss, such as atoms being converted into strings), and no network hop. The same argument holds for e.g. running local background and periodic jobs. Just because central storage is available doesn’t mean I need to use it for everything.
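
A node-local cache along those lines can be a handful of lines of OTP. A minimal sketch, assuming a hypothetical module name and leaving out expiry logic:

```elixir
defmodule MyApp.LocalCache do
  use GenServer

  # Reads go straight to ETS: no serialization, no network hop.
  def get(key) do
    case :ets.lookup(__MODULE__, key) do
      [{^key, value}] -> {:ok, value}
      [] -> :error
    end
  end

  # Writes are serialized through the owning GenServer.
  def put(key, value), do: GenServer.call(__MODULE__, {:put, key, value})

  def start_link(_opts), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  @impl true
  def init(nil) do
    table = :ets.new(__MODULE__, [:named_table, :set, :protected, read_concurrency: true])
    {:ok, table}
  end

  @impl true
  def handle_call({:put, key, value}, _from, table) do
    :ets.insert(table, {key, value})
    {:reply, :ok, table}
  end
end
```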

FWIW, I’ve been dockerizing practically every BEAM release I’ve worked on for the past 5 years or so, and it’s my go-to approach for running the BEAM in production. I’m not a fan of k8s, but I wouldn’t mind using it for pragmatic reasons. However, I think that a lightweight, simpler alternative to k8s could be built on top of distributed BEAM.

9 Likes

I was thinking about writing something like CoreOS’s fleetctl in Elixir; however, I think that Go or Rust would reach a broader audience. I was also thinking about an Erlang runner for HashiCorp’s Nomad, which would be handy as well (and much simpler than k8s).

3 Likes

That reminds me of a comment you made in one of your talks about some problems with BEAM distribution. Did you ever blog about that?

You can find an expanded explanation here.

2 Likes

A little off topic, but on the idea of using or not using redis: I am very grateful that with elixir I can do without redis. Don’t get me wrong, redis is a fine tool. However, 90%+ of the time people don’t actually need redis; even memcached can do the job. But since redis is there and has richer functionality, people use redis.

Sometimes people do want to have a distributed, persistent, in-memory K/V store. The choices are:

  • redis: however, you either risk losing data or have severe jitter in performance
  • mongodb: I’ve been burned by it; lost production data, and the taste is still bitter in my mouth
  • riak: looks good on paper, but the benchmark numbers are not great, and I also heard the company had financial trouble?

My question is, if I ever grow out of cachex/ETS, should I go back to redis, or should I seek some bigger guns than redis?

2 Likes