On "Why Elixir?"

I feel that some might need a reminder that while this topic was spun off a thread about why people are ditching Elixir, the topic is now “Why Elixir” :stuck_out_tongue_winking_eye:

I have been driven to Elixir after 10+ years of, “Oh, we’re gonna use Varnish and Tokyo Cabinet… oh, but now it’s Memcached and Redis, and we’re going to need to provision a separate server for these and another one for background jobs because we can’t have them running on the web workers…” and then there’s concurrency… AND THEN there’s doing some form of this dance on the frontend (Apollo cache, anyone?).

Yes, setting up Redis is easy (if your company lets you just install a new service without red tape). I’ve done it several times and it was pretty easy even the first time. And once you’ve set up Docker a few dozen times it probably becomes second nature (but I wouldn’t know). But other than thinning out your resume, I don’t see how “less tech and less context switching” could possibly be a bad thing (please note my use of “less”). @sasajuric made this point very well, but I wanted to second it (to be clear, he made it without the resume-padding quip).

I’m not using Elixir professionally yet, but I’m floored by what it’s capable of and by its potential in the web dev space (which is my true love). Everyone is clearly aware of its gaps and I hope more and more people show up to help fill them in. I’m hoping to contribute in the very near future.

And I’m now seeing @mbuhot’s answer which just makes almost all of my post a simple “hear, hear!” reply :stuck_out_tongue:

12 Likes

Can ports be used without reservation when Elixir lacks a library? What could be a problem when using ports? Performance? Memory use?

One of my concerns is getting stuck without having the time or knowledge to implement a missing library. I’m thinking of using Python ports in those cases.

1 Like

My brother is a professional con man, but he is such a charming guy! After a successful deal, he always throws a party and pours the most expensive champagne. In the end we all benefit! :slight_smile:

I have the same question. What kinds of libraries could be accessed through, say, a Python port?

1 Like

All of them, as long as you are able to define a protocol for data exchange.

1 Like

Such has been my experience with Ruby, for example. The go-to solutions for running something in the background or periodically (and most of the systems I’ve worked on needed this), such as Sidekiq or Resque, required running another OS process (the job scheduler) and Redis. The equivalent on the BEAM can be done with e.g. Task (or e.g. proc_lib in Erlang) for background jobs, and :timer.apply_interval for periodic ones. In the simplest form you don’t even need to add a library dependency, let alone start the scheduler in a separate OS process and manage an external product such as Redis. Will this suit all scenarios? Of course not, but there will be many situations where these lightweight options will work just fine and allow me to reduce the number of moving parts.
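
For illustration, a minimal sketch of both (the module and function names such as MyApp.Cleanup are made up, and I’m assuming a Task.Supervisor was already started in the supervision tree):

```elixir
# Fire-and-forget background job, running under a supervised Task.Supervisor.
Task.Supervisor.start_child(MyApp.TaskSupervisor, fn ->
  MyApp.Reports.generate("monthly")
end)

# Periodic job: call MyApp.Cleanup.run/0 every 60 seconds.
{:ok, _tref} = :timer.apply_interval(60_000, MyApp.Cleanup, :run, [])
```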

The same thing holds for the reverse proxy. Back when I was working with Rails this was basically mandatory, while on the BEAM we can get by with Cowboy or Phoenix. In my days, any Rails-based production, no matter how simple and small, required nginx, Redis, and a separate OS process for the job scheduler. I haven’t been working with Ruby/Rails for some ten years now, so I’m not sure whether things have improved here.

The former sentiment doesn’t negate the latter. A tool can be useful but also overkill in simpler scenarios, as can be seen from the background job & reverse proxy examples. So I prefer to reach for an external tool when the built-in options don’t suit my needs. But in general I strive to get by with built-in options, because this reduces the number of moving parts and the number of different technologies used, and I believe we can all agree that, all other things being similar enough, this simplifies things.

I feel that many of the external tools we take for granted are overkill in many simpler situations (and it’s worth remembering that many of us are not operating at the scale of Facebook, Google, Twitter, Netflix & co), but we still end up using them because there are no simpler alternatives. I feel that the BEAM, with its OS-like capabilities and properties such as separation of failures and latencies through shared-nothing lightweight concurrency, has great potential to provide simpler-to-operate alternatives in the form of libraries. A nice example of this is Phoenix, and otherwise we could take some cues from Mnesia or riak_core, which, despite all their issues, demonstrated that services such as a database or distributed system infrastructure can be provided as libraries. I’ve also posted some thoughts on this matter in another thread, here and here.

10 Likes

Why I use (& love to use) Elixir!

Probably “just” because I’m in a very lucky position.
→ I’m a single developer deciding everything on my own.

I’d still call myself a non-professional Elixir developer.
But the things I do with Elixir work (very, very well) for my application areas.
→ Small-scale application backends (none of them has > 100,000 active users).

I often find that there is no single “best practice” solution to my problems in Elixir.
Instead I find a handful of possible solutions, usually take the one that looks easiest to me, and see if it works.
I’ve learned to love having these multiple options.
If I’m in doubt whether my solution is good enough, telemetry comes to my rescue.

Dynamic typing still feels charming to me.
I often prototype things quickly, as time is the most expensive thing in my work (bugs are second).
So in some larger projects I of course also use Dialyzer - and yes, it’s not perfect.

About the package ecosystem: yes, there are fewer packages. But I’m glad of it.
Working with other, larger ecosystems, you get pushed towards “there is certainly a package for it - we’ll take it, as that would be cheaper”.
Without taking into account that you don’t know the quality of the package up front.
Then you see yourself replacing that package with another package later.
That’s really not a fun thing - so I’ve stopped working like that.

That means, yes, there are still things to solve on my own - and I have fun doing so.

About deployments: I did a lot with all of these moving parts (reverse proxy, Docker, …).
Now I’m moving in the same direction as sasajuric or hauleth.
For my small-scale applications it’s definitely the cheaper way to operate.
I call them atomic deployments - one thing to start, one thing to destroy.

In the end: thanks to all the people bringing Elixir to life,
& letting me have my fun with programming.

PS: No, I won’t use Elixir for a game engine or for desktop GUI applications.

5 Likes

I feel some of the difference in position between “we need Redis anyway” and “no need for Redis on the BEAM” might come from different scale. At a certain point of complexity it makes sense to move to more specialized tools like Redis or k8s. What I like about Elixir is that it gives me - being a one-man show working on a web application of non-big scale - a way to not need those immediately. I’m well aware I don’t get all the bells and whistles, but I also don’t need or even want them. If I compare that to my previous language, PHP, where as soon as I need a tiny bit of async work I have to pull in Redis, child process management, and whatnot, Elixir becomes quite a solid choice (on top of other reasons).

13 Likes

Ports can definitely be used for this (I’ve done it myself a few times), but not without reservations.

There is another data hop, so the data being sent to the port and received from it needs to pass through an extra serialization/deserialization step. Depending on the particular situation this may or may not cause significant overhead. As usual, it’s best to measure :slight_smile:

In addition, if an External Term Format codec is not available in the guest language, you’ll need to fall back to a language-neutral format (e.g. JSON or Protobuf), so some fidelity might be lost. For example, instead of sending a tuple directly to the port, you might need to somehow transform it into a type supported by the exchange format before encoding.

Whatever you can use from Python should work, with the caveats mentioned above.
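
To make this concrete, here’s a rough sketch of talking to a Python script over a port with a line-based JSON protocol (the worker.py script, the message shape, and the Jason dependency are all assumptions for illustration):

```elixir
# Spawn the external Python process; exchange one JSON message per line over stdio.
port =
  Port.open(
    {:spawn_executable, System.find_executable("python3")},
    [:binary, {:args, ["worker.py"]}, {:line, 4096}]
  )

# A tuple can't be represented in JSON, so convert it to a list before encoding.
request = Jason.encode!(%{"op" => "add", "args" => Tuple.to_list({1, 2})})
Port.command(port, [request, "\n"])

receive do
  {^port, {:data, {:eol, response}}} -> Jason.decode!(response)
after
  5_000 -> {:error, :timeout}
end
```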

4 Likes

This summarizes it perfectly. We are not saying “we don’t need Redis”; we are rather saying that Redis is not a strong requirement from day one, as it is seen elsewhere. As a platform, Erlang/OTP gives us the ability to reduce operational complexity in some cases, and accepting this complexity as a starting point because “that’s the way everyone does things” or because “we already have so much complexity anyway” would be a mistake. Of course there are still many situations where Redis is useful, and it is productive for everyone to double-check when that’s actually the case.

I also dispute the perception that these operational tools are ignored by the community. There is Redix (maintained by an Elixir core team member). Tristan, myself, and others have written about k8s+Erlang, and there is bonny for k8s controllers. There are brod (from Klarna) and broadway_kafka for Kafka, etc. Phoenix includes a Dockerfile in its official deployment guides and it will ship with a Dockerfile generator in the next release. Of course we don’t support everything under the sun, but we definitely recognize that these tools are necessary for a wide range of applications and systems.

Sometimes it feels we are trapped in this three-year-old rhetoric that “Elixir/Erlang does not like k8s/Docker/etc.” while a large chunk of the community has moved on to recognize and accept their pros and cons.

27 Likes

Having worked with Elixir for a year, I can say that the actor model is an amazing tool for concurrency; I can’t say the same for Elixir and its ecosystem. Don’t get me wrong, Elixir is an amazing project with an amazing community, but you often feel locked into blessed ways of doing things (looking at global config variables in umbrellas, Phoenix, and Ecto). What other libraries can you use instead of Phoenix and Ecto on the web dev front? I know that Phoenix and Ecto are quite modular, but the Rails-like approach just didn’t stick with me. So why I moved away:

  • configs
  • frameworks and the blessed-ways approach
  • overuse of use directives
  • map string/atom keys and the pain points related to that
  • the need to use Rust/C/C++ when I do need to do something fast
  • packages are sometimes unreliable (e.g. amqp sometimes loses the connection for no reason)

So right now I’m evaluating Rust/Clojure/Scala for my next projects.

Both Phoenix and Ecto are built on top of low-level building blocks, so you can fall back to Plug itself (with Plug.Router and friends) if you feel like Phoenix is doing too much for you, or use the database adapters directly if you would rather not use Ecto.
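
As a rough sketch of what that looks like with plain Plug (the module name and routes are made up):

```elixir
defmodule MyApp.Router do
  use Plug.Router

  plug :match
  plug :dispatch

  get "/health" do
    send_resp(conn, 200, "ok")
  end

  match _ do
    send_resp(conn, 404, "not found")
  end
end

# In the application's supervision tree, instead of a full Phoenix endpoint:
# {Plug.Cowboy, scheme: :http, plug: MyApp.Router, options: [port: 4000]}
```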

Configs have definitely been in flux, but we have been both consolidating the approaches over the last few years (see the upcoming config/runtime.exs) and starting to see solid alternatives such as Vapor, which provides a more structured and local approach to configuration.
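
A minimal config/runtime.exs sketch, assuming a hypothetical :my_app application and environment variable names, just to show the shape:

```elixir
# config/runtime.exs - evaluated when the system boots, not at compile time
import Config

config :my_app, MyApp.Repo,
  url: System.fetch_env!("DATABASE_URL")

config :my_app, :redis_url,
  System.get_env("REDIS_URL", "redis://localhost:6379")
```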

Global configs in umbrellas are also discussed extensively; even in the introductory material we list them as one of umbrellas’ major downsides. I wouldn’t necessarily call umbrellas a blessed way; they are more of a tool available to you, which you can find plenty of critique on, and they have well-documented alternatives such as not using them :slight_smile: or using path dependencies (which some people call poncho projects).
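
For reference, a path dependency is just a regular mix.exs entry pointing at a sibling directory (the project name here is hypothetical):

```elixir
# mix.exs of the top-level project in a "poncho" setup
defp deps do
  [
    # a sibling project on disk, referenced directly instead of via an umbrella
    {:my_core, path: "../my_core"}
  ]
end
```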

Also, the need to use Rust/C/C++ is completely natural in most languages. Clojure and Scala may need to fall back to Java on certain occasions too (unless you write your Scala in a Java-like fashion). Of course, if you pick Rust then you don’t need to change languages to write performant code, but the languages that absolutely maximize performance may often feel too low-level to write your business logic in.

13 Likes

I’d go a bit further and say that it can help reduce complexity in many cases. For example, even if a project uses Redis it might not need nginx, or vice versa. Even if just one moving piece is eliminated, it’s still a significant simplification.

Furthermore, even if Redis is used, it doesn’t mean we need to use it for everything. For example, we can still reach for ETS, GenServers, Agents & friends for local (node-only) caching, which can give us some nice benefits such as reduced pressure on the central storage, no serialization/deserialization (which also means no data loss, such as atoms being converted into strings), and no network hopping. The same argument holds for e.g. running local background and periodic jobs. Just because a central storage is available doesn’t mean I need to use it for everything.
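
As an illustration, a node-local cache can be as small as this sketch (the module name is made up; no TTLs, eviction, or error handling here):

```elixir
defmodule MyApp.LocalCache do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  # Reads hit the ETS table directly, bypassing the GenServer process.
  def get(key) do
    case :ets.lookup(__MODULE__, key) do
      [{^key, value}] -> {:ok, value}
      [] -> :error
    end
  end

  def put(key, value), do: GenServer.call(__MODULE__, {:put, key, value})

  @impl true
  def init(nil) do
    :ets.new(__MODULE__, [:named_table, :set, :protected, read_concurrency: true])
    {:ok, nil}
  end

  @impl true
  def handle_call({:put, key, value}, _from, state) do
    :ets.insert(__MODULE__, {key, value})
    {:reply, :ok, state}
  end
end
```

Values stay as plain terms (atoms included), and reads never leave the node.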

FWIW I’ve been dockerizing practically every BEAM release I’ve worked on for the past 5 years or so, and it’s my go-to approach for running the BEAM in production. I’m not a fan of k8s, but I wouldn’t mind using it for pragmatic reasons. However, I think that a lightweight, simpler alternative to k8s could be built on top of distributed BEAM.

9 Likes

I was thinking about writing something like CoreOS’s fleetctl in Elixir; however, I think that Go or Rust would result in a broader audience. I was also thinking about an Erlang runner for HashiCorp’s Nomad, which would be handy as well (and much simpler than k8s).

3 Likes

That reminds me of a comment you made in one of your talks about some problems with BEAM distribution. Did you ever blog about that?

You can find an expanded explanation here.

2 Likes

A little off topic, but on the idea of using or not using Redis: I am very grateful that with Elixir I can do without Redis. Don’t get me wrong, Redis is a fine tool. However, 90%+ of the time people don’t actually need Redis; even Memcached could do the job. But since Redis is there and has richer functionality, people use Redis.

Sometimes people do want to have a distributed, persistent, in-memory K/V store. The choices are:

  • Redis - however, you either risk losing data or have severe jitter in performance
  • MongoDB - I’ve been burned by it: lost production data, and the taste is still bitter in my mouth
  • Riak - looks good on paper, but the benchmark numbers are not great, and I also heard the company had financial trouble?

My question is: if I ever grow out of Cachex/ETS, should I go back to Redis, or should I seek some bigger guns than Redis?

2 Likes

The view in the Erlang/OTP/Elixir world has always been that no language is good at everything - get over it. In any real system, different parts will have different requirements, so the best solution is generally to use different languages: use the right language for each part and then glue them together. I would hate to try bit fiddling in raw memory in Erlang; I would use C for that in a NIF so I could control it from Erlang/Elixir. And I would hate to use C to build large, concurrent, scalable, fault-tolerant systems. Of course it could be done, but the pain. And if I really had to do it, I would use the right library for it, the BEAM :wink:
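
For context, the Elixir side of such a NIF is just a thin stub; the actual bit fiddling lives in C (the module, function, and library name below are hypothetical):

```elixir
defmodule MyApp.BitTwiddle do
  @on_load :load_nif

  def load_nif do
    # The compiled C library (bit_twiddle.so) is assumed to live in the app's priv dir.
    path = :filename.join(:code.priv_dir(:my_app), 'bit_twiddle')
    :erlang.load_nif(path, 0)
  end

  # Stub that the C implementation replaces once the NIF is loaded.
  def pack_bits(_binary), do: :erlang.nif_error(:nif_not_loaded)
end
```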

26 Likes

It’s pretty interesting how @josevalim explains exactly what “you may not need Redis with Elixir” means in this post: https://dashbit.co/blog/you-may-not-need-redis-with-elixir

@whatyouhide also talks more about it on Twitter.

4 Likes

What I have to say about Elixir pros/cons

Here I am after… what? 2 years? of almost exclusively using Elixir. Maybe more? It’s hard to keep track of when we retired other programming languages for good.

I was the driving force behind going all in on Elixir, and I am ready to take the full blame for everything that goes wrong in our apps. But I am feeling pretty good - most of our problems come from server issues (except for one memory leak we have not been able to pin down yet…)

But here is the story.

We started with 100% Ruby on Rails, and I would say I went through all the stages of love/hate with Rails. At first I really loved it. I did not know Ruby, I just knew a little bit of JavaScript and Java (the stuff they taught in uni) - but I still did not have a real idea of how a web app works.
Along comes Rails… You just need to google which helpers to use, and stuff works!

But oh oh oh! Don’t try to do things that are not included in the “Rails and go yaaaaaaaaaa” package. After 3 years with Rails, and struggling with its lackluster real-time support (what we did back then was make a poll request to the server and return JavaScript that had to check if anything changed), I started working with a high school friend of mine, and I was hoping for some insight into how to work with Web 2.0!

I had to learn EmberJS - which is kind of a Rails for frontends that probably emerged a year or more after the first real frontend JS frameworks became popular. And I ran into the same problem… “but what if I want to do something not in your textbook, which seems to be written for Web 1.0 times?”
Additionally, we had to maintain the Rails API server (written before rails_api was a thing, so we had to manually remove gems) and the EmberJS frontend.

My predecessor started fooling around with Elixir early on, and we had one actually very important app running on pre-1.0 Phoenix and Elixir (a beta version?).

At first I hated Elixir… I did not understand how to use it, given my Ruby on Rails background, being basically used to things just happening magically.
And then I had to actually use Elixir. And I realized that everything was so much more explicit.
Want to know where something comes from? Do a project-wide search and you see everywhere it’s passed down to the view etc., instead of it being some magic you can never find if you don’t know how Rails works.

But the real kicker for me was when Phoenix made the switch from the MVC style to domain-driven design. I read the blog post (help me out here, probably by Chris McCord?) about how DDD works in Phoenix and I immediately thought “this is so much better than RoR!” I went through the redesign of Phoenix with Elixir, and I am just in love with the bare-bones design of the language. But it also has all those web development packages that follow the same design principle: as bare-bones as possible; if you want more, add dependencies.

I have updated Ruby, React, Ember, and Elixir servers for years now, and I have to say Elixir is the easiest to update (except for the one error we run into every now and then with hackney! Why tf does it actually prevent me from ignoring SSL for crappy APIs?)

So, for me personally the context-focused Phoenix structure was the reason why I wanted to use Elixir more. But I can also attest to the performance and reliability.
We had an Elixir/Phoenix server running for almost 4 years before I first looked into it - and the reason I did was Heroku updating stuff, so we moved it to a cloud server. Almost no problems, super easy with edeliver (once you understand how it works, admittedly). And there was another server I updated without understanding Elixir yet, also on a version that was probably still alpha, maybe beta. I updated the server and the app, and all the errors were very descriptive and easy to understand. I would honestly say the hardest update in Elixir happened just recently, and it’s just because of hackney!

Another reason to use Elixir: we recently had a few spikes from a customer who wanted to do a bulk interaction. A few thousand requests/minute on top of the usual traffic. I saw the spike on the server, but no problems other than the database server sometimes crapping out.

PROs:

  • Lightweight stdlib; once you know Elixir, you know it (unlike Ruby, where they add aliases for methods every 3 months)
  • Once it is deployed, it will just run… and run… and run…
  • Phoenix is a great web dev framework for intermediate developers
  • NEW: The language is still quite new, so sometimes you can implement packages for everyone. I have a package a few people use, and it feels very good to be part of the community.
  • EXPLICIT: Elixir/Phoenix values explicitness over implicitness, so unless you screw up, you always know what is going on.

CONs:

  • The first deploy can be a pain in the A**
  • Erlang is slow to install, and the community seems to be stuck in the ’90s when you look at their docs
  • NEW: The language is still quite new - that opens opportunities, but it means you will have to implement stuff yourself where other languages have packages ready.
  • EXPLICIT: There is more code to type, more data to pass, and most things don’t just work magically.

I put two things in caps because I think they can be a blessing or a curse, depending on how you look at it…

Overall Elixir, Phoenix, and especially Phoenix LiveView have changed the way I do my poopy web apps, and gave me the feeling of finally having “one app” that does almost everything, instead of a bunch of “microservices” in Ruby/PHP/Python and a separate client.

Also, we developed an app in React Native - guess how hard it was to establish a WebSocket connection to the server from React Native. Right, ridiculously easy.

So, why Elixir?

  1. I just love @chrismccord for Phoenix (great name, great technology!) and of course @josevalim for developing Elixir and still being active in the community!
  2. Because for me it is a modern take on an old, battle-tested technology. Erlang syntax is a crime against humanity IMO - so thank you again, José, for making the technology actually usable.
  3. Phoenix LiveView - we have been using it in production since alpha, because I just wanted to use it! Every update is an improvement, and usually once we see bugs, they have already fixed them in the next version. I forgot who the guy behind the JavaScript was, but if you tell me his Elixir Forum name, I will include him here, because that s**t is just incredibly cool.
  4. You can easily use Elixir without Phoenix or anything else, and it’s still incredibly good at spawning lightweight processes all over the place (the orchestrator).
  5. Honestly, I am just attached to this programming language/community now. And as long as I am in charge, we will keep using it! Keep up the great work!
16 Likes