On "Why Elixir?"

:wave: Thanks for your attention here!

Looking at the latest documentation from the Mix hexdocs, I think the only outstanding bits are things you can achieve in spirit with native releases, just not in identical forms. The configuration providers already have excellent parity, and tarball support arrived not much later.

  • I get a lot of mileage out of custom subcommands over eval and rpc calls, to reduce cognitive overhead during operational or triage tasks. This is also a gap that documentation and runbooks could fill about as well.
  • I've seen them used for significant evil, but boot hooks are a nice model sometimes. I expect that the guideline here is to use configuration providers as much as possible, and for anything else to roll the work into your supervision tree directly, perhaps returning :ignore as appropriate.
  • I academically miss the prospect of release plugins as a way for Hex packages to hook into your release build more intrusively without a lot of copypasta, but I have not actually seen them used in practice so far. Like the boot hooks, I think this probably just evolves into supervision tree entries as well, plus documentation.

I hope it doesn't seem petty that I say I miss these things in this specific form, but the Mix.Release module docs and mix task docs also leave them out as topics thus far. Most folks would need to be aware of these features as prior art in order to synthesize the native solutions above from first principles, and part of my stance is that deployments are already hard to teach good instincts for. Not having a canonical reference for these things in the first-party material means I need to say things like "also check out the Distillery docs even though you may not use it, there are some good ideas to steal from there".

I'm just as excited for this as I am for runtime.exs! :star_struck:

Very true, I won't get everything I mentioned from any one place. I don't want to turn this into an advertisement thread for Rust, so I will just say it already has some traction in most of the spaces where I've mentioned having an unfulfilled interest, with ML probably being the weakest and webdev having some ergonomic concerns in its current form.

1 Like

That's something we left out on purpose, because it adds a lot of complexity, and many people reported the complexity of Distillery scripts as one of its biggest downsides. I encourage building regular scripts on top of the main script, which is more Unix-like.
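For example, a custom subcommand can usually be recovered as a small wrapper on top of the release's main script. A minimal sketch, where RELEASE_BIN and MyApp.Release.migrate/0 are placeholder names you would adapt to your own release:

```shell
#!/bin/sh
# Sketch of a Unix-style wrapper around the release's main script.
# RELEASE_BIN and MyApp.Release.migrate/0 are hypothetical names.
RELEASE_BIN="${RELEASE_BIN:-/opt/my_app/bin/my_app}"

dispatch() {
  case "$1" in
    migrate) printf '%s eval "MyApp.Release.migrate()"\n' "$RELEASE_BIN" ;;
    remote)  printf '%s remote\n' "$RELEASE_BIN" ;;
    *)       printf 'usage: my_app {migrate|remote}\n' >&2; return 1 ;;
  esac
}

# A real script would `exec` the command instead of printing it; printing
# keeps this sketch side-effect free:
dispatch migrate
```

The dispatch stays in plain shell, so it composes with cron, CI, and runbooks the same way any other Unix tool does.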

Similar for boot scripts. If you do need to hook into a particular command, then you can use the env.sh file, but I would avoid these particular papercuts. :slight_smile:

If there is something that is hard to achieve with those approaches, please let us know, and we can see how to tackle the individual problems. I would prefer to move the discussion from "we don't have this" to "we can't do this".

Plugins are done with custom steps. That's something we have had since day one, because they are required for projects like Nerves. All of those are documented and used by the community; maybe what is lacking is actual guides to help people migrate from Distillery.
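For reference, a custom step is just a function that receives and returns a %Mix.Release{}, listed alongside the built-in :assemble and :tar steps in mix.exs. A sketch, where MyApp.ReleaseSteps.customize/1 is a placeholder name:

```elixir
# mix.exs sketch: :assemble and :tar are the built-in steps; any other
# entry is a function from %Mix.Release{} to %Mix.Release{}.
def project do
  [
    app: :my_app,
    version: "0.1.0",
    releases: [
      my_app: [
        steps: [:assemble, &MyApp.ReleaseSteps.customize/1, :tar]
      ]
    ]
  ]
end
```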

3 Likes

I've been using this for a project since switching to `mix release` from Distillery:

But to be fair, for new projects I use eval, since in my case CI is the only place that calls those commands, so the benefit is minor.

4 Likes

I honestly don't see Rust displacing any of the major players in web dev. It is not about framework capability, but rather the level you want to sit at when writing business/domain logic. To be clear, it is definitely doable, but far from a first choice.

Similar thoughts apply to distributed systems. Doable pretty much anywhere, but if your goal is to start something, Erlang removes many decisions from the process, and it has proven its scalability and operability over and over again.

This is not a Rust critique, just trying to drive home the point that languages won't check all of those boxes. Rust probably has yet to pass the peak of inflated expectations before it settles into certain domains.

Other than that, I definitely hope Rust succeeds in domains such as ML because its interoperability story means that it will make ML directly available to many other communities.

15 Likes

Are the C, Java, Node, or Rust ecosystems as "productive and fun" to code in as Elixir?

IMO this is one of the greatest advantages of Elixir, and maybe one we are missing in this discussion. I have never before found a tech stack that brings me the same joy of development and nice performance (in its niche) as Elixir.

Elixir's syntax, and the language itself, are very nicely built.

Do you like `[] === []` evaluating to `false`, as in Node? Or writing `public static void main(String[] args)` plus a `pom.xml` just to print something, as in Java? Or having no REPL/interactive shell, as in Rust? Or working with pointers while you are building a simple CRUD web app, as in C?

I really respect all of those. But for web apps or distributed systems I have found nothing "better" than Elixir.

IMO expressiveness is a very, very important thing when modeling business rules. And it matters to work with a language you actually like, instead of hitting your head on your keyboard writing code that looks like visual noise. :rofl:

Sorry for my bad English.

12 Likes

I work with ML professionals, and my opinion is that they are very smart people, and like many very smart people they don't want to put up with a lot of things on the way to what they really want to do. Even if Rust manages to abstract a lot of the most challenging parts into macros, those are still relatively uncomposable... In this sense: fighting the borrow checker just to parse a poorly structured CSV (that came out of a shell script dropped from an ETL pipeline) into a high-performance pipeline of matrix-matrix multiplication on a GPU is, I believe, not something they are going to want to do in general.

Maybe a "best practices" guide for eval and rpc might be useful. One of my eval statements in my shell scripts looks like this, which I think is evocative of some of the struggles:

#!/bin/sh
my_app/bin/my_app eval "import IP; Application.ensure_all_started(:my_app); :some_atom1; :some_atom2; MyApp.run_file('~/configuration.toml', ~i/$1/);"
  • I'm not sure there's guidance about sometimes needing to call Application.ensure_all_started/1 in an eval statement; I only knew to put that in from lots of experience, and this may be frustrating for beginners.
  • The :some_atoms are there because I have been maybe overly cautious about String.to_existing_atom.
  • The first parameter to run_file/2 is a charlist to avoid having to escape double quotes, and is a bit of a hack because I happened to remember that run_file just calls File.read!, which accepts charlists.
  • The second parameter uses a sigil_i that I happened to remember I'd written, because that lets me avoid a quotation mark around the argument to IP.from_string!/1. If I had a function that took a String, I would probably want to use ~s// or something, but guidance on that as a best practice would be nice.
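One pattern that sidesteps several of these papercuts is moving the logic into a dedicated release-task module, so the eval string only passes plain strings around. A sketch, reusing the hypothetical MyApp.run_file/2 and IP.from_string!/1 from above (module and path are made-up names):

```elixir
# lib/my_app/release.ex sketch: eval boots the VM without starting the
# app, hence the explicit Application.ensure_all_started/1 call.
defmodule MyApp.Release do
  @app :my_app

  # Invoked as:
  #   bin/my_app eval 'MyApp.Release.ingest("/etc/my_app/conf.toml", "10.0.0.1")'
  def ingest(path, ip_string) do
    {:ok, _} = Application.ensure_all_started(@app)
    MyApp.run_file(path, IP.from_string!(ip_string))
  end
end
```

With the parsing and startup concerns inside the module, the shell side shrinks to a single quoted call, and the atoms/charlist/sigil workarounds disappear.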

It might also be nice to be able to package shell/batch scripts into the release relatively easily in the release pipeline (like how we have the :assemble and :tar steps). I don't do this currently (I just copy/paste from my source directory into vi).
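In the meantime, this can be approximated with a custom release step. A sketch, assuming the scripts live in a hypothetical rel/scripts/ directory:

```elixir
# mix.exs sketch: a custom step that copies operational shell scripts
# into the release root. rel/scripts/ is an assumed location.
defp copy_scripts(release) do
  for script <- Path.wildcard("rel/scripts/*.sh") do
    File.cp!(script, Path.join(release.path, Path.basename(script)))
  end

  release
end

# Wired into the release definition as:
#   releases: [my_app: [steps: [:assemble, &copy_scripts/1, :tar]]]
```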

That said, I love releases. I did a release of a personal project (persistent websocket data scraping into SQLite) last night, directly onto a free-tier AWS instance. Smooth as a baby's bottom.

1 Like


I think the main appeal towards Rust for Elixir devs is the static typing (and one really useful product of it: exhaustive pattern matching). :slight_smile:

I claim that the experience of making a web app in Rust is miles behind Phoenix.

2 Likes

Write a guide in a blog post, please! :pleading_face:

7 Likes

That. The practicality and aesthetics of Elixir code.

You raised some valid points, and I can see how various issues combined can lead to frustration. I've been using BEAM languages for the past decade, and while I'm in general a happy user, I agree that there's a lot of room for improvement :slight_smile:

In particular, I think that the lack of strong typing is a big deficiency, and I'm hoping that some of the ongoing initiatives will address this. If projects such as Gleam reach enough maturity, I could see myself migrating to them, at least partially.

I also agree that the ecosystem is far from perfect, in terms of size as well as support. It is indeed worrying that some of the prominent libraries are developed as a private effort of a few individuals, with a lot of the work probably done outside of working hours.

All that being said, I'm still in general a very happy user of BEAM languages, and I believe that they are by far the most suitable options for building fault-tolerant soft real-time backends of any size and complexity. Projects such as WhatsApp have demonstrated that BEAM can take us very far, but at the same time, in my experience BEAM languages, especially Elixir, excel at building small-scale, simpler systems. I've worked on a couple of such systems which were implemented completely in a single BEAM language, as a single project, running as a single standalone OS process in production, requiring no external dependency at all. One interesting example was a proprietary CI server, a sort of hard-coded Circle/Travis/Jenkins, which had to deal with all of the standard CI challenges, such as monitoring changes in a remote repo, running multiple concurrent builds, managing load and concurrency, dealing with Docker containers, caching, running scheduled jobs, persisting state, etc. All of that was implemented as a single standalone Elixir OTP app, using nothing else on the side. I'm not aware of any other technology that would allow me to reduce the operational complexity so much.

As a smaller-scale example of the kind of simplification we can get with BEAM, take a look at my site_encrypt library, which I've also showcased in this blog post. Again, I'm not certain that such a level of operational simplification can be achieved outside of BEAM, at least not with a similar set of guarantees.

It's probably impossible to assess objectively, but I personally believe that these benefits are much more important than the downsides I've experienced. For example, when I was building a CI, I needed to interact with the GitHub GraphQL API, and had to implement the client from scratch. It took me about a day to research the docs and get a working prototype, and then a few more days to write a proper solution. In a richer ecosystem I might have been able to find a library and solve this in a matter of an hour or so. This seems like a radical time overhead, but in the grand scheme of things it was insignificant, because the bulk of the time was spent on the essential domain logic where no library could help me. Such has been my general experience in the past decade of working with BEAM. Sure, I occasionally had to reimplement some wheel manually, such as a basic client for an external service. But usually most of the work was spent on the actual domain logic, and so this occasional overhead didn't add up to anything significant.

In the end, it comes down to how each of us values the given pros & cons, and it depends on the challenges we're trying to solve. For example, I agree that BEAM is not a good fit for some domains, such as GUI apps, CLIs, fast numerical processing, etc., and I usually advise people to look for something else in such domains. But, like @josevalim, I'm not sure that any language/runtime will be good at everything. I believe that BEAM is a great fit for fault-tolerant soft real-time systems precisely because this is the thing it focuses on.

In any case, while I may disagree with some of your points, I still think you raised valid concerns, and that's always a good thing.

One final minor comment to the point from your gist:

Umbrellas as a project structure are an extremely permeable form of "isolation" and in my mind provide neutral or negative value to one's architecture. Elixir has no concept of module-level privacy or hierarchy, only public and private functions

I share your sentiments about umbrellas, and I've never used them myself. The boundary project is my attempt to tackle this issue in a different way. Feedback is welcome :slight_smile:

27 Likes

I think this is an important note to make. You mentioned you have a decade's worth of experience using the BEAM. This puts you in the same category as Jose himself, or someone at the extreme end of proficiency, capable of writing pretty much anything you want without too much of a headache.

As someone who doesn't have that type of experience, your few days of coding a custom client becomes an impossible hurdle to cross, or involves asking for help from external sources and potentially never reaching a solution. When this happens multiple times during a project, on every corner, that's when you think about dropping the language.

It's very much a deterrent when you want to integrate with something (such as Stripe) and you see an officially supported library for Python, Ruby, Node, PHP, Go, .NET and Java, but none for Elixir. Almost every service I want to integrate with doesn't have an Elixir client.

It's also a bummer seeing libraries like ex_aws stop being maintained because the sole author doesn't use AWS anymore. The forum post asking for help to find a new maintainer was around for months, but no one replied, so now one of the most popular back-ends for storing file uploads becomes a problem you need to solve as an individual developer.

In a lot of cases, to develop a web app you're on the hook for becoming a library creator instead of an application developer just to begin your project. Some folks might want that, but it's not exactly a productive environment when the goal is to go from no app to launching an app.

2 Likes

If you're judging the usefulness of a programming language by its ecosystem, then maybe that should've been a (bigger) factor in choosing it as the platform for whatever you're building in the first place. You're by no means wrong in your assessment, but Elixir is still a quite niche language. It's to be expected that you might not find whatever you need in third-party libraries, and even less so in official libraries from companies, which are often driven by a mixture of popularity and what people within the company can program. If you went just by the metric of ecosystem, I guess Ruby, PHP, or JS are likely unbeatable.

@sasajuric's point, as I understand it, is that the tradeoff of not having that ecosystem might not matter as soon as Elixir/the BEAM provides (greater) benefits in other places. And one doesn't need a decade of experience to see or benefit from them. It depends on many more factors besides experience, like how complex/big a project you're working on, how stateful your service is, the "devops story" (being able to connect to the VM at runtime and observe/debug), how many moving pieces you want to deal with, etc. And one still might come to the conclusion that Elixir is not the correct choice.

8 Likes

Although there isn't an "official" lib for Stripe (I know it's just an example), since you mentioned it: stripity_stripe is pretty much OK and well maintained. It also allows you to pass additional params into all requests or generate custom requests, so even if Stripe's API grows new key-values or new endpoints, you can use them before stripity_stripe has been updated.

I also understand the value of having libraries for interacting with common APIs, but on the other hand I think it's a bit overrated. If I were not asked specifically to use libraries for those interactions, I would personally just use the actual HTTP API. The most used/relevant ones are pretty well designed, and you'll only be using a very small subset of their functionality. Every time I have to use a lib, I have to read both the docs of the provider's official HTTP API and the docs of the lib. This is not to say libs aren't valuable; it's just that they can get out of sync, and then there you go hunting for where the mismatch is.

I think yours are valid concerns: when you're developing something, you don't want to worry about writing the interface to the HTTP API, and you want to just use something where "no one was fired for using this lib" (in the sense that it's the official, sanctioned one); writing it yourself takes a toll because handling the API's responses is not your domain problem. I still think, though, that in the overall scheme of things it's a minor part, and after you do it once, it shouldn't be that time consuming to do it twice or thrice.

Just my opinion, not sure it's worth even $0.01, but that's how I see it.

(And this is in the context of the other things the BEAM gives you almost for free, of course; all other things being equal, without such a runtime there would be no reason to go with a language that has a smaller ecosystem.)

3 Likes

To be clear, I'm not suggesting that libraries are bad. All other things being equal (or similar enough), it's of course better to have a library available than not :slight_smile:. My position is that as a backend developer I get some important benefits from BEAM that I personally value more than the number of available libraries, and that even with the lack of libraries, the end solution often seems significantly simpler to me.

To reiterate, I worked on a couple of systems that were implemented completely in a single BEAM language, without anything else used on the side. With many other languages I'd need to run multiple OS processes (i.e. microservices) tied together with other 3rd-party tools such as Redis, a message queue, a reverse proxy, or cron jobs. This is a huge amount of technical complexity that is rarely, if ever, mentioned when comparing languages.

In my view, an ecosystem can always be grown on top of good foundations, but it doesn't work the other way around. You can't fix fundamental deficiencies at the runtime layer, like no support for fault tolerance or stable latency, by adding more libraries. You can only work around such deficiencies outside of the language, e.g. by going down the microservices path, maybe reaching for k8s to assist you with that. This can certainly work, but I'd argue that it's much harder than hand-coding a couple of REST or GraphQL requests and interpreting the responses :slight_smile:

In particular, when it comes to manually integrating with a 3rd-party service, my experience is not very extensive, but in the few cases I had it was relatively straightforward and didn't require any advanced knowledge of BEAM. It boiled down to reading the API docs, picking an HTTP client library, issuing the requests, and interpreting the responses. I'd usually need to invoke only a couple of different actions, so it wasn't a lot of work. I agree that this is still far from perfect, and that it can seem insurmountable to a junior, but I don't think it's rocket science :slight_smile:

There's definitely a lot of room for improvement. Things can and should be simpler, but we can gradually get to that point, given time and effort. Which is why, in general, I place more value on the foundational layer (in this case BEAM) than on the ecosystem.

15 Likes

Great post!

Haskell has this elephant in the room too. Personally, about 50% of the invites I got for Elixir contracting jobs were blockchain-related. Technically interesting, but ethically?

2 Likes

Technology frontiers are usually pushed by lucrative and questionable motives. Streaming video was pioneered by the porn industry, and big data analysis was driven by privacy-invading social media. In the end, we all benefit.

3 Likes

I don't want to push this thread off-topic, but I don't see those things as complex in the grand scheme of things, and some of them are shared with Elixir in most web apps.

Most web apps written in any language will likely want:

  • At least a persistent database, such as Postgres
  • A reverse proxy to properly handle things like SSL termination, redirects, static file caching, country detection, basic load balancing, and many other features that nginx supports
  • A way to execute tasks in the background and periodically

Popular languages like Python and Ruby have battle-hardened tools to solve background tasks in all shapes and forms (Celery in Python and Sidekiq in Ruby).

In Elixir, chances are you'd still want to use Oban or another library, because in a realistic app you'd want a bunch of features like separate queues, retries, cancelling, uniqueness, and scheduled tasks, along with a dozen other things a robust background/queue library will offer.

The only extra complexity you may encounter in another tech stack is using Redis, but Redis is one of the least complicated things to manage from an infrastructure POV. You can leave it running untouched for months or years and it'll run like a champion. Lots of cloud providers also offer fully managed Redis servers, if you're into using services like that.

Throw in a bit of Docker and suddenly that Redis complexity kind of goes away even if you decide to self-manage it. You can literally add 5 lines of YAML to 1 file and now you have Redis up and running along with the rest of your stuff.

Redis also doubles as a cache back-end and has great support in popular web frameworks.

I haven't directly used cron in years on any web app, because periodic tasks are a solved problem with background tools like Celery. It's also especially nice because it's distributed, and you can keep your web server pretty much entirely stateless.

Everyone has their preferences, but personally I would rather use tools that thousands of folks have been using for many, many years, because they have a ton of edge cases ironed out and are super well explored and supported. I prefer working in an environment where I can solve the business needs of my apps without having to reinvent a new library from scratch every step of the way.

It's especially nice when more opinionated frameworks take care of common things too, because at least you can be confident the library or functionality won't stop being supported overnight because one person decides they are not using it anymore.

1 Like

So supposedly making a REST request is complicated and requires advanced technical expertise, but dealing with nginx, Sidekiq (which requires running an external worker), and Redis isn't? I'm gonna have a very hard time accepting that :slight_smile:

The (admittedly not fully realized) potential of BEAM is the fact that you only need to learn the programming language, and that can take you very far. This reduces the number of technologies that need to be mastered, and simplifies life for everyone on the team. Beyond that, fewer external technologies reduce the disconnect between dev, test, and prod. Perhaps I was on the wrong teams, but none of us usually ran nginx or Sidekiq locally. This led to occasional production bugs, because the stuff not running locally or on CI is the stuff that goes untested.

In contrast, Phoenix can handle many of the reverse proxy features you mentioned. It is used equally on all machines (dev, staging, prod), requires minimal extra operational overhead, and is easily testable.

If you want to obtain an SSL certificate via Let's Encrypt, take a look at site_encrypt. You can get it working in a matter of minutes, and it won't require installing anything else on the side. You use the language you normally work with, so there is no need to learn a special flavour of YAML, INI, or anything like that. That language is compiled, so syntax errors are immediately detected during compilation, while semantic errors (e.g. misspelling the domain name) can be detected in tests. Since the interface is a programming language, there's a lot of flexibility (like fetching input parameters from OS env vars or some secure store), and the stuff works equally well in local dev and test without requiring anything running on the side. Since site_encrypt periodically renews the certificate, it runs a periodic job without requiring any extra OS process or external component. Using a vanilla OTP supervision tree, the job scheduler binds itself to the Phoenix endpoint, so if the endpoint is stopped, the job will be stopped too, thus avoiding a certification attempt that is bound to fail. For more details see this post on site_encrypt and this post on periodic jobs.

Compared to using nginx + certbot this gives me simpler usage, simpler operation (fewer moving parts), reduced dev/prod mismatch, better testability, and more flexibility. To me these are very important benefits, and this is where I see a huge potential in BEAM. I've had enough first-hand practical evidence of it to be convinced. I've also had the pleasure of working on systems powered by a bunch of moving parts, and I don't feel like going back to that :slight_smile:

Just to be clear, I don't hesitate to use external components where it makes more sense. I agree that an external database is frequently needed (though oddly enough I managed to get away without it on a few occasions), but other than that I think many projects can do just fine without reverse proxies, Redises, external message queues, cron jobs, and such. I'm not saying these tools are bad per se, but I prefer exploring more lightweight built-in options and moving to these tools only when there's a justified need.

25 Likes