Just quickly looked at the first 10 results for “Elixir” on indeed.com. And there are indeed many from Berlin, but they all require relocation and don’t support remote work, which doesn’t work for me personally.
Yeah, perhaps between late 2017 and now Elixir experienced a kind of explosion in Germany, which is great for me, as I didn’t actually expect that before I decided to come to Germany in the first place.
Remote is a bit harder, but there are companies that have remote team members, even if they don’t explicitly say so in the job description. From what I see, such companies mostly have Americans who work from the US, maybe just because most people who already live in Europe don’t necessarily have a reason to want to work remotely. If those companies can arrange for American workers to work remotely, then it must be even easier if you live in the same time zone.
A rewrite of an internal project I was working on was canned, partly as a result of my campaigning that an off-the-shelf product would be a better use of time. On one hand it was unfortunate that I was heard nearer the end. On the other, it was nice to have the opportunity to chat with the Elixir community, use the language, and grow as a person.
At the moment I don’t have anything I can practically use Elixir for. Personal projects don’t really align right now.
I remember getting into the community and wanting to do something with Elixir. Through community work I was fortunate enough to get to know some seasoned Erlang developers, and I remember bouncing ideas off them, and they always seemed to shut them down:
«how about I build X?»;
«nah, X; we are not well suited for X»;
«well, what are we good for then?»
«…telephone switches!»
Can I ask what your general interests are when it comes to your personal projects?
The counters to “why use another language” may be worth discussing. The issues I’ve encountered seem to be more cultural or educational:
- We don’t have time
- Node is concurrent!
- Rails can do that (with no investigation into real differences)
- We couldn’t hire
- Nobody here knows it
While the counterarguments aren’t hard, I think the real issue is learning to be winsome/persuasive in the midst of nonstarters, lack of education, singularly-focused projects, fear-based decision making, etc.
Your point here is the key: I don’t want a single node or process to handle a large number of system activities.
I want to build nodes that are focussed on as small a number of activities as possible - ideally 1; enabling me to test and deploy whilst considering only a small number of interactions.
I want those nodes to be as ephemeral and friable as possible. I want them to be stateless and anonymous. I want to run multiple parallel nodes so that the cessation of one or some is immaterial to overall system health.
I don’t want to provision and pay for redundant resources, so I want to be able to scale or replace nodes precisely as required. The bigger my service, the bigger my unit of scaling has to be, the more expensive it is, and - to some degree - the slower nodes are to spin up.
Microservices, “nanoservices”, FaaS, autoscaling, and fast spin-up of new nodes give me enormous traction and allow me to externalize most of the concerns we’re discussing here; the more I decompose, the more effective these techniques become.
Can I build everything that way? Absolutely not. Can I build most things that way? Absolutely.
You can disagree with me - I know you’re ferociously smart and have your own experiences - but there’s a reason that people on this forum keep coming back to the questions of “why can’t Elixir get traction in the enterprise” and “why won’t big companies use Elixir”. It is because the things that were once cool about the BEAM are now merely interesting, and the things that are now cool about the BEAM - like the splendidly fast startup time, where it crushes the JVM and which is crucial to effective autoscaling - don’t seem to be that useful to the current community.
I really like Elixir - I wouldn’t be here if I didn’t - and I believe it has tremendous potential, but the current vector is focussed on use-cases outside my field of interest. And that’s OK; I’m not telling you to do anything different. If it works for you, I’ve got no complaints at all.
The important point here, surely, is that you have constraints which seem to allow you to build like that. I feel there are other use-cases out there where those do not apply, like “smaller” shops, which can save on dev-ops overhead and infrastructure cost, or basically anything involving IoT devices. I also feel there is a scale of granularity where OTP can give you fault tolerance but where having many nodes would just be wasteful, e.g. if you have a lot of processes that should work in isolation. If you have a node per process, that’s a lot of overhead for containerization and such, while a single BEAM instance, or a handful of them, might handle it just fine with the same or at least a similar level of error isolation.
I think you’re 100% right. I’m talking about “enterprise” concerns, where we’ve thousands of engineers, deploying tens of thousands of different apps across hundreds of thousands of nodes.
When all your code can’t fit into one node, you need to start breaking it down, and - largely - the benefits are to be had at the ends of the spectrum: one large application has benefits, and many small apps have benefits. Many large apps give you all of the drawbacks of both models and few of the benefits.
Do you use a separate node for each connected client, each database connection, each background job, each distinct piece of state managed? If not, then you’re running multiple activities on a single node, and run the risk of a single corrupted or overloaded activity taking down a larger surface of the system.
If yes, then you’re using a couple of orders of magnitude more resources compared to what you could use with Erlang. Maybe, as an enterprise, you can afford it, but many of us don’t work in big companies and don’t have those kinds of resources at our disposal.
Going beyond the price of resources, spinning up an Erlang process takes much less time than spinning up an entire node, which leads to a more responsive system. The solution is also technically more uniform, requiring no knowledge beyond what we need for our main application code, which makes it easier for everyone on the team to work on each part of the system.
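To put that contrast in concrete terms, here’s a minimal sketch (plain Elixir, nothing outside the standard library) of starting a hundred thousand isolated processes in a single BEAM node. The count and timing are illustrative only, not a benchmark:

```elixir
# Spawn 100_000 isolated BEAM processes in one node and time it.
# Each process is its own failure domain; none of this requires
# containers, extra VMs, or additional machines.
{micros, pids} =
  :timer.tc(fn ->
    for _ <- 1..100_000 do
      spawn(fn ->
        receive do
          :stop -> :ok
        end
      end)
    end
  end)

IO.puts("spawned #{length(pids)} processes in #{div(micros, 1000)} ms")

# Tear them down again.
Enum.each(pids, &send(&1, :stop))
```

On typical hardware the spawn loop finishes in well under a second, which is the point being made above: the unit of scaling inside the BEAM is cheap enough that you rarely think about it.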
I’m not asking these questions myself, and, frankly, I’m not at all surprised that enterprises don’t use Elixir/Erlang, nor do I expect things will change here. I personally feel that enterprises often work by “curious” logic, to say the least. For example, many years ago, I knew of an enterprise which was paying an arm and a leg for a streaming solution which could handle at most 1000 simultaneous users. The limit actually caused them business problems. I had already made a similar thing with Erlang and used it in production successfully for some time. It easily handled many more users on much smaller hardware. When a friend working at that company raised the possibility of using my Erlang-based solution, which they would have gotten at a fraction of the price they were paying for the insufficient one, they refused it immediately.
Lol, I didn’t think anything bad at all, honestly I didn’t see the connection from overlord to overmind. ^.^;
I’m not someone who starts by thinking the worst of a discussion; I really love discussion and debate and all, it gets my mind revving.
Eh, I’ve dabbled a touch in Crystal, but I was not even remotely impressed by it… Ignoring its syntax (which I’m not a fan of, but eh…), it has some really bad typing limitations and no parallelism at all: if you want to run code in parallel, you have to fork multiple running copies of the process, like Python. If you want a language that is Crystal-ish, then I’d recommend Nim. It has a similar syntax, but everything it does feels implemented ‘better’, including proper parallelism (Crystal only has concurrency, no parallel capabilities), some fantastically written low-level libraries, etc… Though if you want to go all in, I’d say use Rust anyway; that way you can toss the GC and maximize both your performance and safety (but it’s ‘quite’ a syntax change).
Go is fast, but coding in it feels as horrible as coding in C itself, and somehow it ends up ‘more’ unsafe than C. >.<
You can even use Go-like concurrency in C via libdill/libmill (depending on whether you like a macro-driven Go-like interface or a C-style interface; same backend though).
I’d definitely pick Rust over Go though, any day, for a large variety of reasons that I’ve enumerated elsewhere on these forums.
I think a good pattern for choosing whether I’d use Elixir or not is how important performance, reliability, and the ability to update code are. If performance and reliability are important and updating code is hard, then I’d pick Rust; otherwise, if updating code is simple, I’d pick Elixir. If performance is important but reliability less so, I’d probably pick OCaml to minimize my work. If performance is not important, then I’d probably pick either Python or OCaml, depending on various things like my mood. With a smattering of C++ everywhere, as that always seems to happen to me. ^.^;
It depends. If I have to perform the same activity many times a second, then I probably have a number of different nodes running it in parallel. Might each of them run the same activity multiple times in parallel? Yes, but I still want to scale that down as much as possible, and I want them to be performing a single type of activity as much as possible. There are fixed overheads with scaling down - a floor below which it is hard to go - but that floor is moving down all the time. Look at AWS Lambda, where functions are single-threaded and have an average lifespan of around a second. So there’s a balancing act between scaling down and cost, and trade-offs to be made.
For me to be able to spin up another process inside an existing VM I have to have spare capacity in that VM to do so. That means I had to buy capacity and leave it sitting around, just in case… which I don’t want to do because over-provisioning is expensive.
I want to scale to exactly the needs I have at that point in time and pay for only that. Maybe there will come a time when I can add and remove system resources on the fly and have applications respect the changes, but that isn’t today. And remember, I don’t want to start a single extra process, I want to start at least hundreds of them, maybe several orders of magnitude more, and have them actively do stuff that consumes resources - IO, CPU, memory, disk… whatever. All that capacity needs to be available as well on whatever node I’m putting things on - that means I need to over-provision all of those things, and a node will only ever manage to utilize one of them.
Scaling nodes up and down, I can reuse the resources without reusing the application. Of course, I get that capacity on demand from a cloud provider, so I can hand it back and stop paying for it when I don’t need it.
I also need deterministic performance, within some general tolerances. I don’t want to have a CPU-constrained process running and then have someone else deploy CPU-constrained code to the same node. I need transparent capacity consumption and planning.
When I do spin up nodes, I fully provision their capacity. If I deploy a node that can support 50 threads, I create them at startup. Within a node I don’t have to wait for anything, so, startup time excepted, the time it takes to spin up a process isn’t so important to me. When I do need to spin up new nodes, the BEAM is great because it starts quickly, making that more efficient. Not as fast as Go or Node, but close enough that I don’t care.
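For what it’s worth, the “fully provision at startup” approach maps naturally onto OTP too. Here’s a rough sketch (module and worker names are hypothetical, and an `Agent` stands in for a real worker) of starting a fixed pool of 50 workers when the node boots:

```elixir
defmodule Pool do
  use Supervisor

  # Start a fixed-size pool of workers up front, so all capacity is
  # provisioned at boot rather than on demand.
  def start_link(size), do: Supervisor.start_link(__MODULE__, size, name: __MODULE__)

  @impl true
  def init(size) do
    children =
      for i <- 1..size do
        # Each worker here is just an Agent holding :idle; a real pool
        # would use a GenServer doing actual work.
        Supervisor.child_spec({Agent, fn -> :idle end}, id: {:worker, i})
      end

    Supervisor.init(children, strategy: :one_for_one)
  end
end

{:ok, _sup} = Pool.start_link(50)
IO.puts("workers running: #{length(Supervisor.which_children(Pool))}")
```

The supervisor also restarts any worker that dies, so the pool stays at its provisioned size without intervention.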
Lots of people make poor and curious decisions, small and large. One thing to consider: the goal of an enterprise isn’t to save money, it is to make money. Saving money is nice, but the best way to save all the money is to shut down - overhead reduced to zero. Making money is about capitalizing on opportunity whilst managing risk.
Every company has limits, quite often those limits are people and attention. There are only so many things any group of people can work on in parallel, and that number doesn’t scale linearly. You can’t exceed those limits just by hiring more people. If companies can trade cost for opportunity in a profitable way, they’re going to do that. If companies can eliminate risk by manageable expenditure, they’re going to do that. Nothing else makes sense.
Obviously, I have no idea what the thinking process of the people you describe was. Maybe they were just plain wrong. But perhaps, when you considered the cost of the existing solution, plus the cost of the problems it caused, and compared it to the cost of your solution, plus the risks of dealing with a different vendor, plus the time and energy it would take, and the alternative opportunities they could be trying to capitalize on… it just didn’t make sense. Perhaps it just didn’t make sense in terms of risk and opportunity, even if it notionally made sense in terms of cost.
I was an IT engineer working with SAP ABAP, then Ruby on Rails. After getting frustrated with the GIL in RoR, I moved to Elixir & Phoenix, learning from the bottom up.
Currently I’m running a small new website built on Elixir 1.7 and working to grow it: https://nohogu.com
Here’s my short story of introducing Elixir.
But now there’s a serious problem: I started looking for a job, and demand for Elixir is rare in South Korea. I have to decide whether to learn Node.js quickly or to find a remote Elixir job.
I believe the trend of slimming down the server side is the main reason why Elixir isn’t growing. Elixir has a steep learning curve, while there’s far less demand. The lack of proven services is a problem for the Elixir world too.
Elixir and Phoenix provide the same server-side patterns as Rails, .NET, and all the various Java application servers, so skills learned here are widely applicable. Client-side stacks are extremely popular, but there is a ton of legacy code running on server-side stacks. I would not worry about growth just yet. The work required to migrate all that legacy server-side code to either newer server-side stacks or client-side alternatives is way behind, and Elixir and Phoenix are pushing the envelope for server-side stacks. I believe there is plenty of time for the enterprise server-side world to catch up to Elixir and Phoenix.
We do have some very good success stories involving Elixir. We can show some pretty impressive numbers, and often the community does. Perhaps we should make more noise about these success stories.
I remember that the WhatsApp story was going around when I got into the Erlang/Elixir community. I wouldn’t say that was the deciding factor for me, but for the corporate world it might be.
Don’t forget about things like RabbitMQ. Unfortunately, most companies don’t know that it’s written in Erlang.
I don’t live in Croatia, but one of the lead devs on our team does. We’re a Ruby shop looking at moving over to Elixir for some parts of our stack. My team-mate loves Elixir too; it’s interesting to see a few more people in Croatia getting into it! Perhaps I can make it over for a meetup.
I thought OCaml had strong static typing? Or you mean something else?
A strong type system doesn’t help you when your database is down. That sort of thing…I think
This is most definitely true: Erlang/Elixir/OTP is not a silver bullet and is not good for everything; that was never its intention. Get over it. Accept that for most systems different parts have different requirements, so don’t try to have one language. Use the right tool for the job.
Now, Erlang/Elixir is a very good concurrent “glue” which you can use to bind all the different bits together.
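As a tiny illustration of that glue role, here’s a sketch that fans out to an external program from Elixir, runs the calls concurrently, and collects the results in order. `echo` is just a stand-in for whatever external tool you’re binding together, and it assumes a Unix-like system:

```elixir
# Run several external commands concurrently and gather the results.
# Task.async_stream preserves input order in its output by default.
results =
  ["one", "two", "three"]
  |> Task.async_stream(fn word ->
    {out, 0} = System.cmd("echo", [word])
    String.trim(out)
  end)
  |> Enum.map(fn {:ok, line} -> line end)

IO.inspect(results)  # ["one", "two", "three"]
```

Each external call runs in its own BEAM process, so a slow or crashing command affects only its own task, which is exactly the “bind different bits together” strength being described.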
Static typing and the OTP-style of failure handling are two different things.
It’s an awesome glue. ^.^