Each language will give you a different perspective on how to build software.
I think this may require some elaboration for neophytes. The statement holds as long as the languages cover different programming paradigms - for example, Java and Elixir cover different paradigms.
If your first language is Python and you then learn one of C#, Java, Ruby, or Go, that is more like learning a new “dialect”. I’m not implying there is no effort involved in learning new “dialects”, and it is quite possible to prefer one “dialect” over another. I’m simply stating that learning new “dialects” may not broaden your horizons as much (considering the effort involved).
In that context I think SQL is often overlooked as a representative of the declarative paradigm - instead of describing “how to get/find the data that is needed” you have to describe “what the properties of the required data are”.
SQL is often just seen as a necessary nuisance to manipulate data stuck in an RDBMS so there is often little motivation to develop more than just a superficial competence in it. But it can be helpful in thinking more about the “what” rather than the “how”, it requires you to develop your “reasoning about sets” which can also be helpful for “reasoning about types” (regardless whether you are working in a dynamically or statically typed language).
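To make the “what” vs. “how” distinction concrete, here is a minimal sketch using Python’s built-in sqlite3 module (the table and data are made up for illustration; the same declarative style applies in psql):

```python
# Demonstrating SQL's declarative style: describe the properties of the
# required data, not the steps to compute it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "ada", 120.0), (2, "bob", 35.5), (3, "ada", 80.0)],
)

# Declarative: we state *what* the result must look like -- customers whose
# combined order total exceeds 100 -- rather than *how* to loop, group, and sum.
rows = conn.execute(
    """
    SELECT customer, SUM(total) AS spent
    FROM orders
    GROUP BY customer
    HAVING SUM(total) > 100
    """
).fetchall()
print(rows)  # -> [('ada', 200.0)]
```

The imperative equivalent would be an explicit loop with a dictionary of running totals; the SQL version only describes the shape of the answer.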
Anybody using Phoenix probably already has PostgreSQL installed, so there is plenty of opportunity to use SQL inside psql. It’s a good idea to stay aware of which features are PostgreSQL extensions and which are part of ANSI SQL (i.e. non-product-specific knowledge).
I’d imagine that good SQL skills are also helpful with Ecto.
For me OO goes way back to the 1990s:
Of course Smalltalk existed even before then.
Reenskaug and Coplien’s Data, Context and Interaction (DCI) is a more modern take but seems to be ignored by the mainstream.
One problem with many OOP(-like) languages is that they were developed during a period when, due to technological limitations, data was unshared and mutated in place. During that period FP languages were at a disadvantage because they preferred immutable data, which required copying and more memory, making them significantly less efficient.
But technology has changed. Multi-core processors and multi-threading are now standard, which means that data is much more commonly shared. That isn’t a problem for FP’s immutable data, because it doesn’t change. Also, persistent data structures have made FP more memory-efficient (and memory is cheaper and more plentiful). But traditional programming languages have to take additional measures (not necessarily part of their original design) when they share data, because they mutate it.
https://twitter.com/jessitron/status/333228687208112128?lang=en
Thinking Outside the Synchronisation Quadrant
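The structural-sharing idea behind persistent data structures can be sketched in a few lines. This is a toy immutable cons list in Python (all names here are mine, for illustration), showing how an “update” produces a new version that shares the old one instead of copying it:

```python
# A toy persistent (immutable) singly linked list: "updating" it creates a
# new version that shares structure with the old one, so the old version
# stays valid and is safe to share across threads.
from typing import NamedTuple, Optional

class Node(NamedTuple):
    head: int
    tail: Optional["Node"]

def push(lst: Optional[Node], value: int) -> Node:
    """Return a new list with value prepended; the old list is untouched."""
    return Node(value, lst)

base = push(push(None, 2), 1)   # the list [1, 2]
extended = push(base, 0)        # the list [0, 1, 2]

# Both versions coexist: 'extended' reuses 'base' as its tail, so nothing
# was copied and 'base' still reads as [1, 2].
assert extended.tail is base
assert base.head == 1 and extended.head == 0
```

Real persistent structures (like the maps and vectors in Elixir, Clojure, or Scala) use trees rather than lists, but the principle is the same: new versions share most of their memory with old ones.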
One thing to keep in mind is that popularity doesn’t necessarily relate to quality - it sometimes does but often doesn’t.