The Serverless Architecture Paradigm & Elixir/Phoenix

“I skate to where the puck is going to be, not to where it has been.” — Wayne Gretzky

After a cursory look at serverless architecture, I can see this paradigm replacing current server models.

I’m in a situation where I’ll need to keep some code in “black boxes”… meaning on servers we control. This code will need to seamlessly receive inputs from, and send outputs to, other nodes on the network running on remote servers.

Serverless architecture might be just the thing for this problem.

But is it possible to integrate serverless architecture with Elixir/Phoenix currently?

Are there any solutions that would segment a Phoenix project code into these compartmentalised code snippets the serverless architecture requires?

Or is it just too early to be thinking about serverless?

Anyone with some experience here please chime in…


You can probably create a “serverless” platform in Elixir, similar to Erlang on Xen.

I’ve used FaaS/serverless in the past for things like thumbnail generation, where the main app can carry on without that data. It could be useful for ETL-style actions where you need to step through a flow of distinct steps (and I didn’t have or need something like NiFi or MuleSoft in house).

As for how I’d personally consider using Elixir in this kind of environment, I’m not entirely sure I would. The things that attract me to using Elixir for a project — GenServers, clustering, Phoenix channels, etc. — aren’t really conducive to the ephemeral servers that FaaS/serverless platforms really are under the hood; those features are more suited to long-running systems.


In general, serverless functions (they should really be called lambda functions) are short-running functions that glue together other cloud-provider functionality/services and generate/react to cloud events. So I would say this architecture does not match Phoenix, as Phoenix was designed as a long-running process.

But I suppose you could write a lambda function in Elixir.
The preferred languages are those with quick startup times, like JavaScript, Go …
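As a sketch of that suggestion: AWS has since added custom runtimes, which communicate with the platform over a small HTTP API, so an Elixir lambda is at least conceivable. The two runtime API endpoints below are documented by AWS; everything else (module name, the echo handler) is hypothetical, and BEAM startup time would still count against you on cold starts:

```elixir
# Hypothetical event loop for an AWS Lambda custom runtime written in Elixir.
# Uses only OTP's built-in :httpc client so the sketch stays self-contained.
defmodule LambdaLoop do
  def run do
    :inets.start()
    api = System.get_env("AWS_LAMBDA_RUNTIME_API")
    loop(api)
  end

  defp loop(api) do
    # Long-poll the runtime API for the next invocation event.
    next_url = to_charlist("http://#{api}/2018-06-01/runtime/invocation/next")
    {:ok, {_status, headers, body}} = :httpc.request(:get, {next_url, []}, [], [])
    request_id = :proplists.get_value(~c"lambda-runtime-aws-request-id", headers)

    # Do the actual work with a plain function -- no GenServer, no cluster.
    response = handle(body)

    # Report the result back to the runtime API, then wait for the next event.
    post_url =
      to_charlist("http://#{api}/2018-06-01/runtime/invocation/#{request_id}/response")

    :httpc.request(:post, {post_url, [], ~c"application/json", response}, [], [])
    loop(api)
  end

  # Placeholder: echo the event back; replace with real work.
  defp handle(event), do: event
end
```

Note that the loop itself is long-running only within a single container’s lifetime — the platform may freeze or destroy it between invocations, which is exactly why the clustering features mentioned above don’t carry over.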

AWS Lambda limits

We have some forum threads about serverless.


Again, just opinions …

“I skate to where the puck is going to be, not to where it has been.” — Wayne Gretzky

And while there is only one Stanley Cup, there are countless “rewards” in the software industry involving an untrackable number of “pucks”. So as an individual one has to eventually choose which “puck” is the most relevant. And ultimately even educated guesses at future outcomes will often still be incorrect.

I’m in a situation where I’ll need to keep some code in “black boxes”… meaning on servers we control.

And right there you are already giving up the greatest benefit of serverless: being able to delegate the overhead and cost of infrastructure provisioning and administration to the vendor. If you are already paying to build and run your own infrastructure then you could easily be in a position where the benefits of serverless are outweighed by the disadvantages.

In a monolith all the complexity is hidden on the inside (and if you don’t mind your bounded contexts, you’re going to be in trouble), micro-services manifest your (chosen) bounded contexts (hopefully you chose wisely) but relocate some of the complexity to service interconnections (monitoring, etc.). Serverless creates (exposes) even more “interconnection complexity” between the user functions and vendor services - and it is possible to create a Serverless Monolith because here again, bounded contexts are logical boundaries, not “real” ones.

NDC Oslo 2017: Serverless - reality or BS - notes from the trenches (1:00:05)
Lynn Langit is a (vendor-sourced) serverless advocate. She mentions that serverless release and version-control tooling still has a long way to go.

Complexity is just being shifted around. While complexity of infrastructure management is now the vendor’s problem there still is the configuration complexity of managing the interconnections between the functions and the foundational vendor services (storage, authentication, etc.) which themselves may have customer specific service configurations. So while serverless may reduce the set of operational concerns for the customer, the remaining concerns will now likely fall onto the customer’s development team.

The other issue is understanding your vendor’s offerings in order to get the best value. Worse, you have to know the competitors’ offerings as well, because they may have a better deal for one part of your architecture. For example, aCloud uses Firebase on Google; API Gateway, S3, DynamoDB, and Lambda on AWS; and Auth0 as well.

Finally, serverless has a tendency to make the frontend more complex: being “server-less”, there is some pressure to move some of the orchestration/coordination effort (traditionally handled by backend servers) to the frontend. In some ways the rise of serverless reminds me of the rise of 4GL tooling and thick clients in the 1980s–90s.

I can see this paradigm replacing current server models.

Outright replacement would surprise me - eating into the market, sure. It’s yet another approach that makes different tradeoffs. One target market is the fast growing startup, with a relatively simple system (where the release and versioning complexities are less of an issue), not wanting to pay for idle time, not wanting to be distracted by infrastructure concerns but still wanting to deal with uneven demand on their service (monitoring the costs is still essential). Example: movivo

GOTO 2017 • When should you use a Serverless Approach? • Paul Johnston (40:53)

The other target market is the large corporation wanting to try out something quickly without interfering with (or waiting on) day-to-day operations (a “startup project” of some description).

GOTO 2017 • Serverless + Modern Agile: A Match made in Silicon Heaven • Mike Roberts (46:53)

But is it possible to integrate serverless architecture with Elixir/Phoenix currently?

I see the BEAM as having an entirely different value proposition. It was designed as a foundation for building fault-tolerant, scalable, soft real-time systems with requirements for high availability. As such it seems more suited to a “do more with less” approach to get the most out of your infrastructure and devops cost. Serverless seems to be more about “change your functionality at will at any time” (whatever the consequences) while you (mostly) only pay for the resources that are in active use.

So by all accounts serverless will be “big business” for some and there will be successful ventures that use serverless. But in the end it’s just yet another different way of doing things. Getting in on serverless could be lucrative but ultimately one is at the mercy of the major vendors (and some businesses have no problem with that) who have been fuelling the hype behind it.

Personally I’m not sure at what scale in-house serverless with products like OpenWhisk makes sense (unless you are intending to sell your unused capacity to others) because it seems to me that complexity is piling up in the wrong place (which could create a wonderful opportunity for tool vendors just like back in the SOA days :face_with_raised_eyebrow:).


Am I wrong, or would serverless architecture simplify segmenting the black-box code locally on our own servers, while scaling the rest of the code out to third-party vendors?

Scaling is going to be the big issue on this project. The more flexibility we have for scaling the better.

I really don’t want to have to scale our own hardware to support all nodes & code just to maintain a small sliver of the codebase in a blackbox.

I don’t think so.
Serverless architecture is about scalable cloud services and lambda functions.
It is all about public cloud or private cloud.

I assume you can keep some code running on your local servers if you can’t put some of your data (and the code that touches it) in the cloud. In that case you will have a hybrid cloud. But if you can put all your data in the cloud, I don’t see a point in running your own servers.

When a lambda function needs to run, the cloud spins up a container with the function’s code.
To spin up the container, the platform uses an orchestration framework to decide which server the container should run on.
After the function executes, the container can be reused or destroyed.
But in general you don’t need to worry about this; the cloud will take care of it.
You pay only for the function’s execution time.
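For illustration only, here is roughly what declaring such a function looks like with the Serverless Framework — the service name, handler, and values are made up, and only the general shape (you declare the function and its triggers; the provider handles the containers) is the point:

```yaml
# Hypothetical serverless.yml sketch -- names and values are illustrative.
service: thumbnailer

provider:
  name: aws
  runtime: nodejs8.10   # a fast-startup runtime, as suggested above
  memorySize: 512       # MB allocated to each container
  timeout: 30           # hard cap on execution time, in seconds

functions:
  generateThumbnail:
    handler: handler.generateThumbnail
    events:
      - s3: uploads     # the platform spins up a container per event
```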

Examples of scalable cloud services include:
Google BigQuery
Google Cloud Spanner (a scalable SQL database)

I would suggest watching this video


This is semi-relevant to this topic. I came across the CloudI project a while back and played with it a little bit. It’s an interesting project and actually kinda cool. It may be useful to someone who comes across this topic.


I dislike the serverless architecture, HOWEVER I do like using containers - just on top of servers I set up. It lets me get the best of both worlds :slight_smile:

In my mind for this approach to work, a lot of careful consideration has to be put into the implementation in order to steer clear of any scenarios where the shortcomings of either approach start compounding one another.

For example, Lambda charges are based on 100 ms time slices (AFAIK) so your vendor charges could easily mount if your in-house skeleton infrastructure run by a skeleton crew is slow to respond to your Lambda function requests (especially during peak demand).
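To make that pricing concern concrete, here is a back-of-the-envelope sketch. The per-GB-second price and memory size are assumptions for illustration, not quoted AWS figures; only the rounding-up-to-100 ms behavior reflects the billing model described above:

```elixir
# Back-of-the-envelope Lambda cost model (illustrative numbers only).
# Billing rounds each invocation up to the next 100 ms slice, so latency
# added by a slow in-house backend is paid for on every single call.
price_per_gb_second = 0.0000166667   # assumed price -- check your vendor
memory_gb = 0.512                    # assumed 512 MB function

cost = fn duration_ms, invocations ->
  billed_ms = Float.ceil(duration_ms / 100) * 100
  invocations * (billed_ms / 1000) * memory_gb * price_per_gb_second
end

cost.(120, 1_000_000)   # fast backend: billed at 200 ms, ~ $1.71
cost.(900, 1_000_000)   # slow in-house backend: billed at 900 ms, ~ $7.68
```

Same workload, same code — a backend that responds in 900 ms instead of 120 ms makes the function bill roughly 4.5× larger.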

Now I have no doubt that a crack team with the appropriate budget can handle such an in-house setup but if you already need/have them in place, they likely could also set up a more cost effective (and sufficiently elastic) PaaS solution for the remainder of the architecture. And it’s here where Elixir could potentially help.

Benefits of Elixir: How Elixir helped Bleacher Report handle 8x more traffic

“On our monolith we needed roughly 150 servers to power the more intensive portions of BR. Following our move to Elixir we’re now able to power those same functions on five servers and we’re probably over-provisioned. We could probably get away with it on two,” Marks says.

So while the hybrid-serverless approach is certainly possible, I wouldn’t expect it to necessarily ”simplify” matters.

Scaling is going to be the big issue on this project.

Serverless is supposed to elastically deal with demand on your application. The potential challenge I see is when the serverless application grows to become “large-scale” - I suspect that it’s going to require some serious design discipline to keep it from becoming one giant hairball.
