FLAME – rethinking serverless

yep, and it’s actually a big deal all by itself

thanks for the notes

Some implementation details that may not be shared in the official blog post

This is still very beta, but so is FLAME I guess. :wink: A backend to use FLAME within a Kubernetes Cluster:
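For anyone wondering how it plugs in: a FLAME backend is just the `:backend` option on a pool. Here is a minimal sketch using FLAME’s standard pool options; `MyK8sBackend` is a placeholder for the Kubernetes backend module, not its actual name:

```elixir
# In your application's supervision tree. :name, :min, :max, :max_concurrency,
# :idle_shutdown_after, and :backend are standard FLAME.Pool options;
# MyK8sBackend stands in for the Kubernetes backend linked above.
children = [
  {FLAME.Pool,
   name: MyApp.JobRunner,
   min: 0,
   max: 10,
   max_concurrency: 5,
   idle_shutdown_after: 30_000,
   backend: MyK8sBackend}
]
```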

Is there a way to know the progress of the job that the short-lived application is doing?

I think that is up to you - e.g. use Phoenix PubSub or similar to signal back to a LiveView/channel, etc.
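For example, a minimal sketch (the pool name, module names, and topic are made up): the anonymous function passed to FLAME.call/2 runs on the short-lived runner, but because the runner joins the cluster, Phoenix.PubSub broadcasts still reach the parent app.

```elixir
defmodule MyApp.Thumbnailer do
  # Heavy work runs on a FLAME runner; progress is broadcast back to the
  # parent app over Phoenix.PubSub as the job moves along.
  def generate(video_url, job_id) do
    FLAME.call(MyApp.FFMpegRunner, fn ->
      broadcast(job_id, {:progress, 0})

      path = Path.join(System.tmp_dir!(), "#{job_id}.mp4")
      {_out, 0} = System.cmd("curl", ["-fsSL", "-o", path, video_url])
      broadcast(job_id, {:progress, 50})

      # ... run ffmpeg on `path`, upload results, etc., broadcasting as you go ...
      broadcast(job_id, {:progress, 100})
      :ok
    end)
  end

  defp broadcast(job_id, msg) do
    Phoenix.PubSub.broadcast(MyApp.PubSub, "job_progress:#{job_id}", msg)
  end
end
```

The LiveView or channel then subscribes to the same "job_progress:#{job_id}" topic and handles the {:progress, n} messages in handle_info/2 to drive a progress bar.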

Aha right, since FLAME shares the same global process space, that is possible.

I love the FLAME model - for the potential security advantages as much as for the elastic scaling.

One great thing about traditional serverless functions is that it is (relatively) easy to reason about security boundaries and be darn sure you have enforced them.

Let’s say I am running a script to transform a raw input file to load into a database. When I accept an upload from a user, my chest quakes in fear: who knows what zero-day an attacker has found to blow up my app via a corrupt Excel file that exploits some bug in an Excel-parsing library? And what if I actually want to be able to run user-defined code - say, a Python script? How can I mitigate unknown attack vectors? In a Node app I’d reach for total encapsulation: have the user upload the potentially malicious file to a GCP or S3 bucket; run a well-isolated Lambda to transform it; and pass sanitized input to my main backend.

It’s easy to limit a Lambda’s attack surface, with hard boundaries enforced at the infrastructure level. For example, I can pass my Lambda the input file through a presigned read-only URL, so it doesn’t need storage bucket privileges. I can also give it a presigned URL to upload results, or have it call the backend through a POST authenticated with a single shared secret. Access to the app’s network, database, etc. is easy to NOT grant.
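In Elixir terms, a minimal sketch of that handoff (assuming ex_aws / ex_aws_s3 are installed; the bucket and object key are made up): the parent app mints a short-lived read-only URL, and the isolated worker only ever sees that URL, never bucket credentials.

```elixir
# Sketch: mint a presigned, read-only download URL that expires in 5 minutes.
# Uses ex_aws and ex_aws_s3; the bucket and object key are placeholders.
{:ok, readonly_url} =
  :s3
  |> ExAws.Config.new()
  |> ExAws.S3.presigned_url(:get, "user-uploads", "raw/upload.xlsx", expires_in: 300)

# Hand only `readonly_url` to the isolated worker (today a Lambda, tomorrow
# perhaps a locked-down FLAME runner); it can fetch that one file but cannot
# list, write, or touch anything else in the bucket.
```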

It strikes me that FLAME is well-positioned to meet both goals: convenient and simplified access to the app and any requisite resources (and simplified testing, etc. as Chris has noted), while also providing a very simple API to dramatically limit the attack surface during vulnerable operations.

From a user’s perspective, a pretty modest extension to the FLAME API could add security boundary profiles as an abstraction that is visible within the app and enforced by the backend when the runner spins up. The profile details would get added to the Runner.new() API, similar to how the hardware opts parameter works. This could let a user limit access to secrets, alter the initialized environment variables, and limit network and volume access.

With these low-level additions to the API enforced by the FLAME backend’s implementation of Runner.new(), we could easily do powerful things to markedly improve security. For example, I could set up one FLAME profile that configures a Postgres user with read-only access to the main database and write access to a staging database for ETL jobs. For that job I’d specify the FLAME profile that sets the environment variables for that limited Postgres user.
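To make that concrete, here is a purely hypothetical sketch of what such a profile might look like if it rode along with the pool config. Note that :security_profile is not a real FLAME option today; the name/min/max/backend options and the Fly hardware opts are the real parts.

```elixir
# Hypothetical sketch: :security_profile does NOT exist in FLAME today.
# The idea is that it travels with the pool config, like backend-specific
# hardware opts do, and the backend enforces it when the runner boots.
{FLAME.Pool,
 name: MyApp.EtlRunner,
 min: 0,
 max: 5,
 backend: {FLAME.FlyBackend, cpu_kind: "performance", memory_mb: 4096},
 security_profile: [
   # swap in the limited Postgres credentials for ETL jobs
   env: %{"DATABASE_URL" => System.fetch_env!("ETL_READONLY_DATABASE_URL")},
   # no extra secrets or volumes, and no outbound network beyond the two databases
   secrets: [],
   volumes: [],
   network: [allow: ["main-db.internal:5432", "staging-db.internal:5432"]]
 ]}
```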

Or to run a user-defined script, I might do that in a FLAME with access to almost none of my resources.

I feel like it would be possible to add all this security boundary functionality almost “for free” with Chris’s FLAME model.

Note - I’m sure others are thinking this as well, but I didn’t see it here. Is this under discussion somewhere else? I’m new to Elixir, coming most recently from a Node/TypeScript/React universe.

Anyway, having created plenty of GCFs, encapsulated Cloud Run services, and even maintained a fleet of privilege-free EC2 instances for the sole purpose of ensuring that I have infrastructure-level enforcement to run potentially malicious user scripts, I can see dispensing with almost every single one of them as FLAME evolves. It’s a great idea.

Has anyone tried building an ECS backend for FLAME yet?
