I’m considering FLAME for a project where I need workers that pull in a big dependency.
I don’t need it at all on the webserver, so ideally I’d build one container for the server and another for the pool. Is that possible (or planned)?
I’d try to handle that a layer deeper first. Check whether the container setup and image layering amortise the size of the big dependency well enough before pulling the complexity up into the application layer. I’m not sure where, but I’ve seen Chris suggest putting ML models into Docker images running on Fly.
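To illustrate the layering idea: if the big dependency lives in its own early layer, before the frequently-changing app code is copied in, rebuilds and machine boots reuse the cached layer. A rough sketch (the model URL and paths are made up):

```dockerfile
FROM elixir:1.16 AS app

# Hypothetical: fetch the large dependency first, in its own layer,
# so it is cached and only re-pulled when it actually changes.
RUN mkdir -p /models && \
    curl -fsSL https://example.com/big-model.bin -o /models/big-model.bin

# App code changes often; keep it in later layers so edits here
# never invalidate the heavy layer above.
WORKDIR /app
COPY . /app
RUN mix deps.get && mix release
```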
No, not separate ones – one image running in all contexts. Let the app decide not to start BigDep outside of FLAME runners. I’d try that approach first and split things up only if it actually turns out to cause problems. A homogeneous cluster is generally simpler to operate than a heterogeneous one.
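A sketch of what “let the app decide” can look like, assuming FLAME’s `FLAME.Parent.get/0` (which returns the parent ref on runner nodes and `nil` elsewhere); `MyApp`, `MyAppWeb.Endpoint`, and `BigDep.Supervisor` are placeholder names:

```elixir
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children =
      [
        MyAppWeb.Endpoint,
        {FLAME.Pool, name: MyApp.HeavyRunner, min: 0, max: 5}
      ] ++ big_dep_children()

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end

  # Only boot the heavy dependency when this node was started
  # as a FLAME runner; the webserver never pays for it.
  defp big_dep_children do
    if FLAME.Parent.get(), do: [BigDep.Supervisor], else: []
  end
end
```

Same image everywhere, but the supervision tree diverges per context at boot.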
But I’ve also just seen that the FLAME Fly backend supports an :app option, which seems to be the way to have FLAME nodes boot from a different application than the one that starts them: FLAME.FlyBackend — flame v0.1.12
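Assuming the :app option works the way the docs suggest, pointing the backend at a separately deployed worker app would look roughly like this in `runtime.exs` (the app name is hypothetical):

```elixir
config :flame, :backend, FLAME.FlyBackend

config :flame, FLAME.FlyBackend,
  token: System.fetch_env!("FLY_API_TOKEN"),
  # Hypothetical second Fly app whose image ships with BigDep baked in;
  # by default the backend reuses the current app's FLY_APP_NAME.
  app: "my-workers-app"
```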