I have a Phoenix 1.2 app running sweetly in Heroku, thanks to the great explanations on the official Heroku guide.
Now I want to be prepared to scale up dynos, and I’m quite confused after reading this question on StackOverflow.
My app uses channels, and launches a background process (no Redis, just in-memory).
Will the channels work fine if I scale up dynos?
Will the worker defined in my application’s start callback be duplicated if I have 2 dynos?
I’m confused because the StackOverflow post mentions using
phoenix_pubsub, which my app already uses - but in my case I don’t have any Redis set up.
I just saw that PubSub uses the PG2 adapter by default, so I see there’s no need for Redis then.
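For reference, this is roughly what the default PubSub setup looks like in a Phoenix 1.2 endpoint config - the MyApp names here are placeholders for your own app:

```elixir
# config/config.exs
config :my_app, MyApp.Endpoint,
  url: [host: "localhost"],
  # PG2 is the default adapter; it relies on distributed
  # Erlang, so it needs the nodes to be clustered.
  pubsub: [name: MyApp.PubSub,
           adapter: Phoenix.PubSub.PG2]
```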
Do I need other things to take into consideration before scaling up dynos?
Unfortunately it is not possible to use the PG2 adapter on Heroku, because Heroku dynos cannot be configured to form an Erlang cluster.
Yes, if you start your application on 2 dynos the worker will be started twice.
Heroku is not an ideal environment for stateful applications.
I see, thanks @benwilson512 - then it’s just a matter of using the Redis adapter instead of PG2?
And about workers being started twice - how can this be solved on Heroku, if it can?
It is, although that will not scale as well as PG2, because Redis becomes a single bottleneck and point of failure.
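Switching adapters is roughly a matter of adding the phoenix_pubsub_redis dependency and pointing it at Heroku’s REDIS_URL. A sketch - the MyApp names and the version requirement are assumptions, so check the package docs for the current release:

```elixir
# mix.exs
defp deps do
  [{:phoenix_pubsub_redis, "~> 2.1"}]   # version is an assumption
end

# config/prod.exs
config :my_app, MyApp.Endpoint,
  pubsub: [name: MyApp.PubSub,
           adapter: Phoenix.PubSub.Redis,
           # Heroku Redis add-ons expose the connection URL here.
           url: System.get_env("REDIS_URL"),
           # Each node needs a unique name; the DYNO env var
           # (e.g. "web.1") works for that on Heroku.
           node_name: System.get_env("DYNO")]
```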
I don’t use Heroku so I can’t say for sure how one might ensure one worker. One general approach is that if all nodes can get a list of what nodes exist, and know which node they are, they only start the worker if they’re the first node on the list. Fancier and more reliable approaches use consistent hashing algorithms to map identifying data unique to each node to some set of nodes which are supposed to run workers.
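The first approach could be sketched like this - MyApp.Worker is a placeholder, and note it only helps once the nodes are actually clustered, which plain Heroku dynos are not:

```elixir
defmodule MyApp.SingletonWorker do
  # Start the worker only if we are the first node in the
  # sorted list of all known nodes (including ourselves).
  # Every node runs the same check, so exactly one of them
  # starts the worker.
  def maybe_start do
    first =
      [Node.self() | Node.list()]
      |> Enum.sort()
      |> hd()

    if Node.self() == first do
      MyApp.Worker.start_link()
    else
      # :ignore lets a supervisor skip this child cleanly.
      :ignore
    end
  end
end
```

This is the naive version - it doesn’t handle the first node going down mid-run, which is where the consistent-hashing approaches come in.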
Heroku can be a fine platform for Elixir; hex.pm is still on Heroku, last I checked. However, if you’re trying to use Erlang cluster features, or even just more stateful features, Heroku’s limitations are going to start to be a pain: the daily reboots, the lack of clustering, etc.