So I have a Phoenix app (a basic chat app) and I was able to deploy it to two Amazon EC2 instances, but for some reason I am unable to connect the two nodes. I’m following this article: https://medium.com/@brucepomeroy/setting-up-and-elixir-cluster-on-ec2-with-edeliver-and-distillery-2500848cc34f#.920iibe7j and I guess the configuration is the same for a Phoenix app.
The app works fine, except that broadcasts only work within the same node.
When I try to connect one node manually, I get `false` no matter what…
I don’t know whether this is a network issue or a Phoenix config issue, and I don’t know how to start debugging it…
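For reference, the manual attempt from a remote `iex` console looks roughly like this (the node name and private IP below are placeholders, not my real values):

```elixir
# On the first instance (node started with something like
# `--name chat@10.0.0.1 --cookie secret`), try to reach the other node.
Node.connect(:"chat@10.0.0.2")
# returns false no matter what I try

Node.list()
# returns [] — no connected nodes
```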
Make sure that the instances use the same “cookie”. Also, please make sure you changed the example IPs to the actual values and that you can ping one machine from the other.
Hi, the machines use the same cookie and the IPs are correctly set on each machine. I cannot ping using the public IP, but I can ping using the private IP address.
Ok, so the problem was not my Phoenix or edeliver configuration; the problem was the Amazon configuration.
Can you confirm that epmd is running on both hosts? Can you also confirm that the port it listens on is open in the firewall? Remember that a firewall may be open on one IP while closed on another, even when the IPs share the same interface.
netstat -lnpt | grep epmd
That command might be helpful. By default, epmd listens on TCP port 4369.
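If `netstat` isn’t available, a couple of other quick checks work too (the IP below is a placeholder for the other instance’s private address):

```shell
# List the names epmd has registered locally — this also confirms epmd is up
epmd -names

# From the *other* host, check that epmd's default TCP port 4369 is reachable
nc -zv 10.0.0.1 4369
```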
Can you please elaborate on which config screws you had to drive? This information might help future readers.
Sorry, I don’t fully understand your question, but in the Amazon security group of each instance I had to open each port to every origin, like this:
Using the security group name, the public IP address, or even the private IP didn’t work for me.
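As a side note for future readers: the Erlang distribution listener picks a random port by default, which is why opening every port was necessary. It can be pinned to a small range instead (this is a general approach, not what I originally did); with Distillery, the `vm.args` would look something like this, and then only that range plus epmd’s 4369/TCP needs to be open in the security group:

```
## vm.args — pin the Erlang distribution port range
## (the port numbers here are arbitrary example values)
-kernel inet_dist_listen_min 9100
-kernel inet_dist_listen_max 9105
```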
Well, it was just a German phrase which I translated literally into English. After fixing broken things, one explains which screws one had to turn/drive/tighten/loosen to make the device work again. Nowadays this phrase is used quite metaphorically, since one rarely deals with actual screws.
So, as I had already assumed, the firewall was the culprit.