How would you deploy this Phx Server?

Hey there, I have built an API fully in Phoenix and I'm super happy about it. However, I don't want to host it on my localhost machine; instead I want to publish it so everyone can use it.

:x: localhost:4000/someApiEndpoint
:white_check_mark: api.mydomain.domain

So the question is: how and where would you deploy this?

2 Likes

Currently, I would recommend https://fly.io/ . Their product is so good I honestly feel like I'm stealing from them. Their entire gig is deploying app servers close to the users and routing each user to the closest one.

I was previously running on a dedicated-CPU + 4 GB RAM VPS and just recently migrated a small site to fly.io with shared-cpu-1x + 1 GB RAM in multiple regions. And to make sure they were actually legit, I put up a small site there over a year ago on the free tier…and it is still running.

So it’s more reliable, it’s faster for the users…and it’s roughly the same price.

Quite frankly, it’s really too good to be true, so hopefully, it stays that way.

So now that I’m at it, let me give a quick breakdown of what’s amazing and what’s sort-of meh.

The Amazing

Deployment

I dislike Docker. But the slickest thing Fly did was make sure no one actually has to run Docker. Each organization has its own very beefy builder machine. Running a deploy just means uploading the Docker build context; the remote builder takes care of the rest.

So 600 lines of Ansible have been replaced by a <100-line Dockerfile. Deploying from a slow connection is much easier because the context uploaded to Fly is roughly 5 MB, while a finished binary package was roughly 40 MB.
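For reference, the kind of Dockerfile Fly's remote builder consumes is a fairly standard multi-stage Elixir release build. This is only a minimal sketch, assuming an app named `myweb`; the image tags, package lists, and paths here are illustrative, not the author's actual file.

```dockerfile
# Build stage: compile a prod release (image tag is illustrative)
FROM hexpm/elixir:1.13.4-erlang-24.3.4-debian-bullseye-20210902-slim AS build
RUN apt-get update -y && apt-get install -y build-essential git
WORKDIR /app
RUN mix local.hex --force && mix local.rebar --force
ENV MIX_ENV=prod
COPY mix.exs mix.lock ./
RUN mix deps.get --only prod && mix deps.compile
COPY config config
COPY lib lib
RUN mix compile && mix release

# Runtime stage: ship only the release, no Erlang/Elixir toolchain
FROM debian:bullseye-slim
RUN apt-get update -y && apt-get install -y libstdc++6 openssl libncurses5 locales
WORKDIR /app
COPY --from=build /app/_build/prod/rel/myweb ./
CMD ["/app/bin/myweb", "start"]
```

The two-stage split is also why the upload is small: only sources and mix files go into the build context, not a compiled release.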

Running a LiveView app in multiple regions means it does a rolling deploy, which takes several minutes. Previously my deploys would incur about 30 seconds of downtime, but now deploys are zero-downtime.

Proxy

The secret sauce behind Fly's awesomeness is their custom proxy.

During a rolling deploy, it pretty much ensures that there is no downtime as it rebalances connections. If a region is down, it redirects traffic to a different region.

For a LiveView application, this means that when the websocket breaks, it seamlessly reconnects to another region and the user does not notice.

Having built lots of crap myself, I can say that doing this so seamlessly is quite a feat of engineering. It's really quite impressive (and that's coming from an old, grizzled, and grumpy person).

Performance

The shared-cpu-1x runs faster than a dedicated CPU on other providers. And the VPS I was using was already faster than DigitalOcean / AWS / etc.

For the most cpu intensive operation in the application:

| Machine                    | µs/operation (lower is better) |
|----------------------------|--------------------------------|
| Laptop, 8-core, 16 GB      | 131                            |
| Fly shared-cpu-1x, 256 MB  | 156                            |
| VPS dedicated CPU, 4 GB    | 165                            |
| VPS shared CPU, 1 GB       | 174                            |

I do expect this to slow down over time as more people migrate to the platform and the VMs get more heavily used.
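For context, numbers like those in the table can be collected with a trivial micro-benchmark. Here's a rough sketch using `:timer.tc`; the post doesn't name its actual hot operation, so `expensive_operation` below is a stand-in dummy:

```elixir
# Hypothetical micro-benchmark: time N iterations, report average µs/operation.
# Replace `expensive_operation` with your real hot code path; this one is a
# placeholder so the snippet runs on its own.
expensive_operation = fn -> Enum.sum(1..1_000) end

iterations = 10_000

{total_us, _results} =
  :timer.tc(fn ->
    for _ <- 1..iterations, do: expensive_operation.()
  end)

IO.puts("#{total_us / iterations} us/operation")
```

Running the same snippet on each machine gives comparable per-operation latencies, as long as the iteration count is large enough to smooth out scheduler noise.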

User perceived performance

For my application and user base, the latency for LiveView updates went from 150 ms to 50 ms by running out of SJC, MIA, and AMS. I had already optimized all the client-side calls, so quite frankly, this is a ludicrous improvement in user performance. It almost gets to the level of: hey, this is as fast as running on localhost.

It’s absurd.

Free tier

There is a very generous free tier (see "Fly App Pricing" in the Fly docs) and a pretty straightforward developer experience with their CLI tool.

Elixir support

@chrismccord (creator of LiveView) works there. They also have entire sections of their docs dedicated to Elixir.

The meh

Autoscale tuning

It's pretty tough to tune autoscale for LiveView applications because the connections are mostly idle websockets. There is a soft limit and a hard limit on the number of connections per app server, and the proxy starts re-routing to slower regions when an app server hits the soft limit. This certainly makes sense from an uptime and pricing perspective, but for LiveView it means users who were supposed to connect in Seattle might be sent across the country to NY for the same page.

Initially, I experimented with a lower soft limit to let autoscaling kick in faster: 5 regions, a 20-connection soft limit, and 512 MB RAM per app server. However, requests were often re-routed to other regions while the app scaled up.

I finally settled on 3 regions, a 100-connection soft limit, and 1 GB RAM, which leaves quite a bit of headroom but also guarantees that users hit a fast server most of the time.
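The knobs in question live in `fly.toml`. A sketch of the setup described above might look like this; the soft limit matches the post, but the hard limit and port are illustrative:

```toml
# fly.toml excerpt (sketch): connection-based concurrency limits.
# The proxy starts routing new connections to other regions at soft_limit
# and refuses them entirely at hard_limit.
[[services]]
  internal_port = 4000
  protocol = "tcp"

  [services.concurrency]
    type = "connections"
    soft_limit = 100   # value from the post
    hard_limit = 125   # illustrative
```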

This is a graph of the concurrent connections per VM over the past 24 hours. Pretty neat to see how the traffic moves from region to region.

It seems to work, and I'm sure Fly will add more options in the future.

DNS staleness

Relying on the absolute accuracy of DNS is not recommended. They have an entire internal network based on WireGuard and IPv6, which is super-duper nifty, but do not rely on DNS records being immediately accurate. For instance, libcluster will keep querying old hosts for a few minutes. This doesn't cause me any issues, but I'm not sure whether others would have problems.
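If you're clustering with libcluster on Fly, the usual approach polls Fly's internal `.internal` DNS zone, which is exactly where the staleness shows up. A sketch of that topology, assuming the `DNSPoll` strategy and the `FLY_APP_NAME` environment variable that Fly sets on VMs:

```elixir
# Sketch of a libcluster topology polling Fly's internal DNS.
# Because records can be stale, recently-removed hosts may keep
# appearing in the poll results for a few minutes.
app_name = System.get_env("FLY_APP_NAME")

config :libcluster,
  topologies: [
    fly6pn: [
      strategy: Cluster.Strategy.DNSPoll,
      config: [
        polling_interval: 5_000,
        query: "#{app_name}.internal",
        node_basename: app_name
      ]
    ]
  ]
```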

Proxy reliability and random footguns

I did run into timeouts once that could never be reproduced. Still, it feels more reliable than:

  • Apigee: occasional timeouts, and dealing with enterprise support that is…meh.
  • AWS ELB: occasional downtime on deploys as failover happens.
  • Google Load Balancer: generally frightening, as it takes down your entire business every two years.

Fly is really new, so we don’t have the years of data that we have on other vendors.

Also, make sure your restart_limit is not 0 in your fly.toml.
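If I recall correctly, `restart_limit` lives on the health-check entries in `fly.toml`; it controls how many consecutive failed checks trigger a VM restart. A sketch with illustrative values:

```toml
# fly.toml excerpt (sketch): restart the VM after several consecutive
# failed TCP health checks. A restart_limit of 0 means never restart.
[[services.tcp_checks]]
  grace_period = "1s"
  interval = "15s"
  timeout = "2s"
  restart_limit = 6
```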

Database backups

Fly.io does not provide a managed database, though they are funding development of LiteFS, which is SQLite replication made by the author of Litestream. If you need point-in-time recovery and don't want to set something up yourself, you may want to connect to a managed database hosted elsewhere.

I've had to set up a separate host to take backups and forward logs, which will probably become part of the platform at some point in the distant future (scheduled jobs, etc.).

Metrics + logging

The dashboard also provides short-term logging and metrics in a reasonable fashion. Nothing spectacular, but passable.

Beta testing

It's early and the product isn't polished; you're a beta tester, and the product is not close to finished. However, it is no different from using any AWS product that is less than 2 years old…except that this product is actually usable. They are working on a new Machines API that will replace the current one and allow apps to scale to zero.

Their current product works fine, though, just a little unpolished. For instance, they introduced multi-process apps, which don't autoscale the same way as normal apps, and this is only documented on the forums.
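For reference, multi-process apps are declared in `fly.toml` roughly like this; the process names and commands below are illustrative, not from the post:

```toml
# fly.toml excerpt (sketch): run separate web and worker processes
# from the same release image. These scale differently from
# single-process apps, per the forum notes mentioned above.
[processes]
  web = "/app/bin/myweb start"
  worker = "/app/bin/myweb eval 'MyWeb.Worker.run()'"
```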

If you stick to the basics, everything should work fine.

Surprise billing

You can fix the number of servers and not use autoscale. At that point, the only surprise bill would be bandwidth, and bandwidth is billed at 2 cents/GB, not 10 cents/GB like AWS. And there is a really generous free tier.

Remote builder/deploy flakiness

They supply the remote builder, but if it runs out of disk space (50 GB), you will have to destroy it and start a new one.

Community forum

They have a forum…which mostly gets overrun with questions about why X isn't working, which gives the unusual impression that the platform doesn't work. Most questions come down to the user not using IPv6 or trying to get fancy with Docker. On the plus side, they actually have people responding, which is completely unlike other vendors. And the CEO @mrkurt responds on the forums with exemplary patience.

Conclusion

The amazing outweighs the meh by several orders of magnitude.

Like I said earlier, my deployment is now:

  • faster
  • more reliable
  • and roughly the same price.

I'm still waiting for the other shoe to drop, as it's still pretty early for fly.io, but right now, for price-to-value ratio…it's really unmatched as far as I can see.

9 Likes

This is the best review of Fly so far, and I agree with the experience.

2 Likes

Yeah, I tried it. The website goes online without problems, but when I call the API with https://myweb.fly.dev/api/myendpointhere it just says 404 not found. Hmm.

Would you be able to post all of your source code somewhere for us to look over?

What exact file do you need? Like the endpoint?

Maybe your mix.exs, router.ex, and prod.exs.

I'm not super familiar with fly.io, but this might give us a good start.

mix.exs

defmodule MyWeb.MixProject do
  use Mix.Project

  def project do
    [
      app: :myweb,
      version: "0.1.0",
      elixir: "~> 1.13",
      elixirc_paths: elixirc_paths(Mix.env()),
      compilers: Mix.compilers(),
      start_permanent: Mix.env() == :prod,
      aliases: aliases(),
      deps: deps(),
      docs: [
        main: "readme",
        extras: [
          "README.md"
        ]
      ]
    ]
  end

  # Configuration for the OTP application.
  #
  # Type `mix help compile.app` for more information.
  def application do
    [
      mod: {MyWeb.Application, []},
      extra_applications: [:logger, :runtime_tools]
    ]
  end

  # Specifies which paths to compile per environment.
  defp elixirc_paths(:test), do: ["lib", "test/support"]
  defp elixirc_paths(_), do: ["lib"]

  # Specifies your project dependencies.
  #
  # Type `mix help deps` for examples and options.
  defp deps do
    [
      {:phoenix, "~> 1.6.10"},
      {:telemetry_metrics, "~> 0.6"},
      {:telemetry_poller, "~> 1.0"},
      {:gettext, "~> 0.18"},
      {:plug_cowboy, "~> 2.5"},
      {:httpoison, "~> 1.8"},
      {:poison, "~> 5.0"},
      {:jason, "~> 1.3"},
      {:hammer, "~> 6.1"},
      {:dotenv_parser, "~> 2.0"},
      {:corsica, "~> 1.2"}
    ]
  end

  # Aliases are shortcuts or tasks specific to the current project.
  # For example, to install project dependencies and perform other setup tasks, run:
  #
  #     $ mix setup
  #
  # See the documentation for `Mix` for more info on aliases.
  defp aliases do
    [
      setup: ["deps.get"]
    ]
  end
end

router.ex

defmodule MyWebWeb.Router do
  use MyWebWeb, :router

  pipeline :api do
    plug :accepts, ["json"]
  end

  # localhost:4000/api/{route}
  scope "/api", MyWebWeb do
    pipe_through :api

    get "/random", RandomController, :index
  end
end

prod.exs

import Config

# For production, don't forget to configure the url host
# to something meaningful, Phoenix uses this information
# when generating URLs.
#
# Note we also include the path to a cache manifest
# containing the digested version of static files. This
# manifest is generated by the `mix phx.digest` task,
# which you should run after static files are built and
# before starting your production server.

# Do not print debug messages in production
config :logger, level: :info

# ## SSL Support
#
# To get SSL working, you will need to add the `https` key
# to the previous section and set your `:url` port to 443:
#
#     config :myweb, MyWebWeb.Endpoint,
#       ...,
#       url: [host: "example.com", port: 443],
#       https: [
#         ...,
#         port: 443,
#         cipher_suite: :strong,
#         keyfile: System.get_env("SOME_APP_SSL_KEY_PATH"),
#         certfile: System.get_env("SOME_APP_SSL_CERT_PATH")
#       ]
#
# The `cipher_suite` is set to `:strong` to support only the
# latest and more secure SSL ciphers. This means old browsers
# and clients may not be supported. You can set it to
# `:compatible` for wider support.
#
# `:keyfile` and `:certfile` expect an absolute path to the key
# and cert in disk or a relative path inside priv, for example
# "priv/ssl/server.key". For all supported SSL configuration
# options, see https://hexdocs.pm/plug/Plug.SSL.html#configure/1
#
# We also recommend setting `force_ssl` in your endpoint, ensuring
# no data is ever sent via http, always redirecting to https:
#
#     config :myweb, MyWebWeb.Endpoint,
#       force_ssl: [hsts: true]
#
# Check `Plug.SSL` for all available options in `force_ssl`.

My runtime.exs

import Config

# config/runtime.exs is executed for all environments, including
# during releases. It is executed after compilation and before the
# system starts, so it is typically used to load production configuration
# and secrets from environment variables or elsewhere. Do not define
# any compile-time configuration in here, as it won't be applied.
# The block below contains prod specific runtime configuration.

DotenvParser.load_file(".env")

config :myweb,
  env: Config.config_env(),
  secret: System.fetch_env!("SECRET_KEY_BASE"),
  url: System.fetch_env!("URL")

# ## Using releases
#
# If you use `mix release`, you need to explicitly enable the server
# by passing the PHX_SERVER=true when you start it:
#
#     PHX_SERVER=true bin/myweb start
#
# Alternatively, you can use `mix phx.gen.release` to generate a `bin/server`
# script that automatically sets the env var above.
if System.get_env("PHX_SERVER") do
  config :myweb, MyWebWeb.Endpoint, server: true
end

if config_env() == :prod do
  # The secret key base is used to sign/encrypt cookies and other secrets.
  # A default value is used in config/dev.exs and config/test.exs but you
  # want to use a different value for prod and you most likely don't want
  # to check this value into version control, so we use an environment
  # variable instead.
  secret_key_base = Application.get_env(:myweb, :secret)
    # System.get_env("SECRET_KEY_BASE") ||
    #   raise """
    #   environment variable SECRET_KEY_BASE is missing.
    #   You can generate one by calling: mix phx.gen.secret
    #   """

  host = System.get_env("PHX_HOST") || "myweb.fly.dev"
  port = String.to_integer(System.get_env("PORT") || "4000")

  config :myweb, MyWebWeb.Endpoint,
    url: [host: host, port: 443, scheme: "https"],
    http: [
      # Enable IPv6 and bind on all interfaces.
      # Set it to  {0, 0, 0, 0, 0, 0, 0, 1} for local network only access.
      # See the documentation on https://hexdocs.pm/plug_cowboy/Plug.Cowboy.html
      # for details about using IPv6 vs IPv4 and loopback vs public addresses.
      ip: {0, 0, 0, 0, 0, 0, 0, 0},
      port: port
    ],
    secret_key_base: secret_key_base
end

I wish I had a better answer here, @spizzy. What I did was a completely stock mix phx.new --no-ecto. I added the bare minimum for a JSON endpoint and then pushed it to fly.io. I was able to call the website (‘/’) and the API (‘/api/random’). Please feel free to take a look at my code here: GitHub - zpeters/elixir-minimal-json: The most minimal json api endpoint with phoenix

If you look at this commit, it will show you the changes I made to add just the route and a minimal JSON endpoint.

I hope this helps. Let us know what you find out or if you have more questions!

1 Like

I generated my API with mix phx.new MyWeb --no-ecto --no-html --no-assets --no-mailer --no-dashboard, but I can't see any big difference between your code and mine.

Maybe try logging into the host itself and see if you can reach the endpoint?

$ fly ssh console -a yourapp # this will get you in
# curl http://localhost:4000/yourendpoint # test your endpoint

Anything interesting in the logs or is it just 404?

$ fly logs -a yourapp

Locally everything is working fine, but Docker or Fly can't read my .env file :confused:

Hmmm…sounds like a Dockerfile issue. I'm not sure I've ever used .env. If you've isolated it to the env not being read, just set the environment variables manually and see what happens.

You can also try injecting secrets, which will be read into the environment, and see what's missing.
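Concretely, instead of shipping a .env file into the image, you can store the values as Fly secrets, which show up as plain environment variables inside the VM at runtime. A sketch, with the app name as a placeholder:

```shell
# Generate a Phoenix secret and store it on Fly; it becomes an env var
# in the running VM, so System.fetch_env!/1 can read it with no .env file.
fly secrets set SECRET_KEY_BASE=$(mix phx.gen.secret) -a yourapp
fly secrets set URL=https://yourapp.fly.dev -a yourapp

# List what's currently set (names only; values stay hidden).
fly secrets list -a yourapp
```

With that in place you could drop the DotenvParser.load_file(".env") line for production, which also plays nicer with going open source.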

Well, yeah, I thought about hard-coding my API keys and DB credentials into the code, but that wouldn't be a long-term solution as I want to go open source too.