Best practices when working with periodic tasks

I was tasked with a low-priority project at work that will be used to monitor websites/services belonging to our clients (maintained by us), and figured it could be a good time to show the benefits of using Elixir. My thought was that since this is a very simple task, I could focus on following best practices, and it would be a practical exercise for me, since I always read/listen about Elixir but don't have much time to actually write it…

What I have planned so far is a supervised GenServer that routinely messages itself to execute the task every X seconds, where the task is performing a GET at /healthz and looking for a 200 reply, or falling back to a basic ping if I get a 404 there. The task would run that check for each registered domain using a Task. All of this would be published to a very simple LiveView dashboard (to show LiveView's potential, too).
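A rough sketch of the per-domain check described above. The module and function names are my own inventions, and `:httpc` (from OTP's `:inets`) is just one HTTP client option; Req, Finch, etc. would work equally well:

```elixir
defmodule HealthCheck do
  # Classify a status code: 200 at /healthz means healthy; 404 means the
  # endpoint doesn't exist, so fall back to a plain reachability check.
  def classify(200), do: :healthy
  def classify(404), do: :fallback_to_ping
  def classify(_other), do: :unhealthy

  # Perform the GET against a domain's /healthz endpoint.
  # Requires :inets (and :ssl for https) to be started.
  def check(domain) do
    url = String.to_charlist("https://#{domain}/healthz")

    case :httpc.request(:get, {url, []}, [], []) do
      {:ok, {{_http_version, status, _reason}, _headers, _body}} -> classify(status)
      {:error, _reason} -> :unreachable
    end
  end
end
```

Each domain's `check/1` could then be run concurrently under a `Task.Supervisor` so one slow site doesn't block the rest.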

I was also planning on using a document-store in GCE or AWS to store the registered domains (keeping the program very simple and sort of stateless, I’d like to try and run it in Kubernetes for testing).

I’m aware that this is a very simple task but I’d love any opinions or suggestions on this. I don’t want to over-engineer since this will be used only for a few websites, but I also don’t want to “brute force” everything, so to speak.

As a sidenote, I had the idea of doing this with LiveView but optimizing it for viewing through a terminal with cURL or similar programs, e.g. calling the service's endpoint and getting a TUI dashboard. Would this be too crazy?

Thanks in advance.

There’s a good writeup (plus a library) for periodic tasks on @sasajuric 's blog - This approach is very light on infrastructure. If you need full tracking, retries etc. then the community seems to lean towards @sorentwo 's Oban project - but that comes with more infrastructure (i.e. a database) to persist job status etc.

They should be able to give you some inspiration.


I did a similar project for monitoring, but never had time to finish it to be ready to open source it.

First of all, there often isn’t such a thing as a single best practice in Elixir.
There can be more than one suitable, good solution.
That’s one of the beautiful things about Elixir.

In my project, for the scheduled “ping” requests I just used Task and Task.Supervisor.

The periodic task in its simplest form (in my opinion):

defmodule Pinger do
  use Task, restart: :transient

  # 60 s
  @interval 60 * 1_000

  def start_link(_arg) do
    Task.start_link(&process/0)
  end

  def process do
    receive do
    after
      @interval ->
        # do your work here
        IO.inspect("ping")
        process()
    end
  end
end


For storing the registered domains I just used a file, which I read at application start.
About the TUI thing I can’t give any suggestions.
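For what the file-based approach above might look like, here's a minimal sketch, assuming one domain per line in a plain text file (the module name and format are my own assumptions):

```elixir
defmodule Domains do
  # Read the domain list once at application start:
  # one domain per line, blank lines ignored.
  def load(path) do
    path
    |> File.read!()
    |> String.split("\n", trim: true)
    |> Enum.map(&String.trim/1)
  end
end
```

This keeps the app close to stateless, which fits the Kubernetes experiment: the file can be mounted as a ConfigMap and the pod restarted when it changes.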


You mentioned you were tasked with the project at work; was there any discussion of build-vs-buy? It’s going to be challenging to show the benefits of Elixir when competing with off-the-shelf tools like Pingdom that do this exact job (plus alerting, dashboards, etc) for about 1 US dollar per month per site.

If there are requirements that aren’t served by the market - could be anything from regulatory requirements, to IP filtering on the target sites, to client confidentiality - then make sure your Elixir hype focuses on how it can help solve those unique problems.


We’re a VERY small team so we try to keep our tooling costs down, especially when they won’t really benefit our productivity. The truth is that this project is very low priority and it wouldn’t even be a problem to just not do it. As someone who struggles to find or come up with projects to practice on, this seemed like a perfect opportunity.

Otherwise, I’d agree with you that buying would be better in most cases but I’m looking at this as a small personal challenge.

Thank you. I read Dockyard’s write-up about not using external dependencies for periodic tasks, but that was from 2017… I’ll check the links you shared :grin:

Thank you for sharing your code and insight, I’ll look into Task and Task.Supervisor more.

A very simple way is using Process.send_after/4. A GenServer can just send itself a repeating message.


There’s also which is free up to 50 sites and polls your site every 5 minutes. You can get alerts, a 30 day uptime history and response times on the free tier. I’ve been using them for years.

But if you do roll your own, database persistence seems reasonable so you can measure uptime over X time. That would likely mean storing the results in a DB, in which case Oban wouldn’t introduce additional complexity if you’re already using Postgres.


That might actually be a good option; I guess we could use that, but I’ll still build the app either way as practice.