Should one use pipes or with?

I think René really illustrates well the use-cases of with and |>, and then introduces another idea about creating a token struct to track effects (kind of like Plug) when the pipeline becomes more complicated.


As an Elixir pipeline cannot exit early, this doesn’t change anything for the problem, though. Each function in the pipeline needs to handle errors, even if it’s just a “pass the error forward”.

@tme_317 Ahh, quite valuable! Your experience makes sense to me and I agree with pretty much everything you said.

@dbern Is this a 3rd party tool? Well, it looks interesting nonetheless; I look forward to giving it a try!

Which is why I placed the word ‘exit’ in quotes and in italics. Your assessment is correct: in a real pipeline the error is simply passed forward - this is ROP in action.

My point is that with does this automatically while with a normal pipeline you need to do it by hand.
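To make that concrete, here is a minimal sketch (the step names and both run variants are invented for illustration): in a plain pipeline every step needs an explicit pass-through clause for errors, while with short-circuits for free:

```elixir
defmodule PipelineDemo do
  # Plain pipeline: every step needs a "pass the error forward" clause.
  def step1({:ok, n}), do: {:ok, n + 1}
  def step1({:error, _} = err), do: err

  def step2({:ok, n}) when n < 10, do: {:ok, n * 2}
  def step2({:ok, _}), do: {:error, :too_big}
  def step2({:error, _} = err), do: err

  def run_piped(n), do: {:ok, n} |> step1() |> step2()

  # with forwards the error automatically; no extra clauses needed here.
  def run_with(n) do
    with {:ok, a} <- step1({:ok, n}),
         {:ok, b} <- step2({:ok, a}) do
      {:ok, b}
    end
  end
end
```

Both `PipelineDemo.run_piped(20)` and `PipelineDemo.run_with(20)` return `{:error, :too_big}`; the difference is only in who writes the forwarding clauses.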

Been there, done that, nightmares persist. In many cases such “pipelining” processing will end up as a big pile of sh… bad code. TBH, with time I have even come to the view that Plug.Conn isn’t that good an idea, and that dealing with the process directly, as in Raxx, Cowboy, Elli, etc., would be a better solution.

So there is no solution to big pipelines? =(

I’m not following yet; do you have an example of how you organize data transformations via processes?

Break them down into smaller functions? (Sorry, couldn’t resist)

Since ROP was mentioned:

Type and spec - Dialyzer not detecting error


As others already mentioned or implied, the pipe is syntax sugar, not a control-flow mechanism. The question is more about how to handle control flow - and that depends on what you want to achieve.

Things to consider / think about

  • Which func should care about control flow? Individual funcs, or a top-level “orchestration” func?

  • What’s the required behavior when a pattern match fails?

  • Wrap small functions in a larger func (e.g. send_email vs build_recipient, build_email_body, send_email) so the orchestration func can focus on control flow

  • Wrap other funcs as needed (e.g. make them return an ok/error tuple)
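As a hedged sketch of those last two points (every function name and body here is a hypothetical stub), the orchestration func owns the control flow while the small funcs each do one job and return ok/error tuples:

```elixir
defmodule Mailer do
  # Small funcs: one job each, returning ok/error tuples (stub implementations).
  defp build_recipient(%{email: email}), do: {:ok, email}
  defp build_recipient(_), do: {:error, :no_email}

  defp build_email_body(name), do: {:ok, "Hello #{name}!"}

  defp deliver(recipient, body), do: {:ok, {recipient, body}}

  # Orchestration func: focuses purely on control flow.
  def send_email(user) do
    with {:ok, recipient} <- build_recipient(user),
         {:ok, body} <- build_email_body(user[:name]) do
      deliver(recipient, body)
    end
  end
end
```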


Processes are irrelevant for the discussion I want to have. My main objective is to discuss the pros and cons of pipelines and functions based on the examples I have given.

So, instead of having a pipeline with 10 functions, you would have a pipeline with 9 functions, one of them being another pipeline with X functions. This is one of the problems I don’t like about pipelines - it adds useless indirection while still forcing me to write multi-clause functions for every error case.

It’s fine if my neat example has 3 functions, but when you have pipelines of pipelines of pipelines, things get messy very quickly.

As for libraries, I was actually using this one:

Which is very nice; however, I was met with some resistance from my team because no one is familiar with monads, and people usually don’t like to use 3rd party tools when they can do the same with bare Elixir (using pipelines with multi-clause functions or with statements).

So here I am, trying to figure out which one is best :smiley:

The pipe operator is a macro, just like with. According to you, both are syntactic sugar. I am afraid I am missing the point of your message. Could you elaborate?

The pipe (Kernel.|>/2) is a real macro; with (Kernel.SpecialForms.with/1) is only documented as one. As with every “macro” in Kernel.SpecialForms, it is something understood by the compiler itself. It is one of the basic building blocks of the language and “expands to itself”. It is treated differently than a macro from anywhere else.


So, it is a special kind of macro, correct? Or is it something else entirely that is merely documented this way so users can understand it more easily?

Basically you can say that regular macros are syntactic sugar - even those created by third-party libraries or yourself. SpecialForms, though, are actual syntax.
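You can see the difference with Macro.expand_once (a sketch; x, f, m and v are just placeholder names): expanding the pipe rewrites the AST, while a special form comes back unchanged:

```elixir
# The pipe is an ordinary macro: expanding it once rewrites the call.
pipe_ast = quote do: x |> f()
IO.puts(Macro.to_string(Macro.expand_once(pipe_ast, __ENV__)))
# prints: f(x)

# with is a special form: expansion leaves the AST untouched.
with_ast = quote do: with({:ok, v} <- m, do: v)
true = Macro.expand_once(with_ast, __ENV__) == with_ast
```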

with can be a true life saver especially when you communicate with other systems, I talk about it here. Basic idea is first validate my own data, then validate with external system(s) then insert locally. Or authentication where things might go wrong at multiple places.

|> just shows a transformation of input values to output values to me. I usually don’t expect error handling to take place there but just a smooth transformation.


Can’t you raise on unexpected errors?

I personally use with when errors are expected, and pipelines with functions that raise when errors should not happen (the boundary is not always obvious):

def my_fun(url) do
  res =
    url
    |> f1()
    |> HTTPoison.get!()
    |> Jason.decode!()
    |> f2()
    |> f3()

  {:ok, res}
rescue
  _e in HTTPoison.Error ->
    {:error, :http_download_error}

  _e in Jason.DecodeError ->
    {:error, :json_parse_error}

  e ->
    {:error, e}
end
There is no one size fits all … best is highly context sensitive.

Without a library the with/1 pattern demonstrated by @tme_317 is probably the best starting point.

def something(args) do
  with {:step1, {:ok, result1}} <- {:step1, task1(args)},
       {:step2, {:ok, result2}} <- {:step2, task2(result1)} do
    {:ok, result2}
  else
    {_, error} -> error
  end
end

Granted it isn’t particularly pretty but it gets the job done and there is some flexibility that goes beyond what the pipe can do.

Now I suspect that this has more to do with your own frustration - “why isn’t this already a solved problem within the language itself”.

Likely because this “problem” doesn’t actually come up all that often.

Erlang introduced {:ok, result}/{:error, reason} more than likely as a poor man’s Either (or Result) type.

Given how optimized pattern matching is, :ok/:error tuples are a good enough solution.

Putting my C hat on, I can easily imagine an Erlang programmer cringing at the thought of wasting precious function reductions passing an error value around through function calls just to comply with ROP. The attitude would be to drop everything and return the error value promptly - even if it meant a few more lines of code here and there, as long as it benefitted the runtime budget.
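That “drop everything and return promptly” attitude looks something like this sketch (download/parse are hypothetical stand-ins with stub bodies): a few more lines than a pipe, but the error never rides through any extra function calls:

```elixir
defmodule EarlyReturn do
  # Stub implementations, just to make the sketch runnable.
  defp download("good"), do: {:ok, ~s({"a": 1})}
  defp download(_), do: {:error, :not_found}

  defp parse(raw), do: {:ok, String.length(raw)}

  # Return the error the moment it appears instead of threading it onward.
  def fetch_and_parse(source) do
    case download(source) do
      {:error, reason} ->
        {:error, reason}

      {:ok, raw} ->
        case parse(raw) do
          {:error, reason} -> {:error, reason}
          {:ok, data} -> {:ok, data}
        end
    end
  end
end
```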

The Elixir pipe operator is merely a DevX function-application feature that takes the place of method chaining in OO languages and is almost as useful as function composition. The pipe operator was never meant to take on the :ok/:error tuple issue.

That is really the domain of with/1. But in order to make it useful beyond just plain {:ok, result}/{:error, reason} values it is also more verbose than a pipe. And finally with/1 will quit at the first sign of trouble and is capable of soaking up all sorts of sins committed by the functions that it calls.

The same argument can be made against factoring a 1000 line function into multiple smaller functions. To me those smaller functions add value as long as they are well named and often they tend to make the code more declarative.

I hate trying to figure something like this out:

self.addEventListener('activate', event => {
  console.log('Activating new service worker...');

  const cacheWhitelist = [staticCacheName];

  event.waitUntil(
    caches.keys().then(cacheNames => {
      return Promise.all( => {
          if (cacheWhitelist.indexOf(cacheName) === -1) {
            return caches.delete(cacheName);
        })
      );
    })
  );
});
I find this much easier to reason about:

// Activate event
const cacheWhiteList = [staticCacheName]
const isObsoleteCache = name => cacheWhiteList.indexOf(name) === -1
const selectCachesToDelete = cacheNames => cacheNames.filter(isObsoleteCache)
const deleteNamedCache = name => self.caches.delete(name)
const deleteCaches = cacheNames => Promise.all(

function activateListener (event) {
  console.log('Activating new service worker...')
  event.waitUntil(caches.keys().then(selectCachesToDelete).then(deleteCaches))
}

self.addEventListener('install', installListener)
self.addEventListener('fetch', fetchListener)
self.addEventListener('activate', activateListener)

… code for which some members of the JS community would probably lynch me

So when you have a 10 function pipeline (or with/1) then maybe, just maybe that pipe is spanning multiple, distinct transformations that are just begging to be named for the benefit of future maintainers.


It’s not about using pipe or not. The fundamental question is how to control flow and where the logic should be placed.

  • with is good for an orchestration func (handling all control flow) calling simple funcs (each returning an ok/error tuple and focusing on a single job)
  • the pipe operator is good for pipeline funcs that are aware of context
    • here, control flow is delegated to the individual funcs

Note that it’s not all or nothing. For example, you may use the pipe for funcs that do data transformation or a small amount of control flow.
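A small sketch of that mix (fetch_user, normalize and enrich are invented stubs): with carries the control flow, while a pipe handles the pure data transformation inside the happy branch:

```elixir
defmodule Mixed do
  # Stubs standing in for real lookups/transformations.
  defp fetch_user(%{id: id}) when is_integer(id), do: {:ok, %{id: id, name: "ann"}}
  defp fetch_user(_), do: {:error, :not_found}

  defp normalize(user), do: Map.update!(user,, &String.capitalize/1)
  defp enrich(user), do: Map.put(user, :greeting, "Hi #{}")

  # with for control flow; a pipe for the pure transformation.
  def process(params) do
    with {:ok, user} <- fetch_user(params) do
      {:ok, user |> normalize() |> enrich()}
    end
  end
end
```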

I try to use with as much as I can, and for all the other stuff I like sage.

See the post:


The with examples given above all seem like they could be cleaned up with a macro to handle threading the success result back around into the next function - similar to pipe, actually. Would that be possible? Or are real-life uses not so neat and tidy?

If you’re just simply threading a success result through to the next function then you would want to look at the previously mentioned

(or if you want something more monady)


happy and throw are not mentioned in this discussion; I wonder why that is.
It seems to me that expressing a |> clear |> happy(path) is the most important benefit of using pipes.
Functions can be designed to throw when they are not happy.
Data that is thrown can be formatted in such a way that it is easy to catch, just like you would catch an unhappy with outcome using else.
Combining |>, throw and catch can help express both the happy path and every unhappy scenario very clearly.
Of course I’m not saying that a public API should throw stuff.
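For what it’s worth, here is a minimal sketch of that style (the parsing steps are invented): each step throws a ready-made error tuple when it is not happy, and the caller catches it much like a with/else clause would:

```elixir
defmodule HappyPath do
  # Steps throw a pre-formatted error tuple when they are not happy.
  defp parse_int(s) do
    case Integer.parse(s) do
      {n, ""} -> n
      _ -> throw({:error, :not_a_number})
    end
  end

  defp check_positive(n) when n > 0, do: n
  defp check_positive(_), do: throw({:error, :not_positive})

  # The pipe expresses only the happy path; catch collects every unhappy one.
  def run(input) do
    result =
      input
      |> parse_int()
      |> check_positive()

    {:ok, result}
  catch
    {:error, _} = err -> err
  end
end
```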