Why doesn't Elixir use results instead of exceptions?

I see in Elixir that I can call Integer.parse! or Integer.parse, right? The first one errors out if something is wrong, while the second one returns a result. Or that's the general idea behind the exclamation mark after a function name.

So my question is, why can Integer.parse still fail?

If we look at Integer — Elixir v1.12.3 we see that the function is clearly documented to raise an exception “if base is less than 2 or more than 36.”

My question is: why? Why hasn't Elixir been designed so that you can choose between exceptions and results?


A lot of times I use the ! variants when an error is considered unexpected behaviour. Then I want my app to crash; I don't care about handling those cases. This makes the code easier to read, as far as I'm concerned.

Consider having a function that retrieves a database record. Sometimes you want to be able to handle the fact that the record is not there. But sometimes you know it is there; and if not, the app can go into oblivion.

Furthermore, this can help with typespecs in your app if you want to go down the ‘happy’ path. In our projects we make a distinction between errors that can occur and should be handled, and unexpected behaviour where there is no sensible thing to do anyway. I hope my reply makes a bit of sense.
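The standard library follows the same convention, so here's a small runnable sketch of the distinction using Map.fetch/2 (tagged result) versus Map.fetch!/2 (raises):

```elixir
users = %{1 => "alice"}

# Tolerant lookup: pattern match on the tagged result.
{:ok, "alice"} = Map.fetch(users, 1)
:error = Map.fetch(users, 2)

# Assertive lookup: raises KeyError if the key is missing,
# for when you *know* the key must be there.
"alice" = Map.fetch!(users, 1)
```

The same split shows up in Ecto (Repo.one vs Repo.one!) and elsewhere: the bang variant encodes "absence here is a bug, crash loudly".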


The way I think about it is this: there are expected and unexpected scenarios/errors; or to put it differently - some data is known/expected to be good and some needs validation.

(I think you meant String.to_integer instead of Integer.parse!, so let’s take DateTime.from_unix instead).

Let’s say I have a number which represents a timestamp and I want to convert it to a %DateTime{}:

  • If it’s an input from a user, I’d use DateTime.from_unix(number) to see if it’s correct because I expect the user to make mistakes or put invalid data,
  • If it’s a timestamp coming from an external API I’m expecting this to always be correct, so I’ll use DateTime.from_unix!(number) or {:ok, dt} = DateTime.from_unix(number) if a bang version is not available; I don’t want to litter my code with error handling - those code paths will never be run; obviously the timestamp might be wrong due to some sort of a glitch, but an answer to that is putting a higher-level measures for handling such situations (retrying a job, responding with proper status code, etc.).

From that perspective, for me, it makes sense that String.upcase(1) crashes. It’s my job to verify that what I’m passing is actually a string. I’d say that for Integer.parse/2 it’s more debatable what should happen when the second argument is invalid. I guess the behaviour is optimized for the cases (which I would say are the majority) where the base is hardcoded.

If you’re writing something like a calculator and both arguments are coming from user input, then it’s pretty straightforward to build your own parse function (base not in 2..36).


I think it goes against the philosophy of “Let it crash”. If you have controlled errors, for example a timeout on a database connection, a file not found, and so on, it’s totally fine to return an {:error, reason}. But if you have some unexpected behaviour in your code, like “no function clause matching”, which probably means you have bad code somewhere, it’s easier to crash than to return a useless {:error, FunctionClauseError}.


See also Trailing bang (foo!) in the Elixir documentation.

The version without ! is preferred when you want to handle different outcomes using pattern matching:

case File.read(file) do
  {:ok, body}      -> # do something with the `body`
  {:error, reason} -> # handle the error caused by `reason`
end

However, if you expect the outcome always to be successful (for instance, if you expect the file always to exist), the bang variation can be more convenient and will raise a more helpful error message (than a failed pattern match) on failure.
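A quick runnable illustration of that "more helpful error message" point (the file name is made up):

```elixir
# File.read!/1 raises File.Error with a descriptive message,
# instead of the bare MatchError you would get from
# {:ok, body} = File.read("no_such_file.txt").
msg =
  try do
    File.read!("no_such_file.txt")
  rescue
    e in File.Error -> Exception.message(e)
  end

# msg is along the lines of:
# could not read file "no_such_file.txt": no such file or directory
```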


Looks like I have to change my mindset a little bit then. I mean, I would still try to gracefully handle errors in a REST controller in Phoenix. But the problem we are talking about here is maybe the toll it takes to consider every single corner case. However, think about this corner case:

The REST controller is called from a client; could be React, could be anything, right? The endpoint is called regularly, or quite often. If we expect certain conditions to be met, or expect certain rows to exist, etc., the process would die and be recreated over and over and over again.

So we have two ways of handling this scenario, right:

  1. Handle the error and log it and return a sensible error to the caller
  2. Die and burn, and let the process be recreated.

I’m not sure if I’m good with option #2. So that’s the reason why I am bringing it up.

This assumes that a crash within the system does not result in a proper response at another place. That’s most often not the case, though. Even if you don’t explicitly handle crashes on a web endpoint, clients will receive a 500 status. If your client doesn’t react accordingly to that, you’re back in #2 land, but I’d say that’s a problem of your client, not a problem on the server side. A 500 is maybe not a very detailed error response, but it is an error response.


Do you really want to cover every possible error case? You’ve got better things to do than worry about someone trying to parse an integer with a base of -1.


No :slight_smile: It was merely an example of a symptom I saw in the language.

But it’s OK really, I’m just coming to terms with how things are done. I’m not giving up on it yet, but for now I will put it to rest. I don’t think it’s good practice to avoid catching errors “because the Erlang platform handles process crashes”. But again, the latter could be its own thread. So let’s leave it at this.

I would treat crashing regularly and returning 500s as a sign of laziness or simply bugs. I’d go with option #1 for all the cases where I expect that something can go wrong. I think it’s a matter of your own judgment - is the specific corner case something that can and will happen or is it only a theoretical option. If it’s something that happens, then I’m all for handling it gracefully in the code at the cost of needing to write and maintain more code. There’s a bit of guesswork in deciding what needs to be handled explicitly, but you can always do a bit of “hardening” of the business logic.


IMHO in this particular case, Integer.parse/2 should raise a FunctionClauseError if base is not an integer or is an integer outside the 2..36 range, to mimic the rest of the language.

Just because the function’s name does not end with an exclamation mark, it is not guaranteed that the function will not fail. If you provide the wrong data, it could fail with a FunctionClauseError (meaning your input does not match any function clause or guard), an ArgumentError (like in this case), a Protocol.UndefinedError (the protocol X is not implemented for data type Y), or any other exception that the developer considered more appropriate to describe the failure.


There are two main ways for the inputs to Integer.parse to be “wrong”:

  • the first argument doesn’t start with a valid digit in the selected base. This returns :error

  • the second argument is out-of-bounds, rendering the operation meaningless. This raises ArgumentError

I see these as two different situations; in HTTP-status terms the former is a 4xx (“something is wrong with the request”) while the latter is a 5xx (“something is wrong with the server”).
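Both behaviours can be checked directly; a quick runnable sketch:

```elixir
# "4xx": the string doesn't start with a valid digit -> tagged :error.
:error = Integer.parse("zzz", 10)

# A successful parse returns the value plus the unparsed remainder.
{255, ""} = Integer.parse("ff", 16)

# "5xx": an out-of-range base is a programmer error and raises.
raised =
  try do
    Integer.parse("ff", 1)
  rescue
    ArgumentError -> :argument_error
  end
# raised == :argument_error
```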


Thanks for the feedback. I come at this with multiple perspectives. Before I got a job signed for Elixir, I tried to land another with Haskell, and I would have gotten it had the other candidate not had more experience (there were only two relevant candidates). Later I found out what a leap that would have been, going from general-purpose to pure FP :laughing: In addition, I have worked for many years with Scala, and Java for even more years. So I know how to reason about exceptions and how to handle “all corner cases”. Sometimes the latter would result in _ => Left "Something went horribly wrong" (using pseudo-language here) :laughing:. Other times explicit care is needed, with a match arm for all the different exceptions.

So the notion of completely failing the process just doesn’t feel right to me. I feel it’s an escape hatch. But maybe this escape hatch is the way to solve things in Elixir and Erlang? Do I get benefits from failing the process versus handling the errors? I have worked with actors in Scala with the Akka framework, and actors dying is not a problem.

But maybe this borders on needing its own thread? For example, “What are the benefits of failing a process versus handling exceptions explicitly?” It has to do with more than just the tediousness of error handling. The language syntax in Elixir doesn’t make it hard to code it up, or even to abstract out an error handler.

I think there are errors and there are errors. They can be of different types and reasons.

For example, you may be using input that has come from “outside”; then it would be reasonable to check the data, and if it has a bad format, return some value saying it was bad and please resend it in the correct format.

If however the error is due to internal errors in your code (yes, they do happen :wink:), then maybe the only sensible thing to do is to let it crash, i.e. crash that process and let the system clean up around it so it can keep going. Checking error values everywhere and at all levels will result in really messy and error-prone code.
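A sketch of the first case, validating untrusted "outside" input into a tagged result (the module and function names are made up):

```elixir
defmodule AgeInput do
  # Untrusted input: never raise, always return a tagged result
  # the caller can turn into a "please resend" response.
  def parse_age(string) when is_binary(string) do
    case Integer.parse(string) do
      {age, ""} when age in 0..150 -> {:ok, age}
      _ -> {:error, :invalid_age}
    end
  end
end

AgeInput.parse_age("42")    # => {:ok, 42}
AgeInput.parse_age("nope")  # => {:error, :invalid_age}
```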

There are errors and there are errors and there is no one way which is best to handle them all.


One distinction nobody seems to have talked about yet is the fact that there are independent processes on the BEAM. Where in other languages one might need to wrap things in layers of “don’t let errors out”, on the BEAM you often just spawn another process to do the work and see whether it responds with something useful or crashes.
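A minimal sketch of that pattern using spawn_monitor/1: the caller observes the crash as a message instead of being taken down by it (the raised error here is deliberate):

```elixir
# Run the risky work in its own process and monitor it; a crash is
# delivered to the caller as a :DOWN message.
{pid, ref} = spawn_monitor(fn -> raise "boom" end)

outcome =
  receive do
    {:DOWN, ^ref, :process, ^pid, _reason} -> :crashed
  after
    1_000 -> :still_running
  end
# outcome == :crashed, and the calling process is still alive
```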


I think Phoenix has nice facilities for this: fill out the error handler, and in controller processing it will 1) return the correct response to the user’s request and 2) then continue to propagate the error.

More generally, your callers should be expected to be good citizens of your API and interpret the error codes accordingly and back off. Then you only need handle poor citizens the same way you do others: with a rate-limiting policy.


Yes. This. We don’t talk about this often enough. Once someone gets truly comfortable with lightweight processes and fault tolerance, processes become a design technique. They can be used to limit the “blast radius” of an error. On a prior project I worked on, we had a sort of batch job we ran on behalf of our customers. We didn’t really need concurrency, but we launched each customer’s data in a separate process. This way, just in case one customer had bad/invalid/unexpected data or errors, it wouldn’t prevent all the subsequent customers’ data from being processed.
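A runnable sketch of that blast-radius idea using Task.Supervisor.async_stream_nolink/4 (the "customer" data is made up): the crashing item comes back as an {:exit, reason} entry while the others complete normally.

```elixir
{:ok, sup} = Task.Supervisor.start_link()

# The second "customer" has bad data and will crash its own task.
datasets = [%{id: 1, n: 10}, %{id: 2, n: :bad}, %{id: 3, n: 30}]

results =
  sup
  |> Task.Supervisor.async_stream_nolink(datasets, fn %{n: n} -> n * 2 end)
  |> Enum.to_list()

# results matches [ok: 20, exit: _, ok: 60]: the crash is contained
# to one task, and the good customers are unaffected.
```

The _nolink variant is what keeps the caller alive; with plain async_stream the crash would propagate through the link.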


By the mere virtue of using Phoenix, you are some 5 to 15 lines of code away from having it return HTTP 400, 422 or whatever else according to your specs when input is mis-shapen. It’s really easy to do.

I agree that when it comes to web endpoints we should make a very best effort to not just crash with an HTTP 500. In my work I return a lot of HTTP 400s plus appropriate validation error messages.

That being said, some errors should be left to bubble up so they can pop up in our monitoring systems, and then we can investigate. Would it be useful to have super-defensive code that issues a warning buried somewhere: "Integer.parse has been called with base 47"? IMO it would be more noticeable if we deliberately let this crash, so we get visibility of the problem and can see whether we can make the calling code stop doing stupid crap, or whether we really should add the super-defensive code.

Like with all things, it’s a balance. Don’t get overly paranoid. Put guards around DB operations and 3rd-party APIs, absolutely; but you can’t recover from a situation where somebody wants to parse an integer with base 47.