Difference between @type and @opaque

I have added a type definition for a struct (an Ecto schema actually) with @opaque like this:

```elixir
@opaque t() :: %__MODULE__{}
```

I did this so I can write Schema.t() wherever I want to annotate my schema.

But I noticed that when Dialyzer sees code accessing fields from inside the struct, it errors with something like: The @spec for the function does not match the success typing of the function.

When I change @opaque to @type, the error disappears.

So what is happening?

My guess is that I’m actually using internal fields from this opaque type, and Dialyzer detects this and is trying to tell me that this is not how an opaque type should be used.

@opaque means that there is such a type, which can be returned by public functions in this module, but you should not inspect it or try to assign any meaning to its value, as its structure is private.


So basically, one should not define Ecto schemas as opaque, since their fields are going to be used throughout the code.
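To illustrate, here is a minimal sketch (the MyApp.User module and its field are hypothetical) showing the two declarations side by side and the kind of caller code that triggers the opacity violation:

```elixir
defmodule MyApp.User do
  use Ecto.Schema

  # With @opaque, Dialyzer flags any code *outside* this module
  # that reads or pattern-matches the struct's fields:
  #
  #   @opaque t() :: %__MODULE__{}
  #
  # With @type, callers may freely inspect the fields:
  @type t() :: %__MODULE__{}

  schema "users" do
    field :name, :string
  end
end

# Elsewhere in the codebase — fine under @type,
# an opacity violation under @opaque:
#
#   def greeting(%MyApp.User{name: name}), do: "Hello, " <> name
```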

Good read: Help Dialyzer Help You! (…or Why you should use specs if you use opaque types) by Brujo Benavides, on Erlang Battleground (Medium).

Opaque types are just like exported types in the sense that you can use them from outside of the module where you define them. But there is a subtle difference: You are not supposed to use the definition of an opaque type outside its module.

Check, for instance, the docs for HashSet.t(): there is only the name of the type there and that’s intentional. The docs won’t tell you how that type is implemented and that’s because you should treat those things as black-boxes. You’re not supposed to deconstruct or pattern-match a HashSet.t(), you’re supposed to use the functions in the HashSet module to work with it.

For comparison, check the types in the String module. There, all exported types expose their internal structure and that’s intentional again. The idea here is that you are more than allowed to pattern-match on them.
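For instance, since String.t() is documented as being a binary(), prefix-matching on a string is perfectly legitimate:

```elixir
# String.t() is defined as binary(), so pattern-matching is allowed:
"Hello, " <> name = "Hello, world"
name
#=> "world"
```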

The internal representation of HashSet.t may eventually change and, since you never knew it, your code will still work. String.t, on the other hand, is not expected to ever change, and you can benefit from the fact that it’s implemented as a binary() to write your code.


Last night I ran into a bug with a rate limiter. Somehow it performed the first job earlier than expected after the interval was increased.

After debugging, I found an extension of the rate limiter that changed the value of state.interval directly. Doing so never triggered the schedule_next_run function, which would have cancelled the currently set next_run timer and set a new one with the new value of interval.

Setting the type of state.interval to opaque made a good reminder not to change the value of state.interval directly, but to use the function set_interval(state, interval).

(and as a todo: wrap the interval so it’s harder to make the mistake again)
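That wrapping could look something like this (a sketch with hypothetical names; the original rate limiter's state and set_interval/2 are not shown in the thread): the millisecond value lives behind an opaque type, so the only way to change the interval is through the module that knows to reschedule.

```elixir
defmodule RateLimiter.Interval do
  # Hypothetical wrapper: the millisecond value is private to this
  # module, so callers cannot write state.interval.ms directly.
  @opaque t() :: %__MODULE__{ms: pos_integer()}
  defstruct [:ms]

  @spec new(pos_integer()) :: t()
  def new(ms) when is_integer(ms) and ms > 0, do: %__MODULE__{ms: ms}

  @spec to_ms(t()) :: pos_integer()
  def to_ms(%__MODULE__{ms: ms}), do: ms
end
```

A set_interval(state, interval) in the rate limiter would then rebuild the state with RateLimiter.Interval.new/1 and reschedule the timer in the same place, making the original bug much harder to reintroduce.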