As above, it depends … Let’s take a very simple example:
```elixir
Mix.install([:benchee])

defmodule Example do
  @map %{a: 5, b: 10, c: 15}

  # Generates one function clause per key at compile time,
  # so the lookup becomes a pattern match on function heads.
  for {key, value} <- @map do
    def compile_time(unquote(key)), do: unquote(value)
  end

  # Looks the key up in a map at runtime on every call.
  def run_time(key) do
    Map.fetch!(%{a: 5, b: 10, c: 15}, key)
  end
end

Benchee.run(%{
  "compile time" => fn -> Example.compile_time(:b) end,
  "run time" => fn -> Example.run_time(:b) end
})
```
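For clarity, the `for` comprehension above relies on unquote fragments, so at compile time it expands into one `compile_time/1` clause per map entry, roughly equivalent to:

```elixir
def compile_time(:a), do: 5
def compile_time(:b), do: 10
def compile_time(:c), do: 15
```

The benchmark therefore compares a pattern match on function heads against a `Map.fetch!/2` lookup at runtime.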
So there is an improvement, but is it worth it? Well … If it’s a script that you use at most once a day, then you save around 2ns. That’s why I wrote that usually I don’t write macros in such cases.
However, at scale we see around a 10% improvement. If the same happened in an important part of your app or service, it would mean significantly improved UX (faster replies), or that you are able to support 10% more clients at a time. If the numbers were the same, then you could have an extra 4M clients working at the same time without a performance penalty.
Does it mean that macros are good only for big projects, or projects that have millions of users in production? Definitely not. There are cases where you need to deal with lots of data on your own. Surprisingly, it happens more often than you would think. A good example here is … web scrapers! You parse a huge amount of text as HTML (sometimes you would also like to parse CSS and JavaScript). If there is an improvement in the parser like the one in the example above, then we already have a big improvement, but that’s not all.
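To make that concrete, here is a minimal sketch (the `Scraper.Entities` module and its entity list are made up for illustration) of the same unquote-fragment trick applied to a lookup that a parser would hit for every entity it encounters:

```elixir
defmodule Scraper.Entities do
  # A tiny, hypothetical subset of HTML named entities, just for the sketch.
  @entities %{"amp" => "&", "lt" => "<", "gt" => ">", "quot" => "\""}

  # One clause per entity, generated at compile time.
  for {name, char} <- @entities do
    def decode(unquote(name)), do: unquote(char)
  end

  # Fallback for names we do not know.
  def decode(other), do: other
end

Scraper.Entities.decode("lt")
# => "<"
```

Each call is then a pattern match on a function head instead of a runtime map lookup, which is exactly the difference benchmarked above.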
Often when writing scrapers you have to deal with a list of links and/or pagination, so you may want to parse hundreds or even thousands of pages. On its own that’s still not much, but you may start to feel the extra time, counted in seconds. If you made such a change for learning purposes it was definitely worth it, but otherwise you simply have better things to do than looking for a “possible improvement”. That said … if you take a look at your code after a few months, or especially after years, you would easily find some things that you can immediately fix.
So there are cases where such optimisations are generally not worth it, but sometimes they may even be expected. That’s how we’re back to the start: “it depends”.
I have never worked on anything “gigantic”. Think about Phoenix contexts. Just as you do not put all contexts into a single file, you most likely would not put all enums and related logic into one file. There are a few reasons for it:
- Readability, i.e. too many lines of code
- Naming problems - if lots of enums or constants came with lots of imported macros or other functions, it would be hard to find names that do not conflict (in terms of readability) with any of the imports
- Warnings - Elixir warns when a module takes longer than 10s to compile - that’s a very important hint that a refactor is needed
- Content - I have no idea what you want to put there, but dozens or hundreds of imported functions most probably do not affect the compilation time too much.
There are most probably many other reasons, but this should be more than enough, so sooner or later you would end up splitting all of that into smaller pieces and then it would never be a problem. This is rather theoretical, as in practice you would not end up with a gigantic single file.
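For example, a rough sketch of what that splitting might look like (the module and enum names are made up for illustration):

```elixir
defmodule MyApp.Enums.Status do
  @values ~w(draft published archived)a

  def values, do: @values

  # Compile-time clauses, same trick as above, but scoped to one enum.
  for value <- @values do
    def valid?(unquote(value)), do: true
  end

  def valid?(_other), do: false
end

defmodule MyApp.Enums.Role do
  @values ~w(admin moderator user)a

  def values, do: @values

  for value <- @values do
    def valid?(unquote(value)), do: true
  end

  def valid?(_other), do: false
end
```

Each module stays small, its imports stay local, and a slow compile points at one file instead of everything at once.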