I’ve been adding and refining specs on my (12 KLOC Elixir) project. This is an iterative process, so I run Dialyzer a lot. Even after the PLT has been built, each run takes several seconds. I’d like to know if there is anything I can do to speed this up.
I’m currently running Dialyzer on a MacBook Air (1.8 GHz Intel Core i7, 4 GB RAM, dual core, 1 TB SSD, …). Would it run faster on a machine with more cores or RAM? Any other suggestions?
Hi @Rich_Morin, do you specify a `plt_add_deps` option? By defining it as `:apps_direct`, you can manually restrict the app tree, and the whole process can be a bit faster (not only building the PLT, but running Dialyzer afterwards too).
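A minimal sketch of where that option lives, assuming a standard dialyxir setup (the app name and version below are placeholders):

```elixir
# mix.exs — hypothetical project; only the :dialyzer entry matters here
def project do
  [
    app: :my_app,
    version: "0.1.0",
    deps: deps(),
    dialyzer: [
      # Restrict the PLT to direct OTP application dependencies
      plt_add_deps: :apps_direct
    ]
  ]
end
```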
Good suggestion; I’ll try that out. The documentation lists several options, but I’m not sure about their implications:
- :project - Direct Mix and OTP dependencies
- :apps_direct - Only direct OTP application dependencies (not the entire tree)
- :transitive - Mix and OTP application dependencies, included recursively
- :app_tree - Transitive OTP application dependencies, e.g. `mix app.tree` (default)
Comments and/or explanations would be appreciated!
I tried adding `plt_add_deps: :apps_direct`. This caused the PLT to be rebuilt, taking about 20 minutes. After that, I found that Dialyzer took about 10 seconds per run (down from 12). However, it now issues quite a few complaints about not being able to find information on `Plug`.
I assume that I can get rid of these by adding more options, but at 20 minutes per experiment, this isn’t my idea of fun. Can anyone suggest a set of options that Just Works for a typical Phoenix project?
I found that the default never errors (in terms of not being able to find modules).
This likely isn’t the answer you are after but MacBook Airs are quite weak and sadly 10-12 secs per Dialyzer run is pretty normal. My MBP 2015 checks ~400 files in 6-7 seconds.
Thanks for the feedback and data point. I also have a 2010 Mac Pro (12 cores, 3.4 GHz, oodles of RAM), so I plan to try running on that. I’d love to know whether Dialyzer takes advantage of multiple cores; can someone point me to any information on that?
AFAIK anything that uses OTP makes use of multicore implicitly. I’ll be very interested in your Mac Pro 2010 data point, btw; I am still torn between buying a refurbished (but maxed-out) Mac Pro 5,1 2012 or going all in with an iMac Pro 2017. I am very curious how well parallel compilation does on those old-ish Xeon CPUs.
> AFAIK anything that uses OTP makes use of multicore implicitly.
I’m sure Dialyzer gets some implicit multicore benefits in terms of IO and such, but unless its analysis is split up into multiple actors, only one core can be working on it at a time.
FWIW, I’m quite happy with the Mac Pro 5,1. I particularly like the ability to add disks, graphics cards, memory, etc. I’ll post my timing results RSN…
It tries to as best as it can, but some tasks are basically not parallelizable.
In `:apps_direct` mode, you’d need to add some apps manually through the `plt_add_apps` option; in my case, for a Phoenix app, I do:
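Something along these lines, assuming dialyxir and a typical Phoenix dependency set (the exact app list here is an assumption; add whichever apps Dialyzer reports as unknown):

```elixir
# mix.exs — hypothetical app list; extend it as Dialyzer complains
# about modules it cannot find (e.g. Plug, as mentioned above)
dialyzer: [
  plt_add_deps: :apps_direct,
  plt_add_apps: [:plug, :phoenix, :ecto, :mix, :ex_unit]
]
```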
I got there by having failures such as the one you’ve had with Plug. It took me some time, but by doing that, I remember reducing the PLT cache build time quite a bit, and also the running time, compared to before. I don’t have the measurements anymore, though.
Just ran it in a project with both options to compare. Both phases are indeed faster, but it’s the PLT caching that gets the bigger improvement compared to the default:

| run | time |
| --- | --- |
| dialyzer clean run | 1m 9s 277ms |
| dialyzer with plt cached | 3m 56s 530ms |
I finally got around to running Dialyzer on my 2010 Mac Pro. It takes about six seconds, as compared to the 10-12 it takes on my MB Air.
This likely isn’t the answer you are after, but 4-6 seconds for a Dialyzer run is pretty normal.
What CPU does your Mac Pro 2010 have? Additionally, is it a single or dual CPU?
It has two 6-core CPUs at 3.5 GHz.
Nice, I was thinking of getting the same but still unsure.
Have you ever measured how long it takes to compile Erlang from scratch on that machine? One of the recent 22.0.x releases, say.