I’ve been adding and refining specs on my (12 KLOC Elixir) project. This is an iterative process, so I run Dialyzer a lot. Even after the PLT has been built, each run takes several seconds. I’d like to know if there is anything I can do to speed this up.
I’m currently running Dialyzer on a MacBook Air (1.8 GHz Intel Core i7, 4 GB RAM, dual core, 1 TB SSD, …). Would it run faster on a machine with more cores or RAM? Any other suggestions?
By defining it as :apps_direct, you can manually restrict the app tree, and the whole process can be a bit faster (not only building the PLT, but also running Dialyzer afterward).
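For reference, assuming the project uses dialyxir (the usual way to run Dialyzer from Mix), the option goes under the `:dialyzer` key in `mix.exs`. A minimal sketch; the app name and versions here are placeholders, not from this thread:

```elixir
# mix.exs -- sketch assuming dialyxir; adjust names/versions to your project
def project do
  [
    app: :my_app,          # hypothetical app name
    version: "0.1.0",
    deps: deps(),
    dialyzer: [
      # Put only the apps you depend on directly into the PLT,
      # rather than the full transitive app tree:
      plt_add_deps: :apps_direct
    ]
  ]
end
```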
I tried adding plt_add_deps: :apps_direct. This caused the PLT to be rebuilt, taking about 20 minutes. After that, I found that Dialyzer took about 10 seconds per run (down from 12). However, it now issues quite a few complaints about not being able to find information on Plug....
I assume that I can get rid of these by adding more options, but at 20 minutes per experiment, this isn’t my idea of fun. Can anyone suggest a set of options that Just Works for a typical Phoenix project?
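One knob that may help with the “can’t find Plug” complaints is `plt_add_apps`, which forces named apps into the PLT. A hedged sketch (the option names come from dialyxir; the exact app list is an assumption for a typical Phoenix project, so add whichever apps Dialyzer actually complains about):

```elixir
# mix.exs -- sketch: keep the restricted app tree, but explicitly
# add apps that Dialyzer reports as unknown
dialyzer: [
  plt_add_deps: :apps_direct,
  plt_add_apps: [:plug, :phoenix, :ecto]  # hypothetical list
]
```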
In my experience, the default never errors out that way (i.e., with complaints about modules it can’t find).
This likely isn’t the answer you are after but MacBook Airs are quite weak and sadly 10-12 secs per Dialyzer run is pretty normal. My MBP 2015 checks ~400 files in 6-7 seconds.
Thanks for the feedback and data point. I also have a 2010 Mac Pro (12 cores, 3.4 GHz, oodles of RAM), so I plan to try running on that. I’d love to know whether Dialyzer takes advantage of multiple cores; can someone point me to any information on that?
AFAIK anything that uses OTP makes use of multicore implicitly. I’ll be very interested in your Mac Pro 2010 datapoint, btw; I am still torn between buying a refurbished (but maxed-out) Mac Pro 5,1 2012 or going all in with an iMac Pro 2017. I am very curious how well parallel compilation does on those old-ish Xeon CPUs.
AFAIK anything that uses OTP makes use of multicore implicitly.
I’m sure Dialyzer gets some implicit multicore benefits in terms of IO and such, but unless its analysis is split up into multiple actors, only one core can be working on it at a time.
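Either way, the number of BEAM scheduler threads puts an upper bound on what any OTP-parallelized tool can use; you can check it from IEx with a one-liner (a minimal sketch, using only standard-library calls):

```elixir
# Number of BEAM schedulers, normally one per logical core
schedulers = System.schedulers_online()
IO.puts("Schedulers online: #{schedulers}")
```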
FWIW, I’m quite happy with the Mac Pro 5,1. I particularly like the ability to add disks, graphics cards, memory, etc. I’ll post my timing results RSN…
I got there by hitting failures like the one you’ve had with Plug. It took me some time, but I remember that doing this reduced the PLT build time quite a bit, and the run time as well, compared to the default. I don’t have the measurements anymore, though.
EDIT:
Just ran it in a project with both options to compare. Both phases are indeed faster, but it’s the PLT caching that sees the bigger improvement compared to the default: