Hardware considerations for development

Sorry, I missed that, or lost track of it over the last few days.

For those cases where compilation does not saturate the available cores (up to at least 4-8), the most likely problem is going to be poor “disk” performance.
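If you suspect that, it's easy to sanity-check. Here's a rough sketch (the 256MB size and /tmp path are arbitrary choices, not anything from the posts above); `conv=fdatasync` makes dd flush to disk before reporting, so the throughput number reflects the drive rather than the page cache:

```shell
# rough sequential-write check; conv=fdatasync forces a flush so the
# reported throughput reflects the disk rather than the page cache
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
rm -f /tmp/ddtest
```

dd prints the throughput to stderr when it finishes. A droplet reporting tens of MB/s here, versus GB/s on a local NVMe drive, would explain a lot of compile-time variance.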

A high-res screen. 4K at 100% scaling, so bigger than 32" but stopping at 40".

I’ve got the Philips 43" monitor, but it’s too big. I had an ultrawide, but the 1080px of vertical space sucked.

Having multiple A4 sized screen spaces available is great. Also awesome for tailing multiple log files etc.

Just thought that I’d expand on my earlier findings. I’m still using the same method and project to test different hardware configurations, but the project has grown so compile times are quite different now, and the majority of the tests are disabled as the app is being rewritten at the moment.

Here’s the latest tests:

$5 DigitalOcean droplet - 1 vCPU & 1GB RAM
The single thread spent all of its compile time pinned at 98-100%. RAM usage was variable but stayed around 280MB.
Compile time: 2min 06sec
Test time: 3.6sec

$15 DigitalOcean droplet - 2 vCPU & 2GB RAM
Both threads varied between 50 - 95% during compilation. Rarely was a single thread pinned at 100%. RAM usage was the same as above.
Compile time: 1min 10sec
Test time: 1.2sec

$40 DigitalOcean droplet - 4 vCPU & 8GB RAM
All threads varied between 30-80%. Never pinned at 100%. RAM same as above.
Compile time: 1min 01sec
Test time: 1.1sec

$40 CPU-optimised DigitalOcean droplet - 2 dedicated vCPU & 4GB RAM
Both threads varied between 50 - 95% during compilation.
Compile time: 51sec
Test time: 0.8sec

$320 DigitalOcean droplet - 16 vCPU & 64GB RAM
Often threads were left unused, but sometimes all were used, peaking at ~40%. There were spikes to 60% on individual threads, but they were infrequent and short. RAM usage was slightly higher.
Compile time: 52sec
Test time: 0.9sec
Interestingly, one of my tests started failing from here on in. Most likely an isolation issue when running tests in parallel.

$320 CPU-optimised DigitalOcean droplet - 16 dedicated vCPU & 32GB RAM
Very similar to the regular $320 server. Spikes were at 50% instead, and RAM use climbed further to ~650MB.
Compile time: 38sec
Test time: 0.6sec

c1.xlarge.x86 - Xeon E5-2640 (2x), 16 core (32 thread) at 2.6GHz & 128GB RAM, 1.6TB NVMe drive. $1300 p/m
Work spread across all cores, with single threads pinned at 100% for brief moments. Usually around 10-30%.
Compile time: 48sec
Test time: 0.9sec

t1.small.x86 - Atom C2550, 4 core @ 2.4GHz & 8GB RAM, 80GB SSD.
Huge variability in thread usage. Occasionally a single thread will be pinned at 100% whilst the other three are doing nothing. Only server to display this behaviour.
Compile time: 1min 54sec
Test time: 3.2sec

I had wanted to try out their Cavium server with 96 cores @ 2.0GHz, but I found getting the project set up on the ARM version of Ubuntu quite tricky, and unfortunately I didn’t have the spare time to configure an environment I wasn’t going to use again. Given the relatively small gains between 2 threads and 32, I don’t foresee 96 cores doing much for compilation, but I imagine it would rock as the production server!

Reference: DO’s standard droplets run at 1.8GHz and the CPU-optimised ones at 2.6GHz, according to a blog post they made in 2018.

Conclusion: for remote development, I think DO’s $15 droplet hits the sweet spot. Spending a bit more on either of the $40 options might be justified, but there are very few time savings thereafter. I think this probably carries through to local dev too, where the point of diminishing returns comes quite quickly. If I were using the full test suite, I suspect I’d see more dramatic improvements in test time as threads increased.
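That shape of diminishing returns is what Amdahl’s law predicts: if only part of a build parallelises, extra cores stop paying off quickly. A toy illustration (the 80% parallel fraction here is an assumption for the sake of the example, not something measured from my project):

```shell
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p = parallel fraction
for n in 1 2 4 16; do
  awk -v n="$n" 'BEGIN { p = 0.8; printf "%2d cores: %.2fx\n", n, 1 / ((1 - p) + p / n) }'
done
# → 16 cores gives only 4.00x even with 80% of the work parallelisable
```

Going from 2 cores (1.67x) to 16 (4.00x) buys far less than the first doubling did, which roughly matches the compile times above.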

For Elixir development I think you’re in pretty good standing as long as you’ve got more than one thread (surely that’s everyone these days) and clock speeds of around 2.5GHz and above, the latter mostly to help with NodeJS compilation. It’s very impressive just how fast the core team have managed to make working with this language. Realistically, most local environments are going to be constrained by RAM more than by their processor, due to browser tabs, music playing, etc.

So, in short, for all of you out there thinking that an iMac Pro or even the new Mac Pro is justifiable because of the time savings you’ll see: I’m afraid it’s probably not. The reality is that unless you do something else like photo or video editing with your computer, we really don’t need powerful hardware. We’re more likely to find time savings in learning our tools (editors, linters, test suites, etc.) better. Guess I’ll have to start taking more photos to justify my hardware-lusting!

7 Likes

Interesting analysis! Thank you so much for putting the time – and money – behind this and giving us more actionable info.


iMac Pro 2017 vs. cloud dev machines

Allow me to chime in with some of my observations on the iMac Pro 2017 (some first-hand in a local store) and a lot of forum research in the last several months. There are several factors at play when it is compared to beefy cloud servers or end-user grade high-end machines for programming/gaming:

  1. Hosting providers optimise for user volume, so you will not get a truly dedicated 8+ core CPU unless you pay a hefty sum per month. Even if the hosting services you alluded to give you the best Xeon in the world on your VPS, the insane amount of context switching (due to other users being on the same physical CPU) will render any L1/L2/L3 caching entirely pointless. Compilation often loads a lot of stuff into cache lines, and Xeon CPUs benefit hugely from this due to their bigger caches (the 18-core has 42MB IIRC). So IMO, unless you tested on a dedicated bare-metal server reserved only for you, with a mid-range (or high-end) Xeon, I am not sure we are seeing true and objective numbers on compilation performance. Plus, the older Xeons don’t do that well on turbo boost; the Xeon-W 21xx and 3xxx CPUs are much better in that respect.

  2. Some compilation is hampered due to the I/O bandwidth and the storage speed (not sure about Erlang/Elixir’s though). The iMac Pro’s NVMe SSD has 3.3GB/s writing speed and 2.8GB/s reading speed. There’s no widely used compiler in the world that can saturate that, but I’d love to be proven wrong.

  3. The iMac Pro’s RAM has only 4 channels, but the RAM is quite fast, and is ECC. Still, in some workloads having the 14- or 18-core version would probably be pointless if 8+ cores (taking hyperthreading into consideration here) have to read and write to the RAM at several gigabytes a second. I am not seeing any widely known programmer workflow ever being bottlenecked by this but still, this is a legitimate limitation of the iMac Pro that some professional users might bump into.

  4. It’s undeniable that 2X cores does not magically give you 2X performance, that’s well proven by many people. However even at 2.20GHz (this is the throttled speed when all cores are utilised at 100%), the 18-core model definitely makes a lot of multicore workflows just fly. I’ve seen several people say their compilation speeds got 3x better by using the 10-core version, compared to a 2016 MacBook Pro.


Alternatives

As for alternatives, good Threadripper 2990WX configurations can and have outpaced the iMac Pro in some multicore compiler benchmarks. Sadly I’ve lost those links, but the message was pretty clear: if your software is well written enough to fully saturate your CPU cores, and if your I/O bus is not a bottleneck (which, sadly, for many motherboards is a fact of life), then even 3-4 more cores can make a noticeable difference.

Sadly though, none of these configurations have macOS so I gotta say, even though the maxed iMac Pro configuration is outrageously expensive (~$20k with the 256GB RAM variant), I still think it’s worth the investment if it lasts me 8-10 years.

Another important point is that AMD’s Zen 2 architecture is coming in several months, and it’s very wise not to spend a lot of money on their current high-end chips since they’ll be outmoded pretty soon. (But buying 1st-gen Threadrippers when their prices drop is definitely going to be one of the best CPU deals in history.)


Some iMac Pro benchmarks and observations links:


Why I still want to buy the iMac Pro 2017

There’s a lot of bashing of the iMac Pro 2017 out there. I’ve read at least 50 articles in the last several months, but the interesting thing is that for most people the argument goes like this: “yeah, this is super fast, but I don’t need it, so here are a gazillion reasons why you don’t need it either”.

Post-hoc rationalisation when you are not willing to invest isn’t a good argument. I found most of these articles to be badly written opinion pieces that are repeatedly surprised by the obvious revelations that 2X cores does not equal 2X performance and that they can’t spend $10k+ on a machine – both of which are well-known facts. I just can’t take seriously journalists and photographers complaining about the iMac Pro 2017 when it is very obviously not targeted at them.

My reasons to still want to buy it:

  1. We are at the very brink of Moore’s law. There’s only like what, the Zen2 AMD CPUs on the near horizon and that’s it. And all these beefed up consumer CPUs require more watts and much better cooling. So, from here and on all programming will have to become much better at CPU core saturation. The future is multicore computing – maybe CPU and GPU working in tandem on the same tasks even. Investing in machines with a lot of cores, even if they work at “only” 2.20GHz with all cores loaded, is smarter than buying an i7-9900K. It’s a future-proof investment.

  2. Many computer configurations only accentuate a single aspect – usually the CPU – and then cheap out on everything else to keep the prices in the mass-affordable range. This results in some really weird and anaemic configurations where the CPU spends most of its time waiting for data to be moved around. The iMac Pro is beefed up on almost all fronts, however; its NVMe SSD is practically just a few times slower than the RAM! Its only downside is that the memory is 4-channel which, as mentioned above, can be a bottleneck in very specific workloads (which I don’t believe we programmers will ever stumble upon).

  3. Something almost nobody ever mentions – the iMac Pro is super silent, even when fully loaded for hours! Why is nobody EVER talking about this? I can’t stand my MacBook Pro’s jet engines when I load it. I want to throw it out the window and run away. Am I the only person on the planet who feels much better when their work machine can only be heard slightly, once or twice a day?

…I can list several more but this became way too long.

In the end the answer to the question “should I buy $SHINY_EXPENSIVE_TECH?” is always “it depends on what you need exactly and if you are willing to invest long-term”. :slight_smile:

2 Likes

I actually went into a local Apple stockist (that wasn’t an official Apple store) hoping that as they’re a smaller company in a smaller city with a closer, friendlier community; they’d let me run these benchmarks on their base spec iMac Pro. They were very helpful, impressed at the Nix-based solution I proposed for installation/cleaning up afterwards, and perfectly willing to help, but unfortunately all the demo machines are really tightly locked down to a specific version of the OS that allows you to install absolutely nothing. First I’d seen or heard of such a macOS variant, but they were 100% right. I tried and failed to install anything extra on it.

The cache is a figure that I’ve been keeping half an eye on. Since I started this thread back in October, I’ve used a few different Hetzner servers. Most had consumer processors, usually the i7-4770, but a few were older Xeons. The difference cache made vs. clock speed was noticeable, depending on the task: compilation favoured cache when most other things were equal, but clock-speed differentials of about 1GHz overcame this. I didn’t do any actual timed compiles, hence those results not being here.
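For anyone wanting to compare boxes the same way, the model, cache size and current clock are easy to pull on Linux without installing anything (the field names below assume an x86 kernel; ARM machines label /proc/cpuinfo differently):

```shell
# what CPU is this, how big is its cache, and what is it clocked at right now?
grep -m1 'model name' /proc/cpuinfo
grep -m1 'cache size' /proc/cpuinfo
grep -m1 'cpu MHz'    /proc/cpuinfo
```

Note that `cache size` here is usually the last-level cache per core complex, and `cpu MHz` is the instantaneous clock, so it's worth sampling it during a compile rather than at idle.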

The machines from Packet should have all been dedicated bare-metal servers. I don’t know much about their infrastructure and provisioning but something felt amiss with their performance, not to mention their pricing!

I felt that the twin Xeon E5 server with the NVMe drive should have been faster than it was, particularly for $1300 per month! DigitalOcean’s $320 CPU-optimised droplet outperformed it at both compiling and running tests. My 2015 MacBook Pro also performs similarly… so something’s not right here.

Equally, their Atom server has four times the cores of the $5 droplet, and at least 0.6GHz more, yet performed pretty similarly. My 2014 dual-core 1.4GHz Mac Mini was also in the same ballpark (faster than both).

Both Packet servers displayed quite erratic behaviour in htop, with some threads getting pinned at 100% whilst others sat idle, some being used around 10-20%, others 80-90%. There were few similarities over multiple runs, whereas all the other servers/computers were pretty consistent and used whatever resources were available in a stable and predictable way.
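Eyeballing htop makes that hard to capture over a whole run; here’s a crude sketch that samples per-core busy percentages from /proc/stat over a one-second window, which can be logged repeatedly during a compile. (Linux only; the awk field positions assume the standard 10-counter /proc/stat layout on modern kernels.)

```shell
# snapshot per-core counters, wait a second, snapshot again, diff the two
grep '^cpu[0-9]' /proc/stat > /tmp/cpu_before
sleep 1
grep '^cpu[0-9]' /proc/stat > /tmp/cpu_after
paste /tmp/cpu_before /tmp/cpu_after | awk '{
  # field 5/16 = idle before/after; fields 2-11/13-22 = all counters
  idle1 = $5;  total1 = 0; for (i = 2;  i <= 11; i++) total1 += $i
  idle2 = $16; total2 = 0; for (i = 13; i <= 22; i++) total2 += $i
  printf "%s: %.0f%% busy\n", $1, 100 * (1 - (idle2 - idle1) / (total2 - total1))
}'
rm -f /tmp/cpu_before /tmp/cpu_after
```

Running this in a loop and redirecting to a file gives a consistent record to compare across providers, instead of impressions from watching htop.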

I feel that DigitalOcean’s VPSes are actually very performant for their specs. This is widely accepted when they’re compared with other VPS providers, but I had expected dedicated servers to walk all over them past a certain price threshold (anecdotally, this was always the $40 p/m point).

I may try some benchmarks on Hetzner’s dedicated servers, but their provisioning is slower and comes with setup fees. Also, at this point I feel I’m flogging a dead horse, as I think I know what I should be doing.

Despite everything that Apple has done lately, all of which I think is largely positive, I can’t ignore the work that AMD are doing. I’m finding it really hard not to just build a home server using a Threadripper. The only thing stopping me is that I’d lose the great benefit of an always-on, powerful server connected to a fast internet connection with a stable electricity supply that I don’t have to manage. Internet speeds at my home are OK, electricity is variable, and unless I really have to, I don’t want to be building servers and having that distract from actual work.

Another thing that’s kept me from the Threadripper: I actually prefer developing on Linux these days, but setting up/managing a Linux desktop — no thanks. Also, I do some photography work, and if I’m spending all this money on a computer, I’d like to be able to use Capture One on it — that rules Linux out. I’m either in Hackintosh territory or dual-booting Windows & Linux on the Threadripper with macOS on everything else. It’s just too messy…

Another reason not to drop any large sums of cash just yet…

I’ve actually seen little of that. Most people I read/watch/talk to have been very complimentary, and that’s across a wide range of professions. Many have said that they would have been entirely unfussed if the Mac Pro hadn’t been announced, as the iMac Pro does everything they need. The only real complaint has been the all-in-one nature, but its price is so good that it’s less about upgrades being cost-effective and more that their requirements just can’t be catered for by an AIO.

Journalists love to hype/divide opinions/create a good story. Photographers might be using Lightroom Classic, in which case all the computing power in the world won’t make their workflow fast. Plus, it’s widely understood that Lightroom loves clock speed, not cores. Recent shifts to 42MP cameras don’t help matters…

Agreed, very little software actually uses a CPU properly. Even professional applications that could benefit hugely from multiple cores either don’t use them or don’t use them well.

Biggest draw towards it and the Mac Pro: my iPad Pro. Using the iPad for heavy photo editing, you never get the impression it’s working hard. My MacBook Pro takes off just importing the same photos…


All in all, I’m coming back around to local development which actually puts a lot of these discussions aside as the pool of available and suitable hardware is that much smaller if you want to use macOS, which I do. As I said, I do photography work too and don’t want to spend £££ on hardware (rented in the cloud or purchased and sitting under my desk) that can’t help with that. The possibility of developing my first iOS app soon means that I’m firmly stuck with macOS/Apple.

I don’t know what hardware will replace my ageing computers, but they’ve got life left in them yet; for now at least they are “enough” and sit above the threshold where they’d start damaging my productivity. I suspect it will be Swift compilation or photo editing that prompts an upgrade, not Elixir compilation having gotten too slow.

2 Likes

Well, I tested a server from Hetzner just out of curiosity — I can’t help myself!

It was an i7-4770, which is a quad core with hyper-threading, a base clock of 3.4GHz and boost to 3.9GHz, along with 32GB of DDR3 RAM and SATA SSDs — all for around €36 a month!

Compile time was down to 32 seconds, which is comfortably the fastest I’ve seen, and once again suggests that clock speed is more important than available cores for compilation. Again, the inverse is probably true for test speed.

That’s good news for you @dimitarvp as I suspect the 8 and 10 core iMac Pros would fly on both!

I’ll be sticking with my MacBook Pro and shutting down my remote development environments for now, but may end up trying a proper setup on Hetzner in the near future.

2 Likes

From my links above: https://openbenchmarking.org/showdown/pts/build-linux-kernel

No Xeon-W there but still, if you have not entirely given up on the home-lab idea (I want to do it, and at this point it’s not even the money that’s the blocker, it’s my health and energy levels) and are OK with a Xeon E5-2650 v3 or E5-2670 v3, then a refurbished Dell PowerEdge 730 can be used for the SSH’d programming described above – amongst probably 500 other things.

Granted, the specs on that thing are a bit like killing mosquitoes with a space ship though. :003:


I’ve read a lot on cores-vs-clock for compiling and tests and, truthfully, I’m still leaning towards the 18-core CPU (Xeon W-2195). I can live with theoretically slightly slower compilation compared to the supposedly best variant (the 10-core Xeon W-2155). That’s because, first, I run tests a lot more often than I compile, and second, I plan to transition to being a researcher who works with heavy simulations. Having more cores will always win in those scenarios.


Unrelated: today was very hot and my MBP 2015’s fans kept spinning the entire damn day! :angry: I really have to get out there and see what I can do to arrange an iMac Pro for myself. Not easy when you don’t have the entire sum, though.

I have been using ThinkPads for some years now, and have used other Linux machines, and I cannot see my experience reflected in what you say.

So sorry, but this is not true… Not all Linux laptops are free of problems, but neither are Macs… the difference is that Macs have status, and people forgive their defects and say it was bad luck, but with Linux they just give up and say it’s not worth it.

This is one I don’t get… wasting so much money on low-spec machines when, compared with Linux machines, you get much less for the same money. Yes, a Mac turns heads and gives you that status, but unless I need to develop for the Apple ecosystem I will never waste my money on one.

1 Like

random MBP 2015 tips (how I have prolonged the life of mine…):
Disable Turbo Boost: http://tbswitcher.rugarciap.com (I manually re-enable it if I’m compiling a native app or similar)
Give it an internal cleanup and apply new thermal paste (CPU/GPU) Amazon.com
If needed, get a new battery (ifixit.com) - check the current battery with coconutBattery 3.9 - by coconut-flavour.com
Use Safari for browsing as much as possible…

Already done. It just doesn’t help when it’s 37 Celsius around here for 12 hours.

1 Like

About the Packet servers btw: https://www.phoronix.com/scan.php?page=article&item=packet-server-tests&num=4

I think you’re overlooking the obvious…

  • Are MBPs overpriced - yes
  • Are MBPs designed for hardcore software development - no
  • Are MBPs problem free - no

Are MBPs much easier to deal with than whatever-flavour-of-Linux - YES. So you’re paying for convenience. People pay for convenience.

While my 15+ month stint with Arch Linux on a Thinkpad was educational and useful enough for software development I really didn’t trust it for day-to-day consumer type use - at the time I swapped back to Windows 10 for that.

While an MBP isn’t a brawny dev machine, it isn’t Windows, and it’s usually Unix-like enough for a lot of software development.

If forced to again, I might just try Ubuntu on a Thinkpad but frankly I’d likely just stick Ubuntu on a brawny mini-PC for the few times where I do really need it (or perhaps look into a temporary, rental “cloud solution”).

1 Like

Well, I have used Ubuntu for many years without any major headaches, so I cannot speak for Arch Linux.

I have seen my colleagues on the mobile team having headaches with their Macs, especially the ones on the Android team, who were forced to start using Macs, so I don’t see that much convenience. But this is my point of view.

Now, saying that Linux computers are not even up for consideration and that Macs are the way to go is just an illusion. But hey, I see this behaviour with everything – cars, mobile phones, you name it… if the brand has status, anything else is treated as not worth it.

I couldn’t care less about the brand, I just have better things to do than play sysadmin on my primary platform.

1 Like

You may not, but a lot of people do.

So I have to ask: have you never dropped to the terminal to configure or fix something on your Mac?

I live on the terminal - it’s not a matter of “dropping to it”. So far I’ve been fortunate enough not to be faced with a non-bootable MBP.

Several times Arch presented me with a “there goes my day” problem for some inexplicable reason (Linux Mint was a no-go before that).

2 Likes

I’ve never had a non-bootable Linux machine, but once more, I have always used Ubuntu, and you always seem to choose niche Linux distributions, so I am not surprised that you have such a negative image of Linux.

But the problem now is that once you have formed this bad image, even if you really try to use Ubuntu you will decide it’s not worth your time at the first minimal issue you face with it; that is how we human beings behave regarding past negative experiences.

Sorry to intrude on your discussion guys, but can you stick to the topic in question? CPU count vs. speed vs. memory vs. disk for compilation.

2 Likes

Unfortunately I don’t have anywhere that a full on rack could live (noise, space, heat) in my current property, so it would have to be a tower. Given those benchmarks above, I’d be inclined to go for a Threadripper 2950X over the Xeons, but there is still the issue of Zen 2 making any purchases at the moment unwise.

I may have to do these. coconutBattery puts it at 81% of its original capacity, which isn’t bad, but it feels worse than that.

@peerreynders & @Exadra37 I really wanted to avoid this discussion becoming a Mac vs. Linux vs. X pissing contest like so many threads before it (including this one). I wanted to discuss hardware in quite broad (but also specific) terms, and OS is a part of that. Yes, some of us agree and some of us disagree, but I’m sure we can all agree that this great holy OS war is dull by now.

We all use the tools we want to use. We don’t all need to be told we’re wrong, just because our priorities and preferences differ from yours.

4 Likes

OK, it’s 2020 and I was wondering whether there is anything that could lure me away from Apple MacBook Pros for development. Are there any viable alternatives out there now? And significantly cheaper ones?