Content Delivery Networks are used all over the place. Many libraries provide easy options for inclusion of their core code in your application. However, I have the feeling that CDNs are greatly over-used.
That is, I totally understand using a separate webserver whose only job is to provide certain static pieces of content, like:
Images (logos, backgrounds) that are used by your application’s styling.
Font files that include the typefaces and icons your application uses.
And maybe your application uses some audio snippets or other kinds of static data.
In these cases, using a CDN is nice because it reduces the load on your main application. (Still, I would say that for smaller applications with only a couple of hundred visitors at most, the efficiency gain is negligible, because even when also hosting these media files, your server should not be under that much load.)
When the external service is down, your web app breaks, because its logic and/or its styling depends on it. When a CDN provider that many websites use to include e.g. jQuery goes down, suddenly 75% of commonly visited websites break.
Even if the code you include from an external source looks fine now, you have no influence on when the CDN service alters the code files you point to (and no guarantee that this will not happen), which is a huge security vulnerability.
I get really fed up when I try to work on a project on the train, running it locally in my development environment, and everything stops working because certain files are loaded from external sources.
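If you do end up including a library from a CDN, one common mitigation for both outages and offline development is a local fallback: try the CDN first, and if the library did not load, inject a self-hosted copy. A minimal sketch, assuming jQuery (which sets the `window.jQuery` global when it loads); the CDN URL and the local path are hypothetical examples:

```html
<!-- Try the CDN copy first (URL is a hypothetical example). -->
<script src="https://cdn.example.com/jquery/jquery.min.js"></script>
<!-- If the CDN copy failed to load (outage, offline, blocked),
     fall back to a self-hosted copy at a path of your choosing. -->
<script>
  window.jQuery || document.write('<script src="/js/jquery.min.js"><\/script>');
</script>
```

This keeps the page working on the train as well, at the cost of first waiting for the CDN request to fail before the local copy loads.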
Unfortunately I see both coworkers and other developers use code-hosting CDNs with increasing frequency. I am trying to understand what the reason for this is. Is it just because ‘it is simple’, even though it is highly dangerous? Is it because they have not thought about the negative consequences? Or are there strong (enough) positives that I am not seeing?
Why would anyone use a code-hosting CDN for anything but the simplest throwaway proofs of concept?
The other thing you have to take into consideration is server load, transfer, etc. The less busy your web server is, the better it performs.
Speed. I believe the main idea behind CDNs is that they are able to serve files from locations that are very close to the end user - because their ‘network’ spans all major areas of the world with servers in most countries or sometimes several spread across each country.
Personally I think they are overrated as well - the infrastructure of the internet in most countries is far better than it was when CDNs became a thing, making them less useful imo.
@qqwy: I definitely agree with you on all of those points. I personally also avoid using CDNs.
In my opinion, they are often a typical case of premature over-optimization: unless you actually encounter latency problems caused by your servers being too far away from end users, there is simply no problem that a CDN would solve for you.
Let’s be honest: For most applications it doesn’t matter if a JS or image file has an additional 20 ms latency or not.
On top of this, CDN traffic is insanely expensive. Even if you want to separate your application from hosting static resources (which can totally make sense), I would try to find a high-quality shared or managed hosting provider before anything else.
In my experience you can easily get good shared hosting for 10 to 20 Euros/month and serve several TB of data from them. Serving the same amount of data from a CDN would set you back hundreds or even thousands of Euros.
What is the reason for this? Mostly the ‘high uptime’ + ‘high availability in local data centers’ that large CDN-providers promise?
I just found out that there exists something called Subresource Integrity, which might in the future be used to ensure that a CDN is actually serving the files you expect it to serve. But until that feature has become stable, I cannot fathom that the small speed increase (even if you serve a three-megabyte JS file once per unique visitor) is worth the huge security risk.
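For reference, this is what Subresource Integrity looks like in markup: the browser fetches the file, hashes it, and refuses to execute it if the hash does not match the `integrity` attribute. The URL here is a hypothetical example, and the hash value is a placeholder you would generate yourself from the exact file you audited:

```html
<!-- The integrity attribute pins the exact file contents; the crossorigin
     attribute is required for SRI checks on cross-origin requests. -->
<script src="https://cdn.example.com/jquery.min.js"
        integrity="sha384-BASE64_HASH_OF_THE_AUDITED_FILE"
        crossorigin="anonymous"></script>
<!-- The hash placeholder above can be generated with, e.g.:
     openssl dgst -sha384 -binary jquery.min.js | openssl base64 -A -->
```

If the CDN ever swaps the file out from under you, the script simply fails to load, which turns a silent security hole into visible breakage.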
The reason is probably simply that big companies who actually do need CDNs are still willing to pay those prices.
If you look at AWS CloudFront, they aim for 99.9% availability, which amounts to less than 9 hours of downtime per year and is pretty reasonable for most applications. Traditional hosting providers offer similar or even better numbers.
Cloudflare is sloooooow in my experience; in tests I did, it increased page access time from a split second to over 10 seconds for the full download. I’m not a fan of it.
A CDN itself is not bad; the localization is nice and the caching across sites is nice. However, I don’t trust most of the networks due to stupidity they’ve pulled, and I really ReallyREALLY do not like the crap most websites pull by downloading thousands of things per page. It is for reasons like this that my workplace blocks non-SSL 3rd-party traffic by default (both the requester and the requestee need to be SSL to work around it). It substantially reduces the network load, we’ve had no virus issues since then either, and there are a few other bonuses.
There are definitely material performance improvements if your audience is more than a few hops away from your origin content. A lot of my work is in Asia, and a good CDN will improve response time to the user on the order of seconds or more in many cases.