Remember that time when thousands, or even Oh My Goodness tens of thousands of downloads a month was worth mentioning as a measure of project or product health? It’s not so long ago. But the game has irrevocably changed. What happens when everyone is contributing code, builds are automated, and libraries are stupid easy to include in your code? With good engineering practice you won’t be open to a shit show like left-pad but this post isn’t really about dependency management. It’s more about the absurdity of the scale we’re dealing with in the age of package managers like npm.
“Algolia is developing multiple open source projects (like InstantSearch.js) to simplify the integration process of our search engine. This month, with the help of jsDelivr, we reached a milestone 1 billion downloads (that’s 26TB!) across all of our libraries.
Now seems like a good time to explain how this works and the choices we made.” [italics mine]
So with this kind of scale around library consumption, which would have been unprecedented even 10 years ago, we now have specialised CDNs built just for delivering open source libraries, like jsDelivr. For an old guy this is totally mind-blowing.
As Algolia’s blog explains:
“jsDelivr is a free CDN built exactly for this. It offers production quality of service and natively integrates with npm.”
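To make that npm integration concrete: jsDelivr exposes packages published to npm directly over HTTP, so including a library is a single script tag — no install step, no build. A minimal sketch of the pattern (the package and version here are illustrative, not taken from Algolia's post):

```html
<!-- jsDelivr mirrors the npm registry at a predictable URL:
     https://cdn.jsdelivr.net/npm/<package>@<version>/<file> -->
<script src="https://cdn.jsdelivr.net/npm/jquery@3.6.0/dist/jquery.min.js"></script>
```

One URL per library, served from edge locations worldwide — which goes some way to explaining the request volumes described below.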
According to the post on Algolia’s blog:
“jsDelivr serves about 19 billion requests every month. That’s almost 500TB of bandwidth of small ~3kb js files — all that from 176 locations all around the world.”
Check out Docker Hub download numbers for more high scale craziness in libraries and packages. This really is a different world, with new domain specific tools emerging for new ways of writing, consuming and managing software.
For disclosure purposes I should mention that Docker is a client.
(Read this and other great posts @ RedMonk)