Considering the ongoing impact of hyper-connectivity and the pervasive nature of the Internet, it messes with your head a little to think that what we view as such a modern entity grew out of point-to-point connectivity between mainframe computers in the 1950s and 1960s. The Internet Protocol Suite, or TCP/IP, was standardized in 1982, making way for the Internet as we now know it. Conceptually the network was designed to provide point-to-point connectivity between two relatively fixed end points. In fact, the network mimics the older telephony style of connectivity in much the same way that email is simply an electronic version of snail mail. The Internet that we use today is essentially a telephone system: from one host you “dial up” the other and communicate, and the “work” happens at the end points.
The problem is that both the purpose of the network and the end points have drastically changed from the original design. What was host-to-host communication is now one-to-many or many-to-many information dissemination. And as for hosts, they’re in our pockets and our computer backpacks instead of (or as well as) on a desk somewhere. The network itself isn’t aware of the information or content, not even the content type; it simply answers a request for the content. That’s fine when one person requests a YouTube video stream, but what happens when 10,000 people simultaneously ask to view that same video? The host acts as designed and answers each request individually. There’s no efficiency or ability to optimize for the video stream; the host is just answering the “phone”. As I said, the work, all of it, happens at the end point…or in this case call it the choke point. The new Internet is about huge-scale activities: Facebook has 750M members, Google processes more than 1,000,000,000 search queries per day (as of March 2011), 35 hours of video are uploaded to YouTube every minute and there are over 2 billion views per day (March 2011)…I could keep going but you get the point. Companies like Google and Facebook have massive compute-oriented datacenters scattered about the globe to do the heavy lifting of your “communication” request; it’s resource intensive and horribly inefficient. The Internet today is about connecting people to each other and to relevant content, yet the network is unaware of the content or even the nature of your request to the host…it’s a telephone line, completely unaware of the nature of the traffic / packets. The network is transmitting from place to place, yet in this context location is irrelevant; it’s really about the content. From a security perspective the only approach is to try to provide security at the channel level, not at the content / data level.
For example, a secure email server, as long as it has network access, will happily send spam, because the network has no idea what is traveling on the channel; the channel’s security protocols have been met.
Now add to the network design problems by factoring in the explosive growth of content / data. IDC estimates that the amount of data created in the world doubles every two years, that there were approximately 1.2 zettabytes created in 2010, and that the figure will grow to 1.8 zettabytes this year. To put that in perspective, that would be the equivalent of every person in the US tweeting three tweets per minute non-stop for ~27,000 years, or ~200 billion 2-hour high-def movies, which would take about 47 million years to watch end to end! Okay, let’s just say a bunch of data is created every minute.
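Those comparisons are easy to sanity-check with a quick back-of-the-envelope calculation. The per-unit figures here are my own assumptions, not IDC’s: roughly 140 bytes per tweet, a US population of about 310 million, and ~9 GB per 2-hour high-def movie.

```python
# Back-of-the-envelope check of the 1.8 zettabyte comparisons.
# Assumed figures (not from IDC): 140 bytes per tweet, 310 million
# people in the US, ~9 GB per 2-hour high-def movie.

ZETTABYTE = 10**21                       # bytes
data_2011 = 1.8 * ZETTABYTE              # IDC estimate for this year

# Tweets: everyone in the US tweeting 3 tweets per minute, non-stop.
bytes_per_minute = 310e6 * 3 * 140
minutes_per_year = 60 * 24 * 365
tweet_years = data_2011 / (bytes_per_minute * minutes_per_year)
print(f"~{tweet_years:,.0f} years of non-stop tweeting")

# Movies: 2-hour HD movies at ~9 GB each, watched back to back.
movies = data_2011 / 9e9
watch_years = movies * 2 / (24 * 365)
print(f"~{movies / 1e9:.0f} billion movies, "
      f"~{watch_years / 1e6:.0f} million years to watch")
```

With those assumptions the numbers land in the same ballpark as the figures above, which is all a comparison like this is meant to do.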
So what’s the solution to this nasty and growing problem? Well, according to the folks at Xerox PARC, who by the way brought you Ethernet, the mouse, the laser printer and a lot of other useful stuff, it’s to make the content the focus of the network. In other words, instead of communicating with location-specific hosts to “get” content, the network would be built of self-aware content. The focus is on the data, not on the physical location of the data, so the network disseminates content instead of holding an abstract conversation between two location-specific nodes. Changing to a context-aware, content-centric network allows content to move to wherever it’s needed, when it’s needed. The “smart” content can be anywhere, so location doesn’t matter, and any device that hears a request for content can respond using whatever means are available. Content-centric networks (CCN) use the existing plumbing but change the way the plumbing is managed and content is distributed. Some solutions have already started to bridge the gap between the old node-based network and smart content, like BitTorrent and Akamai’s content delivery network, which demonstrate the concept.
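To make the shift concrete, here’s a toy sketch of the idea: a request names *content*, not a host, and any node holding a copy may answer. This is an illustration of the concept only, not PARC’s actual CCN protocol, and all the names in it are made up.

```python
# Toy sketch of content-centric retrieval: a request carries a content
# name, and whichever node has a cached copy answers. Not the real
# CCN/CCNx protocol; node and content names here are hypothetical.

class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}           # content name -> data (local cache)

    def publish(self, content_name, data):
        self.store[content_name] = data

    def satisfy(self, content_name):
        return self.store.get(content_name)

def request(network, content_name):
    """Broadcast an interest; the first node holding the content answers."""
    for node in network:
        data = node.satisfy(content_name)
        if data is not None:
            return node.name, data
    return None, None

# The origin server and a nearby peer both hold a copy of a blog post.
origin = Node("typepad-server")
peer = Node("nearby-smartphone")
origin.publish("/blog/ccn-post", "CCN explained...")
peer.publish("/blog/ccn-post", "CCN explained...")

# The requester doesn't care where the content lives; here the nearby
# peer happens to be reached first and satisfies the request.
who, data = request([peer, origin], "/blog/ccn-post")
```

The point of the sketch is that the requester never addresses a host at all; location drops out of the conversation entirely.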
According to the PARC research, CCNs are self-organizing and provide a much higher level of security by securing at the content / data level instead of at the channel. Think of the flow like this: you’re on the train to Boston using your iPad and need a blog post on CCN. Instead of having to manually connect to the hot spot on the train, bounce around on a bunch of servers until you get to the TypePad server for my blog, find the post and retrieve it, you would simply bounce off the smartphone of the person sitting in front of you and retrieve the post (yes, I probably oversimplified that a bit much, but hopefully you get the point). CCN is location-flexible or mobile, automated, and uses local protocols, so instead of being in or out of network, you’re always connected, even in low-connectivity areas. It’s like being able to receive a package anywhere you are, not just at the shipping address you specify. CCN is the enabler for what I’ve referred to as people-centric networks. They’re a part of the new organic business network that I believe is the future architecture of business. Here’s some more research on CCN’s.
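That “secure the content, not the channel” idea can also be sketched in a few lines: the publisher signs each named piece of content, and a consumer verifies the signature no matter which node delivered the copy. PARC’s design uses per-content public-key signatures; the HMAC and key below are stand-ins of my own to keep the example self-contained.

```python
# Sketch of content-level security: sign the content itself, so any
# cached copy can be verified regardless of the delivery channel.
# Real CCN signs content with public-key cryptography; the shared
# HMAC key here is a hypothetical stand-in for illustration.
import hashlib
import hmac

PUBLISHER_KEY = b"demo-key"   # assumption: a demo key, not part of CCN

def sign(content_name, data):
    msg = content_name.encode() + b"|" + data
    return hmac.new(PUBLISHER_KEY, msg, hashlib.sha256).hexdigest()

def verify(content_name, data, signature):
    return hmac.compare_digest(sign(content_name, data), signature)

# The publisher signs the post once; any cache can serve (data, signature).
name, data = "/blog/ccn-post", b"CCN explained..."
sig = sign(name, data)

# A copy served by a random smartphone still verifies...
assert verify(name, data, sig)
# ...while tampered content fails, however "secure" the channel was.
assert not verify(name, b"spam!", sig)
```

Contrast this with the spam example earlier: channel security tells you the pipe was trusted, while content security tells you the data itself is what the publisher produced.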
CCN is real, with actual networks under development, and should be available in the next 18 months. PARC has set up an open source project called CCNx to distribute the software and promote the standard. CCNs promise content caching to reduce network congestion and greatly decrease content delivery times, greater security, and a much simpler network design.