The last time the Internet had a major upgrade was in 1986.
At this point, it’s hard to imagine life without the Internet, at least in the developed world. But buried underneath the breathtaking Web applications and streaming media that we use on a daily basis, the actual software that makes the Internet work is starting to show its age.
As recent events have demonstrated all too clearly, the Internet is especially vulnerable to deliberate attacks. Massive networks of hijacked computers, known as “botnets,” can be used to deluge target websites with enough traffic to essentially shut them down, much as a radio station’s call-in contest keeps its phone line constantly busy. These attacks succeed partly because they exploit weaknesses in the existing Internet protocols.
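The mechanics can be sketched with a toy model: a server admits connections into a fixed-capacity queue, and a flood of bot requests fills that queue before a legitimate request arrives. (The queue size, request counts, and names below are invented for illustration, not drawn from any real attack.)

```python
from collections import deque

# Toy model of a server with a fixed-capacity connection queue.
QUEUE_CAPACITY = 10

def serve(requests, capacity=QUEUE_CAPACITY):
    """Admit requests until the queue is full; drop the rest."""
    queue, dropped = deque(), []
    for req in requests:
        if len(queue) < capacity:
            queue.append(req)
        else:
            dropped.append(req)
    return queue, dropped

# A botnet's flood arrives ahead of a single legitimate request.
flood = [f"bot-{i}" for i in range(100)]
traffic = flood + ["legitimate-user"]
queue, dropped = serve(traffic)

print("legitimate-user" in queue)  # → False: the real request never gets in
print(len(dropped))                # → 91 requests turned away
```

Like the busy phone line, the server is not broken; it is simply saturated with junk, so real users cannot get through.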
Twitter, the social-networking site with millions of users, was the victim of just such a denial-of-service (DoS) attack in early August. There is much speculation that the attack was triggered by the postings of a Georgian separatist, but whatever the root cause, all it took was a few keystrokes to unleash a botnet’s fury against Twitter, taking the site down for half a day.
Like a jazzy sports car that has never had its oil changed, the underlying protocols of the Internet have remained largely unchanged since the network came into being in the mid-1980s. As a result, the Internet can be surprisingly fragile at times and remains vulnerable to attack.
The Internet evolved from the experimental military ARPANET project, where technical decisions were made by consensus among the researchers involved. When consensus was reached, changes were made throughout the entire network. As it became clear that there was interest in the uses of the Internet beyond the limited research community it encompassed, the military (and later the National Science Foundation, which inherited the Internet) opened it up gradually to commercial traffic.