From client-server to peer-to-peer

Opinion
Sep 13, 2004 | 3 mins
Enterprise Applications

Client-server to peer-to-peer: A short history

If it accomplished nothing else, the last issue – on whether or not the word Internet should be capitalized – reinforced my belief that we’ve been spending too much time talking about “Longhorn” and “Windows XP SP2” lately. More people weighed in with an opinion on the “I” than on all of the other newsletters of the past month or so – you know, the ones about operating systems and patches.

Longtime reader Ken Etter reacted to my statement that both intranet and extranet are terms that have dropped out of favor by asking “… our company has always called our internal private Web site an ‘intranet’ … what is the current label?” Intranets appeared to morph into “portals” around the turn of the century, but let’s help Ken out: what do you call your internal private Web site? Send me your answers and perhaps we can chew them over in a future issue.

As promised last time, today I want to look at the terms “client-server” and “peer-to-peer”.

In the 1980s, “client-server” computing was all the rage. This was the model developed for (mostly) personal computers, set up in opposition to the mainframe computing of the ’60s and ’70s. Generally, there were two types of servers: file servers and database servers. As time went on, other types of “servers” were developed: print servers, fax servers, mail servers, even Web servers. Servers, in general, acted upon requests from “clients” (i.e., desktop computers and their users) to access shared resources (files, printers, fax machines, etc.). Servers were machines dedicated to a single job: providing a service to their clients.
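For readers who think better in code than in diagrams, here is a minimal sketch of that request/response pattern in Python. The port number and the request string are invented for illustration; the point is simply that one dedicated process does nothing but wait for client requests and answer them.

    import socket
    import threading

    PORT = 9000  # hypothetical port, chosen only for this example

    # The dedicated server: its sole job is answering client requests.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)

    def serve_once():
        conn, _ = srv.accept()               # wait for a client to call
        request = conn.recv(1024)            # read the client's request
        conn.sendall(b"served: " + request)  # act on it and reply
        conn.close()

    threading.Thread(target=serve_once, daemon=True).start()

    # The client: a desktop machine asking the server for a shared resource.
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", PORT))
    cli.sendall(b"GET report.txt")
    print(cli.recv(1024).decode())           # prints: served: GET report.txt
    cli.close()
    srv.close()

Note the asymmetry: the server never initiates anything, and the client never serves anything.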

The ’90s, though, saw the introduction of what came to be called “peer-to-peer” networking, generally ascribed to Windows for Workgroups, although other systems, such as LANtastic, had been around for a few years. Every computer on the network could act as both a server and a client. Users could share out access to their own resources (files, printers, other peripherals, etc.) while continuing to use the PC as their personal machine. These networks were generally quite small, limited both by their use of non-routable protocols, such as NetBEUI, and by management overhead that escalated geometrically as the number of participants grew.
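The contrast with client-server is easy to see in code. Here is a hedged sketch, again in Python with invented ports and file names: every peer runs the same class, serving its own resources while fetching from others.

    import socket
    import threading

    class Peer:
        # Each node is both a server (it shares resources) and a client
        # (it fetches resources from other peers).
        def __init__(self, port, shared):
            self.shared = shared                 # what this peer offers
            self.srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.srv.bind(("127.0.0.1", port))
            self.srv.listen(5)
            threading.Thread(target=self._serve, daemon=True).start()

        def _serve(self):                        # the "server half"
            while True:
                conn, _ = self.srv.accept()
                name = conn.recv(1024).decode()
                conn.sendall(self.shared.get(name, b"not found"))
                conn.close()

        def fetch(self, port, name):             # the "client half"
            cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            cli.connect(("127.0.0.1", port))
            cli.sendall(name.encode())
            data = cli.recv(1024)
            cli.close()
            return data

    # Two peers, each sharing something, each able to ask the other.
    a = Peer(9001, {"memo.txt": b"from peer A"})
    b = Peer(9002, {"notes.txt": b"from peer B"})
    print(a.fetch(9002, "notes.txt").decode())   # prints: from peer B
    print(b.fetch(9001, "memo.txt").decode())    # prints: from peer A

Nothing distinguishes one peer from another, which is exactly what made these networks easy to start and, as they grew, hard to manage.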

In the late 90s, it appeared that a combination of Web servers and dumbed-down PCs would reinvent mainframe style computing as “network computing.” This attempt to remove both Microsoft and personal freedom from the computing landscape had a mercifully short, but highly vocal, run.

The dawn of the 21st century showed an inclination to return to the client-server model through the introduction of so-called “Web services.” A service almost always requires a server to provide it. Unfortunately, we’d already used the term “Web server” to identify a machine (or service) that provided HTML documents. “Web services” is an all-encompassing term covering not only the delivery of HTML documents but the full panoply of services the old client-server model provided, as well as newer services (messaging services, identity services, and so on) that have only recently emerged.
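As a rough illustration (not anyone’s official definition), here is the same server machinery wearing its “Web services” hat, once more in Python with a made-up port and payload: the server still speaks HTTP, but it returns structured data for a program to consume rather than an HTML page for a person to read.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import threading
    import urllib.request

    class EchoService(BaseHTTPRequestHandler):
        # A toy "web service": familiar HTTP plumbing, but the payload
        # is data for a program, not a page for a browser.
        def do_GET(self):
            body = b'{"service": "echo", "path": "%s"}' % self.path.encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):            # silence request logging
            pass

    server = HTTPServer(("127.0.0.1", 8080), EchoService)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # The consumer is another program, not a browser.
    with urllib.request.urlopen("http://127.0.0.1:8080/hello") as resp:
        print(resp.read().decode())              # {"service": "echo", ...}
    server.shutdown()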

But just as the Web services model was resurrecting the benefits of client-server, along came applications such as Kazaa, which attempted to reinvent the peer-to-peer model, but on a much broader scale and, of course, using fully routable TCP/IP as the carrier.

Mainframe-terminal computing, client-server, workgroup peering, “network computing,” Web services (server-client, if you will) and modern “peer-to-peer.” As the French say, “plus ça change, plus c’est la même chose” (the more things change, the more they stay the same). Today’s programmers might say it as “history is a do loop with no exit condition.” How would you put it?