One of my favorite weekly responsibilities is supporting the meeting at Norwich University of the Association for Computing Machinery Special Interest Group on Security, Audit and Control. At a recent meeting, we discussed the November announcement by Microsoft that it would be offering Internet-based Windows and Office services through its new Windows Live and Office Live Web sites.

According to the Computerworld news story by Eric Lai cited above, services will be introduced in phases. Eventually, however, businesses will be able to "let distant users together edit documents in Word, Excel and other Microsoft formats through the Internet." These functions will apparently be available without the need to install Microsoft Office products on the workstations.

It is interesting for an old guy like me to see the changes in architecture brought about through the workings of Moore's Law (roughly speaking, the observation that the density or computing power of components at a fixed cost doubles every 12 to 18 months).

As an example, 1MB of RAM for the HP3000 Series III minicomputer cost $64,000 in 1980 (roughly $180,000 in today's currency). As I write this article at the end of November, a cost comparison on the Web shows 1MB of RAM (in 1GB chipsets) costing about 12 cents or less. That puts each year's price at about 56% of the previous year's, compounded over 25 years. Similarly, the 1980 HP7925 120MB disk drive cost $25,000 (about $75,000 in today's money). Today, if you could find a 120MB drive at all (much too small to be useful), it would cost about 6 cents (based on the $200 price of a 400GB Western Digital internal hard drive).
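The compounding above is easy to check for yourself. This short sketch uses only the dollar figures quoted in this column (inflation-adjusted 1980 prices against the November price checks) and computes what fraction of the previous year's price each year's price represents over 25 years:

```python
# Compound annual price factor: the fraction of the previous year's
# price that each new year's price represents, given the starting
# price, ending price and number of years elapsed.
def annual_price_factor(start_price, end_price, years):
    return (end_price / start_price) ** (1.0 / years)

# RAM: roughly $180,000 per MB in 1980 (today's dollars) vs. about
# $0.12 per MB now.
ram = annual_price_factor(180_000, 0.12, 25)
print(f"RAM: each year's price was {ram:.1%} of the previous year's")

# Disk: about $75,000 for a 120MB drive in 1980 (today's dollars) vs.
# about $0.06 for the same 120MB of capacity now.
disk = annual_price_factor(75_000, 0.06, 25)
print(f"Disk: each year's price was {disk:.1%} of the previous year's")
```

The factors come out near 0.57, which is where the 56% and 57% figures in the text come from: prices fell each year to a bit more than half of the previous year's level.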
That puts each year's price at about 57% of the previous year's, again compounded over 25 years.

Computer performance is determined by five components of a system:

* Access to and speed of the CPU
* Access to and speed of RAM
* Access to and speed of mass storage
* Data communications bandwidth
* Application software

The move from mainframes to workstations and LANs was driven in part by data communications bottlenecks. Data transfers on mainframes using asynchronous RS-232 links maxed out at around 19.2Kbps; systems using coaxial cables and standards such as SNA provided bandwidth into the megabits-per-second range. Multiplexers optimized bandwidth utilization to squeeze every last bit per second out of the network infrastructure. Shifting work to local computers reduced the amount of data transfer and sped up processing by reducing contention for scarce CPU and RAM resources. LANs allowed information sharing at speeds that rose from 10Base-T's 10Mbps to 100Base-T's 100Mbps and then onward into fiber-optic networks and today's gigabit bandwidths.

All of this history now bears on the feasibility of centralized computer services. With corporate users almost universally shifted away from low-speed modems (56Kbps, tops) to broadband access (megabits and even gigabits per second), accessing remote code and data is once again a reasonable option.

However, the shift may move increasing amounts of confidential data through public networks; more significantly, organizations will return to depending on servers completely out of their control for critical business functions, just as they did when I ran technical services for a service bureau that supported 28 companies with 1,000 users in Montréal in the mid-1980s.

Organizations moving in this direction will have to pay attention to encryption for transferred and for stored data, remote backup policies and implementation, contractor employee hiring practices, and service-level agreements.
Failure to think carefully about these issues will put customers of the new online services at risk.