
Roy Schestowitz

The Rise of The Web at the Expense of the O/S


Long before computer communication was conceived, and certainly before inter-computer bandwidth was sufficient, the interaction between the user and the hardware was the most fundamental one. At the dawn of computing, the main purpose of a machine was to run computational recipes (algorithms) and thus obtain some output, often to be handled manually by a person. Computers were not much more than calculators with decent brute force, yet they were rewarding equipment even as stand-alone entities.

The processor and the memory did their job on behalf of their operator (the user) and received the necessary signals from peripherals that were directly or indirectly connected to them. On top of the entire process sat the operating system. Whether embedded or not, operating systems defined the way in which the components of a single computer interacted with one another and conducted the flow of information. Well-known interfaces for communicating with the hardware were exploited, and as time went by the user needed to know ever less about the underlying technicalities.

Unlike hardware, the operating system was easily customised, so changes could be applied to it rather rapidly. This observation is an important one when contrasting the 'dynamicity' of operating systems with the 'persistence' of hardware principles.

The operating system was a relatively flexible component in the system, one which evolved tremendously and became more and more complex by the year. This tight coupling between hardware and code was taken for granted for a long time. The relationship was assumed to be so inherent that it was believed to be as far as one could go. As technology advanced, hardware became far more capable and operating systems became more demanding accordingly. In other words, more sophisticated processes could be handled within the same computational unit. In the past few decades, a pixelated representation of a command line became a more colourful one. More recently we came to see rather natural user interfaces with shadows, transparencies, animations, and more. That aspect will continue to evolve in ways that we cannot yet imagine too well, but spherical desktops appear to be one possible path, which has already been explored. Larger displays, 3-D displays and more hands-on experiences (e.g. power gloves) may be a way forward, but that would take this discussion in an entirely different direction.

The argument above describes a very independent unit where a single user takes advantage of some hardware, software (operating system), and personal data. This served many tasks for a good number of years and seemed like the natural way forward. In fact, the largest software manufacturer approached this rather naively and woke up to the new reality slightly too late. It now struggles to keep things the "way they used to be", i.e. on the desktop rather than the Net.

There have always been inherent deficiencies in having a single computer to work on. Firstly, there was the issue of mobility. The appetite grew for computers that could be moved from place to place and were still small enough to carry around. There was also the issue of backups: a single machine was susceptible to damage and data loss. Then there was the issue of communication, which in practice could enable any computer to interact with another. This is a key feature in a social network such as an industrial organisation. So, rather suddenly, some requirements were raised again, encouraging the development of a computer that was not just very capable on its own, but could also be managed in a distributed fashion: backed by a network, accessible from the network and itself a component of the network, if not merely one among its many concrete parts.

Later there emerged the need to centralise data in a single place, in recognition of the fact that rather than transporting information between peers, it can be managed, administered and maintained on a server or servers. Each node in the network can access such a server regardless of its underlying hardware and operating system. This caters for diversity, openness and extension, of course. Protocols and layering are used to agree strictly on standards and to apply them in a consistent, yet negotiable, manner.

Many problems are resolved by taking this approach, whereby a computer gets bound to a central repository of services and data. This establishes a trend of computers as hosts that face users, alongside backroom machines where data is stored and managed. There is a true separation between computational ability and data, which may involve applications too. With this new relationship, computers can easily be exchanged, and hosts need not be of a consistent brand so long as they stick to the protocol. There can be various distinct hosts, all of which serve a similar purpose as defined by the server, which acts as a form of 'authority'.
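
As a loose illustration of this separation, the sketch below (in TypeScript) shows a host reading and writing a document held by a central server over plain HTTP. The server address and the '/documents' path are hypothetical; the point is simply that the exchange looks the same whatever hardware or operating system sits behind either end, so long as both sides stick to the protocol.

    // Sketch of the client/server separation: any host that speaks the
    // agreed protocol (plain HTTP here) can read and write the same data.
    // The server address and document path are hypothetical.
    const SERVER = "https://example.org";

    async function loadDocument(id: number): Promise<string> {
      // Reading fetches the current copy from the central repository.
      const response = await fetch(`${SERVER}/documents/${id}`);
      if (!response.ok) {
        throw new Error("Server refused the request: " + response.status);
      }
      return response.text();
    }

    async function saveDocument(id: number, body: string): Promise<void> {
      // Writing goes back to the same central repository, so every other
      // node sees the change the next time it asks.
      const response = await fetch(`${SERVER}/documents/${id}`, {
        method: "PUT",
        headers: { "Content-Type": "text/plain" },
        body,
      });
      if (!response.ok) {
        throw new Error("Save failed: " + response.status);
      }
    }

The host keeps no authoritative copy of its own; it can be swapped for any other machine that speaks the same protocol.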

In recent years we have witnessed more and more users choosing to move their data on-line. By doing so, any user should be able to do similar things from different places, provided that a connection is available. Moreover, the issue of hard-drive crashes is partially resolved, collaborative work is simplified and upgrades are needed in fewer places. How can one argue against it? It turns out that the migration of data is time-consuming, but the change of habit is the major daunting factor.

Let us look at a few practical examples of these ideas, as the essay has thus far been rather dry and, one might say, 'fluffy'. One of the most common examples is Web-based mail. A mail application is, in effect, stored on a server, which is accessible after an authentication process (or via a cookie). Any connected user can gain access to that application and manage it via the Web browser (or an alternative, purpose-built application). Previously, this entire process of mail management was handled on the client's side, i.e. on the user's operating system and the user's physical hardware. The recent wave of migration was encouraged by the enrichment of on-line services, partly as a strategic move that recognised their future impact. With quotas for Web-based mail reaching 2 GB at present, there is little reason to use local mail clients. Many factors continue to improve on-line services, bandwidth and proxies among them, which can also enhance the looks of on-line mail clients. Meanwhile, local mail clients appear rather static in terms of their development, from an innovation-centric point of view.

Improved technologies -- those that can be described as Web 1.5 -- also enable more efficient handling of the user interface over the Internet, hence the gap between the browser level and the desktop level is bridged. Thanks to methodologies like AJAX (Asynchronous JavaScript and XML), functionality is greatly improved, and development effort is put into taking advantage of the latest server-side and client-side support. All of these factors lead to a point where the on-line mail client is comparable with its local counterpart. The only difference is that Web-based accounts, simply by being Web-based, always offer more, provided that the connection is fast. Will connections get faster? Speeds certainly seem to be growing very rapidly, with Gigabit Ethernet scheduled for deployment in Hong Kong. Even less developed countries are beginning to reap the gains of broadband (DSL) connections.
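
To make the AJAX point slightly more concrete, below is a minimal sketch (in TypeScript) of how a Web-based mail client might poll the server in the background and update part of the page in place, without a full reload. The '/inbox/unread' endpoint, the XML reply format and the 'inbox' element are hypothetical, purely for illustration.

    // Minimal AJAX sketch: ask the server for unread mail in the
    // background and update one element of the page in place.
    // The endpoint, response format and element id are hypothetical.
    function refreshInbox(): void {
      const request = new XMLHttpRequest();
      request.open("GET", "/inbox/unread", true); // 'true' => asynchronous
      request.onload = () => {
        if (request.status === 200) {
          // Assume the server replies with XML (the 'X' in AJAX) and
          // simply count the <message> elements it contains.
          const doc = request.responseXML;
          const count = doc ? doc.getElementsByTagName("message").length : 0;
          const inbox = document.getElementById("inbox");
          if (inbox) {
            inbox.textContent = count + " unread message(s)";
          }
        }
      };
      request.send();
    }

    // Poll every 30 seconds so the page stays current while the user works.
    setInterval(refreshInbox, 30000);

Because only a small fragment of the page changes with each request, the experience begins to resemble that of a local application rather than a sequence of full page loads.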

In future years, we should expect to see an increase in the number of applications that are available on-line, as well as in the number of users who are willing to use them. It is only a matter of time before documents and spreadsheets are composed over the Web rather than locally. There will still be the option to stick to desktop-side software, but this option will suffer from many drawbacks that will lead to its decline. Slow connections might be the only reason for the old trend to linger on.

Finally, all computers will be connected by wiring 'thick' enough that they can be conceived of as one cohesive piece of hardware. This is already rather apparent on networks with transfer rates as high as 100 Mbit/s, where data movement between several computers is just about as quick as it is within one particular computer.

Written in September 2005, on a flight to Zurich; unlikely to ever be extended.

