In the area of large-scale data processing many companies have turned to outsourcing. This can take many forms, from outsourcing people skills, through hosting of physical equipment, to developing and running entire applications.
If the outsourcing supplier owns the equipment and rents it out to the consumer, then it need not be an independent machine. To use a single machine to service a number of customers, partitioning technology is needed: each customer must have a "virtual machine", fully isolated and protected from the other users of the machine. Virtual machine services of this kind have long been a strength of mainframe computers, and today many of the bigger Unix machines offer the same basic capabilities, if not as tried and tested as those of the mainframes.
The technology to implement application servers, time-sharing and outsourcing is now well enough established, and many companies are taking advantage of it. There is, however, a further requirement that must be met (as well as improved communications) before large corporations and the many small and medium-sized companies will commit en masse. A standard feature of power and telephone billing is a fixed base fee plus an additional charge that depends on how much of the service has been used. Utility computing must implement a similar "on-demand" model. In practice this means that partitioning technology must be dynamic, a feature which so far only IBM can deliver for business applications. On-demand computing is thus a big thing for IBM as they seek to exploit their advantage over H-P, Sun and Microsoft, but this advantage won't last forever. In the long term user organisations want a monthly bill based on actual usage, which will vary from month to month. Indeed a big technical challenge is to deliver dynamically re-configurable systems which can cope with peak demands and still meet service level agreements.
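The utility billing model described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual tariff: the base fee and per-unit rate are hypothetical figures chosen purely for the example.

```python
# Illustrative sketch of usage-based ("on-demand") billing: a fixed base
# fee plus a charge proportional to actual consumption, as with power or
# telephone tariffs. All figures here are hypothetical.

def monthly_bill(units_used: float,
                 base_fee: float = 500.0,
                 rate_per_unit: float = 0.05) -> float:
    """Return the month's charge: fixed base fee plus usage-dependent part."""
    return base_fee + units_used * rate_per_unit

# Usage varies from month to month, and so does the bill.
for month, units in [("Jan", 100_000), ("Feb", 250_000), ("Mar", 80_000)]:
    print(f"{month}: {monthly_bill(units):,.2f}")
```

The point of the model is visible in the loop: the customer's cost tracks consumption, which is why the underlying partitioning must be dynamic rather than a fixed allocation.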
And so we have a wonderful new word for the IT industry: "autonomic" systems. The term has been "stolen" from biology by IBM. It refers to the ability of the human body to adapt itself to change and, in particular, to concurrently manage a wide range of tasks such as regulating body temperature or fighting infection. IBM, of course, are applying the term to self-regulating computer systems. The key objectives are to develop systems that can automatically correct failing components and dynamically configure themselves to optimise the use of available resources as workloads change.
Systems that detect failed components and exploit redundant facilities to bypass them, the so-called fault-tolerant systems, have played a major role in transaction and control systems for many years. IBM themselves have a significant track record for high-uptime business systems, both mainframes and Unix machines, while specialist products from Tandem and Stratus were very successful in financial transaction processing and manufacturing applications. IBM have recently announced that they are to spend $10 billion to back their "on-demand" business capabilities. Much of this will go on R&D and marketing, but they are aware that autonomic technology is a fruitful field for high-tech start-ups, so acquisitions will probably feature strongly too.
Such an all-embracing concept is not going to mature overnight; it will depend on the development of a lot of underpinning technology. While the focus at the moment is on technical issues, it is obvious that IBM, H-P and Sun, not forgetting Oracle, are all hoping to use their technology to gain revenue from services, so utility computing could become the biggest change in the IT industry for many years.
Martin Healey, a pioneer in the development of Intel-based computers and client/server architecture, is a director of a number of specialist IT companies and an Emeritus Professor of the University of Wales.