IBM Future Systems project


The Future Systems project was a research and development project undertaken at IBM in the early 1970s. It aimed to develop a revolutionary line of computer products, including new software models that would simplify software development by exploiting increasingly powerful hardware.

Background and goals

Until the end of the 1960s, IBM had been making most of its profit on hardware, bundling support software and services with its systems. Only hardware carried a price tag, but those prices included an allocation for software and services.

Other manufacturers had started to market compatible hardware, mainly peripherals such as tape and disk drives, at prices significantly lower than IBM's, thus shrinking the base over which the cost of software and services could be recovered. Early in 1971, after Gene Amdahl had left IBM to set up his own company offering IBM-compatible mainframes, an internal IBM task force (project Counterpoint) concluded that the compatible mainframe business was indeed viable, and that the basis for charging for software and services as part of the hardware price would quickly vanish.

Another strategic issue was that the cost of computing was steadily falling while the costs of programming and operations, consisting mostly of personnel costs, were steadily rising. The share of the customer's IT budget available to hardware vendors would therefore shrink significantly in the coming years, and with it the base for IBM revenue. By addressing the cost of application development and operations in its future products, IBM could simultaneously reduce the total cost of IT to its customers and capture a larger portion of that cost.

At the same time, IBM was under legal attack for its dominant position and its policy of bundling software and services into the hardware price, so any attempt to "re-bundle" part of its offerings had to be firmly justified on purely technical grounds, so as to withstand any legal challenge.

In May–June 1971, an international task force convened in Armonk under John Opel, then a vice-president of IBM. Its assignment was to investigate the feasibility of a new line of computers that would exploit IBM's technological advantages to render all previous computers obsolete: compatible offerings as well as IBM's own products. The task force concluded that the project was worth pursuing, but that the key to acceptance in the marketplace was an order-of-magnitude reduction in the costs of developing, operating and maintaining application software.

The major objectives of the FS project were consequently stated as follows:

  • make obsolete all existing computing equipment, including IBM's, by fully exploiting the newest technologies;
  • greatly diminish the costs and effort involved in application development and operations;
  • provide a technically sound basis for re-bundling as much as possible of IBM's offerings (hardware, software and services).

It was hoped that a new architecture making a heavier use of hardware resources, the cost of which was going down, could significantly simplify software development and reduce costs for both IBM and customers.


Data access

One design principle of FS was a "single-level store" which extended the idea of virtual memory to cover persistent data. Working memory, files, and databases were all accessed in a uniform way by an abstraction of the notion of address.[citation needed]

Therefore, programmers would not have to be concerned whether the object they were trying to access was in memory or on the disk.

This and other planned enhancements were expected to make programming easier and thereby reduce the cost of developing software.
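The single-level store idea has a loose modern analogue in memory-mapped files, where persistent data on disk is addressed exactly like working memory. A minimal sketch using Python's standard `mmap` module (an illustration of the concept only, not of FS itself; the file name is hypothetical):

```python
import mmap
import os

# Create a small file to stand in for persistent data.
path = "demo.dat"
with open(path, "wb") as f:
    f.write(b"persistent bytes")

# Map the file into the process address space. After this, ordinary
# indexing reads and writes the "disk" data exactly as if it were
# working memory -- the core of the single-level-store idea.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as view:
        first = view[0:10]       # read via memory addressing
        view[0:4] = b"PERS"      # write via memory addressing

os.remove(path)
```

In FS, unlike this sketch, the uniform addressing was to be built into the machine itself rather than requested through an operating-system call.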

Implementing that principle required that the addressing mechanism at the heart of the machine incorporate a complete storage-hierarchy management system and major portions of a database management system, which until then had been implemented as add-on software.


Another principle was the use of very high-level complex instructions to be implemented in microcode. As an example, one of the instructions, CreateEncapsulatedModule, was a complete linkage editor. Other instructions were designed to support the internal data structures and operations of programming languages such as FORTRAN, COBOL, and PL/I. In effect, FS was designed to be the ultimate complex instruction set computer (CISC).[citation needed]

Another way of presenting the same concept was that the entire collection of functions previously implemented as hardware, operating system software, database software and more would now be considered as making up one integrated system, with each elementary function implemented in one of many layers, including circuitry, microcode, and conventional software. More than one layer of microcode and code was contemplated, sometimes referred to as picocode or millicode. Depending on whom one talked to, the very notion of a "machine" therefore ranged from those functions implemented as circuitry (for the hardware specialists) to the complete set of functions offered to users, irrespective of their implementation (for the systems architects).

The overall design also called for a "universal controller" to handle primarily input-output operations outside of the main processor. That universal controller would have a very limited instruction set, restricted to those operations required for I/O, pioneering the concept of a reduced instruction set computer (RISC).

Meanwhile, John Cocke, one of the chief designers of early IBM computers, began a research project to design the first reduced instruction set computer (RISC).[citation needed] In the long run, the RISC architecture, which eventually evolved into IBM's Power and PowerPC architectures, proved vastly cheaper to implement and capable of much higher clock rates.


Project start

In the late 1960s and early 1970s, IBM considered a radical redesign of their entire product line to take advantage of the much lower cost of computer circuitry expected in the 1980s.

The IBM Future Systems project (FS) was officially started in September 1971, following the recommendations of a special task force assembled in the second quarter of 1971. Over time, several other research projects in various IBM locations merged into the FS project or became associated with it.

Project management

During its entire life, the FS project was conducted under tight security provisions. The project was broken down into many subprojects assigned to different teams. The documentation was similarly broken down into many pieces, and access to each document was subject to verification of the need-to-know by the project office. Documents were tracked and could be called back at any time.

In Sowa's memo (see External links, below) he noted: "The avowed aim of all this red tape is to prevent anyone from understanding the whole system; this goal has certainly been achieved."

As a consequence, most people working on the project had an extremely limited view of it, restricted to what they needed to know in order to produce their expected contribution. Some teams were even working on FS without knowing it. This explains why, when asked to define FS, most people gave a very partial answer, limited to the intersection of FS with their own field of competence.

Planned product lines

Three implementations of the FS architecture were planned: the top-of-line model was being designed in Poughkeepsie, NY, where IBM's largest and fastest computers were built; the middle model was being designed in Endicott, NY, which had responsibility for the mid-range computers; and the smallest model was being designed in Rochester, MN, which had the responsibility for IBM's small business computers.

A continuous range of performance could be offered by varying the number of processors in a system at each of the three implementation levels.

In early 1973, overall project management and the teams responsible for the more "outside" layers common to all implementations were consolidated in the Mohansic ASDD laboratory (halfway between the Armonk/White Plains headquarters and Poughkeepsie).

Project end

The FS project was killed in 1975. The reasons given for killing it vary with the person asked, each putting forward the issues related to his or her own domain. In reality, the success of the project depended on a large number of breakthroughs in all areas, from circuit design and manufacturing to marketing and maintenance. Although each issue taken in isolation might have been resolved, the probability that all of them could be resolved in time and in mutually compatible ways was practically zero.

One symptom was the poor performance of its largest implementation, but the project was also marred by protracted internal arguments about various technical aspects, including internal IBM debates about the merits of RISC versus CISC designs. The complexity of the instruction set was another obstacle; IBM's own engineers considered it "incomprehensible". Moreover, simulations showed that the execution of native FS instructions on the high-end machine was slower than that of the System/370 emulator on the same machine.

The FS project was finally terminated when IBM realized that customer acceptance would be much more limited than originally predicted, because there was no reasonable application migration path for customers on the System/360 architecture. In order to leave maximum freedom to design a truly revolutionary system, ease of application migration was not one of the primary design goals for the FS project; it was to be addressed by software migration aids that took the new architecture as a given. In the end, it appeared that the cost of migrating the mass of user investments in COBOL- and assembler-based applications to FS was in many cases likely to be greater than the cost of acquiring a new system.


Although the FS project as a whole was killed, a simplified version of the architecture for the smallest of the three machines continued to be developed in Rochester. It was finally released as the IBM System/38, which proved to be a good design for ease of programming, but was woefully underpowered. The AS/400 inherited the same architecture, but with performance improvements. In both machines, the high-level CISC-like instruction set generated by compilers is not interpreted, but translated into a lower-level machine instruction set and executed; the original lower-level instruction set was a CISC instruction set with some similarities to the System/360 instruction set.[1] In later machines the lower-level instruction set was an extended version of the PowerPC instruction set, which evolved from John Cocke's RISC machine.
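The "translate, don't interpret" scheme described above can be illustrated with a toy sketch: a high-level instruction is expanded once into a lower-level stream, which is then executed directly. All instruction names here are hypothetical illustrations, not IBM's actual instruction sets:

```python
# One high-level instruction expands into several low-level ones,
# in the spirit of System/38's translation of compiler output.
TRANSLATION = {
    ("ADD", "a", "b"): [("LOAD", "a"), ("LOAD", "b"), ("ALU_ADD",), ("STORE", "a")],
}

def translate(program):
    """Expand each high-level instruction into low-level ones (done once)."""
    low = []
    for instr in program:
        low.extend(TRANSLATION[instr])
    return low

def execute(low_level, env):
    """Run the already-translated low-level stream on a tiny stack machine."""
    stack = []
    for op, *args in low_level:
        if op == "LOAD":
            stack.append(env[args[0]])
        elif op == "ALU_ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "STORE":
            env[args[0]] = stack.pop()
    return env

env = execute(translate([("ADD", "a", "b")]), {"a": 2, "b": 3})
# env["a"] is now 5
```

The point of the design is that translation happens once, so the running machine only ever sees the lower-level instruction set, which can later be replaced (as it was, by PowerPC) without changing compiled programs.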

Besides System/38 and the AS/400, which inherited much of the FS architecture, bits and pieces of Future Systems technology were incorporated in the following parts of IBM's product line:

  • the IBM 3081 mainframe computer, which was essentially the System/370 emulator designed in Poughkeepsie, but with the FS microcode removed
  • the 3800 laser printer, and some machines that would lead to the IBM 3279 terminal and GDDM
  • the IBM 3850 automatic magnetic library
  • the IBM 8100 mid-range computer, which was based on a CPU called the Universal Controller, which had been intended for FS input/output processing
  • network enhancements concerning VTAM and NCP



References

  1. "The Library for Systems Solutions Computing Technology Reference" (PDF). IBM. pp. 24–25. Retrieved 2010-09-05.

External links