To integrate a legacy application it is first necessary to identify the functions to be "re-used". The code which implements these functions must then be isolated, and finally a method must be found to expose this code so that the new front-end application can invoke it. There is also the technical problem of interconnecting the new systems with the old, which requires significant skills in communication networks and the associated software. The "legacy" problems may well involve different operating systems, network hardware and protocols. Many organisations already have this level of skill, derived from experience of interconnecting PC networks with older systems, but for some it will be a problem.
The re-useable functions in the legacy code are not the same as the business functions, although the two are strongly related. An application performs a set of specific business functions, but inside the software these are built from combinations of smaller units of code. These code modules are not so easily identified with business functions. In an ideal world all the code modules would be well documented, and it would then in theory be relatively easy to work out how to use them in the new applications. Unfortunately this is rarely the case. It is not sufficient to know vaguely what is available; it is essential to know exactly what every code module does!
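The distinction between a business function and the code modules that implement it can be sketched as below. All names and pricing rules are invented for illustration; the point is only that the outer function is what the business sees, while the inner modules are what must be documented and re-used.

```python
# Hypothetical sketch: one business function ("raise an invoice") is
# implemented by several smaller, separately re-usable code modules.

def lookup_customer(customer_id: str) -> dict:
    """Fetch customer master data (stub standing in for a legacy module)."""
    return {"id": customer_id, "discount": 0.05}

def price_order(lines: list[tuple[str, int]], unit_price: float = 10.0) -> float:
    """Price the order lines (stub pricing rule)."""
    return sum(qty * unit_price for _, qty in lines)

def apply_discount(total: float, discount: float) -> float:
    """Apply the customer's standard discount."""
    return round(total * (1 - discount), 2)

def raise_invoice(customer_id: str, lines: list[tuple[str, int]]) -> float:
    """The business function, composed from the smaller modules above."""
    customer = lookup_customer(customer_id)
    return apply_discount(price_order(lines), customer["discount"])
```

A new application would want to call `price_order` or `apply_discount` directly; that is only safe if each module's exact behaviour is known and documented.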
The ease with which code modules can be identified, documented and extracted depends on how the code was originally created and how it has been maintained since. Code built with design tools would be at a huge advantage, but such code is not common. In any case a large percentage of applications are packages, and old ones at that, with little chance of modularising them. This would be a good opportunity for vendors of application packages to provide upgrades which include modularisation, but most have the same problems with their code as in-house developers!
The objective must be to identify the elemental modules in the legacy code and then to modify the code to create a set of documented, re-useable components. Systems built with a TP monitor (CICS applications in particular) will be much easier to modularise than code written for a time-shared environment such as Unix; conversely, the Unix applications will be newer and should be in a better state. It is also common for transaction modules such as CICS routines to be very large, because of historic performance problems, so while the code is being modularised it is essential to try to break the large modules into a number of smaller ones. It is probably worth investing in maintenance tools which help to break code up into smaller modules while checking for logical consistency. In practice such tools are invariably better for Cobol source code than for other languages. They are expensive, but should nevertheless be good value for money.
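The consistency check that matters when breaking a large module apart can be sketched as a regression replay: run the same inputs through the old monolithic routine and the new modular version and compare results. Both implementations below are illustrative stand-ins, not real legacy code.

```python
# Hypothetical "before": one large routine doing validation, pricing
# and tax in-line, as a stand-in for an oversized CICS-style module.
def legacy_monolith(order_qty: int, unit_price: float) -> float:
    if order_qty <= 0:
        raise ValueError("quantity must be positive")
    net = order_qty * unit_price
    return round(net * 1.2, 2)  # 20% tax folded into the routine

# Hypothetical "after": the same logic split into small, documented modules.
def validate(order_qty: int) -> None:
    if order_qty <= 0:
        raise ValueError("quantity must be positive")

def price(order_qty: int, unit_price: float) -> float:
    return order_qty * unit_price

def add_tax(net: float, rate: float = 0.2) -> float:
    return round(net * (1 + rate), 2)

def modular_version(order_qty: int, unit_price: float) -> float:
    validate(order_qty)
    return add_tax(price(order_qty, unit_price))

# Regression replay over representative inputs: old and new code paths
# must agree before the monolith is retired.
for qty, unit in [(1, 9.99), (5, 3.5), (100, 0.25)]:
    assert legacy_monolith(qty, unit) == modular_version(qty, unit)
```

A maintenance tool automates this kind of split and check; the replay over captured production inputs is the manual equivalent of its logical-consistency checking.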
Finally, a method must be implemented to enable the new re-useable code modules to be executed by calls from the Application Server. Again, if a TP monitor has been used on the legacy side it will provide a remote call facility; otherwise solutions such as ODBC and its variants must be employed. A number of specialised middleware products are available to provide various flavours of remote procedure call. In many cases it is worth investigating transaction messaging solutions as well as interactive interfaces.
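The remote call arrangement can be sketched with Python's standard-library XML-RPC standing in for the TP monitor's or middleware product's remote call facility. The exposed function, its name and the account data are all assumptions for illustration.

```python
# Minimal remote-call sketch: a re-usable legacy module is exposed over
# XML-RPC so the new front-end (the "Application Server" side) can call it.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def get_account_balance(account_id: str) -> float:
    """Stub for a re-usable legacy code module."""
    return {"A100": 250.75}.get(account_id, 0.0)

# Legacy side: register the module and serve it in the background.
server = SimpleXMLRPCServer(("127.0.0.1", 8901), logRequests=False)
server.register_function(get_account_balance)
threading.Thread(target=server.serve_forever, daemon=True).start()

# New application side: invoke the legacy module remotely.
proxy = ServerProxy("http://127.0.0.1:8901")
balance = proxy.get_account_balance("A100")

server.shutdown()
```

A transaction messaging solution would replace the synchronous call with a request placed on a queue and a reply collected later; the module being invoked need not change.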
While most legacy applications do not expose the interfaces to their internal code modules, most do have a terminal interface. This is relatively easy to interface to and requires no modification to the legacy code. Because it accesses the legacy system at a high level rather than at the component level it is less powerful, but it is well worth keeping in mind, even if only as a transient solution. Modern "screen-scraping" products support Windows GUI interfaces (Windows Terminal Server) as well as ASCII, 3270 and 5250 terminals.
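The essence of screen scraping can be sketched as below: the new application reads fields out of the legacy terminal screen at known row and column positions rather than calling internal code modules. The screen layout and field positions are invented; real 3270/5250 field maps come from the application's screen definitions.

```python
# Minimal screen-scraping sketch over an emulated terminal screen buffer.
# In a real product the buffer would come from a 3270/5250 emulation
# session; here it is a hard-coded stand-in.
SCREEN = [
    "ACME ORDER ENQUIRY                      ",
    "CUSTOMER: C10042   NAME: J SMITH        ",
    "BALANCE : 00125.50 STATUS: OK           ",
]

# (row, column, length) for each field of interest, taken from the
# (hypothetical) screen layout above.
FIELD_MAP = {
    "customer": (1, 10, 6),
    "balance":  (2, 10, 8),
    "status":   (2, 27, 2),
}

def scrape(screen: list[str], field_map: dict) -> dict:
    """Cut fixed-position fields out of the screen buffer."""
    return {name: screen[row][col:col + length].strip()
            for name, (row, col, length) in field_map.items()}

fields = scrape(SCREEN, FIELD_MAP)
```

The fragility of the approach is also visible here: any change to the screen layout silently breaks the field map, which is one reason it is best treated as a transient solution.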