2 Feb 1995

Summary

Nobody has to act as the advocate for Chicago. It will receive most of the publicity. It will probably be the system installed on all new computers in 1996. Clearly Chicago is much better than DOS 6 and Windows 3.1, so users will have an incentive to convert. The burden of proof is on anyone who might suggest another system.

Chicago is like a luxury mobile home. It is spacious and comfortable. It has a real kitchen. Someone could live in it for years and think it was a regular home. It costs less than a standard home and generally fits on a smaller lot. A structural engineer will note that it has no real foundation, and the construction may not be as durable as a traditional house. Even the best mobile home is no place to be in a tornado or hurricane.

Chicago lacks a clear separation between an authorized system kernel and the application programs. What passes for a kernel (the Virtual Machine Manager and the VxDs) does not really provide the full set of services that we might expect of an operating system. Application programs are not cleanly separated from each other or from the system.

Unlike Plain Old Windows, Chicago has preemptive multitasking. This means that Chicago has threads. However, without clear barriers between programs, Chicago has no concept of separate processes. In Windows 3.1, the user could run Word and PowerPoint at the same time, but internally these "separate application programs" operated as subroutines of the Windows system. In Chicago, programs run as threads that share a common virtual machine, common control blocks, and shared privileges. It is necessary to move to NT before programs run as separate processes with distinct files and privileges.
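
To make the distinction concrete, here is a minimal Win32 sketch (a hypothetical example, not from this article; it assumes a Win32 C compiler). Both threads share one address space, so a write by one is immediately visible to the other:

```c
/* Minimal sketch: two threads sharing one address space.
   Hypothetical illustration, not code from the article. */
#include <windows.h>
#include <stdio.h>

static LONG counter = 0;   /* one copy, shared by every thread in the process */

static DWORD WINAPI worker(LPVOID arg)
{
    int i;
    for (i = 0; i < 100000; i++)
        InterlockedIncrement(&counter);   /* visible to all threads at once */
    return 0;
}

int main(void)
{
    HANDLE t[2];
    t[0] = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    t[1] = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    WaitForMultipleObjects(2, t, TRUE, INFINITE);
    printf("counter = %ld\n", counter);   /* 200000: both threads hit one variable */
    CloseHandle(t[0]);
    CloseHandle(t[1]);
    return 0;
}
```

Under the NT process model, launching the second worker with CreateProcess instead would give it a private address space and its own counter, and a stray pointer in one program could not corrupt the other.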

OS/2 runs each native program in its own address space. A badly behaved program should not be able to crash other programs or the whole system. However, OS/2 has no notion of data security, so a badly behaved program could still wipe all the system files off the disk.

Windows NT provides the missing security. It is a "real" operating system with all the features expected of a minicomputer used as a server.

The NT advocate will retell the story of the Three Little Pigs. Chicago built its house of straw. OS/2 built its house of twigs. Windows NT is built of bricks. NT is clearly the right choice if you are expecting the Big Bad Wolf.

The OS/2 advocate will ask you to consider Goldilocks. One bowl was too hot. One bowl was too cold. The one in the middle was just right. The extra features that make NT an interesting system have no place on the average desktop. An executive does not need a RISC multiprocessor, and a secretary does not want to become a system administrator in order to install a word processing package. If Chicago is too flimsy and NT is too complicated, OS/2 is a good intermediate choice.

Chicago Makes No Sense Without NT

Up to this point, Surviving the Next OS has made the same mistake everyone else makes by examining each operating system one at a time. Even small companies install a cluster of PCs in a Workgroup. An Information System consists of many machines, and there is no pressing need for them all to run one operating system.

Windows NT was a much more complicated development project than Chicago, yet Microsoft released NT almost two years before Chicago will be a product. Chicago doesn't make sense as a Client operating system unless it has NT as a Server.

Because it can rely on Windows NT, Chicago doesn't have to worry about security, integrity, and recovery. It doesn't need database services. It does not need to act as a gateway. Microsoft can offer a full-function information system without burdening Chicago development with unnecessary requirements and without burdening desktop machines with higher memory cost.

Because Chicago exists, NT does not have to worry about making tons of money. Microsoft will make its money selling Chicago and desktop applications software. However, these desktop systems will be able to run advanced applications and high-end functions because they will receive services through the network from an NT Server.

In its first year, Windows NT could offer a reasonable file and print server, a database (SQL Server), and a communications gateway (for mainframe SNA and for remote dial-in users). With the Windows NT 3.5 release, Microsoft upgraded remote dial-in to support access to the Internet (TCP/IP), Microsoft servers (NETBEUI), and Novell servers (IPX). The SMS product (Hermes) then provides a tool to inventory hardware and software and to upgrade program products. A new mail system will support large-scale message exchange.

Every large company is struggling with the system management and communications problems that Microsoft proposes to solve with the Chicago-NT 3.5 system. As users upgrade to Chicago, or as it comes preinstalled on machines, the existence of an already prepared Client will drive the demand for some NT 3.5 servers. If NT 3.5 comes in first for more specialized services (SMS), it may then drive the customer to reconsider earlier choices for other servers.

Chicago and NT together provide Microsoft with the kind of technology shift that it needs to overthrow NetWare. All of a sudden, the "server" is expected to do a lot more than just share files and printers. NT 3.5 can provide all the new management functions. It can also, incidentally, provide the traditional services that NetWare used to supply. Because Microsoft can synchronize the client and server platforms, it can use this leverage to loosen and eventually displace Novell.

Security

The major difference between OS/2 and Windows NT is the data security model. NT has a well-developed set of rules that it does not always obey. OS/2 is much muddier, but more flexible.

In Windows NT, every process runs with an Access Token that determines its ability to open files or use restricted system resources. If the program runs in the background as a Service, then the Access Token defaults to the dummy user "SYSTEM" though it can be configured to match any defined Userid when the Service is installed. If the process runs in the foreground, then it inherits the Access Token acquired by the user when he or she logged on. Network servers can receive access credentials with a request and can therefore perform an operation with an authority borrowed from the remote network client.
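
The borrowed-credentials mechanism is the interesting part. As a hedged sketch of how such a server might be written (the pipe name and file path here are invented for illustration), the Win32 named-pipe API lets a service temporarily assume the client's Access Token:

```c
/* Hypothetical sketch of NT impersonation: a server that borrows
   the credentials of a named-pipe client, as described above. */
#include <windows.h>

int main(void)
{
    HANDLE pipe = CreateNamedPipe(
        "\\\\.\\pipe\\demo",                 /* hypothetical pipe name */
        PIPE_ACCESS_DUPLEX,
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        1, 512, 512, 0, NULL);
    if (pipe == INVALID_HANDLE_VALUE)
        return 1;

    /* Wait for a client to connect, then assume its identity. */
    if (ConnectNamedPipe(pipe, NULL)) {
        if (ImpersonateNamedPipeClient(pipe)) {
            /* Any file opened here is checked against the CLIENT's
               Access Token, not the server's own. */
            HANDLE f = CreateFile("C:\\payroll.dat",   /* hypothetical path */
                                  GENERIC_READ, 0, NULL,
                                  OPEN_EXISTING, 0, NULL);
            if (f != INVALID_HANDLE_VALUE)
                CloseHandle(f);
            RevertToSelf();    /* back to the service's own token */
        }
        DisconnectNamedPipe(pipe);
    }
    CloseHandle(pipe);
    return 0;
}
```

Everything between ImpersonateNamedPipeClient and RevertToSelf is checked against the client's token, so the service cannot accidentally hand out a file the client could not have opened for itself.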

An NT workstation operating within a Workgroup keeps its own local database of users and groups. When the workstation joins a Domain, it authorizes requests using a database managed by the Domain Controllers. NT supplies its own set of administrative tools that simplify the definition and management of users and groups.

However, the NT system cannot be configured to rely upon a non-NT security mechanism. Nor can NT be configured to act as a Kerberos security server or to participate in a Cell of heterogeneous machines under the control of the Open Software Foundation's Distributed Computing Environment (DCE). If an Enterprise can satisfy all of its computing requirements using only NT servers, then it can maintain a single, simple security mechanism. Otherwise, it is forced to maintain multiple, parallel security databases because NT will not interoperate with open systems standards.

NT reflects a security model based on the Unix and VMS systems with which its authors were most familiar. OS/2 security seems suspiciously to reflect an IBM predisposition to see the world from the perspective of CICS. CICS is probably the single most successful software product ever written. It handles very large volumes of transactions on the mainframe by creating the illusion that several programs are running concurrently when, in reality, there is only one process and one thread. In this regard, CICS runs programs in very much the same way as Windows 3.1 (except that CICS is much, much larger).

A remote user sends requests to a CICS system. The start of the request names a Transaction Program (TP) that can be loaded from a library of such routines. Although the remote user can select the program, all the TPs are written by the professional staff that manages the CICS system itself. It then becomes the responsibility of the TP to verify the remote user's right to perform the requested operation, using internal CICS tables, the services of the external operating system, or networked security mechanisms. Since the end user cannot write programs that run under the CICS system, there is no need for the system to provide security against misbehavior by the TPs.
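
In outline, the model looks like the following schematic C sketch (not real CICS code; the transaction name, user, and rule are all invented): the system maps a transaction name to a TP from a fixed library, and the TP itself decides whether the user may proceed.

```c
/* Schematic sketch of the CICS-style dispatch model described above. */
#include <stdio.h>
#include <string.h>

typedef int (*tp_func)(const char *user, const char *args);

static int tp_payroll(const char *user, const char *args)
{
    /* The TP, not the system, enforces security. */
    if (strcmp(user, "PAYADMIN") != 0) {      /* hypothetical rule */
        printf("PAY1: user %s not authorized\n", user);
        return -1;
    }
    printf("PAY1: running for %s (%s)\n", user, args);
    return 0;
}

/* The library of installed transaction programs, all written by
   the central staff that manages the system. */
static const struct { const char *name; tp_func tp; } library[] = {
    { "PAY1", tp_payroll },
};

static int dispatch(const char *tran, const char *user, const char *args)
{
    size_t i;
    for (i = 0; i < sizeof library / sizeof library[0]; i++)
        if (strcmp(library[i].name, tran) == 0)
            return library[i].tp(user, args);
    return -1;   /* unknown transaction name */
}

int main(void)
{
    return dispatch("PAY1", "PAYADMIN", "run weekly");
}
```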

Unix and VMS started out as timesharing systems. A timesharing user has access to compilers and builds and executes private programs. When these systems migrated to the Client/Server model, they tended to enforce transactional security through the existing timesharing mechanisms. However, neither OS/2 nor NT makes a good timesharing system. In a more modern Client/Server environment, the end user is allowed to program the Client machine, but the Server runs some kind of centrally managed "stored procedures." Microsoft accepted this ambiguity when it allowed SQL Server running under NT to use either its own internal security mechanism or a hybrid involving some internal tables and some elements of the external Domain User database.

OS/2 enforces no access control against the programs running on the server machine. It provides a variety of programming interfaces that allow transaction programs to validate the identity of a remote user. Through the User Profile Management API, a remote user can be authenticated in the Domain structure of the File Server. Through the services of DCE for OS/2, requests can be validated against the Cell Security mechanisms of the Distributed Computing Environment shared with Unix and mainframe systems.

Since IBM is incapable of technical coordination, design, and strategy across a divisional boundary, the various software packages do not coordinate their security models. For example, TCP/IP for OS/2 has its own access control database for FTP instead of using User Profile Management calls. Nobody inside IBM high enough to talk to both Raleigh and Austin is smart enough to realize there is a problem here.

The final result may be that Microsoft has too much, and IBM has too little. The Microsoft design works well if every machine in the enterprise runs Chicago or Windows NT, so the NT Domain structure can be the sole and authoritative basis for access control decisions. When other types of systems require a more industry-standard, open security model, Windows NT does not provide an interface that would let it coexist.

Yet OS/2, because it has so very little thinking and coordination across products, is willing to defer to any authoritative external source. It can validate requests through security services provided by another OS/2 machine running LAN Server, through an AIX machine running DCE, and even against a Windows NT box. Security is not enforced by the operating system services, so it has to be managed by the transaction programs themselves. Like CICS, an OS/2 server will run only reliable programs written by a central staff. Unlike CICS, OS/2 can be faster, cheaper, and more robust against program errors. Security becomes part of the application design instead of part of the system.

Windows NT can duplicate the OS/2 function. A service can be started in the background. It can run under a Userid that gives access to the entire database. Transaction programs running under the process can then call communications services to perform external validation before proceeding. When system services are called, they validate access to the data under the authorization of the service and not that of the remote user. When NT is configured to run this way, it simply duplicates the limited security which is the only OS/2 option. However, NT lacks full support for DCE programming services and therefore makes this type of coding more difficult.
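
A minimal sketch of that configuration (hypothetical names, not from the article): the service performs its own application-level check on the remote user and then opens the data under its own Access Token. Contrast this with the impersonation example above, where the system performed the check.

```c
/* Hypothetical sketch: a background service that validates the
   remote user itself, then touches data under ITS OWN token. */
#include <windows.h>
#include <string.h>

/* Application-level validation, standing in for the "external
   validation" described in the text (DCE, internal tables, etc.). */
static BOOL user_may_run(const char *user, const char *tran)
{
    (void)tran;                               /* this rule ignores the TP name */
    return strcmp(user, "PAYADMIN") == 0;     /* hypothetical rule */
}

static BOOL handle_request(const char *user, const char *tran)
{
    HANDLE db;

    if (!user_may_run(user, tran))
        return FALSE;              /* the application said no */

    /* NT checks this open against the SERVICE's Access Token;
       the remote user's identity never reaches the system. */
    db = CreateFile("C:\\data\\payroll.db",   /* hypothetical path */
                    GENERIC_READ | GENERIC_WRITE,
                    0, NULL, OPEN_EXISTING, 0, NULL);
    if (db == INVALID_HANDLE_VALUE)
        return FALSE;
    /* ... perform the transaction ... */
    CloseHandle(db);
    return TRUE;
}

int main(void)
{
    return handle_request("PAYADMIN", "PAY1") ? 0 : 1;
}
```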

One, Two, Many

Ultimately, the choice of system depends on a vision of the entire information system. Microsoft proposes a tightly integrated, homogeneous network of Chicago-NT 3.5 machines, with the occasional external connection to the Internet or through SNA Server to legacy mainframes. If this is enough to meet your needs, then it is clearly the simplest and least expensive choice.

IBM suggests that the enterprise already has a mainframe, AS/400, or Unix machine to function as the "large server." It therefore needs OS/2 as a more powerful client that is able to operate in the heterogeneous, multi-vendor, multi-system network.

A choice between the two may be determined when Corporations finally cut through the crap about application development (client/server, object-oriented, 4GL, upper-CASE, lower-CASE, non-procedural). Essentially there are two vendor-proposed futures:

MS: End users will no longer write programs. Users will buy shrink-wrapped applications to perform queries, generate reports, and build forms and other GUI dialogs. The applications will communicate using OLE2. Companies may be reduced to only writing stored procedures in SQL and Excel macros in Visual Basic for Applications. Eventually, corporations will be no more likely to have their own accounting system than to write their own word processor. Microsoft is positioned to deliver tools to Independent Software Vendors that write the applications that customers will purchase.

IBM: Corporate customers will continue to write programs, but at a very high level. They will buy shrink-wrapped components that can be added to the Workplace. "Programming" will then tie these components together using scripts. Initiatives and products that further this view include VisualAge, Taligent, Kaleida (multimedia scripting and objects, joint with Apple), OpenDoc (compound document objects, joint with Apple and WordPerfect), DCE (Open Software Foundation), and CORBA (multivendor object standard).

Which is the right answer? Ultimately, such a question assumes that organizations make rational decisions or that they select and live with long-term plans. There is no evidence that such behavior ever has existed or ever will. The Data Processing profession spent the 1980s swinging back and forth on the issue of decentralization. Large corporations spent years and tens of millions of dollars decentralizing to put equipment closer to the users, then turned around and spent years more consolidating data centers to save money on hardware and license fees.

Any reasonable model of information will appeal to some group. Microsoft has a simple bottom-up approach that will appeal to those who are suspicious of central authority. These are the departments that would, in the 1980s, have built their own unauthorized mini-datacenters based on VAX VMS machines, but who today are scared that DEC is going under. IBM tells a "you can get there from here" story that may appeal to customers, just as soon as they figure out where "there" is.

Yale University has created a sequence of committees with the mission to select software for specific local applications. One criterion is that the proposed software really has to work now. Unfortunately, there is very little usable application software based on any modern technology. Even when the advantages of more advanced technology are understood, the only deployable programs are obsolete and create what we have come to call "instant legacy systems." This leads to the rather desperate suggestion that success may come to whatever strategy moves first from BS to IS.

In that spirit, it might be noted that PC Lube and Tune articles are generally written on the most current product or beta version of OS/2 2.x. The PCLT server runs the most current product version of Windows NT. Once an organization overcomes the fear of diversity, individual platforms can be deployed anywhere on the network to provide specialized services for which they are best equipped. The key to surviving the new operating systems may be infidelity.


Copyright 1995 PCLT -- Surviving the Next Operating System -- H. Gilbert

This document was generated by SpHyDir, another fine product of PC Lube and Tune.