Managing Multivendor Networks
- 1 -
Introduction
Overview
Multivendor networks and a plate of liver and onions have a lot in common: they
are both undeniably real; they both provide sustenance; and both of them are repulsive
to a lot of people.
Unfortunately, multivendor networks get a bad rap from computer manufacturers
who want to keep customers in their folds. The promotion of connections to other
manufacturers' computers and networks is cloaked in a shadow of mystery and intrigue.
"Sure," says the sales rep, "you can connect their equipment to
ours as long as you choose applications and interfaces that conform to the ISO's
seven-layer standard. As you probably know, we are committed to providing solutions
that conform to the OSI Reference Model."
The customer replies, "Can you be more specific? I want to implement one
LAN to accommodate both types of systems."
"Well," the rep responds, "we support the IEEE 802.3 LAN using
the IEEE 802.2 discipline. Of course, you might need to implement TCP to accommodate
both systems until our upper-layer OSI products become available. And you might need
to implement some type of LAN bridge if the other system doesn't accommodate the
combined 802.2 and 802.3 frame formats."
In defense of the manufacturers, however, the question of connecting two systems
rarely can be answered with a simple yes or no. Yet the issues involved in connecting
networks are often made more complex than they need to be. So, rather than looking
to manufacturers to provide multivendor solutions, many customers turn to independent
standards organizations such as the International Organization for Standardization
(ISO), the Institute of Electrical and Electronics Engineers (IEEE), and the American
National Standards Institute (ANSI). These organizations are discussed in Chapter 6,
"Standards."
In theory, these organizations provide standards that can, and often should, be
adopted by computer manufacturers to provide interoperability between systems. Unfortunately,
standards organizations, which define standards on paper, are normally far ahead
of the manufacturers, which must invent and produce the products. Thus, the ISO might
recommend the perfect solution to a specific problem, but providing products that
conform to that standard is a long-term goal or, even worse, not scheduled at all.
Nonetheless, adopting third-party standards remains a feasible approach for customers
formulating long-range plans. This customer backing is extremely important to the
standards organizations. After all, these organizations rarely have a stick big enough
to beat any of the manufacturers into compliance; it is the pressure applied by customers
that compels manufacturers to adopt standards.
Because of this chasm between standards and products, another set of solutions
comes into play: those implemented by third-party companies or independent organizations
to address specific or general data communications and networking needs. Third-party
solutions include the following:
- File transfer products. Specialized software and/or hardware that performs
file transfers between two specific types of systems, as well as general products
like Kermit (which was developed by Columbia University in New York City) and XMODEM.
- Terminal access products. Emulation software and/or hardware that enables
one type of terminal to look like another type of terminal, specialized terminals
that might have built-in emulation that enables them to look like two completely
different types of terminals, or a "universal" terminal standard that might
be layered onto two different manufacturers' networks.
- Personal computer (PC) local area network (LAN) software. Products like
Novell Inc.'s NetWare and Banyan Systems' VINES allow the sharing of files, printers,
and other resources in a LAN. Although they can work in conjunction with existing
standards, they are not yet standards themselves.
- Transmission Control Protocol/Internet Protocol (TCP/IP). The suite of
TCP/IP protocols and services provides basic interoperability (file transfer, terminal
access, and mail exchange) between diverse systems. Furthermore, TCP/IP is positioned
as both an alternative to the Open Systems Interconnection (OSI) standards and a possible
contender for the mantle of OSI upper-layer compliance. The growth of the Internet,
which depends on TCP/IP, has led to widespread acceptance of this protocol suite
and a decreased reliance on OSI standards. (A minimal sketch of socket-level
interoperability follows this list.)
- Middleware. A vaguely defined software product that typically sits between
the client and server and forms a third tier in the client/server network. Middleware
is a sort of go-between that attempts to translate between different types of systems.
However, middleware is no universal translator: it is often limited to products conforming
to a specific API.
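To make the TCP/IP item concrete, here is a minimal sketch (in modern Python, purely
for illustration) of the kind of socket-level exchange that gives any two TCP/IP systems
basic interoperability. It is a toy, not a product; the host name, port, and buffer size
are invented, and a real transfer product would add code translation, record handling,
and error recovery.

    import socket

    def send_file(path, host='peer.example.com', port=5000):
        # Push a file's raw bytes to the peer; either end can be any
        # manufacturer's system, as long as both speak TCP/IP.
        with open(path, 'rb') as f, socket.create_connection((host, port)) as conn:
            while chunk := f.read(4096):
                conn.sendall(chunk)

    def receive_file(path, port=5000):
        # Accept one connection and write whatever arrives to disk.
        with socket.socket() as listener:
            listener.bind(('', port))
            listener.listen(1)
            conn, _ = listener.accept()
            with conn, open(path, 'wb') as f:
                while chunk := conn.recv(4096):
                    f.write(chunk)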
There are other categories of products that fall under this umbrella. What role,
if any, will these products play when the manufacturers adopt more international
standards? In many cases the answer is none, because these products are short-term
solutions that fill an immediate need. In other cases, the products might be adopted
by the standards organizations and thus become standards in their own right.
Multivendor Network Scenarios
Just why you might require a multivendor networking solution is no great mystery
or surprise. Some of the more prevalent reasons include the following:
- Merger/acquisition. When two companies or organizations merge and each
entity uses a different manufacturer for its data processing equipment, there is
a need for some type of connectivity. This requirement can be as simple as posting
accounting data from one system to the other, or as complex as enabling the combined
set of users to access both types of systems.
- Large organizations. In a large organization (or government), smaller
agencies or operating departments frequently have unique computing equipment. Yet
because these departments are all part of a larger entity, they must pass information
upward (and possibly sideways) for the common good. Although this situation is similar
to the merger scenario, it normally occurs less abruptly because the requirement
for cross-connectivity can be seen well in advance.
- Conversions. This tends to be the least pleasant environment, primarily
because the outgoing vendor does not have a burning desire to solve the communications
problems of its soon-to-be ex-customer. Furthermore, many conversions require parallel
processing between two dissimilar systems sharing a common set of users.
- New applications. When new applications are implemented on new computer
systems, the multivendor network must interface existing users with the new application
or combine the new application's data with data derived from another application
on a different system.
- No-growth position, increased demand. Sometimes the demand for end-user
access to a certain application can dramatically increase even though the data processing
budget does not also increase. Here, the multivendor network must pool resources
so more people can access the information.
- Information management. Because many computer systems have highly specialized
software and hardware components, they are often used to address specific technical
requirements. In a medical environment, for example, one type of hardware might oversee
and monitor laboratory instruments, another might run the patient tracking system,
and yet another might run the general administration and accounting systems. Although
these systems function properly without interacting, the need to cross-reference
this information might arise.
- Legacy connectivity. Organizations migrating from a mainframe to a distributed
client/server environment do not always merely unplug the mainframe and sell it for
scrap iron. Millions of lines of legacy code can be involved, and it is often wise
to leverage that investment. Consequently, the mainframe might be pressed into use
as a server in the client/server network or as a repository of data. This, of course,
creates a need for a whole new class of connectivity products to join the legacy
environment to the new environment.
Network Tools and Services
To solve the problems or answer the needs described in the previous scenarios,
you might use the following network tools:
- Common terminal access. In many cases, the issue is simply to get a larger
set of users to access an existing, or even new, application. In a multivendor environment,
the issue is how to access the application from a terminal that is foreign to the
program.
- Resource sharing. Often, the sharing of printers, tapes, and disk space
not only saves money, but actually expands an application's usefulness by enabling
it to use these new resources.
- File transfer. The most common way to share information between systems
is to place the information in a separate file and then transfer it from one system
to another. If the information is integrated with an application program on either
end, special processing is typically required to write or read the information from
these transfer files.
- Program-to-program communications. In many cases, program-to-program communications
are used to exchange information between systems in real time. They can be implemented
as an alternative to file transfer, or they might be used to tie together online
databases operating on different systems.
Furthermore, from a broader perspective, two additional tools can be part of a
networking solution:
- Electronic mail. In a multivendor network, electronic mail can be used
in two ways. First, a single system can be the central electronic mail system that
all users should access, regardless of the system in which their primary application
resides. Second, if several systems are being connected and each system has its own
electronic mail system, a means of integrating these separate mail systems might
be required. Electronic mail access has taken on even greater importance, as it is
now used to facilitate the flow of business documents and electronic forms.
- Network management. When multivendor networks are connected, it is desirable
(but difficult) to manage the combined networks as one unified entity. Unfortunately,
each system's local network is usually unaware of its neighboring networks; therefore,
the management and maintenance of multiple logical networks is necessary.
Each of these tools and applications is covered in more detail in the following
sections.
Common Terminal Access
Being able to access any application from any terminal can solve a great number
of problems, but it is not a simple technical task. In most cases, this function
is provided by enabling one type of terminal to emulate another type of terminal
when accessing a particular system, as shown in Figure 1.1. For example, Digital
Equipment Corp. (DEC) terminals emulate International Business Machines (IBM)
terminals when they access IBM systems, and IBM terminals emulate DEC terminals when
they access DEC systems.
FIG. 1.1 Common Terminal Access
The beauty of this approach is that the application program is totally isolated
and unaware that the terminal it is communicating with is a foreign device. Because
the emulation is handling the translation of terminal functions, no changes are required
to the application program(s). With common terminal access, adding support for foreign
terminals is conceptually (and sometimes literally) no different from adding support
for additional native terminals.
This emulation process is not without sacrifices and difficulties. To begin with,
having one type of terminal emulate another incurs processing overhead. Taking the
data stream of one terminal and transforming it into the data stream of another terminal
involves intensive central processing unit (CPU), character-by-character processing.
If this processing is performed on the system that the terminal is physically attached
to or is accessing, the emulation process will, by default, consume application resources.
For this reason, emulation is often performed in a separate box or dedicated computer.
Another problem arises when more than two types of systems are involved. Though
it is one thing to have two types of terminals that emulate each other, it is entirely
another matter to have three types of terminals, each terminal emulating the other
two. In this three-terminal scenario, six separate emulation products are employed
(two for each terminal), and the chances of finding six such products are slim.
The emulation process itself can be broken into several technical tasks:
- Keyboard mapping. Different types of terminals sport very different keyboards
and key usage. For example, an IBM terminal might have 24 function keys while a DEC
terminal might only have four. Enabling each type of terminal to simulate the keyboard
of the other is a difficult task, but it is mandated by the emulation process. A
user must be able to generate the key sequences of the native terminal through the
emulation process.
- Screen presentation. Although most terminals support video attributes
(for example, bright, reversed, or underlined text) and commands to position the
cursor at points on the screen, these attributes and positioning commands differ
from terminal to terminal. At the same time, translating these sequences exactly
between terminal types is critical to successful emulation. The bottom line is that
the screen must appear the same (or very similar) on all types of terminals.
- Transmission characteristics. IBM terminals are block-oriented--that is,
they transmit the information typed onto the screen only when the Enter key is pressed.
Conversely, DEC terminals are character-oriented--they transmit each character as
the key is pressed. Therefore, when a character terminal is emulating a block terminal,
the emulation process must buffer the characters typed until the equivalent of an Enter
key is pressed. In the opposite direction, when a block terminal is emulating a character
terminal, it must take the full buffer and feed it to the application on a character-at-a-time
basis. This is among the most difficult tasks of emulation processing. (A sketch of
this buffering follows the list.)
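The buffering half of that last task can be sketched in a few lines. The following
fragment (modern Python, with names invented for illustration) accumulates keystrokes
from a character-mode terminal and releases them as a single block, the way a block-mode
host expects to receive them:

    class BlockModeBuffer:
        """Collects character-mode keystrokes into host-style blocks."""

        def __init__(self, send_block):
            self.send_block = send_block  # callback that ships a block to the host
            self.pending = []

        def feed(self, char):
            if char == '\r':              # treat carriage return as the Enter key
                self.send_block(''.join(self.pending))
                self.pending.clear()
            else:
                self.pending.append(char)

The reverse direction is the mirror image: take the block the host delivers and feed
it to the application one character at a time.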
Because of these technical difficulties and considerations, it would be advantageous
to introduce a common type of terminal to which all applications conform. Historically,
this has not been done successfully on a large scale. In modern times, however, X
Window System terminals (multisession graphics devices developed at the Massachusetts
Institute of Technology) have come to play a significant role in defining universal
standards.
Putting the technical issues aside, implementing universal terminal access can
offer simple, straightforward solutions to many different problems. Some of these
solutions include the following:
- Centralizing electronic mail. Because every terminal user can access a
common system, a single electronic mail product that addresses the total user population
could be implemented on one system.
- Increasing user access without increasing the number of terminals or line
costs. Common access eliminates the need to place two or more terminals at the
work spaces of users who require access to multiple systems. It also eliminates the
steep costs of duplicating data communications lines to these work spaces.
- Standardizing on one type of terminal. Because any terminal can access
the application pool, a single type or style of terminal can be used for end-user
applications, regardless of who manufactures the terminal and who manufactures the
system hosting the applications.
- Choosing applications without regard for the system they run on. Implementing
terminal emulation can provide freedom of choice on future applications. Rather than
being restricted to certain systems, candidate applications can be reviewed on their
own merit.
Before the availability of emulation products, many office workers had problems
finding a place to set down their morning coffee because their desks had to hold
both a dumb terminal and a PC. Emulation software, such as the 5250 emulation used
with IBM midrange systems, has advanced significantly over the years and now includes
mouse and hot spot support and the capability to run multiple sessions. Some 5250
emulators, such as Walker Richer & Quinn's (Seattle, Washington)
Reflection software, are programmable, so end users are able to add functionality.
Reflection comes with its own implementation of Visual Basic, called Reflection Basic,
and a separate API for controlling terminal sessions from applications. IBM's own
Client Access software offers an alternate method for connecting PCs to the AS/400.
Client Access replaces the older PC Support product, which was widely panned as sluggish
and suffering from an awkward interface. Client Access, on the other hand, has an
attractive, graphical interface, and as a native Windows product, offers significantly
better performance.
Resource Sharing
In addition to terminals, other resources in a network are normally controlled
by the system or manufacturer. By distributing these resources, you can often avoid
duplication of expensive devices. The three resources that are primary candidates
for sharing are printers, disk drives, and tape drives or other storage media (see
Figure 1.2).
Moreover, although each of these resources can be shared in the context of a particular
LAN implementation (for example, DEC's DECnet or Novell's NetWare), the same resources
might not be shared among different implementations. For example, a LAN-attached
printer might be used by any DEC system in a network but be unavailable to any Hewlett-Packard
system or PC in the same LAN.
FIG. 1.2 An Example of Resource Sharing
Finally, each type of resource has its own considerations, which are explored
in the following sections.
Printers
Sharing a printing device among multiple users is commonplace. One system handles
all output to a given printer and queues (or spools) the output to the printer. Therefore,
in multivendor environments, the issue is rarely interfacing directly with the printer
but interfacing with the spooling process.
In many ways, printer handling is a variation of file transfer processing. However,
in addition to performing standard character code translations--translating American
Standard Code for Information Interchange (ASCII) to Extended Binary Coded Decimal
Interchange Code (EBCDIC) or vice versa--the printer sharing process must also deal
with printer-specific directives that might differ from system to system. For example,
the directive to issue a form feed might be different for specific IBM and DEC printers.
This level of conversion is required because the process creating the output believes
it is writing to a native printer, so it uses native printer codes.
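A rough sense of the work involved can be had from a short sketch. The fragment below
(modern Python, for illustration only) performs the two conversions just described:
remapping printer-specific directives and translating the character code. The cp037
code page is one common EBCDIC variant; the directive table is hypothetical and would
be built per printer.

    DIRECTIVES = {
        '\f': '\f',  # form feed: placeholder for the target printer's sequence
    }

    def ascii_job_to_ebcdic(job: str) -> bytes:
        # First remap directives, then translate the character code.
        remapped = ''.join(DIRECTIVES.get(ch, ch) for ch in job)
        return remapped.encode('cp037')   # cp037 = EBCDIC (US/Canada)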
In addition to printer-specific code conversions, the print-sharing process must
read and write these specialized queue files. On most systems, these files are stored
in special locations using cryptic names, so the task of finding a print file to
reroute to another printer might not be trivial. After the source print file is found,
it is then written into a specialized queue file on the system handling the printer
(see Figure 1.3).
FIG. 1.3 Printer Sharing
When sophisticated multivendor printer sharing is implemented, it is normally
invisible to the user. The users simply initiate their output without thinking about
the process that gets the file to the printer.
Disk Drives
Given the nature of minicomputers and mainframes, sharing raw storage space in
multivendor networks is a rarity. For one thing, each system has its own operating
system (typically a proprietary implementation) that interacts directly with disk
devices in an optimized and nonsharable manner.
For PC LANs and their interfaces with minicomputers and mainframes, however, specific
products have been engineered that enable PCs to access a larger computer's disk
space as if it were local disk space. A portion of the larger computer's disk is
transformed into a virtual PC hard disk or diskette (see Figure 1.4). Through the
magic of the LAN, the PCs can read and write files and programs on these virtual
disks as if they were native network disks. Of course, the load must be balanced
appropriately; files should be distributed closest to where they are needed most.
Otherwise, the server will be overburdened with file requests.
Although this strategy does address PC access to minicomputer and mainframe disk
space, it does not do much for the computer host. The sponsoring computer, in fact,
might not get similar access to the PC resources, so this style of implementation
might be somewhat one-sided in terms of benefits.
At a higher level, some products allow a program on one system to read and write
records in a file that resides on another system--for example, IBM's Distributed
Data Management (DDM) implementation. Although IBM's implementation is, of course,
specific to IBM systems, other companies have developed similar techniques to enable
this level of access between dissimilar systems. Of these implementations, Sun Microsystems'
Network File System (NFS) is widely implemented and has, in fact, been adopted as
a network file access methodology by many of the leading computer manufacturers,
including IBM and DEC.
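The point of NFS-style access is transparency: once the remote file system is mounted
(the mount point below is hypothetical), a program reads remote data with exactly the
code it would use locally.

    # '/net/mainframe/payroll.dat' is a hypothetical NFS mount point;
    # the program neither knows nor cares that the bytes live elsewhere.
    with open('/net/mainframe/payroll.dat', 'rb') as f:
        header = f.read(128)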
FIG. 1.4 Virtual Network Files
Storage Media
Although tape drives are rarely viewed as network-level devices, the minimal expense
of tape media makes sharing attractive (see Figure 1.5). In a multivendor network,
a shared tape drive can be used in one of two ways: switched access and networkwide
access.
FIG. 1.5 Tape Drive Sharing
In switched-access mode, the tape drive can be shared between multiple systems,
but only one system can use it at a time. The advantage of this approach over transferring
files from other systems to the one that controls the tape is that each system gains
direct access and can read and write the tape in its native format. The switch, in
this case, might be hardware, software, or more likely a combination of the two.
If the tape drive is controlled by a single network function available networkwide,
files distributed throughout the multivendor network can be merged on a single tape.
This is similar to how tape servers function in PC LANs. Although this is an efficient
way of providing networkwide backup, it does not necessarily provide portability
from system to system.
An effective enterprise storage management system goes beyond network backup--it
provides for the best use of resources and makes sure that end users can access
data when it is needed. An issue in storage management is establishing a way to access
data stored on multiple devices and environments, and automating storage and retrieval
of the data. Data classification establishes policies for different classes of data
so that managers can decide on the best type of storage media for each type of data.
For example, non-critical reports can be stored on less expensive media, while customer
information might need to be stored so that it is immediately available. Hierarchical
storage management (HSM) tools are available to automate the process of data classification,
and subsequently migrate the data to the most appropriate type of storage.
Managing storage and backup can be done from either a UNIX perspective, which
offers an open architecture and less-expensive products, or with the older, highly
reliable IBM DFSMS product family. Although many tape and disk storage products for
storing critical data are available, mainframe-class storage devices still offer
the best reliability and performance. The availability of high-speed tape-mounting
robots can yield an impressive data transfer rate, nearly approaching that of DASD.
Still, other solutions must be considered. Remote distance unlimited DASD, for example,
is sometimes used to provide for the availability of urgent, critical data. Redundant
Arrays of Inexpensive Disks (RAID) technology has also become standard in many large
enterprises, while CD-ROM jukeboxes and other optical storage solutions are becoming
more efficient and affordable.
File Transfer
Of all multivendor services, file transfer is probably the best understood and
most sought after. Files were being moved from one type of system to another long
before LANs became popular. In the first implementations, files were moved via such
common storage media as magnetic tape, punched card, or paper tape. As data communications
and networking developed, these media-based transports were replaced by communications-based
methods that emulated such products as the IBM 2780 and 3780 Remote Job Entry (RJE)
stations. One computer would emulate a card punch, for example, while the other would
emulate a card reader.
As data processing grew in size and in scope, these approaches became too limited
to satisfy the variety of needs and demands for moving data. For example, nontechnical
(or semitechnical) users often want to control the "when" and "what"
of file transfers. In many cases, they even want to initiate the transfer themselves.
This level of involvement by nontechnical personnel is simply not possible when using
magnetic tape or RJE transport--both approaches require too much hands-on knowledge
of hardware and/or the operating system.
Typical file transfer solutions have a relatively simple user interface to accommodate
all levels of personnel (see Figure 1.6). A file transfer product can perform functions
such as enabling the accounting department to transfer a file from the administration
department to verify payroll or facilitating the exchange of documents and spreadsheets
between users on dissimilar systems (as long as the actual word processing and spreadsheet
packages can understand each other's information). And if the file transfer product
is simple enough to use, these transactions can occur under the management of the
people responsible for the information--no big brother from data processing required.
However, an easy-to-use interface does not diminish the potential power of file
transfer. The same product can also be used to address some complex, application-oriented
problems. It can extract information from one database, transfer the information
to another system, and then update another database with that information. Therefore,
in a dual (or multiple) database environment, file transfer is often used to move
subsets of information from one system to another.
CAUTION: File transfer is not a good solution for moving an entire
database from one system to the other because the information must be specifically
extracted from the database and put into another format for the transfer.
Many file transfer products accommodate time-fired transfers. These transfers
enable one system to collect information and transfer it to another system at predefined
times. For example, a bank could transmit the day's transactions at the close of
business, or a retail operation could send the cash registers' data at the end of
the day.
In addition to time firing, some transfer products provide event-firing mechanisms.
These mechanisms perform such functions as transferring a file as soon as it becomes
available or after two other files are transferred. By combining time firing and
event firing, you can create extremely sophisticated transfer scenarios.
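Both firing styles reduce to a small amount of scheduling logic. The sketch below
(modern Python, standard library only; file names and destination are invented) shows
a time-fired transfer and an event-fired one that waits for a file to appear:

    import os, sched, time

    scheduler = sched.scheduler(time.time, time.sleep)

    def transfer(path, destination):
        # Stand-in for the real transfer mechanism.
        print(f'transferring {path} to {destination}')

    # Time firing: run the transfer at an absolute time (here, one hour out).
    scheduler.enterabs(time.time() + 3600, 1, transfer,
                       ('/data/transactions.dat', 'headquarters'))

    # Event firing: poll until the file exists, then fire the transfer.
    def when_available(path, destination, poll_seconds=60):
        while not os.path.exists(path):
            time.sleep(poll_seconds)
        transfer(path, destination)

    scheduler.run()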
FIG. 1.6 File Transfer Among Dissimilar Systems
Behind the end-user interface and application aids are significant technical issues
regarding file transfer and implementation. Some of the technical issues that affect
the movement of information from one system to another include:
- How information is encoded. Information is normally stored in one of three
formats: ASCII, EBCDIC, or binary code. ASCII and EBCDIC are incompatible codes for
information storage; computer systems normally use one or the other, but rarely
both. DEC minicomputers, for example, use ASCII, while IBM mainframes use EBCDIC.
Binary information, on the other hand, is common to all systems.
- How codes are translated. Any reasonable file transfer product can translate
between ASCII and EBCDIC standards. This is important because a document written
on an ASCII computer and then transferred to an EBCDIC computer must be translated
to be read on the second system.
- How information is structured in a file. Files being transferred must
preserve their structure. For example, if a file contains records of names and addresses,
the same record structure must be created on the other machine. (A sketch of record
handling follows this list.)
- How fields are converted. Within records, individual fields (or items)
might have unique storage and translation requirements. If the information is financial,
for example, numbers might be stored as whole numbers, numbers with fractional values,
positive numbers, negative numbers, or combinations of these. When information is
stored in unusual formats, translating the data between machines is often beyond
the capability of even the best file transfer products, so a separate conversion
might be required before the information can be transferred.
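The structure and code-translation issues can be illustrated together. Many mainframe
files are fixed-length records with no delimiters, while most ASCII systems expect
newline-terminated lines. The sketch below (modern Python, with a hypothetical record
length) converts one form to the other while translating EBCDIC to ASCII; field-level
conversions such as packed numbers would need additional logic, as noted above.

    RECORD_LENGTH = 80   # hypothetical fixed record size

    def ebcdic_records_to_lines(data: bytes):
        # Walk the file in fixed-size slices and emit one text line per record.
        for offset in range(0, len(data), RECORD_LENGTH):
            record = data[offset:offset + RECORD_LENGTH]
            yield record.decode('cp037').rstrip() + '\n'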
Encapsulation, one of the primary attributes of object-oriented technology, affords
a new approach to data transfer. An object is a self-enclosed body of data, functions,
and services, in which the system invoking the object is shielded from its internal
workings. Techniques such as OpenDoc, CORBA, and Microsoft's OLE, which permit formatted
data from one application to be brought into another application unchanged, are based
on this technology.
Another factor that greatly contributes to the effectiveness (or ineffectiveness)
of file transfer is how the product is structured. For the purpose of simplification,
this structure can be broken into two parts:
- Communication links. At best, a file transfer product can move information
between two systems only as fast as the communications link operates. If the product
uses a 56 Kbps link, it can theoretically move no more than 56,000 bits per second.
Compared with a LAN connection operating at 10 Mbps, this is quite slow. Similarly,
products that use even higher-speed channel attachments obtain an even higher data
transfer rate. (A worked calculation follows this list.)
- Relationship to the applications. If a file transfer product is based
on software that resides on one (or both) computer systems, the software will compete
with the application programs in the computers. Therefore, if the application programs
demand a lot of the computer's attention, the performance of the file transfer will
suffer. Similarly, if the file transfer consumes its fair share of the computer,
the applications will suffer. An alternative approach is to implement the bulk of
transfer processing in a third computer. This independent device then handles the
CPU-intensive chore of data conversion and reformatting.
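The link-speed point is easy to quantify. Ignoring protocol overhead (which would
only lengthen the times), a back-of-the-envelope calculation looks like this:

    def transfer_seconds(file_bytes, link_bits_per_second):
        return file_bytes * 8 / link_bits_per_second

    ten_megabytes = 10 * 1024 * 1024
    print(transfer_seconds(ten_megabytes, 56_000))      # ~1,498 s: about 25 minutes
    print(transfer_seconds(ten_megabytes, 10_000_000))  # ~8.4 s over 10 Mbps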
In both cases, there is a direct association between performance and price. The
higher the performance...well, you know. Choosing the file transfer product that
offers the best price/performance ratio is often as difficult as the transfer process
itself.
Program-to-Program Communications
Whereas file transfer is the easiest multivendor networking tool to understand,
program-to-program communications is the most difficult. For one, the user community
usually can't see which programs manage what information. Without this knowledge,
it is difficult to understand the reasons for implementing program-to-program communications.
Despite this difficulty, the flexibility of program-to-program communications
enables it to address many situations for which common terminal access or file transfer
products are inadequate. Examples of these situations include:
- Control remote access. Often, allowing users the freedom that goes with
common terminal access and many file transfer products is inappropriate. When a high
degree of control and security is required, a program-to-program solution can be
implemented to control and monitor the files that a user can transfer between systems
or to force a user accessing a remote system to use a predefined sign-on sequence.
This prevents the user from accessing the remote system with a higher level of authorization
than he or she needs.
- Hide remote access. Because program-to-program communications can accomplish
many multivendor networking functions (common terminal access, file transfer, and
so on), the end user can be completely isolated from the multivendor environment.
This scenario is similar to controlling remote access except that program-to-program
communications is used to accomplish all required multivendor functions; the end user
never sees any remote activity.
- Provide coordination of real-time events. Because program-to-program communications
can pass along information as it is created, previously separate events can become
interconnected. For example, in a manufacturing process in which the beginning of
one event must wait for the conclusion of another event, program-to-program functions
can automate the notification process. In a medical environment, information collected
by a laboratory computer can be updated in a patient's medical history database and
in billing records. Workflow software can implement program-to-program communications
in this manner, and is in fact often used to automate business processes, allow simultaneous
access to data, and avoid the "lag time" involved in handing off the process
from one desk to the next.
- Implement distributed databases. Sometimes the total information requirement
of an application is so large that it spans multiple computer systems. In this distributed
environment, program-to-program communications can enable each system to access the
other systems' information.
- Implement a central database. Rather than implement a distributed database,
you can centralize information in a single computer system. If application programs
running on other systems need access to any of the central data, they can use program-to-program
services to obtain the information.
Program-to-program communications can be implemented in two ways. One implementation
enables one program to appear to a remote system as though it were a terminal (see
Figure 1.7). In this approach, the program logs on to the remote computer, accesses
the program it wants, and interacts with it by simulating a user sitting at a terminal.
Although this approach requires the overhead of emulating an end user, it requires
custom programming on only one of the systems; the other program remains the same.
FIG. 1.7 Program-to-Program Communications
The second type of implementation enables two or more programs to communicate
directly with each other. In most cases, both programs use a common set of access
routines that let them establish a link with one another and transfer information.
In large IBM networks, this is accomplished through the SNA LU 6.2 interface. For
LANs, the implementation is usually unique to the networking service (for example,
DECnet's task-to-task communications, HP's InterProcess Communications, and Sun's
Remote Procedure Calls).
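The direct style of program-to-program communication can be sketched with a modern
remote procedure call library. The example below uses Python's standard xmlrpc modules
purely for illustration (XML-RPC postdates the interfaces named above, but the shape
of the exchange is the same); the host names and the routine are hypothetical. The
serving program exposes a routine:

    from xmlrpc.server import SimpleXMLRPCServer

    def post_lab_result(patient_id, result):
        # Stand-in for updating the patient-tracking database.
        print(f'recording {result!r} for patient {patient_id}')
        return 'ok'

    server = SimpleXMLRPCServer(('0.0.0.0', 8000))
    server.register_function(post_lab_result)
    server.serve_forever()

A program on any other system in the network can then invoke it directly:

    import xmlrpc.client

    lab = xmlrpc.client.ServerProxy('http://lab-system.example.com:8000')
    lab.post_lab_result('12345', 'glucose: 5.2 mmol/L')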
Although the tools to implement program-to-program communication are well-defined,
the applications for it are wide open. Like most programming tools, the uses of
program-to-program communications are highly dependent on their environment, applications,
and programmers.
Electronic Mail
Electronic mail, or e-mail, can enable a network of people to communicate interactively.
The backbone of every mail system is its capability to send notes and replies between
its users (see Figure 1.8). This facility is faster (theoretically) than issuing
memos and more convenient than tracking someone down via telephone.
FIG. 1.8 Sample Electronic Mail System
Besides providing basic electronic communications, many e-mail packages also include,
or are bundled with, software for automating common desktop and office functions,
or facilitating the flow of documents throughout the enterprise. These functions
include time management (appointment scheduling and to-do list maintenance) and information
management. Many products provide special scripting languages so they can be customized
for more specific functions (companywide bulletin boards, structured training, help
guides, and so on).
Implementing an e-mail package in a multivendor network can be done in a centralized
or distributed manner. When implementing a centralized solution, one system can be
designated the e-mail host. In this case, each terminal, regardless of manufacturer,
must have access to that common system. As previously discussed, common terminal
access is an appropriate means of accommodating this need.
In a distributed implementation, two or more systems serve as hosts to an e-mail
solution. Although there are obvious benefits to running the same e-mail software
throughout the enterprise (simplified training, better licensing deals, and universal
access to vendor-specific features), different vendors' e-mail systems can still
interoperate. Most modern systems comply with the SMTP standard, which allows for
at least a basic level of interoperability. In this case, you need only ensure that
the physical and logical links between the systems are compatible with the product's
requirements.
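Because SMTP is the common denominator, a program on any system can hand mail to the
site's relay in a few lines. The sketch below (modern Python; the addresses and relay
host are invented) shows the basic exchange:

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg['From'] = 'clerk@accounting.example.com'
    msg['To'] = 'auditor@admin.example.com'
    msg['Subject'] = 'Payroll verification'
    msg.set_content('The payroll file has been posted for review.')

    with smtplib.SMTP('mailhub.example.com') as relay:  # the site's SMTP relay
        relay.send_message(msg)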
Electronic mail differs significantly from file transfer. Some of the unique attributes
of e-mail exchange include:
- Distribution lists. To send or receive mail, a user must be defined as
a participant within the e-mail system. When multiple systems are used, this list
of users must be cross-referenced to track which users belong to which systems. If
cross-referencing is not practical because of too many users or incompatible products,
you must find a way to enable one system to distribute information to another system
without knowing what users are where.
- Format of mail messages. In many cases, you can use the system's native
word processor to generate the contents of a message. However, because different
manufacturers' word processing systems are usually incompatible with one another,
you usually must implement a means of identifying the originating format and converting
it into a more acceptable format.
Workflow Systems
Much has been made of the trend towards Business Process Reengineering (BPR).
BPR is an extremely arduous procedure in which all business processes are examined,
rethought, and reworked from the ground up. There is often a lot of resistance to
BPR, especially on the part of end users, because of the disruption that affects
people's comfortable routines. However, if effectively carried out, BPR can significantly
enhance productivity.
BPR is often associated with workflow documentation, and in fact, the first step
of a BPR analysis is often to document workflow. Looking at the flow of work throughout
a department, or indeed the entire enterprise, lends itself to redesigning the very
operations being documented. Oftentimes, a certain task that was done for years might
prove irrelevant after it is documented. BPR has led to new technologies, including
collaborative computing, document management, and automated workflow systems. Besides
merely automating the flow of work and documents throughout the business unit, these
tools can also significantly transform and redesign the way the work is done.
Workflow software dispatches electronic documents or forms through a queue, routing
each document to the next person based on preset business rules or a defined access
list. It can be used to automate business processes, route projects throughout a
business unit, and track the status of a project. Workflow software, when built on
top of a client/server architecture, permits business tasks to be performed in rapid
succession or even simultaneously by different workers. If a document is presented
in traditional paper format, only one person can use it at a time, and it must
be physically transferred to the next person in the workflow line. The electronic
presentation of documents greatly speeds up the process.
Until recently, there was no way to connect messaging systems and workflow engines.
Microsoft addressed this situation by adding extensions to its Messaging Application
Programming Interface (MAPI), which permit the linking of messaging and workflow
systems. In addition, a workflow consortium of which Microsoft is a member plans
to publish an API to define how front-end applications can access multiple
workflow systems. Microsoft's MAPI Workflow Framework defines a set of extensions
for routing work from desktop applications to workflow systems in the form of MAPI
messages. Under the Microsoft framework, a MAPI-based e-mail system can now trigger
a workflow procedure.
Similar to the workflow model is groupware, a type of solution that permits groups
of individuals to work collaboratively on a project. This also is built on an e-mail
framework, and provides many of the same capabilities as workflow products. One of
the most prominent groupware products is Lotus Development's Notes, a fully programmable
tool that can highly automate tasks, facilitate communications, and streamline access
to data. To scale to the enterprise level, however, groupware products must be able
to integrate with existing network management tools. To accommodate this need, Lotus
released SNMP agents for OpenView, which permit event messages to be sent from a
Notes server to an SNMP management console. IBM, Lotus' parent, also has plans to
integrate Notes with network managers from Sun and IBM.
Network Management
Network management in a multivendor network is a technical task rarely seen by
end users. However, end users do often notice the effectiveness of a network's management
when they experience network changes and problems.
The problems involved with managing multivendor networks are numerous and complex.
In some cases, networks are geographically separate and linked through bridging and
gateway devices. In others, various manufacturers share the same physical network,
but each runs its information over that network independently. Sometimes, information
runs over the same physical and logical network.
Therefore, when a component in a network ceases to operate correctly, there are
many potential causes for the failure. Furthermore, given the increasing use of WANs,
the geographical size of the network can be huge. For example, a DEC LAN in California
might connect to an IBM network in Texas that might connect to an HP network in New
York. Even worse, the group responsible for managing the network might be located
only in New York, thus increasing the difficulty of diagnosing the Texas and California
networks, even though they are linked together.
The primary job of network management is to monitor and report on the status of
the whole network. A network management solution tracks the status of every component
in the network, regardless of who the manufacturer is or what type of network it
is operating on (see Figure 1.9).
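At its simplest, that tracking is a polling loop over an inventory of components. The
fragment below (modern Python, standard library only) is a toy status poller, not a
real management platform such as an SNMP manager; the component list is hypothetical:

    import socket

    # Hypothetical inventory: (name, host, TCP service port to probe).
    COMPONENTS = [
        ('dec-gateway',   'gw.ca.example.com',  23),
        ('ibm-mainframe', 'mvs.tx.example.com', 23),
        ('hp-server',     'hp.ny.example.com',  23),
    ]

    def is_reachable(host, port, timeout=3.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for name, host, port in COMPONENTS:
        print(f"{name:15} {'up' if is_reachable(host, port) else 'DOWN'}")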
As already mentioned, network management products are often invisible to the end
users. But the use of such a product in conjunction with the overall networking strategy
is an important aspect of maintaining any large single-vendor or multivendor network.
FIG. 1.9 Network Management System
The Bottom Line
Examining networking and application needs to find the best solution is a complex
task. It is important to understand networking issues and some underlying networking
considerations. For example, to understand the difficulties in implementing a combined
IBM and DEC network, it is important to understand how each network operates on its
own. Similarly, to shop for multivendor solutions, you need to understand the application
and range of available options.
The rest of this book is organized with these considerations in mind. Chapters
2 through 5 deal with the products and native networking architectures of Digital
Equipment Corporation, Hewlett-Packard, IBM, and Sun Microsystems. Chapters 6 through
13 address multivendor networking issues, standards, product approaches, and network
management. Finally, a glossary defines the terms and acronyms used in this book
and throughout the data communications and networking industry.