Software Considerations
The advent of multivendor networks and client/server architectures has resulted
in more software being cross-platform in nature. Cross-platform development is
straightforward for programs without a GUI; often a simple recompile of a C program
will do the job (a short sketch of this case follows the list below). It is more
complicated for programs with a graphical front-end, which end users now expect.
Fortunately, several development tools are available for this purpose. These include:
- Uniface 6 (Uniface Corp.). Uniface can be used to create a generic interface,
which is defined in an object repository instead of in code.
- zApp Developer's Suite (Inmark Development Corp.). This suite is actually
an application framework, which includes a set of C++ class libraries with prebuilt
services. Screens can be designed by dragging and dropping interface objects, and
the resulting C++ code that is automatically generated can be compiled for either
UNIX or Windows.
- UIM/X (Bluestone Communications, Inc.). UIM/X is an object-oriented development
tool. It uses native libraries to create a more compliant look and feel, and has
an interactive GUI builder. The UIM/X Cross Platform Toolset provides developers
with a set of cross-platform interface components.
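For the no-GUI case mentioned before the list, a minimal sketch (with purely
illustrative output) shows why a recompile is often all that is needed; the same
C++ source builds unchanged under a UNIX compiler or a Windows compiler:

// hello_port.cpp -- a minimal illustration of single-source, cross-platform
// code; the only platform dependence is an #ifdef on a compiler-defined macro.
#include <iostream>

int main() {
#ifdef _WIN32
    const char *platform = "Windows";   // _WIN32 is defined by Windows compilers
#else
    const char *platform = "UNIX";      // assume anything else is UNIX here
#endif
    std::cout << "Hello from " << platform << std::endl;
    return 0;
}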
WinSock
WinSock (Windows Sockets) is an open API designed by Microsoft that provides
the means of using TCP/IP with Windows. The newest version, WinSock 2, will add support
for IPX/SPX, DECnet, and OSI. WinSock 2 is transport independent, and includes a
complete set of APIs for programming to multiple network transports concurrently.
(In addition, WinSock 2 will permit applications to take advantage of high-speed
ATM switching technology. The API will permit existing applications to be adapted
to ATM with only a minimal amount of reprogramming.)
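The flavor of the API can be seen in a bare-bones sketch of a WinSock 1.1 TCP
client. The server address and port here are hypothetical, and error handling is
trimmed to the minimum; the point is that after WSAStartup(), the calls mirror
familiar BSD sockets:

// A pared-down WinSock 1.1 TCP client; the address and port are placeholders.
#include <winsock.h>
#include <string.h>

int main() {
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(1, 1), &wsa) != 0)    // initialize the WinSock DLL
        return 1;

    SOCKET s = socket(AF_INET, SOCK_STREAM, 0);   // a TCP stream socket
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(7);                     // echo port, for illustration
    addr.sin_addr.s_addr = inet_addr("10.0.0.1"); // hypothetical server address

    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
        char buf[32];
        send(s, "hello", 5, 0);                   // BSD-style calls throughout
        recv(s, buf, sizeof(buf), 0);
    }
    closesocket(s);                               // WinSock-specific, not close()
    WSACleanup();                                 // release the WinSock DLL
    return 0;
}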
The OAG and Multivendor Application Integration
The Open Applications Group (OAG) has demonstrated a specification for multivendor
application integration. Two members of the consortium plan to deliver systems with
snap-together functionality by next year. The OAG specification will enable client/server
applications to be integrated "out of the box," without having to add on
extra software interfaces. The applications will pass data directly between one another
in a common format. Compliant applications will contain an API written to the OAG
message format specification, known as the Business Document Exchange. If the
specification is widely accepted, applications written to it will be capable of
recognizing each other's data.
Macintosh File Sharing
The Macintosh is not widely used in corporate networks, although it does have
its niche areas, such as graphics and multimedia. Several utilities are available
to enable PCs to recognize Macintosh files. TCP/IP ships with Macintosh hardware
and is simple to configure on the platform. Any TCP/IP application can work with
the Macintosh TCP/IP drivers.
AppleTalk is the Macintosh's native network protocol, although TCP/IP can actually
be simpler to manage. Some network managers prefer to avoid AppleTalk on the corporate
net, despite the fact that there is little justification for doing so. Although AppleTalk
uses a small packet size, this does not necessarily mean it will generate more traffic.
AppleTalk does, however, generate some additional traffic because of the automation
inherent in the protocol. Devices communicate with each other over the AppleTalk
network to make AppleTalk a plug-and-play network; there is no need to type in addresses
and setup data for each device. TCP/IP is moving toward this model with the Dynamic
Host Configuration Protocol (DHCP), which is similar in spirit to the AppleTalk
Address Resolution Protocol (AARP).
Tools such as RUMBA from Wall Data Inc. (Kirkland, Washington) enable the Mac to
participate in IBM-based networks. With this tool, Mac users can communicate with
IBM mainframes and minicomputers, as well as with other platforms. The Mac RUMBA
client software integrates Wall Data's SNA*ps mainframe gateway technology with
the company's RUMBA PC-to-mainframe client software.
Component Technology
The concept of distributed objects holds great potential. A distributed object
is a software component that performs functions for other objects. Such objects
can be distributed throughout the network, accessed by any authorized network user,
and assembled into complete distributed applications.
There are four separate, and sometimes conflicting, standards for distributed
objects: OLE, CORBA, DCE, and OpenDoc. These standards offer a way for different
objects to communicate, regardless of vendor origin, and bring developers a higher
level of abstraction. Instead of focusing on clients and servers, the developer works
with users, objects, and methods. It is no longer necessary to track which server
process is executing each function because this information is encapsulated within
each object. When a message is sent to an object requesting action, the object
executes the appropriate methods. The object encapsulates data, functions, and
logic, all of which are shielded from the requesting application.
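As a rough illustration of this encapsulation (with hypothetical names throughout),
consider an object that publishes one method; the caller sends a request and never
sees the data, logic, or location behind it:

// A conceptual sketch of object encapsulation; in a distributed system the
// object might execute on any server, but the call looks the same.
#include <iostream>

class AccountObject {
public:
    explicit AccountObject(double opening) : balance(opening) {}
    double queryBalance() const { return balance; }  // the published method
private:
    double balance;   // encapsulated data, invisible to the requesting code
};

int main() {
    AccountObject acct(100.0);
    // Sending a "message" is simply invoking a method; which process does
    // the work is hidden behind the object's interface.
    std::cout << acct.queryBalance() << std::endl;
    return 0;
}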
Object technology can also simplify maintenance and network management tasks.
For example, additions and changes can be abstracted to the point of plugging or
unplugging visual objects in a graphical interface.
OLE
Component technology's goal is to permit development, management, and other tasks
through interoperable, cross-platform, off-the-shelf components. Windows developers
have at their disposal a large collection of Visual Basic ActiveX custom controls.
Based on Microsoft's OLE (object linking and embedding) technology, ActiveX
has evolved from the earlier VBX and OCX models. OLE, however, carries a high learning
curve and lacks object-oriented features such as inheritance, a technique whereby
a new object acquires both the data and the functions of an existing object.
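A two-class C++ sketch (the names are illustrative) shows what the text means by
inheritance; OLE objects offer no equivalent shortcut:

// Inheritance in brief: Invoice receives Document's data and functions
// without restating them.
#include <iostream>

class Document {
public:
    int pageCount;
    Document() : pageCount(0) {}
    void print() const { std::cout << pageCount << " pages\n"; }
};

class Invoice : public Document {   // inherits pageCount and print()
public:
    double total;
    Invoice() : total(0.0) {}
};

int main() {
    Invoice inv;
    inv.pageCount = 2;   // data inherited from Document
    inv.print();         // function inherited from Document
    return 0;
}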
Network OLE
Microsoft is working on a version of OLE, called Network OLE, to provide distributed
object functionality. Network OLE will use RPCs to distribute components throughout
the enterprise, and will be released with the next version of Windows NT. It adds a third tier to
a client/server network, with business rules and code encapsulated into components
and distributed across the network. This third layer is transparent to the end user,
who will not have to know where the OLE objects are located.
OLE (Microsoft) is based on the Component Object Model (COM), an open spec
for object technology. OLE objects are interoperable, and can be created in any one
of several languages. OLE is only available on Windows platforms. Microsoft's Visual
Basic 4.0 takes some steps towards a Distributed OLE model, which permits
VB functions to be declared remote.
Under pressure to at least marginally embrace open systems and the World Wide
Web, Microsoft has come up with an OLE enhancement technology it calls ActiveX.
Besides Windows, ActiveX supports the Macintosh and UNIX, along with a large set of
tools and programming languages. Microsoft's goal in releasing ActiveX is to make
it easier to create interactive applications and World Wide Web pages. Already, there
are more than 1,000 reusable ActiveX controls--which means that when you are building
a Web page, you don't have to build every piece from scratch. Although it doesn't
compete directly against Sun Microsystems' enormously popular Java language, Microsoft
certainly had Java's market in mind when it created this little gem. Java programmers
can access ActiveX controls from Java applets, and ActiveX also establishes a bridge
to Java to let other programming languages use Java applets as reusable components.
Microsoft's Visual J++ Java development tool integrates the Java language
with ActiveX.
NeXT Computer is planning to beat Microsoft at its own game by offering distributed
OLE technology before Microsoft releases its own distributed OLE products. NeXT plans
to ship Distributed OLE for Windows, an extension of its current object environment.
With this tool, developers can create Windows applications that send OpenStep objects
across a distributed network.
CORBA
Common Object Request Broker Architecture (CORBA), however, does support
object-oriented development. OpenDoc is a CORBA-based platform developed by
an industry alliance led by Apple Computer, Inc. OpenDoc is better suited to cross-platform
development and works well in UNIX, Mac, and OS/2 environments. OpenDoc does support
OLE, and an OLE 2.0 object can be embedded in an OpenDoc component. Because OpenDoc
is a derivative of CORBA, it is networkable; CORBA 2.0 has a method for distributing
objects throughout the enterprise.
CORBA's ORB (Object Request Broker) architecture affords developers more
freedom than OLE in terms of programming languages and operating systems. OMG's (Object
Management Group) CORBA 2.0 is based on the ORB structure. ORBs facilitate interoperability
and establish a single platform on which objects request data and services on the
client side or provide them from the server side. CORBA uses TCP/IP as a standard
communications protocol. Compared with the other standards for distributed objects,
CORBA is still immature and lacks some features needed for large-scale production.
Version 2.0 of the CORBA specification includes the Internet Inter-ORB
Protocol (IIOP), which provides for multivendor connectivity. The previous
implementation of CORBA, although it provided for portability, did not include a
specification for interoperability. The availability of IIOP will significantly increase
CORBA's potential to become widely accepted.
The ORB model is rapidly maturing, and several vendors are bringing ORBs to market.
Some of these products extend the CORBA specification to support mission-critical
applications, by providing fault tolerance, support for shared memory, and multithreading.
Microsoft OLE-based applications will communicate with CORBA applications through
a CORBA 2.0 ORB.
CORBA (Object Management Group) provides the specifications for the development
of ORBs. An ORB instantiates objects, establishes communications between objects,
and invokes methods on behalf of objects. The CORBA Interface Definition Language
(IDL) is used to define an object's interface, but the earlier specification,
1.2, did not provide for a standard communications protocol. As a result, few ORBs
from different vendors were interoperable. (Version 2.0, as noted previously, specifies
such a standard.) CORBA does not specify a mechanism for locating or securing objects.
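To make the development cycle concrete, here is a sketch using the standard CORBA
C++ mapping. The interface, file names, and object reference are hypothetical; an
IDL compiler would normally generate the stub header from the interface definition:

// The IDL definition (normally in its own file, e.g. quote.idl):
//
//     interface Quote {
//         double lookup(in string symbol);
//     };
//
// A client using the stubs an IDL compiler would generate from it:
#include "quoteC.h"   // hypothetical name for the generated stub header

int main(int argc, char *argv[]) {
    CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);       // connect to the ORB
    CORBA::Object_var obj = orb->string_to_object(argv[1]); // stringified reference
    Quote_var quote = Quote::_narrow(obj);                   // obtain a typed proxy
    double price = quote->lookup("IBM");                     // a remote invocation
    (void)price;   // the client never learns where the object actually runs
    return 0;
}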
ExperSoft's PowerBroker 4.0 is an extension of the company's XShell 3.5. It is the
only product available that supports both CORBA 2.0 and Microsoft's OLE. This is
accomplished through the product's Meta object request broker, which works as a
translation layer that understands the two object models, as well as the predominant
object-oriented programming languages. CORBA 2.0 defines mappings between object-oriented
languages, and ORBs define how a software object is identified and used across the network.
CORBA and OLE are integrated through the PowerBroker OLE feature, which automates
interactions between OLE automation clients and PowerBroker objects.
OpenDoc
OpenDoc developers can already migrate a component between platforms more easily,
and OpenDoc is much more interoperable than OLE. OpenDoc is promoted by Component
Integration Laboratories (Sunnyvale, California), an Apple-led consortium that comprises
several vendors, including Apple, IBM, and Novell. Like OLE, OpenDoc presents a
visualization system for compound documents; unlike OLE, it is based on IBM's
System Object Model (SOM).
(Members of the consortium are planning to provide OpenDoc support in their applications,
and development kits have become available.) However, OpenDoc is a latecomer into
the distributed object market.
OpenDoc introduces a component-based architecture suitable for cross-platform
development. It is implemented as a set of shared libraries, which include the protocols
for creating software components across a mixed environment. The standard is vendor-independent,
and has a layered architecture that offers five services: Compound Document Services,
Component Services, Object Management Services, Automation Services, and Interoperation
Services. Many of the features of OpenDoc can be accessed through API calls. OpenDoc
is based on the CORBA-compliant System Object Model (SOM). Developed by IBM,
SOM is a tool for creating cooperative objects; it is used in the OS/2 Workplace Shell
and has proven to be a reliable and mature technology.
The goal of OpenDoc is to enable users to call up compound documents that might
include graphics, text, or other elements, without having to invoke all the various
applications involved in creating them. Under the OpenDoc view, vendors replace their
traditional large applications with part editors and part viewers; this represents
a significant change in the way software is created and used. It differs
from the traditional, application-centered model, where users call up specific applications
to create platform-specific documents. Despite large vendors' attempts at throwing
everything imaginable into one large application, it is impossible to provide every
feature that every user could possibly want. OpenDoc instead makes features separately
available as parts, so end users can customize their application environments to
suit them. Companies are starting to deliver OpenDoc parts to the market.
DCE
Distributed Computing Environment (DCE) is one of the most mature standards.
Microsoft's OLE, because it is proprietary, is not a true standard, but has become
a de facto standard for Microsoft environments. OLE is widely used, but specifications
have not been provided to other vendors. OpenDoc is not widely accepted.
A product of the Open Software Foundation (OSF), DCE is fully vendor-independent
and is widely available from several vendors and for most operating systems. It includes
services for locating distributed objects, and secure access facilities. It also
includes a protocol for communicating in a heterogeneous environment.
The widespread availability of DCE objects makes it a good framework for building
applications. The DCE Remote Procedure Call (RPC) is not dependent on one
protocol or network type. The DCE RPC lets a server communicate with multiple clients
on different types of networks. In addition, DCE's Global Directory Service (GDS)
and Cell Directory Service (CDS) provide a useful technique for managing an internetwork:
a local set of nodes is represented as a cell, managed by a CDS, within the larger
GDS hierarchy.
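A sketch of how this looks to a programmer follows, with a hypothetical interface
and a placeholder uuid. The DCE IDL compiler generates the client and server stubs;
with DCE's automatic binding (one of several binding styles), the RPC runtime itself
imports a server binding from the directory service:

// The DCE IDL definition (normally in its own file, e.g. math.idl):
//
//     [ uuid(00112233-4455-6677-8899-aabbccddeeff),  /* placeholder uuid */
//       version(1.0) ]
//     interface math {
//         long add([in] long a, [in] long b);
//     }
//
// A client built with automatic binding; the call looks entirely local:
#include "math.h"   // hypothetical header generated by the IDL compiler

int main() {
    long sum = add(2, 3);   // the RPC runtime finds a server via CDS/GDS
    return (sum == 5) ? 0 : 1;
}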
DCE has been commercially available only for a short time, and supporting commercial
software products are still not widely available or are in their early stages of
development. When better tools become available, managing the distributed environment
will be easier.
OSF's Distributed Management Environment (DME) provides DCE-enabled management
services. DCE's administration is consolidated under DME, providing a programmable
process for managing the distributed environment. Implementing a successful DCE migration
might take years, and it requires detailed planning and strategy. Migration is hindered
by DCE incompatibilities, the slow emergence of standards, and resistance by
users and management. While major vendors have announced DCE support, application
development and management tools are still scarce, although some products do offer
DCE support. Even so, DCE decreases the complexity of a migration to a distributed
computing environment by reducing the number of variables, simplifying the transition,
and lessening dependence on multiple vendors.
The Motif GUI was one of the earliest successes of OSF. Motif has been accepted
as a standard open systems interface by most major UNIX vendors. DCE includes RPC
technology, which provides application and file sharing, enterprise security, and
directory services. These are all transparent to operating systems, hardware, and
protocols.
More widespread availability has led to an increase in DCE's popularity: DCE is
now available on Windows NT, MVS, and AIX. DCE is a set of integrated directory,
security, and transport services for building distributed applications that can run
over multiple operating systems. It can support large-scale distributed environments
in a multivendor environment. Other object technologies lack the standardization
and security needed to be effective in an enterprise-wide multivendor environment. More
tool vendors are bringing products to the market that make DCE programming easier.
Several UNIX vendors have shipped DCE code with their operating systems, including
IBM (AIX) and HP (HP-UX).
Although DCE was originally targeted strictly at interoperability between UNIX
systems, there has been a migration to accommodate many different operating systems.
Microsoft is planning to use the specification as a way to move into the enterprise.
Data Warehouses and Repositories
The combination of larger networks, multiple database products, and a greater
demand for business information at all levels calls for new tools and technology. In
striving for an interconnected enterprise, made up of heterogeneous hardware and
software, the data warehouse can provide an excellent solution. Imagine an enterprise
with a legacy mainframe system, a transaction processing environment, and several
departmental LANs. Imagine again, an executive coming to you and saying, "Give
me a report on the Big Picture." You sweat a little as you imagine trying to
gather all this information from these various systems and then integrate it all
into a single report. You know you will spend weeks on the report and then the executive
will look at it for ten seconds and file it, having no idea the amount of trouble
it took you to prepare it.
The data warehouse can be used to bring together a variety of information from
legacy systems, transaction processing environments, and other areas. Furthermore,
an Executive Information System (EIS) can be deployed on top of the data warehouse,
which will provide the executive or manager with direct access to this data. The
executive no longer has to wait for reports, and you no longer have to spend precious
time preparing endless management reports.
Systems within the enterprise are too often incompatible or just unconnected.
Take, for example, the poor fellow who has to generate a series of monthly reports
based on mainframe data. Every month, he has delivered to his desk a familiar wide
printout that, once unfolded, drops to the floor and stretches across the
hall. It is a major accomplishment when the mainframe guys even convert the dataset
into a delimited ASCII file! Of course, they have to deliver it by hand, on a floppy
disk, and then this unfortunate soul has to massage and rekey the data into a Lotus
spreadsheet.
However, if he had one of the many data mining applications that are currently
available, not only could he have directly accessed that data, but he could have
"drilled down" to any level of detail down to an individual transaction.
Is this a familiar scenario? It is likely that most large companies have situations
like this, where data has been entered once but must be entered again because of
a computer incompatibility. What makes it even more frustrating is that it is no
longer even necessary. Yet, the problem continues to increase as data gets more spread
out and departmental LANs are created as autonomous entities. Centralized management
of this wealth of information is essential.
This centralization can be achieved through the repository--a "metadata"
system that collects information about the various data that exist throughout
the enterprise. The repository provides information about data relationships, regardless
of format. It does not actually hold the databases, but rather provides a sort of
central, overall view.
Running on top of this repository is the data warehouse, which is able
to bring together and manipulate corporate data, and make it more accessible for
the end user. The warehouse puts data into a consistent format for simplified access.
The repository/warehouse model provides an effective platform for connectivity throughout
a heterogeneous enterprise. By having access to all corporate data, end users are
empowered and the company maintains a competitive edge.
The data warehouse does not necessarily take the form of a central physical data
store. Although this is one option, the distributed data mart approach to
data warehousing lets the end user select a subset of a larger scheme, which is organized
for a particular usage.
The data from the data warehouse appears to the end user as a single, logical
database. In reality, the information might come from multiple databases and heterogeneous
platforms. The differences between these DBMSs and platforms become transparent to
the end user.
End users are able to access this information without having to access the production
applications that were used to create the data in the first place. One of the most
effective approaches to data warehousing is a three-tiered architecture that uses
a middleware layer for data access and connectivity. The first tier is the host,
where the production applications operate; the second tier is the departmental
server; and the third tier is the desktop. Under this model, the host
CPU, or first layer, can be reserved for the operation of the production applications;
the departmental server handles queries and reporting; and the desktop manages personal
computations and graphical presentations of the data. The data access middleware
is the key element of this model. Middleware is what translates the user requests
for information into a format to which the server can respond. This three-tiered
architecture can then establish connections with many different types of data sources
on different platforms, including legacy data.
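A small conceptual sketch (all names hypothetical) shows the middleware layer's
translation job: the desktop sends a generic request, and the middleware renders
it into something one particular back end, here a relational source, can answer:

// Middleware translation in miniature: the user request carries no hint of
// which database, or which platform, will satisfy it.
#include <iostream>
#include <string>

struct UserRequest {       // what the desktop tier sends
    std::string subject;   // e.g., "sales"
    std::string region;    // e.g., "Northeast"
};

// On the departmental server, translate the request into SQL for one
// relational back end; other back ends would get other translations.
std::string toSql(const UserRequest &r) {
    return "SELECT * FROM " + r.subject +
           " WHERE region = '" + r.region + "'";
}

int main() {
    UserRequest req;
    req.subject = "sales";
    req.region  = "Northeast";
    std::cout << toSql(req) << std::endl;   // forwarded to the data source
    return 0;
}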
Tasks involved in building a data warehouse include extracting the production
data on a scheduled basis, removing redundancies, and cataloging the metadata. After
extracting and restructuring operational data, the data warehouse environment then
places it in a database that can be accessed by the end user. A traditional RDBMS
can be used, although multidimensional databases offer special advantages for the
warehouse environment.
With the increasing use of data warehouses, companies might need to extend the
capabilities of the network to provide access to the warehouse across the enterprise.
The number of end users needing access to the data warehouse is increasing, partly
due to the trend towards downsizing and elimination of middle management. One solution
is the establishment of the data mart, a smaller, departmental database that contains
a relevant subset of the larger data warehouse, and is synchronized with the central
data warehouse. The data mart might contain information that is most frequently requested,
or relevant to only specific departments. This can keep the load on the data warehouse
down, and make it easier to retrieve information.
World Wide Web
The World Wide Web is emerging as a tool for internal corporate networking
and communications. Some large companies are deploying Web servers strictly for internal
communications and applications (often referred to as intranets), and as a way for
employees, regardless of location, to access databases and other information. Because
data written for posting on a Web site is created in a common format, using the HTML
markup language,
the originating platform is irrelevant.
Through these intranets, users can access applications
through their Web browser, instead of having to log in through a remote access program.
The Internet and World Wide Web are also being widely used to offer publicly accessible
data such as customer contact systems, where customers can check bank balances, order
status, or other information.
Networking vendors are using the Web to deliver network management information.
Viewing this data over the Web presents many obvious advantages. Network managers
can access this critical information from any location, from any computer equipped
with a modem and a Web browser. With this capability, it is no longer necessary to
log on to the internal network or be physically in front of a specific management
console to view network management data.
Web Plans--Cabletron, NeXT, and IBM
Cabletron Systems Inc. (Rochester, New Hampshire) is planning a Web reporting utility
in the next version of its enterprise network management software. Cabletron's Spectrum
4.0 enterprise management software will include a reporting option that will send
updated information to a Web server.
NeXT Computer has a software object library that will permit developers to write
Web applications that can link with a back-end, object-oriented, client/server system.
The tool set will include a number of objects for building electronic commerce-enabled
Web sites, including credit card authorization objects, catalog objects, and inventory
objects.
IBM is offering a solution for linking IBM PC Servers to the Internet that will
enable customers to manage LANs through the Internet, from any PC or workstation
equipped with a Web browser. The solution will permit the management of remote locations
around the world, while also permitting the administrator to perform management tasks
from any desktop. This function is included in IBM's PC SystemView 4.0 systems management
software.
The Web is rapidly emerging as a tool to make networks more powerful. This popular
part of the Internet is an effective way to make information readily available,
both internally and externally. IBM has made a commitment to Web technology with
its MVS Web Server, which can enable a mainframe to be used as a Web site.
(IBM is also planning a similar access tool for the AS/400.) Lotus Development Corp.,
now an IBM subsidiary, also has a product to incorporate the Web in internetworks.
The InterNotes Web Publisher permits a Lotus Notes database to be published
and accessed over the Web.
Standardizing on the Web for internal publishing addresses many network limitations
and compatibility problems. The Web is the easiest way available for enabling Macintoshes,
UNIX workstations, and Intel-based PCs to share information. Anyone can create a
page in HTML from any platform, which can then be made available to anyone with a
Web browser, regardless of operating system or hardware.