The people, places, and things with which you come in contact each and every day are the objects of your life. You have a transportation object (your car), a companion object (your spouse, children, or pet), and a number of other objects that you interact with throughout the day. This is not a suggestion that you are living a mechanical life, but rather that you can express the relationships between yourself and those things around you by thinking about the attributes that define those objects. It is this set of specific, meaningful attributes that lets you differentiate between the kitchen chair and the sofa. Both provide you a place to sit, but each has its own specific function within your life.
Abstracting the essence of real-world objects, events, and processes and then creating a road map or blueprint of that occurrence is the rationale behind object-oriented development. This chapter examines objects, their attributes, and their relationships to other objects. By understanding the pieces of the underlying technologies (OLE, ActiveX) and how each fits into the Active Server Pages environment, you will become a more proficient and educated developer.
Understand important keys to extensibility within Active Server Pages by examining objects and components.
Find out how component technology solves many problems faced by traditional developers.
When you create objects within ASP, you are creating OLE or ActiveX components. Understanding OLE and how it works lets you leverage its features in your ASP development.
From transactions to distributed objects, you will take a quick look at the future of distributed processing.
You will see object and component used somewhat interchangeably within this chapter. When you hear component, you can think object, but with one main difference: The component is always packaged separately from the application, as a dynamic link library or as a COM or ActiveX object. This provides a number of benefits that will be examined in the section "The Component Object Model."
In the aftermath of the Second World War, the United States was the world's provider of choice for goods and services. Shortly after that time, a man named Dr. W. Edwards Deming spoke of a new concept in manufacturing: total quality management (TQM). At the time, not many U.S. companies were interested in TQM. They already were the world's first and best supplier. So Dr. Deming took his message to the Japanese, who quickly took his teachings to heart. Now, some 50 years later, that country is beating most others in quality production at nearly every turn.
In the past ten years, the idea of total quality management was revived in the offices of corporate America. What corporations had once spurned, they now embraced. Out of this new focus on quality, a number of methods and techniques were developed to examine problem processes within an organization and ferret out the root causes. The next steps involved process redesign and, in many cases, process automation. The developer was given a process map that showed the new process to be automated, and more specifically, the way the data flowed through the process. Many developers then used this as a road map to develop the new application. The problem with the result was that it was a data-centric, not a process-centric, design.
For the developer, this road map was a godsend, giving him a step-by-step, data-driven guide to developing the system. Many systems continue to be developed in this manner today. There are, however, a number of issues that arise from this traditional, structured application-development methodology.
Working from a data-driven process map, the developer tends to focus on creating functions that let the data flow as it does in the map. This is a great way to implement the application, based on the process flows. In reality, however, most processes that are reengineered are changed again (tweaked) just before or shortly after implementation. So a step that was in the beginning of the process might be moved to the middle, and then a few weeks later, it might be moved to the end and then back to the beginning. Adapting the procedural, data-based application to these changes is a major effort, and in most cases, the application cannot adapt to the requested modifications.
The ability to rapidly change an application in a changing environment is a challenge that faces every developer. As a solution to this issue, many development shops have moved to object-oriented design and development. Traditional application development involves using a structured methodology, looking at top-down functional decomposition of the processes involved and their associated systems. When you use data flow diagrams and structure charts, the processes are identified, as is the data that moves through the processes.
Object-oriented development is a methodology that strives to decrease the complexity of a problem by breaking the problem into discrete parts, which then manifest as objects. The objects within this problem domain are then discovered and abstracted to a level where the inherent complexity of the real-world object is removed. What you are left with is some number of objects that have a state (data members) and that provide services (methods) for other objects within the domain. The nice thing about encapsulating functionality within objects is that they are self-sustaining units. If a process step is changed within a flow, there is no need to change the object itself, just its place within the program.
As new requirements are added, new functionality can be easily added to the object. Even better, when a new application is required, existing objects can be used in the new development, either directly, through combination, or through inheritance, all of which you will learn about in this chapter. Even though there is no support for object-oriented development per se using VBScript, many of the lessons learned about the value of code reuse and encapsulation (data hiding) can be applied in your ASP development.
To you, the Active Server Pages developer, an object or component is a prebuilt piece of functionality that you can immediately integrate in your scripts. These include components such as database connectivity, interaction with a host environment, and a number of other functions that you cannot perform through scripting alone. By understanding the principles that drive the component implementation, you will be better able to leverage components' use in your development.
At its most basic level, an object is an instantiation of a class. A class is a blueprint for the creation of an object and the services that reside within it. The class describes the state of the object using data members (private) and provides services (member functions or methods) that are available to owners of the object (those that are public members) and to the object itself for internal use (non-public members: protected or private). The class also may be related to other classes through inheritance or composition.
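To make the distinction concrete, here is a minimal C++ sketch; the BankAccount class and its members are hypothetical examples, not part of ASP or COM. The class is the blueprint, and the variable declared from it is the object, with its own state and public services.

#include <iostream>

// The class is the blueprint: private data members hold the state,
// public member functions (methods) are the services offered to owners of the object.
class BankAccount
{
public:
    BankAccount(double opening) : balance(opening) {}
    void   Deposit(double amount) { balance += amount; }   // public service
    double Balance() const        { return balance; }      // public service
private:
    double balance;     // state, hidden from users of the object
};

int main()
{
    BankAccount checking(100.0);    // 'checking' is an object: an instantiation of the class
    checking.Deposit(50.0);
    std::cout << checking.Balance() << std::endl;   // prints 150
    return 0;
}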
Which came first, the object or the class? The question certainly falls into the chicken or the egg category. Who has the time or energy to try and figure that one out?
When you begin trying to identify objects within your problem domain, you are struck by the complexity of the world in which you live. Abstractions are a useful way of reducing your environment's complexity into manageable pieces. When you use abstraction, you pick out the most important elements or properties of an object that let you view the object from a higher place, a reduced complexity.
If you look at a piece of paper under a microscope at high magnification, you see millions of fibers, intertwining with no discernible pattern. As you lessen the magnification, the fibers begin to run together. Eventually, you are back to no magnification at all (looking at the page on a table, perhaps), and the fibers within the paper are abstracted to such a level that they become insignificant. For understanding the use of paper in a printer or to write on, the microscopic level has been abstracted, so the piece of paper (object) can be understood and utilized. The more an object is abstracted, the smaller the number of intricate details involved. Of course, you can abstract something too much, to a point where you lose the essence of the object. As a developer, you will determine the level of abstraction that will enable you to integrate the object into your code.
As you abstract objects, you identify those attributes that are essential to understanding the object. Once the attributes are identified, you can move into the services or member functions that operate to manipulate the object attributes that were abstracted.
As your applications begin to interact with the objects in your Active Server Pages scripts, you will set properties and call methods of those objects. All your interactions with those objects take place when you access them through a well-defined set of functions, or a public interface. When you execute a method of an object, you don't need to know how the object will perform its duties; you just need the object to get the job done.
The idea of having a public interface and private implementation is one of the key OO (Object-oriented) concepts. You may have heard this concept referred to as data hiding; another name for this technique is encapsulation. In essence, all the implementation details, all the internal variables that are created and used, and all the internal support functions are encapsulated within the object. The only view into the object that you as a user of the object have is the public interface.
Encapsulation provides a number of benefits. First, you don't need to know how the requested service is implemented, only that the object performs the requested service. The second benefit is that the underlying implementation of the services that the object provides can change without the public interface changing. The client (calling procedure) often is totally unaware that the implementation of the function has changed at all, because it never has access to that part of the object.
Many people refer to this type of system as a black box interface, and this is a fair analogy. Imagine that you are creating a transaction program to interface with a legacy database system. You will define a set of transactions that the black box, or object, will accept, and then it will return the result set to your client application. Initially, the communication between the black box and the legacy system will be performed using LU6.2 communications. A few months later, the protocol changes to TCP/IP. Your client application is never aware of the protocol that the black box is implementing, and the change does not affect the client app at all, because the public interface (the transaction set) does not change. This is the benefit of encapsulation: the public interface remains constant, and the implementation can be changed without affecting the object's users.
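A small C++ sketch of that black-box idea follows; the gateway classes and their protocol strings are hypothetical, and the protocols are only simulated here. The client is written against the public interface alone, so the implementation behind it can be swapped from LU6.2 to TCP/IP without the client changing.

#include <iostream>
#include <string>

// The public interface: the only view the client ever has of the object.
class TransactionGateway
{
public:
    virtual ~TransactionGateway() {}
    virtual std::string Submit(const std::string &txn) = 0;    // the fixed, public contract
};

// First implementation: talks to the legacy system over LU6.2 (simulated).
class Lu62Gateway : public TransactionGateway
{
public:
    virtual std::string Submit(const std::string &txn) { return "LU6.2 result for " + txn; }
};

// Later implementation: same public interface, now over TCP/IP (simulated).
class TcpIpGateway : public TransactionGateway
{
public:
    virtual std::string Submit(const std::string &txn) { return "TCP/IP result for " + txn; }
};

// The client knows only the interface, never the protocol behind it.
void RunInquiry(TransactionGateway &gateway)
{
    std::cout << gateway.Submit("BALANCE-INQUIRY") << std::endl;
}

int main()
{
    Lu62Gateway  oldBox;
    TcpIpGateway newBox;
    RunInquiry(oldBox);     // same client call...
    RunInquiry(newBox);     // ...regardless of which implementation sits behind the interface
    return 0;
}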
The only thing you need to know to understand inheritance is how to use "kind of" in a sentence, as in "a bicycle is a kind of vehicle." The whole idea behind inheritance is that you create a class of a base type (say, vehicle) and then derive a new class from the base class (bicycle). The neat thing about inheritance is that all the functionality residing in the base class is available to the derived class. Those functions unique to the derived class are then implemented within the new class, called the subclass. There is also the opportunity to derive another new class, say Huffy, from the bicycle class. The Huffy class will again have all the methods of each of the classes it is derived from. This is called single inheritance: a subclass is derived from only one base class.
In the preceding example, the vehicle base class has a number of functions that include things like starting, stopping, and turning. All the vehicles derived from the base class (bicycle, car, motorcycle, boat) have the methods of the base class, but their implementation is different. To turn right in the car class, the steering wheel is turned. On a bicycle, the handlebars are moved. I think you get the idea.
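A brief sketch of that hierarchy in C++ (the class names follow the example above and are purely illustrative) shows each subclass inheriting the base interface while supplying its own implementation of TurnRight; these are also the declarations assumed by Listing 5.1 later in this section.

#include <iostream>

class Vehicle
{
public:
    virtual ~Vehicle() {}
    virtual void TurnRight() { std::cout << "Vehicle turns right" << std::endl; }   // base behavior
    virtual void Stop()      { std::cout << "Vehicle stops" << std::endl; }
};

class Car : public Vehicle
{
public:
    virtual void TurnRight() { std::cout << "Turn the steering wheel clockwise" << std::endl; }
};

class Bicycle : public Vehicle
{
public:
    virtual void TurnRight() { std::cout << "Move the handlebars to the right" << std::endl; }
};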
In another case, you can derive an object from more than one base class. This is called multiple inheritance. Say you are creating an object that will provide a visual interface for the abstraction of a document scanner. You can derive a ScanView class from a ViewWindow class and a ScannerControl class.
You might be saying to yourself, so what? Why do I need the base class, when many of the functions in the subclasses are implemented differently anyway? Well, the answer is illuminating. Polymorphism lets you use a variable of a base class type to reference any of the classes derived from that base type. This means that you can have a procedure that accepts as a parameter an object variable declared as a vehicle. Then you can call the procedure passing any of the subclasses of vehicle that have been derived-boat, car, and so on. The great thing is that when you say objectVar.TurnRight() within the procedure, the appropriate method within the subclass is invoked. As you can see, this is unbelievably powerful.
Your application can be controlling any type of vehicle, turning left or right, starting, or stopping, regardless of the class of vehicle it is controlling. But just as important, each time you create a new abstraction of a vehicle, you don't need to start from scratch. All the basic functions already have been defined for you. Those methods that need additional implementation code are all that you have to add. Notice in the following code, Listing 5.1, that the procedure takes as a parameter a pointer to a vehicle.
Listing 5.1 DRIVE.C-A sample of using polymorphism within a procedure
void Drive(Vehicle *veh)
{
    veh->TurnRight();   // will invoke the TurnRight method of
                        // the Bicycle class, not the Vehicle class
}

Bicycle bike;
Drive(&bike);
When the method TurnRight is called from within the Drive procedure, the correct method, within the subclass, is called.
Here's another example. You are working for the zoo and are in the midst of creating an audio application that will reproduce the sounds of all the zoo's animals. You create a base class called Animal, with an associated member function Speak. Now, you derive each of your subclasses-goat, bird, etc. Back in the application proper, the user selects an animal sound to hear. An instance of that animal subclass is created and passed as an Animal. Then, when the Speak method is called, the appropriate animal sound is played.
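In C++, that zoo example might look something like the following sketch (Animal, Goat, and Bird are illustrative class names, not part of any ASP component). The selected animal is held through an Animal pointer, and the binding to the correct Speak happens at run time.

#include <iostream>

class Animal
{
public:
    virtual ~Animal() {}
    virtual void Speak() = 0;       // every animal must supply its own sound
};

class Goat : public Animal
{
public:
    virtual void Speak() { std::cout << "Baaah" << std::endl; }
};

class Bird : public Animal
{
public:
    virtual void Speak() { std::cout << "Tweet" << std::endl; }
};

int main()
{
    Goat goat;
    Bird bird;
    Animal *selected = &bird;   // the user's selection, passed around as a generic Animal
    selected->Speak();          // dynamic binding: the Bird version of Speak plays
    return 0;
}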
To enable implementation of polymorphism with your classes and objects, the object variable veh within the Drive function, in Listing 5.1, must be associated with the actual Bicycle object at runtime. In effect, the object is sent the "turn right" message and figures out how to implement it. This ability to assign the object type during the program's execution is called dynamic binding and is the method through which polymorphism is executed.
Static binding, on the other hand, is what happens when all object references are resolved at compile time. During static binding, the object associated with any object variable within the application is set by the object declaration. As you can see, dynamic binding is eminently more powerful-and required-within an OO environment.
In the section "Understanding Inheritance", inheritance was expressed as a "kind of" relationship. Classes created through composition are expressed through a "part of" relationship. For example, a car is a type of vehicle, but a vehicle is not a part of a car. When you use composition to create a class, you are finding those classes that are a part of the class that you are building. An engine is a part of a car. Wheels are a part of a car. A windshield is a part of a car. You can create a car class composed of an engine, wheels, a windshield, and any other parts that are appropriate to the car class.
The car class will be derived from vehicle, but those other "parts" of the car class, the engine object, the wheel object, and so on will become private member variables of the car class. Listing 5.2 shows a class definition for our hypothetical Auto class.
Listing 5.2 AUTO.HPP-Defining the Auto class, using inheritance and composition
class Auto : public Vehicle    // inheritance from the Vehicle class
{
public:
    Auto();
    ~Auto();

private:
    // composition of objects within the class
    Engine engine;    // engine object as a private member variable
    Wheels wheels;    // wheels object as a private member variable
};
Why all the fuss about object-oriented development? The most important feature that OO provides is the capability to create new applications from existing code without introducing new bugs. This is not to say that there will be no bugs within your implementation, just that there should be no bugs within these production-hardened objects that you are going to use.
This capability to reuse proven, production-ready code is one of the main forces driving the OO movement into corporate development. As the business environment continues to become more complex, developers need to be able to quickly represent that complexity within the systems they build. In addition, the rapid changes in the current business environment demand systems that can be modified easily, without having to recompile an entire application each time a change is made. If your system is developed using component objects, when the objects are enhanced, the client will not need to be changed.
The object-oriented methods of inheritance, composition, and polymorphism are not implemented in VBScript within the ASP environment. Nevertheless, you can take the overriding principle of OO development to heart as you develop ASP applications. That principle is reuse of existing, production-ready code. You can create libraries of functions that you can include within your Active Server Pages applications. You will also be able to embed the functionality of ASP component objects and other third-party components within the functions that reside in your library.
See "Examining Procedures" for information about creating libraries of functions to include in your Active Server Pages applications, in Chapter 9.
There are a number of prebuilt components that ship with Active Server Pages. If you had to reproduce the functionality of each of the components, either in native scripting or by creating your own components in Visual Basic or Visual C++, you would expend a considerable amount of time and money. The wonderful thing about components is that they give you innumerable choices for implementing solutions to a particular problem.
We've known (actually, still know) developers who have a particularly disturbing disorder. This disorder has been known by many names over the years, but we tend to refer to it as NBH Syndrome, for "not built here." Anything that these developers did not create within their development shop is no good, no how. True, they have created some exciting applications over the years, but the time they took to create them could have been cut by at least half had they integrated other development groups' code into their own.
The same is true of components. It is easy to say "Sure, I'll build it myself. How long could it take?" Many have fallen into this trap. One good example of a build/buy component decision that often comes to mind is the ubiquitous calendar control. This is a user interface component that lets you select a date from a calendar by clicking a calendar graphic. There are hundreds of applications that require this type of functionality. Although it is not an overwhelming project to design and build a calendar component, why should you bother? There are numerous calendar components available out there in the market. They have been tested and proven in production. Why waste time implementing an object that is available in the market? You have business process expertise. You understand the specific business logic that rules the process. Put your development time to the best use by implementing the business processes. Don't waste time reinventing the wheel. In the build versus buy decision, remember that a development project's success is determined by how well an application meets a business need, not by who built a particular component that is part of an application.
We'll hop off the soapbox now. We were getting a little light-headed up there, anyway.
The history of the Component Object Model (COM) follows somewhat the history of Windows and the applications created for use on the system. In the early days of the Windows environment, the need for users to share data across applications was paramount. The capability to copy and paste data between applications using the Clipboard metaphor was the first step. In the late eighties, Microsoft implemented the dynamic data exchange (DDE) protocol to provide this Clipboard functionality in a more dynamic fashion. The only problem was that this new dynamic implementation was quirky, slow, and somewhat unreliable.
By 1991, DDE effectively was replaced by a new technology called object linking and embedding, or OLE 1.0. The new OLE enabled applications to share data or to link objects, with the linked data remaining in the format of the application that created it. When you embedded or linked objects, they would show up within the client application. When the linked data needed to be edited, the object would be double-clicked by the user, and the application that created the base data would be started.
As nice as OLE 1.0 was, it still was a far cry from the easy application integration promised in Microsoft's "Information at Your Fingertips." From this point, Microsoft came to the conclusion that the only way to provide truly seamless object integration was to create little pieces of functionality that could be plugged from one application into another to provide specific services. From this was born the idea of component objects, objects that could provide services to other client applications. The Component Object Model (COM) specification came out of this desire to create a standard for component objects.
COM, as implemented with OLE 2.0 in 1993, became more than just a specification for component objects. COM supports a vast range of services that let components interact at many levels within the application environment. The same service that provides local processing can be invoked on a remote machine to provide similar services, all of which are transparent to the user.
As Microsoft moved from DDE to OLE 1.0 and finally to the component model specification, a number of design goals guided the company in the development of COM and OLE. This set of functionality was derived partly from the history of creating monolithic, complex applications, but more so from the ongoing maintenance and inevitable changes that an evolving environment demands of any system.
To create a truly generic interface architecture, the model was designed with a handful of goals in mind: language-independent, binary-level interoperability between components; the ability to extend or revise a component without breaking, or recompiling, the clients that already use it; transparent access to services wherever they happen to reside; and the reuse of proven components across applications.
All of these design goals boil down to providing developers with the tools to create dynamic, flexible applications that are quick to create and easy to maintain. As business processes continue to become more complex and require increasing levels of adaptability, the old monolithic development architecture is breaking under the weight of the changes. In traditional development, when one part of an implementation within a system changes, the entire application needs to be recompiled to ensure that all references to functions are correct. The need to provide dynamic changes, without new versions of applications having to be compiled, is another central goal of the component model.
To support larger applications and distributed data, client applications must be able to access appropriate services, wherever they reside, to fulfill user requests. Once again, if a service resides on another machine, across the hall, or across the continent, the client must not be aware of the difference.
As corporations move toward improving quality within their organizations, every process is being looked at and, where appropriate, redesigned. The requirement for new applications continues to outpace information systems' capability to keep up. By creating new applications from proven, existing components, new applications can be built more quickly and more reliably. As improvements are made to base components and rolled into production, each new application can immediately benefit from the new refinements, while existing applications will not break.
COM is an object-based model, a specification and implementation that defines the interface between objects within a system. An object that conforms to the COM specification is considered a COM object. COM is a service to connect a client application to an object and its associated services. When the connection is established, COM drops out of the picture. It provides a standard method of finding and instantiating (creating) objects, and for the communication between the client and the component.
Under COM, the method of bringing client and object together is independent of the programming language that created the application or the object, and independent of the application itself. COM provides a binary interoperability standard rather than a language-based standard. It helps ensure that applications and objects created by different providers, often written in different languages, can interoperate. As long as the objects support the standard COM interfaces and methods for data exchange, the implementation details within the component itself are irrelevant to the client.
Client applications interact with components through a common collection of function calls known as interfaces. An interface is a public agreement between a service provider and a service requester about how to communicate. The interface defines only the calling syntax and the expected return values for its member functions. There is no definition of, or even a hint about, how the services actually are implemented by the service provider object. The interfaces available within an object are made known through its IUnknown interface, which in turn makes them available to client applications.
A few key points help clarify what a COM interface is and is not: an interface is not a class, and it is not an object in its own right. It is a strongly typed contract between the client and the component, and once published, it never changes.
All COM objects must implement a generic interface known as IUnknown. This is the base interface of a COM object; the client uses it to-among other things-control the lifetime of the object that is being instantiated. It is the first interface pointer returned to the client. To find out what additional interfaces are supported by the object, the QueryInterface method of IUnknown is called using the initial IUnknown pointer. QueryInterface is called with a requested interface and returns a pointer to the new interface if it is implemented within the object.
QueryInterface must be implemented in all COM objects to support adding additional functionality to objects, without breaking existing applications expecting the original object; in effect, not requiring the client application to be recompiled. Through use of the QueryInterface, objects can simultaneously support multiple interfaces.
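From a C++ client, the dance looks roughly like the following sketch. CoCreateInstance, QueryInterface, and Release are the actual COM calls; the ICalendar interface and the CLSID and IID values are hypothetical placeholders standing in for a real component's published identifiers.

#include <objbase.h>

// Hypothetical identifiers and interface; a real component publishes its own.
static const CLSID CLSID_Calendar =
    {0x11111111, 0x1111, 0x1111, {0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11}};
static const IID IID_ICalendar =
    {0x22222222, 0x2222, 0x2222, {0x22, 0x22, 0x22, 0x22, 0x22, 0x22, 0x22, 0x22}};

struct ICalendar : public IUnknown
{
    virtual HRESULT STDMETHODCALLTYPE SelectDate(long year, long month, long day) = 0;
};

void UseCalendar()
{
    CoInitialize(NULL);

    IUnknown *pUnk = NULL;
    // Ask COM to locate the server, create the object, and hand back IUnknown first.
    HRESULT hr = CoCreateInstance(CLSID_Calendar, NULL, CLSCTX_ALL,
                                  IID_IUnknown, (void **)&pUnk);
    if (SUCCEEDED(hr))
    {
        ICalendar *pCal = NULL;
        // Does the object also support ICalendar? QueryInterface answers yes or no.
        hr = pUnk->QueryInterface(IID_ICalendar, (void **)&pCal);
        if (SUCCEEDED(hr))
        {
            pCal->SelectDate(1997, 5, 1);   // use the new interface...
            pCal->Release();                // ...and release it when done
        }
        pUnk->Release();                    // done with the object itself
    }
    CoUninitialize();
}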
In Figure 5.1, you can see an example of how the interfaces supported by an object can grow over time, as well as how new interfaces don't break existing applications. In the top pane, you can see that the first version of the client is connected to the component's interface A. Later, a second version of the client also uses interface A. In the second pane, when the component is modified to add a new interface, the new client takes advantage of the newer functionality. Notice that the original client still is fully functional and using the original interface of the object. Powerful stuff, huh?
An object's existing interfaces never change; new functionality is exposed by adding new interfaces.
Using a naming convention to ensure that all functions have unique names within an application is a perfectly viable solution to the name collision problem; any collisions within modules are caught by the compiler or linker at build time. In the object universe, however, where an object can live on a local computer or a remote host, the number of opportunities for getting the wrong object increases exponentially. To make sure that the correct object always is instantiated, COM uses globally unique identifiers (GUIDs).
Globally unique identifiers provide a method to ensure that each object residing on a system has an ID that identifies it unambiguously. GUIDs are 128-bit integers generated by an algorithm that virtually guarantees they are unique at any given place and time in the world; the inputs to the generating function include the machine's network-adapter address and the date and time at which the function is called.
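If you want to see one of these identifiers being minted, the Win32 API exposes the generator directly. The short sketch below uses the real CoCreateGuid and StringFromGUID2 calls (link against ole32.lib) to create and print a fresh GUID.

#include <objbase.h>
#include <stdio.h>

int main()
{
    GUID    guid;
    wchar_t text[64];

    // Ask COM for a brand-new, globally unique 128-bit identifier...
    if (SUCCEEDED(CoCreateGuid(&guid)))
    {
        // ...and format it in the familiar {xxxxxxxx-xxxx-...} string form.
        StringFromGUID2(guid, text, 64);
        wprintf(L"%ls\n", text);
    }
    return 0;
}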
A COM server is a piece of code that the COM service locator can find and load so that the classes residing within the server can be instantiated. Servers can be implemented as dynamic-link libraries (DLLs) or as executables (.EXE).
The server must implement a class factory (IClassFactory) interface for each class it exposes. The class factory is responsible for creating the object for the client. The general graphical syntax for expressing interfaces within servers is to portray an interface for an object as a socket or plug-in jack (a circle, sometimes shaded). The known interfaces are defined on the right or left side of the object, with the IUnknown interface coming out of the top of the server. Given this representation, Figure 5.2 shows the structure of a COM Server.
Figure 5.2 A graphical illustration of the structure of a COM server.
A server is implemented in relation to the context of the client using it. A server executes either in-process, meaning within the address space of the client, or out-of-process, meaning in another process on the same or a different machine. This breaks down into three conceptual server types: an in-process server, which runs as a DLL inside the client's own process; a local server, which runs in a separate process on the same machine; and a remote server, which runs in a process on a different machine altogether.
During this discussion of COM Servers, think of them in terms of the objects they create instead of as the server itself. As far as the client knows, all of the objects are accessed through the function pointer to the object, whether in-process or out-of-process, on the same machine or a different one.
Because all calls are made through function pointers, as far as the client is concerned every COM object it uses is accessed in-process. If the object really is in-process, the client connects directly to it. If the object lives in another process, or on another machine, the client actually calls a "proxy" object that COM creates in the client's own process; the proxy forwards the call, via a remote procedure call (RPC), to a stub in the object's process, which then invokes the real object. The net result is that through COM, the client and the server each believe they are dealing with in-process calls. This transparent access mechanism is illustrated in Figure 5.3.
Figure 5.3 The client and server have location transparency within the COM model.
COM itself does not support the traditional mechanism of inheritance and, for this reason, is considered by many object purists to be of little value. Within the framework of object-oriented development, the reuse mechanisms available under COM are containment/delegation and aggregation. Although they are not implementations of true inheritance based upon the definitions found in the section "Understanding Inheritance," they do provide a framework for reaping some of the same reuse benefits.
In containment/delegation, there are two objects participating to fulfill a service request. One object, the outer object, becomes a client of the second object, the inner object. In effect, it passes the service call or reissues the method, from itself to the inner object to be fulfilled. The client of the outer object is not aware of this handoff, so encapsulation requirements are fulfilled in containment/delegation. The outer object, being a client of the inner object, accesses its services only through the COM interface, just like a normal client. The outer object delegates the fulfillment of the service to the inner object. So, although inheritance is not explicitly supported in COM, the capability to call the methods of the contained object provides similar functionality to calling the method of a base class from a subclassed object.
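Stripped of the COM plumbing, containment/delegation looks something like this simplified C++ sketch (Document and SpellChecker are hypothetical names). The outer object exposes the service to its own clients but quietly hands the work to the inner object it contains.

#include <iostream>

// The inner object: it does the real work.
class SpellChecker
{
public:
    bool Check(const char *word)
    {
        std::cout << "Inner object checking: " << word << std::endl;
        return true;
    }
};

// The outer object: its clients see only its own interface.
class Document
{
public:
    bool CheckSpelling(const char *word)
    {
        return checker.Check(word);     // delegation: the call is reissued to the inner object
    }
private:
    SpellChecker checker;               // containment: the outer object is the inner object's only client
};

int main()
{
    Document doc;
    doc.CheckSpelling("encapsulation"); // the caller never knows an inner object exists
    return 0;
}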
When aggregation is used, the interfaces of the inner object are again exposed to the client application via IUnknown, but in this case the client interacts with the inner object directly, through an interface pointer to the inner object that is returned to it.
There are two basic operations that all COM objects must support. The first, exposing and navigating between the interfaces provided by an object, was covered in the discussion of IUnknown and the QueryInterface member function in the section "COM Interfaces." The second is a method to control an object's lifetime.
Speaking of an object's lifetime when that object is an in-process server is very different from discussing an object's lifetime in a heterogeneous, distributed environment. Because COM objects support multiple instances, there must be a way to ensure that all clients have completed their use of an object before it is destroyed. This is handled by using reference counting within the COM object.
During Active Server Pages development, you create objects and then release them when they are no longer needed. In C++ development, any memory that you dynamically allocate must be freed. If the memory associated with an object is not freed when all the object's users are done using it, you have what is called a memory leak. Now, if the memory taken up by the leak is only 1K per object, and you create one object a day, no big deal. But if you are in a transaction environment where you might perform thousands of transactions per hour, those leaks add up fast!
The same care that you take to free any objects you create in ASP is handled for objects within the environment by COM itself. Each COM object must implement the two IUnknown functions AddRef and Release; together they fulfill the reference-counting specification that manages the object's life. Each time a client obtains or copies an interface pointer to the object, the reference count is incremented; it is decremented each time a client calls Release. When the reference count eventually returns to zero, the Release function destroys the object.
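The reference-counting pattern itself is small. The sketch below shows the idea as a plain C++ class rather than a full COM object; the AddRef and Release names mirror the IUnknown functions, but the class is purely illustrative.

#include <iostream>

class RefCounted
{
public:
    RefCounted() : refs(0) {}

    unsigned long AddRef()
    {
        return ++refs;                  // another client now holds a pointer to the object
    }

    unsigned long Release()
    {
        unsigned long remaining = --refs;
        if (remaining == 0)
            delete this;                // the last client is done: the object destroys itself
        return remaining;
    }

private:
    ~RefCounted() {}                    // private: only Release may destroy the object
    unsigned long refs;
};

int main()
{
    RefCounted *obj = new RefCounted;
    obj->AddRef();      // first client takes a reference
    obj->AddRef();      // a second client takes a reference
    obj->Release();     // first client finishes; the object survives
    obj->Release();     // second client finishes; the object deletes itself
    return 0;
}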
Over the years, many developers have been confused as OLE 1.0 came on the scene, followed by COM, and then OLE 2.0-which was totally different from OLE 1.0. There also has been much confusion over COM versus OLE-are they the same thing? Can one exist without the other?
OLE is a number of services and specifications that sit on top of the basic COM architecture and COM services. OLE version 2.0 is the first implementation of this extended COM specification.
As an Active Server Pages developer, you will be interested primarily in the custom services supported through the OLE specification. They include such services as OLE Documents, OLE Controls, OLE Automation, and drag and drop.
OLE acts as a facilitator for the integration of component objects into applications and into a system itself. OLE through COM provides an open, widely supported specification to enable developers to create component software. In the real world, the distinction between COM and OLE has become cloudy. Just remember that OLE is drawing on all of the basic object services that COM provides to standardize the next level within component development.
The movement away from monolithic computer architectures began with the coming of the first client/server revolution. There were a number of reasons why information technology managers were so enamored with the multi-tier architecture that client/server proposed. First, there was a logical division of work between the client, the business logic, and the database back end. The client would be responsible for data input and front-end validation, as well as the graphical user interface and display. The business logic tier would handle the process-specific validation and calculation and send and receive the appropriate data to/from the database server.
Breaking the application down into these logical pieces provided a number of benefits for the organization. First, as each tier was created, it could be tested independently. This made debugging much easier. Also, as the pieces were put together, the appropriate hardware could be selected for each tier. The capability to scale the application by splitting out processing across multiple machines also was a boon to rapidly growing enterprises.
As new applications were developed, the existing middle and back-end services often could be reused, again enhancing the speed at which new systems could be implemented. There were, however, a number of challenges faced in developing and managing these systems. The mechanisms for the tiers to interact (protocol, transaction format, and so on) often were proprietary and specific for a certain type of operating system. It was difficult-and often not worth the time and effort-to move pieces of functionality between the tiers when it made sense to do so.
For example, say that a key piece of data validation, called hundreds of times per client session, is bringing the application to its knees because of the network traffic required to reach the remote tier. Ideally, you just pick up this validation functionality and place it closer to the clients, or even on the clients themselves. Sadly, in that environment, this requires recompiling all the client code and changing the interface to the validation routines.
In an effort to leverage the multi-tier architecture that makes client/server computing so attractive, as well as to deal with the problems encountered during its use, Microsoft created the DCOM specification.
To address the concerns and problems associated with traditional program development on the desktop, the COM specification was created. As you learned in the section "The Component Object Model," COM provides the architecture and mechanisms to create binary-compatible components that can be shared between applications. The goal of distributed computing in a component-based environment is to ensure that the components working on a local machine look the same to the client as on a remote machine. Distributed COM (DCOM) is built upon the foundations of COM, and just as OLE provides a new level of services for the desktop on top of COM, DCOM extends COM's component architecture across the enterprise.
One of DCOM's main goals is to provide to the enterprise the same reduction in complexity that COM and OLE provide in desktop development. For developers, this means not having to learn a new set of interfaces, because COM and DCOM share the same component model. The open, cross-platform support of COM and DCOM provides the mechanism to let objects reside anywhere within the enterprise, transparently accessible from the desktop or other component servers.
The location of the component providing the service must be transparent to the local machine. For example, consider the case of a client accessing a remote database. You can provide a service on the local machine that will connect to the remote database and then execute queries against that database. In this scenario, there are a number of things of which the client must be aware.
First, the client needs to know where the physical data resides. Second, the client needs a connection to the database. In most cases, this connection is permanent for the duration of the query session. This might not be a big concern when only one or two users are concurrently accessing the database, but when the application scales, hundreds or even thousands of connections can be a huge overhead burden on the database server. Finally, the client must "speak the language" of the connection that provides the link between client and database. The majority of a user's time on the client is spent performing local tasks, formatting the data, creating queries, or building charts. The actual time that the database connection is in use usually is very short, yet the server is carrying the overhead of that connection.
Take the example a step further and examine a typical corporate application. In a company that processes mortgages, there are applications with extensive calculations to perform risk analyses on potential customers. In a distributed application built upon component technology, the calculation engine is a component on a server. The component pulls a variety of information from legacy databases and then performs the calculations. As the business environment changes and new Federal laws are introduced, the calculations need to be modified. Due to its basis in COM, the DCOM calculation component is the only object that needs to be modified, leaving the data-entry and processing applications as they are. In working with the database, the DCOM object may maintain five or six concurrent connections to the database, or it might, in turn, connect to another DCOM object that handles all database interaction. In either case, as requests come in, the database service spins off a thread to handle the request on one of the available connections. In this way, hundreds of users can be supported with only a few concurrent connections to the database.
DCOM is created as an extension to the Component Object Model. As stated, COM provides a specification and mechanism to let clients and objects connect to other objects. When COM objects run outside the process of the client, COM provides the interprocess communication methods between the objects in a transparent fashion. When the interprocess communication takes place across a network, DCOM steps in to replace the interprocess communication with a network protocol.
There are a number of benefits to be gained from using DCOM; some are inherited directly from COM, and others emerge when the two technologies are used in concert. A few of the more impressive benefits are outlined next.
The client and the server under COM believe that they are running in the same process. A COM object originally created to perform services on a client desktop can be moved easily to perform the same functions on a server. DCOM provides the transparent access between the objects running on different machines. This lets the objects live in relation to each other in the most appropriate place, and as that place changes over time, they can be moved quickly to other locations.
DCOM has effectively inherited all of the benefits of the binary compatibility specification of COM. As new COM objects are created in a variety of languages, they can be scaled immediately for distributed performance under DCOM.
When the same system can support one user or hundreds of users, it is said to be scalable. Scalability refers to the ability to scale, that is, to increase or decrease the number of users on the system, without affecting the application's performance.
Say that you have just created a new COM component-based application that includes a user-interface component, a business-logic component, a transaction manager, and a database-services component. This is amazing for a number of reasons, the first of which is that you currently are unemployed. Second, all of the COM components are running on the same machine. Next week, you decide to start your own business in the garage, and your application is up and running. Business picks up, and you hire four people to use the new system to keep up with demand. To allow all the new workers to use the same database, you move the (now shared) database access component to another server (which happens to be where the database lives). You don't have to change the client application at all (location independence). A few months later, business takes off further, and you find that all your business logic-which is quite complex-takes forever to process on your client machines. You go out, purchase a nice SMP box, and move the logic component to that box. Once again, there is no need to recompile any code. You just change the DCOM configuration on the clients, and voilà! You are back in business.
This scenario could be followed by your meteoric rise and the phenomenal growth of your company (and the continued scaling of your application)-we think you get the idea. The point to remember is that you can take a component-based application that resides on one box and split it across multiple machines.
You spent this chapter learning about object models, COM, and distributed computing. You now can use this information as you begin to develop applications in the Active Server Pages environment. All of Part III is devoted to examining and showing you how to use all of the components that ship with ASP. Here's what you can look forward to in the chapters on the ASP objects: