Table of Contents
Preface
1 The environment
    Introduction
    1.1 Commitment
    1.2 The business process
    1.3 The business process model
    1.4 Translated business process model
        1.4.1 Fields and business context
    1.5 Implementation model
        1.5.1 An example using validation rules
    1.6 The architecture of design by interface
        1.6.1 Contracts
        1.6.2 Documentation
        1.6.3 Implementing the design
    1.7 Writing and deploying code
    1.8 The implemented business process
2 The mission
    Introduction
    2.1 Maintainability
    2.2 Interoperability
    2.3 Robustness
    2.4 Reusability
    2.5 Extensibility
3 The vision
    Introduction
    3.1 Respect for environmental constraints
    3.2 Layers and iterations
    3.3 Architectural coupling
        3.3.1 Definitions and delineations of coupling
        3.3.2 Processing types and coupling
    3.4 Principles for communication between systems
        3.4.1 Routing and the law of Demeter
            Routing
            Law of Demeter
            Routing and the law
        3.4.2 Exchanging data using the Principle of Privacy
            Your own constraints are private
            Generalize the data sent as much as possible
            Do not use private terms for shared data
        3.4.3 System execution and the Liskov Substitution Principle
            Contracts
            Construction phase
            Execution phase
    3.5 Inversion of Control
        3.5.1 Usage of Inversion of Control
    3.6 The banking example
        3.6.1 The example
        3.6.2 Discussing the example
            UML relationships
            The data models of the banking example
            3.6.2.1 Environmental constraints
            3.6.2.2 Layering and iteration
            3.6.2.3 Coupling
            3.6.2.4 Principles for communication
            3.6.2.5 Inversion of Control
            Design patterns
4 The primary process
    Introduction
    4.1 Definition of a design pattern
        4.1.1 Characteristics of design patterns
        4.1.2 Purposes
        4.1.3 Definition
    4.2 Classification of design patterns
        4.2.1 Pillars of classification
        4.2.2 Description of the effects
    4.3 The classification system
        4.3.1 Transformational patterns
            Memento pattern
            Prototype pattern
            Singleton pattern
            Factory pattern
            Flyweight pattern
            Abstract Factory pattern
            Template pattern
            Bridge pattern
            State pattern
            Decorator pattern
            Object Pool pattern
            Service Locator
            Dependency Injection
        4.3.2 Transportational patterns
            Flow pattern
            Collection handling pattern
            Composite pattern
            Symbolic Proxy pattern
            Publish/Subscribe pattern
            Chain of Responsibility
            Mediator pattern
            Exception handling
            Facade pattern
        4.3.3 Translational patterns
            Observer pattern
            Interpreter pattern
            Visitor pattern
            Builder pattern
            Proxy pattern
            Adapter pattern
            Command pattern
    4.4 Conclusions about the classification system
Preface
I have been working with Java in different roles for some years now. Every time I write or design code I would like to use design patterns. But which design pattern to use, why and when? When reading about those patterns the information seems so easy: just analyse the situation and start using the proper one. But at the design table I get caught by the possible layers of patterns to use. It is not as easy as using just one pattern. Most situations require the use of different patterns at the same time, and then the problem arises which one to use first. Sometimes I started to write the code right away, curious whether in the end I would have used any pattern. All these years of working with Java I had the idea that if I wanted to get to the next level of understanding, I should spend some 'quality' time studying more intensively. I never did, resulting in a working situation in which, little by little, I learned about the essence of each design pattern. I always had the restless, unsatisfied feeling of working in a situation in which I had to look before I could leap. Several months ago I had enough of this pressure, felt the need to spend some time, and decided to dive into the deep: look and leap. I sacrificed all my free time to this project, resulting in this document, which ended up as a booklet of some 90 pages. I hope you enjoy reading it as much as I enjoyed writing it. And when you think it will take a lot of time to read, imagine the time it took to write. I hope that it will give you ideas about how to write and design applications, as it helped me. I am a native speaker of Dutch. Dutch has a lot of resemblances with English, but there are quite some, often subtle, differences in grammar. Therefore it can happen that some sentences, even after I have reviewed the text three times, will falter in the eyes of a native speaker. Is it 'loose coupling' or 'loosely coupling'?
If coupling is viewed as a conjugation of a verb, it is loosely coupling; if coupling is used as a noun, it should be loose coupling. I preferred viewing coupling as a noun, because it lessens the complexity of the structure of a sentence. In Dutch it is good to restrict any sentence to one message. That implies that long sentences are cut into short ones and subclauses are avoided. It will give you the feeling of reading a telegram. I never read texts from native speakers of English who used this kind of style, but when it would otherwise be too complicated for me to express myself using subclauses, I caught myself switching to the recreation of sentences using this typical Dutch solution. Avoiding writing from a personal point of view might be another example of how I am influenced by my Dutch and scientific background. Next to being considered polite, this avoidance gives a freedom of thought which is not possible when I would narrow myself down, staying hitched to my own preferences. My family name is Bergman, which probably means that I had ancestors who worked in the coal mines. This might have set a maximum level on my English: very down to earth, very much like charcoal English. Charcoal English is the English used by Dutch harbor laborers talking to the English on charcoal ships. The best example is the sentence 'I always get my sin', meaning 'I always get what I want'. On the other hand, my first name 'Loek' is pronounced as Luke by native speakers of English. Inspired by that analogy, my English might sound very alien to native speakers, with sentences that land nowhere and will lead you astray. Lacking a good example of charcoal English I will apologize in Yoda style: You me forgive but understand, I do hope.
1 The environment
Introduction
Designing applications does not start directly with the design of the application itself. It starts with the exploration of the environment in which the application resides. An application does not stand on its own. It serves a purpose. The purpose of an application is to streamline the business processes it covers as well as possible. The functional business process is at the core of the application. Without it the application has no reason to exist. Designing applications is all about serving this purpose best. Building an application for an organization asks for a return on investment. This return on investment can be accomplished when the application serves the business process well. So the design of the application starts with understanding the business process and how it relates to an application. If successful, the application will in turn influence how the business process is perceived.
Figure 1: Relationship between business process and an application
The architect does not work on an island, loosely coupled from his environment; he works in a team within an organization. To be able to do his job he must have enough information to start with. That information is realized in several steps. If any part of this information is insufficient, designing an application is like taking a long shot. I will start with describing what has to be in place before the architect can start his work. Writing about architecture cannot be complete without describing these preliminary steps. While describing these steps, some basic concepts are put in perspective, giving the architect further reference points. Before an architect can start with the design of the application, the next requirements must be met:
1. commitment from the business owners and the financial stakeholders to the concretization of the application,
2. the business process in question must be identified,
3. the process must be modeled,
4. the model must be translated into a logical model, and
5. the logical model serves as the basis for an implementation model.
After these steps of preparation:
6. the design of the application can start,
7. code can be written and deployed, after which
8. the end user can use the application as a vehicle for the original business process.
1.1 Commitment
The whole project ends and starts with commitment. During the process of creating or maintaining an application, the crucial stakeholders must stay convinced that the investment is worth it. Not only because during the time this project is executed another project is probably put on hold. Not only because one has to pay for the project. Not only because people have to be set free to help make the application a success. Not only because ... . There are many reasons why an organization would commit itself to the creation of an application. But the only thing that really matters is that this commitment is there and is big enough to support the application through its process of creation and implementation. No commitment, no project, no application. Commitment is about giving trust. Commitment management is explaining and proving that one is still trustworthy, despite the current problems. Having a position on the edge of organization and technique, one has to communicate in two directions. Towards the organization one talks mainly like a technician. In the communication towards the business the major message of any message is the relationship; information about the technique is only the apparent subject. Trying to convince using technical arguments, although correct, might actually lessen the trustworthiness: it can give rise to the thought that the architect is hiding something. Towards the technicians one has to keep in mind what the business wants, and one talks mainly like a representative of the business. Although establishing the relationship will always stay important, the meaningfulness of the content among peers shows the relative expertise and will enhance trustworthiness. For each situation one must have a different communication style to serve commitment to the project.
People live their business process. They are very tightly coupled to these processes, and therefore it is very difficult for people to talk in an abstract way about 'their' process. The information collected will always invite the creation of an application which has too many tightly coupled business entities. It is a common pitfall in describing a business process accurately. For the work of an architect the business process itself should be out of scope. But when having doubts about the presented model to implement, one might have to talk with the people who actually work with the business process. Having no doubts at all might be an even stronger reason to talk. Only if one has a deep and profound knowledge of the actual business process could one skip visiting the people. Otherwise talking with the people who work with it is always a good idea. The business process will come alive. People can show what the business objects currently are, what the reasoning behind some procedures is, and what the critical values and ways of conduct in their jobs are. Knowing that, one can have better insight into the relationships between the business objects and therefore a better understanding of which objects are crucial and what might change in the future.
data depending on the special wishes of the applicant. An applicant may want to work only in the proximity of his house, or he may be looking only within a certain industry. In the second example the flexibility should be in the profiles. Decisions are made by a steadily more experienced recruiter. The recruiter could look in many different ways an applicant would never think of. If a search should exist, a fuzzy search would be useful. It cannot be overestimated how important it is for these statements to be accurate. If the application does not cover the mission and vision of the business process, the application will not fulfill its purpose. The accuracy with which the business process is described in the model sets the maximum result of the application to be built. When the business process model is not described accurately, the application will fail anyway. The most complex situation is when the mission, vision and primary process are actually met quite well, but not precisely enough. Then the application works more or less, but the business process owners will never be really satisfied. There will be a lot of calls for change requests, and maintenance becomes a burden. Meeting the mission and vision reasonably well, but not well enough for practice, means that the change requests cannot be built logically into the application. With every change request the application will grow rapidly in complexity, until a simple change request can become so complex that it is wiser to redesign the application as a whole. A primary process might be constituted of several subprocesses. Each of these processes will have its own mission, vision, input, processing and output, which again have to be reflected in the translated business process and as a result also in the design of the application.
in the Java GUI. For the RDBMS, sending an email falls out of scope; it would not return in that implementation model. In the business process model the language used is the language in which the organization talks. The language of the tBPM has a strong flavor of logic. The language used in the implementation model is close to the grammar of the programming language used. When the implementation would be in PL/SQL, the validation rule would be totally different, as would probably be the signature of the method used. The descriptions of the procedure and the business rule would, however, not be affected by that. From the functional viewpoint the implementation model is more abstract than the tBPM. Roles, for instance, are called by name in the tBPM, whereas in the implementation model the way to use a role is described, not the value of the role. From a technical point of view the implementation model is far more elaborate than the tBPM. Aspects like logging and exception handling are described, whereas in the tBPM they do not exist at all. In the tBPM the feedback to the end user is formalized, telling which feedback will be given to the end user in which situations. In the implementation model this is standardized, and the concrete feedback will be returned at run time. In the implementation model the way to pass the data throughout the application is described, whereas in the tBPM this is out of scope. Authentication and authorization, for instance, are subjects of the implementation model, being out of scope for the tBPM. The concrete results of these processes, however, are prescribed by the tBPM.
pE.size() == 3
and pE.get(0).id <> pE.get(1).id
and pE.get(0).id <> pE.get(2).id
and pE.get(1).id <> pE.get(2).id
and pE.get(0).equals(pR.get(0))
and pE.get(1).equals(pR.get(1))
and pE.get(2).equals(pR.get(2))
end if
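To make the rule concrete, it could be expressed in Java roughly as follows. This is only a sketch: the names pE and pR come from the rule above, while the Entry type, its id field and the method names are assumptions made for the sake of illustration.

```java
import java.util.List;
import java.util.Objects;

// Hypothetical element type: an entry with an id and some payload.
class Entry {
    final int id;
    final String value;

    Entry(int id, String value) {
        this.id = id;
        this.value = value;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Entry)) return false;
        Entry other = (Entry) o;
        return id == other.id && Objects.equals(value, other.value);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, value);
    }
}

class ValidationRule {
    // Valid when pE holds exactly three entries with mutually distinct ids,
    // and each entry equals the entry at the same position in pR.
    static boolean isValid(List<Entry> pE, List<Entry> pR) {
        if (pE.size() != 3 || pR.size() != 3) return false;
        boolean distinctIds = pE.get(0).id != pE.get(1).id
                && pE.get(0).id != pE.get(2).id
                && pE.get(1).id != pE.get(2).id;
        return distinctIds
                && pE.get(0).equals(pR.get(0))
                && pE.get(1).equals(pR.get(1))
                && pE.get(2).equals(pR.get(2));
    }
}
```

Note that the descriptive rule itself is unchanged; only its notation moves closer to the grammar of the implementation language, as described above for the implementation model.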
the underlying technique. Secondly it serves as a contract between the business process owner and the technicians. The business process owner describes which steps have to be fulfilled in which order, and a set of interfaces performs this job. The total of the requirements realized in a set of interfaces should be equal to the total of the requirements stipulated in the business process by its owner. A business process will normally be split up into several interfaces, each performing a subtask of the process. This splitting up into interfaces should be intuitive to the business process owner, as the total will sum up to the business process, and it should be done at the moments in the business process at which new lines of action come up. When a car shop sells a car or leases a car, there might be two interfaces, one for each process. In the end the result will be stored in a database; for that only one interface will be created. A business process will be referenced by an indefinite number of interfaces. The number of interfaces depends on the number of distinguishable steps in the business process. There is no number which describes the ideal number of interfaces to create, or it must be 42 of course. That makes the use of interfaces different from any type of layering as an architectural design. Interfaces serve as the organizing principle for services and aspects. The interfaces organize the separate tasks to be performed, and the services and aspects do that kind of job. This will greatly reduce the complexity of any application platform, as routes through otherwise loosely coupled systems can be traced back. The architecture of layering is a procedural organization principle applied to an object oriented platform. It defines several fixed steps to be created, which is a procedural way of organizing. Organizing loosely coupled systems using the interface architecture instead creates the flexibility to group systems on demand.
A system can now be designed to perform a certain task independent of the place from which it will be called. That was impossible in the architecture of layering. Using interfaces as the organizational principle can help a system to focus on what it is designed for. The linking code is taken out of services and aspects and concentrated in interfaces. Adapting to new requirements of a business process might then be restricted to a new grouping of systems. A service then really becomes a service. An SOA architecture does not have an inherent organizational principle like layering. Use of an SOA architecture creates the opportunity to create systems designed for one purpose only, like the services in the interface architecture. But in an SOA architecture this comes at the price of losing an organizational principle to connect services together. Although services can be managed by service contracts, the information about which business processes need a service is lost. The interface architecture combines the advantage of layering, having an organizational principle for calls to services, with the advantage of SOA, being able to use loosely coupled services which are designed independently of the ordering in any process. In the next diagram an example of how interfaces, services and aspects can work together is depicted. At the end of chapter 3 another example is presented.
Figure 2: Relations between interfaces, services and aspects
The yellow hexagons are the architectural interfaces, the blue hexagons the services and the pink ones the aspects. The numbers next to the lines coming from the interfaces tell the ordering of the calls. Interface A, for instance, first calls two services before it calls upon the other two services. Interface C calls interface D before proceeding with the call to a service in line three. Some services in this diagram are not used by this set of interfaces or by other services. It might be that they are used by other systems, or not; that cannot be told from the point of view of this set of interfaces.
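The division of labor in the diagram can be sketched in Java. All names here (Service, InterfaceA, the concrete service and aspect classes) are hypothetical; the point is only that the interface holds the ordering of the calls, while the services stay unaware of any ordering and can be regrouped on demand.

```java
// A service performs one task only and knows nothing about call ordering.
interface Service {
    String perform(String input);
}

class TrimService implements Service {            // one independent service
    public String perform(String input) { return input.trim(); }
}

class UppercaseService implements Service {       // another independent service
    public String perform(String input) { return input.toUpperCase(); }
}

class LoggingAspect {                             // an aspect: a cross-cutting task
    void log(String message) { System.out.println("LOG: " + message); }
}

// The architectural interface: it alone knows the order of the calls.
class InterfaceA {
    private final Service first = new TrimService();
    private final Service second = new UppercaseService();
    private final LoggingAspect logging = new LoggingAspect();

    String execute(String input) {
        String step1 = first.perform(input);      // call 1
        String step2 = second.perform(step1);     // call 2
        logging.log("executed for: " + input);    // aspect applied by the interface
        return step2;
    }
}
```

Regrouping the systems for a new business requirement then means writing a new interface class with a different ordering, while TrimService and UppercaseService remain untouched.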
1.6.1 Contracts
The benefit of this architecture is clearly that systems can be grouped on demand, services can be designed for one purpose and they can all be created independent of one another. That benefit is at the same time its major drawback, but the drawback is inevitable when working with systems, which do not have any fixed position in an application platform. Designing loosely coupled systems implies that change management can be something called from the past when this problem is not met seriously. At the end of the day the business must rely on the reliability of the overall application landscape. Loosely coupled systems are a beautiful way to design applications, but for the sake of continuity the business demands hard coded paths being followed. When an employee of a bank enters a transaction in the system the business must have some predefined results. In reports on transactions this transaction must be known for instance. It is unacceptable when a business process is started that it would end half way, because there is a leak in transference from one system to the other. The application landscape is one big service to the business. It is the business which is bringing in the money which pays the development of the application landscape. Not being able to meet the requirements of the business is not an option. On the other hand will a sophisticated system give the business a competitive advantage and a higher return on investment, because integration of a new application in the total landscape can be accomplished in a more standardized way and therefore in shorter time. That is exactly what Java and other OO languages promise: write once, run anywhere. That is what the business would like to have. But it can become very complicated in change management, when the interdependencies are not registrated well. If change management becomes a burden the profit of 13
having loosely coupled systems changes into a real nightmare and the only solution left is to return to a design of strictly coupled systems. Dependency hell has two faces. One is having to change code every time there is a change. The other face of dependency hell is that one does not know what will be affected by a change. There is no architect in the world, I guess, who would like to be forced to return to strictly coupled systems. But the danger of having loosely coupled systems lies in preserving the once laid-out routes for applications while adding new ones. It is the danger of changing the behaviour of a system without knowing the effect it will have on other systems. Because systems act independently of each other, a change in one system will not show where down the line another system might fail. Systems, however, have to be changed. There is a variety of reasons why they should: one could think of upgrading technology, improving performance, or new business constraints, when the business process changes or a different database system will be used by the organization. While some changes might be kept local, other changes will have an impact throughout the whole application landscape. Both types of changes must be met. The organization must be ready for them. That can only be accomplished when the organization has registered, during any change of a system within the application landscape, which systems it relies upon and which systems rely on the system in question. To use loosely coupled systems one needs a tight organizational coupling of systems. This registration of organizational dependencies between systems is managed by contracts. In contracts these dependencies are registered, together with the ownership of every system and which business process is using it. It is the responsibility of the architect to register these dependencies. He has the knowledge to perform this crucial duty for the organization.
System administrators should force the architects to hand over the full list of contracts before allowing any new system to be deployed. Testing is not a solution for this problem, however favorable that would be. Testing can hopefully predict problems, but the problem with testing is that it cannot know all possible errors which might occur because of a change in any system. And when a system unexpectedly fails due to a change in some other system, one would like to know which business processes are influenced by this failure. Registration of dependencies using contracts can give answers to these questions. In a contract several dependencies must be registered. The first dependency is the owner of the process. The owner is responsible for the maintenance of the contract and for how to implement any changes. The contract owner of an interface is the business process owner. The business process owner is responsible for the contract of the interface, because that person has the knowledge of which functionalities should be addressed by the interface. In practice he might delegate this to the team of architects, but it is in the interest of the business process that an interface performs a specific set of functionalities. The total set of functionalities should resemble the business process as a whole. Some of these functionalities are implemented in different platforms. The only position with an overview of how the functionalities are implemented across different platforms is the business process owner. Often this function will be performed by a domain architect, who has a solid technical background and is able to talk to different kinds of technical teams. The contract owners of services and aspects are the team of architects and the IT department. The term IT department refers to the organization which is in control of the deployment of the application into production. It does not imply that the internal IT department of the organization is responsible.
Services and aspects belong to a specific application platform, and therefore the persons responsible for the deployment of these systems should be made responsible for the contracts. A contract should note which interfaces and which services are addressed. It must be traceable which implementation of an interface or service is used. The exact class does
not have to be recorded, as this would put a burden on the maintenance of the contract itself, but at least the key by which the other system is called should be. Otherwise the added value of describing which system is used is too little. If it is not traceable which system is used by any other system, then maintenance will become very difficult and can be compared to throwing darts blindfolded. Aspects need not be registered in each contract in which they are used. Their function is so general that they dictate to all other systems how they are to be used.
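The kind of registration a contract performs could be sketched as a simple record. This is a hypothetical illustration: the text prescribes no concrete API, so all class, field and method names below are assumptions.

```java
import java.util.List;

// Hypothetical sketch of a contract registering organizational dependencies.
class Contract {
    final String systemKey;          // the key by which the system is called
    final String owner;              // e.g. the business process owner or the IT department
    final String businessProcess;    // the business process using this system
    final List<String> reliesOn;     // systems this system relies upon
    final List<String> reliedOnBy;   // systems that rely on this system

    Contract(String systemKey, String owner, String businessProcess,
             List<String> reliesOn, List<String> reliedOnBy) {
        this.systemKey = systemKey;
        this.owner = owner;
        this.businessProcess = businessProcess;
        this.reliesOn = reliesOn;
        this.reliedOnBy = reliedOnBy;
    }

    // Impact analysis: when this system fails, these systems are at risk
    // and this business process is influenced.
    List<String> systemsAtRisk() { return reliedOnBy; }
    String influencedProcess() { return businessProcess; }
}
```

Even such a minimal record answers the two questions raised above: which systems will be hit by a change, and which business process is influenced by a failure.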
1.6.2 Documentation
Furthermore, the contract should specify the functional specifications which are concretized by the system. That is most important for interfaces, as they are connected to the business process. They must describe the functional specifications elaborately, as this gives the opportunity to control if and where the specifications are met. The example of the validation rule presented in section 1.5.1 should be part of the contract of the interface. Working out the functional specifications this thoroughly by contract lets any contract also serve as the documentation of the business process. Using contracts of interfaces to document the business process has several more advantages. The first one is that the location of the documentation is the same as the implementation of the functionalities. The second is that the documentation is performed by the person who designs the system with these functionalities in mind, which is the architect. The third is that the documentation will change in accordance with the change of functionalities and not with the technical implementation. As the services are loosely coupled from the interfaces, it should be unimportant for the business process owner how the job is done, as long as it is done. When the technical implementation changes but the functional requirements do not, then the documentation of the business process does not need any change either. The process stays the same and so should the documentation. The fourth is that developers are relieved from documentation. That has again two positive side effects. Developers tend not to write and update documentation. It is not their main responsibility and they are not directly related to the business process. Often they will not get enough time to document their work properly, and they have to make a big leap from their daily focus to what their achievements mean for the business process. That can be quite hard to do, which often results in poorly documented applications.
They should, however, document exceptional solutions or unwanted dependencies to inform their colleagues about the implementation in question. That is in line with their daily focus and should be logical to perform. The other positive side effect concerns maintenance. When the documentation is not technically inclined, any developer who has to implement a change must be able to understand how the functional requirements are met by the technical implementation. This can serve as a test of whether a developer is capable of understanding this translation process. When the developer understands this translation from functional requirement to technical implementation, he will adapt the implementation to the new situation with the desired business result in mind. The last advantage of restricting the documentation to the functional requirements is that the implementation can be checked on whether it does what it should do. When the documentation is focused on describing the technical implementation, it reads more or less like 'this is what we do' and indeed, it happens. But that does not explain whether that should be done and if yes, why. The 'what' is documented, but that could already be found out by reading the code. The 'why' is much more important, as this is what will return in discussions with the business. The 'why' refers to the validity of the code and that is what a business process owner needs to know. The 'what' is about the reliability of the code. The business process owner will take that on trust.
2 The mission
Introduction
The purpose of an application is first and above all to serve the business process. If the application manages that, its first and most important purpose is achieved. But that is not the first principle of application design. Application design has purposes which serve the goal on a more abstract level, namely serving the return on investment for the organization as a whole. Organizations do not only need the application to do its main job; they also need it to communicate with other applications, the knowledge of the technical application to be shareable in a team, best practices to be used and work to be standardized. The use of standardization can also raise the level of complexity which can be covered in applications, because the wheel does not have to be reinvented all the time. The purposes of designing applications are therefore: 1. maintainability, 2. interoperability, 3. robustness, 4. reusability, and 5. extensibility. These purposes are high-level purposes. A lot of concrete purposes can be derived from them. The goal of a purpose is not to be too specific, so that it can be applied to a lot of different situations. The reason is that a high-level purpose can be used as a criterion in many different situations. Validity requires being more specific and therefore constrains the domain in which a purpose can be applied. That creates the need for an endless list of valuable purposes. The maintenance, interoperability, robustness and reusability of such a list is limited. It is prone to changes and new insights and can lack continuity. A list of purposes should be quite abstract in order to avoid these pitfalls and hold valid purposes in any circumstance. These purposes appear in order of importance. When an application is not maintainable, the rest does not matter. Maintainability is about the here and now of the application itself. Interoperability is about the communication with its current environment.
Often the interoperability of an application will suffer from the maintainability of the application. The validity of the data, which is a conditio sine qua non1 for exchanging information, can hardly be trusted to be high when the application is not very maintainable. When it is not well known how the application works, how can the data it delivers be trusted and how can it be expected to deliver valid data for exchange with other systems? The usefulness of an application will suffer severely when its interoperability demands are not met. Robustness is about the vulnerability of the application to expected changes in the future. It is therefore considered less important than the first two purposes, as they deal with the current application. That is the one the organization has to work with. However, when the application is not robust to change, the maintainability of the application might become a burden. In the course of maintenance the robustness of an application can change; it is not a fixed given. The reusability of the application is about how useful its components can be for other applications. If MoSCoW were applied to reusability, it would get a C. The effectiveness of the application is
1 Conditio sine qua non means 'a condition without which it could not be'. The more formal expression is used to stress the importance of the validity of the data.
not measured by its reusability; it is a desirable side effect. The extensibility of an application is an appendix to robustness. Whereas robustness is about the internal extensibility of the application, extensibility is about its external extensibility. At some point it can be very important, but many times it does not play a role in the evaluation of an application. In general it would get the W from MoSCoW.
2.1 Maintainability
Maintainability is by far the most important feature of an application. An application which lacks maintainability is very expensive and is, by definition, not designed well. If the application is well designed but not considered maintainable, then the organization lacks sufficient support for the application platform. Actually, that is one of the benchmarks in designing applications. What use is it to use a lot of complex frameworks when the developing team consists of people not able to handle them? More important than the use of design patterns is using a complexity in the design which can be successfully handled by the team responsible for maintenance. The maintainability of an application can be enhanced in many ways, like using best practices, standardization, design patterns and coding guidelines, and by taking care of the people who perform the maintenance: giving them the opportunity for schooling, giving them responsibility and paying good wages. Maintainability is about how an application is doing something. When it is not clear how an application is performing its tasks, it cannot be trusted to perform them, which means the organization does not know exactly what the application is doing.
2.2 Interoperability
Interoperability is defined as the ability of two or more systems or components to exchange information and to use the information that has been exchanged. In contemporary organizations, exchanging data is crucial to the usability of an application. Seldom is an application used without integrating its results with other applications. The interoperability is based on what the application is supposed to do. What it does results in data, which is used further on in the organization as benchmarks for the process(es) the application supports. In the design, the validity and the reliability of the data should be taken into account. The technical exchange of data used to be a problem in software. With XML nowadays it is no real problem anymore. It would have been very interesting to discuss when data had to be exchanged on the level of operating systems or networking protocols and the like. One of the great advantages of modern programming languages is that this no longer has to be dealt with; it is settled. The focus for interoperability is on functional exchange. The reliability of the data is ensured by some kind of transaction mechanism; a transaction is not restricted to a database system. Without reliability of data the validity of the data cannot be assured. But the validity is what really counts for interoperability. Validity is accomplished when differences in data values reflect differences in real-world values in a predictable way. That condition can only be met when the definition of the data in the application is comparable with the definition of the business objects. When definitions of business objects change, the definitions of the data change accordingly, even when the data itself is not changed. That is because after the change in definition of a business object its data is evaluated differently. After any change in definition of a business object a conversion should be considered. Interoperability is the most difficult purpose to hold. Even when
the data has virtually become totally meaningless, the application will still work and produce reliable results. The only way this can be ensured is to test on a regular basis whether the input still conforms to the definition of the business objects. The definition of the business objects is subject to the mission of the business process. Whether data is valid can only be checked against the mission statement. The validity of the data is out of the control of the technical model. Therefore the design of the data should be constructed with this lack of control in mind. The data in the application serves the business process. If it does not provide sufficient ways to deal with changes in the functional definitions of its data objects, it will end up storing incompatible values within the data objects. For the majority of data this is not a very restrictive purpose. Most of the data does not have uncontrollable changes of definition. Whether a grocery store sells one banana or a bunch of bananas is not that uncontrollable. But applications which have to deal with laws, with the guidance of people or with public services will have to deal with this problem actively. How to store data, how to aggregate them and how to convert them should be designed carefully. It is a certainty that the definition of these business objects will change significantly over time, and it will be very important for the organization that these differences can be met. An application does not have to be robust to the change in definition itself, as these changes are most often unpredictable. It should be robust enough to work with data whose definition might change over time. That is the part which should be accounted for in the design.
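One way to implement the regular conformance test mentioned above is to keep the definition of a business object as an explicit, versioned set of field rules and check incoming data against it. This is only a minimal sketch under assumed names; the text prescribes no concrete mechanism.

```java
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical, versioned definition of a business object: a set of field rules.
class BusinessObjectDefinition {
    private final int version;
    private final Map<String, Predicate<String>> fieldRules;

    BusinessObjectDefinition(int version, Map<String, Predicate<String>> fieldRules) {
        this.version = version;
        this.fieldRules = fieldRules;
    }

    // Input is valid when every defined field is present and satisfies its rule.
    boolean isValid(Map<String, String> data) {
        return fieldRules.entrySet().stream()
                .allMatch(e -> data.containsKey(e.getKey())
                        && e.getValue().test(data.get(e.getKey())));
    }

    int version() { return version; }
}
```

When the definition of the business object changes, a new version of the rule set is created, which makes explicit that older data may need conversion before it can be evaluated against the new definition.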
2.3 Robustness
Robustness is the demand on the application that it is designed well enough to handle expected changes in the (business) process with as little effort as possible. Every application must have some basic assumptions about what is essential for the identity of the process. A process has some input, a transformation and an output. As long as these basic assumptions are met, the application should not need to be redesigned. Robustness stands or falls with the success with which the vision of the business process model is translated into technical design. Robustness can be further characterized by the open/closed principle, which states that software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification. The robustness of a design is a combination of the open/closed principle for the software involved with the identifying objects of the process it is involved in. Robustness of a design can be applied to the design of any type of system, a workflow engine for instance. Basic questions to come to a robust design are:
what are the presumptions of the design? are they coherent with the current demands of the process? what are the expected changes in the future? how important will these expected changes be for the organization? where should they influence the design of the current system? what will be the impact of not taking these expected changes into account during design?
This is called the change request profile of the business process. Every business process has its characteristic change request profile for its business objects. Likewise, business objects in different processes can have different change request profiles. Having different change request profiles implies that the business objects are different.
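The open/closed principle mentioned above can be made concrete in a few lines. In this sketch the engine is closed for modification but open for extension through an interface; all names are illustrative, not taken from the text.

```java
import java.util.List;

interface Step {                      // the extension point: open for extension
    String apply(String input);
}

class ProcessEngine {                 // closed for modification: new Steps need no change here
    private final List<Step> steps;

    ProcessEngine(List<Step> steps) { this.steps = steps; }

    String run(String input) {
        String result = input;
        for (Step s : steps) result = s.apply(result);   // apply each step in order
        return result;
    }
}

class Trim implements Step {          // one extension
    public String apply(String input) { return input.trim(); }
}

class Upper implements Step {         // another extension, added without touching ProcessEngine
    public String apply(String input) { return input.toUpperCase(); }
}
```

A new step in the process is then a new class implementing Step, which is exactly the kind of expected change a robust design should absorb without redesign.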
Robustness is not an isolated purpose. Often the robustness of a design will be influenced by the time given for the design and build phase. Using iterations might give a better opportunity to make the design more robust, as the application and the customer's thoughts about the application are realized in cohesion.
2.4 Reusability
Whereas the three previous purposes concentrated on the application, this purpose focuses on the components of the application. The more reusable components are used, the simpler the application will be for the maintenance team, the bigger the ROI on the original application, the easier it will be to create meaningful test code and the smaller the likelihood of bugs. Just as business objects are marked by their change request profile, so is the reusability of a component. A component can only be reused somewhere else in the application landscape when the expected changes serving the new system are the same as in the original. If the expected changes are markedly different, reusing the component will become a burden. In the example of the applicants and vacancies, two different implementations of the same business process were provided. The business object 'vacancy' in the two processes is incompatible between them, although they share the name. Both business objects can be expected to fulfill different change requests, which will inevitably lead to incompatible features. On the downside there is a bigger chance that specific demands are more difficult to meet, change in often-used systems is virtually impossible, it must be clear which systems are used where, deploying becomes more tedious, a bug can be much harder to solve, and upgrading an application having a lot of reused components can be more complex. Reusability demands organizational administration to handle its dependencies. Anyhow, the professional deformation of an architect requires, insists and demands adherence to this principle. Therefore it is left out of any discussion and considered a very good purpose, although any application could work without it.
2.5 Extensibility
With the purpose of extensibility the circle closes itself. It started with the maintenance of the current functionalities in the application and ends with delivering hooks for external functional extensions to the application. Opposite to both ends of the circle stands robustness, which is defined as the possibility to provide extensions to the current functionalities of the application. Extensibility is especially important for systems which deliver common functionalities for unpredictable implementations. Aspects, JDBC jars and other libraries are examples for which extensibility is a requirement. It is also a core feature for frameworks. When interfaces can be used to enter the system, extensibility is delivered by contract. How else could loosely coupled systems be addressed? Every time a loosely coupled system is used by another system, extensibility has been accomplished. To give extensibility a distinctive definition therefore requires a strict description. Extensibility will be restricted to delivering hooks on business applications. Even then it can easily be obtained technically using interfaces. The distinction is made in the contract stated by the interface. To evaluate whether an application fulfills the purpose of extensibility is to look at the contracts it offers, while having the business process in mind. How well can it integrate new business objects? Does the whole application need to be redesigned or can it be done quite straightforwardly? Questions like how a new view can be added to the user interface are in this respect evaluations
of the robustness of the application. That is an internal extension, because a user interface with views generally already exists. Adding an interface for mobile phones, for instance, can be done fairly easily if the Observer pattern was used for the creation of interfaces. That is therefore a question about robustness as well. Extensibility is therefore quite an abstract matter to discuss here. It should be evaluated at the time a new extension to the business process has to be integrated in the application.
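The Observer pattern mentioned above can be sketched briefly: a new mobile view is attached to the existing model without touching it. The class names are assumptions for illustration only.

```java
import java.util.ArrayList;
import java.util.List;

interface View {                                  // the observer
    void update(String state);
}

class BusinessModel {                             // the subject: unaware of concrete views
    private final List<View> views = new ArrayList<>();

    void attach(View v) { views.add(v); }

    void changeState(String state) {
        for (View v : views) v.update(state);     // notify every registered view
    }
}

class DesktopView implements View {
    String last;
    public void update(String state) { last = state; }
}

class MobileView implements View {                // added later, without changing BusinessModel
    String last;
    public void update(String state) { last = state; }
}
```

The mobile view is an extension by attachment, which is why the question of adding it is really a question about the robustness of the existing design.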
3 The vision
Introduction
In this chapter I will provide some design principles. Use of these principles will help the architect to design an application. The principles are: 1. respect for environmental constraints, 2. architectural layering and iteration, 3. architectural coupling, 4. data exchange, and finally 5. Inversion of Control. The first one is not a real design principle, though, but it can restrict the architect's freedom in how to design and should not be forgotten. The ordering of these principles is from a high abstraction level to a low abstraction level, except for Inversion of Control. The reason is that Inversion of Control is a two-faced principle, closely related to architectural layering and to the implementation of code when it is used to describe frameworks using Dependency Injection. Out of respect for the practical value of that second face of Inversion of Control, it is put at the end of the chapter.
Arriving at a critical position, one has to reevaluate all judgments so far to find out if they are still applicable. The point is that at every critical position one has to start all over again. As it is a best practice in chess to calculate until a critical position is met, it should be a best practice for the design of applications as well. The first model which requires layering is the implementation model. The previous models are descriptions of isolated processes given as input to the architect. In the construction of the implementation model, the architect must not only design the business process, but also has to take into account how to connect to other processes and how to integrate the different business contexts into interfaces on the same process. And he can be forced to translate the tBPM objects into different objects as described previously, because of requirements coming from other processes. The step from a business model to a design is often too big to handle at once. It is better to first clarify which steps can be discovered during the process, which objects will serve as input, which objects serve as output and how objects can be identified positively during the transformation. If too many questions are handled at the same time, the number of possible solutions makes it too hard to come to decisions. But when the unraveling of the process is done like peeling an onion, the questions to be answered can be grasped successfully. The advice of the King of Hearts in Lewis Carroll's Alice in Wonderland to the White Rabbit bears great wisdom in this respect: 'Begin at the beginning,' the King said, very gravely, 'and go on till you come to the end: then stop.' Every time an input, a transformation and an output are untangled, the implementation model gains shape. Using this technique of layering can cause the design to stay too close to the original business model. That would imply that the design components cannot be reused for some other process.
Which again would imply that the systems are not really independent. For that, iteration comes into play. Redesigning the model again with the previous knowledge in mind could help to generalize the design even further and make it less entangled with the business process.
The Observer pattern is in itself good, but the use of the Observer pattern to convert documents is not, with respect to its restrictions. The question for an architect of how to maximize loose coupling is a different one than for developing. The question for an architect is whether the chosen design pattern or combination of systems is apt to meet the set of functional demands. That is, do the implementation and the functional demands assume a similar change pattern? In an article in SOA Magazine a new form of coupling was introduced: unintended coupling. This type of coupling has a very high ad hoc level and it surely is not a design principle, but it is worth mentioning here. The term can serve as a reflective mechanism to control the design. What are the couplings in the design? Which dependencies do they pose? Is that acceptable? It serves as a good reflection mechanism to find out if the created relationships are the couplings planned for. As technical coupling is to be minimized, the only coupling left should be those relationships between classes and systems which are necessary to fulfill the requirements. Coupling is unavoidable and wanted. Without coupling no business process could be implemented. No data could be entered into a system, transferred by a system, stored and retrieved in reports without decent coupling. If coupling were not unavoidable and wanted, the work of an architect would be much easier. It is the unavoidable necessity, together with the never-ending aim to minimize coupling, which makes the architectural world go round. From now on the term coupling refers to architectural coupling only. There are four distinct forms of coupling: 1. tight coupling, 2. strict coupling, 3. loose coupling, and 4. aspect coupling.
I call errors self-explaining when it is obvious that the logic in the code makes assumptions which are not accounted for in the business process. These errors are typical for strictly coupled processes. It is considered loose coupling when, from the viewpoint of the input, no valid assumptions can be made about the concretization of the output nor about the path traversed to get the output. That is when the control is handed over to the other system. It is considered double loosely coupled when the reverse can be stated too. Please take a look at the next lines of code:

    if (obj != null) {
        obj = receiver.returnObject(obj);
    }

There appears to be a thin line of coupling between these two systems. The request will only be sent when the object is not null. Making no assumptions about the other system would imply that the null pointer exception must be handled by the receiving system, and therefore the if statement should be removed and the code should look like this:

    obj = receiver.returnObject(obj);

Although the first system does not make any assumption about the second system anymore, the second system will act as the sender when returning data. From the point of view of the second system, it has to make an assumption about how the first system will respond to null pointer exceptions. The need for this assumption lies in the absence of any delineation by the first system of when its data is ready to be sent. When the second system only gets data from the first system if the first system states the data is valid to send, then the second system can handle the received object independently of that system. The lines of code would then, for both, become:

    if (sender.validObject(obj)) {
        obj = receiver.returnObject(obj);
    }

When the receiver likewise validates the object before returning it to the sender, the two systems can be said to be double loosely coupled with respect to this connection. Aspect coupling is a special type of coupling. On the one hand one would call it a type of loose coupling.
Reusability, which is always an indication of loose coupling, is very high. On the other hand, the caller of the aspect can exactly predict what the result of the call to the aspect will be. From that perspective it is strict coupling. Even more, an aspect can put restrictions on how it will be used; therefore it can have tight coupling features as well. Because it can be used by any system, it will put demands on how it is used by all other systems. Aspect coupling is loosely coupled from the callee point of view, but strictly coupled from the caller point of view. Aspects and libraries share this type of coupling. Common libraries like JDBC drivers or libraries for MIME handling can be viewed as platform-wide aspects. Aspect coupling is unilaterally defined by the callee side. In a business process tight coupling is the standard. An actor must behave according to the demands of the business process. Procedures and guidelines can have strict coupling. The organizational culture, norms and values can be considered loose coupling. The way people have to register their working time, or what to do when fire breaks out, can be interpreted as examples of aspect coupling. Only the tight and strict coupling behaviour of business processes is translated into the business process model. In the tBPM this tight and strict behaviour is described using logic. In the implementation model loose coupling and aspect coupling will reappear. Again, they will have nothing to do with the actual business process. Loose coupling and aspect coupling are more related to the features of the platform in which they are constructed. Tight and strict coupled
It is a very basic schema. Not surprisingly, in practice this kind of process can be complicated. Transformation processes can range from a new appointment in an agenda to a permit to build a house. The contract of the transformation process is that it must be able to handle a business object of the type of which Object A is constituted. Therefore it must have knowledge about the type of business object and the methods that can be applied to it. As a result the process has a tight coupling with the type of Object A. There is also tight coupling between both Objects A, as they are functionally the same object. For the system it is irrelevant which object it is. The only thing that matters for the transformation process is that it is capable of transforming that type of object. During the process the identity of the object is preserved, but the data and the behaviour of the object can differ. The actual class of the object might change repeatedly during the process, but the core business fields that cover the identity of the business object will remain the same. Each business object has three types of data for the process, namely: 1. identifying data, 2. status variables, and 3. content.
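The three types of data can be sketched in a minimal, hypothetical class. The field names and the status rule below are assumptions for illustration; the text names only the three categories.

```java
import java.util.Map;

// Illustrative sketch of the three types of data a business object carries
// in a transformation process.
class BusinessObject {
    private final String identity;          // 1. identifying data: preserved during the process
    private String status;                  // 2. status variable: derived from the content
    private final Map<String, String> content; // 3. content: the data the process transforms

    BusinessObject(String identity, String status, Map<String, String> content) {
        this.identity = identity;
        this.status = status;
        this.content = content;
    }

    String identity() { return identity; }
    String status() { return status; }

    // Indirect comparison: a status value is assigned based on the content.
    void evaluate() {
        status = content.containsKey("approvalDate") ? "APPROVED" : "PENDING";
    }
}
```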
The values of the status variables will vary based on the content of the data. A transformation process is the only process concerned with the meaning of the content, and it will compare this information directly or indirectly with the content of other business objects. Indirect comparison of business objects happens when a status value is given to an object based on its content. The challenge for the design in this type of business process is to abstract the causal relationships between the content and changes in statuses as much as possible. In the next figure the schema for transportation is shown. It is somewhat less basic. Figure 5: transportation process
A transportation process is the only process which can generate identities for an object and which is able to change the underlying class that represents the business object. The class is a vehicle for the business object, and every time it changes, the business process enters a new subprocess of the transportation process. During the conversion of a document, for instance, there is first the translation from an object of type A to a general object of type X, and then the translation from that general type X to an object of type B. Every conversion of a document will consist of at least two transportational subprocesses. During the transportational processing the business object A does not change at all. The object of class X refers to the same business object A as does the object of class Z. In order to be a reliable transportation process, the business identity and content of object A must be preserved; otherwise the transportation is not loosely coupled to the object it processes. The implication is that a successful transport of an object means that at the business level there is a tight coupling between the objects of class X and Z. If object x1 of class X differs from object x2 of class X, then these same differences will be found between the objects z1 and z2 of class Z. Knowing the input is knowing the output. The identity and content must not change during the transport. For the transportation process the business object does not have data at all. The transportation process is loosely coupled to the business object, but has a tight coupling between input and output. The challenge for this type of process is to use different representations of the same business object in different subprocesses, while still preserving the identity and content of the business object. An example of a dedicated system for transportation is a tracing system concerning the delivery of a package. A workflow is a process which consists of two intermingled types of processes. The
flow from one step to another belongs to the transportation process; the content of every step belongs to the transformation process. Workflow is a peculiar type of processing, because the navigation is based upon the results of every step. Normally the transportation processing serves the transformational processing, but in a workflow it is the other way around: the transformational processing serves the transportational processing. In figure 6 the schema for the process of translation is presented. Figure 6: translation process
In this process the business object of type A ceases to exist as an object from the perspective of the process, and the output is the new business object of type B. Examples of this type of business process are, for instance, passing a report of a meeting on to the processes of invoicing or marketing analysis. The format of the data is more important than the actual content of the data. The identity of the object is its type; different objects of the same type are treated the same. There is a strict coupling between the input and the reading of the input object, and there is a strict coupling between the result of this reading and the output object, but there is no direct and reversible relationship between object A and object B. There is, though, a strict coupling between the content of the business objects A and B, as they should be the same with regard to the translation process. If object A has certain characteristics, they will be met by the characteristics of object B insofar as they are applicable to the type of business object B. It is not true that reverse engineering object B will result in an object C of type A with exactly the same content as the original object had. The content can change irreversibly during the translation, but in a predictable way. Objects of type B will probably have some characteristics not present in objects of type A. These characteristics need to be added when composing the new object of type B. The greatest challenge in this type of process is the precision of the translation process. It can be difficult to translate data from one type of business object to another, as data is often tightly connected to the context in which it is formulated. When the translation is not precise enough, data corruption can occur.
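A translation of this kind can be sketched as follows. The class names (MeetingReport, BillingRecord), the rate and the added billing date are all hypothetical; the sketch only shows that the translation is predictable but not reversible, and that characteristics absent in type A must be added when composing type B:

```java
import java.time.LocalDate;

class MeetingReport {                      // business object of type A
    final String subject;
    final int durationMinutes;
    MeetingReport(String subject, int durationMinutes) {
        this.subject = subject;
        this.durationMinutes = durationMinutes;
    }
}

class BillingRecord {                      // business object of type B
    final String description;
    final double amount;                   // characteristic not present in A
    final LocalDate billingDate;           // must be added when composing B
    BillingRecord(String description, double amount, LocalDate billingDate) {
        this.description = description;
        this.amount = amount;
        this.billingDate = billingDate;
    }
}

class ReportTranslator {
    private static final double RATE_PER_MINUTE = 2.5;   // assumed rate

    // Strict coupling between input and output: the same report always
    // yields the same record, yet the translation is not reversible
    // (the original duration cannot be recovered without knowing the rate).
    BillingRecord translate(MeetingReport report, LocalDate billingDate) {
        return new BillingRecord("Meeting: " + report.subject,
                report.durationMinutes * RATE_PER_MINUTE, billingDate);
    }
}
```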
the routing, the act of sending and receiving data, and the internal and external communication of a system.
Routing

Routing is a complex task to design, as routing has to serve opposing requirements. The first requirement routing has to adhere to is that routing follows a prescribed order. Some steps in the process can only be accomplished after other steps have been successfully fulfilled. Consider for example the storage of a data object. Storing data will demand that the object has some intrinsic properties, maybe even specific values, before it will be saved. The check whether the content of the data makes it a valid storable object must be performed before the data is stored; it can be considered worst practice if the check is made after the storage of the data. Because of that and many other practical reasons, a predefined arrangement of a routing process is unavoidable and wanted. The fulfillment of this requirement can already turn out to be complex, as often more than one routing is possible for one type of object. Besides the ordering, the result of every routing step is predictable. Every step is governed by a specific set of validation rules. The results of these steps are defined in the business process. The objects used in these steps must therefore be closely related to objects found in the business process; otherwise the validation rules cannot be applied logically. In every step the routing must have the capacity to return to the concrete object and a concrete set of validation rules. When, for instance, a publisher has separate routings for an issue of a magazine and for a new book, the objects used in these routings must be closely related to a magazine or a book. Routing must have the capacity to preserve the identity together with the content of the data through the whole process. No matter which class is used at a specific point in the process, the identity of the data at the start of the routing must be equal to the identity of the data at the end of the routing.
These two requirements both demand from the routing that the process is tightly related to the actual business process. The closer the routing stays to the actual data, the easier the specific demands of the actual data can be met, because a logical change in the data is mirrored in a logical change of the routing. The third requirement is that the routing should be robust to change. The number of steps required, the implementation of each step and the relations between the different steps must be able to change without affecting the routing process severely. That requires the routing to be as independent as possible from the actual data. In this way the routing becomes robust and can process a bigger variety of data. The consequence is that any logical change in the data is preferably not mirrored in the logical process of the routing. A routing should serve both types of processing at the same time. At every step of the routing, the routing should be able to process logical changes in data differently, but at the same time the overall processing of the routing should be independent of any actual data. That requires that at every step of the process an object performs two separate functions simultaneously, namely supporting the overall processing of the routing and supporting the requirements of the business process for that step. To meet these contradictory requirements, routing has to be designed layer by layer. The most abstract layer focuses almost entirely on the overall routing process and the most concrete layer focuses almost entirely on the actual content of the data. Every layer in between will show a gradual transition from a focus on processing the routing to a focus on processing data. This gradual transition can be designed using the Law of Demeter.
Law of Demeter

The design of the routing system is guided by the Law of Demeter, which states that a system should only talk to its neighbours, not to strangers. The Law of Demeter has been studied intensively by the research group of Karl Lieberherr, whose publications contain a lot of valuable information about this subject.
The Law of Demeter states about a method M of object O that it should only invoke: 1. methods of object O itself, or 2. methods of objects which are parameters of method M, or 3. methods of any of the objects which are created in method M, or 4. methods of the direct component objects of object O. The main concern of the Law of Demeter (LoD) is to sustain robustness of design. Central to the idea is that one uses the maximum of information available without making assumptions about what is present now and in the future. The relations of a class are all those classes which can work on one of the objects defined in rules 1, 2 or 4. The component objects of rule 4 should not be addressed directly by an external object. The class having component objects must have public members to let the external object exchange these objects, and leave the responsibility for handling them to the class itself. This is a technical interpretation of the LoD. It is however possible to use these restrictions on invocations for the architectural design of routings. With the LoD in mind one can investigate the chain of dependency between objects and the dependency between systems within a routing. The LoD is particularly useful as guidance for designing a routing, because it focuses on the relationships between classes and at the same time defines a maximum of how far a class can reach out to other classes. Designing a routing with the LoD in mind forces the design to progress step by step, as a class is restricted to reaching only one class away. In an article by Brad Appleton the analogy of quantum mechanics is used to make a distinction between a 'particle view' and a 'wave view' on objects. The particle view looks at the object itself. The wave view addresses the relationships an object has with other objects. The analogy goes even one step further. Movement in a 'particle view' is moving from A to B, like passing a bean from the front end to the back end. Movement in a 'wave view' is different.
In a wave the particles do not move: they stay at their place but respond to an event when it passes by. With the guidance of the LoD one can investigate where an object behaves like a 'particle' and where like an element of a 'wave', and whether that is useful. In all public methods which belong to the contract of the class, the object should behave like an element of the wave; that is, the code in these methods should be dedicated to the relationships of the object with the other objects. In the private methods outside the contract, the particle behaviour of the object should be collected; that is, the code in these methods should be dedicated to the concrete actions for which the class is made. If any validation in a class has to be executed, this should be done in private methods not belonging to the contract of the class. The handling of the result should be done in the public methods which belong to the contract of the class. The same line of thought can be applied to the dependency of systems. A system should only talk to nearby systems. When it talks to systems which it can only reach after having passed another system (see the example in the next section), it is making too many assumptions about both systems involved. Not only does it have knowledge about both systems, it also has knowledge of the relationship between these two systems. These assumptions will make it harder to change all three systems, as they are linked to each other. A system should therefore behave like an element in a wave, never like a particle. Only systems which behave like the element of a wave towards all their surrounding systems can be said to be loosely coupled from their environment. The behaviour towards surrounding classes can be easily detected using the four possible invocations of the LoD. If somewhere in the code none of these four rules is applied, the class or system is violating the LoD. It can indicate that not enough thought has been given to the design.
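The four permitted invocations can be made concrete in a small sketch. All class names (Car, Engine, Logger) are hypothetical; only the pattern of allowed calls matters:

```java
import java.util.ArrayList;
import java.util.List;

class Logger {
    final List<String> lines = new ArrayList<>();
    void log(String message) { lines.add(message); }
}

class Engine {
    boolean started = false;
    void start() { started = true; }
}

class Car {
    private final Engine engine = new Engine();   // direct component (rule 4)

    private boolean validate() { return true; }   // own method (rule 1)

    // Method M of object O may only invoke:
    public void drive(Logger logger) {
        if (!validate()) return;          // 1. methods of O itself
        logger.log("driving");            // 2. methods of a parameter of M
        Logger trip = new Logger();       // 3. methods of objects created in M
        trip.log("trip started");
        engine.start();                   // 4. methods of a direct component of O
        // A violation would look like: order.getCustomer().getAddress(),
        // reaching through a neighbour to a stranger.
    }
}
```

Note also how the sketch follows the wave/particle split described above: the public method drive handles the relationships with other objects, while the private validate method contains the concrete 'particle' work.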
The relationships between classes can be described more extensively than just being neighbours. Classes which have another class as a component object inside need that other class to function properly themselves. These two classes might be called family, where the component object might be
called a child. When classes both must assume what the other class wants, they can be called family too. Consider the method 'List<String> returnNames(List<Person> pPersons)'. Caller and callee both assume that the Person class has a property 'name', and they both know which type of name, be it a surname, a first name or the full name, and that the other class uses the same interpretation to return the proper names. Then they are family too. Classes can be called friends when they share a method after which the caller knows how to proceed regardless of the answer returned by the callee. An example is the method 'boolean isPhoneNumber(String s)': regardless of the answer of the callee, the caller will know how to proceed. And there are classes which can exchange their component objects using methods. Together with the handing over of the component object, the responsibility is handed over. Both classes know how to deal with the class of the object. The caller is not only independent of the processing done by the callee; the callee cannot even predict how the caller will respond to the results of its processing. The internal processing of the caller does not have to depend on the results of the processing done by the callee. These classes can be called neighbours. Family, friends and children all visit each other; neighbours exchange. The more neighbour relationships there are in a routing, the more it will behave like a wave. Loosely coupled systems are neighbours of each other. Strictly coupled means you are friends, as behaviour can be predicted, and tightly coupled means you have yourself a family. The objects specified in rule 3 can have any type of relationship in this analogy. Not all classes need to become neighbours to be designed most effectively. Aspects, for instance, are never neighbours. A class which is calling an aspect knows what it will get in return. An aspect ensures predictable results.
If a class needs a list of employees from an aspect, then the aspect will assure that a list of employees is returned in a fixed format. An aspect serves as an extension to any class that calls the aspect. Therefore aspects are always friends of anyone. They are not family, because family members provide each other unique capabilities; the capabilities of aspects could be performed by any class itself. It is only far more convenient to let an aspect do the job.
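The two relationship types can be illustrated with the very methods mentioned above. The surrounding classes (NameService, PhoneChecker) and the phone-number pattern are assumptions added for the sketch:

```java
import java.util.ArrayList;
import java.util.List;

class Person {
    final String name;   // family: caller and callee both assume that
    Person(String name) { this.name = name; }   // 'name' means the same thing
}

class NameService {
    // Family relationship: both sides share assumptions about Person.name
    // and about which kind of name is meant.
    List<String> returnNames(List<Person> pPersons) {
        List<String> names = new ArrayList<>();
        for (Person p : pPersons) names.add(p.name);
        return names;
    }
}

class PhoneChecker {
    // Friend relationship: whatever the answer, the caller knows how to
    // proceed. The pattern here is a simplistic stand-in.
    boolean isPhoneNumber(String s) {
        return s != null && s.matches("\\+?[0-9]{4,15}");
    }
}
```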
To give an example of a wave, I present a possible implementation of a translation portlet. It is an arbitrary example. Figure 7: implementation of a translation portlet
The user will start the routing by sending a submit request to the portlet. The portlet will create a bean, validate the request and, if satisfied, send it to the server facade. The server facade knows
how to process this request to the next layer and hands it over to the business layer. The business layer will validate whether the request can be processed further. If affirmative, the request is passed on to the DAO, which will communicate with the database. After receiving the data back from the database, the data object returns to the portlet and, after applying internationalization, the result is presented to the user who started the request. The quality of the wave is defined by the different relationships between the different layers. How, for instance, is the communication between the server facade, the business layer and the DAO established? Is it done by first calling the business layer from the server facade and then calling the DAO directly from the server facade? Or is the bean handed over to the business layer, which in turn will hand it over to the DAO? In the first scenario the server facade has knowledge of both classes and knowledge about the relationship between these classes too. The server facade knows, based on the result of the business layer, whether it can proceed calling the DAO or not. The server facade is first visiting the business layer, then returning to itself and afterwards stretching itself out to the DAO: hardly the way a wave works. If, on the other hand, the object is handed over from the server facade to the business layer with a method like 'Bean returnRequest(Bean b)', then the server facade makes no assumptions about the internal working of the business layer nor the DAO. Making fewer assumptions about the internal processing of other classes will enlarge the maintainability and the robustness of the application as a whole. The process will behave more like a wave, in which all elements stay at their place but make their movements when the data object passes by. Take a look at the next two graphics to see the difference. Figure 8: Ridge and wave
In the ridge figure the facade is stretching itself out and not handing over the responsibility to the business layer. As a result the facade is acting like a particle. In the second figure the responsibility is handed over to the business layer and a wave arises. The facade is now decoupled from the DAO and does not have to make any assumptions anymore about the relationship
between the business layer and the DAO layer. Applying the golden rule of the LoD, not to talk to strangers, will automatically create a wave in the processing, assuring that each class does not need to make more assumptions about its environment than strictly necessary. The small routing from the portlet to the resource bundle and back does not depend on a business process. That routing is totally under the control of the technical group which created it, which makes its design independent of external, uncontrollable factors. It is not a big problem when the implementation of this routing is coded straightforwardly towards its goal. Both validations can be considered extensions to the class which calls for the validation. They are not part of the routing, as the routing can proceed anyway, whatever result comes back from the validation. As pointed out in the section about routing, the routing is a friendly process. In the example of the portlet the user is visiting the database to get his translation. To fulfill a routing, every class in the line must have enough information to know to which class the data will be passed next. This information can be stored in the first method call or in the data object sent down the line, but it must be there. A routing must always make some assumptions; a routing of only neighbours leads nowhere.
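The wave variant of figure 8 can be sketched in code. The names (Bean, Dao, BusinessLayer, ServerFacade) follow the layers of figure 7, but the bodies are stand-ins: the facade performs exactly one hand-over and makes no assumptions about the relationship between the business layer and the DAO:

```java
class Bean {
    String payload;
    boolean valid;
    Bean(String payload) { this.payload = payload; }
}

class Dao {
    Bean process(Bean b) {
        b.payload = b.payload + " [stored]";   // stand-in for database work
        return b;
    }
}

class BusinessLayer {
    private final Dao dao = new Dao();

    Bean returnRequest(Bean b) {
        b.valid = !b.payload.isEmpty();        // business validation
        return b.valid ? dao.process(b) : b;   // the layer decides itself
    }
}

class ServerFacade {
    private final BusinessLayer businessLayer = new BusinessLayer();

    // Wave: one hand-over, no direct call to the DAO.
    Bean handle(Bean b) {
        return businessLayer.returnRequest(b);
    }
}
```

In the ridge variant the facade would first call the business layer, inspect the result itself and then call the DAO directly, thereby acquiring knowledge of both classes and of their relationship.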
This guideline states that a system will not send any information which it knows is only useful for its own functioning. Status information relevant to the internal functioning of the system will not be sent across the line; it will be kept private. A basic example is shown by the Google translate portlet. When entering the value 'nu' for translation and asking for the translation from 'recognize language' to English, the result is from Swedish to English and the translation is 'now'. Although correct, it could have been from Dutch to English as well, with the same result in English, or from French to English, in which case the translation should have been 'naked'. That is the essence of this constraint: only take the decision at the time it is appropriate. In the translation portlet there seems to be a preferential order to find words in different languages. That preferential order is uncoupled from information of the user interface. Feedback is given using the locale set in the browser, but the preferred recognized language when no language is specified appears to be Swedish. The system that returns the translation uses no information from the user interface object. The preferred language of the user is status information of the subject. It indeed should not be used by any other system as a guidance for behaviour and therefore not be included in the transmission of the data to other systems. Another way this constraint serves as a guideline is by transferring data in such a format that any other system does not require the same functionality the system itself has. Imagine system A, which connects to a database. It must have a JDBC driver and manage SQLExceptions. System B, to which data is transferred, should not need knowledge of these requirements in order to process the data received from system A. Therefore system A should never transfer data which requires either JDBC or the catching of SQLExceptions. From system B to A the same rule applies. The source
where the information comes from, like the form used, should never be transported across the system boundaries. System B will ask system A for a certain action. What that action precisely will be, is defined by system A, so system B will be able to talk to different kinds of systems. This constraint can be very useful in delineating systems: as long as the use of libraries or the exchange of exceptions is meaningful, classes belong to the same system.
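The JDBC example above can be sketched as follows. The class names (SystemA, DataAccessException) and the fake query are assumptions; the point is only that system A keeps its JDBC concerns private and lets nothing cross the boundary that would demand JDBC knowledge from system B:

```java
import java.sql.SQLException;

// An exception that demands no JDBC knowledge from the receiving system.
class DataAccessException extends RuntimeException {
    DataAccessException(String message) { super(message); }
}

class SystemA {
    // Internal helper standing in for real JDBC access.
    private String queryDatabase(String id) throws SQLException {
        if (id.isEmpty()) throw new SQLException("bad id");
        return "row-" + id;
    }

    // Boundary method: returns plain data and translates the SQLException
    // into an exception that stays meaningful outside the system.
    public String fetch(String id) {
        try {
            return queryDatabase(id);
        } catch (SQLException e) {
            throw new DataAccessException("could not fetch " + id);
        }
    }
}
```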
The other way this principle restricts the act of sending data is that the data sent should not pose demands on the contract of the receiving system. On the level of the implementation of the receiving system, the data sent can be translated back to the original object type. That way any receiving system can serve the maximum number of data types to process. The translation to the original object type is therefore not part of the contract of the receiving system, but is the responsibility of a specific implementation of the receiving system. Consider a publisher who wants to store information about a certain publication. The publication can be a book or an issue of a magazine. The action of storing the data is equal, the use of the receiving system likewise, but the place to be stored and the fields to be stored are quite different. The sending system will send data in the format of a Publication object, and the receiving system will decide at run time which implementation is the correct one to process the storing of the Publication object. The implementation will have the responsibility to translate the Publication object to the proper instance and process the storage accordingly. This guideline is complementary to the Liskov Substitution Principle and design by contract.
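The publisher example can be sketched like this. All names (Publication, Book, MagazineIssue, StorageSystem) and the storage paths are hypothetical; the contract only demands a Publication, while the implementation translates back to the concrete type:

```java
abstract class Publication {
    final String title;
    Publication(String title) { this.title = title; }
}

class Book extends Publication {
    Book(String title) { super(title); }
}

class MagazineIssue extends Publication {
    final int issueNumber;
    MagazineIssue(String title, int issueNumber) {
        super(title);
        this.issueNumber = issueNumber;
    }
}

class StorageSystem {
    // Contract: store any Publication. Choosing the proper place and
    // fields is the responsibility of the implementation, decided at run time.
    String store(Publication p) {
        if (p instanceof Book) {
            return "books/" + p.title;
        }
        if (p instanceof MagazineIssue) {
            MagazineIssue m = (MagazineIssue) p;
            return "magazines/" + m.title + "/" + m.issueNumber;
        }
        return "misc/" + p.title;
    }
}
```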
The last way it influences data transmission between systems is by exposing information. There are pieces of information which have their meaning across systems, even if these systems are neighbours or totally loosely coupled. A banking account number is a banking account number regardless of the system in which it is processed. This information is therefore never private and should be sharable system wide. This example touches the design of business object fields, which is out of scope for this document. For now it is important to recognize that 'private no' can also mean that, for some information to be useful, it must be identifiable throughout the whole application landscape.
navigation path in a system is inheritance itself. Communication between systems should be independent of the navigation within a system and solely focus on the interoperability of systems. How the contract specifications are met by any implementation of a system is the responsibility of the implementation. This makes it possible to replace a system with mock objects for unit testing, which is a logical layer to choose for mocking, because a system should be considered one unit of action. The Liskov Substitution Principle for architecture can be defined as: system contracts should be specified for external communication only, leaving internal navigation the responsibility of each implementation of the system.
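Specifying the contract for external communication only makes replacing a whole system with a mock trivial. A minimal sketch with assumed names (TranslationSystem, Client); the mock stands in for an entire system, not a single class:

```java
// The contract of the system: external communication only.
interface TranslationSystem {
    String translate(String word, String targetLanguage);
}

// A mock implementation: the internal navigation of the real system is
// irrelevant to the caller, so the whole unit can be replaced for testing.
class MockTranslationSystem implements TranslationSystem {
    @Override
    public String translate(String word, String targetLanguage) {
        return "mock-" + targetLanguage + "-" + word;
    }
}

class Client {
    private final TranslationSystem system;

    Client(TranslationSystem system) { this.system = system; }

    String showTranslation(String word) {
        return system.translate(word, "en");
    }
}
```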
Contracts
There are three types of contracts, as there are three types of systems: one for interfaces, one for services and one for aspects. Within a contract the input, the output and the actions to be performed can be described. A particular implementation of a system might need other classes to fulfill its contract. These classes are called contract partners. A contract partner can be a reference to a single class or to a whole system. Characteristic of a contract partner is that the implementation of the contract partner will change in line with the implementation of the system. The contracts for interfaces are formulated in the business process and are owned by the business process owner. The input for these contracts are objects which represent business objects. The contract is a description of the mission and vision statements for that step in the business process. The actions and the arrangement of these actions are formulated in the translated business process model and described in the contract. Contract partners are those classes which help the interface to fulfill its contract. Contract partners can be architectural interfaces, as an interface might perform a subcontract of the business contract or another part of the business process. The contracts for services are formulated by the architect and owned by the IT department. The input for these contracts are objects which do not represent business objects. In the implementation of such a contract business objects can be recreated, but in the construction phase the received objects do not represent business objects. That makes it possible for these services to be reused for different business processes. A service should perform a relatively isolated task, making it possible to be architecturally loosely coupled. Contract partners should therefore be all classes which help to fulfill the contract. It is preferable when these services do not need subcontracts to perform their job, to heighten reusability.
The contracts for aspects are better considered extensions to the contract of the class which is calling the aspect. The question which classes are contract partners should be quite straightforward, as all classes used in the aspect will normally be contract partners.
Construction phase
Entering a system starts with the construction of the system object. At that time the system has no information at all. No new instance, which would cause unnecessary dependencies, should be created. System status variables which are used by all implementations of the system can be declared, together with all contract partner classes. These classes can be loaded, but should not be initialized yet in the construction phase. According to the Liskov Substitution Principle, the only task that can be performed in this phase is to process the received information. Having processed the received information using setters and getters, the system can be initialized outside the constructor. This does not imply that there are no
dependencies in the classes received by the system. When for instance an Employee object is received by the constructor and that class needs the Person class, both these classes can be initialized during this construction phase. Restricting this phase to the processing of the received information keeps the construction of the system independent of any implementation and will therefore put no restrictions on the calling system.
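The construction phase can be sketched like this. The Employee and Person classes follow the example in the text; the system name (PayrollSystem) and its status variable are assumptions:

```java
class Person {
    final String name;
    Person(String name) { this.name = name; }
}

class Employee {
    final Person person;                   // Employee needs Person itself
    Employee(Person person) { this.person = person; }
}

class PayrollSystem {
    // Contract partner reference: declared, not newly created here.
    private final Employee employee;

    // System status variable used by all implementations.
    private boolean initialized = false;

    // Construction phase: only process the received information;
    // no new instances that would cause unnecessary dependencies.
    PayrollSystem(Employee employee) {
        this.employee = employee;
    }

    // Initialization happens outside the constructor.
    void initialize() {
        initialized = true;
    }

    boolean isInitialized() { return initialized; }

    Employee getEmployee() { return employee; }
}
```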
Execution phase
After the call for construction of the system comes the call for execution of the system. The first step will be collecting the required information to start itself up, setting all status variables to their initial values and transforming the received data into the requested format. External systems and contract partners are fully instantiated during the execution phase. The technique used to instantiate other classes depends on the congruence in change request profile. Calls to external systems or subcontracts imply that no congruence in change request profile can be expected; these will therefore preferably be instantiated at run time. All implementations of contract partners can be expected to have change request profiles in line with the main system and are therefore preferably instantiated using design by interface. Contract partners can have getters and setters; systems instantiated at run time do not require that. Based on the status variables, the system will navigate through its own path. Decisions about the path to be processed are like the wave view mentioned in the section about the LoD. Designing a system as described here should assure that no dependencies caused by any implementation can exist. That will make change requests to a system better manageable.
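A sketch of the execution phase, with assumed names (ReportSystem, Formatter): the contract partner shares the system's change request profile and is bound through design by interface, while an external system would instead be resolved at run time:

```java
interface Formatter {                       // contract partner interface
    String format(String raw);
}

class UpperCaseFormatter implements Formatter {
    public String format(String raw) { return raw.toUpperCase(); }
}

class ReportSystem {
    private Formatter formatter;            // declared during construction
    private String status;                  // status variable

    void execute(String raw) {
        status = "STARTED";                 // set initial status values
        // Contract partner: instantiated via design by interface, since
        // its change request profile is in line with the main system.
        formatter = new UpperCaseFormatter();
        // An external system or subcontract would instead be resolved at
        // run time (e.g. looked up by name), avoiding a compile-time tie.
        status = formatter.format(raw);
    }

    String getStatus() { return status; }
}
```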
assistance of the caller, then IoC is applied successfully. As a result it is necessary to be able to postpone the choice of implementation to run time, because that is the first moment the callee will know that there is a call to which it has to respond. Often IoC is restricted to this moment of the call, but its power goes beyond that moment in time. Let's take a look at the constraints applying to the communication between two independent systems or within a system.

Table 1: Comparison of communication constraints for systems

Between systems:
1. The existence of the other system is not known or need not be known at design time.
2. Systems can only focus on what they do themselves.
3. Replacing implementations of systems has no side effect on the other system.
4. Construction of the callee will be done at run time.
5. Implementation is hidden by definition, as the callee is not known before run time.
6. Data exchange is restricted to 'personal yes, private no'.

Within a system:
1. All elements are designed in relation to one another.
2. Focus on cooperation with other elements; together the overall job is performed.
3. Replacing whatever has likely effect on other elements of the system.
4. Construction of cooperating elements is known at compile time.
5. It is best practice to hide the implementation.
6. All data is by definition private.
The first five constraints for the situation 'between systems' are automatically applied when using IoC; the usage of IoC and this type of communication fit very well. The first five constraints for communication within a system, on the other hand, are not points of focus when applying IoC, and are sometimes even in contradiction with handing over control from the caller to the callee. When the caller and the callee are designed in relation to one another, it is not very beneficial to hand over the control from the caller to the callee, as they are both designed to work together. Constructing objects using Dependency Injection, or any other technique favoring IoC, is not very useful within systems. It has far more added value to restrict the usage of IoC to those situations in which communication between independent systems has to take place.
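Restricting IoC to the boundary between systems can be sketched with plain constructor injection; no container is required, and all names (MessageChannel, OrderSystem, InMemoryChannel) are assumptions:

```java
// The boundary between two independent systems: the caller only knows
// the contract; the concrete callee is chosen elsewhere at run time.
interface MessageChannel {
    void send(String message);
}

// One possible implementation, unknown to the caller at design time.
class InMemoryChannel implements MessageChannel {
    final java.util.List<String> sent = new java.util.ArrayList<>();
    public void send(String message) { sent.add(message); }
}

class OrderSystem {
    private final MessageChannel channel;   // injected dependency

    // Constructor injection: control over which implementation is used
    // is handed over to whoever assembles the application.
    OrderSystem(MessageChannel channel) { this.channel = channel; }

    void confirm(String orderId) {
        // Within the system, plain construction is fine: these statements
        // are designed in relation to one another and change together.
        String text = "confirmed " + orderId;
        channel.send(text);
    }
}
```

The interface marks the system boundary; inside OrderSystem no IoC is used, in line with the table above.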
other two guidelines, as IoC demands a two-sided responsibility to control communication. IoC is like the viewpoints 'you' and 'me' in a conversation: they change with every speaker. The other two guidelines do not have this change of viewpoints; both have the viewpoint 'we'. Concerns can only be separated within a framework that connects them, otherwise one of the two concerns is not fulfilled. The same kind of reasoning applies to the Single Responsibility Principle, where to segregate responsibilities both must be met in the end. As a result, IoC is preferably used to delineate independent systems, and the other two guidelines should be used within a system. The delineation of systems should be done by the architect, the design of the systems within preferably by the development team. It is they who will likely be responsible for the maintenance too, and letting the development team design the system will ensure that the code created is maintainable by them. As stated before in chapter 1, this will enrich the work of the development team and give each developer the possibility to explore different career paths. Testing using mock objects can be restricted to the testing of systems as a whole, whereas technical testing, like JUnit in Java, can be applied within systems. Another situation where IoC applies, next to communication between independent systems, is the moment before business logic is expected to change significantly. An example of that is the moment before a workflow is started. At that time independent status objects will be needed, and for the sake of simplicity it is more convenient to create concurrent implementations using IoC. Workflows are prone to functional changes and should therefore be instantiated as independently of one another as possible. Compare that to the messages in an email sent by an application: the content of the messages might change, but the logic rarely, and it would therefore be unnecessary to instantiate the mail class using IoC.
When Inversion of Control is used to design a complex process of an application, the result will still be a complex process. The complexity of a process is not a property of the coding language, but of the business process. The requirements, constraints and dependencies of the process have to be coded; no code implementation guideline can avoid that. It is out of the control of these guidelines. Using other principles as a guideline might even result in a more complex implementation of the process. With respect to the five purposes mentioned in chapter 2, IoC supports the interoperability, robustness, reusability and extensibility of the application. Separation of Concerns and the Single Responsibility Principle offer support from the purpose of robustness onwards. IoC is often equated with Dependency Injection. That IoC and Dependency Injection are so strongly associated with each other has to do with the fact that Dependency Injection is the main technique frameworks use to deliver IoC. In Martin Fowler's article, Dependency Injection is said to be a less confusing term than IoC. According to the PicoContainer community, Dependency Injection focuses on component assembly, whereas IoC also refers to configuration and lifecycle management; IoC in this view is a design pattern, or principle, directed at dependency resolution. Stefano Mazzochi comments on this conception in his blog, stating that IoC is a general principle to increase isolation and thereby improve reuse. Although I tend to agree with Stefano Mazzochi that IoC is more than the technical concept, if this is a misconception, then it is one of the most productive ones in the history of programming. Moreover, I think that these two viewpoints on IoC do not contradict each other. Looking more closely at Dependency Injection reveals that it accurately describes how to solve dependency resolution, whereas Stefano Mazzochi is referring to what IoC is.
Looking at what Dependency Injection does, it could be described as Independent Instantiation, which exactly matches the principle described by Stefano Mazzochi. I think these frameworks are excellent technical concretizations by which the community can make use of IoC. That IoC can be used independently of an IoC container, and for instance be applied using the Command pattern, does not change much. The best practice for implementing IoC is to use some form of Dependency Injection within an IoC container.
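'Independent Instantiation' can be illustrated with a few lines of reflection, the mechanism a container uses under the hood. This is a simplified sketch under my own naming, not how any particular container actually works:

```java
interface Greeter {
    String greet();
}

class EnglishGreeter implements Greeter {
    public String greet() { return "hello"; }
}

class Instantiator {
    // The caller depends only on the interface; the concrete class name
    // arrives as data (in a real container: from configuration).
    static Greeter instantiate(String className) {
        try {
            return (Greeter) Class.forName(className)
                    .getDeclaredConstructor()
                    .newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("cannot instantiate " + className, e);
        }
    }
}
```

The caller is compiled without any reference to EnglishGreeter: the instantiation has become independent of the call site.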
It was built. The development team had realized that extensions would be made later on. Therefore they did not just build according to the specifications, but made a more robust design: they had a configuration document in which several different types of flows could be configured. After a while the banking people indeed came back and asked whether it was possible to have three flows, because different transfer amounts required different security. They needed a six eyes version for amounts above some threshold, and even the control of the manager when the amount was considered big. The team looked into it and asked about a managerial role. It was not there; it had to be created and implemented in the authorization mechanism. In the original routing there was no code present to make distinctions between roles at a particular place in the flow. It could be done without any real problem, and the managerial role could be added as well, but was this really necessary? The banking people insisted, and the team started to work on it. The next release came out, and it looked more or less like this:
Figure 10: process description bank transfer example including threshold
The banking people were very satisfied with this system and with the adjustments made. All worked very well. But later they realized they had some more wishes to be implemented. Actually, there should always be a supervisor involved in the second and third flow. And by the way, not everybody has enough authority to start the transfer of big amounts. The team discussed it and came back to the banking people. First of all, there was no role for supervisors. It would have to be added to the application and to the authentication mechanism, in the same way as was done for the managers. Would it be acceptable if the supervisor in the second
flow always performed the last check? The supervisor would then only have to check those transfers which had already been approved once; the time of a supervisor is more valuable and scarce than the time of the other controllers. And finally, which role should start the third flow? The banking people answered that the role of supervisors should indeed have been there, that supervisors should indeed perform the last check, and that only they and the managers could be trusted enough to perform the transfer of the third flow. And so the team created the next release, which looked more or less like this:
Figure 11: process description bank transfer example including supervisor control
Again the banking people were satisfied for a while, but then realized some important issues were not met. They got back to the team and pointed out what was missing:
1. if a manager is for some reason away, he must appoint someone who can take over his position. That other person should have at least the role of supervisor, but another manager is preferable. Whoever it is, with every transaction the name must be known, because he will be responsible in the end for the transfer of the money,
2. all names of employees should be logged anyway, and
3. what about peculiar transactions? The bank will not cooperate in dubious transfers. They should be sent directly to the manager, who will then decide what has to be done.
The team grew desperate. A whole new type of flow added? One that is instantiated by the decision of the manager and not by some algorithm? The manager herself can temporarily be replaced by someone else? The new type of flow can interfere with every step? That is not what was agreed upon in the first place. That is far from the original design. The banking people understood the problems, but to stay a trustworthy bank these rules had to be applied; it could not be done otherwise. They started to bargain. A lot of meetings were held. Sometimes there were emotions on both sides, but they also knew they had to find a way out together. In the end, after several escalations, they agreed upon the following:
- the system will not be rebuilt. Until now it has done a good job, the bank has always been very satisfied with the team effort, and the team has always responded well to the new features the bank had to have implemented to stay secure,
- the control by the manager will from now on be called 'managerial control', and who has the managerial role will be read from a new configuration document, for whose content the manager has the end responsibility,
- the holder of the role, that is, the person, will be read from the user object and added to the logging of the system, and
- peculiar transactions will be checked before the amount is checked. If a transaction seems to be peculiar, that is directly reported to the manager. The manager will look at it, and only when he confirms that the transaction is not peculiar will it go into the normal flow.
Both were satisfied in the end: the banking people because their system was becoming better and better, the team because they ended up with manageable changes. The role of the manager would now not be set in the configuration document of the flows, but be collected from some other place; it was the only real exception to the core of the system. The logging of the person who performed the task was a simple adjustment, and the check for peculiar transactions was moved out of the system to its entrance. That way the core did not have to be changed, and the result of the peculiar transactions could afterwards continue in the normal flow. The situation in which the flow could be interrupted at every step was avoided. The flow became like this:
Figure 12: process description bank transfer example including peculiar transactions
For a while the banking people, although not amused by the last discussions, were satisfied with the system. But eventually they agreed internally that the system could be made more secure, and therefore appointed someone whose daily task would be to control transfers at random. That person had the authority to hold up any process at any time. He would not have to ask people when he controlled a task; he could overrule anyone. On second thought, they also agreed that the manager could never be overruled by the controller, as she was already controlled by her own manager and that ought to be sufficient. Having made these decisions, they went to the team and announced them. How do you think the team reacted? This story is totally fictitious. Well, not totally. The distinction between the four and six eyes principle does exist. The manager involved in controlling exists, separate workflows for peculiar transfers exist, and functions like the controller do exist. What is fictitious is the bank, and a bank
that does not have an effective workflow for this money transfer. But the process is very useful in showing how a system, originally designed robustly according to the original demands, in the end could not be robust enough. It actually crashed under its own success. One could argue that the system was not robust in the first place. Although principally correct, that objection ignores the fact that this is meant as an example, that it never happened in reality, and that the system's success opened up new flows to be thought of, which could never have been conceived if the system had not been there. Have you never seen a process like this? Where a system evolves over the years, becomes more and more important for the organization, and in the end suffers from the combination of being very important and very unmaintainable? Or does that only happen in Holland?
The yellow hexagons are the interfaces, the blue hexagons the services, and the pink ones the aspects. The Data Storage interface is presented as an interface that it is logical for the account transfer to call. It is not implemented in the code, as it is considered out of scope for this example. Every line in the diagram stands for communication between independent systems. The numbers refer to the order in which they appear. Every communication line between systems is governed by a contract. As interfaces are tightly linked to the business process, the responsibility for their mutual contracts lies with the business process owners. All other contracts, in which at least one aspect or service is involved, are managed by the IT department. The business owner should be able to recognize the ordering of interfaces, but need have no idea which services perform the majority of the actions.
Logging, exception handling and internationalization are aspects which are widely used throughout the application landscape, and it is therefore not necessary to stipulate contracts for them with every system they are involved in. Each has a general contract. These aspects and their classes are therefore not shown in the two presented models; including them would make the models unnecessarily complicated.
UML relationships
I used four types of relationships, each having two variants. The types are composition, aggregation, association and generalization. I use composition when the life cycle of the part object is controlled by the whole; this is like the family relationship described in the section about the Law of Demeter. Aggregation is used when the whole has the part as a member variable, or when it assumes much knowledge about the referred object; that is a friendly type of relationship. Association is used when the whole does not have the referred object as a member, when it only uses the reference to exchange an object which itself is a member elsewhere, or when the referred object is an aspect; then the classes can be considered neighbours. Generalization is used to show inheritance or interfacing. Although aspects are actually friends, and should therefore have an aggregational type of relationship, an association is used. That is because aspects are always in control of the contract: the class calling an aspect has to comply with the contract stated by the aspect. An aspect does not belong to any class. Composition, aggregation and association can be one-way or two-way: one-way when the whole does not expect an answer in return, two-way when an answer is retrieved. Different line types are used to distinguish generalization in the case of inheritance from generalization in the case of interfacing.
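The distinction between these relationship types can also be sketched in code. A minimal, hypothetical Java example (the class names are illustrative and do not come from the models):

```java
// Composition: the Engine's life cycle is controlled by the Car (family).
class Engine {
    void start() { /* ... */ }
}

class Car {
    private final Engine engine = new Engine(); // created and owned by the whole

    void drive() { engine.start(); }
}

// Aggregation: the Team holds a Driver it did not create (friend).
class Driver {
    String name() { return "driver"; }
}

class Team {
    private Driver driver; // member variable, life cycle managed elsewhere

    Team(Driver driver) { this.driver = driver; }
}

// Association: the Garage only uses a Car passed to it (neighbour).
class Garage {
    boolean inspect(Car car) { // no member variable, just an exchanged object
        return car != null;
    }
}
```

The same vocabulary of family, friend and neighbour from the Law of Demeter section maps directly onto these three shapes.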
The green rectangles are classes, the yellow ones interfaces or superclasses. Splitting the data model of the application into data models for each system creates a faceted overview of the application as a whole. The whole has disappeared from the data model; that overview should be provided by the contracts.
Figure 14a: Data model of the Employee aspect
The _UserInterface class communicates with its surroundings using interfaces. It suffices to have knowledge of the interface at compile time and to call the proper class to instantiate at run time. In the code, the interface of the _TransferResult class is called directly by the _UserInterface, but that is a matter of choice, in this case for simplicity of the coding.
Figure 14d: Data model of the Transfer Result Interface
A constraint on this code is the fact that it is an example, which means that not all aspects are worked out thoroughly. For instance, the flow in the TransferFlow class is performed using an iteration, and decisions about the validity of a transfer are made at random. One could hardly believe that is how a bank would work. The _UserInterface class does not have an interface. In practice it would be one of many implementations, each called upon by a command coming from a user interface. Although the architectural interface to store the data is depicted in the architectural implementation model, there is no code equivalent of that interface. A configuration mechanism is also lacking. That could be an aspect and would be used instead of the FlowConfigManager. Now the FlowConfigManager is designed as an internal aspect - which is a contradiction in terms - of the flow engine.
The first layer which can be created consists of the different architectural interfaces and the objects needed for exchange between these interfaces. Next, aspects can be isolated, and eventually the services. In this example, which is developed from a fictitious process, the isolation of architectural interfaces is already quite arbitrary and therefore complex. Normally an architectural interface should be tightly linked to the description of a business process, and the objects exchanged between these interfaces should be recognizable in the set of business objects. For instance, the TransferStatus object is intuitive, as each transfer will get some status in order to decide whether the transfer will eventually be executed. The TransferContainer is not directly intuitive, but it serves as a vehicle for the combination of a transfer business object together with its status object. Having these objects makes the distinctions between the different architectural interfaces more robust to change, as they will process objects close to the set of business objects. Objects which are exchanged between the interfaces of the architectural interfaces and services should be as independent of the business process as possible. That optimizes the robustness and reusability of the service. The Flow service can be used by more architectural interfaces than the ITransferControl architectural interface, and because of that the exchanged object is of type Object.
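The pairing of a business object with its status object can be sketched as follows. This is a minimal reconstruction under assumed field and method names, not the actual classes of the example:

```java
// The business object: the transfer itself. Its values do not change
// during processing.
class Transfer {
    private final double amount;

    Transfer(double amount) { this.amount = amount; }

    double getAmount() { return amount; }
}

// The status object that accompanies the Transfer and directs its processing.
class TransferStatus {
    private String status = "NEW"; // hypothetical initial status

    String getStatus() { return status; }

    void setStatus(String status) { this.status = status; }
}

// The container: a vehicle combining a Transfer with its TransferStatus,
// so both always travel together between systems.
class TransferContainer {
    private final Transfer transfer;
    private final TransferStatus ts;

    TransferContainer(Transfer transfer, TransferStatus ts) {
        this.transfer = transfer;
        this.ts = ts;
    }

    Transfer getTransfer() { return transfer; }

    TransferStatus getTs() { return ts; }
}
```

Because the container is the only thing exchanged, a receiving system never has to assume that the transfer and its status arrived separately and consistently.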
3.6.2.3 Coupling
Coupling between architectural interfaces is designed to be doubly loosely coupled. Every time an object is exchanged between these interfaces, the object is checked for validity by the transmitting interface. The receiving interface can rely upon that fact and process the received object using its own standards. The overall process is a transportation process, as the object created at the beginning of the process remains the subject throughout all steps and its values do not change. The TransferStatus object accompanies the Transfer object throughout the processing; based on the changes in this status object, the processing of the Transfer object is given direction. The workflow in the Flow subsystem is very basic. A workflow consists of a combination of transformational and transportational processing at the same time: based on transformational changes, the transport of the subject is directed. Only the transformational processing has been worked out, using randomization; the transportational processing, which should be configured based upon the transformational processing, is kept straightforward. If the transportational processing had been worked out more properly, then after every
output of the transformational processing it would evaluate what has to be done next. Every step in a workflow is an event for the workflow engine. The history of previous events can be important in deciding what the next step in the process should be. In the processing of bank account transactions, events are often historical. That would have made the example too complex and was therefore not implemented. There is no example of translational processing available, as the business object stays the same all the time. Aspect coupling is used three times. The _Logger and the _EmployeeManagement class have made their own interpretation of how they return results. In the _Logger class, the basic properties for each returned logging instance are centrally defined. No logger object will call its parent logger, as this is set to false during the construction of the logger in the _Logger class. Likewise, the _EmployeeManagement class has its own logic for returning employees when a request with the exclusion of a role is made.
The first way the Law of Demeter is respected is that there is a wave in the navigation: interfaces are only aware of their direct neighbours and the systems they themselves call upon. Another way to respect the Law of Demeter is that the communication between systems is restricted to the instantiation of a system and one method. That method should either return the object sent, or return a verdict about the object sent. The Account Transfer architectural interface returns a boolean to the user interface, because the question asked by the user interface is whether the transaction can be accomplished. For the user interface that is enough information, as it already has the information stored in the _TopTransfer class to return the proper feedback to the end user. It has no need to know about the particular statuses the transfer business object received during the processing by the Account Transfer interface. The Transfer Control architectural interface returns the complete TransferContainer object, because the Account Transfer asks the Transfer Control interface to check the transfer. This is a more complicated question than a simple yes or no. The result of a peculiar transfer, for instance, is not only a no; it must also still be saved somewhere, as these transfers must be reported by the bank. The (not implemented) architectural interface Data Storage, on the other hand, would return a boolean to the Account Transfer interface telling whether the storage of the data has been successful or not. The Law of Demeter can be said to be violated in every line in which the Transfer object and its TransferStatus object are retrieved from the TransferContainer, as a method is questioned two objects deep, as can be seen in tc.getTs().getStatus(). Combining the Transfer and its accompanying TransferStatus object in one TransferContainer object, however, actually simplifies code and communication between classes.
If both objects were transported separately, the communication between the different classes would have to make more assumptions than it does now. As the main purpose of the Law of Demeter is to lessen the assumptions made during communication between classes, the use of the TransferContainer in the end respects the Law instead of violating it. Finally, the LoD is respected by letting all architectural interfaces and systems be neighbours of each other, sharing only one method in which an object or a boolean is exchanged. That the first rule of thumb of the privacy principle, that own constraints should be kept private, is respected can be seen in the throwing of an Exception by the _InterfaceManager class. Any class
making use of the service of the _InterfaceManager class does not have to know what can go wrong within that class. Handling a general exception will do for them; the message sent by the _InterfaceManager class in its most general form is already clear enough. When you look at the technical implementation model, you can see that there is a line from the Flow class directly to the TransferContainer class and not from the _IFlow interface. This can be done because the Flow service receives an object of the class Object from the Transfer Control interface. The data sent by the Transfer Control interface is as general as possible. That way the Flow service is not restricted to being used as a private subsystem of the Transfer Control interface. Any implementation of the IFlow interface will have to cast the received object to the class needed. The contract of the IFlow interface can be kept very general, with the effect that all the implementation classes have a lot of freedom to adapt themselves to almost any kind of request. This is in coherence with the privacy principle, that when data is sent no constraints should be imposed on the contract of the receiving class, and with the Liskov Substitution Principle, that no contract should depend on implementation matters. The way the exchange of the business objects between the architectural interfaces and their implementations is organized supports this rule of thumb of the privacy principle too. From the User interface to the Transfer Result interface the exchange is generalized using the superclass _TopTransfer. Every implementation of _TransferResult will cast the superclass to the class needed. This is depicted in the data model by a uni-association with the superclass and an association with the subclass. The third way the privacy principle comes into play is by acknowledging that a _TopTransfer object can be used by different systems.
The _TopTransfer object used in the _UserInterface is the same as the _TopTransfer object which would be used in the Flow object. In the current example there is no actual use of the _TopTransfer object in the Flow object, but it is easily imaginable that there would be. Not reinventing the wheel by creating a _TopTransfer object in every system serves the maintainability and interoperability of the system as a whole. It is an effect of the Liskov Substitution Principle to postpone the initialization of the system until the system is requested to respond. If the initialization took place during the construction of an object, the calling class would become linked to the inner functionality of the callee. In the current versions of the constructors, the only action taking place is the assignment of the received data to an inner placeholder. The constructor is not part of the contract and should therefore not intermingle with the processing of the received data. If any exception returns to the caller, it must occur while using the methods which caller and callee share. Therefore the initialization of the system should occur in these methods and not in the constructor. It decouples the constructor and the contract specifications from each other. It restricts the influence of the constructor to the storage of the received data and relieves the contract implementation of the responsibility of handling that. Another effect of the LSP is the organization of status variables. The status of a transfer belongs to the Transfer status business object. These variables are used for navigating the Transfer through the application. Variables referring to this business object are collected in the TransferStatus object and can, and should, therefore be used in different systems. When the business changes these statuses, all systems will have to be changed too. These statuses are expected to change rarely.
To make these variables publicly available to all relevant systems, they are stored in the TransferStatus class. The variables used to make decisions about any transfer, on the other hand, are very implementation specific and are as a result put in the implementation of the flow. Putting them there leaves no trace in the implementation of the architectural interface nor in the implementation of the TransferStatus class. When a change in the decision process is made, only the implementation of the flow is
affected. The last way mentioned here can be seen by looking at the methods in the contracts. All contract methods can be divided into two groups: one group of methods for the handling of the object received by the constructor, and another group of methods without any parameter. The latter makes it easier to provide independent implementations. The casting to the requested subclass of the business object is performed within the contract methods. That gives great flexibility in how the contract is implemented. A good example of this is that even with this kind of simple flow, two different styles of implementation can be created. The PeculiarFlow first has a step in which it is checked whether the transfer is peculiar or not, sending it to the manager on affirmation. The TransferFlow performs a couple of steps in an iteration, until either the iteration ends or one of the bank employees disapproves the transfer. That flexibility would not have been achieved if the design of the TransferControl interface knew much about the flow systems.
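The shape of such a general contract can be sketched as follows. This is a hypothetical reconstruction, not the actual code of the example: the method names are mine, and the container is a simplified stand-in. The interface receives a general Object and each implementation casts it internally, so the contract poses no constraints on the caller.

```java
// Simplified stand-in for the container exchanged between the systems.
class TransferContainer {
    private final boolean peculiar;

    TransferContainer(boolean peculiar) { this.peculiar = peculiar; }

    boolean isPeculiar() { return peculiar; }
}

// The general contract: one method handling the received object,
// the rest parameterless (method names are assumed, not from the example).
interface IFlow {
    void receive(Object subject); // accepts the most general type possible
    boolean process();            // no parameters: easy to implement independently
}

// First style: a single check, escalating to the manager on affirmation.
class PeculiarFlow implements IFlow {
    private TransferContainer tc;

    public void receive(Object subject) {
        this.tc = (TransferContainer) subject; // cast inside the contract method
    }

    public boolean process() {
        return !tc.isPeculiar(); // a peculiar transfer leaves the normal flow
    }
}

// Second style: a few approval steps in an iteration.
class TransferFlow implements IFlow {
    private TransferContainer tc;

    public void receive(Object subject) {
        this.tc = (TransferContainer) subject;
    }

    public boolean process() {
        for (int step = 0; step < 2; step++) { // e.g. the four eyes principle
            if (tc.isPeculiar()) {
                return false; // any disapproval stops the iteration
            }
        }
        return true;
    }
}
```

Both classes satisfy the same very general contract while organizing their internals completely differently, which is exactly the flexibility the text describes.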
The use of Dependency Injection is preferably restricted to calls made to an architectural interface. There are two reasons for this: the first is performance, the second is to benefit maximally from it. That performance will benefit from restricted use of reflection mechanisms is obvious. The latter might require some explanation. Please take a look at the three interfaces. Together they form concurrent lines of implementation: one can imagine a line for the transfer of shares, disposals and money next to each other. The class which starts the user interface for disposals will call the disposal class of the account transfer interface, which will call the disposal class of the transfer control interface. It works equivalently to the example. Each interface will be called using Dependency Injection. That implies that all classes which are called by one of these classes are dependent in their implementation on that interface. The flow for the control of disposals is only called by the disposal implementation of the Transfer Control interface. Therefore the implementations of the interface IFlow will at run time change in sync with the implementation of the ITransferControl interface. Using Dependency Injection for the architectural interface causes all classes called by this implementation to be selected at run time. If these implementations were nevertheless also instantiated using Dependency Injection, the benefit of the Dependency Injection used for the architectural interface would be minimized instead of maximized. However counter-intuitive it might seem, restricting the use of Dependency Injection might actually maximize its effect. I think that every serious use of Java or .Net should use a Dependency Injection container, but that the use of Dependency Injection as a technique should be restricted to those systems which serve as crossroads in deciding which way to go. In this example I used a form of constructor injection.
This does not imply that I favor constructor injection over setter injection. Both types of dependency injection have proven their value. It was just the simplest form for me to use.
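The two injection styles can be illustrated side by side. A minimal hand-rolled sketch, without a container, with illustrative class names:

```java
interface ITransferControl {
    boolean check(String transfer);
}

class MoneyTransferControl implements ITransferControl {
    public boolean check(String transfer) { return transfer != null; }
}

// Constructor injection: the dependency is handed over at construction time
// and can be made final, so the object is never in a half-wired state.
class AccountTransfer {
    private final ITransferControl control;

    AccountTransfer(ITransferControl control) { this.control = control; }

    boolean submit(String transfer) { return control.check(transfer); }
}

// Setter injection: the dependency is supplied after construction, which
// allows reconfiguration but requires the caller to remember to wire it.
class AccountTransferWithSetter {
    private ITransferControl control;

    void setControl(ITransferControl control) { this.control = control; }

    boolean submit(String transfer) { return control.check(transfer); }
}
```

A container automates exactly this wiring; the choice between the two styles does not change the resulting object graph.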
Design patterns
All navigational paths in the systems use the Flow pattern. As will be shown in the last chapter, this is to be expected. Wherever appropriate, the Dependency Injection pattern is used; its use is restricted in order to benefit maximally from it. If there had been enough information, the Specification pattern would have been used to validate the request in the FlowStep. Aspects are called using the Facade pattern, by which means the implementation of an aspect can be changed without interfering with the overall service of the aspect. The Facade pattern is not implemented using the Singleton pattern, as this would put a restriction on performance.
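Calling an aspect through a facade can be sketched as follows, using a hypothetical logging aspect; the class names are mine, not from the models:

```java
// The internal implementation of the aspect; free to change at any time.
class FileLogWriter {
    String write(String line) {
        System.out.println(line); // stand-in for the real output channel
        return line;              // returned here only to make the call observable
    }
}

// The facade: the only class callers see. Swapping FileLogWriter for another
// implementation does not interfere with the overall service of the aspect.
class LoggingFacade {
    private final FileLogWriter writer = new FileLogWriter();

    String log(String message) {
        // central place to define the basic properties of each logging instance
        return writer.write("[LOG] " + message);
    }
}
```

Note that the facade is not a Singleton here: each caller may construct its own instance, so no shared object becomes a synchronization bottleneck.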
4.1.2 Purposes
It is important to realize about a design pattern that it uses coupling to do its job. Coupling is unavoidable, and indeed wanted, to have a process inside an application. The point of design patterns is to benefit from coupling instead of suffering from it. All design patterns share some purposes. The first purpose is the open/closed principle: a design pattern is a best practice for a given functional contract. That means that the explicit functions of the contract might change without a need to change
the relationships between the elements of the pattern. The design pattern has the required flexibility such that only a different implementation of its relationships is needed to stay tuned to the changed functions of the contract. Because of that it serves the extensibility of the code and is at least able to postpone modifications. The second purpose is hiding implementation: every design pattern hides the implementation by separating the needed abstracted relationships from their implementation. The third purpose is standardizing solutions. Complex applications will always need maintenance. Using design patterns can ease maintenance, because best practices are used to solve known problems. The team members working on the application will have a better understanding of the issues when the solutions up to that time were standardized solutions instead of the idiosyncratic solutions of former team members. Often it is said that design patterns help loose coupling. Yes, they do. No, they do not. Yes, they do, because they adhere to the open/closed principle and are able to extend the code as long as the changes are in the range of the functional contract of the design pattern. No, they do not, because loose coupling exists between patterns, not inside patterns. Inside patterns, abstracted relationships are used to fulfill the requirements of the functional contract; the elements have a coupling. Having an optimized pattern of coupling, they perform well. Loose coupling is not a real issue at the level of design patterns; it exists at the higher level of system integration. Reusability is left out of the equation for the same reason as loose coupling. Reusability is decided upon at a higher level of abstraction than the choice for a design pattern. When the implementation using the design pattern is effective, the implementation can be reused effectively. It is not an intrinsic characteristic of a design pattern. Not using a design pattern will not prevent the reuse of a certain implementation, although using one helps.
Reusability and loose coupling share the same pitfall: they can complicate the change and upgrade of a system by wiring too many components together. Striving for these goals can have its downfalls, even when it is done well at the time of execution. The use of appropriate design patterns will never have this type of downfall, which is the reason that reusability and loose coupling are considered neither intrinsic characteristics nor purposes of design patterns. The terminology of tight, strict and loose coupling is used in the classification system; there it has another definition than loose coupling in the sense of having no restrictive relationship at all. Elements within design patterns always have relationships among each other, which is synonymous with saying that elements within a pattern have a coupling. That a pattern is intrinsically based on loose coupling does not imply that that pattern favours loose coupling on an architectural level while other patterns do not. Proper use of a design pattern should favour the aforementioned purposes though, and that should be independent of reusability and architectural loose coupling.
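The open/closed purpose can be made concrete with a small sketch in the spirit of the running bank example; this is a hedged illustration of my own, not code from the example application. The pattern's relationship stays fixed while a new implementation extends the behaviour:

```java
// The functional contract: validate a transfer step.
interface TransferCheck {
    boolean approve(double amount);
}

class FourEyesCheck implements TransferCheck {
    public boolean approve(double amount) { return amount < 10_000; }
}

// Extension: a new check is added without modifying the existing relationships.
class SixEyesCheck implements TransferCheck {
    public boolean approve(double amount) { return amount < 100_000; }
}

class FlowStep {
    private final TransferCheck check; // this relationship never changes

    FlowStep(TransferCheck check) { this.check = check; }

    boolean run(double amount) { return check.approve(amount); }
}
```

FlowStep is closed against modification, yet open for extension: adding SixEyesCheck required no change to FlowStep or to the interface.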
4.1.3 Definition
The definition of a design pattern is: a standard description of abstracted relationships between elements as an answer to a functional contract, optimizing the preservation of the open/closed principle by maximizing the separation between the functional contract and its technical implementation. With 'optimizing the preservation' is expressed that the design pattern is considered the best solution to the functional contract. Other solutions are possible, but none will serve the preservation of the extensibility of the application as well as the solution provided by the design pattern. Not every set of abstracted relationships can be said to be a design pattern. Only those patterns which accomplish the maximal separation between functional contract and technical implementation can be called a design pattern. Maximal separation2 does not imply that the
2 The notion of maximizing the separation might be the reason that loose coupling is so often used to describe the effectiveness of a design pattern.
care for the open/closed principle is maximized too. It is the combination which defines the design pattern. Both are needed. The latter implies that the same set of abstracted relationships is a design pattern in one situation and not in another. However strange that might seem, that is what the first part of the definition describes. The set of abstracted relationships is an answer to a specific functional contract. One cannot use the Observer pattern all the time and create the perfect application doing that. In some situations the relationships described by the Observer pattern fit well, in other situations they do not.
among others. Furthermore, the characteristic must be independent of any external constraint. Finally, the characteristic must cause unique effects. There is only one characteristic to which all these criteria can be applied: the set of abstracted relationships of a design pattern. The set of abstracted relationships of a design pattern is therefore its genotype. The first criterion for the effect is that it must inevitably be shared by all design patterns. Next, there must be a one-to-one correlation with the set of abstracted relationships. If they fail to do that, then differences in the set of criteria will have no predictable outcome in the ordering of design patterns. Finally, the differences in effect must be caused solely by the set of abstracted relationships. Again there is only one characteristic to which all these criteria apply, and that is the relation between the input and output. The design patterns will therefore be arranged using their input, output and processing. As there are only three types of processing, design patterns can be arranged in three main groups. That makes the ordering more comprehensible than displaying all in one group.
the formatting rules of the set and checked whether it can be a key within the set or not. When the value of the key is changed, the key is changed. A class name or a protocol are examples of an ordinal key. Both belong to a set having strict formats that apply to its members. One could throw an exception when the key does not apply to the format of the set. That is the major distinction with nominal keys: no exception can be thrown because of their value. Input can also function without any special trigger provided. An object or class can still be part of the input, but no further instructions are needed by the processing to accomplish the job. All in all, this implies that for each type of processing there are at most twenty-seven possible design patterns. This does not mean that all possible situations need a design pattern or that there is one available. It also means that there are possibilities which are covered by keywords of Java, relieving the architect and developer of the need to use a design pattern for them.
Anti-patterns are patterns which are abstracted solutions for possibilities where no pattern should be used. A design pattern used in the wrong way is not considered an anti-pattern, but an anti-implementation. An example of an anti-implementation is when a design pattern is used outside its restrictions. In an article about patterns there was a pattern named 'Negotiating Agents'. This is said to cover the situation in which agents should resolve possible conflicts before running. These agents should negotiate with one another and finally decide which agent should run how. That is asking for deadlocks, and with every agent added all other agents should be reconsidered to find out if they have a possible conflict with the new agent and how to solve it.
This pattern can be applied successfully, however, when the situation is very well described and a central controller cannot be afforded, as in the wiring of telephone connections and the managing of many simultaneous conversations. Then there is no time to wait for the decision of a central controller. Outside these strict perimeters this pattern should not be used and will have the effect of an anti-implementation; the Flow pattern should be used instead to cover the situation. It is only an anti-pattern when the solution should not be there at all and it will by definition guide the developer in the wrong way. There are two ways for a pattern to be an anti-pattern. The first one is when the pattern is a solution to a problem which should never exist anyway. It is difficult to describe a situation like that in a programming language, but there is luckily a good example from history. From the Greeks the Ptolemaic system to describe the solar system was inherited. In this system only perfect circles were supposed to exist for the orbit of planets. To describe the measured orbit of planets a lot of auxiliary circles had to be created, because most of the time the orbit of the planets could not be described using perfect circles. That is what will happen with a design pattern when the pattern (perfect circles) is to describe a situation which in reality does not exist. The implementation is good, but the presuppositions are not, with the result that the application will grow out of control and cause solutions to new problems to pile up. The solution for the description of the solar system was to use elliptical orbits for planets, as proposed by Kepler. The second way for an anti-pattern is when there is no need to create a pattern as a solution and it is even best practice not to do so. Casting, for instance, is the answer to the possibility of input 'ordinal key', using inheritance and a transformational processing.
Any pattern providing a solution to this situation is an anti-pattern, as there is already an optimal solution for this type of possibility. The language will develop supposing everyone is using casting as the solution. Using a solution other than casting seriously damages the robustness of the solution when upgrading to new versions. Nearly all these kinds of anti-patterns will be performance triggers.
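To make this concrete, here is a minimal Java sketch of casting as the language-provided answer to the combination of ordinal key, inheritance and transformational processing; the class names are illustrative, not taken from the text.

```java
// The class named in the cast acts as the ordinal key; the language itself
// performs the transformational processing, so no pattern is needed.
class Animal { }

class Dog extends Animal {
    String bark() { return "woof"; }
}

class CastingExample {
    // Returns the sound only when the runtime type matches the requested class.
    static String soundOf(Animal a) {
        if (a instanceof Dog) {
            Dog d = (Dog) a;   // the cast: the class name works as an ordinal key
            return d.bark();
        }
        return "";
    }
}
```

Any home-grown pattern doing the same job would, in the text's terms, be an anti-pattern.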
is because I have extra thoughts about these patterns I would like to share. These patterns are the Visitor pattern, the Template pattern, the Publish/Subscribe pattern, the Flow pattern and the Proxy pattern. The first one because the implementation of the pattern does not fit the definition of the pattern. I suggest the implementation first provided by Robert C. Martin in his article and theoretically analyzed and described by Bertrand Meyer and Karine Arnout in their article. The Template pattern is discussed because of the extra constraint I specify; I think that is an added value to the general description of this pattern. There is an old discussion whether the Publish/Subscribe pattern and the Observer pattern are the same. In this overview of patterns I will point to a situation in which there is a clear distinction between these two patterns. I think the Flow pattern is already often used, but not acknowledged as a pattern of its own. The Proxy pattern as described consists in my view of two highly related but different patterns. I split it up into two different patterns, each with a unique description of abstracted relations. The first pattern is still called the Proxy pattern, the second the Symbolic Proxy pattern.
Throughout this section and hereafter the distinction between tight, strict and loose coupling is referred to. These are closely related to the concepts described in chapter 3 when writing about architectural coupling. The implementation of these types of coupling is slightly different here, but consistent with the previous descriptions. To define the effect of the set of abstracted relationships as belonging to one of these three types, the relationship(s) within the pattern must be identified which are crucial for the identity of the pattern. Having identified these relationships, the characteristics of these relationships must be estimated, and based on that the pattern can be said to use tight, strict or loose coupling.
It is tight coupling when in the UML the crucial relationships have a compositional relationship. A unidirectional relationship from A to B indicates most of the time that A creates B, but that A is not involved in the use of B. At the time of creation A owns B. It is strict coupling when the most crucial relationships use aggregation or when the processing of the pattern relies on inheritance or interfacing. The use of inheritance or interfacing is said to be strict, because the effect is partly based on shared characteristics and partly based on unique characteristics. Classes within a hierarchy or using the same interface do not own each other, but they do refer to each other. A class realizing an interface can be said to refer to the interface. The same line of reasoning applies to inheritance. A unidirectional aggregation relationship between A and B implies that A reads and writes B, but that B does not change the characteristics or behavior of A. It is loose coupling when the most crucial relationships use associations. A unidirectional relationship between A and B indicates that A is doing something with B, but not the other way around. The use of keys does not influence the type of relationship. It is a characteristic of the relationship, but it does not matter for determining whether the processing uses tight, strict or loose coupling. The abbreviation 'n.a.' stands for not applicable.
Table 2: transformational processing

Transformational processing
                             Tight coupling         Strict coupling   Loose coupling
Class        Nominal key     Memento                n.a.              n.a.
             Ordinal key     n.a.                   n.a.              n.a.
             No key          Prototype, Singleton   n.a.              Object pooling
Inheritance  Nominal key     Factory                Template          n.a.
             Ordinal key     Flyweight              n.a.              n.a.
             No key          Abstract Factory       Bridge            n.a.
Interface    Nominal key     n.a.                   State             Service Locator
             Ordinal key     n.a.                   n.a.              Dependency Injection
             No key          n.a.                   Decorator         n.a.
Memento pattern
The Memento pattern uses a state to decide whether the old object must be returned or not. A state is a nominal key, because its meaning depends on the application, not on the object in question. The relationship between the Memory class and the Original object is compositional, as the Memory class controls the content of the object. In a similar way the Caretaker and the Memento have a compositional relationship, because the Caretaker controls the life cycle of the Memento object. The Memory class and the Caretaker need an association to exchange state and objects. The focus of the Memento pattern is on the separated existence of the Original and Memento objects and therefore on the two unidirectional composition relationships, from which it can be concluded that this pattern uses tight coupling. The use of inheritance and interfacing is not an issue here. This is a behavioral pattern.
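A minimal Java sketch of these relationships; the class names loosely follow the text, the method names are illustrative.

```java
// Original creates the Memento (composition); the Caretaker controls its life cycle.
class Original {
    private String state;
    void setState(String s) { state = s; }
    String getState() { return state; }
    Memento save() { return new Memento(state); }     // Original owns the Memento it creates
    void restore(Memento m) { state = m.getState(); } // the old state is returned
}

class Memento {
    private final String state;
    Memento(String s) { state = s; }
    String getState() { return state; }
}

class Caretaker {
    private Memento memento;                          // life cycle controlled here
    void keep(Memento m) { memento = m; }
    Memento retrieve() { return memento; }
}
```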
Prototype pattern
In the tool used for creating the UML (ArgoUML) it is not possible to establish a relationship of a class with itself. Therefore a second class had to be created. The essence of the Prototype pattern is that the class has a unidirectional composition relationship with itself, as it can instantiate the same object twice. UML is restricted to the use of relationships and cannot be used for the expression of actions. Therefore only one object can be shown and no relationship between the objects, as the relationship between these objects is on the level of the class. On the level of the class it signifies that the class has a unidirectional composition relationship to itself. When in UML, for instance, an Employee has a relationship to itself, it should use a different role like 'Manager'; in this pattern the role is 'CopyControl'. As the class maintains a unidirectional composition relationship with itself, the Prototype pattern uses tightly coupled processing. The pattern does not require any key. The Prototype pattern uses the interface Cloneable in Java. Calling the method 'clone' inherited from Object on a class that does not implement the Cloneable interface results in a CloneNotSupportedException at run time. Use of cloning in Java therefore relies on a technical construct, like exception handling. As a result it is debatable whether the Prototype pattern is a design pattern in the Java language. Creating an implementation in the Java language other than the one provided by the Java specification can be called an anti-implementation. It might be a design pattern, though, in other languages which do not provide this type of technical construct. In languages in which this is a design pattern, it is a creational pattern.
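The Java-provided construct looks as follows; the class name is illustrative.

```java
// Prototype via Java's own Cloneable construct: the class creates a copy of itself.
class Profile implements Cloneable {
    String name;
    Profile(String name) { this.name = name; }

    @Override
    public Profile clone() {
        try {
            return (Profile) super.clone();   // the class instantiates the same object twice
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e);      // cannot happen: Cloneable is implemented
        }
    }
}
```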
Singleton pattern
Not surprisingly, this pattern has only one name.
Figure 17: Singleton pattern
By far the most important relationship in this pattern is the control of the SingletonFactory over the instance of the Singleton class. The SingletonFactory should have total control over the life cycle of the object. The processing of this pattern is tightly coupled.
Usage of this pattern is restricted to those situations where continuous control over the state of the object is demanded. That is a very profound requirement, because it means that the object must be visible from everywhere while its management stays in control. The need for a single point of access from anywhere may cause performance problems and uncontrollable dependencies, as the services controlled by the Singleton are designed to be accessed from one point only. The desire to have only one instance of a class throughout the whole JVM is not in line with the basic assumptions of the language. In Java object management is performed by the JVM and the garbage collector, not by the architect or developer. It is an essential characteristic of the language. When the Singleton pattern is used to create a unique public access point, I consider this an anti-pattern, as it is then a solution to a problem which should not be formulated at all. It is very difficult to create a solid solution for that type of Singleton pattern in Java. When the Singleton is used for specifically restricted situations, like the Mediator implementation in the Mediator pattern, then it can function well, because its scope is narrowed down to a well defined, controllable situation in which the Singleton object is not public throughout the whole platform. In addition to the Singleton pattern a 'Multiton' pattern has been proposed. This type of pattern already exists: it is the Object Pool pattern. This is a creational pattern. The Prototype and Singleton patterns have the same place within the classification system, which should imply that they have an equal effect as the result of their abstracted set of relationships and that their abstracted sets of relationships are actually equal. Indeed, they both have only one unidirectional composition relationship, which is controlling the effect of the pattern.
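A minimal sketch of the crucial relationship, the SingletonFactory controlling the life cycle of the instance; names follow the text, the body is illustrative.

```java
// The SingletonFactory has total control over the single instance.
class Singleton {
    private Singleton() { }                            // only the factory can instantiate

    static class SingletonFactory {
        private static Singleton instance;
        static synchronized Singleton getInstance() {  // life cycle controlled here
            if (instance == null) instance = new Singleton();
            return instance;
        }
    }
}
```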
Factory pattern
The Factory pattern has a unidirectional composition relationship with BaseProduct, which means here that each Factory owns a Product, but does not use it for its internal processing. The combination of the nominal key with the hierarchy is used to create a specific BaseProduct. As a result the combination of these relationships implies that the hierarchy on the left is copied in the hierarchy on the right. The processing of this pattern is therefore an example of tight coupling. This is a creational pattern.
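A minimal sketch of the copied hierarchies; the product and factory names are illustrative, not from the text's UML.

```java
// The factory hierarchy on the left mirrors the product hierarchy on the right.
abstract class BaseProduct { abstract String label(); }
class Invoice extends BaseProduct { String label() { return "invoice"; } }
class Receipt extends BaseProduct { String label() { return "receipt"; } }

abstract class BaseFactory {
    abstract BaseProduct create();            // each factory owns the product it creates
}
class InvoiceFactory extends BaseFactory { BaseProduct create() { return new Invoice(); } }
class ReceiptFactory extends BaseFactory { BaseProduct create() { return new Receipt(); } }

class FactorySelector {
    // the nominal key selects the matching branch of the copied hierarchy
    static BaseFactory forKey(String key) {
        return key.equals("invoice") ? new InvoiceFactory() : new ReceiptFactory();
    }
}
```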
Flyweight pattern
The Flyweight pattern in its pure form distinguishes two states on any object, namely an intrinsic and an extrinsic one. The intrinsic state is what characterizes the object. The extrinsic state is what uniquely describes the object among other objects with the same intrinsic properties. Intrinsic properties are already initialized in the Flyweight object when the object is returned from the FlyweightFactory. Extrinsic properties are added at creation time to the Flyweight object. These characteristics of the pattern are expressed in the combination of the ordinal key from the Client to the FlyweightFactory and the unidirectional composition relationships the Client has with each Flyweight implementation. The Flyweight uses an ordinal key, because the creation of a specific class is requested. This can be implemented using the Command pattern, which in combination with the Memento pattern would provide rollback functionality to this pattern. Crucial, however, is that a particular key is used for the instantiation of an object of a certain class and that the Client must know how to provide the extra features of the object to get the fully initialized object needed. To fully initialize each Flyweight class the Client must know the ins and outs of each class, which implies that this pattern is an example of tight coupling, as indicated by the unidirectional composition relationships from the Client to each Flyweight class. This is a structural pattern.
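A minimal sketch of the intrinsic/extrinsic split; the glyph example and its names are illustrative, not from the text.

```java
import java.util.HashMap;
import java.util.Map;

// Intrinsic state is shared via the factory; the client supplies the extrinsic state.
class Glyph {
    private final char symbol;                          // intrinsic: set by the factory
    Glyph(char symbol) { this.symbol = symbol; }
    String drawAt(int x) { return symbol + "@" + x; }   // extrinsic: passed by the client
}

class GlyphFactory {
    private final Map<Character, Glyph> pool = new HashMap<>();
    Glyph get(char symbol) {                            // the requested char works as the ordinal key
        return pool.computeIfAbsent(symbol, Glyph::new);
    }
}
```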
Abstract Factory pattern
The most important relationships in this pattern are the combination of inheritance for each factory and the unidirectional composition relationships. The unidirectional relationships imply that each component is owned by the Abstract Factory. The inheritance on the left side shows a strict relationship, which implies that for every implementation of the Abstract Factory a unique combination of components exists. Together with the characteristic that each component is owned by the Abstract Factory, the processing of this pattern is tightly coupled. This is a creational pattern.
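A minimal sketch of the unique combination of components per factory implementation; the GUI example and its names are illustrative.

```java
// Each implementation of the abstract factory yields its own combination of components.
interface Button { String style(); }
interface Menu { String style(); }

interface GuiFactory {
    Button createButton();
    Menu createMenu();
}

class DarkFactory implements GuiFactory {
    public Button createButton() { return () -> "dark"; }
    public Menu createMenu() { return () -> "dark"; }
}

class LightFactory implements GuiFactory {
    public Button createButton() { return () -> "light"; }
    public Menu createMenu() { return () -> "light"; }
}
```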
Template pattern
This pattern is aka the Provider pattern. The nominal key is used to distinguish between the different template implementations. Each implementation will have its unique concretization of the bidirectional aggregation relationship to the template subject. It is crucial for the Template pattern to have a Subject around which it functions. The Template does not have the ownership of the Subject; that lies with the Client. Therefore the hierarchy is at the service of the aggregational relationship, and it is the latter which is crucial in defining the type of processing this pattern uses: strict coupling. Use of the Template pattern is only appropriate when there is a related set of methods around a type of object for which different implementations are required. Take for example the creation of a player profile for the card game of Hearts. A player profile is used to provide each computer player a guideline for how to play the game. Each profile should perform equal actions, like:
evaluating a hand, deciding after evaluation which cards to exchange at the beginning of a round, deciding on a strategy, and deciding which card to select for play.
The way the hand is evaluated will affect which cards will be exchanged, which strategy will be used for play and which card will be selected for play. When the type of evaluation is changed, the implementation of the other methods will change too. When the strategy of play is changed, the way to evaluate the cards, which cards to exchange and which card will be selected for play will change as well. Using the Template pattern for creating the different profiles is the way to go. This is a behavioral pattern.
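The Hearts example can be sketched as follows; the text only lists the actions, so the class and method names are illustrative.

```java
// Each profile fixes its own implementation of the same prescribed sequence of actions.
abstract class PlayerProfile {
    // the template method: the related set of actions in a prescribed order
    final String playRound() {
        evaluateHand();
        exchangeCards();
        chooseStrategy();
        return selectCard();
    }
    abstract void evaluateHand();
    abstract void exchangeCards();
    abstract void chooseStrategy();
    abstract String selectCard();
}

class CautiousProfile extends PlayerProfile {
    void evaluateHand() { }
    void exchangeCards() { }
    void chooseStrategy() { }
    String selectCard() { return "low card"; }
}
```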
Bridge pattern
The most important relationships in this pattern are the combination of inheritance for each Bridge and the bidirectional aggregation relationships. The bidirectional relationships imply that each component can influence any implementation of the Abstract Bridge. The inheritance on the left side shows a strict relationship, which implies that for every implementation of the Abstract Bridge a unique combination of components will exist. The methods add and remove are shown to express that any Bridge object can change its composition later. All in all, the processing of this pattern is strictly coupled. The pattern succeeds in decoupling the implementation from its abstraction, but it is still useful to let each implementation have its unique influence on the class which is using its abstraction. Think about artifacts for a character in a game. Each character will be created with a default set of artifacts. Later in the game each character can acquire different artifacts, each of them uniquely changing the capabilities of the character. Artifacts can be added to a Bridge object and later on removed. This is a structural pattern.
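A minimal sketch of the game example from the text; the names and the base strength are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Artifacts are added and removed later; each one uniquely changes the character.
interface Artifact { int strengthBonus(); }

class GameCharacter {                        // the abstraction side of the bridge
    private final List<Artifact> artifacts = new ArrayList<>();
    void add(Artifact a) { artifacts.add(a); }
    void remove(Artifact a) { artifacts.remove(a); }
    int strength() {
        int total = 10;                      // base strength (assumed value)
        for (Artifact a : artifacts) total += a.strengthBonus();
        return total;
    }
}
```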
State pattern
This pattern is aka the Strategy pattern or Policy pattern. The State pattern uses a nominal key to distinguish between the different implementations of the same interface. A nominal key is the only way to make this distinction. The most important relationships in this pattern are the realizations of the interface. This is by definition strict coupling. This is a behavioral pattern.
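A minimal sketch of the nominal key selecting between realizations of one interface; the tax example and its names are illustrative.

```java
// The nominal key's meaning is set by the application, not by the objects themselves.
interface TaxPolicy { double apply(double amount); }

class HighRate implements TaxPolicy { public double apply(double a) { return a * 1.21; } }
class LowRate  implements TaxPolicy { public double apply(double a) { return a * 1.09; } }

class TaxContext {
    static TaxPolicy forKey(String key) {   // nominal key chooses the implementation
        return key.equals("high") ? new HighRate() : new LowRate();
    }
}
```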
Decorator pattern
The Decorator pattern is used to provide several new implementations of an interface to prevent a Cartesian product of subclasses. Using a Decorator pattern a class can be enhanced with new functionality without altering the class itself. The input is the same interface as implemented by the class which is decorated. The output is an enhancement of the class using the same interface. The constraint is that decoration can only take place sharing one interface. Basically it returns the same object. The essence of this pattern is that all classes apply to the contract of the DecoratingInterface. Therefore the realizations referring to the DecoratingInterface are the crucial relationships for this pattern, describing it as a pattern using strict coupling. This is a structural pattern.
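A minimal sketch of decoration through one shared interface; the names are illustrative.

```java
// Every class, decorated or decorating, applies to the same contract.
interface Message { String text(); }

class Plain implements Message {
    public String text() { return "hello"; }
}

class Uppercase implements Message {                 // enhancement via the shared interface only
    private final Message inner;
    Uppercase(Message inner) { this.inner = inner; }
    public String text() { return inner.text().toUpperCase(); }
}

class Exclaimed implements Message {
    private final Message inner;
    Exclaimed(Message inner) { this.inner = inner; }
    public String text() { return inner.text() + "!"; }
}
```

Stacking decorators this way avoids one subclass per combination of enhancements.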
Object Pool pattern
The implementations are left out of the picture, because otherwise the picture might become too complex to grasp easily. Both yellow classes will need an extra implementation. The ObjectPool and ObjectManagement form one unit together, as the ObjectPool holds references to the objects created by the ObjectManagement. Pool- and MemoryManagement should work without knowledge of the exact implementation. Their implementation does not vary with the ObjectPool; they can work together with more pools at the same time. The behavior of each ObjectPool is partly controlled by the configuration of the PoolManagement and the MemoryManagement, which is expressed by the absence of associations from each configuration to the ObjectPool. Because the behavior of the ObjectPool, which is the class communicating with the rest of the application platform, is partly under control of classes to which it has only associations, the pattern is said to use loose coupling. The methods activate() and passivate() belong to the poolEntry, not specifically to the object itself. Whenever an object is borrowed from the pool, the poolEntry gets the status 'activated' to prevent MemoryManagement from cleaning the object from the pool, which would cause an unexpected status for the object. Directly after the object has been returned to the pool, or after the creation of the object, the poolEntry has the status 'passivated'. This is a creational pattern.
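A toy sketch of the borrowing mechanics; the activate/passivate bookkeeping from the text is reduced to moving entries in and out of an idle queue, and the names are illustrative.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// The pool holds references to objects created by its object management.
class ObjectPool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final Supplier<T> objectManagement;        // creates new objects on demand

    ObjectPool(Supplier<T> objectManagement) { this.objectManagement = objectManagement; }

    T borrow() {                                       // entry becomes 'activated'
        return idle.isEmpty() ? objectManagement.get() : idle.pop();
    }
    void giveBack(T obj) { idle.push(obj); }           // entry becomes 'passivated'
    int idleCount() { return idle.size(); }
}
```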
Service Locator
This pattern is also known as Dependency Lookup, Object Factory, Component Broker or Component Registry. Most relevant in this pattern is the association between the ServiceRegistry and the ServiceFactory using a nominal key. The Service Locator does indeed create the InitialContext, which in turn controls the life cycle of the Cache, but that does not determine the pattern as a whole. Nor does the unidirectional composition relationship between the ServiceFactory and the Service. That is an essential part of any transformational processing. Only when there are no real other relationships, as in the first three patterns mentioned, does this determine the character of the processing. Unique to this pattern is that it has a ServiceRegistry to connect to a ServiceFactory. The registry between the InitialContext and the Factory gives this pattern the flexibility it provides. Because these relationships are associations, this pattern uses loosely coupled processing. This is a creational pattern.
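A minimal sketch of the registry association using a nominal key; the InitialContext and Cache from the text are elided, and the names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// The registry connects a nominal key to a factory; the factory creates the service.
interface Service { String name(); }
interface ServiceFactory { Service create(); }

class ServiceRegistry {
    private final Map<String, ServiceFactory> factories = new HashMap<>();
    void register(String key, ServiceFactory f) { factories.put(key, f); } // association via nominal key
    Service lookup(String key) { return factories.get(key).create(); }
}
```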
Dependency Injection
The key differences between the Service Locator pattern and Dependency Injection are that in Dependency Injection an ordinal key is used instead of a second nominal key, and that there is a DependencyRegistry helping to resolve dependencies for the creation of certain objects. Within the InjectionManager the dependencies are resolved prior to the creation of the requested object, by which means the pattern can be used for the instantiation of all different kinds of objects at the same time and there does not need to be a new factory for every service. The ordinal key used in Dependency Injection is the class which will be instantiated. That makes it a WYSIWYG pattern. The nominal key does not matter for the success of the processing to return the requested object. Its value is irrelevant as long as it is unique, and it can be called a 'whatever' key. The same type of associations as in the Service Locator determines the characteristic processing of this design pattern, and therefore Dependency Injection uses loose coupling as well. In Java an object can be constituted of several interfaces. That is not the idea of the Service Locator or Dependency Injection patterns, which can be a restriction. This is a creational pattern.
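A toy sketch of the class-as-ordinal-key idea; this is not a real container, and all names besides InjectionManager are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// The requested class itself is the ordinal key; dependencies are resolved
// by the manager before the requested object is created.
class InjectionManager {
    private final Map<Class<?>, Function<InjectionManager, ?>> registry = new HashMap<>();

    <T> void register(Class<T> key, Function<InjectionManager, T> provider) {
        registry.put(key, provider);
    }

    <T> T resolve(Class<T> key) {              // WYSIWYG: you get the class you asked for
        return key.cast(registry.get(key).apply(this));
    }
}

class Repository { }
class AccountService {
    final Repository repo;
    AccountService(Repository repo) { this.repo = repo; }  // dependency injected on creation
}
```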
Transportational processing
                             Tight coupling   Strict coupling                     Loose coupling
Class        Nominal key     Flow             n.a.                                n.a.
             Ordinal key     n.a.             n.a.                                Chain of Responsibility
             No key          n.a.             Collection handling, Composite      Mediator
Inheritance  Nominal key     n.a.             n.a.                                n.a.
             Ordinal key     n.a.             n.a.                                Exception handling
             No key          n.a.             n.a.                                n.a.
Interface    Nominal key     n.a.             n.a.                                n.a.
             Ordinal key     n.a.             n.a.                                n.a.
             No key          n.a.             Symbolic Proxy, Publish/Subscribe   Facade
Flow pattern
This is a newly acknowledged pattern, although it is probably often used. It is closely related to the Mediator pattern, as it inverts the flow of control compared to that pattern. The purpose of this pattern is to perform a set of related actions in a prescribed order. Based upon the return values coming from the different steps in the flow, the Flow class decides how to continue. How every action is conducted is left to each class, ensuring the implementation of the action is loosely coupled from the call of the action. The classes called by the Flow are unaware of the existence of the Flow class. The most determining relationships are the bidirectional aggregation relationships with the Status class. Both the Flow and the class Step 1 have to 'understand' every status used in the processing, and which status has to be returned in which situation, in an identical way. Although both classes do not 'own' the Status class, their behavior requires an identical in-depth knowledge of the Status class, which lets the processing of this pattern be described as tight coupling. The Flow sends status objects to the classes it is controlling. How the classes respond to these status objects is up to the implementation of these classes. The answers provided by the classes can have an influence on the path set by the Flow, but they cannot create unexpected pathways. All possible paths are managed by the Flow. It is therefore considered transportational processing, as on a higher abstraction level each different status is the same for the Flow. For the processing it is irrelevant which status is returned by the class. The path might change, but the processing remains the same. The goal of the pattern is to perform a set of related actions, and the different statuses of the Status class serve as the medium carrying the required information on how to continue the transportation of the action. The FilterChain is an example of the Flow pattern.
It is a special case of the Flow pattern, as there is only one route to be followed, which is the reason that it does not require a Status class; but it shares with the Flow pattern the characteristic that subsequent classes are called to perform their task in the line of duty. Classes used by the Flow are unrelated to each other. There are no restrictions on them, which makes the pattern useful without applying inheritance or interfacing. The classes used for each step in the navigation are unaware that they are part of the processing. That makes it different from the Observer pattern, which is another pattern dealing with status changes in objects. In the Observer pattern the relationship between the Subject and its Observer classes is parallel: there is one central status of the Subject and that is communicated to each Observer class. In the Flow pattern the relationships between the Flow and the Step classes are serial: there is no central status to be updated to each Step class. Whether a Step class is called upon by the Flow is decided at run time and is therefore optional. That makes it different from the Observer pattern, in which all Observers should be updated. This is a behavioral pattern.
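A toy sketch of the Flow deciding the path from returned statuses; the Status class is reduced to plain strings here, and the step names are illustrative.

```java
// The Flow manages all possible paths; the statuses only choose between them.
// The step classes are unaware of the Flow that calls them.
interface Step { String perform(); }   // each step returns a status

class Flow {
    private final Step validate, repair, process;
    Flow(Step validate, Step repair, Step process) {
        this.validate = validate; this.repair = repair; this.process = process;
    }
    String run() {
        String status = validate.perform();
        if (!status.equals("OK")) repair.perform();   // optional step, decided at run time
        return process.perform();
    }
}
```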
All collection handling patterns have to do with basic actions on collections. These can include iteration or methods like 'getFirst()' and 'getLast()' or 'containsKey()' and 'getKey()'. These standard actions can be used for different types of collections, but must be implemented with the proper type of collection in mind. They all need therefore a compositional relationship from the collection to the implementation of the collection handler, as they should have the interface or base class of the extra functionality as a member. But each collection does not have to know the implementing class exactly. Therefore all these patterns are strictly coupled to the type of collection. They do not need external information and are used to find a way through a collection. In the majority of mature programming languages these functionalities are provided by the language and have evolved into technical constructs. All these patterns are behavioral ones.
Composite pattern
The essence of this pattern is that it creates the possibility to traverse through different layers of objects. The Flyweight pattern relies on this possibility, just as the Interpreter pattern does. Its best known use, however, does not use inheritance but interfacing, for traversing through different layers of classes, like traversing from a journal to an edition to an article. But it could also be used to traverse through different objects of the same class. It basically needs the methods to add a child, remove a child, get one and perform some basic operation. It does not need any key and is used for traversing different layers, implying it is a transportational processing. In any type of implementation every class must apply to the same contract, whether it is a leaf or a composite. Whether the class under the attention of the pointer is a composite or a leaf is unknown in advance. Having the realization of the interface as the most important relationship makes it strict coupling. This is a structural pattern.
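A minimal sketch of leaf and composite sharing one contract; the names and the counting operation are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Leaf and composite apply to the same contract; the caller never needs to
// know in advance which of the two it is traversing.
interface Node { int count(); }            // the basic operation

class Leaf implements Node {
    public int count() { return 1; }
}

class Composite implements Node {
    private final List<Node> children = new ArrayList<>();
    void add(Node child) { children.add(child); }
    void remove(Node child) { children.remove(child); }
    public int count() {                   // traverses the layers uniformly
        int total = 0;
        for (Node child : children) total += child.count();
        return total;
    }
}
```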
Symbolic Proxy pattern
This pattern is aka the Remote Proxy pattern, but that name is too close to the functional implementation to be a description of the pattern itself. This pattern is not a new design pattern, as it is already used often in practice. It comes close to the original Proxy pattern, but has an extra restriction with respect to the original pattern. In the original pattern the implementation of the proxy could deviate from the original interface. In this pattern there is only one implementation, which is the implementation of the proxied object. The implementation class is always called through the symbolic interface, never directly. Together they are one business object, and they actually are one, because only one of these objects has an implementation. The object as a whole only exists at run time. A well known example is the stub of the EJB as the Symbolic Interface and the implementation as the EJB bean. The object as a whole does not change during execution of this design. Only the call to the Symbolic Interface is transferred to the implementation. It is not required that for every call of the Symbolic Interface the same object of the implementation class is used. This design therefore describes transportational processing. The constraint is that the contract of the Symbolic Interface is shared in its totality by the implementation, and only the implementation class has an implementation. The essential relationships are the inheritance of the SymbolicAction interface from the Action interface and the realization by the RealAction class. This is strict coupling. This is a structural pattern.
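A minimal sketch of these relationships; the interface names follow the text, the dispatcher is an illustrative addition.

```java
// The symbolic interface inherits the full contract and has no implementation
// of its own; only the real class implements anything.
interface Action { String execute(); }
interface SymbolicAction extends Action { }   // the symbolic interface: contract only

class RealAction implements SymbolicAction {  // the single implementation
    public String execute() { return "done"; }
}

class Dispatcher {
    // every call goes through the symbolic interface, never to the class directly
    static String call(SymbolicAction a) { return a.execute(); }
}
```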
Publish/Subscribe pattern
The Publish/Subscribe pattern is a new pattern closely related to Event Driven Architecture. The distinction between Publish/Subscribe and traditional design patterns is that input and output have an asynchronous relationship. Events will be of a certain class, ideally having a specific interface, because for the exchange between publisher and subscriber it is important that every event can be interpreted using a specific contract. It can be compared to the exchange of XML files. Every XML file should use an XML schema to ensure that the file is valid. Not providing an XML schema does not mean that the XML file is invalid, but the opposite does not hold either: it does not imply that the XML file is valid. In the long run it is therefore more robust to use an XML schema for an XML file and an interface for an event. A technical key suffices for publishing and subscribing to the event, as the name of the interface class works like the URI of the XML schema. The most important relationship is therefore the realization of the interface. The exchange can only take place as long as both publisher and subscriber apply the same interface. The processing of this pattern is therefore called strictly coupled. This pattern differs from the Observer pattern, the main difference being that Publish/Subscribe is asynchronous. The effect is that publishers cannot make any prediction about their subscribers and cannot control them. The latter is crucial for the Observer pattern to work properly, as it will update its observers. An analogy to illustrate this point of view is the process of reporting. In a business process data is stored. For reports this data is a set of new events. The original storage process has neither the awareness nor the responsibility of a publisher. But the event is used in the reports, and the interface of the data cannot be changed without affecting the content of the report.
There is therefore a dependency between these processes, and where there is a functional dependency, there is a design pattern. The Observer pattern would not be the correct pattern to use, as the Subject is not aware that it is acting as a Subject. This is a structural pattern.
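The role of the interface name as a technical key can be sketched as follows. This is my own minimal broker, not a real messaging product, and it delivers synchronously for brevity; a real publish/subscribe implementation would hand events off through a queue to obtain the asynchronous relationship described above.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// The event contract plays the role of the XML schema in the analogy.
interface StatusEvent {
    String status();
}

class Broker {
    // Subscriptions are keyed by the interface name, a purely technical key.
    private final Map<String, List<Consumer<Object>>> subs = new HashMap<>();

    <E> void subscribe(Class<E> contract, Consumer<E> handler) {
        subs.computeIfAbsent(contract.getName(), k -> new ArrayList<>())
            .add(e -> handler.accept(contract.cast(e)));
    }

    // The publisher knows nothing about its subscribers.
    void publish(Class<?> contract, Object event) {
        subs.getOrDefault(contract.getName(), List.of())
            .forEach(h -> h.accept(event));
    }
}
```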
Chain of Responsibility
For an interesting article about this pattern, please see the article by Michael Xinsheng Huang. He presents a good alternative for a different implementation. In his alternative he makes a distinction between the wave and particle functionality of the pattern, and he distinguishes two versions of implementation for the pattern: one which walks down the whole line anyway, like the FilterChain in the servlet API, and one in which the chain members are visited until one answers affirmatively. I disagree with him, however, that the FilterChain is a version of the Chain of Responsibility pattern. I think that the FilterChain is actually an implementation of the Flow pattern. Therefore I would restrict the implementation of the Chain of Responsibility to the one presented by the GoF. The most crucial relationship for this pattern is the bidirectional aggregation relationship between the ChainCollection and the BaseChain. It implies that a chain can consist of zero or more chain links. The result of the test depends on the chain links that have registered themselves. The output of the method loopChain is therefore not predictable, and the processing is considered to be loosely coupled. Had the cardinality been 1..*, the processing would have been strictly coupled. Every chain link will have a specific implementation with a unique answer to the test. Every test has its unique meaning with respect to the other tests, making the test an ordinal key. Consider the selection of a JDBC driver. That is done using a test. The input key is the test, to which every Driver will respond with exactly one value. The DriverManager will test all listed Drivers to find out which one satisfies the test. That is the Driver that will be returned to connect to the database. If it turns out to be an old version of the Driver, incompatible with the current requirements, then that is a problem for the application but not for the working of the pattern. This is a behavioral pattern.
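The JDBC analogy can be sketched in a few lines. ChainCollection and loopChain are taken from the description above; the ChainLink interface is my own simplification of the BaseChain, and the URL test mimics how a JDBC driver decides whether it accepts a connection URL.

```java
import java.util.ArrayList;
import java.util.List;

// A chain link: answers the test with exactly one value per input key.
interface ChainLink {
    boolean accepts(String url);
    String connect(String url);
}

class ChainCollection {
    // 0..* links may register themselves, hence the loose coupling.
    private final List<ChainLink> links = new ArrayList<>();
    void register(ChainLink link) { links.add(link); }

    // Walk the chain until one link answers affirmatively.
    String loopChain(String url) {
        for (ChainLink link : links)
            if (link.accepts(url)) return link.connect(url);
        return null; // no link felt responsible
    }
}
```

With zero registered links `loopChain` returns nothing, which is exactly why the output is not predictable from the input alone.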
Mediator pattern
The Mediator is used as a gateway for communication for a group of colleagues. Every colleague has to register itself with the Mediator, after which it can send messages to and receive them from the Mediator. This relieves colleagues of the responsibility to establish connections to any or all other colleagues. The Mediator itself does not have to know which colleagues it has to send to. That is arranged in the MediatorManager, which will provide this information to any implementation of the Mediator on request. The Mediator itself does not process the content of what it delivers to each colleague, which makes it transportational processing. It does not need a trigger to perform its transportation, nor does it rely on inheritance or interfacing specifically. The essential relationship here is the association between the colleague and the MediatorManager. That makes the processing of this pattern loosely coupled. This is a behavioral pattern.
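A minimal sketch with Colleague, Mediator and MediatorManager as described above; the method names are mine. The Mediator passes the message on untouched and asks the MediatorManager who should receive it.

```java
import java.util.ArrayList;
import java.util.List;

interface Colleague {
    void receive(String message);
}

// Keeps the registrations; the Mediator asks it who should receive.
class MediatorManager {
    private final List<Colleague> registered = new ArrayList<>();
    void register(Colleague c) { registered.add(c); }
    List<Colleague> recipientsFor(Colleague sender) {
        List<Colleague> out = new ArrayList<>(registered);
        out.remove(sender);
        return out;
    }
}

class Mediator {
    private final MediatorManager manager;
    Mediator(MediatorManager manager) { this.manager = manager; }
    // Pure transportation: the content is delivered untouched.
    void send(Colleague from, String message) {
        for (Colleague c : manager.recipientsFor(from)) c.receive(message);
    }
}
```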
Exception handling
Exception handling is very language specific and is therefore not really a design pattern, but a technical construct. Exception handling in Java and many OO languages cannot be said to support the open/closed principle, as there is only one decent way to implement it. Nor can it be said to meet functional requirements, as it is technically prescribed how to handle exceptions. Design patterns appear within the boundaries of the language; they are not part of it. Essential for a design pattern is that the developer has the choice to avoid using it at all. That is not possible with exception handling, which is always an elementary characteristic of the programming language. I therefore did not create a UML diagram for it; exception handling is too language specific. Sometimes the exception to catch is prescribed by the input. However, one can decide to use a more general exception to catch the prescribed exception, and one can throw yet another exception than the one prescribed to be caught. Last but not least, exceptions can arise at run time. The output is loosely coupled from the input, as the input can make no predictions about the output, and the output can be generated without any input coming from the lines of code in the try block. An essential characteristic of exceptions is that they belong to the hierarchy of exceptions. Exception handling is therefore placed under hierarchy. To resolve the exception thrown, the type of the class is used. That makes it use an ordinal key. It is transportational processing for two reasons. The first is that it expresses that normal processing has stopped. Every kind of processing is based upon transportation; when normal processing can no longer take place, the only type of processing still available is transportational processing. The other reason is that transformational and translational processing serve a functional goal. Exception handling does not serve a functional but a technical goal and can therefore not be defined as one of these two types of processing. If it were a design pattern, it would be a creational pattern.
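Two of the properties described above, catching via a more general class and dispatching on the exception hierarchy as an ordinal key, fit in a few lines. The classify method is an illustrative example of mine, not a recommended error-handling style.

```java
class ExceptionDemo {
    // The caught type is chosen by the handler, not dictated by the thrower:
    // a general catch intercepts any specific subclass, and the handler could
    // equally throw a different exception onward.
    static String classify(Runnable work) {
        try {
            work.run();
            return "ok";
        } catch (IllegalStateException e) {   // more specific, checked first
            return "state";
        } catch (RuntimeException e) {        // the ordinal key: class hierarchy
            return "runtime";
        }
    }
}
```

Note that NumberFormatException is caught by the RuntimeException clause purely because of its place in the hierarchy, which is the ordinal key at work.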
Facade pattern
Although the Facade is implemented using an interface, this relationship is not crucial to its processing. There is no added value in implementing the Facade using inheritance, because the Facade only couples classes. It does not do anything for itself; it just transports requests and responses. For its processing this pattern relies completely on associations, which is why it is loosely coupled. This pattern does not put any restrictions on the data to be processed. To reduce the number of methods needed, the objects to be exchanged might be constructed as generally as possible, but it is the choice of the architect to fulfill this requirement; it is not prescribed by the pattern. It is not necessary to implement the Facade pattern using the Singleton pattern, as there is no state change during the processing of the Facade. In Java it is therefore preferable to create a new Facade object within the method that calls it, as having the Facade object cleaned up costs the garbage collector hardly any effort. This is a structural pattern.
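A minimal sketch of mine, with two invented subsystem classes, showing the Facade doing nothing but coupling them and therefore being cheap to create per call:

```java
// Two subsystem classes the caller should not need to know about.
class Inventory {
    boolean inStock(String item) { return !item.isEmpty(); }
}

class Billing {
    String invoice(String item) { return "invoice-for-" + item; }
}

// The Facade only couples the subsystems; it holds no state of its own,
// so a fresh instance per calling method is perfectly fine.
class OrderFacade {
    private final Inventory inventory = new Inventory();
    private final Billing billing = new Billing();
    String order(String item) {
        if (!inventory.inStock(item)) return "rejected";
        return billing.invoice(item);
    }
}
```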
Observer pattern
This pattern is also known as the MVC and Publisher/Subscriber pattern. This UML description is inspired by the code example provided. In the example the different functions within the pattern are performed by different classes, in accordance with the Single Responsibility Principle. In the code example the Subject side works a little differently than presented in the UML, but that is because of the composite structure used in the example to control the subjects. In effect this UML is equivalent to the Booch model presented by the GoF, but as stated before, the different functions are encapsulated in different classes. In the current UML the Subject implementation can focus on being the Subject, while the communication between subjects and observers is handled by the combination of the communication manager and update profiles. The UpdateProfile is not a pure necessity for the pattern, but it creates the opportunity to group statuses and thereby update only those observers that react to a particular status change. That way network traffic can be minimized. The minimization of network traffic is not part of the definition of the Observer pattern, but it certainly is one of the main side objectives. The Observer must have knowledge of the concrete Subject at hand, because the Observer must update itself based upon the Subject. Upon notification from the CommunicationManager any Observer will receive the new Subject instance and, based on that information, update itself accordingly. The Observer interface does not have to know for which type of Subject it needs to update, but each concrete Observer class must know to which Subject it is linked. The most crucial relationship in this pattern is therefore the update from the ConcreteObserver using the ConcreteSubject. This relationship is on the level of classes and is a unidirectional composition relationship. Although the concrete observer does not need the concrete subject as a member, as can be seen in the code, it can control life cycle events of the Subject and it can display it, which makes this processing tightly coupled. Furthermore, the Observer uses common coupling, because all observers refer to the same Subject and observers are grouped in parallel. That makes it inevitable that somewhere data must be exchanged and stored, which indicates that on the Observer side of the pattern the use of inheritance is more logical than interfacing alone. This is a behavioral pattern.
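The role of the CommunicationManager and the status grouping can be sketched as follows. This is a simplification of mine: the UpdateProfile is folded into a map from status to interested observers, and the Subject is a plain class rather than the composite structure of the original example.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class Subject {
    private String status = "new";
    String getStatus() { return status; }
    void setStatus(String s) { status = s; }
}

interface Observer {
    void update(Subject subject);
}

// Groups observers per status (the UpdateProfile idea), so only those
// observers that react to a particular status change are notified.
class CommunicationManager {
    private final Map<String, List<Observer>> profiles = new HashMap<>();
    void subscribe(String status, Observer o) {
        profiles.computeIfAbsent(status, k -> new ArrayList<>()).add(o);
    }
    void notify(Subject subject) {
        profiles.getOrDefault(subject.getStatus(), List.of())
                .forEach(o -> o.update(subject));
    }
}
```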
Interpreter pattern
The input object is left untouched, but used to produce tightly related output. One might think of using the Interpreter pattern for the construction of a search request or the calculations on a calculator. The constraint of the Interpreter pattern is the presence of a well defined, prescribed set of input to translate. The Interpreter pattern will use inheritance to accomplish its task. No key is needed, only an expression to parse. The output of the processing depends on the processing of the expression, as depicted by the bidirectional composition relationship, which makes the Interpreter pattern behave tightly coupled. The Specifications pattern, proposed by Eric Evans and Martin Fowler, is a very useful example of this pattern. In their original document they did not provide an implementation, which is why I provide one that clearly shows their pattern to be an example of the Interpreter pattern. It is an extraordinarily good idea and I would suggest they receive a 'Golden Wheel' award for presenting a very practical idea to use. Whenever someone has to create a solution for this kind of problem, I would suggest using their proposition. In 2002 a similar example of the Interpreter pattern was provided by Sun. Compared to my implementation it is more straightforward, but it lacks extensibility, as there are no brackets present and there is no algorithm to construct more complex expressions. Compared to the implementation provided by Sun, the contribution of Eric Evans and Martin Fowler is, in my opinion, the thoroughness of their description of how the use of specifications can add to the logic of applications. The code provided is quite extensive and effectively a dynamic expression builder. In their article Eric Evans and Martin Fowler stated their preference for a composite implementation, but I chose a parameterized implementation, as this lessens the number of classes needed and changes should only affect the implementation of the Interpreter class, nowhere else. For readability purposes, especially the arrangement of the operations, I suggest you open it in an editor like Eclipse or Netbeans. This is a behavioral pattern.
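The core of the Specifications idea fits in one interface. This is my own compact sketch using default methods, not the extensive dynamic expression builder referred to above; each combinator interprets a candidate against an expression tree built from smaller specifications.

```java
// Specifications as an Interpreter: each node interprets a candidate.
interface Specification<T> {
    boolean isSatisfiedBy(T candidate);

    default Specification<T> and(Specification<T> other) {
        return c -> isSatisfiedBy(c) && other.isSatisfiedBy(c);
    }
    default Specification<T> or(Specification<T> other) {
        return c -> isSatisfiedBy(c) || other.isSatisfiedBy(c);
    }
    default Specification<T> not() {
        return c -> !isSatisfiedBy(c);
    }
}
```

A composed specification such as `positive.and(even)` is itself a Specification, which is the composite character of the Interpreter pattern in miniature.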
Visitor pattern
The Base class and the Agent interface still need implementations. This pattern is also known as the Extension Object pattern. The Visitor pattern is defined by the GoF as 'an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates'. Traditionally, the Visitor pattern is implemented using an accept() method in the classes on which it operates. Reading the definition of the Visitor pattern shows that using an accept() method in the classes on which it operates is in contradiction with that definition: according to the definition, no change in the subject classes ought to be made. The definition sounds logical and in line with the Open/Closed principle, whereas implementation using an accept() method is not. Using an accept() method can have several consequences with regard to the Open/Closed principle: whenever the hierarchy of the base class changes, a new accept() method has to be added; whenever the hierarchy of the base class changes, the Visitor classes must be adjusted as well; the use is also restricted to a hierarchy of classes to which one has full access, as all derived classes must be accessible to implement an accept() method; and adding the accept() method changes the classes in the hierarchy. Consider the situation in which the behaviour of the class should be extended but the accept() method is forgotten: the implementation will not work. Reflection is proposed as an alternative to the use of the accept() method. Although it successfully decouples the visitor classes from the visitee classes, reflection is a technical solution. A new set of abstracted relationships, which decouples the visitor classes from the visitee classes by abstraction, is preferable. Bertrand Meyer and Karine Arnout have written an article in which they discuss this subject thoroughly. After a structured analysis they come up with a new pattern as a solution. To see how it works, please take a look at the code listings in the resource 'visitor.jar'. The dependency is now put under the control of the visitor object, and all of the classes in the hierarchy remain untouched. Control over which classes are visited lies with the visitor classes. Any random collection of classes can be visited without any preparation of the visitee classes. Extensibility is assured: when the base hierarchy is changed, the visitor classes do not have to be changed, and situations in which modification is required are minimized. Finally, because with this type of implementation all the action is performed by the visitor classes, the code has become easier to read and maintain. At the core of the Visitor pattern is the wish to extend functional requirements. In his article about the Visitor pattern Robert C. Martin describes this line of implementation as the Extension Object pattern. This pattern is the same as the implementation of the Visitor pattern pointed out by Bertrand Meyer and Karine Arnout. The most determining relationship is the unidirectional composition relationship between the Visitor implementation and the Agent interface. Whenever the implementation of the pattern has to be updated, it will certainly affect the realization of this relationship.
The Visitor pattern therefore uses tightly coupled processing. This is a behavioral pattern.
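The accept()-free approach can be sketched as follows. The Agent interface is taken from the diagram; the Visitor's agent registry and the visitee classes Circle and Square are illustrative inventions of mine, and this is only one way to realize the Meyer/Arnout idea.

```java
import java.util.HashMap;
import java.util.Map;

// The visitee hierarchy stays completely untouched: no accept() anywhere.
class Circle { double r() { return 2.0; } }
class Square { double side() { return 3.0; } }

// An Agent knows how to handle exactly one visitee class.
interface Agent<T> {
    String handle(T element);
}

// The Visitor controls which classes are visited via its registered agents.
class Visitor {
    private final Map<Class<?>, Agent<?>> agents = new HashMap<>();

    <T> void register(Class<T> type, Agent<T> agent) {
        agents.put(type, agent);
    }

    @SuppressWarnings("unchecked")
    <T> String visit(T element) {
        Agent<T> agent = (Agent<T>) agents.get(element.getClass());
        return agent == null ? "unvisited" : agent.handle(element);
    }
}
```

Extending behavior to a new class means registering one new agent; nothing in the visitee hierarchy changes.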
Builder pattern
Every yellow class will need at least one implementation, but for the sake of clarity these classes are left out of the diagram. From one object a set of possible new objects is created. Where in the Interpreter pattern input and output form a reversible set, the translation of the input in the Builder pattern delivers an irreversible output. Let's say an object of type A is translated by the Builder pattern into an object of type B. If that object of type B is then translated back into an object of type A, it will not be an object of type A with the same characteristics as the original object. During each translation some information can get lost. However, differences between outputs are predictable from the combination of the input and the translation used. Often the case of a pizza delivery or the construction of a search request is presented as an example of the Builder pattern. However, the GoF clearly show this pattern having a two-step process of first reading/parsing the input object and then building an output object. The pizza delivery example lacks the parsing phase and could be designed using a State pattern. The search request has only one version of output and could therefore be implemented using the Interpreter pattern. An elaborate example is provided in the resource 'builder.jar'. An example could be the change of marital status. When someone marries for the first time, the status is changed from single to married. Together with the status change a lot of rules applying to the person will change. The change can never be undone completely. When afterwards the person divorces, some rules related to the new status will apply to the person. The status divorced is not the same as the status single. The crucial relationships of this pattern are the two unidirectional aggregation relationships from respectively the AbstractReader and the AbstractConverter to the AbstractTransfer. Every implementation of the AbstractTransfer is a shared object, which each Reader and Converter for a specific type of object can handle. Every reader and converter will have its specific parsing related to the Transfer. Although compatible Readers and Converters must both understand the same type of Transfer, their interpretations are independent of each other. Every implementation of the AbstractTransfer is a standardized representation of the type of object, which serves as a contract between the readers and converters for that type of object. The relationships of readers and converters are therefore independent of each other, and the processing uses strict coupling.
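The two-step read/convert process with its shared transfer object can be sketched as follows. PersonReader, RulesConverter and PersonTransfer are concrete names of mine standing in for the AbstractReader, AbstractConverter and AbstractTransfer of the diagram.

```java
// The shared, standardized representation: the contract between
// reader and converter for this type of object.
class PersonTransfer {
    final String name;
    final String maritalStatus;
    PersonTransfer(String name, String maritalStatus) {
        this.name = name;
        this.maritalStatus = maritalStatus;
    }
}

// Step 1: read/parse the input object into the transfer.
class PersonReader {
    PersonTransfer read(String record) {
        String[] parts = record.split(";");
        return new PersonTransfer(parts[0], parts[1]);
    }
}

// Step 2: build an output object. Information (here, the name) is lost
// on the way, which makes the translation irreversible.
class RulesConverter {
    String convert(PersonTransfer t) {
        return t.maritalStatus.equals("married")
            ? "joint-taxation" : "single-taxation";
    }
}
```

Several independent converters could consume the same PersonTransfer, which is the parallel grouping noted later in the comparison with the Observer pattern.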
Proxy pattern
The definition of this pattern is that the proxy has a reference to the proxied object and must be able to initialize the proxied object. The ProxyClass can have at least two kinds of behavior, namely presenting information about the proxied object and instantiating it. To express this dual characteristic the ProxyClass implements two different interfaces: the Proxy interface to describe the behavior of the ProxyClass itself, and the Metadata interface to know how to instantiate the proxied object and to display essential information about it. It shares the Proxy interface with other types of proxy objects and the Metadata interface with the Proxied interface. It does not share the interface IProxied, in which the bulk of the concrete information of the ProxiedClass is made available. In the original definition the constraint is put forward that the proxy object should have the same interface as the proxied object. In my opinion that would be too much. The proxy must share some metadata to provide information and to be able to open the document, but it should be a lightweight object with as few methods and as little data as possible. Because of that it only needs to share a Metadata interface with the Proxied interface, and it shares with other proxies the common behavior of opening the proxied object and showing its properties. Creating proxies this way uncouples, compared to the original Proxy pattern, the proxy further from the proxied object and couples proxy objects meaningfully with one another. As a result of this dual characteristic the ProxyClass is not merely a transformational pattern, but a translational pattern. It can open the object of the ProxiedClass, but it is not mandatory that this happens. The fact that the pattern can have different output makes this a translational pattern. The most crucial relationships of this pattern are the relationships to the interfaces the ProxyClass and the IProxied interface implement, and even more the two relationships with the Metadata interface. The pattern will not work if the ProxyClass and the IProxied interface do not both share this interface. As interfacing and inheritance are both examples of strict coupling, this pattern is based on that as well. Every instance of the ProxyClass has a unique relation to a specific instance of an object of the Proxied interface, which is depicted by the unidirectional aggregation relationship the ProxyClass has with the ProxiedClass. This is a structural pattern.
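The interface split described above can be sketched as follows. Proxy, Metadata and IProxied follow the description; Document and DocumentProxy are illustrative names of mine.

```java
// Shared by proxy and proxied object: the minimal common ground.
interface Metadata {
    String title();
}

// Behavior every proxy shares: show properties, open the real object.
interface Proxy extends Metadata {
    IProxied open();
}

// The full-weight contract; the proxy deliberately does NOT implement it.
interface IProxied extends Metadata {
    String content();
}

class Document implements IProxied {
    public String title() { return "report"; }
    public String content() { return "...many pages..."; }
}

class DocumentProxy implements Proxy {
    private IProxied target; // instantiated lazily, only when opened
    public String title() { return "report"; }
    public IProxied open() {
        if (target == null) target = new Document();
        return target;
    }
}
```

The proxy can answer metadata questions without ever loading the heavy object; opening it is possible but, as stated above, not mandatory.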
Adapter pattern
This pattern is also known as the Wrapper pattern. The function of the Adapter pattern is to translate the contract of an existing class to new demands without changing its original behaviour. This change is permanent. The benefit of doing so is that the original contract of the existing class can still be used in other parts. It should be used cautiously, however, as the Adapter pattern couples two unrelated contracts permanently to each other. The Adapter is based on interfacing and uses no key. The essential relationship is obviously the unidirectional aggregation relationship between the Adapter and the Adaptee. That makes this pattern use strictly coupled processing. This is a structural pattern.
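A minimal sketch with invented names of mine; the aggregation from Adapter to Adaptee carries the whole pattern:

```java
// The existing class with its original, unchanged contract.
class LegacyPrinter {
    String printUpperCase(String text) { return text.toUpperCase(); }
}

// The new contract the rest of the application expects.
interface Printer {
    String print(String text);
}

// The Adapter permanently couples the two unrelated contracts.
class PrinterAdapter implements Printer {
    private final LegacyPrinter adaptee;
    PrinterAdapter(LegacyPrinter adaptee) { this.adaptee = adaptee; }
    public String print(String text) { return adaptee.printUpperCase(text); }
}
```

The LegacyPrinter keeps its original contract and can still be used directly elsewhere, which is exactly the benefit noted above.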
Command pattern
The Command pattern is also known as the Action or Transaction pattern. The purpose of the Command pattern is to separate the request of the sender from the response of the receiver. The request of the sender is converted into the creation of a command object, which knows how to perform the command. This pattern does not use any type of key; it is created to prevent using keys. The key relationships are the unidirectional associations from the Invoker to the Command interface and the relationship from the ConcreteCommand to the Receiver. That implies that this pattern uses loose coupling in its processing. The Command pattern is used to execute an action concerning another object than itself. There is therefore a translation from one command object to another, making this translational processing. Central to the idea of the Command pattern is that any object should adhere to the contract of having an execute method. Although the pattern can be implemented using inheritance, which might be useful when undo operations have to be performed, it is more accurate to say that this pattern uses interfacing. This is a behavioral pattern.
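The Invoker-to-Command association can be sketched in a few lines. Command and Invoker follow the description; Light is an invented receiver of mine.

```java
import java.util.ArrayList;
import java.util.List;

// The single contract every command adheres to.
interface Command {
    void execute();
}

// The receiver does the actual work.
class Light {
    final List<String> log = new ArrayList<>();
    void on() { log.add("on"); }
    void off() { log.add("off"); }
}

// The invoker knows only the Command interface, never the receiver.
class Invoker {
    void submit(Command command) { command.execute(); }
}
```

Because Command is a single-method contract, any method reference on the receiver can serve as a concrete command, which underlines that interfacing, not inheritance, carries the pattern.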
I started with the presumption that every genotype should have its own place within the classification. It turns out that this is not quite a legitimate assertion. It must be relaxed, because some places in the classification system are shared by more than one pattern. The Composite pattern shares its place with the Collection handling patterns, and the Symbolic Proxy with the Publish/Subscribe pattern. Anyhow, the original assertion does not seem to be valid. The reason is that the effect of the abstracted set of relationships does not depend on all relationships, but mostly on only one relationship, which creates the possibility that more patterns share this characteristic. The benefit of this type of classification is that one can now find out which pattern to use based upon the type of relationship that exists between the business objects. It is a different way of finding out which design pattern to choose. Using the method provided by the GoF, design patterns are chosen on functional demands. Using this classification system, an extra method is available to choose between design patterns based on the relationships between the objects needed. It does not replace the method provided by the GoF, as that method has proven its aptness throughout the years. It gives an extra possibility, and it puts the design patterns in relation to each other. For instance, it now becomes clear why the Factory and Abstract Factory pattern are so apt for creating Template and Bridge objects respectively. The creational patterns precede their creations, having a similar pattern but using tight coupling for processing instead of strict coupling. They are actually paired. Another example, somewhat less visible, is the placement of the Memento pattern in relation to the Flow pattern. Both use an object for memory, and both appear at the same place in their overview of processings.
Both the Visitor and Adapter pattern are often used to extend the behavior of a class, and they appear next to each other. The Visitor pattern, however, needs more information to perform its job, as it also has to control the combination of the agents with the classes whose behavior they are overriding, whereas the Adapter pattern will provide one wrapper around a certain interface. It was one of my silent expectations at the start of the project that the Visitor, Adapter and Decorator pattern would end up next to each other. The Visitor and Adapter pattern did, but the Decorator pattern turned out to have a different kind of processing. Still, it ends up at the same place in the transformational processing as the Adapter pattern, which it resembles more than the Visitor pattern. That the Visitor pattern uses tight coupling was a surprise to me, but nevertheless comprehensible in retrospect. Another silent expectation was that the Interpreter and Builder pattern would line up. They do not, but the Observer and Builder pattern turn out to line up together. Although unexpected, it makes sense indeed. One Reader can have different Converters. These converters are grouped in parallel, just like the Observers in the Observer pattern. But Converters do not require any direct relationship with the Reader, on the contrary. In the Observer pattern the Observers must share the same instance of the Subject, where in the Builder pattern the demand is lessened to sharing the same type of object. This makes the Builder pattern less demanding than the Observer pattern in terms of sharing objects, and in line with that it makes the output less controllable. I doubted whether the relationship between the Reader and the AbstractTransfer should be of a compositional type, but I decided against it, because the fact that the Reader creates these objects is not the essence of the relationship.
The essence is that the Reader will provide the needed implementation of the AbstractTransfer in order to provide its results to the available Converters. It is more an instantiation than a creation. The creational patterns have as their purpose to create new objects, whereas any Reader has as its purpose to provide its results. That is a big difference, reason enough to make it an aggregation relationship.
As mentioned before, the Service Locator and Dependency Injection are two highly related design patterns. Their main difference is the use of a nominal versus an ordinal key. It makes a big difference how the registry is connected to its factories, as can be expressed with the analogy of a famous paradox. Compare 'I always lie' to 'I lie': the first is the analogy of the nominal key and the latter the analogy of the ordinal key. Both patterns do their job well, the first one making one more assumption than the other. This classification system asks for more patterns to be discovered. There are still a lot of possibilities, currently having 'n.a.' as their value, which could all be patterns. Maybe I overlooked them and they already exist. If so, I would love to hear of these patterns. These patterns should be centered around a specific relationship, like a translational process using loose coupling and therefore mainly based on one or more associations. Now only the Command pattern fills a cell in this column, but there might be more patterns around, as I can imagine that there are more translational patterns. A lot more patterns have already been described, but close examination of many of these patterns reveals that they are often new names for already described patterns. One of the main strong points of the book of the GoF is that they describe a restricted set of design patterns, but do so in such a general way that the same pattern can be applied to a lot of different situations. When for every new situation a new name is created while the design pattern is already known, there is an overload of names but no clarity about which patterns are really useful. An example is the Facet pattern. That pattern describes the situation of restricting an interface to a smaller interface, most often used for security. This can be conveniently handled using the Visitor pattern. I doubt whether the Facet pattern should be considered a separate pattern.
Not every situation should be a pattern, and it could be more beneficial if the number of situations in which a pattern can be used is extended rather than creating a new name and isolating the particular situation from its related situations. Maybe it could be called a Facet implementation as part of the Visitor pattern, in order to show the extensibility of the pattern. The more situations are described in which a pattern could be useful, the deeper the understanding of patterns can grow. On the Wikipedia page the following patterns do not actually add a new design pattern to the book of the GoF:
Multiton resembles the Object Pool pattern; Lazy Initialization is more a technique or a language property than a design pattern; the Null Object pattern is not a real pattern but a concept, though a very important one indeed; the Blackboard pattern is an example of the Chain of Responsibility pattern; RAII is an important technique within some languages; and the Restorer pattern, finally, is never described.
Peter Norvig stated that design patterns do not exist in a lot of functional programming languages, and he presents an overview of how design patterns can be replaced or become invisible altogether in Dylan and Lisp. This is interesting, just like the article about functional programming, because it shows that design patterns are dependent on the programming language. Design patterns consist of two parts: the problem and the solution. I think the problem will always exist, and every language has its own ways to deal with it. Within the boundaries of the language certain types of solutions are available. When you take a look at the example of the Interpreter pattern, I think it would not be a real challenge for Peter Norvig to rewrite this code in Lisp using functions in combination with higher-order functions, and the code would probably be more compact too. He could then show this code and say 'You see, it works and the pattern has become invisible'. I would agree and reply 'and that is just my point: it has become invisible, but nevertheless it is still there', because the relationships which form the pattern will still be necessary to create the code. The pattern might have become invisible, but that does not mean it is not there. Maintainability is the most important feature of any application; without it, the application will be replaced. Visibility of what the code is doing is, for me, a very important feature. I would prefer an application with visible design patterns over an application with invisible design patterns, although I agree that both applications serve their first job well, which is that they do what they should do. Directly after that comes the question of maintenance. That is not an urgent demand when the application is quite straightforward, but the more complex a system grows, the more urgent the demand for maintainability becomes. And I think the more structure a language demands of programmers doing their job, the more 'unnecessary' lines of code they have to write, the better the next programmer, who is not known yet, may understand what is going on. That is why I prefer very visible design patterns, but that might be a matter of taste. May the Force be with you.