
Inside architecture

Write once, run everywhere

Author: Loek Bergman
Date: 19-08-2009
Version: 1.00 Final

Table of Contents
Preface
1 The environment
  Introduction
  1.1 Commitment
  1.2 The business process
  1.3 The business process model
  1.4 Translated business process model
    1.4.1 Fields and business context
  1.5 Implementation model
    1.5.1 An example using validation rules
  1.6 The architecture of design by interface
    1.6.1 Contracts
    1.6.2 Documentation
    1.6.3 Implementing the design
  1.7 Writing and deploying code
  1.8 The implemented business process
2 The mission
  Introduction
  2.1 Maintainability
  2.2 Interoperability
  2.3 Robustness
  2.4 Reusability
  2.5 Extensibility
3 The vision
  Introduction
  3.1 Respect for environmental constraints
  3.2 Layers and iterations
  3.3 Architectural coupling
    3.3.1 Definitions and delineations of coupling
    3.3.2 Processing types and coupling
  3.4 Principles for communication between systems
    3.4.1 Routing and the law of Demeter
      Routing
      Law of Demeter
      Routing and the law
    3.4.2 Exchanging data using the Principle of Privacy
      Your own constraints are private
      Generalize the data sent as much as possible
      Do not use private terms for shared data
    3.4.3 System execution and the Liskov Substitution Principle
      Contracts
      Construction phase
      Execution phase
  3.5 Inversion of Control
    3.5.1 Usage of Inversion of Control
  3.6 The banking example
    3.6.1 The example
    3.6.2 Discussing the example
      UML relationships
      The data models of the banking example
      3.6.2.1 Environmental constraints
      3.6.2.2 Layering and iteration
      3.6.2.3 Coupling
      3.6.2.4 Principles for communication
      3.6.2.5 Inversion of Control
      Design patterns
4 The primary process
  Introduction
  4.1 Definition of a design pattern
    4.1.1 Characteristics of design patterns
    4.1.2 Purposes
    4.1.3 Definition
  4.2 Classification of design patterns
    4.2.1 Pillars of classification
    4.2.2 Description of the effects
  4.3 The classification system
    4.3.1 Transformational patterns
      Memento pattern
      Prototype pattern
      Singleton pattern
      Factory pattern
      Flyweight pattern
      Abstract Factory pattern
      Template pattern
      Bridge pattern
      State pattern
      Decorator pattern
      Object Pool pattern
      Service Locator
      Dependency Injection
    4.3.2 Transportational patterns
      Flow pattern
      Collection handling pattern
      Composite pattern
      Symbolic Proxy pattern
      Publish/Subscribe pattern
      Chain of Responsibility
      Mediator pattern
      Exception handling
      Facade pattern
    4.3.3 Translational patterns
      Observer pattern
      Interpreter pattern
      Visitor pattern
      Builder pattern
      Proxy pattern
      Adapter pattern
      Command pattern
  4.4 Conclusions about the classification system

Preface
I have been working with Java in different roles for some years now. Every time I write or design code I would like to use design patterns. But which design pattern to use, why and when? When reading about those patterns the information seems so easy: just analyse the situation and start using the proper one. But at the design table I get caught by the possible layers of patterns to use. It is not as easy as using just one pattern. Most situations require the use of different patterns at the same time, and then the problem arises which one to use first. Sometimes I started to write the code right away, curious whether in the end I would have used any pattern. All these years of working with Java I had the idea that if I wanted to get to the next level of understanding I should spend some 'quality' time to study more intensively. I never did, resulting in a working situation in which, little by little, I learned about the essence of each design pattern. I always had the restless, unsatisfied feeling of working in a situation where I had to look before I could leap. Several months ago I had enough of this pressure and decided to dive into the deep: look and leap. I sacrificed all my free time to this project, resulting in this document, which ended up as a booklet of 90 pages. I hope you enjoy reading it as much as I enjoyed writing it. And when you think it will take a lot of time to read, imagine the time it took to write. I hope that it will give you ideas about how to write and design applications, as it helped me. I am a native speaker of Dutch. It has a lot of resemblances to English, but there are quite some, often subtle, differences in the grammar. Therefore it can happen that some sentences, even after I have reviewed the text three times, will falter in the eyes of a native speaker. Is it 'loose coupling' or 'loosely coupling'?
If coupling is viewed as a conjugation of a verb it is 'loosely coupling'; if coupling is used as a noun it should be 'loose coupling'. I preferred viewing coupling as a noun, because it lessens the complexity of the structure of a sentence. In Dutch it is considered good to restrict any sentence to one message. That implies that long sentences are cut into short ones and subclauses are avoided. It will give you the feeling of reading a telegram. I never read texts from native speakers of English who used this kind of style, but when it would otherwise be too complicated for me to express myself using subclauses, I caught myself switching to this typically Dutch recreation of sentences. Avoiding writing from a personal point of view might be another example of how I am influenced by my Dutch and scientific background. Next to being considered polite, this avoidance gives a freedom of thought which is not possible when I would narrow myself down by staying hitched to my own preferences. My family name is Bergman, which probably means that I had ancestors who worked in the coal mines. This might have set a maximum level on my English: very down to earth, very much like charcoal English. Charcoal English is the English used by Dutch harbour labourers talking to the English on charcoal ships. The best example is the sentence 'I always get my sin', meaning 'I always get what I want'. On the other hand, my first name 'Loek' is pronounced as 'Luke' by native speakers of English. Inspired by that analogy, my English might sound very alien to native speakers, with sentences that land nowhere and will lead you astray. Lacking a good example of charcoal English, I will apologize in Yoda style: You me forgive but understand, I do hope.

1 The environment

Introduction
Designing applications does not start directly with the design of the application itself. It starts with the exploration of the environment in which the application resides. An application does not stand on its own. It serves a purpose. The purpose of an application is to streamline the business processes covered by the application as well as possible. The functional business process is at the core of the application. Without it the application has no reason to exist. Designing applications is all about serving this purpose best. Building an application for an organization asks for a return on investment. This return on investment can be accomplished when the application serves the business process well. So the design of the application starts with understanding the business process and how it relates to an application. If successful, the application will influence how the business process is perceived.

Figure 1: Relationship between business process and an application

The architect does not work on an island, loosely coupled from his environment, but in a team within an organization. To be able to do his job he must have enough information to start with. That information is realized in several steps. If any part of this information is not met sufficiently, designing an application is like taking a long shot. I will start with describing what has to be in place before the architect can start his work. Writing about architecture cannot be complete without describing these preliminary steps. While describing these steps some basic concepts are put in perspective, giving the architect further reference points. Before an architect can start with the design of the application, the next requirements must be

met:
1. commitment from the business owners and the financial stakeholders to the concretization of the application,
2. the business process in question must be identified,
3. the process must be modeled,
4. the model must be translated into a logical model, and
5. the logical model serves as the basis for an implementation model.
After these steps of preparation
6. the design of the application can start,
7. code can be written and deployed, after which
8. the end user can use the application as a vehicle for the original business process.

1.1 Commitment
The whole project starts and ends with commitment. During the process of creating or maintaining an application, the crucial stakeholders must be convinced that the investment is worth it. Not only because during the time this project is executed another project is probably put on hold. Not only because one has to pay for the project. Not only because people have to be set free to help make the application a success. Not only because ... . There are many reasons why an organization would commit itself to the creation of an application. But the only thing that really matters is that this commitment is there and is big enough to support the application through its process of creation and implementation. No commitment, no project, no application. Commitment is about giving trust. Commitment management is explaining and proving that one is still trustworthy, despite the current problems. Having a position on the edge of organization and technique, one has to communicate in two directions. Towards the organization one talks mainly as a technician. In the communication towards the business, the major message of any message is the relationship. Information about the technique is the apparent subject. Trying to convince using technical arguments, although correct, might actually lessen the trustworthiness. It can give rise to the thought that the architect is hiding something. Towards the technicians one has to keep in mind what the business wants, and one talks mainly as a representative of the business. Although establishment of the relationship will always stay important, the meaningfulness of the content among peers shows the relative expertise and will enhance trustworthiness. For each situation one must have a different communication style to serve commitment for the project.

1.2 The business process


Next comes the identification of the business process and the reasons why an application should be built for it. An application should be a solid answer to the problem situation that has arisen around the business process. It is not the task of the architect to handle this, but it belongs to the task of the architect to know about it. The political situation in which the application must be built can have an effect on the creation process and even on what can be designed.

People live their business process. They are very tightly coupled to these processes and therefore it is very difficult for people to talk in an abstract way about 'their' process. The information collected will always invite one to create an application which has too many tightly coupled business entities. It is a common pitfall in describing a business process accurately. For the work of an architect the business process itself should be out of scope. But when having doubts about the presented model to implement, one might have to talk with the people who actually work with the business process. Having no doubts at all might be even more of a reason to talk. Only if one has a deep and profound knowledge of the actual business process could one skip visiting the people. Otherwise talking with the people who work with it is always a good idea. The business process will come alive. People can show what the business objects currently are, what the reasoning behind some procedures is, and what the critical values and ways of conduct in their jobs are. Knowing that, one can have better insight into the relationships between the business objects and therefore a better understanding of which objects are crucial and what might change in the future.

1.3 The business process model


However helpful interviewing people might be, the first piece of information for the architect is the business process model. Even when people do not work as described in the business process model, it is the official description and should therefore serve as the base of the design. In this step the business process is presented in an organization-dependent structure. It consists mainly of activity diagrams and use cases, next to a lot of documentation. The quality of the documentation might not be too reliable, but it is the official description available. At the base of the business process model are the mission and the vision. The primary process of the business process is a derivative of the mission and the vision. The mission is the reason why the business process is created. It is a common misunderstanding to specify the targets of the primary process as the mission statement. Targets are derivatives or concretizations of the mission. The mission itself can best be compared with a lifestyle for the business process. The vision or strategy is the way this lifestyle can best be materialized in the business process. The primary process is the result of the combination of the mission and the vision. It has an expected input, one or more lines of processing and, for every line of processing, expected output. As the primary process is a derivative of the mission and vision, these statements should return in the design of the primary process. An application is built around the primary process and should therefore match the mission and vision statements. Think about an application to match applicants to vacancies. When the mission is to provide applicants and companies with as many matches as possible and the applicants have to find them themselves, a website with an extended search will be convenient. Essential is, for instance, that applicants should have easy access to all vacancies. That every vacancy should be categorized with as many plausible categories as possible, to be found by any applicant. That the search criteria are well known to anyone who is looking for a job. That the search engine is highly optimized. When the mission on the other hand is to match applicants to vacancies only when the recruitment organization thinks a successful match is possible, then a totally different application will be created. Essential will then be a thorough profile of applicant, vacancy and the company offering the job. These profiles should match before contact is established between the two. Their mission would state that they deliver matches of high quality, so that the company will only spend time on applicants who are serious candidates for the job. The primary process is basically the same, but the implementation will differ completely because of the mission and the vision. In the first example flexibility is required for the presentation of the

data, depending on the special wishes of the applicant. An applicant may want to work only in the proximity of his house, or he is only looking within a certain industry. In the second example the flexibility should be in the profiles. Decisions are made by a steadily more experienced recruiter. The recruiter could look in many different ways an applicant would never think of. If a search should exist, a fuzzy search would be useful. It cannot be overestimated how important it is for these statements to be accurate. If the application does not cover the mission and vision of the business process, the application will not fulfill its purpose. The accuracy with which the business process is described in the model is the maximum result of the application to be built. When the business process model is not described accurately, the application will fail anyway. The most complex situation is when the mission, vision and primary process are actually met quite well, but not precisely enough. Then the application works more or less, but the business process owners will never be really satisfied. A lot of calls for change requests will exist and maintenance becomes a burden. Meeting the mission and vision reasonably well, but not well enough for practice, means that the change requests cannot be built logically into the application. With every change request the application will grow rapidly in complexity, until a simple change request can become so complex that it is wiser to redesign the application as a whole. A primary process might be constituted of several subprocesses. Each of these processes will have its own mission, vision, input, processing and output, which again have to be reflected in the translated business process and as a result also in the design of the application.
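The contrast between the two missions can be sketched in code: one primary process (matching applicants to vacancies) behind a single interface, with the mission deciding which implementation is wired in. This is a minimal sketch; all class names, the skill-level field and the recruiter's acceptance rule are invented for illustration, not taken from any real system.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: the same primary process behind one interface, where the
// mission decides the implementation. All names are invented for illustration.
public class MatchingExample {

    record Vacancy(String title, String category) {}
    record Applicant(String name, String desiredCategory, int skillLevel) {}

    // The primary process is the same for both missions.
    interface MatchingStrategy {
        List<Vacancy> match(Applicant applicant, List<Vacancy> vacancies);
    }

    // Mission 1: as many matches as possible; the applicant searches himself.
    static class SelfServiceSearch implements MatchingStrategy {
        public List<Vacancy> match(Applicant a, List<Vacancy> vacancies) {
            List<Vacancy> result = new ArrayList<>();
            for (Vacancy v : vacancies) {
                // Broad categorization: every vacancy in the requested
                // category should be easy to find.
                if (v.category().equalsIgnoreCase(a.desiredCategory())) {
                    result.add(v);
                }
            }
            return result;
        }
    }

    // Mission 2: only matches the recruiter considers serious are delivered.
    static class RecruiterProfileMatch implements MatchingStrategy {
        public List<Vacancy> match(Applicant a, List<Vacancy> vacancies) {
            List<Vacancy> result = new ArrayList<>();
            for (Vacancy v : vacancies) {
                // A profile check stands in for the recruiter's judgement:
                // category must match AND the applicant must be sufficiently
                // skilled (an assumed, purely illustrative rule).
                if (v.category().equalsIgnoreCase(a.desiredCategory())
                        && a.skillLevel() >= 3) {
                    result.add(v);
                }
            }
            return result;
        }
    }
}
```

The point of the sketch is that the mission does not change the signature of the primary process, only which implementation is chosen behind it.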

1.4 Translated business process model


The translated business process model (= tBPM) is the second useful piece of information for the architect and the one on which the architect primarily bases his work. The tBPM is independent of the language of the organization and of the technique used to implement the application. In this model the line from business to technique is crossed. Based on the organizational business process model, it is the model which describes the business process in a logical language. It has its own vocabulary. Whereas the business process model uses procedures, the tBPM will use the term business rule instead. Whereas in the business process model concrete actors or functions are mentioned, in the tBPM these will be abstracted to business roles. Whereas in the business process model forms and documents are described, in the translated model these can return as business objects. The only type of information which must be preserved during translation is the flow of the process. It must be preserved during any translation of the business process. The reason I call this the translated business process model is to stress the crossing of the border from organization to technique. The language used is different from the business model, and the end users, who perform the business process on a daily basis, will recognize the business process model but might not understand the tBPM. The tBPM is the first standardized abstraction from the organizational business model. All organization-specific terms are encapsulated in standard objects, relations and sequences. On the level of the tBPM the abstraction should be independent of technique. For instance, a business object in the tBPM does not have any methods. It has only fields. Actions are performed by the roles in the process. That is why procedural programming is so compelling. It is a way of creating an application close to the original process and very close to the logical language of the tBPM.
OO is an abstraction which requires an extra layer of translation. The benefit of OO in comparison to procedural programming is that it provides extra flexibility towards possible changes in the business process. But it also complicates the translation due to this same abstraction. Therefore it will require an extra translation layer before design can start.
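The tBPM vocabulary can be made concrete with a small sketch: a business object that carries only fields, a business rule stated separately from any object, and a role that performs the action. The invoice, the rule and the clerk below are invented for illustration; they are my own framing of the vocabulary, not an example from the text.

```java
import java.util.function.Predicate;

// Sketch of the tBPM vocabulary: a business object with fields only, a
// business rule stated separately, and a business role that performs the
// action. All names here are invented for illustration.
public class TbpmVocabulary {

    // Business object: only fields, no methods.
    static class Invoice {
        String customer;
        double amount;
    }

    // Business rule: a check that exists independently of any object method,
    // much as a procedure does in the business process model.
    static final Predicate<Invoice> AMOUNT_MUST_BE_POSITIVE =
            invoice -> invoice.amount > 0;

    // Business role: the actor in the process performs the action; the
    // business object does not act on itself.
    static class Clerk {
        boolean accept(Invoice invoice) {
            return AMOUNT_MUST_BE_POSITIVE.test(invoice);
        }
    }
}
```

In an OO implementation model such a rule might later move onto a class as a method; that move is exactly the extra translation layer mentioned above.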

1.4.1 Fields and business context


Before introducing the next translation layer, there are two more concepts in the tBPM which are crucial to the application and should be mentioned first. The first concept is the field, the second the business context. Fields exist only in this model. They do not exist in the application. In no application at all. A field can have a technical counterpart, but the way this counterpart is implemented can change as a result of the implementation technique used. A field in a web form or a column in a database table can both refer to the same field in the tBPM. In the application to be built there will always have to be some mapping between fields in the data input system and fields, columns, tags or whatsoever in the data storage system. This mapping already proves that the real object 'field' does not exist in any application. It resides outside the application and, to be more precise, it exists only here, in the tBPM. An in-depth discussion of fields is out of scope for this document. The other major concept introduced by the tBPM is the different business contexts within the process. A business context is a way to look at the process itself. Think about the employee who registers banking transactions, or his manager who checks at the end of the day the list of banking transactions she has to approve. Both will have a different need for information about basically the same process. They will make use of the same business objects, but react differently according to their different roles. The employee will look at all transactions, whereas the manager will only look at banking transactions of a certain amount or with peculiar behaviour. The tBPM should provide the distinction between these different business contexts. In the business process model these situations would be described using different actors; in the tBPM different roles will be used. There will be, for instance, the roles 'transaction department employee' and 'transaction department manager'.
Every business context will have its own primary process with its own business rules and roles. Within every business process, the contexts will share the same business objects.
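The mapping between a tBPM field and its technical counterparts can be sketched in code. The following is a minimal, illustrative sketch only: the registry class, the field names and the column names are all invented for the example, not taken from any real application.

```java
import java.util.Map;

public class FieldMapping {

    // One logical tBPM field, with its counterpart in each technical system.
    // The field itself lives in neither system: it exists only in this mapping.
    record Field(String tbpmName, String webFormName, String dbColumn) {}

    static final Map<String, Field> REGISTRY = Map.of(
        "transactionAmount",
        new Field("transactionAmount", "frmAmount", "TRX_AMOUNT"),
        "accountHolder",
        new Field("accountHolder", "frmHolder", "ACC_HOLDER_NAME")
    );

    // Resolve the storage column for a value entered in a web form field.
    static String columnForFormField(String webFormName) {
        return REGISTRY.values().stream()
            .filter(f -> f.webFormName().equals(webFormName))
            .map(Field::dbColumn)
            .findFirst()
            .orElseThrow(() ->
                new IllegalArgumentException("Unknown form field: " + webFormName));
    }

    public static void main(String[] args) {
        System.out.println(columnForFormField("frmAmount")); // prints TRX_AMOUNT
    }
}
```

Note that neither the web form nor the database knows the tBPM name; only the registry, standing in for the tBPM, ties the two together.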

1.5 Implementation model


The implementation model is the next model in line. It is the first product of an architect. So far, the scope of every model has been the business process. In the implementation model the scope is extended to include the integration of the business process into the application landscape. The implementation model is related to the technique or language used. The data model of the implementation model can differ from the data model of the tBPM. This can happen because, for instance, the implementation model must take into account how the application will be able to communicate with other applications, or because the technical solution might require more classes, and different classes, to implement the model successfully as compared to the tBPM. If an application is made using Java and an RDBMS, then both platforms will have their own implementation model. Both models will rely on the same tBPM; their implementations will differ. Consider the situation in which a person has more than one contract as an employee with one organization. The person is a member of the personnel. In the tBPM this might be modeled as an employee being a member of the personnel. In the Java GUI this would be implemented with an object Employee having an employee id and a personnel id. In the implementation model of the RDBMS a person object would have to be created, so that there is still a 1-1 relationship with the personnel and a 1-n relationship between a person and employee contracts. The Java GUI can stay unaware of this situation. When, for instance, in the workflow of a tBPM a notification email has to be sent, this would return

in the Java GUI. For the RDBMS, sending an email falls out of scope; it would not return in that implementation model. In the business process model the language used is the language in which the organization talks. The language of the tBPM has a strong flavor of logic. The language used in the implementation model is close to the grammar of the programming language used. If the implementation were in PL/SQL, the validation rule would be totally different, as probably would be the signature of the method used. The descriptions of the procedure and the business rule would, however, not be affected by that. From the functional viewpoint the implementation model is more abstract than the tBPM. Roles, for instance, are called by name in the tBPM, whereas in the implementation model the way to use a role is described, not the value of the role. From a technical point of view the implementation model is far more elaborate than the tBPM. Aspects like logging and exception handling are described, whereas in the tBPM they do not exist at all. In the latter the feedback to the end user is formalized, telling which feedback will be given to the end user in which situations. In the implementation model this is standardized and the concrete feedback will be returned at run time. In the implementation model the way to pass data throughout the application is described, whereas in the tBPM this is out of scope. Authentication and authorization, for instance, are subjects of the implementation model that are out of scope for the tBPM. The concrete results of these processes, however, are prescribed by the tBPM.
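The person-with-several-contracts example can be sketched as code. This is an illustrative sketch only: all class names, record components and the translation method are invented here to show how the two implementation models can represent the same tBPM fact differently.

```java
import java.util.List;

public class ImplementationModels {

    // Java GUI side: each contract is simply an Employee carrying both ids;
    // the GUI stays unaware of the underlying person/contract split.
    record Employee(int employeeId, int personnelId) {}

    // RDBMS side: an explicit Person (1-1 with the personnel)
    // owning a 1-n collection of employment contracts.
    record Contract(int employeeId) {}
    record Person(int personnelId, List<Contract> contracts) {}

    // Translate the relational shape into the flat shape the GUI expects.
    static List<Employee> toGuiView(Person p) {
        return p.contracts().stream()
            .map(c -> new Employee(c.employeeId(), p.personnelId()))
            .toList();
    }

    public static void main(String[] args) {
        Person p = new Person(7, List.of(new Contract(101), new Contract(102)));
        System.out.println(toGuiView(p)); // two Employee objects, one per contract
    }
}
```

Both shapes realize the same tBPM, but each platform's implementation model chooses the structure that suits it.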

1.5.1 An example using validation rules


Procedures in the business model are converted into business rules in the tBPM. In the implementation model business rules are converted into validation rules. I chose this term to stay close to the already used term 'input validation', which is a subtype of the more general term 'validation rule'. A business rule describes how to act in certain situations. It will always define what is a preferred situation and which actions should be avoided. Being compliant with the guidelines of a business rule means that the new situation is approved by the owner of the business process. In more technical terms: the situation remains valid. That is why 'validation rule' is an adequate term to describe the implementation of a business rule. Let us look at an example of a procedure, a business rule and a validation rule to pinpoint the differences between the three. A procedure in the business process model might be: 'when a banking transaction is above $50.000,- the six eyes procedure applies'. The business rule in the tBPM would state something like:

    if banking transaction > $50.000,-
    then approval needed by two members of the role 'transaction department employee'
         and approval needed by one member of the role 'transaction department manager'
    fi

The set of validation rules in the implementation model could be:

    if transaction.amount > Transaction.threshold
    then validateTransaction(Transaction pT, List<Employee> pE, List<String> pR)
         where pE.size() == 3
           and pE.get(0).id <> pE.get(1).id
           and pE.get(0).id <> pE.get(2).id
           and pE.get(1).id <> pE.get(2).id
           and pE.get(0).role.equals(pR.get(0))
           and pE.get(1).role.equals(pR.get(1))
           and pE.get(2).role.equals(pR.get(2))
    end if
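The six-eyes validation rule can also be shown as runnable code. The sketch below follows the spirit of the pseudocode, but the class layout, the record components and the role strings are assumptions made for the example; a real implementation would look up employees and roles elsewhere.

```java
import java.math.BigDecimal;
import java.util.List;

public class SixEyesValidator {

    record Employee(int id, String role) {}

    static final BigDecimal THRESHOLD = new BigDecimal("50000");

    // An approval set is valid when three distinct employees approve,
    // each holding the role required at that position.
    static boolean validateTransaction(BigDecimal amount,
                                       List<Employee> approvers,
                                       List<String> requiredRoles) {
        if (amount.compareTo(THRESHOLD) <= 0) return true; // rule does not apply
        if (approvers.size() != 3 || requiredRoles.size() != 3) return false;
        boolean distinct =
            approvers.stream().map(Employee::id).distinct().count() == 3;
        boolean rolesMatch = true;
        for (int i = 0; i < 3; i++) {
            rolesMatch &= approvers.get(i).role().equals(requiredRoles.get(i));
        }
        return distinct && rolesMatch;
    }

    public static void main(String[] args) {
        List<Employee> approvers = List.of(
            new Employee(1, "transaction department employee"),
            new Employee(2, "transaction department employee"),
            new Employee(3, "transaction department manager"));
        List<String> roles = List.of(
            "transaction department employee",
            "transaction department employee",
            "transaction department manager");
        System.out.println(
            validateTransaction(new BigDecimal("60000"), approvers, roles));
    }
}
```

Note the distinctness check: all three approver ids must differ pairwise, which is what the business rule's 'six eyes' intends.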

1.6 The architecture of design by interface


From the tBPM to the data model linked to the code is a huge step. In the data model the information about the different processes is lost. It shows the relationships between the different classes without any grouping that could relate them to the business processes they should cover. Next to that, the relationship between the requirements of the business processes and the implementation is missing. Both of these types of information can be covered by the implementation model. The implementation model makes it possible to group technical processes and link them to the requirements coming from the business. For each group a data model can be created, by which means the link from the business process to the data model is restored. That is very useful for managing the application platform and keeping an overview of the impact any change might have. Within the implementation model, systems perform the role of these groupings. Their characteristics are that each system performs a separate task, can function independently of any other system, requires standardized input, has one or more lines of navigation to transform the input and possibly has standardized output. These characteristics closely resemble those of a business process, which makes it an adequate translation. And just as a business process might have subprocesses, a system can have subsystems. The difference between a subprocess in the business and a subsystem is that where a subprocess in the business is often restricted to use within one business process, subsystems are meant to be reused as much as possible. There are three types of systems. The first type of system is the turning point where organizational requirements and technical implementation meet. It prescribes the steps which have to be fulfilled in order to implement the business process. Together these systems form the first layer in the application landscape.
Often each system is connected to one business process only, but that is not a requirement. Then there is the type of system which performs a separate task and can be used by more than one business process. Examples of these kinds of systems are connecting to a database, the handling of validation rules, services for the outside world or the templating of screens. The third type of system performs a task which is so general that it can merely be seen as a generally available extension for any class. These systems perform a function which is independent of any application. Logging, exception handling and security are examples of this type of system. The latter type of systems are the aspects. The second type of systems are called services and the first type interfaces. All three names are chosen to stay close to intuitive understanding; when names are intuitive, they will actually be used. The names of aspects and services are almost self-explanatory. The difference between a web service and a service is that a web service is able to hide the implementation platform totally, whereas a service is located within a certain implementation platform; they share, however, that a separate task is performed and that the implementation is hidden from the caller. I prefer to call the systems most closely related to the business process interfaces, because this meaning is actually very close to the current meanings of an interface. An interface can be used to talk about an end user experience or about a contract for a class. The way interface is used here is a combination of these two concepts. First of all it is the link between the business process and

the underlying technique. Secondly, it serves as a contract between the business process owner and the technicians. The business process owner describes which steps have to be fulfilled in which order, and a set of interfaces performs this job. The total of the requirements realized in a set of interfaces should be equal to the total of the requirements stipulated in the business process by its owner. A business process will normally be split up into several interfaces, each performing a subtask of the process. This splitting up of interfaces should be intuitive to the business process owner, as the total will sum up to the business process, and it should be done at moments in the business process at which new lines of action come up. When a car shop sells a car or leases a car there might be two interfaces, one for each process. In the end the result will be stored in a database; for that, only one interface will be created. A business process will be referenced by an indefinite number of interfaces. The number of interfaces depends on the number of distinguishable steps in the business process. There is no number which describes the ideal number of interfaces to create, or it must be 42 of course. That makes the use of interfaces different from any type of layering as an architectural design. Interfaces serve as the organizing principle for services and aspects: the interfaces organize the separate tasks to be performed, and the services and aspects do that kind of job. This will greatly reduce the complexity of any application platform, as routes through otherwise loosely coupled systems can be traced back. The architecture of layering is a procedural organization principle applied to an object oriented platform. It defines several fixed steps to be created, which is a procedural way to organize. Organizing loosely coupled systems using the interface architecture instead creates the flexibility to group systems on demand.
A system can now be designed to perform a certain task independent of the place from which it will be called. That was impossible in the architecture of layering. Using interfaces as the organizational principle can help the system focus on what it is designed for. The linking code is moved out of services and aspects and concentrated in interfaces. Adapting to new requirements of a business process might then be restricted to a new grouping of systems. A service then really becomes a service. A SOA architecture does not have an inherent organizational principle like layering. Use of a SOA architecture creates the opportunity to create systems designed for one purpose only, like the services in the interface architecture. But in a SOA architecture this comes at the price of losing an organizational principle to connect services together. Although services can be managed by service contracts, the information about which business processes need a service is lost. The interface architecture combines the advantage of layering, having an organizational principle for calls to services, with the advantage of SOA, being able to use loosely coupled services which are designed independently of the ordering in any process. The next diagram depicts an example of how interfaces, services and aspects can work together. At the end of chapter 3 another example is presented.

Figure 2: relations between interfaces, services and aspects


The yellow hexagons are the architectural interfaces, the blue hexagons the services and the pink ones the aspects. The numbers next to the lines coming from the interfaces indicate the ordering of the calls. Interface A, for instance, first calls two services before it calls upon the other two services. Interface C calls interface D before proceeding with the call to a service in line three. Some services in this diagram are not used by this line of interfaces or other services. They might be used by other systems, or not; that cannot be told from the point of view of this set of interfaces.
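The orchestrating role of an architectural interface can be sketched in a few lines of code. Everything below is invented for illustration: the `Service` interface, the three example services and the call order stand in for the numbered lines in the diagram, not for any real system.

```java
import java.util.ArrayList;
import java.util.List;

public class InterfaceOrchestration {

    // A service performs one task and knows nothing about other services.
    interface Service { String perform(String input); }

    static final Service VALIDATE = in -> in + " -> validated";
    static final Service STORE    = in -> in + " -> stored";
    static final Service NOTIFY   = in -> in + " -> notified";

    // The architectural interface fixes the call order, like the
    // numbered lines coming from interface A in the diagram.
    static String interfaceA(String input, List<String> trace) {
        String result = input;
        for (Service s : List.of(VALIDATE, STORE, NOTIFY)) {
            result = s.perform(result);
            trace.add(result);
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> trace = new ArrayList<>();
        System.out.println(interfaceA("transaction", trace));
    }
}
```

The services stay loosely coupled; only the interface carries the knowledge of the route through them, which is exactly the organizing role described above.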

1.6.1 Contracts
The benefit of this architecture is clearly that systems can be grouped on demand, services can be designed for one purpose and they can all be created independently of one another. That benefit is at the same time its major drawback, but the drawback is inevitable when working with systems which do not have any fixed position in an application platform. Designing loosely coupled systems implies that change management can spin out of control when this problem is not taken seriously. At the end of the day the business must rely on the reliability of the overall application landscape. Loosely coupled systems are a beautiful way to design applications, but for the sake of continuity the business demands that hard coded paths be followed. When an employee of a bank enters a transaction into the system, the business must get some predefined results. In reports on transactions this transaction must show up, for instance. It is unacceptable that a business process, once started, would end halfway because there is a leak in the transfer from one system to the other. The application landscape is one big service to the business. It is the business which brings in the money that pays for the development of the application landscape. Not being able to meet the requirements of the business is not an option. On the other hand, a sophisticated system will give the business a competitive advantage and a higher return on investment, because the integration of a new application into the total landscape can be accomplished in a more standardized way and therefore in a shorter time. That is exactly what Java and other OO languages promise: write once, run anywhere. That is what the business would like to have. But change management can become very complicated when the interdependencies are not registered well. If change management becomes a burden, the profit of

having loosely coupled systems changes into a real nightmare and the only solution left is to return to a design of strictly coupled systems. Dependency hell has two faces. One is having to change code every time there is a change. The other face of dependency hell is not knowing what will be affected by a change. There is no architect in the world, I guess, who would like to be forced to return to strictly coupled systems. But the danger of having loosely coupled systems lies in preserving the once laid out routes for applications while adding new ones. It is the danger of changing the behaviour of a system without knowing the effect it will have on other systems. Because systems act independently of each other, a change in one system will not show where down the line another system might fail. Systems, however, have to be changed. There is a variety of reasons why they should: think about upgrading technology, improving performance or new business constraints, when the business process changes or a different database system will be used by the organization. While some changes might be kept local, other changes will have an impact throughout the whole application landscape. Both types of changes must be met; the organization must be ready for them. That can only be accomplished when the organization has registered, during any change of a system within the application landscape, which systems it relies upon and which systems rely on the system in question. To use loosely coupled systems one needs a tight organizational coupling of systems. This registration of organizational dependencies between systems is managed by contracts. In contracts these dependencies are registered, together with the ownership of every system and which business process is using this system. It is the responsibility of the architect to register these dependencies. He has the knowledge to perform this crucial duty for the organization.
System administrators should force the architects to hand over the full list of contracts before allowing any new system to be deployed. Testing is not a solution for this problem, however favorable this would be. Testing can hopefully predict problems, but the problem with testing is that it cannot know all possible errors which might occur because of a change in any system. And when a system unexpectedly fails due to a change in some other system, one would like to know which business processes are influenced by this failure. Registration of dependencies using contracts can give answers to these questions. In a contract several dependencies must be registered. The first dependency is the owner of the process. The owner is responsible for the maintenance of the contract and for how to implement any changes. The contract owner of an interface is the business process owner. The business process owner is responsible for the contract of the interface, because that person has the knowledge of which functionalities should be addressed by the interface. In practice he might delegate this to the team of architects, but it is in the interest of the business process that an interface performs a specific set of functionalities. The total set of functionalities should resemble the business process as a whole. Some of these functionalities are implemented on different platforms. The only person who has the overview of how the functionalities are implemented across different platforms is the business process owner. This function will often be performed by a domain architect, who has a solid technical background and is able to talk to different kinds of technical teams. The contract owners of services and aspects are the team of architects and the IT department. By 'IT department' is meant the organization which is in control of the deployment of the application into production. It does not imply that the internal IT department of the organization is responsible.
Services and aspects belong to a specific application platform, and the persons responsible for the deployment of these systems should therefore be made responsible for their contracts. A contract should note which interfaces and which services are addressed. It must be traceable which implementation of an interface or service is used. The exact class does

not have to be addressed, as this would put a burden on the maintenance of the contract itself, but at least the key by which the other system is called should be. Otherwise the added value of describing which system is used is too little. If it is not traceable which system is used by any other system, then maintenance will become very difficult and can be compared to throwing darts blindfolded. Aspects need not be registered in each contract in which they are used. Their function is so general that they dictate to all other systems how they are to be used.
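A contract registration like the one described can be sketched as a small data structure. The sketch below is illustrative only: the record fields, the system keys and the example contracts are all assumptions made for the example.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class ContractRegistry {

    // A contract records, per system: its owner, the business process
    // using it, and the keys (not the exact classes) of the systems it calls.
    record Contract(String systemKey, String owner,
                    String businessProcess, Set<String> dependsOn) {}

    static final List<Contract> CONTRACTS = List.of(
        new Contract("intf.transactions", "transaction process owner",
                     "banking transactions",
                     Set.of("svc.validation", "svc.database")),
        new Contract("intf.reporting", "reporting process owner",
                     "transaction reporting",
                     Set.of("svc.database"))
    );

    // Impact analysis: which business processes rely directly on a system?
    static Set<String> processesDependingOn(String systemKey) {
        return CONTRACTS.stream()
            .filter(c -> c.dependsOn().contains(systemKey))
            .map(Contract::businessProcess)
            .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        // A change to the database service affects both registered processes.
        System.out.println(processesDependingOn("svc.database"));
    }
}
```

With such a registry, the question "which business processes are influenced when this service fails?" becomes a lookup rather than a guess.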

1.6.2 Documentation
Furthermore, the contract should specify the functional specifications which are concretized by the system. That is most important for interfaces, as they are connected to the business process. They must describe the functional specifications elaborately, as this gives the opportunity to control if and where the specifications are met. The example of the validation rule presented in section 1.5.1 should be part of the contract of the interface. Working out the functional specifications this thoroughly by contract lets any contract also serve as the documentation of the business process. Using contracts of interfaces to document the business process has several more advantages. The first one is that the location of the documentation is the same as the implementation of the functionalities. The second is that the documentation is performed by the person who designs the system with these functionalities in mind, which is the architect. The third is that the documentation will change in accordance with the change of functionalities and not with the technical implementation. As the services are loosely coupled from the interfaces, it should be unimportant to the business process owner how the job is done, as long as it is done. When the technical implementation changes but the functional requirements do not, then the documentation of the business process does not need any change either. The process stays the same and so should the documentation. The fourth is that developers are relieved from documentation. That has in turn two positive side effects. Developers tend not to write and update documentation. It is not their main responsibility and they are not directly related to the business process. Often they will not get enough time to document their work properly, and they have to make a big leap from their daily focus to what their achievements mean for the business process. That can be quite hard to do, which many times results in poorly documented applications.
They should, however, document exceptional solutions or unwanted dependencies to inform their colleagues about the implementation in question. That is in line with their daily focus and should be logical to perform. The other positive side effect concerns maintenance. When the documentation is not technically inclined, any developer who has to implement a change must be able to understand how the functional requirements are met by the technical implementation. This can serve as a test of whether a developer is capable of understanding this translation process. When the developer understands this translation from functional requirement to technical implementation, he will adapt the implementation to the new situation with the desired business result in mind. The last advantage of restricting the documentation to the functional requirements is that the implementation can be checked on whether it does what it should do. When the documentation is focused on describing the technical implementation, it reads more or less like 'this is what we do' and indeed, that is what happens. But that does not explain whether it should be done and if yes, why. The 'what' is documented, but that could already be found out by reading the code. The 'why' is much more important, as this is what will return in discussions with the business. The 'why' refers to the validity of the code, and that is what a business process owner needs to know. The 'what' is about the reliability of the code. The business process owner will believe that.


1.6.3 Implementing the design


After the implementation model has been created, the architect can start with the actual design of the application. He now knows which systems have to be created specifically for this application. He knows how to extend already existing services to integrate the new application. He can find out what the relationships between the different systems will be and therefore know which types of connections between systems have to be created. He knows how to connect to already existing services and aspects. He can organize the systems into interfaces. This step in the creation of the application is the main subject of this document. The rest of this document can be read as the in-depth exploration of this section.

1.7 Writing and deploying code


This subject is a world of its own, but out of scope now. Worth mentioning, though, is that the contracts should be present, which is the most important task of the architect here. In the phase of writing code, the way the work is organized sets the responsibilities of the architect. When the agile approach is used, the role of the architect is minimized, as the developers are given a lot of responsibilities themselves. I adhere to this principle, because it creates a natural career path for developers. Some developers will be inspired to become senior developers, others to become the new architects, and still others, for instance, project managers or business consultants. I see it as very healthy for an organization to have an inspiring working environment for its employees.

1.8 The implemented business process


The circle has returned to its starting point. The future of the business process will be influenced by the way it is implemented now. It will start influencing people in their perception of what is really important in their organization; it can give rise to new wishes or demands; it can help the business to compete successfully in the outside world, or the opposite.


Figure 3: from business process to business process


2 The mission

Introduction
The purpose of an application is first and above all to serve the business process. If the application manages that, its first and most important purpose is achieved. But that is not the first principle of application design. Application design has purposes which serve the goal on a more abstract level, namely serving the return on investment for the organization as a whole. Organizations not only need the application to do its main job, but also need the application to be able to communicate with other applications. And the knowledge of the technical application must be sharable within a team, best practices must be used and work standardized. The use of standardization can also raise the level of complexity which can be covered in applications, because the wheel does not have to be reinvented all the time. The purposes of designing applications are therefore:

1. maintainability,
2. interoperability,
3. robustness,
4. reusability, and
5. extensibility.

These purposes are high level purposes. A lot of concrete purposes can be derived from them. The goal of a purpose is not to be too specific, so that it can be applied to a lot of different situations. The reason is that a high level purpose can be used as a criterion in many different situations. Validity requires being more specific and therefore constrains the domain in which a purpose can be applied. That creates the need for endless lists of valuable purposes. The maintainability, interoperability, robustness and reusability of such a list is limited. It is prone to changes and new insights, and can lack continuity. A list of purposes should be quite abstract in order to avoid these pitfalls and contain purposes that are valid in any circumstance. These purposes appear in order of importance. When an application is not maintainable, the rest does not matter. Maintainability is about the here and now of the application itself. Interoperability is about the communication with its current environment.
Often the interoperability of an application will suffer from the maintainability of the application. The validity of the data, which is a conditio sine qua non1 for exchanging information, can hardly be trusted to be high when the application is not very maintainable. When it is not well known how the application works, how can the data it delivers be trusted, and how can the application be expected to deliver valid data for exchange with other systems? The usefulness of an application will suffer severely when its interoperability demands are not met. Robustness is about the vulnerability of the application to expected changes in the future. It is therefore considered less important than the first two purposes, as they deal with the current application. That is the one the organization has to work with. However, when the application is not robust to change, the maintainability of the application might become a burden. In the course of maintenance the robustness of an application can change; it is not a fixed given. The reusability of the application is about how useful its components can be for other applications. If MoSCoW were applied to reusability, it would get a C. The effectiveness of the application is
1 Conditio sine qua non means 'a condition without which it could not be'. To stress the importance of the validity of the data, the more formal expression is used.


not measured by its reusability. It is a desirable side effect. The extensibility of an application is an appendix to robustness. Where robustness is about the internal extensibility of the application, extensibility is about its external extensibility. At some point it can be very important, but many times it does not play a role in the evaluation of an application. In general it would get the W from MoSCoW.

2.1 Maintainability
Maintainability is by far the most important feature of an application. An application which lacks maintainability is very expensive and is, by definition, not designed well. If the application is well designed but not considered maintainable, then the organization lacks sufficient support for the application platform. Actually, that is one of the benchmarks in designing applications. What use is it to employ a lot of complex frameworks when the developing team consists of people not able to handle them? More important than the use of design patterns is using a complexity in the design which can be successfully handled by the team that is responsible for maintenance. The maintainability of an application can be enhanced in many ways, such as using best practices, standardization, design patterns and coding guidelines, and by taking care of the people who perform the maintenance: offering training opportunities, giving responsibility, paying good wages. Maintainability is about how an application is doing something. When it is not clear how an application performs its tasks, it cannot be said to perform them trustworthily: the organization does not know exactly what the application is doing.

2.2 Interoperability
Interoperability is defined as the ability of two or more systems or components to exchange information and to use the information that has been exchanged. Exchanging data is crucial in contemporary organizations to the usability of an application. Seldom is an application used without its results being integrated with other applications. Interoperability is based on what the application is supposed to do. Out of what it does results data which further down in the organization is used as benchmarks for the process(es) the application supports. In the design, the validity and the reliability of the data should be taken into account. The technical exchange of data used to be a problem in software. With XML available nowadays, it is no longer a real problem. It would have been a very interesting discussion when data had to be exchanged on the level of operating systems or networking protocols and the like. One of the great advantages of modern programming languages is that this no longer has to be addressed; it is settled. The focus for interoperability is on functional exchange. The reliability of the data is ensured by some kind of transaction mechanism; a transaction is not restricted to a database system. Without reliability of the data, the validity of the data cannot be assured. But the validity is what really counts for interoperability. Validity is accomplished when differences in data values reflect differences in real world values in a predictable way. That condition can only be met when the definition of the data in the application is comparable with the definition of the business objects. When definitions of business objects change, the definitions of the data change accordingly, even when the data itself is not changed. That is because after the change in the definition of a business object, its data is evaluated differently. After any change in the definition of a business object, a conversion should be considered. Interoperability is the most difficult purpose to hold. Even when

the data has virtually become totally meaningless the application will still work and produce reliable results. The only way this can be ensured is to test on a regular basis if input is still conform the definition of the business objects. The definition of the business objects is subject to the mission of the business process. If data is valid can only be checked upon the mission statement. The validity of the data is out of control of the technical model. Therefore should the design of data be constructed having this lack of control in mind. The data in the application serves the business process. If it does not provide sufficient ways to deal with changes in the functional definitions of its data objects it will end up storing incompatible values within the data objects. For the majority of data this is not a very restrictive purpose. Most of the data does not have uncontrollable change of definitions. If a grocery store sells one banana or a spray of bananas is not that uncontrollable. But applications which have to deal with laws or with guidance of people or public services will have to deal with this problem actively. The design how to store data, how to aggregate them and how to convert them should be designed carefully. It will be a certainty that the definition of these business objects will change significantly over time and it will be very important for the organization that these differences can be met. An application does not have to be robust to the change in definition as these changes are most often unpredictable. It should be robust to work with data, whose definition might be changed over time. That is the part which should be accounted for in the design.
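Such a regular test of input against the definition of the business objects could be sketched as follows. This is only an illustration; the record, the rules and the 'vacancy' example are invented for this sketch and do not come from the text.

```java
// Sketch: a business definition modelled as a named, versioned set of rules,
// so that input can be retested whenever the definition changes.
import java.util.List;
import java.util.function.Predicate;

record BusinessDefinition<T>(String name, int version, List<Predicate<T>> rules) {

    // Data is valid only if every rule of the current definition holds.
    boolean isValid(T value) {
        return rules.stream().allMatch(rule -> rule.test(value));
    }
}

public class ValidityCheck {
    public static void main(String[] args) {
        // Hypothetical example: a 'vacancy' must have a non-empty title
        // and a positive salary under the current definition.
        record Vacancy(String title, int salary) {}

        List<Predicate<Vacancy>> rules =
                List.of(v -> !v.title().isBlank(), v -> v.salary() > 0);
        BusinessDefinition<Vacancy> def = new BusinessDefinition<>("vacancy", 2, rules);

        System.out.println(def.isValid(new Vacancy("Architect", 5000))); // true
        System.out.println(def.isValid(new Vacancy("", 5000)));          // false
    }
}
```

When the business definition changes, only the list of rules has to be replaced and the stored data rerun against it, which is exactly the regular test the text asks for.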

2.3 Robustness
Robustness is the demand that the application is well designed enough to handle expected changes in the (business) process with as little effort as possible. Every application must have some basic assumptions about what is essential for the identity of the process. A process has some input, a transformation and an output. As long as these basic assumptions are met, the application should not need to be redesigned. Robustness stands or falls with the success with which the vision of the business process model is translated into the technical design. Robustness can be further characterized by the open/closed principle, which states that software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification. The robustness of a design is the combination of the open/closed principle for the software involved with the identifying objects of the process it is involved in. Robustness of a design can be applied to the design of any type of system, a workflow engine for instance. Basic questions to arrive at a robust design are:

1. What are the presumptions of the design?
2. Are they coherent with the current demands of the process?
3. What are the expected changes in the future?
4. How important will these expected changes be for the organization?
5. Where should they influence the design of the current system?
6. What will be the impact of not taking these expected changes into account during design?
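The open/closed principle mentioned above can be made concrete with a minimal Java sketch. All names are invented for the example: the Checkout class is closed for modification, while new discount policies extend its behaviour from the outside.

```java
// Open/closed principle: new behaviour is added by new implementations
// of the interface, never by editing the existing class.
import java.util.List;

interface DiscountPolicy {
    double apply(double amount);
}

class Checkout {
    private final List<DiscountPolicy> policies;

    Checkout(List<DiscountPolicy> policies) { this.policies = policies; }

    // Adding a new kind of discount never requires editing this method.
    double total(double amount) {
        for (DiscountPolicy p : policies) {
            amount = p.apply(amount);
        }
        return amount;
    }
}

public class OpenClosedDemo {
    public static void main(String[] args) {
        DiscountPolicy seasonal = amount -> amount * 0.9; // 10% off
        DiscountPolicy loyalty  = amount -> amount - 5.0; // flat reduction

        Checkout checkout = new Checkout(List.of(seasonal, loyalty));
        System.out.println(checkout.total(100.0)); // 85.0
    }
}
```

A new policy for an expected change, say a bulk discount, is one new class; the questions above then only have to decide whether the extension point is in the right place.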

This is called the change request profile of the business process. Every business process has its characteristic change request profile for its business objects. Likewise, business objects in different processes can have different change request profiles. Having different change request profiles implies that the business objects are different.

Robustness is not an isolated purpose. Often the robustness of a design will be influenced by the time given for the design and build phase. Using iterations might give a better opportunity to make the design more robust, as the application and the thoughts of the customer about the application then mature in cohesion.

2.4 Reusability
Where the three previous purposes concentrated on the application, this purpose has its focus on the components of the application. The more reusable components are used, the simpler the application will be for the maintenance team, the bigger the ROI on the original application, the easier it will be to create meaningful test code, and the lower the likelihood of bugs. Just as business objects are marked by their change request profile, so is the reusability of a component. A component can only be reused somewhere else in the application landscape when the expected changes serving the new system are the same as in the original. If the expected changes are markedly different, reusing the component will become a burden. In the example of the applicants and vacancies two different implementations of the same business process were provided. The business object 'vacancy' in the two processes is incompatible between them, although they share the name. Both business objects can be expected to fulfill different change requests, which will inevitably lead to incompatible features. On the down side there is a bigger chance that specific demands are more difficult to meet, change in often used components is virtually impossible, it must be clear which systems are used where, deploying becomes more tedious, a bug can be much harder to solve, and upgrading an application having a lot of reused components can be more complex. Reusability demands organizational administration to handle its dependencies. Anyhow, the professional deformation of an architect requires, insists and demands adherence to this principle. Therefore it is left out of any discussion and considered a very good purpose, although any application could work without it.

2.5 Extensibility
With the purpose of extensibility the circle closes itself. It started with the maintenance of the current functionalities in the application and ends with delivering hooks for external functional extensions to the application. Opposite to both ends of this circle stands robustness, which is defined as the possibility to provide extensions to the current functionalities of the application. Extensibility is especially important for systems which deliver common functionalities for unpredictable implementations. Aspects, jdbc jars and other libraries are examples for which extensibility is a requirement. It is also a core feature for frameworks. When interfaces can be used to enter the system, extensibility is delivered by contract. How else could loosely coupled systems be addressed? Every time a loosely coupled system is used by another system, extensibility has been accomplished. To give extensibility a distinctive definition therefore requires a strict description. Here extensibility will be restricted to delivering hooks on business applications. Even then it can be obtained easily through the technique of interfaces. The distinction is made in the contract stated by the interface. To evaluate whether an application fulfills the purpose of extensibility is to look at the contracts it offers, while having the business process in mind. How well can it integrate new business objects? Does the whole application need to be redesigned or can it be done quite straightforwardly? Questions like how a new view can be added to the user interface are in this respect evaluations of the robustness of the application. That is an internal extension, because a user interface with views already exists in general. Adding an interface for mobile phones, for instance, can be done fairly easily if the Observer pattern was used for the creation of the interfaces. That is therefore a question about robustness as well. Extensibility is therefore quite an abstract matter to discuss here. It should be evaluated at the time a new extension to the business process has to be integrated in the application.
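The mobile-view case mentioned above can be sketched with the Observer pattern. The class and method names are invented for the illustration; the point is that the publisher offers a contract and never has to be redesigned for a new view.

```java
// Extensibility by contract: any new view implements ReportView and
// registers itself; the publisher stays untouched.
import java.util.ArrayList;
import java.util.List;

interface ReportView {                 // the contract offered to extensions
    void render(String report);
}

class ReportPublisher {
    private final List<ReportView> views = new ArrayList<>();

    void register(ReportView view) { views.add(view); }

    void publish(String report) {
        for (ReportView view : views) {
            view.render(report);
        }
    }
}

public class ExtensibilityDemo {
    public static void main(String[] args) {
        ReportPublisher publisher = new ReportPublisher();
        publisher.register(r -> System.out.println("desktop: " + r));
        // A mobile view is a pure extension: no redesign of the publisher.
        publisher.register(r -> System.out.println("mobile: " + r));
        publisher.publish("quarterly figures");
    }
}
```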


3 The vision

Introduction
In this chapter I will provide some design principles. Use of these principles will help the architect to design an application. The principles are: 1. respect for environmental constraints, 2. architectural layering and iteration, 3. architectural coupling, 4. data exchange, and finally 5. Inversion of Control. The first one is not a real design principle, but it can restrict the freedom of the architect in how to design and should not be forgotten. The ordering of these principles is from a high abstraction level to a low abstraction level, except for Inversion of Control. The reason is that Inversion of Control is a two-faced principle, closely related to architectural layering as well as to the implementation of code when it is used to describe frameworks using Dependency Injection. Out of respect for the practical value of that second face of Inversion of Control it is put at the end of the chapter.

3.1 Respect for environmental constraints


Although this is not truly a design principle, it is very important to consider how environmental constraints affect the design process. Some of these constraints exist because of legacy systems or hardware restrictions, some can be caused by the available expertise, and sometimes it can happen that the functional requirements are so complex that a good design will never work. An example of dealing with an objective environmental constraint is the use of numbers for exception handling. In the old days, when memory footprint was a real design issue, the use of text to categorize exceptions would be too expensive. The use of digits was far better for performance. An organizational example is the expertise of the development team, as stated when writing about the purpose of maintainability. Throughout this document examples of these environmental constraints can be found. It is the challenge for the architect to deal with them over and over again.

3.2 Layers and iterations


The most important design principle I would suggest is to design step by step, layer by layer. In chess there exists for each opening a vast body of experience called theory, which is constructed out of matches between very talented people. Only the games of the truly talented are considered to contribute to the theory of chess. But still, after some ten or fifteen moves the opening theory can arrive at what is called a 'critical position'. That is a position from which several new lines of investigation can be mentioned, but about which no conclusive judgment can be made yet.

Arriving at a critical position, one has to reevaluate all judgments so far to find out if they are still applicable. The point is that at every critical position one has to start all over again. As it is a best practice in chess to calculate until a critical position is met, it should be a best practice for the design of applications as well. The first model which requires layering is the implementation model. The previous models are descriptions of isolated processes given as input to the architect. In the construction of the implementation model the architect must not only design the business process, but also take into account how to connect to other processes: how to integrate the different business contexts into interfaces on the same process. And he can be forced to translate the tBPM objects into different objects, as described previously, because of requirements coming from other processes. The step from a business model to a design is often too big to handle at once. It is better to first clarify which steps can be discovered during the process, which objects will serve as input, which objects serve as output and how objects can be identified positively during the transformation. If too many questions are handled at the same time, the number of possible solutions makes it too hard to come to decisions. But when the unraveling of the process is done like peeling an onion, the questions to be answered can be grasped successfully. The advice of the King of Hearts in Lewis Carroll's Alice in Wonderland to the White Rabbit bears great wisdom in this respect: 'Begin at the beginning,' the King said, very gravely, 'and go on till you come to the end: then stop.' Every time an input, a transformation and an output are untangled, the implementation model gets its shape. Using this technique of layering can cause the design to stay too close to the original business model. That would imply that the design components cannot be reused for some other process, which again would imply that the systems are not really independent. For that, iteration comes into play. Redesigning the model again with the previous knowledge in mind can help to generalize the design even further and make it less entangled with the business process.

3.3 Architectural coupling


Coupling is a central concept in ICT. Wikipedia has an excellent article about it, describing several ways code can be coupled unwantedly. The information in that article can be used as a checklist to improve any implementation. The coupling described in the Wikipedia article is very useful for developers. For the work of an architect, however, the article presents little practical value. Both developers and architects aim at maximizing loose coupling. They accomplish this aim in different ways, because their fields of work are different. The main concern for an architect is to design an application in which systems are maximally loosely coupled. The business process requires predictable paths to be executed. It is not an option that systems have unpredictable output. The challenge for an architect is to create systems that fulfill the functional demands of a business process without becoming too vulnerable to changes in these functional demands. Loose coupling for an architect therefore requires a different definition than the definition needed by developers. Both definitions should be closely related to each other, as they strive at more or less the same goal. Judging whether the coupling is according to standards is different. Every design pattern is a way to couple classes. The Observer pattern, for instance, is an example of common coupling; it uses common coupling explicitly. This does not mean that the Observer pattern is judged as bad. It is a commonly used design pattern and very useful. That is, as long as its contract is met. If the Observer pattern were used to convert documents, for instance, every conversion would be independent of any other conversion. The code to convert a document of type A into a document of type B or C would have to be rewritten all over again. Needless to say, this is bad design.

The Observer pattern is in itself good, but the use of the Observer pattern to convert documents is not, with respect to its restrictions. The question for an architect of how to maximize loose coupling is a different one than for development. The question for an architect is whether the chosen design pattern or combination of systems is apt to meet the set of functional demands. That is, do the implementation and the functional demands assume a similar change pattern? In an article in SOA Magazine a new form of coupling was introduced: unintended coupling. This type of coupling has a very high ad hoc level and it surely is not a design principle, but it is worth mentioning here. The term can serve as a reflective mechanism to control the design. What are the couplings in the design? Which dependencies do they pose? Is that acceptable? It serves as a good reflection mechanism to find out if the created relationships are the couplings planned for. As technical coupling is to be minimized, the only coupling left should be those relationships between classes and systems which are necessary to fulfill the requirements. Coupling is unavoidable and wanted. Without coupling no business process could be implemented. No data could be entered into a system, transferred by a system, stored and retrieved in reports without a decent coupling. If coupling were not unavoidable and wanted, the work of an architect would be much easier. It is this unavoidable necessity, together with the never ending aim to minimize coupling, which makes the architectural world go round. From now on the term coupling refers to architectural coupling only. There are four distinct forms of coupling: 1. tight coupling, 2. strict coupling, 3. loose coupling, and 4. aspect coupling.

3.3.1 Definitions and delineations of coupling


Coupling is said to be tight when the execution of the program is inevitably linked to functional requirements. The most obvious example happens during logging in. Logging in must be done before any other action can take place. The place where this part of the program is executed is unavoidable and wanted. Authentication and authorization are always tightly coupled to functional requirements. A characteristic of tightly coupled code is that it is either good or not; there is no in between. Consider the validation of a zip code. The moment and place where it has to be performed are prescribed, and the possible results are well defined too. The implementation of the validation of a zip code can be checked using a set of assertions. It is easily testable. It is strict coupling when differences in input can predict differences in output. Editing an existing document and then storing the result in a database will have the effect that an update statement is used, not an insert statement. Placing the right values in the right columns in the right tables is tight coupling. The reliability of the process is strict coupling. A bug in a process providing strict coupling is harder to find than in a process providing tight coupling. It can only be found using errors coming from tight coupling. Bugs in strict coupling processes consist of logical errors: that, for instance, not all required data is presented in a report, or that the result of an aggregation is wrong. When data comes up in the wrong place in a report, that is a tight coupling error. Strict coupling is more difficult to test, as knowledge about the where and why of the implementation is needed. To find out if strict coupling errors exist, documentation is necessary. As long as the code compiles and no self-explaining errors are made, strict coupling can only be tested using the documentation manual.

I call errors self-explaining when it is obvious that the logic in the code makes assumptions which are not accounted for in the business process. These errors are typical for strict coupling processes. It is considered loose coupling when, from the viewpoint of the input, no valid assumptions can be made about the concretization of the output nor about the path traversed to get the output. That is when the control is handed over to the other system. It is considered double loose coupling when the reverse can be stated too. Please take a look at the next lines of code:

    if (obj != null) {
        obj = receiver.returnObject(obj);
    }

There appears to be a thin line of coupling between these two systems. The request will only be sent when the object is not null. Making no assumptions about the other system would imply that the null pointer exception must be handled by the receiving system; therefore the if statement should be removed and the code should be like this:

    obj = receiver.returnObject(obj);

Although the first system does not make any assumption about the second system anymore, the second system will act as the sender when returning data. From the point of view of the second system, it has to make an assumption about how the first system will respond to null pointer exceptions. The need for this assumption lies in the absence of any delineation by the first system of when its data is ready to be sent. When the second system will only get data from the first system if the first system states that the data is valid to send, then the second system can handle the received object independently of that system. The lines of code would then for both become:

    if (sender.validObject(obj)) {
        obj = receiver.returnObject(obj);
    }

When the sender validates the object before it is handed to the receiver, the two systems can be said to be double loosely coupled with respect to this connection. Aspect coupling is a special type of coupling. On the one hand one would call it a type of loose coupling.
Reusability, which is always an indication of loose coupling, is very high. On the other hand, the caller of the aspect can exactly predict what the result of the call to the aspect will be. From that perspective it is strict coupling. Even more, an aspect can put restrictions on how it will be used. Therefore it can have tight coupling features as well. Because it can be used by any system, it will put demands on how it is used by all other systems. Aspect coupling is loosely coupled from the callee point of view, but strictly coupled from the caller point of view. Aspects and libraries share this type of coupling. Common libraries, like jdbc drivers or libraries for mime handling, can be viewed as platform wide aspects. Aspect coupling is unilaterally defined by the callee side. In a business process tight coupling is the standard. An actor must behave according to the demands of the business process. Procedures and guidelines can have strict coupling. The organizational culture, norms and values can be considered loose coupling. The way people have to register their working time or what to do when fire breaks out can be interpreted as examples of aspect coupling. Only the tight and strict coupling behaviour of business processes is translated into the business process model. In the tBPM this tight and strict behaviour is described using logic. In the implementation model loose coupling and aspect coupling reappear. Again, they have nothing to do with the actual business process. Loose coupling and aspect coupling are more related to the features of the platform in which they are constructed. Tight and strict coupled features of a business process are less dependent on the platform used.
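Aspect coupling can be made tangible without any AOP framework. The following rough sketch, with invented names, wraps an arbitrary function in a logging "aspect": the aspect knows nothing about its callers (loose coupling from the callee side), yet every caller can predict its result exactly (strict coupling from the caller side).

```java
// A logging aspect as a plain function wrapper: defined once by the
// callee side, usable by every caller on the callee's own terms.
import java.util.function.Function;

public class AspectDemo {

    static <A, B> Function<A, B> logged(String name, Function<A, B> f) {
        return input -> {
            System.out.println(name + " called with: " + input);
            B result = f.apply(input);
            System.out.println(name + " returned: " + result);
            return result;
        };
    }

    public static void main(String[] args) {
        Function<Integer, Integer> doubler = logged("doubler", x -> x * 2);
        // The caller can predict the result exactly, yet the aspect
        // itself is reusable by any system for any function.
        System.out.println(doubler.apply(21)); // 42
    }
}
```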

3.3.2 Processing types and coupling


There are different types of business processes. The type of business process is important for the architect, because it gives insight into how a robust application might be designed. Knowing the key factors of the process will help the architect to understand how, in general, the business objects will change and how they must be identified throughout the process. There are three logical types of processing, namely: 1. transformation, 2. transportation, and 3. translation. In chapter 4 these three logical forms of processing will be used again, but then in relation to design patterns, as these have an input, a processing and an output as well. Transformation is about returning the same (type of) business object. Transportation is the logical form to move the object from one place to another without affecting the content of the object itself, and translation is the logical form that has some object as input and a newly created object, based on the content of the input object, as output. In every type of processing, characteristics of the other processings can exist too. There is no processing without a transportation. In every processing, transportation serves the main characteristic of the business process. A processing is called a transformation process when the main characteristic for design is the transformation of an object. The same applies for a transportation or a translation process. Next, the schemas for each of these forms of processing will be presented, together with some characteristics of each type. Every type of processing is a specific combination of coupling, which has consequences for the design of the process. In the first schema the transformation of an object is depicted.

Figure 4: transformation process

It is a very basic schema. Yet in practice this kind of process can be complicated. Transformation processes can range from a new appointment in an agenda to a permit to build a house. The contract of the transformation process is that it must be able to handle a business object of the type of which Object A is constituted. Therefore it must have knowledge about the type of business object and the methods that can be applied to it. As a result, the process has a tight coupling with the type of Object A. There is also tight coupling between both Objects A, as they are functionally the same object. For the system it is irrelevant which object it is. The only thing that matters for the transformation process is that it is capable of transforming the type of object. During the process the identity of the object is preserved, but the data and the behaviour of the object can differ. The actual class of the object might change repeatedly during the process, but the core business fields covering the identity of the business object will remain the same. Each business object has three types of data for the process, namely: 1. identifying data, 2. status variables, and 3. content.

The values of the status variables will vary based on the content of the data. A transformation process is the only process which is concerned with the meaning of the content and which will compare this information directly or indirectly with the content of other business objects. Indirect comparison of business objects happens when a status value is given to an object based on its content. The challenge for the design in this type of business process is to abstract the causal relationships between the content and changes in statuses as much as possible. In the next figure the schema for transportation is shown. It is somewhat less basic.

Figure 5: transportation process

A transportation process is the only process which can generate identities for an object and which is able to change the underlying class that represents the business object. The class is a vehicle for the business object, and every time it changes, the business process enters a new subprocess of the transportation process. During the conversion of a document, for instance, there is first the translation from an object of type A to a general object of type X, and then the translation from that general type X to an object of type B. Every conversion of a document will consist of at least two transportational subprocesses. During the transportational processing the business object A does not change at all. The object of class X refers to the same business object A as does the object of class Z. In order to be a reliable transportation process, the business identity and content of object A must be preserved. Otherwise the transportation is not loosely coupled to the object it processes. The implication of this is that a successful transport of an object means that at the business level there is a tight coupling between the objects of class X and Z. If object x1 of class X is different compared to object x2 of class X, then these same differences will be found between the objects z1 and z2 of class Z. Knowing the input is knowing the output. The identity and content must not change during the transport. For the transportation process the business object does not have data at all. The transportation process is loosely coupled to the business object, but has a tight coupling between input and output. The challenge for this type of process is to use different representations of the same business object in different subprocesses, while still preserving the identity and content of the business object. An example of a dedicated system for transportation is a tracing system concerning the delivery of a package. A workflow is a process which consists of two intermingled types of processes. The flow from one step to another belongs to the transportation process, the content of every step belongs to the transformation process. Workflow is a peculiar type of processing, because the navigation is based upon the results of every step. Normally the transportational processing serves the transformational processing, but in a workflow it is the other way around: the transformational processing serves the transportational processing. In figure 6 the schema for the process of translation is presented.

Figure 6: translation process

In this process the business object of type A ceases to exist as an object from the perspective of the process, and the output is the new business object of type B. Examples of this type of business process are, for instance, feeding the report of a meeting into the processes of invoicing or marketing analysis. The format of the data is more important than the actual content of the data. The identity of the object is its type. Different objects of the same type are treated the same. There is a strict coupling between the input and the reading of the input object, and there is a strict coupling between the result of this reading and the output object, but there is no direct and reversible relationship between object A and object B. There is, though, a strict coupling between the content of the business objects A and B, as they should be the same with regard to the translation process. If object A has certain characteristics, they will be met with the characteristics of object B insofar as they are applicable to the type of business object B. It is not true that reverse engineering object B will result in an object C of type A with exactly the same content as the original object had. The content can change irreversibly during the translation, but in a predictable way. Objects of type B will probably have some characteristics not present in objects of type A. These characteristics need to be added when compounding the new object of type B. The greatest challenge in this type of process is the precision of the translation process. It can be difficult to translate data from one type of business object to another type, as data is often tightly connected to the context in which it is formulated. When the translation is not precise enough, data corruption can occur.
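The three logical processing types described above can be sketched as Java contracts. This is a speculative illustration, not a prescribed implementation; the type parameters merely make the coupling visible: transformation keeps the business type, translation produces a new one, transportation moves an object without touching its content.

```java
// Three logical processing types as minimal contracts.
interface Transformation<A> {
    A transform(A input);                     // same business object, new state
}

interface Translation<A, B> {
    B translate(A input);                     // new object derived from the input
}

interface Transportation<A> {
    A transport(A input, String destination); // identity and content preserved
}

public class ProcessingTypes {
    public static void main(String[] args) {
        Transformation<String> approve = doc -> doc + " [approved]";
        Translation<String, Integer> wordCount = doc -> doc.split("\\s+").length;

        String doc = approve.transform("building permit");
        System.out.println(doc);                      // building permit [approved]
        System.out.println(wordCount.translate(doc)); // 3
    }
}
```

Note how the word count cannot be reverse engineered into the original document, matching the irreversible but predictable character of translation.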

3.4 Principles for communication between systems


Any application consists of one or more systems. Designing applications for now and the future implies that those systems should be able to do their work independently of their environment. Meanwhile, systems have to be able to communicate with one another and, as a consequence, exchange data. Communication between systems falls into three subjects, namely:

1. the routing,
2. the act of sending and receiving data,
3. the in- and external communication of a system.

For each subject a guideline will be presented.

3.4.1 Routing and the Law of Demeter


Routing

Routing is a complex task to design, as routing has to serve opposing requirements. The first requirement routing has to adhere to is the fact that routing follows a prescribed order. Some steps in the process can only be accomplished after some other steps have been successfully fulfilled. Consider for example the storage of a data object. Storing data will demand that the object has some intrinsic properties, maybe even specific values, before it will be saved. The check whether the content of the data makes it a valid storable object must be performed before the data is stored. It can be considered worst practice if the check is made after the storage of the data. Because of that and many other practical reasons, a predefined arrangement of a routing process is unavoidable and wanted. The fulfillment of this requirement can already turn out to be complex, as often more than one routing is possible for one type of object. Next to the ordering, the result of every routing step must be predictable. Every step has a specific set of validation rules by which it is governed. The results of these steps are defined in the business process. The objects used in these steps must therefore be closely related to objects found in the business process; otherwise the validation rules cannot be applied logically. At every step the routing must have the capacity to return to the concrete object and a concrete set of validation rules. When, for instance, a publisher has separate routings for the issue of a magazine and for a new book, the objects used in these routings must be closely related to a magazine or a book. Routing must have the capacity to preserve the identity together with the content of the data through the whole process. No matter which class is used at a specific point in the process, the identity of the data at the start of the routing must be equal to the identity of the data at the end of the routing.
These two requirements both demand from the routing that the process is tightly related to the actual business process. The closer the routing stays to the actual data, the easier the specific demands of that data can be met, because a logical change in the data is mirrored in a logical change of the routing. The third requirement is that the routing should be robust to change. The number of steps required, the implementation of each step and the relation between the different steps must all be able to change without affecting the routing process severely. That requires the routing to be as independent as possible from the actual data. In this way the routing becomes robust and can process a bigger variety of data. The consequence is that a logical change in data is preferably not mirrored in the logical process of the routing.

A routing should serve both types of processing at the same time. At every step of the routing it should be able to process logical changes in data differently, but at the same time the overall processing of the routing should be independent of any actual data. That requires that at every step of the process an object performs two separate functions simultaneously: supporting the overall processing of the routing and supporting the requirements of the business process for that step. To meet these contradictory requirements, routing has to be designed layer by layer. The most abstract layer focuses almost entirely on the overall routing process and the most concrete layer almost entirely on the actual content of the data. Every layer in between will show a gradual transition from focus on processing the routing to focus on processing data. The gradual transition can be designed using the Law of Demeter.

Law of Demeter

The design of the routing system is guided by the Law of Demeter, which states that a system should only talk to its neighbours, not to strangers. The Law of Demeter has been studied intensively by the research group of Karl Lieberherr. A lot of valuable information about this subject can be found here and here.


The Law of Demeter states about a method M of object O that it should only invoke:

1. methods of object O itself, or
2. methods of objects which are parameters of method M, or
3. methods of any of the objects which are created in method M, or
4. methods of the direct component objects of object O.

The main concern of the Law of Demeter (LoD) is to sustain robustness of design. Central to the idea is that one uses the maximum of information available without making assumptions about what is present now and in the future. The relations of a class are all those classes which can work on one of the objects covered by rules 1, 2 or 4. The component objects of rule 4 should not be addressed by an external object directly. The class having component objects must have public members that let the external object exchange these objects, leaving the responsibility for how to handle them to the class itself.

This is a technical interpretation of LoD. It is however possible to use these restrictions on invocations for the architectural design of routings. With LoD in mind one can investigate the chain of dependency between objects and the dependency between systems within a routing. LoD is particularly useful as a guide when designing a routing, because it focuses on the relationships between classes and at the same time defines a maximum of how far a class can reach out to other classes. Designing a routing with LoD in mind forces the design to progress step by step, as a class is inhibited from reaching more than one class away. In an article by Brad Appleton the analogy of quantum mechanics is used to make a distinction between a 'particle view' and a 'wave view' on objects. The particle view looks at the object itself. The wave view addresses the relationships an object has with other objects. The analogy goes even one step further. Movement in a 'particle view' is moving from A to B, like passing a bean from the front end to the back end. Movement in a 'wave view' is different.
In a wave the particles do not move. They stay at their place but respond to an event when it is passing by. With the guidance of LoD one can investigate where an object behaves like a 'particle' and where like an element of a 'wave', and whether that is useful. In all public methods which belong to the contract of the class, the object should behave like an element of the wave: the code in these methods should be dedicated to the relationships of the object with other objects. In the private methods outside the contract, the particle behaviour of the object should be collected: the code in these methods should be dedicated to the concrete actions for which the class is made. If any validation in a class has to be executed, this should be done in private methods not belonging to the contract of the class. The handling of the result should be done in the public methods which belong to the contract of the class.

The same line of thought can be applied to the dependency of systems. A system should only talk to nearby systems. When it talks to systems which it can only reach after having passed another system (see the example in the next section), it is making too many assumptions about both systems involved. Not only does it have knowledge about both systems, it also has knowledge of the relationship between these two systems. These assumptions will make it harder to change all three systems, as they are linked to each other. A system should therefore behave like an element in a wave, never like a particle. Only systems which behave like the element of a wave towards all their surrounding systems can be said to be loosely coupled from their environment. The behaviour towards surrounding classes can be easily detected using the four possible invocations of the LoD. If somewhere in the code none of these four rules is applied, the class or system is violating LoD. It can indicate that not enough thought was given to the design.
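To make the four rules concrete, here is a minimal sketch of the difference between talking to a neighbour and talking to a stranger. The `Order`, `Customer` and `Address` classes are hypothetical; they are not taken from the text.

```java
// Hypothetical classes illustrating the Law of Demeter.
class Address {
    private final String city;
    Address(String city) { this.city = city; }
    String getCity() { return city; }
}

class Customer {
    private final Address address;
    Customer(Address address) { this.address = address; }
    Address getAddress() { return address; }
    // LoD-friendly: the Customer answers for its component object itself.
    String getCity() { return address.getCity(); }
}

class Order {
    private final Customer customer;
    Order(Customer customer) { this.customer = customer; }

    // Violates LoD: reaches through Customer into its component Address,
    // a stranger to Order.
    String shippingCityViolating() {
        return customer.getAddress().getCity();
    }

    // Respects LoD (rule 4): only talks to its direct component, the Customer.
    String shippingCity() {
        return customer.getCity();
    }

    public static void main(String[] args) {
        Order o = new Order(new Customer(new Address("Utrecht")));
        System.out.println(o.shippingCity()); // prints "Utrecht"
    }
}
```

Both methods return the same value, but the violating one assumes knowledge of the relationship between `Customer` and `Address`, so a change in that relationship breaks `Order` as well.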
The relationships between classes can be described more extensively than just being neighbours. Classes which have another class as a component object inside need that other class to function properly themselves. These two classes might be called family, where the component object might be called a child. When two classes both must assume what the other class wants, they can be called family too. Consider the method 'List<String> returnNames(List<Person> pPersons)'. Caller and callee both assume that the Person class has a property 'name', they both know what type of name it is, be it a surname, first name or the full name, and they trust that the other class uses the same interpretation to return the proper names. Then they are family too. Classes can be called friends when they share a method after which the caller knows how to proceed regardless of the answer returned by the callee. An example is the method 'boolean isPhoneNumber(String s)'. Regardless of the answer of the callee, the caller will know how to proceed. And there are classes which can exchange their component objects using methods. Together with the handing over of the component object, the responsibility is handed over. Both classes know how to deal with the class of the object. The caller is not only independent of the processing done by the callee, the callee cannot even predict how the caller will respond to the results of its processing. The internal processing of the caller does not have to depend on the results of the processing done by the callee. These classes can be called neighbours.

Family, friends and children all visit each other. Neighbours exchange. The more neighbour relationships there are in a routing, the more it will behave like a wave. Loosely coupled systems are neighbours of each other. Strictly coupled means you are friends, as behaviour can be predicted, and tightly coupled means you have yourself a family. The objects specified in rule 3 can have any type of relationship in this analogy. Not all classes need to become neighbours to be designed most effectively. Aspects, for instance, are never neighbours. A class which is calling an aspect knows what it will get in return. An aspect ensures predictable results.
If a class needs a list of employees from an aspect, then the aspect will assure that a list of employees is returned in a fixed format. An aspect serves as an extension to any class that calls the aspect. Aspects are therefore always friends of anyone. They are not family, because family members provide each other unique capabilities. The capabilities of aspects could be performed by any class itself; it is only far more convenient to let an aspect do the job.
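The two signatures from the text can be sketched as follows. The digit check in `isPhoneNumber` and the choice of the full name in `returnNames` are my own assumptions, added only to make the example runnable.

```java
import java.util.ArrayList;
import java.util.List;

class Person {
    // Which kind of name this is, is a shared assumption of caller and
    // callee: that shared assumption makes them 'family'.
    private final String name;
    Person(String name) { this.name = name; }
    String getName() { return name; }
}

class Relations {
    // 'Friend' method: whatever the answer, the caller knows how to proceed.
    static boolean isPhoneNumber(String s) {
        return s != null && !s.isEmpty() && s.chars().allMatch(Character::isDigit);
    }

    // 'Family' method: caller and callee both assume what 'name' means.
    static List<String> returnNames(List<Person> pPersons) {
        List<String> names = new ArrayList<>();
        for (Person p : pPersons) {
            names.add(p.getName());
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(isPhoneNumber("0301234567")); // true
        System.out.println(returnNames(List.of(new Person("Loek"))));
    }
}
```

Note that `isPhoneNumber` leaves the caller fully in control of what to do with either answer, while `returnNames` only works as long as both sides keep the same interpretation of 'name'.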

Routing and the law

To give an example of a wave, I present a possible implementation of a translation portlet. It is an arbitrary example.

Figure 7: implementation of a translation portlet

The user will start the routing by sending a submit request to the portlet. The portlet will create a bean, validate the request and, if satisfied, send it to the server facade. The server facade knows how to process this request to a next layer and hands it over to the business layer. The business layer will validate whether the request can be processed further. If so, the request is passed on to the DAO, which will communicate with the database. After receiving the data back from the database, the data object will return to the portlet and, after applying internationalization, the result will be presented to the user who started the request.

The quality of the wave is defined by the different relationships between the different layers. How is the communication between the server facade, the business layer and the DAO established, for instance? Is the business layer first called by the server facade, which then calls the DAO directly itself? Or is the bean handed over to the business layer, which in turn will hand it over to the DAO? In the first scenario the server facade has knowledge of both classes, and knowledge about the relationship between these classes too. The server facade knows, based on the result of the business layer, whether it can proceed to call the DAO or not. The server facade is first visiting the business layer, then returning to itself and afterwards stretching itself out to the DAO. Hardly the way a wave works. If, on the other hand, the object is handed over from the server facade to the business layer with a method like 'Bean returnRequest(Bean b)', then the server facade makes no assumptions about the internal working of the business layer or the DAO. Making fewer assumptions about the internal processing of other classes will improve the maintainability and the robustness of the application as a whole. The process will behave more like a wave in which all elements stay at their place, but do their movements when the data object is passing by. Take a look at the next two graphics to see the difference.

Figure 8: Ridge and wave

In the ridge figure the facade is stretching itself out and not handing over the responsibility to the business layer. As a result the facade is acting like a particle. In the second figure the responsibility is handed over to the business layer and a wave arises. The facade is now decoupled from the DAO and does not have to make any assumptions anymore about the relationship between the business layer and the DAO layer. Applying the golden rule of the LoD not to talk to strangers will automatically create a wave in the processing, assuring that each class does not need to make more assumptions about its environment than strictly necessary.

The small routing from the portlet to the resource bundle and back does not depend on a business process. This routing is totally in control of the technical group which created it. That makes the design of this routing independent of external, uncontrollable factors. It is not a big problem when the implementation of this routing is coded straightforwardly towards its goal. Both validations can be considered extensions to the class which is calling for the validation. They are not part of the routing, as the routing can proceed anyway, whatever result is returned from the validation. As pointed out in the section about routing, the routing is a friendly process. In the example of the portlet the user is visiting the database to get his translation. To fulfill a routing, every class in the line must have enough information to know to which class the data will be passed next. This information can be stored in the first method call or in the data object sent across the line, but either way it must be there. A routing must always make some assumptions. A routing of only neighbours leads nowhere.
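The 'wave' variant of this chain can be sketched in a few lines. The layer names follow the figure; the bean's fields and all method bodies are invented stand-ins for the real validation and database work.

```java
// Sketch of the 'wave' of Figure 8: each layer hands the bean to its direct
// neighbour via 'Bean returnRequest(Bean b)' and never reaches past it.
class Bean {
    String text;
    String translation;
    Bean(String text) { this.text = text; }
}

class Dao {
    Bean returnRequest(Bean b) {
        // Stand-in for the database lookup.
        b.translation = "nu".equals(b.text) ? "now" : "?";
        return b;
    }
}

class BusinessLayer {
    private final Dao dao = new Dao();
    Bean returnRequest(Bean b) {
        // Validate, then hand over to the next neighbour.
        if (b.text == null || b.text.isEmpty()) return b;
        return dao.returnRequest(b);
    }
}

class ServerFacade {
    private final BusinessLayer business = new BusinessLayer();

    // The facade only knows its direct neighbour; the DAO is a stranger to it.
    Bean returnRequest(Bean b) {
        return business.returnRequest(b);
    }

    public static void main(String[] args) {
        Bean result = new ServerFacade().returnRequest(new Bean("nu"));
        System.out.println(result.translation); // prints "now"
    }
}
```

The facade never decides whether the DAO may be called; that decision stays inside the business layer, so the facade makes no assumptions about the relationship between the two.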

3.4.2 Exchanging data using the Principle of Privacy


The Principle of Privacy states 'personal yes, private no'. I use this phrase as a shortcut. It is my own formulation and derives from a guideline on how to write interesting blogs for other people, expressed by Wim de Bie, a world famous entertainer in the Netherlands. Content received by a system should have no influence upon the internal processing of data by the system. There are three guidelines stemming from this principle.

Your own constraints are private

This guideline states that a system will not send any information which it knows is only useful for its own functioning. Status information relevant to the internal functioning of the system will not be sent across the line. It will be kept private.

A basic example is shown by the Google translate portlet. When entering the value 'nu' for translation and asking for the translation from 'recognize language' to English, the result is from Swedish to English and the translation is 'now'. Although correct, it could have been from Dutch to English as well, with the same result in English, or from French to English, in which case the translation should have been 'naked'. That is the essence of this constraint: only take the decision at the time it is appropriate. In the translation portlet there seems to be a preferential order for finding words in different languages. That preferential order is uncoupled from information of the user interface. Feedback is given using the locale set in the browser, but the preferred recognized language when no language is specified appears to be Swedish. The system that returns the translation uses no information from the user interface object. The preferred language of the user is status information of the subject. It indeed should not be used by any other system as a guidance for behaviour and is therefore not included in the transmission of the data to other systems.

Another way this constraint serves as a guideline is by transferring data in such a format that the other system does not require the same functionality the sending system has. Imagine system A, which connects to a database. It should have a JDBC driver and manage SQLExceptions. System B, to which data is transferred, should not need knowledge of these requirements in order to process the data received from system A. Therefore system A should never transfer data which requires either JDBC or the catching of SQLExceptions. From system B to A the same rule applies. The source where the information comes from, like the form used, should never be transported across the system boundaries. System B will ask system A for a certain action. What that action precisely will be, will be defined by system A, so that system B is able to talk to different kinds of systems. This constraint can be very useful in delineating systems: as long as the use of libraries or the exchange of exceptions is meaningful, classes belong to the same system.
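A minimal sketch of this boundary could look as follows. The `DataAccessException` name and the lookup logic are invented for the example; the point is only that the `SQLException` never leaves system A.

```java
// System A keeps its JDBC concerns to itself: nothing that crosses the
// boundary requires a JDBC driver or SQLException handling on the other side.
import java.sql.SQLException;
import java.util.Map;

class DataAccessException extends RuntimeException {
    DataAccessException(String message) { super(message); }
}

class SystemA {
    private static final Map<String, String> TABLE = Map.of("nu", "now");

    // Pretend JDBC call that can fail with an SQLException.
    private static String query(String key) throws SQLException {
        if (key == null) throw new SQLException("null key");
        return TABLE.get(key);
    }

    // Boundary method: callers receive plain data or a neutral exception.
    static String lookup(String key) {
        try {
            return query(key);
        } catch (SQLException e) {
            throw new DataAccessException("lookup failed");
        }
    }

    public static void main(String[] args) {
        System.out.println(lookup("nu")); // prints "now"
    }
}
```

System B can call `lookup` without any knowledge of how system A stores its data, which keeps the two systems independently replaceable.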

Generalize the data sent as much as possible

The other way this principle restricts the act of sending data is that the data sent should not pose demands on the contract of the receiving system. On the level of the implementation of the receiving system, the data sent can be translated back to the original object type. That way any receiving system can serve the maximum number of data types. The translation to the original object type is therefore not part of the contract of the receiving system, but the responsibility of a specific implementation of the receiving system. Consider a publisher who wants to store information about a certain publication. The publication can be a book or an issue of a magazine. The action of storing the data is equal, the use of the receiving system likewise, but the place to be stored and the fields to be stored are quite different. The sending system will send data in the format of a Publication object and the receiving system will decide at run time which implementation is the correct one to process the storing of the Publication object. The implementation will have the responsibility to translate the Publication object to the proper instance and process the storage accordingly. This guideline is complementary to the Liskov Substitution Principle, aka design by contract.
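The publisher example might be sketched like this. The class names follow the text; the fields and the storage targets are my own assumptions.

```java
// The contract of the receiving system speaks only of the generalized
// Publication type; the implementation translates back at run time.
abstract class Publication {
    final String title;
    Publication(String title) { this.title = title; }
}

class Book extends Publication {
    Book(String title) { super(title); }
}

class Magazine extends Publication {
    final int issueNumber;
    Magazine(String title, int issueNumber) {
        super(title);
        this.issueNumber = issueNumber;
    }
}

class StorageSystem {
    // Contract: only the generalized type appears in the signature.
    static String store(Publication p) {
        // Implementation detail: translate back to the original type and
        // choose the proper place and fields to store.
        if (p instanceof Magazine) {
            Magazine m = (Magazine) p;
            return "magazine table: " + m.title + " #" + m.issueNumber;
        }
        return "book table: " + p.title;
    }

    public static void main(String[] args) {
        System.out.println(store(new Book("Inside architecture")));
        System.out.println(store(new Magazine("Java Magazine", 7)));
    }
}
```

The sender only ever depends on `Publication`, so new publication types can be added without changing the contract between the two systems.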

Do not use private terms for shared data

The last way this principle influences data transmission between systems is by exposing information. There are pieces of information which have their meaning across systems, even if these systems are neighbours or totally loosely coupled systems. A banking account number is a banking account number regardless of the system in which it is processed. This information is therefore never private and should be sharable system wide. This example touches the design of business object fields, which is out of scope for this document. For now it is important to recognize that 'private no' can also mean that, for some information to be useful, it must be identifiable throughout the whole application landscape.

3.4.3 System execution and the Liskov Substitution Principle


Robert Martin has paraphrased the Liskov Substitution Principle in this article this way: functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it. In its original formulation the Liskov Substitution Principle is restricted to the statement that a subclass should adhere to the same contract as its base class. The principle is also known as 'design by contract'. Every subclass should have equal or weakened preconditions and equal or strengthened postconditions to sustain the principle. A subclass can pose fewer restrictions on its environment than the base class and can be more restrictive for itself than the base class. In the original principle the subject deals with inheritance. From an architectural point of view, the communication between systems is like dealing with separate lines of inheritance, and the navigation path in a system is inheritance itself. Communication between systems should be independent of the navigation within a system and focus solely on the interoperability of systems. How the contract specifications are met by any implementation of a system is the responsibility of the implementation. This makes it possible to replace a system with mock objects for unit testing, which is a logical layer to choose for mocking, because a system should be considered one unit of action. The Liskov Substitution Principle for architecture can be defined as: system contracts should be specified for external communication only, leaving internal navigation the responsibility of each implementation of the system.
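A small sketch of this architectural reading: because the caller depends on the contract only, a mock system can substitute the real one in a unit test. The `TranslationSystem` contract and both implementations are invented for the example.

```java
// The contract is specified for external communication only; internal
// navigation is the responsibility of each implementation.
interface TranslationSystem {
    String translate(String word);
}

class RealTranslationSystem implements TranslationSystem {
    public String translate(String word) {
        // Internal navigation of this implementation; invisible to callers.
        return "nu".equals(word) ? "now" : "?";
    }
}

class MockTranslationSystem implements TranslationSystem {
    public String translate(String word) {
        return "mocked";
    }
}

class Caller {
    // The caller depends on the contract only, never on a concrete system,
    // so any implementation can be substituted without the caller knowing it.
    static String run(TranslationSystem system, String word) {
        return system.translate(word);
    }

    public static void main(String[] args) {
        System.out.println(run(new RealTranslationSystem(), "nu")); // "now"
        System.out.println(run(new MockTranslationSystem(), "nu")); // "mocked"
    }
}
```

This is the sense in which a system is "one unit of action": the mock replaces the whole system at its contract boundary, not individual classes inside it.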

Contracts

There are three types of contracts, as there are three types of systems: one for interfaces, one for services and one for aspects. Within a contract the input, output and actions to be performed can be described. A particular implementation of a system might need other classes to fulfill its contract. These classes are called contract partners. A contract partner can be a reference to a single class or a whole system. Characteristic of a contract partner is that the implementation of the contract partner will change in line with the implementation of the system.

The contracts for interfaces are formulated in the business process and are owned by the business process owner. The input for these contracts are objects which represent business objects. The contract is a description of the mission and vision statements for that step in the business process. The actions and the arrangement of these actions are formulated in the translated business process model and described in the contract. Contract partners are those classes which help the interface to fulfill its contract. Contract partners can be architectural interfaces, as an interface might perform a subcontract of the business contract or another part of the business process.

The contracts for services are formulated by the architect and owned by the IT department. The input for these contracts are objects which do not represent business objects. In the implementation of such a contract business objects can be recreated, but in the construction phase the received objects do not represent business objects. That makes it possible for these services to be reused for different business processes. A service should perform a relatively isolated task, making it possible for it to be architecturally loosely coupled. Contract partners should therefore be all classes which help to fulfill the contract. It is preferable when these services do not need subcontracts to perform their job, to heighten reusability.
The contracts for aspects are best considered extensions to the contract of the class which is calling the aspect. The question which classes are contract partners should be quite straightforward, as all classes used in the aspect will normally be contract partners.

Construction phase

Entering a system starts with the construction of the system object. At that time the system has no information at all. No new instances, which would cause unnecessary dependencies, should be created. System status variables which are used by all implementations of the system can be declared, together with all contract partner classes. These classes can be loaded, but should not be initialized yet in the construction phase. According to the Liskov Substitution Principle, the only task that can be performed in this phase is to process the received information. Having processed the received information using setters and getters, the system can be initialized outside the constructor. This does not imply that there are no dependencies in the classes received by the system. When, for instance, an Employee object is received by the constructor and that class needs the Person class, both these classes can be initialized during this construction phase. Restricting this phase to the processing of the received information lets the construction of the system be independent of any implementation and will therefore put no restrictions on the calling system.

Execution phase

After the call for construction of the system comes the call for execution of the system. The first step will be collecting the required information to start itself up, setting all status variables to their initial values and transforming the received data into the requested format. External systems and contract partners are fully instantiated during the execution phase. The technique used to instantiate other classes depends on the congruence in change request profile. Calls to external systems or subcontracts imply that no congruence in change request profile can be expected; those classes will therefore preferably be instantiated at run time. All implementations of contract partners can be expected to have change request profiles in line with the main system and are therefore preferably instantiated using design by interface. Contract partners can have getters and setters; systems instantiated at run time do not require that. Based on the status variables, the system will navigate through its own path. Deciding about the path to be processed is like the wave view mentioned in the section about LoD. Designing a system as described here should assure that no dependencies caused by any implementation can exist. That will make change requests to a system better manageable.
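The two phases can be sketched as follows. All class names are invented; the point is only the division of labour: the constructor stores received information, the execution phase initializes status and contract partners.

```java
// Construction phase: only process the received information.
// Execution phase: initialize status variables and contract partners.
class Employee {
    final String name;
    Employee(String name) { this.name = name; }
}

interface PayrollPartner {
    String process(Employee e);
}

class DefaultPayrollPartner implements PayrollPartner {
    public String process(Employee e) { return "processed " + e.name; }
}

class PayrollSystem {
    private final Employee employee;   // received information only
    private PayrollPartner partner;    // declared, but not yet initialized

    // Construction phase: no new instances, no initialization of partners.
    PayrollSystem(Employee employee) {
        this.employee = employee;
    }

    // Execution phase: instantiate the contract partner and run.
    String execute() {
        partner = new DefaultPayrollPartner();
        return partner.process(employee);
    }

    public static void main(String[] args) {
        System.out.println(new PayrollSystem(new Employee("Loek")).execute());
    }
}
```

Because the constructor does nothing but store its arguments, constructing a `PayrollSystem` puts no restrictions on the calling system and is identical for every implementation.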

3.5 Inversion of Control


When designing an application there are sets of tasks which relate to each other and there are sets of tasks which are independent of each other. Tasks that relate to each other will be organized in one system. Tasks that are independent of each other will preferably be organized in different systems. At several points in the application, independent tasks will have to make use of one another. But because they are independent of each other, they cannot make valid assumptions about behavior, nor do they know which implementation to use. The calling task does not have more information than the need to make the call. The callee knows best what has to be done and how it should be done. It is at that point that Inversion of Control is best used as a design principle, because implementing this call with Inversion of Control makes it possible to decide at run time which implementation should do the job.

This might sound abstract, but this principle was already demonstrated in the example about the wave of the server facade. When the server facade was calling the business interface and letting the business interface do the rest, the server facade was handing over the control. It is the business facade which knows best what has to be done. Another example was presented while talking about the double loose coupling, when it was shown that a call to another system should only be made when, according to the caller, the call is valid. That had the effect that the callee could judge the call on its own merits and was relieved from making assumptions about the internal processing of the caller, and could therefore return the call based on its own processing solely. In both examples Inversion of Control is applied. Inversion of Control (IoC) is the design principle of handing over the control of how to perform the call from the caller to the callee.
The reason why is shown above in the two examples: to maximize the independence of any system and minimize the number of assumptions necessary for communication between systems. When the callee is capable of reacting to any call without assistance of the caller, then IoC is applied successfully. As a result it is necessary to be able to postpone the choice of implementation to run time, because that is the first moment the callee will know that there is a call to which it has to respond. Often IoC is restricted to this moment of the call, but its power goes beyond that moment in time. Let's take a look at the constraints applying to the communication between two independent systems or within a system.

Table 1: Comparison of communication constraints for systems

Between systems:
1. The existence of the other system is not known or need not be known at design time.
2. Systems can only focus on what they do themselves.
3. Replacing implementations of systems has no side effect on the other system.
4. Construction of the callee will be done at run time.
5. Implementation is hidden by definition as the callee is not known before run time.
6. Data exchange is restricted to personal yes, private no.

Within a system:
1. All elements are designed in relation to one another.
2. Focus on cooperation with other elements. Together the overall job is performed.
3. Replacing whatever has likely effect on other elements of the system.
4. Construction of cooperating elements is known at compile time.
5. It is best practice to hide the implementation.
6. All data is by definition private.

The first five constraints for the situation 'between systems' are automatically applied when using IoC. Usage of IoC and this type of communication fit very well together. The first five constraints for communication within any system, on the other hand, are not points of focus when applying IoC and are sometimes even in contradiction with handing over control from the caller to the callee. When the caller and the callee are designed in relation to one another, it is not very beneficial to hand over the control from the caller to the callee, as they are both designed to work together. Constructing objects using Dependency Injection or any other technique favoring IoC is not very useful within systems. It has far more added value to restrict the usage of IoC to those situations in which communication between independent systems has to take place.
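The 'between systems' column can be sketched with a bare-bones resolver that stands in for an IoC container. All names are invented; the point is that the caller holds only the `Callee` contract and the concrete implementation is resolved at run time.

```java
// Minimal sketch of IoC between independent systems: the caller never names
// a concrete callee; the resolver picks the implementation at run time.
import java.util.HashMap;
import java.util.Map;

interface Callee {
    String handle(String request);
}

class EmailSystem implements Callee {
    public String handle(String request) { return "mailed: " + request; }
}

class Resolver {
    // Stand-in for an IoC container's registry.
    private static final Map<String, Callee> REGISTRY = new HashMap<>();
    static void register(String name, Callee c) { REGISTRY.put(name, c); }
    static Callee resolve(String name) { return REGISTRY.get(name); }
}

class CallerSystem {
    static String call(String calleeName, String request) {
        // The caller knows only that a call must be made;
        // the callee decides how the call is performed.
        return Resolver.resolve(calleeName).handle(request);
    }

    public static void main(String[] args) {
        Resolver.register("mail", new EmailSystem());
        System.out.println(call("mail", "report")); // "mailed: report"
    }
}
```

Replacing `EmailSystem` with another `Callee` requires only a different registration; the caller is untouched, which is exactly the 'no side effect on the other system' constraint from the table.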

3.5.1 Usage of Inversion of Control


IoC as a design principle is therefore the layering of tasks into independent systems. After the construction of the network of independent systems, each system can be created. Using IoC to construct this network is the first step when creating the implementation model out of the tBPM. A question in this phase of design is: 'where does the care for resolving the callee become more important than the request from the caller?'. During the design within systems, IoC should be avoided. Separation of Concerns and the Single Responsibility Principle are more apt to guide the design at that time. Both guidelines focus on the distinction between the different elements of each task and how to organize the necessary elements concurrently. The Separation of Concerns will normally precede the Single Responsibility Principle in the thought process, but they will often guide each other to the optimal result.

All three design principles belong to the technique of architectural layering. They all have their place in it, so they can work together to create, with the least effort, the best solution thought of at that time. The natural ordering of these guidelines is to first apply IoC, then the Separation of Concerns and finally the Single Responsibility Principle. The viewpoint of IoC precedes that of the other two guidelines, as IoC demands a two-sided responsibility to control communication. IoC is like the viewpoints 'you' and 'me' in a conversation: they change with every speaker. The other two guidelines do not have this change of viewpoints; both have the viewpoint 'we'. Concerns can only be separated within a framework that connects them, otherwise one of the two concerns is not fulfilled. The same kind of reasoning applies to the Single Responsibility Principle, where to segregate these responsibilities both must be met in the end. As a result IoC is preferably used to delineate independent systems, and both other guidelines should be used within a system.

The delineation of systems should be done by the architect, the design within the systems preferably by the development team. They will likely be responsible for the maintenance too, and letting the development team design the system will ensure that the code created is maintainable by them. As stated before in chapter 1, this will enrich the work of the development team and give each developer the possibility to explore different career paths. Testing using mock objects can be restricted to the testing of systems as a whole, whereas technical testing like JUnit in Java can be applied within systems.

Another guideline on where to apply IoC, next to communication between independent systems, is the moment before business logic is expected to change significantly. An example of that is the moment before a workflow will be started. At that time independent status objects will be needed, and for the sake of simplicity it is more convenient to create concurrent implementations using IoC. Workflows are prone to functional changes and should therefore be instantiated as independently of one another as possible. Compare that to messages in an email sent by any application. The content of the messages might change, but the logic rarely, and it would therefore be unnecessary to instantiate the mail class using IoC.
When Inversion of Control is used for the design a complex process of an application, then it will still create a complex process of an application. The complexity of a process is not a property of the coding language, but of the business process. The requirements, constraints and dependencies of the process will have to be coded. There is no guideline for code implementation which can avoid that. That is out of control of these guidelines. Using other principles as a guideline might result in an even more complex implementation of the process. With respect to the five purposes mentioned in chapter 2 will IoC support the interoperability, robustness, reusability and extensibility of the application. Separation of Concerns and the Single Responsibility Principle will support from the purpose robustness onwards. IoC is often equated with Dependency Injection. That IoC and Dependency Injection are so strongly associated with each other has to do that Dependency Injection is used as the main technique in frameworks to deliver IoC. Reading the article of Martin Fowler the term Dependency Injection is said to be a less confusing term then IoC. According to the PicoContainer community is Dependency Injection focusing on component assembly, where IoC also refers to configuration and lifecycle management. IoC in this view is a design pattern or principle directed at dependency resolution. Stefano Mazzochi in this blog comments on this conception stating that IoC is a general principle to increase isolation and thereby improve reuse. Although I tend to agree with Stefano Mazzochi that IoC is more then the technical concept, if this is a misconception, then it is one of the most productive ones in history of programming. Moreover I think that these two viewpoints on IoC do not bite each other. Looking more closely on Dependency Injection reveals that it is describing accurately how to solve dependency resolution, where Stefano Mazzochi is referring to what IoC is. 
If you look at what Dependency Injection is doing, it could be described as Independent Instantiation, which exactly matches the principle described by Stefano Mazzochi. I think that these frameworks are excellent technical concretizations by which the community can make use of IoC. That IoC can be used independently of an IoC container and can, for instance, be applied using the Command pattern does not change this much. The best practice to implement IoC is to use some form of Dependency Injection within an IoC container.
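The technique itself is small enough to show without any framework. In the hypothetical sketch below (the names are mine, not those of the banking example), the TransferControl class never instantiates its collaborator; the Flow implementation is injected through the constructor. A container such as Spring or PicoContainer would merely automate this wiring from configuration.

```java
public class DiSketch {
    interface Flow { String check(String transfer); }

    static class SixEyesFlow implements Flow {
        public String check(String transfer) {
            return transfer + " checked by three employees";
        }
    }

    // the dependency is injected; TransferControl depends only on the interface
    static class TransferControl {
        private final Flow flow;
        TransferControl(Flow flow) { this.flow = flow; }
        String handle(String transfer) { return flow.check(transfer); }
    }

    public static void main(String[] args) {
        // in a real IoC container this wiring would come from configuration;
        // here it is done by hand to show the technique itself
        TransferControl control = new TransferControl(new SixEyesFlow());
        System.out.println(control.handle("transfer 7"));
    }
}
```

Because TransferControl knows only the Flow interface, a four-eyes or peculiar-transfer flow can be substituted without touching the class.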

3.6 The banking example

3.6.1 The example


Let's say that a bank wants to have a secure way to transfer money and their original idea was this:
Figure 9: first process description bank transfer example

It was built. The development team realized that some extensions would be made later on. Therefore they did not just build according to the specifications, but made a more robust design. They had a configuration document in which several different types of flows could be configured. After a while the banking people indeed came back and asked if it was possible to have three flows, because different amounts of transfer required different security. They needed a six-eyes version for amounts above some threshold and even managerial control when the amount was considered big. The team looked at it and asked about a managerial role. It was not there. It had to be created and implemented in the authorization mechanism. In the original routing there was no code present to make distinctions between roles at a particular place in the flow. It could be done without any real problem and the managerial role could be added as well, but was this really necessary? The banking people insisted on it and the team started to work on it. The next release came out and it looked more or less like this:
Figure 10: process description bank transfer example including threshold

The banking people were very satisfied with this system and with the adjustments made. All worked very well. But later they realized they had some more wishes to be implemented. Actually, there should always be a supervisor control involved in the second and third flow. And by the way, not everybody has enough authority to start the transfer of big amounts. The team discussed it and came back to the banking people. First of all, there is no role for supervisors. It should be added to the application and to the authentication mechanism in the same way as was done for the managers. Would it be acceptable if the supervisor in the second

flow would always perform the last check? The supervisor would then only have to check those transfers which had already been approved once. The time of a supervisor is more valuable and scarcer than the time of the other controllers. And finally, which role should start the third flow? The banking people answered that the role of supervisors should have been there, that they should indeed perform the last check, and that only they and the managers could be trusted enough to perform the transfer of the third flow. And so the team created the next release, which more or less looked like this:
Figure 11: process description bank transfer example including supervisor control

Again the banking people were satisfied for a while, but then realized some important issues were not met. They got back to the team and pointed out what was missing. 1. if a manager is away for some reason, he must appoint someone who can take over his position. That other person should have at least the role of supervisor, but another manager is preferable. Whoever it is, with every transaction the name must be known, because he will ultimately be responsible for the transfer of the money, 2. the names of all employees involved should be logged anyway, and 3. what about peculiar transactions? The bank will not cooperate in dubious transfers. They should be sent directly to the manager, who will then decide what has to be done. The team grew desperate. A whole new type of flow added? One that is instantiated by the decision of the manager and not by some algorithm? The manager herself can be temporarily replaced by someone else? The new type of flow can interfere with every step? That is not what was agreed upon in the first place. That is far from the original design. The banking people understood the problems, but to stay a trustworthy bank these rules had to be applied. It could not be done otherwise. They started to bargain. A lot of meetings were held. Sometimes there were emotions on both sides, but they also knew they had to find a way out together. In the end, after several escalations, they agreed upon the following:

the system will not be rebuilt. Until now it has done a good job. The bank has always been very satisfied with the team effort and the team has always responded well to the new features the bank had to implement to stay secure, the control by the manager will from now on be called 'managerial control', and who should have the managerial role is read from a new configuration document for whose content the manager bears the final responsibility,

the person currently filling a role will be read from the user object and added to the logging of the system, and the peculiar transactions will be checked before the amount is checked. If a transaction seems to be peculiar, this is reported directly to the manager. The manager will look at it, and only when he confirms that the transaction is not peculiar will it go into the normal flow.

In the end both sides were satisfied: the banking people because their system was becoming better and better, the team because they ended up with manageable changes. The role of the manager would now not be set in the configuration document of the flows but be collected from some other place, and that was the only real exception to the core of the system. The logging of the person who performed the task was a simple adjustment, and the check on peculiar transactions was moved out of the system to its entrance. That way the core did not have to be changed. The result of the peculiar transactions could afterwards continue in the normal flow. The situation in which the flow could be interrupted at every step was avoided. The flow became like this:
Figure 12: process description bank transfer example including peculiar transactions

For a while the banking people, although not amused by the last discussions, were satisfied with the system. But eventually they agreed internally that the system could be made more secure and therefore appointed someone whose daily task would be to control transfers at random. That person had the authority to hold up any process at any time. He would not have to ask people when he would control a task; he could overrule anyone. On second thought, they also agreed that the manager could never be overruled by the controller, as she was already controlled by her own manager and that ought to be sufficient. Having made these decisions they went to the team and announced them. How do you think the team reacted? This story is totally fictitious. Well, not totally. The distinction between the four-eyes and six-eyes principle does exist. The manager involved in controlling exists, separate workflows for peculiar transfers exist, and functions like the controller do exist. What is fictitious is the bank, and a bank

that does not have an effective workflow for this kind of money transfer. But the process is very useful in showing how a system that was robustly designed according to the original demands could in the end not be robust enough. It effectively collapses under its own success. One could argue that the system was not robust in the first place. Although correct in principle, that overlooks the fact that this is meant as an example that never happened in reality, and that the system's success opened up new flows to be thought of which could never have been conceived if the system had not been there. Have you never seen a process like this? Where a system evolves over the years, becomes more and more important for the organization, and in the end suffers from the combination of being very important and very unmaintainable? Or does that only happen in Holland?

3.6.2 Discussing the example


Before starting any discussion I will show the architectural and implementation model of the code after the insertion of the flow to check on peculiar transactions. The code is provided in bankingexample.jar and the classes are packaged in relation to their function in the overall example. In this discussion the word interface might refer to a technical interface or, as shown in the figure below, to an architectural interface. The technical meaning is the first association with the word. Therefore I will use the term architectural interface in this discussion whenever it would otherwise be unclear which type of interface is referred to. In this paragraph all previously mentioned principles will be discussed using this example as a guide.
Figure 13: Architectural implementation model of the banking example

The yellow hexagons are the interfaces, the blue hexagons the services and the pink ones the aspects. The Data Storage interface is presented as an interface that is logical for the Account Transfer to call. It is not implemented in the code, as it is considered out of scope for this example. Every line in the diagram stands for communication between independent systems. The numbers refer to the order in which they will appear. Every communication line between systems is managed by a contract. As interfaces are tightly linked to the business process, the responsibility for their mutual contracts lies with the business process owners. All other contracts, in which at least one aspect or service is involved, are managed by the IT department. The business owner should be able to recognize the ordering of interfaces, but should have no idea which services perform the majority of the actions.


Logging, exception handling and internationalization are aspects which are widely used throughout the application landscape, and it is therefore not necessary to stipulate contracts for them with every system they are involved in. They each have a general contract. These aspects and their classes are therefore not mentioned in the two presented models; it would make the models unnecessarily complicated.

UML relationships

I used four types of relationships, each having two variants. The types of relationships are composition, aggregation, association and generalization. I use composition when the life cycle of the part object is controlled by the whole. This is like a family relationship as described in the section about the Law of Demeter. I use aggregation when the whole has the part as a member variable or when it assumes much knowledge about the referred object. That is a friendly type of relationship. I use association when the whole does not have the referred object as a member, when it only uses the referred object in an exchange, or when the referred object is an aspect. Then the classes can be considered neighbours. Generalization is used to show inheritance or interfacing. Although aspects are actually friends and should therefore have an aggregational type of relationship, an association is used. That is because aspects are always in control of the contract. The class which calls an aspect has to comply with the contract stated by the aspect; an aspect does not belong to any class. Composition, aggregation and association can be one way or two way: one way when the whole does not expect an answer in return, two way when an answer is retrieved. Different line types are used to distinguish generalization in case of inheritance from interfacing.
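The first three relationship types have direct equivalents in code. The hypothetical sketch below, deliberately outside the banking domain, shows how each reads in Java: the composed part is created and owned by the whole, the aggregated part is passed in and merely held, and the associated objects are only exchanged or called.

```java
public class RelationshipSketch {
    static class Engine { }
    static class Wheel { }
    static class Road { }

    // aspect-like neighbour: it controls its own contract
    static class Logger {
        static String log(String message) { return "LOG " + message; }
    }

    static class Car {
        private final Engine engine = new Engine(); // composition: Car controls the Engine's life cycle
        private final Wheel wheel;                  // aggregation: a member, but it lives on without the Car

        Car(Wheel wheel) { this.wheel = wheel; }

        String drive(Road road) {                   // association: the Road is only exchanged, never kept
            return Logger.log("driving");           // association with an aspect
        }
    }

    public static void main(String[] args) {
        System.out.println(new Car(new Wheel()).drive(new Road()));
    }
}
```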

The data models of the banking example

The green rectangles are classes, the yellow ones interfaces or superclasses. Splitting the data model of the application into data models of each system creates a faceted overview of the application as a whole. The whole has disappeared from the data model; that overview should be provided by the contracts.
Figure 14a: Data model of the Employee aspect


Figure 14b:Data model of the Interface construction aspect

Figure 14c: Data model of the UserInterface


The _UserInterface class communicates with its surroundings using interfaces. It suffices to know the interface at compile time and to call for the proper class to instantiate at run time. In the code the interface of the _TransferResult class is called directly by the _UserInterface, but that is a matter of choice and, in this case, of simplicity of the coding.
Figure 14d: Data model of the Transfer Result Interface
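The mechanism of knowing only the interface at compile time while choosing the class at run time can be sketched with plain reflection. The names below are illustrative, not the ones from the jar; in practice the class name would come from a configuration document.

```java
public class RuntimeChoice {
    interface TransferResult { String render(); }

    public static class OkResult implements TransferResult {
        public String render() { return "transfer approved"; }
    }

    // the caller knows only the interface at compile time;
    // the concrete class name could come from configuration
    static TransferResult create(String className) {
        try {
            return (TransferResult) Class.forName(className)
                    .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("cannot instantiate " + className, e);
        }
    }

    public static void main(String[] args) {
        TransferResult result = create(OkResult.class.getName());
        System.out.println(result.render());
    }
}
```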


Figure 14e: Data model of the Account Transfer Interface

Figure 14f: Data model of the Transfer Control Interface


Figure 14g: Data model of the Flow system


3.6.2.1 Environmental constraints

The constraint in this code is the fact that it is an example, which means that not all aspects are worked out thoroughly. For instance, the flow in the TransferFlow class is performed using an iteration and decisions about the validity of a transfer are made at random. One could hardly think that is how a bank would work. The _UserInterface class does not have an interface. In practice it would be one of many implementations, each called upon by a command coming from a user interface. Although the architectural interface to store the data is depicted in the architectural implementation model, there is no code equivalent of that interface. A configuration mechanism is also lacking. That could be an aspect and would be used instead of the FlowConfigManager. Now the FlowConfigManager is designed as an internal aspect (which is a contradiction in terms) of the flow engine.

3.6.2.2 Layering and iteration

The first layer which can be created consists of the different architectural interfaces and the objects needed for the exchange between these interfaces. Next, aspects can be isolated and eventually the services. In this example, which is developed on the basis of a fictitious process, the isolation of architectural interfaces is already quite arbitrary and therefore complex. Normally an architectural interface should be tightly linked to the description of a business process, and the objects which are exchanged between these interfaces should be recognizable in the set of business objects. For instance, the TransferStatus object is intuitive, as each transfer will get some status in order to decide if the transfer will eventually be executed. The TransferContainer is not directly intuitive, but it serves as a vehicle for the combination of a transfer business object together with its status object. Having these objects will make the distinctions between the different architectural interfaces more robust to change, as they will process objects close to the set of business objects. Objects which are exchanged between the interfaces of the architectural interfaces and services should be as independent from the business process as possible. That optimizes the robustness and reusability of the service. The Flow service can be used by more architectural interfaces than the ITransferControl architectural interface, and because of that the exchanged object is of type Object.
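The role of the TransferContainer as a vehicle can be shown in a few lines. This is a simplified sketch of my own, not the code from the jar, and the threshold value is invented: the business object keeps its values, while the accompanying status object is what the processing changes.

```java
public class ContainerSketch {
    // business object: its values never change during processing
    static class Transfer {
        final int amount;
        Transfer(int amount) { this.amount = amount; }
    }

    // status object: steers the processing of the transfer
    static class TransferStatus {
        private String status = "NEW";
        String getStatus() { return status; }
        void setStatus(String status) { this.status = status; }
    }

    // vehicle combining a transfer with its status
    static class TransferContainer {
        final Transfer transfer;
        final TransferStatus ts = new TransferStatus();
        TransferContainer(Transfer transfer) { this.transfer = transfer; }
    }

    static TransferContainer route(TransferContainer tc) {
        // only the status changes; the Transfer itself is untouched
        tc.ts.setStatus(tc.transfer.amount > 10_000 ? "SIX_EYES" : "FOUR_EYES");
        return tc;
    }

    public static void main(String[] args) {
        System.out.println(route(new TransferContainer(new Transfer(20_000))).ts.getStatus());
    }
}
```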

3.6.2.3 Coupling

Coupling between architectural interfaces is designed to be doubly loose. Every time an object is exchanged between these interfaces, the object is checked for validity by the transmitting interface. The receiving interface can rely upon that fact and process the received object using its own standards. The overall process is a transportation process, as the object created at the beginning of the process remains the subject throughout all steps and its values do not change. The TransferStatus object accompanies the Transfer object throughout the processing. The changes in this object give direction to the processing of the Transfer object. The workflow in the Flow subsystem is very basic. A workflow consists of the combination of transformational and transportational processing at the same time. Based on transformational changes the transport of the subject is directed. Only the transformational processing has been worked out, using randomization; the transportational processing, which should be configured based upon the transformational processing, is kept straightforward. If the transportational processing had been worked out more thoroughly, then after every

output of the transformational processing it would evaluate what has to be done next. Every step in a workflow is an event for the workflow engine. The history of previous events can be important in deciding what should be the next step in the process. In the processing of bank account transactions, events are often historical. It would make the example too complex and was therefore not implemented. There is no example of translational processing available, as the business object remains the same all the time. Aspect coupling is used three times. The _Logger and the _EmployeeManagement class have made their own interpretation of how they will return results. In the _Logger class the basic properties for each logging instance returned are centrally defined. No logger object will call its parent logger, as this is set to false during the construction of the logger in the _Logger class. Likewise, the _EmployeeManagement class has its own logic for returning employees when a request with the exclusion of a role is made.
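Double loose coupling can be reduced to a minimal sketch. The names and the rejection threshold below are invented for illustration: the transmitting side guarantees validity before handing over, and the receiving side still applies its own standards to a valid object.

```java
public class CouplingSketch {
    static class Transfer {
        final int amount;
        Transfer(int amount) { this.amount = amount; }
    }

    // the transmitting interface guarantees validity before handing over
    static String transmit(Transfer t) {
        if (t == null || t.amount <= 0) {
            throw new IllegalArgumentException("invalid transfer");
        }
        return receive(t); // the receiver may now rely on a valid object
    }

    // the receiving interface applies its own standards
    static String receive(Transfer t) {
        return t.amount > 1_000_000 ? "REJECTED" : "ACCEPTED";
    }

    public static void main(String[] args) {
        System.out.println(transmit(new Transfer(500)));
    }
}
```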

3.6.2.4 Principles for communication

The first way the Law of Demeter is respected is that there is a wave in the navigation. Interfaces are only aware of their direct neighbours and the systems they call upon themselves. Another way to respect the Law of Demeter is that the communication between systems is restricted to the instantiation of a system and one method. That method should either return the object sent or return a verdict about the object sent. The Account Transfer architectural interface returns a boolean to the user interface, because the request made by the user interface is whether the transaction can be accomplished. For the user interface that is enough information, as it already has the information stored in the _TopTransfer class to return the proper feedback to the end user. It has no need to know which particular status the transfer business object received during the processing by the Account Transfer interface. The Transfer Control architectural interface returns the complete TransferContainer object, because the Account Transfer asks the Transfer Control interface to check the transfer. This is a more complicated question than a simple yes or no. The result of a peculiar transfer, for instance, is not only a no; it must still be saved somewhere, as these transfers must be reported by the bank. The (not implemented) architectural interface Data Storage, on the other hand, would return a boolean to the Account Transfer interface telling whether the storage of the data has been successful or not. The Law of Demeter can be said to be violated in every line in which the Transfer object and its TransferStatus object are retrieved from the TransferContainer, as a method is called two objects deep, as can be seen in tc.getTs().getStatus(). Combining the Transfer and its accompanying TransferStatus object in one TransferContainer object, however, actually simplifies the code and the communication between classes.
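For completeness, the usual remedy for the two-objects-deep call is a delegating method on the container. The sketch below is hypothetical and simplified compared to the classes in the jar; it shows how the chain tc.getTs().getStatus() could be replaced without giving up the container.

```java
public class DemeterSketch {
    static class TransferStatus {
        private final String status;
        TransferStatus(String status) { this.status = status; }
        String getStatus() { return status; }
    }

    static class TransferContainer {
        private final TransferStatus ts;
        TransferContainer(TransferStatus ts) { this.ts = ts; }
        TransferStatus getTs() { return ts; }
        // delegating method: callers ask the container itself,
        // instead of reaching through it with tc.getTs().getStatus()
        String getStatus() { return ts.getStatus(); }
    }

    public static void main(String[] args) {
        TransferContainer tc = new TransferContainer(new TransferStatus("APPROVED"));
        System.out.println(tc.getStatus()); // one object deep instead of two
    }
}
```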
If both objects had been transported separately, the communication between the different classes would have to make more assumptions than now. As the main purpose of the Law of Demeter is to reduce the assumptions made during communication between classes, the use of the TransferContainer in the end respects the Law instead of violating it. Finally, the LoD is respected by letting all architectural interfaces and systems be neighbours of each other, sharing only one method in which an object or a boolean is exchanged. That the first rule of thumb of the privacy principle, namely that a system's own constraints should be kept private, is respected can be seen in the throwing of an Exception by the _InterfaceManager class. Any class

making use of the service of the _InterfaceManager class does not have to know what can go wrong within that class. Handling a general exception will do for them. The message sent by the _InterfaceManager class in its most general form is then already clear enough. When you look at the technical implementation model you can see that there is a line from the Flow class directly to the TransferContainer class and not from the _IFlow interface. This can be done because the Flow service receives an object of the class Object from the Transfer Control interface. The data sent by the Transfer Control interface is as general as possible. That way the Flow service is not restricted to being used as a private subsystem of the Transfer Control interface. Any implementation of the IFlow interface will have to cast the received object to the class needed. The contract of the IFlow interface can be kept very general, with the effect that all the implementation classes have a lot of freedom to adapt themselves to almost any kind of request. This coheres with the privacy principle that when data is sent no constraints should be posed on the contract of the receiving class, and with the Liskov Substitution Principle that no contract should depend on implementation matters. The way the exchange of the business objects between the architectural interfaces and their implementations is organized supports this rule of thumb of the privacy principle too. From the User interface to the Transfer Result interface the exchange is generalized using the superclass _TopTransfer. Every implementation of _TransferResult will cast the superclass to the class needed. This is depicted in the data model by a one-way association with the superclass and an association with the subclass. The third way the privacy principle comes into play is by acknowledging that a _TopTransfer object can be used by different systems.
The _TopTransfer object used in the _UserInterface is the same as the _TopTransfer object which would be used in the Flow object. In the current example there is no actual use of the _TopTransfer object in the Flow object, but it is easily imaginable that there would be. Not reinventing the wheel by creating a _TopTransfer object in every system serves the maintainability and the interoperability of the system as a whole. It is the effect of the Liskov Substitution Principle to postpone the initialization of the system until the system is requested to respond. If the initialization took place during the construction of an object, the calling class would become linked to the inner functionality of the callee. In the current versions of the constructors the only action taking place is the assignment of the received data to an inner placeholder. The constructor is not part of the contract and should therefore not intermingle with the processing of the received data. If any exception is returned to the caller, it must occur while using the methods which caller and callee share. Therefore the initialization of the system should occur in these methods and not in the constructor. It decouples the constructor and the contract specifications from each other. It restricts the influence of the constructor to the handling of the received data and relieves the contract implementation of that responsibility. Another effect of LSP is the organization of status variables. The status of a transfer belongs to the Transfer status business object. These variables are used for navigating the Transfer through the application. Variables referring to this business object are collected in the TransferStatus object and can and should therefore be used in different systems. When the business changes these statuses, all systems will have to be changed too. These statuses are expected to change rarely.
To make them publicly available to all relevant systems, these variables are stored in the TransferStatus class. The variables used to make decisions about a transfer, on the other hand, are very implementation specific and are as a result put in the implementation of the flow. Putting them there leaves no trace in the implementation of the architectural interface nor in the implementation of the TransferStatus class. When a change in the decision process is made, only the implementation of the flow is

affected. The last way mentioned here can be seen by looking at the methods in the contracts. All contract methods can be divided into two groups, namely one group of methods for the handling of the object received by the constructor and another group of methods without any parameter. The latter makes it easier to provide independent implementations. The casting to the requested subclass of the business object is performed within the contract methods. That gives great flexibility in how to implement the contract. A good example of this is that even with this kind of simple flow two different styles of implementation can be created. The PeculiarFlow first has a step in which it is checked whether the transfer is peculiar, and on affirmation it is sent to the manager. The TransferFlow performs a couple of steps in an iteration until either the iteration ends or one of the bank employees disapproves the transfer. That flexibility would not have been achieved if the design of the TransferControl interface had known much about the flow systems.
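These two ideas, a constructor that only stores the received data and contract methods that defer both the initialization and the cast, can be condensed into one hypothetical sketch. The class below is not the TransferFlow from the jar; it merely shows the shape of such a contract.

```java
public class FlowSketch {
    // contract: data arrives via the constructor, the methods take no parameters
    interface IFlow { boolean handle(); }

    static class TransferFlow implements IFlow {
        private final Object received;  // kept as general as possible
        private String status;          // deliberately not initialized in the constructor

        TransferFlow(Object received) {
            this.received = received;   // the constructor only stores the data
        }

        public boolean handle() {
            if (status == null) {
                status = "STARTED";     // initialization deferred to the contract method
            }
            String transfer = (String) received; // the cast happens inside the contract method
            return !transfer.isEmpty();
        }
    }

    public static void main(String[] args) {
        IFlow flow = new TransferFlow("transfer 42");
        System.out.println(flow.handle());
    }
}
```

Any exception from the cast or the initialization now surfaces through the shared contract method, never through the constructor.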

3.6.2.5 Inversion of Control

The use of Dependency Injection is preferably restricted to calls made to an architectural interface. There are two reasons for this: the first is performance, the second is to benefit maximally from it. That performance will benefit from restricted use of reflection mechanisms is obvious. The latter might require some explanation. Please take a look at the three interfaces. Together they form parallel lines of implementation. One can imagine a line for the transfer of shares, disposals and money next to each other. The class which starts the user interface for disposals will call the disposal class for the Account Transfer interface, which will call the disposal class for the Transfer Control interface. It works equivalently to the example. Each interface will be called using Dependency Injection. That implies that all classes which are called by one of these classes are, in their implementation, dependent on the interface. The flow for the control of disposals is only called by the disposal implementation of the Transfer Control interface. Therefore the implementations of the interface IFlow will at run time change in sync with the implementation of the ITransferControl interface. Using Dependency Injection for the architectural interface causes all classes called by this implementation to be selected at run time. If these implementations were still instantiated using Dependency Injection, the benefit of the Dependency Injection used for the architectural interface would be minimized instead of maximized. However counterintuitive it might seem, restricting the use of Dependency Injection might actually maximize its effect. I think that every serious use of Java or .Net should involve a Dependency Injection container, but that the use of Dependency Injection as a technique should be restricted to those systems which serve as crossroads in deciding which way to go. In this example I used a form of constructor injection.
This does not imply that I favor constructor injection over setter injection. Both types of dependency injection have proven their value. It was just the simplest form for me to use.
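The difference between the two styles is small enough to show side by side. This is a generic sketch, not code from the example: with constructor injection the dependency is mandatory and immutable, with setter injection it can be set or replaced after construction.

```java
public class InjectionStyles {
    interface Flow { String run(); }

    static class SimpleFlow implements Flow {
        public String run() { return "ran"; }
    }

    // constructor injection: the dependency is mandatory and immutable
    static class ConstructorInjected {
        private final Flow flow;
        ConstructorInjected(Flow flow) { this.flow = flow; }
        String check() { return flow.run(); }
    }

    // setter injection: the dependency can be set or replaced after construction
    static class SetterInjected {
        private Flow flow;
        void setFlow(Flow flow) { this.flow = flow; }
        String check() { return flow.run(); }
    }

    public static void main(String[] args) {
        System.out.println(new ConstructorInjected(new SimpleFlow()).check());
        SetterInjected si = new SetterInjected();
        si.setFlow(new SimpleFlow());
        System.out.println(si.check());
    }
}
```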

Design patterns

All navigational paths in the systems use the Flow pattern. As will be shown in the last chapter, this is to be expected. Wherever appropriate the Dependency Injection pattern is used. Its use is restricted in order to benefit maximally from it. Had there been enough information, the Specification pattern would have been used to validate the request in the FlowStep. Aspects are called using the Facade pattern, by which means the implementation of an aspect can still be changed without interfering with the overall service of the aspect. The Facade pattern is not implemented using the Singleton pattern, as this would put a restriction on performance.
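Calling an aspect through a Facade can be sketched in a few lines. The class below is hypothetical (the _Logger in the jar is more elaborate); the point is only that callers depend on the facade class alone, so the backend behind it can be swapped freely.

```java
public class LoggingFacade {
    // stand-in backend; in reality this could be any logging implementation
    private final StringBuilder sink = new StringBuilder();

    // the facade is the only class callers depend on, so the
    // implementation behind it can change without interference
    public String info(String message) {
        String line = "INFO " + message;
        sink.append(line).append('\n');
        return line;
    }

    public static void main(String[] args) {
        System.out.println(new LoggingFacade().info("transfer stored"));
    }
}
```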

4 The primary process


Introduction
In this chapter I will mainly discuss design patterns and present an alternative way to classify them. If necessary I will discuss the design patterns more thoroughly, but for many design patterns the information can be obtained from many other sources next to the outstanding book 'Design Patterns' by the GoF, which was the main source of inspiration for writing this chapter. Before starting I will briefly touch upon the subject of the life cycle of design patterns. There are best practices which used to be design patterns but have been standardized into technical constructs. Nobody writes a new way to handle exceptions, nor does anyone write a new interface for iteration. At the time the book of the GoF was introduced, the Iterator pattern was definitely a pattern which had to be designed into programs. Nowadays programming languages provide standard solutions for these problems and it is best practice to use those solutions instead of developing new ones. It is like the way people learn to walk. It is very hard for a human to learn. As a baby he really has to use his brains. Maturing and getting more experienced, the person has to think less and walking starts to become an automatic process. In the end the person can reach a very high level in the process of walking and, based on that level, develop new patterns of walking, like the grand plié in ballet, the moonwalk of Michael Jackson or the akka of Ronaldinho. Design patterns can evolve in this way too. First an architect has to think about how to implement the functionality, then it becomes the responsibility of the developer to use the best practice, and in the end it is basic knowledge of the developer to use the appropriate solution provided by the programming language. The Stack is, like the Iterator, an example of a former design pattern that is now implemented as a type of collection in Java and many other languages.
In Lotus Notes I had a self-written library to implement the Stack functionality, but in Java this was needless. At that time it was a design pattern in Lotus Notes, while in other languages it had developed into a technical construct. The Iterator pattern and Stack handling, among other present-day technical constructs, are grouped together in the pattern 'Collection handling'. Design patterns can therefore serve as a guide map for enriching a language with technical constructs, favouring a higher return on investment. I will present some newly acknowledged design patterns as well, which does not mean these patterns are actually new. These are the Symbolic Proxy pattern, the Publish/Subscribe pattern, the Flow pattern and the Dependency Injection pattern. The Symbolic Proxy pattern is already used in practice, the Publish/Subscribe pattern is a common situation describing dependencies and the Flow pattern is probably silently used quite a lot. The Dependency Injection pattern is a description of how I expect this modern best practice basically to work. The main concern of this chapter is the way design patterns are classified. I will present an alternative way to classify design patterns. In this classification system design patterns are classified using their intrinsic characteristics. I call this a genotypical classification system. This chapter will start by describing what the intrinsic characteristics of a design pattern are and what the shared purposes of all design patterns are, resulting in a definition of a design pattern, and will end with the description of a classification system in which design patterns are arranged using their intrinsic characteristics.
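In modern Java both former patterns are indeed a few lines of standard library use; no hand-written Iterator interface or Stack library is needed:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class CollectionHandling {
    // the for-each loop hides the Iterator the GoF once had to describe
    static int sum(List<Integer> amounts) {
        int total = 0;
        for (int amount : amounts) {
            total += amount;
        }
        return total;
    }

    // LIFO handling without a self-written Stack library
    static int lastPushed() {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(1);
        stack.push(2);
        return stack.pop();
    }

    public static void main(String[] args) {
        System.out.println(sum(List.of(1, 2, 3))); // 6
        System.out.println(lastPushed());          // 2
    }
}
```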


4.1 Definition of a design pattern

4.1.1 Characteristics of design patterns


First of all there is the name of the design pattern. Its function is to give a clue to what the pattern does. Some patterns have synonyms, like Adapter and Wrapper, which means there is more than one way to describe the same pattern. Each name reveals an essential characteristic of the design pattern, such as what it does (Adapter) or how it behaves (Wrapper). The second main characteristic is therefore the definition of what it does. All design patterns are best practices answering a set of functional demands. This set of functional demands is called the functional contract of the design pattern. Every design pattern is an optimal answer to a recurring functional contract. The choice of which design pattern to use must be made by first analyzing which functional contract has to be served; having found that out determines which design pattern should be used. This is the main characteristic used by the GoF to classify design patterns. Third is the description of the abstract relationships it consists of. The description can range from very simple, as in the State pattern, to quite elaborate, as in the Bridge pattern. It shows which relationships have to be realized in response to the functional contract. Fourth is the relation between the input it requires and the output it delivers. The relationship between input and output is the technical translation of the functional contract. It consists of three parts, namely: 1. the type of input, 2. the type of output, and 3. as a consequence of both, the type of processing. If the object for input is the same as the object for output, for instance, then the type of processing is transformation. The type of processing is not an independent characteristic of a design pattern but a derived one. It is still stipulated, because it can be used as a way to order design patterns effectively.
Sorting design patterns by type of processing can help to investigate the input and output differences of each design pattern and to compare them. This is the main characteristic used to arrange design patterns here. Fifth, a design pattern can have specific restrictions, which make it more powerful and flexible and delineate its use. That a restriction makes a pattern more powerful and flexible has to do with the fact that if its use is restricted to the described situation, it will serve the general purposes best. When ignoring the restrictions and only looking at the description of the design pattern itself or its functional contract, one can be disappointed in its effectiveness. In the end a design pattern will be measured by how effectively it serves. All of the previous characteristics are shared by every design pattern, but each in a different way. From an external point of view, differences between design patterns begin when the functional demands are different. From an internal point of view, differences between design patterns begin with the set of abstracted relationships they consist of. From an external point of view there is a difference between the State and Strategy patterns; from an internal point of view there is none.

4.1.2 Purposes
Important to realize about a design pattern is that it uses coupling to do its job. Coupling is unavoidable, and even wanted in order to have a process inside an application at all. The use of design patterns is to benefit from coupling instead of suffering from it. All design patterns share some purposes. These purposes are:

- the open/closed principle: a design pattern is a best practice for a given functional contract. That means that the explicit functions of the contract might change without a need to change the relationships between the elements of the pattern. The design pattern has the required flexibility to need only a different implementation within its relationships to stay tuned to the changed functions of the contract. Because of that it serves the extensibility of the code and is at least able to postpone modifications;
- hiding implementation: every design pattern hides the implementation by separating the needed abstracted relationships from their implementation;
- standardized solutions: complex applications will always need maintenance. Using design patterns can ease maintenance, because best practices are used to solve known problems. The team members working on the application will understand the issues better when the solutions so far used standardized solutions instead of the idiosyncratic solutions of former team members.

Often it is said that design patterns help loose coupling. Yes, they do; no, they do not. Yes, they do, because they adhere to the open/closed principle and are able to extend the code as long as the changes are within the range of the functional contract of the design pattern. No, they do not, as loose coupling is between patterns, not inside patterns. Inside patterns abstracted relationships are used to fulfill the requirements of the functional contract, so inside a pattern elements have a coupling; having an optimized pattern of coupling, they perform well. Loose coupling is not a real issue at the level of design patterns. It exists at the higher level of system integration. Reusability is left out of the equation for the same reason as loose coupling. Reusability is decided upon at a higher level of abstraction than the choice for a design pattern. When the implementation using the design pattern is effective, the implementation can be reused effectively. It is not an intrinsic characteristic of a design pattern. Not using a design pattern will not prevent the reuse of a certain implementation, although using one helps.

Reusability and loose coupling have the same pitfall: they can complicate the change and upgrade of a system by wiring too many components together. Striving for these goals can have its downsides, even when it is done well at the time of execution. The use of appropriate design patterns will never have this type of downside, which is the reason that reusability and loose coupling are considered neither intrinsic characteristics nor purposes of design patterns. The terminology of tight, strict and loose coupling is used in the classification system. There it has another definition than loose coupling in the sense of having no restrictive relationship at all. Elements within design patterns always have relationships among each other, which is a synonym of saying that elements within a pattern have a coupling. That a pattern is intrinsically based on loose coupling does not imply that that pattern favours loose coupling on an architectural level while other patterns do not. Proper use of a design pattern should favour the aforementioned purposes though, and that should be independent of reusability and architectural loose coupling.

4.1.3 Definition
The definition of a design pattern is: a standard description of abstracted relationships between elements as an answer to a functional contract, optimizing the preservation of the open/closed principle through maximizing the separation between the functional contract and its technical implementation. With 'optimizing the preservation' is expressed that the design pattern is considered the best solution to the functional contract. Other solutions are possible, but none will serve the preservation of the extensibility of the application as well as the solution provided by the design pattern. Not every set of abstracted relationships can be said to be a design pattern. Only those patterns which accomplish the maximal separation between functional contract and technical implementation can be said to be design patterns. Maximal separation2 does not imply that the
2 The notion of maximizing the separation might be the reason that loose coupling is so often used to describe the effectiveness of a design pattern.


care for the open/closed principle is maximized too. It is the combination which defines the design pattern; both are needed. The latter implies that the same set of abstracted relationships is in one situation a design pattern and in another it is not. However strange that might seem, that is what the first part of the definition describes. The set of abstracted relationships is an answer to a specific functional contract. One cannot use the Observer pattern all the time and create the perfect application doing so. In some situations the relationships described by the Observer pattern fit well, in other situations they do not.

4.2 Classification of design patterns


Design patterns will be classified using the set of abstracted relationships they consist of. The classification is independent of any implementation and independent of assumptions about the phase of the application. When you look for instance at the 'State' and 'Strategy' design patterns, they describe the same set of abstracted relationships. They are however considered different design patterns. The main reason is that they are answers to different functional demands: the 'State' pattern is used whenever the implementation should differ based on different states, and the 'Strategy' pattern is used for context-based differences in implementation. The set of abstracted relationships is actually the same. The difference between the categorization of the GoF and the proposed classification system can be described using the analogy of phenotype and genotype. To describe that difference the unexpected wisdom of a humorous question is introduced. In the movie 'Monty Python and the Holy Grail' the question was asked: 'is it an African or a European swallow?'. It is the same bird, living in different locations throughout the year. The phenotypes are the African and the European swallow; the swallow itself is the genotype. One genotype can have more than one phenotype; one phenotype will always correspond to one genotype. The proposed classification system is a genotypical classification system. It describes one-to-one relationships between a distinct item in the collection and the classification result. That is the benefit of a genotypical classification system. Within such a system design patterns can be compared to each other and arranged based on their internal differences. It poses new questions to be explored, like 'are these all the patterns?', 'how do design patterns relate to classes, inheritance and interfaces?', 'how do they complement each other?'.
The new classification system should respect, as its classification guideline, the main characteristics by which design patterns are differentiated from each other. First it has to be discovered how the genotypes of design patterns can be identified. Next, design patterns should be classified along this genotype.

4.2.1 Pillars of classification


The aim of the classification system is to arrange design patterns independently of how they answer the functional contract. That ordering has already been provided by the classification of the GoF. The aim of the alternative system is to be able to order design patterns based on the effects of their unique genotype. If their genotypes are different, their effects must be different as well. There are two pillars on which this classification system is built. The first one is the set of resulting items to be arranged; in this classification system that is constituted by the genotype of the design pattern. The other pillar is the set of criteria by which the resulting items are arranged, being the effect of the genotype. The first criterion for selecting a genotype for design patterns is that it must be a characteristic of design patterns which is shared by all patterns. Next, it must be able to identify a design pattern uniquely

among others. Furthermore the characteristic must be independent of any external constraint. Finally the characteristic must cause unique effects. There is only one characteristic to which all these criteria apply: the set of abstracted relationships of a design pattern. Therefore the set of abstracted relationships of a design pattern is its genotype. The first criterion for the effect is that it must inevitably be shared by all design patterns. Next, there must be a one-to-one correlation with the set of abstracted relationships. If the criteria fail to do that, then differences in the set of criteria will have no predictable outcome in the ordering of design patterns. Finally, the differences in effect must be caused solely by the set of abstracted relationships. There is again only one characteristic to which all these criteria apply, and that is the relation between the input and output. Therefore the design patterns will be arranged using their input, output and processing. As there are only three types of processing, design patterns can be arranged in three main groups. That makes the ordering more comprehensible than displaying all of them in one group.

4.2.2 Description of the effects


The main groups are the design patterns supporting a transformational, a transportational and a translational processing. These types of processing were introduced in section 3.3.2. Every design pattern has a unique way to support its processing. All design patterns together should add up to all possible processings in combination with a specific set of characteristics provided by the programming language. As this document is about software architecture in Java, the specific set of characteristics applies at least to Java. In Java almost everything is an object. Design patterns are only interesting when they describe relationships between classes, because then they provide a type of solution which is at the core of the Java language. Classes can relate in three different ways to their environment, namely as an object belonging to a certain class, as a member of a certain hierarchy or as an implementation of an interface. A class can therefore be returned based upon being a specific class, based on belonging to a certain hierarchy or based on implementing a certain interface. Objects belonging to a class share to a certain degree the characteristic of inheritance, but only one level deep. Classes which share an interface cannot be linked to one another directly. Each individual processing has the three characteristics of input, processing and output. Sometimes the input is in control of what the result will be, sometimes the output, and equally the processing. When the input is in control of what the result of the processing will be, it is called tight coupling. When the output is in control of what the result will be, it is called loose coupling. When the processing is in control, or both the input and output contribute to the result, it is called strict coupling. Each processing requires one of three situations for its input: 1. a nominal key, 2. an ordinal key, or 3. no key. The terms nominal and ordinal are borrowed here from statistics.
A nominal key is a key whose value has no relationship to the values of other keys except for being different. What exactly the key will be for an action like 'save the form' is arbitrary, but there should be one. There is no way to check whether a nominal key is used properly other than looking at the place where it is defined. When the value of the key is changed, nothing really changes as long as its relation to the referred object remains untouched. An ordinal key is a key which is part of a formatted set. Each key within the set has a distinctive meaning and can be identified using the formatting rules of the set. Every value can be checked against the formatting rules of the set to decide whether it can be a key within the set or not. When the value of the key is changed, the key is changed. A class name or a protocol is an example of an ordinal key. Both belong to a set having strict formats which apply to its members. One could throw an exception when a key does not conform to the format of the set. That is the major distinction with nominal keys: for those, no exception can be thrown because of their value. Input can also function without any special trigger provided. An object or class can still be part of the input, but no further instructions are needed by the processing to accomplish the job. All in all this implies that for each type of processing there are at most twenty-seven possible design patterns. This does not mean that all possible situations need a design pattern or that there is one available. It also means that there are possibilities which are covered by keywords of Java, relieving the architect and developer of using a design pattern for them. Anti-patterns are patterns which are abstracted solutions for possibilities where no pattern should be used. A design pattern used in the wrong way is not considered an anti-pattern but an anti-implementation. An example of an anti-implementation is when a design pattern is used outside its restrictions. In an article about patterns there was a pattern named 'Negotiating Agents'. It is said to cover the situation in which agents should resolve possible conflicts before running. These agents should negotiate with one another and finally decide which agent should run how. That is asking for deadlocks, and with every agent added all other agents should be reconsidered to find out whether they have a possible conflict with the new agent and how to solve it.
This pattern can be applied successfully however when the situation is very well described and a central controller cannot be afforded, as in the wiring of telephone connections and the managing of many simultaneous conversations. Then there is no time to wait for the decision of a central controller. Outside these strict perimeters this pattern should not be used, as it will have the effect of an anti-implementation; the Flow pattern should be used to cover the situation instead. It is only an anti-pattern when the solution should not be there at all, and it will then by definition guide the developer in the wrong way. There are two ways for a pattern to be an anti-pattern. The first one is when the pattern is a solution to a problem which should never exist anyway. It is difficult to describe a situation like that in a programming language, but there is luckily a good example from history. From the Greeks the Ptolemaic system to describe the solar system was inherited. In this system only perfect circles were supposed to exist for the orbits of planets. To describe the measured orbits of the planets a lot of auxiliary circles had to be created, because most of the time the orbits could not be described using perfect circles. That is what will happen with a design pattern when the pattern (perfect circles) is to describe a situation which in reality does not exist. The implementation is good, but the presuppositions are not, which is the reason that the application will grow out of control and cause solutions to new problems to pile up. The solution for the description of the solar system was to use elliptical orbits for the planets, as proposed by Kepler. The second way for an anti-pattern to arise is when there is no need to create a pattern as a solution and it is even best practice not to do so. Casting, for instance, is the answer to the combination of an ordinal key as input, the use of inheritance and a transformational processing.
Any pattern providing a solution to this situation is an anti-pattern, as there is already an optimal solution for this type of possibility. The language will develop supposing everyone uses casting as the solution. Using a solution other than casting seriously damages the robustness of the solution when upgrading to new versions. Nearly all of these kinds of anti-patterns will be performance triggers.

4.3 The classification system


I will discuss most design patterns by describing only the effect of their genotype and the restrictions they have. There are however a few design patterns about which I will say more. That

is because I have extra thoughts about these patterns that I would like to share. These patterns are the Visitor pattern, the Template pattern, the Publish/Subscribe pattern, the Flow pattern and the Proxy pattern. The first one because the common implementation of the pattern does not fit the definition of the pattern; I suggest the implementation first provided by Robert C. Martin in his article and theoretically analyzed and described by Bertrand Meyer and Karine Arnout in their article. The Template pattern is discussed because of the extra constraint I specify, which I think is an added value to the general description of this pattern. There is an old discussion whether the Publish/Subscribe and Observer patterns are the same. In this overview of patterns I will point to a situation in which there is a clear distinction between these two patterns. I think the Flow pattern is already often used, but not acknowledged as a pattern of its own. The Proxy pattern as commonly described consists in my view of two highly related but different patterns. I split it up into two different patterns, each with a unique description of abstracted relations. The first pattern is still called the Proxy pattern, the second the Symbolic Proxy pattern. Throughout this section and hereafter the distinction between tight, strict and loose coupling is referred to. These are closely related to the concepts described in chapter 3 when writing about architectural coupling. The implementation of these types of coupling is slightly different here, but consistent with the previous descriptions. To determine the effect of the set of abstracted relationships as belonging to one of these three types, the relationship(s) within the pattern must be identified which are crucial for the identity of the pattern. Having identified these relationships, their characteristics must be estimated, and based on that the pattern can be said to use tight, strict or loose coupling.
It is tight coupling when in the UML the crucial relationships are compositional. A unidirectional relationship from A to B indicates most of the time that A creates B, but that A is not involved in the use of B. At the time of creation A owns B. It is strict coupling when the most crucial relationships use aggregation or when the processing of the pattern relies on inheritance or interfacing. The use of inheritance or interfacing is said to be strict, because the effect is partly based on shared characteristics and partly based on unique characteristics. Classes within a hierarchy, or classes using the same interface, do not own each other; it is more like referring to each other. A class realizing an interface can be said to refer to the interface, and the same line of reasoning applies to inheritance. A unidirectional aggregation relationship between A and B implies that A reads and writes B, but that B does not change the characteristics or behavior of A. It is loose coupling when the most crucial relationships use associations. A unidirectional association between A and B indicates that A is doing something with B, but not the other way around. The use of keys does not influence the type of relationship. It is a characteristic of the relationship, but it does not matter for determining whether the processing uses tight, strict or loose coupling. The abbreviation 'n.a.' stands for not applicable.

4.3.1 Transformational patterns


Table 2: transformational processing

Relation    | Input       | Tight coupling       | Strict coupling | Loose coupling
------------|-------------|----------------------|-----------------|---------------------
Class       | Nominal key | Memento              | n.a.            | n.a.
Class       | Ordinal key | n.a.                 | n.a.            | n.a.
Class       | No key      | Prototype, Singleton | n.a.            | Object pooling
Inheritance | Nominal key | Factory              | Template        | n.a.
Inheritance | Ordinal key | Flyweight            | n.a.            | n.a.
Inheritance | No key      | Abstract Factory     | Bridge          | n.a.
Interface   | Nominal key | n.a.                 | State           | Service Locator
Interface   | Ordinal key | n.a.                 | n.a.            | Dependency Injection
Interface   | No key      | n.a.                 | Decorator       | n.a.

Memento pattern

Figure 15: Memento pattern

The Memento pattern uses a state to decide whether the old object must be returned or not. A state is a nominal key, because its meaning depends on the application, not on the object in question. The relationship between the Memory class and the Original object is compositional, as the Memory class controls the content of the object. In a similar way the Caretaker and the Memento have a compositional relationship, because the Caretaker controls the life cycle of the Memento object. The Memory class and the Caretaker need an association to exchange state and objects. The focus of the Memento pattern is on the separated existence of the Original and Memento objects and therefore on the two unidirectional composition relationships, from which it can be concluded that this pattern uses tight coupling. The use of inheritance and interfacing is not an issue here. This is a behavioral pattern.
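The relationships described above can be sketched in Java. The class names (Editor, Snapshot, History) are illustrative stand-ins for the Original, Memento and Caretaker roles:

```java
import java.util.ArrayDeque;
import java.util.Deque;

class Editor {                          // the Original object
    private String text = "";
    public void type(String s) { text += s; }
    public String getText() { return text; }
    public Snapshot save() { return new Snapshot(text); }  // create a memento
    public void restore(Snapshot m) { text = m.state(); }  // roll back to it
}

final class Snapshot {                  // the Memento: immutable captured state
    private final String state;
    Snapshot(String state) { this.state = state; }
    String state() { return state; }
}

class History {                         // the Caretaker: owns the mementos
    private final Deque<Snapshot> stack = new ArrayDeque<>();
    public void push(Snapshot m) { stack.push(m); }
    public Snapshot pop() { return stack.pop(); }
}

public class MementoDemo {
    public static String run() {
        Editor editor = new Editor();
        History history = new History();
        editor.type("Hello");
        history.push(editor.save());
        editor.type(", world");
        editor.restore(history.pop()); // undo the second edit
        return editor.getText();
    }
    public static void main(String[] args) {
        System.out.println(run()); // prints Hello
    }
}
```

Note how the Editor composes the Snapshot and the History controls its life cycle, while the two never interpret each other's internals.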

Prototype pattern

Figure 16: Prototype pattern

In the tool used for creating the UML (ArgoUML) it is not possible to establish a relationship from a class to itself. Therefore a second class had to be created. The essence of the Prototype pattern is that the class has a unidirectional composition relationship with itself, as it can instantiate the same object twice. UML is restricted to the use of relationships and cannot be used to express actions. Therefore only one object can be shown and no relationship between the objects, as the relationship between these objects is at the level of the class. At the level of the class it signifies that the class has a unidirectional composition relationship to itself. Where in UML an Employee with a relationship to itself would use a different role like 'Manager', in this pattern the role is 'CopyControl'. As the class maintains a unidirectional composition relationship with itself, the Prototype pattern uses a tightly coupled processing. The pattern does not require any key. The Prototype pattern uses the interface Cloneable in Java: a call to the inherited method 'clone' will throw a CloneNotSupportedException at runtime when the class does not implement the Cloneable interface. Cloning in Java therefore uses a technical construct, like exception handling. As a result it is debatable whether the Prototype pattern is a design pattern in the Java language. Creating an implementation in Java other than the one provided by the Java specification can be called an anti-implementation. It might be a design pattern though in other languages which do not provide this type of technical construct. In languages in which it is a design pattern, it is a creational pattern.
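A minimal sketch of how the Cloneable construct is typically used in Java; the Document class is illustrative:

```java
// Prototype via Java's built-in Cloneable, illustrating why the pattern
// is largely a technical construct in this language.
class Document implements Cloneable {
    private String title;
    public Document(String title) { this.title = title; }
    public String getTitle() { return title; }
    public void setTitle(String t) { this.title = t; }

    @Override
    public Document clone() {
        try {
            return (Document) super.clone(); // field-by-field copy by the JVM
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e);     // cannot happen: we implement Cloneable
        }
    }
}

public class PrototypeDemo {
    public static void main(String[] args) {
        Document original = new Document("draft");
        Document copy = original.clone();    // new object, same state
        copy.setTitle("final");
        System.out.println(original.getTitle() + " / " + copy.getTitle());
        // prints draft / final
    }
}
```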

Singleton pattern

Not surprisingly, this pattern has only one name.

Figure 17: Singleton pattern

By far the most important relationship in this pattern is the control of the SingletonFactory over the instance of the Singleton class. The SingletonFactory should have total control over the life cycle of the object. The processing of this pattern is tightly coupled.


Usage of this pattern is restricted to those situations where continuous control over the state of the object is demanded. That is a very profound requirement, because it means that the object must be visible from everywhere while its management remains in control. The need for a single point of access from anywhere may cause performance problems and uncontrollable dependencies, as the services controlled by the Singleton are designed to be accessed from one point only. The desire to have only one instance of a class throughout the whole JVM is not in line with the basic assumptions of the language. In Java object management is performed by the JVM and the garbage collector, not by the architect or developer. It is an essential characteristic of the language. When the Singleton pattern is used to create a unique public access point I consider this an anti-pattern, as it is then a solution to a problem which should not be formulated in the first place. It is very difficult to create a solid solution for that type of Singleton pattern in Java. When the Singleton is used specifically for restricted situations, like the Mediator implementation in the Mediator pattern, then it can function well, because its scope is narrowed down to a well-defined, controllable situation in which the Singleton object is not public throughout the whole platform. In addition to the Singleton pattern a 'Multiton' pattern has been proposed; this type of pattern already exists, being the Object Pool pattern. This is a creational pattern. The Prototype and Singleton patterns have the same place within the classification system, which should imply that they have an equal effect as the result of their abstracted set of relationships and that their abstracted sets of relationships are actually equal. Indeed, they both have only one unidirectional composition relationship, which controls the effect of the pattern.
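For the restricted situations in which the pattern is appropriate, a common Java sketch is the initialization-on-demand holder idiom, which gives lazy, thread-safe instantiation without explicit synchronization. The Registry name is illustrative:

```java
// Singleton sketch: the nested Holder class is loaded (and INSTANCE created)
// only on the first call to getInstance(); class loading guarantees that
// instantiation happens exactly once.
public class Registry {
    private int count = 0;

    private Registry() {}               // no instantiation from outside

    private static class Holder {
        static final Registry INSTANCE = new Registry();
    }

    public static Registry getInstance() {
        return Holder.INSTANCE;         // always the same object
    }

    public int increment() { return ++count; } // increment itself is not synchronized
}
```

Every caller sees the same instance: `Registry.getInstance() == Registry.getInstance()` always holds, which is exactly the total life-cycle control described above.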

Factory pattern

Figure 18: Factory pattern


The Factory has a unidirectional composition relationship with BaseProduct, which means here that each Factory owns a Product but does not use it for its internal processing. The combination of the nominal key with the hierarchy is used to create a specific BaseProduct. As a result the combination of these relationships implies that the hierarchy on the left is copied in the hierarchy on the right. Therefore the processing of this pattern is an example of tight coupling. This is a creational pattern.
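A minimal sketch of this combination of a nominal key and a product hierarchy; all names are illustrative:

```java
// Factory sketch: a nominal key selects which concrete BaseProduct is created.
abstract class BaseProduct {
    public abstract String describe();
}

class Invoice extends BaseProduct {
    public String describe() { return "invoice"; }
}

class Receipt extends BaseProduct {
    public String describe() { return "receipt"; }
}

public class DocumentFactory {
    // "invoice" and "receipt" are nominal keys: their exact values are arbitrary,
    // only their mapping to a concrete class matters.
    public static BaseProduct create(String key) {
        switch (key) {
            case "invoice": return new Invoice();
            case "receipt": return new Receipt();
            default: throw new IllegalArgumentException("unknown key: " + key);
        }
    }
}
```

The client only sees BaseProduct; the factory owns the knowledge of which concrete class answers to which key.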

Flyweight pattern

Figure 19: Flyweight pattern

The Flyweight pattern in its pure form distinguishes two states on any object, namely an intrinsic and an extrinsic one. The intrinsic state is what characterizes the object. The extrinsic state is what uniquely describes the object among other objects with the same intrinsic properties. Intrinsic properties are already initialized in the Flyweight object when the object is returned from the FlyweightFactory. Extrinsic properties are added at creation time to the Flyweight object. These characteristics of the pattern are expressed in the combination of the ordinal key from the Client to the FlyweightFactory and the unidirectional composition relationships the Client has with each Flyweight implementation. The Flyweight uses an ordinal key, because the creation of a specific class is requested. This can be implemented using the Command pattern, which in combination with the Memento pattern would provide rollback functionality to this pattern. Crucial however is that a particular key is used for the instantiation of an object of a certain class and that the Client must know how to provide the extra features of the object to get the fully initialized object it needs. To fully initialize each Flyweight class the Client must know the ins and outs of each class, which implies that this pattern is an example of tight coupling, as is indicated by the unidirectional composition relationships from the Client to each Flyweight class. This is a structural pattern.
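A minimal sketch of the intrinsic/extrinsic split; Glyph and GlyphFactory are illustrative names:

```java
// Flyweight sketch: the intrinsic state (the glyph character) is shared via the
// factory's cache; the extrinsic state (the position) is supplied by the client
// on each use.
import java.util.HashMap;
import java.util.Map;

class Glyph {
    private final char symbol;               // intrinsic, shared state
    Glyph(char symbol) { this.symbol = symbol; }

    public String render(int position) {     // extrinsic state passed in
        return symbol + "@" + position;
    }
}

public class GlyphFactory {
    private static final Map<Character, Glyph> cache = new HashMap<>();

    public static Glyph get(char symbol) {
        // reuse the existing flyweight if one was already created for this key
        return cache.computeIfAbsent(symbol, Glyph::new);
    }
}
```

Requesting the same symbol twice yields the very same object; only the extrinsic position differs per call.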

Abstract Factory pattern

Figure 20: Abstract Factory pattern


The most important relationships in this pattern are the combination of inheritance for each factory and the unidirectional composition relationships. The unidirectional relationships imply that each component is owned by the Abstract Factory. The inheritance on the left side shows a strict relationship, which implies that for every implementation of the Abstract Factory a unique combination of components exists. Together with the characteristic that each component is owned by the Abstract Factory, this makes the processing of this pattern tightly coupled. This is a creational pattern.
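A minimal sketch in which each concrete factory yields its own unique, consistent combination of components; all names are illustrative:

```java
// Abstract Factory sketch: each concrete factory produces a matching family
// of components, so the client never mixes components from different families.
interface Button { String label(); }
interface Checkbox { String style(); }

interface WidgetFactory {               // the Abstract Factory
    Button createButton();
    Checkbox createCheckbox();
}

class DarkFactory implements WidgetFactory {
    public Button createButton() { return () -> "dark button"; }
    public Checkbox createCheckbox() { return () -> "dark checkbox"; }
}

class LightFactory implements WidgetFactory {
    public Button createButton() { return () -> "light button"; }
    public Checkbox createCheckbox() { return () -> "light checkbox"; }
}

public class AbstractFactoryDemo {
    // The client is written against the abstract factory only.
    public static String renderWith(WidgetFactory f) {
        return f.createButton().label() + " + " + f.createCheckbox().style();
    }
}
```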

Template pattern

Figure 21: Template pattern


This pattern is also known as the Provider pattern. The nominal key is used to distinguish between the different template implementations. Each implementation will have its unique concretization of the bidirectional aggregation relationship to the template subject. It is crucial for the Template pattern to have a Subject around which it functions. The Template does not have the ownership of the Subject; that lies with the Client. Therefore it is the hierarchy which is at the service of the aggregational relationship, and it is the latter which is crucial in defining the type of processing this pattern uses: strict coupling. Use of the Template pattern is only appropriate when there is a related set of methods around a type of object for which different implementations are required. Take for example the creation of a player profile for the card game of Hearts. A player profile is used to provide each computer player with a guideline how to play the game. Each profile should perform the same actions, like:

- evaluating a hand,
- deciding after evaluation which cards to exchange at the beginning of a round,
- deciding on a strategy, and
- deciding which card to select for play.

The way the hand is evaluated affects which cards will be exchanged, which strategy will be used for play, and which card will be selected for play. When the type of evaluation changes, the implementation of the other methods will change too. When the playing strategy changes, the way the cards are evaluated, which cards are exchanged and which card is selected for play will change as well. Using the Template pattern to create the different profiles is the way to go. This is a behavioral pattern.
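The Hearts example can be sketched as a template method; the profile and method names are my own simplifications (a real hand evaluation would of course look at the cards, not the string length).

```java
// The template method fixes the order of actions; each profile supplies
// its own concretization of the individual steps.
abstract class PlayerProfile {
    // Template method: the fixed sequence of actions for one round.
    final String playRound(String hand) {
        int score = evaluateHand(hand);
        String strategy = chooseStrategy(score);
        return selectCard(hand, strategy);
    }
    abstract int evaluateHand(String hand);
    abstract String chooseStrategy(int score);
    abstract String selectCard(String hand, String strategy);
}

class CautiousProfile extends PlayerProfile {
    int evaluateHand(String hand) { return hand.length(); }   // toy evaluation
    String chooseStrategy(int score) { return score > 5 ? "duck" : "dump"; }
    String selectCard(String hand, String strategy) {
        return strategy.equals("duck") ? "lowest card" : "highest card";
    }
}
```

Because the steps are coupled through the template, changing the evaluation automatically influences the exchange, strategy and card selection, as described above.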

Bridge pattern

Figure 22: Bridge pattern


The most important relationships in this pattern are the combination of inheritance for each Bridge and the bidirectional aggregation relationships. The bidirectional relationships imply that each component can influence any implementation of the Abstract Bridge. The inheritance on the left side shows a strict relationship, which implies that for every implementation of the Abstract Bridge a unique combination of components will exist. The methods add and remove are shown to express that any Bridge object can change its composition later. All in all, the processing of this pattern is strictly coupled. The pattern succeeds in decoupling an implementation from its abstraction, yet it remains useful to let each implementation have its unique influence on the class that uses its abstraction. Think about artifacts for a character in a game. Each character is created with a default set of artifacts. Later in the game each character can acquire different artifacts, each of them uniquely changing the capabilities of the character. Artifacts can be added to a Bridge object and later removed. This is a structural pattern.
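The game-character example above can be sketched as follows; GameCharacter, Knight and Artifact are illustrative names of my own, with add() and remove() letting the composition change at run time, as the diagram prescribes.

```java
import java.util.ArrayList;
import java.util.List;

// The implementation side of the bridge: each artifact uniquely
// changes the capabilities of the character that aggregates it.
interface Artifact { int power(); }

// The abstraction side: a character aggregates artifacts and can
// change that composition later via add() and remove().
abstract class GameCharacter {
    private final List<Artifact> artifacts = new ArrayList<>();
    void add(Artifact a) { artifacts.add(a); }
    void remove(Artifact a) { artifacts.remove(a); }
    int strength() {
        int total = baseStrength();
        for (Artifact a : artifacts) total += a.power();
        return total;
    }
    abstract int baseStrength();
}

class Knight extends GameCharacter {
    int baseStrength() { return 10; }
}
```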

State pattern

Figure 23: State pattern

This pattern is also known as the Strategy pattern or Policy pattern. The State pattern uses a nominal key to distinguish between the different implementations of the same interface; a nominal key is the only way to make this distinction. The most important relationships in this pattern are the realizations of the interface, which is by definition strict coupling. This is a behavioral pattern.
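A minimal sketch of the nominal-key selection, with hypothetical shipping strategies as the interchangeable realizations of one interface:

```java
import java.util.Map;

// Each realization of the interface is one strategy; a purely nominal
// key selects between them at run time.
interface ShippingStrategy { double cost(double weight); }

class Shipping {
    private static final Map<String, ShippingStrategy> STRATEGIES = Map.of(
            "ground", w -> 1.0 * w,
            "air",    w -> 2.5 * w);

    static double quote(String key, double weight) {
        return STRATEGIES.get(key).cost(weight);   // key carries no ordering
    }
}
```

The key "ground" could just as well have been "g1"; its value is only a name, which is what makes it nominal rather than ordinal.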

Decorator pattern

Figure 24: Decorator pattern


The Decorator pattern is used to provide several new implementations of an interface while preventing a Cartesian product of subclasses. Using a Decorator, a class can be enhanced with new functionality without altering the class itself. The input is the same interface as implemented by the decorated class; the output is an enhancement of the class using that same interface. The constraint is that decoration can only take place through one shared interface; basically it returns the same kind of object. The essence of this pattern is that all classes comply with the contract of the DecoratingInterface. The realizations referring to the DecoratingInterface are therefore the crucial relationships of this pattern, describing it as a pattern using strict coupling. This is a structural pattern.
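The shared-contract constraint can be sketched like this; Message and the two decorators are illustrative names, and each decorator both accepts and returns the same interface, avoiding the Cartesian product of subclasses mentioned above.

```java
// All classes comply with the same contract; each decorator wraps
// another Message and enhances it without altering the wrapped class.
interface Message { String text(); }

class PlainMessage implements Message {
    public String text() { return "hello"; }
}

class UpperCaseDecorator implements Message {
    private final Message inner;
    UpperCaseDecorator(Message inner) { this.inner = inner; }
    public String text() { return inner.text().toUpperCase(); }
}

class ExclaimDecorator implements Message {
    private final Message inner;
    ExclaimDecorator(Message inner) { this.inner = inner; }
    public String text() { return inner.text() + "!"; }
}
```

Two decorators and one base class replace what would otherwise be four subclasses (plain, upper, exclaimed, upper-and-exclaimed).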


Object Pool pattern

Figure 25: Object Pool pattern

The implementations are left out of the picture, because otherwise it might become too complex to grasp easily. Both yellow classes will need an extra implementation. The ObjectPool and ObjectManagement form one unit, as the ObjectPool holds references to the Objects created by the ObjectManagement. Pool- and MemoryManagement should work without knowledge of the exact implementation. Their implementation does not vary with the ObjectPool, and they can work with several pools at the same time. The behavior of each Object Pool is partly controlled by the configuration of the PoolManagement and the MemoryManagement, which is expressed by the absence of associations from each configuration to the ObjectPool. Because the behavior of the ObjectPool, which is the class communicating with the rest of the application platform, is partly under the control of classes to which it only has associations, the pattern is said to use loose coupling. The methods activate() and passivate() belong to the poolEntry, not specifically to the object itself. Whenever an object is borrowed from the pool, the poolEntry gets the status 'activated' to prevent MemoryManagement from cleaning the object from the pool, which would cause an unexpected status for the object. Directly after the object has been returned to the pool, or after the creation of the object, the poolEntry has the status 'passivated'. This is a creational pattern.
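The activate/passivate life cycle of the pool entry can be sketched as follows. This is a deliberately stripped-down pool (no memory management, no size limits); only PoolEntry and the activate()/passivate() methods come from the description above, the rest is my own scaffolding.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// A pool entry is 'activated' while borrowed and 'passivated' while idle,
// so a cleaner would never remove an object that is in use.
class PoolEntry<T> {
    final T object;
    private boolean active;
    PoolEntry(T object) { this.object = object; }   // created passivated
    void activate() { active = true; }
    void passivate() { active = false; }
    boolean isActive() { return active; }
}

class ObjectPool<T> {
    private final Deque<PoolEntry<T>> idle = new ArrayDeque<>();
    private final Supplier<T> factory;
    ObjectPool(Supplier<T> factory) { this.factory = factory; }

    PoolEntry<T> borrow() {
        PoolEntry<T> e = idle.isEmpty() ? new PoolEntry<>(factory.get()) : idle.pop();
        e.activate();                     // mark as in use
        return e;
    }
    void giveBack(PoolEntry<T> e) {
        e.passivate();                    // idle again, eligible for cleanup
        idle.push(e);
    }
}
```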


Service Locator

Figure 26: Service Locator

This pattern is also known as Dependency Lookup, Object Factory, Component Broker or Component Registry. Most relevant in this pattern is the association between the ServiceRegistry and the ServiceFactory using a nominal key. The Service Locator does indeed create the InitialContext, which in its turn controls the life cycle of the Cache, but that does not determine the pattern as a whole. Nor does the unidirectional composition relationship between the ServiceFactory and the Service; that is an essential part of any transformational processing. Only when there are no other real relationships, as in the three first mentioned patterns, does this determine the character of the processing. Unique to this pattern is that it has a ServiceRegistry to connect to a ServiceFactory. The registry between the InitialContext and the Factory gives this pattern the flexibility it provides. Because these relationships are associations, this pattern uses loosely coupled processing. This is a creational pattern.
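The registry-plus-cache core of the pattern can be sketched like this. It compresses the diagram considerably: the InitialContext and separate ServiceFactory classes are folded into one registry here, which is my simplification, not the book's structure.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

interface Service { String name(); }

// The registry couples a nominal key to a factory; callers know only the key.
// Created services are cached, so a lookup is a creation at most once.
class ServiceRegistry {
    private final Map<String, Supplier<Service>> factories = new HashMap<>();
    private final Map<String, Service> cache = new HashMap<>();

    void register(String key, Supplier<Service> factory) {
        factories.put(key, factory);
    }
    Service lookup(String key) {
        return cache.computeIfAbsent(key, k -> factories.get(k).get());
    }
}
```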

Dependency Injection

Figure 27: Dependency Injection


The key differences between the Service Locator pattern and Dependency Injection are that Dependency Injection uses an ordinal key instead of a second nominal key, and that there is a DependencyRegistry helping to resolve the dependencies needed for the creation of certain objects. Within the InjectionManager the dependencies are resolved prior to the creation of the requested object, by which means the pattern can be used to instantiate all different kinds of objects at the same time; there does not need to be a new factory for every service. The ordinal key used in Dependency Injection is the class that will be instantiated, which makes it a WYSIWYG pattern. The nominal key does not matter for the success of the processing; its value is irrelevant as long as it is unique, and it could be called a 'whatever' key. It is the same type of associations as in the Service Locator that determines the characteristic processing of this design pattern, and therefore Dependency Injection uses loose coupling as well. In Java an object can be constituted of several interfaces. That is not the idea behind the Service Locator or Dependency Injection patterns, which can be a restriction. This is a creational pattern.
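The class-as-ordinal-key idea can be sketched as a tiny injector. This is only loosely modelled on the diagram: the DependencyRegistry and InjectionManager are collapsed into one Injector class of my own, and dependency resolution is explicit rather than reflective.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// The ordinal key is the requested class itself ("what you see is what
// you get"); providers resolve their dependencies through the injector
// before the requested object is created.
class Injector {
    private final Map<Class<?>, Function<Injector, ?>> registry = new HashMap<>();

    <T> void bind(Class<T> type, Function<Injector, T> provider) {
        registry.put(type, provider);
    }
    <T> T resolve(Class<T> type) {
        return type.cast(registry.get(type).apply(this));
    }
}

class Repository { }

class UserService {
    final Repository repo;
    UserService(Repository repo) { this.repo = repo; }   // injected dependency
}
```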

4.3.2 Transportational patterns


Table 3: Transportational processing


                           Tight coupling   Strict coupling                     Loose coupling
Class        Nominal key   Flow             n.a.                                n.a.
             Ordinal key   n.a.             n.a.                                Chain of Responsibility
             No key        n.a.             Collection handling, Composite      Mediator
Inheritance  Nominal key   n.a.             n.a.                                n.a.
             Ordinal key   n.a.             n.a.                                Exception handling
             No key        n.a.             n.a.                                n.a.
Interface    Nominal key   n.a.             n.a.                                n.a.
             Ordinal key   n.a.             n.a.                                n.a.
             No key        n.a.             Symbolic Proxy, Publish/Subscribe   Facade

Flow pattern

Figure 28: Flow pattern


This is a newly acknowledged pattern, although it is probably often used. It is closely related to the Mediator pattern, as it inverts the flow of control compared to that pattern. The purpose of this pattern is to perform a set of related actions in a prescribed order. Based upon the return values coming from the different steps in the flow, the Flow class decides how to continue. How every action is conducted is left to each class, ensuring that the implementation of the action is loosely coupled from the call of the action. The classes called by the Flow are unaware of the existence of the Flow class. The most determining relationships are the bidirectional aggregation relationships with the Status class. Both the Flow and the class Step 1 have to 'understand', in an identical way, every status used in the processing and which status has to be returned in each situation. Although neither class 'owns' the Status class, their behavior requires an identical in-depth knowledge of it, which lets the processing of this pattern be described as tight coupling. The Flow sends status objects to the classes it is controlling. How the classes respond to these status objects is up to their implementation. The answers provided by the classes can influence the path set by the Flow, but they cannot create unexpected pathways: all possible paths are managed by the Flow. It is therefore considered transportational processing, because on a higher abstraction level each different status is the same to the Flow. For the processing it is irrelevant which status is returned by a class; the path might change, but the processing remains the same. The goal of the pattern is to perform a set of related actions, and the different statuses of the Status class serve as the medium carrying the information on how to continue the transportation of the action. The FilterChain is an example of the Flow pattern. It is a special case of the Flow pattern, as there is only one route to be followed, which is the reason it does not require a Status class; but it shares with the Flow pattern the characteristic that subsequent classes are called to perform their task in the line of duty. Classes used by the Flow are unrelated to each other. There are no restrictions on them, which makes the pattern useful without applying inheritance or interfacing. The classes used for each step in the navigation are unaware that they are part of the processing. That makes it different from the Observer pattern, another pattern dealing with status changes in objects. In the Observer pattern the relationship between the Subject and its Observer classes is parallel: there is one central status of the Subject, and that is communicated to each Observer class. In the Flow pattern the relationships between the Flow and the Step classes are serial: there is no central status to be updated to each Step class. Whether a Step class is called upon by the Flow is decided at run time and is therefore optional, which differs from the Observer pattern, in which all Observers should be updated. This is a behavioral pattern.
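A status-driven flow can be sketched as follows. The Status values and the two steps are hypothetical; what the sketch shows is that the Flow owns all possible paths, the steps only return statuses and never see the Flow.

```java
// The Flow decides at run time, based on returned Status values, which
// step to call next; the steps know nothing about the Flow itself.
enum Status { OK, RETRY, FAIL }

interface Step { Status run(StringBuilder log); }

class Flow {
    String execute(Step validate, Step save, StringBuilder log) {
        Status s = validate.run(log);
        if (s == Status.FAIL) return "aborted";     // path chosen by the Flow
        if (s == Status.RETRY) validate.run(log);   // optional second call
        save.run(log);
        return "done";
    }
}
```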

Collection handling pattern

All collection handling patterns have to do with basic actions on collections. These can include iteration or methods like 'getFirst()' and 'getLast()' or 'containsKey()' and 'getKey()'. These standard actions can be used for different types of collections, but must be implemented with the proper type of collection in mind. They therefore all need a compositional relationship from the collection to the implementation of the collection handler, as they should have the interface or base class of the extra functionality as a member. But each collection does not have to know the implementing class exactly. Therefore all these patterns are strictly coupled to the type of collection. They do not need external information and are used to find a way through a collection. In the majority of mature programming languages these functionalities are provided by the language and have evolved into technical constructs. All these patterns are behavioral ones.

Composite pattern

Figure 29: Composite pattern

The essence of this pattern is that it creates the possibility to traverse through different layers of objects. The Flyweight pattern relies on this possibility, just as the Interpreter pattern does. Its best known use, however, does not rely on inheritance but on interfacing, to traverse through different layers of classes, for example from a journal to an edition to an article. But it could also be used to traverse through different objects of the same class. It basically needs methods to add a child, remove a child, get one and perform some basic operation. It does not need any key and is used for traversing different layers, which implies it is a transportational processing. In any type of implementation every class must comply with the same contract, whether it is a leaf or a composite. Whether the class under the attention of the pointer is a composite or a leaf is unknown in advance. Having the realization of the interface as the most important relationship makes it strict coupling. This is a structural pattern.
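The journal/edition/article traversal above can be sketched like this; Node, Section and Article are illustrative names, and the caller never knows whether it is pointing at a leaf or a composite.

```java
import java.util.ArrayList;
import java.util.List;

// Leaf and composite share one contract; the client traverses the
// layers without knowing in advance which of the two it holds.
interface Node { int count(); }

class Article implements Node {            // leaf
    public int count() { return 1; }
}

class Section implements Node {            // composite: journal, edition, ...
    private final List<Node> children = new ArrayList<>();
    Section add(Node child) { children.add(child); return this; }
    public int count() {
        int total = 0;
        for (Node n : children) total += n.count();   // recurse through layers
        return total;
    }
}
```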

Symbolic Proxy pattern

Figure 30: Symbolic Proxy pattern


This pattern is also known as the Remote Proxy pattern, but that name is too close to the functional implementation to describe the pattern itself. It is not a new design pattern, as it is already often used in practice. It comes close to the original Proxy pattern, but has an extra restriction with respect to the original: where in the original pattern the implementation of the proxy could differ from the original interface, in this pattern there is only one implementation, which is the implementation of the proxied object. The implementation class is always called through the symbolic interface, never directly. Together they are one business object; they actually are one, because only one of these objects has an implementation. The object as a whole only exists at run time. A well known example is the stub of the EJB as the Symbolic Interface and the EJB bean as the Implementation. The object as a whole does not change during execution of this design; only the call to the Symbolic Interface is transferred to the Implementation. It is not required that for every call of the Symbolic Interface the same object of the Implementation class is used. This design therefore describes transportational processing. The constraint is that the contract of the Symbolic Interface is shared in its entirety by the Implementation, and that only the Implementation class has an implementation. The essential relationships are the inheritance of the SymbolicAction interface from the Action interface and the realization of the RealAction class. This is strict coupling. This is a structural pattern.

Publish/Subscribe pattern

Figure 31: Publish/Subscribe pattern


The Publish/Subscribe pattern is a new pattern closely related to Event Driven Architecture. The distinction between Publish/Subscribe and traditional design patterns is that input and output have an asynchronous relationship. Events will be of a certain class, ideally having a specific interface, as it is important for the exchange between publisher and subscriber that every event is interpretable using a specific contract. It can be compared to the exchange of XML files. Every XML file should use an XML schema to ensure that the file is valid. Not providing an XML schema does not mean that the XML file is invalid, but the opposite does not hold either: it does not imply that the XML file is valid. In the long run it is therefore more robust to use an XML schema for an XML file and an interface for an event. For the key used to publish and subscribe to the event it suffices that it is technical, as the name of the interface class works like the URI of the XML schema. The most important relationship is therefore the realization of the interface. The exchange can only take place as long as both publisher and subscriber apply the same interface; therefore the processing of this pattern is called strictly coupled. This pattern is different from the Observer pattern, the main cause being that Publish/Subscribe is asynchronous. This has the effect that publishers cannot make any prediction about their subscribers and cannot control them. The latter is crucial for the Observer pattern to work properly, as it updates its observers. An analogy to show this point of view is the process of reporting. In a business process data is stored. For reports this data is a set of new events. The original storage process has neither the awareness nor the responsibility of a publisher. But the event is used in the reports, and the interface of the data cannot be changed without an effect on the content of the report. Therefore there is a dependency between these processes, and where there is a functional dependency there is a design pattern. The Observer pattern would not be the correct pattern to use here, as the Subject is not aware that it is acting as a Subject. This is a structural pattern.

Chain of Responsibility

Figure 32: Chain of Responsibility

For an interesting article about this pattern, please see the article by Michael Xinsheng Huang. He presents a good alternative implementation. In his alternative he makes a distinction between the wave and particle functionality of the pattern, and between two versions of implementation: one that walks down the whole line anyway, like the FilterChain in the servlet API, and one in which the chain members are visited until one answers affirmatively. I disagree with him, however, that the FilterChain is a version of the Chain of Responsibility pattern; I think the FilterChain is actually an implementation of the Flow pattern. Therefore I would restrict the pattern to the implementation of the Chain of Responsibility as presented by the GoF. The most crucial relationship for this pattern is the bidirectional aggregation relationship between the ChainCollection and the BaseChain. It implies that a chain can consist of zero or more chain links. The result of the test depends on the chain links that have registered themselves. The output of the method loopChain is therefore not predictable, and the processing is considered to be loosely coupled. If the cardinality had been 1...*, the processing would have been strictly coupled. Every chain link will have a specific implementation with a unique answer to the test. Every test has its unique meaning with respect to other tests, making the test an ordinal key. Consider the selection of a JDBC driver, which is done using a test. The input key is the test, to which every Driver will apply with exactly one value. The DriverManager tests all listed Drivers to find out which one applies; that Driver is returned to connect to the database. If it turns out to be an old version of the Driver, incompatible with the current requirements, that is a problem for the application but not for the working of the pattern. This is a behavioral pattern.
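The zero-or-more registration and the unpredictable outcome of loopChain can be sketched as follows. ChainCollection and loopChain are taken from the diagram's vocabulary; ChainLink and the JDBC-style URL test are my own illustration of the ordinal key.

```java
import java.util.ArrayList;
import java.util.List;

// Each link answers the test with exactly one value; the first
// affirmative answer wins, in the manner of JDBC driver selection.
interface ChainLink {
    boolean accepts(String url);   // the test, an ordinal key
    String name();
}

class ChainCollection {
    private final List<ChainLink> links = new ArrayList<>();   // 0..* links

    void register(ChainLink link) { links.add(link); }

    String loopChain(String url) {
        for (ChainLink link : links)
            if (link.accepts(url)) return link.name();
        return "no handler";       // the chain may even be empty
    }
}
```

Because registration is 0..*, the result of loopChain depends entirely on which links happen to have registered, which is the loose coupling described above.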

Mediator pattern

Figure 33: Mediator pattern


The Mediator is used as a gateway for communication for a group of colleagues. Every colleague has to register itself with the Mediator, after which it can send messages to and receive them from the Mediator. This relieves colleagues of the responsibility to establish connections to any or all other colleagues. The Mediator itself does not have to know which colleagues it has to send to; that is arranged in the MediatorManager, which provides this information to any implementation of the Mediator on request. The Mediator itself does not process the content of what it delivers to each colleague, which makes it transportational processing. It does not need a trigger to perform its transportation, nor does it rely specifically on inheritance or interfacing. The essential relationship here is the association between the colleague and the MediatorManager. That makes the processing of this pattern behave loosely coupled. This is a behavioral pattern.
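The register-then-relay behavior can be sketched like this. It omits the MediatorManager of the diagram and folds the routing into the Mediator itself, which is my simplification; the point shown is that colleagues never hold references to each other, and the Mediator forwards content untouched.

```java
import java.util.ArrayList;
import java.util.List;

// Colleagues register with the mediator and talk only to the mediator;
// it forwards messages without processing their content.
class Colleague {
    final String id;
    final List<String> inbox = new ArrayList<>();
    Colleague(String id) { this.id = id; }
}

class Mediator {
    private final List<Colleague> colleagues = new ArrayList<>();

    void register(Colleague c) { colleagues.add(c); }

    void send(Colleague from, String message) {
        for (Colleague c : colleagues)
            if (c != from) c.inbox.add(from.id + ": " + message);
    }
}
```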

Exception handling

Exception handling is very language specific and is therefore not really a design pattern, but a technical construct. Exception handling in Java and many OO languages cannot be said to support the open/closed principle, as there is only one decent way to implement it. Nor can it be said to meet functional requirements, as it is technically prescribed how to handle exceptions. Design patterns appear within the boundaries of the language; they are not part of it. Essential for a design pattern is that the developer has the choice to avoid its use altogether. That is not possible with exception handling, which is always an elementary characteristic of the programming language. Therefore I did not create a UML diagram for it; exception handling is too language specific. Sometimes the exception caught is prescribed by the input. However, one can decide to use a more general exception to catch the prescribed exception, and one can throw yet another exception than the one prescribed to be caught. Last but not least, exceptions can arise at run time. The output is loosely coupled from the input, as the input can make no predictions about the output and the output can be generated without any input coming from the lines of code in the try-block. An essential characteristic of exceptions is that they belong to the hierarchy of exceptions; therefore exception handling is placed under hierarchy. To resolve the exception thrown, the type of the class is used, which makes it use an ordinal key. It is a transportational processing for two reasons. The first is that it is the expression that normal processing has stopped. Every processing is based upon transportation; when normal processing can no longer take place, the only type of processing still available is transportational processing. The other reason is that transformational and translational processing serve a functional goal. Exception handling does not serve a functional but a technical goal and can therefore not be defined as one of those two types of processing. If it were a design pattern, it would be a creational one.


Facade pattern

Figure 34: Facade pattern

Although the Facade is implemented using an interface, this relationship is not crucial to its processing. There is no added value in implementing the Facade using inheritance, because the Facade only couples classes. It does not do anything for itself; it just transports requests and responses. For its processing this pattern relies completely on associations, which is the reason it is loosely coupled. This pattern does not put any restrictions on the data to be processed. To reduce the number of methods needed, the objects to be exchanged might be constructed as generally as possible, but it is the choice of the architect to fulfill this requirement; it is not prescribed by the pattern. It is not necessary to implement the Facade pattern using the Singleton pattern, as there is no state change during the processing of the Facade. In Java it is therefore preferable to create a new Facade object within the method that calls it, as cleaning the Facade object up will cost the garbage collector hardly any effort. This is a structural pattern.
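The transport-only role of the facade can be sketched as follows; the subsystem classes Inventory and Billing are illustrative assumptions. The facade holds no state of its own, which is why a fresh instance per call is cheap, as argued above.

```java
// Two subsystem classes the facade merely couples.
class Inventory {
    boolean inStock(String item) { return !item.isEmpty(); }
}

class Billing {
    String charge(String item) { return "charged for " + item; }
}

// The facade just transports requests and responses between them.
class OrderFacade {
    private final Inventory inventory = new Inventory();
    private final Billing billing = new Billing();

    String placeOrder(String item) {
        if (!inventory.inStock(item)) return "out of stock";
        return billing.charge(item);
    }
}
```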


4.3.3 Translational patterns


Table 4: Translational processing

                           Tight coupling   Strict coupling   Loose coupling
Class        Nominal key   n.a.             n.a.              n.a.
             Ordinal key   n.a.             n.a.              n.a.
             No key        n.a.             n.a.              n.a.
Inheritance  Nominal key   Observer         Builder           n.a.
             Ordinal key   n.a.             n.a.              n.a.
             No key        Interpreter      n.a.              n.a.
Interface    Nominal key   n.a.             Proxy             n.a.
             Ordinal key   n.a.             n.a.              n.a.
             No key        Visitor          Adapter           Command

Observer pattern


Figure 35: Observer pattern

This pattern is also known as the MVC and publisher/subscriber pattern. This UML description is inspired by the code example provided. In the example the different functions within the pattern are performed by different classes, in accordance with the Single Responsibility Principle. In the code example the Subject side works a little differently than presented in the UML, but that is because of the composite structure used in the example to control the subjects. In effect this UML is equal to the Booch model presented by the GoF, but as stated before the different functions are encapsulated in different classes. In the current UML the Subject implementation can focus on being the Subject, while the communication between subjects and observers is handled by the combination of the communication manager and update profiles. The UpdateProfile is not a pure necessity for the pattern, but it creates the opportunity to group statuses and thereby update only those observers that react on a particular status change. That way network traffic can be minimized. The minimization of network traffic is not part of the definition of the Observer pattern, but it surely is one of the main side objectives. The Observer must have knowledge about the concrete Subject at hand, because the Observer must update itself based upon the Subject. Upon notification from the CommunicationManager any Observer receives the new Subject instance, and based on that information it updates itself accordingly. The Observer interface does not have to know for which type of Subject it needs to update, but each concrete Observer class must know to which Subject it is linked. The most crucial relationship in this pattern is therefore the update from the ConcreteObserver using the ConcreteSubject. This relationship is on the level of classes and is a unidirectional composition relationship. Although the concrete observer does not need the concrete subject as a member, as can be seen in the code, it can control life cycle events of the Subject and it can display it, which makes this processing tightly coupled. Furthermore, the Observer uses common coupling, because all observers refer to the same Subject and observers are grouped in parallel. That makes it inevitable that somewhere data must be exchanged and stored, which indicates that on the Observer side of the pattern the use of inheritance is more logical than solely interfacing. This is a behavioral pattern.
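The parallel update of all observers from one central subject can be sketched as follows. This is the classic GoF shape rather than the book's extended UML: the CommunicationManager and UpdateProfile classes described above are deliberately omitted, and the Subject notifies its observers directly.

```java
import java.util.ArrayList;
import java.util.List;

// The subject pushes itself to all registered observers in parallel;
// each concrete observer knows how to update itself from that subject.
interface Observer { void update(Subject subject); }

class Subject {
    private final List<Observer> observers = new ArrayList<>();
    private int state;

    void attach(Observer o) { observers.add(o); }
    int getState() { return state; }
    void setState(int state) {
        this.state = state;
        for (Observer o : observers) o.update(this);   // notify every observer
    }
}

class Display implements Observer {
    int shown = -1;
    public void update(Subject subject) { shown = subject.getState(); }
}
```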

Interpreter pattern

Figure 36: Interpreter pattern

The input object is left untouched, but used to produce tightly related output. One might think of using the Interpreter pattern for the construction of a search request or the calculation on a calculator. The constraint of the Interpreter pattern lies in the presence of a well defined, prescribed set of input to translate. The Interpreter pattern uses inheritance to accomplish its task. No key is needed, only an expression to parse. The output of the processing depends on the processing of the expression, as depicted by the bidirectional composition relationship, which makes the Interpreter pattern behave tightly coupled. The Specifications pattern, proposed by Eric Evans and Martin Fowler, is a very useful example of this pattern. In their original document they have not provided an implementation, which is the reason I provide one, clearly showing that their pattern is an example of the Interpreter pattern. It is an extraordinarily good idea, and I would suggest they get a 'Golden Wheel' award for presenting a very practical idea. Whenever someone has to create a solution for this kind of problem, I would suggest using their proposition. In 2002 a similar example of the Interpreter pattern was provided by Sun. Compared to my implementation it is more straightforward, but it lacks extensibility, as there are no brackets present and there is no algorithm to construct more complex expressions. Compared to the implementation provided by Sun, the contribution of Eric Evans and Martin Fowler in my opinion is the thoroughness of their description of how the use of specifications can add to the logic of applications. The code provided is quite extensive and effectively a dynamic expression builder. In their article Eric Evans and Martin Fowler express their preference for a composite implementation, but I chose a parameterized implementation, as this lessens the number of classes needed and changes should only have an effect on the implementation of the Interpreter class, nowhere else. For readability purposes, especially the arrangement of the operations, I suggest you open it in an editor like Eclipse or Netbeans. This is a behavioral pattern.
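The core idea of the Specifications pattern can be condensed into a few lines; this is my own minimal rendering, not the author's extensive implementation. Each specification interprets one candidate, and and()/or() build larger expressions from smaller ones without any key, only an expression tree.

```java
// A specification interprets one candidate; combinators compose
// larger expressions from smaller ones, Interpreter-style.
interface Spec<T> {
    boolean satisfiedBy(T candidate);

    default Spec<T> and(Spec<T> other) {
        return c -> satisfiedBy(c) && other.satisfiedBy(c);
    }
    default Spec<T> or(Spec<T> other) {
        return c -> satisfiedBy(c) || other.satisfiedBy(c);
    }
}
```

A composed specification leaves its input untouched and produces tightly related output (a verdict about that input), matching the description of translational processing above.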

Visitor pattern

Figure 37: Visitor pattern

The Base class and the Agent interface still need implementations. This pattern is also known as the Extension Object pattern. The Visitor pattern is defined by the GoF as 'an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates'. Traditionally, the Visitor pattern is implemented using an accept() method in the classes on which it operates. Reading the definition of the Visitor pattern shows that using an accept() method in the classes on which it operates is in contradiction with the definition: according to the definition, no change in the subject classes ought to be made. The definition sounds logical and in line with the Open/Closed principle, whereas an implementation using an accept() method is not. Using an accept() method can have several consequences with regard to the Open/Closed principle. These consequences are:

- whenever the hierarchy of the base class changes, a new accept() method has to be added;
- whenever the hierarchy of the base class changes, the Visitor classes must be adjusted as well;
- the use is restricted to a hierarchy of classes to which one has full access, as all derived classes must be accessible to intercept an accept() method; and
- adding the accept() method changes the classes in the hierarchy. Consider the situation in which the behaviour of a class should be extended but the accept() method has been forgotten: the implementation will not work.

Reflection is proposed as an alternative to the use of the accept() method. Although it successfully decouples the visitor classes from the visitee classes, reflection is a technical solution. A new set of abstracted relationships, which decouples the visitor classes from the visitee classes by abstraction, is preferable. Bertrand Meyer and Karine Arnout have written an article in which they discuss this subject thoroughly. After a structured analysis they come up with a new pattern as a solution. To see how it works, please take a look at the code listings in the resource 'visitor.jar'. The dependency is now put under the control of the visitor object, and all of the classes in the hierarchy remain untouched. The control over which classes are visited lies with the visitor classes. Any random collection of classes can be visited without any preparation on the visitee classes. Extensibility is assured: when the base hierarchy is changed, the visitor classes do not have to be changed. The situations in which modification is required are minimized. Finally, because with this type of implementation all the action is performed by the visitor classes, the code has become easier to read and maintain. At the core of the Visitor pattern is the wish to extend functional requirements. In his article about the Visitor pattern Robert C. Martin describes this line of implementation as the Extension Object pattern. This pattern is the same as the one pointed out by Bertrand Meyer and Karine Arnout for implementing the Visitor pattern. The most determining relationship is the unidirectional composition relationship between the Visitor implementation and the Agent interface. Whenever the implementation of the pattern has to be updated, it will certainly have its effect on the realization of this relationship. Therefore the Visitor pattern uses tightly coupled processing. This is a behavioral pattern.

Builder pattern

Figure 38: Builder pattern


Every yellow class will need at least one implementation, but for the sake of clarity are these classes left out of the diagram. From one object a set of possible new objects is created. Where the Interpreter pattern in- and output are a reversible set, the translation from the input in the Builder pattern delivers an irreversible output. Lets say an object of type A is translated by the Builder pattern to an object of type B. If that object of type B is then translated back to an object of type A it will not be an object of type A with the same characteristics of the original object. During each translation some information can get lost. However, differences between output is predictable by the combination of input and the translation used. Often the case of a pizza delivery or constructing a search request is presented as an example of the Builder pattern. However the GoF are clearly showing this pattern having a two step process of first reading/parsing the input object and then building an output object. The pizza delivery example lacks the parsing phase and could designed using a State pattern. The search request has only one version of output and could therefore be implemented using the Interpreter pattern. An elaborate example is provided in the resource 'builder.jar'. An example could be the change of marital status. When someone is marrying for the first time the status is changed from single to married. Together with the status change a lot of rules applying to the person will change. The change can never be undone completely. When afterwards the person has a divorce there will apply some rules to the person, which are related to the new status. The status of divorced is not the same as the status single. The crucial relationships of this pattern are the two unidirectional aggregation relationships from respectively AbstractReader and AbstractConverter to AbstractTransfer. 
Every implementation of the AbstractTransfer is a shared object that each Reader and Converter for a specific type of object can handle. Every reader and converter has its own specific parsing relating to the Transfer. Although compatible Readers and Converters must both understand the same type of Transfer, their interpretations are independent of each other. Every implementation of the AbstractTransfer is a standardized representation of the type of object, which serves as a contract between the readers and converters for that type of object. The relationships of readers and converters are therefore independent of each other, and the processing uses strict coupling.

This is a creational pattern.
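The two-step Reader/Converter/Transfer structure described above can be sketched as follows. All class names and the marital-status details are illustrative assumptions for this sketch, not taken from 'builder.jar': a Reader parses the input object into a shared Transfer, after which an independent Converter builds an irreversible output from it.

```java
// Hypothetical sketch of the Builder variant described in the text.
interface Transfer { String status(); }

// Standardized representation shared by Readers and Converters.
class MaritalTransfer implements Transfer {
    private final String status;
    MaritalTransfer(String status) { this.status = status; }
    public String status() { return status; }
}

class Person {
    final String name; final String maritalStatus;
    Person(String name, String maritalStatus) { this.name = name; this.maritalStatus = maritalStatus; }
}

// Step 1: the Reader parses the input object into the shared Transfer.
class PersonReader {
    Transfer read(Person p) { return new MaritalTransfer(p.maritalStatus); }
}

// Step 2: the Converter builds a new object from the Transfer.
// The translation is irreversible: the original status is lost.
class MarriageConverter {
    Person convert(Person p, Transfer t) {
        if (!"single".equals(t.status()))
            throw new IllegalStateException("can only marry a single person");
        return new Person(p.name, "married");
    }
}

public class BuilderSketch {
    public static void main(String[] args) {
        Person alice = new Person("Alice", "single");
        Transfer t = new PersonReader().read(alice);
        Person married = new MarriageConverter().convert(alice, t);
        System.out.println(married.maritalStatus); // prints "married"
    }
}
```

Note that the Converter only depends on the Transfer type, not on the Reader, which matches the independence of the two aggregation relationships described above.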

Proxy pattern

Figure 39: Proxy pattern

The definition of this pattern is that the proxy has a reference to the proxied object and must be able to initialize it. The ProxyClass can have at least two kinds of behavior, namely presenting information about the proxied object and instantiating it. To express this dual characteristic, the ProxyClass implements two different interfaces: the Proxy interface to describe the behavior of the ProxyClass itself, and the Metadata interface to know how to instantiate the Proxied object and to display essential information about it. The Proxy interface it shares with other types of proxy objects; the Metadata interface it shares with the Proxied interface. It does not share the IProxied interface, in which the bulk of the concrete information of the ProxiedClass is made available.

In the original definition the constraint is put forward that the proxy object should have the same interface as the proxied object. In my opinion that would be too much. The proxy must share some metadata to provide information and to be able to open the document, but it should be a lightweight object with as few methods and data as possible. Because of that it only needs to share a Metadata interface with the Proxied interface, while it shares with other proxies the common behavior of opening the proxied object and showing its properties. Creating proxies this way uncouples, compared to the original Proxy pattern, the proxy further from the proxied object and couples proxy objects meaningfully with one another.

As a result of its dual characteristic, the ProxyClass is not merely a transformational pattern, but a translational pattern. It can open the object of the ProxiedClass, but it is not mandatory that it happens. The fact that the pattern can have a different output makes this a translational pattern. The most crucial relationships of this pattern are the relationships to the interfaces implemented by the ProxyClass and the IProxied interface, and above all the two relationships with the Metadata interface. The pattern will not work if the ProxyClass and the IProxied interface do not both share this interface. As interfacing and inheritance are both examples of strict coupling, this pattern is based on strict coupling as well. Every instance of the ProxyClass has a unique relation to a specific instance of an object of the Proxied interface, which is depicted by the unidirectional aggregation relationship the ProxyClass has with the ProxiedClass. This is a structural pattern.
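A minimal sketch of the variant described above, with a shared Metadata interface instead of the full proxied contract. The class names (Document, DocumentProxy) and the lazy-loading details are illustrative assumptions:

```java
// The proxy shares only Metadata with the proxied class, not IProxied.
interface Metadata { String title(); }
interface Proxy { Object open(); }
interface IProxied { String content(); }

// The heavyweight proxied object, carrying the concrete information.
class Document implements IProxied, Metadata {
    private final String title;
    Document(String title) { this.title = title; }
    public String title() { return title; }
    public String content() { return "full text of " + title; }
}

// Lightweight proxy: shows metadata cheaply, instantiates on demand.
class DocumentProxy implements Proxy, Metadata {
    private final String title;
    private Document target; // unique relation to one proxied instance
    DocumentProxy(String title) { this.title = title; }
    public String title() { return title; } // no Document needed yet
    public Object open() {                  // instantiation, the second duty
        if (target == null) target = new Document(title);
        return target;
    }
}

public class ProxySketch {
    public static void main(String[] args) {
        DocumentProxy proxy = new DocumentProxy("report.pdf");
        System.out.println(proxy.title());      // metadata only, no target created
        Document doc = (Document) proxy.open(); // lazy instantiation
        System.out.println(doc.content());
    }
}
```

Opening the proxied object stays optional, which is what makes this a translational rather than a transformational processing in the terms used above.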

Adapter pattern

Figure 40: Adapter pattern

This pattern is also known as the Wrapper pattern. The function of the Adapter pattern is to translate the contract of an existing class to new demands without changing its original behaviour. This change is permanent. The benefit of doing so is that the original contract of the existing class can still be used in other parts. It should be used cautiously, however, as the Adapter pattern couples two unrelated contracts permanently to each other. The Adapter is based on interfacing and uses no key. The essential relationship is obviously the unidirectional aggregation relationship between the Adapter and the Adaptee. That makes this pattern use strictly coupled processing. This is a structural pattern.
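A short sketch of the wrapper idea just described. The sensor example and all names are illustrative assumptions; the point is the unidirectional aggregation from Adapter to Adaptee and the unchanged original behaviour:

```java
// Contract the client code expects.
interface CelsiusSensor { double celsius(); }

// Existing class with an incompatible contract; it stays untouched.
class FahrenheitThermometer {
    double fahrenheit() { return 212.0; }
}

// The Adapter translates the old contract to the new one.
class ThermometerAdapter implements CelsiusSensor {
    private final FahrenheitThermometer adaptee; // unidirectional aggregation
    ThermometerAdapter(FahrenheitThermometer adaptee) { this.adaptee = adaptee; }
    public double celsius() {
        // Translation only; the adaptee's behaviour is not changed.
        return (adaptee.fahrenheit() - 32.0) * 5.0 / 9.0;
    }
}

public class AdapterSketch {
    public static void main(String[] args) {
        CelsiusSensor sensor = new ThermometerAdapter(new FahrenheitThermometer());
        System.out.println(sensor.celsius()); // prints "100.0"
    }
}
```

Other parts of the system can keep using FahrenheitThermometer directly, which is the benefit mentioned above; the permanent coupling of the two contracts sits inside the adapter alone.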

Command pattern

Figure 41: Command pattern


The Command pattern is also known as the Action or Transaction pattern. Its purpose is to separate the request of the sender from the response of the receiver. The request of the sender is reverted to the creation of a command object, which knows how to perform the command. This pattern does not use any type of key; it was created to prevent using keys. The key relationships are the unidirectional associations from the Invoker to the Command interface and from the ConcreteCommand to the Receiver. That implies that this pattern uses loose coupling in its processing. The Command pattern is used to execute an action concerning another object than itself. There is therefore a translation from one command object to another, which makes this a translational processing. Central to the idea of the Command pattern is that any object should adhere to the contract of having an execute method. Although the pattern can be implemented using inheritance, which might be useful when undos have to be performed, it is more convenient to say that this pattern uses interfacing. This is a behavioral pattern.
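The loose coupling described above can be sketched as follows. The Light/SwitchOnCommand names are illustrative assumptions; what matters is that the Invoker knows only the Command interface, never the Receiver:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// The shared contract: every command adheres to execute().
interface Command { void execute(); }

// The Receiver, which actually performs the work.
class Light {
    boolean on;
    void switchOn() { on = true; }
}

// ConcreteCommand -> Receiver: the only place that knows the Light.
class SwitchOnCommand implements Command {
    private final Light light;
    SwitchOnCommand(Light light) { this.light = light; }
    public void execute() { light.switchOn(); }
}

// The Invoker depends only on the Command interface (loose coupling);
// keeping a history is what makes undo support possible later.
class Invoker {
    private final Deque<Command> history = new ArrayDeque<>();
    void invoke(Command c) { c.execute(); history.push(c); }
}

public class CommandSketch {
    public static void main(String[] args) {
        Light light = new Light();
        new Invoker().invoke(new SwitchOnCommand(light));
        System.out.println(light.on); // prints "true"
    }
}
```

The sender creates the command object and hands it to the Invoker, which separates the request from the Receiver's response exactly as stated above.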

4.4 Conclusions about the classification system


The first conclusion is that it is another confirmation of what was already known: the quality of the book of the GoF. With a completely different setup and a completely different classification system, the outcome is remarkably the same. Of course, their contribution was never questioned. The completeness of their overview of design patterns has proven its aptness in numerous projects all over the world. Providing a classification system which ends up with the same result as described in the book supports the classification system, not the usefulness of the design patterns; that has been beyond dispute for a long time.

The difference between the genotypical classification system and the traditional one is that the distinction into Creational, Behavioral and Structural patterns is organized by the functional purposes the patterns serve, whereas the genotypical classification system is organized around the intrinsic characteristics of the design patterns themselves. They are two concurrent classifications, like two views in an implementation of the Observer pattern. As the sets of abstracted relationships are an answer to the functional contract, both classification systems mirror each other. With the help of this classification system one has another way to select design patterns: a pattern can now also be found using the relationships between classes, and with the UML the auxiliary classes can be identified.

I started with the presumption that every genotype should have its own place within the classification. It turns out that this is not quite a legitimate assertion and must be weakened, because some places in the classification system are shared by several options. The Composite pattern shares its place with the Collection handling patterns, and the Symbolic Proxy with the Publish/Subscribe pattern. The original assertion therefore does not seem to be valid. The reason is that the effect of the abstracted set of relationships does not depend on all relationships, but mostly on only one relationship, which creates the possibility that more patterns share this characteristic.

The benefit of this type of classification is that one can now find out which pattern to use based upon the type of relationship that exists between the business objects. It is a different way of finding out which design pattern to choose. Using the method provided by the GoF, design patterns are chosen on functional demands. Using this classification system, an extra method is available to choose between design patterns based on the relationships between the objects needed. It does not replace the method provided by the GoF, as that method has proven its aptness throughout the years. It gives an extra possibility and it puts the design patterns in relation to each other. For instance, it now becomes clear why the Factory and Abstract Factory pattern are so apt for creating different Template and Bridge objects respectively: the creational patterns precede their creations, having a similar structure but using tight coupling for processing instead of strict coupling. They are actually paired. Another, less visible example is the placement of the Memento pattern in relation to the Flow pattern: both use an object for memory, and both appear at the same place in their overview of processings.
Both the Visitor and the Adapter pattern are often used to extend the behavior of a class, and they appear next to each other. The Visitor pattern, however, needs more information to perform its job, as it also has to control the combination of the agents with the classes whose behavior they are overriding, where the Adapter pattern provides one wrapper around a certain interface. It was one of my silent expectations at the start of the project that the Visitor, Adapter and Decorator pattern would end up next to each other. The Visitor and Adapter pattern did, but the Decorator pattern turned out to have a different kind of processing. Still, it ends up at the same place in the transformational processing as the Adapter pattern, which it resembles more than the Visitor pattern. That the Visitor pattern uses tight coupling was a surprise to me, but in retrospect it is comprehensible.

Another silent expectation was that the Interpreter and Builder pattern would line up. They do not, but the Observer and Builder pattern turn out to line up together. Although unexpected, it makes sense indeed. One Reader can have different Converters. These converters are grouped in parallel, just like the Observers in the Observer pattern. But Converters do not require any direct relationship with the Reader, on the contrary. In the Observer pattern the Observers must share the same instance of the Subject, where in the Builder pattern the demand is lessened to sharing the same type of object. This makes the Builder pattern less demanding than the Observer pattern in terms of sharing objects, and in line with that it is less controllable what the output will be.

I doubted whether the relationship between the Reader and the AbstractTransfer should be of a compositional type, but I decided against it, because the fact that the Reader creates these objects is not the essence of the relationship.
The essence is that the Reader provides the needed implementation of the AbstractTransfer in order to deliver its results to the available Converters. It is more an instantiation than a creation. The creational patterns have as their purpose the creation of new objects, where any Reader has as its purpose to provide its results. That is a big difference, reason enough to make it an aggregational relationship.

As mentioned before, the Service Locator and Dependency Injection are two highly related design patterns. Their main difference is the use of a nominal versus an ordinal key. How the registry is connected to its factories makes a big difference, which can be expressed with the analogy of a famous paradox: compare 'I always lie' to 'I lie'. The first is the analogy to the nominal key, the latter the analogy to the ordinal key. Both patterns do their job well, the first one making one more assumption than the other.

This classification system asks for more patterns to be discovered. There are still a lot of possibilities currently having 'n.a.' as their value, all of which could be patterns. Maybe I overlooked them and they already exist; if so, I would love to hear of these patterns. Such patterns should be built around a specific relationship, like a translational process using loose coupling and therefore mainly based on one or more associations. At the moment only the Command pattern fills a cell in this column, but there might be more patterns around, as I can imagine that there are more translational patterns.

Many more patterns have already been described, but close examination of many of them reveals that they are often new names for already described patterns. One of the main strong points of the book of the GoF is that they describe a restricted set of design patterns, but do so in such a general way that the same pattern can be applied to a lot of different situations. When for every new situation a new name is created while the design pattern is already known, there is an overload of names, but no clarity about which patterns are really useful. An example is the Facet pattern, which describes the situation of restricting an interface to a smaller interface, most often used for security. This can be conveniently handled using the Visitor pattern. I doubt whether the Facet pattern should be mentioned as a separate pattern.
Not every situation should be a pattern, and it could be more beneficial if the number of situations in which a pattern can be used is extended, rather than creating a new name and isolating the particular situation from its related situations. Maybe it could be called a Facet implementation as part of the Visitor pattern, in order to show the extensibility of the pattern. The more situations are described in which a pattern could be useful, the deeper the understanding of patterns can grow. On the Wikipedia page the following patterns do not actually add a new design pattern to the book of the GoF:

Multiton resembles the Object Pool pattern; Lazy Initialization is more a technique or a language property than a design pattern; the Null Object pattern is not a real pattern, but it is a very important concept indeed; the Blackboard pattern is an example of the Chain of Responsibility pattern; RAII is an important technique within some languages; and the Restorer pattern, finally, is nowhere described.

Peter Norvig has stated that design patterns do not exist in many functional programming languages, and he presents an overview of how design patterns can be replaced or become invisible altogether in Dylan and Lisp. This is interesting, just like the article about functional programming, because it shows that design patterns depend on the programming language. Design patterns consist of two parts: the problem and the solution. I think the problem will always exist, and every language has its own ways to deal with it; within the boundaries of the language certain types of solutions are available. When you take a look at the example of the Interpreter pattern, I think it would not be a real challenge for Peter Norvig to rewrite that code in Lisp using functions in combination with higher-order functions, and the code would probably be more compact too. He could then show this code and say 'You see, it works and the pattern has become invisible.' I would agree and reply 'And that is just my point: it has become invisible, but nevertheless it is still there', because the relationships which form the pattern will still be necessary to create the code. The pattern might have become invisible, but that does not mean it is not there.

Maintainability is the most important feature of any application; without it, the application will be replaced. Visibility of what the code is doing is for me a very important feature. I would prefer an application having visible design patterns over an application having invisible design patterns, although I would agree that both applications serve their first job well, which is that they have to do what they should do. Directly after that comes the question of maintenance. That is not an urgent demand when the application is quite straightforward, but the more complex a system grows, the more urgent the demand for maintainability becomes. And I think the more structure a language demands of its programmers, the more 'unnecessary' lines of code they have to write, the better the next programmer, who is not yet known, may understand what is going on. That is why I prefer to have very visible design patterns, but that might be a matter of taste.

May the Force be with you.

