Figure 1. Simple push-button.

The CPushButton class definition might look something like this:

/* 1 */
class CPushButton
{
protected:
    Rect            bounds;
    Str255          title;
    Boolean         isDefault;
    WindowPtr       owningWindow;
    ControlHandle   buttonControl;

public:
    CPushButton( WindowPtr owningWindow, Rect *bounds,
                 Str255 title, Boolean isDefault );
    virtual ~CPushButton();
    virtual void Draw();
    virtual void DoClick();
};
For the moment, ignore the keyword virtual that’s sprinkled throughout the CPushButton class definition. We’ll get to it in a bit.

Take a look at the CPushButton data members. Notice that they were declared using the protected access specifier. protected is very similar to the private access specifier. A data member or member function marked as private is accessible only from within member functions of that class. For example, if you defined an object in main() that featured a private data member, referring to the data member within a member function works just fine, but referring to it within main() will cause a compile error telling you that you are trying to access a member inappropriately.

A member marked as protected is accessible from member functions of the class and also from member functions of any classes derived from that class. For example, the bounds data member might be accessed by the CPushButton class’s Draw() function, but never by an outside function like main().
This reuse plays a basic role in inheritance, a defining characteristic of object-oriented programming. With inheritance, new classes are defined that inherit from existing classes and extend their functionality as required. If we need a new class that shares some of the functionality of an existing one, it is easier to inherit from the existing class than to write all of the required code again.

To suppress the base class's implementation of a virtual method and have the derived class's version called instead, the base class virtual method is overridden in the derived class:
class A
{
    public virtual void func1()
    {
        Console.Write("Base function1");
    }
}

class B : A
{
    public override void func1()
    {
        Console.Write("Derived function1");
    }
}

// ...
A a = new B();
a.func1();   // prints "Derived function1"
Method overriding, in object oriented programming, is a language feature that allows a subclass
to provide a specific implementation of a method that is already provided by one of its
superclasses. The implementation in the subclass overrides (replaces) the implementation in the
superclass.
A subclass can give its own definition of methods which also happen to have the same signature
as the method in its superclass. This means that the subclass's method has the same name and
parameter list as the superclass's overridden method. Constraints on the similarity of return type
vary from language to language, as some languages support covariance on return types.
Method overriding is an important feature that facilitates polymorphism in the design of object-
oriented programs.
Some languages allow the programmer to prevent a method from being overridden, or disallow
method overriding in certain core classes. This may or may not involve an inability to subclass
from a given class.
In many cases, abstract classes are designed — i.e. classes that exist only in order to have
specialized subclasses derived from them. Such abstract classes have methods that do not
perform any useful operations and are meant to be overridden by specific implementations in the
subclasses. Thus, the abstract superclass defines a common interface which all the subclasses
inherit.
Method overloading is a feature found in various programming languages such as Ada, C#, C++, D and Java that allows the creation of several methods with the same name which differ from each other in the type of the input and the type of the output of the function.
For example, doTask() and doTask(object O) are overloaded methods. To call the latter, an
object must be passed as a parameter, whereas the former does not require a parameter, and is
called with an empty parameter field. A common error would be to assign a default value to the
object in the second method, which would result in an ambiguous call error, as the compiler
wouldn't know which of the two methods to use.
Another example would be a Print(object O) method. In this case one might like the method to
be different when printing, for example, text or pictures. The two different methods may be
overloaded as Print(text_object T); Print(image_object P). If we write the overloaded print methods for all the objects our program will "print", we never have to worry about the type of the object or about choosing the correct function to call; the call is always Print(something).
Method overloading is usually associated with statically-typed programming languages which
enforce type checking in function calls. When overloading a method, you are really just making
a number of different methods that happen to have the same name. It is resolved at compile time
which of these methods are used.
Object database
From Wikipedia, the free encyclopedia
Object databases are a niche field within the broader DBMS market dominated by relational
database management systems (RDBMS). Object databases have been considered since the early
1980s and 1990s but they have made little impact on mainstream commercial data processing,
though there is some usage in specialized areas.
Contents
• 1 Overview
• 2 History
• 3 Adoption of object databases
• 4 Technical features
• 5 Standards
• 6 Advantages and disadvantages
• 7 See also
• 8 References
• 9 External links
Overview
When database capabilities are combined with object-oriented (OO) programming language
capabilities, the result is an object database management system (ODBMS).
Today’s trend in programming languages is to utilize objects, making an OODBMS ideal for OO programmers: they can develop a product, store its components as objects, and replicate or modify existing objects to make new objects within the OODBMS. Information
today includes not only data but video, audio, graphs, and photos which are considered complex
data types. Relational DBMS aren’t natively capable of supporting these complex data types. By
being integrated with the programming language, the programmer can maintain consistency
within one environment because both the OODBMS and the programming language will use the
same model of representation. Relational DBMS projects using complex data types would have
to be divided into two separate tasks: the database model and the application.
As the usage of web-based technology increases with the implementation of Intranets and
extranets, companies have a vested interest in OODBMS to display their complex data. Using a
DBMS that has been specifically designed to store data as objects gives an advantage to those
companies that are geared towards multimedia presentation or organizations that utilize
computer-aided design (CAD)[2].
Some object-oriented databases are designed to work well with object-oriented programming
languages such as Ruby, Python, Perl, Java, C#, Visual Basic .NET, C++, Objective-C and
Smalltalk; others have their own programming languages. ODBMSs use exactly the same model
as object-oriented programming languages.
History
Object database management systems grew out of research during the early to mid-1970s into
having intrinsic database management support for graph-structured objects. The term "object-
oriented database system" first appeared around 1985.[3] Notable research projects included
Encore-Ob/Server (Brown University), EXODUS (University of Wisconsin–Madison), IRIS
(Hewlett-Packard), ODE (Bell Labs), ORION (Microelectronics and Computer Technology
Corporation or MCC), Vodak (GMD-IPSI), and Zeitgeist (Texas Instruments). The ORION
project had more published papers than any of the other efforts. Won Kim of MCC compiled the
best of those papers in a book published by The MIT Press.[4]
Early commercial products included Gemstone (Servio Logic, name changed to GemStone
Systems), Gbase (Graphael), and Vbase (Ontologic). The early to mid-1990s saw additional
commercial products enter the market. These included ITASCA (Itasca Systems), Jasmine
(Fujitsu, marketed by Computer Associates), Matisse (Matisse Software), Objectivity/DB
(Objectivity, Inc.), ObjectStore (Progress Software, acquired from eXcelon which was originally
Object Design), ONTOS (Ontos, Inc., name changed from Ontologic), O2[5] (O2 Technology,
merged with several companies, acquired by Informix, which was in turn acquired by IBM),
POET (now FastObjects from Versant which acquired Poet Software), Versant Object Database
(Versant Corporation), VOSS (Logic Arts) and JADE (Jade Software Corporation). Some of
these products remain on the market and have been joined by new open source and commercial
products such as InterSystems CACHÉ (see the product listings below).
Object database management systems added the concept of persistence to object programming
languages. The early commercial products were integrated with various languages: GemStone
(Smalltalk), Gbase (LISP), Vbase (COP) and VOSS (Virtual Object Storage System for
Smalltalk). For much of the 1990s, C++ dominated the commercial object database management
market. Vendors added Java in the late 1990s and more recently, C#.
Starting in 2004, object databases have seen a second growth period when open source object
databases emerged that were widely affordable and easy to use, because they are entirely written
in OOP languages like Smalltalk, Java or C#, such as db4o (db4objects), DTS/S1 from Obsidian
Dynamics and Perst (McObject), available under dual open source and commercial licensing.
Adoption of object databases
Object databases based on persistent programming acquired a niche in application areas such as
engineering and spatial databases, telecommunications, and scientific areas such as high energy
physics and molecular biology. They have made little impact on mainstream commercial data
processing, though there is some usage in specialized areas of financial services.[6] It is also
worth noting that object databases held the record for the World's largest database (being the first
to hold over 1000 terabytes at Stanford Linear Accelerator Center)[7] and the highest ingest rate
ever recorded for a commercial database at over one Terabyte per hour.
Another group of object databases focuses on embedded use in devices, packaged software, and
real-time systems.
Standards
The Object Data Management Group (ODMG) was a consortium of object database and object-
relational mapping vendors, members of the academic community, and interested parties. Its goal
was to create a set of specifications that would allow for portable applications that store objects
in database management systems. It published several versions of its specification. The last
release was ODMG 3.0. By 2001, most of the major object database and object-relational
mapping vendors claimed conformance to the ODMG Java Language Binding. Compliance to
the other components of the specification was mixed. In 2001, the ODMG Java Language
Binding was submitted to the Java Community Process as a basis for the Java Data Objects
specification. The ODMG member companies then decided to concentrate their efforts on the
Java Data Objects specification. As a result, the ODMG disbanded in 2001.
Many object database ideas were also absorbed into SQL:1999 and have been implemented in
varying degrees in object-relational database products.
In 2005 Cook, Rai, and Rosenberger proposed to drop all standardization efforts to introduce
additional object-oriented query APIs but rather use the OO programming language itself, i.e.,
Java and .NET, to express queries. As a result, Native Queries emerged. Similarly, Microsoft
announced Language Integrated Query (LINQ) and DLINQ, an implementation of LINQ, in
September 2005, to provide close, language-integrated database query capabilities with its
programming languages C# and VB.NET 9.
In February 2006, the Object Management Group (OMG) announced that they had been granted
the right to develop new specifications based on the ODMG 3.0 specification and the formation
of the Object Database Technology Working Group (ODBT WG). The ODBT WG plans to
create a set of standards that incorporates advances in object database technology (e.g.,
replication), data management (e.g., spatial indexing), and data formats (e.g., XML) and to
include new features into these standards that support domains where object databases are being
adopted (e.g., real-time systems).
In January 2007 the World Wide Web Consortium gave final recommendation status to the XQuery language. XQuery has enabled a new class of applications that manage hierarchical data, built around the XRX web application architecture, that also provide many of the advantages of object databases. In addition, XRX applications benefit from transporting XML directly to client applications such as XForms without changing data structures.
Advantages and disadvantages

The main benefit of creating a database with objects as data is speed. An OODBMS can be faster than a relational DBMS because data isn’t stored in relational rows and columns but as objects[8].
Objects have a many to many relationship and are accessed by the use of pointers. Pointers are
linked to objects to establish relationships. Another benefit of OODBMS is that it can be
programmed with small procedural differences without affecting the entire system[9]. This is most
helpful for those organizations that have data relationships that aren’t entirely clear or need to
change these relations to satisfy the new business requirements. This ability to change
relationships leads to another benefit which is that relational DBMS can’t handle complex data
models while OODBMS can.
Benchmarks between ODBMSs and RDBMSs have shown that an ODBMS can be clearly
superior for certain kinds of tasks. The main reason for this is that many operations are
performed using navigational rather than declarative interfaces, and navigational access to data is
usually implemented very efficiently by following pointers.
Critics of navigational database-based technologies like ODBMS suggest that pointer-based
techniques are optimized for very specific "search routes" or viewpoints; for general-purpose
queries on the same information, pointer-based techniques will tend to be slower and more
difficult to formulate than relational. Thus, navigation appears to simplify specific known uses at
the expense of general, unforeseen, and varied future uses.[citation needed] However, with suitable
language support, direct object references may be maintained in addition to normalised, indexed
aggregations, allowing both kinds of access; furthermore, a persistent language may index
aggregations on whatever its content elements return from a call to some arbitrary object access
method, rather than only on attribute value, which allows a query to 'drill down' into complex
data structures.
Other things that work against ODBMS seem to be the lack of interoperability with a great
number of tools/features that are taken for granted in the SQL world, including but not limited to
industry standard connectivity, reporting tools, OLAP tools, and backup and recovery standards.
[citation needed]
Additionally, object databases lack a formal mathematical foundation, unlike the
relational model, and this in turn leads to weaknesses in their query support. However, this
objection is offset by the fact that some ODBMSs fully support SQL in addition to navigational
access, e.g. Objectivity/SQL++, Matisse, and InterSystems CACHÉ. Effective use may require
compromises to keep both paradigms in sync.
In fact there is an intrinsic tension between the notion of encapsulation, which hides data and
makes it available only through a published set of interface methods, and the assumption
underlying much database technology, which is that data should be accessible to queries based
on data content rather than predefined access paths. Database-centric thinking tends to view the
world through a declarative and attribute-driven viewpoint, while OOP tends to view the world
through a behavioral viewpoint, maintaining entity-identity independently of changing attributes.
This is one of the many impedance mismatch issues surrounding OOP and databases.
Although some commentators have written off object database technology as a failure, the
essential arguments in its favor remain valid, and attempts to integrate database functionality
more closely into object programming languages continue in both the research and the industrial
communities.[citation needed]
Features of Object-Oriented Programming

An object-oriented programming language supports all the features of conventional programming languages. In addition, it supports some important concepts and terminology that have made it popular as a programming methodology.
• Inheritance
• Polymorphism
• Data Hiding
• Encapsulation
• Overloading
• Reusability
Let us see a brief overview of these important features of Object Oriented programming. But before that, it is important to know some new terminologies used in Object Oriented programming, namely

• Objects
• Classes

Objects:
An object is a basic runtime entity that bundles together state (data) and behavior (functions). In other words, an object is an instance of a class.

Classes:
A class contains data and functions bundled together as a unit. In other words, a class is a collection of similar objects. Defining a class only creates a template or skeleton, so no memory is allocated when a class is defined. Memory is occupied only by objects.
Example:

class classname
{
    // data
    // functions
};

int main()
{
    classname objectname1, objectname2;
}
Member functions:
The functions defined inside the class as above are called member functions.
Here the concept of Data Hiding comes in.

Data Hiding:
This concept is at the heart of object-oriented programming. Data is hidden inside the class by declaring it private. When data or functions are declared private, they can be accessed only by the class in which they are defined. When data or functions are declared public, they can be accessed from anywhere outside the class. Object-oriented programming gives importance to protecting the data in any system; this is done by declaring data as private and making it accessible only to the class in which it is defined. This concept is called data hiding. Member functions, however, are usually kept public.
So the above class structure becomes:

Example:

class classname
{
private:
    datatype data;
public:
    // member functions
};

int main()
{
    classname objectname1, objectname2;
}
Encapsulation:
The technical term for combining data and functions together as a bundle is encapsulation.
Inheritance:
Inheritance, as the name suggests, is the concept of deriving the properties of an existing class to create a new class or classes. In other words, we may have common features or characteristics that are needed by a number of classes. Those features can be placed in a common class, called the base class, and the other classes that share these characteristics can inherit from it, defining only the new things they add on their own. These classes are called derived classes. The main advantage of inheritance in object-oriented programming is that it helps reduce code size, since the common characteristics are placed separately in the base class and simply referred to in the derived class. This gives users the important capability called reusability.
Reusability:
Reusability is achieved through inheritance, explained above. Reusability is nothing but re-use of an existing structure, without changing it, while adding new features or characteristics to it. It is very much needed by programmers in many situations. Reusability gives users the following advantage: it helps reduce code size, since classes can be derived from existing ones and only the new features need to be added, which saves the user's time.
For instance, suppose there is a class defined to draw different graphical figures, and a user wants to draw a graphical figure and also add the feature of coloring it. In this scenario, instead of defining a new class that both draws a graphical figure and colors it, the user can make use of the existing drawing class by deriving from it and adding only the new feature, namely coloring, to the derived class.
Introduction
This is the seventh installment in a series of articles about fundamental object-oriented (OO) concepts. The material presented in these articles is based on the second edition of my book, The Object-Oriented Thought Process, which is intended for anyone who needs to understand basic object-oriented concepts before jumping into the code. Click here to start at the beginning of the series.
Now that we have covered the conceptual basics of classes and objects, we can start to
explore specific concepts in more detail. Remember that there are three criteria that are
applied to object-oriented languages: They have to implement encapsulation,
inheritance, and polymorphism. Of course, these are not the only important terms, but
they are a great place to start a discussion.
In the previous article, this article, and several of the ones that follow, we will focus on a
single concept and explore how it fits in to the object-oriented model. We will also begin
to get much more involved with code. In keeping with the code examples used in the
previous articles, Java will be the language used to implement the concepts in code.
One of the reasons that I like to use Java is that you can download the Java compiler for personal use from the Sun Microsystems Web site http://java.sun.com/. You can download the J2SE 1.4.2 SDK (software development kit) to compile and execute these applications. I will provide the code listings for all examples in this article, along with figures and the output. I have the SDK 1.4.0 loaded on my machine. See the previous article in this series for detailed descriptions of compiling and running all the code examples in this series.
Checking Account Example
Recall that in the previous article, we created a class diagram that is represented in the
following UML diagram; see Figure 1.
Listing 2: Encapsulation.java
The offending line is where the main application attempts to set balance directly:

myAccount.balance = 40.00;
This line violates the rule of data hiding. As we saw in last month's article, the compiler does not allow this; the attempt to set balance to 40 fails only because the attribute was declared private. It is interesting to note that the Java language, like C++, C#, and other languages, allows an attribute to be declared public. In that case, the main application would indeed be allowed to set the value of balance directly. This would break the object-oriented concept of data hiding and would not be considered a proper object-oriented design.
This is one area where the importance of the design comes in. If you abide by the rule
that all attributes are private, all attributes of an object are hidden, thus the term data
hiding. This is so important because the compiler now can enforce the data hiding rule.
If you declare all of a class's attributes as private, a rogue developer cannot directly
access the attributes from an application. Basically, you get this protection checking for
free.
Whereas the class's attributes are hidden, the methods in this example are designated
as public.
public void setBalance(double bal) {
    balance = bal;
}

public double getBalance() {
    return balance;
}
- Data hiding is a characteristic of object-oriented programming. Because an object can only be associated with data in predefined classes or templates, the object can only "know" about the data it needs to know about. There is no possibility that someone maintaining the code may inadvertently point to or otherwise access the wrong data. Thus, all data not required by an object can be said to be "hidden."
With protected inheritance, the public and protected members of the base class become protected in the derived class.

With private inheritance, the public and protected members of the base class become private in the derived class.
Function templates
Function templates are special functions that can operate with generic types. This allows us to create a
function template whose functionality can be adapted to more than one type or class without
repeating the entire code for each type.
In C++ this can be achieved using template parameters. A template parameter is a special kind of parameter that can be used to pass a type as an argument: just like regular function parameters can be used to pass values to a function, template parameters allow us to pass types to a function as well. These function templates can use these parameters as if they were any other regular type.
The format for declaring function templates with type parameters is:

template <class identifier> function_declaration;
template <typename identifier> function_declaration;

The only difference between the two prototypes is the use of either the keyword class or the keyword typename. The choice is indistinct, since both expressions have exactly the same meaning and behave exactly the same way.
For example, to create a template function that returns the greater one of two objects we could use:

template <class myType>
myType GetMax (myType a, myType b) {
    return (a > b ? a : b);
}
Here we have created a template function with myType as its template parameter. This template
parameter represents a type that has not yet been specified, but that can be used in the template
function as if it were a regular type. As you can see, the function template GetMax returns the greater
of two parameters of this still-undefined type.
To use this function template we use the following format for the function call:

function_name <type> (parameters);
For example, to call GetMax to compare two integer values of type int we can write:
int x, y;
GetMax <int> (x, y);
When the compiler encounters this call to a template function, it uses the template to automatically
generate a function replacing each appearance of myType by the type passed as the actual template
parameter (int in this case) and then calls it. This process is automatically performed by the compiler
and is invisible to the programmer.
// function template
#include <iostream>
using namespace std;

template <class T>
T GetMax (T a, T b) {
    T result;
    result = (a > b) ? a : b;
    return (result);
}

int main () {
    int i = 5, j = 6, k;
    long l = 10, m = 5, n;
    k = GetMax<int>(i, j);
    n = GetMax<long>(l, m);
    cout << k << endl;
    cout << n << endl;
    return 0;
}

Output:
6
10
In this case, we have used T as the template parameter name instead of myType because it is shorter
and in fact is a very common template parameter name. But you can use any identifier you like.
In the example above we used the function template GetMax() twice. The first time with arguments
of type int and the second one with arguments of type long. The compiler has instantiated and then
called each time the appropriate version of the function.
As you can see, the type T is used within the GetMax() template function even to declare new
objects of that type:
T result;
Therefore, result will be an object of the same type as the parameters a and b when the function
template is instantiated with a specific type.
In this specific case where the generic type T is used as a parameter for GetMax, the compiler can deduce automatically which data type to instantiate it with, without our having to specify it explicitly within angle brackets (as we have done before, specifying <int> and <long>). So we could have written instead:
int i, j;
GetMax (i, j);

Since both i and j are of type int, the compiler can automatically determine that the template parameter can only be int. This implicit method produces exactly the same result:
// function template II
#include <iostream>
using namespace std;

template <class T>
T GetMax (T a, T b) {
    return (a > b ? a : b);
}

int main () {
    int i = 5, j = 6, k;
    long l = 10, m = 5, n;
    k = GetMax(i, j);
    n = GetMax(l, m);
    cout << k << endl;
    cout << n << endl;
    return 0;
}

Output:
6
10
Notice how in this case, we called our function template GetMax() without explicitly specifying the
type between angle-brackets <>. The compiler automatically determines what type is needed on each
call.
Because our template function includes only one template parameter (class T) and the function
template itself accepts two parameters, both of this T type, we cannot call our function template with
two objects of different types as arguments:
int i;
long l;
k = GetMax (i, l);
This would not be correct, since our GetMax function template expects two arguments of the same
type, and in this call to it we use objects of two different types.
We can also define function templates that accept more than one type parameter, simply by specifying more template parameters between the angle brackets. For example:

template <class T, class U>
T GetMin (T a, U b) {
    return (a < b ? a : b);
}
In this case, our function template GetMin() accepts two parameters of different types and returns
an object of the same type as the first parameter (T) that is passed. For example, after that
declaration we could call GetMin() with:
int i, j;
long l;
i = GetMin<int, long> (j, l);
or simply:
i = GetMin (j,l);
even though j and l have different types, since the compiler can determine the appropriate
instantiation anyway.
Class templates
We also have the possibility to write class templates, so that a class can have members that use template parameters as types. For example:

template <class T>
class mypair {
    T values [2];
  public:
    mypair (T first, T second)
    {
        values[0] = first;
        values[1] = second;
    }
};

The class that we have just defined serves to store two elements of any valid type. For example, if we wanted to declare an object of this class to store two integer values of type int with the values 115 and 36 we would write:

mypair<int> myobject (115, 36);
This same class could also be used to create an object to store any other type, for example:

mypair<double> myfloats (3.0, 2.18);
The only member function in the previous class template has been defined inline within the class declaration itself. If we define a member function outside the declaration of the class template, we must always precede that definition with the template <...> prefix:

template <class T>
class mypair {
    T a, b;
  public:
    mypair (T first, T second)
    { a = first; b = second; }
    T getmax ();
};

template <class T>
T mypair<T>::getmax ()
{
    T retval;
    retval = a > b ? a : b;
    return retval;
}

Confused by so many T's? There are three T's in the definition of getmax(): the first one is the template parameter. The second T refers to the type returned by the function. And the third T (the one between angle brackets) is also a requirement: it specifies that this function's template parameter is also the class template parameter.
Template specialization
If we want to define a different implementation for a template when a specific type is passed as
template parameter, we can declare a specialization of that template.
For example, let's suppose that we have a very simple class called mycontainer that can store one
element of any type and that it has just one member function called increase, which increases its
value. But we find that when it stores an element of type char it would be more convenient to have a
completely different implementation with a function member uppercase, so we decide to declare a
class template specialization for that type:
// template specialization
#include <iostream>
using namespace std;

// class template:
template <class T>
class mycontainer {
    T element;
  public:
    mycontainer (T arg) { element = arg; }
    T increase () { return ++element; }
};

// class template specialization:
template <>
class mycontainer <char> {
    char element;
  public:
    mycontainer (char arg) { element = arg; }
    char uppercase ()
    {
        if ((element >= 'a') && (element <= 'z'))
            element += 'A' - 'a';
        return element;
    }
};

int main () {
    mycontainer<int> myint (7);
    mycontainer<char> mychar ('j');
    cout << myint.increase() << endl;
    cout << mychar.uppercase() << endl;
    return 0;
}

Output:
8
J
First of all, notice that we precede the class template name with an empty template<> parameter list.
This is to explicitly declare it as a template specialization.
But more important than this prefix, is the <char> specialization parameter after the class template
name. This specialization parameter itself identifies the type for which we are going to declare a
template class specialization (char). Notice the differences between the generic class template and
the specialization:
template <class T> class mycontainer { ... };
template <> class mycontainer <char> { ... };
The first line is the generic template, and the second one is the specialization.
When we declare specializations for a template class, we must also define all its members, even those
exactly equal to the generic template class, because there is no "inheritance" of members from the
generic template to the specialization.
Type conversions
An expression of a given type is implicitly converted in the following situations:
• The expression is used as an operand of an arithmetic or logical operation.
• The expression is used as a condition in an if statement or an iteration statement (such as a for
loop). The expression will be converted to a Boolean (or an integer in C89).
• The expression is used in a switch statement. The expression will be converted to an integral type.
• The expression is used as an initialization. This includes the following:
○ An assignment is made to an lvalue that has a different type than the assigned value.
○ A function is provided an argument value that has a different type than the parameter.
○ The value specified in the return statement of a function has a different type from the
defined return type for the function.
You can perform explicit type conversions using a cast expression, as described in Cast expressions. The
following sections discuss the conversions that are allowed by either implicit or explicit conversion, and the
rules governing type promotions:
• Arithmetic conversions and promotions
• Lvalue-to-rvalue conversions
• Pointer conversions
• Reference conversions (C++ only)
• Qualification conversions (C++ only)
• Function argument conversions
What is Type Conversion
Type conversion is the process of converting a value of one type into another. Explicitly converting an
expression of a given type into another type is called type casting.
#include <iostream>
int main()
{
short x=6000;
int y;
y=x; //the short value is implicitly converted to int
return 0;
}
In the above example, the value of the short variable x is converted to type int and assigned to
the integer variable y.
datatype (expression);
Here, datatype is the type to which the programmer wants the expression to be converted.
In C++ the type casting can be done in either of the two ways mentioned below namely:
C-style casting
C++-style casting
(type) expression
Apart from the above the other form of type casting that can be used specifically in C++
programming language namely C++-style casting is as below namely:
type (expression)
This approach was adopted since it provides more clarity to C++ programmers than
the C-style casting. Say, for instance, that the C-style cast
(float) a
is not as clear at a glance as the equivalent C++-style cast used below,
float (a)
Let us see the concept of type casting in C++ with a small example:
#include <iostream>
using namespace std;
int main()
{
int a;
float b,c;
cout << "Enter the value of a: ";
cin >> a;
cout << "\nEnter the value of b: ";
cin >> b;
c = float(a)+b;
cout << "\nThe value of c is: " << c;
return 0;
}
Entity-relationship model
From Wikipedia, the free encyclopedia
Overview
The first stage of information system design uses these models during the requirements analysis
to describe information needs or the type of information that is to be stored in a database. The
data modeling technique can be used to describe any ontology (i.e. an overview and
classifications of used terms and their relationships) for a certain area of interest. In the case of
the design of an information system that is based on a database, the conceptual data model is, at a
later stage (usually called logical design), mapped to a logical data model, such as the relational
model; this in turn is mapped to a physical model during physical design. Note that sometimes,
both of these phases are referred to as "physical design".
There are a number of conventions for entity-relationship diagrams (ERDs). The classical
notation mainly relates to conceptual modeling. There are a range of notations employed in
logical and physical database design, such as IDEF1X.
Class Diagrams
A class diagram focuses on a set of classes (see Chapter 1) and the structural relationships among them
(see Chapter 2). It may also show interfaces (see the section “Interfaces, Ports, and Connectors” in Chapter
1).
The UML allows you to draw class diagrams that have varying levels of detail. One useful way to classify
these diagrams involves three stages of a typical software development project: requirements, analysis, and
design. These stages are discussed in the following sections.
Object diagram
From Wikipedia, the free encyclopedia
An object diagram in the Unified Modeling Language (UML), is a diagram that shows a
complete or partial view of the structure of a modeled system at a specific time.
An Object diagram focuses on some particular set of object instances and attributes, and the links
between the instances. A correlated set of object diagrams provides insight into how an arbitrary
view of a system is expected to evolve over time. Object diagrams are more concrete than class
diagrams, and are often used to provide examples, or act as test cases for the class diagrams.
Only those aspects of a model that are of current interest need be shown on an object diagram.
Object diagram topics
Instance specifications
Each object and link on an object diagram is represented by an InstanceSpecification. This can
show an object's classifier (e.g. an abstract or concrete class) and instance name, as well as
attributes and other structural features using slots. Each slot corresponds to a single attribute or
feature, and may include a value for that entity.
The name on an instance specification optionally shows an instance name, a ':' separator, and
optionally one or more classifier names separated by commas. The contents of slots, if any, are
included below the names, in a separate attribute compartment. A link is shown as a solid line,
and represents an instance of an association.
Object diagram example
Initially, when n=2, and f(n-2) = 0, and f(n-1) = 1, then f(n) = 0 + 1 = 1.
As an example, consider one possible way of modeling production of the Fibonacci sequence.
In the first UML object diagram on the right, the instance in the leftmost instance specification is
named v1, has IndependentVariable as its classifier, plays the NMinus2 role within the
FibonacciSystem, and has a slot for the val attribute with a value of 0. The second object is
named v2, is of class IndependentVariable, plays the NMinus1 role, and has val = 1. The
DependentVariable object is named v3, and plays the N role. The topmost instance, an
anonymous instance specification, has FibonacciFunction as its classifier, and may have an
instance name, a role, and slots, but these are not shown here. The diagram also includes three
named links, shown as lines. Links are instances of an association.
After the first iteration, when n = 3, and f(n-2) = 1, and f(n-1) = 1, then f(n) = 1 + 1
= 2.
In the second diagram, at a slightly later point in time, the IndependentVariable and
DependentVariable objects are the same, but the slots for the val attribute have different values.
The role names are not shown here.
After several more iterations, when n = 7, and f(n-2) = 5, and f(n-1) = 8, then f(n) =
5 + 8 = 13.
In the last object diagram, a still later snapshot, the same three objects are involved. Their slots
have different values. The instance and role names are not shown here.
Usage
If you are using a UML modeling tool, you will typically draw object diagrams using some other
diagram type, such as on a class diagram. An object instance may be called an instance
specification or just an instance. A link between instances is generally referred to as a link. Other
UML entities, such as an aggregation or composition symbol (a diamond) may also appear on an
object diagram.
Object oriented technology is based on a few simple concepts that, when combined,
produce significant improvements in software construction. Unfortunately, the basic
concepts of the technology often get lost in the excitement over its more advanced
features. The basic characteristics of the OOM are explained ahead.
Characteristics of Object Oriented Technology:
* Identity
* Classification
* Polymorphism
* Inheritance
Identity:
The term Object Oriented means that we organize the software as a collection of
discrete objects. An object is a software package that contains the related data and the
procedures. Although objects can be used for any purpose, they are most frequently
used to represent real-world objects such as products, customers and sales orders.
The basic idea is to define software objects that can interact with each other just as
their real world counterparts do, modeling the way a system works and providing a
natural foundation for building systems to manage that business.
Classification:
In principle, packaging data and procedures together makes perfect sense. In practice,
it raises an awkward problem. Suppose we have many objects of the same general
type- for example a thousand product objects, each of which could report its current
price. Any data these objects contained could easily be unique for each object. Stock
number, price, storage dimensions, stock on hand, reorder quantity, and any other
values would differ from one product to the next. But the methods for dealing with
these data might well be the same. Do we have to copy these methods and duplicate
them in every object?
No, this would be ridiculously inefficient. All object-oriented languages provide a simple
way of capturing these commonalties in a single place. That place is called a class.
The class acts as a kind of template for objects of similar nature.
Polymorphism:
Polymorphism is a Greek word meaning "many forms". It is used to express the fact
that the same message can be sent to many different objects and interpreted in
different ways by each object. For example, we could send the message "move" to
many different kinds of objects. They would all respond to the same message, but they
might do so in very different ways. The move operation will behave one way for a
window and another way for a chess piece.
Inheritance:
Inheritance is the sharing of attributes and operations among classes in a hierarchical
relationship. A class can be defined in a generalized form and then specialized in a
subclass. Each subclass inherits all the properties of its superclass and adds its own
properties to them. For example, a car and a bicycle are subclasses of a class road
vehicle: they both inherit all the qualities of a road vehicle and add their own
properties to it.
Dynamic Binding
Last updated Mar 1, 2004.
Earlier, I explained how dynamic binding and polymorphism are related. However, I didn't explain how this
relationship is implemented. Dynamic binding refers to the mechanism that resolves a virtual function call at
runtime. This mechanism is activated when you call a virtual member function through a reference or a pointer
to a polymorphic object. Imagine a class hierarchy in which a class called Shape serves as a base class for
other classes (Triangle and Square):
class Shape
{
public:
virtual void Draw() {} //dummy implementation
//..
};
class Square : public Shape
{
public:
void Draw(); //overriding Shape::Draw
};
class Triangle : public Shape
{
public:
void Draw(); //overriding Shape::Draw
};
Draw() is a dummy function in Shape. It's declared virtual in the base class to enable derived classes to
override it and provide individual implementations. The beauty in polymorphism is that a pointer or a reference
to Shape may actually point to an object of class Square or Triangle:
void func(Shape* s)
{
s->Draw();
}
int main()
{
Shape *p1= new Triangle;
Shape *p2 = new Square;
func(p1);
func(p2);
}
C++ distinguishes between a static type and a dynamic type of an object. The static type is determined at
compile time. It's the type specified in the declaration. For example, the static type of both p1 and p2 is
"Shape *". However, the dynamic types of these pointers are determined by the type of object to which they
point: "Triangle *" and "Square *", respectively. When func() calls the member function Draw(), C++
resolves the dynamic type of s and ensures that the appropriate version of Draw() is invoked. Notice how
powerful dynamic binding is: You can derive additional classes from Shape that override Draw() even after
func() is compiled. When func() invokes Draw(), C++ will still resolve the call according to the dynamic
type of s.
As the example shows, dynamic binding isn't confined to the resolution of member function calls at runtime;
rather, it applies to the binding of a dynamic type to a pointer or a reference that may differ from its static type.
Such a pointer or reference is said to be polymorphic. Likewise, the object bound to such a pointer is a
polymorphic object.
Dynamic binding exacts a toll, though. Resolving the dynamic type of an object takes place at runtime and
therefore incurs performance overhead. However, this penalty is negligible in most cases. Another advantage
of dynamic binding is reuse. If you decide to introduce additional classes at a later stage, you only have to
override Draw() instead of writing entire classes from scratch. Furthermore, existing code will still function
correctly once you've added new classes. You only have to compile the new code and relink the program.
Multiple inheritance
From Wikipedia, the free encyclopedia
Overview
Multiple inheritance allows a class to take on functionality from multiple other classes, such as
allowing a class named StudentMusician to inherit from a class named Person, a class named
Musician, and a class named Worker. This can be abbreviated StudentMusician : Person,
Musician, Worker.
Ambiguities arise in multiple inheritance, as in the example above, if for instance the class
Musician inherited from Person and Worker and the class Worker inherited from Person. This is
referred to as the Diamond problem. There would then be the following rules:
Worker : Person
Musician : Person, Worker
StudentMusician : Person, Musician, Worker
If a compiler is looking at the class StudentMusician it needs to know whether it should join
identical features together, or whether they should be separate features. For instance, it would
make sense to join the "Age" features of Person together for StudentMusician. A person's age
doesn't change if you consider them a Person, a Worker, or a Musician. It would, however, make
sense to separate the feature "Name" in Person and Musician if they use a different stage name
than their given name. The options of joining and separating are both valid in their own context
and only the programmer knows which option is correct for the class they are designing.
Languages have different ways of dealing with these problems of repeated inheritance.
• Eiffel allows the programmer to explicitly join or separate features that are
being inherited from superclasses. Eiffel will automatically join features
together if they have the same name and implementation. The class writer
has the option to rename the inherited features to separate them. Eiffel also
allows explicit repeated inheritance such as A: B, B.
• C++ requires that the programmer state which parent class the feature to
use should come from i.e. "Worker::Person.Age". C++ does not support
explicit repeated inheritance since there would be no way to qualify which
superclass to use (see criticisms). C++ also allows a single instance of the
multiple class to be created via the virtual inheritance mechanism (i.e.
"Worker::Person" and "Musician::Person" will reference the same object).
• Perl uses the list of classes to inherit from as an ordered list. The compiler
uses the first method it finds by depth-first searching of the superclass list or
using the C3 linearization of the class hierarchy. Various extensions provide
alternative class composition schemes. Python has the same structure, but
unlike Perl includes it in the syntax of the language. In Perl and Python, the
order of inheritance affects the class semantics (see criticisms).
• The Common Lisp Object System allows full programmer control of method
combination, and if this is not enough, the Metaobject Protocol gives the
programmer a means to modify the inheritance, method dispatch, class
instantiation, and other internal mechanisms without affecting the stability of
the system.
• Logtalk supports both interface and implementation multi-inheritance,
allowing the declaration of method aliases that provide both renaming and
access to methods that would be masked out by the default conflict
resolution mechanism.
• Curl allows only classes that are explicitly marked as shared to be inherited
repeatedly. Shared classes must define a secondary constructor for each
regular constructor in the class. The regular constructor is called the first
time the state for the shared class is initialized through a subclass
constructor, and the secondary constructor will be invoked for all other
subclasses.
• Ocaml chooses the last matching definition of a class inheritance list to
resolve which method implementation to use under ambiguities. To override
the default behavior one simply qualifies a method call with the desired class
definition.
• Tcl allows multiple parent classes; their order affects the name resolution for
class members.[2]
Smalltalk, C#, Objective-C, Object Pascal / Delphi, Java, Nemerle, and PHP do not allow
multiple inheritance, and this avoids any ambiguity. However, all but Smalltalk allow classes to
implement multiple interfaces.
#include <iostream>
using std::cout;
using std::endl;
class Base1 {
public:
Base1( int parameterValue )
{
value = parameterValue;
}
int getData() const { return value; }
protected:
int value;
};
class Base2
{
public:
Base2( char characterData )
{
letter = characterData;
}
char getData() const { return letter; }
protected:
char letter;
};
// Derived inherits from both Base1 and Base2 (multiple inheritance)
class Derived : public Base1, public Base2
{
public:
Derived( int i, char c, double d )
: Base1( i ), Base2( c ), real( d ) {}
double getReal() const { return real; }
private:
double real;
};
int main()
{
Base1 base1( 10 ), *base1Ptr = 0;
Base2 base2( 'Z' ), *base2Ptr = 0;
Derived derived( 7, 'A', 3.5 );
base1Ptr = &derived; // point a Base1* at the Derived object
cout << base1Ptr->getData() << '\n'; // prints 7
base2Ptr = &derived; // point a Base2* at the Derived object
cout << base2Ptr->getData() << endl; // prints A
return 0;
}
The program's output:
7
A
#include <iostream>
#include <string>
#include <vector>
class Customer
{
public:
Customer() : m_name("") {}
std::string& getName() { return m_name; }
void setName(std::string const& name) { m_name = name; }
friend std::ostream& operator<<(std::ostream& os, Customer const&
rhs)
{
os << rhs.m_name;
return os;
}
friend std::istream& operator>>(std::istream& is, Customer& rhs)
{
is >> rhs.m_name;
return is;
}
private:
std::string m_name;
};
class Bank
{
public:
Bank() {}
Bank(const size_t numCustomers)
{
for(size_t i = 0; i < numCustomers; i++)
{
Customer c;
m_vCustomers.push_back(c);
}
}
void addCustomer(Customer const& customer)
{
m_vCustomers.push_back(customer);
}
void showCustomers()
{
std::vector<Customer>::iterator it;
for(it = m_vCustomers.begin(); it != m_vCustomers.end(); it++)
{
std::cout << (*it) << std::endl;
}
}
private:
std::vector<Customer> m_vCustomers;
};
int main()
{
size_t numCustomers = 0;
std::cout << "Enter the number of customers for a new Bank object: ";
std::cin >> numCustomers;
if(numCustomers > 0)
{
Bank b(numCustomers);
b.showCustomers();
}
return 0;
}
Section 7.3: Constructors and Destructors
In addition to all of the member functions you'll create for your objects, there are two special
kinds of functions that you should create for every object. They are called constructors and
destructors. Constructors are called every time you create an object, and destructors are called
every time you destroy an object.
Constructors
The constructor's job is to set up the object so that it can be used. Remember in Chapter 3.2,
when we first declared a variable? Before we initialized the variable, it stored a garbage value.
We needed to initialize the variable to 0 or to some other useful value before using it. The same
is true of objects. The difference is that with an object, you can't just assign it a value. You can't
say:
Player greenHat = 0;
because that doesn't make sense. A player is not a number, so you can't just set it
to 0. The way object initialization happens in C++ is that a special function, the
constructor, is called when you instantiate an object. The constructor is a function
whose name is the same as the object, with no return type (not even void). For our
video game, we'll probably want to initialize our Players' attributes so that they
don't contain garbage values. We might decide to write the constructor like this:
Player::Player() {
strength = 10;
agility = 10;
health = 10;
}
We would also have to change the class declaration so that it looks like this:
class Player {
int health;
int strength;
int agility;
public:
Player(int s, int a) {
strength = s;
agility = a;
health = 10;
}
};
Now, when we want to instantiate the Player object four times, we can do the
following:
Destructors
Destructors are less complicated than constructors. You don't call them explicitly (they are called
automatically for you), and there's only one destructor for each object. The name of the
destructor is the name of the class, preceded by a tilde (~). Here's an example of a destructor:
Player::~Player() {
strength = 0;
agility = 0;
health = 0;
}
Since a destructor is called after an object is used for the last time, you're probably
wondering why they exist at all. Right now, they aren't very useful, but you'll see
why they're important in Section 8.3.
#include <iostream>
using namespace std;
class exforsys
{
private:
int a,b;
public:
void test()
{
a=100;
b=200;
}
friend int compute(exforsys e1);
//Friend function declaration with the keyword friend; an object of the
//class it is friend to is passed to it
};
int compute(exforsys e1)
{
return e1.a + e1.b; //a friend function may access the private members a and b
}
int main()
{
exforsys e;
e.test();
cout << "The result is:" << compute(e);
//Calling the friend function with an object as argument.
return 0;
}
The function compute() is a non-member function of the class exforsys. In order to make this
function have access to the private data a and b of class exforsys , it is created as a friend
function for the class exforsys. As a first step, the function compute() is declared as friend in
the class exforsys as:
hi
I am learning to program in C++. I have got some difficulties in defining a class. The public and
private access specifiers are confusing.
anyway
the class I want to define should collect information about integers. It accepts integers
through a method add(). At any time it returns the average of the integers, the median
(the value in the middle of the list), the sum, and the number of integers.
class statistics {
private:
double mean(void);
vector<int> mode(void);
double median(void);
int sum(void);
int count(void);
public:
void add(int t);
};
void
statistics::count (void)
{
ifstream list("list.txt");
container c;
int n;
a=0;
while (list >> n)
{
c.push_back(n);
a++;
}
c.sort();
cout << a << endl;
}
void
statistics::sum (void)
{
ifstream list("list.txt");
container c;
int n;
sum=0;
while (list >> n)
c.push_back(n);
for (container::iterator i = c.begin(); i != c.end(); i++)
sum = addsum(*i);
}
void
statistics::mean (void)
{
ifstream list("list.txt");
container c;
int n;
mean=0;
a=0;
while (list >> n)
{
c.push_back(n);
a++;
}
for (container::iterator i = c.begin(); i != c.end(); i++)
sum = addsum(*i);
mean=sum/a;
cout << "the mean is " << mean << endl;
}
I don't think it is working because I don't think I need to read the file "list"
if someone can give me hints about that, that would be great.
Definition
The definition of the operator<< function can be in any file. It is not a member
function, so it is defined with two explicit operands. The operator<< function must
return the value of the left operand (the ostream) so that multiple << operators may
be used in the same statement. Note that operator<< for your type can be defined
in terms of << on other types, specifically the types of the data members of your
class (eg, ints x and y in the Point class).
// example usage
Point p;
. . .
cout << p; // not legal without << friend function
Operator Overloading
by Andrei Milea
In C++ the overloading principle applies not only to functions, but to operators too. That is, the meaning of
operators can be extended from built-in types to user-defined types. In this way a programmer can provide
his or her own operator to a class by overloading the built-in operator to perform some specific computation
when the operator is used with objects of that class. One question may arise here: is this really useful in
real world implementations? Some programmers consider that overloading is not useful most of the time.
This and the fact that overloading makes the language more complicated is the main reason why operator
overloading is banned in Java. Even if overloading adds complexity to the language it can provide a lot of
syntactic sugar, and code written by a programmer using operator overloading can be easy, but sometimes
misleading, to read. We can use operator overloading easily without knowing all the implementation's
complexities. A short example will make things clear:
The addition without having overloaded operator + could look like this:
Complex c(a);
c.Add(b);
This piece of code is not as suggestive as the first one and the readability becomes poor. Using operator
overloading is a design decision, so when we deal with concepts where some operator seems fit and its use
intuitive, it will make the code more clear than using a function to do the task. However, there are many
cases when programmers abuse this technique, when the concept represented by the class is not related to
the operator (like using + and - to add and remove elements from a data structure). In these cases operator
overloading is a bad idea, creating confusion.
In order to be able to write the above code we must have the "+" operator overloaded to make the proper
addition between the real members and the imaginary ones and also the assignment operator. The
overloading syntax is quite simple, similar to function overloading, the keyword operator followed by the
operator we want to overload as you can see in the next code sample:
class Complex
{
public:
Complex(double re,double im)
:real(re),imag(im)
{}
Complex operator+(Complex);
Complex operator=(Complex);
double GetRealPart() { return real; } //accessors used by operator+ below
double GetImagPart() { return imag; }
private:
double real;
double imag;
};
Complex Complex::operator+(Complex num)
{
real = real + num.GetRealPart();
imag = imag + num.GetImagPart();
return *this;
}
The assignment operator can be overloaded similarly. Notice that we had to call the accessor function in
order to get the real and imaginary parts from the parameter since they are private. In order to bypass this
difficulty we could have made the operator + a friend (a friend function is a function which is permitted to
access the private members of a class) in the complex class:
friend Complex operator+(Complex, Complex);
We could have defined the addition operator globally and called a member to do the actual work:
Complex operator+(Complex &num1,Complex &num2)
{
Complex temp(num1); //note the use of a copy constructor here
temp.Add(num2);
return temp;
}
The motivation for doing so can be understood by examining the difference between the two choices: when
the operator is a member the first object in the expression must be of that particular type, when it's a global
function, the implicit or user-defined conversion can allow the operator to act even if the first operand is not
exactly of the same type:
Complex c = 2+b; //if the integer 2 can be converted by the Complex class, this expression is valid
The number of operands can't be overridden; that is, a binary operator takes two operands, a unary only
one. The same restriction holds for precedence; for example, multiplication takes place before
addition. Some operators need the first operand to be an lvalue: operator=, operator(),
operator[] and operator->. Their use is restricted to non-static member functions; they can't be
overloaded globally. The operator=, operator& and operator, (sequencing) have already defined meanings
by default for all objects, but their meanings can be changed by overloading or erased by making them
private.
Another intuitive example is the "+" operator of the STL string class, which is overloaded to do
concatenation:
string prefix("de");
string word("composed");
string composed = prefix+word;
Using "+" to concatenate is also allowed in Java, but note that this is not extensible to other classes, and it's
not a user defined behavior. Almost all operators can be overloaded in C++:
+ - * / % ^ & |
~ ! , = < > <= >=
++ -- << >> == != && ||
+= -= /= %= ^= &= |= *=
<<= >>= [ ] ( ) -> ->* new delete
Exceptions are the operators for scope resolution (::), member selection (.), and member
selection through a pointer to member (.*). Overloading assumes you specify a behavior for an
operator that acts on a user-defined type; it can't be used just with general pointers. The
standard behavior of operators for built-in (primitive) types cannot be changed by overloading.
C++ Encapsulation
Introduction
Encapsulation is the process of combining data and functions into a single unit called class.
Using the method of encapsulation, the programmer cannot directly access the data. Data is only
accessible through the functions present inside the class. Data encapsulation led to the important
concept of data hiding. Data hiding means that the implementation details of a class are hidden from
the user. The concept of restricted access led programmers to write specialized functions or
methods for performing the operations on hidden members of the class. Attention must be paid to
ensure that the class is designed properly.
Neither too much access nor too much control must be placed on the operations in order to make
the class user friendly. Hiding the implementation details and providing restrictive access leads
to the concept of abstract data type. Encapsulation leads to the concept of data hiding, but the
concept of encapsulation must not be restricted to information hiding. Encapsulation clearly
represents the ability to bundle related data and functionality within a single, autonomous entity
called a class.
For instance:
class Exforsys
{
public:
int sample();
int example(char *se);
int endfunc();
.........
......... //Other member functions
private:
int x;
float sq;
..........
......... //Other data members
};
In the above example, the data members (the integer x, the float sq, and others) and the member
functions sample(), example(char *se), endfunc() and others are bundled inside a single
autonomous entity called the class Exforsys. This exemplifies the concept of encapsulation. This
feature is available in the object-oriented language C++ but not in the procedural language C.
There are advantages to this encapsulated approach. One advantage is that it reduces human
error: the data and the functions that operate on it are maintained together inside the class, so
changes stay localized. It is also clear from the example that an encapsulated object acts as a
black box for the other parts of the program that interact with it. Although encapsulated objects
provide functionality, the calling objects do not know the implementation details. This enhances
the security of the application.
The key strength behind data encapsulation in C++ is the set of access specifiers that can be
placed in the class declaration: public, protected and private. Members placed after the keyword
public are accessible to all users of the class. Members placed after the keyword private are
accessible only to the methods of the class. In between the public and the private access
specifiers, there exists the protected access specifier. Members placed after the keyword
protected are accessible only to the methods of the class or of classes derived from that class.
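The three access levels can be sketched in a few lines of C++. The class names BankAccount and
SavingsAccount and their members are hypothetical, chosen only to illustrate which accesses
compile:

```cpp
// A minimal sketch of the three access levels (illustrative names, not from the text).
class BankAccount {
public:                           // accessible to all users of the class
    explicit BankAccount(double opening) : balance(opening) {}
    double GetBalance() const { return balance; }
protected:                        // accessible to this class and classes derived from it
    void ApplyFee(double fee) { balance -= fee; }
private:                          // accessible only to members of this class
    double balance;
};

class SavingsAccount : public BankAccount {
public:
    explicit SavingsAccount(double opening) : BankAccount(opening) {}
    void ChargeMonthlyFee() { ApplyFee(1.50); }   // OK: a derived class may call a protected member
    // double Peek() { return balance; }          // error: balance is private to BankAccount
};

// Outside code sees only the public interface:
//   SavingsAccount acct(100.0);
//   acct.ChargeMonthlyFee();   // OK, public
//   acct.GetBalance();         // OK, public
//   acct.ApplyFee(5.0);        // error: ApplyFee is protected
```

The commented-out lines show exactly the accesses the compiler rejects: private members are
invisible even to derived classes, and protected members are invisible to outside callers.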
The concept of encapsulation means that a non-member function cannot access an object's
private or protected data. This adds security, but in some cases the programmer may need an
unrelated function to operate on the private data of objects of two different classes. For this, the
programmer can use the concept of friend functions. Encapsulation alone is a powerful feature
that leads to information hiding, abstract data types and friend functions.
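A minimal sketch of a friend function, using hypothetical Celsius and Fahrenheit classes (not
from the text): the non-member function SameTemperature() may read the private data of both
classes because each class names it a friend.

```cpp
class Fahrenheit;   // forward declaration, needed by the friend declaration in Celsius

class Celsius {
public:
    explicit Celsius(double d) : degrees(d) {}
private:
    double degrees;
    friend bool SameTemperature(const Celsius&, const Fahrenheit&);
};

class Fahrenheit {
public:
    explicit Fahrenheit(double d) : degrees(d) {}
private:
    double degrees;
    friend bool SameTemperature(const Celsius&, const Fahrenheit&);
};

// Not a member of either class, yet it touches the private field of both,
// because both classes declared it a friend.
bool SameTemperature(const Celsius& c, const Fahrenheit& f) {
    return f.degrees == c.degrees * 9.0 / 5.0 + 32.0;
}
```

Friendship is granted by the class, never taken by the function: removing either friend
declaration makes the corresponding access a compile error, so encapsulation is relaxed only
where the class author explicitly allows it.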
Complex and critical applications are difficult to maintain; the cost of maintaining an application
is often higher than the cost of developing it properly. To ease this maintenance burden, the
object-oriented language C++ provides encapsulation, which bundles data and related functions
together as a unit called a class, making maintenance much easier at the class level.
* Enhanced Security:
There are several reasons why encapsulation enhances security in C++. The access specifiers act
as the key strength behind this security: they grant users access to members of the class only as
needed, preventing unauthorized access. And if an application must be extended or customized
in later stages of development, new functions can be added without breaking existing code,
thereby giving additional security to the existing application.
4. System Design
Before you purchase any hardware, it may be a good idea to consider the design of your system.
There are basically two hardware issues involved in the design of a Beowulf system: the type of
nodes or computers you are going to use, and the way you connect the computer nodes. There is
one software issue that may affect your hardware decisions: the communication library or API. A
more detailed discussion of hardware and communication software is provided later in this
document.
While the number of choices is not large, there are some important design decisions that must be
made when constructing a Beowulf system. Because the science (or art) of "parallel computing"
has many different interpretations, an introduction is provided below. If you do not like to read
background material, you may skip this section, but it is advised that you read the Suitability
section before you make your final hardware decisions.
4.4 Suitability
Most questions about parallel computing have the same answer:
"It all depends upon the application."
Before we jump into the issues, there is one very important distinction that needs to be made -
the difference between CONCURRENT and PARALLEL. For the sake of this discussion we will
define these two concepts as follows:
CONCURRENT parts of a program are those that can be computed independently.
PARALLEL parts of a program are those CONCURRENT parts that are executed on separate
processing elements at the same time.
The distinction is very important, because CONCURRENCY is a property of the program and
efficient PARALLELISM is a property of the machine. Ideally, PARALLEL execution should
result in faster performance. The limiting factor in parallel performance is the communication
speed and latency between compute nodes. (Latency also exists with threaded SMP applications
due to cache coherency.) Many of the common parallel benchmarks are highly parallel, and
communication and latency are not the bottleneck. This type of problem can be called
"obviously parallel". Other applications are not so simple and executing CONCURRENT parts
of the program in PARALLEL may actually cause the program to run slower, thus offsetting any
performance gains in other CONCURRENT parts of the program. In simple terms, the cost of
communication time must pay for the savings in computation time, otherwise the PARALLEL
execution of the CONCURRENT part is inefficient.
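The trade-off described above can be sketched with a toy cost model (the function names and
numbers are illustrative, not from the text): a CONCURRENT part that takes compute_s seconds
sequentially is split over nodes processors, but the run also pays comm_s seconds of
communication, and PARALLEL execution is only worthwhile when it beats the sequential time.

```cpp
// Toy model of the communication/computation trade-off (an illustration,
// not a benchmark of any real machine).
double parallel_time(double compute_s, double comm_s, int nodes) {
    // Perfectly divisible work plus a flat communication cost.
    return compute_s / nodes + comm_s;
}

bool worth_parallelizing(double compute_s, double comm_s, int nodes) {
    // The communication time must "pay for" the computation savings.
    return parallel_time(compute_s, comm_s, nodes) < compute_s;
}

// An "obviously parallel" part: 100 s of work, 1 s of communication,
// 8 nodes -> 100/8 + 1 = 13.5 s versus 100 s sequential: run it in PARALLEL.
// A chatty part: 2 s of work, 5 s of communication, 8 nodes
// -> 2/8 + 5 = 5.25 s versus 2 s sequential: leave it SEQUENTIAL.
```

The model is crude (real communication cost grows with node count and message size), but it
captures the graph's point: the comm./process. ratio, not concurrency alone, decides efficiency.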
The task of the programmer is to determine which CONCURRENT parts of the program
SHOULD be executed in PARALLEL and which parts SHOULD NOT. The answer to this will
determine the EFFICIENCY of the application. The following graph summarizes the situation for
the programmer:
| *
| *
| *
% of | *
appli- | *
cations | *
| *
| *
| *
| *
| *
| ****
| ****
| ********************
+-----------------------------------
communication time/processing time
In a perfect parallel computer, the ratio of communication/processing would be equal and
anything that is CONCURRENT could be implemented in PARALLEL. Unfortunately, real
parallel computers, including shared memory machines, are subject to the effects described in
this graph. When designing a Beowulf, the user may want to keep this graph in mind because
parallel efficiency depends upon the ratio of communication time to processing time for A
SPECIFIC PARALLEL COMPUTER. Applications may be portable between parallel
computers, but there is no guarantee they will be efficient on a different platform.
IN GENERAL, THERE IS NO SUCH THING AS A PORTABLE AND EFFICIENT
PARALLEL PROGRAM
There is yet another consequence of the above graph. Since efficiency depends upon the
comm./process. ratio, changing just one component of the ratio does not necessarily mean a
specific application will perform faster. A change in processor speed, while keeping the
communication speed the same, may have non-intuitive effects on your program. For example,
doubling or tripling the CPU speed, while keeping the communication speed the same, may
make some previously efficient PARALLEL portions of your program more efficient if they are
executed SEQUENTIALLY. That is, it may now be faster to run the previously PARALLEL
parts as SEQUENTIAL. Furthermore, running inefficient parts in parallel will actually keep your
application from reaching its maximum speed. Thus, by adding a faster processor, you may
actually slow down your application (you are keeping the new CPU from running at its
maximum speed for that application).
UPGRADING TO A FASTER CPU MAY ACTUALLY SLOW DOWN YOUR
APPLICATION
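The warning above can be illustrated with a toy model (made-up numbers, not a benchmark):
with communication time held fixed, a faster CPU shrinks only the compute term, so a part that
was worth running in PARALLEL can become faster run SEQUENTIALLY.

```cpp
// Toy illustration of the CPU-upgrade paradox. cpu_speedup scales compute
// time only; communication time stays fixed.
double seq_time(double compute_s, double cpu_speedup) {
    return compute_s / cpu_speedup;
}

double par_time(double compute_s, double comm_s, int nodes, double cpu_speedup) {
    return compute_s / cpu_speedup / nodes + comm_s;
}

// Old CPU (speedup 1.0): 8 s of work, 3 s of communication, 4 nodes:
//   sequential 8 s, parallel 8/4 + 3 = 5 s      -> PARALLEL wins.
// CPU tripled (speedup 3.0), communication unchanged:
//   sequential ~2.67 s, parallel ~0.67 + 3 s    -> SEQUENTIAL wins.
```

The decision of which CONCURRENT parts to run in PARALLEL therefore has to be revisited
whenever the comm./process. ratio of the machine changes.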
So, in conclusion, to know whether or not you can use a parallel hardware environment, you
need some insight into the suitability of a particular machine for your application. You need to
look at many issues, including CPU speeds, compiler, message passing API, network, etc. Please
note that just profiling an application does not give the whole story. You may identify a
computationally heavy portion of your program, but you do not know the communication cost
for this portion. It may be that, for a given system, the communication cost makes parallelizing
this code inefficient.
A final note about a common misconception. It is often stated that "a program is
PARALLELIZED", but in reality only the CONCURRENT parts of the program have been
located. For all the reasons given above, the program is not PARALLELIZED. Efficient
PARALLELIZATION is a property of the machine.
Object-oriented design
From Wikipedia, the free encyclopedia
Object-oriented design is the process of planning a system of interacting objects for the purpose
of solving a software problem. It is one approach to software design.
Overview
An object contains encapsulated data and procedures grouped together to represent an entity. The
'object interface', how the object can be interacted with, is also defined. An object-oriented
program is described by the interaction of these objects. Object-oriented design is the discipline
of defining the objects and their interactions to solve a problem that was identified and
documented during object-oriented analysis.
From a business perspective, Object Oriented Design refers to the objects that make up that
business. For example, in a certain company, a business object can consist of people, data files
and database tables, artifacts, equipment, vehicles, etc.
What follows is a description of the class-based subset of object-oriented design, which does not
include object prototype-based approaches where objects are not typically obtained by instancing
classes but by cloning other (prototype) objects.
Structured Systems Analysis and Design Method (SSADM) is a systems approach to the
analysis and design of information systems. SSADM was produced for the Central Computer and
Telecommunications Agency (now Office of Government Commerce), a UK government office
concerned with the use of technology in government, from 1980 onwards.
Contents
• 1 Overview
• 2 History
• 3 SSADM techniques
• 4 Stages
○ 4.1 Stage 0 - Feasibility study
○ 4.2 Stage 1 - Investigation of the current
environment
○ 4.3 Stage 2 - Business system options
○ 4.4 Stage 3 - Requirements specification
○ 4.5 Stage 4 - Technical system options
○ 4.6 Stage 5 - Logical design
○ 4.7 Stage 6 - Physical design
• 5 Advantages and disadvantages
• 6 References
• 7 External links
Overview
SSADM is a waterfall method by which an Information System design can be arrived at.
SSADM can be thought to represent a pinnacle of the rigorous document-led approach to system
design, and contrasts with more contemporary Rapid Application Development methods such as
DSDM.
SSADM is one particular implementation and builds on the work of different schools of
structured analysis and development methods, such as Peter Checkland's Soft Systems
Methodology, Larry Constantine's Structured Design, Edward Yourdon's Yourdon Structured
Method, Michael A. Jackson's Jackson Structured Programming, and Tom DeMarco's Structured
Analysis.
The names "Structured Systems Analysis and Design Method" and "SSADM" are now
Registered Trade Marks of the Office of Government Commerce (OGC), which is an Office of
the United Kingdom's Treasury.
History
• 1980: Central Computer and Telecommunications Agency (CCTA) evaluate
analysis and design methods.
• 1981: Learmonth & Burchett Management Systems (LBMS) method chosen
from shortlist of five.
• 1983: SSADM made mandatory for all new information system developments
• 1984: Version 2 of SSADM released
• 1986: Version 3 of SSADM released, adopted by NCC
• 1988: SSADM Certificate of Proficiency launched, SSADM promoted as ‘open’
standard
• 1989: Moves towards Euromethod, launch of CASE products certification
scheme
• 1990: Version 4 launched
• 1993: SSADM V4 Standard and Tools Conformance Scheme Launched
• 1995: SSADM V4+ announced, V4.2 launched
SSADM techniques
Data Flow Modeling is the process of identifying, modeling and documenting how data moves
around an information system. It examines processes (activities that transform data from one
form to another), data stores (the holding areas for data), external entities (what sends data into
a system or receives data from a system), and data flows (routes by which data can flow).
Entity Event Modeling is the process of identifying, modeling and documenting the events that
affect each entity and the sequence in which these events occur.
Stages
The SSADM method involves the application of a sequence of analysis, documentation and
design tasks concerned with the following.
Stage 0 - Feasibility study
In order to determine whether a given project is feasible, there must be some form of
investigation into the goals and implications of the project. For very small scale projects this
may not be necessary at all, as the scope of the project is easily apprehended. In larger projects,
a feasibility study may still be carried out, but in an informal sense, either because there is no
time for a formal study or because the project is a “must-have” and will have to be done one
way or the other.
When a feasibility study is carried out, there are four main areas of consideration:
• Technical - is the project technically possible?
• Financial - can the business afford to carry out the project?
• Organizational - will the new system be compatible with existing practices?
• Ethical - is the impact of the new system socially acceptable?
To answer these questions, the feasibility study is effectively a condensed version of a full-
blown systems analysis and design. The requirements and users are analyzed to some extent,
some business options are drawn up, and even some details of the technical implementation are
considered.
The product of this stage is a formal feasibility study document. SSADM specifies the sections
that the study should contain including any preliminary models that have been constructed and
also details of rejected options and the reasons for their rejection.
Stage 1 - Investigation of the current environment
This is one of the most important stages of SSADM. The developers of SSADM understood that
though the tasks and objectives of a new system may be radically different from the old system,
the underlying data will probably change very little. By coming to a full understanding of the
data requirements at an early stage, the remaining analysis and design stages can be built up on a
firm foundation.
In almost all cases there is some form of current system even if it is entirely composed of people
and paper. Through a combination of interviewing employees, circulating questionnaires,
making observations and studying existing documentation, the analyst comes to a full
understanding of the system
as it is at the start of the project. This serves many purposes:
• the analyst learns the terminology of the business, what users do and how
they do it
• the old system provides the core requirements for the new system
• faults, errors and areas of inefficiency are highlighted and their reparation
added to the requirements
• the data model can be constructed
• the users become involved and learn the techniques and models of the
analyst
• the boundaries of the system can be defined
The products of this stage are:
• Users Catalogue describing all the users of the system and how they interact
with it
• Requirements Catalogue detailing all the requirements of the new system
• Current Services Description, further composed of:
• Current environment logical data structure (ERD)
• Context diagram (DFD)
• Levelled set of DFDs for current logical system
• Full data dictionary including relationship between data stores and entities
To produce the models, the analyst works through the construction of the models as we have
described. However, the first set of data-flow diagrams (DFDs) is the current physical model,
that is, with full details of how the old system is implemented. The final version is the current
logical model which is essentially the same as the current physical but with all reference to
implementation removed together with any redundancies such as repetition of process or data.
In the process of preparing the models, the analyst will discover the information that makes up
the users and requirements catalogues.
Stage 2 - Business system options
Having investigated the current system, the analyst must decide on the overall design of the new
system. To do this, he or she, using the outputs of the previous stage, develops a set of business
system options. These are different ways in which the new system could be produced varying
from doing nothing to throwing out the old system entirely and building an entirely new one. The
analyst may hold a brainstorming session so that as many and various ideas as possible are
generated.
The ideas are then collected to form a set of two or three different options which are presented to
the user. The options consider the following:
• the degree of automation
• the boundary between the system and the users
• the distribution of the system, for example, is it centralized to one office or
spread out across several?
• cost/benefit
• impact of the new system
Where necessary, the option will be documented with a logical data structure and a level 1 data-
flow diagram.
The users and analyst together choose a single business option. This may be one of the ones
already defined or may be a synthesis of different aspects of the existing options. The output of
this stage is the single selected business option together with all the outputs of stage 1.
Stage 3 - Requirements specification
This is probably the most complex stage in SSADM. Using the requirements developed in stage
1 and working within the framework of the selected business option, the analyst must develop a
full logical specification of what the new system must do. The specification must be free from
error, ambiguity and inconsistency. By logical, we mean that the specification does not say how
the system will be implemented but rather describes what the system will do.
To produce the logical specification, the analyst builds the required logical models for both the
data-flow diagrams (DFDs) and the entity relationship diagrams (ERDs). These are used to
produce function definitions of every function the users will require of the system, entity
life-histories (ELHs), and effect correspondence diagrams (models of how each event interacts
with the system, a complement to entity life-histories). These are continually matched against
the requirements and, where necessary, the requirements are added to and completed.
The product of this stage is a complete Requirements Specification document which is made up
of:
• the updated Data Catalogue
• the updated Requirements Catalogue
• the Processing Specification which in turn is made up of
• user role/function matrix
• function definitions
• required logical data model
• entity life-histories
• effect correspondence diagrams
Though some of these items may be unfamiliar to you, it is beyond the scope of this unit to go
into them in great detail.
Stage 4 - Technical system options
This stage is the first step towards a physical implementation of the new system. Like the Business
System Options, in this stage a large number of options for the implementation of the new
system are generated. This is honed down to two or three to present to the user from which the
final option is chosen or synthesised.
However, the considerations are quite different, being:
• the hardware architectures
• the software to use
• the cost of the implementation
• the staffing required
• the physical limitations, such as the space occupied by the system
• the distribution, including any networks that it may require
• the overall format of the human computer interface
All of these aspects must also conform to any constraints imposed by the business such as
available money and standardisation of hardware and software.
The output of this stage is a chosen technical system option.
Stage 5 - Logical design
Though the previous stage specifies details of the implementation, the outputs of this stage are
implementation-independent and concentrate on the requirements for the human computer
interface.
The first of the three main areas of activity is the definition of the user dialogues. These are the main
interfaces with which the users will interact with the system. The logical design specifies the
main methods of interaction in terms of menu structures and command structures.
The other two activities are concerned with analyzing the effects of events in updating the
system and the need to make enquiries about the data on the system. Both of these use the events,
function descriptions and effect correspondence diagrams produced in stage 3 to determine
precisely how to update and read data in a consistent and secure way.
The product of this stage is the logical design which is made up of:
• Menu structures
• Command structures
• Requirements catalogue
• Data catalogue
• Required logical data structure
• Logical process model, which includes dialogues and a model for the update and enquiry processes
Stage 6 - Physical design
This is the final stage where all the logical specifications of the system are converted to
descriptions of the system in terms of real hardware and software. This is a very technical stage
and only a simple overview is presented here.
The logical data structure is converted into a physical architecture in terms of database
structures. The exact structure of the functions and how they are implemented is specified. The
physical data structure is optimized where necessary to meet size and performance requirements.
The product is a complete Physical Design that tells software engineers how to build the system,
with specific details of hardware and software, to the appropriate standards.
Advantages and disadvantages
Using this methodology involves a significant undertaking which may not be suitable to all
projects.
The main advantages of SSADM are:
• Three different views of the system
• Mature
• Separation of logical and physical aspects of the system
• Well-defined techniques and documentation
• User involvement
The size of SSADM is a big hindrance to using it in all circumstances. There is a large
investment in cost and time in training people to use the techniques. The learning curve is
considerable as not only are there several modeling techniques to come to terms with, but there
are also a lot of standards for the preparation and presentation of documents.