August 2010
Master of Computer Application (MCA) – Semester 3
MC0071 – Software Engineering– 4 Credits
(Book ID: B0808 & B0809)
Assignment Set – 2 (60 Marks)
There are so many systems in existence that complete replacement or radical restructuring is financially unthinkable for most organizations. Maintenance of old systems is increasingly expensive, so re-engineering these systems extends their useful lifetime. As discussed in Chapter 26, re-engineering a system is cost-effective when it has a high business value but is expensive to maintain. Re-engineering improves the system structure, creates new system documentation and makes the system easier to understand.
Re-engineering a software system has two key advantages over more radical approaches to system
evolution:
1. Reduced risk: There is a high risk in re-developing software that is essential for an organization. Errors may be made in the system specification, and there may be development problems.
2. Reduced cost: The cost of re-engineering is significantly less than the cost of developing new software. Ulrich (Ulrich, 1990) quotes an example of a commercial system whose re-implementation was estimated at $50 million; the system was successfully re-engineered for $12 million. If these figures are typical, re-engineering is about four times cheaper than re-writing.
The following figure illustrates a possible re-engineering process. The input to the process is a legacy program and the output is a structured, modularized version of the same program. At the same time as the program is re-engineered, the data for the system may also be re-engineered. The activities in this re-engineering process are:
1. Source code translation: The program is converted from an old programming language to a more modern version of the same language or to a different language.
2. Reverse engineering: The program is analyzed and information is extracted from it to help document its organization and functionality.
3. Program structure improvement: The control structure of the program is analyzed and modified to make it easier to read and understand.
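As a minimal sketch of step 3, program structure improvement, the following hypothetical Python routine (the function names and business rules are assumptions, not from the source) is shown in a nested legacy form and then restructured with flat guard clauses; the external behavior is identical.

```python
# Hypothetical legacy routine: deeply nested conditionals.
def classify_order_legacy(total, is_member):
    if total > 0:
        if is_member:
            if total >= 100:
                result = "member-discount"
            else:
                result = "member"
        else:
            result = "standard"
    else:
        result = "invalid"
    return result

# Structure-improved version: same behavior, but the control
# flow is flattened into guard clauses that read top to bottom.
def classify_order(total, is_member):
    if total <= 0:
        return "invalid"
    if not is_member:
        return "standard"
    if total >= 100:
        return "member-discount"
    return "member"
```

Note that only the control structure changed; any input produces the same output from both versions, which is what distinguishes structure improvement from functional change.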
B) Software Refactoring
Software refactoring is the process of changing a computer program's source code without modifying its external functional behavior, in order to improve some of the nonfunctional attributes of the software. Advantages include improved code readability and reduced complexity, which improve the maintainability of the source code, as well as a more expressive internal architecture or object model, which improves extensibility.
Refactoring is usually motivated by noticing a code smell. For example, the method at hand may be very long, or it may be a near duplicate of another nearby method. Once recognized, such problems can be addressed by refactoring the source code: transforming it into a new form that behaves the same as before but no longer "smells". For a long routine, extract one or more smaller subroutines; for duplicate routines, remove the duplication and use one shared function in their place. Failure to refactor can result in accumulating technical debt.
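The "extract subroutine" fix for a long routine can be sketched as follows; the report function and its order format are illustrative assumptions only.

```python
# Hypothetical long routine: validation, totalling and formatting
# are all tangled together in one body.
def report_long(orders):
    valid = [o for o in orders if o.get("qty", 0) > 0]
    total = sum(o["qty"] * o["price"] for o in valid)
    return f"{len(valid)} orders, total {total:.2f}"

# Refactored: each step extracted into a concise, well-named,
# single-purpose helper; external behavior is unchanged.
def _valid_orders(orders):
    return [o for o in orders if o.get("qty", 0) > 0]

def _order_total(orders):
    return sum(o["qty"] * o["price"] for o in orders)

def report(orders):
    valid = _valid_orders(orders)
    return f"{len(valid)} orders, total {_order_total(valid):.2f}"
```

The helpers can now also be reused wherever near-duplicate logic appears, which is how refactoring removes duplication.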
Maintainability: It is easier to fix bugs because the source code is easy to read and the intent of its author is easy to grasp. This might be achieved by reducing large monolithic routines to a set of individually concise, well-named, single-purpose methods, by moving a method to a more appropriate class, or by removing misleading comments.
Extensibility: It is easier to extend the capabilities of the application if it uses recognizable design patterns, and refactoring provides some flexibility where none may have existed before.
Before refactoring a section of code, a solid set of automatic unit tests is needed. The tests should
demonstrate in a few seconds that the behavior of the module is correct. The process is then an
iterative cycle of making a small program transformation, testing it to ensure correctness, and making
another small transformation. If at any point a test fails, you undo your last small change and try again in
a different way. Through many small steps the program moves from where it was to where you want it to
be. Proponents of extreme programming and other agile methodologies describe this activity as an
integral part of the software development cycle.
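The test-guarded cycle above can be sketched with Python's built-in unittest module; the slugify function and its behavior are assumptions for illustration.

```python
import unittest

# Hypothetical function currently being refactored; the tests
# below pin its external behavior so every small transformation
# can be checked in seconds.
def slugify(title):
    return "-".join(title.lower().split())

class SlugifyBehaviour(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_repeated_whitespace(self):
        self.assertEqual(slugify("a   b"), "a-b")
```

Running `python -m unittest` after every small transformation implements the iterative cycle described above: a passing suite means the change preserved behavior; a failing test means the last change is undone and attempted differently.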
Software testing is the process of validating and verifying that a software program, application or product:
o meets the business and technical requirements that guided its design and development;
o works as expected; and
o can be implemented with the same characteristics.
Software testing, depending on the testing method employed, can be implemented at any time in the
development process. However, most of the test effort occurs after the requirements have been defined
and the coding process has been completed. As such, the methodology of the test is governed by the
software development methodology adopted.
Testing methods:
o API testing (application programming interface) - testing of the application using public
and private APIs
o Code coverage - creating tests to satisfy some criteria of code coverage (e.g., the test
designer can create tests to cause all statements in the program to be executed at least
once)
o Fault injection methods - improving the coverage of a test by introducing faults to test
code paths
o Mutation testing methods
o Static testing - White box testing includes all static testing
Test coverage
White box testing methods can also be used to evaluate the completeness of a test suite that was
created with black box testing methods. This allows the software team to examine parts of a system that
are rarely tested and ensures that the most important function points have been tested.
Two common forms of code coverage are function coverage, which reports which functions were executed, and statement coverage, which reports how many lines of code were executed to complete the test.
Black box testing treats the software as a "black box"—without any knowledge of
internal implementation. Black box testing methods include: equivalence partitioning,
boundary value analysis, all-pairs testing, fuzz testing, model-based testing,
traceability matrix, exploratory testing and specification-based testing.
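Two of the black-box techniques named above can be sketched concretely; the is_valid_age function (assumed here to accept ages 0 through 120) is a hypothetical system under test, exercised without reference to its internals.

```python
# Hypothetical system under test: only its specification
# (valid ages are 0..120) is known to the tester.
def is_valid_age(age):
    return 0 <= age <= 120

# Equivalence partitioning: one representative per input class
# (below range, in range, above range).
partition_cases = [(-5, False), (40, True), (200, False)]

# Boundary value analysis: values just outside and exactly on
# each edge of the valid range.
boundary_cases = [(-1, False), (0, True), (120, True), (121, False)]

for age, expected in partition_cases + boundary_cases:
    assert is_valid_age(age) == expected
```

Both techniques derive test cases purely from the specification, which is what makes them black-box methods.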
Grey box testing (American spelling: gray box testing) involves having knowledge of
internal data structures and algorithms for purposes of designing the test cases, but
testing at the user, or black-box level. Manipulating input data and formatting output do
not qualify as grey box, because the input and output are clearly outside of the "black-
box" that we are calling the system under test. This distinction is particularly important
when conducting integration testing between two modules of code written by two
different developers, where only the interfaces are exposed for test. However,
modifying a data repository does qualify as grey box, as the user would not normally
be able to change the data outside of the system under test. Grey box testing may
also include reverse engineering to determine, for instance, boundary values or error
messages.
A) Change Management
Change requests apply not only to new or changed requirements, but also to system
failures and defects in work products.
The change request process typically contains the following steps:
• The change request is recorded
• The impact the change will have on the work product, related work products, and
schedule and cost is determined
• The change request is reviewed and agreement is reached with those affected by the
change request
• The change request is tracked to closure
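The four steps above can be modelled as a simple state progression; the class, state names and fields below are illustrative assumptions, not a real change-management tool.

```python
# Toy model of the change request process: each method corresponds
# to one step of the workflow listed above.
RECORDED, ASSESSED, APPROVED, CLOSED = "recorded", "assessed", "approved", "closed"

class ChangeRequest:
    def __init__(self, description):
        self.description = description
        self.state = RECORDED           # step 1: request is recorded

    def assess_impact(self, effort_days):
        # step 2: impact on work products, schedule and cost
        self.effort_days = effort_days
        self.state = ASSESSED

    def review(self, approved):
        # step 3: agreement reached with those affected
        self.state = APPROVED if approved else CLOSED

    def close(self):
        self.state = CLOSED             # step 4: tracked to closure

cr = ChangeRequest("fix defect in report module")
cr.assess_impact(effort_days=3)
cr.review(approved=True)
cr.close()
```

The point of the sketch is that every request moves through the same explicit states, so none can be lost before closure.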
Version and Release Management is the relatively new but rapidly growing discipline
within software engineering of managing software releases.
Some of the challenges facing a Software Release Manager include the management
of:
• Software Defects
• Issues
• Risks
• Software Change Requests
• New Development Requests (additional features and functions)
• Deployment and Packaging
• New Development Tasks
As e-commerce transforms the way business is conducted -- with companies using the
Internet to enter new markets, shrink supply chains, create value chains and meet the
challenges of increased competition and global markets -- optimization will play a key
role.
E-commerce requires intelligent supply chains, which must provide instant access to
the right data anywhere. Sophisticated software technology engines can not only
make possible real-time collaborative decision making among all partners in the supply
chain, with the web as the medium, but can also greatly increase responsiveness to
customers.
This is the emerging world of e-collaboration, which takes supply chain management
to the next level. Companies will move beyond the singular mentality of intra-company
Software technology to focus on how inter-company e-collaboration can transform
consumer demand into consumer satisfaction. For example, a company can do
forecasts collaboratively across its virtual organization, using optimized planning
applications within its manufacturing, distribution and transportation resources to meet
expected demand and actual customer orders. E-collaboration addresses volume
planning, production scheduling, sequencing, distribution management, and
procurement planning.
The Internet is already changing how companies deal with customers via quickly
emerging Customer Relationship Management (CRM) applications such as those that
include product configuration software. The best configurators offer intelligent support
to help customers select online the parts or features they want to include in their
product to meet their specifications.
New uses for software technology are appearing every day, as more and more
industries adopt the techniques and the technology. What began with airlines and
hotels is now expanding into areas that have never considered such technology,
such as retailing and financial services. Optimization is helping these industries take
advantage of changing markets and evolving opportunities, giving rise to fresh
challenges and approaches: problems that will lead scientists and engineers to the
next generation of optimization applications.
Requirements and business goals. At the solution level, diversity is driven by:
a. Project management approaches
b. General standards
c. Quality-assurance standards
d. Hardware and software tools
e. Networking tools
f. Data mining and automation tools
g. Nature, scope, and domain of applications
h. Need for business-driven software engineering
i. Secure software engineering
j. “Killer” applications
k. Mobile or wireless software engineering
These are the various factors responsible for diversity in software engineering.
Driving Forces of Diversity in Development Strategies
Diversity is a prevalent characteristic of the software process modeling literature. This
reflects the evolution in software development in response to changes in business
requirements, technological capabilities, methodologies,
and developer experience. Process diversity also reflects the changing dimensions of
project requirements, with process models maturing over time in their ability to address
evolving project requirements. Diversification is also driven by the increasing
importance of interdisciplinary views in modeling software processes.
Diversity can be acquired through inheritance as well as by overriding the
presuppositions that derive from inheritance. Cultural differences are examples of
inherited characteristics that affect the degree of diversification in an environment.
Scientific, social, political, psychological, philosophical, experiential, and other
differences modulate acquired diversity through exposure to values, education,
involvement, and interaction.
Cultural differences, technological differences and similar factors lead to
diversification, but at the same time such differences or changes also increase risk.
Hence it is important to manage diversification to improve market stability,
while at the same time reducing the risk.