
CSQA Preparation

Quality Principles

Quality Principles
Definitions of Quality
Quality Concepts
Quality Objectives
Quality Attributes
QA vs. QC

Quality Pioneers
Quality Vocabulary

Definitions of Quality
Quality Software and Software Quality
Software that exhibits all the functional capabilities and non-functional attributes that ensure it can be put to all its intended uses with the least effort, inconvenience and resource cost to the user is Quality Software.

Software Product Evaluation (ISO/IEC 9126) lists six key factors in producing Quality Software:
functionality
reliability
usability
efficiency
maintainability
portability

Definitions of Software Quality


"Quality is Conformance to requirements" - CROSBY

"Software Quality means fitness for purpose" - OULD


"Quality is all the features that allow a product to satisfy stated or implied needs at an affordable cost" - ISO 8402

Definitions of Software Quality


GARVIN gives five views of Quality:
transcendent
product-based
user-based
manufacturing-based
value-based

Conclusions
It is generally accepted that the quality of the process plays a crucial role in determining the quality of the product.
Quality must be built into software from the outset - it cannot be added on later.

It is people that determine whether or not a quality product is produced

Quality Concepts
Cost Of Quality
Plan-Do-Check-Act
Six Sigma
Benchmarking
Continuous Improvement
Best Practices

Software Quality Concepts


Quality is conformance to product requirements, and quality should be free.
Quality is achieved through prevention of defects.
Quality control is aimed at finding problems as early as possible and fixing them.
Doing things right the first time is the performance standard; it results in zero defects and saves the expense of doing things over.
The expense of quality comes from nonconformance to product requirements.

Software Quality Concepts


Quality is what distinguishes a good company from a great one.
Quality is meeting or exceeding our customer's needs and requirements.
Software Quality is measurable.
Quality is continuous improvement.
The quality of a software product comes from the quality of the process used to create it.

Cost of Quality
Quality costs are the costs associated with preventing, finding and correcting defective work.
One of the key functions of a quality engineer is the reduction of the total cost of quality associated with a product.

Prevention Costs
These are costs of activities specifically designed to prevent poor quality (coding errors, design errors, bad documentation, unmaintainable code).
E.g., staff training, requirements analysis, fault-tolerant design, defensive programming, usability analysis, clear specifications, accurate internal documentation, evaluation of the reliability of development tools.

Appraisal Costs
These are costs of activities designed to find quality problems, such as code inspections and any type of testing.
E.g., design reviews, code inspections, white box testing, black box testing, training testers, beta testing, test automation, usability testing.

Failure Costs
Costs that result from poor quality, such as the cost of fixing bugs and the cost of dealing with customer complaints.
Internal failure costs are failure costs that arise before your company supplies the product to the customer.
External failure costs are failure costs that arise after your company supplies the product to the customer.

Internal failure Costs


Bug fixes
Regression testing
Wasted in-house user time
Wasted tester time
Wasted writer time
Wasted marketer time
Wasted advertisements
Direct cost of late shipment
Opportunity cost of late shipment

External failure costs


Technical support calls, Investigation of customer complaints, refunds and recalls, coding/testing of interim bug fix releases, shipping of updated product, added expense of supporting multiple versions of the product, lost sales, lost customer goodwill, warranty costs, liability costs, penalties.

Total Cost of Quality = Prevention + Appraisal + Internal failure + External failure
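The total-cost-of-quality formula above is simple arithmetic; a minimal sketch follows, with illustrative figures that are not from the slides:

```python
def total_cost_of_quality(prevention, appraisal, internal_failure, external_failure):
    """Total Cost of Quality = Prevention + Appraisal + Internal failure + External failure."""
    return prevention + appraisal + internal_failure + external_failure

# Illustrative annual figures (invented for the example)
costs = {
    "prevention": 12_000,        # training, defensive design, tool evaluation
    "appraisal": 18_000,         # inspections, testing
    "internal_failure": 25_000,  # bug fixes and rework before release
    "external_failure": 40_000,  # support calls, refunds, lost goodwill
}
print(total_cost_of_quality(**costs))  # 95000
```

A quality engineer would track how shifting spend toward prevention and appraisal reduces the (usually larger) failure categories.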

Plan-Do-Check-Act

Step 1 Plan - Identify the Problem


Select the problem to be analyzed
Clearly define the problem and establish a precise problem statement
Set a measurable goal for the problem solving effort
Establish a process for coordinating with and gaining approval of leadership

Step 2 Plan - Analyze the Problem


Identify the processes that impact the problem and select one
List the steps in the process as it currently exists
Map the process
Validate the map of the process
Identify potential causes of the problem

Step 2 Plan - Analyze the Problem (continued)


Collect and analyze data related to the problem
Verify or revise the original problem statement
Identify root causes of the problem
Collect additional data if needed to verify root causes

Step 3 Do - Develop Solutions


Establish criteria for selecting a solution
Generate potential solutions that will address the root causes of the problem
Select a solution
Gain approval and support for the chosen solution
Plan the solution

Step 4 Do - Implement Solutions


Implement the chosen solution on a trial or pilot basis
If the Problem Solving Process is being used in conjunction with the Continuous Improvement Process, return to Step 6 of the Continuous Improvement Process
If the Problem Solving Process is being used standalone, continue to Step 5

Step 5 Check - Evaluate the Results


Gather data on the solution
Analyze the data on the solution

Achieved the Desired Goal?


If YES, go to Step 6. If NO, go back to Step 1.

Step 6 Act - Standardize the solution


Identify systematic changes and training needs for full implementation
Adopt the solution
Plan ongoing monitoring of the solution
Continue to look for incremental improvements to refine the solution
Look for another improvement opportunity

Six Sigma
The word 'Sigma' is a statistical term that measures how far a given process deviates from perfection. The significance of Six Sigma is that if you can measure how many defects you have in a process, you can systematically figure out how to eliminate them and get as close to 'zero defects' as possible.
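The measurement behind this can be sketched with the standard DPMO (defects per million opportunities) arithmetic. The 1.5-sigma long-term shift below is the common industry convention, and the figures are illustrative:

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Process sigma, including the conventional 1.5-sigma long-term shift."""
    process_yield = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(process_yield) + 1.5

# Illustrative: 170 defects across 10,000 units with 5 opportunities each
print(round(dpmo(170, 10_000, 5)))   # 3400
print(round(sigma_level(3.4), 2))    # 6.0 -- the "Six Sigma" level of 3.4 DPMO
```

This is why "Six Sigma" is shorthand for a process producing at most 3.4 defects per million opportunities.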

Key Concepts of Six Sigma


Critical to Quality: attributes most important to the customer
Defect: failing to deliver what the customer wants
Process Capability: what your process can deliver
Variation: what the customer sees and feels
Stable operations: ensuring consistent, predictable processes to improve what the customer sees and feels
Design: designing to meet customer needs and process capability

Critical to Quality
Understand customers' needs and expectations by employing six approaches to communicating with customers.
Measure business performance against dynamic customer requirements and respond to changing marketplace conditions.
Quality function deployment (QFD) and failure modes and effects analysis (FMEA) can help identify critical-to-quality characteristics.

Defect
Reducing the defect rate
Determining the defect cost
Mistake-proofing techniques eliminate the sources of errors and ensure that a process is free of defects.

Process Capability
Understanding Process Capability principles and calculating Process Capability are integral to staying competitive and meeting customer requirements.
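Process capability is commonly calculated with the standard Cp and Cpk indices; this sketch uses invented spec limits and process statistics purely for illustration:

```python
def cp(usl, lsl, sigma):
    """Potential capability: specification width over six process sigmas."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Actual capability: penalizes a process that is off-center."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Illustrative: spec limits 4.0-10.0, process mean 6.5, process sigma 0.5
print(cp(10.0, 4.0, 0.5))                  # 2.0
print(round(cpk(10.0, 4.0, 6.5, 0.5), 2))  # 1.67 -- lower because the mean is off-center
```

A Cpk of 2.0 corresponds to a process whose nearest spec limit sits six sigmas from the mean, which is the geometric picture behind the Six Sigma name.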

Variation
There are two sources of variation: common cause and special cause.
Multi-vari analysis offers a means of reducing large numbers of unrelated causes of variation to a family of related causes.
The aim is to reduce common-cause variation so that the distribution has a very small standard deviation.

Stable operations
Operational Excellence methodology for identifying the right projects, using the right people to lead projects, and the right tools and roadmap
Turning quality into a management system
Improving cycle time to process applications for long-term disability benefits

Design
Follow a four-phase process to achieve design: Identify, Design, Optimize and Validate.
Design helps eliminate designed-in quality problems that account for 70-80% of defects.
Link Six Sigma with QS 9000.

Benchmarking
The continuous process of measuring products, services, and practices against the toughest competitors and industry leaders
The search for industry best practices that lead to superior performance

It is not a mechanism for determining resource reductions

Elements of Benchmarking
A structured process/approach
Continuous/ongoing
Involves measuring, evaluating, and comparing results (and the process of benchmarking)
Focus on best practices/results
Goal is to improve to the level of the best

Fundamentals of Benchmarking
Focus on key services/processes
Learn from others

Apply what has been learned

Types of Benchmarking
Internal - best within organization
Competitive - best within competition

Functional - best within industry


Collaborative - best within voluntary network

10 Step Benchmarking Process


Planning
Identify benchmarking subject & team
Identify & select benchmarking partners

Identify data collection techniques

Analysis
Determine current performance gap
Project future performance levels

10 Step Benchmarking Process (continued)


Integration
Communicate findings & gain acceptance
Establish functional improvement goals

Action
Develop action plan

Implement plans & monitor progress


Recalibrate & reset benchmark performance levels
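The analysis phase of the 10-step process (determining the current gap and projecting future levels) can be sketched as simple arithmetic; the metric and figures below are invented for illustration:

```python
def performance_gap(our_value, benchmark_value):
    """Positive gap means the benchmark partner outperforms us (lower is better here)."""
    return our_value - benchmark_value

def project(current, annual_improvement_rate, years):
    """Project a future performance level assuming a constant yearly improvement rate."""
    return current * (1 - annual_improvement_rate) ** years

# Illustrative: defect-fix cycle time in days, lower is better
print(performance_gap(12.0, 8.0))        # 4.0 -- days behind the benchmark partner
print(round(project(12.0, 0.15, 2), 2))  # 8.67 -- our projected level after two years at 15%/yr
```

Projecting both your own and the partner's trajectory shows whether the gap will close or widen, which is what the "recalibrate & reset" action step revisits.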

Benchmarking Made easy


Observe
Who's best? How do you know? Identify practices.

Understand
Are others better? Why? How much better? What can be adopted?

Act
Commit to findings. Set goals. Communicate new direction. Take action to change. Recalibrate

Common Pitfalls
No clear purpose/wrong purpose
Poor team balance
Too many subjects
Too many metrics/poor metrics
No management buy-in
Unrealistic schedule
Wrong partners

Accelerators of Benchmarking
Leadership commitment
Organizational preparation
Identification & mapping of key processes
Capacity for learning
Knowledge of customers & competitors
Resources

Continuous Improvement
Software development mainly focuses on problem solving.
Continuous improvement is a TQM concept which involves examining processes to proactively determine improvement opportunities, apart from problem solving.
Both problems and opportunities can be addressed using the same methodology (continuous improvement).

Steps in continuous improvement


Describe the issue
Identify the cause

Resolve the issue


Follow up

Describe the issue


Identify improvement opportunities from sources such as trouble reports, customer complaints and employee ideas.
Utilize information gathered throughout the software development life cycle, such as defect analysis, post-project reviews or reviews of project attribute data.

Describe the issue(contd)


Once an issue has been identified, describe it fully and quantify the consequences - the cost of not fixing a problem, or the savings resulting from the improvement.
These savings come from reduced rework or increased quality and productivity.

Determine the cause


Addressing the symptoms instead of the causes leaves the problem in place, and the improvements will be ineffective.
Techniques used to determine root causes are process flow analysis, requirements reviews, cause/effect (fishbone) diagramming, and measurement.
Brainstorming is an effective technique for smaller issues.

Resolve the issue


Resolution involves brainstorming or research. It is also important to understand individual authority and responsibility in order to determine what level of commitment and involvement from others is necessary.
Impacts on time, cost, customers, suppliers and schedules should be analyzed to select the appropriate action.

Implementing the Resolution


Implementation needs to be planned to avoid problems.
This can take the form of a project plan which specifies activities, responsibilities and time frames for making process changes.
The project plan also ensures resources are available when following up on the improvement.

Follow up
Making sure that the changes have been correctly implemented and the desired outcome was achieved.
Follow-up enables organizations to show successes and demonstrate the impact of improvements.
It involves measuring the impact against the initial objectives.
More specifically, results are measured to see if costs have been reduced or avoided, whether defects have decreased, and whether productivity has increased.

Conclusions
Improving processes should be part of what is done every day.
As industries, customers and technologies grow and change, organizations need to move ahead as well; often what has worked in the past will not work in the future.
Individuals must be willing to look for ways to resolve problems and improve processes so their jobs will be efficient, effective and meet company needs.

Best Practices
What is a best practice ?
"Best practices" are documented strategies and tactics employed by highly admired companies.
Due to the nature of competition and their drive for excellence, the profiled practices have been implemented and honed to help place their practitioners among the most admired, the most profitable, and the keenest competitors in business.

Integrated Performance Systems


Key Performance Dimensions identified using Best Practices
Link Best Practices to Strategy Fulfillment
Best Practice Identification Systems
Best Practice Recognition Systems
Communicating Best Practices
Best Practice Knowledge Sharing Systems
Ongoing Nurturing of Best Practices

Quality Objectives
Improve customer satisfaction
Reduce development costs

Improve time-to-market capability


Improve processes

Customer Satisfaction
Knowing what to ask
Knowing how to ask

Knowing who to ask


Turning the results into information

Customer Satisfaction Survey


Provides management with the information they need to determine their customers' level of satisfaction with their software products and with the services associated with those products.
Technical staff can use the survey info to identify opportunities for ongoing process improvements and to monitor the impact of those improvements

C S Survey : Key Steps


Focusing on Key Customer Quality Requirements
Creating the Questionnaire

Deciding Who to Ask


Designing a Customer Satisfaction Database
Reporting Survey Results: Turning Data into Information

Reduce development costs/improve time-to-market capability

Seven ways to better software projects in terms of money, effort and quality:
Minimize overhead work during a project. Stop tinkering with the project plan, put away the audit checklists, cut meeting times to the bare minimum and keep meetings focused, and take lengthy discussions off-line.
You need to ship software, so make that your mantra. Don't do any activity that makes it harder - and to make sure you ship good software, keep reading.
Cut buggy features and excess new functionality to meet release dates; this will allow you to get revenue in the door sooner rather than later. It is hard to decide what and when to let go, so get help deciding. Prioritization of features will also give your project team more guidance on the key high-priority items, so they know where to spend their time instead of investing long hours on less important or trivial features.

Manage stakeholders' expectations. Work with your stakeholders to make sure that, as things change and schedules slip, they know what's going on. They can help with determining priorities, and provide valuable input into making decisions. At the same time, they will appreciate you being honest with them about any changes needed or problems encountered. Asking your stakeholders for input will make them part of the process - and they will have a greater interest in seeing you succeed. As well, the stakeholders might be willing to let certain things go - such as being more flexible on a shipping date in order to keep a feature, or on the feature set in order to meet a release date.
Before buying a tool, consider how well the tool fits into the processes you are currently working with, regardless of whether those processes are formal or informal. Remember, you need to spend time and money to train your resources to integrate the new tool into your project effectively. If you can't do that and ship your next release simultaneously, while doing a good job at each, the tool will quickly become shelf-ware and the organization will have lost money, time, and buy-in for future tool purchases.

Proper risk analysis can provide guidance for you and your team in deciding what you must do, what you can avoid, and what you are not going to worry about. Perhaps appoint one person on the project team to be the "Risk Officer", responsible for tracking the project's risks and the status of mitigation/avoidance plans, and to report on this information to the rest of the team during project status meetings. Regular risk reviews and implementation of mitigation strategies will make your software journey a safer, more successful experience.

Look for Quick Wins Quick Wins are things that are easy to implement or adopt and have a large potential return on investment (ROI) in the short term. Continually looking for Quick Wins in a planned manner means you are now doing continuous process improvement at a rate that makes sense for you. Quick wins are the kinds of small changes that you can make in the way you do things that, while consuming less than 5% of your daily available time, can add up to significant savings (time, effort) at the end of the project.

Skip the training - Hire the brains. Bringing in experts in the present can pave the way for leveraging less senior resources in the future. This is one good way to avoid or mitigate risks. These "hired guns" are experts in ramping up quickly on new projects, and can become effective in a very short period of time.
In short: keep your product and processes simple; do it well manually first and automate only when it makes sense; make changes and improvements in process as you go rather than all at once; and never stop looking for better ways to do things that make sense for your organization, product, and market.

Quality Attributes
Reliability
Maintainability

Usability
Portability

Software Reliability
Reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time.
Software Reliability is the application of statistical techniques to data collected during system development and operation to specify, predict, estimate, and assess the reliability of software-based systems.
Software Reliability Engineering (SRE) is a standard, proven best practice that makes testing more reliable, faster, and cheaper. It can be applied to any system using software and to frequently-used members of software component libraries.

Software Reliability Engineering


Set quantitative reliability objectives that balance customer needs for reliability, timely delivery, and cost
Characterize quantitatively how users will employ your product
Track reliability during test
Maximize the efficiency of development and test by focusing resources on the operations that are most used and/or most critical, by realistically reproducing field conditions, and by delivering just enough reliability

The advantages of SRE


SRE is unique in helping testers ensure the necessary reliability in minimum delivery time and at minimum cost
It increases tester productivity and reduces a product's time to market
It improves customer satisfaction and reduces the risk of angry customers

Applying SRET
SRET should be applied over the entire software life cycle, including all releases, with particular focus on testing.
Apply it to feature, load, performance, regression, certification or acceptance testing.

Determine which associated system requires separate testing


Decide which type of SRET is required for each system to be tested
A typical decision is to apply development testing to the product and certification testing to the operating system

Applying SRET(Contd)
Determine the operational modes (an operational mode is a distinct pattern of system use and/or environment that needs separate testing)
Define failure in terms of severity classes

Set failure intensity objectives


Engineer reliability strategies (find the right balance among them to achieve the failure intensity objective in the required time and at minimum cost)
Develop operational profiles (an operational profile is simply the set of operations and their probabilities of occurrence)

Applying SRET(Contd)
Prepare for testing (includes preparing test cases, test procedures and any automated tools decided upon)
Execute tests (conduct testing, then identify failures, determine when they occurred, and establish the severity of their impact)
Interpret failure data in development testing and certification testing (apply failure data to guide decisions)
For further details refer to http://www.cs.bsu.edu/homepages/metrics/cs639d/CS639WWW/musa-oneil/index.htm

Maintainability
Qualitative Definition
the characteristics of material design and installation which make it possible to meet operational objectives with a minimum expenditure of maintenance effort under operational environmental conditions in which scheduled and unscheduled maintenance will be performed

Quantitative Definition
maintainability is a characteristic of design and installation which is expressed as the probability that an item will be restored to specified conditions within a given period of time when maintenance action is performed in accordance with prescribed procedures and resources
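The quantitative definition above (probability of restoration within a given time) is often made concrete with an exponential repair-time model, M(t) = 1 - exp(-t/MTTR). This model choice and the MTTR figure are illustrative assumptions:

```python
import math

def maintainability(t_hours, mttr_hours):
    """Probability that a repair completes within t_hours (exponential repair model)."""
    return 1 - math.exp(-t_hours / mttr_hours)

# Illustrative: mean time to repair of 2 hours
print(round(maintainability(2.0, 2.0), 3))  # 0.632 -- chance of restoring within one MTTR
print(round(maintainability(6.0, 2.0), 3))  # 0.95  -- chance of restoring within three MTTRs
```

Other repair-time distributions (e.g. lognormal) are also used; the exponential form is simply the most common starting point.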

Software Maintenance
Is the process of modifying the existing program to keep the system up and functioning
Is the final phase of the software life cycle in the waterfall model

Role of a Software Maintainer


Understand the software completely and the changes to be made for modification
Modify the software to incorporate changes by creating new programs or changing existing programs
Revalidate the modified software to ensure correctness and to ensure that no side effects were introduced
Interact with customers to identify and correct all problems

Software maintenance problems


Program understanding

Top-down approach
Poor software design
Poorly coded software
Outdated hardware
Lack of common data definitions
More than one programming language
Increasing inventory
Excessive resource requirements

User requirements

Classes of Software Maintenance


Corrective maintenance: improving a system as the result of an error
Adaptive maintenance: improving a system as a result of changes in the environment
Perfective maintenance: improving a system as a result of the needs of end users
Emergency maintenance: unscheduled corrective maintenance to prevent disaster
Preventive maintenance: developing maintainable software that in turn reduces the amount of maintenance expense

Standards for software maintenance


Phase 1: Problem/modification identification and classification - the maintainer identifies, classifies and prioritizes the modification request
Phase 2: Analysis - the maintainer uses repository information and the modification request to analyze the scope of the modification and devises a preliminary plan for design, implementation, test and delivery
Phase 3: Design - the maintainer designs the modification to the system
Phase 4: Implementation - the maintainer implements the changes to the system

Standards (Contd)
Phase 5: Regression/System Testing - the system is tested for completeness and accuracy and to validate that the modified code does not introduce faults
Phase 6: Acceptance Testing - the system is tested to ensure that the modifications are satisfactory; problems encountered are documented
Phase 7: Delivery - once the system has been approved, it is delivered to the customer

Usability - Definitions
Usability is defined in ISO 9241 part 11 as the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.
Effectiveness - how well the user achieves the goals they set out to achieve using the system
Efficiency - the resources consumed in order to achieve those goals
Satisfaction - how the user feels about their use of the system

Usability - Definitions
User - the person who will use the product to do their job

User-centered design - an approach to design in which a high level of usability of the end product is an objective. It includes involving users, obtaining their feedback on the design and use of the system, providing prototypes for users to try out and redesigning as a result of user feedback.
Usability evaluation - the process by which the level of usability of a system is measured. It involves observing users as they try out certain aspects of a product or process.

Aspects of usability
User-centered design
Designers must understand who the users will be and what tasks they will do. If possible, designers should learn to do some or all of the users' tasks. This must take place before the system design work starts, and design for usability must start by creating a usability specification.

Participative design
A panel of expected users should work closely with the design team, especially during the early formulation stages and when creating the usability specification. To enable these users to make useful contributions, they will need to be shown a range of possibilities and alternatives by means of mock-ups and simulations.

Usability Aspects
Experimental design
Early in the development process, the expected users should do pilot trials and then subsequently use the simulations, and later the prototypes, to do real work. Whenever possible, alternative versions of important features and interfaces should be simulated or prototyped for evaluation by comparative testing.

Iterative design
The difficulties revealed in user tests must be remedied by redesign, so the cycle design, test and measure, redesign must be repeated as often as is necessary until the usability specification is satisfied.

Usability Aspects (Contd)


User Supportive design
Careful attention to user support facilities such as documentation, help screens can significantly assist usability

Design for all


By taking account of the needs of people with, say, impaired hearing, vision, speech or motor skills, the future product will be more useful to a wider range of people, and be more successful as a result.

Usability activities
User requirement specification
Analysing the user population and the tasks they perform in a given working environment will help in producing a more precise user requirement specification.
Studies have found that a major cause of IT system failure is that user requirements are not identified properly, so the software was not matched to them.

Design guidelines and standards


Use and advise on design guidelines and ergonomic standards relevant to IT applications

Usability Activities(contd)
The user interface may follow the latest style guides, and it'll look great. But unless the system helps the user carry out what they want to do effectively and efficiently, this glossy front may restrict rather than help them.

Prototyping
Use the methods for making and evaluating prototypes, to validate the functions of the system and to develop the user interface.
An international rent-a-car company wanted to provide a 24-hour service via stand-alone terminals in airport lounges. By using prototyping techniques, an efficient and user-friendly design (based upon touchscreen terminals) was produced in less than two months.

Usability Activities(contd)
User Acceptance Testing
There are well-established methods for testing systems in terms of user performance and people's attitudes, which are crucial to acceptance in the workplace.
A telecommunications company developed a desktop videophone. They wanted to ensure that it could be used by senior, non-technical staff. Acceptance testing with typical users confirmed that the videophone's simple and elegant user interface would be well accepted.

Usability Activities(contd)
Introduction of new technologies
Advise on how to introduce systems into the workplace so that users learn and cope with the changes as easily as possible, without the need for expensive re-training courses and with minimal disruption to the business.
Creating a usable system will also involve considering how it will fit into the customer's organisation and be widely acceptable to its employees. This requires user involvement throughout the design process.

Usability Questions
What is user-centered design?
It means that the design is based on the needs and requirements of the users of the future system.

How can user centred design be achieved?


Firstly, by studying the users, their tasks and the environment in which the system will be located. Secondly, the system should be developed iteratively, so that it gradually meets user requirements.

How do you know when a usable design has been achieved?


As part of the user requirements, a set of usability goals should be defined in specific terms. Usability testing can then show to what extent the goals have been achieved.

Usability Questions(contd)
What would be an example of such a goal ?
For example, for a new telephone system, a usability goal might be: "It should be possible for 95% of users to make a call successfully within 30 seconds, making no errors".
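A goal like the one above can be checked mechanically against usability-test results. The session data below is invented for illustration:

```python
# The slide's example goal: 95% of users make the call within 30 seconds, no errors.
GOAL_SUCCESS_RATE = 0.95
TIME_LIMIT_S = 30.0

def goal_met(sessions):
    """sessions: list of (seconds_taken, error_count) tuples, one per user."""
    successes = sum(1 for t, errors in sessions if t <= TIME_LIMIT_S and errors == 0)
    return successes / len(sessions) >= GOAL_SUCCESS_RATE

# Illustrative trial data: one user over time, one user made an error
trials = [(22.0, 0), (28.5, 0), (31.0, 0), (18.0, 0), (25.0, 1),
          (20.0, 0), (27.0, 0), (24.0, 0), (19.5, 0), (29.0, 0)]
print(goal_met(trials))  # False -- only 8 of 10 sessions qualify, below 95%
```

This is exactly the "criteria levels" comparison the later question on measuring usability describes.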

How do you identify such goals ?


By studying the user performing tasks, and identifying the required performance levels for those tasks.

How do you define usability ?


It is the effectiveness, efficiency and satisfaction with which a product can be used by a given set of users performing a given set of tasks in a given environment.

Usability Questions(contd)
Why is it such a long definition ?
This is because it recognises that usability is not a unique property. It depends on the particular circumstances in which the system will be used, i.e. its 'context of use'.

How do you go about measuring usability?


By studying the context of use of the intended product or system. From this study, a sample of typical users and tasks will emerge. Based on the tasks, a set of usability goals (or test criteria) can be defined. A set of tests can then be run with a sample of users to see whether, when they perform the defined tasks, the criteria levels are achieved.

How many users do you need ?


From experience, about 10 users are employed from each major user group e.g. 10 novices and 10 experts. In this way the results from different user groups can be compared.

Usability Standards
Standards related to human-centred design
process-oriented: these specify procedures and processes to be followed
product-oriented: these specify required attributes of the user interface

Process oriented standards


1981, Ergonomic principles in the design of work systems
1997, Human-centred design processes for interactive systems
1993, Ergonomic requirements for office work with visual display terminals (VDTs)

Usability Standards(contd)
Process oriented Standards
1993, Guidance on task requirements
Guidance on usability (how to identify the information which it is necessary to take into account when specifying or evaluating usability in terms of measures of user performance and satisfaction)
1994, Ergonomic principles related to mental workload
Evaluation of software products (the extent to which an entity satisfies stated and implied needs when used under stated conditions)

Usability Standards(contd)
Product oriented standards
Visual display requirements
Keyboard requirements
Workstation layout and postural requirements
Environmental requirements
Display requirements with reflections
Requirements for displayed colours
Requirements for non-keyboard input devices
Dialogue (between human & information systems) principles

Usability Standards(contd)
Product oriented Standards
Presentation of information of visual displays
User guidance for user interfaces User-computer Menu dialogues

Command language user-computer dialogues


Direct Manipulation dialogues
Form filling dialogues

Dialogue interaction - cursor control for text editing


Framework for icon symbols and functions

Usability Evaluation Methods


Testing Approach
Here representative users work on typical tasks using the system, and evaluators use the results to see how well the user interface supports the users in doing their tasks

Inspection Approach
Here usability specialists, and sometimes software developers, users and other professionals, examine usability-related aspects of a user interface

Inquiry approach
Here usability evaluators obtain information about users' likes, dislikes, needs, and understanding of the system by talking to them, observing them using the system in real work or letting them answer questions verbally or in written form

Portability - Definition
Portability is an attribute which may be possessed by a software unit to a specific degree with respect to a specific class of environments. Portability may also be an attribute of auxiliary elements such as data, documentation, and human experience. A software unit is portable (exhibits portability) across a class of environments to the degree that the cost to transport and adapt it to a new environment in the class is less than the cost of redevelopment.
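The cost comparison in this definition is sometimes expressed numerically as a degree of portability. A minimal sketch, assuming illustrative cost figures and the common formulation DP = 1 - (cost to port / cost to redevelop):

```python
# Degree of portability as a cost ratio (all figures are illustrative).
# DP close to 1: porting is much cheaper than redevelopment;
# DP <= 0: porting costs as much as, or more than, redeveloping.

def degree_of_portability(port_cost: float, redevelop_cost: float) -> float:
    """DP = 1 - (cost to transport and adapt / cost of redevelopment)."""
    return 1.0 - port_cost / redevelop_cost

# Example: porting costs 20 person-days; redevelopment would cost 100
dp = degree_of_portability(20, 100)
print(f"degree of portability: {dp:.2f}")

# Per the definition above, the unit is portable across this environment
# class when DP > 0, i.e. porting is cheaper than redevelopment.
assert dp > 0
```

The formula simply restates the definition: a positive degree means transporting and adapting costs less than redeveloping, so porting pays off.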

Portability - Key Concepts


Software Units
Environments
Classes of Environments
Degree of Portability
Costs and benefits
Phases of Porting: Transportation & Adaptation
Porting vs Redevelopment

Why should we port?


Many Hardware and Software Platforms
We want familiar software in different environments

We want easier migration to new system versions and to totally new environments
We want more new development, less redevelopment, and lower software costs

Who should care about Portability?


Users
Portable software should be cheaper

Portable software should work the same in various environments

Developers
Portable software costs less to develop for multiple environments

Portable software is easier to maintain for multiple environments

Vendors
Portable software is easier to support

Users will repurchase the same product for new environments

Managers
Portable software reduces maintenance costs

Portability reduces headaches during product enhancement

What can we port ?


Programs, Components, Systems
Data
Libraries
Tools
System Software
Documentation
Experience

Levels of Porting
Source
Programming language or higher-level form. Adaptation is feasible.

Binary
Executable form. Most convenient, but adaptation is difficult.

Intermediate
May allow limited adaptation without exposing source.

Typical Activities
Adapting existing programs to new environments

Designing programs to be portable


Improving portability of existing programs System support for portability

Goals & Tasks


Application Installers
Goal: To port applications to specific new environments.

Tasks: Analyze environment; adapt and compile (perhaps); configure and install.
Resources: Source (perhaps) or executable files; application documentation; system documentation.

Application designers
Goal: To design applications which can be easily ported among different environments.
Tasks: Define environment classes; develop portable design; document for portability.
Resources: Requirements specification; language specification and other relevant standards.

Goals & Tasks


System Implementors
Goal: To provide mechanisms for specific environments which facilitate porting.
Tasks: Identify relevant services and resources; support standards; document for porting.
Resources: System documentation; relevant standards.

Three Related Concepts


Portability
Ability to use the same program (or component) in multiple environments

Reusability
Ability to use the same software component in multiple programs

Interoperability
Ability of different programs to "work together," especially by exchanging data

Portability Myths
It is claimed that portability has been solved by
standard languages (e.g., FORTRAN, COBOL, Ada, C, C++, Java)
universal operating systems (e.g., Unix, MS-DOS, Windows, JavaOS)
universal platforms (e.g., IBM-PC, SPARC, JavaVM)
open systems and POSIX
OOP and distributed object models (e.g., OLE, CORBA)
Software patterns, architectures, and UML
The World Wide Web

Quality Pioneers
What relevance do general quality principles developed in other fields have to software development and software quality? This is addressed by studying the principles advocated by Quality Experts

JURAN - Strategy for achieving Quality

structured annual improvements in quality

a massive quality-oriented training programme


upper management must lead company's approach to product quality

JURAN -Achieving Quality Improvement


Study the symptoms of defects and failures

Develop a theory on the causes of symptoms


Test the theory until the cause is known

Stimulate the appropriate remedial action

Juran - Classification of defects


Worker Controllable and Management Controllable
Worker Responsibility
worker knows what to do
worker knows result of own work

worker has means of controlling result

Management Responsibility
Sequence of events for improving quality and reducing quality costs
Universal feedback loop for control
Data collection and analysis are fundamental
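Juran's emphasis on data collection and analysis is often illustrated with the Pareto principle he popularized: a "vital few" causes typically account for most defects, which tells management where remedial action pays off. A minimal sketch with made-up defect counts:

```python
# Pareto-style analysis of defect causes (all counts are made up).
# Juran's point: a few causes usually account for most defects, so
# collected data shows where management-controllable action is needed.

defects = {
    "requirements misunderstood": 45,
    "interface errors": 25,
    "logic errors": 15,
    "configuration": 8,
    "documentation": 4,
    "other": 3,
}

total = sum(defects.values())
cumulative = 0
vital_few = []
# Walk causes from most to least frequent until ~80% of defects are covered
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    vital_few.append(cause)
    if cumulative / total >= 0.80:
        break

print(f"{len(vital_few)} of {len(defects)} causes account for "
      f"{cumulative / total:.0%} of defects: {vital_few}")
```

With these illustrative counts, 3 of the 6 causes cover 85% of all defects, so remedial effort concentrated there gives the largest quality-cost reduction.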

Deming - 14 Principles
Create constancy of purpose towards improvement of product and service

Adopt the new philosophy


Cease dependence on inspection to achieve quality - build quality in, in the first place
End the practice of awarding business on the basis of price tag - get a single supplier for any one item; instead minimize total cost
Improve constantly and forever the system of production and service to improve quality and productivity - this constantly decreases costs

Deming - 14 Principles
Institute training on the job
Institute leadership. The aim of supervision is to help people do a better job
Drive out fear, so everyone may work effectively for the company
Break down the barriers between departments - work in teams
Eliminate slogans and targets for the workforce asking for zero defects and new levels of productivity. They create adversarial relationships; the bulk of the causes of low quality and low productivity belong to the system

Deming - 14 Principles
Eliminate work standards and management by objectives - substitute leadership

Remove barriers that rob workers/managers of the right to pride of workmanship - abolish annual merit rating
Institute a vigorous program of education and self improvement

Put everybody in the company to work to accomplish the transformation - the transformation is everybody's job

Deming - Contribution to QC
The economic and social revolution which took hold in Japan upset the economy of the world within 15 years, and shows what can be accomplished by serious study and adoption of statistical methods and statistical logic in industry at all levels, from the top downwards
The analysis of errors by either type or cause will help control errors - this is particularly important for software. The results enable improvement of the process so that fewer errors are produced
You cannot inspect quality into a product - you must build in quality right from the outset

CROSBY
Crosby suggests there are five maturing stages through which quality management evolves.
uncertainty
awakening
enlightenment

wisdom
certainty

Crosby used a Quality Management Maturity Grid to define his approach. The advantage of this is that it defines a quality improvement path for an organization, as well as a means of assessing where the organization is on that path at any time

Crosby's definition of Software Quality


"conformance to requirements
Misconceptions about software quality
quality means goodness; it cannot be defined or measured
people do not produce quality because they don't care
it costs a lot more to produce quality software
people make mistakes - it is inevitable there will be errors in large systems

There is an underlying assumption that the three goals of quality, cost and schedule are conflicting and mutually exclusive. In contrast, Deming claims that the only way to increase productivity and lower cost is to increase quality
