
10/18/2010

Samuel Torres
Lilly del Caribe, Inc.

Key concepts in Agenda:

Understanding Software Validation
- Key terminology that is used in the validation process
- Software categories and how this may impact testing/validation activities
- Development Life Cycle Approach as a management tool
- Contrast how compliance and savings are related by using a Risk Based Approach

Developing and Testing Software
- Benefits of using solid Software Development Standards
- Source Code Reviews
- Planning the Testing Strategy
- Recommended test strategies based on results of risk assessments, supplier assessments, and other critical factors

Interactive Exercise


Understanding Software Validation

Key terminology and Concepts

Computer System
- Application software
- Platform
  - System Software
  - Hardware


Computer System

Software: (ANSI) Programs, procedures, rules, and any associated documentation pertaining to the operation of a system.

Program: (ISO) A sequence of instructions suitable for processing. Processing may include the use of an assembler, a compiler, an interpreter, or another translator to prepare the program for execution. The instructions may include statements and necessary declarations.

Application software: (IEEE) Software designed to fill specific needs of a user; for example, software for navigation, payroll, or process control. Contrast with support software; system software.

System Software: (ISO) Application-independent software that supports the running of application software.

Computer System, Cont'd

Hardware: (ISO) Physical equipment, as opposed to programs, procedures, rules, and associated documentation. Contrast with software.

Platform: The hardware and software which must be present and functioning for an application program to run [perform] as intended. A platform includes, but is not limited to, the operating system or executive software, communication software, microprocessor, network, input/output hardware, any generic software libraries, database management, user interface software, and the like.


Software Categories (based on GAMP 5)

Category 1: Infrastructure Software
Infrastructure elements link together to form an integrated environment for running and supporting applications and services.

Category 2: This category is no longer used in GAMP 5.

Category 3: Non-Configured Products
This category includes off-the-shelf products used for business purposes. It includes both systems that cannot be configured to conform to business processes and systems that are configurable but for which only the default configuration is used. In both cases, configuration to run in the user's environment is possible and likely (e.g., for printer setup).

Notes:
Configuration is the process of modifying an application program by changing
its configuration parameters without changing or deleting its program code or
adding additional custom code.

Be careful when classifying software as COTS.

Commercial Off-the-Shelf Software: (IEEE) Software defined by a market-driven need, commercially available, and whose fitness for use has been demonstrated by a broad spectrum of commercial users.

Software Categories (based on GAMP 5)
Category 4: Configured Products
Configurable software products provide standard interfaces and
functions that enable configuration of user specific business processes.
This typically involves configuring predefined software modules.
Much of the risk associated with the software is dependent upon how
well the system is configured to meet the needs of user business
processes. There may be some increased risk associated with new
software and recent major upgrades. Judgment based on risk and
complexity should determine whether systems used with default
configuration only are treated as a Category 3 or Category 4.

Category 5: Custom Applications
These systems or subsystems are developed to meet the specific needs of the regulated company. The risk inherent with custom software is high. The life cycle approach and scaling decisions should take into account this increased risk, because there is no user experience or system reliability information available.

Customization is the process of modifying an application program by changing or deleting its program code or adding additional custom code.
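The configuration/customization boundary described above can be sketched in code. The application, parameter names, and functions below are purely hypothetical, chosen only to illustrate where the line falls:

```python
# Illustrative sketch: configuration changes only parameter values the vendor
# exposes, while customization adds or changes program code. All names and
# values here are hypothetical, not taken from any real product.

# Configuration (GAMP Category 4 style): only parameter values change;
# the vendor's program code is untouched.
config = {"printer": "LAB-PRN-01", "decimal_places": 2, "audit_trail": True}

def round_result(value: float, cfg: dict) -> float:
    """Vendor-supplied behavior, driven by a configuration parameter."""
    return round(value, cfg["decimal_places"])

# Customization (GAMP Category 5 style): new code is written for the
# regulated company, which carries the higher inherent risk noted above.
def custom_yield_calculation(actual: float, theoretical: float) -> float:
    """Custom code added to the application by or for the regulated company."""
    return 100.0 * actual / theoretical

print(round_result(3.14159, config))           # 3.14
print(custom_yield_calculation(95.0, 100.0))   # 95.0
```

Changing `decimal_places` to 3 is configuration; writing `custom_yield_calculation` at all is customization, and under GAMP 5 the latter pulls the system toward Category 5 treatment.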


Software Development Life Cycle

software life cycle. (NIST) Period of time beginning when a software product is conceived and ending when the product is no longer available for use. The software life cycle is typically broken into phases denoting activities such as requirements, design, programming, testing, installation, and operation and maintenance.

[Figure: Sample Specification and Verification diagram (the general approach for achieving computerized system compliance and fitness for intended use within the system life cycle)]


Validation
validation. (1) (FDA) Establishing documented evidence which provides a high degree
of assurance that a specific process will consistently produce a product meeting its
predetermined specifications and quality attributes. Contrast with data validation.
validation, software. (NBS) Determination of the correctness of the final program or
software produced from a development project with respect to the user needs and
requirements. Validation is usually accomplished by verifying each stage of the
software development life cycle. See: verification, software.
verification. (1) (ISO) Confirmation, through the provision of objective evidence, that specified requirements have been fulfilled. (2) (ASTM) A systematic approach to verify that manufacturing systems, acting singly or in combination, are fit for intended use, have been properly installed, and are operating correctly. This is an umbrella term that encompasses all types of approaches to assuring systems are fit for use, such as qualification, commissioning and qualification, verification, system validation, or other.

The specific terminology used to describe life cycle activities and deliverables
varies from company to company and from system type to system type.
Whatever terminology is used for verification activity, the overriding
requirement is that the regulated company can demonstrate that the system
is compliant and fit for intended use.

Computer System Validation

Computer System Validation (CSV) is a process that provides a high degree of assurance that a computer system is reliably doing what it is intended to do and will continue to do so. CSV is intended to provide assurance that a computer system is working in the way:
- the business needs it to work
- it is intended to work
- any applicable laws or regulations require it to work


Intended Use is Central to Validation

Intended Use as a term defines the context of a computer system's essential requirements and the boundaries of that system. It is central to building, testing, and validating a system. It is required by the FDA.
- The FDA's General Principles of Software Validation guidance states: "All production and/or quality system software, even if purchased off-the-shelf, should have documented requirements that fully define its intended use, and information against which testing results and other evidence can be compared, to show that the software is validated for its intended use."
- 21 CFR 820.70(i), Quality System Regulation (devices), specifies: "Any software used to automate any part of the device production process or any part of the quality system must be validated for its intended use."

EXAMPLES OF ADDITIONAL REGULATORY REQUIREMENTS FOR SOFTWARE VALIDATION

Software validation is a requirement of the Quality System regulation, which was published in the Federal Register on October 7, 1996 and took effect on June 1, 1997. (See Title 21 Code of Federal Regulations (CFR) Part 820.) Validation requirements apply to software used as components in medical devices, to software that is itself a medical device, and to software used in production of the device or in implementation of the device manufacturer's quality system.
Computer systems used to create, modify, and maintain electronic
records and to manage electronic signatures are also subject to the
validation requirements. (See 21 CFR 11.10(a).) Such computer
systems must be validated to ensure accuracy, reliability, consistent
intended performance, and the ability to discern invalid or altered records.

The FDA's analysis of 3140 medical device recalls conducted between 1992 and 1998 reveals that 242 of them (7.7%) are attributable to software failures. Software validation and other related good software engineering practices are a principal means of avoiding such defects and resultant recalls.


Quality Risk Management*

Quality risk management is a systematic process for the assessment, control, communication, and review of risks to patient safety, product quality, and data integrity, based on a framework consistent with ICH Q9. It is used:
- to identify risks and to remove or reduce them to an acceptable level
- as part of a scalable approach that enables regulated companies to select the appropriate life cycle activities for a specific system

* ICH Q9: Quality Risk Management, International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH)

Quality Risk Management, Cont'd

- Risk: (ICH Q9) The combination of the probability of occurrence of harm and the severity of that harm (ISO/IEC Guide 51).
- Harm: (ICH Q9) Damage to health, including the damage that can occur from loss of product quality or availability.
- Hazard: (ICH Q9) The potential source of harm.
- Risk Assessment: (ICH Q9) A systematic process of organizing information to support a risk decision to be made within a risk management process. It consists of the identification of hazards and the analysis and evaluation of risks associated with exposure to those hazards.
- Severity: (ICH Q9) A measure of the possible consequences of a hazard.

Appropriate controls for removal or reduction of the identified risks should be identified based on the assessment. A range of options is available to provide the required control, depending on the identified risk. These include, but are not limited to:
- modification of process design
- modification of system design
- application of external procedures
- increasing the detail or formality of specifications
- increasing the extent or rigor of verification activities (TESTING)

Where possible, elimination of risk by design is the preferred approach.
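Severity, probability, and detectability are often combined into a simple risk priority score that then drives the rigor of the controls selected. A minimal sketch in Python, using hypothetical 1-3 scales, cut-offs, and control examples (none of these values are prescribed by ICH Q9 or GAMP 5):

```python
# Illustrative only: a toy FMEA-style risk priority calculation in the spirit
# of ICH Q9. Scales (1-3) and thresholds are assumptions for the example.

def risk_priority(severity: int, probability: int, detectability: int) -> int:
    """Each factor on a 1-3 scale; detectability 3 means hardest to detect."""
    for factor in (severity, probability, detectability):
        if factor not in (1, 2, 3):
            raise ValueError("factors must be 1, 2, or 3")
    return severity * probability * detectability

def risk_class(rpn: int) -> str:
    """Map the risk priority number to a control rigor level (hypothetical cut-offs)."""
    if rpn >= 18:
        return "high"     # e.g., negative-case testing plus source code review
    if rpn >= 6:
        return "medium"   # e.g., targeted functional testing
    return "low"          # e.g., leverage supplier testing

# A severe, moderately likely, hard-to-detect failure lands in the high class.
print(risk_class(risk_priority(severity=3, probability=2, detectability=3)))  # high
```

The multiplicative form is one common choice; as the slides note later, software failure probabilities are often hard to assign, so many programs substitute qualitative categories for the probability factor.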


Applying Risk Management Based on the Business Process

In order to effectively apply a quality risk management program to computerized systems, it is important to have a thorough understanding of the business process supported by the computerized systems, including the potential impact on patient safety, product quality, and data integrity.
- What are the hazards?
  Recognizing the hazards to a computerized system requires judgment and understanding of what could go wrong with the system, based on relevant knowledge and experience of the process and its automation. Consideration should include both system failures and user failures.
- What is the harm?
  Potential harm should be identified based on hazards. Examples of potential harm include:
  - production of adulterated product caused by the failure of a computerized system
  - failure of an instrument at a clinical site that leads to inaccurate clinical study conclusions
  - failure of a computerized system used to assess a toxicology study that leads to incomplete understanding of a drug's toxicological profile

Applying Risk Management Based on the Business Process, Cont'd

- What is the impact?
  In order to understand the impact on patient safety, product quality, and data integrity, it is necessary to estimate the possible consequence of a hazard.
- What is the probability of a failure?
  Understanding the probability of a failure occurring in a computerized system assists with the selection of appropriate controls to manage the identified risks. For some types of failure, such as software failure, however, it may be very difficult to assign such a value, thus precluding the use of probability in quantitative risk assessments.
- What is the detectability of a failure?
  Understanding the detectability of a failure also assists with the selection of appropriate controls to manage the identified risks. Failures may be detected automatically by the system or by manual methods. Detection is useful only if it occurs before the consequences of the failure cause harm to patient safety, product quality, or data integrity.
- How will the risk be managed?
  Risk can be eliminated or reduced by design, or reduced to an acceptable level by applying controls which reduce the probability of occurrence or increase detectability. Controls may be automated, manual, or a combination of both.

The above considerations are context sensitive. For example, risks associated with a solid oral dosage manufacturing area are very different from those in a sterile facility, even when the same computerized systems are used. Similarly, the risks associated with an adverse event reporting system are very different from those in a training records database. The former can have a direct effect on patient safety, whereas the latter system is very unlikely to affect patient safety.

The acceptable level of risk, sometimes known as risk tolerance, should be considered.


Need to Validate...but what to do? How much?

Validation Planning
Use the Software Development Life Cycle as a guide to identify and establish the required and applicable validation activities for your project.
The Validation Plan defines the approach, extent of validation, and key roles and responsibilities for the CSV activities/deliverables. It serves as the criteria for accepting the system and approving the Validation Report.

Validation Benefits
Validation provides tangible benefits to the business and its customers by:
- assuring the system works as intended
- reducing long-term costs by reducing rework and defects
- providing a knowledge base that can support the product long after the people involved in creating it have moved on

Conversely, inadequate or insufficient system validation poses risks to the business and its customers, as illustrated below:
- 2007: A software failure in a pharmacy compounding system allowed up to 50 mL of extra volume to be added to IV solutions, which could be life-threatening.
- 2006: Newly released software could improperly deliver power to a cardiac catheter, resulting in catheter overheating, thermal damage, and serious patient injury.


Suitable Life Cycle Strategy

Software and hardware components of a system may be analyzed and categorized; these categories may then be used along with Risk Assessment and Supplier Assessment to determine a suitable life cycle strategy.

There is generally increasing risk of failure or defects with the progression from standard software and hardware to custom software and hardware. The increased risk derives from a combination of greater complexity and less user experience. When coupled with risk assessment and supplier assessment, categorization can be part of an effective quality risk management approach.

Effort should be concentrated as follows:
Custom > Configured > Non-Configured > Infrastructure

Categorization can help focus effort where risk is greatest.
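The categorization-drives-effort idea above could be sketched as a simple lookup. The effort descriptions here are hypothetical one-line summaries for illustration, not GAMP-mandated activity lists:

```python
# Illustrative only: scaling verification effort by GAMP software category.
# The effort summaries are hypothetical, not prescribed by GAMP 5.

GAMP_EFFORT = {
    1: "infrastructure: record versions; verify installation",
    3: "non-configured: verify against requirements; leverage supplier testing",
    4: "configured: also verify configuration against the Configuration Specification",
    5: "custom: full life cycle; design review, code review, module through system testing",
}

def effort_for(category: int) -> str:
    """Return an effort summary for a GAMP category; Category 2 is retired."""
    if category == 2:
        raise ValueError("Category 2 is no longer used in GAMP 5")
    return GAMP_EFFORT[category]

print(effort_for(5))
```

In practice the category is only one input; as the slides stress, risk assessment and supplier assessment results adjust the effort up or down from any such baseline.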

Practical Considerations

Examples of practical considerations in right-sizing:
- Which system functions need more rigorous testing, including negative testing (atypical or unexpected inputs)?
- Which system functions require special security due to special approval authorities or sensitive information?
- What data protection is needed?
- To what extent must a supplier's quality practices be scrutinized, based on intended use of the system?


VALIDATION OF OFF-THE-SHELF SOFTWARE AND AUTOMATED EQUIPMENT

Most of the automated equipment and systems used by drug or device manufacturers are supplied by third-party vendors and are purchased off-the-shelf (OTS).
The vendor audit should demonstrate that the vendor's procedures for, and the results of, the verification and validation activities performed on the OTS software are appropriate and sufficient for the requirements of the drug or medical device to be produced using that software. The vendor's life cycle documentation can be useful in establishing that the software has been validated.
However, such documentation is frequently not available from commercial equipment vendors, or the vendor may refuse to share their proprietary information. In these cases, the drug or device manufacturer will need to perform sufficient system-level black box testing to establish that the software meets their user needs and intended uses.
For many applications, black box testing alone is not sufficient. Depending upon the risk of the product or device produced, the role of the OTS software in the process, the ability to audit the vendor, and the sufficiency of vendor-supplied information, the use of OTS software or equipment may or may not be appropriate, especially if there are suitable alternatives available.

What do I need the software to do? (User Requirements Definition)

Requirements define the detailed intended use of the system and are the foundation of CSV. If the requirements are not complete, correct, or at an appropriate level of detail, the system will not be fit for its intended use or purpose.
Security requirements for the software/system should be defined.


Security Requirements

The security of our computer systems and data has become more important than ever before, given the widespread use of the internet to access our systems and the increased incidence of viruses and attacks by outside hackers. Physical and logical security controls ensure that data residing in a computer system is protected from unauthorized access and from inadvertent or unauthorized alteration or destruction.
The Security Plan and the Security Administration SOP are the two primary security-related CSV deliverables.

Are solutions to the user needs available in the market?

Vendor/Supplier Management may be required if the application software or services that are needed are not available within, or cannot be developed by, the user company.


Supplier Management
Supplier Management assures that the supplier's quality practices are adequate to deliver and support a reliable software product or service. The scope and extent of supplier management depends on the risk associated with the system/service and the extent to which the regulated company depends directly on supplier activities.
Supplier management is a joint responsibility of IT/Automation, Business, and Quality, and involves activities to conduct the initial and ongoing evaluation of the supplier's quality practices as well as managing the ongoing relationship with the supplier.

How will my software be developed? (Design: Additional Specification Levels)


How will my software be developed? (Design)

System Specifications
There are a number of types of specification that may be required to adequately define a system. These may include Functional Specifications, Configuration Specifications, and Design Specifications. The applicability of, and need for, these different specifications depends upon the specific system and should be defined during planning.

Functional Specifications are normally written by the supplier and describe the detailed functions of the system, i.e., what the system will do to meet the requirements. The regulated company should review and approve Functional Specifications where produced for a custom application or configured product. In this situation, they are often considered to be a contractual document.

Configuration Specifications are used to define the required configuration of one or more software packages that comprise the system. The regulated company should review and approve Configuration Specifications.

Design Specifications for custom systems should contain sufficient detail to enable the system to be built and maintained. In some cases, the design requirements can be included in the Functional Specification. SMEs should be involved in reviewing and approving design specifications.

Note: A current system description (overview) should be available for regulatory inspection and training. This may be covered by the URS or Functional Specification, or a separate document may be produced.

Software Development

Coding: (IEEE) (1) In software engineering, the process of expressing a computer program in a programming language. (2) The transforming of logic and data from design specifications (design descriptions) into a programming language.

Software Development Standards: Written procedures describing coding [programming] style conventions, specifying rules governing the use of individual constructs provided by the programming language, and naming, formatting, and documentation requirements which prevent programming errors, control complexity, and promote understandability of the source code. Syn: coding standards, programming standards.

Solid Software Development Standards assure all these benefits and are also critical when performing changes during the Maintenance (operation) phase of the software/system.


Source Code Reviews

Source code reviews have two objectives:
- to ensure that programming standards are consistently and correctly applied
- to ensure that the code is written in accordance with the design specifications

The review aims to ensure that the code is fit to enter testing (module, integration, or system tests), and that the code can be effectively and efficiently maintained during the period of use of the application. The review should be carried out in accordance with a documented procedure, and performed by at least one independent person with sufficient knowledge and expertise, in conjunction with the author of the code.

The extent of source code review should be driven by the criticality of the requirements/design that the source code is implementing. For example, all source code implementing critical requirements (e.g., performing critical calculations or making GxP decisions) should be reviewed, while reviewing only a representative sample of source code implementing noncritical functionality may be appropriate.

Traceability Matrix

traceability matrix. (IEEE) A matrix that records the relationship between two or more products; e.g., a matrix that records the relationship between the requirements and the design of a given software component. See: traceability, traceability analysis.
Typically, it is used to record the relationship between the requirements, design, and testing, assuring that all requirements have been tested.
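The traceability check described above can be sketched in a few lines of Python. The requirement and test-case IDs below are hypothetical:

```python
# Illustrative sketch of a requirements-to-test traceability check.
# All requirement and test-case identifiers are hypothetical examples.

requirements = {
    "URS-001": "Calculate dose volume",
    "URS-002": "Maintain audit trail",
    "URS-003": "Enforce user login",
}

# Each test case lists the requirement IDs it verifies.
test_coverage = {
    "TC-10": ["URS-001"],
    "TC-11": ["URS-001", "URS-003"],
}

def untested_requirements(reqs: dict, coverage: dict) -> list:
    """Return requirement IDs that no test case traces back to."""
    covered = {req_id for hits in coverage.values() for req_id in hits}
    return sorted(set(reqs) - covered)

print(untested_requirements(requirements, test_coverage))  # ['URS-002']
```

A real traceability matrix would also map requirements to design elements, but the core assurance is the same: the set difference above must be empty before the Test Summary Report can claim full requirements coverage.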


Is the Software working as expected? (Testing)

Verification confirms that specifications have been met. Testing computerized systems is a fundamental verification activity. In general, testing is performed to find significant errors in the software and to demonstrate that the system functions as intended and meets the defined requirements.

testing. (IEEE) (1) The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component. (2) The process of analyzing a software item to detect the differences between existing and required conditions, i.e., bugs, and to evaluate the features of the software items. See: dynamic analysis, static analysis, software engineering.

Testing/Verification Levels
[Figure: testing/verification levels for a Configured Product (GAMP Category 4)]


Testing/Verification Levels
[Figure: testing/verification levels for a Custom Application (GAMP Category 5)]

Testing/Verification Levels

Installation Testing (GAMP):
Many companies call this Installation Qualification, or IQ. The purpose is to verify and document that system components are combined and installed in accordance with specifications, supplier documentation, and local and global requirements. Installation testing provides a verified configuration baseline for subsequent verification and validation activities, and also verifies any installation methods, tools, or scripts used.

Requirements (System) Testing:
testing, system. (IEEE) The process of testing an integrated hardware and software system to verify that the system meets its specified requirements. Such testing may be conducted in both the development environment and the target environment.

Design-Based Functional Testing:
testing, design based functional. (NBS) The application of test data derived through functional analysis extended to include design functions as well as requirement functions.

Configuration Testing:
configuration testing. (GAMP) For each Configuration Specification, an associated Configuration Test Specification should be produced. The tests should verify that the package has been configured in accordance with the specification. The tests could take the form of inspections or checks of supplier documentation.

Integration Testing:
testing, integration. (IEEE) An orderly progression of testing in which software elements, hardware elements, or both are combined and tested to evaluate their interactions, until the entire system has been integrated.

Module (Unit) Testing:
testing, unit. (1) (NIST) Testing of a module for typographic, syntactic, and logical errors, for correct implementation of its design, and for satisfaction of its requirements. (2) (IEEE) Testing conducted to verify the implementation of the design for one software element, e.g., a unit or module, or a collection of software elements. Syn: component testing.


Testing Deliverables and Activities

There are two primary deliverables for Testing:
- The Test Plan (also sometimes known as the Test Strategy) describes the testing approach and the roles and responsibilities for testing.
- The Test Summary Report summarizes the testing that was performed.
In addition, a traceability matrix may be an option in order to assure that all the requirements have been tested and to trace their relationship to the design documents.

Test Planning...what to consider?

Testing Limitations: a quote from FDA Guidance

"Software testing is a time consuming, difficult, and imperfect activity. The real effort of effective software testing lies in the definition of what is to be tested rather than in the performance of the test. Software testing has limitations that must be recognized and considered when planning the testing of a particular software product. Except for the simplest of programs, software cannot be exhaustively tested. Generally it is not feasible to test a software product with all possible inputs, nor is it possible to test all possible data processing paths that can occur during program execution."

Source: General Principles of Software Validation; Final Guidance for Industry and FDA Staff (2002)

Everything cannot be tested. Testing must be right-sized to focus on the portions of the system with the highest potential business impact.


Test Planning...what to consider?, Cont'd

Inputs to the Test Plan
- Company Policies and Procedures
  Company procedures should define the general framework for testing, including documentation and terminology. The test strategy should define and document how the general framework described by company procedures is to be applied to a specific system.
- Using Results of Risk Assessments
  Risk assessments carried out during the life cycle may have identified various controls to manage risks to an acceptable level. These controls may require testing. Alternatively, the risk assessments may have identified the need for particular types of testing, such as invalid case testing (negative case or resistance testing). The test strategy should incorporate the results of such risk assessments.
- Using Results of Supplier Assessments
  The results of the supplier assessment should indicate the supplier tests that have been performed and which ones can be leveraged to avoid unnecessary repetition or duplication of effort. The test strategy should incorporate or reference the results of such supplier assessments.
- Using GAMP Categories and Other Considerations
  The test strategy should also be based on an understanding of the system components (GAMP categories), system complexity, and system novelty. The number of levels of testing and the number of test specifications required will vary based in part on GAMP categories.

Test Planning...what to consider?, Cont'd

The test plan should define, but is not limited to:
- which types of testing are required
- the number and purpose of test specifications
- the use of existing supplier documentation, in accordance with the results of the supplier assessment
- the format of test documentation
- the testing environment
  - The production/operational environment is the platform where the application software will run during its maintenance life cycle phase.
  - If the testing (at least part of it) will be executed in another environment (e.g., a different system/platform used for development or testing), then the configuration of the other environment should be equivalent to that of the production environment.
- procedures for managing test failures (e.g., Test Problem Reports)
- Test Summary Reporting requirements
- other requirements based on regulated company policies and procedures


Two General Types of Testing

White Box Testing is also known as code-based testing, glass-box testing, logic-driven testing, or structural testing. Test cases are identified based on source code knowledge, knowledge of Detailed Design Specifications, and other development documents.
- testing, structural. (1) (IEEE) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing.

Black Box Testing is based on the functional specification, and is thus often known as functional testing. It is also known as definition-based or specification-based testing.
- testing, functional. (IEEE) (1) Testing that ignores the internal mechanism or structure of a system or component and focuses on the outputs generated in response to selected inputs and execution conditions. (2) Testing conducted to evaluate the compliance of a system or component with specified functional requirements and corresponding predicted results. Syn: black-box testing, input/output driven testing.

Black box testing may be sufficient provided the supplier assessment has found adequate evidence of white box testing.

Specific Types of Testing


Specific types of testing should be considered, depending on the complexity and
novelty of the system and the risk and supplier assessments of the system to be
tested, including:

Normal Case testing (Positive Case or Capability testing) challenges the systems
ability to do what it should do, including triggering significant alerts and error
messages, according to specifications.

Testing with usual inputs is necessary. However, testing a software product only
with expected, valid inputs does not thoroughly test that software product. By
itself, normal case testing cannot provide sufficient confidence in the
dependability of the software product.

Invalid Case testing (Negative Case or Resistance testing) challenges the systems
ability not to do what it should not according to specifications.

Repeatability testing challenges the systems ability to repeatedly do what it should,


or continuously if associated with real time control algorithms.

Performance testing challenges the system's ability to do what it should as fast and
effectively as it should, according to specifications.

Volume/Load testing challenges the system's ability to manage high loads as it
should. Volume/Load testing is required when system resources are critical.


Specific Types of Testing, Cont'd


Structural/Path testing challenges a program's internal structure by exercising
detailed program code.

The level of structural testing can be evaluated using metrics that are designed to show
what percentage of the software structure has been evaluated during structural testing.
These metrics are typically referred to as coverage and are a measure of completeness
with respect to test selection criteria. Common structural coverage metrics include:
Statement Coverage
Decision (Branch) Coverage
Condition Coverage
Multi-Condition Coverage
Loop Coverage
Path Coverage
Data Flow Coverage
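The difference between the first two metrics can be shown with a toy example. The function below is invented for illustration; it is not from any real system:

```python
# Illustrative sketch: why branch coverage is a stronger criterion than
# statement coverage. The function is invented for this example.

def clamp_alarm(limit, value):
    """Return value, capped at the alarm limit."""
    if value > limit:      # branch: two outcomes to cover
        value = limit      # statement executed only when the branch is taken
    return value

# This single test executes every statement (100% statement coverage)...
assert clamp_alarm(125, 130) == 125

# ...but covers only the "condition true" outcome of the branch.
# Full branch coverage also requires the "condition false" path:
assert clamp_alarm(125, 120) == 120

print("branches covered: 2/2")
```

In Python, tools such as coverage.py can report these metrics (e.g., `coverage run --branch` followed by `coverage report`); other languages have analogous structural coverage tools.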

Regression testing challenges the system's ability to still do what it should after
being modified according to specified requirements, and verifies that portions of the
software not involved in the change were not adversely affected.

Acceptance Testing
testing, acceptance. (IEEE) Testing conducted to determine whether or not
a system satisfies its acceptance criteria and to enable the customer to
determine whether or not to accept the system.

There may be a need for specific tests to satisfy contractual requirements;
these are typically called acceptance tests. Typically these are a pre-defined
set of functional tests that demonstrate fitness for intended use and
compliance with user requirements.

In such circumstances the test strategy should leverage these tests to
satisfy GxP verification requirements and avoid duplication.

Acceptance may be carried out in two stages: Factory Acceptance and Site
Acceptance.
y Factory Acceptance Tests (sometimes abbreviated to FAT) are performed at the
supplier site before delivery to show that the system is working well enough to be
installed and tested on-site.
y Site Acceptance Tests (sometimes abbreviated to SAT and sometimes called
System Acceptance Testing) show that the system is working in its operational
environment and that it interfaces correctly with other systems and peripherals.
This approach is often used for automated equipment and process control systems.


Other Validation
Activities

Validation Report
The Validation Report is the gate between developing
the system and moving the system into the Production
(maintenance) phase. The Validation Report must be
completed in order for the system to be in Production.

The report confirms that all of the acceptance criteria
documented in the Validation Plan were met and that
appropriate personnel conducted a review of the
completed validation deliverables and supporting
documentation. If there are any outstanding issues, the
Validation Report must include a justification for
implementing the system with those unresolved issues.


Production Support
(Maintenance phase)
These are the CSV activities performed once a system is in
production. All CSV processes must be defined during the
system development phase so that a clear process is in
place and related roles are resourced. Upon system
production, these processes are then executed as needed.
The following are the post-production CSV deliverables that
will be presented:
Business Continuity Plan (BCP) and Disaster Recovery
Change Control
Business Procedures and Training
Periodic Review
System Retirement

What to do in the event of software/system unavailability?
(Business Continuity Plan)
A Business Continuity Plan (BCP) provides operational
plans in the event of software/system unavailability.

Right-sizing the BCP involves critically assessing
business functions and risks, and prioritizing how work
should be performed without a system. In some cases,
the BCP may be to:
y wait until the system comes back up, because no critical
business processes are disrupted when the system goes down.
y switch to a redundant system.
y invest in more complex planning to identify manual processes
that will allow the business to continue to complete critical tasks,
such as taking sales orders, shipping product, or controlling tank
temperature, until the system can be brought back on line.


What to do if the software/system is damaged?
(Disaster Recovery)
Disaster Recovery is the responsibility of IT/Automation. It
is the process and plan to restore the system after some type
of system outage or disaster. The only involvement from the
business is after a disaster has been declared.
Following execution of the Disaster Recovery Plan,
IT/Automation documents the activities that occurred and
notes any departures from the planned activities. At that
point, business roles perform the following activities:
y Review the documentation of the disaster recovery
activities
y Evaluate any potential data integrity issues prior to
restoring the system

What to do if additions, corrections, or other modifications are required?
(Change Control and Configuration Management)
Change management is a critical activity that is fundamental to maintaining the compliant status
of systems and processes. All changes that are proposed during the project or operational
phase of a computerized system, whether related to software, hardware, infrastructure, or use of
the system, should be subject to a formal change control process. This process should ensure
that proposed changes are appropriately reviewed to assess impact and risk of implementing
the change. The process should ensure that changes are suitably evaluated, authorized,
documented, tested, and approved before implementation, and subsequently closed.
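The ordered steps of a formal change control process can be sketched as a small state machine. The states, field names, and the single-step-forward rule below are invented for illustration; a real change management system would follow the site's approved procedure:

```python
# Illustrative sketch only: a minimal change-control record that enforces
# the order of steps described above (review -> approval -> implementation
# -> closure). States and field names are assumed for the example.

ORDER = ["proposed", "reviewed", "approved", "implemented", "closed"]

class ChangeRequest:
    def __init__(self, description):
        self.description = description
        self.state = "proposed"

    def advance(self, new_state):
        # A change may only move one step forward; skipping review or
        # approval is rejected, mirroring the formal process.
        if ORDER.index(new_state) != ORDER.index(self.state) + 1:
            raise ValueError(f"cannot go {self.state} -> {new_state}")
        self.state = new_state

cr = ChangeRequest("raise high-temperature alarm limit")
for step in ["reviewed", "approved", "implemented", "closed"]:
    cr.advance(step)
assert cr.state == "closed"

try:
    ChangeRequest("x").advance("implemented")  # skipping steps must fail
    raise AssertionError("out-of-order change was allowed")
except ValueError:
    print("out-of-order change rejected; final state:", cr.state)
```

The point of the sketch is the control, not the data structure: no change reaches "implemented" without passing through review and approval first.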

Configuration management includes those activities necessary to precisely define a
computerized system at any point during its life cycle, from the initial steps of
development through to retirement.
A configuration item is a component of the system which does not change as a result of the
normal operation of the system. Configuration items should only be modified by application of a
change management process. Examples of configuration items are application software, layered
software, hardware components, and system documentation.

Configuration management and change management are closely related. When changes are
proposed, both activities need to be considered in parallel, particularly when evaluating impact
of changes.
Both change and configuration management processes should be applied to the full system
scope including hardware and software components and to associated documentation and
records, particularly those with GxP impact.


Business Procedures and Training
Business Procedures and Training provide the assurance
that the end users will use the system properly and
according to its validated intended use.
The Business is responsible for creation and implementation
of both of these deliverables.
Ensures the appropriate business procedures are developed and
approved
Develops and maintains appropriate user training materials,
often in conjunction with individuals from the regulated company's
Learning & Development organization.
Ensures audiences are trained on business procedures and
other relevant system training prior to system implementation.

Does my software/system remain in a validated state?
(Periodic Review)
Periodic Review is a monitoring process for the key control
indicators (e.g., problems, changes, deviations, downtime) to
verify that the computer system remains in a validated state.
The System Custodian is accountable for this activity and the
completion and approval of the Periodic Review Report.
Change Control and Periodic Review assure that the
software/system is maintained in a validated state.
Business activities for Periodic Review include but are not
limited to the following:
Reviews and approves the report to attest that the system
remains in a validated state and is fit for continued use.
Acknowledges any issues documented in the report and any
responsibility to provide the specified resources to complete the
actions by the date indicated on the action plan.


What to do if my software/system is no longer needed?
(System Retirement)
The primary purpose of System Retirement activities is to ensure that the
data residing in the system are retained, secure, and readily retrievable
throughout the record retention period. The data may need to be retrieved,
for an inspection or some other reason, years after the system is
retired. This possibility underscores the need for both sound retirement
processes and related documentation.
Inadequate system retirement processes can result in data not being
available or readable. Depending on the situation, this can be a significant
issue, especially if this data is requested by a regulator or if product safety is
in question.
Common challenges related to the system retirement process:
Determination of record retention for data, software, and
documentation
Data migration and archival
Clear ownership for continued stewardship of the data after the system
is retired

VALIDATION OF OFF-THE-SHELF
SOFTWARE AND AUTOMATED
EQUIPMENT
Most of the automated equipment and systems used by drug or device manufacturers
are supplied by third-party vendors and are purchased off-the-shelf (OTS).

The vendor audit should demonstrate that the vendor's procedures for, and the results
of, the verification and validation activities performed on the OTS software are
appropriate and sufficient for the requirements of the drug or medical device to be
produced using that software. The vendor's life cycle documentation can be useful in
establishing that the software has been validated.

However, such documentation is frequently not available from commercial equipment
vendors, or the vendor may refuse to share their proprietary information. In these
cases, the drug or device manufacturer will need to perform sufficient system-level
black box testing to establish that the software meets their user needs and intended
uses.

For many applications black box testing alone is not sufficient. Depending upon the
risk of the product or device produced, the role of the OTS software in the process,
the ability to audit the vendor, and the sufficiency of vendor-supplied information, the
use of OTS software or equipment may or may not be appropriate, especially if there
are suitable alternatives available.


Let's see how to apply all these concepts!

Are you Ready?


Note: The following examples are provided for illustrative purposes and only suggest possible uses of
quality risk management.

Software Categories, Examples, and Typical Life Cycle Approach (GAMP5)
Category 1: Infrastructure Software
y Description: Layered software (i.e., upon which applications are built);
software used to manage the operating environment.
y Typical Examples: Operating systems, database engines, programming languages,
statistical packages, spreadsheets, network monitoring tools, scheduling tools,
version control tools.
y Typical Approach: Record version number; verify correct installation by
following approved installation procedures.

Category 3: Non-Configured Products
y Description: Run-time parameters may be entered and stored, but the software
cannot be configured to suit the business process.
y Typical Examples: Firmware-based applications, COTS software, instruments
(see the GAMP Good Practice Guide: Validation of Laboratory Computerized
Systems for further guidance).
y Typical Approach: Abbreviated life cycle approach; URS; risk-based approach
to supplier assessment; record version number and verify correct installation;
risk-based tests against requirements as dictated by use (for simple systems
regular calibration may substitute for testing); procedures in place for
maintaining compliance and fitness for intended use.


Software Categories, Examples, and Typical Life Cycle Approach (GAMP5)
Category 4: Configured Products
y Description: Software, often very complex, that can be configured by the user
to meet the specific needs of the user's business process. Software code is not
altered.
y Typical Examples: LIMS, data acquisition systems, SCADA, clinical trial
monitoring, DCS, Building Management Systems, spreadsheets, simple Human
Machine Interfaces (HMI). (Note: specific examples of the above system types
may contain substantial custom elements.)
y Typical Approach: Life cycle approach; risk-based approach to supplier
assessment; demonstrate supplier has adequate QMS; some life cycle
documentation retained only by supplier (e.g., Design Specifications); record
version number and verify correct installation; risk-based testing to
demonstrate the application works as designed in a test environment;
risk-based testing to demonstrate the application works as designed within the
business process; procedures in place for maintaining compliance and fitness
for intended use; procedures in place for managing data.

Software Categories, Examples, and Typical Life Cycle Approach (GAMP5)
Category 5: Custom Applications
y Description: Software custom designed and coded to suit the business process.
y Typical Examples: Varies, but includes: internally and externally developed IT
applications; internally and externally developed process control applications;
custom ladder logic; custom firmware; spreadsheets (macro).
y Typical Approach: Same as for configured products, plus: more rigorous
supplier assessment, with possible supplier audit; possession of full life
cycle documentation (FS, DS, structural testing, etc.); design and source code
review.


Example: Sterilizer (autoclave) automated system

A User Requirements document was approved for a sterilizer (autoclave) automated
system that will be used in a GMP injectables production area. The system should
allow configuration of two types of sterilization cycles (dry goods and liquids). The
system should provide a printed report at the end of the cycle that includes a graph
of the time-temperature relationship and a list of activated alarms. Configuration
parameters will also permit setting the cycle time, temperature, and high/low
temperature alarm limits. Additional details for two case studies are included below:

Case A:
The system is considered a low complexity system and is based on mature
technology. The system/software has been commercially available (for the last ten
years) and its fitness for use has been demonstrated by a broad spectrum of
commercial users.
A supplier assessment was performed and it was determined that the supplier has
adequate Quality Management Systems (QMS) and that the basic functionality of
the system has been adequately tested.
Case B:
The system is considered a low complexity system but it is based on new technology.
The system/software has been commercially available for the last year.
The supplier refused to share their internal testing documentation and little information
about vendor quality practices was obtained.

How much testing could be recommended in each case?

Example (continued)
Results of Risk Assessments
The following sample functions take into consideration only software-related
hazards.
Function / Specified Hazard (Harm) / Consequence / Impact (severity) /
Probability / Detectability(3) / Risk Priority

Alarm Management / alarm fails to activate (low temperature) / product not
sterilized / High Impact / Case A: Low(1), Case B: High(2) / Case A: High,
Case B: Low / Case A: Low, Case B: High

Alarm Management / alarm fails to activate (high temperature) / product is
damaged / High Impact / Case A: Low(1), Case B: High(2) / Case A: High,
Case B: Low / Case A: Low, Case B: High

Temperature control / control failure (low temperature) / product not
sterilized / High Impact / Case A: Low(1), Case B: High(2) / Case A: High,
Case B: Low / Case A: Low, Case B: High

Temperature control / control failure (high temperature) / product is
damaged / High Impact / Case A: Low(1), Case B: High(2) / Case A: High,
Case B: Low / Case A: Low, Case B: High
(1): For Case A, a low probability of occurrence is considered
based on the system's low complexity, mature technology,
the product having been in the market for 10 years, and the
basic functionality of the system having been adequately
tested by the supplier.
(2): For Case B, considerations are based on the system's low
complexity and new technology. In addition, the
experience with the product/software is limited (i.e., only
one year in the market) and the likelihood of the regulated
company uncovering a significant issue with the software
for the first time is high. The supplier refused to share
their internal testing documentation and little information
about vendor quality practices was obtained. A worst case
scenario is being assumed.
(3): Detectability is based on reliance on the system-generated
report.
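One informal way to see how the table's columns combine is the sketch below. The scoring rule (severity times probability, mitigated by detectability) and its thresholds are assumptions made for illustration, not an official risk model; it merely reproduces the Case A and Case B outcomes above:

```python
# Illustrative sketch only: deriving a risk priority from severity,
# probability, and detectability, consistent with the table above, where
# high detectability lowers the priority. The scoring rule is invented.

LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def risk_priority(severity, probability, detectability):
    risk_class = LEVELS[severity] * LEVELS[probability]
    # High detectability mitigates: divide by the detectability level.
    score = risk_class / LEVELS[detectability]
    if score <= 1:
        return "Low"
    return "High" if score >= 3 else "Medium"

# Case A: High severity, Low probability, High detectability -> Low priority
assert risk_priority("High", "Low", "High") == "Low"

# Case B: High severity, High probability, Low detectability -> High priority
assert risk_priority("High", "High", "Low") == "High"

print("Case A/B priorities reproduced")
```

Whatever scoring convention a site adopts, the key property is the one shown here: identical severity can yield very different priorities once probability and detectability are factored in.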


Example (continued)
Recommended Testing Activities
(based on GAMP5 Categories and Risk Priority)
Typical testing activities for a Category 4 (Configured Product) should focus on:

Record version number, verify correct installation
y Equally applicable to Cases A and B
Verify correct configuration
y Equally applicable to Cases A and B in relation to the alarms configuration and other parameters. For each
Configuration Specification, an associated Configuration Test Specification will be produced. The tests will verify
that the package has been configured in accordance with the specification.
Risk-based testing to demonstrate application works as designed in a test environment
y Not applicable in either case. Testing will be performed on the actual production system.
Risk-based testing to demonstrate application works as designed within the business process
y Requirements testing that demonstrates fitness for intended use. Additional and more rigorous testing will be
performed in Case B based on the results of the risk and supplier assessments. Examples of the types of tests to be
performed in each case are:
Case A:
y Test to verify that the sterilization cycle (temperature control) performs as configured (time and
temperature parameters) and verify cycle performance using the information provided in the
system-generated report.
Case B:
y Test to verify that the sterilization cycle (temperature control) performs as configured (time and
temperature parameters) and monitor cycle performance using an external calibrated chart recorder.
Finally, compare the chart recorder graph with the information in the system-generated report.
y In addition to the configuration verification of the system alarms performed previously, run an additional
cycle and simulate alarm conditions in order to verify that all critical alarms are correctly generated and
reported by the system as expected.
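The Case B comparison of the system report against the independent chart recorder can be sketched as follows. The data points, the 0.5 °C tolerance, and the record format are all assumptions made for illustration, not values from the source:

```python
# Hypothetical sketch of the Case B check: compare the cycle temperatures
# in the system-generated report against an independent, calibrated chart
# recorder, within an assumed acceptance tolerance.

TOLERANCE_C = 0.5  # assumed acceptance criterion, not from the source

def compare_records(report, chart, tolerance=TOLERANCE_C):
    """Return (minute, report_temp, chart_temp) tuples that disagree."""
    return [(t, r, c)
            for (t, r), (_, c) in zip(report, chart)
            if abs(r - c) > tolerance]

# Example cycle data: (minute, temperature in C) pairs, invented here.
report = [(0, 25.0), (10, 121.1), (20, 121.2), (30, 121.0)]
chart  = [(0, 25.1), (10, 121.3), (20, 121.1), (30, 121.2)]

deviations = compare_records(report, chart)
assert deviations == []  # within tolerance: cycle record accepted
print("report matches chart recorder within", TOLERANCE_C, "C")
```

The value of the independent recorder is exactly this: it gives the detectability that Case B lacks, because a fault in the system's own reporting would otherwise go unnoticed.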

Questions?

Next: Interactive Exercise


Interactive Exercise

Given an example of a Computerized System:
Meet for 10 minutes in your assigned
group and discuss and agree on your
answers.
Your group may be requested to
present your recommendation to the
audience.

Bonus Material
Example of a Test Plan
List of Relevant Regulations


Thank you for your participation!

Integrity / Excellence / Respect for People

