
Safety Critical Software 1

Running head: SAFETY CRITICAL SOFTWARE

Safety Critical Software - Models and verification

Alexandra Maria Vieru


University Politehnica of Bucharest, Romania

Abstract
This is a documentation paper about the models used for developing safety-critical
software. Systems of this type require different models from the standard ones used in
software development, because safety-critical systems must pass severe verification and
must also be certified by an authority. We present an iterative development model for
safety-critical software that comes as a lightweight alternative to the heavyweight
waterfall models used before. We also present a model describing the stages that a
safety-critical application should go through before it can be certified.

Safety Critical Software - Models and verification

Introduction
Software safety refers to the ability of software to execute within a system context
without causing hazards. Hazards are events that endanger life, health, property, or the
environment. Software used in a safety-critical system must undergo a safety analysis
and must deal with the hazards identified by this analysis. Some domains in which
safety-critical systems are used are medical systems, avionics, vehicle control systems,
power systems, and manufacturing. While hazardous software is software that can cause
hazards or contribute to the infliction of a hazard by other components, safe software is
software that is highly unlikely to produce an output that will cause a catastrophic event.
Safety-critical operations are those that, if not performed, performed incorrectly, or
performed in a different order, can lead to hazardous conditions. These operations can be
divided into three categories:
1. operations that exercise direct command over hardware components
2. operations that monitor the state of hardware and may provide wrong data, which can
lead to erroneous decisions by humans
3. operations that exercise direct command over hardware and, in combination with
another human, environmental, or hardware failure, can cause a hazard
The main characteristics of safety-critical software are:
availability - the probability of a system being operational at a given time, t.
reliability - the probability of a system producing correct outputs up to a given time, t.
robustness - the ability of a computer system to deal with errors during execution.

Safety-critical software must comply with certain guidelines and standards; for
example, the standard for avionics is DO-178B. These standards prescribe how critical
systems must be developed, based on previous experience and best practices. To ensure
its quality, safety-critical software currently follows a heavyweight process named the
V-model. As the existing V-model is a difficult, heavyweight one, new, more agile
iterative models are being developed.
In this paper we discuss the software safety model for safety-critical systems
described in (Swarup & Ramaiah, 2009), and we also present an iterative approach for
the development of safety-critical software that offers a more agile alternative to the
waterfall life-cycle model used before. The iterative approach is described in (Ge,
Paige, & McDermid, 2010). Other papers (Guo & Hirschmann, 2012) also discuss agile
approaches for safety-critical systems.
Models
Software safety model for safety critical applications
The model described in (Swarup & Ramaiah, 2009) is inspired by McCall's
Quality Model (1977); a review of this model is presented in (Berander et al., 2005).
McCall's model comes from the US military, its author aiming to bridge the gap between
users and developers by introducing a series of software quality factors that reflect the
views of users and the priorities of developers. The quality factors are identified from
three perspectives:
product revision - ability to undergo changes (maintainability, flexibility, testability)
product transition - adaptability to new environments (portability, reusability,
interoperability)
product operations - its operation characteristics (quality of operations depends on:
correctness, reliability, efficiency, integrity, usability)
The three quality characteristics are split into factors, criteria, and metrics:
11 factors - the external view of the software - the users' perspective
23 quality criteria - the internal view of the software - the developers' perspective
metrics - provide a scale and measure for development
Applying this software quality model involves the following steps:
deduce quality factors based on the characteristics of the system
prioritize the quality factors based on the needs of the users
deduce quality criteria and metrics using the framework
carry out specification, design, coding, and testing using the deduced factors, criteria,
and metrics
A modified version of this model was proposed in (Singh, 1999). Only four factors were
preserved (correctness, efficiency, reliability, testability) and one was added
(responsiveness - real-time performance).
However, McCall's, Boehm's, and Singh's modified model have their
limitations:
many factors are not directly related to hazards
safety (a safe system may fail frequently, but in a safe way) is assumed to be
equivalent to reliability (a reliable system doesn't fail frequently, but when it fails the
consequences are unknown)
the focus is on efficiency and other attributes, which are not specific to safety-critical
systems
In the first paper that we focus on, the proposed model is also based on factors,
criteria, and metrics. Six quality criteria are defined:

1. System hazard analysis
As software doesn't work on its own, all components of the system must be safe (software,
hardware, users, environment). A Preliminary Hazard Analysis (PHA) is made in order to
identify when the software might be a potential cause of hazards. It is made at the
beginning, when the role of the software is identified. If the software is classified as
safety-critical, then it is submitted to a software safety analysis and the resulting
software safety requirements are included in the software requirements document. The
software safety requirements cover: limits, sequence of events, timing constraints,
interrelationship of limits, voting logic, hazardous hardware failure recognition, failure
tolerance, and hazardous commands. The system safety analysis must continue during the
lifecycle of the application.
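As an illustration (not taken from the paper), the voting-logic requirement listed above
can be sketched as a 2-out-of-3 majority voter over redundant channels; the function name
and tolerance value are hypothetical:

```python
def vote_2oo3(a, b, c, tolerance=0.5):
    """2-out-of-3 voter for redundant sensor channels.

    Returns the mean of the first pair of channels that agree within
    `tolerance`; raises if no two channels agree (multiple failures).
    """
    for x, y in [(a, b), (a, c), (b, c)]:
        if abs(x - y) <= tolerance:
            return (x + y) / 2  # average of the agreeing pair
    raise ValueError("no two channels agree: possible multiple failures")
```

With one failed channel, for example vote_2oo3(10.0, 10.1, 99.9), the two healthy
channels still outvote the faulty one; a real voter would also report the disagreeing
channel for hazardous-failure recognition.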
The system development process includes 4 parts relevant to safety:
identification of hazards and associated safety requirements
design system to meet identified requirements
analyze the system to show that it meets the requirements
demonstrate the safety of the system
2. Completness of requirements
It was observed that most errors come from requirements flows and the systems might
reflect wrong assumptions about the functioning of the system. Completeness of
requirements means that they are sufficient to distinguish this system from any other
system that might be designed.
3. Identification of safety-critical requirements
These requirements are extremely important for safe system operation or use. After
identifying the safety-critical requirements, a criticality analysis is performed and each
module is classified into one of the criticality levels:
Safety critical

Safety related
Interference free
Not safety related
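A minimal sketch of such a criticality analysis (the decision rule and flag names are
our own illustrative assumptions, not from the paper) could assign each module to one of
the four levels:

```python
from enum import Enum

class Criticality(Enum):
    SAFETY_CRITICAL = "directly implements a safety function"
    SAFETY_RELATED = "supports or monitors a safety function"
    INTERFERENCE_FREE = "shown unable to affect safety functions"
    NOT_SAFETY_RELATED = "no demonstrated relation to safety functions"

def classify(implements_safety_fn, supports_safety_fn, isolated):
    """Toy decision rule: place a module into a criticality level based
    on how it relates to the identified safety functions."""
    if implements_safety_fn:
        return Criticality.SAFETY_CRITICAL
    if supports_safety_fn:
        return Criticality.SAFETY_RELATED
    # a module proven isolated cannot interfere; otherwise it is merely
    # not safety related and still needs partitioning analysis
    return Criticality.INTERFERENCE_FREE if isolated else Criticality.NOT_SAFETY_RELATED
```

The point of the classification is that verification effort can then be scaled per level,
with the heaviest analysis reserved for safety-critical modules.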
4. Design based on safety constraints
The first step of a safety-constraint-centered design approach consists of specifying the
safety constraints. For hardware, the most common ways to reduce hazards are
redundancy and diversity. The ways in which identified potential hazards can be handled
are: design for minimum risk, incorporate safety devices, provide warning devices, and
develop and implement procedures and training. Some mitigation measures are: software
fault detection, software fault isolation, software fault tolerance, and hardware and
software fault recovery.
5. Run-time issues management
Because we cannot ensure that software works correctly for unseen inputs using only
unit testing, runtime monitoring is also necessary - it continuously monitors the software
for correctness and enables the system to react whenever an unexpected behaviour occurs.
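The runtime monitoring described above can be sketched as a wrapper that checks
pre- and postconditions around a safety-critical operation and hands control to a
recovery handler on violation (all names here are illustrative, not from the paper):

```python
def monitored(func, precondition, postcondition, on_violation):
    """Wrap an operation with runtime checks: if a check fails, the
    recovery handler runs instead of letting the fault propagate."""
    def wrapper(*args):
        if not precondition(*args):
            return on_violation("precondition", args)
        result = func(*args)
        if not postcondition(result):
            return on_violation("postcondition", result)
        return result
    return wrapper

# usage: guard a division against unseen inputs at runtime
safe_div = monitored(
    lambda a, b: a / b,
    precondition=lambda a, b: b != 0,
    postcondition=lambda r: abs(r) < 1e6,
    on_violation=lambda kind, data: float("nan"),  # fall back to a safe value
)
```

In a real system the recovery handler would switch to a degraded but safe mode
rather than substitute a sentinel value.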
6. Safety critical testing
It should verify the correct incorporation of the software safety requirements, or at least
that the hazards were eliminated or reduced to an acceptable level of risk. The system
must be verified in the presence of system faults. The hazards can be split into two
groups: catastrophic or critical, and marginal or negligible; special attention should be
given to the hazards that pose the highest risk.
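Verifying the system "in the presence of system faults" is commonly done by fault
injection. A minimal sketch (the sensor and controller here are hypothetical stand-ins,
not components from the paper) is:

```python
class Sensor:
    def read(self):
        return 42.0

class FailingSensor(Sensor):
    """Injected fault: the sensor raises instead of returning a value."""
    def read(self):
        raise IOError("sensor failure")

def controller_state(sensor):
    """The controller must fall back to a safe state when a fault occurs."""
    try:
        value = sensor.read()
        return "NORMAL" if value < 100 else "EMERGENCY"
    except IOError:
        return "EMERGENCY"  # fail safe, never fail silent

# fault-injection test: behaviour must stay safe when the fault fires
assert controller_state(Sensor()) == "NORMAL"
assert controller_state(FailingSensor()) == "EMERGENCY"
```

The test passes only if the fault drives the controller into its safe state, which is
exactly the property safety-critical testing is meant to demonstrate.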
Iterative approach for development of safety-critical software
The second paper that we discuss focuses on the design of an iterative approach for
software development. The approaches used before were based on waterfall models: the
design was done up-front (the program design had to be finished and refined before the
implementation started) and resulted in documentation to be used by safety engineers
and certifying authorities. Agile approaches are attractive for software engineers and
project managers, but less appealing for safety engineers, and they were not used before
because they were not found suitable enough for safety-critical systems.
This paper proposes a more agile approach, and the authors show that a
lightweight, iterative approach can bring improvements to safety-critical systems.
The V-model used before, illustrated in Figure 1, was appreciated for its
sequential approach, which is beneficial for communication, scale, scheduling, and
certification purposes.
The avionics standard DO-178B specifies three processes to be followed in the
development of an application: planning, technical development, and integration. But such
heavyweight approaches make it more difficult to manage requirements volatility, to
introduce new technologies, and to produce and maintain specifications.
Agile methods are generally characterized by four phases: preparation, planning, short
iterations to release, and integration. Each iteration incorporates stages of analysis,
design, implementation, and testing, which are specific to a waterfall life-cycle model,
and the whole agile process starts with analysis and design and ends with testing.
The difficulty encountered when using an agile approach in the development
of a safety-critical system is that in agile development the design is evolutionary, while
for a safety-critical system a high level of detail is needed from the beginning. The
problem is to find a way of specifying enough detail for the hazard analysis while at the
same time keeping a lightweight design.
The idealized iterative process proposed by the authors requires, for each iteration
of the release, an argument that the release is safe; therefore, a document with a safety
argument is required. The challenge for the iterative process is to find a way to
incrementally build safety arguments by reusing the arguments created in previous
iterations. This is not easy, because safety arguments are generally monolithic. The
solution is to use modular safety arguments - to create argument modules with explicit
dependencies between them.
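The modular structure can be pictured as a small dependency graph of argument
modules; the module names and dictionary layout below are our own toy illustration,
not the notation used by the authors:

```python
# Each argument module states a claim and lists the modules it depends on.
argument_modules = {
    "sensors_safe": {"claim": "sensor faults are detected", "depends_on": []},
    "controller_safe": {"claim": "controller fails safe", "depends_on": ["sensors_safe"]},
    "release_1_safe": {"claim": "release 1 is safe", "depends_on": ["controller_safe"]},
}

def unresolved(modules):
    """Modules with a missing dependency: these must be (re)argued in the
    current iteration, while the rest can be reused from earlier iterations."""
    return [name for name, m in modules.items()
            if any(dep not in modules for dep in m["depends_on"])]
```

In a later iteration, only the modules whose dependencies changed need a new
argument, which is what makes the incremental build of safety arguments possible.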
Applications and Evaluation
The model proposed in the first paper discussed was applied to a Road Traffic
Control System (RTCS). The components of the system are: a traffic signal controller, a
user interface, driver circuitry, and sensors.
When the system is in a normal state, it runs normal operations, cycling through
the signals periodically. If an abnormal situation is encountered, the system executes the
code related to emergency situations and flashes the red signal. The system was passed
through the entire process described in the paper, in order to identify the six factors.
The steps followed for this application were:
A system-level hazard analysis was done; the potential hazards identified are:
failure of the controller, failure of the driver circuitry, failure of the sensors, and
failure to read user interface input information.
The hazards were classified into the four categories: catastrophic, critical, marginal,
negligible.
Completeness of the requirements was checked manually and using peer review.
A design that enforces the safety constraints was chosen: avoid single points of failure
(use a redundant controller) and mitigate the identified hazards.
Run-time performance was monitored: exceptions, deadlocks, memory issues.
Safety-critical testing was done, and code was separated into the two groups mentioned
above.
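The fail-safe behaviour of the RTCS described above (cycle signals normally, latch into
flashing red on an abnormal condition) can be sketched as a small state machine; the
class and method names are our own illustration, not from the paper:

```python
class SignalController:
    """Toy traffic signal controller: cycles signals in the normal state,
    latches into a fail-safe flashing-red state on any abnormal input."""

    CYCLE = ["GREEN", "YELLOW", "RED"]

    def __init__(self):
        self.index = 0
        self.emergency = False

    def step(self, sensors_ok):
        if not sensors_ok:
            self.emergency = True  # latch: stay safe until maintenance resets
        if self.emergency:
            return "FLASH_RED"  # fail-safe output
        self.index = (self.index + 1) % len(self.CYCLE)
        return self.CYCLE[self.index]
```

Latching the emergency state, rather than resuming automatically, reflects the design
rule that recovery from a hazard should be a deliberate action, not a side effect of the
fault disappearing.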
As for the model proposed in the second paper, its evaluation is more difficult
due to its subjectivity. Agile processes are considered to be incremental, cooperative,
straightforward, and adaptive; as the last three characteristics are subjective, the
evaluation of such a process is not an easy task. In the second paper the example used
is an Integrated Altitude Data Display System. By applying the new approach to this
system, the authors observed that in order to have a safety argument after each
iteration, longer iterations of about 2-4 weeks are necessary. But it is difficult to
measure whether the arguments are strong enough.
Conclusions
Safety-critical applications require more laborious development models in order
to reach the standards required in each domain. Different models have been created for
this type of software, but many of them are incomplete and focus not on the specific
characteristics of a safety-critical system but on general attributes of software, such as
efficiency, portability, etc.
The model proposed in (Swarup & Ramaiah, 2009) tries to focus more on the
attributes of safety-critical systems. It describes a model for designing a safety-critical
system in a way that will lead to it being certified.
On the other hand, in (Ge et al., 2010) the authors concentrate on a development
model, presenting an approach that aims to be as agile as possible while still being
appropriate for a safety-critical system. Designing a classical agile approach in this
case is problematic because, by their nature, agile processes allow many changes in the
design of the software during development, while a safety-critical system needs to be
specified in great detail from the beginning.

References
Berander, P., Damm, L.-O., Eriksson, J., Gorschek, T., Henningsson, K., Jonsson, P., et al.
(2005). Software quality attributes and trade-offs. Blekinge Institute of Technology.
Ge, X., Paige, R. F., & McDermid, J. A. (2010). An iterative approach for development of
safety-critical software and safety arguments. In Agile Conference (AGILE), 2010 (pp.
35-43).
Guo, Z., & Hirschmann, C. (2012). An integrated process for developing safety-critical
systems using agile development methods. In ICSEA 2012, The Seventh International
Conference on Software Engineering Advances (pp. 647-649).
Singh, R. (1999). A systematic approach to software safety. In Software Engineering
Conference, 1999 (APSEC '99) Proceedings, Sixth Asia Pacific (pp. 420-423).
Swarup, M. B., & Ramaiah, P. S. (2009). A software safety model for safety critical
applications. International Journal of Software Engineering and Its Applications,
3(4), 21-32.
Appendices

Figure Captions
Figure 1. V-model
