Software Testing
for
Microcontroller Applications
by Eberhard De Wille
Page 1
Page 2
Motivation:
Some Famous Software Bugs
Page 3
Page 4
Page 5
The sea depth is 220 m at this place. The platform was supported by tanks at 89 m depth.
The wall of a tank cracked. The pumps could not cope with the water, and the platform
tilted over and sank. The event registered as a magnitude 3.0 earthquake at the seismic
stations in the area!
The failure was caused by the simulation and design software NASTRAN. The stress on
the tanks was underestimated by 47% due to a calculation error in a complex formula.
Therefore the tanks were built in a way that they could only withstand a maximum depth of
62 m.
Copyright 2010 Eberhard De Wille
Page 6
Among other HW-related failures, some SW bugs were detected which led to the
biggest recall in the history of DC.
Failures were detected in the software of the power control module.
Other SW failures were detected in the brake system control units.
There are already some minor accidents which customers claim were caused by these
bugs.
The whole cost and dimension of the problem is currently not known, but it seems to
have a big financial impact. Suppliers (mainly BOSCH) are also affected.
SPIEGEL ONLINE - 31 March 2005, 16:04 URL: http://www.spiegel.de/auto/werkstatt/0,1518,349049,00.html
Page 7
Some Definitions
related to SW Testing
Page 8
Page 9
[Figure: V-model of test documentation —
System Tests -> System Test Report
SW Requirement Specification -> SW Requirements Tests (per Software Req. Test Specification) -> SW Requirements Test Report
SW Design Document(s) -> Module Test Specifications; debugging and / or some form of module tests
Integration (Integrated Modules) -> Integration Tests -> Integration Test Reports
SW Implementation (Source Code) -> Module Tests -> Module Test Reports; Static SW Tests (Lint + Review) -> Static SW Test Reports]
Page 10
What is Test?
Testing is a formal activity. It involves a strategy and a systematic approach.
The different stages of testing supplement each other. Tests are always
specified and recorded.
Testing can be planned. The workflow and the expected results are specified.
Therefore the duration of the activities can be estimated, and the point in time
at which tests are executed is defined.
Testing is the required formal proof of software quality.
Page 11
Verification:
(1) The process of evaluating a system or component to determine whether
the products of a given development phase satisfy the conditions imposed at
the start of that phase. Contrast with: validation.
(2) Formal proof of program correctness. See: proof of correctness.
Proof of correctness:
(1) A formal technique used to prove mathematically that a computer
program satisfies its specified requirements.
(2) A proof that results from applying the technique in (1).
Source: IEEE Std 610.12-1990, IEEE Standard Glossary of Software Engineering Terminology
Page 12
Failure:
The inability of a system or component to perform its required functions
within specified performance requirements. Note: The fault tolerance
discipline distinguishes between a human action (a mistake), its
manifestation (a hardware or software fault), the result of the fault (a
failure), and the amount by which the result is incorrect (the error).
Source: IEEE Std 610.12-1990, IEEE Standard Glossary of Software Engineering Terminology
Page 13
Conclusion:
Fault:
Is a static property and can be identified before running the SW
It can be identified in the source code or design as a mistake
Not every fault leads to a failure
Failure:
Is a dynamic property and can only be observed at runtime
It is the difference between expected and actual runtime behavior
Every failure is caused by at least one fault
Page 14
[Figure: Venn diagram — the faults that will actually lead to a failure are a subset of all faults]
Page 15
[Figure: fault detection over test duration; axis marks 100% and 5000 h / 10000 h / 15000 h]
Page 16
Faults/KLOC   Grade
0.1 - 1       very good
1 - 5
5 - 10
> 10
Page 17
[Figure: chart, y-axis 0 to 12, x-axis 1976 to 1982]
Page 18
SW Testing Overview
Page 19
[Figure: test levels —
System Tests (typically done by system team)
System Integration Tests (regression tests by SW team)
Software Tests: Component Tests, Software Integration Tests
Categories: dynamic tests / static tests; functional tests (black box) / structural tests (white box)]
Page 20
Dynamic tests
This requires the execution of the software or parts of the software (using stubs).
It can be executed in the target system, an emulator or simulator, or a test bench, e.g.
on a PC. Within dynamic testing, the state of the art distinguishes between
structural tests (white box) and functional tests (black box).
Page 21
Page 22
Page 23
Page 24
Example                                                    Can be found by
Operating system errors, architecture and design errors
Integration errors                                         Integration tests, system tests
System errors                                              System tests
Page 25
Example                               Can be found by
Syntax errors                         Compiler, Lint
Data errors                           Software inspection, module tests
Algorithm errors, logical errors      Software inspection, module tests
Interface errors                      Software inspection, module tests, component tests
Page 26
[Figure: faults per KLOC vs. found faults per day]
Page 28
Page 29
Page 30
Test plan:
Scope of the test activities
Test methods
Test tools
Schedule and sequence
Test objects
Test attributes
Responsibilities
Risks
Page 31
Page 32
Test design specification:
Method
Approach
Test environment (test stubs, make files, recording facilities, etc.)

Test case / procedure specification:
Test object(s)
Test attributes
Steps to be executed
Expected results
Page 33
[Table template: Expected Result | Description / Comment]
Page 34
Data point         Hex-value   Represents
DP_ub_VehSpeed     0xA3        185 km/h
DP_ub_VehAcc       0x12        0.5 m/s²

For manually executed tests the Test Case Specification can also be used as the Test Log.
Page 35
Test log / test recording:
Documentation of detailed results
Data for each test case are recorded
Manual (log) or automatic (recording)

Test reporting:
Identification of test objects
Identification of used test specification
Document the results in condensed form
Clear statement of passed or failed
Trend
Page 36
Recommended Solution
Test Plan
Test Specification
Release Note
Test Log
Test Incident Report
Test Summary Report
Test Report
Page 37
Page 38
Collect your defect data to derive test end criteria if required (a different
collection than the one for process improvement)
Collect your defect data to improve your development and testing process
Set up a database with search and filter functions (Excel may be sufficient)
Classify your bugs, sources of bugs, possible prevention, etc.
Design your reporting templates and test logs to support easy defect
data collection
Name a person who is responsible for maintaining the defect database!
Page 39
Where did the problem occur (portion of the code, interaction of components)?
When did the problem occur (at an early test or inspection, at a later test, in
the field)?
Why did the problem occur (what was the reason for the problem, what failed
and why did it fail, why was it not discovered previously)?
How severe was the problem (ranging from formal to serious system failure)?
How could the problem have been prevented earlier (what can be done to
improve the process so it does not happen again)?
Page 40
Page 41
Page 42
Page 43
Test Organization
Page 44
Page 45
Page 46
Page 47
White-Box
Testing Techniques
Page 48
C0, C1, C1+ and C1p have to be covered 100%, plus a combination of the contained
loops has to be achieved: 1. loop not executed, 2. loop executed with a low counter
value, 3. loop executed with a high counter value.
Cik: C0, C1, C1+, C1p and C2 have to be covered 100%, plus the test of the contained
loops has to be achieved: i = 1, 2, 3 ... k.
Ct: The combination of all possible paths through the test object is covered by white-box tests.
Source: Georg Erwin Thaller (2002), Software-Test, Verification and Validation, Heise Verlag
Page 49
Page 50
if (a == 1)
{
/* statement coverage tests the following line */
b = c * x;
}
else
{
/* this branch will not be tested by statement coverage! */
}
Page 51
Page 52
Page 53
Page 54
Test Case 1: a != 1, z != 0, y != 3
Test Case 2: a = 1, z != 0, y != 3
Test Case 3: a != 1, z = 0, y != 3
Test Case 4: a != 1, z != 0, y = 3
Page 55
Test Case 1: a != 1, z != 0, y != 3
Test Case 2: a = 1, z != 0, y != 3
Test Case 3: a != 1, z = 0, y != 3
Test Case 4: a = 1, z = 0, y != 3
Test Case 5: a != 1, z != 0, y = 3
Test Case 6: a = 1, z != 0, y = 3
Test Case 7: a != 1, z = 0, y = 3
Test Case 8: a = 1, z = 0, y = 3
This is an extension of the C1p coverage, but not required to achieve C1p.
Page 56
Page 57
Loop 1    Loop 2    Loop 3
b = 0     x = 5     n.r.
b = 0     x = 6     y = 0
b = 0     x = 6     y = 1
b = 0     x = 6     y = max
b = 0     x = max   y = 1
b = 0     x = max   y = max
b = 1     x = 5     n.r.
b = 1     x = 6     y = 0
b = 1     x = 6     y = 1
b = 1     x = 6     y = max
b = 1     x = max   y = 1
b = 1     x = max   y = max
Page 58
Page 59
if (a < 5)
{
    /* then */
    b = 30;
}
else
{
    /* else */
    b = 15;
}

if (a == 1 || x == 5 || y > 3)
{
    /* then */
    b = a * c;
}
else
{
    /* do nothing */
}
Page 60
Page 61
Transition   From State   To State
t1           Off          Init
t2           Init         Run
t3           Run          Run
t4           Run          Init
t5           Run          Power Dn (KL15 down)
t6           Power Dn     Init
t7           Power Dn     Off
t8           Init         Init
Page 62
Page 63
Page 64
Page 65
Black-Box
Testing Techniques
Page 66
Equivalence partitioning aims to reduce test cases to the really useful ones.
Equivalence partitioning aims to ensure that test cases have been selected
which also cover unexpected inputs.
Test theory says: A partition contains a set or range of values which can be
reasonably expected to be treated by the component in the same way (i.e. they may
be considered equivalent).
The input and output values are derived from the specification of the component's
behavior.
For components with many input parameters it might be useful to select appropriate
combinations of values for each parameter partition, i.e. to test with combinations of
inputs that interfere with each other.
Page 67
[Figure: equivalence partitions for 'day' — valid partition 1 <= day <= 31 with representative value 15; invalid values -4 and 43 outside the partition]
Page 68
Page 69
[Figure: boundary values for 'day' — 0 and 1 at the lower bound, 31 and 32 at the upper bound, plus partition representatives -4, 15 and 43; valid partition 1 <= day <= 31]
Page 70
Page 71
Integration
Testing Techniques
Page 72
Page 73
Page 74
memory problems,
overwriting and reading outside array bounds,
memory allocated but not freed (usually no problem in embedded systems),
reading and using uninitialized memory.
Test completeness criteria: achievement of 100% requirements coverage.
Application: all test levels, but esp. SW validation, system integration.
Comments: Applied methods are inspection for out-of-bounds access and uninitialized
memory (variables). Alternatively, tools like PolySpace can be used. For uninitialized
variables Perl scripts can be used. Un-freed memory can be detected with DevPartner.
Page 75
The different real-time tasks are executed in the specified and designed order
(chronology rule)
Tasks that exclude each other are not executed at the same time (exclusion rule)
Tasks are synchronized in the specified way (synchronization rule)
Tasks are executed with the specified priorities (priority rule)
Tasks are not delayed
Tasks do not overrun
RAM usage fits the available RAM (stack, C-stack and, if applicable, heap)
Processor load is acceptable
Worst-case load scenarios are considered
Page 76
Page 77
Page 78
Page 79
Page 80
#include <stdio.h>

int main(void)
{
    unsigned short x = 0xFF00u;  /* assumed declaration; the slide shows only the comparison */

    if (x == ~0xFF)   /* ~0xFF is evaluated as int: 0xFFFFFF00 on a 32-bit platform */
    {
        printf("I should come out here\n\n");
    }
    else
    {
        printf("I should not be here\n\n");   /* this branch is taken! */
    }
    getchar();
    return 0;
}

Due to integer promotion the comparison is 0x0000FF00 == 0xFFFFFF00 and fails.
Page 81
Page 82
//global message enabling
+e900   // Always produce an output message
+e914   // Implicit adjustment of function return value from type to type
+e916   // Implicit pointer assignment conversion
Page 83
Page 84
Clean version:

uw_Temp = (T_UWORD)(((((T_ULONG)uw_nom << 1) + (T_ULONG)1) *
                    (T_ULONG)AL_uw_Lookup[(T_UBYTE)uw_div]) >> 17);

if (uw_Sign == 1u)
Page 85
Page 86
Page 87
Page 88
[Figure: document -> review / walkthrough / inspection -> document of measurable quality]
Page 89
Source: Michael Fagan, July 1986, Advances in Software Inspection, IEEE Trans. Software Engineering, Vol. 12, No. 7, p. 745
Page 90
Source: David Wheeler, 1996, Software Inspection: An Industry Best Practice, IEEE Computer Society Press, p. 7
Page 91
1. Planning (educate inspectors)
2. Overview (presented by author)
3. Preparation (using checklist)
4. Examination / Meeting / Inspection (conducted by moderator) -> inspection report, defect list
5. Rework -> defect summary
6. Follow-up
Page 92
Role        Description
Moderator
Recorder
Reader
Tester
Other
Author      Developer of the work product. Must not assume any other role.
Page 93
[Table: Technical Review vs. Inspection, compared by: Objective, Roles, Input]
Page 94
[Table continued: Technical Review vs. Inspection, compared by: Output, Entry Criteria for the Meeting, Meeting, Outcome of the Meeting]
Page 95
Page 96
1. Planning (input: work product, standards, specs)
2. Overview
3. Inspection (against detailed specification) -> report, defect list
4. Problem resolution meeting -> action list
5. Rework -> defect summary
6. Follow-up
Page 97
Role           Description
Test Manager
Inspectors
Author         Developer of the work product. Must not assume any other role.
               Is responsible for reworking the work product.
Page 98
Page 99
Page 100
[Figure: inspection rate in LOC / hour; approx. 100 LOC / hour]
Page 101
Data Check
Control Flow Check
Design
Calculation and
numeric check
Formal Aspects
Page 102
Design
nesting too deep? too many or too few sub-functions? for every if an else?
include structure, interfaces, etc.
Page 103
Is a calculation overflow / underflow possible?
Page 104
Functional Tests
(Black Box Tests)
Page 105
[Figure: SW component with input interface (stimulation / pre-set) and output interface (performance check)]
A component is any unit which can be tested from a functional point of view.
Thus a component can typically be a module or a group of modules,
up to the complete software.
A component has defined interfaces at input and output.
The requirements (functionality) of the interfaces have to be clearly defined.
Page 106
Page 107
Page 108
Page 109
Dynamic Testing
with a
Perl Testing Environment
Page 110
Page 111
__END__
__C__
Page 112
open(AUS, ">test-ws1.log");
printf AUS "** place some identifying text here ****************************\n";
printf AUS "** Date:     ";
printf AUS ($year);
printf AUS ("-%02.0d", $month);
printf AUS ("-%02.0d \n", $day);
printf AUS "** Time:     ";
printf AUS ("%02.0d", $hour);
printf AUS (":%02.0d \n", $minute);
printf AUS "** Username: ";
printf AUS ("%s\n", $uid);
printf AUS "\n\n";
Page 113
Page 114
Page 115
Page 116
Page 117
Page 118
Page 119
my @ARrsw_RollRate_array;
my @ARP_ARrb_sut_min_array = (0, 255);
....
for (@ARP_ARrb_sut_min_array) {
    $setval10 = $ARP_ARrb_sut_min_array[$i10];
    $i10++; $i11 = 0;
    for (@ARP_ARrb_sut_nom_array) {
        $setval11 = $ARP_ARrb_sut_nom_array[$i11];
        $i11++; $i12 = 0;
        for (@ARP_ARrb_sut_max_array) {
            $setval12 = $ARP_ARrb_sut_max_array[$i12];
            $i12++; $cnt++;
            &test_overflow_1_1();
Page 120
Variant 1: [Figure: the Perl test scripts run once against the OLD test object and once against the NEW test object (each with test stubs etc.); each run produces its own log for comparison]
Page 121
Variant 2: [Figure: the Perl test scripts run against the OLD and the NEW test object (with test stubs etc.) and log only the differences]
Page 122
Note: almost 95% of all detectable errors are found while setting
up the tests! The execution of the tests has a more formal meaning;
its biggest value is the possibility of regression testing.
Page 123
Page 124
Page 125
Page 126
Page 127
Page 128
if (a == 1)
{
    /* statement coverage tests the following line */
    b = c * x;
    bit_branch1 = 1; /* special instrumentation for coverage measurement */
}
else
{
    /* with branch coverage now even the empty branch is detected */
    bit_branch2 = 1; /* special instrumentation for coverage measurement */
}
Page 129
Integration Testing
Page 130
Page 131
Page 132
Page 133
[Figure: test levels mapped to the system architecture —
System Tests: runtime, RAM usage, system behavior (SYSTEM = complete SW + HW)
Integration Tests / SW Integration Tests: signal chain, complete SW functionality
Component Tests for: ROLLOVER ALGORITHM, SIDE ALGORITHM, FRONT ALGORITHM,
OC / AWS ALGORITHM, BASIC SW (OS, HW Abstraction Layer), Data Preparation]
Page 134
[Figure: SW architecture layers —
Frame Work (Operating System, OS Abstraction Layer)
Functional Library (Function Block 1, Function Block 2, Function Block 3)
C Library
Physical Layer / HW Abstraction Layer
Microcontroller Hardware]
Page 135
Page 136
[Figure: SW components 1, 2 and 3 tested individually — each with input interface (stimulation / pre-set) and output interface (performance check)]
Page 137
[Figure: SW component 4 integrating SW components 1, 2 and 3 — stimulation / pre-set at the input interface, performance check at the output interfaces]
Page 138
[Figure: test in the target system — SW comp1 and comp2 integrated with the rest of the SW; stimulation at the input interface, actuation at the output interface]
Page 139
Page 140
System Testing
Page 141
Page 142
Page 143
Test Plan: can be generic if you always perform the same testing
and reference the Test Schedule
Page 144
Integration Test Specification: There may be several ones, e.g. one for
the integration of the SW into the target hardware, and a second one
for system integration. These specifications are system integration
specifications. Software integration has to be covered by functional test
specifications.
Page 145
For the functional tests, ensure that you cover all equivalence
partitions and boundary checks.
Page 146
Make sure that all source code to be tested fulfills the "added value"
requirements of the coding phase.
First run the automatic code checker and cancel the testing in case
there are still bugs in the code. Restart testing after the bugs have been
fixed. Documentation and entry in the defect database is not
required.
In case the automatic checker ran o.k., you can formally
document it in the test report and proceed with a code inspection.
Page 147
In case dynamic white-box tests are indicated, specify and
execute them now, followed by filling in a test report.
Page 148
SPICE Requirements
for Software Testing
Page 149
Page 150
Page 151
Page 152
Page 153
Page 154
Page 155
Page 156