UNIVERSIDADE FEDERAL DE PERNAMBUCO
CENTRO DE INFORMÁTICA
PÓS-GRADUAÇÃO EM CIÊNCIA DA COMPUTAÇÃO
PHD THESIS
RECIFE, FEBRUARY/2010
RESUMO
One of the biggest challenges for the embedded industry is to provide products
with a high level of quality and functionality, at low cost and with a short
development time, delivering them quickly to the market and thus increasing the
return on investment. The cost and development time requirements have been
addressed quite successfully by component-based software engineering (CBSE)
combined with component reuse techniques. However, using the CBSE approach
without proper verification of the quality of the components used can have
catastrophic consequences (Jezequel et al., 1997). The use of appropriate
mechanisms for searching, selecting, and evaluating the quality of components is
considered a key point in the adoption of the CBSE approach. In view of this, this
thesis proposes a Methodology for Quality Evaluation of Embedded Software
Components under different aspects. The idea is to resolve the lack of consistency
among the ISO/IEC 9126, 14598, and 25000 standards, incorporating the software
component context and extending it to the embedded systems domain. These
standards provide high-level definitions of characteristics and metrics for software
products, but do not provide ways to use them effectively, making them very
difficult to apply without acquiring further information from other sources. The
Methodology is composed of four modules that complement each other in the
pursuit of quality: an evaluation process, a quality model, evaluation techniques
grouped by quality levels, and a metrics approach. In this way, it assists the
embedded systems developer in the component selection process, evaluating which
component best fits the system requirements. It can be used by third-party
evaluators when hired by suppliers in order to obtain credibility for their
components. The methodology makes it possible to evaluate the quality of an
embedded component before it is stored in a repository system, especially in the
context of the robust framework for software reuse proposed by Almeida
(Almeida, 2004).
Abstract
One of the biggest challenges for the embedded industry is to provide products
of high quality and functionality at low cost and with a short time-to-market,
thus increasing the Return on Investment (RoI). The cost and development time
requirements have been addressed quite successfully by component-based
software engineering (CBSE) combined with component reuse techniques.
However, using the CBSE approach without quality assurance of the
components can bring catastrophic results (Jezequel et al., 1997). The use
of appropriate mechanisms for the search, selection, and quality evaluation of
components is considered a key point in CBSE adoption. Accordingly, this thesis
proposes an Embedded Software Component Quality Evaluation Methodology.
Its focus is to address the lack of consistency among the standards ISO/IEC
9126, ISO/IEC 14598, and ISO/IEC 25000, also incorporating the software
component quality context and extending it to the embedded domain. These
standards provide high-level definitions of characteristics and metrics for
software products but do not provide ways to use them effectively, making them
very difficult to apply without acquiring more knowledge from supplementary
sources. The Methodology consists of four modules that complement each other
in the pursuit of quality: an evaluation process, a quality model, evaluation
techniques grouped by quality levels, and an embedded metrics approach. Thus,
it helps the developer of embedded systems in the selection of components,
evaluating which component best fits the system requirements. It can also be
used in third-party evaluations, when evaluators are hired by component
suppliers to establish trust in their components. The methodology permits
evaluating the quality of an embedded component before it is stored in a
repository system, especially in the context of the robust framework for
software reuse proposed by Almeida (Almeida, 2004).
List of Figures
Terms - Descriptions
B2B - Business to Business
CBD - Component-Based Development
CBSE - Component-Based Software Engineering
C.E.S.A.R. - Recife Center for Advanced Studies and Systems
CMU/SEI - Carnegie Mellon University’s Software Engineering Institute
COTS - Commercial Off-The-Shelf
CBSD - Component-Based Software Development
COM - Component Object Model
CCM - CORBA Component Model
CMM - Capability Maturity Model
CMMI - Capability Maturity Model Integrated
EQL - Embedded software component evaluation techniques on Quality Level
EQM - Embedded software component Quality Model
EQP - Embedded software component Quality Process
EMA - Embedded Metrics Approach
GQM - Goal Question Metric Paradigm
ISO - International Organization for Standardization
IEC - International Electrotechnical Commission
OMG - Object Management Group
PECT - Prediction-Enabled Component Technology
PACC - Predictable Assembly from Certifiable Components
RiSE - Reuse in Software Engineering group
SPICE - Software Process Improvement and Capability dEtermination
UART - Universal Asynchronous Receiver Transmitter
XML - eXtensible Markup Language
Contents
1 Introduction .................................................................1
1.1 Motivation ..............................................................................1
1.2 Problem formulation ............................................................. 4
1.3 General Objective .................................................................. 5
1.4 Specific Objective .................................................................... 5
1.5 Proposed solution.................................................................. 6
1.6 Out of Scope ........................................................................ 10
1.7 Statement of Contributions .................................................. 11
1.8 Organization of the Thesis....................................................12
Appendix C.....................................................................183
Chapter 1 - Introduction 1
1 Introduction
1.1 Motivation
The following sections present the main motivations that led to the preparation
of this work.
From this data and from the interviews, the study concludes that the
market perceives the following key inhibitors for component adoption,
presented here in decreasing order of importance:
Moreover, the process is based on the state of the art in the area, and its
foundations and elements are discussed in detail. In this way, the main goal of
this thesis was to define an Embedded Software Component Quality Evaluation
Methodology composed of four inter-related modules in order to assure the
degree of component quality. This methodology was proposed based on the
standards ISO/IEC 25010, ISO/IEC 9126, and ISO/IEC 14598, adapted to the
component context and the embedded domain.
7. Recife Center for Advanced Studies and Systems, http://www.cesar.org.br.
8. Reuse in Software Engineering (RiSE) group, http://www.rise.com.br.
The framework (Figure 1.1) that is being developed has two layers. The
first layer (on the left) is composed of best practices related to software reuse.
Non-technical factors, such as education, training, incentives, and
organizational management are considered. This layer constitutes a
fundamental step prior to the introduction of the framework in the
organizations. The second layer (on the right) is composed of important
technical aspects related to software reuse, such as processes, environments,
tools, and an evaluation process, which is the focus of this thesis.
Figure 1.2: The quality evaluation layer was divided into two modules.
• Cost Model: Cost estimation is a key requirement for CBSD before the
actual development activities can proceed. Cost is a function of the
enterprise itself, its particular development process, the selected
solution, and the management and availability of the resource during the
development project (Cechich et al., 2003), (Mahmooda et al., 2005). A
cost model is useful to help the software engineer during the analysis of
the software components available for purchase (or for selection or
evaluation). However, it only makes sense once processes, methods,
techniques, and tools have been defined to execute the selection and/or
evaluation task, in order to identify the cost/benefit of purchasing or
evaluating a component.
• Formal Proof: Meyer (Meyer, 2003) and Karlsson (Karlsson, 2006)
propose a formal approach to acquiring trust in software components.
Their idea is to build or to use software components with fully proved
properties or characteristics. The intention is to develop software
components that the software market can rely on.
This thesis does not consider a cost model because the whole methodology
that is the basis for embedded software component evaluation is addressed
first; after that, a cost model to help organizations decide whether it is viable
to evaluate certain kinds of components (or not) will be useful. Formal quality
assurance is not considered mainly because the software component market is
not inclined to formally specify its software components. This kind of approach
is highly expensive in terms of development time and the level of expertise
needed, and component developers still do not know whether it is cost-effective
to spend effort in this direction without specific requirements such as strict
time constraints or high reliability. However, the Embedded software component
evaluation techniques based on Quality Level (EQL) module provides formal
proof evaluation techniques that could be useful in some scenarios, depending
on the customer's needs and the cost/benefit of doing so.
The conclusions of the developed work and the analysis of the proposed
methodology are shown in Chapter 6, as well as contributions to academia and
industry. The enhancements and features that were not addressed or developed
in this work are listed under further work.
The papers reviewed and used as inputs for the development of this
thesis are listed alphabetically in Chapter 7 as references.
Three appendices were added at the end of this thesis. Appendix A shows
the step by step instructions for performing a quality evaluation using the
methodology proposed. Appendix B presents the evaluators' feedback on the
use of the methodology for the quality evaluation of embedded software
components, showing the strengths and weaknesses found in its
application. Appendix C details the process and results
achieved in the BRConverter evaluation, which was used in the experimental
study reported in Chapter 5.
Chapter 2 - Embedded Systems Design 14
Embedded systems comprise a scale from ultra small devices with simple
functionality, through small systems with sophisticated functions, to large,
possibly distributed systems, where the management of the complexity is the
main challenge. An embedded system may be represented by a dedicated
processor surrounded by dedicated hardware systems, performing very specific
tasks (Junior et al., 2004a). Further, one can distinguish between systems
produced in large quantities, in which low production costs are very
important, and low-volume products, in which system dependability is the
most important feature. All these different requirements have an impact on
feasibility, on use, and on approach in component-based development. A
common characteristic of all systems is the increasing importance of software. For
example, software development costs for industrial robots currently constitute
about 75% of total costs, while in the car industry it is currently about 30%. Some
ten to fifteen years ago this number was about 25% for robots and negligible for
cars (Crnkovic, 2005). A second common characteristic is increasing
interoperability. While embedded systems were previously used mainly for
controlling different processes, today they are integrated with information
systems and infotainment technologies. In this chapter, a short overview of
embedded systems design is presented.
Figure 2.1: An embedded system encompasses the CPU as well as many other
resources.
Taking into account all the constraints for real-time and embedded
systems, we can conclude that there are several reasons to perform component
deployment and composition at design time rather than run-time (Crnkovic &
Larsson, 2002). This allows composition tools to generate a monolithic
firmware for the device from the component-based design and in this way
achieve better performance and better predictability of the system behavior.
This also enables global optimizations, e.g., in a static component composition
known at design time, connections between components could be translated
into direct function calls instead of using dynamic event notifications. Finally,
verification and prediction of system requirements can be done statically from
the given component properties.
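The optimization described above (replacing dynamic event notifications with connections resolved at design time) can be illustrated with a small sketch. The component names are invented, and Python stands in for the C code that a composition tool would typically generate:

```python
# Sketch: static (design-time) composition vs. dynamic event notification.
# Component names (Sensor, Controller, EventBus) are illustrative assumptions.

class Sensor:
    def read(self) -> int:
        return 42  # placeholder measurement

class Controller:
    def __init__(self, read_input):
        # Design-time wiring: the connection is a direct function reference,
        # fixed before deployment, with no run-time lookup or dispatch.
        self._read_input = read_input

    def step(self) -> int:
        return self._read_input() * 2

# Dynamic alternative: an event bus resolves subscribers at run time,
# which costs an indirection on every notification.
class EventBus:
    def __init__(self):
        self._subs = {}

    def subscribe(self, topic, handler):
        self._subs.setdefault(topic, []).append(handler)

    def publish(self, topic, value):
        for handler in self._subs.get(topic, []):
            handler(value)

# Static composition: a monolithic firmware image could inline this call.
sensor = Sensor()
controller = Controller(sensor.read)
assert controller.step() == 84
```

Because the wiring is known statically, a composition tool can also verify at build time that every required input is connected, which is the basis for the predictability claims above.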
There may also be a need for a run-time environment that supports the
component methodology with a set of services. This environment enables
component intercommunication (those aspects which are not performed at
design time) and, where relevant, control of the behavior of the components.
Figure 2.1 shows different environments in a component life cycle. The figure is
adopted from (Crnkovic & Larsson, 2002).
For large embedded systems the resource constraints are not the primary
concerns. Complexity and interoperability play a much more important role.
Also due to complexity, the development of such systems is very expensive and
cutting the development costs is highly prioritized. For this reason general-
purpose component technologies are of more interest than in the case for small
systems.
Embedded systems vary from very small systems to very large systems.
For small systems there are strong constraints related to different resources,
such as power or memory consumption. In most cases, embedded systems
are real-time systems. For these, as well as for large embedded systems,
there are strong demands on reliability, functionality, efficiency, and other
characteristics that depend on the domain or application. Finally, in many
domains, the product life cycle is very long – it can stretch to several decades.
In this way, the research concludes that many of the most important
requirements of the embedded systems are related to extra-functional
properties. This implies that development and maintenance of such systems are
very costly. In particular, activities related to verification and guaranteed
behavior (formal verification, modeling, tests, etc.) and maintenance (adaptive
maintenance, debugging, regression testing, etc.) require a lot of effort. For
these reasons the technologies and processes that lead to lower costs for these
activities are very attractive and desirable.
(i) Process level (for example, a valve in a water pipeline, a boiler, etc.);
(ii) Field level, which concerns sensors, actuators, drivers, etc.;
(iii) Group control level, which concerns controller devices and
applications that control a group of related process-level devices in a
closed-loop fashion;
Notice that, even if the higher levels are not embedded, they are of
utmost importance, as they need to be interoperable with the lower levels,
which greatly influences the possible choices of the component model and,
ultimately, the design choices. The integration requirements have in many
cases led to a decision to use component technologies that are not appropriate
for embedded systems but provide better integration possibilities. Depending
on the level, the nature of the requirements and the implementation will be
quite different. In general, the lower the level, the stronger the real-time
requirements (including timing predictability) and the resource
limitations. Also, the component-based approach will include different
concepts at different levels. The most important quality attributes in industrial
automation, according to the researchers, are:
1 - During the first step, a list of relevant quality attributes was gathered;
2 - In the next step, technical representatives from a number of vehicular
companies placed priorities on each of the attributes in the list, reflecting
their respective companies' views;
3 - Finally, a synthesis step was performed, resulting in a description of
the desired quality-attribute support in a component technology for
vehicular systems.
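The three steps above can be sketched as a small aggregation; the company names and priority numbers below are invented for illustration, not data from the cited survey:

```python
# Sketch of the three-step synthesis: gather attributes, collect per-company
# priorities (1 = most important), then synthesize a ranked list.
# Companies and priority values are invented placeholders.

attributes = ["safety", "reliability", "predictability", "usability"]

company_priorities = {
    "company_a": {"safety": 1, "reliability": 3, "predictability": 2, "usability": 4},
    "company_b": {"safety": 2, "reliability": 1, "predictability": 3, "usability": 4},
}

def synthesize(priorities: dict) -> list[str]:
    # Step 3: synthesis -- a lower average rank means a higher priority.
    avg = {a: sum(p[a] for p in priorities.values()) / len(priorities)
           for a in attributes}
    return sorted(attributes, key=lambda a: avg[a])

ranking = synthesize(company_priorities)
assert ranking == ["safety", "reliability", "predictability", "usability"]
```

A real synthesis would also reconcile differing attribute definitions, but averaging ranks captures the core idea of turning per-company views into one prioritized list.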
The list of quality attributes has been collected from different literature
sources in an attempt to cover software qualities of interest to vehicular
manufacturers. In order to reduce a rather long list, attributes with clear
similarities in their definitions have been grouped into more generic types of
properties; e.g., portability and scalability are considered covered by
maintainability. Although such grouping could fade the specific characteristics
of a particular attribute, it puts the focus on the main concerns. In the
ISO/IEC 9126 standard (ISO/IEC 9126, 2001), six quality attributes
(functionality, reliability, usability, efficiency, maintainability, and
portability) are defined for the evaluation of software quality.
However, the standard has not been adopted fully in this research; it is
considered too brief and does not cover attributes important for embedded
systems (e.g., safety and predictability). Furthermore, concepts that are
sometimes mixed with quality attributes (for example, fault tolerance) are not
classified as quality attributes, but rather as methods to achieve qualities
(such as safety). Finally, functionality is of course one of the most important
quality attributes of a product, indicating how well it satisfies stated or
implied needs. However, this research focuses on quality attributes beyond
functionality, often called extra-functional or non-functional properties. The
resulting list of quality attributes, which characterizes component technologies
tailored for vehicular systems, is presented below:
1. Safety
2. Reliability
3. Predictability
4. Usability
5. Extendibility
6. Maintainability
7. Efficiency
8. Testability
9. Security
10. Flexibility
As the last step, the authors provide a discussion in which they combined
the data collected from the companies with the classification of how to support
different quality attributes in (Larsson, 2004). The combination gives an
abstract description of where, which, and how different quality attributes should
2.2.3 Medical
1. Reliability
2. Safety
3. Functionality
4. Portability
5. Modifiability
a. Configurability
b. Extensibility and Evolvability
6. Testability
7. Serviceability
2.2.4 Consumer Electronics
2.4 Summary
In the second part of this chapter, the results of a literature search on the
main quality characteristics of embedded systems in different application
areas were presented, including:
Industrial Automation
Automotive
Medical
Consumer Electronics
General
This research was of great importance to the work, because the quality
characteristics that make up the quality reference model, which will be shown in
Chapter 4 of this thesis, were based on this survey.
Finally, the chapter ends by listing the needs and priorities in research
into the use of the component-based development (CBD) approach for
embedded systems. The list was constructed from reports in the literature by
several researchers in the field of embedded systems.
Chapter 3 – Related Works and Component Quality, Evaluation and Certification 31
This chapter presents the related works and a survey of cutting-edge
embedded software component quality, evaluation, and certification research, in
an attempt to analyze the trends in CBSE/CBD applied to embedded systems
projects and to probe some of the component quality, evaluation, and
certification research directions.
processes. In order to provide a general vision, Table 3.1 shows a set of national
and international standards in the embedded domain.
The embedded software market has grown over the last decade, as has the
necessity of producing software with quality and trust. Thus, obtaining quality
certificates has been a major concern for software companies. In 2002, Weber
(Weber et al., 2002) showed how this tendency influenced Brazilian
software companies from 1995 until 2002.
Management and Assurance. The graph on the right shows this growth in
relation to CMM, which assures the quality of software development processes.
The standard covers the complete safety life cycle, and may need
interpretation to develop sector specific standards. It has its origins in the
process control industry sector.
The safety life cycle has 16 phases, which can roughly be divided into three
groups: phases 1-5 address analysis, phases 6-13 address realization,
and phases 14-16 address operation. All phases are concerned with the safety
function of the system. The standard has seven parts. Parts 1-3 contain the
requirements of the standard (normative), while parts 4-7 are guidelines and
examples for development, and thus informative.
Central to the standard are the concepts of risk and safety function. The
risk is a function of frequency (or likelihood) of the hazardous event and the
event consequence severity. The risk is reduced to a tolerable level by applying
safety functions which may consist of E/E/PES and/or other technologies.
While other technologies may be employed in reducing the risk, only those
safety functions relying on E/E/PES are covered by the detailed requirements of
IEC 61508.
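The relationship between risk, event frequency, and consequence severity can be sketched as a simple qualitative risk matrix. The index scales and the tolerability threshold below are illustrative assumptions, not values taken from IEC 61508:

```python
# Sketch: qualitative risk classification as risk = f(frequency, severity).
# The scales and the threshold are illustrative assumptions; IEC 61508
# defines its own tables and SIL allocation procedure.

FREQUENCY = ["improbable", "remote", "occasional", "probable", "frequent"]
SEVERITY = ["negligible", "marginal", "critical", "catastrophic"]

def risk_class(frequency: str, severity: str) -> int:
    """Combine frequency and severity indices into a risk class (higher = worse)."""
    return FREQUENCY.index(frequency) + SEVERITY.index(severity)

def tolerable(frequency: str, severity: str, threshold: int = 3) -> bool:
    """A safety function must reduce risk until the class falls below the threshold."""
    return risk_class(frequency, severity) < threshold

assert risk_class("frequent", "catastrophic") == 7   # worst case
assert tolerable("improbable", "marginal")           # already tolerable
assert not tolerable("probable", "critical")         # needs a safety function
```

In the standard's terms, a safety function lowers the effective frequency (or severity) of the hazardous event until the residual risk is tolerable.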
Thus, the general objective for this next series is to respond to the
evolving needs of users through an improved and unified set of normative
documents covering three complementary quality processes: requirements
specification, measurement and evaluation. The motivation for this effort is to
supply those responsible for developing and acquiring software products with
quality engineering instruments supporting both the specification and
evaluation of quality requirements.
• Quality model and guide: to describe the model for software product
internal and external quality, and quality in use. The document will
present the characteristics and sub-characteristics for internal and
external quality and characteristics for quality in use.
The next section will present in more detail the Quality Model Division, the
Quality Evaluation Division, and the Quality Measurement Division. These three
divisions are the basis of the SQuaRE project and contain the
guidelines/techniques that guided this thesis during the proposal of the software
component quality methodology. It is important to note that these five SQuaRE
modules were still in draft versions and some modifications will probably
be made before their final versions. Thus, the idea is to accommodate such
modifications as the standard evolves.
hired these developers. In both cases, the quality attributes may be directly
observed and assured by these stakeholders.
The ISO/IEC 2502n (ISO/IEC 25020, 2007) division tries to improve the
quality measurements provided by previous standards such as ISO/IEC 9126-2
(external metrics) (ISO/IEC 9126, 2001), ISO/IEC 9126-3 (internal metrics), and
ISO/IEC 9126-4 (quality-in-use metrics). This standard improves some
aspects of quality measurement, the most significant being the adoption of the
Goal-Question-Metric (GQM) paradigm (Basili et al., 1994); thus, the definition
of metrics becomes more flexible and adaptable to the software product
evaluation context.
survey is on processes for assuring component quality, it does not cover these
works, which deal only with isolated aspects of component quality.
One year later, in 1994, Wohlin et al. (Wohlin et al., 1994) presented
the first method of component certification using modeling techniques, making
it possible not only to certify components but to certify the system containing
the components as well. The method is composed of the usage model and the
usage profile. The usage model is a structural model of the external view of the
components, complemented with a usage profile, which describes the actual
probabilities of different events that are added to the model. The failure
statistics from the usage test form the input of a certification model, which
makes it possible to certify a specific reliability level with a given degree of
confidence. An interesting point of this approach is that the usage and profile
models can be reused in subsequent certifications, with some adjustments that
may be needed according to each new situation. However, even reusing those
models, the considerable amount of effort and time that is needed makes the
certification process a hard task.
One different work that can be cited was published in 1994. Merrit
(Merrit, 1994) presented an interesting suggestion: the use of components
certification levels. These levels depend on the nature, frequency, reuse and
importance of the component in a particular context, as follows:
After analyzing this certification process, Rohde et al. found some points
that require improved formulation in order to increase the certification quality,
such as the techniques to find errors (i.e., the major errors are more likely to be
semantic, and not locally visible, rather than syntactic, which is what this
process was looking for) and thus the automated tools that implement such
techniques. In summary, Rohde et al. considered only test techniques to obtain
the defect results in order to certify software components. This is only one of
the important techniques that should be applied to component certification.
According to Voas, this approach is not foolproof and perhaps not well
suited to all situations. For example, the methodology does not certify that a
component can be used in all systems. In other words, Voas focused his
approach in certifying a certain component within a specific system and
environment, performing several types of tests according to the three
techniques that were cited above.
The work is restricted because it verifies the component quality from only
one point of view (data memory usage), among the many other characteristics
to be considered. Furthermore, the languages most commonly used for embedded
development are C and C++, whereas Java is widely used for the development
of desktop systems.
In 2001, Stafford et al. (Stafford et al., 2001) developed a model for the
component marketplaces that supports prediction of system properties prior to
component selection. The model is concerned with the question of verifying
functional and quality-related values associated with a component. This work
introduced notable changes in this area, since it presents a CBD process with
support for component certification according to the credentials (like a
component quality label), provided by the component developer. Such
credentials are associated to arbitrary properties and property values with
components, using a specific notation such as <property,value,credibility>.
Through credentials, the developer chooses the best components to use in the
application development based on the “credibility” level.
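The credential notation can be sketched as follows; the component names, properties, and the 0-to-1 credibility scale are invented for illustration:

```python
from dataclasses import dataclass

# Sketch of credentials in the spirit of Stafford et al. (2001): arbitrary
# properties and values attached to a component, each with a credibility
# level.  The components, properties, and the 0-1 scale are assumptions.

@dataclass(frozen=True)
class Credential:
    prop: str
    value: object
    credibility: float  # e.g., 0.0 (merely claimed) .. 1.0 (independently certified)

@dataclass
class Component:
    name: str
    credentials: list[Credential]

    def credibility_of(self, prop: str) -> float:
        return max((c.credibility for c in self.credentials if c.prop == prop),
                   default=0.0)

candidates = [
    Component("uart_driver_a", [Credential("latency_us", 120, 0.9)]),
    Component("uart_driver_b", [Credential("latency_us", 100, 0.4)]),
]

# Choose the component whose latency claim is most credible.
best = max(candidates, key=lambda comp: comp.credibility_of("latency_us"))
assert best.name == "uart_driver_a"
```

Note the trade-off the model exposes: driver B claims a better value, but driver A's claim is better substantiated, and selection here is driven by credibility.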
Besides these questions, there are others that must be answered before a
component certification process is achieved, some of these apparently as simple
as: what does it mean to trust a component? (Hissam et al., 2003), or as
complex as: what characteristics of a component make it certifiable, and what
kinds of component properties can be certified? (Wallnau, 2003).
(i) Planning the evaluation, where the evaluation staff is defined, the
stakeholders are identified, the required resources are estimated, and
the basic characteristics of the evaluation activity are determined;
(ii) Establishing the criteria, where the evaluation requirements
are identified and the evaluation criteria are constructed;
(iii) Collecting the data, where the component data are collected,
the evaluation plan is carried out, and the evaluation is executed; and
(iv) Analyzing the data, where the results of the evaluation are
analyzed and some recommendations are given.
However, the proposed process was not evaluated in a real case study,
and thus its real efficiency in evaluating software components is still unknown.
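The four activities above can be sketched as a minimal process skeleton; the data shapes, stubbed measurements, and pass/fail rule are invented placeholders, not part of the proposed process:

```python
# Sketch of the four-step component evaluation process described above.
# The data shapes, measurements, and scoring rule are assumptions.

def plan_evaluation(component: str) -> dict:
    # (i) define staff, stakeholders, resources, basic characteristics
    return {"component": component, "staff": ["evaluator"], "resources": "estimated"}

def establish_criteria(requirements: dict) -> dict:
    # (ii) derive evaluation criteria from the evaluation requirements
    return dict(requirements)

def collect_data(component: str, criteria: dict) -> dict:
    # (iii) execute the evaluation plan; measurements are stubbed here
    measurements = {"reliability": 0.97, "efficiency": 0.80}
    return {name: measurements.get(name, 0.0) for name in criteria}

def analyze_data(data: dict, criteria: dict) -> dict:
    # (iv) compare results against criteria and emit recommendations
    return {name: ("pass" if data[name] >= criteria[name] else "fail")
            for name in criteria}

plan = plan_evaluation("BRConverter")
criteria = establish_criteria({"reliability": 0.95, "efficiency": 0.90})
verdict = analyze_data(collect_data(plan["component"], criteria), criteria)
assert verdict == {"reliability": "pass", "efficiency": "fail"}
```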
This section describes two failure cases that can be found in the
literature. The first failure occurred in the US government, when trying to
establish criteria for certifying components, and the second failure
happened with an IEEE committee, in an attempt to obtain a component
certification standard.
Thus, from 1993 until 1996, the NSA and NIST used the Trusted
Computer Security Evaluation Criteria (TCSEC), a.k.a. the "Orange Book"5, as
the basis for the Common Criteria6, aimed at certifying the security features of
components. Their effort was not crowned with success, at least partially
because it defined no means of composing criteria (features) across classes
of components, and supported compositional reasoning only for a
restricted set of behavioral assembly properties (Hissam et al., 2003).
5. http://www.radium.ncsc.mil/tpep/library/tcsec/index.html
6. http://csrc.nist.gov/cc
Still, third-party evaluation is often viewed as a good way of building trust
in software components. Trust is a property of an interaction and is achieved to
various degrees through a variety of mechanisms. For example, when
purchasing a light bulb, one expects that the base of the bulb will screw into the
socket in such a way that it will produce the expected amount of light. The size
and threading have been standardized, and a consumer "trusts" that the
manufacturer of any given light bulb has checked to make certain that each bulb
conforms to that standard within some acceptable tolerance of some set of
property values. The interaction between the consumer and the bulb
manufacturer involves an implicit trust (Stafford et al., 2001).
In the case of the light-bulb there is little fear that significant damage
would result if the bulb did not in fact exhibit the expected property values. This
is not the case when purchasing a gas connector. In this case, explosion can
occur if the connector does not conform to the standard. Gas connectors are
certified to meet a standard, and nobody with concern for safety would use a
connector that does not have such a certificate attached. Certification is a
mechanism by which trust is gained. Associated with certification is a higher
requirement for and level of trust than can be assumed when using implicit trust
mechanisms (Stafford et al., 2001).
When these notions are applied to CBSD, it makes sense to use different
mechanisms to achieve trust, depending upon the level of trust that is required.
Regarding the certification process, the CBSE community is still far from
reaching a consensus on how it should be carried out, what are its requirements
and who should perform it. Further, third party certification can face some
difficulties, particularly due to the relative novelty of this area (Goulao et al.,
2002a).
However, these works still need some effort to conclude the proposed
models and to prove their trustworthiness, and they need a definition of which
requirements are essential for measuring quality in components. Even so, a
unified and prioritized set of CBSE requirements for reliable components is a
challenge in itself (Schmidt, 2003).
3.6 Summary
This chapter was divided into two parts. The first part presented the
main related works in the context of this thesis: embedded software
component quality evaluation in different domains. It also presented the
SQuaRE project, a software product quality requirements and evaluation
standard that contains some ideas regarding component quality assurance.
The second part of the chapter presented a survey of the state-of-
the-art in embedded software component quality, evaluation and
certification research. Some approaches found in the literature, including
the failure cases, were described. Through this survey, it can be noticed that
embedded software component quality, evaluation and certification is still
immature, and further research is needed in order to develop processes,
methods, techniques, and tools aimed at obtaining well-defined standards for
embedded software component evaluation. Since trust is a critical issue in
CBSE, this chapter also presented some concepts of component certification;
as shown, some research is still needed in order to achieve the reliability
that the market expects from CBSE.
Chapter 4 - Embedded Software component Quality Evaluation Methodology 59
that hardware vendors currently have. They still cannot count on data sheets
and catalogues like those available for hardware components, which describe
all their characteristics in detail.
The methodology was developed with two main goals: (i) to evaluate
embedded software component quality from the viewpoint of acquirers and
evaluators; and (ii) to be the default quality evaluation layer applied to
embedded software components before they are stored in the system
repository of the Robust Framework for Software Reuse project (Almeida et al.,
2004), which is under development by the RiSE group. According to Councill
(2001), it is better to develop components and systems from scratch
than to reuse a component of poor or unknown quality, thereby
running the risk of negatively impacting project planning, quality and time-
to-market.
The layer of the robust framework for software reuse that considers the
quality aspects of the components was divided into two groups;
module of the EQP, and afterwards it is used to create the evaluation requirements
document, as shown in Figure 4.2. Differently from other software product
quality models found in the literature, such as (McCall et al., 1977), (Boehm et
al., 1978), (Hyatt et al., 1996), (ISO/IEC 9126, 2001), (Georgiadou, 2003), this
model considers, in the embedded domain, the Component-Based
Development (CBD) characteristics and quality attributes that improve reuse.
Figure 4.2: Flow Diagram of Embedded Software Component Quality Evaluation Methodology.
Table 4.1: Details of the EQP modules, activities and steps.
The quality evaluation from the developer’s viewpoint is not covered by
this thesis, since it concerns requirements and recommendations for the
practical implementation of software product evaluation when the evaluation is
conducted in parallel with development.
Figure 4.3 shows the whole EQP using Business Process Modeling
Notation (BPMN) (Williams, 1967). BPMN is a standard for business process
modeling that provides a graphical notation for specifying business processes in
a Business Process Diagram (BPD), based on a flowcharting technique very
similar to activity diagrams from the Unified Modeling Language (UML). The
primary goal of BPMN is to provide a standard notation that is readily
understandable by all stakeholders.
In the first module of the EQP, the scope, purpose, objectives and
requirements of evaluation will be defined. This module includes the definition
of the component for evaluation, the architectures, environment and scenarios.
The evaluation staff is formed and it must agree on which quality characteristics
present in the embedded quality model are to be evaluated and to which
evaluation level they are to be assured. The formal record of this agreement of
what will be covered by the evaluation is called the evaluation requirements.
The model that will be defined in Section 4.3 is used. Table 4.2 shows the
summary of the Establish Evaluation Requirements module, and Figure 4.4
shows the module in graphic form.
The evaluation staff should create a specific quality model for the
evaluation. It should be based on the EQM, which will be presented in Section
4.2 of this chapter. At this moment the evaluation staff selects a sub-set
of the quality characteristics to compose the specific EQM for performing the
evaluation.
This module includes guidelines to help the evaluation staff to define the
optimum evaluation level for each quality characteristic according to the
application domain. In this way, evaluation techniques are proposed, metrics
and criteria are established and score levels are defined. The quality evaluation
techniques are used in conjunction with the importance of each quality
characteristic so that the evaluation staff can define the evaluation techniques
for the embedded component. The second activity is to specify the metrics that
will be used in the evaluation. The embedded metrics approach was created to
aid the specification of metrics for the evaluation. The last activity is to define a
meaning for the measurement values. Table 4.3 shows the summary of the
Specify the Evaluation module, and Figure 4.5 shows the module in graphic form.
i. Select Metrics
The evaluation staff should define, following the guidelines, one of the
three quality levels present in the EQL for each quality characteristic selected to
compose the evaluation, considering the importance of each quality
characteristic. To finalize this activity, the evaluation staff must select the
quality attributes and evaluation techniques, based on the defined quality
model, to which metrics will be assigned and then measured.
This scale is proposed by ISO 15504-2, but the evaluation staff should
analyze whether the proposed scale is appropriate to the evaluation;
if not, the evaluation staff can propose a new scale or modify it. This
depends on the reliability level expected from the component, i.e. the
EQL defined to evaluate the component.
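As a concrete illustration, the mapping from a measured achievement percentage onto the ISO/IEC 15504-2 four-point ordinal scale can be sketched as below; the function name is illustrative, and keeping the standard's default bands is an assumption, since, as noted above, the evaluation staff may adjust the scale to the EQL defined for the component:

```python
def rate_achievement(percent):
    """Map an achievement percentage onto the ISO/IEC 15504-2 ordinal scale.

    Thresholds follow the standard's default bands; per the methodology,
    the evaluation staff may replace or adjust this scale.
    """
    if not 0 <= percent <= 100:
        raise ValueError("achievement must be a percentage in [0, 100]")
    if percent <= 15:
        return "N"  # Not achieved
    if percent <= 50:
        return "P"  # Partially achieved
    if percent <= 85:
        return "L"  # Largely achieved
    return "F"      # Fully achieved

assert rate_achievement(10) == "N"
assert rate_achievement(90) == "F"
```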
Planning for each evaluation is different, since the evaluation may involve
both different types of components (from simple to extremely complex) and
different system expectations placed on the end product (from trivial to highly
demanding) (Comella-Dorda, 2003). This activity is composed of four steps until
the evaluation plan is defined, as shown in Table 4.4.
i. Measure characteristics
Once the evaluation plan has been built, in this module it is executed using the
following steps:
The EQM, as shown in Table 4.6, is composed of three parts:
(i) quality characteristics and sub-characteristics, quality attributes and metrics;
(ii) quality in use; and (iii) additional information. Some relevant component
information is not supplied in the other component quality models analyzed,
(Goulão et al., 2002b), (Bertoa et al., 2002), (Meyer, 2003), (Simão et al.,
2003), (Alvaro et al., 2005); the negative and positive aspects of each model
were considered and contributed to the definition of the EQM.
Table 4.6: The Embedded software component Quality Model and its parts

Embedded software component Quality Model (EQM) =
Quality Characteristics (Characteristics, Sub-characteristics, Attributes, Evaluation Techniques)
+ Quality in Use Characteristics (Productivity, Satisfaction, Security, Effectiveness)
+ Additional Information:
• Technical Information: Component Version; Programming Language; Design and Project Patterns; Operational Systems Supported; Compiler Version; Compatible Architecture; Minimal Requirements; Technical Support; Compliance
• Organizational Information: CMMi Level; Organization’s Reputation
• Marketing Information: Development time; Cost; Time to market; Targeted market; Affordability; Licensing
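The three-part composition in Table 4.6 can be sketched as a simple data structure; all class and field names below are illustrative, not part of the model itself:

```python
from dataclasses import dataclass, field

@dataclass
class QualityCharacteristic:
    """First part of the EQM: a characteristic broken down into
    sub-characteristics, quality attributes and evaluation techniques."""
    name: str
    sub_characteristics: dict = field(default_factory=dict)  # sub-char -> [attributes]
    techniques: list = field(default_factory=list)

@dataclass
class EQM:
    characteristics: list   # e.g. Functionality, Reliability, ...
    quality_in_use: list    # Productivity, Satisfaction, Security, Effectiveness
    additional_info: dict   # Technical / Organizational / Marketing information

model = EQM(
    characteristics=[QualityCharacteristic("Functionality")],
    quality_in_use=["Productivity", "Satisfaction", "Security", "Effectiveness"],
    additional_info={
        "Technical": {"Component Version": "1.0"},
        "Organizational": {"CMMi Level": 3},
        "Marketing": {"Licensing": "per-seat"},  # sample values only
    },
)
assert len(model.quality_in_use) == 4
```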
Quality Model (third column). The main idea is to refine and customize it in
order to accommodate the particular characteristics of components in the
embedded domain.
Table 4.8 and Figure 4.8 summarize the changes that were performed in
relation to ISO/IEC 25010. The characteristics and sub-characteristics
represented in bold were added due to the need to evaluate certain CBSD-
related properties that were not covered by ISO/IEC 25010. The sub-
characteristic that is crossed out was present in ISO/IEC 25010, but was
removed in the proposed model. Finally, the sub-characteristic in italics had its
name changed.
characteristics are not present in the research, as shown in Table 4.7; in
conjunction with a set of embedded engineers and quality engineers from
C.E.S.A.R., a Brazilian software factory, it was decided not to include these
characteristics in the EQM.
Functionality Characteristic:
Run-time Sub-Characteristics
Real-time
1. Response time (Latency): This attribute measures the time taken
since a request is received until a response has been sent;
2. Execution time: This attribute measures the time that the
component executes a specified task, under specified conditions.
3. Throughput (“out”): This attribute measures the output that can
be successfully produced over a given period of time;
4. Processing Capacity (“in”): This attribute measures the amount
of input information that can be successfully processed by the
component over a given period of time;
5. Worst case execution time: This attribute measures the time that
the component takes to execute a specified task, under any conditions;
Accuracy
6. Precision: This attribute evaluates whether the component executes
as specified by the user requirements;
Security
7. Data Encryption: This attribute expresses the ability of a
component to deal with encryption in order to protect the data it
handles;
8. Controllability: This attribute indicates how the component is able
to control the access to its provided interfaces;
9. Auditability: This attribute shows if a component implements any
auditing mechanism, with capabilities for recording users access to
the system and to its data;
Life-cycle Sub-Characteristics
Self-contained
10. Dependability: This attribute indicates if the component is not self-
contained, i.e. if the component depends on other components to
provide its specified services;
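Several of the run-time attributes above (response time, execution time and worst case execution time) can be estimated by simple dynamic measurement. The sketch below assumes a hypothetical component operation for illustration; note that an observed maximum is only a lower bound on the true worst-case execution time, which the methodology addresses separately through formal proofs:

```python
import time

def measure_execution(component_call, request, runs=100):
    """Collect per-invocation execution times for a component operation.

    `component_call` is any callable exposing the component's provided
    interface; `request` is a sample input (both are assumptions here).
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        component_call(request)
        samples.append(time.perf_counter() - start)
    return {
        "mean_execution_time": sum(samples) / len(samples),  # attribute 2
        "observed_worst_case": max(samples),  # only an estimate of attribute 5
    }

# Hypothetical component operation used solely for illustration.
def checksum_component(data):
    return sum(data) % 256

report = measure_execution(checksum_component, bytes(range(64)))
assert report["observed_worst_case"] >= report["mean_execution_time"]
```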
Reliability Characteristic:
Run-time sub Characteristics:
Recoverability
11. Error Handling: This attribute indicates whether the component
can handle error situations, and the mechanism implemented in that
case;
12. Transactional: This attribute verifies the presence of transactional
behaviors, logic and structures in the component;
Fault Tolerance
13. Mechanism availability: This attribute indicates the existence of
fault-tolerance mechanisms implemented in the component;
14. Mechanism efficiency: This attribute measures the real efficiency
of the fault-tolerance mechanisms that are available in the
component;
Safety
32. Test suite provided: This attribute indicates whether some test
suites are provided for checking the functionality of the component
and/or for measuring some of its properties (e.g. performance);
33. Extensive component test cases: This attribute indicates if the
component was extensively tested before being made available to the
market;
34. Component tests in a set of environments: This attribute
indicates in which environments or platforms a certain component
was tested;
35. Formal proofs: This attribute indicates if the component tests were
formally proved;
Portability Characteristic:
Run-time Sub-Characteristics:
Deployability
36. Complexity level: This attribute indicates the effort needed to
deploy a component in a specified environment.
Life-cycle Sub-Characteristics:
Replaceability
37. Backward Compatibility: This attribute is used to indicate
whether the component is “backward compatible” with its previous
versions or not;
Flexibility
38. Mobility: This attribute indicates in which containers this
component was deployed and to which containers this component
was transferred;
39. Configuration capacity: This attribute indicates the percentage of
the changes needed to transfer a component to other environments;
Reusability
40. Domain abstraction level: This attribute measures the
component’s abstraction level, related to its business domain;
41. Architecture compatibility: This attribute indicates the level of
dependency on a specified architecture;
42. Modularity, cohesion, coupling and simplicity: This attribute
analyzes the modularity level, internal organization, cohesion,
The second part of the proposed quality model (EQM) is the quality in
use characteristics, which show the component’s behavior in the real world. It
is measured through feedback on the user’s satisfaction in the use and
application of the component in the real environment, analyzing the results
according to their expectations. These features bring valuable information to
users who wish to use the component; this part is called Quality in Use
characteristics and is composed of Effectiveness, Productivity, Safety and
Satisfaction.
The third and last part of the proposed quality model is composed of
additional information, which provides useful subsidies to help the user in
component selection. These characteristics are called Additional Information
and are composed of: Technical Information, Organizational Information and
Marketing Information. A brief description of each type of additional
information is shown below.
Technical Information:
Organizational information:
Marketing information:
The evaluation level defines the depth of the evaluation. Levels can be
chosen independently for each characteristic. Table 4.11 gives some indication
as to which level a given embedded software component should be evaluated at.
The cost of evaluation will depend on the evaluation level, the size of the
component and other factors, but the higher the quality level, the more costly
the evaluation tends to be.
for all types of embedded software components. Figure 4.10 shows an example
of an embedded component whose quality characteristics were evaluated at
different quality levels (EQL). The evaluation level of a component used in a
nuclear system should be more rigorous than that of a component used for
entertainment, because the application risks are much larger. To implement
this flexibility, the evaluation should be level-oriented, so components with
different application risks must also be evaluated differently.
EQL I, for reliability those techniques from EQL II, for usability those
techniques from EQL III and so on). The idea is to provide more flexibility
during level selection, in order to facilitate the model’s usage and
accessibility. Table 4.11 gives some indication as to which level a given embedded
software component should be evaluated at. Each vertical column of Table 4.11
represents a different layer in which the embedded software component should
be considered when evaluating its potential damage and related risks. The level
of damage in each layer is the first guideline used to decide which EQL is more
interesting for the organization; the important aspects are those related to the
environment, to safety/security and to economy. However, these are mere
guidelines, and should not be considered a rigid classification scheme. These
few guidelines were based on (Boegh et al., 1993), (Solingen, 2000), (ISO/IEC
25000, 2005) and extended to the component context.
Moreover, a set of works from the literature about each single technique
was analyzed in order to identify the real necessity of those evaluation
techniques. The analyzed works were from diverse areas, such as:
software/component quality attributes (Larsson, 2004), software/component
testing (Freedman, 1991), (Councill, 1999), (Gao et al., 2003), (Beydeda and
Gruhn, 2003), software/component inspection (Fagan, 1976), (Parnas and
Lawford, 2003), software/component documentation (Kotula, 1998),
(Lethbridge et al., 2003), (Taulavuori et al., 2004), component interfaces and
contracts (pre- and post-conditions) (Beugnard et al., 1999), (Reussner, 2003),
software/component metrics (Brownsword et al., 2000), (Cho et al., 2001),
software/component reliability (Wohlin & Regnell, 1998), (Hamlet et al., 2001),
(McGregor et al., 2003), software component usability (Bertoa et al., 2006),
software/component performance (Bertolino & Mirandola, 2003), (Chen et al.,
2005), component reusability (Caldiera & Basili, 1991), (Gui & Scott, 2007) and
component proofs (Hall, 1990). In this way, each of the selected techniques
brings, for one specific aspect, a kind of quality assurance to software
components, and they are therefore essential to integrate into the evaluation
techniques.
Table 4.12: Embedded Quality Level – EQL and the evaluation techniques (EQL I–III).

Functionality: evaluation measurement (time analysis); precision analysis; functional testing (black box); system test; dependency analysis; code inspection; structural tests (white-box) with coverage criteria; formal proof.

Reliability: error prevention, handling and recovery analysis; suitability analysis; reliability growth model; fault tolerance analysis; dependability analysis; code inspection; algorithmic complexity; structural tests (white-box); formal proof.

Usability: effort to configure analysis; inspection of user interfaces; user mental model; documentation analysis (user guide, architectural analysis, etc.); conformity to standard interfaces; analysis of the pre- and post-conditions in laboratory.

Efficiency: evaluation measurement (memory, energy and resource); tests of performance (memory, consumption and resource); performance and resource profiling analysis; algorithmic complexity.

Maintainability: changeability analysis; documents inspection; analysis of the test-suite provided; code metrics and programming rules; static analysis; extensibility analysis; analysis of the component development process; traceability evaluation; component test formal proof.

Portability: mobility analysis; backward compatibility analysis; conformity to programming rules; configurability analysis; cohesion of the documentation with the source code analysis; deployment analysis; hardware/software compatibility analysis; domain abstraction analysis; analysis of the component’s architecture; cohesion, coupling, modularity and simplicity analyses.
• EQL I: In this quality level the goal is to ensure that the documentation
is consistent with the component’s functionalities, to assess the effort to
configure, use, reuse and maintain it, and to check its compatibility with
the specified architecture;
• EQL II: On the second level the aim is to analyze the component’s
execution in the environment, subjecting it to a set of metrics and
evaluation techniques, verifying how the component can avoid faults and
errors, and analyzing the provided and required interfaces and the use of
best practices;
• EQL III: In the last quality level the source code of the component is
inspected and tested, and the algorithmic complexity is examined in order
to prove its performance. The formal proof of the component’s
functionalities and reliability is required at this level in order to achieve
the highest possible level of trust. The aim of this level is to assure the
component’s performance and to increase the trust in the component as
much as possible.
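A minimal sketch of this level-oriented selection, assuming each characteristic can be assigned its own EQL as in Figure 4.10; the technique sets below are an illustrative subset drawn from Table 4.12, not the thesis's exact per-level assignment:

```python
# Illustrative subset: (characteristic, level) -> techniques to apply.
TECHNIQUES = {
    ("Functionality", "EQL I"): ["documentation analysis"],
    ("Functionality", "EQL II"): ["functional testing (black box)", "system test"],
    ("Functionality", "EQL III"): ["code inspection", "structural tests", "formal proof"],
    ("Reliability", "EQL II"): ["error handling analysis", "fault tolerance analysis"],
}

def plan_evaluation(levels):
    """levels: mapping characteristic -> chosen EQL.

    Returns the techniques to apply per characteristic, so components with
    different application risks get evaluated to different depths.
    """
    return {
        characteristic: TECHNIQUES.get((characteristic, level), [])
        for characteristic, level in levels.items()
    }

# A component whose functionality is critical but whose reliability
# requirements are moderate:
plan = plan_evaluation({"Functionality": "EQL III", "Reliability": "EQL II"})
assert "formal proof" in plan["Functionality"]
```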
One of the main concerns during the EQL definition is that the levels and
the selected evaluation techniques must be appropriate to completely evaluate
the quality attributes proposed in the EQM, presented in Section 4.2. This is
achieved through a mapping of quality attributes to evaluation techniques.
For each quality attribute proposed in the EQM, at least one technique should
be proposed in order to cover it completely and to facilitate its proper
measurement. Table 4.13 shows this matching between the EQM quality
attributes and the proposed EQL evaluation techniques.
Table 4.13 shows that the main concern is not to propose a large amount
of isolated techniques, but a set of techniques that are essential for
measuring each quality attribute and that complement each other, thus
becoming useful to compose the Embedded Quality Level evaluation
techniques.
Table 4.13 (excerpt): sub-characteristics, quality attributes and evaluation techniques (EQL I–III).

Reliability:
• Fault Tolerance / Mechanism availability: fault tolerance analysis; code inspection
• Fault Tolerance / Mechanism efficiency: reliability growth model; formal proof; code inspection
• Safety / Integrity: algorithmic complexity; dependability analysis

Efficiency:
• Resource Behavior / Peripheral utilization: evaluation measurement; tests of performance
• Resource Behavior / Mechanism efficiency: algorithmic complexity; performance and resource profiling analysis
• Resource Behavior / Amount of Energy: evaluation measurement

Maintainability:
• Stability / Modifiability: evaluation measurement; code metrics and programming rules; documents inspection; static analysis
• Stability / Extensibility: extensibility analysis
• Changeability / Change Effort: changeability analysis
• Changeability / Modularity: code metrics and programming rules
• Testability / Test suite provided: analysis of the test-suite provided
• Testability / Extensive component test cases: analysis of the component development process
• Testability / Component tests in a set of environments: traceability evaluation
• Testability / Formal proofs: component test formal proof
The cost of the evaluation is beyond the scope of this thesis; however, it
is observed empirically that the cost of evaluation tends to increase with the
highest level of quality assessment (EQL III).
GQM is the same technique proposed for use in ISO/IEC 25000 for tracking
software product properties.
Objective: metrics that depend only on the object that is being measured
and not on the viewpoint from which they are taken, e.g. number of versions of
a document, staff hours spent on a task, size of a program.
Subjective: metrics that depend on both the object being measured and
the viewpoint from which they are taken, e.g. readability of a text, level of user
satisfaction.
5. Collect, validate and analyze the data in real time to provide feedback
to projects for corrective action; and
In order to help the evaluation staff during the execution of the GQM
step, a set of metrics based on the proposed methodology was created.
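A GQM entry from the metrics approach can be sketched as follows; the class and field names are illustrative, while the goal, question and metric formula follow the throughput ("out") row of the metric tables:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GQMMetric:
    goal: str
    question: str
    objective: bool  # objective vs. subjective, as defined above
    formula: Callable[..., float]

# Throughput ("out"): (amount of output produced with success over a
# period of time * 100) / number of invocations; 0 <= x <= 100,
# closer to 100 is better.
throughput = GQMMetric(
    goal="Analyze the output successfully produced over a given period of time",
    question="How much output can be produced with success over a period of time?",
    objective=True,
    formula=lambda successes, invocations: successes * 100 / invocations,
)

assert throughput.formula(successes=45, invocations=50) == 90.0
```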
Table 4.14 presents the complete steps for the quality evaluation,
starting with quality characteristics, sub-characteristics, quality attributes, EQL
level, evaluation techniques, metrics from the EML, interpretation and references.
Table 4.14 (excerpt): quality attributes, evaluation techniques, goals, questions, metrics, interpretations and references for the Functionality characteristic (EQL I–III).

• Throughput (“out”) / Structural tests (white-box)
Goal: analyze the output that can be successfully produced over a given period of time.
Question: how much output can be produced with success over a period of time?
Metric: (amount of output produced with success over a period of time * 100) / number of invocations.
Interpretation: 0 <= x <= 100; closer to 100 is better.
References: (Beydeda and Gruhn, 2003), (Gao et al., 2003).

• Processing capacity (“in”) / Structural tests (white-box)
Goal: analyze the amount of input information that can be successfully processed by the component over a given period of time.
Question: how much input can be processed with success over a period of time?
Metric: (amount of input processed with success over a period of time * 100) / number of invocations.
Interpretation: 0 <= x <= 100; closer to 100 is better.
References: (Beydeda and Gruhn, 2003), (Gao et al., 2003).

• Worst case execution time / Formal proofs
Goal: determine the maximum time taken to perform the task.
Question: what is the maximum time to execute the task in the worst case?
Metric: maximum execution time in a formal model of the component.
Interpretation: x > 0; closer to 0 is better.
Reference: (Boer et al., 2002).

• Precision / Functional tests (black box)
Goal: validate required functional features and behaviors from an external view.
Question: how precise are the required functions and behaviors of the component?
Metric: number of precise functions with correct behavior / number of functions.
Interpretation: 0 <= x <= 1; closer to 1 is better.
References: (Beydeda and Gruhn, 2003), (Gao et al., 2003).

• Precision / Structural tests (white-box)
Goal: validate program structures, behaviors and logic of the component from an internal view.
Question: how well structured are the code and the logical implementation of the component?
Metric: number of functions with good implementation (well structured and logical) / number of functions.
Interpretation: 0 <= x <= 1; closer to 1 is better.
References: (Beydeda and Gruhn, 2003), (Gao et al., 2003).

• Data encryption / System test
Goal: evaluate the encryption of the input and output data of the component.
Question: how complete is the data encryption implementation?
Metric: number of services that must have data encryption / number of services that have encryption.
Interpretation: 0 <= x <= 1; closer to 1 is better.
Reference: (Gao et al., 2003).

• Data encryption / Code inspection
Goal: verify that coding style guidelines are followed, comments in the code are relevant and of appropriate length, naming conventions are clear and consistent, and the code can be easily maintained.
Question: how compliant is the component, using a systematic approach to examining the source code?
Metric: number of functions compliant with the systematic approach / number of specified functions.
Interpretation: 0 <= x <= 1; closer to 1 is better.
References: (Fagan, 1976), (Parnas and Lawford, 2003).

• Controllability / System test
Goal: evaluate if the component provides any control mechanism.
Question: how controllable is the component access?
Metric: number of provided interfaces that control the access / number of provided interfaces.
Interpretation: 0 <= x <= 1; closer to 1 is better.
Reference: (Gao et al., 2003).
• Controllability / Code inspection
Goal, question, metric, interpretation and references as in the code inspection row above.

• Auditability / System test
Goal: evaluate if the component provides any audit mechanism.
Question: how controllable is the component audit mechanism?
Metric: number of provided interfaces that log the access (or any kind of data) / number of provided interfaces.
Interpretation: 0 <= x <= 1; closer to 1 is better.
Reference: (Gao et al., 2003).

• Auditability / Code inspection
Goal, question, metric, interpretation and references as in the code inspection row above.

• Dependability (Self-contained) / Dependency analysis
Goal: evaluate the ability of the component to provide by itself all its specified services.
Question: how many functions does the component provide by itself?
Metric: number of functions provided by itself / number of specified functions.
Interpretation: 0 <= x <= 1; closer to 1 is better.
Reference: (Gao et al., 2003).
(Table 4.14 continues with the Reliability and Usability characteristics.)
• Peripheral utilization / Evaluation measurement
Goal: analyze the amount of peripherals required for the component’s correct operation.
Question: how many peripherals are sufficient for the component to work correctly?
Metric: amount of peripherals necessary for the component to work correctly.
Interpretation: list of peripherals required; x > 0; closer to 0 is better.
References: (Bertoa et al., 2002), (Brownsword et al., 2000).

• Peripheral utilization / Tests of performance
Goal: evaluate and measure peripheral utilization in other contexts and operation environments to make sure that it satisfies the performance requirements.
Question: does the component have the same peripheral utilization when in other contexts and operation environments?
Metric: number of contexts where the peripheral utilization is the same / total number of contexts.
Interpretation: 0 <= x <= 1; closer to 1 is better.
Reference: (Gao et al., 2003).

• Resource utilization / Tests of performance
Goal: evaluate and measure data memory utilization in other contexts and operation environments to make sure that it satisfies the performance requirements.
Question: does the component have the same data memory utilization when in other contexts and operation environments?
Metric: number of contexts where the data memory utilization is the same / total number of contexts.
Interpretation: 0 <= x <= 1; closer to 1 is better.
Reference: (Gao et al., 2003).

• Mechanism efficiency / Algorithmic complexity
Goal: quantify how complex a component is in terms of the computer program, or set of algorithms, needed to implement the mechanism to reduce data memory utilization.
Question: how complex is the program component that implements the mechanism to optimize data memory usage?
Metric: number of mechanisms implemented to optimize data memory usage.
Interpretation: x >= 0; closer to ∞ is better.
Reference: (Cho et al., 2001).

• Mechanism efficiency / Performance and resource profiling analysis
Goal: investigate the component’s behavior in dynamic/execution mode in order to analyze performance and determine the efficiency of the mechanism implemented.
Question: how much program memory was saved or reduced by the mechanism implemented?
Metric: amount of program memory used after the mechanism efficiency implementation / amount of program memory used before it.
Interpretation: 0 <= x <= 1; closer to 0 is better.
References: (Bertolino & Mirandola, 2003), (Chen et al., 2005).

• Amount of program memory / Evaluation measurement
Goal: analyze the amount of program memory required for the component’s correct operation.
Question: how much program memory is enough for the component to work correctly?
Metric: amount of program memory necessary for the component to work correctly.
Interpretation: x > 0; closer to 0 is better.
References: (Bertoa et al., 2002), (Brownsword et al., 2000).

• Program memory utilization / Tests of performance
Goal: evaluate and measure program memory utilization.
Modifiability (row truncated): Goal: Analyzes the rules used in the component implementation by collecting a set of metrics. Question: … related to a programming language? Metric: Number of program rules / Number of component functions. Interpretation: closer to … being better. References: (… 2000), (Cho et al., 2001).
Documents Inspection: Goal: Examines documents in detail based on a systematic approach to assess the quality of the component documents. Question: What is the quality level of the component's documents? Metric: Number of documents with quality / Number of documents available. Interpretation: 0 <= x <= 1; closer to 1 being better. References: (Fagan, 1976), (Parnas and Lawford, 2003).
Static Analysis: Goal: Checks the component errors without compiling/executing it, through tools. Question: How many errors does the component have at design time? Metric: Number of errors found at design time. Interpretation: x >= 0; closer to 0 being better. Reference: (Brownsword et al., 2000).
Extensibility / Extensibility analysis: Goal: Evaluates the flexibility to extend the component functions. Question: How extensible is the component? Metric: Execute a set of extensions and analyze the new component behavior; analyze the amount of extensions done and the amount of extensions that work well. References: (Brownsword et al., 2000), (Bertoa et al., 2002), (Bertoa & Vallecillo, 2004).
Maintainability / Changeability (row truncated): Goal: Analyzes the customizable … Question: How many parameters are …? Metric: Number of provided interfaces / …
(Row truncated): Question: How many test cases were executed? What is the coverage of these test cases? It is interesting to analyze the number of bugs that were corrected and the number of bugs discovered during the execution of the tests. Goal fragment: … during the development process … being made available to the market. Reference: (… 2000).
Traceability / Component tests in a set of evaluation environments: Goal: Analyzes the environments where the component can work well. Question: In which environments can this component be executed without errors? Metric: Number of environments that work well / Number of environments defined in the specification. Interpretation: 0 <= x <= 1; closer to 1 being better. Reference: (Gao et al., 2003).
Formal Proofs / Component Test Proofs Analysis (Formal Proof): Goal: Analyzes if the tests are formally proved. Question: How is the coverage of the proof in the test cases? Metric: It is interesting to note whether the amount of formal proof covers the whole set of test cases provided by the component; the higher, the better. Reference: (Boer et al., 2002).
Chapter 4 - Embedded Software component Quality Evaluation Methodology 123
Deployability / Complexity level / Deployment analyses: Goal: Analyzes how complex it is to deploy a component in its specific environment(s). Question: How much time does it take to deploy a component in its environment? Metric: Time taken for deploying a component in its environment; estimate first, then compare with the actual time taken to deploy the component. Reference: (Gao et al., 2003).
Replaceability / Backward Compatibility / Backward compatibility analysis (row truncated): … Reference: (Bertoa et …, 2004).
Portability (row truncated): Metric: … / Number of environments described in its specification; analyze the component …
Portability / Domain abstraction level:
Domain abstraction level / Domain abstraction analysis: Goal: Analyzes the correct separation of concerns in the component. Questions: Can the component be reused in other domain applications? Does the component have inter-related business code? Metric: Analyze the source code and try to reuse the component in other domains. Interpretation: If the component does not contain business code related to a specific domain and can be reused across a set of domains, it is a good candidate to be reused. On the other hand, if it does have code related to a specific domain and it becomes difficult to reuse it across other domains, the component is not a good candidate to be reusable and should be revised. References: (Bay and Pauls, 2004), (Gui & Scott, 2007).
4.5 Summary
This chapter detailed the four modules that compose the methodology for quality evaluation of embedded software components. First, the problem of the lack of quality in software components was explained; then the solution was introduced through a methodology for quality evaluation.
Section 4.1 details the embedded quality evaluation process, called EQP. It consists of four steps to conduct the quality evaluation: establish the evaluation requirements, specify the evaluation, design the evaluation and execute the evaluation.
Section 4.2 describes the suggested evaluation techniques as a way to assess the quality attributes of the component. These techniques are grouped by quality level. There are three quality levels: EQL I, EQL II and EQL III. In EQL I, the evaluation techniques are more basic. EQL II contains the evaluation techniques of EQL I and adds new intermediate evaluation techniques. EQL III contains the evaluation techniques of the two previous levels and adds advanced techniques for quality evaluation.
The last section presented the metric approach used to quantify the evaluation techniques adopted: the Embedded Metrics Approach (EMA), which is based on the Goal Question Metric (GQM) paradigm and is divided into three levels, the conceptual (Goal), operational (Question) and quantitative (Metric).
Chapter 5 – The Experimental Study 126
Thus, we must experiment with techniques to see how and when they
really work, to understand their limits and to understand how to improve them.
5.1 Introduction
The variables that are the objects of the study, which are observed in order to see the effect of changes in the independent variables, are called dependent variables. Often there is only one dependent variable in an experiment. All
Criteria. The quality focus of the study demands criteria that evaluate the real feasibility of using the methodology to evaluate embedded software component quality, and the difficulties users have in using it. The benefits obtained will be evaluated quantitatively through the coverage of the EQM and EQL, and the difficulties of the users in the methodology usage.
Question.
Metric.
The EQM proposed must contain the major quality attributes necessary for any kind of embedded software component evaluation. In this sense, the null hypothesis H0' states that the coverage of the quality attributes proposed in the EQM with respect to the quality attributes used during the component evaluation is less than 80%.
The evaluation staff should define the techniques that will be used to
evaluate each quality attribute defined previously. In this way, the null
hypothesis H0’’ states that the coverage of the evaluation techniques proposed
on the EQL for the quality attributes defined on the component evaluation is
less than 80%.
The values of these hypotheses (80%, 80% and 20%, respectively) were achieved through the feedback of some researchers of the RiSE group, and embedded system designers and quality engineers of a Brazilian CMMi level 3 software company (C.E.S.A.R.). Thus, these values constitute the first step
towards well-defined indices which the methodology must achieve in order to
indicate its viability.
Question.
Metric.
rate conversion is done bidirectionally from one port to another, and the limiting baud rates depend on the specific hardware.
levels) to +3.3V and 0V (high and low levels respectively). As previously said,
the K-line protocol requires different voltage levels and has only one data line to
receive and transmit. So the physical interface between microcontroller board
and ECU needs an intervention as well: the ECU defines +12V as high level. This
conversion will be provided by a small K-line interface board connected between
ECU and microcontroller board, as shown in Figure 5.4.
Figure 5.4: K-line interface board used to connect ECU and microcontroller.
The scenario defined was to convert the PC serial port baud rate of 115,200 bps to the K-line baud rate of 10,400 bps:
UART1 to 10,400
Compile, download and run the component on microcontroller
board;
Configure the terminal emulator of the computer at 115,200 bps;
Send request data string from the computer and wait for reply;
Save the contents of data response received in file.
Training. The evaluators who used the proposed process were trained
before the study began. The training took 12 hours, divided into 4 lectures with
three hours each, during the course. Before and after the training, each student
spent about 16 hours reading the papers about the methodology.
After defining the quality attributes, the evaluation staff must define
which evaluation techniques will be used to measure each quality attribute
proposed previously. Table 5.2 shows the evaluation techniques defined for
evaluating components based on EQL I.
Table 5.2: Evaluation Techniques selected by the evaluation staff
Characteristic | Sub-Characteristic | Quality Attribute | Evaluation Technique | EQL
Functionality | Self-contained | Dependability | Dependency analysis | II
Functionality | Real-Time | Response time (Latency) | Evaluation measurement (Time analysis) | I
Functionality | Real-Time | Execution time | Evaluation measurement (Time analysis) | I
Functionality | Accuracy | Precision | Precision analysis (Evaluation measurement) | I
Functionality | Accuracy | Precision | Functional tests (black box) | I
Functionality | Accuracy | Precision | Structural tests (white box) | II
Reliability | Recoverability | Error Handling | Code Inspection | I
Reliability | Safety | Integrity | Code Inspection | I
Reliability | Safety | Integrity | Algorithmic Complexity | II
Usability | Configurability | Effort to configure | Effort to configure analysis | I
Usability | Configurability | Effort to configure | Inspection of user interfaces | I
Usability | Understandability | | Documentation analysis (User Guide, architectural analysis, etc.) | I
Usability | Attractiveness | Effort to operate | Evaluation measurement | I
Usability | Attractiveness | Effort to operate | Inspection of user interfaces | I
Efficiency | Resource Behavior | Peripheral utilization | Evaluation measurement | I
Efficiency | Energy consumption | Amount of Energy Consumption | Evaluation measurement | I
As happened with H0' for EQL I, H0'' is also rejected, since the evaluation techniques selected are the basic techniques for evaluating the quality attributes selected previously (see Table 5.1). After selecting the quality attributes and the evaluation techniques for EQL I, the evaluation staff should define the metrics using the EMA, the score level and the tools to be used for each quality attribute in order to execute the evaluation. All data generated during the process are collected in order to be analyzed by the evaluation staff.
In this way, the evaluation staff measured the BRConverter quality using the definitions of EQL I and EQL II, and the quality achieved of those
Figure 5.5 shows the final results of each quality characteristic, which were obtained by weighted average: the importance assigned to each sub-characteristic is used as the weight in the calculation. Figure 5.6 shows the scores of the quality in use characteristics, which were obtained from users' feedback after using the component in a real environment, according to their expectations. More details can be seen in Appendix C.
Figure 5.5: BRConverter quality scores (scale 0 to 0.9) for the characteristics Functionality (EQL I), Reliability (EQL II), Usability (EQL I), Efficiency (EQL I), Maintainability (EQL II) and Portability (EQL I).
Figure 5.6: Quality in use scores for Productivity, Satisfaction, Security and Effectiveness (data labels: 82%, 73%, 73% and 68%).
Evaluators' Skill. The process does not define the skills necessary for each role in the process. The evaluators did not have considerable experience in the quality area. With more experienced evaluators, the results achieved could be better and the methodology more accurately analyzed. The roles were defined in an informal way, often allocating the evaluators to roles matching those of their jobs. However, these issues should be reviewed in order to be more systematic and to reduce risks.
5.11 Summary
The next chapter will present the conclusions of this work, its main contributions and directions for future works.
Chapter 6 – Conclusions and future works 148
6.1 Contributions
During the research it was perceived that a number of improvements are needed. Some of them are the following:
7 References
(Bertoa et al., 2002) Bertoa, M.; Vallecillo, A. Quality Attributes for COTS
Components, In: The 6th IEEE International ECOOP Workshop on
Quantitative Approaches in Object-Oriented Software Engineering
(QAOOSE), Spain, Vol. 01, No. 02, pp. 128-144, 2002.
(Bertoa et al., 2006) Bertoa, M.F.; Troya, J.M.; Vallecillo, A. Measuring the
Usability of Software Components. In: Journal of Systems and
Software, Vol. 79, No. 03, pp. 427-439, 2006.
(Bertolino & Mirandola, 2003) Bertolino, A.; Mirandola, R. Towards
Component-Based Software Performance Engineering, In: Proceedings
of 6th ICSE Workshop on Component-Based Software Engineering,
USA, 2003.
(Beugnard et al., 1999) Beugnard, A.; Jezequel, J.; Plouzeau, N.; Watkins, D.
Making component contract aware, In: IEEE Computer, Vol. 32, No.
07, pp. 38-45, 1999.
(Beus-Dukic et al., 2003) Beus-Dukic, L.; Boegh, J. COTS Software Quality
Evaluation, In: The 2nd International Conference on COTS-Based
Software System (ICCBSS), Lecture Notes in Computer Science (LNCS),
Springer-Verlag, Canada, 2003.
(Beydeda & Gruhn, 2003) Beydeda, S.; Gruhn, V. State of the art in testing
components, In: The 3rd IEEE International Conference on Quality
Software (ICQS), USA, 2003.
(Boegh et al., 1993) Boegh, J.; Hausen, H-L.; Welzel, D. A Practitioner's Guide to
Evaluation of Software, In: The IEEE Software Engineering Standards
Symposium, pp. 282-288, 1993.
(Boehm et al., 1976) Boehm, B. W.; Brown, J.R.; Lipow, M. Quantitative
Evaluation of Software Quality, In: The Proceedings of the Second
International Conference on Software Engineering, pp.592-605, 1976.
(Boehm et al., 1978) Boehm, B.; Brown, J.R.; Lipow, H.; MacLeod, G. J.; Merrit,
M. J. Characteristics of Software Quality, Elsevier North Holland, 1978.
(Brinksma et al., 2001) Brinksma, E. et al. ROADMAP - Component-based Design and Integration Platforms, W1.A2.N1.Y1, Project IST-2001-34820, ARTIST - Advanced Real-Time Systems
(Brown, 2000) Brown A. W., Large-Scale Component-Based Development,
Prentice Hall, 2000.
Chapter 7 - References 155
(Hamlet et al., 2001) Hamlet, D.; Mason, D.; Woit. D. Theory of Software
Component Reliability, In: 23rd International Conference on Software
Engineering (ICSE), 2001.
(Heineman et al., 2001) Heineman, G. T.; Councill, W. T. Component-Based
Software Engineering: Putting the Pieces Together, Addison-Wesley,
USA, 2001.
(Hissam et al., 2003) Hissam, S. A.; Moreno, G. A.; Stafford, J.; Wallnau, K. C.
Enabling Predictable Assembly, In: Journal of Systems and Software,
Vol. 65, No. 03, pp. 185-198, 2003.
(Hyatt et al., 1996) Hyatt, L.; Rosenberg, L.; A Software Quality Model and
Metrics for Risk Assessment, In: NASA Software Technology Assurance
Center (SATC), 1996.
(ISO/CD 8402-1, 1990) ISO/CD 8402-1, Quality Concepts and Terminology Part One: Generic Terms and Definitions, International Standards Organisation, December 1990.
(ISO/IEC 1131-3,1995) IEC, Application and Implementation of IEC 1131-3, IEC
Geneva, 1995.
(ISO/IEC 12119, 1994) ISO/IEC 12119, Software Packages – Quality Requirements and Testing, International Standard ISO/IEC 12119, International Standard Organization (ISO), 1994.
(ISO/IEC 14598, 1998) ISO 14598, Information Technology – Software product
evaluation -- Part 1: General Guide, International Standard ISO/IEC
14598, International Standard Organization (ISO), 1998.
(ISO/IEC 15504-2, 2003) ISO/IEC 15504-2, Information technology. Software
process assessment. Part 2 : a reference model for processes and
process capability, International Standard ISO/IEC 15504-2,
International Standard Organization (ISO), 2003.
(ISO/IEC 25000, 2005) ISO/IEC 25000, Software product quality
requirements and evaluation (SQuaRE), Guide to SQuaRE,
International Standard Organization, July, 2005.
(ISO/IEC 61131-3,1995) IEC. Application and implementation of IEC 61131-3.
Technical report, IEC, Geneva, 1995.
Appendix A.
Step-by-step instructions to perform the Embedded Quality evaluation
Process (EQP)
<Component Name>
Historic Changes
Date | Version | Description | Author
18/06/2009 | 01.00d | Initial Version | Fernando Carvalho
Appendix A - Step-by-step instructions to perform the Embedded Quality Evaluation.167
Contents
1. Introduction ...............................................................168
1.1 Overview of the Component .................................................. 168
1.2 Conventions, terms and abbreviations list............................ 168
3. References.................................................................. 177
1. Introduction
<This section should present a brief introduction of the component that will be submitted to the evaluation, and the context and motivation to do so.>
<In this step the evaluation team should describe the set of scenarios in which the component will be evaluated>
<This activity is performed by definition of the embedded quality model step >
2.1.3.1. Define the embedded quality model (internal, external and quality in use
characteristics)
<This step defines the quality characteristics and sub-characteristics that will be used to evaluate the component quality. In addition, the evaluation team should define the importance level of each characteristic defined, according to this classification: 1-Not Important; 2-Indifferent; 3-Reasonable; 4-Important; 5-Very Important.>
Example:
Table 2. Characteristics and Sub-Characteristics defined.
Characteristics Sub-Characteristics Importance
Functionality Accuracy 4
Functionality Security 3
… … …
<This step describes the characteristics that are not present in the Embedded Component Quality Model (EQM), presented in Chapter 4, and that should be considered to evaluate any component quality aspects. After defining the characteristics, it is interesting to complement Table 2 with the new quality characteristics and to define their relevance to the component quality evaluation.>
Characteristics | Sub-Characteristics | EQL | Importance
Functionality | Accuracy | I | 4
Efficiency | Energy Consumption | II | 3
… | … | … | …
<This activity is performed by Establishment of score level for metrics step >
3 http://www.toolA.org
4 http://www.toolB.org
5 http://www.toolC.org
Example:
Table 10. Table to document the results obtained during component evaluation.
Characteristic | Sub-Charact. | Quality Attributes | EQL | Imp | Evaluation Techniques used | Tool | Result
Functionality | Accuracy | Correctness | II | 4 | Precision analyses | Tool A6, Tool B7 | 0.7
Functionality | Security | Data Encryption | III | 3 | Evaluation Measurement | Tool C8 | 0.8
… | … | … | … | … | … | … | …
6 http://www.toolA.org
7 http://www.toolB.org
8 http://www.toolC.org
the evaluation staff should provide some comments in order to help the customer improve their component. The evaluator should consider whether the component achieves the required
quality to be considered in the level in which it was evaluated. This could be achieved
through the analysis of the score level of each metric defined during the embedded
component evaluation process execution.>
3. References
<This section will provide the references to tools, processes, techniques and methods cited in this document, in the following format:>
[1] Authors, Title; Conference/Journal (if applicable); Date;
Appendix B.
Evaluators feedback about the use of Embedded
Software Component Quality Evaluation
Methodology
Subject: (Engº 1)
Company: C.E.S.A.R. - Recife Center of Advanced Study
Job: System engineer / embedded development specialist
Part 1:
Regarding the use of the methodology to evaluate the embedded software components' quality, answer:
1.1) Which difficulties or obstacles did you find?
A.: The difficulty was to understand all parts of the methodology and, more importantly, how they were related to each other. A full reading of the methodology's chapter is required in order to have a first-level understanding. But I guess only when you go through a real evaluation do you get a full perspective of the complete methodology, how its small pieces interact, and their importance.
Part 2:
2.1 EQP.
Specifically about the modules of the Embedded Evaluation Process,
EQP, answer the questions:
2.1.1) Which difficulties were found in the module Establish Evaluation Requirements, and which improvements are possible?
A: The tricky part of this module is the specification of the evaluation quality model. There reside the most important decisions of the whole evaluation, in my view. If you select the wrong characteristics or sub-characteristics, you can render the evaluation incomplete, inefficient or even invalid.
A selection of main characteristics based on the embedded component's reliability level or domain (automotive, x-ray, entertainment) could be helpful. The EQL is a good starting point, but it is only suggested a chapter later.
2.2 EQM.
2.2.1) Is the Embedded Quality Model (EQM) proposed, with quality characteristics, sub-characteristics, quality attributes, quality in use characteristics and additional information, sufficient to evaluate the component quality and cover the key quality aspects?
A: Yes, more than sufficient. There are more than enough evaluation aspects defined in the EQM. For me the author started from a good list of references and came out with a general quality model which can be applied to any embedded component. I guess all the important characteristics are present there and, depending on which level and domain you want to evaluate your component at, there are some key characteristics which cannot be left out of the evaluation.
2.3 EQL
2.3.1) Are the evaluation techniques based on quality levels (EQL) appropriate to evaluate components in different application domains and at different quality levels?
A: It is hard to tell. My first answer is yes, because it is presented in a flexible way. It can be used as a very good starting baseline, and the evaluators can adapt and evolve it according to specific requirements. The selection of evaluation techniques is, I think, the key of this module. If the evaluators select a wrong tool or technique, that characteristic can be under-evaluated. The evaluation techniques suggested are a good starting point as well.
Part 1:
Regarding the use of the methodology to evaluate the embedded software components' quality, answer:
1.1) Which difficulties or obstacles did you find?
Some sub-characteristics' goals and questions were difficult to understand.
The metrics' domains are not always a percentage. Some of them are quantities in bytes, or seconds. Thus, it is impossible to calculate the final result for some of the functionalities.
Part 2:
2.1 EQP.
Specifically about the modules of the Embedded Evaluation Process,
EQP, answer the questions:
2.1.1) Which difficulties were found in the module Establish Evaluation Requirements, and which improvements are possible?
There weren't significant difficulties in this module due to the component's simplicity.
A possible improvement would be a questionnaire to help the user define the sub-characteristics' importance. There would be fixed alternatives for each sub-characteristic's question. Each alternative would be related to a different level of importance.
2.2 EQM.
2.2.1) Is the Embedded Quality Model (EQM) proposed, with quality characteristics, sub-characteristics, quality attributes, quality in use characteristics and additional information, sufficient to evaluate the component quality and cover the key quality aspects?
I think it is. It's an interesting model because the set of quality characteristics proposed is complete enough to give a good idea about the weak and strong aspects of the analyzed components.
2.3 EQL
2.3.1) Are the evaluation techniques based on quality levels (EQL) appropriate to evaluate components in different application domains and at different quality levels?
To better answer this question we should analyze more components. Anyway,
the idea of using a higher EQL to characteristics with greater importance seems
to be adequate to build a good component profile.
Historic Changes
Date | Version | Description | Author
18/10/2009 | 01.00-D01 | Initial version | Engº 1
14/11/2009 | 01.00-D02 | Evaluation planning and design | Engº 1
21/11/2009 | 01.00-D03 | First measurements | Engº 2
22/11/2009 | 01.00-D04 | Electrical measurements | Engº 2
16/01/2010 | 01.00 | Final revision and first release of evaluation report | Engº 1
Appendix C - BRConverter – Embedded Quality Evaluation 185
Contents
1 Introduction. ...............................................................186
1.1 Overview of the Component .................................................. 186
1.2 Conventions, terms and abbreviations list............................ 186
3 References.................................................................. 203
Appendix:...................................................................... 204
1 Introduction.
The component that will be submitted to quality evaluation is a Serial-Serial
Baud Rate Converter. This component is used to connect two devices with different
baud rates. One application example is the K-line bus gateway.
The serial vehicle diagnostics protocol known as K-line is defined in ISO 9141-2 [1]. It uses serial data communication very similar to RS-232, but with different voltage signal levels and only one bidirectional line. The serial data rate defined is 10,400 bps, a nonstandard baud rate and thus not available on PCs' RS-232C controllers.
The software component under evaluation resides in a serial gateway which converts data from standard PC baud rates to the K-line bus and vice-versa.
The PC must have a terminal emulator which sends and receives characters at a
given baud rate. This terminal should be able to save the received characters in a file for
later analysis.
Processor: ARM7TDMI
Microcontroller: NXP LPC2148
Compiler: RealView MDK-ARM [4] version 3.50
2.1.3.1. Define the embedded quality model (internal, external and quality in use
characteristics)
Table 2. Characteristics and Sub-Characteristics defined.
Characteristics | Sub-Characteristics | Importance
Functionality | Real-Time | 4
Functionality | Accuracy | 5
Functionality | Self-contained | 2
Reliability | Recoverability | 3
Reliability | Safety | 3
Usability | Configurability | 4
Usability | Attractiveness | 3
Efficiency | Resource Behavior | 3
Efficiency | Energy consumption | 3
Efficiency | Data Memory Utilization | 4
Efficiency | Program Memory Utilization | 3
Maintainability | Stability | 4
Maintainability | Changeability | 3
Maintainability | Testability | 4
Portability | Deployability | 4
Portability | Replaceability | 3
Portability | Flexibility | 3
Portability | Reusability | 3
Characteristics | Sub-Characteristics | EQL | Importance
Functionality | Real-Time | I | 4
Functionality | Accuracy | II | 5
Functionality | Self-contained | II | 2
Reliability | Recoverability | I | 3
Reliability | Safety | II | 3
Usability | Configurability | I | 4
Usability | Attractiveness | I | 3
Efficiency | Resource Behavior | I | 3
Efficiency | Energy consumption | I | 3
Efficiency | Data Memory Utilization | I | 4
Efficiency | Program Memory Utilization | I | 3
Maintainability | Stability | II | 4
Maintainability | Changeability | I | 3
Maintainability | Testability | II | 4
Portability | Deployability | I | 4
Portability | Replaceability | I | 3
Portability | Flexibility | II | 3
Portability | Reusability | I | 3
Reliability / Recoverability / Error Handling / Code Inspection (EQL I, importance 3). Goal: Verify that coding style guidelines are followed, comments in the code are relevant and of appropriate length, naming conventions are clear and consistent, and the code can be easily maintained. Question: How compliant is the component, using a systematic approach to examining the source code? Metric: Number of functions compliant to the systematic approach / Number of specified functions. Interpretation: 0 <= x <= 1; closer to 1 being better. Score: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable. Result: 8/8 = 1.
Reliability / Safety / Integrity / Code Inspection: Goal: Verify that coding style guidelines are followed, and comments in the code are … (truncated). Score: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable. Result: 8/8 = 1.
Maintainability / Changeability / Change Effort / Changeability analysis (EQL I, importance 3). Goal: Analyzes the customizable parameters that the component offers. Question: How many parameters are provided to customize each function of the component? Metric: Number of provided interfaces / Number of parameters to configure the provided interface. Interpretation: 0 <= x <= 1; closer to 1 being better. Score: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable. Result: 1/2 = 0.5.
Maintainability / Testability / Test suite provided / Analysis of the test suite provided (EQL I, importance 4). Goal: Analyzes the ability of the component to provide some test suite for checking its functions. Question: Is there any test suite? How is the coverage of this test suite? Metric: Number of test suites provided / Number of functions. Interpretation: 0 <= x <= 1; closer to 1 being better. Score: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable. Result: 0.
Portability / Deployability / Complexity level / Deployment analyses (EQL I, importance 4). Goal: Analyzes how complex it is to deploy a component in its specific environment(s). Question: How much time does it take to deploy a component in its environment? Metric: Time taken for deploying a component in its environment; estimate the time first and then compare with the actual time taken to deploy the component. Score: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable. Result: Not applied (NA).
Portability / Replaceability / Backward Compatibility / Backward compatibility analysis (EQL I, importance 3). Goal: Analyzes the compatibility with previous versions. Question: What is the compatibility with previous versions? Metric: Correct results / Set of same invocations in different component versions. Interpretation: 0 <= x <= 1; closer to 1 being better. Score: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable. Result: Not applied.
Portability / Flexibility / Configuration capacity / Configuration analysis (EQL I, importance 3). Goal: Analyzes the ability of the component to be transferred from one environment to another, considering the related changes. Question: How much effort is needed to adapt the component to a new environment? Metric: Analyze the component constraints and environment constraints; deploy the component in the environment specified in the documentation; analyze the time taken to adapt the component in each environment defined. Score: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable. Result: 0.8.
Portability / Mobility / Mobility analyses (EQL II, importance 3). Goal: Analyzes the ability of the component to be transferred from one environment to another. Question: Can the component be transferred to another environment without any changes? Metric: Analyze the component constraints/environment; deploy the component in the environments specified in the documentation. Possible metric: Number of environments where the component works correctly / Number of environments described in its specification. Interpretation: 0 <= x <= 1; closer to 1 being better. Score: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable. Result: 0.75; 0.85.
Portability / Reusability / Architecture compatibility / Hardware/Software compatibility analysis (EQL I, importance 3). Goal: Analyzes the real compatibility of the component with the architectures listed in the documentation. Question: How compatible is the component with the architecture listed in the documentation? Metric: Number of architectures really compatible / Number of architectures listed as compatible. Interpretation: 0 <= x <= 1; closer to 1 being better. Score: 0-0.3 Not acceptable; 0.31-0.6 Reasonable; 0.61-1 Acceptable. Result: 1.
Appendix C - BRConverter – Embedded Quality Evaluation 196
After a detailed analysis of all 21 quality attributes selected by this EQM, the following techniques are required:
- System tests (black-box and white-box)
- Code inspection
- Real-time analysis
- Code metrics analysis (complexity)
- Electrical profiling (current consumption)
System tests:
The very first analysis can be made using simple functional test cases. The idea here is to verify the component against its most basic specification requirements. The test case described below can be used for all functional tests to be performed:
2. Wait for data response from the ECU simulator. Expected result: the ECU simulator receives the request command at the correct baud rate, interprets it, and sends the corresponding answer string back to the PC.
3. Verify that the response command arrived correct and without modification. Expected result: a file saved on the PC containing all bytes answered by the ECU simulator.
This same test case can be used as a basis for the other required tests:
- Stability: repeat the test case for 10 minutes;
- Performance: the evaluator should run the test case while performing measurements on the serial signals;
- Stress: the evaluator can run step 1 repeatedly without waiting for an answer.
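The step-3 check amounts to comparing the bytes captured in the PC file against the expected answer string. A minimal sketch of that comparison is below; the function name and byte patterns are illustrative assumptions, not part of the component or the test setup described above.

```c
/* Sketch of the step-3 verification: compare the bytes answered by the
 * ECU simulator (saved in a file on the PC) against the expected answer
 * string. ResponseMatches is a hypothetical helper name. */
#include <stddef.h>
#include <string.h>

/* Returns 1 when the captured response matches the expected answer
 * exactly, byte for byte; 0 otherwise. */
int ResponseMatches( const unsigned char *expected, size_t expected_len,
                     const unsigned char *captured, size_t captured_len )
{
    if ( expected_len != captured_len )
    {
        return 0;
    }
    return memcmp( expected, captured, expected_len ) == 0;
}
```

In practice the expected buffer would hold the answer string defined in the ECU simulator, and the captured buffer would be read from the file saved on the PC.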
Code Inspection:
Most of the quality attributes selected in this particular EQM require some sort of source code analysis, for which the code inspection technique is well suited. No special tool is required for code inspection here.
The evaluation team should follow a simplified software code inspection, without the rework stage on the source code. The simplified code inspection process requires these steps:
1. Inspection planning: the moderator, here represented by the person playing the role of Evaluation Responsible, should plan the inspection, distribute the code, schedule an inspection meeting and, very importantly, explain the quality attributes the inspectors should look for;
2. Preparation: each inspector should go over the code and identify possible problems for every designated quality attribute;
3. Inspection meeting: all inspectors gather to read the code. During the meeting, the roles of reader and scribe are played by the Evaluation Responsible. Each inspector points out their findings while the scribe records them in this document.
Real-time analysis:
For the quality attributes requiring real-time analysis, a digital oscilloscope is the only tool required. The evaluator should complete the basic setup described earlier in this section, connect the scope probes to the transmitting and receiving signals, send the patterns and measure the results.
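The oscilloscope measurements can be cross-checked against the theoretical per-byte transfer time. A minimal sketch, assuming standard 8N1 UART framing (1 start bit, 8 data bits, 1 stop bit, i.e. 10 bits per byte); the helper name is an illustrative assumption:

```c
/* Theoretical time to transfer one byte over a UART, assuming 8N1
 * framing (start + 8 data + stop = 10 bits per byte). Hypothetical
 * helper for cross-checking scope measurements. */
double ByteTimeMicroseconds( double baud_rate )
{
    const double bits_per_byte = 10.0; /* 8N1 framing */
    return ( bits_per_byte / baud_rate ) * 1.0e6;
}
```

At the K-Line rate of 10400 bps one byte takes roughly 961.5 microseconds; at the PC-side rate of 115200 bps, roughly 86.8 microseconds.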
Electrical profiling:
And finally, for electrical measurements, specifically current consumption, a
multimeter equipped with milli-ampère capable Ammeter will suit. The microcontroller
board does not drain any source current from serial interfaces, so the current
measurement on power supply only will cover the complete system current
measurement.
Determining the current specifically attributable to the baud rate conversion component requires two measurements: first, the evaluators should measure the current with the component running and performing conversions; the second measurement should be made with the source code compiled and flashed without the component. The result is the difference between the two measurements.
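The differential measurement above can be expressed as a one-line computation; the helper name is an illustrative assumption:

```c
/* Current attributable to the component: the difference between the
 * measurement with the component running and the baseline measurement
 * without it, both in milliamperes. Hypothetical helper name. */
double ComponentCurrentMa( double with_component_ma, double baseline_ma )
{
    return with_component_ma - baseline_ma;
}
```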
Microcontroller in general
g. Minimal Requirements: 2 UARTs
h. Technical Support: www.cesar.org.br
i. Compliance: MISRA-C:2004
2. Organization Information
a. CMMi Level: 3
b. Organization’s Reputation: High
3. Market Information
a. Development time: 4 hours
b. Cost: NA
c. Time to market: NA
d. Targeted market: NA
e. Affordability: NA
f. Licensing: NA
[Figure: BRConverter quality score per characteristic. Y-axis: quality, 0 to 0.9. Characteristics: Functionality (EQL I), Reliability (EQL II), Usability (EQL I), Efficiency (EQL I), Maintainability (EQL II), Portability (EQL I).]

[Figure: quality score per quality-in-use characteristic. Y-axis: quality score, 0% to 90%. Characteristics: Productivity, Satisfaction, Security, Effectiveness. Visible data labels: 82%, 73%, 73%, 68%.]

Portability (EQL II): 0.85

Quality in Use Characteristics:
Productivity: 80%; Satisfaction: 75%; Security: 100%; Effectiveness: 100%
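The per-attribute results can be rolled up into characteristic-level scores. A minimal sketch, assuming the aggregation rule is a weighted mean of the attribute scores using the weights from the EQM table (the exact aggregation rule of the methodology is not restated in this appendix):

```c
/* Weighted mean of n attribute scores; weights as in the EQM table.
 * Returns 0.0 for an empty or zero-weight input. Hypothetical helper. */
double WeightedQualityScore( const double scores[], const double weights[], int n )
{
    double acc = 0.0;
    double total_weight = 0.0;
    int i;
    for ( i = 0; i < n; i++ )
    {
        acc += scores[i] * weights[i];
        total_weight += weights[i];
    }
    return ( total_weight > 0.0 ) ? ( acc / total_weight ) : 0.0;
}
```

With equal weights, the four quality-in-use scores above (0.80, 0.75, 1.00, 1.00) average to about 0.89.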
3 References
[1] ISO 9141-2:1994, Road Vehicles – Diagnostic Systems; http://www.iso.org/iso/catalogue_detail.htm?csnumber=16738
[2] Olimex LPC-P2148 Development Board; http://www.olimex.com/dev/lpc-p2148.html
[3] NXP LPC2148 32-bit ARM-based microcontroller; http://www.nxp.com/pip/LPC2141_42_44_46_48_4.html
[4] MDK-ARM Microcontroller Development Kit; http://www.keil.com/arm/mdk.asp
[5] CCCC – C and C++ Code Counter; http://sourceforge.net/projects/cccc/
Appendix:
A – Class Diagram;
B – Wave Form;
C – Source Code.
Class Diagram
Component Diagram
Wave Form:
1 – UART 0 (10400 bps K-Line)
2 – UART 1 (115200 bps PC)
Source Code:
/************************************************************************
* FILE NAME: BRConverter.c                                             *
*                                                                      *
* PURPOSE: This module provides baud rate conversion functions         *
*          between the two UARTs available in the environment          *
*                                                                      *
* REFERENCES TO OTHER FILES:                                           *
*   Name           I/O  Description                                    *
*   BRConverter.h   I   Module header                                  *
*                                                                      *
* EXTERNAL VARIABLES:                                                  *
*   NONE                                                               *
*                                                                      *
* NOTES:                                                               *
*   NONE                                                               *
*                                                                      *
* REQUIREMENT/SPECIFICATION REFERENCES:                                *
*   NONE                                                               *
*                                                                      *
* HISTORY:                                                             *
*   Date        Author         Version  Change Description             *
*   13/01/2006  Daniel Thiago  1.0      Initial code version           *
*   31/05/2006  Daniel Thiago  1.1      Comments added                 *
*                                                                      *
************************************************************************/
#include "BRConverter.h"
/************************************************************************
*                                                                      *
* Global Variable Declarations                                         *
*                                                                      *
************************************************************************/
/************************************************************************
* FUNCTION NAME: VerifyUart0                                           *
*                                                                      *
* DESCRIPTION: Monitors character reception on Uart0. Characters       *
*              received on Uart0 are sent through Uart1.               *
*                                                                      *
* ARGUMENTS:                                                           *
*   NONE                                                               *
*                                                                      *
* RETURN:                                                              *
*   NONE                                                               *
*                                                                      *
************************************************************************/
void VerifyUart0( void )
{
if ( UART0_HasChar( ) )
{
UART1_PutChar( UART0_GetChar( ) );
}
}
/************************************************************************
* FUNCTION NAME: VerifyUart1                                           *
*                                                                      *
* DESCRIPTION: Monitors character reception on Uart1. Characters       *
*              received on Uart1 are sent through Uart0.               *
*                                                                      *
* ARGUMENTS:                                                           *
*   NONE                                                               *
*                                                                      *
* RETURN:                                                              *
*   NONE                                                               *
*                                                                      *
************************************************************************/
void VerifyUart1( void )
{
if ( UART1_HasChar( ) )
{
UART0_PutChar( UART1_GetChar( ) );
}
}
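As a complement to the system tests described earlier, the forwarding logic can also be exercised off target. The sketch below is a host-side harness with one-character stub UARTs; the stubs are test doubles and not the real Uart.h driver, and VerifyUart0 is reproduced from the listing above.

```c
/* Host-side harness sketch for the BRConverter forwarding logic.
 * The stub UARTs below are assumptions for testing on a PC; only the
 * Uart0 -> Uart1 direction is shown. */
typedef struct
{
    int  has_char;   /* 1 when a received character is pending */
    char rx;         /* injected (received) character           */
    char tx;         /* last character sent through this UART   */
    int  tx_count;   /* number of characters sent               */
} StubUart;

static StubUart uart0, uart1;

static int  UART0_HasChar( void )   { return uart0.has_char; }
static char UART0_GetChar( void )   { uart0.has_char = 0; return uart0.rx; }
static int  UART1_PutChar( char c ) { uart1.tx = c; uart1.tx_count++; return 1; }

/* Same logic as the component's VerifyUart0 above */
static void VerifyUart0( void )
{
    if ( UART0_HasChar( ) )
    {
        UART1_PutChar( UART0_GetChar( ) );
    }
}

/* Inject one byte on Uart0 and check it was forwarded to Uart1 exactly
 * once and the pending flag was cleared. */
static int ForwardsOneByte( char c )
{
    uart0.rx = c;
    uart0.has_char = 1;
    uart1.tx_count = 0;
    VerifyUart0( );
    return ( uart1.tx == c ) && ( uart1.tx_count == 1 ) && ( uart0.has_char == 0 );
}
```

The Uart1 -> Uart0 direction can be covered the same way with the stubs mirrored.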
.\BRConverter\BRConverter.h
/************************************************************************
* FILE NAME: BRConverter.h                                             *
*                                                                      *
* PURPOSE: This module provides baud rate conversion functions         *
*          between the two UARTs available in the environment          *
*                                                                      *
* REFERENCES TO OTHER FILES:                                           *
*   Name          I/O  Description                                     *
*   Cabecalho.h    I   Type definitions                                *
*   Uart.h         I   UARTs 0 and 1 are required by the module        *
*                                                                      *
* EXTERNAL VARIABLES:                                                  *
*   NONE                                                               *
*                                                                      *
* NOTES:                                                               *
*   NONE                                                               *
*                                                                      *
* REQUIREMENT/SPECIFICATION REFERENCES:                                *
*   NONE                                                               *
*                                                                      *
* HISTORY:                                                             *
*   Date        Author         Version  Change Description             *
*   13/01/2006  Daniel Thiago  1.0      Initial code version           *
*   31/05/2006  Daniel Thiago  1.1      Comments added                 *
*                                                                      *
************************************************************************/
#ifndef BRCONVERTER_H
#define BRCONVERTER_H
/***********************************************************************
*                                                                      *
* Include Declarations                                                 *
*                                                                      *
************************************************************************/
#include "Cabecalho.h"
#include "Uart.h"
/************************************************************************
*                                                                      *
* Register Bit Definitions                                             *
*                                                                      *
************************************************************************/
/************************************************************************
*                                                                      *
* Constant Definitions                                                 *
*                                                                      *
************************************************************************/
/************************************************************************
*                                                                      *
* Exported Variable Declarations                                       *
*                                                                      *
************************************************************************/
/************************************************************************
*                                                                      *
* Provided Function Declarations                                       *
*                                                                      *
************************************************************************/
/************************************************************************
* FUNCTION NAME: VerifyUart0                                           *
*                                                                      *
* DESCRIPTION: Monitors character reception on Uart0. Characters       *
*              received on Uart0 are sent through Uart1.               *
*                                                                      *
* ARGUMENTS:                                                           *
*   NONE                                                               *
*                                                                      *
* RETURN:                                                              *
*   NONE                                                               *
*                                                                      *
************************************************************************/
extern void VerifyUart0( void );
/************************************************************************
* FUNCTION NAME: VerifyUart1                                           *
*                                                                      *
* DESCRIPTION: Monitors character reception on Uart1. Characters       *
*              received on Uart1 are sent through Uart0.               *
*                                                                      *
* ARGUMENTS:                                                           *
*   NONE                                                               *
*                                                                      *
* RETURN:                                                              *
*   NONE                                                               *
*                                                                      *
************************************************************************/
extern void VerifyUart1( void );
/************************************************************************
*                                                                      *
* Required Function Declarations                                       *
*                                                                      *
************************************************************************/
/************************************************************************
* FUNCTION NAME: UARTX_Configure                                       *
*                                                                      *
* DESCRIPTION: Configures the UART with the baud rate passed as        *
*              parameter                                               *
*                                                                      *
* ARGUMENTS:                                                           *
*   uint32_t - baud - Baud rate to be configured                       *
*                                                                      *
* RETURN:                                                              *
*   NONE                                                               *
*                                                                      *
************************************************************************/
extern void UART0_Configure( uint32_t baud0 );
extern void UART1_Configure( uint32_t baud1 );
/************************************************************************
* FUNCTION NAME: UARTX_HasChar                                         *
*                                                                      *
* DESCRIPTION: Checks whether a new character has been received by     *
*              the UART                                                *
*                                                                      *
* ARGUMENTS:                                                           *
*   NONE                                                               *
*                                                                      *
* RETURN:                                                              *
*   bool_t - Returns TRUE when there is a received, unread character   *
*                                                                      *
************************************************************************/
extern bool_t UART0_HasChar( void );
extern bool_t UART1_HasChar( void );
/************************************************************************
* FUNCTION NAME: UARTX_PutChar                                         *
*                                                                      *
* DESCRIPTION: Sends the character passed as parameter through the     *
*              UART                                                    *
*                                                                      *
* ARGUMENTS:                                                           *
*   char_t - c - Character to be transferred over the serial line      *
*                                                                      *
* RETURN:                                                              *
*   bool_t - Indicates whether the character was inserted into the     *
*            buffer. Returns TRUE if there is space in the buffer      *
*                                                                      *
************************************************************************/
extern bool_t UART0_PutChar( char_t c );
extern bool_t UART1_PutChar( char_t c );
/************************************************************************
* FUNCTION NAME: UARTX_GetChar                                         *
*                                                                      *
* DESCRIPTION: Removes the oldest character from the UART receive      *
*              buffer                                                  *
*                                                                      *
* ARGUMENTS:                                                           *
*   NONE                                                               *
*                                                                      *
* RETURN:                                                              *
*   char_t - Returns the received character                            *
*                                                                      *
************************************************************************/
extern char_t UART0_GetChar( void );
extern char_t UART1_GetChar( void );
#endif