
Managing Security and Privacy in Ubiquitous eHealth Information Interchange

Ebenezer A. Oladimeji
Product Design and Dev., IT, Verizon Communications, Irving, Texas, USA.

Lawrence Chung
Dept. of Computer Science, University of Texas at Dallas, Richardson, Texas, USA.

Hyo Taeg Jung
Contents Research Division, ETRI, Yuseong-gu, Daejeon, Korea.

Jaehyoun Kim
Dept. of Computer Education, Sungkyunkwan University, Seoul 110-745, Korea.
ABSTRACT
Ubiquitous computing has the potential to significantly improve the quality of healthcare delivery by making relevant patient health history and vital signs readily available on-demand to caregivers. However, this promise of the ability to track electronic health information signals from distributed ubiquitous devices conflicts with the security and privacy concerns that most people have regarding their personal information and medical history. While security and privacy concerns have been dealt with extensively in mainstream computing, there is a need for new techniques and tools that can enable ubiquitous system designers in healthcare domains to build in appropriate levels of protection. Such techniques can help ensure that patient information is minimally but sufficiently available to different stakeholders in the care-giving chain, and are useful in ubiquitous environments where traditional security mechanisms may be either impractical or insufficient. This paper presents a goal-centric and policy-driven framework for deriving security and privacy risk mitigation strategies in ubiquitous health information interchange. Specifically, we use scenario analysis and goal-oriented techniques to model security and privacy objectives, threats, and mitigation strategies in the form of safeguards or countermeasures. We demonstrate that traditional solutions are insufficient, while introducing the notion of purpose-driven security policies based on sensitivity meta-tags. We also show how administrative safeguards (such as those required by HIPAA rules) can be refined into intermediate specifications that can be analyzed more systematically. To validate the utility of our approach, we illustrate our major concepts using examples from ubiquitous emergency response scenarios.

Corresponding author: ebenezer.oladimeji@verizon.com

Categories and Subject Descriptors

D.2.1 [Software Engineering]: Requirements/Specifications; D.4.6 [Security and Protection]: Information flow

Keywords
ubiquitous eHealth, vulnerability points, sensitivity meta-tags, purpose-driven policies, goal-centric risk mitigation.

1. INTRODUCTION

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. ICUIMC '11, February 21-23, 2011, Seoul, Korea. Copyright 2011 ACM 978-1-4503-0571-6 ...$10.00.

Ubiquitous computing (ubicomp) aims at the non-intrusive availability of information about our physical environments, through a world of wirelessly connected computing devices (such as sensors, processors and actuators) integrated into the physical world and virtually invisible to human users [29]. This integration and its potential intelligence promise to enable us to create systems that can improve efficiency, convenience, and human safety [18]. For example, in public safety domains, ubicomp can help reduce emergency response time by making relevant electronic health (eHealth) information available on-demand to caregivers. Such just-in-time eHealth information can provide guidance to dispatch operators, ambulance crews, and the emergency room; it can also enable the preparation of treatment plans in advance of hospital arrival. Also, in everyday non-emergency situations, healthcare workflows often involve caregivers moving constantly between very different work places. Ubiquitous availability of the information needed to provide care can contribute to the timeliness of care, thus enhancing the safety of lives and improving the quality of healthcare delivery.

While ubicomp applications look promising in providing the ability to track eHealth information signals from distributed sources, regardless of time, location and communication channel, several studies have indicated that there are serious concerns over security and personal privacy. Such concerns originate from the general unease over the potential for abuse and sabotage, fear of a potential lack of control, and loss of privacy over personal information [12, 9, 30, 2]. These concerns also suggest that security and privacy pose great challenges to ubiquitous computing, and may hurt its deployment in healthcare domains [14]. While these concerns have been dealt with extensively in mainstream computing, traditional security and privacy mechanisms may not be practical in ubicomp applications. This is partly because, unlike traditional client-server computing, the identities of active entities in a ubicomp system cannot be completely anticipated in advance. There is therefore a need for new techniques and tools that can enable ubiquitous system designers in healthcare domains to ensure that eHealth information is minimally but sufficiently available to different stakeholders in the care-giving chain, on a need-to-know basis. As an enabling step toward achieving this, we need systematic techniques for eliciting security and privacy requirements for ubiquitous healthcare applications.

Our literature review in this space reveals that, while a lot has been written about the security and privacy issues that ubicomp applications face, not many solutions have been offered for eliciting security and privacy requirements for ubiquitous healthcare applications. While some research works have attempted to explicitly address privacy [10, 1, 16, 22, 17], the solutions available so far are rather ad-hoc [15]. To the best of our knowledge, there are no well-established techniques or tools aimed specifically at eliciting the security and privacy requirements of ubiquitous healthcare systems at the application level.

This paper therefore presents a goal-centric and policy-driven framework for deriving security and privacy requirements in ubiquitous eHealth information management systems. The approach uses goal-oriented techniques to model security and privacy objectives, threats against those objectives, and mitigation strategies in the form of safeguards or countermeasures. We also demonstrate that technical safeguards are not sufficient, and show how administrative safeguards (such as those required by HIPAA rules [13]) can be systematically operationalized into purpose-driven security policies. We believe that the proposed approach can enable ubiquitous application designers to identify security and privacy requirements more easily.
To validate the utility of our approach, we illustrate our major concepts using real examples from ubiquitous emergency response scenarios.

The rest of the paper is organized as follows: Section 2 provides contextual background by describing two scenarios in the emergency response domain, one undesirable and the other desirable, and highlighting some of the security and privacy issues of concern. In Section 3, we present our proposed approach for managing these issues during the early phases of system development. Section 4 highlights the significance, strengths and weaknesses of our proposal, while Section 5 summarizes the paper and outlines future directions.

2. UBIQUITOUS EMERGENCY SCENARIOS

This section describes an undesirable and a desirable scenario, and then highlights several things that could possibly go wrong with respect to security and personal privacy.

2.1 Scenario-A: An Undesirable Scenario

On a small street at night, a passer-by saw a person lying on the street unconscious. He seemed to need medical help. The passer-by called 911 from his cell phone. A 911 operator asked him many questions about the location and the condition of the unconscious person, including age, sex, and visible signs such as breathing, bleeding, bruises or wounds. The passer-by took some valuable time to examine the patient and answer the questions. The operator finally dispatched an ambulance, and verbally transmitted the situational information obtained from the caller to the ambulance crew. The crew also asked the dispatcher for other information such as the actual location of the incident, the fastest route to the place, and the current traffic conditions, among others. Because it was night time and the crew was unfamiliar with the location, the ambulance took an unusual amount of time to arrive at the scene. The ambulance crew took the patient to the nearest hospital and verbally provided emergency room staff with background information about the incident. However, the patient was found dead on arrival at the hospital. A post-mortem revealed that the patient had died of a heart attack. This scenario is illustrated in Figure 1.

The problem of loss of life due to the untimely arrival of an ambulance during an emergency is a real one. An example is the well-publicized London Ambulance System failure of 1992, where a girl (known as Nasima) died while waiting for an ambulance for 56 minutes, despite living two blocks away from a hospital [26]. A similar occurrence happened in the US in 2008, when a female student of the University of Wisconsin at Madison (named Brittany Zimmerman) died after waiting endlessly for a response to her 911 call [5]. Madison police believe Brittany called 911 before she was stabbed and beaten to death inside her apartment, but the 911 Center failed to send help after erroneously concluding the call was a mistake. These types of situations may be caused by several factors, such as verbal misinformation, GPS misleading the ambulance crew to a wrong location, or imprecise policies guiding whether or not to trust a 911 caller, to mention a few. Introducing some automated mediation may lead to significant improvement, as described next.

Figure 1: An Undesirable Emergency Response Scenario

2.2 Scenario-B: A Desirable Scenario

On a small street at night, a passer-by saw a person lying on the street unconscious. He seemed to need medical help. The passer-by called 911 from his smartphone. He informed the 911 operator that the person was wearing a medical device on his left upper arm. The 911 operator asked him to press a certain button on his smartphone and a red button on the medical device. After these buttons were pushed, the device electronically transmitted the patient's current health conditions to the e911 system, via the smartphone. The transmitted data included the precise location of the incident, the patient's National Health ID (NHID), as well as current vital signs such as temperature, heart rate, breathing rate, blood pressure, etc. The e911 system used the patient's NHID to automatically fetch his health profile from the national eHealth Registry, which included vital information such as allergies, age, gender and health history. The e911 system evaluated the vital signs and notified the operator that the person needed immediate medical help. The operator issued a request for ambulance dispatch to the scene. The e911 system then established a session with the caller's smartphone to monitor all available vital signs. The phone periodically collected all vital signs sensed by the device and forwarded them through the session to the e911 system. Once an ambulance was dispatched, the e911 system transferred the monitoring session to the computer on board the ambulance, so that the crew could continually observe the patient during travel and prepare the necessary first-aid treatments beforehand. At this point, the patient's condition degraded, so the ambulance crew duplicated the monitoring session to a nearby emergency room. The hospital emergency room staff saw indications that the patient was losing blood. They then prepared for emergency surgery and a blood transfusion based on the blood type reported by the device.

The ambulance used a GPS navigation system to guide the crew to the scene of the incident, at the GPS-based location transmitted by the caller's cell phone. The crew took less than 4 minutes from the 911 call to arrive at the scene, and quickly provided appropriate first aid, prepared ahead of time based on the vital signs monitored en route. The crew used a hand-held device to take over the vital-signs collection from the passer-by's smartphone. The crew then took less than 3 minutes to transfer the patient to the nearby hospital, where he was quickly and successfully operated on, thanks to the surgery plan prepared from the monitored vital signs. The patient was found to have suffered a heart attack, and he was placed on home monitoring for four weeks.
A transcript of the treatments, from first aid to the surgery, was automatically sent to the eHealth Registry as well as to the patient's primary care doctor. While at home, the patient was required to do some exercise for complete healing. His ubiquitous health monitoring device was also configured to automatically transmit his vital signs to a computer in the monitoring station at the doctor's clinic. In addition, whenever the vital signs deviated significantly from expected ranges, the device would automatically alert his primary care doctor and send a text message to a family member. This minimizes the number of follow-up visits the patient has to make until full recovery from the heart attack, as well as the number of home visits from nurses and therapists. This scenario is illustrated in Figure 2.
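The threshold-based alerting just described can be sketched as a small rule check. This is an illustrative sketch only: the vital-sign names, the expected ranges, and the notification hooks are assumptions for the example, not part of any real device or e911 API.

```python
# Hypothetical home-monitoring alert rule: flag any vital sign that
# deviates from its expected range. All names and thresholds illustrative.
EXPECTED_RANGES = {
    "heart_rate":  (50, 100),     # beats per minute
    "temperature": (36.0, 38.0),  # degrees Celsius
    "systolic_bp": (90, 140),     # mmHg
}

def out_of_range(vital: str, value: float) -> bool:
    lo, hi = EXPECTED_RANGES[vital]
    return not (lo <= value <= hi)

def check_vitals(sample: dict) -> list:
    """Return the vitals in this sample that deviate from expected ranges."""
    return [v for v, x in sample.items()
            if v in EXPECTED_RANGES and out_of_range(v, x)]

# A non-empty result would trigger both notifications from the scenario,
# e.g. notify_primary_doctor(alerts) and text_family_member(alerts)
# (hypothetical hooks, not implemented here).
alerts = check_vitals({"heart_rate": 130, "temperature": 37.0})
```

In the scenario above, a heart rate of 130 would be flagged while a temperature of 37.0 would not, so only the elevated vital triggers an alert.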

Figure 2: A Desirable Emergency Response Scenario (VP = Vulnerability Point; points VP-A through VP-D mark where security and privacy could be compromised)

2.3 eHealth Security and Privacy Issues

It can be seen from the above desirable scenario (Scenario-B) that ubicomp can potentially improve the quality of healthcare and increase the level of public safety in emergency response situations. The same can also be said of the application of ubicomp to clinical workflows at primary care doctor offices or clinics. The paradigm is particularly useful for several reasons: many national demographics have increasing numbers of aged people; health providers are facing shortages of health workers; costs of healthcare are skyrocketing; and incidences of medical errors are at an all-time high [25]. However, as desirable as Scenario-B seems, several things could go wrong when we consider several aspects of the security and privacy of electronic health records (EHRs), some of which are highlighted in what follows.

2.3.1 Confidentiality

Generally, confidentiality is violated whenever information is disclosed to individuals (whether authorized or not) without a legitimate need-to-know basis. Ubicomp is prone to several vulnerabilities that can potentially lead to the violation of the need-to-know basis of information disclosures [23]. This is partly because the identities of all active entities in a ubiquitous system chain may not be known in advance. Also, ubicomp devices may not have sufficient processing power for strong encryption or other traditional mechanisms. For example, the emergency patient in Scenario-B may not be conscious enough to grant access permission under the traditional request-reply model, even if there were an interface that could enable him to do that. Also, the identities of the passer-by, 911 operators, ambulance crew, and hospital staff may not be fully anticipated when implementing traditional access authorization schemes. This is denoted by the vulnerability point (VP) labeled VP-A in Figure 2.

2.3.2 Privacy

The right to privacy is becoming a major issue in our civilized era. In the US, federal laws such as HIPAA [13] impose stringent constraints on health records handling in order to protect patients' right to privacy. An important requirement is that applications must confer the ownership of, and control over the disclosure of, health information on the principal of that information [27], a task that is difficult to fully achieve in practice. Individual patients should have a high level of control in deciding who accesses their health information, for what purpose, and under what conditions. In theory, this can be achieved by using user preferences, but these preferences are difficult to enforce in reality. In an emergency response situation, the need for information by healthcare personnel, whose identities may not be known in advance, for the purpose of saving a life, can potentially conflict with the privacy concerns most people have regarding their personal information and medical history. For example, when we consider vulnerability point VP-A in Figure 2, the emergency patient may not want anybody to know about his illness and episode. However, the 911 operator, ambulance crew, and emergency room staff will need quick, accurate, and sufficient information to help treat him. This results in a conflict-of-interest situation that can potentially violate his privacy preferences (his fundamental right).

2.3.3 Integrity

Data integrity is violated whenever data is modifiable by an unauthorized person or agent. Ubicomp introduces several non-traditional data communication interfaces such as touch-screen icons, voice, infrared signals, direct electrical signals, ad-hoc wireless networks, etc. These means of data transmission potentially increase the integrity vulnerability of systems in this domain [24]. For example, in Scenario-B, the interchange of eHealth records over ad-hoc and pervasive communication channels is susceptible to data harvesting by malicious sniffers who may be listening in on every bit of data traffic. A more serious situation can arise when transmitted health data is distorted by spurious signals from a malicious attacker. The much-needed information may reach the 911 system in an undecipherable form, or worse still, in a distorted form. This kind of tampering attack, or distortion of data in transit, may cause the wrong treatment to be given to an emergency patient, thus resulting in the safety sabotage that may occur at VP-D in Figure 2.
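One conventional countermeasure against such in-transit tampering is to authenticate each vital-signs message with a message authentication code. The sketch below is illustrative only: whether a constrained ubicomp device can afford even an HMAC is precisely the kind of trade-off discussed in this paper, and key distribution between device and e911 system is assumed away here.

```python
# Minimal sketch: seal a vital-signs message with an HMAC so that any
# distortion in transit is detected on receipt. Key handling is omitted.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # assumption: pre-shared between device and e911 system

def seal(message: dict) -> tuple:
    """Serialize a message and compute its authentication tag."""
    payload = json.dumps(message, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify(payload: bytes, tag: str) -> bool:
    """Accept the payload only if its tag matches (constant-time compare)."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = seal({"heart_rate": 72, "nhid": "demo"})
intact = verify(payload, tag)                         # untampered: accepted
tampered = verify(payload.replace(b"72", b"27"), tag)  # distorted: rejected
```

A distorted heart-rate value (72 flipped to 27 by spurious signals) fails verification, so the e911 system can discard it rather than act on corrupted data.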

Figure 3: Ontology for the proposed approach

2.3.4 Availability

Data transmission over ad-hoc wireless and sensor networks, as well as unprotected radio-frequency identification (RFID) tags, is known to be vulnerable to active attacks such as traffic analysis, spoofing and denial of service [28, 27]. In particular, the limited bandwidth of the ad-hoc communication networks used in ubicomp applications exposes such systems as targets for denial-of-service attacks. For example, in Scenario-B, while the patient is at home and being monitored remotely from his doctor's office, a malicious attacker can launch a flooding attack against the ubicomp device. Such an attack may effectively cause the generation of false alarms, or the suppression of real alarms, at VP-B and VP-C in Figure 2. The consequence could be fatal if the patient's vital signs are elevated above set thresholds and the doctor needs to be contacted immediately, but the alert service is denied.

We believe that there are no universal solutions to the issues described above that fit all ubicomp applications. Each ubicomp application needs a context-sensitive evaluation of which threats and vulnerabilities need to be addressed. As a result, we believe that these challenges are largely requirements engineering problems. In the next section, we present our approach for managing these issues at the application layer, during ubicomp system development.

3. GOAL-CENTRIC THREAT MITIGATION FRAMEWORK

In this section, we present our goal-oriented and policy-based approach to the development of security and privacy requirement models for ubiquitous applications. While several research works have highlighted security and privacy issues in ubicomp environments and suggested several encryption/authentication mechanisms at the infrastructure layer (such as wireless communication and data storage), we take a different approach. Our approach is built on our conviction that ubicomp security and privacy are largely context-sensitive problems, and as such must be defined and addressed at the application level, and not only at the infrastructure level. One reason for our position is that application domains provide useful contexts for evaluating trade-offs between security, privacy, and other quality attributes (such as user-friendliness, safety, etc.), since these attributes cannot be successfully designed in isolation or as add-ons. Another reason is that ubicomp introduces small and pervasive devices with varying processing and storage capabilities communicating over non-traditional channels, making known security mechanisms and standards largely inadequate [23].

Figure 3 shows the ontology of our proposed approach, describing the major concepts and abstractions upon which our framework is based. An overview of our proposed approach, showing its steps and the artifacts generated, is shown in Figure 4. An important feature of this approach is that each step results in the creation of some visual artifact that can serve as a security and privacy requirement model for a given ubicomp application. These models can be used as a semantically rich means of communication among requirements analysts, architects, developers and other stakeholders. In what follows, we elaborate on the details of this approach, illustrating with examples that show how the approach can be reused.

Figure 4: Goal-centric Risk Mitigation Process

3.1 Context Definition

Establishing the context for an application under development is an important task in system design. To define an application context, we need to identify the most important real-world entities (e.g. roles, software resources, hardware resources, network resources, transactions, processes, as well as key abstractions in the problem domain), and the relationships between them. For quality attributes such as security and privacy, the application context will include a specific definition of what security and privacy really mean for the system being developed. This is crucial because the term security, for example, has been used by many people to mean many different things in different domains.

Domain models, such as entity-relationship diagrams and high-level UML class diagrams, are commonly used to specify software application contexts. While these models provide useful support for the functional requirements of a system, they need to be complemented by other models that represent non-functional requirement contexts. In creating the context for security and privacy for a given ubicomp application, we adopt the NFR Framework [3] because of its expressiveness in representing and analyzing non-functional requirements (NFRs) such as reliability, performance, security, usability, etc., as well as its strong formal semantics, which enable rich qualitative reasoning. In this framework, NFRs are modeled as softgoals to be satisficed. Softgoals are considered satisficed when there is sufficient positive and little negative evidence for achieving them, and denied otherwise. To determine satisficeability, operationalizing softgoals representing design decisions for realizing the NFRs are identified and analyzed. To provide more specific contextual information, softgoals are refined by AND/OR decomposition and contribution links. Positive and negative contributions of offspring softgoals to their parent softgoals are evaluated and trade-offs are made, while rationales are recorded with claim softgoals. The entire process is recorded in a softgoal interdependency graph (SIG). The selected design decisions are then used as the basis for a solution. Figure 5 shows an example Security SIG for Scenario-B, described in Section 2.

Figure 5: A Security Softgoal Interdependency Graph (SIG) for Scenario-B

Figure 6: A Threat/Vulnerability Interdependency Graph for Scenario-A
The light cloud icons represent NFR softgoals in general. For clarity, we use a yellow-lightning symbol on the corner of the light cloud icon to represent security-related and privacy-related softgoals, thus differentiating them from other softgoals such as Ubiquity and Safety. These and other kinds of softgoals are labeled by a nomenclature of the form Type[Topic], where Type is a descriptor (e.g. Ubiquity, Safety, Integrity). The Topic establishes the scope (e.g. Electronic Health Record (EHR), Emergency Patient) of the softgoal. NFR softgoals may be refined, typically by type or topic, one at a time. Refinements can be done using AND-decomposition (denoted by a single arc), OR-decomposition (denoted by a double arc), or a contribution link (denoted by an arrow). For example, in Figure 5, the high-level softgoal Ubiquity[Electronic Health Record (EHR)] is shown to hurt the Privacy[Emergency Patient] and Security[EHR] softgoals. The softgoal Security[EHR] is AND-decomposed into the more specific softgoals Confidentiality[EHR], Integrity[EHR], and Availability[EHR]. The Privacy[Emergency Patient] softgoal is shown to hurt the Safety[Emergency Patient] softgoal, due to the conflicts described in Section 2.3. Also, an operationalizing softgoal NeedToKnowBasedDisclosure[EHR] is used to model a potential solution that can help achieve the security and privacy softgoals. Several alternative implementation mechanisms may be evaluated for implementing this high-level solution, and that evaluation will be based on the stated softgoals.

After setting out the most important NFR softgoals for a given ubicomp application, the high-level data objects (information assets) that require protection from security and privacy breaches need to be identified. These data objects are represented as the topics of the softgoals.
For each of these data objects, we model the associated risks (such as threats and vulnerabilities) as negative softgoals to denote undesirable situations or problems. The interrelationships between the threats and vulnerabilities are also recorded in a security problem interdependency graph (or Security PIG), the semantics of which are fully described in [20, 26]. For example, in the Security PIG shown in Figure 6, the undesirable situation described in Scenario-A is denoted by the N-softgoal PatientDeadOnArrival[Emergency Response System (ERS)], which is OR-decomposed into the possible root causes denoted by the MisCommunication[ERS] and AmbulanceArrivesLate[ERS] N-softgoals. The figure also shows how these undesirable phenomena are further refined by OR-decomposition and contribution links. These refinements can continue until the graph captures a sufficient level of detail for the application being designed.
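The AND/OR refinement and satisficing semantics used above can be illustrated with a toy encoding. This is a simplification for exposition: the NFR Framework's label set is richer than the two-valued satisficed/denied verdict below, and the softgoal names are taken from the Figure 5 example rather than from any tool.

```python
# Toy softgoal tree: an AND-decomposed softgoal is satisficed only when all
# offspring are satisficed; an OR-decomposed softgoal when at least one is.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Softgoal:
    name: str                      # e.g. "Security[EHR]"
    kind: str = "AND"              # "AND" or "OR" decomposition
    children: List["Softgoal"] = field(default_factory=list)
    label: Optional[bool] = None   # leaf verdict; None for interior nodes

def satisficed(g: Softgoal) -> bool:
    """Propagate leaf verdicts up the refinement tree."""
    if not g.children:
        return bool(g.label)
    verdicts = [satisficed(c) for c in g.children]
    return all(verdicts) if g.kind == "AND" else any(verdicts)

security = Softgoal("Security[EHR]", "AND", [
    Softgoal("Confidentiality[EHR]", label=True),
    Softgoal("Integrity[EHR]", label=True),
    Softgoal("Availability[EHR]", label=False),  # e.g. flooding threat unmitigated
])
# AND-decomposition: one denied offspring denies the parent.
result = satisficed(security)
```

Here the denied Availability[EHR] offspring denies Security[EHR] as a whole, mirroring how a single unmitigated threat undermines an AND-decomposed parent in the SIG.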

3.2 Sensitivity Characterization

The primary aim of any security and privacy protection effort is to safeguard some useful information assets. While information assets can include hardware, middleware, people, and services, of primary significance are the data objects that we want to protect. A first step in risk management is to identify these information assets [24]. For example, in our ubiquitous emergency response scenario, we would like to protect the electronic health record (EHR) of the emergency patient. The EHR has been referred to by several terms in the literature, including personal health record (PHR), protected health information (PHI), and electronic medical record (EMR), albeit with subtle differences in meaning.

It is important to note that defining security and privacy constraints on high-level data assets such as the EHR would be rather too restrictive. A more desirable approach is to classify the different data elements making up a data asset into a hierarchical structure, and to attach to each level appropriate security and privacy constraints in a given application context. Our notion of sensitivity characterization is conceptually similar to the US Military's notion of clearance levels [4], except that it provides support for purpose-driven enforcement policies at the application level, instead of a clearance level that needs to be obtained from a certifying authority. This is particularly significant in ubicomp applications because the identities of the potential users/agents may not be predictable in advance.

One technique for doing this context-sensitive characterization is to create a sensitivity meta-tag hierarchy for the given ubicomp application. We define our notion of a sensitivity meta-tag as a schema-level attribute of a data element that indicates its security and privacy sensitivity level. An example of a meta-tag hierarchy for EHRs is given in Figure 7, which shows different sensitivity meta-tags such as health information (HI), protected health information (PHI), highly sensitive health information (HSHI), sensitive personal information (SPI), and personally identifiable information (PII). The grouping of data elements for the purpose of defining sensitivity meta-tags for composite elements can be guided by the principle of minimality. This principle requires that data elements be grouped together in a way that enables the availability of the minimum amount of composite eHealth information for a given intended purpose. Thus, it supports the popular principle of least privilege in mainstream computer security.

Figure 7: A Sensitivity Meta-tag Hierarchy
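One possible encoding of such a hierarchy is an ordered enumeration; the sketch below uses the numeric levels that appear in the meta-tag matrix of Figure 8, but the class and member names are illustrative, not a prescribed schema.

```python
# Hypothetical encoding of the sensitivity meta-tag hierarchy as an ordered
# enumeration. Numeric levels follow the matrix in Figure 8.
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 1   # e.g. City, ZipCode
    CPNI = 4     # customer proprietary network information
    HI = 5       # health information
    PHI = 6      # protected health information
    HSHI = 7     # highly sensitive health information (e.g. HIV status)
    PCI = 8      # payment card information
    SPI = 9      # sensitive personal information
    PII = 10     # personally identifiable information (e.g. SSN, NHID)

# Higher values mean stricter protection, so levels compare naturally.
strictest = max(Sensitivity.HI, Sensitivity.PII)
```

Because the levels are integers, a policy engine can compare a data element's tag against a requester's ceiling with ordinary comparison operators.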
The sensitivity meta-tag hierarchy can be implemented in several ways, such as using enumerations (e.g. HI, PHI, HSHI, SPI, PII) or using numeric integer values that intuitively indicate sensitivity levels. Once a meta-tag hierarchy is established for a given ubicomp application domain, it can then be used in producing a meta-tag matrix for a specific application context. This customizable meta-tag matrix can be used by requirements engineers, policy analysts, and software architects as the basis for specifying policies that impose appropriate levels of security and privacy constraints on a given data element or group of data elements. An example of such a matrix is illustrated in Figure 8, which maps data elements ranging from identifiers (such as Social Security Number, National Health ID and Driver's License Number) through patient basic information, health information and vital signs, to credit card information, against sensitivity meta-tags arranged by level (PII = 10, SPI = 9, PCI = 8, HSHI = 7, PHI = 6, HI = 5, CPNI = 4, Public = 1).

Figure 8: A Meta-tag Matrix for Patient Health Information

It is important to note that composite data elements (such as ContactInfo, derived from Address and PhoneNumber) are characterized with their own sensitivity meta-tags, which can be different from the meta-tags of their constituent data elements. This feature enables us to define more restrictive security and privacy constraints on aggregated information. For example, we characterize City and ZipCode as Public because they pose no threat. However, when StreetAddress is added, the composite data element is collectively named Address, which we characterize with an SPI meta-tag because it uniquely identifies a patient's physical address. This fine-grained level of sensitivity characterization is particularly useful in health information management. This is because it is desirable to use appropriate (and potentially different) levels of protection for different types of patient records, such as depression history, Sexually Transmitted Disease (STD) status, HIV status, psychiatric records, and blood transfusion preference. For example, during an emergency response situation, blood type and blood transfusion preference data should be readily available to the ambulance and hospital crew, while HIV status can be more restricted.
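The way a meta-tag matrix might drive purpose-driven disclosure, including the stricter tagging of composites like Address, can be sketched as follows. The element-to-tag assignments echo the Figure 8 examples, but the purpose table, ceilings, and helper names are illustrative assumptions rather than part of the paper's framework.

```python
# Sketch: element-level meta-tags, composite overrides, and a purpose-driven
# disclosure check. All tables below are illustrative assumptions.
from enum import IntEnum

class S(IntEnum):
    PUBLIC = 1
    HI = 5
    HSHI = 7
    SPI = 9
    PII = 10

ELEMENT_TAGS = {
    "city": S.PUBLIC, "zip_code": S.PUBLIC, "street_address": S.SPI,
    "blood_type": S.HI, "hiv_status": S.HSHI, "national_health_id": S.PII,
}

# Composite elements carry their own meta-tag, possibly stricter than any
# constituent: City and ZipCode are Public, but adding StreetAddress yields
# Address, tagged SPI because it pinpoints the patient's home.
COMPOSITE_TAGS = {
    frozenset({"street_address", "city", "zip_code"}): S.SPI,
}

def sensitivity(elements: set) -> S:
    """Tag of a group: its composite override, or the strictest member tag."""
    override = COMPOSITE_TAGS.get(frozenset(elements))
    base = max(ELEMENT_TAGS[e] for e in elements)
    return max(override, base) if override is not None else base

# Purpose-driven policy: the strictest tag each purpose may receive.
PURPOSE_CEILING = {"emergency_response": S.HI, "treating_physician": S.HSHI}

def disclose(elements: set, purpose: str) -> set:
    """Release only elements whose tag fits the purpose (least privilege)."""
    ceiling = PURPOSE_CEILING[purpose]
    return {e for e in elements if ELEMENT_TAGS[e] <= ceiling}

# An ambulance crew gets blood type but not HIV status.
released = disclose({"blood_type", "hiv_status"}, "emergency_response")
```

The `disclose` helper realizes the minimality principle: each purpose receives only the elements at or below its ceiling, so the ambulance crew sees blood type while HIV status stays restricted to the treating physician.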

3.3 Risk and Tradeoff Analysis

We subscribe to the position that managing security and privacy can be largely reduced to risk management [24, 21].

[Figure 9: Emergency Response: From Goals to Safeguards and Countermeasures]

In our approach, the risk management process starts with the identification of the information assets that need protection. The associated undesirable phenomena, such as security vulnerabilities, threats, and attacks, are then identified. While security threats are bad things that can potentially happen, vulnerabilities are weaknesses in a system that can be exploited to mount an attack (an actualization of a threat). These undesirable phenomena are modeled with N-softgoals as illustrated in Section 3.1. By doing threat elicitation in a context-sensitive manner, identified threats will be related to the application being developed. These threats are then analyzed to determine the degree to which the system is susceptible to them. Threat prioritization is the important next step. Since it may not be feasible to eliminate all actual threats, it is important to weigh the severity of the threats against the cost of implementing mitigation mechanisms. The goal of the risk and tradeoff analysis process described here is to balance what is technically feasible (in terms of safeguards and countermeasures) against what is acceptable (in terms of risk and cost) [19]. In other words, the cost of mitigating a threat may be prohibitive and unjustifiable when compared to the risk that the threat may be realized in an attack. Such threats are documented as being accepted rather than mitigated. Prioritization decisions can be based on quantitative metrics such as severity (a product of the impact of an attack and its probability), or on qualitative indices such as claims based on expert knowledge and experience. Claim softgoals are used to specify the rationale behind accepting such risks instead of implementing countermeasures against them. The relative priorities are then shown in the Security-SIG with exclamation marks next to the N-softgoals that represent the threat or vulnerability.
The rationale for the prioritization can be explicitly captured with claim softgoals. For example, in Figure 9, the expert judgement denoted by the claim softgoal Claim[Patient privacy is very crucial] captures the rationale for prioritizing the threat represented by the UnauthorizedDisclosure[EHR] N-softgoal. The result of the risk and tradeoff analysis process is the selection of potential mitigation strategies. These strategies can take the form of either safeguards or countermeasures: a safeguard is a preventive mechanism, while a countermeasure helps detect or recover from a threat being realized in an attack. These mitigation mechanisms are related to their associated vulnerabilities, threats, or attack patterns with the inverse contribution link, which denotes that they mitigate the N-softgoals. For example, in Figure 9, the operationalizing softgoal NeedToKnowBasedDisclosure[EHR] is shown as a high-level mitigation mechanism against the threat or vulnerability represented by the UnauthorizedDisclosure[EHR] N-softgoal. At this point, the Security-SIG documenting our analysis process contains sufficient information about the threats and their mitigation strategies. The evaluation procedure (labeling algorithm) described in detail in [3] is then used to evaluate the Security-SIG. Explicit contributions and implicit correlations of the countermeasures to the identified threats and vulnerabilities are specified. These evaluations are propagated up the Security-SIG toward the satisficing (or otherwise) of the top-level softgoals. Figure 9 shows the evaluated Security-SIG for our Emergency Response system. It shows that all the expert judgements denoted by the claim softgoals are satisficed (upheld as valid) and their effects reflected. The risk associated with the threat denoted by Spoofing[EHR] is accepted due to the highly negative contributions from the claim softgoal Claim[Cost of IP filtering outweighs the spoofing risk].
This would have resulted in the denial of the Confidentiality[Account] softgoal were it not for the inverse contribution from the denied UnauthorizedDisclosure[EHR] N-softgoal. Other operationalizing softgoals are satisficed and their effects propagated up the Security-SIG as shown. These propagations lead to the satisficing of the top-level security and privacy softgoals.
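As an illustration of the quantitative route, the mitigate-or-accept decision could be computed as follows. The impact, probability, and cost figures here are hypothetical, chosen only to mirror the Figure 9 outcome; they are not values from the paper's case study.

```python
# Hypothetical threat entries: impact (1-10), probability (0-1),
# and the relative cost of implementing a mitigation.
threats = {
    "UnauthorizedDisclosure[EHR]": {"impact": 9, "prob": 0.4, "cost": 3.0},
    "Spoofing[EHR]":               {"impact": 6, "prob": 0.1, "cost": 8.0},
}

def decide(threat: dict) -> str:
    """Severity is the product of impact and probability; a threat is
    mitigated only when its severity justifies the mitigation cost."""
    severity = threat["impact"] * threat["prob"]
    return "mitigate" if severity >= threat["cost"] else "accept"

for name, t in threats.items():
    print(f"{name}: {decide(t)}")
# UnauthorizedDisclosure[EHR] (severity 3.6) is mitigated; Spoofing[EHR]
# (severity 0.6) is accepted, matching the claim that the cost of IP
# filtering outweighs the spoofing risk.
```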

3.4 Purpose-Driven Policy Analysis


We believe that traditional security mechanisms are insufficient to achieve an adequate level of security and privacy protection in ubicomp applications. For instance, while role-based access control (RBAC) [11] has gained wide usage in practice, its application in ubicomp is largely limited. This is partly because RBAC (as well as several other existing security mechanisms) is based on a-priori identification of a user role (or agent) responsible for an action or transaction, or to which access is being granted or denied. In ubicomp domains, however, it is difficult (if not impossible) to anticipate in advance all the active participants (or roles) in a ubiquitous application. Another factor of greater significance is that even in traditional client-server platforms such as web applications, technical security and privacy mechanisms are often complemented by administrative safeguards in the form of policies, procedures, or physical measures. For example, the HIPAA privacy rule stipulates that covered entities must use administrative and physical safeguards, in addition to technical safeguards, in order to protect the privacy rights of individuals [13]. Consequently, we suggest that purpose-driven access control policies are more applicable to ubicomp applications. In this paradigm, the use or disclosure of protected health information over ubicomp devices is based on the intended purpose rather than on the user role involved. Examples of such purposes for eHealth data movement or storage include treatment, billing, emergency, training, and research. However, high-level policies and statements of intended purposes are seldom precise in practice. To address this issue, we describe policies at different levels of abstraction as follows:

3.4.1 Strategic Policies

We define the notion of strategic policies as high-level operationalizations of security and privacy goals and objectives for a given application. These policies describe enterprise-level constraints imposed by corporate policies, industry standards, best practices, laws, regulatory controls, etc. They are commonly used by organizations to define administrative safeguards and controls over information assets. The following are two real-world examples of strategic policies:

SP1: A covered entity (CE) is permitted, but not required, to use and disclose PHI, without an individual's authorization, for the following purposes or situations: (1) To the individual (unless required for access or accounting of disclosures); (2) Treatment, Payment, and Health Care Operations; (3) Opportunity to Agree or Object; (4) Incident to an otherwise permitted use and disclosure [13].

SP2: All PCI-related data shall be masked while in transit as well as in storage.

While SP1 defines what is permissible but not required, and originates from a provision of the HIPAA privacy rule [13], SP2 derives from the Payment Card Industry Data Security Standard (PCI-DSS) [6]. Both of these strategic policies may impose significant constraints on a given ubiquitous eHealth information system.

3.4.2 Tactical Policies

Strategic policies, like SP1 above, mostly exist in natural-language textual documents. They are thus susceptible to the problems of natural-language specifications, such as imprecision, ambiguity, and potential inconsistencies or conflicts. In addition, refining these abstract policies into definable constraints relating to specific services, and then into rules implementable by specific devices supporting the services, is a difficult task [7]. To address this, we introduce the notion of tactical policies as a bridging specification formalism that supports more precision and analysis. A tactical policy T is defined as a tuple T = <A, O, S, P, C, M>, where A is the set of actions (or transactions), O is the set of data objects, S is the set of subjects, P the set of intended purposes, C the set of constraints defined on a given action (or transaction), and M the set of suggested methods for realizing the policy. In other words, each tactical policy t = <a, o, s, p, c, m> denotes that an action a is defined on a data element o, which can optionally be associated with a subject s, for the intended purpose p, subject to a constraint c, by using a method m. For example, SP1 and SP2 above can be partially refined respectively as follows:

TP1 = <use, PHI, CE, Treatment, –, –>
TP2 = <mask, PCI, –, Payment, (while in transit or storage), last 4 digits>

In general, tactical policies like TP1 and TP2 can be specified in several ways, such as structured textual templates, XML documents, or tables.

3.4.3 Operational Policies

These are low-level means of specifying security and privacy constraints in a way that can be mapped onto various access control implementation mechanisms at the application level. Operational policies may be defined in terms of authorization, prohibition, obligation, and refrainment rules. Authorization policies define actions that are permitted under a given situation or purpose of use/transmission, while prohibitions explicitly define the conditions or purposes under which a given data-oriented action should not be permitted. Obligation policies define what actions must take place under a certain situation or purpose of use/transmission, while refrainment policies explicitly forbid such actions. Our classification is influenced by the Ponder language [8], which provides subject-based constraints in terms of positive/negative authorizations and obligation policies. Since Ponder is based on the underlying notion of the subject-action-target trio, it clearly needs to be extended to support the specification of purpose-based constraints, which are more applicable in ubicomp application domains since the identity of the principal subjects may not be known in advance. The general format for specifying operational policies is given below, with optional attributes within brackets:

[description] identifier: modality [on trigger] sensitivity-tag, {action} {purposes} [constraint]

where description provides for general comments, identifier is a unique identifier for a given policy instance, and modality is drawn from the set M = {A, P, O, R}, denoting authorization, prohibition, obligation, and refrainment respectively. The optional trigger is a triggering event which can be specified as an expression, while sensitivity-tag is the class of all data elements characterized with a given sensitivity meta-tag (as described in Section 3.2). In addition, action denotes a set of transactions or operations on the data elements, purposes is the set of intended purposes definable on the data elements, and constraint is the set of conditional expressions that can be imposed on the underlying action or transaction. The following are some concrete examples of operational policies:

OP1: A[CE, {use}, PHI, {Treatment}]
OP2: O[CE, {disclose}, PHI, {Training}, when (pre-authorized by patient)]
OP3: P[–, {disclose}, PCI, {Payment}, unless (masked down to last 4 digits)]

As can be noted, operational policies OP1 and OP2 derive from the tactical policy TP1, while OP3 derives from TP2. As shown above, our approach enables us to systematically reduce high-level strategic policies to a set of enforceable operational policies that can be implemented by a given ubicomp system. The output of this purpose-driven policy analysis step of our framework is a collection of policies at different levels of abstraction. This collection, together with other domain models that define the system's context, constitutes what we refer to as a strategic policy configuration for a given ubicomp system.
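A minimal sketch of how tactical tuples and operational rules could be represented and checked follows. All class names and the first-match, default-deny evaluation semantics are our own illustrative assumptions; a real policy engine would also need conflict resolution and obligation handling.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TacticalPolicy:
    """One instance t = <a, o, s, p, c, m> of a tactical policy."""
    action: str
    data: str                    # sensitivity meta-tag class, e.g. "PHI"
    subject: Optional[str]
    purpose: str
    constraint: Optional[str] = None
    method: Optional[str] = None

@dataclass
class OperationalPolicy:
    modality: str                # "A"uthorization, "P"rohibition, "O"bligation, "R"efrainment
    subject: Optional[str]
    action: str
    tag: str
    purpose: str
    condition: Callable[[dict], bool] = lambda ctx: True

# TP1 refined into OP1: a covered entity may use PHI for treatment.
TP1 = TacticalPolicy("use", "PHI", "CE", "Treatment")
OP1 = OperationalPolicy("A", "CE", "use", "PHI", "Treatment")
# TP2-style prohibition: disclosure of unmasked PCI data is forbidden.
OP3 = OperationalPolicy("P", None, "disclose", "PCI", "Payment",
                        condition=lambda ctx: not ctx.get("masked", False))

def is_permitted(policies, action, tag, purpose, ctx=None):
    """Purpose-driven check: the decision keys on the intended purpose of
    the data movement, not on an a-priori user role."""
    ctx = ctx or {}
    for p in policies:
        if (p.action, p.tag, p.purpose) == (action, tag, purpose):
            if p.modality == "P" and p.condition(ctx):
                return False     # prohibition applies under this context
            if p.modality == "A":
                return True
    return False                 # default deny

assert is_permitted([OP1, OP3], "use", "PHI", "Treatment")
assert not is_permitted([OP1, OP3], "disclose", "PCI", "Payment", {"masked": False})
```

Keying the lookup on (action, tag, purpose) rather than on a subject identity is what distinguishes this from an RBAC-style check, in line with the purpose-driven paradigm described above.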

We believe that a systematic application of the proposed approach is beneficial. One major benefit is that the models generated by the approach, such as scenario context diagrams, the Security-PIG, the Security-SIG, and the strategic policy configuration, can be very useful as communication tools during system design. The approach can also help establish traceability from implemented security and privacy policies to the threats associated with the high-level goals. In addition, the disambiguation of the notion of policies, by partitioning it into strategic, tactical, and operational policies at decreasing levels of abstraction, is very useful. However, semi-automatic mechanisms for transforming strategic policies into operational policies, as described in this paper, are still lacking. Such mechanisms could enhance the ease of use of the proposed approach, and also enable early detection of policy conflicts and inconsistencies.

4. DISCUSSIONS

Ubicomp indeed brings new dimensions to the problems of security and privacy protection that are faced during the interchange of eHealth information across wireless devices. The proposed framework is based on the premise that the security and privacy concerns described in Section 1 can be largely regarded as problems of requirements elicitation and specification. This premise derives from our observation that security in mainstream computing today is largely centered around data networks, storage, and user roles, whereas ubicomp domains need application-oriented, context-sensitive security and privacy. This work extends the goal-oriented frameworks described in [20] with the notions of sensitivity meta-tag hierarchies, goal-centric risk and tradeoff analysis, and purpose-driven policy specifications for eHealth applications. The lack of a central administration server in most ubicomp systems underscores the fact that traditional access control mechanisms, such as user authentication, are largely insufficient in this domain. For the same reason, the notion of purpose-driven access control policies described in this paper seems more appropriate for ubiquitous healthcare applications, since role-based access control relies on user identities. We also observe that the sensitivity meta-tags on which the purpose-driven policies are based can be implemented in a number of ways. One way is to add the tag as an additional meta-attribute in the underlying database management system, and to configure the query engine to enforce preferences and policies. Another way is to implement a query-filtering layer that intercepts data queries and applies the constraints imposed by the meta-tags, using the preferences and the operational policies.
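A sketch of the query-filtering option mentioned above could look as follows. The per-field tag assignments and the per-purpose sensitivity ceilings are assumptions made for illustration only.

```python
# Meta-tag levels as in Figure 8, and an assumed per-field tag assignment.
TAG_LEVEL = {"PII": 10, "SPI": 9, "PCI": 8, "HSHI": 7,
             "PHI": 6, "HI": 5, "CPNI": 4, "PUBLIC": 1}
FIELD_TAG = {"full_name": "PII", "hiv_status": "HSHI",
             "blood_type": "HI", "city": "PUBLIC"}
# Assumed ceiling: the highest sensitivity level a purpose may see.
PURPOSE_CEILING = {"Emergency": 6, "Research": 1}

def filter_record(record: dict, purpose: str) -> dict:
    """Intercept a query result and redact any field whose meta-tag level
    exceeds what the stated purpose allows."""
    ceiling = PURPOSE_CEILING[purpose]
    return {f: (v if TAG_LEVEL[FIELD_TAG[f]] <= ceiling else "<redacted>")
            for f, v in record.items()}

record = {"full_name": "J. Doe", "hiv_status": "+",
          "blood_type": "O-", "city": "Dallas"}
emergency_view = filter_record(record, "Emergency")
# blood_type (HI, level 5) remains visible to the emergency crew,
# while hiv_status (HSHI, level 7) is redacted.
assert emergency_view["blood_type"] == "O-"
assert emergency_view["hiv_status"] == "<redacted>"
```

This mirrors the emergency response example from Section 3.2: blood type is readily available during an emergency, while HIV status stays restricted.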

5. SUMMARY AND FUTURE WORK

In this paper, we have presented a goal-centric and policy-based approach for capturing security and privacy requirements in ubiquitous eHealth information management systems. The approach uses goal-oriented visual models to represent security and privacy objectives, threats against those objectives, and mitigation strategies in the form of safeguards or countermeasures. Our main contributions include: 1) scenario-driven threat identification based on security and privacy goals; 2) an ontology for developing goal-centric threat and vulnerability mitigation models; 3) the notion of purpose-driven security policies for operationalizing risk mitigation strategies at different levels of abstraction, which we demonstrated to be more appropriate for ubiquitous applications; and 4) the generation of visual models that can be used as effective communication tools among stakeholders during the early phases of ubicomp system design. The utility of the proposed goal-centric and purpose-driven threat mitigation framework was validated using practical examples from real-world ubiquitous emergency response systems. To further mature this approach, we plan to explore more formal analysis of our notions of tactical and operational policy specifications, as well as interaction boundaries and trust perimeters for ubiquitous system elements. Future work also includes discovering and resolving the inconsistencies that may potentially exist in strategic policy configurations generated using the approach.

6. REFERENCES

[1] G. D. Abowd and E. Mynatt. Charting past, present and future research in ubiquitous computing. ACM Trans. on Computer-Human Interaction, Special issue on HCI in the new Millennium, 7(1):29–58, Mar. 2000.
[2] D. Brin. The Transparent Society: Will Technology Force us to Choose Between Privacy and Freedom? Basic Books, 1999.
[3] L. Chung, B. A. Nixon, E. Yu, and J. Mylopoulos. Non-Functional Requirements in Software Engineering. Kluwer Academic Publishers, 2000.
[4] D. D. Clark and D. R. Wilson. A comparison of commercial and military computer security policies. In Proc. IEEE Symposium on Security and Privacy, pages 184–194, 1987.
[5] CNN. 911 Caller Ignored, Killed. Available from CNN over the Internet, Oct. 2010. http://www.cnn.com/video/#/video/us/2008/05/02/.
[6] PCI Council. Payment Card Industry (PCI) Data Security Standard. Available over the Internet, July 2010. https://www.pcisecuritystandards.org.
[7] N. Damianou, A. K. B., M. Sloman, and E. C. Lupu. A Survey of Policy Specification Approaches. Technical report, Department of Computing, Imperial College, London, 2002.
[8] N. Damianou, N. Dulay, E. Lupu, and M. Sloman. The Ponder Policy Specification Language. In Policy Workshop 2001, Bristol, UK, Springer-Verlag LNCS, Jan. 2001.
[9] S. Doheny-Farina. The Last Link: Default = Offline, Or Why Ubicomp Scares Me. Computer-Mediated Communication, 1(6):18–20, Oct. 1994.
[10] M. Esler, J. Hightower, T. Anderson, and G. Borriello. Next century challenges: Data-centric networking for invisible computing. In Proc. MobiCom '99, Seattle, WA, 1999.
[11] D. Ferraiolo and R. Kuhn. Role-based access controls. In 15th NIST-NCSC National Computer Security Conference, pages 554–563, 1992.
[12] R. H. R. Harper. Why Do People Wear Active Badges? Technical Report EPC-1993-120, Rank Xerox, Cambridge, MA, 1993.
[13] HHS. HIPAA Privacy Rule. Available from the Health and Human Services Department over the Internet, July 2010. www.hhs.gov/ocr/privacy/hipaa/administrative/privacyrule/index.html.
[14] J. I. Hong and J. A. Landay. An Architecture for Privacy-Sensitive Ubiquitous Computing. In Proc. 2nd Int'l Conference on Mobile Systems, Applications, and Services, pages 177–189, New York, 2004.
[15] M. Langheinrich. Privacy by Design - Principles of Privacy-Aware Ubiquitous Systems. In Proc. Int'l Conference on Ubiquitous Computing, pages 273–291, Gresham College, London, 2001.
[16] S. Lederer, J. Hong, X. Jiang, A. Dey, J. Landay, and J. Mankoff. Towards everyday privacy for ubiquitous computing. Tech. Report UCB/CSD-03-1283, Computer Science Division, University of California, Berkeley, 2003.
[17] A. Leung and C. J. Mitchell. Ninja: Non Identity Based, Privacy Preserving Authentication for Ubiquitous Environments. In UbiComp 2007: Ubiquitous Computing, LNCS 4717, pages 73–90, Sept. 2007.
[18] A. Mihailidis, B. Carmichael, J. Boger, and G. Fernie. An intelligent environment to support aging-in-place, safety, and independence of older adults with dementia. In Proc. 2nd Int'l Workshop on Ubiquitous Computing for Pervasive Healthcare Applications (UbiHealth-2003), Seattle, WA, 2003.
[19] S. Myagmar, A. Lee, and W. Yurcik. Threat Modeling as a Basis for Security Requirements. In Proc. Symposium on Requirements Engineering for Information Security (SREIS'05), Paris, France, Aug. 2005.
[20] E. Oladimeji, S. Supakkul, and L. Chung. Security Threat Modeling and Analysis: A Goal-Oriented Approach. In Proc. 10th IASTED International Conference on Software Engineering and Applications (SEA 2006), pages 13–15, Dallas, USA, Nov. 2006.
[21] E. Oladimeji, S. Supakkul, and L. Chung. A Model-driven Approach to Architecting Secure Software. In Proc. 19th International Conference on Software Engineering and Knowledge Engineering (SEKE'07), pages 535–540, Boston, USA, July 2007.
[22] R. Sasaki, S. Qing, E. Okamoto, and H. Yoshiura. Security and Privacy in the Age of Ubiquitous Computing. In Proc. IFIP TC11 20th Int'l Information Security Conference, Chiba, Japan, 2005.
[23] F. Stajano. Security for Ubiquitous Computing. John Wiley & Sons, 2002.
[24] F. Stajano. Security Issues in Ubiquitous Computing. In H. Nakashima, H. Aghajan, and J. C. Augusto (Eds.), Handbook of Ambient Intelligence and Smart Environments. Springer, 2010.
[25] V. Stanford. Pervasive Health Care Applications Face Tough Security Challenges. IEEE Pervasive Computing, pages 8–12, April–June 2002.
[26] S. Supakkul and L. Chung. Extending Problem Frames to Deal with Stakeholder Problems: An Agent- and Goal-Oriented Approach. In Proc. 24th ACM Symposium on Applied Computing (RE Track), pages 389–394, Honolulu, Hawaii, Mar. 9–12, 2006.
[27] K. Venkatasubramanian and S. K. S. Gupta. Security Solutions for Pervasive Healthcare. In Proc. 3rd Int'l Conference on Security in Pervasive Computing, York, UK, Apr. 18–21, 2006.
[28] E. Weippl, A. Holzinger, and A. Tjoa. Security Aspects of Ubiquitous Computing in Health Care. E&I Journal, pages 156–161, 2006.
[29] M. Weiser. Hot Topics: Ubiquitous Computing. IEEE Computer, pages 71–72, Oct. 1993.
[30] N. Winters. Personal Privacy and Popular Ubiquitous Technology. In Proc. UbiComp 2004: The 2nd Int'l Workshop on Ubiquitous Computing for Pervasive Healthcare Applications, London, UK, 2004.
