
This is a pre-print version of the article. This version is distributed for non-commercial purposes.

Alessandro Mantelero

Aggregate Professor, Politecnico di Torino


Director of Privacy and Faculty Fellow, Nexa Center for Internet and Society
alessandro.mantelero@polito.it

Giuseppe Vaciago

Assistant Professor, University of Insubria


giuseppe.vaciago@uninsubria.it

DATA PROTECTION IN A BIG DATA SOCIETY. IDEAS FOR A FUTURE REGULATION

(2015) 15 Digital Investigation 104-109

Post-print version available at http://dx.doi.org/10.1016/j.diin.2015.09.006

Abstract:
The big data society has changed the traditional forms of data analysis and created a new predictive approach to knowledge and investigation. In this light, it is necessary to consider the impact of this new paradigm on the traditional notion of data protection and its regulation.
Focusing on the individual and collective dimensions of data use, the authors briefly outline the challenges that big data poses for individual informational self-determination, reasonable suspicion and collective interests. The article then suggests some innovative proposals that may update the existing data protection legal framework and make it responsive to the present algorithmic society.
Keywords: algorithms, big data, privacy, data protection, profiling, decisional model, social control, discrimination, reasonable suspicion, fourth amendment

* Alessandro Mantelero is the author of sections 1, 2.1, 3 and 4.
* Giuseppe Vaciago is the author of section 2.2.


1. Introduction
In order to briefly depict the main challenges associated with big data analytics and suggest possible regulatory solutions, it is necessary to consider two different scenarios: the individual dimension of big data use (micro scenario) and its collective dimension (macro scenario). The first dimension concerns the way in which big data analytics affect individuals' chances of taking informed decisions about the use of their personal information, as well as individuals' expectations of privacy. 1 The second dimension focuses on the social impact of the categorical approach that characterizes the logic of big data analytics and their use for decisional purposes. 2
Regarding the micro scenario, an interesting piece of speculative fiction written by Sara Watson envisions a future domestic world dominated by intelligent devices (IoT) which take care of their users and take decisions in their interest (Watson, 2014). Obviously, although this is not considered in Watson's piece, users have received detailed information about the terms and conditions of these devices and about their privacy policies (with links to third parties' privacy policies, terms and conditions, etc.). 3
In the near future, millions of sensors and devices will be connected and able to interact with each other in order to collect data about users and predict individual behaviour, supporting, anticipating and (in some cases) nudging (Thaler and Sunstein, 2008) users' decisions (Howard, 2015). Unread legal notices (Mantelero, 2015; Solove, 2013; Brandimarte et al., 2010; Turow et al., 2007) and users' consent driven by must-have devices or services will legitimize personal data use, as already happens with regard to hundreds of apps, online services, loyalty cards, etc.
Against this background, two questions arise: is this the end of the traditional idea of individual self-determination with regard to personal data? Should big data analytics lead rule-makers to reconsider the way in which the idea of self-determination has been embedded in data protection regulation?
From a different perspective, it should be noted that, in the big data context, decisions concerning individuals are taken on the basis of group-profiling technologies (Hildebrandt and Gutwirth, 2008) and the predictive knowledge provided by analytics (Mayer-Schönberger and Cukier, 2013; Bollier, 2010). Complicated and obscure data processes (Pasquale, 2015) drive decisions concerning individuals, who become mere units of one or more groups generated by analytics (FTC, 2014). Moreover, in the field of data processing for law

1 See below paras. 2.1 and 2.2.
2 See below para. 3.
3 See e.g. Fitbit Privacy Policy (August 10, 2014) <http://www.fitbit.com/uk/privacy#PrivacyPolicy> accessed August 31, 2015.


enforcement purposes, this poses serious questions in terms of interference with constitutional liberties and the principle of reasonable suspicion. 4
Focusing on the macro scenario, the algorithmic approach is creating a new "truth regime" (Rouvroy, 2014), in which primetime television usage or the propensity to buy general merchandise become predictor variables used by insurance companies to assess the risks associated with segments of their clients (FTC, 2014; Garla et al., 2013). In the same way, a neighbourhood's general credit score 5 affects the chances of access to credit of the individuals living in a certain area or, in another field, mere social connections with the perpetrators of serious crimes are sufficient to define lists of potential offenders (Gorner, 2013).
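The categorical logic of such scoring can be illustrated with a minimal, hypothetical sketch (all scores and the threshold are invented for illustration; real scoring models are far more complex): an aggregate neighbourhood score is computed from residents' individual credit scores and then applied to every applicant from that area, regardless of their personal record.

```python
# Hypothetical illustration of aggregate (neighbourhood-level) credit scoring.
# All figures are invented; real models are far richer than a simple mean.

def neighbourhood_score(individual_scores):
    """Aggregate score for an area: here, simply the mean of residents' scores."""
    return sum(individual_scores) / len(individual_scores)

def assess_applicant(personal_score, area_score, threshold=600):
    """Categorical decision: the area score overrides the personal one."""
    return "approve" if area_score >= threshold else "deny"

area_scores = [720, 510, 480, 530, 495]    # residents' individual scores
score = neighbourhood_score(area_scores)   # 547.0

# An applicant with an excellent personal record (720) is still denied,
# because the decision is taken at the level of the group, not the person.
print(score, assess_applicant(720, score))  # 547.0 deny
```

The point of the sketch is that `personal_score` never influences the outcome: the map (the area score) stands in for the territory (the individual).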
All these decisional models disregard the specific case and its peculiar aspects,
since they adopt a categorical approach in mapping our society. Nevertheless, a
map is not the territory (Korzybski, 1933) and the logic of the author of the map,
the way in which the territory is represented, as well as the potential errors of the
representation, may produce different and, in some cases, biased results
(Robinson + Yu, 2014; National Immigration Law Center, 2013; Gandy, 2000).
For these reasons, it is important that people affected by these representations of
society are actively involved in the process and are adequately protected against
biased representations or lack of accuracy in the portrayal of groups of
individuals.
Moreover, a categorical approach may also induce self-fulfilling cycles of bias
and consequent discriminatory effects. This is the case of predictive policing
software, which may put the spotlight on specific territorial areas and induce
police departments to allocate more resources to these areas. The potential outcome is a rise in crime detection at local level that reinforces the original prediction, while a reduced police presence in the remaining districts lowers crime detection in these areas and apparently confirms the positive prediction for these districts (Koss, 2015).
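The self-reinforcing cycle described above can be sketched as a toy simulation (a deliberately simplified model with invented numbers: two districts with identical underlying crime, detection proportional to patrol presence, and next period's patrols allocated in proportion to current detections):

```python
# Toy model of a predictive-policing feedback loop. All numbers are invented
# and the model is deliberately crude: it only illustrates the mechanism.

TRUE_CRIME = {"A": 100, "B": 100}   # identical underlying crime in both districts
patrols = {"A": 60, "B": 40}        # the prediction sends more patrols to A (of 100)

for period in range(3):
    # Crime is detected only where police are present to detect it.
    detected = {d: TRUE_CRIME[d] * patrols[d] // 100 for d in TRUE_CRIME}
    total = sum(detected.values())
    # Next period's patrols follow this period's detections.
    patrols = {d: round(100 * detected[d] / total) for d in detected}

# Each period the recorded data "confirm" that A is the high-crime district
# (60 vs 40 detections), although the underlying crime rates are identical.
print(detected)  # {'A': 60, 'B': 40}
```

Under these assumptions the initial bias never corrects itself: the detection data reproduce the allocation that generated them.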
In the light of the above, a second series of questions arises: is the traditional individualistic model of data protection still adequate to face the new predictive society? In a society where group profiling is used for decisional purposes, should rule-makers consider the supra-individual and collective dimension of data processing?

2.1 The micro scenario: beyond notice and consent

The purpose specification principle and the use limitation principle are the traditional pillars of data protection regulations and, with regard to consumer data protection, the so-called notice and consent model (i.e. an informed, freely
4 See below para. 2.1.
5 This score predicts credit risks referring to individuals that live in a small geographic area and is defined on the basis of aggregate credit scores.


given and specific consent) represents one of the most widely used mechanisms to legitimize data processing (Article 29 Data Protection Working Party, 2011; Van Alsenoy et al., 2014; Mayer-Schönberger, 1997; Brownsword, 2009; The White House, 2012; Ohm, 2013; Cranor, 2012). 6 Nevertheless, the transformative use of big data (Tene and Polonetsky, 2012) contrasts with this legal framework.
Since analytics are designed to extract hidden or unpredictable inferences and correlations from datasets, it becomes difficult to define ex ante the purposes of data processing (Article 29 Data Protection Working Party, 2013) and to comply with the use limitation principle. A notice that explains all the possible uses of data can therefore hardly be given to data subjects at the time of the initial data collection.
Not only are descriptions of the purposes of data processing (notices, privacy policies) becoming more and more evanescent. The very idea of self-determination embodied in data subjects' consent is also challenged by an increasing concentration of information in the hands of a few entities ("data barons"), both public and private (Cate and Mayer-Schönberger, 2013), and by its consequences in terms of technological and social lock-in effects (Mantelero, 2014).
Finally, the complexity of data processing and the legalese wording of privacy policies lead users to disregard them and to provide their data on the basis of a mere interest in obtaining specific services, or on the basis of the reputation of service providers (Mantelero, 2015).
For these reasons, it is necessary to reconsider the existing regime based on data subjects' (pseudo) self-determination and accept that data subjects are often not able to take meaningful decisions about the use of their data without adequate external advice. This is no different from what happens when people use cars or take medicines: users are not expected to know in detail how these products work, given their lack of competence or real possibility of choice, and third parties (producers, agencies, etc.) assess the risks of these products in the users' interests.
In the same way, data protection regulations should require a rigorous prior impact assessment of big data processing, which should not focus only on data security, but should also consider the social and ethical impact of the use of information (Calo, 2013; Dwork and Mulligan, 2013; Wright, 2011; Schwartz, 2010). Adequate publicity of the results of this assessment would keep data subjects informed about data processing and aware of the risks of data uses, enabling them to decide whether or not to take part in it.
Nonetheless, in the presence of complex data collection and processing systems influenced by lock-in effects, such an impact assessment should be conducted neither by consumers nor by companies. It should be carried out by third parties, under the supervision of national data protection authorities, which should define the professional requirements of these third parties.
6 See art. 2 (h), Directive 95/46/EC and art. 4 (8) PGDPR-LIBE.


Unfortunately, this regulatory model has only partially been taken into account by legislators. Privacy impact assessment procedures already exist, and a data protection impact assessment is part of the new EU Proposal on data protection. Nevertheless, these assessment procedures are frequently limited to specific cases, mainly focused on security, weakened by the lack of resources for adequate enforcement by the data protection authorities, and often at risk of becoming merely formal procedures. On the contrary, a detailed multiple impact assessment should become the first step of an effective strategy in designing new data protection-oriented products and services (data protection by design).

2.2 Reasonable suspicion and expectations of privacy in the Big Data era
While EU data protection rules do not apply to the processing of personal data concerning public security, defence or State security 7, the use of big data for investigative, preventive and predictive purposes introduces significant new issues regarding its possible impact on privacy and on due process for the defendant.
Suppose police are investigating a series of robberies in a particular neighborhood. A police officer sees a potential suspect in the area and uploads a photo from his patrol car to a computerized database. Facial recognition software scans the police database and suddenly there is a match. The personal information about the suspect that appears on the officer's computer screen mentions prior arrests and convictions for robbery. The officer then searches additional sources of third-party data, including information on social media and the suspect's GPS location, which link the suspect with the robberies in question (Koss, 2015). At the end of a brief analysis that requires no more than a few minutes, the police officer has formed a particularized, individualized suspicion about a man who is not in fact doing anything overtly criminal.
The question is: can this aggregation of individualized information be sufficient to justify interfering with a person's constitutional liberty and with the principle of reasonable suspicion 8?
George Mason University professor Cynthia Lum does not think predictive
policing is all that different from conventional crime-prevention strategies

Article 3 of Directive 95/46/EC


The Fourth Amendment of US Constitution and the Article requires reasonable suspicion to stop
a suspect. At EU level, article 5 1 of European Convention on Human Right requires a
reasonable suspicion that a criminal offence has been committed presupposes the existence of
facts or information which would satisfy an objective observer that the person concerned may have
committed an offence (Ilgar Mammadov v. Azerbaijan, 88; Erdagz v. Turkey, 51; Fox,
Campbell and Hartley v. the United Kingdom, 32). Therefore, a failure by the authorities to
make a genuine inquiry into the basic facts of a case in order to verify whether a complaint was
well-founded disclosed a violation of Article 5 1 (c) (Stepuleac v. Moldova, 73; Eli and
Others v. Turkey, 674).


(Gordon, 2013). Before predictive policing, we had crime analysis where analysts
studied patterns, trends, repeat offenders and modus operandi.
But the use of big data through predictive software provides the opportunity to rapidly obtain information that could be used on the basis of the following principle: law enforcement officers may access many of these records without violating the Fourth Amendment, on the theory that there can be no reasonable expectation of privacy in relation to information that we have knowingly revealed to third parties. 9 This issue was addressed in People v. Harris. 10 On January 26, 2012, the New York County District Attorney's Office sent a subpoena to Twitter Inc. seeking to obtain the Twitter records of a user suspected of having taken part in the Occupy Wall Street movement. Twitter refused to provide the law enforcement officers with the information requested and sought to quash the subpoena. Rejecting the arguments put forward by Twitter, the Criminal Court of New York upheld the subpoena, stating that tweets are, by definition, public, and that a warrant is not required in order to compel Twitter to disclose them. The District Attorney's Office argued that the third-party disclosure principle, put forward for the first time in United States v. Miller, applied. 11
In the European Union, whilst this type of data collection frequently takes place, it could potentially be in breach of ECHR case law: the ruling in the Rotaru v. Romania case 12 dictates that public information can fall within the scope of private life where it is systematically collected and stored in files held by the authorities. As O'Floinn observes: "Non-private information can become private information depending on its retention and use. The accumulation of information is likely to result in the obtaining of private information about that person" (O'Floinn and Ormerod, 2011).
Underlying the question of legitimacy is the issue of whether predictive policing technologies and, especially, big data are reliable and accurate (Ferguson, 2014). There is a huge difference between the use of big data for commercial purposes and its use for investigative purposes. In the latter case, there is a risk that investigations will be started in relation to a person who is innocent. The standard of reasonable suspicion arose in the US with the Terry v. Ohio case, which requires that the police be able to point to "specific and articulable facts which, taken together with rational inferences from those facts, reasonably warrant th[e] intrusion" 13. While this principle could be applied in a small data world, the use of big data results in the risk of the level of protection provided by the reasonable
9 See United States v. Miller, 425 U.S. 435 (1976).
10 See 2012 NY Slip Op 22175 [36 Misc 3d 868].
11 See United States v. Miller, 425 U.S. 435 (1976).
12 See Rotaru v. Romania (App. No. 28341/95) (2000) 8 B.H.R.C. at [43].
13 See Terry v. Ohio, 392 U.S. 1, 21-22 (1968) (articulating the standard and explaining the rationale behind it).


suspicion standard being reduced, as the police can easily obtain information
about a particular suspect.
Another fundamental aspect is the transparency issue in the use of big data.
Predictive policing programs will need to be explained to courts in a way that
accurately addresses concerns about data collection, analysis, and the creation of
the probabilities. A lack of transparency in the use of big data for predictive
policing purposes could amount to a breach of the principle of non-discrimination
(Capers, 2011). The following provides a good example of how the near future
might unfold.
The ACLU's recent national study on marijuana arrests demonstrates that African Americans are more likely to be arrested for offences involving marijuana than whites, despite equivalent usage rates. It follows that more data has been collected about minority marijuana arrests, even though the actual crime rates are the same. If the data collected only concern certain classes of people, then those people are more likely to become targets. 14
Andrew Guthrie Ferguson, a law professor at the University of the District of Columbia, recently made a perfectly plausible prediction, saying that very soon "we'll see a Fourth Amendment case before the court" (Ferguson, 2014). At the same time, in Europe we await approval of the Directive on the protection of individuals with regard to the processing of personal data by competent authorities for the purposes of prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and the free movement of such data. 15 This directive represents the first piece of legislation to have direct effect, compared with previous attempts by way of Council of Europe Recommendation No. R (87) and Framework Decision 2008/977/JHA.
The founding principles of this Directive are twofold. First, there is the need for
fair, lawful and adequate data processing during criminal investigations or in
order to prevent a crime, with all data being collected for specified, explicit and
legitimate purposes and erased or rectified without delay. Secondly, there is the
duty to make a clear distinction between the various categories of possible data
subjects in criminal proceedings (persons in relation to whom there are serious
grounds for believing that they have committed or are about to commit a criminal
offence, convicted offenders, victims of criminal offences and third parties to a

14 ACLU Report, The War on Marijuana in Black and White, 2013. Available from, https://www.aclu.org/sites/default/files/field_document/1114413-mj-report-rfs-rel1.pdf.
15 European Commission. Proposal for a Directive of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data by competent authorities for the purposes of prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and the free movement of such data (COM/2012/010 final). 2012. Available from, http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52012PC0010. Accessed 15.01.14.


criminal offence). This directive could also have a fundamental impact on the way that law enforcement officers are able to use big data for investigative purposes.

3. The macro scenario: beyond the individual dimension of data protection


The focus on the assessment of the social and ethical impact of data uses leads rule-makers to consider the collective dimension of data protection. Regarding this dimension, it is worth pointing out that the traditional notions of privacy and data protection are mainly based on the model of individual rights. The social dimension of these rights has been recognised by policymakers, legislators and courts, but the right holder has remained the data subject, and the rights related to informational privacy have mainly been exercised by individuals.
Nevertheless, in a society in which large amounts of data are collected to investigate the attitudes and behaviours of large groups and communities, up to entire countries, this atomistic approach shows its limits. The use of massive datasets for social surveillance and predictive purposes in different fields (marketing, employment, social care, etc.), and its potential negative effects in terms of unfair discrimination (The White House, 2014), make it necessary to consider the collective dimension of data protection and, more generally, of the use of data.
This collective dimension seems to be only partially recognised by the (few) legal scholars who have devoted contributions to group privacy and collective interests in data processing. They take into account groups in the sociological or legal meaning of the term (family relationships, interpersonal relationships, organised or non-organised collective entities) and remain mainly focused on the model of individual rights (i.e. the rights of the members of a group or the rights of the group as an autonomous collective body) (Bloustein, 1977; Bloustein, 1978; Westin, 1970; Bygrave, 2002).
Moreover, it should be noted that the collective issues that stem from the big data society are different and not related to the traditional idea of groups, which concerns aggregations of individuals who are conscious of being part of a group and interact with each other.
In the big data era, groups are generated by analytics: data gatherers shape the population they intend to investigate and collect information about various people, who do not know the other members of the group and, in many cases, are not aware of the consequences of being part of it. This is the case of the consumer group profiling, scoring solutions and predictive policing applications mentioned in the previous paragraphs.
Finally, the nature of the interests that assume relevance in this context is different, since the focus is on the collective dimension and the potential harm to groups (Crawford et al., 2013; boyd et al., 2014). In this sense, prejudice can result not only from the well-known privacy-related risks (e.g. illegitimate use of


personal information, data security), but also from discriminatory and invasive
forms of data processing (Mantelero, 2015).
Against this background, two critical aspects arise: (i) the representation of
collective interests, in situations where the common attributes of the group
become evident only in the hands of the data gatherer; (ii) the enforcement of
these collective interests.
In addressing these crucial aspects, a central role is played by the risk assessment process. 16 This assessment represents the moment at which it is possible to understand how data processing affects collective interests and to identify the stakeholders that should be involved in the process, in order to give voice to those interests.
Regarding the authorities that should be responsible for the enforcement of collective rights concerning data protection, it is worth pointing out that many countries have different bodies responsible for the supervision of social surveillance activities, as well as bodies focused on anti-discrimination actions. Nevertheless, these agencies have different approaches and resources, use different remedies and do not necessarily cooperate to solve cases with multiple impacts.
At the same time, the analysis of data processing plays a central role in the assessment of the impact that the use of big data has on collective interests. Moreover, data processing represents the element common to all these situations, regardless of the nature of the potential harm to collective interests. For this reason, data protection authorities may have a key role in risk assessment processes regarding collective interests, even though they do not focus on specific social implications (e.g. discrimination).

4. Conclusion
In the present big data society, new individual and collective issues are arising from the predictive use of information by private and public entities. The new algorithmic society and its predictive regime of truth, which is based on obscure processes and mere correlations, seem not to be adequately regulated by the existing legal framework and, mainly, by data protection laws.
The limits of the traditional paradigm of data protection, in terms of protected interests and remedies, make it necessary to outline new models. These regulatory models should partially shift the focus from the individual dimension and data subjects' self-determination towards a broader vision focused on the use of data, which encompasses the assessment of the various individual and collective risks related to data processing, as well as adequate procedural solutions to involve all the potential stakeholders and protect collective interests in data protection.

16 See above para. 2.1.


5. References
Article 29 Data Protection Working Party. Opinion 03/2013 on purpose limitation. 2013. Available from, http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp203_en.pdf. Accessed 27.02.14.
Article 29 Data Protection Working Party. Opinion 15/2011 on the definition of consent. 2011. Available from, http://ec.europa.eu/justice/policies/privacy/docs/wpdocs/2011/wp187_en.pdf. Accessed 27.02.14.
Bloustein EJ. Group Privacy: The Right to Huddle. Rutgers-Cam. L.J. 1977; 8:
219-83.
Bloustein EJ. Individual and Group Privacy. New Brunswick: Transaction Books;
1978.
Bollier D. The Promise and Peril of Big Data. Aspen Institute, Communications and Society Program; 2010. Available from, http://www.aspeninstitute.org/sites/default/files/content/docs/pubs/The_Promise_and_Peril_of_Big_Data.pdf. Accessed 27.02.14.
Brandimarte L, Acquisti A, Loewenstein G. Misplaced Confidences: Privacy and the Control Paradox. Ninth Annual Workshop on the Economics of Information Security; 2010. Available from, http://www.heinz.cmu.edu/~acquisti/papers/acquisti-SPPS.pdf. Accessed 27.02.14.
Brownsword R. Consent in Data Protection Law: Privacy, Fair Processing and Confidentiality. In Gutwirth S et al. (eds). Reinventing data protection? London: Springer; 2009.
Bygrave LA. Data Protection Law. Approaching Its Rationale, Logic and Limits.
The Hague: Kluwer Law International; 2002.
Calo RM. Consumer Subject Review Boards: A Thought Experiment. Stan. L.
Rev. Online 2013; 66: 97-102.
Capers B. Rethinking the Fourth Amendment: Race, Citizenship, and the Equality
Principle. Harv. C.R.-C.L. L. Rev. 2011; 46:1-49.
Cate FH, Mayer-Schönberger V. Data Use and Impact. Global Workshop. Available from, http://cacr.iu.edu/sites/cacr.iu.edu/files/Use_Workshop_Report.pdf. Accessed 27.02.14.


Cranor LF. Necessary but not sufficient: standardized mechanisms for privacy and
choice. J. on Telecom & High Tech L. 2012; 10: 273-307.
Crawford K et al. Big Data, Communities and Ethical Resilience: A Framework for Action. 2013. Available from, http://www.rockefellerfoundation.org/app/uploads/71b4c457-cdb7-47ec-81a9-a617c956e6af.pdf. Accessed 05.04.15.
Federal Trade Commission. Data Brokers: A Call for Transparency and Accountability. 2014. Available from, https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf. Accessed 27.02.14.
Ferguson A. Predictive Policing: The Future of Reasonable Suspicion. Emory L. J. 2012; 62: 259-325. Available from, http://www.law.emory.edu/fileadmin/journals/elj/62/62.2/Ferguson.pdf. Accessed 15.09.15.
Gandy OH Jr. Exploring Identity and Identification in Cyberspace. Notre Dame J.L. Ethics & Pub. Pol'y 2000; 14: 1085-1111. Available from, http://scholarship.law.nd.edu/ndjlepp/vol14/iss2/10. Accessed 10.07.15.
Garla S, Hopping A, Monaco R, Rittman S. What Do Your Consumer Habits Say About Your Health? Using Third-Party Data to Predict Individual Health Risk and Costs. Proceedings, SAS Global Forum; 2013. Available from, http://support.sas.com/resources/papers/proceedings13/170-2013.pdf. Accessed 28.02.15.
Gordon LA. Predictive Policing May Help Bag Burglars, But It May Also Be a Constitutional Problem. A.B.A. J. September 1, 2013. Available from, http://www.abajournal.com/magazine/article/predictive_policing_may_help_bag_burglars_but_it_may_also_be_a_constitutio/. Accessed 14.09.15.
Gorner J. Chicago police use 'heat list' as strategy to prevent violence. Officials generate analysis to predict who will likely be involved in crime, as perpetrator or victim, and go door to door to issue warnings. Chicago Tribune. Chicago, 21 August 2013. Available from, http://articles.chicagotribune.com/2013-08-21/news/ct-met-heat-list-20130821_1_chicago-police-commander-andrew-papachristos-heat-list. Accessed 25.02.15.
Hildebrandt M, Gutwirth S (eds). Profiling the European Citizen: Cross-Disciplinary Perspectives. Dordrecht: Springer; 2008.
Howard PN. Pax Technica: How the Internet of Things May Set Us Free or Lock Us Up. New Haven & London: Yale University Press; 2015.


Koss KK. Leveraging Predictive Policing Algorithms to Restore Fourth Amendment Protections in High-Crime Areas in a Post-Wardlow World. Chi.-Kent L. Rev. 2015; 90: 301-34.
Korzybski A. Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics. Lancaster, Pa.: The International Non-Aristotelian Library Pub. Co.; 1933.
Mantelero A. Children online and the future EU data protection framework: empirical evidences and legal analysis. Int. J. Technology Policy and Law 2015 (forthcoming).
Mantelero A. Personal data for decisional purposes in the age of analytics: from an individual to a collective dimension of data protection. CLSR 2015 (forthcoming).
Mantelero A. The future of consumer data protection in the E.U. Re-thinking the notice and consent paradigm in the new era of predictive analytics. CLSR 2014; 30: 643-60.
Mayer-Schönberger V, Cukier K. Big Data. A Revolution That Will Transform How We Live, Work and Think. London: John Murray; 2013.
Mayer-Schönberger V. Generational development of data protection in Europe. In Agre PE, Rotenberg M (eds). Technology and privacy: The new landscape. Cambridge, Mass.: MIT Press; 1997.
National Immigration Law Center. Verification Nation. 2013. Available from, www.nilc.org/document.html?id=957. Accessed 29.01.15.
O'Floinn M, Ormerod D. Social networking sites, RIPA and criminal investigations. Crim. L.R. 2011; 24: 766.
Ohm P. Branding Privacy. Minn. L. Rev. 2013; 97: 907-89.
Tene O, Polonetsky J. Privacy in the Age of Big Data: A Time for Big Decisions.
Stan. L. Rev. Online 2012; 64: 63-69.
Pasquale F. The Black Box Society. The Secret Algorithms That Control Money
and Information. Cambridge: Harvard University Press; 2015.
Robinson + Yu. Civil Rights, Big Data, and Our Algorithmic Future. A September 2014 report on social justice and technology. 2014. Available from, http://bigdata.fairness.io/wp-content/uploads/2014/09/Civil_Rights_Big_Data_and_Our_Algorithmic_Future_2014-09-12.pdf. Accessed 10.03.15.


Rouvroy A. Des données sans personne: le fétichisme de la donnée à caractère personnel à l'épreuve de l'idéologie des Big Data. 2014. Available from, http://works.bepress.com/antoinette_rouvroy/55. Accessed 08.03.15.
Rouvroy A. Algorithmic Governmentality and the End(s) of Critique. 2013. Available from, https://vimeo.com/79880601. Accessed 10.03.15.
Schwartz PM. Data Protection Law and the Ethical Use of Analytics. 2010. Available from, http://www.huntonfiles.com/files/webupload/CIPL_Ethical_Undperinnings_of_Analytics_Paper.pdf. Accessed 7.02.14.
Thaler RH, Sunstein CR. Nudge: Improving decisions about health, wealth, and
happiness. New Haven: Yale University Press; 2008.
Solove DJ. Introduction: Privacy Self-Management and the Consent Dilemma.
Harv. L. Rev. 2013; 126: 1880-903.
The White House, Executive Office of the President. Big Data: Seizing Opportunities, Preserving Values. 2014. Available from, http://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf. Accessed 26.12.14.
The White House. Consumer Data Privacy in a Networked World: A Framework for Protecting Privacy and Promoting Innovation in the Global Digital Economy. 2012. Available from, http://www.whitehouse.gov/sites/default/files/privacy-final.pdf. Accessed 25.06.14.
Turow J, Hoofnagle CJ, Mulligan DK, Good N. The Federal Trade Commission
and Consumer Privacy in the Coming Decade. ISJLP 2007; 3:723-49.
Van Alsenoy B, Kosta E, Dumortier J. Privacy notices versus informational self-determination: Minding the gap. Int. Rev. Law, Comp. & Tech. 2014; 28(2): 185-203.
Watson SM. Dada Data and the Internet of Paternalistic Things. 2014. Available from, http://www.saramwatson.com/blog/2014/12/16/dada-data-and-the-internet-of-paternalistic-things. Accessed 05.09.15.
Westin AF. Privacy and Freedom. New York: Atheneum; 1970.
Wright D. A framework for the ethical impact assessment of information technology. Ethics Inf. Technol. 2011; 13: 199-226.