IFIP Working Group 9.4
12th International Conference on
Social Implications of Computers in Developing Countries


Conference Theme:
Into the Future: Themes, Insights and Agendas for
ICT4D Research and Practice

Sunset Jamaica Grande, Ocho Rios, Jamaica
May 19 - 22, 2013




THE UNIVERSITY OF THE WEST INDIES, MONA




Conference Proceedings


TABLE OF CONTENTS
WELCOME CONFERENCE CHAIRS 2
WELCOME ORGANIZING CHAIRS 3
CONFERENCE COMMITTEE 4
TRACK CHAIRS 5
PROGRAMME COMMITTEE 6
SPONSORS 7
EXHIBITORS 10
KEYNOTE SPEAKERS 16
PANELS 20
TRACK INTO THE FUTURE 25
TRACK UNIVERSITY-COMMUNITY ENGAGEMENT 334
TRACK SEN'S CAPABILITY APPROACH AND ICT4D 394
TRACK SOCIAL MEDIA AND DEVELOPMENT 467
TRACK UNDERSTANDING THE ACTORS: ACTOR-NETWORK THEORY IN ICT FOR DEVELOPMENT RESEARCH 504
TRACK HOW ICT FRAME DEVELOPMENT GOALS 568
TRACK ICTS, COLLABORATION AND SERVICE INNOVATION: BRIDGING BOUNDARIES AND CULTURES 682
TRACK ICTD IN THE CARIBBEAN - ARTICULATING UNIQUE CHALLENGES AND SOLUTIONS 766
TRACK CARING FOR A CONNECTED HUMANITY: EHEALTH, AND THE TRANSFORMATION OF HEALTHCARE IN THE GLOBAL SOUTH 826
TRACK DESIGNING APPLICATIONS, SERVICES, SYSTEMS AND INFRASTRUCTURE FOR DEVELOPMENT 903
PHD GRADUATE STUDENT TRACK 950
UMATI: KENYAN ONLINE DISCOURSE TO CATALYZE AND COUNTER
VIOLENCE


Kagonya Awori, iHub Research, Kenya
Susan Benesch, American University, U.S.A.
Angela Crandall, iHub Research, Kenya
Email: kagonya@ihub.co.ke, benesch@american.edu, angelac@ihub.co.ke

Abstract: Email and SMS were heavily used in Kenya to spread inflammatory speech, rumors
and threats during the months before the 2007 presidential election and subsequent
mass violence. It is widely believed that online discourse helped catalyze the
violence, but this remains a hypothesis. Building on research in genocide studies,
speech act theory and discourse theory, Susan Benesch has proposed a system of
discourse analysis to identify dangerous speech, which is discourse that may
catalyze violence by one group against members of another. To test this theory and
to build the first database of inflammatory speech in a country's online space, iHub
Research and Ushahidi have captured and analyzed Kenyan inflammatory
discourse online in seven separate languages since September 2012. In its first six
months, this Umati ("crowd" in Kiswahili) monitoring project yielded more
inflammatory speech than expected, some of it explicit and violent, especially in the
weeks surrounding the March 2013 presidential election (Kenya's first since 2007).
We responded by designing a small experiment to diminish inflammatory speech
online. We also captured a strikingly large body of social media discourse calling
for peace and calm, in the immediate aftermath of the 2013 election, which was
almost entirely free of violence. In this report on our work in progress, we describe
our findings thus far and pose new questions for research, including further study of
our data.


Keywords: Dangerous Speech; Election Monitoring; Kenya; Social Media; iHub; Ushahidi
468
Awori et al. Umati: Kenyan Online Discourse to Catalyze and Counter Violence
UMATI: KENYAN ONLINE DISCOURSE TO CATALYZE AND COUNTER
VIOLENCE

1. INTRODUCTION
On March 4, 2013, Kenya held its first general election since the 2007 polls, when disputed
results triggered a deadly crisis. In 2007/8, more than 1,300 people were killed and an estimated
663,921 displaced in inter-tribal attacks (Associated Press, 2011). Media technologies such as
mobile phone Short Message Service (SMS) and community and vernacular radio were used so
widely to advocate hatred and violence, and apparently to mobilize communities to action
(Osborn, 2008), that some observers suggested that the mobile phone had become a "weapon of
war" (Bangre, 2008). Anecdotal evidence suggests that social media also played a role in
mobilizing Kenyans, who have been adopting ICT (Information and Communication
Technology) and social media platforms very rapidly, even at the bottom of the socioeconomic
scale (Mäkinen & Kuira, 2008; Goldstein, 2008; infoDev, 2012).
However, documentation of inflammatory speech online is lacking, due to scant monitoring in
2007/8. This has made it difficult to study. Systematic study is also hindered by the lack of an
agreed definition of inflammatory speech or hate speech. Since the goal of our work is to prevent
violence while also protecting freedom of expression, we chose to define our dataset more
narrowly than "hate speech". We use an analytical framework designed to describe discourse
that has a reasonable possibility of catalyzing violence, in the context in which it was made or
disseminated (Benesch, 2008, 2013).
If it is true that discourse transmitted via ICT platforms helps to catalyze violence in the
Kenyan context and in other countries [1], we wish to find ways of diminishing that effect (again,
without impinging on freedom of expression - or privacy). The unique network-building
capacities of social media may be well suited to this effort, and experiments are already
underway in several countries, including Kenya (PeaceTXT, 2012).
In the weeks leading up to the 2013 vote, we began an online peacekeeping effort of our own
called Nipe Ukweli ("Give Me Truth" in Kiswahili), designed to counter dangerous speech and
especially malicious rumors, which were a common and destructive form of discourse during
the clashes of 2007/8 (Osborn, 2008). During the 2013 election and the five tense days
following it, while votes were being counted at a frustratingly slow rate, many Kenyans posted
and tweeted appeals for peace, calm, patience, and national unity. Since Uhuru Kenyatta was
declared president-elect on March 9, however, we have documented a dramatic spike in
inflammatory speech. On the Monday following that announcement alone, the Umati project
collected 61 examples of Dangerous Speech, the highest daily count noted in over three months
(The Umati Project, 2013).

2. RESEARCH QUESTIONS
This project aims to help fill a gap in the literature pointed out by Garrett (2006) on the negative
consequences of new ICTs, by testing a methodology to systematically track and classify levels
of inflammatory speech online. Our first questions therefore address the effectiveness of the
methodology for classification, itself:
[1] The use of SMS and social media to incite violence and fear of violence has been documented in many
countries, such as India and Australia. See, for example, Yardley, 2012.
Proceedings of the 12th International Conference on Social Implications of Computers in Developing Countries, Ocho Rios, Jamaica, May 2013

Does the Benesch framework for discourse analysis provide consistent classification among
monitors and among languages? Do categorization decisions made using the analysis
framework match monitors' subjective estimations of the dangerousness of a particular act of
speech?
Our project's purpose (and Benesch's goal in designing the framework) is to probe the
relationship between inflammatory speech (especially online discourse) and violence. The
following questions therefore inspired this project, although further research will be required to
answer them:
What are the effects of inflammatory speech online? Is there a causal link between
inflammatory speech online and violence offline?
Online public speech provides a special opportunity for data collection to answer these
questions, since reaction to speech online can be tracked and quantified more effectively than
reaction to offline speech.
We suggest further study using three distinct methodologies: 1) using network analysis to
investigate the impact that particular acts of speech have on audiences online; 2) comparing
data on inflammatory speech acts and data on acts of violence, for correlations in geo-location
and timing, building upon the unique work of Yanagizawa-Drott (2012); 3) qualitative
fieldwork such as Osborn's (2008) study of rumor in the informal Nairobi settlement of Kibera
in 2007/8. Osborn concluded that rumors not only raised apprehension and fear, but also incited
people to action. Like Yanagizawa-Drott, Osborn not only posits a link between inflammatory
speech and collective violence, but supports it with evidence [2]. Thus far, this is exceedingly
rare in the literature.

3. DISCOURSE ANALYSIS: DANGEROUS SPEECH VERSUS HATE
SPEECH
Benesch's methodology for discourse analysis of inflammatory speech (2008, 2013) is
designed to identify hate speech that has a special capacity to inspire violence because of the
construction and reconstruction of narrative that it helps to drive. Building on the work of social
psychologists, historical sociologists and genocide scholars such as Ervin Staub (1989, 2003),
Helen Fein (1979), Frank Chalk and Kurt Jonassohn (1990), Philip Zimbardo (2007), and James
Waller (2007), and drawing on speech act theory (Austin, 1962; Searle, 1975), as well as
discourse analysis of many historical cases of inflammatory speech that preceded episodes of
mass violence, Benesch has built an analytical framework designed to provide a qualitative but
systematic evaluation of the capacity of a particular act of speech to inspire an audience to
violence against members of another group. Since the force, or capacity, of a speech act to
inspire action is context-dependent, the evaluation must take into account the context in which
the speech was made or disseminated, and must be conducted by an analyst with knowledge of
that cultural, social, and historical context.
"Hate speech" is a very broad category, defined in disparate ways in law and in common
parlance, but generally understood to mean speech that denigrates people on the basis of their
membership in a group, such as an ethnic or religious group. It includes speech that does not
[2] To our knowledge, only one scholar has produced quantitative evidence of a link between inflammatory speech
and mass violence: David Yanagizawa-Drott (2012) found higher levels of killing in the 1994 Rwandan genocide
in villages that received the radio signal of the notorious station Radio Télévision Libre des Mille Collines
(RTLM) than in villages where the signal did not reach. For a contrasting view, see Straus (2007).
appreciably increase the risk of violence, although it may well cause emotional and
psychological damage and increase tensions between groups.
"Dangerous Speech" is a narrower category, coined and defined by Benesch (2013). It is
speech or another form of expression that has a reasonable chance of catalyzing or amplifying
violence by one group against another. We were especially interested in Dangerous Speech as
we began our project only months away from the date of Kenya's first presidential and full
parliamentary elections since the post-election violence of 2007/8. Renewed violence was
widely feared.
Our coding sheet was based on Benesch's analytical framework for identifying Dangerous
Speech. The framework, in turn, is built around five criteria that affect the dangerousness of a
particular speech act in the time and place in which it was made or disseminated: the speaker,
the audience, the speech act itself, the social and historical context, and the mode of
dissemination of the text.
To illustrate, the ideal-type of the most dangerous speech act would be one for which all five
variables are maximized:
- a powerful speaker with a high degree of influence over the audience;
- an audience with grievances, fear and other vulnerabilities that the speaker can cultivate,
and that make the audience especially susceptible to incitement;
- a speech act which, although it may be coded or elliptical on its face, is clearly
understood as a call to violence by the audience most likely to react;
- a social or historical context that is propitious for violence, for any of a variety of
reasons, including long-standing competition between groups for resources, lack of efforts
to solve grievances, or previous episodes of violence;
- a means of dissemination that is influential in itself, for example because it is the sole or
primary source of news for the relevant audience.
The criteria are not ranked, nor are they weighted equally across cases: in many circumstances,
one or more variables will weigh more than others. For example, an especially outrageous or
frightening speech act may be more important (i.e., more dangerous) than other factors in a
particular instance. In other cases, an audience may be especially susceptible. It is also possible
for an act of speech to cross the dangerous threshold based on only two, three, or four of the
five criteria.
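Purely as an illustration (this record type and its field names are ours, not part of Benesch's framework or the Umati coding sheet), the five contextual criteria could be captured as a simple annotation record for qualitative analysis:

```python
from dataclasses import dataclass

@dataclass
class SpeechActAssessment:
    """Qualitative notes on the five criteria for one act of speech.

    Field names are illustrative only. Each field holds free-text notes;
    the framework is qualitative and context-dependent, so no single
    numeric score is mechanically derived from all five criteria.
    """
    speaker: str        # influence of the speaker over the audience
    audience: str       # grievances and vulnerabilities of the audience
    speech_act: str     # content, including coded or elliptical calls to violence
    context: str        # social and historical context
    dissemination: str  # influence of the medium of dissemination itself

# Annotating one entirely hypothetical statement:
assessment = SpeechActAssessment(
    speaker="local politician with a large following",
    audience="community with long-standing land grievances",
    speech_act="coded call to 'remove' a named group",
    context="previous episodes of inter-group violence in the area",
    dissemination="widely shared on a popular social platform",
)
```

A record like this keeps the analyst's reasoning for each criterion alongside the text, which matters because (as noted above) the criteria are neither ranked nor equally weighted across cases.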
In our coding sheet, we emphasized the first three criteria (i.e., an influential speaker, a
susceptible audience and inflammatory content of the text) rather than the latter two for the
following reasons: we found it difficult to determine temporal boundaries for the historical
context of each piece of text collected in the project, and the means of dissemination for all
discourse collected was always similar, i.e., the blogosphere and social media sites.

4. MONITORING AND CATEGORIZATION PROCESS
Over a period of at least eight months, beginning in September 2012 and continuing until April
2013 or later, the Umati project is monitoring Kenyan online discourse in order to estimate the
occurrence and virulence of hate and dangerous speech. We employ teams of six human
monitors, working in the most prevalent languages online in Kenya: the vernacular languages of
the four largest ethnic groups in Kenya (Kikuyu, Luhya, Kalenjin and Luo), Kenya's national
language, Swahili, and the unofficial slang language, Sheng, which is used widely in urban
centers, and Somali, spoken by the largest immigrant community in Kenya (Kenya National
Bureau of Statistics, 2010; Githiora, 2002).

Each monitor is required to scan online platforms for incidences of hate speech and save them
into an online database through the use of a coding sheet. Apart from providing discourse
analysis about each statement, answers to these questions allow for additional qualitative
research to be conducted in other areas including crowd sourcing, machine learning, human
monitoring, ethnic diversity, influence of religion on speech, and translation of vernacular
languages.
All texts are translated into English, and then sorted into three categories: offensive speech,
moderately dangerous speech, and extremely dangerous speech. For this, the monitors consider
two questions:
i) On a scale of 1 to 3, with 1 being little influence and 3 being a lot of influence, how
much influence does the speaker have on the audience? (code = N)
ii) On a scale of 1 to 3, with 1 being barely inflammatory and 3 being extremely
inflammatory, how inflammatory is the content of the text? (code = M)
The answers given to these two questions are dependent on the perceivable influence the
speaker has on the online audience most likely to react with violence, the content of the
statement, and the social and historical context of the speech statement. The sorting formula is
this:
SORTING
M1 + N1 = Bucket 1
M1 + N2 = Bucket 1
M1 + N3 = Bucket 2
M2 + N1 = Bucket 2
M2 + N2 = Bucket 2
M2 + N3 = Bucket 3
M3 + N1 = Bucket 3
M3 + N2 = Bucket 3
M3 + N3 = Bucket 3

BUCKETS
Bucket 1 = Offensive Speech
Bucket 2 = Moderately dangerous speech
Bucket 3 = Extremely dangerous speech
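The sorting formula is a simple two-variable lookup. A minimal sketch in Python (function and variable names are ours, not from any Umati codebase):

```python
def bucket(m: int, n: int) -> int:
    """Map inflammatory-content score M and speaker-influence score N
    (each 1-3) to a bucket: 1 = offensive speech, 2 = moderately
    dangerous speech, 3 = extremely dangerous speech.

    Encodes the Umati sorting table directly, row by row.
    """
    table = {
        (1, 1): 1, (1, 2): 1, (1, 3): 2,
        (2, 1): 2, (2, 2): 2, (2, 3): 3,
        (3, 1): 3, (3, 2): 3, (3, 3): 3,
    }
    if (m, n) not in table:
        raise ValueError("M and N must each be 1, 2 or 3")
    return table[(m, n)]

labels = {
    1: "Offensive Speech",
    2: "Moderately dangerous speech",
    3: "Extremely dangerous speech",
}

# A barely inflammatory statement (M=1) from a highly influential
# speaker (N=3) falls into Bucket 2:
print(labels[bucket(1, 3)])  # → Moderately dangerous speech
```

Note that the table is asymmetric in neither variable: a maximally inflammatory text reaches Bucket 3 regardless of speaker influence, while a barely inflammatory one never exceeds Bucket 2.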
Categorization of the hate speech statements into these three buckets facilitates more
comprehensive qualitative and quantitative research. Some of the findings from the data thus far
are:
a) Influential speakers have a significant impact on discussions online. For example, in
September 2012 when Ferdinand Waititu, a member of parliament from Nairobi, gave a public
speech calling for the expulsion of Maasai people from the city (Jambo News, 2012), the
impact online was significant. A Twitter protest was started calling for Waititu's arrest, but
within three days of his statements, 45 incidences of moderate to extremely dangerous speech
were recorded, either against Waititu or in line with his incitement against the Maasai.
b) Events reported by mainstream media have a direct impact on online conversations; in the
example above, the circulation of a video clip (NTV Kenya, 2012) of Waititu's speech resulted
in a rise in hateful comments online about that ethnic group.

5. WAY FORWARD: NEXT STEPS IN RESEARCH
The Umati project will continue past the March 2013 elections. We believe our data provides
fruitful avenues for further work; both the data and the methodology will be made available to
scholars and practitioners. While welcoming ideas, we suggest the following next steps:

5.1 Automation
We hope to find ways to automate aspects of the monitoring process. It will not be sufficient
simply to search for key words, since words and phrases can be highly offensive or innocuous,
depending on context. Perhaps it would be possible to employ machine learning to search for
attributes of dangerous speech that humans are unable to detect. We also wish to compile
dangerous speech datasets in the online spaces of other countries, in order to compare attributes
of dangerous speech across environments.
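To make the limitation of key-word search concrete: a naive matcher flags any occurrence of a watch-listed term, regardless of context. In the sketch below the word list and both sample sentences are our own inventions for illustration ("cockroach" echoes the dehumanizing usage documented for RTLM in Rwanda, noted in footnote 2):

```python
# A naive keyword matcher: flags any text containing a watch-listed term.
WATCHLIST = {"cockroach"}

def keyword_flag(text: str) -> bool:
    """Return True if any watch-listed term appears in the text."""
    lowered = text.lower()
    return any(term in lowered for term in WATCHLIST)

# Both texts are flagged, although only the second is plausibly dangerous;
# the matcher cannot see the speaker, the audience, or the context.
innocuous = "I found a cockroach in my kitchen this morning."
dangerous = "Those cockroaches must be driven from our town."
print(keyword_flag(innocuous), keyword_flag(dangerous))  # True True
```

This is why any automation would need features beyond term presence, which is what motivates the machine-learning direction suggested above.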

5.2 Countering Inflammatory Speech
We are also working to develop and test non-government methods for countering dangerous
speech online, as we have done with our nascent project Nipe Ukweli, which encourages online
actors to resist inflammatory speech, especially rumors, and to refute rumors with evidence of
their falsehood. We seek to better understand online audiences and the effects of diverse
platforms on online discourse norms. We are interested in comparing techniques across
platforms. For example, PeaceTXT (2012) and Sisi ni Amani (sisiniamani.org) distribute peace
messages to mobile phone users via SMS. What effect might result when such messages arrive
instead as Tweets? Similarly, research should be conducted to understand whether some
inflammatory discourse that was previously disseminated on SMS has now migrated to social
media in Kenya. This may present new opportunities for working to shift the norms of
discourse in online spaces.

6. ACKNOWLEDGEMENTS
The Umati Project would like to thank our generous donors PACT, Chemonics International -
Kenya Transition Initiative, the MacArthur Foundation, and our partner Ushahidi for making
this work possible.

7. REFERENCES AND CITATIONS
Associated Press. (2011, September 1). Kenya violence suspects face ICC hearing. Al Jazeera
English. Retrieved from
http://www.aljazeera.com/video/africa/2011/09/201191171934823573.html
Austin, J. L. (1975). How to do things with words. (2nd ed.). Cambridge, MA: Harvard
University Press.
Bangre, H. (2008). Kenya: SMS Text Messages the New Guns of War? Afrik.com. Retrieved
from http://en.afrik.com/article12629.html.
Benesch, S. (2008). Vile Crime or Inalienable Right: Defining Incitement to Genocide. Virginia
Journal of International Law, 48(3), 485-528. Retrieved from
http://www.vjil.org/assets/pdfs/vol48/issue3/48_485-528.pdf.
Benesch, S. (2013, February 23). Dangerous Speech: A Proposal to Prevent Group Violence.
Retrieved from http://voicesthatpoison.org/proposed-guidelines-on-dangerous-speech/.
Chalk, F., & Jonassohn, K. (1990). The History and Sociology of Genocide. New Haven, CT:
Yale University Press.
Das, V. (1998). Specificities: Official Narratives, Rumour, and the Social Production of Hate.
Social Identities 4(1): 109-130.
Fein, H. (1979). Accounting for Genocide. New York, NY: Free Press.
Garrett, R. K. (2006). Protest in an Information Society: A Review of Literature on Social
Movements and New ICTs. Information, Communication and Society, 9(2), 202-224.
Githiora, C. (2002). Sheng: Peer Language, Swahili Dialect or Emerging Creole? Journal of
African Cultural Studies, 15(2), 159-181. Retrieved from
http://www.jstor.org/stable/3181415.
Goldstein, J. (2008, Feb 21). When SMS Messages Incite Violence in Kenya [Web log].
Retrieved from https://blogs.law.harvard.edu/idblog/2008/02/21/when-sms-messages-
incite-violence-in-kenya/.
infoDev. (2012, December). Mobile Usage at the Base of the Pyramid in Kenya. Retrieved from
http://www.infodev.org/en/Publication.1194.html.
Jambo News (2012, September 24). Ferdinand Waititu Incites Kayole Residents into Violence
[Video file]. Retrieved from http://www.jambonewspot.com/video-ferdinand-waititu-
incites-kayole-residents-into-violence/
Kenya National Bureau of Statistics. (2010). Population and Housing Census: Ethnic
Affiliation. Retrieved from http://www.knbs.or.ke/censusethnic.php
Mäkinen, M. & Kuira, M.W. (2008). Social Media and Postelection Crisis in Kenya. The
International Journal of Press/Politics, 13, 328-336.
NTV Kenya (2012, September 24). Waititu in incitement remarks [Video file]. Retrieved
from http://www.youtube.com/watch?v=eSmlKCYJsb8.
Osborn, M. (2008). Fuelling the Flames: Rumour and Politics in Kibera. Journal of Eastern
African Studies, 2(2), 315-327.
PeaceTXT. (2012, December). Using Mobile Phones to End Violence. Retrieved from
http://poptech.org/peacetxt.
Searle, J.R. (1975). A Taxonomy of Illocutionary Acts. In K. Gunderson (Ed.), Language,
Mind, and Knowledge (pp. 344-369). Minneapolis, MN: University of Minnesota
Press.
Staub, E. (1989). The roots of evil: The origins of genocide and other group violence. New
York, NY: Cambridge University Press.
Staub, E. (2003). The psychology of good and evil: Why children, adults, and groups help and
harm others. New York, NY: Cambridge University Press.
Straus, S. (2007). What is the Relationship Between Hate Radio and Violence? Rethinking
Rwanda's "Radio Machete". Politics & Society, 35(4), 609-637.
The Umati Project. (2013, March 13). iHub Research. Internal Database.
Waller, J. (2007). Becoming evil: How ordinary people commit genocide and mass killing.
(2nd ed.). New York: Oxford University Press.
Yanagizawa-Drott, D. (2012, August). Propaganda and Conflict: Theory and Evidence from
the Rwandan Genocide. Retrieved from
http://www.hks.harvard.edu/fs/dyanagi/Research/RwandaDYD.pdf.
Yardley, J. (2012, August 18). Panic Seizes India as a Region's Strife Radiates. The New York
Times. Retrieved from http://www.nytimes.com/2012/08/18/world/asia/panic-radiates-
from-indian-state-of-assam.html?ref=world&_r=0.
Zimbardo, P. (2007). The Lucifer Effect: How Good People Turn Evil. London: Rider Books.
